### A Gaussian Example

So how about that example? Suppose you want to find a solution to the following three equations with three unknowns:

3 * x + 6 * y + 3 * z = 9

2 * x + 4 * y - z = -3

4 * x + 6 * y + 2 * z = 8

The augmented matrix for this system of equations is:

    3   6   3 |  9
    2   4  -1 | -3
    4   6   2 |  8

After Gaussian elimination reduces the matrix, each row holds a 1 for a single variable plus that variable's value:

    1   0   0 |  2
    0   1   0 | -1
    0   0   1 |  3

Now you can read off the solution: x = 2, y = -1, z = 3. If you plug these values into the original equations, you'll see that they work.
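If you want to double-check the arithmetic, the same elimination takes only a few lines of Python. (This is just a sketch to verify the worked example; the article's own C# implementation appears later.)

```python
# Augmented matrix for the example system.
m = [
    [3.0, 6.0, 3.0, 9.0],
    [2.0, 4.0, -1.0, -3.0],
    [4.0, 6.0, 2.0, 8.0],
]

n = len(m)
for i in range(n):
    # Swap in a row with a non-zero pivot if necessary.
    if m[i][i] == 0:
        for j in range(i + 1, n):
            if m[j][i] != 0:
                m[i], m[j] = m[j], m[i]
                break
    # Normalize row i so the pivot is 1.
    pivot = m[i][i]
    m[i] = [v / pivot for v in m[i]]
    # Subtract multiples of row i to zero the ith
    # coefficient in every other row.
    for j in range(n):
        if j != i:
            factor = m[j][i]
            m[j] = [a - factor * b for a, b in zip(m[j], m[i])]

# The last column now holds the solution.
solution = [row[-1] for row in m]
print(solution)  # [2.0, -1.0, 3.0]
```

Note that this sketch swaps rows only when a pivot is exactly zero; production code would pick the largest available pivot (partial pivoting) for numerical stability.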

Hopefully by now you understand Gaussian elimination. It's fairly simple and only requires some easy multiplication, addition, and row swapping. The only catch is that there are a lot of operations so it's easy to make mistakes. Fortunately the computer is very good at performing these sorts of operations without making mistakes.

The following code shows how the PolynomialLeastSquares example program (which is available for download in C# and Visual Basic versions) performs Gaussian elimination. I've put this in a separate method because it's handy for solving other systems of equations, not just those needed for polynomial least squares.

```
// Perform Gaussian elimination on these coefficients.
// Return the array of values that gives the solution.
private double[] GaussianElimination(double[,] coeffs)
{
    int max_equation = coeffs.GetUpperBound(0);
    int max_coeff = coeffs.GetUpperBound(1);
    for (int i = 0; i <= max_equation; i++)
    {
        // Use coeffs[i, i] to eliminate the ith
        // coefficient in all of the other equations.

        // Find a row with a non-zero ith coefficient.
        if (coeffs[i, i] == 0)
        {
            for (int j = i + 1; j <= max_equation; j++)
            {
                // See if this one works.
                if (coeffs[j, i] != 0)
                {
                    // This one works. Swap equations i and j.
                    // This starts at k = i because all
                    // coefficients to the left are 0.
                    for (int k = i; k <= max_coeff; k++)
                    {
                        double temp = coeffs[i, k];
                        coeffs[i, k] = coeffs[j, k];
                        coeffs[j, k] = temp;
                    }
                    break;
                }
            }
        }

        // Make sure we found an equation with
        // a non-zero ith coefficient.
        double coeff_i_i = coeffs[i, i];
        if (coeff_i_i == 0)
        {
            throw new ArithmeticException(
                "There is no unique solution for these points.");
        }

        // Normalize the ith equation.
        for (int j = i; j <= max_coeff; j++)
        {
            coeffs[i, j] /= coeff_i_i;
        }

        // Use this equation to zero out
        // the other equations' ith coefficients.
        for (int j = 0; j <= max_equation; j++)
        {
            // Skip the ith equation.
            if (j != i)
            {
                // Zero the jth equation's ith coefficient.
                double coef_j_i = coeffs[j, i];
                for (int d = 0; d <= max_coeff; d++)
                {
                    coeffs[j, d] -= coeffs[i, d] * coef_j_i;
                }
            }
        }
    }

    // At this point, each equation contains only
    // 2 non-zero entries:
    //     The ith entry, which is 1
    //     The last entry, coeffs[i, max_coeff]
    // This means Ai = coeffs[i, max_coeff].
    double[] solution = new double[max_equation + 1];
    for (int i = 0; i <= max_equation; i++)
    {
        solution[i] = coeffs[i, max_coeff];
    }

    // Return the solution values.
    return solution;
}
```

### Back to Polynomial Least Squares

That was a long digression from polynomial least squares, so let's review where we were when the digression began. To find the best fit polynomial, you found that the partial derivative of the error function E with respect to Aj is:

∂E/∂Aj = 2 * (Σ yi * xi^j - A0 * Σ xi^j - A1 * Σ xi^(j+1) - ... - AD * Σ xi^(j+D))

Here Σ indicates a sum over the data points (xi, yi).

To find the best solution, you set this equation and the partial derivatives for the other Ai equal to 0 and solve for the Ai values. Now that you know how to perform Gaussian elimination, this is a snap. The only tricky part is figuring out what coefficients to use in the equations that you send into the Gaussian elimination method.

The following code shows how the PolynomialLeastSquares example program builds the coeffs array holding the augmented matrix that it passes to the GaussianElimination method.

```
// Find the least squares polynomial fit.
private double[] FindPolynomialLeastSquaresFit(
    List<PointF> points, int degree)
{
    // Allocate space for (degree + 1) equations with
    // (degree + 2) terms each.
    double[,] coeffs = new double[degree + 1, degree + 2];

    // Calculate the coefficients for the equations.
    for (int j = 0; j <= degree; j++)
    {
        // Calculate the coefficients for the jth equation.

        // Calculate the constant term for this equation.
        coeffs[j, degree + 1] = 0;
        foreach (PointF pt in points)
        {
            coeffs[j, degree + 1] -= Math.Pow(pt.X, j) * pt.Y;
        }

        // Calculate the other coefficients.
        for (int a_sub = 0; a_sub <= degree; a_sub++)
        {
            // Calculate the coefficient of A(a_sub).
            coeffs[j, a_sub] = 0;
            foreach (PointF pt in points)
            {
                coeffs[j, a_sub] -= Math.Pow(pt.X, a_sub + j);
            }
        }
    }

    // Solve the equations.
    return GaussianElimination(coeffs);
}
```
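To see the whole pipeline in one place, here is a small Python sketch (an illustration, not part of the example program) that builds the same (degree + 1) by (degree + 2) augmented matrix of normal equations and solves it by Gauss-Jordan elimination. Fitting a quadratic to points sampled from a known quadratic should recover the original coefficients.

```python
def gaussian_elimination(coeffs):
    """Solve an augmented matrix by Gauss-Jordan elimination."""
    n = len(coeffs)
    for i in range(n):
        # Swap in a row with a non-zero pivot if needed.
        if coeffs[i][i] == 0:
            for j in range(i + 1, n):
                if coeffs[j][i] != 0:
                    coeffs[i], coeffs[j] = coeffs[j], coeffs[i]
                    break
        pivot = coeffs[i][i]
        if pivot == 0:
            raise ArithmeticError("There is no unique solution.")
        # Normalize row i, then eliminate column i elsewhere.
        coeffs[i] = [v / pivot for v in coeffs[i]]
        for j in range(n):
            if j != i:
                f = coeffs[j][i]
                coeffs[j] = [a - f * b
                             for a, b in zip(coeffs[j], coeffs[i])]
    return [row[-1] for row in coeffs]

def polynomial_fit(points, degree):
    """Build the augmented normal-equation matrix and solve it."""
    coeffs = [[0.0] * (degree + 2) for _ in range(degree + 1)]
    for j in range(degree + 1):
        for x, y in points:
            # Constant term: -sum(x^j * y).
            coeffs[j][degree + 1] -= x ** j * y
            # Coefficient of A_a: -sum(x^(a + j)).
            for a in range(degree + 1):
                coeffs[j][a] -= x ** (a + j)
    return gaussian_elimination(coeffs)

# Points sampled from y = 1 - 2x + 3x^2.
# The degree 2 fit should recover [1, -2, 3].
pts = [(x, 1 - 2 * x + 3 * x * x)
       for x in [0.0, 1.0, 2.0, 3.0, 4.0]]
print(polynomial_fit(pts, 2))
```

Both sides of each equation are negated, just as in the C# code; since the same factor of -1 appears on every term, the solution is unchanged.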

By using programs such as PolynomialLeastSquares, you can get a better understanding of your data. You can find the best fit of a given degree and then judge whether the data points fit that polynomial well.

Ideally, if you know something about the process that generated the data, you may be able to figure out what degree to use for the polynomial. If you can't, beware of fitting the data to high-degree polynomials. A degree d polynomial can exactly fit d + 1 points, so if you use too high a degree you'll get a misleadingly exact match.

For example, Figure 3 shows a degree 9 polynomial exactly fitting 10 data points. The curve fits the points exactly, but that doesn't mean it's the best fit or the most useful one.

**Figure 3. Too Tight a Fit:** A high-degree polynomial can fit most any data, possibly obscuring its true structure.
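The "exact fit" effect is easy to verify numerically. The sketch below uses numpy (an illustration only; numpy is not part of the example program) to show that a degree 3 polynomial passes exactly through any 4 points with distinct x values, while a lower-degree fit leaves real residuals.

```python
import numpy as np

# Four arbitrary points with distinct x values.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([5.0, -1.0, 4.0, 2.0])

# A degree 3 polynomial (4 coefficients) fits 4 points exactly...
exact = np.polyfit(x, y, 3)
max_residual = np.max(np.abs(np.polyval(exact, x) - y))

# ...while a degree 1 line leaves substantial residuals.
line = np.polyfit(x, y, 1)
line_residual = np.max(np.abs(np.polyval(line, x) - y))

print(max_residual, line_residual)
```

The first residual is essentially zero (limited only by floating-point rounding); the second is large, reflecting a genuine, and often more honest, approximation.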

Figure 4 shows the same points fitted by a degree 2 polynomial. Although this doesn't fit the points exactly, it's probably a better representation of the data than the previous curve.

For example, suppose you want to predict the y value when x is close to 0. The polynomial in Figure 4 would give you a value close to the bottom of the graph while the polynomial in Figure 3 would give you a value way off the top. Which is really better depends on your situation but unless something really complex is happening, the degree 2 curve is probably a better bet.

**Figure 4. Relaxed Fit:** A degree 2 polynomial probably fits this data better than a degree 9 polynomial even though it doesn't fit the points exactly.

You can use techniques similar to those described here to find other kinds of least squares fits, too. For example, depending on your data you may want to use a logarithmic curve of the form y = a + b * ln(x), a power curve of the form y = a * x^b, or an exponential curve of the form y = a * e^(b * x). For more information on these kinds of curves, see Wolfram MathWorld.
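For instance, an exponential curve y = a * e^(b * x) can be handled with the same linear machinery by taking logarithms: ln(y) = ln(a) + b * x is a straight line in x, so a linear least squares fit of ln(y) against x recovers b and ln(a). Here is a sketch of the idea (an illustration, not from the example program):

```python
import math

# Points sampled from y = 2 * e^(0.5 * x).
pts = [(x, 2.0 * math.exp(0.5 * x)) for x in [0.0, 1.0, 2.0, 3.0]]

# Fit ln(y) = ln(a) + b * x with ordinary linear least squares.
n = len(pts)
sx = sum(x for x, _ in pts)
sy = sum(math.log(y) for _, y in pts)
sxx = sum(x * x for x, _ in pts)
sxy = sum(x * math.log(y) for x, y in pts)

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
ln_a = (sy - b * sx) / n
a = math.exp(ln_a)
print(a, b)  # close to 2.0 and 0.5
```

One caveat: this minimizes the error of the logarithms rather than of the original y values, so it weights small y values more heavily than a direct nonlinear fit would.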