Sec2Sess29



least-squares interpolation:

finding the best-fit line y = a*x+b (regression line)
the unknowns to solve for are a and b
method: minimize the total squared deviation
deviation: the difference between an observed value y_i and the predicted value a*x_i+b
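For example (made-up numbers, not from the session): for the sample point (2, 3) and the candidate line y = 1*x + 0.5, the predicted value is 2.5, so the deviation is
<math>\mathrm{y_i-(a*x_i+b)\ =\ 3-(1*2+0.5)\ =\ 0.5}</math>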

Main point of lesson: find the critical point (here, a minimum) for the
optimization by applying the least-squares scoring function D to the sampled
x and y values, taking the partial derivatives of D with respect to the unknowns
a and b, setting each partial derivative equal to 0, and solving the resulting
linear system for a and b. Setting the partial derivatives to 0 is what locates
the critical point, and for this function that critical point is the minimum.
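This is not how the session proceeds (it solves for the critical point by hand below), but the goal can be previewed numerically. A minimal sketch, assuming made-up sample data and using scipy's general-purpose minimizer:

<syntaxhighlight lang="python">
# Minimize the total squared deviation D(a, b) numerically (made-up data;
# this is only a sanity check, not the derivation done in the session).
import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

def D(params):
    a, b = params
    return np.sum((y - (a * x + b)) ** 2)

result = minimize(D, x0=[0.0, 0.0])
print(result.x)  # approximately the best-fit a and b
</syntaxhighlight>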

minimize D
D is the sum, for i from 1 to n, of the squared deviations (errors)
<math>\mathrm{D\ =\ \sum\limits_{i=1}^n(y_i-(a*x_i+b))^2}</math>
a function of a and b
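As a concrete illustration (made-up data, not from the session), D can be evaluated for any candidate a and b; a minimal sketch:

<syntaxhighlight lang="python">
# Evaluate the total squared deviation D(a, b) for a small made-up data set.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2, 3.8]

def D(a, b):
    # sum of (observed - predicted)^2 over all sample points
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

print(D(1.0, 1.0))  # score of the candidate line y = x + 1
print(D(0.0, 2.0))  # score of the horizontal line y = 2
</syntaxhighlight>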

partial derivatives of D with respect to a and b, each set equal to 0:
<math>\mathrm{pd(a):\ \sum\limits_{i=1}^n(2*(y_i-(a*x_i+b))*(-x_i))\ =\ 0}</math>
<math>\mathrm{pd(b):\ \sum\limits_{i=1}^n(2*(y_i-(a*x_i+b))*(-1))\ =\ 0}</math>

expand the product <math>\mathrm{(y_i-(a*x_i+b))*(-x_i)}</math>:
the factor 2 can be dropped from both equations, since each equation equals 0
and dividing through by 2 changes nothing
(y-a*x-b)*(-x)
(-a*x-b+y)*(-x)
x^2*a+x*b-x*y
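A quick symbolic check of this expansion (a sketch using sympy; not part of the session):

<syntaxhighlight lang="python">
# Expand (y - (a*x + b)) * (-x) symbolically to confirm the algebra above.
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
print(sp.expand((y - (a * x + b)) * (-x)))  # a*x**2 + b*x - x*y (term order may differ)
</syntaxhighlight>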

the same steps for pd(b) (drop the factor 2, expand (y_i-(a*x_i+b))*(-1)):
x*a+b-y=0

summing x^2*a+x*b-x*y=0 over all points and moving the x*y term to the right side gives:
sum(x^2)*a+sum(x)*b=sum(x*y)

pd(b) rewritten the same way:
sum(x)*a+n*b=sum(y), where n = the sample size (the number of terms in the <math>\mathrm{\sum\limits_{i=1}^n}</math> sum)

the rewritten equations form a 2x2 linear system
that can be solved for a and b:
sum(x^2)*a + sum(x)*b = sum(x*y)
sum(x)*a + n*b = sum(y)
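Numerically, the same system can be assembled and solved directly (a sketch assuming made-up sample data; numpy.linalg.solve handles the 2x2 system):

<syntaxhighlight lang="python">
# Build and solve the 2x2 least-squares system for a and b (made-up data).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])
n = len(x)

# [ sum(x^2)  sum(x) ] [a]   [ sum(x*y) ]
# [ sum(x)    n      ] [b] = [ sum(y)   ]
A = np.array([[np.sum(x**2), np.sum(x)],
              [np.sum(x),    n]])
rhs = np.array([np.sum(x * y), np.sum(y)])

a, b = np.linalg.solve(A, rhs)
print(a, b)  # coefficients of the best-fit line y = a*x + b
</syntaxhighlight>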

dividing all terms by n is often done because it turns the sums into averages
(means), which makes the equations easier to interpret ("more expressive").
n = the total number of points (x,y) sampled
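Dividing both equations by n turns each sum into an average (writing <math>\mathrm{\bar{x}}</math> for the mean of the x values, and so on):
<math>\mathrm{\overline{x^2}\,a+\bar{x}\,b=\overline{xy}}</math>
<math>\mathrm{\bar{x}\,a+b=\bar{y}}</math>
The second equation says the best-fit line passes through the point of averages <math>\mathrm{(\bar{x},\bar{y})}</math>.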

the second-derivative test confirms that the critical point found by setting the
first partial derivatives to 0 is actually a minimum (not a maximum or a saddle point).
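One way to check this is the standard second-derivative test for two variables (a worked computation, not copied from the session):
<math>\mathrm{D_{aa}=2\sum\limits_{i=1}^n x_i^2,\quad D_{bb}=2n,\quad D_{ab}=2\sum\limits_{i=1}^n x_i}</math>
<math>\mathrm{D_{aa}D_{bb}-D_{ab}^2=4\left(n\sum\limits_{i=1}^n x_i^2-\left(\sum\limits_{i=1}^n x_i\right)^2\right)\ \ge\ 0}</math>
by the Cauchy-Schwarz inequality, with strict inequality unless all x_i are equal; together with D_aa > 0 this means the critical point is a minimum.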

an example application of least squares to sample data is given here:
https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/2.-partial-derivatives/part-a-functions-of-two-variables-tangent-approximation-and-optimization/session-29-least-squares/MIT18_02SC_we_13_comb.pdf

Variations:
the line y = a*x+b can be replaced by a higher-order polynomial when a more
complex fit is wanted, e.g. y = a*x^2+b*x+c (now three unknowns, so three
partial derivatives and a 3x3 linear system).
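numpy can set up and solve such a higher-order fit directly (a sketch with made-up data; np.polyfit minimizes the same kind of squared deviation):

<syntaxhighlight lang="python">
# Fit y = a*x^2 + b*x + c by least squares (made-up data).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 2.1, 5.2, 10.1, 16.8])

a, b, c = np.polyfit(x, y, deg=2)  # coefficients, highest power first
print(a, b, c)
</syntaxhighlight>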

how to find a best exponential fit:
y = c*e^(a*x), where e is the base of the natural logarithm
taking ln of both sides gives the equivalent linear form ln(y) = ln(c) + a*x,
so the straight-line method can be applied to the points (x_i, ln(y_i))
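A sketch of this log-transform fit with made-up data (note that it minimizes the squared deviation of ln(y), not of y itself):

<syntaxhighlight lang="python">
# Fit y = c*e^(a*x) by fitting a straight line to (x, ln(y)) (made-up data).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.5, 2.9, 6.1, 12.2])   # y values must be positive to take logs

a, ln_c = np.polyfit(x, np.log(y), deg=1)  # slope a, intercept ln(c)
c = np.exp(ln_c)
print(a, c)  # best fit of the form y = c * e**(a*x)
</syntaxhighlight>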

the same logarithm trick can be applied to a quadratic or power law y = c*x^a, since ln(y) = ln(c) + a*ln(x) is again linear.
General method of finding a line fit with least squares
(worked symbolically in the sketch below):

for each sample point (x_i, y_i), write the deviation y_i - (a*x_i + b)
square each deviation and add them all up; the total is D(a,b)
take the partial derivatives of D with respect to a and b
set both partial derivatives equal to 0; this gives a linear system in a and b
solve the system; the resulting a and b are the line's coefficients
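A compact symbolic version of these steps (a sketch with made-up points; sympy does the expanding, differentiating, and solving):

<syntaxhighlight lang="python">
# Follow the steps above symbolically: build D, differentiate, solve for a and b.
import sympy as sp

points = [(0, 1.1), (1, 1.9), (2, 3.2), (3, 3.8)]  # made-up (x, y) samples
a, b = sp.symbols('a b')

# total squared deviation D(a, b)
D = sum((y - (a * x + b)) ** 2 for x, y in points)

# set both partial derivatives to 0 and solve the linear system for a and b
solution = sp.solve([sp.diff(D, a), sp.diff(D, b)], [a, b])
print(solution)  # {a: ..., b: ...} -- the line's coefficients
</syntaxhighlight>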

practice problem example:
https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/2.-partial-derivatives/part-a-functions-of-two-variables-tangent-approximation-and-optimization/session-29-least-squares/MIT18_02SC_pb_28_comb.pdf