Multivariate Calculus - Bruce E. Shapiro
112 LECTURE 14. UNCONSTRAINED OPTIMIZATION

Dividing by 2 and separating the three sums as before,

\[
0 = \sum_{i=1}^{n} x_i (m x_i + b - y_i)
  = \sum_{i=1}^{n} \left( m x_i^2 + b x_i - x_i y_i \right)
  = m \sum_{i=1}^{n} x_i^2 + bX - \sum_{i=1}^{n} x_i y_i
\]

where \(X\) is defined in equation 14.8. Next we define

\[
A = \sum_{i=1}^{n} x_i^2 \tag{14.11}
\]
\[
C = \sum_{i=1}^{n} x_i y_i \tag{14.12}
\]

so that

\[
0 = mA + bX - C \tag{14.13}
\]

Equations 14.10 and 14.13 give us a system of two linear equations in the two variables \(m\) and \(b\). Multiplying equation 14.10 by \(A\) and equation 14.13 by \(X\) gives

\[
0 = A(mX + nb - Y) = AXm + Anb - AY \tag{14.14}
\]
\[
0 = X(mA + bX - C) = AXm + X^2 b - CX \tag{14.15}
\]

Subtracting these two equations gives

\[
0 = Anb - AY - X^2 b + CX = b(An - X^2) + CX - AY
\]

and therefore

\[
b = \frac{AY - CX}{An - X^2}
  = \frac{\displaystyle \sum_{i=1}^{n} x_i^2 \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}
         {\displaystyle n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}
\]

If we instead multiply equation 14.10 by \(X\) and equation 14.13 by \(n\) we obtain

\[
0 = X(mX + nb - Y) = mX^2 + nXb - YX
\]
\[
0 = n(mA + bX - C) = nAm + nXb - nC
\]

Subtracting these two equations,

\[
0 = m(X^2 - nA) - (YX - nC)
\]

Solving for \(m\) and substituting the definitions of \(A\), \(C\), \(X\) and \(Y\) gives the formula for the slope.

Revised December 6, 2006. Math 250, Fall 2006
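The closed-form expressions for \(m\) and \(b\) derived above can be checked numerically. The sketch below (in Python, with made-up sample data; the variable names mirror the text's \(X\), \(Y\), \(A\), \(C\)) compares them against an independent Cramer's-rule solve of the two normal equations. This is an illustration only, not part of the text.

```python
# Check the closed-form least-squares solutions against a direct solve of
# the two normal equations:
#   0 = mX + nb - Y   (eq. 14.10)
#   0 = mA + bX - C   (eq. 14.13)
# The data points are arbitrary sample values chosen for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]

n = len(xs)
X = sum(xs)                              # sum of x_i
Y = sum(ys)                              # sum of y_i
A = sum(x * x for x in xs)               # sum of x_i^2
C = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i

# Closed-form solutions from the derivation.
m = (X * Y - n * C) / (X * X - n * A)
b = (A * Y - C * X) / (A * n - X * X)

# Independent check: solve the 2x2 system by Cramer's rule.
#   [ X  n ] [m]   [Y]
#   [ A  X ] [b] = [C]
det = X * X - n * A
m_check = (Y * X - n * C) / det
b_check = (X * C - A * Y) / det
```

For this sample data both routes give \(m = 2\) and \(b = 0.05\), confirming the algebra.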
\[
m = \frac{XY - nC}{X^2 - nA}
  = \frac{\displaystyle \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i - n \sum_{i=1}^{n} x_i y_i}
         {\displaystyle \left( \sum_{i=1}^{n} x_i \right)^2 - n \sum_{i=1}^{n} x_i^2}
\]

Generally this algorithm needs to be implemented computationally, because there are so many sums to calculate. It is also implemented on many calculators.

Least Squares Algorithm. To find a best-fit line to a set of \(n\) data points

\[
(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)
\]

calculate

\[
X = \sum_{i=1}^{n} x_i, \qquad
Y = \sum_{i=1}^{n} y_i, \qquad
A = \sum_{i=1}^{n} x_i^2, \qquad
C = \sum_{i=1}^{n} x_i y_i
\]

The best-fit line is \(y = mx + b\), where

\[
m = \frac{XY - nC}{X^2 - nA}, \qquad
b = \frac{AY - CX}{An - X^2}
\]

Example 14.8 Find the least squares fit to the data (3, 2), (4, 3), (5, 4), (6, 4) and (7, 5).

Solution. First we calculate the numbers \(X\), \(Y\), \(A\), and \(C\):

\[
X = \sum_{i=1}^{n} x_i = 3 + 4 + 5 + 6 + 7 = 25
\]
\[
Y = \sum_{i=1}^{n} y_i = 2 + 3 + 4 + 4 + 5 = 18
\]
\[
A = \sum_{i=1}^{n} x_i^2 = 9 + 16 + 25 + 36 + 49 = 135
\]
\[
C = \sum_{i=1}^{n} x_i y_i = (3)(2) + (4)(3) + (5)(4) + (6)(4) + (7)(5) = 97
\]

Therefore

\[
m = \frac{XY - nC}{X^2 - nA}
  = \frac{(25)(18) - (5)(97)}{(25)^2 - 5(135)}
  = \frac{450 - 485}{625 - 675}
  = \frac{-35}{-50} = 0.7
\]

and

\[
b = \frac{AY - CX}{An - X^2}
  = \frac{(135)(18) - (97)(25)}{(135)(5) - 25^2}
  = \frac{2430 - 2425}{675 - 625}
  = \frac{5}{50} = 0.1
\]

So the best-fit line is \(y = 0.7x + 0.1\).
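As the text notes, this algorithm is usually carried out computationally. A minimal Python sketch of the boxed algorithm follows (the function name `least_squares_line` is an assumption for illustration, not from the text); it reproduces the numbers of Example 14.8.

```python
def least_squares_line(points):
    """Best-fit line y = m*x + b through the given (x, y) points,
    computed via the sums X, Y, A, C of the least-squares algorithm."""
    n = len(points)
    X = sum(x for x, _ in points)        # sum of x_i
    Y = sum(y for _, y in points)        # sum of y_i
    A = sum(x * x for x, _ in points)    # sum of x_i^2
    C = sum(x * y for x, y in points)    # sum of x_i * y_i
    m = (X * Y - n * C) / (X * X - n * A)
    b = (A * Y - C * X) / (A * n - X * X)
    return m, b

# Example 14.8: the five data points from the text.
m, b = least_squares_line([(3, 2), (4, 3), (5, 4), (6, 4), (7, 5)])
# m = 0.7 and b = 0.1, so the best-fit line is y = 0.7x + 0.1
```

The intermediate sums match the hand computation in the example: \(X = 25\), \(Y = 18\), \(A = 135\), \(C = 97\).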