Multivariate Calculus - Bruce E. Shapiro
LECTURE 11. GRADIENTS AND THE DIRECTIONAL DERIVATIVE

Definition 11.2 A function f(x, y) : R² ↦ R is said to be locally linear at the point (a, b) if there exist functions ε(h, k) and δ(h, k) such that

    f(a + h, b + k) = f(a, b) + h f_x(a, b) + k f_y(a, b) + h ε(h, k) + k δ(h, k)    (11.2)

where

    lim_{(h,k)→(0,0)} ε(h, k) = 0    and    lim_{(h,k)→(0,0)} δ(h, k) = 0

Definition 11.3 A function is said to be differentiable at P if it is locally linear at P.

Definition 11.4 A function is said to be differentiable on an open set R if it is differentiable at every point in R.

If we define the vectors

    P = (a, b),    h = (h, k),    e(h) = (ε, δ)

then equation 11.2 becomes

    f(P + h) = f(P) + h · (f_x(P), f_y(P)) + h · e

Rearranging terms, dividing by ‖h‖, and taking the limit as ‖h‖ → 0,

    lim_{‖h‖→0} [f(P + h) − f(P)] / ‖h‖ = lim_{‖h‖→0} h · (f_x(P), f_y(P)) / ‖h‖ + lim_{‖h‖→0} h · e / ‖h‖

Defining the unit vector ĥ = h/‖h‖,

    lim_{‖h‖→0} [f(P + h) − f(P)] / ‖h‖ = lim_{‖h‖→0} ĥ · (f_x(P), f_y(P)) + lim_{‖h‖→0} ĥ · e

The first limit on the right does not depend on the length ‖h‖, because h appears only through the unit vector ĥ, which has length 1, so that

    lim_{‖h‖→0} [f(P + h) − f(P)] / ‖h‖ = ĥ · (f_x(P), f_y(P)) + lim_{‖h‖→0} ĥ · e

The second limit on the right depends on h through the vector e, but the components of e approach 0 as the components of h approach 0. Since the dot product of a vector of length 1 with a vector of length 0 is zero,

    lim_{‖h‖→0} [f(P + h) − f(P)] / ‖h‖ = ĥ · (f_x(P), f_y(P))    (11.3)

This gives us a generalized definition of the derivative in the direction of any vector h. We first define the gradient vector of functions of two and three variables, which we will use heavily in the remainder of this course.

Revised December 6, 2006. Math 250, Fall 2006
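The limit in equation (11.3) can be checked numerically: for a small step t = ‖h‖ along a unit vector ĥ, the difference quotient should approach ĥ · (f_x(P), f_y(P)). A minimal sketch in Python, using a sample function f(x, y) = x²y + 3y chosen only for illustration:

```python
import math

def f(x, y):
    return x**2 * y + 3.0 * y          # sample smooth function (an assumption)

def grad_f(x, y):
    return (2.0 * x * y, x**2 + 3.0)   # analytic partials (f_x, f_y)

def directional_quotient(P, u, t):
    """Difference quotient (f(P + t*u) - f(P)) / t for unit vector u, step t = ||h||."""
    a, b = P
    return (f(a + t * u[0], b + t * u[1]) - f(a, b)) / t

P = (1.0, 2.0)
u = (3.0 / 5.0, 4.0 / 5.0)             # unit vector ĥ (length 1)
fx, fy = grad_f(*P)
exact = u[0] * fx + u[1] * fy          # ĥ · (f_x(P), f_y(P)), as in eq. (11.3)
approx = directional_quotient(P, u, 1e-6)
print(exact)                           # 5.6 for this f, P, and ĥ
print(approx)                          # close to 5.6, since t is small
```

Shrinking t further drives the quotient toward the dot product, which is exactly what the derivation above predicts.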
Definition 11.5 The gradient of a function f(x, y) : R² ↦ R of two variables is given by

    ∇f(x, y) = grad f(x, y) = i ∂f/∂x + j ∂f/∂y    (11.4)

The gradient of a function f(x, y, z) : R³ ↦ R of three variables is

    ∇f(x, y, z) = grad f(x, y, z) = i ∂f/∂x + j ∂f/∂y + k ∂f/∂z    (11.5)

Note that if we apply the three-dimensional definition to a function of two variables, we obtain the same result as the first definition, because the partial derivative of f(x, y) with respect to z is zero (z does not appear in the expression).

Example 11.1 Find the gradient of f(x, y) = sin³(x²y).

Solution.

    ∇f = i ∂f/∂x + j ∂f/∂y
       = i ∂/∂x sin³(x²y) + j ∂/∂y sin³(x²y)
       = i [3 sin²(x²y) ∂/∂x sin(x²y)] + j [3 sin²(x²y) ∂/∂y sin(x²y)]
       = i [3 sin²(x²y) cos(x²y)(2xy)] + j [3 sin²(x²y) cos(x²y)(x²)]
       = 3x sin²(x²y) cos(x²y)(2y i + x j)

Example 11.2 Find the gradient of f(x, y, z) = x²y + y²z + z²x.

Solution.

    ∇f = i ∂f/∂x + j ∂f/∂y + k ∂f/∂z
       = i ∂/∂x (x²y + y²z + z²x) + j ∂/∂y (x²y + y²z + z²x) + k ∂/∂z (x²y + y²z + z²x)
       = i(2xy + z²) + j(x² + 2yz) + k(y² + 2zx)

Theorem 11.1 (Properties of the Gradient Vector) Suppose that f and g are functions and c is a constant. Then the following are true:

    ∇[f + g] = ∇f + ∇g    (11.6)
    ∇(cf) = c∇f    (11.7)
    ∇(fg) = f∇g + g∇f    (11.8)
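The analytic gradient from Example 11.2 can be sanity-checked against a central-difference approximation of each partial derivative. This is a plain-Python sketch (the function names are my own, not from the notes):

```python
def grad_numeric(f, p, eps=1e-6):
    """Central-difference estimate of each partial derivative of f at point p."""
    g = []
    for i in range(len(p)):
        hi = list(p); lo = list(p)
        hi[i] += eps
        lo[i] -= eps
        g.append((f(hi) - f(lo)) / (2 * eps))
    return g

# Example 11.2: f(x, y, z) = x^2 y + y^2 z + z^2 x
f = lambda p: p[0]**2 * p[1] + p[1]**2 * p[2] + p[2]**2 * p[0]

# Analytic gradient from the worked example: (2xy + z^2, x^2 + 2yz, y^2 + 2zx)
grad_exact = lambda p: [2*p[0]*p[1] + p[2]**2,
                        p[0]**2 + 2*p[1]*p[2],
                        p[1]**2 + 2*p[2]*p[0]]

p = [1.0, 2.0, 3.0]
print(grad_exact(p))      # [13.0, 13.0, 10.0]
print(grad_numeric(f, p)) # numerically close to the analytic values
```

The same check works for Example 11.1 (restricted to two variables), and for verifying the sum and product rules in Theorem 11.1 at any sample point.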