v2007.09.13 - Convex Optimization

3.1.8 gradient

Gradient ∇f of any differentiable multidimensional function f (formally defined in §D.1) maps each entry f_i to a space having the same dimension as the ambient space of its domain. Notation ∇f is shorthand for gradient ∇_x f(x) of f with respect to x. ∇f(y) can mean ∇_y f(y) or gradient ∇_x f(y) of f(x) with respect to x evaluated at y; a distinction that should become clear from context.

Gradient of a differentiable real function f(x) : R^K → R with respect to its vector domain is defined

\[
\nabla f(x) \;\triangleq\;
\begin{bmatrix}
\frac{\partial f(x)}{\partial x_1} \\[2pt]
\frac{\partial f(x)}{\partial x_2} \\[2pt]
\vdots \\[2pt]
\frac{\partial f(x)}{\partial x_K}
\end{bmatrix}
\in \mathbb{R}^K
\tag{1531}
\]

while the second-order gradient of the twice differentiable real function with respect to its vector domain is traditionally called the Hessian;^{3.7}

\[
\nabla^2 f(x) \;\triangleq\;
\begin{bmatrix}
\frac{\partial^2 f(x)}{\partial x_1^2} & \frac{\partial^2 f(x)}{\partial x_1\,\partial x_2} & \cdots & \frac{\partial^2 f(x)}{\partial x_1\,\partial x_K} \\[4pt]
\frac{\partial^2 f(x)}{\partial x_2\,\partial x_1} & \frac{\partial^2 f(x)}{\partial x_2^2} & \cdots & \frac{\partial^2 f(x)}{\partial x_2\,\partial x_K} \\[4pt]
\vdots & \vdots & \ddots & \vdots \\[4pt]
\frac{\partial^2 f(x)}{\partial x_K\,\partial x_1} & \frac{\partial^2 f(x)}{\partial x_K\,\partial x_2} & \cdots & \frac{\partial^2 f(x)}{\partial x_K^2}
\end{bmatrix}
\in \mathbb{S}^K
\tag{1532}
\]

The gradient can be interpreted as a vector pointing in the direction of greatest change. [160, §15.6] The gradient can also be interpreted as that vector normal to a level set; e.g., Figure 60, Figure 53.

For the quadratic bowl in Figure 59, the gradient maps to R^2; illustrated in Figure 58. For a one-dimensional function of a real variable, the gradient evaluated at any point in the function domain is just the slope (or derivative) of that function there. (confer §D.1.4.1)

For any differentiable multidimensional function, zero gradient ∇f = 0 is a necessary condition for its unconstrained minimization [103, §3.2].

^{3.7} Jacobian is the Hessian transpose, so commonly confused in matrix calculus.
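To make (1531), (1532), and the zero-gradient condition concrete, here is a minimal numerical sketch in Python/NumPy for a quadratic bowl in the spirit of Figure 59. It is not from the book: the particular f(x) = xᵀAx − 2bᵀx, the matrix A, and the vector b are illustrative assumptions. The analytic gradient and Hessian follow the definitions above, a central finite difference verifies the gradient, and solving ∇f = 0 locates the unconstrained minimizer.

```python
import numpy as np

# Illustrative quadratic bowl f(x) = x'Ax - 2b'x with A positive definite
# (A and b are assumptions chosen for this sketch, not from the book).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # A in S^2, positive definite
b = np.array([1.0, -1.0])

f    = lambda x: x @ A @ x - 2 * b @ x
grad = lambda x: 2 * A @ x - 2 * b  # analytic gradient per (1531), in R^2
hess = 2 * A                        # analytic Hessian per (1532): symmetric, in S^2

# Central finite-difference check of the gradient at a test point.
x0, h = np.array([0.3, -0.7]), 1e-6
fd = np.array([(f(x0 + h*e) - f(x0 - h*e)) / (2*h) for e in np.eye(2)])
assert np.allclose(fd, grad(x0), atol=1e-5)

# Zero gradient is necessary for an unconstrained minimum:
# grad(x*) = 0 gives A x* = b, i.e. x* = A^{-1} b for this f.
x_star = np.linalg.solve(A, b)
print(grad(x_star))                 # ~ [0, 0]
```

Because f here is quadratic, ∇f = 0 is a linear system and the necessary condition is also sufficient (the Hessian 2A is positive definite); for a general differentiable function it identifies only stationary points.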
