v2010.10.26 - Convex Optimization

252 CHAPTER 3. GEOMETRY OF CONVEX FUNCTIONS

Setting the gradient to 0

    Ax = x(x^T x)    (591)

is necessary for an optimal solution. Replace vector x with a normalized eigenvector v_i of A ∈ S^N, corresponding to a positive eigenvalue λ_i, scaled by the square root of that eigenvalue. Then (591) is satisfied:

    x ← v_i √λ_i  ⇒  A v_i = v_i λ_i    (592)

xx^T = λ_i v_i v_i^T is a rank-1 matrix on the boundary of the positive semidefinite cone, and the minimum is achieved (§7.1.2) when λ_i = λ_1 is the largest positive eigenvalue of A. If A has no positive eigenvalue, then x = 0 yields the minimum.

Differentiability is a prerequisite neither to convexity nor to numerical solution of a convex optimization problem. For any differentiable multidimensional convex function, zero gradient ∇f = 0 is a necessary and sufficient condition for its unconstrained minimization [61, §5.5.3]; the gradient likewise provides a necessary and sufficient condition (352)(452) for optimality in the constrained case.

3.6.0.0.2 Example. Pseudoinverse.
The pseudoinverse matrix is the unique solution to an unconstrained convex optimization problem [159, §5.5.4]: given A ∈ R^{m×n},

    minimize_{X ∈ R^{n×m}}  ‖XA − I‖_F^2    (593)

where

    ‖XA − I‖_F^2 = tr(A^T X^T X A − XA − A^T X^T + I)    (594)

whose gradient (§D.2.3)

    ∇_X ‖XA − I‖_F^2 = 2(X A A^T − A^T)    (595)

vanishes when

    X A A^T = A^T    (596)

When A is fat full-rank, then AA^T is invertible, X⋆ = A^T (AA^T)^{-1} is the pseudoinverse A†, and A A† = I. Otherwise, we can make AA^T invertible by adding a positively scaled identity: for any A ∈ R^{m×n},

    X = A^T (AA^T + t I)^{-1}    (597)
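The stationarity condition (591) and the choice (592) can be checked numerically. The following is a minimal NumPy sketch, not part of the text: the 2×2 matrix A is an illustrative assumption, and the objective ‖A − xx^T‖_F^2 is the assumed problem whose gradient yields (591). Each eigenvector scaled by the square root of its eigenvalue is stationary, and the largest positive eigenvalue gives the smallest objective value:

```python
import numpy as np

# illustrative symmetric matrix with eigenvalues 1 and 3 (an assumption)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)           # ascending eigenvalues: lam = [1, 3]

def objective(x):
    # assumed objective behind (591): ||A - x x^T||_F^2
    return np.linalg.norm(A - np.outer(x, x), 'fro')**2

vals = []
for li, vi in zip(lam, V.T):
    x = vi * np.sqrt(li)             # scale eigenvector by sqrt(eigenvalue), (592)
    # stationarity condition (591): A x = x (x^T x)
    assert np.allclose(A @ x, x * (x @ x))
    vals.append(objective(x))

print(vals)   # objective at each stationary point; minimum at lam_1 = 3
```

Here both stationary points satisfy (591), but only the one built from the largest eigenvalue attains the minimum, consistent with the text.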

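The pseudoinverse characterization (596) and its regularization (597) are easy to verify numerically. A minimal sketch, assuming NumPy and an illustrative fat full-rank matrix A (not from the text):

```python
import numpy as np

# illustrative fat (2x3) full-rank matrix (an assumption)
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

# fat full-rank case: X* = A^T (A A^T)^{-1} solves (596) and is the pseudoinverse
X_star = A.T @ np.linalg.inv(A @ A.T)
assert np.allclose(X_star, np.linalg.pinv(A))
assert np.allclose(A @ X_star, np.eye(2))        # A A† = I

# general case (597): add a positively scaled identity, then let t -> 0+
for t in [1e-2, 1e-6, 1e-10]:
    X_t = A.T @ np.linalg.inv(A @ A.T + t * np.eye(2))
    print(t, np.linalg.norm(X_t - np.linalg.pinv(A)))
```

The printed distances shrink as t decreases, illustrating that the regularized solution (597) approaches A† in the limit t → 0+.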