
3.1. CONVEX FUNCTION

Setting the gradient to 0,

$$Ax = x(x^Tx) \tag{528}$$

is necessary for an optimal solution. Replace vector $x$ with a normalized eigenvector $v_i$ of $A\in\mathbb{S}^N$, corresponding to a positive eigenvalue $\lambda_i$, scaled by the square root of that eigenvalue. Then (528) is satisfied:

$$x \leftarrow v_i\sqrt{\lambda_i} \;\Rightarrow\; Av_i = v_i\lambda_i \tag{529}$$

$xx^T = \lambda_i v_i v_i^T$ is a rank-1 matrix on the boundary of the positive semidefinite cone, and the minimum is achieved (§7.1.2) when $\lambda_i = \lambda_1$ is the largest positive eigenvalue of $A$. If $A$ has no positive eigenvalue, then $x = 0$ yields the minimum.
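As a numerical check only (a sketch, not from the text: it assumes the objective being minimized is $\|xx^T - A\|_F^2$, which is consistent with the gradient condition (528)), the following NumPy snippet confirms that the scaled eigenvector (529) built from the largest positive eigenvalue attains the smallest objective value among the stationary candidates:

```python
import numpy as np

# Assumed objective (inferred from (528), not quoted from the book):
# f(x) = ||x x^T - A||_F^2, whose gradient 4(x(x^T x) - Ax) = 0 gives (528).
def f(y, A):
    return np.linalg.norm(np.outer(y, y) - A, 'fro')**2

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2               # random symmetric A (assumed to have a positive eigenvalue)

lam, V = np.linalg.eigh(A)      # eigenvalues in ascending order
x = V[:, -1] * np.sqrt(lam[-1]) # x <- v_1 sqrt(lambda_1), as in (529)

assert np.allclose(A @ x, x * (x @ x))   # stationarity (528): Ax = x(x^T x)

# Stationary candidates: v_j sqrt(lambda_j) for positive eigenvalues, else x = 0.
candidates = [V[:, j] * np.sqrt(lam[j]) if lam[j] > 0 else np.zeros(5)
              for j in range(5)]
assert np.isclose(f(x, A), min(f(c, A) for c in candidates))
```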

Differentiability is a prerequisite neither to convexity nor to numerical solution of a convex optimization problem. Nevertheless, the gradient provides a necessary and sufficient condition for optimality in the constrained case (319) (405), just as it does in the unconstrained case: for any differentiable multidimensional convex function, zero gradient $\nabla f = 0$ is a necessary and sufficient condition for its unconstrained minimization [53, §5.5.3]:

3.1.8.0.2 Example. Pseudoinverse.
The pseudoinverse matrix is the unique solution to an unconstrained convex optimization problem [134, §5.5.4]: given $A\in\mathbb{R}^{m\times n}$,

$$\underset{X\in\mathbb{R}^{n\times m}}{\text{minimize}} \;\; \|XA - I\|_F^2 \tag{530}$$

where

$$\|XA - I\|_F^2 = \mathrm{tr}\!\left(A^TX^TXA - XA - A^TX^T + I\right) \tag{531}$$

whose gradient (§D.2.3)

$$\nabla_X \|XA - I\|_F^2 = 2\left(XAA^T - A^T\right) = 0 \tag{532}$$

vanishes when

$$XAA^T = A^T \tag{533}$$
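When $A$ has full row rank, $AA^T$ is invertible and (533) is solved by $X = A^T(AA^T)^{-1}$, which in that case equals the Moore-Penrose pseudoinverse $A^+$. As a numerical sketch (the full-row-rank assumption and this closed form are additions here, not quoted from the text), the following NumPy snippet checks the trace identity (531), the vanishing gradient (532), and agreement with `np.linalg.pinv`:

```python
import numpy as np

# Sketch under assumptions: A has full row rank, so A A^T is invertible;
# then X = A^T (A A^T)^{-1} solves (533) and equals pinv(A).
rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))    # generic A is full row rank for m <= n
I = np.eye(n)

X = A.T @ np.linalg.inv(A @ A.T)   # candidate solution of (533)

# trace identity (531)
lhs = np.linalg.norm(X @ A - I, 'fro')**2
rhs = np.trace(A.T @ X.T @ X @ A - X @ A - A.T @ X.T + I)
assert np.isclose(lhs, rhs)

assert np.allclose(2*(X @ A @ A.T - A.T), 0)   # gradient (532) vanishes
assert np.allclose(X, np.linalg.pinv(A))       # X is the pseudoinverse A^+
```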
