D.1. DIRECTIONAL DERIVATIVE, TAYLOR SERIES

in magnitude and direction to $Y$.$^{\text{D.3}}$ Hence the directional derivative,

$$
\overset{\to Y}{dg}(X) \;\triangleq\;
\begin{bmatrix}
dg_{11}(X) & dg_{12}(X) & \cdots & dg_{1N}(X)\\
dg_{21}(X) & dg_{22}(X) & \cdots & dg_{2N}(X)\\
\vdots & \vdots & & \vdots\\
dg_{M1}(X) & dg_{M2}(X) & \cdots & dg_{MN}(X)
\end{bmatrix}
\in \mathbb{R}^{M\times N}
\Bigg|_{\,dX\to Y}
$$

$$
=\;
\begin{bmatrix}
\operatorname{tr}\!\left(\nabla g_{11}(X)^{T} Y\right) & \operatorname{tr}\!\left(\nabla g_{12}(X)^{T} Y\right) & \cdots & \operatorname{tr}\!\left(\nabla g_{1N}(X)^{T} Y\right)\\
\operatorname{tr}\!\left(\nabla g_{21}(X)^{T} Y\right) & \operatorname{tr}\!\left(\nabla g_{22}(X)^{T} Y\right) & \cdots & \operatorname{tr}\!\left(\nabla g_{2N}(X)^{T} Y\right)\\
\vdots & \vdots & & \vdots\\
\operatorname{tr}\!\left(\nabla g_{M1}(X)^{T} Y\right) & \operatorname{tr}\!\left(\nabla g_{M2}(X)^{T} Y\right) & \cdots & \operatorname{tr}\!\left(\nabla g_{MN}(X)^{T} Y\right)
\end{bmatrix}
$$

$$
=\;
\begin{bmatrix}
\sum_{k,l} \frac{\partial g_{11}(X)}{\partial X_{kl}}\, Y_{kl} & \sum_{k,l} \frac{\partial g_{12}(X)}{\partial X_{kl}}\, Y_{kl} & \cdots & \sum_{k,l} \frac{\partial g_{1N}(X)}{\partial X_{kl}}\, Y_{kl}\\
\sum_{k,l} \frac{\partial g_{21}(X)}{\partial X_{kl}}\, Y_{kl} & \sum_{k,l} \frac{\partial g_{22}(X)}{\partial X_{kl}}\, Y_{kl} & \cdots & \sum_{k,l} \frac{\partial g_{2N}(X)}{\partial X_{kl}}\, Y_{kl}\\
\vdots & \vdots & & \vdots\\
\sum_{k,l} \frac{\partial g_{M1}(X)}{\partial X_{kl}}\, Y_{kl} & \sum_{k,l} \frac{\partial g_{M2}(X)}{\partial X_{kl}}\, Y_{kl} & \cdots & \sum_{k,l} \frac{\partial g_{MN}(X)}{\partial X_{kl}}\, Y_{kl}
\end{bmatrix}
\tag{1575}
$$

from which it follows

$$
\overset{\to Y}{dg}(X) \;=\; \sum_{k,l} \frac{\partial g(X)}{\partial X_{kl}}\, Y_{kl} \tag{1576}
$$

Yet for all $X \in \operatorname{dom} g$, any $Y \in \mathbb{R}^{K\times L}$, and some open interval of $t \in \mathbb{R}$,

$$
g(X + tY) \;=\; g(X) \;+\; t\,\overset{\to Y}{dg}(X) \;+\; o(t^{2}) \tag{1577}
$$

which is the first-order Taylor series expansion about $X$. [160, §18.4] [103, §2.3.4] Differentiation with respect to $t$ and subsequent $t$-zeroing isolates the second term of the expansion. Thus differentiating and zeroing $g(X+tY)$ in $t$ is an operation equivalent to individually differentiating and zeroing every entry $g_{mn}(X+tY)$ as in (1574). So the directional derivative of $g(X) : \mathbb{R}^{K\times L} \to \mathbb{R}^{M\times N}$ in any direction $Y \in \mathbb{R}^{K\times L}$, evaluated at $X \in \operatorname{dom} g$, becomes

$$
\overset{\to Y}{dg}(X) \;=\; \left.\frac{d}{dt}\right|_{t=0} g(X + tY) \;\in\; \mathbb{R}^{M\times N}
$$

$^{\text{D.3}}$ Although $Y$ is a matrix, we may regard it as a vector in $\mathbb{R}^{KL}$.
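As a minimal numerical sketch (not from the book), the three characterizations above can be checked against one another with automatic differentiation: the Jacobian-vector product gives the directional derivative directly, the index sum reproduces (1576), and differentiating $g(X+tY)$ in $t$ at $t=0$ reproduces the last display. The test function `g`, the shapes, and all variable names below are illustrative assumptions, written here with JAX.

```python
import jax
import jax.numpy as jnp

def g(X):
    # hypothetical matrix-valued test function g : R^{K x L} -> R^{M x N}
    return X.T @ X

X = jax.random.normal(jax.random.PRNGKey(0), (3, 2))   # point X in dom g
Y = jax.random.normal(jax.random.PRNGKey(1), (3, 2))   # direction Y

# directional derivative  ->Y dg(X)  as a Jacobian-vector product
_, dgY = jax.jvp(g, (X,), (Y,))

# same quantity via (1576):  sum_{k,l} dg(X)/dX_kl * Y_kl
J = jax.jacobian(g)(X)                    # shape (M, N, K, L)
dgY_sum = jnp.einsum('mnkl,kl->mn', J, Y)

# same quantity by differentiating g(X + tY) in t and zeroing t
h = lambda t: g(X + t * Y)
dgY_t = jax.jacobian(h)(0.0)

print(jnp.allclose(dgY, dgY_sum), jnp.allclose(dgY, dgY_t))   # True True
```

The first-order expansion (1577) can be observed the same way: for small $t$, the residual $g(X+tY) - g(X) - t\,\overset{\to Y}{dg}(X)$ shrinks faster than $t$ itself.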
