
APPENDIX D. MATRIX CALCULUS

When, additionally, g(X) : \mathbb{R}^K \to \mathbb{R} has vector argument,

\[
\nabla_X\, g(X+tY)^{\mathrm{T}} Y \;=\; \frac{d}{dt}\, g(X+tY) \tag{1723}
\]
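As a quick numerical sketch of (1723) — not from the book; it uses numpy, a randomly drawn matrix A, and the hypothetical test function g(x) = xᵀA x, whose gradient is (A + Aᵀ)x — the directional derivative ∇g(x+ty)ᵀy should match a finite difference of t ↦ g(x+ty):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
A = rng.standard_normal((K, K))
x = rng.standard_normal(K)
y = rng.standard_normal(K)

g = lambda v: v @ A @ v            # hypothetical g(x) = x^T A x
grad = lambda v: (A + A.T) @ v     # its gradient (A + A^T) x

t = 0.7
lhs = grad(x + t * y) @ y          # left side of (1723)

h = 1e-6                           # central finite difference in t
rhs = (g(x + (t + h) * y) - g(x + (t - h) * y)) / (2 * h)

print(abs(lhs - rhs))              # agreement to finite-difference accuracy
```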

D.1.8.1.1 Example. Gradient.
g(X) = w^{\mathrm{T}} X^{\mathrm{T}} X w , \; X \in \mathbb{R}^{K\times L} , \; w \in \mathbb{R}^L . Using the tables in D.2,

\[
\operatorname{tr}\bigl(\nabla_X\, g(X+tY)^{\mathrm{T}} Y\bigr)
= \operatorname{tr}\bigl(2ww^{\mathrm{T}}(X^{\mathrm{T}} + tY^{\mathrm{T}})Y\bigr) \tag{1724}
\]
\[
= 2w^{\mathrm{T}}(X^{\mathrm{T}} Y + tY^{\mathrm{T}} Y)w \tag{1725}
\]

Applying the equivalence (1722),

\[
\frac{d}{dt}\, g(X+tY) \;=\; \frac{d}{dt}\, w^{\mathrm{T}}(X+tY)^{\mathrm{T}}(X+tY)w \tag{1726}
\]
\[
= w^{\mathrm{T}}\bigl(X^{\mathrm{T}} Y + Y^{\mathrm{T}} X + 2tY^{\mathrm{T}} Y\bigr)w \tag{1727}
\]
\[
= 2w^{\mathrm{T}}(X^{\mathrm{T}} Y + tY^{\mathrm{T}} Y)w \tag{1728}
\]

which is the same as (1725); hence, the equivalence is demonstrated.
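The agreement of (1725) with (1728) can be checked numerically — a sketch not from the book, using numpy with randomly drawn X, Y, w — by comparing the closed form 2wᵀ(XᵀY + tYᵀY)w against a finite difference of t ↦ g(X+tY):

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 4, 3
X = rng.standard_normal((K, L))
Y = rng.standard_normal((K, L))
w = rng.standard_normal(L)

g = lambda M: w @ M.T @ M @ w      # g(X) = w^T X^T X w

t = 0.3
closed = 2 * w @ (X.T @ Y + t * Y.T @ Y) @ w                 # (1728)
trace_form = 2 * np.trace(np.outer(w, w) @ (X.T + t * Y.T) @ Y)  # (1724)

h = 1e-6                           # central finite difference in t
fd = (g(X + (t + h) * Y) - g(X + (t - h) * Y)) / (2 * h)

print(abs(closed - trace_form), abs(closed - fd))  # both small
```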

It is easy to extract \nabla g(X) from (1728) knowing only (1722):

\[
\begin{aligned}
\operatorname{tr}\bigl(\nabla_X\, g(X+tY)^{\mathrm{T}} Y\bigr)
&= 2w^{\mathrm{T}}(X^{\mathrm{T}} Y + tY^{\mathrm{T}} Y)w \\
&= 2\operatorname{tr}\bigl(ww^{\mathrm{T}}(X^{\mathrm{T}} + tY^{\mathrm{T}})Y\bigr) \\
\operatorname{tr}\bigl(\nabla_X\, g(X)^{\mathrm{T}} Y\bigr)
&= 2\operatorname{tr}\bigl(ww^{\mathrm{T}} X^{\mathrm{T}} Y\bigr)
\end{aligned}
\qquad\Leftrightarrow\qquad
\nabla_X\, g(X) = 2Xww^{\mathrm{T}} \tag{1729}
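The extracted gradient (1729) can be sanity-checked entrywise — again a numerical sketch, not from the book, using numpy with random data — against a finite-difference gradient of g:

```python
import numpy as np

rng = np.random.default_rng(2)
K, L = 4, 3
X = rng.standard_normal((K, L))
w = rng.standard_normal(L)

g = lambda M: w @ M.T @ M @ w      # g(X) = w^T X^T X w

grad = 2 * X @ np.outer(w, w)      # claimed gradient (1729): 2 X w w^T

# entrywise central finite differences over X
h = 1e-6
num = np.zeros_like(X)
for i in range(K):
    for j in range(L):
        E = np.zeros_like(X)
        E[i, j] = h
        num[i, j] = (g(X + E) - g(X - E)) / (2 * h)

print(np.max(np.abs(grad - num)))  # small
```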

D.1.8.2 second-order

Likewise removing the evaluation at t = 0 from (1707),

\[
\overset{\to Y\,}{dg^2}(X+tY) \;=\; \frac{d^2}{dt^2}\, g(X+tY) \tag{1730}
\]

we can find a similar relationship between the second-order gradient and the second derivative: In the general case g(X) : \mathbb{R}^{K\times L} \to \mathbb{R}^{M\times N} from (1700) and (1703),

\[
\operatorname{tr}\Bigl(\nabla_X \operatorname{tr}\bigl(\nabla_X\, g_{mn}(X+tY)^{\mathrm{T}} Y\bigr)^{\mathrm{T}} Y\Bigr)
\;=\; \frac{d^2}{dt^2}\, g_{mn}(X+tY) \tag{1731}
\]
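For the scalar-valued example g(X) = wᵀXᵀXw above, the nested gradients in (1731) collapse: tr(∇g(X)ᵀY) = 2wᵀXᵀYw, whose gradient in X is 2Ywwᵀ, so the left side becomes 2wᵀYᵀYw — matching the t-independent second derivative visible in (1727). A numerical sketch (not from the book; numpy, random data) comparing this against a second central difference in t:

```python
import numpy as np

rng = np.random.default_rng(3)
K, L = 4, 3
X = rng.standard_normal((K, L))
Y = rng.standard_normal((K, L))
w = rng.standard_normal(L)

g = lambda M: w @ M.T @ M @ w      # g(X) = w^T X^T X w

# left side of (1731) for this g: the nested traces reduce to 2 w^T Y^T Y w
lhs = 2 * w @ Y.T @ Y @ w

t = 0.5
h = 1e-4                           # second central difference in t
rhs = (g(X + (t + h) * Y) - 2 * g(X + t * Y) + g(X + (t - h) * Y)) / h**2

print(abs(lhs - rhs))              # small: g(X+tY) is quadratic in t
```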
