D.1. DIRECTIONAL DERIVATIVE, TAYLOR SERIES

From (1604), the simpler case, where the real function $g(X) : \mathbb{R}^K \to \mathbb{R}$ has vector argument,

$$Y^{\mathrm T}\, \nabla_X^2\, g(X + t\,Y)\, Y \;=\; \frac{d^2}{dt^2}\, g(X + t\,Y) \tag{1621}$$

D.1.8.2.1 Example. Second-order gradient.
Given real function $g(X) = \log\det X$ having domain $\operatorname{int}\mathbb{S}^K_+$, we want to find $\nabla^2 g(X) \in \mathbb{R}^{K\times K\times K\times K}$. From the tables in D.2,

$$h(X) \,\triangleq\, \nabla g(X) = X^{-1} \,\in\, \operatorname{int}\mathbb{S}^K_+ \tag{1622}$$

so $\nabla^2 g(X) = \nabla h(X)$. By (1609) and (1577), for $Y \in \mathbb{S}^K$

$$\operatorname{tr}\!\left(\nabla h_{mn}(X)^{\mathrm T}\, Y\right) \;=\; \left.\frac{d}{dt}\right|_{t=0} h_{mn}(X + t\,Y) \tag{1623}$$

$$=\; \left[\left.\frac{d}{dt}\right|_{t=0} h(X + t\,Y)\right]_{mn} \tag{1624}$$

$$=\; \left[\left.\frac{d}{dt}\right|_{t=0} (X + t\,Y)^{-1}\right]_{mn} \tag{1625}$$

$$=\; -\left(X^{-1}\, Y\, X^{-1}\right)_{mn} \tag{1626}$$

Setting $Y$ to a member of $\{\,e_k e_l^{\mathrm T} \in \mathbb{R}^{K\times K} \mid k,l = 1\ldots K\,\}$, and employing a property (32) of the trace function, we find

$$\nabla^2 g(X)_{mnkl} \;=\; \operatorname{tr}\!\left(\nabla h_{mn}(X)^{\mathrm T}\, e_k e_l^{\mathrm T}\right) \;=\; \nabla h_{mn}(X)_{kl} \;=\; -\left(X^{-1}\, e_k e_l^{\mathrm T}\, X^{-1}\right)_{mn} \tag{1627}$$

$$\nabla^2 g(X)_{kl} \;=\; \nabla h(X)_{kl} \;=\; -\,X^{-1}\, e_k e_l^{\mathrm T}\, X^{-1} \;\in\; \mathbb{R}^{K\times K} \tag{1628}$$

From all these first- and second-order expressions, we may generate new ones by evaluating both sides at arbitrary $t$ (in some open interval) but only after the differentiation.
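Identities (1621) and (1626) are easy to verify numerically. The following sketch is not from the book; the test function $g(x) = e^{a^{\mathrm T} x}$ (whose Hessian is $a\,a^{\mathrm T} e^{a^{\mathrm T} x}$) and all variable names are illustrative assumptions. It compares a central finite difference of $t \mapsto g(X + t\,Y)$ against $Y^{\mathrm T}\nabla^2 g\, Y$, and a forward difference of the matrix inverse against $-X^{-1} e_k e_l^{\mathrm T} X^{-1}$ from (1628):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4

# Check (1621): Y^T grad^2 g(X+tY) Y = d^2/dt^2 g(X+tY), vector argument.
# Hypothetical test function g(x) = exp(a^T x); any smooth g would do.
a = rng.standard_normal(K)
x = rng.standard_normal(K)
y = rng.standard_normal(K)
g = lambda t: np.exp(a @ (x + t * y))

t0, dt = 0.3, 1e-3                        # arbitrary t, after differentiating
second_diff = (g(t0 + dt) - 2.0 * g(t0) + g(t0 - dt)) / dt**2
hess = np.outer(a, a) * g(t0)             # grad^2 g evaluated at x + t0*y
assert np.isclose(y @ hess @ y, second_diff, rtol=1e-4)

# Check (1626)/(1628): d/dt (X+tY)^{-1} |_{t=0} = -X^{-1} Y X^{-1},
# with Y running over the basis directions e_k e_l^T.
A = rng.standard_normal((K, K))
X = A @ A.T + K * np.eye(K)               # a well-conditioned point in int S^K_+
Xinv = np.linalg.inv(X)
eps = 1e-6

for k in range(K):
    for l in range(K):
        E = np.zeros((K, K))
        E[k, l] = 1.0                     # direction e_k e_l^T
        numeric = (np.linalg.inv(X + eps * E) - Xinv) / eps
        analytic = -Xinv @ E @ Xinv       # (1628): the (k,l) slice of grad^2 g(X)
        assert np.allclose(numeric, analytic, atol=1e-4)

print("finite-difference checks of (1621) and (1628) pass")
```

Note that the first check evaluates both sides of (1621) at $t_0 = 0.3$ rather than at $t = 0$, illustrating the closing remark above: the identities hold at arbitrary $t$, provided the differentiation is carried out first.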
