v2009.01.01 - Convex Optimization

D.1. DIRECTIONAL DERIVATIVE, TAYLOR SERIES

In the case of a real function g(X) : ℝ^{K×L} → ℝ we have, of course,

$$\operatorname{tr}\!\left(\nabla_X \operatorname{tr}\!\left(\nabla_X\, g(X+tY)^{\mathrm T} Y\right)^{\mathrm T} Y\right) \;=\; \frac{d^2}{dt^2}\, g(X+tY) \tag{1732}$$
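As a sanity check (an illustration, not from the book), the right-hand side of (1732) can be verified numerically for g(X) = log det X, whose second directional derivative at t = 0 has the known closed form −tr(X⁻¹Y X⁻¹Y):

```python
import numpy as np

# Numerical check of the matrix-argument second directional derivative
# in (1732), using g(X) = log det X, for which d^2/dt^2 g(X+tY) at t=0
# equals -tr(X^{-1} Y X^{-1} Y).  Random SPD X and symmetric Y.
rng = np.random.default_rng(4)
K = 4
A = rng.standard_normal((K, K))
X = A @ A.T + K * np.eye(K)                # symmetric positive definite
B = rng.standard_normal((K, K))
Y = (B + B.T) / 2                          # symmetric direction

Xinv = np.linalg.inv(X)
lhs = -np.trace(Xinv @ Y @ Xinv @ Y)       # closed-form second derivative

def g(M):
    return np.log(np.linalg.det(M))

h = 1e-4                                   # central second difference in t
rhs = (g(X + h*Y) - 2*g(X) + g(X - h*Y)) / h**2

assert abs(lhs - rhs) < 1e-5
```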

From (1712), the simpler case, where the real function g(X) : ℝ^K → ℝ has vector argument,

$$Y^{\mathrm T}\, \nabla_X^2\, g(X+tY)\, Y \;=\; \frac{d^2}{dt^2}\, g(X+tY) \tag{1733}$$
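Identity (1733) can be checked numerically at t = 0 (an illustration, not from the book) with g chosen as log-sum-exp, whose Hessian has the known closed form diag(p) − pp^T for p = softmax(x):

```python
import numpy as np

# Numerical check of (1733) at t = 0: y^T ∇²g(x) y = d²/dt² g(x+ty),
# illustrated with g = log-sum-exp, whose Hessian in closed form is
# diag(p) - p p^T with p = softmax(x).
rng = np.random.default_rng(0)
K = 5
x = rng.standard_normal(K)
y = rng.standard_normal(K)

def g(v):
    return np.log(np.sum(np.exp(v)))

p = np.exp(x) / np.sum(np.exp(x))          # softmax(x)
hessian = np.diag(p) - np.outer(p, p)      # ∇²g(x)
lhs = y @ hessian @ y                      # y^T ∇²g(x) y

h = 1e-4                                   # central second difference in t
rhs = (g(x + h*y) - 2*g(x) + g(x - h*y)) / h**2

assert abs(lhs - rhs) < 1e-6
```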

D.1.8.2.1 Example. Second-order gradient.

Given real function g(X) = log det X having domain int S^K_+, we want to find ∇²g(X) ∈ ℝ^{K×K×K×K}. From the tables in D.2,

$$h(X) \,\triangleq\, \nabla g(X) \,=\, X^{-1} \,\in\, \operatorname{int}\mathbb{S}^K_+ \tag{1734}$$

so ∇²g(X) = ∇h(X). By (1721) and (1685), for Y ∈ S^K

$$\begin{aligned}
\operatorname{tr}\!\left(\nabla h_{mn}(X)^{\mathrm T} Y\right)
&= \left.\frac{d}{dt}\right|_{t=0} h_{mn}(X+tY) && (1735)\\
&= \left(\left.\frac{d}{dt}\right|_{t=0} h(X+tY)\right)_{\!mn} && (1736)\\
&= \left(\left.\frac{d}{dt}\right|_{t=0} (X+tY)^{-1}\right)_{\!mn} && (1737)\\
&= -\left(X^{-1}\, Y\, X^{-1}\right)_{mn} && (1738)
\end{aligned}$$
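The key step (1738), that the derivative of the matrix inverse along direction Y is −X⁻¹Y X⁻¹, can be confirmed by finite differences (an illustration, not from the book):

```python
import numpy as np

# Numerical check of step (1738):
#   d/dt (X+tY)^{-1} |_{t=0} = -X^{-1} Y X^{-1},
# using a random symmetric positive definite X and symmetric Y.
rng = np.random.default_rng(1)
K = 4
A = rng.standard_normal((K, K))
X = A @ A.T + K * np.eye(K)                # symmetric positive definite
B = rng.standard_normal((K, K))
Y = (B + B.T) / 2                          # symmetric direction

Xinv = np.linalg.inv(X)
closed_form = -Xinv @ Y @ Xinv

h = 1e-6                                   # central first difference in t
finite_diff = (np.linalg.inv(X + h*Y) - np.linalg.inv(X - h*Y)) / (2*h)

assert np.allclose(closed_form, finite_diff, atol=1e-6)
```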

Setting Y to a member of {e_k e_l^T ∈ ℝ^{K×K} | k, l = 1 … K}, and employing a property (32) of the trace function, we find

$$\nabla^2 g(X)_{mnkl} \,=\, \operatorname{tr}\!\left(\nabla h_{mn}(X)^{\mathrm T}\, e_k e_l^{\mathrm T}\right) \,=\, \nabla h_{mn}(X)_{kl} \,=\, -\left(X^{-1} e_k e_l^{\mathrm T} X^{-1}\right)_{mn} \tag{1739}$$

$$\nabla^2 g(X)_{kl} \,=\, \nabla h(X)_{kl} \,=\, -\left(X^{-1} e_k e_l^{\mathrm T} X^{-1}\right) \,\in\, \mathbb{R}^{K\times K} \tag{1740}$$
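The block formula (1740) can likewise be verified numerically (an illustration, not from the book) by differentiating h(X) = X⁻¹ in the direction e_k e_l^T:

```python
import numpy as np

# Numerical check of (1740): the (k,l) block of ∇²g(X) for
# g(X) = log det X equals -(X^{-1} e_k e_l^T X^{-1}), obtained by
# differentiating h(X) = X^{-1} in the direction e_k e_l^T.
rng = np.random.default_rng(2)
K = 4
A = rng.standard_normal((K, K))
X = A @ A.T + K * np.eye(K)                # a point in int S^K_+
Xinv = np.linalg.inv(X)

k, l = 1, 3
E = np.zeros((K, K))
E[k, l] = 1.0                              # e_k e_l^T

block = -Xinv @ E @ Xinv                   # right-hand side of (1740)

h = 1e-6                                   # central difference along E
finite_diff = (np.linalg.inv(X + h*E) - np.linalg.inv(X - h*E)) / (2*h)

assert np.allclose(block, finite_diff, atol=1e-6)
```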

From all these first- and second-order expressions, we may generate new ones by evaluating both sides at arbitrary t (in some open interval), but only after the differentiation.
