
D.1 Directional Derivative, Taylor Series

The gradient of real function $g(X) : \mathbb{R}^{K\times L}\rightarrow\mathbb{R}$ on matrix domain is

$$
\nabla g(X) \;\triangleq\;
\begin{bmatrix}
\dfrac{\partial g(X)}{\partial X_{11}} & \dfrac{\partial g(X)}{\partial X_{12}} & \cdots & \dfrac{\partial g(X)}{\partial X_{1L}}\\[1ex]
\dfrac{\partial g(X)}{\partial X_{21}} & \dfrac{\partial g(X)}{\partial X_{22}} & \cdots & \dfrac{\partial g(X)}{\partial X_{2L}}\\[1ex]
\vdots & \vdots & & \vdots\\[1ex]
\dfrac{\partial g(X)}{\partial X_{K1}} & \dfrac{\partial g(X)}{\partial X_{K2}} & \cdots & \dfrac{\partial g(X)}{\partial X_{KL}}
\end{bmatrix}
\;\in\; \mathbb{R}^{K\times L}
$$

$$
=\;
\begin{bmatrix}
\nabla_{X(:,1)}\, g(X) & \nabla_{X(:,2)}\, g(X) & \cdots & \nabla_{X(:,L)}\, g(X)
\end{bmatrix}
\;\in\; \mathbb{R}^{K\times 1\times L}
\tag{1643}
$$

where the gradient $\nabla_{X(:,i)}$ is with respect to the $i$th column of $X$. The strange appearance of (1643) in $\mathbb{R}^{K\times 1\times L}$ is meant to suggest a third dimension perpendicular to the page (not a diagonal matrix).
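Definition (1643) can be checked numerically by entrywise finite differences. The sketch below is not from the text: the function $g(X)=\operatorname{tr}(A^{\mathrm T}X)$, the names `g`, `A`, `K`, `L`, and the helper `matrix_gradient` are illustrative assumptions, chosen because this $g$ has the known closed-form gradient $\nabla g(X)=A$.

```python
import numpy as np

# Illustrative check of (1643): for g(X) = trace(A.T @ X) the matrix
# gradient is A, since entry (k,l) of grad g(X) is dg/dX_kl = A_kl.
K, L = 3, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((K, L))
X = rng.standard_normal((K, L))

g = lambda X: np.trace(A.T @ X)

def matrix_gradient(g, X, h=1e-6):
    """Entrywise central differences:
    (grad g(X))_kl ~ (g(X + h*E_kl) - g(X - h*E_kl)) / (2h),
    where E_kl is the (k,l) standard basis matrix."""
    G = np.zeros_like(X)
    for k in range(X.shape[0]):
        for l in range(X.shape[1]):
            E = np.zeros_like(X)
            E[k, l] = h
            G[k, l] = (g(X + E) - g(X - E)) / (2 * h)
    return G

print(np.allclose(matrix_gradient(g, X), A, atol=1e-6))  # True
```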

The second-order gradient has representation

$$
\nabla^2 g(X) \;\triangleq\;
\begin{bmatrix}
\nabla\dfrac{\partial g(X)}{\partial X_{11}} & \nabla\dfrac{\partial g(X)}{\partial X_{12}} & \cdots & \nabla\dfrac{\partial g(X)}{\partial X_{1L}}\\[1ex]
\nabla\dfrac{\partial g(X)}{\partial X_{21}} & \nabla\dfrac{\partial g(X)}{\partial X_{22}} & \cdots & \nabla\dfrac{\partial g(X)}{\partial X_{2L}}\\[1ex]
\vdots & \vdots & & \vdots\\[1ex]
\nabla\dfrac{\partial g(X)}{\partial X_{K1}} & \nabla\dfrac{\partial g(X)}{\partial X_{K2}} & \cdots & \nabla\dfrac{\partial g(X)}{\partial X_{KL}}
\end{bmatrix}
\;\in\; \mathbb{R}^{K\times L\times K\times L}
$$

$$
=\;
\begin{bmatrix}
\nabla\nabla_{X(:,1)}\, g(X) & \nabla\nabla_{X(:,2)}\, g(X) & \cdots & \nabla\nabla_{X(:,L)}\, g(X)
\end{bmatrix}
\;\in\; \mathbb{R}^{K\times 1\times L\times K\times L}
\tag{1644}
$$

where the gradient $\nabla$ is with respect to matrix $X$.
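A minimal sketch of (1644), again not from the text: each block $\nabla\,\partial g(X)/\partial X_{kl}$ is itself a $K\times L$ matrix gradient, so $\nabla^2 g(X)$ can be stored as a 4-D array. For the assumed test function $g(X)=\tfrac12\|X\|_F^2$, whose matrix gradient is $X$, the second-order gradient is the identity tensor $H_{klpq}=\delta_{kp}\delta_{lq}$; the helper `second_order_gradient` is a hypothetical name.

```python
import numpy as np

# Illustrative check of (1644): build the K x L x K x L array whose
# (k,l) block is the matrix gradient of dg/dX_kl, by differencing the
# (closed-form) matrix gradient of g(X) = 0.5 * ||X||_F^2, which is X.
K, L = 2, 3
grad = lambda X: X  # exact matrix gradient of this particular g

def second_order_gradient(grad, X, h=1e-6):
    """H[k, l] ~ (grad(X + h*E_kl) - grad(X - h*E_kl)) / (2h),
    i.e. the gradient of dg/dX_kl with respect to X."""
    H = np.zeros(X.shape + X.shape)
    for k in range(X.shape[0]):
        for l in range(X.shape[1]):
            E = np.zeros_like(X)
            E[k, l] = h
            H[k, l] = (grad(X + E) - grad(X - E)) / (2 * h)
    return H

X = np.random.default_rng(1).standard_normal((K, L))
H = second_order_gradient(grad, X)
I4 = np.einsum('kp,lq->klpq', np.eye(K), np.eye(L))  # delta_kp * delta_lq
print(np.allclose(H, I4))  # True: the identity on R^{K x L}
```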
