
Appendix D

Matrix calculus

From too much study, and from extreme passion, cometh madnesse.

−Isaac Newton [129, 5]

D.1 Directional derivative, Taylor series

D.1.1 Gradients

Gradient of a differentiable real function $f(x) : \mathbb{R}^K \rightarrow \mathbb{R}$ with respect to its vector argument is defined

$$
\nabla f(x) \;=\; \begin{bmatrix}
\dfrac{\partial f(x)}{\partial x_1}\\[4pt]
\dfrac{\partial f(x)}{\partial x_2}\\[4pt]
\vdots\\[2pt]
\dfrac{\partial f(x)}{\partial x_K}
\end{bmatrix} \in \mathbb{R}^K \tag{1637}
$$

while the second-order gradient of the twice differentiable real function with respect to its vector argument is traditionally called the Hessian;

$$
\nabla^2 f(x) \;=\; \begin{bmatrix}
\dfrac{\partial^2 f(x)}{\partial x_1^2} & \dfrac{\partial^2 f(x)}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f(x)}{\partial x_1\,\partial x_K}\\[6pt]
\dfrac{\partial^2 f(x)}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f(x)}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f(x)}{\partial x_2\,\partial x_K}\\[6pt]
\vdots & \vdots & \ddots & \vdots\\[2pt]
\dfrac{\partial^2 f(x)}{\partial x_K\,\partial x_1} & \dfrac{\partial^2 f(x)}{\partial x_K\,\partial x_2} & \cdots & \dfrac{\partial^2 f(x)}{\partial x_K^2}
\end{bmatrix} \in \mathbb{S}^K \tag{1638}
$$
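As a quick numerical illustration (not from the book), definitions (1637) and (1638) can be approximated by central finite differences. The sketch below is a hedged example: the function names `gradient` and `hessian` and the quadratic test function are my own choices, not the book's. For a quadratic $f(x) = x^{T}\!Ax$ with symmetric $A$, the exact answers are $\nabla f(x) = 2Ax$ and $\nabla^2 f(x) = 2A \in \mathbb{S}^K$, so the approximations can be checked directly.

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient (1637): a K-vector."""
    K = x.size
    g = np.zeros(K)
    for i in range(K):
        e = np.zeros(K); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian (1638): a K x K matrix."""
    K = x.size
    H = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            ei = np.zeros(K); ei[i] = h
            ej = np.zeros(K); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Hypothetical test case: f(x) = x' A x with symmetric A,
# so grad f = 2 A x and the Hessian is the constant matrix 2 A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
f = lambda x: x @ A @ x
x0 = np.array([1.0, -1.0])

g = gradient(f, x0)
H = hessian(f, x0)
assert np.allclose(g, 2 * A @ x0, atol=1e-5)
assert np.allclose(H, 2 * A, atol=1e-3)
assert np.allclose(H, H.T)   # Hessian of a twice differentiable f is symmetric
```

Because a quadratic has no third-order terms, central differences recover its gradient and Hessian essentially exactly; the symmetry check mirrors the membership $\nabla^2 f(x) \in \mathbb{S}^K$ stated in (1638).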

© 2001 Jon Dattorro. CO&EDG version 2009.01.01. All rights reserved.

Citation: Jon Dattorro, Convex Optimization & Euclidean Distance Geometry, Meboo Publishing USA, 2005.

