v2010.10.26 - Convex Optimization
\[
\nabla_x\, g\big(f(x)^T,\,h(x)^T\big) \;=\;
\begin{bmatrix} 1+\varepsilon & 0 \\ 0 & 1+\varepsilon \end{bmatrix}
(A + A^T)\!\left(
\begin{bmatrix} x_1 \\ \varepsilon x_2 \end{bmatrix} +
\begin{bmatrix} \varepsilon x_1 \\ x_2 \end{bmatrix}
\right)
\tag{1795}
\]
from Table D.2.1.
\[
\lim_{\varepsilon\to 0}\, \nabla_x\, g\big(f(x)^T,\,h(x)^T\big) \;=\; (A + A^T)\,x
\tag{1796}
\]
These foregoing formulae remain correct when gradient produces hyperdimensional representation:

D.1.4 First directional derivative

Assume that a differentiable function $g(X) : \mathbb{R}^{K\times L} \to \mathbb{R}^{M\times N}$ has continuous first- and second-order gradients $\nabla g$ and $\nabla^2 g$ over $\operatorname{dom} g$, which is an open set. We seek simple expressions for the first and second directional derivatives in direction $Y \in \mathbb{R}^{K\times L}$: respectively, $\overset{\rightarrow Y}{dg} \in \mathbb{R}^{M\times N}$ and $\overset{\rightarrow Y}{dg^{\,2}} \in \mathbb{R}^{M\times N}$.

Assuming that the limit exists, we may state the partial derivative of the $mn^{\text{th}}$ entry of $g$ with respect to the $kl^{\text{th}}$ entry of $X$:
\[
\frac{\partial g_{mn}(X)}{\partial X_{kl}} \;=\;
\lim_{\Delta t\to 0}\, \frac{g_{mn}(X + \Delta t\, e_k e_l^T) - g_{mn}(X)}{\Delta t} \;\in\; \mathbb{R}
\tag{1797}
\]
where $e_k$ is the $k^{\text{th}}$ standard basis vector in $\mathbb{R}^K$ while $e_l$ is the $l^{\text{th}}$ standard basis vector in $\mathbb{R}^L$. The total number of partial derivatives equals $KLMN$ while the gradient is defined in their terms; the $mn^{\text{th}}$ entry of the gradient is
\[
\nabla g_{mn}(X) \;=\;
\begin{bmatrix}
\frac{\partial g_{mn}(X)}{\partial X_{11}} & \frac{\partial g_{mn}(X)}{\partial X_{12}} & \cdots & \frac{\partial g_{mn}(X)}{\partial X_{1L}} \\[4pt]
\frac{\partial g_{mn}(X)}{\partial X_{21}} & \frac{\partial g_{mn}(X)}{\partial X_{22}} & \cdots & \frac{\partial g_{mn}(X)}{\partial X_{2L}} \\[2pt]
\vdots & \vdots & & \vdots \\[2pt]
\frac{\partial g_{mn}(X)}{\partial X_{K1}} & \frac{\partial g_{mn}(X)}{\partial X_{K2}} & \cdots & \frac{\partial g_{mn}(X)}{\partial X_{KL}}
\end{bmatrix}
\;\in\; \mathbb{R}^{K\times L}
\tag{1798}
\]
while the gradient is a quartix
\[
\nabla g(X) \;=\;
\begin{bmatrix}
\nabla g_{11}(X) & \nabla g_{12}(X) & \cdots & \nabla g_{1N}(X) \\
\nabla g_{21}(X) & \nabla g_{22}(X) & \cdots & \nabla g_{2N}(X) \\
\vdots & \vdots & & \vdots \\
\nabla g_{M1}(X) & \nabla g_{M2}(X) & \cdots & \nabla g_{MN}(X)
\end{bmatrix}
\;\in\; \mathbb{R}^{M\times N\times K\times L}
\tag{1799}
\]
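As a quick numeric sanity check of (1795) and its limit (1796), the short NumPy sketch below (my addition, not part of the text; the matrix A and the point x are arbitrary random choices) evaluates the right-hand side of (1795) for shrinking ε and confirms that it approaches (A + A^T)x.

```python
import numpy as np

# Arbitrary (assumed) test data: any square A and any x will do.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
x = rng.standard_normal(2)          # x = [x1, x2]

def rhs_1795(eps):
    """Evaluate the right-hand side of (1795) for a given epsilon."""
    D = (1 + eps) * np.eye(2)                    # [1+eps, 0; 0, 1+eps]
    f = np.array([x[0], eps * x[1]])             # [x1; eps*x2]
    h = np.array([eps * x[0], x[1]])             # [eps*x1; x2]
    return D @ (A + A.T) @ (f + h)

limit = (A + A.T) @ x                            # right-hand side of (1796)
for eps in (1e-1, 1e-3, 1e-6):
    err = np.linalg.norm(rhs_1795(eps) - limit)
    print(f"eps = {eps:g}   ||(1795) - (A+A')x|| = {err:.2e}")
```

The printed error shrinks roughly in proportion to ε, as expected from the (1+ε)² factor hidden in (1795).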
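The difference quotient (1797) also translates directly into a finite-difference approximation of the quartix gradient (1799). The following sketch (again my own illustration; the test function g(X) = X^T X and the step dt are assumptions, not from the text) enumerates the KLMN partial derivatives exactly as above and stacks them into an M×N×K×L array.

```python
import numpy as np

def quartix_gradient(g, X, dt=1e-6):
    """Finite-difference approximation of the quartix gradient (1799)
    of g : R^{KxL} -> R^{MxN}, one difference quotient (1797) per (k, l)."""
    K, L = X.shape
    M, N = g(X).shape
    grad = np.zeros((M, N, K, L))
    for k in range(K):
        for l in range(L):
            E = np.zeros((K, L))
            E[k, l] = 1.0                        # the dyad e_k e_l^T
            grad[:, :, k, l] = (g(X + dt * E) - g(X)) / dt   # (1797)
    return grad

# Assumed example: g(X) = X^T X maps R^{3x2} to R^{2x2},
# so its gradient lives in R^{2x2x3x2} = R^{MxNxKxL}.
g = lambda X: X.T @ X
X = np.random.default_rng(1).standard_normal((3, 2))
print(quartix_gradient(g, X).shape)              # -> (2, 2, 3, 2)
```

Slicing this array as grad[m, n, :, :] recovers the K×L matrix ∇g_mn(X) of (1798).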
