

D.2 Tables of gradients and derivatives

[140] [57]

When proving results for symmetric matrices algebraically, it is critical to take gradients ignoring symmetry and to then substitute symmetric entries afterward.
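As a small illustration (not from the original text): for $f(X) = x^{\mathrm T}Xx$, differentiating with every entry of $X$ treated as independent gives

$$\nabla_X\, x^{\mathrm T}Xx = xx^{\mathrm T}$$

into which a symmetric $X$ may be substituted afterward. Were $X_{ij} = X_{ji}$ instead enforced as a single variable during differentiation, each off-diagonal partial would double to $2x_ix_j$ and disagree with the table entries.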

$a,b \in \mathbb{R}^n$, $x,y \in \mathbb{R}^k$, $A,B \in \mathbb{R}^{m\times n}$, $X,Y \in \mathbb{R}^{K\times L}$, $t,\mu \in \mathbb{R}$; $i,j,k,l,K,L,m,n,M,N$ are integers, unless otherwise noted.

$x^\mu$ means $\delta(\delta(x)^\mu)$ for $\mu \in \mathbb{R}$; id est, entrywise vector exponentiation. $\delta$ is the main-diagonal linear operator (1318). $x^0 \triangleq \mathbf{1}$, $X^0 \triangleq I$ if square.
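For instance (a worked example, not from the original text): with $x = [\,2\;\;3\,]^{\mathrm T}$ and $\mu = 2$,

$$\delta(x) = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}, \qquad \delta(x)^2 = \begin{bmatrix} 4 & 0 \\ 0 & 9 \end{bmatrix}, \qquad x^2 = \delta\big(\delta(x)^2\big) = \begin{bmatrix} 4 \\ 9 \end{bmatrix}$$

so the inner $\delta$ lifts the vector to a diagonal matrix, the power acts on the diagonal, and the outer $\delta$ extracts the main diagonal back into a vector.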

$$\frac{d}{dx} \triangleq \begin{bmatrix} \dfrac{d}{dx_1} \\ \vdots \\ \dfrac{d}{dx_k} \end{bmatrix}, \qquad \overset{\to y}{dg}(x)\,, \quad \overset{\to y}{dg^2}(x) \ \text{(directional derivatives, §D.1)}, \quad \log x\,, \quad e^x, \quad |x|\,, \quad \operatorname{sgn}x\,, \quad x/y \ \text{(Hadamard quotient)}, \quad \sqrt{x} \ \text{(entrywise square root)},$$

etcetera, are maps $f : \mathbb{R}^k \to \mathbb{R}^k$ that maintain dimension; e.g., (§A.1.1)

$$\frac{d}{dx}\,x^{-1} \triangleq \nabla_x \mathbf{1}^{\mathrm T}\delta(x)^{-1}\mathbf{1} \tag{1741}$$
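A minimal numeric sanity check of (1741), assuming NumPy (not part of the text): the gradient of $\mathbf{1}^{\mathrm T}\delta(x)^{-1}\mathbf{1} = \sum_i 1/x_i$ is the entrywise map $-x^{-2}$.

```python
import numpy as np

# Sketch (not from the text): verify that the gradient of 1'δ(x)^{-1}1
# equals -x^{-2} entrywise, matching d/dx x^{-1} in (1741).
x = np.array([2.0, 3.0, 5.0])
f = lambda v: np.sum(1.0 / v)                      # 1'δ(v)^{-1}1

eps = 1e-6
grad_fd = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps)
                    for e in np.eye(len(x))])      # central differences
print(np.allclose(grad_fd, -x**-2.0))              # True
```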

For $A$ a scalar or square matrix, we have the Taylor series [68, §3.6]

$$e^A \triangleq \sum_{k=0}^{\infty} \frac{1}{k!}\,A^k \tag{1742}$$
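The partial sums of (1742) converge quickly for modest $\|A\|$; a quick comparison against SciPy's matrix exponential (an illustration assuming SciPy, not part of the text):

```python
import numpy as np
from scipy.linalg import expm

# Sketch (not from the text): truncate the series (1742) and compare
# with SciPy's matrix exponential.
A = np.array([[0.0,  1.0],
              [-1.0, 0.0]])
S = term = np.eye(2)
for k in range(1, 30):          # partial sums of Σ A^k / k!
    term = term @ A / k
    S = S + term
print(np.allclose(S, expm(A)))  # True to floating-point accuracy
```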

Further, [287, §5.4]

$$e^A \succ 0 \quad \forall\, A \in \mathbb{S}^m \tag{1743}$$
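This holds because $A = Q\Lambda Q^{\mathrm T}$ gives $e^A = Q\,e^\Lambda Q^{\mathrm T}$, whose eigenvalues $e^{\lambda_i}$ are all positive. A spot check (illustration only, assuming SciPy; any seed works):

```python
import numpy as np
from scipy.linalg import expm

# Sketch (not from the text): e^A is positive definite for symmetric A.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                               # A ∈ S^4
print(np.all(np.linalg.eigvalsh(expm(A)) > 0))  # True: e^A ≻ 0
```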

For all square $A$ and integer $k$

$$\det^k\! A = \det A^k \tag{1744}$$
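Equivalently, $(\det A)^k = \det(A^k)$, which follows from multiplicativity of the determinant; a quick numeric check (illustration only):

```python
import numpy as np

# Sketch (not from the text): (det A)^k equals det(A^k).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
k = 3
print(np.isclose(np.linalg.det(A)**k,
                 np.linalg.det(np.linalg.matrix_power(A, k))))  # True
```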
