v2007.09.13 - Convex Optimization

570 APPENDIX D. MATRIX CALCULUS

D.2 Tables of gradients and derivatives [115] [50]

When proving results for symmetric matrices algebraically, it is critical to take gradients ignoring symmetry and then to substitute symmetric entries afterward.

a,b ∈ R^n ; x,y ∈ R^k ; A,B ∈ R^{m×n} ; X,Y ∈ R^{K×L} ; t,µ ∈ R ; i,j,k,l,K,L,m,n,M,N are integers, unless otherwise noted.

x^µ means δ(δ(x)^µ) for µ ∈ R ; id est, entrywise vector exponentiation. δ is the main-diagonal linear operator (1216). x^0 ≜ 1, X^0 ≜ I if square.

d/dx ≜ [ d/dx_1 ··· d/dx_k ]^T , →y dg(x) , →y dg²(x) (directional derivatives, §D.1), log x , sgn x , x/y (Hadamard quotient), √x (entrywise square root), etcetera, are maps f : R^k → R^k that maintain dimension; e.g., (§A.1.1)

    d/dx x^{-1} ≜ ∇_x 1^T δ(x)^{-1} 1        (1629)

For A a scalar or matrix, we have the Taylor series [55, §3.6]

    e^A ≜ ∑_{k=0}^∞ (1/k!) A^k        (1630)

Further, [247, §5.4]

    e^A ≻ 0   ∀ A ∈ S^m        (1631)

For all square A and integer k,

    det^k A = det A^k        (1632)
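Identities (1630)–(1632) can be checked numerically. The minimal sketch below (assuming NumPy is available; `expm_taylor` is a hypothetical helper, not part of any library) truncates the series (1630) to approximate e^A, confirms e^A ≻ 0 for a random symmetric A as in (1631), and verifies det^k A = det A^k from (1632) for k = 3:

```python
import numpy as np

def expm_taylor(A, terms=30):
    # Truncated Taylor series (1630): e^A = sum_{k=0}^inf (1/k!) A^k
    E = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])          # A^0 / 0! = I
    for k in range(terms):
        E += term
        term = term @ A / (k + 1)      # next term: A^{k+1} / (k+1)!
    return E

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # symmetric: A in S^4

# (1631): e^A is positive definite for symmetric A
E = expm_taylor(A)
eigs = np.linalg.eigvalsh((E + E.T) / 2)
print(np.all(eigs > 0))                # → True

# (1632): det^k A = det A^k for any square A and integer k
k = 3
print(np.isclose(np.linalg.det(B) ** k,
                 np.linalg.det(np.linalg.matrix_power(B, k))))  # → True
```

Positive definiteness of e^A follows because the eigenvalues of e^A are e^{λ_i} > 0 for the real eigenvalues λ_i of symmetric A; the determinant identity is just multiplicativity of det.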
