v2010.10.26 - Convex Optimization

APPENDIX D. MATRIX CALCULUS

D.2 Tables of gradients and derivatives

Results may be numerically proven by Romberg extrapolation. [108] When proving results for symmetric matrices algebraically, it is critical to take gradients ignoring symmetry and to then substitute symmetric entries afterward. [166] [65]

$a, b \in \mathbb{R}^n$, $x, y \in \mathbb{R}^k$, $A, B \in \mathbb{R}^{m \times n}$, $X, Y \in \mathbb{R}^{K \times L}$, $t, \mu \in \mathbb{R}$; $i, j, k, l, K, L, m, n, M, N$ are integers, unless otherwise noted.

$x^\mu$ means $\delta(\delta(x)^\mu)$ for $\mu \in \mathbb{R}$; id est, entrywise vector exponentiation. $\delta$ is the main-diagonal linear operator (1415). $x^0 \triangleq \mathbf{1}$, $X^0 \triangleq I$ if square.

$$\frac{d}{dx} \triangleq \begin{bmatrix} \dfrac{d}{dx_1} \\ \vdots \\ \dfrac{d}{dx_k} \end{bmatrix}, \qquad \overset{\rightarrow y}{dg}(x),\ \overset{\rightarrow y}{dg^2}(x)\ \text{(directional derivatives, §D.1)}$$

$\log x$, $e^x$, $|x|$, $\operatorname{sgn} x$, $x/y$ (Hadamard quotient), $\sqrt{x}$ (entrywise square root), etcetera, are maps $f : \mathbb{R}^k \to \mathbb{R}^k$ that maintain dimension; e.g., (§A.1.1)

$$\frac{d}{dx}\, x^{-1} \triangleq \nabla_x \mathbf{1}^T \delta(x)^{-1} \mathbf{1} \tag{1865}$$

For $A$ a scalar or square matrix, we have the Taylor series [77, §3.6]

$$e^A \triangleq \sum_{k=0}^{\infty} \frac{1}{k!} A^k \tag{1866}$$

Further, [331, §5.4]

$$e^A \succ 0 \quad \forall\, A \in \mathbb{S}^m \tag{1867}$$

For all square $A$ and integer $k$,

$$\det{}^k A = \det A^k \tag{1868}$$
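As a quick numerical sanity check of (1866)–(1868), one can sum the Taylor series directly and inspect eigenvalues and determinants. This is an illustrative sketch assuming NumPy; `expm_taylor` is a naive helper for this check, not a function from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# e^A = sum_{k>=0} A^k / k!  -- truncated Taylor series (1866)
def expm_taylor(A, terms=40):
    term = np.eye(A.shape[0])
    total = term.copy()
    for k in range(1, terms):
        term = term @ A / k     # A^k / k! built incrementally
        total = total + term
    return total

# (1867): e^A is positive definite for symmetric A
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2               # symmetrize
E = expm_taylor(A)
assert np.all(np.linalg.eigvalsh(E) > 0)

# (1868): det^k A = det A^k for square A and integer k
k = 3
assert np.isclose(np.linalg.det(A) ** k,
                  np.linalg.det(np.linalg.matrix_power(A, k)))
```

In production one would use a scaling-and-squaring routine (e.g. `scipy.linalg.expm`) rather than a raw Taylor sum, which loses accuracy for large $\|A\|$.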

D.2.1 algebraic

$$\nabla_x x = \nabla_x x^T = I \in \mathbb{R}^{k \times k} \qquad \nabla_X X = \nabla_X X^T \triangleq I \in \mathbb{R}^{K \times L \times K \times L} \quad \text{(identity)}$$

$$\begin{aligned}
\nabla_x (Ax - b) &= A^T \\
\nabla_x \left(x^T A - b^T\right) &= A \\
\nabla_x (Ax - b)^T (Ax - b) &= 2A^T (Ax - b) \\
\nabla_x^2\, (Ax - b)^T (Ax - b) &= 2A^T A \\
\nabla_x \|Ax - b\| &= A^T (Ax - b) / \|Ax - b\| \\
\nabla_x\, z^T |Ax - b| &= A^T \delta(z) \operatorname{sgn}(Ax - b), \quad z_i \neq 0 \Rightarrow (Ax - b)_i \neq 0 \\
\nabla_x \mathbf{1}^T f(|Ax - b|) &= A^T \delta\!\left(\left.\frac{df(y)}{dy}\right|_{y = |Ax - b|}\right) \operatorname{sgn}(Ax - b) \\
\nabla_x \left(x^T A x + 2 x^T B y + y^T C y\right) &= \left(A + A^T\right) x + 2 B y \\
\nabla_x\, (x + y)^T A (x + y) &= (A + A^T)(x + y) \\
\nabla_x^2 \left(x^T A x + 2 x^T B y + y^T C y\right) &= A + A^T
\end{aligned}$$

$$\begin{aligned}
\nabla_X\, a^T X b &= \nabla_X\, b^T X^T a = a b^T \\
\nabla_X\, a^T X^2 b &= X^T a b^T + a b^T X^T \\
\nabla_X\, a^T X^{-1} b &= -X^{-T} a b^T X^{-T} \\
\nabla_X (X^{-1})_{kl} &= \frac{\partial X^{-1}}{\partial X_{kl}} = -X^{-1} e_k e_l^T X^{-1}, \quad \text{confer (1800)(1864)}
\end{aligned}$$

$$\begin{aligned}
\nabla_x\, a^T x^T x b &= 2 x a^T b & \nabla_X\, a^T X^T X b &= X(a b^T + b a^T) \\
\nabla_x\, a^T x x^T b &= (a b^T + b a^T) x & \nabla_X\, a^T X X^T b &= (a b^T + b a^T) X \\
\nabla_x\, a^T x^T x a &= 2 x a^T a & \nabla_X\, a^T X^T X a &= 2 X a a^T \\
\nabla_x\, a^T x x^T a &= 2 a a^T x & \nabla_X\, a^T X X^T a &= 2 a a^T X \\
\nabla_x\, a^T y x^T b &= b a^T y & \nabla_X\, a^T Y X^T b &= b a^T Y \\
\nabla_x\, a^T y^T x b &= y b^T a & \nabla_X\, a^T Y^T X b &= Y a b^T \\
\nabla_x\, a^T x y^T b &= a b^T y & \nabla_X\, a^T X Y^T b &= a b^T Y \\
\nabla_x\, a^T x^T y b &= y a^T b & \nabla_X\, a^T X^T Y b &= Y b a^T
\end{aligned}$$
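The text's remark that results may be numerically proven by Romberg extrapolation can be sketched as follows: form the central difference $D(h)$ entrywise and apply one Richardson (Romberg-style) extrapolation step, here against the table entry $\nabla_X\, a^T X^{-1} b = -X^{-T} a b^T X^{-T}$. Assumes NumPy; `num_grad` is an illustrative helper, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
X = rng.standard_normal((K, K)) + K * np.eye(K)   # well-conditioned test point
a = rng.standard_normal(K)
b = rng.standard_normal(K)

f = lambda M: a @ np.linalg.inv(M) @ b            # f(X) = a^T X^{-1} b

# Central difference D(h) has O(h^2) error, so one Richardson step,
# (4 D(h/2) - D(h)) / 3, cancels the leading term (O(h^4) remains).
def num_grad(f, X, h=1e-3):
    G = np.zeros_like(X)
    for k in range(X.shape[0]):
        for l in range(X.shape[1]):
            E = np.zeros_like(X)
            E[k, l] = 1.0
            D = lambda s: (f(X + s * E) - f(X - s * E)) / (2 * s)
            G[k, l] = (4 * D(h / 2) - D(h)) / 3
    return G

Xinv = np.linalg.inv(X)
analytic = -Xinv.T @ np.outer(a, b) @ Xinv.T      # table entry
assert np.allclose(num_grad(f, X), analytic, atol=1e-7)
```

The same loop verifies any scalar-valued entry in the table by swapping in the corresponding `f` and analytic gradient.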

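The least-squares entries are easy to confirm in closed form: since $(Ax-b)^T(Ax-b)$ is quadratic, its gradient $2A^T(Ax-b)$ matches a central difference essentially exactly, and the Hessian $2A^TA$ satisfies $\nabla f(x + \Delta x) - \nabla f(x) = (2A^TA)\,\Delta x$ with no remainder. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 5, 3
A = rng.standard_normal((m, k))
b = rng.standard_normal(m)
x = rng.standard_normal(k)

f = lambda x: (A @ x - b) @ (A @ x - b)     # (Ax-b)^T (Ax-b)
grad = 2 * A.T @ (A @ x - b)                # table: gradient
hess = 2 * A.T @ A                          # table: Hessian

# Central difference is exact (up to rounding) for a quadratic.
h = 1e-5
num = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(k)])
assert np.allclose(num, grad, atol=1e-6)

# Gradient is affine in x, so its increment equals Hessian times step.
g = lambda x: 2 * A.T @ (A @ x - b)
dx = rng.standard_normal(k)
assert np.allclose(g(x + dx) - g(x), hess @ dx)
```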
