v2010.10.26 - Convex Optimization

APPENDIX D. MATRIX CALCULUS — trace (continued)

\frac{d}{dt}\,\mathrm{tr}\,g(X+tY) \;=\; \mathrm{tr}\,\frac{d}{dt}\,g(X+tY) \qquad \text{[203, p.491]}

\frac{d}{dt}\,\mathrm{tr}(X+tY) \;=\; \mathrm{tr}\,Y

\frac{d}{dt}\,\mathrm{tr}^{\,j}(X+tY) \;=\; j\,\mathrm{tr}^{\,j-1}(X+tY)\,\mathrm{tr}\,Y

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^{j}\big) \;=\; j\,\mathrm{tr}\big((X+tY)^{j-1}\,Y\big) \qquad (\forall\, j)

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)\,Y\big) \;=\; \mathrm{tr}\,Y^{2}

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^{k}\,Y\big) \;=\; \frac{d}{dt}\,\mathrm{tr}\big(Y(X+tY)^{k}\big) \;=\; k\,\mathrm{tr}\big((X+tY)^{k-1}\,Y^{2}\big), \qquad k \in \{0,1,2\}

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^{k}\,Y\big) \;=\; \frac{d}{dt}\,\mathrm{tr}\big(Y(X+tY)^{k}\big) \;=\; \mathrm{tr}\Big(\sum_{i=0}^{k-1}(X+tY)^{i}\,Y\,(X+tY)^{k-1-i}\,Y\Big)

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^{-1}\,Y\big) \;=\; -\,\mathrm{tr}\big((X+tY)^{-1}\,Y\,(X+tY)^{-1}\,Y\big)

\frac{d}{dt}\,\mathrm{tr}\big(B^{T}(X+tY)^{-1}A\big) \;=\; -\,\mathrm{tr}\big(B^{T}(X+tY)^{-1}\,Y\,(X+tY)^{-1}A\big)

\frac{d}{dt}\,\mathrm{tr}\big(B^{T}(X+tY)^{-T}A\big) \;=\; -\,\mathrm{tr}\big(B^{T}(X+tY)^{-T}\,Y^{T}(X+tY)^{-T}A\big)

\frac{d}{dt}\,\mathrm{tr}\big(B^{T}(X+tY)^{-k}A\big) \;=\; \ldots, \qquad k > 0

\frac{d}{dt}\,\mathrm{tr}\big(B^{T}(X+tY)^{\mu}A\big) \;=\; \ldots, \qquad -1 \le \mu \le 1,\; X, Y \in \mathbb{S}^{M}_{+}

\frac{d^{2}}{dt^{2}}\,\mathrm{tr}\big(B^{T}(X+tY)^{-1}A\big) \;=\; 2\,\mathrm{tr}\big(B^{T}(X+tY)^{-1}\,Y\,(X+tY)^{-1}\,Y\,(X+tY)^{-1}A\big)

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^{T}A\,(X+tY)\big) \;=\; \mathrm{tr}\big(Y^{T}AX + X^{T}AY + 2t\,Y^{T}AY\big)

\frac{d^{2}}{dt^{2}}\,\mathrm{tr}\big((X+tY)^{T}A\,(X+tY)\big) \;=\; 2\,\mathrm{tr}\big(Y^{T}AY\big)

\frac{d}{dt}\,\mathrm{tr}\Big(\big((X+tY)^{T}A\,(X+tY)\big)^{-1}\Big) \;=\; -\,\mathrm{tr}\Big(\big((X+tY)^{T}A(X+tY)\big)^{-1}\big(Y^{T}AX + X^{T}AY + 2t\,Y^{T}AY\big)\big((X+tY)^{T}A(X+tY)\big)^{-1}\Big)

\frac{d}{dt}\,\mathrm{tr}\big((X+tY)A(X+tY)\big) \;=\; \mathrm{tr}\big(YAX + XAY + 2t\,YAY\big)

\frac{d^{2}}{dt^{2}}\,\mathrm{tr}\big((X+tY)A(X+tY)\big) \;=\; 2\,\mathrm{tr}\big(YAY\big)
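Entries in the trace table can be spot-checked numerically. A minimal sketch (not from the book; the variable names `X`, `Y`, `f` and the random test matrices are arbitrary choices) verifying the identity d/dt tr((X+tY)^{-1} Y) = −tr((X+tY)^{-1} Y (X+tY)^{-1} Y) at t = 0 against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A well-conditioned X so that (X + tY)^{-1} exists near t = 0.
X = rng.standard_normal((n, n)) + n * np.eye(n)
Y = rng.standard_normal((n, n))

def f(t):
    # f(t) = tr((X + tY)^{-1} Y), computed via a linear solve
    return np.trace(np.linalg.solve(X + t * Y, Y))

h = 1e-6
numeric = (f(h) - f(-h)) / (2 * h)       # central difference at t = 0
Xi = np.linalg.inv(X)
analytic = -np.trace(Xi @ Y @ Xi @ Y)    # table entry evaluated at t = 0
assert abs(numeric - analytic) < 1e-4
```

The same finite-difference scaffold works for any d/dt entry in the table; only `f` and the analytic expression change.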

D.2. TABLES OF GRADIENTS AND DERIVATIVES

D.2.4 logarithmic determinant

x \succ 0, \det X > 0 on some neighborhood of X, and \det(X+tY) > 0 on some open interval of t; otherwise, \log(\,\cdot\,) would be discontinuous. [82, p.75]

\frac{d}{dx}\,\log x \;=\; x^{-1}

\frac{d}{dx}\,\log x^{-1} \;=\; -x^{-1}

\frac{d}{dx}\,\log x^{\mu} \;=\; \mu\,x^{-1}

\nabla_X \log\det X \;=\; X^{-T}

\nabla^{2}_{X}\,\log\det(X)_{kl} \;=\; \frac{\partial X^{-T}}{\partial X_{kl}} \;=\; -\big(X^{-1} e_{k} e_{l}^{T} X^{-1}\big)^{T}, \qquad \text{confer (1817)(1864)}

\nabla_X \log\det X^{-1} \;=\; -X^{-T}

\nabla_X \log\det^{\mu} X \;=\; \mu X^{-T}

\nabla_X \log\det X^{\mu} \;=\; \mu X^{-T}

\nabla_X \log\det X^{k} \;=\; \nabla_X \log\det^{k} X \;=\; k X^{-T}

\nabla_X \log\det^{\mu}(X+tY) \;=\; \mu (X+tY)^{-T}

\nabla_x \log(a^{T}x + b) \;=\; a\,\frac{1}{a^{T}x + b}

\nabla_X \log\det(AX+B) \;=\; A^{T}(AX+B)^{-T}

\nabla_X \log\det(I \pm A^{T}XA) \;=\; \pm A(I \pm A^{T}XA)^{-T}A^{T}

\nabla_X \log\det(X+tY)^{k} \;=\; \nabla_X \log\det^{k}(X+tY) \;=\; k(X+tY)^{-T}

\frac{d}{dt}\,\log\det(X+tY) \;=\; \mathrm{tr}\big((X+tY)^{-1}Y\big)

\frac{d^{2}}{dt^{2}}\,\log\det(X+tY) \;=\; -\,\mathrm{tr}\big((X+tY)^{-1}Y(X+tY)^{-1}Y\big)

\frac{d}{dt}\,\log\det(X+tY)^{-1} \;=\; -\,\mathrm{tr}\big((X+tY)^{-1}Y\big)

\frac{d^{2}}{dt^{2}}\,\log\det(X+tY)^{-1} \;=\; \mathrm{tr}\big((X+tY)^{-1}Y(X+tY)^{-1}Y\big)

\frac{d}{dt}\,\log\det\big(\delta(A(x+ty)+a)^{2} + \mu I\big) \;=\; \mathrm{tr}\Big(\big(\delta(A(x+ty)+a)^{2} + \mu I\big)^{-1}\,2\,\delta(A(x+ty)+a)\,\delta(Ay)\Big)
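The directional-derivative entries for \log\det admit the same kind of numerical check. A minimal sketch (not from the book; the symmetric positive definite construction of `X` and the names `f`, `numeric`, `analytic` are illustrative choices) verifying d/dt log det(X+tY) = tr((X+tY)^{-1} Y) at t = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)   # symmetric positive definite, so det X > 0
Y = rng.standard_normal((n, n))
Y = Y + Y.T                   # symmetric direction

def f(t):
    # log det(X + tY); slogdet avoids overflow, valid while det(X + tY) > 0
    sign, logabsdet = np.linalg.slogdet(X + t * Y)
    return logabsdet

h = 1e-6
numeric = (f(h) - f(-h)) / (2 * h)            # central difference at t = 0
analytic = np.trace(np.linalg.solve(X, Y))    # tr(X^{-1} Y), the table entry
assert abs(numeric - analytic) < 1e-5
```

Because X is positive definite, det(X + tY) stays positive on an interval around t = 0, which is exactly the domain condition stated above the table.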

