v2010.10.26 - Convex Optimization
694 APPENDIX D. MATRIX CALCULUS

trace continued

$\frac{d}{dt}\,\mathrm{tr}\, g(X+tY) = \mathrm{tr}\,\frac{d}{dt}\, g(X+tY)$   [203, p.491]

$\frac{d}{dt}\,\mathrm{tr}(X+tY) = \mathrm{tr}\, Y$

$\frac{d}{dt}\,\mathrm{tr}^j(X+tY) = j\,\mathrm{tr}^{j-1}(X+tY)\,\mathrm{tr}\, Y$

$\frac{d}{dt}\,\mathrm{tr}(X+tY)^j = j\,\mathrm{tr}\big((X+tY)^{j-1}\, Y\big) \quad (\forall\, j)$

$\frac{d}{dt}\,\mathrm{tr}\big((X+tY)\,Y\big) = \mathrm{tr}\, Y^2$

$\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^k\, Y\big) = \frac{d}{dt}\,\mathrm{tr}\big(Y(X+tY)^k\big) = k\,\mathrm{tr}\big((X+tY)^{k-1}\, Y^2\big), \quad k \in \{0,1,2\}$

$\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^k\, Y\big) = \frac{d}{dt}\,\mathrm{tr}\big(Y(X+tY)^k\big) = \mathrm{tr}\sum_{i=0}^{k-1} (X+tY)^i\, Y\,(X+tY)^{k-1-i}\, Y$

$\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^{-1}\, Y\big) = -\,\mathrm{tr}\big((X+tY)^{-1}\, Y\,(X+tY)^{-1}\, Y\big)$

$\frac{d}{dt}\,\mathrm{tr}\big(B^T (X+tY)^{-1} A\big) = -\,\mathrm{tr}\big(B^T (X+tY)^{-1}\, Y\,(X+tY)^{-1} A\big)$

$\frac{d}{dt}\,\mathrm{tr}\big(B^T (X+tY)^{-T} A\big) = -\,\mathrm{tr}\big(B^T (X+tY)^{-T}\, Y^T (X+tY)^{-T} A\big)$

$\frac{d}{dt}\,\mathrm{tr}\big(B^T (X+tY)^{-k} A\big) = \ldots\,, \quad k > 0$

$\frac{d}{dt}\,\mathrm{tr}\big(B^T (X+tY)^{\mu} A\big) = \ldots\,, \quad -1 \le \mu \le 1,\ X, Y \in \mathbb{S}^M_+$

$\frac{d^2}{dt^2}\,\mathrm{tr}\big(B^T (X+tY)^{-1} A\big) = 2\,\mathrm{tr}\big(B^T (X+tY)^{-1}\, Y\,(X+tY)^{-1}\, Y\,(X+tY)^{-1} A\big)$

$\frac{d}{dt}\,\mathrm{tr}\big((X+tY)^T A\,(X+tY)\big) = \mathrm{tr}\big(Y^T A X + X^T A Y + 2t\, Y^T A Y\big)$

$\frac{d^2}{dt^2}\,\mathrm{tr}\big((X+tY)^T A\,(X+tY)\big) = 2\,\mathrm{tr}\big(Y^T A Y\big)$

$\frac{d}{dt}\,\mathrm{tr}\Big(\big((X+tY)^T A\,(X+tY)\big)^{-1}\Big) = -\,\mathrm{tr}\Big(\big((X+tY)^T A\,(X+tY)\big)^{-1} \big(Y^T A X + X^T A Y + 2t\, Y^T A Y\big) \big((X+tY)^T A\,(X+tY)\big)^{-1}\Big)$

$\frac{d}{dt}\,\mathrm{tr}\big((X+tY)\,A\,(X+tY)\big) = \mathrm{tr}\big(YAX + XAY + 2t\, YAY\big)$

$\frac{d^2}{dt^2}\,\mathrm{tr}\big((X+tY)\,A\,(X+tY)\big) = 2\,\mathrm{tr}(YAY)$
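The table entries can be spot-checked numerically. The sketch below, a hypothetical verification not part of the book, compares a central finite difference of $f(t) = \mathrm{tr}\big((X+tY)^{-1} Y\big)$ at $t=0$ against the closed-form entry $-\,\mathrm{tr}\big(X^{-1} Y X^{-1} Y\big)$, using an assumed positive definite $X$ and symmetric $Y$ so the inverse exists near $t=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Assumed test data: X positive definite, Y symmetric (so X + tY stays invertible near t = 0).
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n))
Y = (B + B.T) / 2

def f(t):
    # f(t) = tr((X + tY)^{-1} Y), computed via a linear solve rather than an explicit inverse
    return np.trace(np.linalg.solve(X + t * Y, Y))

h = 1e-6
numeric = (f(h) - f(-h)) / (2 * h)      # central difference approximation of f'(0)
Xi = np.linalg.inv(X)
analytic = -np.trace(Xi @ Y @ Xi @ Y)   # table entry evaluated at t = 0
assert abs(numeric - analytic) < 1e-5
```

The same pattern (finite difference versus table entry) applies to any identity in the table that is differentiable at the chosen point.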
D.2. TABLES OF GRADIENTS AND DERIVATIVES 695

D.2.4 logarithmic determinant

$x \succ 0$, $\det X > 0$ on some neighborhood of $X$, and $\det(X+tY) > 0$ on some open interval of $t$; otherwise, $\log(\,\cdot\,)$ would be discontinuous. [82, p.75]

$\frac{d}{dx}\log x = x^{-1}$

$\frac{d}{dx}\log x^{-1} = -x^{-1}$

$\frac{d}{dx}\log x^{\mu} = \mu\, x^{-1}$

$\nabla_X \log\det X = X^{-T}$

$\nabla^2_X \log\det(X)_{kl} = \frac{\partial X^{-T}}{\partial X_{kl}} = -\big(X^{-1} e_k e_l^T X^{-1}\big)^T$,   confer (1817)(1864)

$\nabla_X \log\det X^{-1} = -X^{-T}$

$\nabla_X \log\det^{\mu} X = \mu\, X^{-T}$

$\nabla_X \log\det X^{\mu} = \mu\, X^{-T}$

$\nabla_X \log\det X^{k} = \nabla_X \log\det^{k} X = k\, X^{-T}$

$\nabla_X \log\det^{\mu}(X+tY) = \mu\,(X+tY)^{-T}$

$\nabla_x \log(a^T x + b) = \frac{1}{a^T x + b}\, a$

$\nabla_X \log\det(AX+B) = A^T (AX+B)^{-T}$

$\nabla_X \log\det(I \pm A^T X A) = \pm A\,(I \pm A^T X A)^{-T} A^T$

$\nabla_X \log\det(X+tY)^{k} = \nabla_X \log\det^{k}(X+tY) = k\,(X+tY)^{-T}$

$\frac{d}{dt}\log\det(X+tY) = \mathrm{tr}\big((X+tY)^{-1}\, Y\big)$

$\frac{d^2}{dt^2}\log\det(X+tY) = -\,\mathrm{tr}\big((X+tY)^{-1}\, Y\,(X+tY)^{-1}\, Y\big)$

$\frac{d}{dt}\log\det(X+tY)^{-1} = -\,\mathrm{tr}\big((X+tY)^{-1}\, Y\big)$

$\frac{d^2}{dt^2}\log\det(X+tY)^{-1} = \mathrm{tr}\big((X+tY)^{-1}\, Y\,(X+tY)^{-1}\, Y\big)$

$\frac{d}{dt}\log\det\big(\delta(A(x+ty)+a)^2 + \mu I\big) = \mathrm{tr}\Big(\big(\delta(A(x+ty)+a)^2 + \mu I\big)^{-1}\, 2\,\delta(A(x+ty)+a)\,\delta(Ay)\Big)$
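The first and second $t$-derivatives of $\log\det(X+tY)$ can likewise be checked numerically. The sketch below, a hypothetical test not from the book, builds an assumed $X \succ 0$ and symmetric $Y$ (so $\det(X+tY) > 0$ near $t=0$, as the table requires) and compares finite differences of $g(t) = \log\det(X+tY)$ at $t=0$ against $\mathrm{tr}(X^{-1}Y)$ and $-\,\mathrm{tr}(X^{-1}Y X^{-1}Y)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Assumed test data: X positive definite, Y symmetric, so det(X + tY) > 0 near t = 0.
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n))
Y = (B + B.T) / 2

def g(t):
    # g(t) = log det(X + tY), via slogdet for numerical stability
    return np.linalg.slogdet(X + t * Y)[1]

h = 1e-4
d1_num = (g(h) - g(-h)) / (2 * h)             # central difference, first derivative
d2_num = (g(h) - 2 * g(0.0) + g(-h)) / h**2   # central difference, second derivative
W = np.linalg.solve(X, Y)                     # W = X^{-1} Y
d1 = np.trace(W)                              # table: tr((X+tY)^{-1} Y) at t = 0
d2 = -np.trace(W @ W)                         # table: -tr((X+tY)^{-1} Y (X+tY)^{-1} Y)
assert abs(d1_num - d1) < 1e-6
assert abs(d2_num - d2) < 1e-4
```

The negative second derivative reflects concavity of $\log\det$ on the positive definite cone, consistent with the sign pattern in the table.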