v2010.10.26 - Convex Optimization
APPENDIX C. SOME ANALYTICAL OPTIMAL RESULTS

For $A\in\mathbb{S}^N_+$ and $\beta\in\mathbb{R}$

\[
\beta\,\mathrm{tr}A \;=\;
\begin{array}[t]{cl}
\underset{X\in\,\mathbb{S}^N}{\text{maximize}} & \mathrm{tr}(XA)\\
\text{subject to} & X \preceq \beta I
\end{array}
\tag{1689}
\]

But the following statement is numerically stable, preventing an unbounded solution in direction of a $0$ eigenvalue:

\[
\begin{array}{cl}
\underset{X\in\,\mathbb{S}^N}{\text{maximize}} & \mathrm{sgn}(\beta)\,\mathrm{tr}(XA)\\
\text{subject to} & X \preceq |\beta|\,I\\
 & X \succeq -|\beta|\,I
\end{array}
\tag{1690}
\]

where $\beta\,\mathrm{tr}A = \mathrm{tr}(X^\star A)$. If $\beta \geq 0$, then the constraint $X \succeq -|\beta|\,I$ may be replaced with $X \succeq 0$.

For symmetric $A\in\mathbb{S}^N$, its smallest and largest eigenvalue in $\lambda(A)\in\mathbb{R}^N$ are respectively [11, §4.1] [43, §I.6.15] [202, §4.2] [239, §2.1] [240]

\[
\min_i\{\lambda(A)_i\}
\;=\; \inf_{\|x\|=1} x^{\mathrm T}Ax
\;=\;
\begin{array}[t]{cl}
\underset{X\in\,\mathbb{S}^N_+}{\text{minimize}} & \mathrm{tr}(XA)\\
\text{subject to} & \mathrm{tr}X = 1
\end{array}
\;=\;
\begin{array}[t]{cl}
\underset{t\in\mathbb{R}}{\text{maximize}} & t\\
\text{subject to} & A \succeq tI
\end{array}
\tag{1691}
\]

\[
\max_i\{\lambda(A)_i\}
\;=\; \sup_{\|x\|=1} x^{\mathrm T}Ax
\;=\;
\begin{array}[t]{cl}
\underset{X\in\,\mathbb{S}^N_+}{\text{maximize}} & \mathrm{tr}(XA)\\
\text{subject to} & \mathrm{tr}X = 1
\end{array}
\;=\;
\begin{array}[t]{cl}
\underset{t\in\mathbb{R}}{\text{minimize}} & t\\
\text{subject to} & A \preceq tI
\end{array}
\tag{1692}
\]

The largest eigenvalue $\lambda_1$ is always convex in $A\in\mathbb{S}^N$ because, for any particular $x$, the function $x^{\mathrm T}Ax$ is linear in matrix $A$; a supremum of a family of linear functions is convex, as illustrated in Figure 74. So for $A, B\in\mathbb{S}^N$

\[
\lambda_1(A + B) \;\leq\; \lambda_1(A) + \lambda_1(B) \qquad (1504)
\]

Similarly, the smallest eigenvalue $\lambda_N$ of any symmetric matrix is a concave function of its entries:

\[
\lambda_N(A + B) \;\geq\; \lambda_N(A) + \lambda_N(B) \qquad (1504)
\]

For $v_1$ a normalized eigenvector of $A$ corresponding to the largest eigenvalue, and $v_N$ a normalized eigenvector corresponding to the smallest eigenvalue,

\[
v_N = \arg\inf_{\|x\|=1} x^{\mathrm T}Ax \tag{1693}
\]
\[
v_1 = \arg\sup_{\|x\|=1} x^{\mathrm T}Ax \tag{1694}
\]
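The Rayleigh-quotient characterizations (1691)–(1692) and the eigenvalue inequalities above are easy to confirm numerically. The following sketch (ours, not from the text; NumPy is assumed) checks that $x^{\mathrm T}Ax$ over unit vectors stays within $[\lambda_N,\lambda_1]$, and that $\lambda_1$ is subadditive while $\lambda_N$ is superadditive:

```python
# Numerical sanity check of (1691)-(1692) and the eigenvalue
# inequalities; a hypothetical sketch assuming NumPy.
import numpy as np

rng = np.random.default_rng(0)
N = 5
M = rng.standard_normal((N, N))
A = (M + M.T) / 2                      # random symmetric matrix in S^N
M = rng.standard_normal((N, N))
B = (M + M.T) / 2

eigs_A = np.linalg.eigvalsh(A)         # ascending order
lam_min, lam_max = eigs_A[0], eigs_A[-1]

# Rayleigh quotient x'Ax over many random unit vectors never
# escapes [lambda_min, lambda_max]  -- cf. (1691)-(1692)
X = rng.standard_normal((1000, N))
X /= np.linalg.norm(X, axis=1, keepdims=True)
rayleigh = np.einsum('ij,jk,ik->i', X, A, X)   # x_i' A x_i for each row
assert rayleigh.min() >= lam_min - 1e-12
assert rayleigh.max() <= lam_max + 1e-12

# subadditivity of the largest eigenvalue (convexity) and
# superadditivity of the smallest (concavity)
lam1 = lambda S: np.linalg.eigvalsh(S)[-1]
lamN = lambda S: np.linalg.eigvalsh(S)[0]
assert lam1(A + B) <= lam1(A) + lam1(B) + 1e-12
assert lamN(A + B) >= lamN(A) + lamN(B) - 1e-12
print("eigenvalue identities verified")
```

Sampling unit vectors only bounds the infimum and supremum from one side, of course; equality in (1691)–(1692) is attained exactly at the eigenvectors $v_N$ and $v_1$ of (1693)–(1694).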
C.2 TRACE, SINGULAR AND EIGEN VALUES

For $A\in\mathbb{S}^N$ having eigenvalues $\lambda(A)\in\mathbb{R}^N$, consider the unconstrained nonconvex optimization that is a projection on the rank-1 subset (§2.9.2.1, §3.6.0.0.1) of the boundary of positive semidefinite cone $\mathbb{S}^N_+$: defining $\lambda_1 \triangleq \max_i\{\lambda(A)_i\}$ and corresponding eigenvector $v_1$,

\[
\underset{x}{\text{minimize}}\ \|xx^{\mathrm T} - A\|_F^2
= \underset{x}{\text{minimize}}\ \mathrm{tr}\!\left(xx^{\mathrm T}(x^{\mathrm T}x) - 2Axx^{\mathrm T} + A^{\mathrm T}A\right)
= \begin{cases}
\|\lambda(A)\|^2\,, & \lambda_1 \leq 0\\
\|\lambda(A)\|^2 - \lambda_1^2\,, & \lambda_1 > 0
\end{cases}
\tag{1695}
\]

\[
\arg\underset{x}{\text{minimize}}\ \|xx^{\mathrm T} - A\|_F^2
= \begin{cases}
0\,, & \lambda_1 \leq 0\\
v_1\sqrt{\lambda_1}\,, & \lambda_1 > 0
\end{cases}
\tag{1696}
\]

Proof. This is simply the Eckart & Young solution from §7.1.2:

\[
x^\star x^{\star\mathrm T}
= \begin{cases}
0\,, & \lambda_1 \leq 0\\
\lambda_1 v_1 v_1^{\mathrm T}\,, & \lambda_1 > 0
\end{cases}
\tag{1697}
\]

Given nonincreasingly ordered diagonalization $A = Q\Lambda Q^{\mathrm T}$ where $\Lambda = \delta(\lambda(A))$ (§A.5), then (1695) has minimum value

\[
\underset{x}{\text{minimize}}\ \|xx^{\mathrm T} - A\|_F^2
= \begin{cases}
\|Q\Lambda Q^{\mathrm T}\|_F^2 = \|\delta(\Lambda)\|^2\,, & \lambda_1 \leq 0\\[2ex]
\left\| Q\!\left(
\begin{bmatrix}
\lambda_1 & & &\\
 & 0 & &\\
 & & \ddots &\\
 & & & 0
\end{bmatrix} - \Lambda\right)\! Q^{\mathrm T} \right\|_F^2
= \left\|
\begin{bmatrix}
\lambda_1\\ 0\\ \vdots\\ 0
\end{bmatrix} - \delta(\Lambda) \right\|^2\,, & \lambda_1 > 0
\end{cases}
\tag{1698}
\]

C.2.0.0.2 Exercise. Rank-1 approximation.
Given symmetric matrix $A\in\mathbb{S}^N$, prove:

\[
v_1 = \arg
\begin{array}[t]{cl}
\underset{x}{\text{minimize}} & \|xx^{\mathrm T} - A\|_F^2\\
\text{subject to} & \|x\| = 1
\end{array}
\tag{1699}
\]

where $v_1$ is a normalized eigenvector of $A$ corresponding to its largest eigenvalue. What is the objective's optimal value?
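The closed forms (1695)–(1696) can likewise be checked numerically. This sketch (ours, not the text's; the function name is hypothetical and NumPy is assumed) builds $x^\star = v_1\sqrt{\lambda_1}$ from an eigendecomposition and verifies that the attained objective matches (1695):

```python
# Numerical check of the closed forms (1695)-(1696); a hypothetical
# sketch assuming NumPy -- rank1_psd_projection is our own name.
import numpy as np

def rank1_psd_projection(A):
    """Return x* minimizing ||x x' - A||_F^2, per (1696)."""
    eigvals, eigvecs = np.linalg.eigh(A)      # ascending order
    lam1, v1 = eigvals[-1], eigvecs[:, -1]
    if lam1 <= 0:
        return np.zeros(A.shape[0])
    return v1 * np.sqrt(lam1)

rng = np.random.default_rng(1)
N = 4
M = rng.standard_normal((N, N))
A = (M + M.T) / 2                             # random symmetric matrix

x_star = rank1_psd_projection(A)
f_star = np.linalg.norm(np.outer(x_star, x_star) - A, 'fro')**2

# predicted minimum value from (1695):
# ||lambda(A)||^2 when lam1 <= 0, else ||lambda(A)||^2 - lam1^2
lam = np.linalg.eigvalsh(A)
pred = lam @ lam - max(lam[-1], 0.0)**2
assert abs(f_star - pred) < 1e-10

# x* is a global minimizer, so any perturbation does no better
for _ in range(100):
    y = x_star + 0.1 * rng.standard_normal(N)
    assert np.linalg.norm(np.outer(y, y) - A, 'fro')**2 >= f_star - 1e-10
print("rank-1 projection formula verified")
```

Note the contrast with Exercise C.2.0.0.2: there the constraint $\|x\|=1$ forces a unit-norm argument, so the minimizer is $v_1$ itself rather than the scaled $v_1\sqrt{\lambda_1}$ of (1696).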