v2010.10.26 - Convex Optimization
APPENDIX A. LINEAR ALGEBRA

The (right-)eigenvectors of a diagonalizable matrix A ∈ R^{m×m} are linearly independent if and only if the left-eigenvectors are. So matrix A has a representation in terms of its right- and left-eigenvectors; from the diagonalization (1547), assuming 0 eigenvalues are ordered last,

    A = Σ_{i=1}^{m} λ_i s_i w_i^T = Σ_{i=1, λ_i≠0}^{k ≤ m} λ_i s_i w_i^T    (1586)

From the linearly independent dyads theorem (B.1.1.0.2), the dyads {s_i w_i^T} must be independent because each set of eigenvectors is; hence rank A = k, the number of nonzero eigenvalues. Complex eigenvectors and eigenvalues are common for real matrices, and must come in complex conjugate pairs for the summation to remain real. Assume that conjugate pairs of eigenvalues appear in sequence. Given any particular conjugate pair from (1586), we get the partial summation

    λ_i s_i w_i^T + λ_i^* s_i^* w_i^{*T} = 2 re(λ_i s_i w_i^T)
                                         = 2( re s_i re(λ_i w_i^T) − im s_i im(λ_i w_i^T) )    (1587)

where^{A.18} λ_i^* = λ_{i+1}, s_i^* = s_{i+1}, and w_i^* = w_{i+1}. Then (1586) is equivalently written

    A = 2 Σ_{i : λ_{2i}∈C, λ_{2i}≠0} ( re s_{2i} re(λ_{2i} w_{2i}^T) − im s_{2i} im(λ_{2i} w_{2i}^T) ) + Σ_{j : λ_j∈R, λ_j≠0} λ_j s_j w_j^T    (1588)

The summation (1588) shows: A is a linear combination of real and imaginary parts of its (right-)eigenvectors corresponding to nonzero eigenvalues. The k vectors {re s_i ∈ R^m, im s_i ∈ R^m | λ_i ≠ 0, i ∈ {1 ... m}} must therefore span the range of diagonalizable matrix A.

The argument is similar regarding the span of the left-eigenvectors.

A.7.4 trace and matrix product

For X, A ∈ R^{M×N}_+ (39)

    tr(X^T A) = 0 ⇔ X ◦ A = A ◦ X = 0    (1589)

^{A.18} Complex conjugate of w is denoted w^*. Conjugate transpose is denoted w^H = w^{*T}.
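The dyad expansion (1586) and the claim rank A = k can be checked numerically. A minimal NumPy sketch, using a hypothetical diagonalizable test matrix with one zero eigenvalue (the matrix and eigenvalues are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a diagonalizable A = S diag(lam) S^{-1} with the zero eigenvalue
# ordered last, as assumed for (1586).  Columns of S are right-eigenvectors
# s_i; rows of inv(S) are left-eigenvectors w_i^T.
S = rng.standard_normal((3, 3))          # almost surely invertible
lam = np.array([3.0, -1.0, 0.0])         # k = 2 nonzero eigenvalues
A = S @ np.diag(lam) @ np.linalg.inv(S)

W = np.linalg.inv(S)                     # W[i] is the row vector w_i^T

# Sum the dyads lam_i * s_i w_i^T over nonzero eigenvalues only, per (1586)
A_dyads = sum(lam[i] * np.outer(S[:, i], W[i]) for i in range(3) if lam[i] != 0)

assert np.allclose(A, A_dyads)           # expansion (1586) reproduces A
assert np.linalg.matrix_rank(A) == 2     # rank A = k, # nonzero eigenvalues
```

Restricting the sum to nonzero eigenvalues discards nothing, since the omitted dyads carry zero weight; this is precisely why the rank equals k.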
For X, A ∈ S^M_+ [34, §2.6.1 exer. 2.8] [359, §3.1]

    tr(XA) = 0 ⇔ XA = AX = 0    (1590)

Proof. (⇐) Suppose XA = AX = 0. Then tr(XA) = 0 is obvious.
(⇒) Suppose tr(XA) = 0. tr(XA) = tr(√A X √A), whose argument is positive semidefinite by Corollary A.3.1.0.5. Trace of any square matrix is equivalent to the sum of its eigenvalues. Eigenvalues of a positive semidefinite matrix can total 0 if and only if each and every nonnegative eigenvalue is 0. The only positive semidefinite matrix having all 0 eigenvalues resides at the origin; (confer (1614)) id est,

    √A X √A = (√X √A)^T √X √A = 0    (1591)

implying √X √A = 0, which in turn implies √X (√X √A) √A = XA = 0. Arguing similarly yields AX = 0. ∎

Diagonalizable matrices A and X are simultaneously diagonalizable if and only if they are commutative under multiplication; [202, §1.3.12] id est, iff they share a complete set of eigenvectors.

A.7.4.0.1 Example. An equivalence in nonisomorphic spaces.
Identity (1590) leads to an unusual equivalence relating convex geometry to traditional linear algebra: the convex sets, given A ≽ 0,

    {X | ⟨X, A⟩ = 0} ∩ {X ≽ 0} ≡ {X | N(X) ⊇ R(A)} ∩ {X ≽ 0}    (1592)

(one expressed in terms of a hyperplane, the other in terms of nullspace and range) are equivalent only when symmetric matrix A is positive semidefinite. We might apply this equivalence to the geometric center subspace, for example,

    S^M_c = {Y ∈ S^M | Y1 = 0}
          = {Y ∈ S^M | N(Y) ⊇ 1} = {Y ∈ S^M | R(Y) ⊆ N(1^T)}    (1998)

from which we derive (confer (994))

    S^M_c ∩ S^M_+ ≡ {X ≽ 0 | ⟨X, 11^T⟩ = 0}    (1593)
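Identity (1590) is easy to verify numerically, and it is worth seeing that positive semidefiniteness of both matrices is essential. A minimal NumPy sketch, using hypothetical rank-one test matrices built from orthogonal vectors:

```python
import numpy as np

# Check (1590): for X, A in S^M_+, tr(XA) = 0 forces XA = AX = 0.
# Rank-one PSD matrices from orthogonal vectors (illustrative data).
u = np.array([1.0, 2.0, 0.0])
v = np.array([-2.0, 1.0, 3.0])       # u @ v == 0
X = np.outer(u, u)                   # X >= 0 (PSD)
A = np.outer(v, v)                   # A >= 0 (PSD)

assert abs(np.trace(X @ A)) < 1e-12              # tr(XA) = (u^T v)^2 = 0
assert np.allclose(X @ A, 0) and np.allclose(A @ X, 0)

# Drop the PSD hypothesis on X and the implication fails:
# tr(Xn An) = 1 - 1 + 0 = 0, yet Xn An != 0.
Xn = np.diag([1.0, -1.0, 0.0])       # indefinite, not PSD
An = np.diag([1.0, 1.0, 0.0])        # PSD
assert abs(np.trace(Xn @ An)) < 1e-12
assert not np.allclose(Xn @ An, 0)
```

The counterexample mirrors the proof: without X ≽ 0 there is no square root √X making tr(√A X √A) a Gram-matrix trace, so zero trace no longer pins the product at the origin.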