v2010.10.26 - Convex Optimization
626 APPENDIX A. LINEAR ALGEBRA

A.7.2  0 entry

If a positive semidefinite matrix A = [A_ij] ∈ R^{n×n} has a 0 entry A_ii on its main diagonal, then A_ij + A_ji = 0 ∀ j. [272, §1.3.1]

Any symmetric positive semidefinite matrix having a 0 entry on its main diagonal must be 0 along the entire row and column to which that 0 entry belongs. [159, §4.2.8] [202, §7.1 prob.2]

A.7.3  0 eigenvalues theorem

This theorem is simple, powerful, and widely applicable:

A.7.3.0.1 Theorem. Number of 0 eigenvalues.
For any matrix A ∈ R^{m×n}

    rank(A) + dim N(A) = n        (1583)

by conservation of dimension. [202, §0.4.4]

For any square matrix A ∈ R^{m×m}, number of 0 eigenvalues is at least equal to dim N(A)

    dim N(A) ≤ number of 0 eigenvalues ≤ m        (1584)

while all eigenvectors corresponding to those 0 eigenvalues belong to N(A). [331, §5.1]^A.16

For diagonalizable matrix A (§A.5), the number of 0 eigenvalues is precisely dim N(A) while the corresponding eigenvectors span N(A). Real and imaginary parts of the eigenvectors remaining span R(A).

____
A.16 We take as given the well-known fact that the number of 0 eigenvalues cannot be less than dimension of the nullspace. We offer an example of the converse:

    A = ⎡ 1 0 1 0 ⎤
        ⎢ 0 0 1 0 ⎥
        ⎢ 0 0 0 0 ⎥
        ⎣ 1 0 0 0 ⎦

dim N(A) = 2, λ(A) = [0 0 0 1]^T ; three eigenvectors in the nullspace but only two are independent. The right-hand side of (1584) is tight for nonzero matrices; e.g., (§B.1) dyad uv^T ∈ R^{m×m} has m 0-eigenvalues when u ∈ v^⊥.
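The claims above lend themselves to quick numerical spot-checks. A minimal sketch using numpy (not part of the text; the 2×2 matrix B and the tolerance 1e-6 are illustrative assumptions, the 4×4 matrix A is footnote A.16's example):

```python
import numpy as np

# A.7.2 in contrapositive form: a symmetric matrix with a 0 diagonal entry
# but a nonzero entry elsewhere in that row cannot be positive semidefinite.
B = np.array([[0., 1.],
              [1., 1.]])
print(np.linalg.eigvalsh(B).min() < 0)       # True: B is not PSD

# Footnote A.16 example: dim N(A) = 2 yet three 0 eigenvalues,
# because A is not diagonalizable.
A = np.array([[1., 0., 1., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 0.],
              [1., 0., 0., 0.]])
rank = np.linalg.matrix_rank(A)
dim_null = A.shape[1] - rank                 # conservation of dimension (1583)
n_zero = int(np.sum(np.abs(np.linalg.eigvals(A)) < 1e-6))
print(rank, dim_null, n_zero)                # 2 2 3 : dim N(A) <= count <= m, as in (1584)

# Dyad u v^T with u in v-perp attains the upper bound of (1584):
# all m eigenvalues are 0 although the matrix is nonzero.
u = np.array([1., -1., 0.])
v = np.array([1., 1., 0.])                   # u ⊥ v
print(np.all(np.abs(np.linalg.eigvals(np.outer(u, v))) < 1e-6))  # True
```

The loose tolerance 1e-6 is deliberate: for defective matrices like A and the dyad, computed 0 eigenvalues can deviate from zero on the order of the square root of machine precision.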
A.7. ZEROS 627

(Transpose.) Likewise, for any matrix A ∈ R^{m×n}

    rank(A^T) + dim N(A^T) = m        (1585)

For any square A ∈ R^{m×m}, number of 0 eigenvalues is at least equal to dim N(A^T) = dim N(A) while all left-eigenvectors (eigenvectors of A^T) corresponding to those 0 eigenvalues belong to N(A^T).

For diagonalizable A, number of 0 eigenvalues is precisely dim N(A^T) while the corresponding left-eigenvectors span N(A^T). Real and imaginary parts of the left-eigenvectors remaining span R(A^T).    ⋄

Proof. First we show, for a diagonalizable matrix, the number of 0 eigenvalues is precisely the dimension of its nullspace while the eigenvectors corresponding to those 0 eigenvalues span the nullspace:

Any diagonalizable matrix A ∈ R^{m×m} must possess a complete set of linearly independent eigenvectors. If A is full-rank (invertible), then all m = rank(A) eigenvalues are nonzero. [331, §5.1]

Suppose rank(A) < m. Then dim N(A) = m − rank(A). Thus there is a set of m − rank(A) linearly independent vectors spanning N(A). Each of those can be an eigenvector associated with a 0 eigenvalue because A is diagonalizable ⇔ ∃ m linearly independent eigenvectors. [331, §5.2] Eigenvectors of a real matrix corresponding to 0 eigenvalues must be real.^A.17 Thus A has at least m − rank(A) eigenvalues equal to 0.

Now suppose A has more than m − rank(A) eigenvalues equal to 0. Then there are more than m − rank(A) linearly independent eigenvectors associated with 0 eigenvalues, and each of those eigenvectors must be in N(A). Thus there are more than m − rank(A) linearly independent vectors in N(A); a contradiction.

Diagonalizable A therefore has rank(A) nonzero eigenvalues and exactly m − rank(A) eigenvalues equal to 0 whose corresponding eigenvectors span N(A).

By similar argument, the left-eigenvectors corresponding to 0 eigenvalues span N(A^T).

Next we show, when A is diagonalizable, the real and imaginary parts of its eigenvectors (corresponding to nonzero eigenvalues) span R(A):

____
A.17 Proof. Let ∗ denote complex conjugation. Suppose A = A^∗ and A s_i = 0. Then s_i = s_i^∗ ⇒ A s_i = A s_i^∗ ⇒ A s_i^∗ = 0. Conversely, A s_i^∗ = 0 ⇒ A s_i = A s_i^∗ ⇒ s_i = s_i^∗.
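The diagonalizable case proved above can likewise be spot-checked numerically. A minimal sketch assuming numpy, using a symmetric (hence diagonalizable) matrix of rank 2 built as G^T G (the matrix G and the tolerance are illustrative choices, not from the text):

```python
import numpy as np

# For diagonalizable A the bound (1584) is tight:
# number of 0 eigenvalues equals dim N(A) = m - rank(A).
G = np.array([[1., 0., 2.],
              [0., 1., 1.]])
A = G.T @ G                                   # 3x3 symmetric, rank 2, so diagonalizable
m = A.shape[0]
rank = np.linalg.matrix_rank(A)

w, V = np.linalg.eigh(A)                      # eigh: symmetric eigendecomposition
n_zero = int(np.sum(np.abs(w) < 1e-9))
print(n_zero == m - rank)                     # True: exactly dim N(A) zero eigenvalues

# The eigenvectors associated with 0 eigenvalues lie in (and here span) N(A).
null_vecs = V[:, np.abs(w) < 1e-9]
print(np.allclose(A @ null_vecs, 0))          # True
```

A tight tolerance (1e-9) suffices here, unlike in the defective example of footnote A.16, because eigenvalues of a symmetric matrix are perfectly conditioned.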