v2009.01.01 - Convex Optimization
560 APPENDIX A. LINEAR ALGEBRA

A.4.0.0.5 Lemma. Rank of Schur-form block. [114] [112]
Matrix B ∈ R^{m×n} has rank B ≤ ρ if and only if there exist matrices A ∈ S^m and C ∈ S^n such that

$$\operatorname{rank}\begin{bmatrix} A & 0 \\ 0 & C \end{bmatrix} \le 2\rho
\qquad\text{and}\qquad
G = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \succeq 0 \tag{1428}$$
⋄

Schur-form positive semidefiniteness alone implies rank A ≥ rank B and rank C ≥ rank B. But, even in absence of semidefiniteness, we must always have rank G ≥ rank A, rank B, rank C by fundamental linear algebra.

A.4.1 Determinant

$$G = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \tag{1429}$$

We consider again a matrix G partitioned like (1410), but not necessarily positive (semi)definite, where A and C are symmetric.

When A is invertible,
$$\det G = \det A\,\det(C - B^T A^{-1} B) \tag{1430}$$
When C is invertible,
$$\det G = \det C\,\det(A - B C^{-1} B^T) \tag{1431}$$

When B is full-rank and skinny, C = 0, and A ≽ 0, then [53, §10.1.1]
$$\det G \ne 0 \;\Leftrightarrow\; A + BB^T \succ 0 \tag{1432}$$

When B is a (column) vector, then for all C ∈ R and all A of dimension compatible with G
$$\det G = \det(A)\,C - B^T A_{\mathrm{cof}}^T\, B \tag{1433}$$
while for C ≠ 0
$$\det G = C \det\!\Big(A - \tfrac{1}{C}\, BB^T\Big) \tag{1434}$$
where A_cof is the matrix of cofactors [287, §4] corresponding to A.
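As a numerical sanity check (not part of the book), the determinant identities (1430), (1433), and (1434) can be verified with NumPy on random data. The variable names are illustrative; the cofactor matrix is formed via the adjugate relation adj(A) = A_cof^T = det(A) A^{-1}, which assumes A is invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3

# Symmetric A (diagonally shifted so it is invertible) and symmetric C
A = rng.standard_normal((m, m)); A = A + A.T + m * np.eye(m)
C = rng.standard_normal((n, n)); C = C + C.T
B = rng.standard_normal((m, n))

G = np.block([[A, B], [B.T, C]])

# (1430): det G = det A * det(C - B^T A^{-1} B)
lhs = np.linalg.det(G)
rhs = np.linalg.det(A) * np.linalg.det(C - B.T @ np.linalg.solve(A, B))
assert np.isclose(lhs, rhs)

# (1433)/(1434): column-vector B and scalar C
b = rng.standard_normal((m, 1)); c = 2.0
G2 = np.block([[A, b], [b.T, np.array([[c]])]])
A_cof = np.linalg.det(A) * np.linalg.inv(A).T   # matrix of cofactors of A
lhs2 = np.linalg.det(G2)
assert np.isclose(lhs2, np.linalg.det(A) * c - (b.T @ A_cof.T @ b).item())  # (1433)
assert np.isclose(lhs2, c * np.linalg.det(A - (1.0 / c) * b @ b.T))         # (1434)
```

Note that (1433) and (1434) agree because, by the Schur formula (1430) with scalar C, det G = det(A)(C − B^T A^{-1} B) whenever A is invertible.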
When B is full-rank and fat, A = 0, and C ≽ 0, then
$$\det G \ne 0 \;\Leftrightarrow\; C + B^T B \succ 0 \tag{1435}$$

When B is a row-vector, then for A ≠ 0 and all C of dimension compatible with G
$$\det G = A \det\!\Big(C - \tfrac{1}{A}\, B^T B\Big) \tag{1436}$$
while for all A ∈ R
$$\det G = \det(C)\,A - B\, C_{\mathrm{cof}}^T\, B^T \tag{1437}$$
where C_cof is the matrix of cofactors corresponding to C.

A.5 Eigen decomposition

When a square matrix X ∈ R^{m×m} is diagonalizable, [287, §5.6] then

$$X = S\Lambda S^{-1} = [\,s_1\ \cdots\ s_m\,]\,\Lambda
\begin{bmatrix} w_1^T \\ \vdots \\ w_m^T \end{bmatrix}
= \sum_{i=1}^{m} \lambda_i\, s_i w_i^T \tag{1438}$$

where {s_i ∈ N(X − λ_i I) ⊆ C^m} are l.i. (right-)eigenvectors^{A.12} constituting the columns of S ∈ C^{m×m} defined by

$$XS = S\Lambda \tag{1439}$$

{w_i ∈ N(X^T − λ_i I) ⊆ C^m} are linearly independent left-eigenvectors of X (eigenvectors of X^T) constituting the rows of S^{-1} defined by [176]

$$S^{-1}X = \Lambda S^{-1} \tag{1440}$$

and where {λ_i ∈ C} are eigenvalues (populating diagonal matrix Λ ∈ C^{m×m}) corresponding to both left and right eigenvectors; id est, λ(X) = λ(X^T).

There is no connection between diagonalizability and invertibility of X. [287, §5.2] Diagonalizability is guaranteed by a full set of linearly independent eigenvectors, whereas invertibility is guaranteed by all nonzero eigenvalues.

$$\begin{array}{rcl}
\text{distinct eigenvalues} &\Rightarrow& \text{l.i. eigenvectors} \;\Leftrightarrow\; \text{diagonalizable} \\
\text{not diagonalizable} &\Rightarrow& \text{repeated eigenvalue}
\end{array} \tag{1441}$$

^{A.12} Eigenvectors must, of course, be nonzero. The prefix eigen is from the German; in this context meaning something akin to “characteristic”. [285, p.14]
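The relations (1438)–(1440) can be sketched numerically: NumPy's `eig` returns the eigenvalues λ_i and right eigenvectors (columns of S), and the left eigenvectors w_i^T then fall out as the rows of S^{-1}. A generic random real matrix is diagonalizable with probability one, which this sketch assumes:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
X = rng.standard_normal((m, m))   # generic real matrix: diagonalizable w.p. 1

# Right eigenvectors s_i form the columns of S; eigenvalues populate Lambda
lam, S = np.linalg.eig(X)
Lam = np.diag(lam)

# (1438): X = S Lambda S^{-1}
assert np.allclose(X, S @ Lam @ np.linalg.inv(S))

# (1440): left eigenvectors w_i^T are the rows of S^{-1}: S^{-1} X = Lambda S^{-1}
S_inv = np.linalg.inv(S)
assert np.allclose(S_inv @ X, Lam @ S_inv)

# Rank-one expansion in (1438): X = sum_i lambda_i s_i w_i^T
X_sum = sum(lam[i] * np.outer(S[:, i], S_inv[i, :]) for i in range(m))
assert np.allclose(X, X_sum)

# lambda(X) = lambda(X^T): same eigenvalues for left and right eigenvectors
assert np.allclose(np.sort_complex(lam), np.sort_complex(np.linalg.eigvals(X.T)))
```

Note the eigenvalues and eigenvectors come back complex even though X is real, matching the C^m memberships above.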