APPENDIX A. LINEAR ALGEBRA

A.4.0.0.5 Lemma. Rank of Schur-form block. [114] [112]
Matrix B ∈ R^{m×n} has rank B ≤ ρ if and only if there exist matrices A ∈ S^m and C ∈ S^n such that

\[
\operatorname{rank}\begin{bmatrix} A & 0 \\ 0^{T} & C \end{bmatrix} \le 2\rho
\qquad\text{and}\qquad
G = \begin{bmatrix} A & B \\ B^{T} & C \end{bmatrix} \succeq 0
\tag{1428}
\]
⋄

Schur-form positive semidefiniteness alone implies rank A ≥ rank B and rank C ≥ rank B. But, even in absence of semidefiniteness, we must always have rank G ≥ rank A, rank B, rank C by fundamental linear algebra.

A.4.1 Determinant

\[
G = \begin{bmatrix} A & B \\ B^{T} & C \end{bmatrix}
\tag{1429}
\]

We consider again a matrix G partitioned like (1410), but not necessarily positive (semi)definite, where A and C are symmetric.

When A is invertible,
\[
\det G = \det A\,\det(C - B^{T}A^{-1}B)
\tag{1430}
\]
When C is invertible,
\[
\det G = \det C\,\det(A - BC^{-1}B^{T})
\tag{1431}
\]

When B is full-rank and skinny, C = 0, and A ≽ 0, then [53, §10.1.1]
\[
\det G \ne 0 \;\Leftrightarrow\; A + BB^{T} \succ 0
\tag{1432}
\]

When B is a (column) vector, then for all C ∈ R and all A of dimension compatible with G
\[
\det G = \det(A)\,C - B^{T}A_{\mathrm{cof}}^{T}B
\tag{1433}
\]
while for C ≠ 0
\[
\det G = C\,\det\!\Bigl(A - \tfrac{1}{C}\,BB^{T}\Bigr)
\tag{1434}
\]
where A_cof is the matrix of cofactors [287, §4] corresponding to A.
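As a quick numerical sanity check of the Schur-complement factorizations (1430) and (1431), the following NumPy sketch (illustrative only, not part of the text) builds a random symmetric partition with invertible A and C and compares both formulas against det G computed directly.

```python
import numpy as np

# Check the Schur-complement determinant identities (1430)-(1431):
#   det G = det A * det(C - B^T A^{-1} B)   when A is invertible
#   det G = det C * det(A - B C^{-1} B^T)   when C is invertible
rng = np.random.default_rng(0)
m, n = 4, 3

# Random symmetric A (m x m) and C (n x n), diagonally shifted so both are invertible.
A = rng.standard_normal((m, m)); A = (A + A.T) / 2 + m * np.eye(m)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2 + n * np.eye(n)
B = rng.standard_normal((m, n))

G = np.block([[A, B], [B.T, C]])        # partitioned matrix (1429)

det_G = np.linalg.det(G)
via_A = np.linalg.det(A) * np.linalg.det(C - B.T @ np.linalg.solve(A, B))
via_C = np.linalg.det(C) * np.linalg.det(A - B @ np.linalg.solve(C, B.T))

# All three values agree to rounding error.
assert np.isclose(det_G, via_A) and np.isclose(det_G, via_C)
print(det_G, via_A, via_C)
```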

When B is full-rank and fat, A = 0, and C ≽ 0, then
\[
\det G \ne 0 \;\Leftrightarrow\; C + B^{T}B \succ 0
\tag{1435}
\]

When B is a row-vector, then for A ≠ 0 and all C of dimension compatible with G
\[
\det G = A\,\det\!\Bigl(C - \tfrac{1}{A}\,B^{T}B\Bigr)
\tag{1436}
\]
while for all A ∈ R
\[
\det G = \det(C)\,A - BC_{\mathrm{cof}}^{T}B^{T}
\tag{1437}
\]
where C_cof is the matrix of cofactors corresponding to C.

A.5 Eigen decomposition

When a square matrix X ∈ R^{m×m} is diagonalizable, [287, §5.6] then
\[
X = S\Lambda S^{-1}
  = [\,s_{1}\,\cdots\,s_{m}\,]\,\Lambda
    \begin{bmatrix} w_{1}^{T} \\ \vdots \\ w_{m}^{T} \end{bmatrix}
  = \sum_{i=1}^{m} \lambda_{i}\, s_{i} w_{i}^{T}
\tag{1438}
\]
where {s_i ∈ N(X − λ_i I) ⊆ C^m} are linearly independent (right-)eigenvectors^{A.12} constituting the columns of S ∈ C^{m×m} defined by
\[
XS = S\Lambda
\tag{1439}
\]
{w_i ∈ N(X^T − λ_i I) ⊆ C^m} are linearly independent left-eigenvectors of X (eigenvectors of X^T) constituting the rows of S^{-1} defined by [176]
\[
S^{-1}X = \Lambda S^{-1}
\tag{1440}
\]
and where {λ_i ∈ C} are eigenvalues (populating diagonal matrix Λ ∈ C^{m×m}) corresponding to both left and right eigenvectors; id est, λ(X) = λ(X^T).

There is no connection between diagonalizability and invertibility of X. [287, §5.2] Diagonalizability is guaranteed by a full set of linearly independent eigenvectors, whereas invertibility is guaranteed by all nonzero eigenvalues.

    distinct eigenvalues ⇒ l.i. eigenvectors ⇔ diagonalizable
    not diagonalizable ⇒ repeated eigenvalue                                  (1441)

A.12 Eigenvectors must, of course, be nonzero. The prefix eigen is from the German; in this context meaning something akin to “characteristic”. [285, p.14]
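The relations (1438)–(1440) are easy to exercise numerically. The sketch below (illustrative, not from the text) uses numpy.linalg.eig, which returns the eigenvalues λ_i together with the right eigenvectors s_i as columns of S; the rows of S^{-1} then serve as the left eigenvectors w_i^T, and X and X^T share the same spectrum.

```python
import numpy as np

# Numerical illustration of (1438)-(1440): right eigenvectors are the columns
# of S, left eigenvectors (eigenvectors of X^T) are the rows of S^{-1}, and
# both share the eigenvalues lambda(X) = lambda(X^T).
rng = np.random.default_rng(1)
m = 4
X = rng.standard_normal((m, m))        # a generic real matrix is diagonalizable

lam, S = np.linalg.eig(X)              # columns of S: right eigenvectors s_i
Lam = np.diag(lam)
S_inv = np.linalg.inv(S)               # rows of S^{-1}: left eigenvectors w_i^T

assert np.allclose(X, S @ Lam @ S_inv)        # X = S Lambda S^{-1}        (1438)
assert np.allclose(X @ S, S @ Lam)            # X S = S Lambda             (1439)
assert np.allclose(S_inv @ X, Lam @ S_inv)    # S^{-1} X = Lambda S^{-1}   (1440)

# Dyad expansion: X = sum_i lambda_i s_i w_i^T
X_sum = sum(lam[i] * np.outer(S[:, i], S_inv[i, :]) for i in range(m))
assert np.allclose(X, X_sum)

# Same spectrum for X and X^T (eigenvalues may be returned in a different order).
assert np.allclose(np.sort_complex(lam), np.sort_complex(np.linalg.eigvals(X.T)))
```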

