v2010.10.26 - Convex Optimization
616 APPENDIX A. LINEAR ALGEBRA

When B is full-rank and skinny, C = 0, and A ≽ 0, then [61, §10.1.1]

$$\det G \neq 0 \;\Leftrightarrow\; A + BB^T \succ 0 \tag{1541}$$

When B is a (column) vector, then for all C ∈ ℝ and all A of dimension compatible with G

$$\det G \;=\; \det(A)\,C - B^T A_{\mathrm{cof}}^T B \tag{1542}$$

while for C ≠ 0

$$\det G \;=\; C \det\!\left(A - \tfrac{1}{C}\, BB^T\right) \tag{1543}$$

where $A_{\mathrm{cof}}$ is the matrix of cofactors [331, §4] corresponding to A.

When B is full-rank and fat, A = 0, and C ≽ 0, then

$$\det G \neq 0 \;\Leftrightarrow\; C + B^T B \succ 0 \tag{1544}$$

When B is a row vector, then for A ≠ 0 and all C of dimension compatible with G

$$\det G \;=\; A \det\!\left(C - \tfrac{1}{A}\, B^T B\right) \tag{1545}$$

while for all A ∈ ℝ

$$\det G \;=\; \det(C)\,A - B\, C_{\mathrm{cof}}^T B^T \tag{1546}$$

where $C_{\mathrm{cof}}$ is the matrix of cofactors corresponding to C.

A.5 Eigenvalue decomposition

All square matrices have associated eigenvalues λ and eigenvectors; for a nonsquare matrix, Ax = λx is dimensionally impossible. Eigenvectors must be nonzero. The prefix eigen is from the German; in this context it means something akin to "characteristic". [328, p.14]

When a square matrix X ∈ ℝ^{m×m} is diagonalizable, [331, §5.6] then

$$X \;=\; S\Lambda S^{-1} \;=\; [\,s_1 \,\cdots\, s_m\,]\,\Lambda \begin{bmatrix} w_1^T \\ \vdots \\ w_m^T \end{bmatrix} \;=\; \sum_{i=1}^m \lambda_i\, s_i w_i^T \tag{1547}$$
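The identities above are easy to check numerically. The sketch below (not from the book; matrix sizes, the random data, and all variable names are illustrative assumptions) verifies (1542) and (1543) for a column vector B and scalar C, using that the transposed cofactor matrix $A_{\mathrm{cof}}^T$ equals the adjugate $\det(A)\,A^{-1}$ for invertible A, and then verifies the decomposition (1547) by rebuilding X from its rank-one terms $\lambda_i s_i w_i^T$.

```python
# Numerical sanity check of (1542), (1543), and (1547) with NumPy.
import numpy as np

rng = np.random.default_rng(0)
m = 4

# G = [[A, B], [B^T, C]] with B a column vector and scalar C != 0.
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, 1))
C = 2.5
G = np.block([[A, B], [B.T, np.array([[C]])]])
lhs = np.linalg.det(G)

# (1543): det G = C * det(A - (1/C) B B^T)
rhs_1543 = C * np.linalg.det(A - (1.0 / C) * (B @ B.T))
assert np.isclose(lhs, rhs_1543)

# (1542): det G = det(A) C - B^T A_cof^T B, using
# A_cof^T = adj(A) = det(A) * inv(A) for invertible A.
adjA = np.linalg.det(A) * np.linalg.inv(A)
rhs_1542 = np.linalg.det(A) * C - (B.T @ adjA @ B).item()
assert np.isclose(lhs, rhs_1542)

# (1547): X = S Lambda S^{-1} = sum_i lambda_i s_i w_i^T, where the
# rows w_i^T of S^{-1} are the (left-eigenvector) dual to columns s_i.
X = rng.standard_normal((m, m))   # a generic matrix is diagonalizable
lam, S = np.linalg.eig(X)
W = np.linalg.inv(S)              # rows of W are w_i^T
X_sum = sum(lam[i] * np.outer(S[:, i], W[i, :]) for i in range(m))
assert np.allclose(X, X_sum)      # imaginary parts cancel for real X
```

Note that a real X may have complex eigenvalues; the rank-one sum is then complex-valued term by term, but the imaginary contributions cancel so the total still equals X.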
