v2010.10.26 - Convex Optimization

Proof. Figure 156 shows the four fundamental subspaces for the dyad. Linear operator Ψ : R^N → R^M provides a map between vector spaces that remain distinct when M = N ;

    u ∈ R(uv^T)
    u ∈ N(uv^T)  ⇔  v^T u = 0                                            (1615)
    R(uv^T) ∩ N(uv^T) = ∅

B.1.0.1  rank-one modification

For A ∈ R^{M×N}, x ∈ R^N, y ∈ R^M, and y^T A x ≠ 0 [206, §2.1]

    rank( A − (A x y^T A)/(y^T A x) ) = rank(A) − 1                       (1616)

If A ∈ R^{N×N} is any nonsingular matrix and 1 + v^T A^{-1} u ≠ 0, then [221, App.6] [393, §2.3 prob.16] [153, §4.11.2] (Sherman-Morrison)

    (A + u v^T)^{-1} = A^{-1} − (A^{-1} u v^T A^{-1})/(1 + v^T A^{-1} u)  (1617)

B.1.0.2  dyad symmetry

In the specific circumstance that v = u , then uu^T ∈ R^{N×N} is symmetric, rank-one, and positive semidefinite, having exactly N−1 zero eigenvalues. In fact, (Theorem A.3.1.0.7)

    uv^T ≽ 0  ⇔  v = u                                                   (1618)

and the remaining eigenvalue is almost always positive;

    λ = u^T u = tr(uu^T) > 0   unless u = 0                              (1619)

The matrix

    [ Ψ    u ]
    [ u^T  1 ]                                                           (1620)

for example, is rank-1 positive semidefinite if and only if Ψ = uu^T.
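The rank-one modification (1616) and the Sherman-Morrison identity (1617) are easy to spot-check numerically. The following is a minimal sketch assuming NumPy; the matrices and vectors are arbitrary random data chosen only for illustration, not taken from the text.

    # Numerical spot-check of the rank-one modification (1616) and
    # Sherman-Morrison (1617). Assumes NumPy; data is random, illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 5, 4

    # (1616): rank(A - A x y^T A / (y^T A x)) = rank(A) - 1, provided y^T A x != 0
    A = rng.standard_normal((M, N))
    x = rng.standard_normal(N)
    y = rng.standard_normal(M)
    assert abs(y @ A @ x) > 1e-9                                # hypothesis of (1616)
    B = A - np.outer(A @ x, y @ A) / (y @ A @ x)
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # expect 4 and 3

    # (1617): (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u)
    A = rng.standard_normal((N, N))                 # nonsingular with probability 1
    u = rng.standard_normal(N)
    v = rng.standard_normal(N)
    Ainv = np.linalg.inv(A)
    lhs = np.linalg.inv(A + np.outer(u, v))
    rhs = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1 + v @ Ainv @ u)
    print(np.allclose(lhs, rhs))                    # expect True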

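The eigenvalue statement (1619) and the block-matrix characterization (1620) admit the same kind of check. The sketch below assumes NumPy, with an arbitrary illustrative vector u; it confirms that uu^T has N−1 (numerically) zero eigenvalues with the remaining eigenvalue u^T u, and that the block matrix (1620) with Ψ = uu^T is rank-1 and positive semidefinite.

    # Check of (1619) and (1620). Assumes NumPy; u is arbitrary illustrative data.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 4
    u = rng.standard_normal(N)

    eigs = np.linalg.eigvalsh(np.outer(u, u))
    print(np.sort(eigs))     # expect N-1 near-zero eigenvalues and one equal to u^T u
    print(u @ u)             # the nonzero eigenvalue, per (1619)

    # Block matrix (1620) with Psi = uu^T: it equals [u; 1][u; 1]^T, hence rank-1 PSD
    G = np.block([[np.outer(u, u), u[:, None]],
                  [u[None, :],     np.ones((1, 1))]])
    print(np.linalg.matrix_rank(G))                    # expect 1
    print(np.min(np.linalg.eigvalsh(G)) >= -1e-9)      # expect True (PSD to roundoff)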
B.1.1  Dyad independence

Now we consider a sum of dyads like (1606) as encountered in diagonalization and singular value decomposition:

    R( Σ_{i=1}^k s_i w_i^T ) = Σ_{i=1}^k R(s_i w_i^T) = Σ_{i=1}^k R(s_i)   ⇐   w_i ∀ i are l.i.          (1621)

range of summation is the vector sum of ranges.^{B.3} (Theorem B.1.1.1.1) Under the assumption the dyads are linearly independent (l.i.), then vector sums are unique (p.772): for {w_i} l.i. and {s_i} l.i.

    R( Σ_{i=1}^k s_i w_i^T ) = R(s_1 w_1^T) ⊕ · · · ⊕ R(s_k w_k^T) = R(s_1) ⊕ · · · ⊕ R(s_k)          (1622)

B.1.1.0.1 Definition. Linearly independent dyads. [211, p.29 thm.11] [339, p.2]
The set of k dyads

    { s_i w_i^T | i = 1 ... k }          (1623)

where s_i ∈ C^M and w_i ∈ C^N, is said to be linearly independent iff

    rank( SW^T ≜ Σ_{i=1}^k s_i w_i^T ) = k          (1624)

where S ≜ [ s_1 · · · s_k ] ∈ C^{M×k} and W ≜ [ w_1 · · · w_k ] ∈ C^{N×k}.   △

Dyad independence does not preclude existence of a nullspace N(SW^T), as defined, nor does it imply SW^T were full-rank. In absence of assumption of independence, generally, rank SW^T ≤ k . Conversely, any rank-k matrix can be written in the form SW^T by singular value decomposition. (§A.6)

B.1.1.0.2 Theorem. Linearly independent (l.i.) dyads.
Vectors {s_i ∈ C^M, i = 1 ... k} are l.i. and vectors {w_i ∈ C^N, i = 1 ... k} are l.i. if and only if dyads {s_i w_i^T ∈ C^{M×N}, i = 1 ... k} are l.i.   ⋄

B.3 Move of range R to inside summation depends on linear independence of {w_i}.
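As a sanity check on Definition B.1.1.0.1 and Theorem B.1.1.0.2, one can compare rank(SW^T) against k for independent versus dependent factors. This is a minimal sketch assuming NumPy, with random real data standing in for the complex vectors of the definition.

    # Check of (1624) and Theorem B.1.1.0.2: rank(S W^T) = k exactly when the
    # columns {s_i} and {w_i} are each linearly independent. Assumes NumPy.
    import numpy as np

    rng = np.random.default_rng(2)
    M, N, k = 6, 5, 3

    S = rng.standard_normal((M, k))        # columns s_i, l.i. with probability 1
    W = rng.standard_normal((N, k))        # columns w_i, l.i. with probability 1
    print(np.linalg.matrix_rank(S @ W.T))  # expect k = 3: the k dyads are l.i.

    # Make the w_i dependent (w_3 = w_1 + w_2): dyad independence is lost
    W_dep = W.copy()
    W_dep[:, 2] = W_dep[:, 0] + W_dep[:, 1]
    print(np.linalg.matrix_rank(S @ W_dep.T))  # expect 2 < k, per rank(SW^T) <= k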
