B.1 Rank-One Matrix (Dyad)

… range of summation is the vector sum of ranges.^{B.3} (Theorem B.1.1.1.1) Under the assumption that the dyads are linearly independent (l.i.), the vector sums are unique (p.674): for {w_i} l.i. and {s_i} l.i.,

    \mathcal{R}\Bigl(\sum_{i=1}^{k} s_i w_i^T\Bigr) \;=\; \mathcal{R}(s_1 w_1^T) \oplus \cdots \oplus \mathcal{R}(s_k w_k^T) \;=\; \mathcal{R}(s_1) \oplus \cdots \oplus \mathcal{R}(s_k)    (1401)

B.1.1.0.1 Definition. Linearly independent dyads. [154, p.29, thm.11] [254, p.2]
The set of k dyads

    \{\, s_i w_i^T \mid i = 1 \ldots k \,\}    (1402)

where $s_i \in \mathbb{C}^M$ and $w_i \in \mathbb{C}^N$, is said to be linearly independent iff

    \operatorname{rank}\Bigl( SW^T \triangleq \sum_{i=1}^{k} s_i w_i^T \Bigr) = k    (1403)

where $S \triangleq [\, s_1 \cdots s_k \,] \in \mathbb{C}^{M\times k}$ and $W \triangleq [\, w_1 \cdots w_k \,] \in \mathbb{C}^{N\times k}$.    △

As defined, dyad independence neither precludes existence of a nullspace $\mathcal{N}(SW^T)$ nor implies $SW^T$ is full-rank. In the absence of an assumption of independence, generally, $\operatorname{rank} SW^T \leq k$. Conversely, any rank-$k$ matrix can be written in the form $SW^T$ by singular value decomposition. (§A.6)

B.1.1.0.2 Theorem. Linearly independent (l.i.) dyads.
Vectors $\{s_i \in \mathbb{C}^M,\ i = 1 \ldots k\}$ are l.i. and vectors $\{w_i \in \mathbb{C}^N,\ i = 1 \ldots k\}$ are l.i. if and only if the dyads $\{s_i w_i^T \in \mathbb{C}^{M\times N},\ i = 1 \ldots k\}$ are l.i.    ⋄

Proof. Linear independence of k dyads is identical to definition (1403).
(⇒) Suppose $\{s_i\}$ and $\{w_i\}$ are each linearly independent sets. Invoking Sylvester's rank inequality, [149, §0.4] [298, §2.4]

    \operatorname{rank} S + \operatorname{rank} W - k \;\leq\; \operatorname{rank}(SW^T) \;\leq\; \min\{\operatorname{rank} S,\ \operatorname{rank} W\} \;(\leq k)    (1404)

Because $\operatorname{rank} S = \operatorname{rank} W = k$, (1404) gives $k \leq \operatorname{rank}(SW^T) \leq k$, which implies the dyads are independent.
(⇐) Conversely, suppose $\operatorname{rank}(SW^T) = k$. Then

    k \;\leq\; \min\{\operatorname{rank} S,\ \operatorname{rank} W\} \;\leq\; k    (1405)

implying the vector sets are each independent.    ■

^{B.3} Moving the range $\mathcal{R}$ inside the summation depends on linear independence of $\{w_i\}$.
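As a side illustration (not part of the text), the following minimal NumPy sketch checks Definition B.1.1.0.1 and inequality (1404) numerically. The matrices S and W are random real stand-ins for $[\,s_1 \cdots s_k\,]$ and $[\,w_1 \cdots w_k\,]$; their columns are linearly independent with probability one, and the chosen dimensions M, N, k are arbitrary.

```python
# Minimal numerical sketch of Definition B.1.1.0.1 and Theorem B.1.1.0.2:
# for generically l.i. {s_i} and {w_i}, rank(S W^T) = k, Sylvester's rank
# inequality (1404) is tight, yet S W^T need not be full-rank.
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 5, 7, 3

S = rng.standard_normal((M, k))   # columns s_1 ... s_k, generically l.i.
W = rng.standard_normal((N, k))   # columns w_1 ... w_k, generically l.i.

SWt = S @ W.T                     # sum of the k dyads  s_i w_i^T
rank_SWt = np.linalg.matrix_rank(SWt)
print(rank_SWt == k)              # True: the dyads are linearly independent (1403)

# Sylvester's rank inequality (1404):
# rank S + rank W - k <= rank(S W^T) <= min{rank S, rank W} <= k
rS, rW = np.linalg.matrix_rank(S), np.linalg.matrix_rank(W)
print(rS + rW - k <= rank_SWt <= min(rS, rW) <= k)   # True

# Dyad independence does not imply S W^T is full-rank:
# rank(S W^T) = 3 < min(M, N) = 5, so N(S W^T) is nontrivial.
print(rank_SWt < min(M, N))       # True
```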
