v2009.01.01 - Convex Optimization


582 APPENDIX B. SIMPLE MATRICES

B.1.1.1 Biorthogonality condition, Range and Nullspace of Sum

Dyads characterized by a biorthogonality condition $W^{\mathrm T}S = I$ are independent; id est, for $S\in\mathbb{C}^{M\times k}$ and $W\in\mathbb{C}^{N\times k}$, if $W^{\mathrm T}S = I$ then $\operatorname{rank}(SW^{\mathrm T}) = k$ by the linearly independent dyads theorem, because (confer §E.1.1)

$$W^{\mathrm T}S = I \;\Rightarrow\; \operatorname{rank} S = \operatorname{rank} W = k \le M = N \tag{1512}$$

To see that, we need only show: $\mathcal N(S)=0 \;\Leftrightarrow\; \exists\,B\;\; BS = I$.^{B.4}

(⇐) Assume $BS = I$. Then $\mathcal N(BS) = 0 = \{x \mid BSx = 0\} \supseteq \mathcal N(S)$. (1493)

(⇒) If $\mathcal N(S) = 0$ then $S$ must be full-rank skinny-or-square.

$$\therefore\;\exists\,A,B,C \quad \begin{bmatrix} B \\ C \end{bmatrix}\,[\,S\;\;A\,] = I \;\;\text{(id est, } [\,S\;\;A\,] \text{ is invertible)} \;\Rightarrow\; BS = I$$

Left inverse $B$ is given as $W^{\mathrm T}$ here. Because of reciprocity with $S$, it immediately follows: $\mathcal N(W) = 0 \;\Leftrightarrow\; \exists\,S\;\; S^{\mathrm T}W = I$.

Dyads produced by diagonalization, for example, are independent because of their inherent biorthogonality. (§A.5.1) The converse is generally false; id est, linearly independent dyads are not necessarily biorthogonal.

B.1.1.1.1 Theorem. Nullspace and range of dyad sum.
Given a sum of dyads represented by $SW^{\mathrm T}$, where $S\in\mathbb{C}^{M\times k}$ and $W\in\mathbb{C}^{N\times k}$,

$$\begin{aligned} \mathcal N(SW^{\mathrm T}) &= \mathcal N(W^{\mathrm T}) \quad&\Leftarrow\quad \exists\,B\;\; BS = I \\ \mathcal R(SW^{\mathrm T}) &= \mathcal R(S) \quad&\Leftarrow\quad \exists\,Z\;\; W^{\mathrm T}Z = I \end{aligned} \tag{1513}$$
⋄

Proof. (⇒) $\mathcal N(SW^{\mathrm T}) \supseteq \mathcal N(W^{\mathrm T})$ and $\mathcal R(SW^{\mathrm T}) \subseteq \mathcal R(S)$ are obvious.
(⇐) Assume the existence of a left inverse $B\in\mathbb{R}^{k\times N}$ and a right inverse $Z\in\mathbb{R}^{N\times k}$.^{B.5}

$$\mathcal N(SW^{\mathrm T}) = \{x \mid SW^{\mathrm T}x = 0\} \subseteq \{x \mid BSW^{\mathrm T}x = 0\} = \mathcal N(W^{\mathrm T}) \tag{1514}$$

$$\mathcal R(SW^{\mathrm T}) = \{SW^{\mathrm T}x \mid x\in\mathbb{R}^N\} \supseteq \{SW^{\mathrm T}Zy \mid Zy\in\mathbb{R}^N\} = \mathcal R(S) \tag{1515}$$
∎

^{B.4} Left inverse is not unique, in general.
^{B.5} By counterexample, the theorem's converse cannot be true; e.g., $S = W = [\,1\;\;0\,]$.
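The rank implication (1512) is easy to check numerically. Below is a minimal NumPy sketch: the pseudoinverse construction of $W$ is one illustrative way to satisfy the biorthogonality condition $W^{\mathrm T}S = I$, not a construction from the text.

```python
# Numerical check of (1512): if W^T S = I, then rank(S W^T) = k.
# The random S and the pseudoinverse choice of W are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, k = 6, 3

S = rng.standard_normal((N, k))           # full column rank (almost surely)
W = np.linalg.pinv(S).T                   # one choice of W with W^T S = I
assert np.allclose(W.T @ S, np.eye(k))    # biorthogonality condition holds

rank = np.linalg.matrix_rank(S @ W.T)     # rank of the dyad sum
print(rank)                               # → 3, equal to k as (1512) asserts
```

With this particular $W$, the dyad sum $SW^{\mathrm T} = S S^{\dagger}$ happens to be the orthogonal projector onto $\mathcal R(S)$, whose rank is exactly $k$.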

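Theorem B.1.1.1.1 can likewise be exercised on a random instance. This sketch compares subspace ranks as a proxy for the set equalities in (1513); the dimensions and random matrices are illustrative assumptions.

```python
# Sketch verifying Theorem B.1.1.1.1 numerically: when S has a left inverse
# and W^T has a right inverse (both hold a.s. for random full-column-rank
# matrices), N(S W^T) = N(W^T) and R(S W^T) = R(S).
import numpy as np

rng = np.random.default_rng(1)
M = N = 5
k = 2
S = rng.standard_normal((M, k))   # full column rank a.s. => left inverse B exists
W = rng.standard_normal((N, k))   # full column rank a.s. => right inverse Z exists

P = S @ W.T                       # the dyad sum

# N(S W^T) = N(W^T): both nullspaces have dimension N - k when the ranks agree.
assert np.linalg.matrix_rank(P) == np.linalg.matrix_rank(W.T) == k
# R(S W^T) = R(S): appending the columns of S to P adds no new directions.
assert np.linalg.matrix_rank(np.hstack([P, S])) == np.linalg.matrix_rank(S) == k
```

Since $\mathcal R(SW^{\mathrm T}) \subseteq \mathcal R(S)$ always holds, equal ranks suffice to conclude equality of the subspaces; the same dimension argument covers the nullspaces.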
[Figure 133: Four fundamental subspaces [289, §3.6] of a doublet $\Pi = uv^{\mathrm T} + vu^{\mathrm T} \in \mathbb S^N$. The figure depicts $\mathcal R(\Pi) = \mathcal R([\,u\;\;v\,])$, $\mathcal N(\Pi) = v^\perp \cap u^\perp$, and the decomposition $\mathbb{R}^N = \mathcal R([\,u\;\;v\,]) \oplus \mathcal N\!\left(\begin{bmatrix} v^{\mathrm T} \\ u^{\mathrm T} \end{bmatrix}\right)$. $\Pi(x) = (uv^{\mathrm T} + vu^{\mathrm T})x$ is a linear bijective mapping from $\mathcal R([\,u\;\;v\,])$ to $\mathcal R([\,u\;\;v\,])$.]

B.2 Doublet

Consider a sum of two linearly independent square dyads, one a transposition of the other:

$$\Pi = uv^{\mathrm T} + vu^{\mathrm T} = [\,u\;\;v\,]\begin{bmatrix} v^{\mathrm T} \\ u^{\mathrm T} \end{bmatrix} = SW^{\mathrm T} \in \mathbb S^N \tag{1516}$$

where $u, v \in \mathbb{R}^N$. Like the dyad, a doublet can be 0 only when $u$ or $v$ is 0;

$$\Pi = uv^{\mathrm T} + vu^{\mathrm T} = 0 \;\Leftrightarrow\; u = 0 \;\text{ or }\; v = 0 \tag{1517}$$

By assumption of independence, a nonzero doublet has two nonzero eigenvalues

$$\lambda_1 \triangleq u^{\mathrm T}v + \|uv^{\mathrm T}\|\,, \qquad \lambda_2 \triangleq u^{\mathrm T}v - \|uv^{\mathrm T}\| \tag{1518}$$

where $\lambda_1 > 0 > \lambda_2$, with corresponding eigenvectors

$$x_1 \triangleq \frac{u}{\|u\|} + \frac{v}{\|v\|}\,, \qquad x_2 \triangleq \frac{u}{\|u\|} - \frac{v}{\|v\|} \tag{1519}$$

spanning the doublet range. Eigenvalue $\lambda_1$ cannot be 0 unless $u$ and $v$ have opposing directions, but that is antithetical since then the dyads would no longer be independent. Eigenvalue $\lambda_2$ is 0 if and only if $u$ and $v$ share the same direction, again antithetical. Generally we have $\lambda_1 > 0$ and $\lambda_2 < 0$, so $\Pi$ is indefinite.
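The eigenpairs (1518)-(1519) can be confirmed directly, using the fact that the dyad norm satisfies $\|uv^{\mathrm T}\| = \|u\|\,\|v\|$. The particular vectors below are arbitrary examples, not from the text.

```python
# Check of the doublet eigenpairs (1518)-(1519): for Pi = u v^T + v u^T,
# the nonzero eigenvalues are u^T v +/- ||u|| ||v|| (= u^T v +/- ||u v^T||),
# with eigenvectors u/||u|| +/- v/||v||.  Example vectors are illustrative.
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

Pi = np.outer(u, v) + np.outer(v, u)                 # doublet, symmetric
lam1 = u @ v + np.linalg.norm(u) * np.linalg.norm(v) # (1518)
lam2 = u @ v - np.linalg.norm(u) * np.linalg.norm(v)
x1 = u / np.linalg.norm(u) + v / np.linalg.norm(v)   # (1519)
x2 = u / np.linalg.norm(u) - v / np.linalg.norm(v)

assert np.allclose(Pi @ x1, lam1 * x1)               # eigenpair (lam1, x1)
assert np.allclose(Pi @ x2, lam2 * x2)               # eigenpair (lam2, x2)
assert lam1 > 0 > lam2                               # Pi is indefinite
```

Because $u$ and $v$ here are linearly independent, both eigenvalues are nonzero and of opposite sign, illustrating the indefiniteness of $\Pi$ claimed in the text.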

