
APPENDIX B. SIMPLE MATRICES

B.1 Rank-one matrix (dyad)

Any matrix formed from the unsigned outer product of two vectors,

Ψ = uv^T ∈ R^{M×N} (1492)

where u ∈ R^M and v ∈ R^N, is rank-one and called a dyad. Conversely, any rank-one matrix must have the form Ψ. [176, prob.1.4.1] The product −uv^T is a negative dyad. For matrix products AB^T, in general, we have

R(AB^T) ⊆ R(A) ,  N(AB^T) ⊇ N(B^T) (1493)

with equality when B = A [287, 3.3, 3.6]^{B.1} or respectively when B is invertible and N(A) = 0. Yet for all nonzero dyads we have

R(uv^T) = R(u) ,  N(uv^T) = N(v^T) ≡ v^⊥ (1494)

where dim v^⊥ = N − 1.
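Properties (1492) and (1494) are easy to spot-check numerically. A minimal sketch using NumPy (dimensions and random vectors chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 3
u = rng.standard_normal(M)
v = rng.standard_normal(N)

Psi = np.outer(u, v)          # dyad uv^T in R^{M x N}  (1492)

# a nonzero dyad is rank-one
assert np.linalg.matrix_rank(Psi) == 1

# R(uv^T) = R(u): every column of Psi is a scalar multiple of u  (1494)
for col in Psi.T:
    assert np.isclose(abs(u @ col), np.linalg.norm(u) * np.linalg.norm(col))

# N(uv^T) = v_perp: any x orthogonal to v is annihilated
x = rng.standard_normal(N)
x -= (x @ v) / (v @ v) * v    # project out the component along v
assert np.allclose(Psi @ x, 0)
```

The column test uses the Cauchy–Schwarz equality condition: |u·c| = ‖u‖‖c‖ holds exactly when c is parallel to u.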

It is obvious that a dyad can be 0 only when u or v is 0;

Ψ = uv^T = 0 ⇔ u = 0 or v = 0 (1495)

The matrix 2-norm for Ψ is equivalent to the Frobenius norm;

‖Ψ‖_2 = ‖uv^T‖_F = ‖uv^T‖_2 = ‖u‖‖v‖ (1496)
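Identity (1496) follows because a dyad has exactly one nonzero singular value, so its spectral and Frobenius norms coincide. A quick numerical check with NumPy (random vectors, sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5)
v = rng.standard_normal(7)
Psi = np.outer(u, v)

two_norm = np.linalg.norm(Psi, 2)       # largest singular value
fro_norm = np.linalg.norm(Psi, 'fro')   # sqrt of sum of squared entries
product  = np.linalg.norm(u) * np.linalg.norm(v)

# for a dyad all three quantities coincide  (1496)
assert np.isclose(two_norm, fro_norm)
assert np.isclose(two_norm, product)
```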

When u and v are normalized, the pseudoinverse is the transposed dyad. Otherwise,

Ψ^† = (uv^T)^† = vu^T / (‖u‖^2 ‖v‖^2) (1497)
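The closed form (1497) can be verified against a general-purpose pseudoinverse routine; a sketch with NumPy (random vectors, sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(4)
v = rng.standard_normal(3)
Psi = np.outer(u, v)

# closed-form pseudoinverse of a dyad  (1497)
Psi_pinv = np.outer(v, u) / (u @ u) / (v @ v)
assert np.allclose(Psi_pinv, np.linalg.pinv(Psi))

# with u and v normalized, the pseudoinverse is just the transposed dyad
u_hat, v_hat = u / np.linalg.norm(u), v / np.linalg.norm(v)
assert np.allclose(np.linalg.pinv(np.outer(u_hat, v_hat)),
                   np.outer(v_hat, u_hat))
```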

B.1 Proof. R(AA^T) ⊆ R(A) is obvious.

R(AA^T) = {AA^T y | y ∈ R^m}
        ⊇ {AA^T y | A^T y ∈ R(A^T)} = R(A) by (132)
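The footnote's conclusion R(AA^T) = R(A) can likewise be spot-checked numerically. A sketch (the matrix is random with deliberately deficient rank, so the equality is not trivially forced by full rank):

```python
import numpy as np

rng = np.random.default_rng(3)
# a rank-2 matrix in R^{5x4}
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
G = A @ A.T

# equal ranks, combined with R(AA^T) subset of R(A), forces equality of ranges
assert np.linalg.matrix_rank(G) == np.linalg.matrix_rank(A)

# every column of AA^T lies in R(A): the least-squares residual vanishes
X, *_ = np.linalg.lstsq(A, G, rcond=None)
assert np.allclose(A @ X, G)
```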
