We get $P = AA^\dagger$, so this projection matrix must be symmetric. Then for any matrix $A\in\mathbb{R}^{m\times n}$, symmetric idempotent $P$ projects a given vector $x$ in $\mathbb{R}^m$ orthogonally on $\mathcal{R}(A)$. Under either condition (1907) or (1908), the projection $Px$ is the unique minimum-distance projection; for subspaces, the perpendicularity and minimum-distance conditions are equivalent.

E.3.1 Four subspaces

We summarize the orthogonal projectors projecting on the four fundamental subspaces: for $A\in\mathbb{R}^{m\times n}$
$$
\begin{array}{lll}
A^\dagger A     &:\ \mathbb{R}^n \text{ on } & \mathcal{R}(A^\dagger A) = \mathcal{R}(A^T)\\
AA^\dagger      &:\ \mathbb{R}^m \text{ on } & \mathcal{R}(AA^\dagger) = \mathcal{R}(A)\\
I - A^\dagger A &:\ \mathbb{R}^n \text{ on } & \mathcal{R}(I - A^\dagger A) = \mathcal{N}(A)\\
I - AA^\dagger  &:\ \mathbb{R}^m \text{ on } & \mathcal{R}(I - AA^\dagger) = \mathcal{N}(A^T)
\end{array}
\tag{1910}
$$
A basis for each fundamental subspace comprises the linearly independent column vectors from its associated symmetric projection matrix:
$$
\begin{array}{l}
\text{basis}\ \mathcal{R}(A^T) \subseteq A^\dagger A \subseteq \mathcal{R}(A^T)\\
\text{basis}\ \mathcal{R}(A)   \subseteq AA^\dagger  \subseteq \mathcal{R}(A)\\
\text{basis}\ \mathcal{N}(A)   \subseteq I - A^\dagger A \subseteq \mathcal{N}(A)\\
\text{basis}\ \mathcal{N}(A^T) \subseteq I - AA^\dagger  \subseteq \mathcal{N}(A^T)
\end{array}
\tag{1911}
$$
For completeness:$^{\text{E.7}}$ (1902)
$$
\begin{array}{l}
\mathcal{N}(A^\dagger A) = \mathcal{N}(A)\\
\mathcal{N}(AA^\dagger)  = \mathcal{N}(A^T)\\
\mathcal{N}(I - A^\dagger A) = \mathcal{R}(A^T)\\
\mathcal{N}(I - AA^\dagger)  = \mathcal{R}(A)
\end{array}
\tag{1912}
$$

E.3.2 Orthogonal characterization

Any symmetric projector $P^2 = P\in\mathbb{S}^m$ projecting on nontrivial $\mathcal{R}(Q)$ can be defined by the orthonormality condition $Q^TQ = I$. When skinny matrix …

$^{\text{E.7}}$ Proof is by singular value decomposition (A.6.2): $\mathcal{N}(A^\dagger A)\subseteq\mathcal{N}(A)$ is obvious. Conversely, suppose $A^\dagger A x = 0$. Then $x^T A^\dagger A x = x^T Q Q^T x = \|Q^T x\|^2 = 0$, where $A = U\Sigma Q^T$ is the subcompact singular value decomposition. Because $\mathcal{R}(Q) = \mathcal{R}(A^T)$, then $x\in\mathcal{N}(A)$, which implies $\mathcal{N}(A^\dagger A)\supseteq\mathcal{N}(A)$. $\therefore\ \mathcal{N}(A^\dagger A) = \mathcal{N}(A)$.
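As a quick numerical illustration of (1910) and (1912) (not part of the book; the matrix, random seed, and tolerances below are illustrative assumptions), all four projectors can be formed from a Moore-Penrose pseudoinverse such as NumPy's np.linalg.pinv, then checked for symmetry, idempotence, and the stated range/nullspace identities:

```python
# Sketch (assumed data): numerically verify the four orthogonal projectors of (1910).
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
B = rng.standard_normal((m, 2))
C = rng.standard_normal((2, n))
A = B @ C                              # deliberately rank-deficient: rank 2 < min(m, n)
Apinv = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse A†

P_rowspace  = Apinv @ A                # A†A   : projects R^n on R(A^T)
P_colspace  = A @ Apinv                # AA†   : projects R^m on R(A)
P_nullspace = np.eye(n) - Apinv @ A    # I-A†A : projects R^n on N(A)
P_leftnull  = np.eye(m) - A @ Apinv    # I-AA† : projects R^m on N(A^T)

for P in (P_rowspace, P_colspace, P_nullspace, P_leftnull):
    assert np.allclose(P, P.T)         # symmetric
    assert np.allclose(P @ P, P)       # idempotent

# Identities behind (1910) and (1912):
x = rng.standard_normal(n)
assert np.allclose(A @ (P_rowspace @ x), A @ x)          # A A†A = A, so N(A†A) = N(A)
assert np.allclose(P_colspace @ A, A)                    # AA† leaves R(A) fixed
assert np.allclose(A @ P_nullspace, np.zeros((m, n)))    # columns of I-A†A lie in N(A)
assert np.allclose(A.T @ P_leftnull, np.zeros((n, m)))   # columns of I-AA† lie in N(A^T)
print("all projector identities hold to numerical tolerance")
```

A rank-deficient $A$ is used so the pseudoinverse is genuinely exercised; with full column rank, $A^\dagger A$ would simply reduce to the identity on $\mathbb{R}^n$.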

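The orthonormality condition of E.3.2 can be illustrated the same way (again a sketch under assumed data, not the book's development): orthonormalize any spanning set for the subspace of interest, and the resulting skinny $Q$ satisfying $Q^TQ = I$ gives the symmetric projector $QQ^T$, which agrees with $AA^\dagger$ when $\mathcal{R}(Q) = \mathcal{R}(A)$.

```python
# Sketch (assumed construction, consistent with E.3.2): build a symmetric projector
# P = Q Q^T from a matrix Q with orthonormal columns, Q^T Q = I.
import numpy as np

rng = np.random.default_rng(1)
m = 6
A = rng.standard_normal((m, 4)) @ rng.standard_normal((4, 4))

# Orthonormal basis Q for R(A) from the compact SVD A = U Sigma V^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-12 * s[0]))    # numerical rank
Q = U[:, :r]                         # columns span R(A), and Q^T Q = I

assert np.allclose(Q.T @ Q, np.eye(r))         # orthonormality condition
P = Q @ Q.T                                    # symmetric projector on R(Q) = R(A)
assert np.allclose(P, P.T) and np.allclose(P @ P, P)
assert np.allclose(P, A @ np.linalg.pinv(A))   # agrees with AA† from (1910)
```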