v2010.10.26 - Convex Optimization



90 CHAPTER 2. CONVEX GEOMETRY

Figure 31: Maximizing hyperplane ∂H, whose normal is vector c ∈ P, over polyhedral set P in R^2 is a linear program (157). Optimal solution x⋆ at •.

Suppose the columns of a matrix Z constitute a basis for N(A) while the columns of a matrix W constitute a basis for N(BZ). Then [159, §12.4.2]

    N(A) ∩ N(B) = R(ZW)    (159)

If each basis is orthonormal, then the columns of ZW constitute an orthonormal basis for the intersection.

In the particular circumstance A and B are each positive semidefinite [21, §6], or in the circumstance A and B are two linearly independent dyads (§B.1.1), then

    N(A) ∩ N(B) = N(A + B) ,    A, B ∈ S^M_+  or  A + B = u1 v1^T + u2 v2^T (l.i.)    (160)

2.5.3 Visualization of matrix subspaces

Fundamental subspace relations, such as

    R(A^T) ⊥ N(A) ,   N(A^T) ⊥ R(A)    (137)
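The nullspace-intersection identity N(A) ∩ N(B) = R(ZW) in (159) is easy to check numerically. A minimal NumPy sketch, where the example matrices A and B are hypothetical (not from the text) and each orthonormal nullspace basis is taken from the SVD:

```python
import numpy as np

def nullspace_basis(A, tol=1e-10):
    """Orthonormal basis for N(A): right singular vectors of A
    whose singular values are numerically zero."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T          # columns span N(A)

# Hypothetical example matrices (not from the text):
A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
B = np.array([[0., 0., 1., 0.]])

Z = nullspace_basis(A)          # columns: basis for N(A)
W = nullspace_basis(B @ Z)      # columns: basis for N(BZ)
ZW = Z @ W                      # columns span N(A) ∩ N(B)   (159)

# every column of ZW lies in both nullspaces
assert np.allclose(A @ ZW, 0) and np.allclose(B @ ZW, 0)
# each basis orthonormal => ZW has orthonormal columns
assert np.allclose(ZW.T @ ZW, np.eye(ZW.shape[1]))
```

Here N(A) = span{e3, e4} and N(B) = {x : x3 = 0}, so the intersection is the one-dimensional span of e4, matching the single column of ZW.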

2.5. SUBSPACE REPRESENTATIONS 91

are partially defining. But to aid visualization of involved geometry, it sometimes helps to vectorize matrices. For any square matrix A, s ∈ N(A), and w ∈ N(A^T)

    ⟨A, ss^T⟩ = 0 ,   ⟨A, ww^T⟩ = 0    (161)

because s^T A s = w^T A w = 0. This innocuous observation becomes a sharp instrument for visualization of diagonalizable matrices (§A.5.1): for rank-ρ matrix A ∈ R^{M×M}

    A = S Λ S^{-1} = [ s1 ··· sM ] Λ [ w1^T ; … ; wM^T ] = Σ_{i=1..M} λi si wi^T    (1547)

where nullspace eigenvectors are real by theorem A.5.0.0.1 and where (§B.1.1)

    R{si ∈ R^M | λi = 0} = R( Σ_{i=ρ+1..M} si si^T ) = N(A)
    R{wi ∈ R^M | λi = 0} = R( Σ_{i=ρ+1..M} wi wi^T ) = N(A^T)    (162)

Define an unconventional basis among column vectors of each summation:

    basis N(A) ⊆ Σ_{i=ρ+1..M} si si^T ⊆ N(A)
    basis N(A^T) ⊆ Σ_{i=ρ+1..M} wi wi^T ⊆ N(A^T)    (163)

We shall regard a vectorized subspace as vectorization of any M×M matrix whose columns comprise an overcomplete basis for that subspace; e.g. (§E.3.1)

    vec basis N(A) = vec Σ_{i=ρ+1..M} si si^T
    vec basis N(A^T) = vec Σ_{i=ρ+1..M} wi wi^T    (164)

By this reckoning, vec basis R(A) = vec A but is not unique. Now, because

    ⟨ A , Σ_{i=ρ+1..M} si si^T ⟩ = 0 ,   ⟨ A , Σ_{i=ρ+1..M} wi wi^T ⟩ = 0    (165)

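The orthogonality relations (161) and the eigendecomposition (1547) can be illustrated numerically. A minimal sketch, assuming a small hypothetical diagonalizable matrix (M = 3, ρ = 2) built from a chosen eigenvector matrix S with eigenvalues (2, 1, 0):

```python
import numpy as np

# hypothetical eigenvector matrix S (invertible) and eigenvalues (2, 1, 0)
S = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
Lam = np.diag([2., 1., 0.])
A = S @ Lam @ np.linalg.inv(S)   # A = S Λ S^{-1}   (1547)

Sinv = np.linalg.inv(S)          # rows of S^{-1} are left eigenvectors w_i^T
s3 = S[:, 2]                     # right eigenvector with λ = 0: spans N(A)
w3 = Sinv[2, :]                  # left eigenvector with λ = 0: spans N(A^T)

# s3 and w3 really are nullspace vectors of A and A^T
assert np.allclose(A @ s3, 0) and np.allclose(w3 @ A, 0)

# (161): dyads built from nullspace eigenvectors are orthogonal to A
assert np.isclose(np.trace(A.T @ np.outer(s3, s3)), 0)   # <A, s s^T> = 0
assert np.isclose(np.trace(A.T @ np.outer(w3, w3)), 0)   # <A, w w^T> = 0
```

With ρ = 2 and M = 3 the sums in (162)–(165) reduce to the single dyads s3 s3^T and w3 w3^T, so the two trace checks above are exactly instances of (165).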
