Because the extreme directions of this cone $\mathcal{K}$ are linearly independent, the component projections are unique in the sense: there is only one linear combination of extreme directions of $\mathcal{K}$ that yields a particular point $x\in\mathcal{R}(A)$ whenever

$$\mathcal{R}(A) = \operatorname{aff}\mathcal{K} = \mathcal{R}(a_1)\oplus\mathcal{R}(a_2)\oplus\cdots\oplus\mathcal{R}(a_n) \tag{1815}$$

E.5.0.0.4 Example. Nonorthogonal projection on elementary matrix.
Suppose $P_{\mathcal{Y}}$ is a linear nonorthogonal projector projecting on subspace $\mathcal{Y}$, and suppose the range of a vector $u$ is linearly independent of $\mathcal{Y}$; id est, for some other subspace $\mathcal{M}$ containing $\mathcal{Y}$ suppose

$$\mathcal{M} = \mathcal{R}(u)\oplus\mathcal{Y} \tag{1816}$$

Assuming $P_{\mathcal{M}}x = P_u x + P_{\mathcal{Y}}x$ holds, it follows for vector $x\in\mathcal{M}$

$$P_u x = x - P_{\mathcal{Y}}x\,,\qquad P_{\mathcal{Y}}x = x - P_u x \tag{1817}$$

That is, nonorthogonal projection of $x$ on $\mathcal{R}(u)$ can be determined from nonorthogonal projection of $x$ on $\mathcal{Y}$, and vice versa.

Such a scenario is realizable were there some arbitrary basis for $\mathcal{Y}$ populating a full-rank skinny-or-square matrix $A$

$$A \triangleq [\,\text{basis}\,\mathcal{Y}\quad u\,] \in \mathbb{R}^{n+1} \tag{1818}$$

Then $P_{\mathcal{M}} = AA^\dagger$ fulfills the requirements, with $P_u = A(:,n{+}1)\,A^\dagger(n{+}1,:)$ and $P_{\mathcal{Y}} = A(:,1{:}n)\,A^\dagger(1{:}n,:)$. Observe, $P_{\mathcal{M}}$ is an orthogonal projector whereas $P_{\mathcal{Y}}$ and $P_u$ are nonorthogonal projectors.

Now suppose, for example, $P_{\mathcal{Y}}$ is an elementary matrix (§B.3); in particular,

$$P_{\mathcal{Y}} = I - e_1\mathbf{1}^{\mathrm{T}} = \bigl[\,0\quad \sqrt{2}\,V_{\mathcal{N}}\,\bigr] \in \mathbb{R}^{N\times N} \tag{1819}$$

where $\mathcal{Y} = \mathcal{N}(\mathbf{1}^{\mathrm{T}})$. We have $\mathcal{M} = \mathbb{R}^N$, $A = [\,\sqrt{2}\,V_{\mathcal{N}}\quad e_1\,]$, and $u = e_1$. Thus $P_u = e_1\mathbf{1}^{\mathrm{T}}$ is a nonorthogonal projector projecting on $\mathcal{R}(u)$ in a direction parallel to a vector in $\mathcal{Y}$ (§E.3.5), and $P_{\mathcal{Y}}x = x - e_1\mathbf{1}^{\mathrm{T}}x$ is a nonorthogonal projection of $x$ on $\mathcal{Y}$ in a direction parallel to $u$.
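As a quick numerical check of this construction (a NumPy sketch, not from the book; the dimension N = 5 is arbitrary, and the explicit form √2 V_N = [−1ᵀ; I] follows the book's definition of the auxiliary matrix V_N):

```python
import numpy as np

N = 5
# sqrt(2)*V_N = [-1^T; I] in R^{N x (N-1)}: its columns form a basis for Y = N(1^T)
sqrt2VN = np.vstack([-np.ones((1, N - 1)), np.eye(N - 1)])
e1 = np.eye(N)[:, [0]]                        # u = e_1

A = np.hstack([sqrt2VN, e1])                  # A = [ basis Y  u ], square and full rank
Apinv = np.linalg.pinv(A)

P_M = A @ Apinv                               # orthogonal projector on M = R^N
P_Y = A[:, :N - 1] @ Apinv[:N - 1, :]         # nonorthogonal projector on Y
P_u = A[:, [N - 1]] @ Apinv[[N - 1], :]       # nonorthogonal projector on R(u)

ones = np.ones((N, 1))
assert np.allclose(P_M, np.eye(N))                   # M is all of R^N here
assert np.allclose(P_Y, np.eye(N) - e1 @ ones.T)     # agrees with (1819)
assert np.allclose(P_u, e1 @ ones.T)                 # P_u = e_1 1^T
assert np.allclose(P_Y @ P_Y, P_Y) and not np.allclose(P_Y, P_Y.T)
```

The final assert confirms $P_{\mathcal{Y}}$ is idempotent but asymmetric, hence a projector that is nonorthogonal.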

E.5.0.0.5 Example. Projecting the origin on a hyperplane. (confer §2.4.2.0.2)
Given the hyperplane representation having $b\in\mathbb{R}$ and nonzero normal $a\in\mathbb{R}^m$

$$\partial\mathcal{H} = \{y \mid a^{\mathrm{T}}y = b\} \subset \mathbb{R}^m \tag{105}$$

orthogonal projection of the origin $P0$ on that hyperplane is the unique optimal solution to a minimization problem (1784):

$$\|P0 - 0\|_2 = \inf_{y\in\partial\mathcal{H}} \|y - 0\|_2 = \inf_{\xi\in\mathbb{R}^{m-1}} \|Z\xi + x\|_2 \tag{1820}$$

where $x$ is any solution to $a^{\mathrm{T}}y = b$, and where the columns of $Z\in\mathbb{R}^{m\times m-1}$ constitute a basis for $\mathcal{N}(a^{\mathrm{T}})$ so that $y = Z\xi + x \in \partial\mathcal{H}$ for all $\xi\in\mathbb{R}^{m-1}$. The infimum can be found by setting the gradient (with respect to $\xi$) of the strictly convex norm-square to 0. We find the minimizing argument

$$\xi^\star = -(Z^{\mathrm{T}}Z)^{-1}Z^{\mathrm{T}}x \tag{1821}$$

so

$$y^\star = \bigl(I - Z(Z^{\mathrm{T}}Z)^{-1}Z^{\mathrm{T}}\bigr)x \tag{1822}$$

and from (1786)

$$P0 = y^\star = a(a^{\mathrm{T}}a)^{-1}a^{\mathrm{T}}x = \frac{a}{\|a\|}\,\frac{a^{\mathrm{T}}}{\|a\|}\,x \triangleq A^\dagger A\,x = a\,\frac{b}{\|a\|^2} \tag{1823}$$

In words, any point $x$ in the hyperplane $\partial\mathcal{H}$ projected on its normal $a$ (confer (1848)) yields that point $y^\star$ in the hyperplane closest to the origin. (A numeric check of this example follows Example E.5.0.0.6 below.)

E.5.0.0.6 Example. Projection on affine subset.
The technique of Example E.5.0.0.5 is extensible. Given an intersection of hyperplanes

$$\mathcal{A} = \{y \mid Ay = b\} \subset \mathbb{R}^n \tag{1824}$$

where each row of $A\in\mathbb{R}^{m\times n}$ is nonzero and $b\in\mathcal{R}(A)$, then the orthogonal projection $Px$ of any point $x\in\mathbb{R}^n$ on $\mathcal{A}$ is the solution to a minimization problem:

$$\|Px - x\|_2 = \inf_{y\in\mathcal{A}} \|y - x\|_2 = \inf_{\xi\in\mathbb{R}^{n-\operatorname{rank}A}} \|Z\xi + y_p - x\|_2 \tag{1825}$$

where $y_p$ is any particular solution to $Ay = b$.
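As a numeric check of Example E.5.0.0.5 (a NumPy sketch under stated assumptions: $Z$ is taken orthonormal from an SVD, and the particular solution $x$ is built from the largest-magnitude entry of $a$; neither choice is prescribed by the text):

```python
import numpy as np
rng = np.random.default_rng(0)

m = 4
a = rng.standard_normal((m, 1))              # nonzero normal of the hyperplane
b = 1.7

# closed form (1823): projection of the origin on dH = {y | a^T y = b}
P0 = a * b / float(a.T @ a)

# null-space route (1820)-(1822)
Z = np.linalg.svd(a.T)[2][1:, :].T           # columns: orthonormal basis for N(a^T)
k = int(np.argmax(np.abs(a)))                # build "any solution x" of a^T y = b
x = np.zeros((m, 1)); x[k, 0] = b / a[k, 0]
xi_star = -np.linalg.solve(Z.T @ Z, Z.T @ x) # minimizing argument (1821)
y_star = Z @ xi_star + x                     # (1822)

assert np.allclose(y_star, P0)               # agrees with (1823)
assert np.isclose(float(a.T @ y_star), b)    # y* lies in the hyperplane
```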

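Example E.5.0.0.6 breaks off at (1825), but the same machinery can be sketched numerically. The closed form $x - A^\dagger(Ax - b)$ used for comparison below is the standard pseudoinverse formula for orthogonal projection on $\{y \mid Ay = b\}$; it is an assumption here, since the excerpt ends before the book's own solution:

```python
import numpy as np
rng = np.random.default_rng(1)

m, n = 3, 6
A = rng.standard_normal((m, n))   # rows nonzero; full row rank almost surely
b = rng.standard_normal((m, 1))   # b in R(A) since A has full row rank
x = rng.standard_normal((n, 1))

y_p = np.linalg.pinv(A) @ b       # a particular solution of A y = b
r = np.linalg.matrix_rank(A)
Z = np.linalg.svd(A)[2][r:, :].T  # columns: orthonormal basis for N(A), n x (n - rank A)

xi_star = np.linalg.solve(Z.T @ Z, Z.T @ (x - y_p))  # minimizer of (1825)
Px = Z @ xi_star + y_p

assert np.allclose(A @ Px, b)     # Px lies in the affine subset
assert np.allclose(Px, x - np.linalg.pinv(A) @ (A @ x - b))  # pseudoinverse closed form
```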
