v2009.01.01 - Convex Optimization
658 APPENDIX E. PROJECTION

Because the extreme directions of this cone K are linearly independent, the component projections are unique in the sense: there is only one linear combination of extreme directions of K that yields a particular point x ∈ R(A) whenever

    R(A) = aff K = R(a₁) ⊕ R(a₂) ⊕ … ⊕ R(aₙ)    (1815)

E.5.0.0.4 Example. Nonorthogonal projection on elementary matrix.
Suppose P_Y is a linear nonorthogonal projector projecting on subspace Y, and suppose the range of a vector u is linearly independent of Y; id est, for some other subspace M containing Y suppose

    M = R(u) ⊕ Y    (1816)

Assuming P_M x = P_u x + P_Y x holds, then it follows for vector x ∈ M

    P_u x = x − P_Y x ,    P_Y x = x − P_u x    (1817)

so nonorthogonal projection of x on R(u) can be determined from nonorthogonal projection of x on Y, and vice versa.

Such a scenario is realizable were there some arbitrary basis for Y populating a full-rank skinny-or-square matrix A

    A ≜ [ basis Y    u ] ∈ R^{m×(n+1)}    (1818)

Then P_M = AA† fulfills the requirements, with P_u = A(:, n+1)A†(n+1, :) and P_Y = A(:, 1:n)A†(1:n, :). Observe, P_M is an orthogonal projector whereas P_Y and P_u are nonorthogonal projectors.

Now suppose, for example, P_Y is an elementary matrix (B.3); in particular,

    P_Y = I − e₁1ᵀ = [ 0    √2 V_N ] ∈ R^{N×N}    (1819)

where Y = N(1ᵀ). We have M = R^N, A = [ √2 V_N   e₁ ], and u = e₁. Thus P_u = e₁1ᵀ is a nonorthogonal projector projecting on R(u) in a direction parallel to a vector in Y (E.3.5), and P_Y x = x − e₁1ᵀx is a nonorthogonal projection of x on Y in a direction parallel to u.
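The decomposition P_M x = P_u x + P_Y x is easy to verify numerically. The following sketch (NumPy; N = 4 is an arbitrary illustrative choice, and the explicit form √2 V_N = [−1ᵀ ; I] assumes the book's definition of V_N) builds A = [ √2 V_N   e₁ ] and recovers all three projectors from its pseudoinverse, exactly as prescribed by the MATLAB-style column/row selections above:

```python
import numpy as np

N = 4
# sqrt(2)*V_N = [-1^T; I] in R^{N x (N-1)}: a full-rank basis for Y = N(1^T)
sqrt2VN = np.vstack([-np.ones((1, N - 1)), np.eye(N - 1)])
e1 = np.eye(N)[:, :1]                       # u = e1

A = np.hstack([sqrt2VN, e1])                # A = [ sqrt(2) V_N   e1 ], square, full rank
Ap = np.linalg.pinv(A)

PM = A @ Ap                                 # orthogonal projector on M = R^N
PY = A[:, :N - 1] @ Ap[:N - 1, :]           # P_Y = A(:,1:n) A^+(1:n,:)
Pu = A[:, N - 1:] @ Ap[N - 1:, :]           # P_u = A(:,n+1) A^+(n+1,:)

ones = np.ones((N, 1))
assert np.allclose(PM, np.eye(N))                  # M = R^N, so P_M = I
assert np.allclose(PY, np.eye(N) - e1 @ ones.T)    # P_Y = I - e1 1^T   (1819)
assert np.allclose(Pu, e1 @ ones.T)                # P_u = e1 1^T
assert np.allclose(PY @ PY, PY)                    # idempotent ...
assert not np.allclose(PY, PY.T)                   # ... but not symmetric: nonorthogonal
```

The last two assertions make the point of the example concrete: P_Y and P_u are idempotent yet asymmetric, hence nonorthogonal projectors, while P_M = AA† is symmetric.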
E.5. PROJECTION EXAMPLES 659

E.5.0.0.5 Example. Projecting the origin on a hyperplane. (confer 2.4.2.0.2)
Given the hyperplane representation having b ∈ R and nonzero normal a ∈ R^m

    ∂H = {y | aᵀy = b} ⊂ R^m    (105)

orthogonal projection of the origin P0 on that hyperplane is the unique optimal solution to a minimization problem (1784):

    ‖P0 − 0‖₂ = inf_{y∈∂H} ‖y − 0‖₂ = inf_{ξ∈R^{m−1}} ‖Zξ + x‖₂    (1820)

where x is any solution to aᵀy = b, and where the columns of Z ∈ R^{m×m−1} constitute a basis for N(aᵀ) so that y = Zξ + x ∈ ∂H for all ξ ∈ R^{m−1}.

The infimum can be found by setting the gradient (with respect to ξ) of the strictly convex norm-square to 0. We find the minimizing argument

    ξ⋆ = −(ZᵀZ)⁻¹Zᵀx    (1821)

so

    y⋆ = ( I − Z(ZᵀZ)⁻¹Zᵀ ) x    (1822)

and from (1786)

    P0 = y⋆ = a(aᵀa)⁻¹aᵀx = (a/‖a‖)(aᵀ/‖a‖) x ≜ AA†x = a b/‖a‖²    (1823)

In words, any point x in the hyperplane ∂H projected on its normal a (confer (1848)) yields that point y⋆ in the hyperplane closest to the origin.

E.5.0.0.6 Example. Projection on affine subset.
The technique of Example E.5.0.0.5 is extensible. Given an intersection of hyperplanes

    A = {y | Ay = b} ⊂ R^n    (1824)

where each row of A ∈ R^{m×n} is nonzero and b ∈ R(A), then the orthogonal projection Px of any point x ∈ R^n on A is the solution to a minimization problem:

    ‖Px − x‖₂ = inf_{y∈A} ‖y − x‖₂ = inf_{ξ∈R^{n−rank A}} ‖Zξ + y_p − x‖₂    (1825)
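Both examples admit short numerical checks. The sketch below (NumPy; all data are illustrative choices, not from the book) first verifies the closed form P0 = a b/‖a‖² of (1823) against the null-space formulation (1822), then projects a point on an affine subset via Px = x + A†(b − Ax), a closed form that follows from (1825) once the columns of Z are taken orthonormal:

```python
import numpy as np

# --- Example E.5.0.0.5: project the origin on dH = {y | a^T y = b} ---
a = np.array([1.0, 2.0, 2.0])
b = 6.0
P0 = a * b / (a @ a)                        # closed form (1823)
assert np.isclose(a @ P0, b)                # P0 lies in the hyperplane

# cross-check via (1822): y* = (I - Z Z^T) x  for orthonormal Z spanning N(a^T)
_, _, Vt = np.linalg.svd(a.reshape(1, -1))
Z = Vt[1:].T                                # orthonormal basis for N(a^T), Z^T Z = I
x0 = np.array([6.0, 0.0, 0.0])              # one particular solution of a^T y = b
ystar = (np.eye(3) - Z @ Z.T) @ x0
assert np.allclose(ystar, P0)

# --- Example E.5.0.0.6: project x on the affine subset {y | Ay = b} ---
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b2 = np.array([1.0, 2.0])
x = np.array([3.0, -1.0, 5.0])
Px = x + np.linalg.pinv(A) @ (b2 - A @ x)   # orthogonal projection on the affine subset
assert np.allclose(A @ Px, b2)              # Px satisfies the constraints
# optimality: the residual x - Px is orthogonal to N(A)
_, _, Vt2 = np.linalg.svd(A)
ZA = Vt2[np.linalg.matrix_rank(A):].T       # orthonormal basis for N(A)
assert np.allclose(ZA.T @ (x - Px), 0)
```

The final assertion is the optimality condition underlying (1825): the residual of an orthogonal projection on an affine subset must lie in R(Aᵀ), the orthogonal complement of N(A).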