v2009.01.01 - Convex Optimization
666 APPENDIX E. PROJECTION

E.6.4.1.1 Example. Eigenvalues λ as coefficients of orthogonal projection.
Let C represent any convex subset of subspace S^M, and let C_1 be any element of C. Then C_1 can be expressed as the orthogonal expansion

    C_1 = ∑_{i=1}^{M} ∑_{j=1, j≥i}^{M} ⟨E_ij , C_1⟩ E_ij  ∈  C ⊂ S^M    (1856)

where E_ij ∈ S^M is a member of the standard orthonormal basis for S^M (52). This expansion is a sum of one-dimensional orthogonal projections of C_1; each projection on the range of a vectorized standard basis matrix. Vector inner-product ⟨E_ij , C_1⟩ is the coefficient of projection of svec C_1 on R(svec E_ij).

When C_1 is any member of a convex set C whose dimension is L, Carathéodory's theorem [96] [266] [173] [36] [37] guarantees that no more than L+1 affinely independent members from C are required to faithfully represent C_1 by their linear combination.^{E.11}

Dimension of S^M is L = M(M+1)/2 in isometrically isomorphic R^{M(M+1)/2}. Yet because any symmetric matrix can be diagonalized (§A.5.2), C_1 ∈ S^M is a linear combination of its M eigenmatrices q_i q_i^T (§A.5.1) weighted by its eigenvalues λ_i;

    C_1 = QΛQ^T = ∑_{i=1}^{M} λ_i q_i q_i^T    (1857)

where Λ ∈ S^M is a diagonal matrix having δ(Λ)_i = λ_i, and Q = [q_1 ⋯ q_M] is an orthogonal matrix in R^{M×M} containing corresponding eigenvectors. To derive eigendecomposition (1857) from expansion (1856), M standard basis matrices E_ij are rotated (§B.5) into alignment with the M eigenmatrices q_i q_i^T of C_1 by applying a similarity transformation; [287, §5.6]

    {QE_ij Q^T} = { q_i q_i^T ,                       i = j = 1 ... M
                  { (1/√2)(q_i q_j^T + q_j q_i^T) ,   1 ≤ i < j ≤ M    (1858)

^{E.11} Carathéodory's theorem guarantees existence of a biorthogonal expansion for any element in aff C when C is any pointed closed convex cone.
E.6. VECTORIZATION INTERPRETATION, 667

which remains an orthonormal basis for S^M. Then remarkably

    C_1 = ∑_{i,j=1, j≥i}^{M} ⟨QE_ij Q^T , C_1⟩ QE_ij Q^T
        = ∑_{i=1}^{M} ⟨q_i q_i^T , C_1⟩ q_i q_i^T + ∑_{i,j=1, j>i}^{M} ⟨QE_ij Q^T , QΛQ^T⟩ QE_ij Q^T
        = ∑_{i=1}^{M} ⟨q_i q_i^T , C_1⟩ q_i q_i^T  ≜  ∑_{i=1}^{M} ⟨P_i , C_1⟩ P_i = ∑_{i=1}^{M} q_i q_i^T C_1 q_i q_i^T
        = ∑_{i=1}^{M} λ_i q_i q_i^T = ∑_{i=1}^{M} P_i C_1 P_i    (1859)

this orthogonal expansion becomes the diagonalization; still a sum of one-dimensional orthogonal projections. The eigenvalues

    λ_i = ⟨q_i q_i^T , C_1⟩    (1860)

are clearly coefficients of projection of C_1 on the range of each vectorized eigenmatrix. (confer §E.6.2.1.1) The remaining M(M−1)/2 coefficients (i≠j) are zeroed by projection. When P_i is rank-one symmetric as in (1859),

    R(svec P_i C_1 P_i) = R(svec q_i q_i^T) = R(svec P_i)  in R^{M(M+1)/2}    (1861)

and

    P_i C_1 P_i − C_1  ⊥  P_i  in R^{M(M+1)/2}    (1862)

E.6.4.2 Positive semidefiniteness test as orthogonal projection

For any given X ∈ R^{m×m} the familiar quadratic construct y^T X y ≥ 0, over broad domain, is a fundamental test for positive semidefiniteness. (§A.2) It is a fact that y^T X y is always proportional to a coefficient of orthogonal projection; letting z in formula (1851) become y ∈ R^m, then P_2 = P_1 = yy^T/y^T y = yy^T/‖yy^T‖_2 (confer (1496)) and formula (1852) becomes

    (⟨yy^T , X⟩ / ⟨yy^T , yy^T⟩) yy^T = (y^T X y / y^T y)(yy^T / y^T y) = (yy^T / y^T y) X (yy^T / y^T y) ≜ P_1 X P_1    (1863)
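The passage from expansion (1856) to diagonalization (1859) lends itself to a numerical check. The following sketch is not from the text; the dimension M and the random symmetric matrix standing in for C_1 are arbitrary choices. It builds the rotated basis {QE_ij Q^T} of (1858), confirms it is orthonormal under the trace inner product, and confirms that expansion in that basis reproduces C_1:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M))
C1 = (A + A.T) / 2                       # arbitrary symmetric matrix standing in for C_1

lam, Q = np.linalg.eigh(C1)              # C1 = Q diag(lam) Q^T, as in (1857)

# rotated orthonormal basis {Q E_ij Q^T} of S^M, per (1858)
basis = []
for i in range(M):
    for j in range(i, M):
        qi, qj = Q[:, i], Q[:, j]
        if i == j:
            basis.append(np.outer(qi, qi))
        else:
            basis.append((np.outer(qi, qj) + np.outer(qj, qi)) / np.sqrt(2))

# orthonormality under the trace inner product <A, B> = tr(A^T B)
G = np.array([[np.trace(B1.T @ B2) for B2 in basis] for B1 in basis])
assert np.allclose(G, np.eye(M * (M + 1) // 2))

# expansion (1856) in the rotated basis: only the M coefficients
# <q_i q_i^T, C1> = lam_i survive, and the sum recovers C1 per (1859)
coeffs = np.array([np.trace(B.T @ C1) for B in basis])
assert np.allclose(sum(c * B for c, B in zip(coeffs, basis)), C1)
```

The off-diagonal (i < j) coefficients come out numerically zero, illustrating the claim that the remaining M(M−1)/2 coefficients are zeroed by projection.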
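Identity (1863) can likewise be verified numerically. A minimal sketch, assuming nothing beyond the formula itself (the particular X, y, and dimension m are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
X = rng.standard_normal((m, m))          # arbitrary X in R^{m x m}
y = rng.standard_normal(m)

P1 = np.outer(y, y) / (y @ y)            # P1 = yy^T / y^T y
assert np.allclose(P1 @ P1, P1)          # idempotent: an orthogonal projector

coeff = (y @ X @ y) / (y @ y)            # y^T X y / y^T y
yyT = np.outer(y, y)

# <yy^T, X>/<yy^T, yy^T> * yy^T  =  P1 X P1  =  coeff * P1, per (1863)
lhs = (np.trace(yyT.T @ X) / np.trace(yyT.T @ yyT)) * yyT
assert np.allclose(lhs, P1 @ X @ P1)
assert np.allclose(P1 @ X @ P1, coeff * P1)
```

So y^T X y carries the same sign as the coefficient of projection of X on R(svec yy^T), which is why the quadratic construct tests positive semidefiniteness.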