v2010.10.26 - Convex Optimization
CHAPTER 1. OVERVIEW

to a given matrix H :

   minimize_D      ‖−V(D − H)V‖²_F
   subject to      rank V D V ≤ ρ
                   D ∈ EDM^N

   minimize_D      ‖D − H‖²_F
   subject to      rank V D V ≤ ρ
                   D ∈ EDM^N

   minimize_{∘√D}  ‖∘√D − H‖²_F
   subject to      rank V D V ≤ ρ
                   √D ∈ ∘√EDM^N

   minimize_{∘√D}  ‖−V(∘√D − H)V‖²_F
   subject to      rank V D V ≤ ρ
                   √D ∈ ∘√EDM^N                    (1310)

We apply a convex iteration method for constraining rank. Known heuristics for rank minimization are also explained. We offer a new geometrical proof of a famous discovery by Eckart & Young in 1936 [129], with particular regard to Euclidean projection of a point on that generally nonconvex subset of the positive semidefinite cone boundary comprising all semidefinite matrices having rank not exceeding a prescribed bound ρ . We explain how this problem is transformed to a convex optimization for any rank ρ .

appendices

Toolboxes are provided so as to be more self-contained:

- linear algebra (appendix A is primarily concerned with proper statements of semidefiniteness for square matrices),

- simple matrices (dyad, doublet, elementary, Householder, Schoenberg, orthogonal, etcetera, in appendix B),

- a collection of known analytical solutions to some important optimization problems (appendix C),

- matrix calculus, which remains somewhat unsystematized when compared to ordinary calculus (appendix D concerns matrix-valued functions, matrix differentiation and directional derivatives, Taylor series, and tables of first- and second-order gradients and matrix derivatives),
- an elaborate exposition offering insight into orthogonal and nonorthogonal projection on convex sets (the connection between projection and positive semidefiniteness, for example, or between projection and a linear objective function, in appendix E),

- Matlab code on Wıκımization to discriminate EDMs, to determine conic independence, to reduce or constrain rank of an optimal solution to a semidefinite program, to perform compressed sensing (compressive sampling) for digital image and audio signal processing, and two distinct methods of reconstructing a map of the United States: one given only distance data, the other given only comparative distance data.
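The Eckart & Young discovery cited above is constructive: in Frobenius norm, a nearest matrix of rank not exceeding ρ is obtained by truncating the singular value decomposition, keeping only the ρ largest singular values. A minimal numpy sketch (the function name nearest_rank_rho is ours, for illustration only; it is not from the book or Wıκımization):

```python
import numpy as np

def nearest_rank_rho(A, rho):
    """Eckart-Young: a Frobenius-norm projection of A on the set of
    matrices of rank <= rho, by truncating the SVD to the rho largest
    singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[rho:] = 0.0              # zero all but the rho largest singular values
    return (U * s) @ Vt        # scale columns of U by s, then recompose

# usage: project a random 5x5 matrix on the rank-2 matrices
A = np.random.default_rng(0).standard_normal((5, 5))
B = nearest_rank_rho(A, 2)
assert np.linalg.matrix_rank(B) <= 2
```

When A is symmetric positive semidefinite the SVD coincides with the eigendecomposition, so the same recipe projects a point on the rank-ρ subset of the positive semidefinite cone boundary discussed above.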
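The EDM discrimination mentioned in the last item rests on the Schoenberg criterion: a symmetric hollow matrix D belongs to EDM^N if and only if −V D V is positive semidefinite, where V = I − (1/N)11ᵀ is the geometric centering matrix. A hedged Python transliteration of that test (not the Wıκımization Matlab code itself; the name is_edm and the tolerance are ours):

```python
import numpy as np

def is_edm(D, tol=1e-9):
    """Schoenberg criterion: D is a Euclidean distance matrix iff D is
    symmetric, hollow (zero diagonal), and -V D V is positive semidefinite,
    with V = I - (1/N) 1 1^T the geometric centering matrix."""
    D = np.asarray(D, dtype=float)
    N = D.shape[0]
    if not np.allclose(D, D.T, atol=tol):          # symmetry
        return False
    if not np.allclose(np.diag(D), 0.0, atol=tol):  # hollowness
        return False
    V = np.eye(N) - np.ones((N, N)) / N
    return np.linalg.eigvalsh(-V @ D @ V).min() >= -tol

# usage: squared distances among collinear points 0, 1, 3 form an EDM
X = np.array([[0.0], [1.0], [3.0]])
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
assert is_edm(D)
```

The same centering matrix V appears in the rank constraints of (1310); rank V D V is the affine dimension of any point list generating D.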