v2010.10.26 - Convex Optimization


5.13.2.2 Isotonic solution with sort constraint

Because problems involving rank are generally difficult, we will partition (1147) into two problems we know how to solve and then alternate their solution until convergence:

\[
\begin{array}{llr}
\operatorname*{minimize}_{D} & \|-V_N^{\rm T}(D-O)\,V_N\|_F & \\
\text{subject to} & \operatorname{rank} V_N^{\rm T} D\,V_N \leq 3 & \quad\text{(a)}\\
& D \in \mathrm{EDM}^N & \\[2ex]
\operatorname*{minimize}_{\sigma} & \|\sigma - \Pi d\| & \\
\text{subject to} & \sigma \in \mathcal{K}_{\mathrm{M}+} & \quad\text{(b)}
\end{array}
\tag{1149}
\]

where sort-index matrix O (a given constant in (a)) becomes an implicit vector variable o_i solving the i-th instance of (1149b)

\[
\tfrac{1}{\sqrt{2}}\,\operatorname{dvec} O_i \;=\; o_i \;\triangleq\; \Pi^{\rm T}\sigma^{\star} \;\in\; \mathbb{R}^{N(N-1)/2}\,,\qquad i\in\{1,2,3\ldots\}
\tag{1150}
\]

As mentioned in discussion of relaxed problem (1148), a closed-form solution to problem (1149a) exists. Only the first iteration of (1149a) sees the original sort-index matrix O whose entries are nonnegative whole numbers; id est, O_0 = O ∈ S^N_h ∩ R^{N×N}_+ (1144). Subsequent iterations i take the previous solution of (1149b) as input

\[
O_i \;=\; \operatorname{dvec}^{-1}(\sqrt{2}\,o_i) \;\in\; \mathbb{S}^N
\tag{1151}
\]

real successors to the sort-index matrix O, estimating distance-square rather than order.

New convex problem (1149b) finds the unique minimum-distance projection of Πd on the monotone nonnegative cone K_{M+}. By defining

\[
Y^{\dagger\rm T} \;=\; [\,e_1\!-\!e_2 \;\; e_2\!-\!e_3 \;\; e_3\!-\!e_4 \;\cdots\; e_m\,] \;\in\; \mathbb{R}^{m\times m}
\tag{430}
\]

where m ≜ N(N−1)/2, we may rewrite (1149b) as an equivalent quadratic program; a convex problem in terms of the halfspace-description of K_{M+}:

\[
\begin{array}{ll}
\operatorname*{minimize}_{\sigma} & (\sigma - \Pi d)^{\rm T}(\sigma - \Pi d)\\
\text{subject to} & Y^{\dagger}\sigma \succeq 0
\end{array}
\tag{1152}
\]
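
Projection (1149b), in its quadratic-program form (1152), is easy to state directly in code. Below is a minimal sketch, not from the text, using CVXPY; the function name and the argument pi_d (standing for the permuted vector Πd) are assumptions for illustration.

```python
# Minimal sketch of quadratic program (1152): projection on the
# monotone nonnegative cone K_M+ via its halfspace description.
import numpy as np
import cvxpy as cp

def project_monotone_nonneg(pi_d):
    """Unique minimum-distance projection of pi_d on K_M+ (problem 1152)."""
    m = pi_d.size
    # Rows of Y† are (e1 - e2)^T, (e2 - e3)^T, ..., e_m^T (transpose of (430)),
    # so Y† σ ⪰ 0 enforces σ1 ≥ σ2 ≥ ... ≥ σm ≥ 0.
    Y_dagger = np.eye(m) - np.diag(np.ones(m - 1), 1)
    sigma = cp.Variable(m)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(sigma - pi_d)),
                         [Y_dagger @ sigma >= 0])
    problem.solve()
    return sigma.value
```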

This quadratic program can be converted to a semidefinite program via Schur-form (§3.5.2); we get the equivalent problem

\[
\begin{array}{ll}
\operatorname*{minimize}_{t\in\mathbb{R},\;\sigma} & t\\
\text{subject to} & \begin{bmatrix} tI & \sigma-\Pi d\\ (\sigma-\Pi d)^{\rm T} & 1 \end{bmatrix} \succeq 0\\
& Y^{\dagger}\sigma \succeq 0
\end{array}
\tag{1153}
\]

5.13.2.3 Convergence

In §E.10 we discuss convergence of alternating projection on intersecting convex sets in a Euclidean vector space; convergence to a point in their intersection. Here the situation is different for two reasons:

Firstly, sets of positive semidefinite matrices having an upper bound on rank are generally not convex. Yet in §7.1.4.0.1 we prove (1149a) is equivalent to a projection of nonincreasingly ordered eigenvalues on a subset of the nonnegative orthant:

\[
\begin{array}{ll}
\operatorname*{minimize}_{D} & \|-V_N^{\rm T}(D-O)\,V_N\|_F\\
\text{subject to} & \operatorname{rank} V_N^{\rm T} D\,V_N \leq 3\\
& D \in \mathrm{EDM}^N
\end{array}
\;\;\equiv\;\;
\begin{array}{ll}
\operatorname*{minimize}_{\Upsilon} & \|\Upsilon - \Lambda\|_F\\
\text{subject to} & \delta(\Upsilon) \in \begin{bmatrix}\mathbb{R}^3_+\\ 0\end{bmatrix}
\end{array}
\tag{1154}
\]

where −V_N^T D V_N ≜ UΥU^T ∈ S^{N−1} and −V_N^T O V_N ≜ QΛQ^T ∈ S^{N−1} are ordered diagonalizations (§A.5). It so happens: optimal orthogonal U⋆ always equals the given Q. Linear operator T(A) = U^{⋆T} A U^⋆, acting on square matrix A, is an isometry because the Frobenius norm is orthogonally invariant (48). This isometric isomorphism T thus maps a nonconvex problem to a convex one while preserving distance.

Secondly, the second half (1149b) of the alternation takes place in a different vector space, S^N_h (versus S^{N−1}). From §5.6 we know these two vector spaces are related by an isomorphism, S^{N−1} = V_N(S^N_h) (1018), but not by an isometry.

We therefore have no guarantee, from the theory of alternating projection, that alternation (1149) converges to a point, in the set of all EDMs corresponding to affine dimension not in excess of 3, belonging to dvec EDM^N ∩ Π^T K_{M+}.
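
Equivalence (1154), together with U⋆ = Q, yields the closed-form solution to (1149a) referenced above. Below is a minimal sketch, not from the text: the function name is an assumption, B stands for the symmetric matrix −V_N^T O_i V_N, and recovering D itself from the result (via the inverse of the V_N mapping, §5.6) is omitted.

```python
# Minimal sketch of the eigenvalue projection in (1154): nearest (Frobenius)
# positive semidefinite matrix of rank at most 3 to a given symmetric B.
import numpy as np

def project_rank3_psd(B):
    lam, Q = np.linalg.eigh(B)          # eigenvalues in ascending order
    lam, Q = lam[::-1], Q[:, ::-1]      # reorder: ordered (nonincreasing) diagonalization
    ups = np.zeros_like(lam)
    ups[:3] = np.maximum(lam[:3], 0)    # δ(Υ) ∈ [R^3_+ ; 0]
    return (Q * ups) @ Q.T              # U⋆ = Q, so −V_N^T D V_N = Q Υ Q^T
```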

