v2010.10.26 - Convex Optimization

This is best understood referring to Figure 152: Suppose nonnegative input H is demanded, and then the problem realization correctly projects its input first on S_h^N and then directly on C = EDM^N. That demand for nonnegativity effectively requires imposition of K on input H prior to optimization so as to obtain correct order of projection (on S_h^N first). Yet such an imposition prior to projection on EDM^N generally introduces an elbow into the path of projection (illustrated in Figure 153) caused by the technique itself; that being, a particular proximity problem realization requiring nonnegative input. Any procedure for imposition of nonnegativity on input H can only be incorrect in this circumstance. There is no resolution unless input H is guaranteed nonnegative with no tinkering. Otherwise, we have no choice but to employ a different problem realization; one not demanding nonnegative input.

7.0.2 Lower bound

Most of the problems we encounter in this chapter have the general form:

    minimize_B    ‖B − A‖_F
    subject to    B ∈ C                                                (1306)

where A ∈ R^{m×n} is given data. This particular objective denotes Euclidean projection (§E) of vectorized matrix A on the set C, which may or may not be convex. When C is convex, the projection is unique minimum-distance because Frobenius' norm, when squared, is a strictly convex function of variable B and because the optimal solution is the same regardless of the square (513). When C is a subspace, the direction of projection is orthogonal to C.

Denoting by A ≜ U_A Σ_A Q_A^T and B ≜ U_B Σ_B Q_B^T their full singular value decompositions (whose singular values are always nonincreasingly ordered (§A.6)), there exists a tight lower bound on the objective over the manifold of orthogonal matrices:

    ‖Σ_B − Σ_A‖_F  ≤  inf_{U_A, U_B, Q_A, Q_B} ‖B − A‖_F               (1307)

This least lower bound holds more generally for any orthogonally invariant norm on R^{m×n} (§2.2.1), including the Frobenius and spectral norms. [328, §II.3] [202, §7.4.51]
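The bound (1307) is easy to verify numerically. The following sketch is an illustration only, not code from the text; it assumes NumPy and arbitrarily chosen random data:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 5, 7                        # arbitrary dimensions for illustration
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((m, n))

    # singular values, nonincreasingly ordered as in (A.6)
    sigma_A = np.linalg.svd(A, compute_uv=False)
    sigma_B = np.linalg.svd(B, compute_uv=False)

    lower_bound = np.linalg.norm(sigma_B - sigma_A)    # ‖Σ_B − Σ_A‖_F
    objective   = np.linalg.norm(B - A, 'fro')         # ‖B − A‖_F
    assert lower_bound <= objective + 1e-12            # bound (1307) holds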

7.0.3 Problem approach

Problems traditionally posed in terms of point position {x_i ∈ R^n, i = 1...N}, such as

    minimize_{x_i}    Σ_{i,j ∈ I} (‖x_i − x_j‖ − h_ij)^2               (1308)

or

    minimize_{x_i}    Σ_{i,j ∈ I} (‖x_i − x_j‖^2 − h_ij)^2             (1309)

(where I is an abstract set of indices and h_ij is given data), are everywhere converted herein to the distance-square variable D or to Gram matrix G; the Gram matrix acting as bridge between position and distance. (That conversion is performed regardless of whether known data is complete.) Then the techniques of chapter 5 or chapter 6 are applied to find relative or absolute position. This approach is taken because we prefer introduction of rank constraints into convex problems rather than searching a googol of local minima in nonconvex problems like (1308) [105] (§3.6.4.0.3, §7.2.2.7.1) or (1309). (A numerical sketch of this position-to-distance conversion appears after (1310) below.)

7.0.4 Three prevalent proximity problems

There are three statements of the closest-EDM problem prevalent in the literature, the multiplicity due primarily to choice of projection on the EDM versus positive semidefinite (PSD) cone and vacillation between the distance-square variable d_ij versus absolute distance √d_ij. In their most fundamental form, the three prevalent proximity problems are (1310.1), (1310.2), and (1310.3): [342] for D ≜ [d_ij] and ◦√D ≜ [√d_ij]

    (1)  minimize_D       ‖−V(D − H)V‖_F^2
         subject to       rank V D V ≤ ρ
                          D ∈ EDM^N

    (2)  minimize_{◦√D}   ‖◦√D − H‖_F^2
         subject to       rank V D V ≤ ρ
                          ◦√D ∈ √EDM^N
                                                                       (1310)
    (3)  minimize_D       ‖D − H‖_F^2
         subject to       rank V D V ≤ ρ
                          D ∈ EDM^N

    (4)  minimize_{◦√D}   ‖−V(◦√D − H)V‖_F^2
         subject to       rank V D V ≤ ρ
                          ◦√D ∈ √EDM^N
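Returning to the conversion described in 7.0.3: the sketch below (an illustration assuming NumPy; dimensions and data are arbitrary, and the code is not from the text) builds Gram matrix G = X^T X from a point list X = [x_1 ... x_N] and then distance-square matrix D = δ(G)1^T + 1δ(G)^T − 2G, showing how G bridges position and distance:

    import numpy as np

    rng = np.random.default_rng(1)
    n, N = 3, 6                          # arbitrary ambient dimension and point count
    X = rng.standard_normal((n, N))      # columns are the points x_i

    G = X.T @ X                          # Gram matrix, G_ij = x_i^T x_j
    g = np.diag(G)                       # squared norms ‖x_i‖^2
    D = g[:, None] + g[None, :] - 2*G    # D_ij = ‖x_i − x_j‖^2

    # sanity check against the direct definition of distance-square
    D_direct = np.square(np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0))
    assert np.allclose(D, D_direct)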

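Setting the nonconvex rank constraint aside, the D-variable statement in (1310), minimize ‖D − H‖_F^2 subject to D ∈ EDM^N, is a semidefinite program via the characterization D ∈ S_h^N with −VDV ⪰ 0 (V the geometric centering matrix) from chapter 5. The following is a minimal sketch, assuming the third-party modeler CVXPY with an SDP-capable solver installed and arbitrary test data; nothing here is prescribed by the text:

    import numpy as np
    import cvxpy as cp

    N = 6
    rng = np.random.default_rng(2)
    M = rng.standard_normal((N, N))
    H = np.abs(M + M.T)                   # arbitrary symmetric nonnegative test data
    np.fill_diagonal(H, 0)                # make H hollow as well

    V = np.eye(N) - np.ones((N, N)) / N   # geometric centering matrix

    D = cp.Variable((N, N), symmetric=True)
    P = cp.Variable((N, N), PSD=True)     # P stands in for -V D V
    constraints = [cp.diag(D) == 0,       # D in S_h^N (zero diagonal)
                   P == -V @ D @ V]       # -V D V positive semidefinite
    prob = cp.Problem(cp.Minimize(cp.sum_squares(D - H)), constraints)
    prob.solve()                          # any installed SDP solver, e.g. SCS
    print(prob.value)                     # squared distance of H to EDM^N (rank constraint omitted)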
