v2010.10.26 - Convex Optimization


338 CHAPTER 4. SEMIDEFINITE PROGRAMMING

Substituting 1 − z ← z , the direction vector becomes

    y = 1 − arg maximize  zᵀx      ←     arg minimize  zᵀx
                 z∈Rⁿ                        z∈Rⁿ
            subject to  0 ≼ z ≼ 1        subject to  0 ≼ z ≼ 1        (525)
                        zᵀ1 = k                      zᵀ1 = n − k

4.5.1.5 optimality conditions for compressed sensing

Now we see how global optimality conditions can be stated without reference
to a dual problem: From conditions (469) for optimality of (530), it is
necessary [61, §5.5.3] that

    x⋆ ≽ 0                                               (1)
    Ax⋆ = b                                              (2)
    ∇‖x⋆‖₁ − ∇‖x⋆‖ⁿₖ + Aᵀν⋆ ≽ 0                          (3)
    〈∇‖x⋆‖₁ − ∇‖x⋆‖ⁿₖ + Aᵀν⋆ , x⋆〉 = 0                  (4ℓ)      (764)

These conditions must hold at any optimal solution (locally or globally).
By (762), the fourth condition is identical to

    ‖x⋆‖₁ − ‖x⋆‖ⁿₖ + ν⋆ᵀAx⋆ = 0                          (4ℓ)      (765)

Because a 1-norm

    ‖x‖₁ = ‖x‖ⁿₖ + ‖π(|x|)_{k+1:n}‖₁                               (766)

is separable into k largest and n − k smallest absolute entries,

    ‖π(|x|)_{k+1:n}‖₁ = 0  ⇔  ‖x‖₀ ≤ k                   (4g)      (767)

is a necessary condition for global optimality. By assumption, matrix A
is fat and b ≠ 0 ⇒ Ax⋆ ≠ 0. This means ν⋆ ∈ N(Aᵀ) ⊂ Rᵐ, and ν⋆ = 0
when A is full-rank. By definition, ∇‖x‖₁ ≽ ∇‖x‖ⁿₖ always holds. Assuming
existence of a cardinality-k solution, then only three of the four conditions
are necessary and sufficient for global optimality of (530):

    x⋆ ≽ 0                                               (1)
    Ax⋆ = b                                              (2)
    ‖x⋆‖₁ − ‖x⋆‖ⁿₖ = 0                                   (4g)      (768)

meaning, global optimality of a feasible solution to (530) is identified by a
zero objective.
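The zero-objective test (768) is straightforward to check numerically: since ‖x‖ⁿₖ is the sum of the k largest-magnitude entries, the objective ‖x‖₁ − ‖x‖ⁿₖ vanishes exactly when x has at most k nonzero entries (767). A minimal sketch of such a check (the function names `k_largest_norm` and `is_globally_optimal` are my own, not from the text):

```python
import numpy as np

def k_largest_norm(x, k):
    """The k-largest norm ‖x‖ⁿₖ: sum of the k largest-magnitude entries of x,
    as in the separability identity (766)."""
    return np.sort(np.abs(x))[::-1][:k].sum()

def is_globally_optimal(x, A, b, k, tol=1e-9):
    """Check the three conditions (768) for a candidate solution of (530):
    x ≽ 0, Ax = b, and a zero objective ‖x‖₁ − ‖x‖ⁿₖ = 0.
    The zero objective holds iff ‖x‖₀ ≤ k, per (767)."""
    feasible = bool(np.all(x >= -tol)) and np.allclose(A @ x, b, atol=tol)
    zero_objective = abs(np.linalg.norm(x, 1) - k_largest_norm(x, k)) < tol
    return feasible and zero_objective
```

For example, a feasible 2-sparse vector passes the test with k = 2, while a feasible dense vector fails it, exhibiting a strictly positive objective.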

4.5. CONSTRAINING CARDINALITY 339

[Figure 100 appears here: m/k plotted against k/n, comparing the Donoho
bound, the approximation m > k log₂(1 + n/k) (dashed), problem (518)
minimize ‖x‖₁ subject to Ax = b, and problem (523), which adds the
constraint x ≽ 0 (dotted).]

Figure 100: For Gaussian random matrix A ∈ Rᵐˣⁿ, graph illustrates
Donoho/Tanner least lower bound on number of measurements m below which
recovery of k-sparse n-length signal x by linear programming fails with
overwhelming probability. Hard problems are below curve, but not the
reverse; id est, failure above depends on proximity. Inequality demarcates
approximation (dashed curve) empirically observed in [23]. Problems having
nonnegativity constraint (dotted) are easier to solve. [122] [123]
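The dashed-curve rule m > k log₂(1 + n/k) from Figure 100 gives a quick back-of-the-envelope measurement count. A small sketch evaluating it (the function name `min_measurements` is mine; this is the empirical approximation from the figure, not a guarantee):

```python
import numpy as np

def min_measurements(k, n):
    """Smallest integer m satisfying m > k*log2(1 + n/k), the approximation
    (dashed curve in Figure 100) to the Donoho/Tanner bound for recovering a
    k-sparse n-length signal from Gaussian random measurements."""
    return int(np.ceil(k * np.log2(1 + n / k)))
```

For instance, recovering a 10-sparse signal of length 1000 requires roughly m ≈ 67 measurements under this rule, far fewer than the n = 1000 samples of direct measurement.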
