v2010.10.26 - Convex Optimization

336 CHAPTER 4. SEMIDEFINITE PROGRAMMING

    conv{u ∈ Rⁿ | card u = n−k , uᵢ ∈ {0, 1}} = {a ∈ Rⁿ | 1 ≽ a ≽ 0 , ⟨1, a⟩ = n−k}    (759)

This set, argument to conv{ }, comprises the extreme points of set (759) which is a nonnegative hypercube slice. An optimal solution y to (525), that is an extreme point of its feasible set, is known in closed form: it has 1 in each entry corresponding to the n−k smallest entries of x⋆ and has 0 elsewhere. That particular polar direction −y can be interpreted^4.33 by Proposition 7.1.3.0.3 as pointing toward the nonnegative orthant in the Cartesian subspace, whose basis is a subset of the Cartesian axes, containing all cardinality-k (or less) vectors having the same ordering as x⋆. Consequently, for that closed-form solution, (confer (738))

    ∑_{i=k+1}^n π(|x⋆|)ᵢ = ⟨|x⋆|, y⟩ = |x⋆|ᵀy    (760)

When y = 1, as in 1-norm minimization for example, then polar direction −y points directly at the origin (the cardinality-0 nonnegative vector) as in Figure 99. We sometimes solve (525), instead of employing a known closed form, because a direction vector is not unique. Setting direction vector y instead in accordance with an iterative inverse weighting scheme, called reweighting [161], was described for the 1-norm by Huo [207, §4.11.3] in 1999.

4.5.1.3 convergence can mean stalling

Convex iteration (156) (525) always converges to a locally optimal solution, a fixed point of possibly infeasible cardinality, by virtue of a monotonically nonincreasing real objective sequence. [258, §1.2] [42, §1.1] There can be no proof of global convergence, defined by (758). Constraining cardinality, solution to problem (530), can often be achieved but simple examples can be contrived that stall at a fixed point of infeasible cardinality; at a positive objective value ⟨x⋆, y⟩ = τ > 0.
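The closed-form direction vector from §4.5.1.2 above (1 in each entry corresponding to the n−k smallest entries of x⋆, 0 elsewhere) amounts to a sort; a minimal sketch in numpy, where the function name and example values are illustrative, not from the text:

```python
import numpy as np

def direction_vector(x_star, k):
    """Closed-form optimal direction vector y for (525):
    1 at the n-k smallest entries of |x_star|, 0 elsewhere."""
    x = np.abs(np.asarray(x_star, dtype=float))
    n = x.size
    y = np.zeros(n)
    smallest = np.argsort(x)[: n - k]   # indices of the n-k smallest entries
    y[smallest] = 1.0
    return y

x_star = np.array([0.9, 0.01, 0.5, 0.02, 0.03])
y = direction_vector(x_star, k=2)   # zeros mark the 2 largest entries
print(y)                            # -> [0. 1. 0. 1. 1.]
```

Consistent with (760), the inner product ⟨|x⋆|, y⟩ here returns 0.01 + 0.02 + 0.03, the sum of the n−k smallest magnitudes.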
Direction vector y is then manipulated, as countermeasure, to steer out of local minima; e.g., complete randomization as in Example 4.5.1.5.1, or reinitialization to a random cardinality-(n−k) vector in the same nonnegative orthant face demanded by the current

^4.33 Convex iteration (156) (525) is not a projection method because there is no thresholding or discard of variable-vector x entries. An optimal direction vector y must always reside on the feasible set boundary in (525), page 334; id est, it is ill-advised to attempt simultaneous optimization of variables x and y.

4.5. CONSTRAINING CARDINALITY 337

iterate: y has nonnegative uniformly distributed random entries in (0, 1] corresponding to the n−k smallest entries of x⋆ and has 0 elsewhere. When this particular heuristic is successful, cardinality versus iteration is characterized by noisy monotonicity. Zero entries behave like memory while randomness greatly diminishes likelihood of a stall.

4.5.1.4 algebraic derivation of direction vector for convex iteration

In §3.2.2.1.2, the compressed sensing problem was precisely represented as a nonconvex difference of convex functions bounded below by 0

    find x ∈ Rⁿ
    subject to  Ax = b
                x ≽ 0
                ‖x‖₀ ≤ k
    ≡
    minimize_{x∈Rⁿ}  ‖x‖₁ − ‖x‖ₙₖ
    subject to  Ax = b
                x ≽ 0                  (530)

where convex k-largest norm ‖x‖ₙₖ is monotonic on Rⁿ₊. There we showed how (530) is equivalently stated in terms of gradients

    minimize_{x∈Rⁿ}  ⟨x , ∇‖x‖₁ − ∇‖x‖ₙₖ⟩
    subject to  Ax = b
                x ≽ 0                  (761)

because

    ‖x‖₁ = xᵀ∇‖x‖₁ ,   ‖x‖ₙₖ = xᵀ∇‖x‖ₙₖ ,   x ≽ 0    (762)

The objective function from (761) is a directional derivative (at x in direction x, §D.1.6, confer §D.1.4.1.1) of the objective function from (530) while the direction vector of convex iteration

    y = ∇‖x‖₁ − ∇‖x‖ₙₖ    (763)

is an objective gradient where ∇‖x‖₁ = 1 under nonnegativity and

    ∇‖x‖ₙₖ = arg maximize_{z∈Rⁿ}  zᵀx
             subject to  0 ≼ z ≼ 1
                         zᵀ1 = k    ,   x ≽ 0    (533)
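For x ≽ 0 the linear program (533) admits an obvious maximizer: an indicator of the k largest entries of x, so direction vector (763) is computable by a sort. A minimal sketch under that assumption (function name illustrative; with tied entries the maximizer, hence the gradient, is not unique):

```python
import numpy as np

def grad_k_largest_norm(x, k):
    """A maximizer z of z.T x subject to 0 <= z <= 1, z.T 1 = k,
    valid for x >= 0: put 1 at the k largest entries of x."""
    x = np.asarray(x, dtype=float)
    z = np.zeros(x.size)
    z[np.argsort(x)[-k:]] = 1.0     # indices of the k largest entries
    return z

x = np.array([0.9, 0.01, 0.5, 0.02, 0.03])
k = 2
grad = grad_k_largest_norm(x, k)
# z.T x attains the k-largest norm: sum of the k largest entries
assert np.isclose(np.dot(grad, x), np.sort(x)[-k:].sum())
y = np.ones(x.size) - grad          # direction vector (763); grad of 1-norm is 1
print(y)                            # -> [0. 1. 0. 1. 1.]
```

Note y here reproduces the closed-form direction vector of §4.5.1.2: complementing the indicator of the k largest entries marks the n−k smallest.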
