v2009.01.01 - Convex Optimization

convexoptimization.com

324 CHAPTER 4. SEMIDEFINITE PROGRAMMING

… desired cardinality card δ(X), and Y to find an approximating rank-one matrix X:

    maximize_{X ∈ S^N}   ⟨X, A − w₁ Y⟩ − w₂ ⟨δ(X), δ(W)⟩
    subject to           ⟨X, I⟩ = 1
                         X ≽ 0                                      (744)

where w₁ and w₂ are positive scalars respectively weighting tr(XY) and δ(X)ᵀδ(W) just enough to ensure that they vanish to within some numerical precision, where direction matrix Y is an optimal solution to the semidefinite program

    minimize_{Y ∈ S^N}   ⟨X⋆, Y⟩
    subject to           0 ≼ Y ≼ I
                         tr Y = N − 1                               (745)

and where diagonal direction matrix W ∈ S^N optimally solves the linear program

    minimize_{W = δ²(W)} ⟨δ(X⋆), δ(W)⟩
    subject to           0 ≼ δ(W) ≼ 1
                         tr W = N − c                               (746)

Both direction-matrix programs are derived from (1581a), whose analytical solution is known but not necessarily unique. We emphasize (confer p.278): because this iteration (744) (745) (746) (initial Y, W = 0) is not a projection method, success relies on existence of matrices in the feasible set of (744) having the desired rank and diagonal cardinality. In particular, the feasible set of convex problem (744) is a Fantope (83) whose extreme points constitute the set of all normalized rank-one matrices; among those are found rank-one matrices of any desired diagonal cardinality. Convex problem (744) is not a relaxation of cardinality problem (740); rather, problem (744) is a convex equivalent to (740) at convergence of iteration (744) (745) (746). Because the feasible set of convex problem (744) contains all normalized (B.1) symmetric rank-one matrices of every nonzero diagonal cardinality, a cardinality constraint c set too low or too high will not prevent solution. An optimal rank-one solution X⋆, whose diagonal cardinality equals the cardinality of a principal eigenvector of matrix A, will produce the lowest residual Frobenius norm (to within machine noise processes) in the original problem statement (739).
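The text notes that both direction-matrix programs have known analytical solutions via (1581a): for (745), a minimizer is the projector onto eigenvectors of X⋆ belonging to its N−1 smallest eigenvalues; for (746), unit weight goes to the N−c smallest diagonal entries of X⋆. A minimal NumPy sketch of those closed forms (the code and function names are ours, not the book's):

```python
import numpy as np

def direction_matrix_Y(X_star):
    """Closed-form minimizer of (745): min <X*, Y> s.t. 0 <= Y <= I, tr Y = N-1.
    Projector onto eigenvectors of X* for its N-1 smallest eigenvalues
    (not necessarily unique when eigenvalues tie)."""
    N = X_star.shape[0]
    _, U = np.linalg.eigh(X_star)   # eigenvalues in ascending order
    U1 = U[:, :N - 1]               # eigenvectors of the N-1 smallest
    return U1 @ U1.T

def direction_matrix_W(X_star, c):
    """Closed-form minimizer of (746): min <delta(X*), delta(W)>
    s.t. 0 <= delta(W) <= 1, tr W = N-c.
    Unit weight on the N-c smallest diagonal entries of X*."""
    N = X_star.shape[0]
    d = np.diag(X_star)
    w = np.zeros(N)
    w[np.argsort(d)[:N - c]] = 1.0
    return np.diag(w)

# For a normalized rank-one X* = x x^T with diagonal cardinality c,
# both objectives vanish, which is the convergence criterion of the iteration.
x = np.array([0.0, 3.0, 0.0, 4.0, 0.0]) / 5.0   # unit norm, cardinality c = 2
X_star = np.outer(x, x)
Y = direction_matrix_Y(X_star)
W = direction_matrix_W(X_star, c=2)
print(np.trace(X_star @ Y))          # <X*, Y>, vanishes to machine precision
print(np.diag(X_star) @ np.diag(W))  # <delta(X*), delta(W)>, likewise vanishes
```

Verifying the two inner products vanish at a feasible rank-one, cardinality-c point is how one checks convergence of iteration (744) (745) (746) in practice.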

4.6. CARDINALITY AND RANK CONSTRAINT EXAMPLES 325

[Figure 86: Shepp-Logan phantom, phantom(256), from the Matlab image processing toolbox.]

4.6.0.0.10 Example. Compressive sampling of a phantom.
In summer of 2004, Candès, Romberg, & Tao [61] and Donoho [100] released papers on perfect signal reconstruction from samples that stand in violation of Shannon's classical sampling theorem. The only condition on these defiant signals is that they be sparse, or that some affine transformation of them be sparse; essentially, they proposed sparse sampling theorems asserting an average sample rate independent of signal bandwidth and less than Shannon's rate.

Minimum sampling rate:
    of an Ω-bandlimited signal:    2Ω                (Shannon)
    of a k-sparse length-n signal: k log₂(1 + n/k)   (Candès/Donoho)

(confer Figure 80). Certainly, much was already known about nonuniform or random sampling [32] [180] and about subsampling or multirate systems [80] [310]. Vetterli, Marziliano, & Blu [319] had congealed a theory of noiseless signal reconstruction, in May 2001, from samples that violate the Shannon rate. They anticipated the Candès/Donoho sparsifying transform by recognizing: it is the innovation (onset) of functions constituting a (not necessarily bandlimited) signal that determines the minimum sampling rate for perfect reconstruction. Average onset (sparsity) Vetterli et alii call rate of innovation. The vector inner-products that Candès/Donoho call measurements, Vetterli calls projections. From those projections Vetterli demonstrates reconstruction (by digital signal

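The two minimum sampling rates quoted above are easy to compare numerically; a minimal sketch (the code and function names are ours, added for illustration):

```python
import math

def shannon_rate(omega):
    """Shannon: an Omega-bandlimited signal requires 2*Omega samples."""
    return 2 * omega

def sparse_rate(k, n):
    """Candes/Donoho: a k-sparse length-n signal requires about
    k*log2(1 + n/k) measurements, independent of bandwidth."""
    return k * math.log2(1 + n / k)

# A 10-sparse signal of length 1000 needs roughly 67 measurements,
# far fewer than its 1000 samples.
print(sparse_rate(10, 1000))
```

The point of the comparison: the sparse rate grows with cardinality k, not with bandwidth, so for highly sparse signals it falls far below the Shannon rate.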
