
3.1. CONVEX FUNCTION 203

[Figure 62: two panels plotting k/m (vertical axis, 0 to 1) against m/n (horizontal axis, 0 to 1); left panel labeled "signed", right panel labeled "positive", each showing a thick transition curve.]

Figure 62: [103,5] [34] Exact recovery transition: Respectively signed or positive solutions x to Ax = b with sparsity k below thick curve are recoverable. For Gaussian random matrix A ∈ R^{m×n}, thick curve demarcates phase transition in ability to find sparsest solution x by linear programming.
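One point of the phase-transition experiment behind Figure 62 can be sketched as follows. This is an illustrative reconstruction, not the code that produced the figure: the dimensions m, n, k and the use of scipy's linprog are assumptions for the sake of a runnable example.

```python
# Sketch of one trial of the "positive" recovery experiment of Figure 62
# (illustration only; m, n, k and the solver are assumed, not from the text).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 80, 10                  # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n))       # Gaussian random matrix A ∈ R^{m×n}

x0 = np.zeros(n)                      # k-sparse nonnegative ground truth
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.random(k) + 0.1
b = A @ x0

# Positive case: minimize 1ᵀx subject to Ax = b, x ≽ 0
# (linprog's default bounds already enforce x ≽ 0).
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=(0, None))
print("solved:", res.success)
print("exact recovery:", np.allclose(res.x, x0, atol=1e-6))
```

Sweeping m/n and k/m over a grid and recording the empirical recovery frequency of many such trials traces out the thick curve of the figure; here k/m = 0.25 at m/n = 0.5, a point below it.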

Then (462) is equivalent to

    minimize      c
    c∈R, x∈R^n, a∈R^2n
    subject to    x = [I −I]a
                  aᵀ1 = c
                  a ≽ 0
                  Ax = b
        ≡
    minimize      ‖a‖₁
    a∈R^2n
    subject to    [A −A]a = b
                  a ≽ 0                                        (464)

where x⋆ = [I −I]a⋆. (confer (459)) Significance of this result:

Any vector 1-norm minimization problem may have its variable replaced with a nonnegative variable of twice the length.

All other things being equal, nonnegative variables are easier to solve for sparse solutions. (Figure 62, Figure 80) The compressed sensing problem becomes easier to interpret; e.g.,

    minimize      ‖x‖₁
    x
    subject to    Ax = b
                  x ≽ 0
        ≡
    minimize      1ᵀx
    x
    subject to    Ax = b
                  x ≽ 0                                        (465)

Because minimization of the linear objective 1ᵀx corresponds to movement of a hyperplane (Figure 22, Figure 26) over a bounded polyhedron, this linear program always has a vertex solution.
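The substitution of (464) can be sketched directly: a signed 1-norm problem over x ∈ Rⁿ becomes a linear program over a nonnegative a ∈ R²ⁿ. This is a minimal illustration assuming scipy's linprog; the problem data are invented for the example.

```python
# Sketch of (464): solve  minimize ‖x‖₁ subject to Ax = b  (signed x)
# as a linear program in a ∈ R^2n with a ≽ 0 (illustrative data, assumed solver).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 30, 60
A = rng.standard_normal((m, n))
x0 = np.zeros(n)                      # sparse signed ground truth
idx = rng.choice(n, size=6, replace=False)
x0[idx] = rng.standard_normal(6)
b = A @ x0

# minimize 1ᵀa subject to [A −A]a = b, a ≽ 0;
# since a ≽ 0, the objective 1ᵀa equals ‖a‖₁ as in (464).
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None))
x_star = res.x[:n] - res.x[n:]        # x⋆ = [I −I]a⋆
print("‖x⋆‖₁:", np.abs(x_star).sum())
print("Ax⋆ = b:", np.allclose(A @ x_star, b, atol=1e-6))
```

At the optimum the positive and negative halves of a⋆ have disjoint supports, so 1ᵀa⋆ = ‖x⋆‖₁, confirming that the doubled nonnegative variable loses nothing.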
