
v2010.10.26 - Convex Optimization

3.2. PRACTICAL NORM FUNCTIONS, ABSOLUTE VALUE

3.2.2.1.1 Exercise. Polyhedral epigraph of k-largest norm.
Make those card I = 2ᵏ n!/(k!(n−k)!) linear functions explicit for ‖x‖²₂ and ‖x‖²₁ on R² and for ‖x‖³₂ on R³. Plot ‖x‖²₂ and ‖x‖²₁ in three dimensions.

3.2.2.1.2 Example. Compressed sensing problem.
Conventionally posed as convex problem (518), we showed: the compressed sensing problem can always be posed equivalently with a nonnegative variable as in convex statement (523). The 1-norm predominantly appears in the literature because it is convex, it inherently minimizes cardinality under some technical conditions [68], and because the desirable 0-norm is intractable. Assuming a cardinality-k solution exists, the compressed sensing problem may be written as a difference of two convex functions: for A ∈ R^{m×n}

    minimize_{x∈Rⁿ}   ‖x‖₁ − ‖x‖ⁿₖ
    subject to        Ax = b
                      x ≽ 0
         ≡
    find  x ∈ Rⁿ
    subject to  Ax = b
                x ≽ 0
                ‖x‖₀ ≤ k                                    (530)

which is a nonconvex statement, a minimization of the n−k smallest entries of variable vector x, a minimization of a concave function on Rⁿ₊ (§3.2.1) [307, §32]; but a statement of compressed sensing more precise than (523) because of its equivalence to the 0-norm. ‖x‖ⁿₖ is the convex k-largest norm of x (monotonic on Rⁿ₊) while ‖x‖₀ (quasiconcave on Rⁿ₊) expresses its cardinality. Global optimality occurs at a zero objective of minimization; id est, when the smallest n−k entries of variable vector x are zeroed. Under the nonnegativity constraint, this compressed sensing problem (530) becomes the same as

    minimize_{z(x), x∈Rⁿ}   (1 − z)ᵀx
    subject to              Ax = b
                            x ≽ 0                           (531)

where

    1 = ∇‖x‖₁
    z = ∇‖x‖ⁿₖ   },   x ≽ 0                                 (532)
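The polyhedral description invoked in the exercise can be checked numerically: the k-largest norm ‖x‖ⁿₖ equals the pointwise maximum of card I = 2ᵏ n!/(k!(n−k)!) linear functions, one per size-k index subset and per sign pattern on that subset. A minimal sketch in NumPy (function names here are illustrative, not from the text):

```python
from itertools import combinations, product

import numpy as np


def k_largest_norm(x, k):
    """Sum of the k largest entries of |x| -- the k-largest norm ‖x‖ⁿₖ."""
    return np.sort(np.abs(x))[::-1][:k].sum()


def k_largest_norm_polyhedral(x, k):
    """Same norm evaluated as a max of 2^k * C(n,k) linear functions:
    for each size-k index subset and each sign pattern on it, take the
    signed sum of the selected entries, then maximize over all choices."""
    n = len(x)
    best = -np.inf
    for idx in combinations(range(n), k):
        for signs in product((-1.0, 1.0), repeat=k):
            best = max(best, sum(s * x[i] for s, i in zip(signs, idx)))
    return best


# Agreement on random points confirms the polyhedral representation:
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=4)
    assert np.isclose(k_largest_norm(x, 2), k_largest_norm_polyhedral(x, 2))
```

For the exercise's ‖x‖³₂ case (n = 3, k = 2) the inner loops enumerate exactly 2²·3 = 12 linear functions, matching the card I count above.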
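Statement (531) with the gradient pairing (532) suggests an alternating scheme: fix z as the gradient of the k-largest norm at the current iterate (for x ≽ 0, an indicator of its k largest entries), solve the resulting linear program, and repeat until the objective (1 − z)ᵀx vanishes, certifying ‖x‖₀ ≤ k. The sketch below, using SciPy's linprog, is one natural reading of (531)–(532), not the text's prescribed algorithm; the function name, iteration cap, and tolerance are assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def cs_convex_iteration(A, b, k, iters=20):
    """Alternating sketch suggested by (531)-(532): with z fixed, solve
        minimize (1 - z)^T x   subject to  Ax = b, x >= 0,
    then refresh z = indicator of x's k largest entries (the gradient of
    the k-largest norm at x ≽ 0) and repeat.  A zero objective means the
    smallest n-k entries of x are zeroed, i.e. card(x) <= k."""
    m, n = A.shape
    z = np.zeros(n)                      # z = 0 start: plain 1-norm LP first
    x = np.zeros(n)
    for _ in range(iters):
        res = linprog(1.0 - z, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
        if not res.success:
            break
        x = res.x
        z = np.zeros(n)
        z[np.argsort(x)[-k:]] = 1.0      # ones on the k largest entries
        if (1.0 - z) @ x < 1e-9:         # global optimality: objective zero
            break
    return x
```

Each LP is feasible whenever the cardinality-k solution assumed in the example exists, and every iterate satisfies Ax = b, x ≽ 0; whether the objective reaches zero depends on A and b, as the overall statement (530) is nonconvex.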
