
4.11. DISCUSSION

ρ(β) = ‖β‖_2^2, the method (PR) is called ridge regression by Hoerl and Kennard [81, 80].

4.11.2 Non-convex Sparsity Measure

An ideal measure of sparsity is usually nonconvex. For example, in (4.2), the number of nonzero elements in x is the most intuitive measure of sparsity. The ℓ^0 norm of x, ‖x‖_0, is equal to the number of nonzero elements, but it is not a convex function. Another choice of sparsity measure is the logarithmic function: for x = (x_1, …, x_N)^T ∈ R^N, we can take ρ(x) = Σ_{i=1}^N log|x_i|. In sparse image component analysis, another nonconvex sparsity measure is used: ρ(x) = Σ_{i=1}^N log(1 + x_i^2) [53].
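The three sparsity measures above are easy to compare numerically. The following is a minimal Python sketch (assuming NumPy); the `tol` and `eps` parameters are hypothetical guards, not from the text — `eps` in particular avoids log(0), since Σ_i log|x_i| diverges to −∞ whenever some x_i = 0.

```python
import math
import numpy as np

def l0_norm(x, tol=0.0):
    """||x||_0: number of entries with |x_i| > tol -- the ideal but nonconvex measure."""
    return int(np.count_nonzero(np.abs(x) > tol))

def log_sum_measure(x, eps=1e-12):
    """rho(x) = sum_i log|x_i|; eps is a hypothetical guard against log(0)."""
    return float(np.sum(np.log(np.abs(x) + eps)))

def log_quadratic_measure(x):
    """rho(x) = sum_i log(1 + x_i^2), the nonconvex measure cited from [53]."""
    return float(np.sum(np.log1p(np.square(x))))

x = np.array([0.0, 3.0, 0.0, -1.0])
print(l0_norm(x))                          # -> 2
print(round(log_quadratic_measure(x), 4))  # log(10) + log(2) -> 2.9957
```

Note that log(1 + x_i^2) behaves like log|x_i| for large |x_i| but, unlike the plain log-sum, stays finite (and equals 0) at x_i = 0.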

Generally speaking, a nonconvex optimization problem of this kind is combinatorial in nature, and hence it is NP-hard. Some discussion of how to use reweighting methods to solve such a nonconvex optimization problem is given in the next subsection.

4.11.3 Iterative Algorithm for Non-convex Optimization Problems

Sometimes, a reweighted iterative method can be used to find a local minimum of a nonconvex optimization problem. Let's consider the following problem:

(LO)        minimize_x   Σ_{i=1}^N log|x_i|,   subject to y = Φx;

and its corresponding version with a Lagrangian multiplier λ:¹

(LO_λ)      minimize_x   ‖y − Φx‖_2^2 + λ Σ_{i=1}^N log(|x_i| + δ).

Note that the objective function of (LO) is not convex.

Let's consider a reweighted iterative algorithm: for δ > 0,

(RIA)       x^(k+1) = argmin_x   Σ_{i=1}^N |x_i| / (|x_i^(k)| + δ),   subject to y = Φx;

¹ More precisely, (LO_λ) is the Lagrangian multiplier version of the following optimization problem:

minimize_x   Σ_{i=1}^N log(|x_i| + δ),   subject to ‖y − Φx‖ ≤ ε.

Note that when δ and ε are small, it is close to (LO).
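The iteration (RIA) can be sketched in a few lines of Python. Each step is a weighted ℓ1 minimization subject to y = Φx, which can be posed as a linear program via the standard split x = u − v with u, v ≥ 0; this sketch assumes NumPy and SciPy's `linprog` (the LP reformulation and the specific parameter values are illustrative choices, not from the text).

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(Phi, y, delta=1e-3, n_iter=5):
    """Sketch of the reweighted iterative algorithm (RIA).

    Each iteration solves
        min_x  sum_i |x_i| / (|x_i^(k)| + delta)   s.t.  y = Phi x
    as a linear program, splitting x = u - v with u, v >= 0.
    """
    m, n = Phi.shape
    w = np.ones(n)                      # first pass: uniform weights (plain l1)
    x = np.zeros(n)
    for _ in range(n_iter):
        c = np.concatenate([w, w])      # objective: sum_i w_i (u_i + v_i)
        A_eq = np.hstack([Phi, -Phi])   # constraint: Phi (u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + delta)   # reweight for the next iteration
    return x

# Hypothetical example: recover a 2-sparse vector from 5 random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
x_hat = reweighted_l1(Phi, Phi @ x_true)
```

The reweighting follows the intuition behind (RIA): large entries of x^(k) receive small weights and are penalized less on the next pass, while near-zero entries are pushed further toward zero, mimicking the log-based objective of (LO).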
