

for example, linear programming. The previous result shows that we can attack a combinatorial optimization problem by solving a convex optimization problem: if the solution satisfies certain conditions, then the solution of the convex optimization problem coincides with the solution of the combinatorial optimization problem. This offers a new way to attack an NP-hard problem.

More examples in which the minimum ℓ_1 norm decomposition and the minimum ℓ_0 norm decomposition coincide are given in [43].

An exact minimum ℓ_1 norm problem is

(eP_1)        minimize_x ‖x‖_1,   subject to y = Φx.
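To make the connection to linear programming concrete, the sketch below recasts (eP_1) as a linear program by the standard split x = u − v with u, v ≥ 0, so that ‖x‖_1 becomes the linear objective ∑_i (u_i + v_i). This is a minimal illustration under assumed data (the dictionary Phi, the signal y, and the helper name min_l1_equality are not from the text); it uses SciPy's linprog solver.

import numpy as np
from scipy.optimize import linprog

def min_l1_equality(Phi, y):
    # Solve (eP_1): minimize ||x||_1 subject to y = Phi x,
    # via the split x = u - v with u, v >= 0.
    n = Phi.shape[1]
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])      # equality constraint: Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:n], res.x[n:]
    return u - v

# Small demonstration with a random overcomplete dictionary and a sparse signal.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 31]] = [1.5, -2.0, 0.7]
y = Phi @ x_true
x_hat = min_l1_equality(Phi, y)
print(np.allclose(x_hat, x_true, atol=1e-6))   # often True when x_true is sparse enough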

We consider a problem whose constraint is based on the ℓ_2 norm:

(P_1)        minimize_x ‖x‖_1,   subject to ‖y − Φx‖_2 ≤ ε,

where ε is a constant. Section 4.4 explains how to solve this problem.

4.4 Lagrange Multipliers

We explain how to solve (P_1) based on some insights from the interior-point method. One key idea is to select a barrier function and then minimize the sum of the objective function and a positive constant times the barrier function. When the solution x approaches the boundary of the feasible set, the barrier function becomes infinite, thereby guaranteeing that the solution always stays inside the feasible set. Note that the resulting optimization problem is unconstrained, so we can apply standard methods, for example Newton's method, to solve it.
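As a generic illustration of this idea (the standard log-barrier construction, not necessarily the specific barrier or penalty used below), a constrained problem

minimize_x f(x),   subject to g_i(x) ≤ 0, i = 1, ..., m,

is replaced by the unconstrained problem

minimize_x f(x) − μ ∑_{i=1}^m log(−g_i(x)),   μ > 0.

The logarithm blows up as any g_i(x) approaches zero from below, which keeps the iterates strictly feasible, and each subproblem is smooth, so Newton's method applies; letting μ decrease toward zero drives the minimizers toward a solution of the constrained problem.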

A typical interior-point method uses a logarithmic barrier function [113]. The algorithm in [25] is equivalent to using an ℓ_2 penalty function. Since the feasible set in (P_1) is the whole Euclidean space, there is no real need to confine the solution to a feasible set. We actually solve the following problem:

minimize_x ‖y − Φx‖_2^2 + λρ(x),          (4.2)

where λ is a scalar parameter and ρ is a convex, separable function: ρ(x) = ∑_{i=1}^N ρ̄(x_i), where
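As one concrete way to minimize an objective of the form (4.2), take the particular choice ρ(x) = ‖x‖_1, so that ρ̄ is the absolute value. The sketch below uses iterative soft thresholding (ISTA) rather than the Newton / interior-point scheme discussed in the text, purely as an illustration; the names ista, Phi, y, and lam are assumptions introduced here.

import numpy as np

def ista(Phi, y, lam, n_iter=500):
    # Minimize ||y - Phi x||_2^2 + lam * ||x||_1 by iterative soft
    # thresholding.  An illustrative substitute for the Newton-based
    # approach in the text; this is (4.2) with rho(x) = ||x||_1.
    L = 2.0 * np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Phi.T @ (Phi @ x - y)       # gradient of the quadratic term
        z = x - grad / L                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step
    return x

Larger values of λ push more coefficients of x exactly to zero, which is the mechanism by which the penalty term promotes sparse decompositions.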
