sparse image representation via combined transforms - Convex ...

4.7. NEWTON DIRECTION 93

4.7 Newton Direction

For fixed γ, we use Newton's method for convex optimization to solve problem (4.2). Starting from an initial guess x^{(0)}, at every step Newton's method generates a new vector that is closer to the true solution:

x^{(i+1)} = x^{(i)} + \beta_i \, n(x^{(i)}), \quad i = 0, 1, \ldots,   (4.11)

where \beta_i is a damping parameter (chosen by line search to ensure that the value of the objective function decreases), and n(x^{(i)}) is the Newton direction, a function of the current guess x^{(i)}. Let f(x) = \|y - \Phi x\|_2^2 + \lambda \rho(x) denote the objective function in problem (4.2). The gradient and Hessian of f(x) are defined in (4.8) and (4.7). The Newton direction at x^{(i)} satisfies

H(x^{(i)}) \cdot n(x^{(i)}) = -g(x^{(i)}).   (4.12)

This is a system of linear equations. We choose iterative methods to solve it, as discussed in the next section and also in the next chapter.
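The damped Newton iteration (4.11)–(4.12) can be sketched in a few lines. The sketch below is illustrative, not the thesis's implementation: it assumes a particular smooth penalty ρ(x) = Σ_j (x_j^2 + ε)^{1/2} as a stand-in for the actual penalty ρ in (4.2), and it solves the Newton system (4.12) with conjugate gradients, in the spirit of the iterative methods discussed in the next section.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def newton_solve(Phi, y, lam=0.1, eps=1e-6, tol=1e-8, max_iter=50):
    """Damped Newton iteration (4.11) for f(x) = ||y - Phi x||_2^2 + lam * rho(x).

    rho(x) = sum_j sqrt(x_j**2 + eps) is an assumed smooth surrogate for the
    l1 penalty (a placeholder, not necessarily the penalty used in the text).
    The Newton system (4.12) is solved iteratively by conjugate gradients.
    """
    m, n = Phi.shape
    x = np.zeros(n)

    def f(v):
        r = y - Phi @ v
        return r @ r + lam * np.sum(np.sqrt(v**2 + eps))

    for _ in range(max_iter):
        r = y - Phi @ x
        s = np.sqrt(x**2 + eps)
        g = -2.0 * Phi.T @ r + lam * x / s      # gradient g(x), cf. (4.8)
        if np.linalg.norm(g) < tol:
            break
        d2 = lam * eps / s**3                   # diagonal Hessian of the penalty
        H = LinearOperator((n, n), dtype=float,
                           matvec=lambda v: 2.0 * Phi.T @ (Phi @ v) + d2 * v)
        nwt, _ = cg(H, -g)                      # Newton direction: H n = -g, cf. (4.12)
        beta, f0 = 1.0, f(x)                    # damping parameter beta_i chosen by
        while f(x + beta * nwt) > f0 + 1e-4 * beta * (g @ nwt) and beta > 1e-12:
            beta *= 0.5                         # backtracking line search
        x = x + beta * nwt
    return x
```

With λ small, the iterates approach the least-squares fit; larger λ shrinks small coefficients toward zero, which is the behavior the penalized objective is designed to produce.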

4.8 Comparison with Existing Algorithms<br />

We compare our method with the method proposed by Chen, Donoho and Saunders (CDS) [27]. The basic conclusion is that the two methods are very similar, but our method is simpler in derivation, requires fewer variables, and is potentially more efficient in numerical computing. We start by reviewing the CDS approach, describe the differences between our approach and theirs, and then discuss the benefits of these changes.

Chen, Donoho and Saunders proposed a primal-dual log-barrier perturbed LP algorithm. Basically, they solve [27, equation (6.3), page 56]

\min_{x^o} \; c^T x^o + \tfrac{1}{2} \|\gamma x^o\|^2 + \tfrac{1}{2} \|p\|^2 \quad \text{subject to} \quad A x^o + \delta p = y, \; x^o \geq 0,

where

• γ and δ are normally small (e.g., 10^{-4}) regularization parameters;

• c = λ1, where λ is the penalization parameter as defined in (4.2) and 1 is an all-one vector
