Sparse Image Representation via Combined Transforms

are interested in. We suspect that the convergence is still true when the second assumption is relaxed. We leave the analysis for future research.

Assumption 1  For given y, Φ and λ, the solution to the problem in (4.9) exists and is unique.

Since the objective function is convex and its Hessian, as in (4.8), is always positive definite, it is easy to prove that the above assumption is true in most cases.

Assumption 2  Φ^T Φ is a diagonally dominant matrix, which means that there exists a constant ε > 0 such that

    |(Φ^T Φ)_{ii}| ≥ (1 + ε) Σ_{k≠i} |(Φ^T Φ)_{ik}|,    i = 1, 2, ..., N.

As we mentioned, this assumption is too rigorous in many cases. For example, if Φ is a concatenation of the Dirac basis and the Fourier basis, this assumption does not hold.

Theorem 4.1  For fixed γ, let x(γ) denote the solution to problem (4.2). Let x′ denote the solution to problem (4.9). If the previous two assumptions are true, we have

    lim_{γ→+∞} x(γ) = x′.    (4.10)

From the above result, we can apply the following method. Starting with a small γ_1, we get the solution x(γ_1) of problem (4.2). Next we choose γ_2 > γ_1, set x(γ_1) as the initial guess, and apply an iterative method to find the solution (denoted by x(γ_2)) of problem (4.2). We repeat this process, obtaining a parameter sequence γ_1, γ_2, γ_3, ..., and a solution sequence x(γ_1), x(γ_2), x(γ_3), .... From Theorem 4.1, the sequence {x(γ_i), i = 1, 2, ...} converges to the solution of problem (4.9). This method may save computing time because when γ is small, an iterative method takes only a small number of steps to converge. After finding an approximate solution with a small value of γ, we then take it as the starting point for an iterative method that finds a more precise solution. Some iterations may be saved in the early stage.
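The continuation strategy just described can be expressed as a short warm-start loop. The sketch below is only an illustration, not code from the thesis: solve_penalized is a hypothetical routine standing for any iterative solver of problem (4.2) at a fixed γ (for instance, the damped Newton iteration of Section 4.7), and the geometric schedule of γ values in the closing comment is an arbitrary choice.

```python
import numpy as np

def continuation_solve(y, Phi, lam, gammas, solve_penalized, tol=1e-8):
    """Warm-started continuation over an increasing sequence of gamma values.

    solve_penalized(y, Phi, lam, gamma, x0) is assumed to minimize the
    objective of problem (4.2) for a fixed gamma, starting from x0, and to
    return the minimizer.  By Theorem 4.1, x(gamma) approaches the solution
    of problem (4.9) as gamma grows, so each solve starts from the previous
    solution.
    """
    x = np.zeros(Phi.shape[1])           # initial guess for the smallest gamma
    for gamma in gammas:                 # gammas must be increasing: g1 < g2 < ...
        x_new = solve_penalized(y, Phi, lam, gamma, x0=x)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new                 # successive solutions stopped changing
        x = x_new                        # warm start for the next, larger gamma
    return x

# An example schedule (an arbitrary choice, not from the text):
# gammas = [10.0 * 2**k for k in range(8)]
```

Because every solve starts from the previous solution, the later solves at large γ begin close to their answers, which is where the savings described above come from.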

4.7 Newton Direction

For fixed γ, we use Newton's method for convex optimization to solve problem (4.2). Starting from an initial guess x^(0), at every step Newton's method generates a new vector that is closer to the true solution:

    x^(i+1) = x^(i) + β_i n(x^(i)),    i = 0, 1, ...,    (4.11)

where β_i is a damping parameter (β_i is chosen by a line search to make sure that the value of the objective function is reduced), and n(x^(i)) is the Newton direction, a function of the current guess x^(i). Let f(x) = ‖y − Φx‖_2^2 + λρ(x) denote the objective function in problem (4.2). The gradient and Hessian of f(x) are defined in (4.8) and (4.7). The Newton direction at x^(i) satisfies

    H(x^(i)) · n(x^(i)) = −g(x^(i)).    (4.12)

This is a system of linear equations. We choose iterative methods to solve it, as discussed in the next section and also in the next chapter (a sketch of one damped Newton step appears after Section 4.8 below).

4.8 Comparison with Existing Algorithms

We compare our method with the method proposed by Chen, Donoho and Saunders (CDS) [27]. The basic conclusion is that the two methods are very similar, but our method is simpler in derivation, requires fewer variables, and is potentially more efficient in numerical computing. We start by reviewing the CDS approach, describe the differences between our approach and theirs, and then discuss the benefits of these changes.

Chen, Donoho and Saunders proposed a primal-dual log-barrier perturbed LP algorithm. Basically, they solve [27, equation (6.3), page 56]

    minimize_{x^o}   c^T x^o + (1/2)‖γ x^o‖^2 + (1/2)‖p‖^2
    subject to       A x^o + δp = y,   x^o ≥ 0,

where

• γ and δ are normally small (e.g., 10^{-4}) regularization parameters;
• c = λ1, where λ is the penalization parameter as defined in (4.2) and 1 is an all-one vector;
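To make the damped Newton iteration of Section 4.7 concrete, here is a minimal sketch of a single step (4.11), written with several stand-ins: since (4.7) and (4.8) are not reproduced in this excerpt, the penalty ρ is replaced by a generic smooth ℓ1 surrogate ρ(x) = Σ_i (x_i^2 + 1/γ^2)^{1/2}; the Newton system (4.12) is solved with a plain conjugate-gradient loop, which is just one possible iterative method; and the backtracking rule for the damping parameter β_i is one simple way to enforce a decrease of the objective. None of these choices should be read as the thesis's actual implementation.

```python
import numpy as np

def newton_step(x, y, Phi, lam, gamma, cg_iters=200, cg_tol=1e-10):
    """One damped Newton step for f(x) = ||y - Phi x||_2^2 + lam * rho(x).

    rho is a stand-in smooth l1 surrogate, rho(x) = sum_i sqrt(x_i^2 + 1/gamma^2);
    the thesis uses its own rho, with gradient and Hessian given in (4.8) and
    (4.7), which are not reproduced in this excerpt.
    """
    a = 1.0 / gamma**2
    r = y - Phi @ x

    def f(v):
        return np.sum((y - Phi @ v) ** 2) + lam * np.sum(np.sqrt(v**2 + a))

    # Gradient g(x) and the diagonal Hessian contribution of the penalty term.
    g = -2.0 * (Phi.T @ r) + lam * x / np.sqrt(x**2 + a)
    d = lam * a / (x**2 + a) ** 1.5

    # Hessian-vector product H(x) v = 2 Phi^T Phi v + diag(d) v, used inside CG
    # so that Phi^T Phi never has to be formed explicitly.
    def Hv(v):
        return 2.0 * (Phi.T @ (Phi @ v)) + d * v

    # Conjugate gradient for the Newton system (4.12): H(x) n = -g(x).
    n = np.zeros_like(x)
    res = -g.copy()                      # residual of the system at n = 0
    p = res.copy()
    rs = res @ res
    if np.sqrt(rs) < cg_tol:
        return x                         # gradient already (numerically) zero
    for _ in range(cg_iters):
        Hp = Hv(p)
        alpha = rs / (p @ Hp)
        n += alpha * p
        res -= alpha * Hp
        rs_new = res @ res
        if np.sqrt(rs_new) < cg_tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new

    # Backtracking line search for the damping parameter beta_i in (4.11).
    beta, f0 = 1.0, f(x)
    while f(x + beta * n) >= f0 and beta > 1e-12:
        beta *= 0.5
    return x + beta * n
```

Wrapping this step in an outer loop until the Newton direction becomes small, and then in the continuation loop over γ sketched earlier, gives one end-to-end realization of the overall scheme.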

