…are interested in. We suspect that the convergence still holds when the second assumption is relaxed; we leave the analysis for future research.

Assumption 1  For given y, Φ, and λ, the solution to the problem in (4.9) exists and is unique.

Since the objective function is convex and its Hessian, given in (4.8), is always positive definite, this assumption is easy to verify in most cases.

Assumption 2  ΦᵀΦ is a diagonally dominant matrix; that is, there exists a constant ɛ > 0 such that

    |(ΦᵀΦ)ᵢᵢ| ≥ (1 + ɛ) ∑_{k≠i} |(ΦᵀΦ)ᵢₖ|,   i = 1, 2, …, N.

As mentioned earlier, this assumption is too restrictive in many cases. For example, if Φ is a concatenation of the Dirac basis and the Fourier basis, the assumption does not hold.

Theorem 4.1  For fixed γ, let x(γ) denote the solution to problem (4.2), and let x′ denote the solution to problem (4.9). If the two assumptions above hold, then

    lim_{γ→+∞} x(γ) = x′.    (4.10)

This result suggests the following method. Starting with a small γ1, we compute the solution x(γ1) of problem (4.2). We then choose γ2 > γ1, take x(γ1) as the initial guess, and apply an iterative method to find the solution x(γ2) of problem (4.2). Repeating this process yields a parameter sequence γ1, γ2, γ3, … and a solution sequence x(γ1), x(γ2), x(γ3), …. By Theorem 4.1, the sequence {x(γᵢ), i = 1, 2, …} converges to the solution of problem (4.9). This scheme may save computing time: when γ is small, an iterative method converges in only a few steps, and the approximate solution obtained with a small γ then serves as a good starting point for computing a more precise solution, so some iterations are saved in the early stages.
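As a concrete illustration of this continuation scheme, here is a minimal Python sketch. It assumes, purely for illustration, the smooth surrogate ρ_γ(x) = ∑ᵢ √(xᵢ² + γ⁻²) for the penalty in (4.2) (the actual choice of ρ in the thesis may differ), and it uses a generic off-the-shelf solver in place of the Newton iteration described in the next section; the function name, problem sizes, and γ schedule are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def rho(x, gamma):
    # Illustrative smooth surrogate for the l1 norm: rho -> sum |x_i| as gamma -> infinity.
    return np.sum(np.sqrt(x**2 + 1.0 / gamma**2))

def objective(x, y, Phi, lam, gamma):
    # Objective of problem (4.2): ||y - Phi x||_2^2 + lambda * rho_gamma(x).
    r = y - Phi @ x
    return r @ r + lam * rho(x, gamma)

def continuation_solve(y, Phi, lam, gammas, x0=None):
    """Warm-started solves of problem (4.2) for an increasing sequence of gamma."""
    x = np.zeros(Phi.shape[1]) if x0 is None else x0
    for gamma in gammas:
        res = minimize(objective, x, args=(y, Phi, lam, gamma), method="L-BFGS-B")
        x = res.x  # x(gamma_i) seeds the solve for gamma_{i+1}
    return x

# Hypothetical usage with random data.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
y = rng.standard_normal(64)
x_hat = continuation_solve(y, Phi, lam=0.1, gammas=[1.0, 10.0, 100.0, 1000.0])
```

The warm start is the point of the loop: each solve for a larger γ begins at the previous solution rather than at zero, which is where the savings in early iterations come from.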
4.7 Newton Direction

For fixed γ, we use Newton's method for convex optimization to solve problem (4.2). Starting from an initial guess x^(0), each step of Newton's method generates a new vector that is closer to the true solution:

    x^(i+1) = x^(i) + βᵢ n(x^(i)),   i = 0, 1, …,    (4.11)

where βᵢ is a damping parameter (chosen by a line search to ensure that the value of the objective function decreases) and n(x^(i)) is the Newton direction, a function of the current guess x^(i). Let f(x) = ‖y − Φx‖₂² + λρ(x) denote the objective function in problem (4.2). The gradient and Hessian of f(x) are defined in (4.8) and (4.7). The Newton direction at x^(i) satisfies

    H(x^(i)) · n(x^(i)) = −g(x^(i)).    (4.12)

This is a system of linear equations. We solve it with iterative methods, as discussed in the next section and in the next chapter.

4.8 Comparison with Existing Algorithms

We compare our method with the method proposed by Chen, Donoho and Saunders (CDS) [27]. The basic conclusion is that the two methods are very similar, but our method is simpler to derive, requires fewer variables, and is potentially more efficient in numerical computing. We start by reviewing the CDS approach, then describe how our approach differs from theirs, and finally discuss the benefits of these changes.

Chen, Donoho and Saunders proposed a primal-dual log-barrier perturbed LP algorithm. Essentially, they solve [27, equation (6.3), page 56]

    minimize over x°:   cᵀx° + ½‖γx°‖² + ½‖p‖²
    subject to   Ax° + δp = y,   x° ≥ 0,

where

• γ and δ are normally small (e.g., 10⁻⁴) regularization parameters;
• c = λ1, where λ is the penalization parameter as defined in (4.2) and 1 is an all-one vector;
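Returning to Section 4.7, the following is a minimal sketch of the damped Newton iteration (4.11)–(4.12). It reuses the illustrative surrogate ρ_γ(x) = ∑ᵢ √(xᵢ² + γ⁻²) from the continuation sketch above, so its gradient and Hessian stand in for (4.8) and (4.7) (the thesis's exact expressions may differ), and it solves the Newton system with conjugate gradients as a placeholder for the iterative solvers of the next chapter (LSQR, MINRES).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def damped_newton(y, Phi, lam, gamma, x0, n_iter=20, tol=1e-8):
    """Damped Newton iteration (4.11)-(4.12) for the smoothed objective
    f(x) = ||y - Phi x||_2^2 + lam * rho_gamma(x), with the illustrative
    surrogate rho_gamma(x) = sum_i sqrt(x_i^2 + 1/gamma^2)."""
    x = x0.copy()
    for _ in range(n_iter):
        r = y - Phi @ x
        s = np.sqrt(x**2 + 1.0 / gamma**2)
        g = -2.0 * (Phi.T @ r) + lam * (x / s)        # gradient, standing in for (4.8)
        d2 = lam * (1.0 / gamma**2) / s**3            # diagonal penalty part of the Hessian
        # Hessian applied to a vector: H v = 2 Phi^T (Phi v) + diag(d2) v, standing in for (4.7).
        H = LinearOperator((x.size, x.size),
                           matvec=lambda v: 2.0 * (Phi.T @ (Phi @ v)) + d2 * v)
        n_dir, _ = cg(H, -g)                          # Newton direction from (4.12), solved iteratively
        # Backtracking line search on the damping parameter beta_i of (4.11).
        f = r @ r + lam * np.sum(s)
        beta = 1.0
        while True:
            x_new = x + beta * n_dir
            r_new = y - Phi @ x_new
            f_new = r_new @ r_new + lam * np.sum(np.sqrt(x_new**2 + 1.0 / gamma**2))
            if f_new < f or beta < 1e-10:
                break
            beta *= 0.5
        x = x_new
        if np.linalg.norm(beta * n_dir) < tol:
            break
    return x
```

The Hessian is never formed explicitly; only matrix-vector products with Φ and Φᵀ are needed, which is what makes an iterative solve of (4.12) attractive when Φ is a concatenation of fast transforms.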