Sparse Image Representation via Combined Transforms
118 CHAPTER 5. ITERATIVE METHODS

eigendecomposition of the inverse of \(\tilde{A}\) may not satisfy this criterion. To see this, let's look at a \(2 \times 2\) block matrix. Suppose
\[
\tilde{A} = \begin{pmatrix} S_1 + I & I \\ I & S_2 + I \end{pmatrix}
\]
can be factorized as
\[
\tilde{A} = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix}
\begin{pmatrix} D_1 & \\ & D_2 \end{pmatrix}
\begin{pmatrix} \lambda_{11}^T & \lambda_{21}^T \\ \lambda_{12}^T & \lambda_{22}^T \end{pmatrix},
\]
where the matrix
\[
\begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix}
\]
is an orthogonal matrix and \(D_1\) and \(D_2\) are diagonal matrices. We have
\[
\tilde{A}^{-1} = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix}
\begin{pmatrix} D_1^{-1} & \\ & D_2^{-1} \end{pmatrix}
\begin{pmatrix} \lambda_{11}^T & \lambda_{21}^T \\ \lambda_{12}^T & \lambda_{22}^T \end{pmatrix}.
\]
At the same time,
\[
\begin{pmatrix} (S_1 + I)^{-1} & \\ & \bigl(S_2 + I - (S_1 + I)^{-1}\bigr)^{-1} \end{pmatrix}
= \begin{pmatrix} I & (S_1 + I)^{-1} \\ & I \end{pmatrix}
\tilde{A}^{-1}
\begin{pmatrix} I & \\ (S_1 + I)^{-1} & I \end{pmatrix}.
\]
Hence
\[
\bigl(S_2 + I - (S_1 + I)^{-1}\bigr)^{-1} = \lambda_{21} D_1^{-1} \lambda_{21}^T + \lambda_{22} D_2^{-1} \lambda_{22}^T.
\]
So if there were fast algorithms to multiply by the matrices \(\lambda_{21}\), \(\lambda_{22}\) and their transposes, then there would be a fast algorithm to multiply by the matrix \(\bigl(S_2 + I - (S_1 + I)^{-1}\bigr)^{-1}\). But we know that in general there is no fast algorithm to multiply by this matrix (see also the previous subsection). So in general, such an eigendecomposition will not give a block preconditioner whose block elements admit fast multiplication algorithms. Hence we have shown that, at least in the \(2 \times 2\) case, the eigendecomposition-of-the-inverse approach is not favorable.
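The chain of identities above can be checked numerically. The following sketch (not part of the original text) builds a small \(\tilde{A}\) from two randomly generated symmetric positive definite blocks standing in for \(S_1\) and \(S_2\), eigendecomposes it, and verifies that the \((2,2)\) block of \(\tilde{A}^{-1}\), namely \(\lambda_{21} D_1^{-1} \lambda_{21}^T + \lambda_{22} D_2^{-1} \lambda_{22}^T\), equals the inverse Schur complement \(\bigl(S_2 + I - (S_1 + I)^{-1}\bigr)^{-1}\). The block sizes and the construction of \(S_1, S_2\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # illustrative block size

# Random SPD blocks standing in for S1 and S2 (shifted to ensure
# positive definiteness of Atilde and its Schur complement).
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n))
S1 = M1 @ M1.T + n * np.eye(n)
S2 = M2 @ M2.T + n * np.eye(n)
I = np.eye(n)

# Atilde = [[S1 + I, I], [I, S2 + I]]
A = np.block([[S1 + I, I], [I, S2 + I]])

# Eigendecomposition A = Lambda diag(d) Lambda^T; partition Lambda
# into blocks lambda_{ij} and d into D1, D2.
d, Lam = np.linalg.eigh(A)
L21, L22 = Lam[n:, :n], Lam[n:, n:]
D1_inv = np.diag(1.0 / d[:n])
D2_inv = np.diag(1.0 / d[n:])

# (2,2) block of A^{-1} from the eigendecomposition ...
block22 = L21 @ D1_inv @ L21.T + L22 @ D2_inv @ L22.T

# ... equals the inverse Schur complement (S2 + I - (S1 + I)^{-1})^{-1}.
schur_inv = np.linalg.inv(S2 + I - np.linalg.inv(S1 + I))
assert np.allclose(block22, schur_inv)
```

Note that the identity holds for any partition of the orthogonal factor, which is exactly why fast multiplication by \(\lambda_{21}\) and \(\lambda_{22}\) would imply fast multiplication by the inverse Schur complement.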