130 CHAPTER 7. FUTURE WORK

These images possess a variety of features. The results of a sparse decomposition can be used to determine which class of features is dominant in an image, and hence which transform is well suited to processing it.

We plan to experiment with different dictionaries. So far we have experimented with a dictionary that combines 2-D wavelets and edgelet-like features. Some other possible combinations are:

• {2-D DCT + 2-D wavelets + edgelets}. This dictionary contains elements for homogeneous image components, point singularities and linear singularities. It should provide a sparser decomposition, but will increase the computational cost.

• {2-D DCT + 2-D wavelets + curvelets}. The idea is similar to that of the previous dictionary, but with a better-designed curvelet set in place of the edgelet set, it may lead to an improvement in computational efficiency.

• {2-D DCT + curvelets}. Comparing with the previous result, we may be able to tell how important the wavelet components are in the image, by counting how many wavelets are needed to sparsely represent the desired image. We can explore the same idea for the 2-D DCT and curvelets, respectively.

There are some open questions. We hope to gain more insight (and, hopefully, answers) via further computational experiments and theoretical analysis. Some of these open questions are:

• We want to explore the limits of our approach in finding a sparse atomic decomposition. For example, if the desired image is made from a few atoms of a dictionary, will our approach find the sparsest atomic decomposition in that dictionary?

• Can most natural images (e.g., the images we see in ordinary life) be represented by a few components from a dictionary made of the 2-D DCT, 2-D wavelets, edgelets and curvelets? If not, what other transforms should we bring in (or develop)?
• We believe our sparse image representation approach can be used as a tool to preprocess a class of images: based on the sparse representation results, we can decide which image transform should be chosen for that specific class of images. We plan to explore this idea. It provides a way of doing method selection, with applications in image search, biological image analysis, etc.
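To make the idea of sparse decomposition over a combined dictionary concrete, here is a toy 1-D sketch. It is not the thesis's actual algorithm: the dictionary is a DCT basis plus spike atoms (the spikes stand in for a second transform), and the solver is plain iterative soft-thresholding (ISTA) for the $\ell_1$-penalized least-squares problem. The function names, the penalty value, and the synthetic signal are all illustrative choices.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as the columns of an n x n matrix.
    k = np.arange(n)
    D = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
    D[:, 0] *= 1.0 / np.sqrt(2.0)
    return D * np.sqrt(2.0 / n)

def ista(A, y, lam, steps=500):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

n = 64
D = dct_matrix(n)                          # smooth / homogeneous atoms
I = np.eye(n)                              # spike atoms (stand-in for a second transform)
A = np.hstack([D, I])                      # combined, overcomplete dictionary

# Synthesize a signal from two DCT atoms plus one spike.
x_true = np.zeros(2 * n)
x_true[3], x_true[10], x_true[n + 40] = 2.0, -1.5, 1.0
y = A @ x_true

x_hat = ista(A, y, lam=0.05)
support = np.flatnonzero(np.abs(x_hat) > 0.1)
print(sorted(support))                     # ideally the three planted atoms
```

The point of the sketch is the qualitative behavior the chapter appeals to: when the planted atoms come from different subdictionaries, the $\ell_1$ objective tends to assign each feature to the transform that represents it most compactly, which is what makes the recovered support informative about the dominant feature class.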
7.2 Modifying Edgelet Dictionary

By inspection of Figure 6.2, one sees that the existing edgelet dictionary can be further improved. Indeed, in those figures, one sees that the wavelet component of the reconstruction carries a significant share of the burden of edge representation. It seems that the terms in our edgelet dictionary do not have a width matched to the edge width, and that in consequence many fine-scale wavelets are needed in the representation. In future experiments, we would try edgelet features that have a finer width at fine scales and a coarser width at coarse scales.

This intuitive discussion matches some of the concerns in the paper [54]. The idea in [54] is to construct a new tight frame intended for efficient representation of 2-D objects with singularities along curves. The frame elements exhibit a range of dyadic positions, scales and angular orientations. The useful frame elements are highly directionally selective at fine scales. Moreover, the width of the frame elements scales as the square of their length. The frame construction combines ideas from ridgelet and wavelet analysis. One tool is the monoscale ridgelet transform; the other is the use of multiresolution filter banks. The frame coefficients are obtained by first separating the object into special multiresolution subbands $f_s$, $s \geq 0$, by applying the filtering operation $\Delta_s f = \Psi_{2s} * f$ for $s \geq 0$, where $\Psi_{2s}$ is built from frequencies in an annulus extending from radius $2^{2s}$ to $2^{2s+2}$. To the $s$-th subband one applies the monoscale ridgelet transform at scale $s$. Note that the monoscale ridgelet transform at scale index $s$ is composed with multiresolution filtering near index $2s$; this $(s, 2s)$ pairing makes the useful frame elements highly anisotropic at fine scales. The frame gives rise to a near-optimal atomic decomposition of objects that have discontinuities along a closed $C^2$ curve.
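The subband separation step $\Delta_s f = \Psi_{2s} * f$ can be sketched in the Fourier domain. The sketch below replaces the paper's smooth windows $\Psi_{2s}$ with crude indicator masks on the dyadic annuli $2^{2s} \leq |\xi| < 2^{2s+2}$, plus a lowpass and a residual highpass so the masks partition the frequency plane; this is purely for illustration and is not the construction of [54].

```python
import numpy as np

def radial_freq(n):
    # Radius |xi| on the n x n DFT grid (integer frequencies).
    f = np.fft.fftfreq(n) * n
    fx, fy = np.meshgrid(f, f, indexing="ij")
    return np.hypot(fx, fy)

def subbands(f, smax=2):
    # Indicator-mask stand-in for Delta_s f = Psi_{2s} * f: one band per
    # dyadic annulus 2^{2s} <= |xi| < 2^{2s+2}, plus a lowpass (|xi| < 1)
    # and a highpass for the leftover frequencies, so the masks tile the
    # plane exactly and the bands sum back to f.
    n = f.shape[0]
    r = radial_freq(n)
    F = np.fft.fft2(f)
    bands = [np.real(np.fft.ifft2(F * (r < 1.0)))]            # lowpass
    for s in range(smax + 1):
        lo, hi = 2.0 ** (2 * s), 2.0 ** (2 * s + 2)
        bands.append(np.real(np.fft.ifft2(F * ((r >= lo) & (r < hi)))))
    bands.append(np.real(np.fft.ifft2(F * (r >= 2.0 ** (2 * smax + 2)))))
    return bands

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
parts = subbands(img)
recon = sum(parts)
print(np.max(np.abs(recon - img)))    # masks tile the plane, so this is ~0
```

Because consecutive annuli $[2^{2s}, 2^{2s+2})$ are adjacent, the half-open masks are disjoint and cover every frequency, so summing the subbands reconstructs the image to machine precision; the smooth windows of [54] trade this trivial exactness for frame elements with good spatial decay.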
Simple thresholding of the frame coefficients gives rise to new methods of approximation and smoothing that are highly anisotropic and provably optimal. We refer readers to the original paper [54] for more details.

7.3 Accelerating the Iterative Algorithm

The idea of block coordinate relaxation can be found in [124, 125]. A proof of its convergence can be found in [135]. We are also aware of independent work in [130]. The idea