Sparse Image Representation via Combined Transforms
136 APPENDIX A. DIRECT EDGELET TRANSFORM

i, j = 1, 2, ..., N. It is natural to assume that a vertex mentioned in [E2] must be located at a pixel. The cardinality of an edgelet system is O(N^2 log_2 N). More details are given in Section A.3.2.

An edgel is a line segment connecting a pair of pixels in an image. Note that if we take all the possible edgels in an N x N image, there are O(N^4) of them. Moreover, for any edgel, it is proven in [50] that it takes at most O(log_2 N) edgelets to approximate it to within a distance 1/N + delta, where delta is a constant.

The coefficients of the edgelet transform are simply the integrals of the 2-D function along these edgelets. There is a fast algorithm to compute an approximate edgelet transform [50]. For an N x N image, the complexity of the fast algorithm is O(N^2 log_2 N). The fast edgelet transform is the topic of the next chapter. This transform has been implemented in C and is callable via a Matlab MEX function. It can serve as a benchmark for testing other transforms that are designed to capture linear features in images.

A.2 Examples

Before we present the details, let us first look at some examples. The key idea behind this transform is the hope that if the original image is made up of a few needle-like components, then the transform will give a small number of significant coefficients, while the rest of the coefficients remain relatively small. Moreover, if we apply the adjoint transform to the coefficients selected by keeping only those with significant amplitudes, then the reconstructed image should be close to the original.

To test this idea, we select four images:

[Huo] a Chinese character, 64 x 64;
[Sticky] a stick figure, 128 x 128;
[WoodGrain] an image of wood grain, 512 x 512;
[Lenna] the Lenna image, 512 x 512.

[Huo] was selected because it is made up of a few lines, so it is an ideal test image. [Sticky] has a patch in the head. We apply an edge filter to this image before we apply the
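As a concrete illustration of the coefficient computation described above, the following sketch evaluates a single edgelet coefficient by brute force, as a discrete line integral over the pixels of a digitized segment. The helper names and the pixel-sampling rule (Bresenham's line algorithm) are assumptions made here for illustration; this is not the fast O(N^2 log_2 N) algorithm of [50], which is the subject of the next chapter.

```python
def line_pixels(x0, y0, x1, y1):
    """Pixels on the digital segment from (x0, y0) to (x1, y1),
    traced with Bresenham's line algorithm."""
    pixels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        pixels.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return pixels

def edgelet_coefficient(image, v0, v1):
    """Discrete line integral of the image along the edgelet
    with vertices v0 = (x0, y0) and v1 = (x1, y1)."""
    return sum(image[y][x] for (x, y) in line_pixels(*v0, *v1))
```

For example, on a constant image of ones, the coefficient along a segment simply counts the sampled pixels, which makes the discretization easy to check by hand.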
edgelet transform. [WoodGrain] is a natural image, but it has significant linear features in it. [Lenna] is a standard test image in image processing. As with [Sticky], we apply an edge filter to [Lenna] before taking the edgelet transform. The images in the above list are roughly ranked by the abundance (admittedly a subjective judgment) of linear features.

Figures A.1, A.2, A.3 and A.4 show the numerical results. From the reconstructions based on partial coefficients (Figure A.1 (b), (c), (e) and (f); Figure A.2 (d), (e) and (f); Figure A.3 (b), (c), (e) and (f); Figure A.4 (d), (e) and (f)) we see that the significant edgelet coefficients capture the linear features of the images. Table A.1 shows the percentages of the coefficients used in the reconstructions. Note that all the percentages in the table are small, less than 6%. The larger the percentage, the better the reconstruction captures the linear features in the images.

                 Recon. 1   Recon. 2   Recon. 3   Recon. 4
    [Huo]         0.75 %     1.50 %     2.99 %     5.98 %
    [Sticky]      0.03 %     0.08 %     0.14 %       NA
    [WoodGrain]   0.54 %     1.09 %     2.17 %     4.35 %
    [Lenna]       0.23 %     0.47 %     0.93 %       NA

Table A.1: Percentages of edgelet coefficients used in the reconstructions.

A.2.1 Edge Filter

Here we design an edge filter. Let A represent the original image. Define the 2-D filters D_1, D_2 and D_3 as

            ( - + )           ( - - )           ( - + )
    D_1  =  (     ) ,  D_2 =  (     ) ,  D_3 =  (     ) .
            ( - + )           ( + + )           ( + - )

Let "⋆" denote 2-D convolution, let [·]^{·2} denote squaring each element of the matrix in the brackets, and let "+" denote elementwise addition. The edge-filtered image of A is defined as

    A_E = [A ⋆ D_1]^{·2} + [A ⋆ D_2]^{·2} + [A ⋆ D_3]^{·2}.

As we have explained, for the [Sticky] and [Lenna] images, we first apply an edge filter.
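The edge filter above can be sketched directly from its definition. The version below is an assumption-laden illustration: it takes the filter entries to be ±1, uses "valid" 2x2 windows (no boundary padding, which the text does not specify), and computes correlations rather than true convolutions; since D_1 and D_2 flip to their negatives and D_3 flips to itself under 180-degree rotation, the squaring makes the two conventions give the same A_E.

```python
# 2x2 edge filters, entries +/-1 (an assumption; the text shows only signs).
D1 = [[-1, 1], [-1, 1]]   # responds to horizontal intensity changes
D2 = [[-1, -1], [1, 1]]   # responds to vertical intensity changes
D3 = [[-1, 1], [1, -1]]   # responds to diagonal intensity changes

def edge_filter(A):
    """Edge-filtered image A_E: sum of the squared responses of
    D1, D2 and D3 over every valid 2x2 window of A."""
    n, m = len(A), len(A[0])
    E = [[0] * (m - 1) for _ in range(n - 1)]
    for D in (D1, D2, D3):
        for i in range(n - 1):
            for j in range(m - 1):
                r = sum(D[u][v] * A[i + u][j + v]
                        for u in range(2) for v in range(2))
                E[i][j] += r * r   # squaring makes the sign convention irrelevant
    return E
```

On a constant image the response is identically zero, and on an image with a single vertical step edge the response is concentrated on the column containing the edge, which matches the filter's intended behavior.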