Sparse Image Representation via Combined Transforms

118 CHAPTER 5. ITERATIVE METHODS

An eigendecomposition of the inverse of $\tilde{A}$ may not satisfy this criterion. To see this, let's look at a $2 \times 2$ block matrix. Suppose
$$
\tilde{A} = \begin{pmatrix} S_1 + I & I \\ I & S_2 + I \end{pmatrix}
$$
can be factorized as
$$
\tilde{A} = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix}
\begin{pmatrix} D_1 & \\ & D_2 \end{pmatrix}
\begin{pmatrix} \lambda_{11}^T & \lambda_{21}^T \\ \lambda_{12}^T & \lambda_{22}^T \end{pmatrix},
$$
where the matrix
$$
\begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix}
$$
is orthogonal and $D_1$ and $D_2$ are diagonal matrices. We have
$$
\tilde{A}^{-1} = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix}
\begin{pmatrix} D_1^{-1} & \\ & D_2^{-1} \end{pmatrix}
\begin{pmatrix} \lambda_{11}^T & \lambda_{21}^T \\ \lambda_{12}^T & \lambda_{22}^T \end{pmatrix}.
$$
At the same time, the block $LDL^T$ factorization of $\tilde{A}$ gives
$$
\begin{pmatrix} (S_1+I)^{-1} & \\ & \bigl(S_2 + I - (S_1+I)^{-1}\bigr)^{-1} \end{pmatrix}
= \begin{pmatrix} I & (S_1+I)^{-1} \\ & I \end{pmatrix}
\tilde{A}^{-1}
\begin{pmatrix} I & \\ (S_1+I)^{-1} & I \end{pmatrix}.
$$
Hence, equating the lower-right blocks of the two sides,
$$
\bigl(S_2 + I - (S_1+I)^{-1}\bigr)^{-1} = \lambda_{21} D_1^{-1} \lambda_{21}^T + \lambda_{22} D_2^{-1} \lambda_{22}^T.
$$
So if there were fast algorithms to multiply with the matrices $\lambda_{21}$, $\lambda_{22}$ and their transposes, then there would be a fast algorithm to multiply with the matrix $\bigl(S_2 + I - (S_1+I)^{-1}\bigr)^{-1}$. But we know that, in general, there is no fast algorithm to multiply with this matrix (see also the previous subsection). So in general, such an eigendecomposition will not give a block preconditioner whose block elements admit fast multiplication algorithms. Hence we have proved that, at least in the $2 \times 2$ case, an eigendecomposition-of-the-inverse approach is not favorable.
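The two block identities above can be checked numerically. The following is a minimal NumPy sketch, in which the specific $S_1$, $S_2$ are random symmetric positive semidefinite stand-ins (an assumption for illustration; any such matrices work): it forms $\tilde{A}$, conjugates $\tilde{A}^{-1}$ by the triangular factors to recover the block-diagonal matrix, and verifies the lower-right-block formula from an eigendecomposition.

```python
import numpy as np

# S1, S2 are arbitrary symmetric PSD stand-ins (assumption for illustration)
rng = np.random.default_rng(0)
n = 4
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n))
S1, S2 = M1 @ M1.T, M2 @ M2.T
I = np.eye(n)
Z = np.zeros((n, n))

# A~ = [[S1 + I, I], [I, S2 + I]]
A = np.block([[S1 + I, I], [I, S2 + I]])
Ainv = np.linalg.inv(A)

P = np.linalg.inv(S1 + I)              # (S1 + I)^{-1}
schur_inv = np.linalg.inv(S2 + I - P)  # (S2 + I - (S1+I)^{-1})^{-1}

# Conjugating A~^{-1} by the triangular factors block-diagonalizes it
lhs = np.block([[P, Z], [Z, schur_inv]])
rhs = np.block([[I, P], [Z, I]]) @ Ainv @ np.block([[I, Z], [P, I]])
assert np.allclose(lhs, rhs)

# Eigendecomposition A~ = V diag(w) V^T supplies the lambda blocks;
# the lower-right identity reads
#   (S2 + I - (S1+I)^{-1})^{-1} = l21 D1^{-1} l21^T + l22 D2^{-1} l22^T
w, V = np.linalg.eigh(A)
l21, l22 = V[n:, :n], V[n:, n:]
rhs2 = l21 @ np.diag(1 / w[:n]) @ l21.T + l22 @ np.diag(1 / w[n:]) @ l22.T
assert np.allclose(schur_inv, rhs2)
print("block identities verified")
```

Note that the check succeeds for any split of the eigenvalue columns into the $D_1$ and $D_2$ groups, since the right-hand side simply sums over all eigenpairs restricted to the bottom block rows of the orthogonal factor.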

Chapter 6

Simulations

Section 6.1 describes the dictionary that we use. Section 6.2 describes our testing images. Section 6.3 discusses the decompositions based on our approach and their implications. Section 6.4 discusses the decay of the amplitudes of coefficients and how it reflects the sparsity of the representation. Section 6.5 reports a comparison with Matching Pursuit. Section 6.6 summarizes the computing time. Section 6.7 describes the forthcoming software package that is used for this project. Finally, Section 6.8 talks about some related efforts.

6.1 Dictionary

The dictionary we choose is a combination of an orthonormal 2-D wavelet basis and a set of edgelet-like features. 2-D wavelets are tensor products of two 1-D wavelets. We choose a type of 1-D wavelet that has a minimum-size support for a given number of vanishing moments but is as symmetrical as possible. This class of 1-D wavelets is called "Symmlets" in WaveLab [42]. We choose the Symmlets with 8 vanishing moments and support of size 16. An illustration of some of these 2-D wavelets is in Figure 3.6.

Our "edgelet dictionary" is in fact a collection of edgelet features. See the discussion of Sections 3.3.1–3.3.3. In Appendix B, we define a collection of linear functionals $\tilde{\lambda}_e[x]$ operating on $x$ belonging to the space of $N \times N$ images. These linear functionals are associated with the evaluation of an approximate Radon transform as described in Appendix B. In effect, the Riesz representers of these linear functionals, $\{\tilde{\psi}_e(k_1, k_2) : 0 \le k_1, k_2$
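The tensor-product construction of the 2-D wavelets can be sketched in a few lines. The thesis itself uses WaveLab in Matlab; the sketch below instead assumes the PyWavelets package, whose `sym8` wavelet matches the Symmlet described above (8 vanishing moments, filter length 16). It samples the 1-D scaling function $\phi$ and wavelet $\psi$ and forms the three standard 2-D wavelets as outer products.

```python
import numpy as np
import pywt

# Symmlet with 8 vanishing moments; PyWavelets names it "sym8"
# (assumption: this matches WaveLab's Symmlet 8, which it does by
# construction -- 8 vanishing moments, support/filter length 16)
w = pywt.Wavelet("sym8")
assert w.vanishing_moments_psi == 8
assert w.dec_len == 16

# Sample the 1-D scaling function phi and wavelet psi on a dyadic grid
phi, psi, x = w.wavefun(level=4)

# 2-D wavelets are tensor products of the 1-D functions: the three
# detail combinations phi(x)psi(y), psi(x)phi(y), psi(x)psi(y)
# (phi(x)phi(y) gives the coarse-scale 2-D scaling function)
psi_h = np.outer(phi, psi)  # horizontal detail
psi_v = np.outer(psi, phi)  # vertical detail
psi_d = np.outer(psi, psi)  # diagonal detail
print(psi_d.shape)
```

Each outer product here is a square array whose rows and columns sample the two 1-D factors, which is exactly the tensor-product structure used for the wavelet half of the dictionary.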

