

From

$$(I + S_1)^{1/2} = T_1 (I + D_1)^{1/2} T_1^T, \qquad (I + S_1)^{-1/2} = T_1 (I + D_1)^{-1/2} T_1^T,$$

and the availability of fast algorithms for matrix-vector multiplication with $T_1$, $T_1^T$, $(I + D_1)^{-1/2}$, and $(I + D_1)^{1/2}$, it follows that there are fast algorithms to multiply with $a_{11}$ and $a_{12}$. But for $a_{22}$, none of $T_1$, $T_1^{-1}$, $T_2$, and $T_2^{-1}$ can simultaneously diagonalize $I + S_2$ and $(I + S_1)^{-1}$; hence there is no trivial fast algorithm to multiply with the matrix $a_{22}$.
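To illustrate how such a product is applied in practice, here is a minimal Python sketch (our own, not from the thesis). It takes the orthonormal DCT as a stand-in for $T_1$ and a placeholder diagonal for $D_1$; every step is either a fast transform or a diagonal scaling, so applying $(I + S_1)^{-1/2}$ costs $O(N \log N)$ without ever forming an $N \times N$ matrix:

```python
import numpy as np
from scipy.fft import dct, idct

# Hypothetical stand-ins: T_1 is taken to be the orthonormal DCT (any
# orthogonal transform with an O(N log N) algorithm would do), and d1
# holds the diagonal of D_1 from (I + S_1) = T_1 (I + D_1) T_1^T.
N = 1024
rng = np.random.default_rng(0)
d1 = rng.uniform(0.0, 2.0, size=N)      # placeholder eigenvalues of S_1

def apply_inv_sqrt(x, d1):
    """Apply (I + S_1)^{-1/2} x = T_1 (I + D_1)^{-1/2} T_1^T x."""
    y = dct(x, norm='ortho')            # T_1^T x: fast transform, O(N log N)
    y = y / np.sqrt(1.0 + d1)           # (I + D_1)^{-1/2} y: diagonal scaling
    return idct(y, norm='ortho')        # T_1 y: fast inverse transform

x = rng.standard_normal(N)
y = apply_inv_sqrt(x, d1)               # no N-by-N matrix is ever formed
```

Applying the same map twice yields $(I + S_1)^{-1} x$, which is one quick consistency check on the factorization.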

In general, a complete Cholesky factorization destroys the structure of the matrices from which we obtain fast algorithms. Fast matrix-vector multiplication is so vital for solving large-scale problems with iterative methods (and an intrinsic property of our problem is that the data size is huge) that we do not want to sacrifice it. Preconditioning may reduce the number of iterations, but the amount of computation within each iteration increases significantly, so the total amount of computing may actually grow. For this reason, we stop pursuing the direction of complete Cholesky factorization.
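To make the trade-off concrete, consider a rough cost model (our own illustration; the symbols $k$, $k'$, $c$, $c'$ are hypothetical and not from the analysis above). If the unpreconditioned iteration needs $k$ steps at cost $c$ each, and a preconditioned variant needs $k' < k$ steps at cost $c'$ each, the preconditioner pays off only when

$$k' c' < k c, \qquad \text{i.e.} \qquad \frac{k'}{k} < \frac{c}{c'}.$$

With fast transforms, $c = O(N \log N)$ per matrix-vector product, while a dense Cholesky factor raises the per-iteration cost to $c' = O(N^2)$ for the triangular solves; the iteration count would therefore have to drop by a factor of roughly $N / \log N$ before the dense preconditioner could win.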

5.4.2 Sparse Approximate Inverse

The idea of a sparse approximate inverse (SAI) is that if we can find an approximate eigendecomposition of the inverse matrix, then we can use it to precondition the linear system. More precisely, suppose $Z$ is an orthonormal matrix and, at the same time, the columns of $Z$, denoted by $z_i$, $i = 1, 2, \ldots, N$, are $\tilde{A}$-conjugate orthogonal to each other:

$$Z = [z_1, z_2, \ldots, z_N], \qquad Z \tilde{A} Z^T = D,$$

where $D$ is a diagonal matrix. We have $\tilde{A} = Z^T D Z$ and $\tilde{A}^{-1} = Z^T D^{-1} Z$. If we can find such a $Z$, then $(D^{-1/2} Z) \tilde{A} (D^{-1/2} Z)^T \approx I$, so $D^{-1/2} Z$ is a good preconditioner.
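The algebra above is easy to check numerically. The following sketch (ours, not a construction from the thesis; the random matrix `A` is a hypothetical stand-in for $\tilde{A}$) builds an exact eigendecomposition, for which the SAI identities hold with equality:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# Hypothetical data: a random symmetric positive definite stand-in for A~.
B = rng.standard_normal((N, N))
A = B @ B.T + N * np.eye(N)

# Exact eigendecomposition A = V diag(w) V^T; setting Z = V^T makes Z
# orthonormal and gives Z A Z^T = D with D diagonal, as in the text.
w, V = np.linalg.eigh(A)
Z = V.T
D = np.diag(w)

assert np.allclose(Z @ A @ Z.T, D)                            # Z A Z^T = D
assert np.allclose(np.linalg.inv(A), Z.T @ np.diag(1/w) @ Z)  # A^{-1} = Z^T D^{-1} Z

# M = D^{-1/2} Z: with the exact decomposition, M A M^T is exactly I.
M = np.diag(1.0 / np.sqrt(w)) @ Z
assert np.allclose(M @ A @ M.T, np.eye(N))
```

In the SAI setting, $Z$ is only an approximate and sparse set of conjugate directions, so $M \tilde{A} M^T \approx I$ rather than exactly $I$; the sketch merely verifies the algebra behind the preconditioner.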

We now consider its block matrix analogue. In the previous subsection we already argued that, in a block matrix version of a preconditioner, each element of the block matrix must admit a fast algorithm based on matrix multiplication with the matrices $T_1, T_2, \ldots, T_m$ and on matrix-vector multiplication with diagonal matrices. Unfortunately, a preconditioner derived by using the idea of a sparse approximate inverse does not, in general, preserve this structure.
