Sparse Image Representation via Combined Transforms

(c) Compute $p_{k-1} = \bigl[w_k - T(k-1,k)\,p_{k-2} - T(k-2,k)\,p_{k-3}\bigr]/T(k,k)$, where undefined terms are taken to be zeros (as happens for small $k$).
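Below is a minimal sketch of just this update step, assuming the surrounding iteration supplies the vector $w_k$ and the scalars $T(i,k)$; the dictionary-based storage and the names `next_p`, `w_k`, `p` are illustrative, not from the text.

```python
import numpy as np

def next_p(k, w_k, T, p):
    """Step (c): p_{k-1} = [w_k - T(k-1,k) p_{k-2} - T(k-2,k) p_{k-3}] / T(k,k),
    treating undefined (out-of-range) terms as zero vectors.

    T : dict mapping (i, j) -> scalar entry T(i, j)
    p : dict mapping index m -> previously computed vector p_m
    """
    acc = w_k.copy()
    for i, m in ((k - 1, k - 2), (k - 2, k - 3)):
        if (i, k) in T and m in p:   # undefined terms are zeros
            acc = acc - T[(i, k)] * p[m]
    p[k - 1] = acc / T[(k, k)]
    return p[k - 1]
```

Only two previous vectors are referenced, so an implementation need keep only $p_{k-2}$ and $p_{k-3}$ rather than the whole history; the dictionary here trades that saving for clarity.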

From
$$(I+S_1)^{1/2} = T_1 (I+D_1)^{1/2} T_1^T, \qquad (I+S_1)^{-1/2} = T_1 (I+D_1)^{-1/2} T_1^T,$$
and since there are fast algorithms for matrix-vector multiplication with $T_1$, $T_1^T$, $(I+D_1)^{-1/2}$, and $(I+D_1)^{1/2}$, there are fast algorithms to multiply with $a_{11}$ and $a_{12}$. But for $a_{22}$, none of $T_1$, the inverse of $T_1$, $T_2$, or the inverse of $T_2$ can simultaneously diagonalize $I+S_2$ and $(I+S_1)^{-1}$. Hence there is no trivial fast algorithm to multiply with the matrix $a_{22}$.

In general, a complete Cholesky factorization destroys the structure of the matrices that gives us fast algorithms. Fast matrix-vector multiplication is so vital for solving large-scale problems with iterative methods (an intrinsic property of our problem is that the data size is huge) that we do not want to sacrifice it. Preconditioning may reduce the number of iterations, but it significantly increases the amount of computation within each iteration, so the total amount of computation may actually grow. Because of this, we stop pursuing the direction of complete Cholesky factorization.

5.4.2 Sparse Approximate Inverse

The idea of a sparse approximate inverse (SAI) is that if we can find an approximate eigendecomposition of the inverse matrix, then we can use it to precondition the linear system. More precisely, suppose $Z$ is an orthonormal matrix whose columns $z_i$, $i = 1, 2, \ldots, N$, are at the same time $\tilde{A}$-conjugate (orthogonal with respect to $\tilde{A}$):
$$Z = [z_1, z_2, \ldots, z_N], \qquad Z \tilde{A} Z^T = D,$$
where $D$ is a diagonal matrix. We then have $\tilde{A} = Z^T D Z$ and $\tilde{A}^{-1} = Z^T D^{-1} Z$. If we can find such a $Z$, then $(D^{-1/2} Z)\,\tilde{A}\,(D^{-1/2} Z)^T \approx I$, so $D^{-1/2} Z$ is a good preconditioner.

We now consider its block-matrix analogue. In the previous subsection we argued that in the block-matrix version of a preconditioner, each block should admit a fast algorithm based on matrix multiplication with the matrices $T_1, T_2, \ldots, T_m$ and matrix-vector multiplication with diagonal matrices. Unfortunately, a preconditioner derived by using the idea of
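To make the fast-multiplication point at the start of this passage concrete, here is a minimal sketch of multiplying by $(I+S_1)^{\pm 1/2} = T_1 (I+D_1)^{\pm 1/2} T_1^T$ using only a fast transform and a diagonal scaling. An orthonormal DCT stands in for $T_1$ and a random nonnegative diagonal for $D_1$; both are assumptions made for illustration, not the specific transforms of the thesis.

```python
import numpy as np
from scipy.fft import dct, idct

N = 1024
rng = np.random.default_rng(1)
d1 = rng.uniform(0.0, 2.0, size=N)       # stand-in diagonal of D1 (>= 0)

def apply_I_plus_S1_power(x, power):
    """Multiply by (I + S1)^power = T1 (I + D1)^power T1^T using only
    fast transforms and a diagonal scaling: O(N log N) per product."""
    y = dct(x, norm="ortho")              # T1^T x  (analysis transform)
    y *= (1.0 + d1) ** power              # apply (I + D1)^power
    return idct(y, norm="ortho")          # T1 y    (synthesis transform)

x = rng.standard_normal(N)
# Applying the power-1/2 product twice reproduces multiplication by I + S1.
y = apply_I_plus_S1_power(apply_I_plus_S1_power(x, 0.5), 0.5)
y_direct = idct((1.0 + d1) * dct(x, norm="ortho"), norm="ortho")
assert np.allclose(y, y_direct)
```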

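For the SAI identity itself, the following sketch builds an exact $Z$ from the eigendecomposition of a small random symmetric positive definite stand-in for $\tilde{A}$ (again an assumption made for illustration). With the exact $Z$ the preconditioned matrix is exactly the identity; a practical sparse approximate inverse replaces the exact eigenvectors with a sparse approximation, making $(D^{-1/2}Z)\,\tilde{A}\,(D^{-1/2}Z)^T$ only approximately $I$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for A~: a small random symmetric positive definite matrix.
N = 8
B = rng.standard_normal((N, N))
A = B @ B.T + N * np.eye(N)

# Exact eigendecomposition: with the eigenvectors as rows of Z, Z A Z^T = D.
lam, Q = np.linalg.eigh(A)
Z = Q.T
D = np.diag(lam)
assert np.allclose(Z @ A @ Z.T, D)

# A^{-1} = Z^T D^{-1} Z, as stated in the text.
assert np.allclose(np.linalg.inv(A), Z.T @ np.diag(1.0 / lam) @ Z)

# M = D^{-1/2} Z is the preconditioner: M A M^T = I (exactly, for exact Z).
M = np.diag(lam ** -0.5) @ Z
assert np.allclose(M @ A @ M.T, np.eye(N))
```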