Sparse Image Representation via Combined Transforms
44 CHAPTER 3. IMAGE TRANSFORMS AND IMAGE FEATURES

then we have

- DST-I: $S_N(k,l) = \sqrt{\frac{2}{N}}\,\sin\frac{\pi k l}{N}$;
- DST-II: $S_N(k,l) = b_{k+1}\sqrt{\frac{2}{N}}\,\sin\frac{\pi (k+1)(l+\frac{1}{2})}{N}$;
- DST-III: $S_N(k,l) = b_{l+1}\sqrt{\frac{2}{N}}\,\sin\frac{\pi (k+\frac{1}{2})(l+1)}{N}$;
- DST-IV: $S_N(k,l) = \sqrt{\frac{2}{N}}\,\sin\frac{\pi (k+\frac{1}{2})(l+\frac{1}{2})}{N}$.

For all four types of DST, the transform matrices are orthogonal (and also unitary). If $\mathrm{DST}^{II}_N$ and $\mathrm{DST}^{III}_N$ denote the DST-II and DST-III operators $\{S_N(k,l)\}_{k,l=0,1,2,\ldots,N-1}$, then, as for the DCT, we have

$$\mathrm{DST}^{II}_N = \left(\mathrm{DST}^{III}_N\right)^{-1}.$$

3.1.4 Homogeneous Components

The reason that the DCT is so powerful in analyzing homogeneous signals is that it is nearly (in some asymptotic sense) the Karhunen-Loève transform (KLT) of certain Gaussian Markov Random Fields (GMRFs). In this section, we first give the definition of a Gaussian Markov Random Field; we then argue that the covariance matrix is the key statistic of a GMRF; for a covariance matrix, we give necessary and sufficient conditions for diagonalizability by the different types of DCT; finally, we conclude that under appropriate boundary conditions, the DCT is the KLT of a GMRF. As we stated earlier, in this thesis not much attention is given to mathematical rigor.

Gaussian Markov Random Field

This subsubsection is organized in the following way: we start with the definition of a random field, then introduce the definitions of a Markov random field and a Gibbs random field; the Hammersley-Clifford theorem establishes an equivalence between a Markov random field and a Gibbs random field; we then give the definition of a Gaussian Markov random field; eventually we argue that the DCT is a KLT of a GMRF.
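As a quick numerical check of the orthogonality claim and of the relation $\mathrm{DST}^{II}_N = (\mathrm{DST}^{III}_N)^{-1}$, the transform matrices can be built directly from the formulas above. This is only a sketch: the scaling factor $b_j$ is assumed, by analogy with the DCT scaling, to equal $1/\sqrt{2}$ at the boundary index $j = N$ and $1$ otherwise; all function names are my own.

```python
import math

def b(j, N):
    # Assumed scaling factor (by analogy with the DCT):
    # 1/sqrt(2) at the boundary index j = N, and 1 otherwise.
    return 1.0 / math.sqrt(2.0) if j == N else 1.0

def dst2(N):
    # DST-II: S_N(k, l) = b_{k+1} sqrt(2/N) sin(pi (k+1)(l+1/2) / N)
    return [[b(k + 1, N) * math.sqrt(2.0 / N)
             * math.sin(math.pi * (k + 1) * (l + 0.5) / N)
             for l in range(N)] for k in range(N)]

def dst3(N):
    # DST-III: S_N(k, l) = b_{l+1} sqrt(2/N) sin(pi (k+1/2)(l+1) / N)
    return [[b(l + 1, N) * math.sqrt(2.0 / N)
             * math.sin(math.pi * (k + 0.5) * (l + 1) / N)
             for l in range(N)] for k in range(N)]

def dst4(N):
    # DST-IV: S_N(k, l) = sqrt(2/N) sin(pi (k+1/2)(l+1/2) / N)
    return [[math.sqrt(2.0 / N)
             * math.sin(math.pi * (k + 0.5) * (l + 0.5) / N)
             for l in range(N)] for k in range(N)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_identity(M, tol=1e-9):
    n = len(M)
    return all(abs(M[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

N = 8
S4 = dst4(N)
S4T = [list(row) for row in zip(*S4)]
print(is_identity(matmul(S4, S4T)))           # DST-IV is orthogonal: True
print(is_identity(matmul(dst2(N), dst3(N))))  # DST-II * DST-III = I: True
```

Note that DST-III is simply the transpose of DST-II, so the inverse relation is just orthogonality restated.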
3.1. DCT AND HOMOGENEOUS COMPONENTS 45
Definition of a random field. We define a random field on a lattice. Let Z^d denote the d-dimensional integers, i.e., the lattice points in d-dimensional space, which is denoted by R^d. The finite set D is a subset of Z^d: D ⊂ Z^d. For two lattice points x, y ∈ Z^d, let |x − y| denote the Euclidean distance between x and y. The set D is connected if and only if for any x, y ∈ D, there exists a finite subset {x_1, x_2, ..., x_n} of D, n ∈ N, such that (1) |x − x_1| ≤ 1, (2) |x_i − x_{i+1}| ≤ 1 for i = 1, 2, ..., n − 1, and (3) |x_n − y| ≤ 1. We call a connected set D a domain. The dimension of the set D is, by definition, the number of integer points in the set D; we denote it by dim(D). On each lattice point in the set D, a real value is assigned. The set R^D, which is equivalent to R^{dim(D)}, is called a state space. Following convention, we denote the state space by Ω, so we have Ω = R^{dim(D)}. Let F be the σ-algebra generated by the Borel sets in Ω. Let P be the Lebesgue measure. The triple (Ω, F, P) is called a random field (RF) on the domain D.
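The connectedness criterion above amounts to reachability through steps of Euclidean length at most 1. A minimal sketch (the function name is my own, not from the text):

```python
import math

def is_connected(D):
    """Check whether a finite set D of lattice points (tuples in Z^d)
    is connected: every pair x, y in D is joined by a chain of points
    of D whose consecutive Euclidean distances are at most 1."""
    D = set(D)
    if len(D) <= 1:
        return True
    start = next(iter(D))
    seen = {start}
    frontier = [start]
    while frontier:
        x = frontier.pop()
        # Grow the reachable set by one step of length <= 1.
        for y in list(D - seen):
            if math.dist(x, y) <= 1.0:
                seen.add(y)
                frontier.append(y)
    return seen == D

# A path-shaped set is connected; two diagonal points are not,
# since |(0,0) - (1,1)| = sqrt(2) > 1.
print(is_connected({(0, 0), (0, 1), (1, 1)}))  # True
print(is_connected({(0, 0), (1, 1)}))          # False
```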
Note that we define a random field on a subset of all the lattice points.
Now we give the definition of a neighbor. Intuitively, under the Euclidean distance, two integer (lattice) points x and y are neighbors when |x − y| ≤ 1. This definition can be extended. We define a non-negative, symmetric, and translation-invariant bivariate function N(x, y) on the domain D, such that for x, y ∈ D, the function N satisfies

1. N(x, x) = 0,
2. N(x, y) ≥ 0 (non-negativity),
3. N(x, y) = N(y, x) (symmetry),
4. N(x, y) = N(0, y − x) (homogeneity, or translation invariance).

Any two points are called neighbors if and only if N(x, y) > 0. For example, in Euclidean space, if we let N(x, y) = 1 when |x − y| = 1 and N(x, y) = 0 elsewhere, then we recover the ordinary definition of a neighbor mentioned at the beginning of this paragraph.
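The ordinary neighbor function just described can be sketched directly; the helper `neighbors` is my own addition for illustration:

```python
import math

def N(x, y):
    """Ordinary neighbor function on the lattice: N(x, y) = 1 when
    |x - y| = 1, and 0 otherwise.  It satisfies the four conditions:
    N(x, x) = 0, non-negativity, symmetry, and translation invariance."""
    return 1.0 if abs(math.dist(x, y) - 1.0) < 1e-9 else 0.0

def neighbors(x, D):
    # All points of the domain D that are neighbors of x, i.e. N(x, y) > 0.
    return {y for y in D if N(x, y) > 0}

D = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}
print(neighbors((0, 0), D))   # {(0, 1), (1, 0)}

# Translation invariance: N(x, y) = N(0, y - x).
x, y = (3, 4), (3, 5)
shift = tuple(b - a for a, b in zip(x, y))
print(N(x, y) == N((0, 0), shift))  # True
```

In two dimensions this recovers the usual 4-neighborhood: the diagonal point (1, 1) is not a neighbor of (0, 0), since |(0,0) − (1,1)| = √2 > 1.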
Definition of a Markov random field. The definition of a Markov random field is based upon conditional probability. The key idea of Markovity is that the conditional probability at a point should depend only on its neighbors. To be more precise, we need some terminology. Let ω denote an element of Ω; we call ω a realization. Let p(ω) denote the probability density function of ω; the p.d.f. p is associated with the Lebesgue measure P. Let ω(x) be the value of the realization ω at the point x. For a subset A ⊂ D, suppose the values at points