44 CHAPTER 3. IMAGE TRANSFORMS AND IMAGE FEATURES

then we have

- DST-I: $S_N(k,l) = \sqrt{\frac{2}{N}} \sin \frac{\pi k l}{N}$;
- DST-II: $S_N(k,l) = b_{k+1} \sqrt{\frac{2}{N}} \sin \frac{\pi (k+1)(l+\frac{1}{2})}{N}$;
- DST-III: $S_N(k,l) = b_{l+1} \sqrt{\frac{2}{N}} \sin \frac{\pi (k+\frac{1}{2})(l+1)}{N}$;
- DST-IV: $S_N(k,l) = \sqrt{\frac{2}{N}} \sin \frac{\pi (k+\frac{1}{2})(l+\frac{1}{2})}{N}$.

For all four types of DST, the transform matrices $\{S_N(k,l)\}_{k,l=0,1,\dots,N-1}$ are orthogonal (and also unitary). If $\mathrm{DST}^{\mathrm{II}}_N$ and $\mathrm{DST}^{\mathrm{III}}_N$ denote the DST-II and DST-III operators, then, similar to the DCT, we have

$$\mathrm{DST}^{\mathrm{II}}_N = \left(\mathrm{DST}^{\mathrm{III}}_N\right)^{-1}.$$

3.1.4 Homogeneous Components

The reason that the DCT is so powerful in analyzing homogeneous signals is that it is nearly (in some asymptotic sense) the Karhunen-Loève transform (KLT) of some Gaussian Markov random fields (GMRFs). In this section, we first give the definition of a Gaussian Markov random field; then we argue that the covariance matrix is the key statistic of a GMRF; next, for a covariance matrix, we give the necessary and sufficient conditions for diagonalizability by the different types of DCT; finally, we conclude that under some appropriate boundary conditions, the DCT is the KLT of GMRFs. As we stated earlier, in this thesis not much attention is given to mathematical rigor.

Gaussian Markov Random Field

This subsubsection is organized in the following way: we start with the definition of a random field, then introduce the definitions of a Markov random field and a Gibbs random field; the Hammersley-Clifford theorem establishes an equivalence between a Markov random field and a Gibbs random field; we then describe the definition of a Gaussian Markov random field; eventually we argue that the DCT is a KLT of a GMRF.
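The identity $\mathrm{DST}^{\mathrm{II}}_N = (\mathrm{DST}^{\mathrm{III}}_N)^{-1}$ can be checked numerically. The sketch below is not from the thesis; it assumes the normalization $b_j = 1/\sqrt{2}$ for $j = N$ and $b_j = 1$ otherwise (the analogue of the DCT weight defined earlier in the chapter), with indices $k, l = 0, \dots, N-1$:

```python
import numpy as np

def dst2_matrix(N):
    """Orthonormal DST-II matrix:
    S2[k, l] = b_{k+1} * sqrt(2/N) * sin(pi*(k+1)*(l+1/2)/N),
    where b_j = 1/sqrt(2) for j = N and 1 otherwise (assumed normalization)."""
    k, l = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    b = np.where(k + 1 == N, 1.0 / np.sqrt(2.0), 1.0)
    return b * np.sqrt(2.0 / N) * np.sin(np.pi * (k + 1) * (l + 0.5) / N)

def dst3_matrix(N):
    """Orthonormal DST-III matrix:
    S3[k, l] = b_{l+1} * sqrt(2/N) * sin(pi*(k+1/2)*(l+1)/N)."""
    k, l = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    b = np.where(l + 1 == N, 1.0 / np.sqrt(2.0), 1.0)
    return b * np.sqrt(2.0 / N) * np.sin(np.pi * (k + 0.5) * (l + 1) / N)

N = 8
S2, S3 = dst2_matrix(N), dst3_matrix(N)

# Orthogonality of DST-II: S2 * S2^T = I.
assert np.allclose(S2 @ S2.T, np.eye(N))
# DST-III is the transpose of DST-II, hence its inverse.
assert np.allclose(S3, S2.T)
assert np.allclose(S2 @ S3, np.eye(N))
```

Since each matrix is real and orthogonal, inversion is just transposition, which is why the DST-II/DST-III pair (like the DCT-II/DCT-III pair) serves as a forward/inverse transform pair.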

3.1. DCT AND HOMOGENEOUS COMPONENTS 45

Definition of a random field. We define a random field on a lattice. Let $\mathbb{Z}^d$ denote the d-dimensional integers, or the lattice points, in d-dimensional space, which is denoted by $\mathbb{R}^d$. The finite set $D$ is a subset of $\mathbb{Z}^d$: $D \subset \mathbb{Z}^d$. For two lattice points $x, y \in \mathbb{Z}^d$, let $|x - y|$ denote the Euclidean distance between $x$ and $y$. The set $D$ is connected if and only if for any $x, y \in D$, there exists a finite subset $\{x_1, x_2, \dots, x_n\}$ of $D$, $n \in \mathbb{N}$, such that (1) $|x - x_1| \le 1$, (2) $|x_i - x_{i+1}| \le 1$, $i = 1, 2, \dots, n-1$, and (3) $|x_n - y| \le 1$. We call a connected set $D$ a domain. The dimension of the set $D$ is, by definition, the number of integer points in the set $D$. We denote the dimension of $D$ by $\dim(D)$. On each lattice point in the set $D$, a real value is assigned. The set $\mathbb{R}^D$, which is equivalent to $\mathbb{R}^{\dim(D)}$, is called a state space. Following convention, we denote the state space by $\Omega$, so we have $\Omega = \mathbb{R}^{\dim(D)}$. Let $\mathcal{F}$ be the $\sigma$-algebra generated from the Borel sets in $\Omega$, and let $P$ be the Lebesgue measure. The triple $(\Omega, \mathcal{F}, P)$ is called a random field (RF) on the domain $D$.

Note that we define a random field on a subset of all the lattice points.

Now we give the definition of a neighbor. Intuitively, under the Euclidean distance, two integer (lattice) points $x$ and $y$ are neighbors when $|x - y| \le 1$. This definition can be extended. We define a non-negative, symmetric, and translation-invariant bivariate function $N(x, y)$ on the domain $D$, such that for $x, y \in D$, the function $N$ satisfies

1. $N(x, x) = 0$,
2. $N(x, y) \ge 0$ (non-negativity),
3. $N(x, y) = N(y, x)$ (symmetry),
4. $N(x, y) = N(0, y - x)$ (homogeneity, or translation invariance).

Any two points are called neighbors if and only if $N(x, y) > 0$. For example, in Euclidean space, if we let $N(x, y) = 1$ when $|x - y| = 1$ and $N(x, y) = 0$ elsewhere, then we recover the ordinary definition of a neighbor mentioned at the beginning of this paragraph.

Definition of a Markov random field.
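Before turning to the Markov property, the example neighbor function just given ($N(x, y) = 1$ exactly when $|x - y| = 1$, and $0$ elsewhere) can be written down in a few lines. This sketch, with function names of my own choosing, checks the four required properties on the 2-d integer lattice:

```python
import numpy as np

def neighbor_weight(x, y):
    """Example neighbor function: N(x, y) = 1 when |x - y| = 1, else 0.

    It satisfies the four required properties: N(x, x) = 0,
    non-negativity, symmetry, and translation invariance
    (it depends on x and y only through y - x)."""
    x, y = np.asarray(x), np.asarray(y)
    return 1.0 if np.linalg.norm(x - y) == 1.0 else 0.0

def are_neighbors(x, y):
    """Two points are neighbors if and only if N(x, y) > 0."""
    return neighbor_weight(x, y) > 0

# On the 2-d integer lattice this recovers 4-connectivity.
assert are_neighbors((0, 0), (0, 1))
assert not are_neighbors((0, 0), (1, 1))   # distance sqrt(2) > 1
assert not are_neighbors((0, 0), (0, 0))   # N(x, x) = 0
# Translation invariance: N(x, y) = N(0, y - x).
assert neighbor_weight((3, 4), (3, 5)) == neighbor_weight((0, 0), (0, 1))
```

Under this particular $N$, each interior lattice point in $\mathbb{Z}^2$ has exactly four neighbors; other choices of $N$ give larger neighborhoods.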
The definition of a Markov random field is based upon conditional probability. The key idea of Markovity is that the conditional probability should depend only on neighbors. To be more precise, we need some terminology. Let $\omega$ denote an element of $\Omega$; we call $\omega$ a realization. Let $p(\omega)$ denote the probability density function of $\omega$; the p.d.f. $p$ is associated with the Lebesgue measure $P$. Let $\omega(x)$ be the value of the realization $\omega$ at the point $x$. For a subset $A \subset D$, suppose the values at points

