16 CHAPTER 2. SPARSITY IN IMAGE CODING

First, the formulation. Suppose θ = {θ_i : i ∈ ℕ} is an infinite-length real-valued sequence: θ ∈ ℝ^∞. Further assume that the sequence θ can be sorted by absolute value, and let the sorted sequence be {|θ|_(i), i ∈ ℕ}, where |θ|_(i) is the i-th largest absolute value. The weak ℓ^p norm of the sequence θ is defined as

\[ |\theta|_{w\ell^p} = \sup_i \, i^{1/p} \, |\theta|_{(i)}. \tag{2.3} \]

Note that this is a quasi-norm. (A norm satisfies the triangle inequality, ‖x + y‖ ≤ ‖x‖ + ‖y‖; a quasi-norm satisfies only a quasi-triangle inequality, ‖x + y‖ ≤ K(‖x‖ + ‖y‖) for some K > 1.) The weak ℓ^p norm has a close connection with three other measures of vector sparsity.

Numerosity

A straightforward way to measure the sparsity of a vector is via its numerosity: the number of elements whose amplitudes exceed a given threshold δ. In more mathematical language, for a fixed real value δ, the numerosity is #{i : |θ_i| > δ}. The following lemma is cited from [47].

Lemma 2.1 For any sequence θ, the following inequality holds:

\[ \#\{i : |\theta_i| > \delta\} \le |\theta|_{w\ell^p}^{p} \, \delta^{-p}, \qquad \delta > 0. \]

By the lemma, a small weak ℓ^p norm implies that few elements are significantly above zero. Since numerosity, which essentially counts the significantly large elements, is an obvious way to measure the sparsity of a vector, the weak ℓ^p norm is a measure of sparsity.

Compression Number

Another way to measure sparsity is the compression number, defined as

\[ c(n) = \Big( \sum_{i=n+1}^{\infty} |\theta|_{(i)}^{2} \Big)^{1/2}. \]

Again, the compression number is based on the sorted amplitudes. In an orthogonal basis we have an isometry: if we threshold by keeping the coefficients associated with the largest n amplitudes, then the compression number c(n) is the square root of the residual sum of squares (RSS) distortion of the signal reconstructed from those n coefficients. The following result can be found in [47].

Lemma 2.2 For any sequence θ, if m = 1/p − 1/2, the following inequality holds:

\[ c(N) \le \alpha_p \, N^{-m} \, |\theta|_{w\ell^p}, \qquad N \ge 1, \]

where α_p is a constant determined only by the value of p. By the lemma, a small weak ℓ^p norm implies a small compression number.

Rate of Recovery

The rate of recovery comes from statistics, particularly density estimation. For a sequence θ, the rate of recovery is defined as

\[ r(\epsilon) = \sum_{i=1}^{\infty} \min\{\theta_i^2, \epsilon^2\}. \]

Lemma 2.3 For any sequence θ, if r = 1 − p/2, the following inequality holds:

\[ r(\epsilon) \le \alpha'_p \, |\theta|_{w\ell^p}^{p} \, (\epsilon^2)^{r}, \qquad \epsilon > 0, \]

where α′_p is a constant. This implies that a small weak ℓ^p norm leads to a small rate of recovery. In some cases (for example, in density estimation) the rate of recovery is the chosen measure of sparsity, so the weak ℓ^p norm is a good measure of sparsity there as well. Lemma 1 in [46] shows that all of these measures are equivalent in an asymptotic sense.

Critical Index

To define the critical index of a functional space, we first need some new notation; a detailed discussion can be found in [47]. Suppose Θ is the functional space under consideration. (In the transform-coding scenario, the functional space Θ contains all the coefficient vectors.) An infinite-length sequence θ = {θ_i : i ∈ ℕ} is in a weak
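The four quantities above are easy to compute for finite truncations of a sequence. The following is a minimal Python sketch (not from the thesis; function names are my own) that evaluates the weak ℓ^p quasi-norm, numerosity, compression number, and rate of recovery for the power-law sequence θ_i = i^(−1/p), and checks the bound of Lemma 2.1 numerically:

```python
import math

def weak_lp_norm(theta, p):
    """Weak l^p quasi-norm: sup over i of i^(1/p) * (i-th largest |theta_i|)."""
    mags = sorted((abs(t) for t in theta), reverse=True)
    return max((i + 1) ** (1.0 / p) * m for i, m in enumerate(mags))

def numerosity(theta, delta):
    """Number of entries whose amplitude exceeds the threshold delta."""
    return sum(1 for t in theta if abs(t) > delta)

def compression_number(theta, n):
    """c(n): l^2 norm of the tail after keeping the n largest amplitudes."""
    mags = sorted((abs(t) for t in theta), reverse=True)
    return math.sqrt(sum(m * m for m in mags[n:]))

def recovery_rate(theta, eps):
    """r(eps) = sum_i min(theta_i^2, eps^2)."""
    return sum(min(t * t, eps * eps) for t in theta)

# Finite truncation of theta_i = i^(-1/p); for p = 0.5, |theta|_(i) = i^(-2)
# and the weak l^p norm sup_i i^(1/p) |theta|_(i) is exactly 1.
p = 0.5
theta = [i ** (-1.0 / p) for i in range(1, 10001)]

wlp = weak_lp_norm(theta, p)
delta = 0.01
# Lemma 2.1: #{i : |theta_i| > delta} <= |theta|_wlp^p * delta^(-p)
assert numerosity(theta, delta) <= wlp ** p * delta ** (-p)

print(wlp, numerosity(theta, delta), compression_number(theta, 100), recovery_rate(theta, delta))
```

For this sequence the Lemma 2.1 bound gives at most 10 entries above δ = 0.01, and the count is 9, so the bound is nearly tight here; the compression number and recovery rate shrink rapidly, as Lemmas 2.2 and 2.3 predict for a sequence with small weak ℓ^p norm.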