
2.2 Sparsity and Compression

average length of a bit string that is necessary to code a signal in a quantization-compression scheme. A variety of schemes have been developed in the literature, and in different circumstances some of them approach this entropy lower bound. The field that includes this research is called vector quantization.
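To make the entropy lower bound concrete: by Shannon's source-coding theorem, no prefix code for the quantizer output can use fewer bits per sample, on average, than the entropy of the cell probabilities. Below is a minimal sketch, assuming a uniform scalar quantizer applied to i.i.d. Gaussian samples; the step size and variable names are illustrative, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)          # i.i.d. Gaussian "coefficients"

step = 0.5                                # uniform quantizer step (illustrative)
cells = np.round(x / step).astype(int)    # index of each sample's quantization cell

# Empirical cell probabilities and their entropy: by the source-coding
# theorem, no prefix code for the cell indices can beat this average length.
_, counts = np.unique(cells, return_counts=True)
p = counts / counts.sum()
print(f"entropy lower bound: {-(p * np.log2(p)).sum():.3f} bits/sample")
```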

Furthermore, if the coefficients after a certain transform are independent and identically distributed (IID), then we can deploy the same scalar quantizer for every coordinate. Such a scalar quantizer can nearly achieve the information-theoretic lower bound (the lower bound specified by information theory). For example, one can use the Lloyd-Max quantizer.
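A minimal sketch of the Lloyd-Max iteration follows, assuming i.i.d. Gaussian coefficients; the function name and parameters are illustrative. The algorithm alternates two optimality conditions: each decision boundary sits midway between adjacent representation levels, and each level is the conditional mean of the samples falling in its cell.

```python
import numpy as np

def lloyd_max(samples, n_levels=8, n_iters=200, tol=1e-9):
    """Lloyd-Max scalar quantizer: alternate between midpoint decision
    boundaries and conditional-mean representation levels."""
    # Start the levels at evenly spaced empirical quantiles.
    q = np.linspace(0.5 / n_levels, 1.0 - 0.5 / n_levels, n_levels)
    levels = np.quantile(samples, q)
    for _ in range(n_iters):
        edges = (levels[:-1] + levels[1:]) / 2          # midpoint boundaries
        cells = np.digitize(samples, edges)             # nearest-level cell index
        new_levels = np.array([samples[cells == k].mean()
                               if np.any(cells == k) else levels[k]
                               for k in range(n_levels)])
        if np.max(np.abs(new_levels - levels)) < tol:
            break
        levels = new_levels
    return levels

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
levels = lloyd_max(x)
edges = (levels[:-1] + levels[1:]) / 2
xq = levels[np.digitize(x, edges)]                      # quantized signal
print(f"MSE of 8-level Lloyd-Max on N(0,1): {np.mean((x - xq) ** 2):.4f}")
```

Each iteration can only decrease the mean squared quantization error, so the levels converge to a (locally) optimal scalar quantizer.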

The statistical (or entropic) approach is elegant, but a practical disadvantage is that in image coding we generally do not know the statistical distribution of an image. Images come from an extremely high-dimensional space: suppose a digital image has 256 × 256 pixels and each pixel is assigned a real value; the dimensionality of the image space is then 65,536. If we view every image as a sample from this high-dimensional space, the number of available samples is far smaller than the dimensionality of the space.

Consider how many images we can actually find on the internet: the size of an image database is usually not beyond three digits; examples include the USC image database and the database at the Center for Imaging Science Sensor. It is hard to infer a probability distribution in such a high-dimensional space from such a small sample. This motivates us to find a non-stochastic approach, which is also an empirical approach; by "empirical" we mean that the method depends only on the observations.

The key to the non-stochastic approach is to measure empirical sparsity. We focus on the ideas that have been developed in [46, 47]. The measure is called the weak $\ell^p$ norm. The following two subsections answer two questions:

1. What is the weak $\ell^p$ norm, and what are the scientific reasons to introduce such a norm? (Answered in Section 2.2.1.)

2. How is the weak $\ell^p$ norm related to the asymptotic efficiency of coding? (Answered in Section 2.2.2.)

2.2.1 Weak $\ell^p$ Norm

We start with the definition of the weak $\ell^p$ norm for $1 \le p \le 2$, then describe its connection with the compression number, the measure of numerosity, and the rate of recovery; finally, we introduce the concept of the critical index, which plays an important role in the next section (Section 2.2.2).
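As a preview, the standard definition in this literature is $\|\theta\|_{w\ell^p} = \sup_{k \ge 1} k^{1/p}\,|\theta|_{(k)}$, where $|\theta|_{(k)}$ denotes the $k$-th largest coefficient magnitude; it is finite exactly when the sorted magnitudes decay at least as fast as $k^{-1/p}$. Here is a minimal sketch of computing it empirically; the function name is illustrative, and the definition is assumed from the cited literature rather than stated in this excerpt:

```python
import numpy as np

def weak_lp_norm(theta, p):
    """sup over k of k**(1/p) * |theta|_(k), where |theta|_(k) is the
    k-th largest magnitude; finite iff magnitudes decay like k**(-1/p)."""
    mags = np.sort(np.abs(np.asarray(theta, dtype=float)))[::-1]  # descending
    k = np.arange(1, mags.size + 1)
    return float(np.max(k ** (1.0 / p) * mags))

theta = 1.0 / np.arange(1, 10_001)        # magnitudes decaying like 1/k
print(weak_lp_norm(theta, p=1.0))         # sup_k k * (1/k) = 1.0
```

For $\theta_k = 1/k$ the weak $\ell^1$ norm equals 1, while the ordinary $\ell^1$ norm (the harmonic sum) diverges; this is precisely the sense in which the weak norm is a weaker, more forgiving measure of sparsity.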
