Convex Optimization (v2009.01.01)

...image-gradient sparsity is actually closer to 1.9% than the 3% reported elsewhere; e.g., [308, II.B].

2) Numerical precision (≈1E-2) of the fixed point of contraction (771) is a parameter to the implementation; meaning, direction vector y is typically updated after the fixed-point recursion begins but prior to its culmination. The impact of this idiosyncrasy tends toward simultaneous optimization in variables U and y while ensuring that vector y settles on a boundary point of the feasible set (nonnegative hypercube slice) in (467) at every iteration; for only a boundary point (footnote 4.53) can yield the sum of smallest entries in |Ψ vec U⋆|. (A Matlab sketch of this interleaved update follows footnote 4.53 below.)

Reconstruction of the Shepp-Logan phantom at 103dB image/error is achieved in a Matlab minute with 4.1% subsampled data; well below an 11% least lower bound predicted by the sparse sampling theorem. Because reconstruction approaches the optimal solution to a 0-norm problem, the minimum number of Fourier-domain samples is bounded below by the cardinality of the image gradient, at 1.9%.

4.6.0.0.11 Example. Compressed sensing, compressive sampling. [260]

    As our modern technology-driven civilization acquires and exploits ever-increasing amounts of data, everyone now knows that most of the data we acquire can be thrown away with almost no perceptual loss − witness the broad success of lossy compression formats for sounds, images, and specialized technical data. The phenomenon of ubiquitous compressibility raises very natural questions: Why go to so much effort to acquire all the data when most of what we get will be thrown away? Can’t we just directly measure the part that won’t end up being thrown away?
        −David Donoho [100]

Lossy data compression techniques are popular, but it is also well known that compression artifacts become quite perceptible with signal processing that goes beyond mere playback of a compressed signal. [191] [211] Spatial or audio frequencies presumed masked by a simultaneity, for example, become perceptible with significant post-filtering of the compressed signal. Further, there can be no universally acceptable and unique metric of perception for gauging exactly how much data can be tossed. For these reasons, there will always be a need for raw (uncompressed) data.

Footnote 4.53: Simultaneous optimization of these two variables U and y should never be a pinnacle of aspiration; for then, optimal y might not attain a boundary point.
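The direction-vector update described in item 2) can be made concrete. The fragment below is a minimal Matlab sketch, not the implementation referenced in the text: the sizes n and k, the random Ψ, and the random iterate U are placeholder assumptions, and the feasible set is here taken as the hypercube slice {y : 0 ≤ y ≤ 1, 1ᵀy = n−k}. It illustrates why the minimizing direction vector is a boundary point: a linear objective over that slice is minimized at a vertex, which simply flags the n−k smallest entries of |Ψ vec U|.

```matlab
% Minimal sketch (assumed data, not the text's implementation) of the
% direction-vector update in item 2):
%   minimize    y'*abs(Psi*vec(U))
%   subject to  0 <= y <= 1,  sum(y) == n-k     (hypercube slice, cf. (467))
% A vertex (boundary point) solves this linear program: it places 1's on the
% n-k smallest entries of |Psi vec U| and 0's elsewhere, so the objective
% equals the sum of those smallest entries.

n   = 64;   k = 8;                 % placeholder dimensions and cardinality target
Psi = randn(n, n);                 % placeholder sparsifying transform
U   = randn(8, 8);                 % placeholder current iterate (vec U has n entries)

w        = abs(Psi * U(:));        % |Psi vec U|
[~, idx] = sort(w, 'ascend');      % order entries, smallest first
y        = zeros(n, 1);
y(idx(1:n-k)) = 1;                 % boundary point of the feasible slice

sum_smallest = y' * w;             % sum of the n-k smallest entries of |Psi vec U|
```

In a full convex iteration this update alternates with re-solving the convex problem in U under the new y; performing it before the fixed-point recursion in U has fully converged is what the text describes as tending toward simultaneous optimization of U and y.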

Figure 90: Massachusetts Institute of Technology (MIT) logo, including its white boundary, may be interpreted as a rank-5 matrix. (Stanford University logo rank is much higher.) This constitutes Scene Y observed by the one-pixel camera in Figure 91 for Example 4.6.0.0.11.

In this example we throw out only so much information as to leave perfect reconstruction within reach. Specifically, the MIT logo in Figure 90 is perfectly reconstructed from 700 time-sequential samples {y_i} acquired by the one-pixel camera illustrated in Figure 91. The MIT-logo image in this example effectively impinges on a 46×81 micromirror array device. This mirror array is modulated by a pseudonoise source that independently positions all the individual mirrors. A single photodiode (one pixel) integrates incident light from all mirrors. After stabilizing the mirrors in a fixed but pseudorandom pattern, the light so collected is digitized into one sample y_i by analog-to-digital (A/D) conversion. This sampling process is repeated with the micromirror array modulated to a new pseudorandom pattern. The most important questions are: How many samples do we need for perfect reconstruction? Does that number of samples represent compression of the original data?

We claim that perfect reconstruction of the MIT logo can be reliably achieved with as few as 700 samples y = [y_i] ∈ R^700 from this one-pixel camera. That number represents only 19% of the information obtainable from 3726 micromirrors (footnote 4.54).

Footnote 4.54: That number (700 samples) is difficult to achieve, as reported in [260, 6]. If a minimal basis for the MIT logo were instead constructed, only five rows' or columns' worth of data (from a 46×81 matrix) would be independent. This means a lower bound on achievable compression is about 230 samples, which corresponds to 6% of the original information.
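The sampling process just described has a direct linear-algebra reading: each stabilized pseudorandom mirror pattern acts as a 0/1 mask on the scene, and the photodiode output is the inner product of that mask with the vectorized scene. The Matlab fragment below is only a sketch of this measurement model; the random binary scene and patterns are placeholder assumptions standing in for the MIT logo of Figure 90 and the camera's pseudonoise source.

```matlab
% Sketch (assumed data) of the one-pixel camera measurement model: each sample
% y(i) integrates the scene through one pseudorandom mirror pattern, i.e.
% y(i) is the inner product of pattern i with vec(X).

rows = 46;  cols = 81;                      % micromirror array dimensions (3726 mirrors)
m    = 700;                                 % number of time-sequential samples

X   = double(rand(rows, cols) > 0.5);       % placeholder scene; the text's scene is the MIT logo
Phi = double(rand(m, rows*cols) > 0.5);     % one pseudorandom 0/1 mirror pattern per row

y = Phi * X(:);                             % y(i) = light collected under pattern i
```

Reconstruction then means recovering X from the underdetermined linear system y = Φ vec X, here 700 equations in 3726 unknowns, by exploiting the scene's low rank (rank 5, per Figure 90) or sparsity; the claim in the text is that 700 such samples suffice for perfect reconstruction, while footnote 4.54 notes a lower bound near 230 samples.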

