v2009.01.01 - Convex Optimization
336 CHAPTER 4. SEMIDEFINITE PROGRAMMING

image-gradient sparsity is actually closer to 1.9% than the 3% reported elsewhere; e.g., [308, §II B].

2) Numerical precision (≈1E-2) of the fixed point of contraction (771) is a parameter to the implementation; meaning, direction vector y is typically updated after fixed-point recursion begins but prior to its culmination. The impact of this idiosyncrasy tends toward simultaneous optimization in variables U and y while ensuring that vector y settles on a boundary point of the feasible set (nonnegative hypercube slice) in (467) at every iteration; for only a boundary point^4.53 can yield the sum of smallest entries in |Ψ vec U⋆|.

Reconstruction of the Shepp-Logan phantom at 103dB image/error is achieved in a Matlab minute with 4.1% subsampled data; well below the 11% least lower bound predicted by the sparse sampling theorem. Because the reconstruction approaches an optimal solution to a 0-norm problem, the minimum number of Fourier-domain samples is bounded below by the cardinality of the image gradient, at 1.9%.

4.6.0.0.11 Example. Compressed sensing, compressive sampling. [260]

    As our modern technology-driven civilization acquires and exploits ever-increasing amounts of data, everyone now knows that most of the data we acquire can be thrown away with almost no perceptual loss − witness the broad success of lossy compression formats for sounds, images, and specialized technical data. The phenomenon of ubiquitous compressibility raises very natural questions: Why go to so much effort to acquire all the data when most of what we get will be thrown away? Can't we just directly measure the part that won't end up being thrown away?
        −David Donoho [100]

Lossy data compression techniques are popular, but it is also well known that compression artifacts become quite perceptible with signal processing that goes beyond mere playback of a compressed signal. [191] [211] Spatial or audio frequencies presumed masked by a simultaneity, for example, become perceptible with significant post-filtering of the compressed signal. Further, there can be no universally acceptable and unique metric of perception for gauging exactly how much data can be tossed. For these reasons, there will always be a need for raw (uncompressed) data.

^4.53 Simultaneous optimization of these two variables U and y should never be a pinnacle of aspiration; for then, optimal y might not attain a boundary point.
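The role of the direction vector above can be illustrated by its terminal selection rule: a boundary point of the hypercube slice is a 0/1 vector marking the smallest entries of |Ψ vec U⋆|, so that the inner product with y equals their sum. The following is a minimal sketch in Python (rather than the book's Matlab) of that selection rule only; the problem data Ψ, (467), and the contraction (771) are not reproduced, and the array w below is a hypothetical stand-in for |Ψ vec U⋆|.

```python
import numpy as np

def direction_vector(w, k):
    """Boundary-point direction vector for a target cardinality k.

    Given nonnegative w (standing in for |Psi vec U*|), return a 0/1
    vector y marking the n-k smallest entries of w, so w @ y equals the
    sum of those n-k smallest entries.  Such a y is a vertex (boundary
    point) of the hypercube slice {y in [0,1]^n : 1'y = n-k}.
    """
    n = w.size
    y = np.zeros(n)
    y[np.argsort(w)[: n - k]] = 1.0   # mark the n-k smallest entries
    return y

w = np.array([0.01, 3.2, 0.0, 1.5, 0.02])   # hypothetical |Psi vec U*|
y = direction_vector(w, k=2)                # seek a cardinality-2 solution
print(y, w @ y)                             # w @ y = sum of 3 smallest entries
```

At convergence of convex iteration, w @ y vanishing certifies that the desired cardinality was attained; as the text notes, y here is updated before the fixed-point recursion culminates, a scheduling detail this sketch does not model.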
4.6. CARDINALITY AND RANK CONSTRAINT EXAMPLES 337

Figure 90: The Massachusetts Institute of Technology (MIT) logo, including its white boundary, may be interpreted as a rank-5 matrix. (The Stanford University logo's rank is much higher.) This constitutes Scene Y observed by the one-pixel camera in Figure 91 for Example 4.6.0.0.11.

In this example we throw out only so much information as to leave perfect reconstruction within reach. Specifically, the MIT logo in Figure 90 is perfectly reconstructed from 700 time-sequential samples {y_i} acquired by the one-pixel camera illustrated in Figure 91. The MIT-logo image in this example effectively impinges a 46×81 micromirror array. This mirror array is modulated by a pseudonoise source that independently positions all the individual mirrors. A single photodiode (one pixel) integrates incident light from all mirrors. After the mirrors stabilize to a fixed but pseudorandom pattern, the light so collected is digitized into one sample y_i by analog-to-digital (A/D) conversion. This sampling process is repeated with the micromirror array modulated to a new pseudorandom pattern.

The most important questions are: How many samples do we need for perfect reconstruction? Does that number of samples represent compression of the original data?

We claim that perfect reconstruction of the MIT logo can be reliably achieved with as few as 700 samples y = [y_i] ∈ R^700 from this one-pixel camera. That number represents only 19% of the information obtainable from 3726 micromirrors.^4.54

^4.54 That number (700 samples) is difficult to achieve, as reported in [260, §6]. If a minimal basis for the MIT logo were instead constructed, only five rows' or columns' worth of data (from a 46×81 matrix) would be independent. This means a lower bound on achievable compression is about 230 samples, which corresponds to 6% of the original information.
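The acquisition process just described amounts to taking inner products of pseudorandom mirror patterns with the vectorized scene. The following Python sketch (rather than the book's Matlab) simulates only that measurement model under stated assumptions: a random binary image stands in for the MIT logo, each mirror either reflects light toward the photodiode (1) or away from it (0), and the reconstruction step actually solved in this example is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for Scene Y: a 46x81 binary image
# (the MIT logo itself is not reproduced here), vectorized.
scene = rng.integers(0, 2, size=(46, 81)).astype(float)
x = scene.ravel()
n = x.size                  # 3726 micromirrors
m = 700                     # time-sequential samples

# Each row of Phi is one fixed but pseudorandom mirror pattern.
Phi = rng.integers(0, 2, size=(m, n)).astype(float)

# The photodiode integrates incident light from all mirrors;
# A/D conversion yields one sample y_i per pattern.
y = Phi @ x

print(n, m / n)             # 700/3726, roughly 19% of the scene's entries
```

Note that 700 measurements of a 3726-entry scene give the 19% figure claimed in the text, while the rank-5 structure of the logo (five independent rows of 46 entries, i.e., about 230 numbers) explains the 6% lower bound in the footnote.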