v2010.10.26 - Convex Optimization

4.5.1.5.1 Example. Sparsest solution to Ax = b. [70] [118]
(confer Example 4.5.1.8.1)

Problem (682) has a sparsest solution not easily recoverable by least 1-norm; id est, not by compressed sensing, because of proximity to a theoretical lower bound on number of measurements m depicted in Figure 100 for A ∈ R^{m×n}. Given data from Example 4.2.3.1.1, for m = 3, n = 6, k = 1,

    A = \begin{bmatrix} -1 & 1 & 8 & 1 & 1 & 0 \\
                        -3 & 2 & 8 & \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{2}-\tfrac{1}{3} \\
                        -9 & 4 & 8 & \tfrac{1}{4} & \tfrac{1}{9} & \tfrac{1}{4}-\tfrac{1}{9} \end{bmatrix} ,
    \qquad
    b = \begin{bmatrix} 1 \\ \tfrac{1}{2} \\ \tfrac{1}{4} \end{bmatrix}        (682)

the sparsest solution to classical linear equation Ax = b is x = e_4 ∈ R^6 (confer (695)).

Although the sparsest solution is recoverable by inspection, we discern it instead by convex iteration; namely, by iterating problem sequence (156) (525) on page 334. From the numerical data given, cardinality ‖x‖_0 = 1 is expected. Iteration continues until x^T y vanishes (to within some numerical precision); id est, until desired cardinality is achieved. But this comes not without a stall.

Stalling, whose occurrence is sensitive to initial conditions of convex iteration, is a consequence of finding a local minimum of a multimodal objective ⟨x, y⟩ when regarded as simultaneously variable in x and y (§3.8.0.0.3). Stalls are simply detected as fixed points x of infeasible cardinality, sometimes remedied by reinitializing direction vector y to a random positive state.

Bolstered by success in breaking out of a stall, we then apply convex iteration to 22,000 randomized problems: given random data for m = 3, n = 6, k = 1, in Matlab notation

    A=randn(3,6), index=round(5*rand(1))+1, b=rand(1)*A(:,index)        (769)

the sparsest solution x ∝ e_index is a scaled standard basis vector. Without convex iteration or a nonnegativity constraint x ≽ 0, the rate of failure for this minimal-cardinality problem Ax = b by 1-norm minimization of x is 22%. That failure rate drops to 6% with a nonnegativity constraint. If we then engage convex iteration, detect stalls, and randomly reinitialize the direction vector, the failure rate drops to 0%, but the amount of computation is approximately doubled.
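That experiment is straightforward to reproduce. Below is a minimal Matlab sketch, not the implementation used to generate the statistics above: it assumes the Optimization Toolbox function linprog, draws one random instance per (769), and alternates the two steps of convex iteration, solving minimize ⟨x, y⟩ subject to Ax = b, x ≽ 0, then updating the direction vector y to be zero on the k largest entries of x and one elsewhere, a closed form assumed here for the direction-vector step (525). A stall is detected as a repeated solution x of infeasible cardinality and remedied by reinitializing y to a random positive state.

    % convex iteration for the sparsest nonnegative solution to Ax = b (sketch)
    m = 3;  n = 6;  k = 1;  tol = 1e-9;
    A = randn(m,n);  index = round(5*rand(1)) + 1;  b = rand(1)*A(:,index);  % random data (769)
    y = ones(n,1);                        % initial direction vector: first pass is 1-norm minimization
    xprev = zeros(n,1);
    for iter = 1:100
        x = linprog(y, [],[], A, b, zeros(n,1), []);  % minimize <x,y> subject to Ax = b, x >= 0
        [~,idx] = sort(x,'descend');
        y = ones(n,1);  y(idx(1:k)) = 0;  % direction vector: zero on the k largest entries of x
        if x'*y < tol, break, end         % desired cardinality achieved
        if norm(x - xprev) < tol          % stall: fixed point of infeasible cardinality
            y = rand(n,1);                % reinitialize to a random positive state
        end
        xprev = x;
    end
    nnz(x > tol)                          % cardinality at termination, expected to equal k

Repeating this over many random instances, and counting how often the recovered x fails to be a scaled standard basis vector, gives an empirical failure rate to compare with the figures quoted above.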

Stalling is not an inevitable behavior. For some problem types (beyond mere Ax = b), convex iteration succeeds nearly all the time. Here is a cardinality problem, with noise, whose statement is just a bit more intricate but easy to solve in a few convex iterations:

4.5.1.5.2 Example. Signal dropout. [121, §6.2]
Signal dropout is an old problem, well studied from both industrial and academic perspectives. Essentially, dropout means a momentary loss or gap in a signal while it passes through some channel, caused by some man-made or natural phenomenon. The lost signal is assumed completely destroyed somehow; what remains within the time gap is system or idle channel noise. The signal could be voice over Internet protocol (VoIP), for example, audio data from a compact disc (CD) or video data from a digital video disc (DVD), a television transmission over cable or the airwaves, or a typically ravaged cell phone communication, etcetera.

Here we consider signal dropout in a discrete-time signal corrupted by additive white noise assumed uncorrelated with the signal. The linear channel is assumed to introduce no filtering. We create a discretized windowed signal for this example by positively combining k randomly chosen vectors from a discrete cosine transform (DCT) basis denoted Ψ ∈ R^{n×n}. Frequency increases, in the Fourier sense, from DC toward Nyquist as the column index of basis Ψ increases. Otherwise, details of the basis are unimportant except for its orthogonality Ψ^T = Ψ^{-1}. The transmitted signal is denoted

    s = Ψz ∈ R^n        (770)

whose upper bound on DCT basis coefficient cardinality, card z ≤ k, is assumed known;^4.34 hence a critical assumption: transmitted signal s is sparsely supported (k < n) on the DCT basis. It is further assumed that the nonzero signal coefficients in vector z place each chosen basis vector above the noise floor.

^4.34 This simplifies exposition, although it may be an unrealistic assumption in many applications.
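For concreteness, here is a small Matlab sketch of that signal model, constructing an orthonormal DCT basis Ψ explicitly and a transmitted signal s = Ψz per (770) with card z ≤ k, then corrupting it with additive white noise and a dropout gap within which only noise survives. The signal length n, sparsity k, gap location, and noise level are illustrative assumptions, not data from this example.

    % sparsely supported signal on an orthonormal DCT basis, plus noise and dropout (sketch)
    n = 500;  k = 4;
    j = 0:n-1;                                  % time and frequency indices
    Psi = sqrt(2/n)*cos(pi*(2*j'+1).*j/(2*n));  % DCT-II basis vectors as columns of Psi
    Psi(:,1) = sqrt(1/n);                       % DC column, so that Psi'*Psi = eye(n)
    z = zeros(n,1);
    z(randperm(n,k)) = 1 + rand(1,k);           % k positive coefficients above the noise floor
    s = Psi*z;                                  % transmitted signal (770)
    noise = 0.02*randn(n,1);                    % additive white noise, uncorrelated with s
    gap = 200:260;                              % dropout interval: transmitted signal destroyed there
    received = s + noise;
    received(gap) = noise(gap);                 % only idle channel noise remains within the gap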
