v2010.10.26 - Convex Optimization



382 CHAPTER 4. SEMIDEFINITE PROGRAMMING

[Figure 111: Real absolute value function f₂(x) = |x| on x ∈ [−1, 1] from Figure 68b, superimposed upon the integral of its derivative ∫₋₁ˣ y/(|y|+ε) dy at ε = 0.05, which smooths the objective function.]

convex iteration

By convex iteration we mean alternation of solution to (856a) and (862) until convergence. Direction vector y is initialized to 1 until the first fixed point is found; which means, the contraction recursion begins by calculating a (1-norm) solution U⋆ to (854) via problem (856b). Once U⋆ is found, vector y is updated according to an estimate of discrete image-gradient cardinality c: the sum of the 4n²−c smallest entries of |Ψ vec U⋆| ∈ R^{4n²} is the optimal objective value of a linear program, for 0 ≤ c ≤ 4n²−1 (525)

\[
\sum_{i=c+1}^{4n^2} \pi\bigl(|\Psi\,\mathrm{vec}\,U^\star|\bigr)_i \;=\;
\begin{array}[t]{cl}
\underset{y\,\in\,\mathbb{R}^{4n^2}}{\mathrm{minimize}} & \langle\, |\Psi\,\mathrm{vec}\,U^\star|\,,\,y \,\rangle\\[2pt]
\mathrm{subject\ to} & 0 \preceq y \preceq \mathbf{1}\\
& y^{\mathrm T}\mathbf{1} = 4n^2 - c
\end{array}
\qquad (862)
\]

where π is the nonlinear permutation operator sorting its vector argument into nonincreasing order. An optimal solution y to (862), that is an extreme point of its feasible set, is known in closed form: it has 1 in each entry corresponding to the 4n²−c smallest entries of |Ψ vec U⋆| and has 0 elsewhere (page 336). Updated image U⋆ is assigned to U_t, the contraction is recomputed solving (856b), direction vector y is updated again, and so on until convergence, which is guaranteed by virtue of a monotonically nonincreasing real sequence of objective values in (856a) and (862).
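The closed-form extreme-point solution of (862) just described amounts to a sort: place 1 in the entries matching the 4n²−c smallest entries of |Ψ vec U⋆| and 0 elsewhere. A minimal sketch, using a small hypothetical vector g standing in for |Ψ vec U⋆|:

```python
def direction_vector(g, c):
    # Closed-form extreme point of LP (862): y has 1 in each entry
    # corresponding to the len(g)-c smallest entries of g, 0 elsewhere.
    order = sorted(range(len(g)), key=lambda i: g[i])  # indices, ascending by value
    y = [0.0] * len(g)
    for i in order[: len(g) - c]:
        y[i] = 1.0
    return y

# hypothetical gradient-magnitude vector and cardinality estimate
g = [5.0, 0.1, 3.0, 0.2, 0.0, 4.0]
c = 2
y = direction_vector(g, c)
# objective <g, y> equals the sum of the len(g)-c = 4 smallest entries of g
obj = sum(gi * yi for gi, yi in zip(g, y))
```

The optimal y zeroes out exactly the c largest entries of g, so minimizing ⟨g, y⟩ over the hypercube slice recovers the sum of smallest entries, as the permutation identity in (862) states.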

4.6. CARDINALITY AND RANK CONSTRAINT EXAMPLES 383

There are two features that distinguish problem formulation (856b) and our particular implementation of it [Matlab on Wıκımization]:

1) An image-gradient estimate may engage any combination of four adjacent pixels. In other words, the algorithm is not locked into a four-point gradient estimate (Figure 110); the number of points constituting an estimate is directly determined by direction vector y.^4.59 Indeed, we find only c = 5092 zero entries in y⋆ for the Shepp-Logan phantom; meaning, discrete image-gradient sparsity is actually closer to 1.9% than the 3% reported elsewhere; e.g., [355, II B].

2) Numerical precision of the fixed point of contraction (860) (≈1E-2 for almost perfect reconstruction at −103dB error) is a parameter to the implementation; meaning, direction vector y is updated after contraction begins but prior to its culmination. The impact of this idiosyncrasy tends toward simultaneous optimization in variables U and y while ensuring that y settles on a boundary point of its feasible set (nonnegative hypercube slice) in (862) at every iteration; for only a boundary point^4.60 can yield the sum of smallest entries in |Ψ vec U⋆|.

Almost perfect reconstruction of the Shepp-Logan phantom at 103dB image/error is achieved in a Matlab minute with 4.1% subsampled data (2671 complex samples); well below an 11% least lower bound predicted by the sparse sampling theorem. Because reconstruction approaches the optimal solution to a 0-norm problem, the minimum number of Fourier-domain samples is bounded below by the cardinality of the discrete image-gradient, at 1.9%.

4.6.0.0.13 Exercise. Contraction operator.
Determine conditions on λ and ε under which (860) is a contraction and Ψᵀ δ(y) δ(|Ψ vec U_t| + ε1)⁻¹ Ψ + λP from (861) is positive definite.

4.59 This adaptive gradient was not contrived. It is an artifact of the convex iteration method for minimal cardinality solution; in this case, cardinality minimization of a discrete image-gradient.
4.60 Simultaneous optimization of these two variables U and y should never be a pinnacle of aspiration; for then, optimal y might not attain a boundary point.
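The operator in the exercise can be probed numerically. A minimal sketch, assuming δ(·) denotes the diagonal-matrix operator and taking P = I for illustration (Ψ, y, and the stand-in for |Ψ vec U_t| below are hypothetical random data, not the book's actual problem instance):

```python
import random

def weighted_operator(Psi, y, u_abs, eps, lam):
    # Build A = Psi^T diag(y) diag(u_abs + eps*1)^{-1} Psi + lam*I,
    # mirroring the exercise's operator with P = I assumed.
    m, n = len(Psi), len(Psi[0])
    w = [y[k] / (u_abs[k] + eps) for k in range(m)]        # weights >= 0
    A = [[lam * (i == j) for j in range(n)] for i in range(n)]
    for k in range(m):                                     # add w_k * psi_k psi_k^T
        for i in range(n):
            for j in range(n):
                A[i][j] += w[k] * Psi[k][i] * Psi[k][j]
    return A

random.seed(1)
m, n, eps, lam = 8, 3, 0.05, 1e-3
Psi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [random.random() for _ in range(m)]                    # 0 <= y <= 1
u_abs = [abs(random.gauss(0, 1)) for _ in range(m)]        # stand-in for |Psi vec U_t|
A = weighted_operator(Psi, y, u_abs, eps, lam)
x = [random.gauss(0, 1) for _ in range(n)]
# x^T A x = lam*||x||^2 + sum_k w_k (psi_k . x)^2, strictly positive for x != 0
quad = sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
```

Since the weighted term is a nonnegative sum of rank-one outer products and ε > 0 keeps the weights finite, any λ > 0 with positive definite P (here the identity) makes the whole operator positive definite; the interesting part of the exercise is characterizing when it is also a contraction.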

