v2009.01.01 - Convex Optimization


Figure 89: Neighboring-pixel candidates from [309] for estimating image-gradient. Our implementation selects adaptively from darkest four • about central.

Because of P idempotence and Hermitian symmetry and sgn(κ) = κ/|κ| , this is equivalent to

$$\lim_{\epsilon\to 0}\left(\Psi^{T}\,\delta(y)\,\delta\bigl(|\Psi\operatorname{vec}U|+\epsilon\mathbf{1}\bigr)^{-1}\Psi+\lambda P\right)\operatorname{vec}U \;=\; \lambda P f \tag{770}$$

where small positive constant ε ∈ R+ has been introduced for invertibility. When small enough for practical purposes 4.51 (ε ≈ 1E-3), we may ignore the limiting operation. Then the mapping, for λ ≫ 1/ε and 0 ⪯ y ⪯ 1,

$$\operatorname{vec}U^{t+1} \;=\; \left(\Psi^{T}\,\delta(y)\,\delta\bigl(|\Psi\operatorname{vec}U^{t}|+\epsilon\mathbf{1}\bigr)^{-1}\Psi+\lambda P\right)^{-1}\lambda P f \tag{771}$$

is a contraction in U^t that can be solved recursively in t for its unique fixed point; id est, until U^{t+1} → U^t. [197, p.300] Calculating this inversion directly is not possible for large matrices on contemporary computers because of numerical precision, so we employ the conjugate gradient method of solution [128, 4.8.3.2] at each recursion in the Matlab program.

4.51 We are looking for at least 50dB image/error ratio from only 4.1% subsampled data (10 radial lines in k-space). With this setting of ε, we actually attain in excess of 100dB from a very simple Matlab program in about a minute on a 2006-vintage laptop. By trading execution time and treating image-gradient cardinality as a known quantity for this phantom, over 160dB is achievable.
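A minimal Matlab sketch of recursion (771) follows. It is not the author's Wıκımization program, only an illustration under assumptions: function handles Psi, PsiT, and applyP (the image-gradient operator, its adjoint, and projector P), data vector f, and direction vector y are presumed given; those names, λ, tolerances, and iteration limits are placeholders.

```matlab
% Minimal sketch of fixed-point recursion (771) solved by conjugate gradient.
% Assumed given (hypothetical names):
%   Psi(u)    - image-gradient operator applied to vec U (output length 4n^2)
%   PsiT(g)   - its adjoint
%   applyP(u) - projector P applied to vec U
%   f         - data vector appearing in lambda*P*f
%   y         - current direction vector, 0 <= y <= 1
ep     = 1e-3;              % epsilon of (770)-(771)
lambda = 1e5;               % lambda >> 1/ep
u      = applyP(f);         % initial image estimate
b      = lambda*applyP(f);  % right-hand side of (771)
for t = 1:50
    w = y ./ (abs(Psi(u)) + ep);                    % delta(y) delta(|Psi vec U^t| + ep*1)^-1
    A = @(x) PsiT(w .* Psi(x)) + lambda*applyP(x);  % operator inverted in (771)
    [unew,~] = pcg(A, b, 1e-6, 200, [], [], u);     % conjugate gradient solve [128, 4.8.3.2]
    if norm(unew - u) <= 1e-6*norm(u), u = unew; break, end
    u = unew;                                       % recurse until U^{t+1} -> U^t
end
```

The operator handed to pcg is symmetric, as pcg expects, and each solve is warm-started at the previous fixed-point iterate u.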


Observe that P (759), in the equality constraint from problem (765), is not a fat matrix. 4.52 Although number of Fourier samples taken is equal to the number of nonzero entries in binary mask Φ , matrix P is square but never actually formed during computation. Rather, a two-dimensional fast Fourier transform of U is computed, followed by masking with ΘΦΘ and then an inverse fast Fourier transform. This technique significantly reduces memory requirements and, together with contraction method of solution, is the principal reason for relatively fast computation when compared with previous methods.
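As an illustration of that technique (a sketch with our own hypothetical names, not necessarily how the Wıκımization program is organized), the action of P reduces to a few FFT calls; here Θ is read as the DC-centering permutation realized by Matlab's fftshift, which is an assumption about the indexing convention.

```matlab
% Sketch: projector P applied to an image U by 2-D FFT, masking, inverse 2-D FFT.
% Phi is a binary mask (same size as U) selecting the sampled k-space locations,
% e.g. 10 radial lines; the matrix P itself is never formed.
applyP = @(U,Phi) ifft2( ifftshift( Phi .* fftshift(fft2(U)) ) );
```

Usage is simply PU = applyP(U,Phi); memory cost is that of a few n-by-n arrays rather than an explicit n²-by-n² matrix P.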

convex iteration

Direction vector y is initialized to 1 until the first fixed point is found; which means, the first sequence of contractions begins calculating the (1-norm) solution U⋆ to (765) via problem (767). Once U⋆ is found, vector y is updated according to an estimate of image-gradient cardinality c : Sum of the 4n²−c smallest entries of |Ψ vec U⋆| ∈ R^{4n²} is the optimal objective value from a linear program, for 0 ≤ c ≤ 4n²−1

$$\sum_{i=c+1}^{4n^{2}} \pi\bigl(|\Psi\operatorname{vec}U^{\star}|\bigr)_{i}
\;=\;
\begin{array}[t]{cl}
\underset{y\,\in\,\mathbb{R}^{4n^{2}}}{\text{minimize}} & \langle\,|\Psi\operatorname{vec}U^{\star}|\;,\;y\,\rangle\\
\text{subject to} & 0 \preceq y \preceq \mathbf{1}\\
 & y^{T}\mathbf{1} = 4n^{2}-c
\end{array}
\tag{467}$$

where π is the nonlinear permutation-operator sorting its vector argument into nonincreasing order. An optimal solution y to (467), that is an extreme point of its feasible set, is known in closed form: it has 1 in each entry corresponding to the 4n²−c smallest entries of |Ψ vec U⋆| and has 0 elsewhere (page 295). The updated image U⋆ is assigned to U^t, the contraction sequence is recomputed solving (767), direction vector y is updated again, and so on until convergence.
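A sketch of that update in Matlab (our notation: g stands for |Ψ vec U⋆| and c for the cardinality estimate, both assumed available) shows that no linear program solver is needed, only a sort.

```matlab
% Sketch: closed-form extreme-point solution y of (467).
% g : abs of Psi applied to vec(Ustar), a nonnegative vector of length 4n^2 (assumed given)
% c : estimate of image-gradient cardinality, 0 <= c <= 4n^2 - 1
[~, idx]    = sort(g, 'descend'); % index entries of g from largest to smallest
y           = ones(size(g));      % begin with the all-ones vector
y(idx(1:c)) = 0;                  % zero y on the c largest entries of g
% y is now 1 exactly on the 4n^2 - c smallest entries of g, so <g,y> equals the
% optimal objective value of (467): the sum of those smallest entries.
```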

There are two features that distinguish problem formulation (767) and our particular implementation of it (available on Wıκımization):

1) An image-gradient estimate may engage any combination of four adjacent pixels. In other words, the algorithm is not locked into a four-point gradient estimate (Figure 89); number of points constituting an estimate is determined by direction vector y . Indeed, we find only 5092 zero entries in y for the Shepp-Logan phantom; meaning,

4.52 Fat is typical of compressed sensing problems; e.g., [60] [67].
