v2009.01.01 - Convex Optimization

[Figure 88: Aliasing of Shepp-Logan phantom in Figure 86 resulting from k-space subsampling pattern in Figure 87. This image, vec⁻¹f, is real because binary mask Φ is vertically and horizontally symmetric. It is remarkable that the phantom can be reconstructed, by convex iteration, given only U⁰ = vec⁻¹f.]

Express an image-gradient estimate

$$
\nabla U \;\triangleq\;
\begin{bmatrix} U\Delta \\ U\Delta^{\mathrm T} \\ \Delta U \\ \Delta^{\mathrm T} U \end{bmatrix}
\;\in\; \mathbb{R}^{4n\times n}
\tag{763}
$$

that is a simple first-order difference of neighboring pixels (Figure 89) to the right, left, above, and below.^{4.49} By §A.1.1 no.25 (the identity vec(AXB) = (Bᵀ⊗A) vec X), its vectorization is, for Ψᵢ ∈ ℝ^{n²×n²},

$$
\operatorname{vec}\nabla U \;=\;
\begin{bmatrix} \Delta^{\mathrm T}\!\otimes I \\ \Delta\otimes I \\ I\otimes\Delta \\ I\otimes\Delta^{\mathrm T} \end{bmatrix}
\operatorname{vec} U
\;=\;
\begin{bmatrix} \Psi_1 \\ \Psi_1^{\mathrm T} \\ \Psi_2 \\ \Psi_2^{\mathrm T} \end{bmatrix}
\operatorname{vec} U
\;\triangleq\; \Psi \operatorname{vec} U \;\in\; \mathbb{R}^{4n^2}
\tag{764}
$$

where Ψ ∈ ℝ^{4n²×n²}.

^{4.49} There is significant improvement in reconstruction quality by augmentation of a normally two-dimensional image-gradient to a four-dimensional estimate per pixel by inclusion of two polar directions. We find small improvement on real-life images, ≈1 dB empirically, by further augmentation with diagonally adjacent pixel differences.
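For concreteness, the Kronecker structure of (764) can be checked numerically. The NumPy sketch below assumes a circulant first-order difference matrix ∆ (the book's ∆ is drawn in Figure 89, not reproduced here, so the circular boundary handling is an assumption of this sketch) and verifies that the stacked blocks Ψ₁, Ψ₁ᵀ, Ψ₂, Ψ₂ᵀ reproduce the block-wise vectorization of ∇U:

```python
import numpy as np

n = 8
# Hypothetical first-order difference matrix with circular boundary;
# the book's Delta (Figure 89) may differ at the boundary.
Delta = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
I = np.eye(n)
U = np.random.default_rng(0).random((n, n))

# image-gradient estimate (763): differences right, left, above, below
gradU = np.vstack([U @ Delta, U @ Delta.T, Delta @ U, Delta.T @ U])
assert gradU.shape == (4 * n, n)

# vectorization (764) via vec(AXB) = (B^T kron A) vec X   [A.1.1 no.25]
Psi1 = np.kron(Delta.T, I)     # vec(U Delta)     = Psi1   vec U
Psi2 = np.kron(I, Delta)       # vec(Delta U)     = Psi2   vec U
Psi = np.vstack([Psi1, Psi1.T, Psi2, Psi2.T])     # 4n^2 x n^2

vec = lambda X: X.flatten(order="F")              # column-major vec
lhs = Psi @ vec(U)
rhs = np.concatenate([vec(U @ Delta), vec(U @ Delta.T),
                      vec(Delta @ U), vec(Delta.T @ U)])
assert np.allclose(lhs, rhs)                      # (764) holds
```

The identity vec(AXB) = (Bᵀ⊗A) vec X dictates the transposes: differencing columns of U (right/left) yields ∆ᵀ⊗I and ∆⊗I, while differencing rows (above/below) yields I⊗∆ and I⊗∆ᵀ.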

A total-variation minimization for reconstructing MRI image U, which is known to be suboptimal [62], may be concisely posed:

$$
\begin{array}{cl}
\underset{U}{\text{minimize}} & \|\Psi \operatorname{vec} U\|_1 \\
\text{subject to} & P \operatorname{vec} U = f
\end{array}
\tag{765}
$$

where

$$
f \;=\; (F^{\mathrm H}\!\otimes F^{\mathrm H}) \operatorname{vec} K \;\in\; \mathbb{C}^{n^2}
\tag{766}
$$

is the known inverse subsampled Fourier data (a vectorized aliased image, Figure 88), and where a norm of image-gradient ∇U is equivalently expressed as a norm of the linear transformation Ψ vec U.

Although this simple problem statement (765) is equivalent to a linear program (§3.1.3), its numerical solution is beyond the capability of even the most highly regarded of contemporary commercial solvers.^{4.50} Our only recourse is to recast the problem in Lagrangian form and write customized code to solve it [308, §III-A] [217, p.398]:

$$
\underset{U}{\text{minimize}} \;\; \langle\, |\Psi \operatorname{vec} U| \,,\, y \,\rangle \;+\; \tfrac{1}{2}\lambda \|P \operatorname{vec} U - f\|_2^2
\tag{767}
$$

where (multiobjective optimization) parameter λ ∈ ℝ₊ is made quite large (λ ≈ 1E8) so as to enforce the equality constraint in (765). We introduce a direction vector y ∈ ℝ₊^{4n²} as part of a convex iteration (§4.5.2) to overcome that known suboptimal minimization of image-gradient cardinality; id est, there exists a vector y⋆ having entries yᵢ⋆ ∈ {0, 1} such that

$$
\begin{array}{cl}
\underset{U}{\text{minimize}} & \|\Psi \operatorname{vec} U\|_0 \\
\text{subject to} & P \operatorname{vec} U = f
\end{array}
\;\;\equiv\;\;
\underset{U}{\text{minimize}} \;\; \langle\, |\Psi \operatorname{vec} U| \,,\, y^\star \,\rangle \;+\; \tfrac{1}{2}\lambda \|P \operatorname{vec} U - f\|_2^2
\tag{768}
$$

Existence of such a y⋆, complementary to an optimal vector Ψ vec U⋆, is obvious by definition of global optimality ⟨|Ψ vec U⋆|, y⋆⟩ = 0 (683), under which a cardinality-c optimal objective is assumed to exist.

Because (767) is an unconstrained convex problem, a zero in the objective function's gradient is necessary and sufficient for optimality (§2.13.3); id est (§D.2.1),

$$
\Psi^{\mathrm T} \delta(y) \operatorname{sgn}(\Psi \operatorname{vec} U) \;+\; \lambda P^{\mathrm H} (P \operatorname{vec} U - f) \;=\; 0
\tag{769}
$$

where the first term is the gradient of the weighted objective ⟨|Ψ vec U|, y⟩ and the second is the gradient of the quadratic penalty.

^{4.50} This holds for images as small as 128×128 pixels. The obstacle to numerical solution is not a computer resource such as execution time or memory; the obstacle is, in fact, inadequate numerical precision. Even when all dependent equality constraints are manually removed, the best commercial solvers fail simply because computer numerics become nonsense; id est, numerical errors enter significant digits and the algorithm exits prematurely, loops indefinitely, or produces an infeasible solution.
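To make the iteration mechanics concrete: in the convex iteration of §4.5.2 the direction vector is updated, given an iterate Ψ vec U⋆, by setting yᵢ = 0 at the c largest-magnitude entries and yᵢ = 1 elsewhere, which minimizes ⟨|Ψ vec U⋆|, y⟩ over 0 ⪯ y ⪯ 1 with yᵀ1 = 4n² − c. The sketch below is emphatically not the authors' customized code: it substitutes iteratively reweighted least squares with a smoothed absolute value (so sgn in (769) becomes t/√(t²+ε)) for their solver, uses a real-valued random row-sampling P in place of the complex Fourier data (766), takes λ = 1E4 rather than ≈1E8 to keep the toy linear systems well conditioned, and chooses c from the ground-truth gradient, which would be unavailable in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16; N = n * n

# same illustrative Delta and Psi as in the previous sketch
Delta = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
I = np.eye(n)
Psi1, Psi2 = np.kron(Delta.T, I), np.kron(I, Delta)
Psi = np.vstack([Psi1, Psi1.T, Psi2, Psi2.T])          # (764)

# toy real-valued stand-in for (766): P samples m entries of vec U
m = N // 2
P = np.eye(N)[rng.choice(N, m, replace=False)]
U_true = np.zeros((n, n)); U_true[4:9, 6:13] = 1.0     # piecewise-constant image
f = P @ U_true.flatten(order="F")

lam, eps = 1e4, 1e-8   # book uses lambda ~ 1E8; smaller keeps this toy stable

def solve_767(y, u, inner=30):
    # IRLS stand-in for the authors' solver: the smoothed stationarity
    # condition (769) becomes (Psi^T diag(w) Psi + lam P^T P) u = lam P^T f
    for _ in range(inner):
        w = y / np.sqrt((Psi @ u) ** 2 + eps)          # smoothed sgn weights
        A = Psi.T @ (w[:, None] * Psi) + lam * (P.T @ P)
        u = np.linalg.solve(A, lam * (P.T @ f))
    return u

# oracle cardinality for the toy only; unknown in a real reconstruction
c = np.count_nonzero(np.abs(Psi @ U_true.flatten(order="F")) > 1e-9)

u = P.T @ f                                            # aliased start, vec^-1 f
y = np.ones(4 * N)
for _ in range(15):                                    # convex iteration (768)
    u = solve_767(y, u)
    y = np.ones(4 * N)
    y[np.argsort(np.abs(Psi @ u))[-c:]] = 0            # zeros at c largest: (683)

print("equality residual   :", np.linalg.norm(P @ u - f))
print("gradient cardinality:", np.count_nonzero(np.abs(Psi @ u) > 1e-6))
print("reconstruction error:", np.linalg.norm(u - U_true.flatten(order="F")))
```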

