4.10 Numerical Issues

Damped Newton direction. To ensure that Newton's method converges, we implement a backtracking scheme to find the value of $\beta_i$ in (4.11). This guarantees that the value of the objective function is reduced at every iteration.
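A minimal Matlab sketch of such a backtracking scheme, assuming a function handle f for the objective, the current iterate x, the Newton direction dx, and the gradient g at x (the function name and the Armijo constants below are illustrative choices, not necessarily the values used in our implementation):

```matlab
% Backtracking line search: shrink the step size beta until the
% objective decreases sufficiently along the Newton direction dx.
function beta = backtrack(f, x, dx, g)
    alpha = 0.01;                  % sufficient-decrease parameter (illustrative)
    rho   = 0.5;                   % shrinkage factor for beta (illustrative)
    beta  = 1;                     % start from the full Newton step
    fx    = f(x);
    while f(x + beta*dx) > fx + alpha*beta*(g'*dx)
        beta = rho*beta;           % backtrack until the Armijo test passes
    end
end
```

Since dx is a descent direction, g'*dx < 0 and the loop terminates with a step that strictly decreases the objective, which is what yields the monotone behavior described above.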

Truncated CG. To save computing time, in the early Newton iterations we terminate the CG solver before it reaches high precision, because an inexact Newton direction does not hurt the precision of the final solution obtained by Newton's method [38, 39].
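One common way to realize this truncation, sketched below with Matlab's built-in pcg, is to pass CG a relative-residual tolerance that is loose when the gradient is large and tightens as the iterates approach the solution. The particular forcing term is one standard choice from the inexact-Newton literature, not necessarily the one used in this chapter; H, g, and maxit are placeholders for the Hessian (or a handle computing Hessian-vector products), the current gradient, and an iteration cap.

```matlab
% Inexact Newton step: solve H*dx = -g only to a modest relative
% residual early on, tighter as norm(g) shrinks.
eta = min(0.5, sqrt(norm(g)));     % forcing term: looser far from the solution
[dx, flag] = pcg(H, -g, eta, maxit);
```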

The algorithm is implemented in Matlab. Fast algorithms for the wavelet and edgelet transforms are implemented in C and called by Matlab through a CMEX interface.

4.11 Discussion

4.11.1 Connection With Statistics

In statistics, the same method is used in model selection, where we choose a subset of variables such that the model remains sufficient for prediction and inference. More specifically, in linear regression models we consider
$$ y = X\beta + \varepsilon, $$
where $y$ is the response, $X$ is the model matrix whose columns hold the values of the variables (predictors), $\beta$ is the coefficient vector, and $\varepsilon$ is a vector of IID random variables. Model selection in this setting means choosing a subset of the columns of $X$, denoted $X^{(0)}$, so that for most of the possible responses $y$ we have $y \approx X^{(0)}\beta^{(0)}$, where $\beta^{(0)}$ is the subvector of $\beta$ whose entries correspond to the selected columns of $X^{(0)}$. The difference (or prediction error), $y - X^{(0)}\beta^{(0)}$, is negligible in the sense that it can be interpreted as a realization of the random noise vector $\varepsilon$.

Typically, people use penalized regression to select the model. Basically, we solve
$$ \text{(PR)} \qquad \min_{\beta} \; \|y - X\beta\|_2^2 + \lambda\,\rho(\beta), $$
which is exactly the problem we encountered in (4.2). After solving problem (PR), we can pick the $i$th column of $X$ if $\beta_i$ has a significant amplitude. When $\rho(\beta) = \|\beta\|_1$, the method (PR) is called the LASSO by R. Tibshirani [133] and Basis Pursuit by Chen et al. [27]. When $\rho(\beta) = \|\beta\|_2^2$, the method (PR) is called ridge regression by Hoerl and Kennard [81, 80].
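For $\rho(\beta) = \|\beta\|_1$, problem (PR) can be solved by simple iterative soft thresholding. The Matlab sketch below is only an illustration of the penalized-regression formulation, not the Newton/CG solver of Section 4.10; X, y, lambda, and the iteration budget are assumed given.

```matlab
% Iterative soft thresholding for (PR) with rho(beta) = ||beta||_1:
%   minimize ||y - X*beta||_2^2 + lambda*||beta||_1.
t    = 0.5/normest(X)^2;                 % step size 1/L, L = 2*||X||^2
beta = zeros(size(X, 2), 1);
for k = 1:500                            % fixed iteration budget (illustrative)
    z    = beta - 2*t*(X'*(X*beta - y)); % gradient step on the quadratic term
    beta = sign(z).*max(abs(z) - t*lambda, 0);   % soft threshold at t*lambda
end
```

Columns of X whose coefficients survive the thresholding with a significant amplitude are the selected variables.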

4.11.2 Non-convex Sparsity Measure

An ideal measure of sparsity is usually nonconvex. For example, in (4.2), the number of nonzero elements of $x$ is the most intuitive measure of sparsity. The $\ell_0$ norm of $x$, $\|x\|_0$, is equal to the number of nonzero elements, but it is not a convex function. Another choice of sparsity measure is the logarithmic function: for $x = (x_1, \ldots, x_N)^T \in \mathbb{R}^N$, we can take $\rho(x) = \sum_{i=1}^N \log |x_i|$. In sparse image component analysis, another nonconvex sparsity measure is used: $\rho(x) = \sum_{i=1}^N \log(1 + x_i^2)$ [53]. Generally speaking, a nonconvex optimization problem is a combinatorial optimization problem, and hence it is NP-hard. Some discussion of how reweighting methods can be used to solve a nonconvex optimization problem is given in the next subsection.
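For a concrete comparison, these measures can be evaluated directly; the toy vector below is only for illustration, and the small delta guards against $\log 0$ (cf. $(\text{LO}_\lambda)$ in the next subsection).

```matlab
% Three sparsity measures on a toy coefficient vector.
x     = [3; 0; 0.01; -2; 0];
delta = 1e-6;                      % keeps the logarithm finite at zeros
l0    = nnz(x);                    % ||x||_0: number of nonzero entries
lg    = sum(log(abs(x) + delta));  % logarithmic measure, sum log|x_i|
ic    = sum(log(1 + x.^2));        % measure used in [53]
```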

4.11.3 Iterative Algorithm for Non-convex Optimization Problems

Sometimes, a reweighted iterative method can be used to find a local minimum of a nonconvex optimization problem. Let's consider the following problem:
$$ \text{(LO)} \qquad \min_x \; \sum_{i=1}^N \log |x_i|, \quad \text{subject to } y = \Phi x; $$
and its corresponding version with a Lagrange multiplier $\lambda$,¹
$$ (\text{LO}_\lambda) \qquad \min_x \; \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \sum_{i=1}^N \log(|x_i| + \delta). $$
Note that the objective function of (LO) is not convex. Let's consider a reweighted iterative algorithm: for $\delta > 0$,
$$ \text{(RIA)} \qquad x^{(k+1)} = \arg\min_x \; \sum_{i=1}^N \frac{|x_i|}{|x_i^{(k)}| + \delta}, \quad \text{subject to } y = \Phi x; $$

¹More precisely, $(\text{LO}_\lambda)$ is the Lagrange-multiplier version of the following optimization problem:
$$ \min_x \; \sum_{i=1}^N \log(|x_i| + \delta), \quad \text{subject to } \|y - \Phi x\|_2 \le \varepsilon. $$
Note that when $\delta$ and $\varepsilon$ are small, it is close to (LO).
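In the Lagrangian form $(\text{LO}_\lambda)$, the reweighting has a convenient interpretation: linearizing $\log(|x_i| + \delta)$ around the current iterate yields a convex weighted-$\ell_1$ problem, which can itself be solved by soft thresholding. A Matlab sketch under that interpretation follows; Phi, y, lambda, delta, and the iteration counts are placeholders, and this is a sketch of the general reweighting idea rather than a definitive implementation.

```matlab
% Reweighted iteration for (LO_lambda): each outer pass freezes the
% weights w_i = 1/(|x_i| + delta) and solves the convex surrogate
%   minimize 0.5*||y - Phi*x||_2^2 + lambda*sum(w.*abs(x))
% by iterative soft thresholding.
x = Phi \ y;                            % starting point
t = 1/normest(Phi)^2;                   % inner step size, 1/||Phi||^2
for outer = 1:10
    w = 1 ./ (abs(x) + delta);          % reweighting from the current iterate
    for inner = 1:100
        z = x - t*(Phi'*(Phi*x - y));   % gradient step on the quadratic term
        x = sign(z).*max(abs(z) - t*lambda*w, 0);  % weighted soft threshold
    end
end
```

Coordinates that are small at iteration $k$ receive a large weight and are pushed harder toward zero at iteration $k+1$, which is how the scheme imitates the concave log penalty while only ever solving convex subproblems.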
