v2009.01.01 - Convex Optimization



340 CHAPTER 4. SEMIDEFINITE PROGRAMMING

problem in §5.4.2.2.3, for example, requires a problem sequence in a progressively larger number of balls to find a good initial value for the direction matrix, whereas many of the examples in the present chapter require an initial value of 0. Finding a feasible Boolean vector in Example 4.6.0.0.6 requires a procedure to detect stalls, whereas other problems have no such requirement. The combinatorial Procrustes problem in Example 4.6.0.0.3 admits a known closed-form solution for the direction vector when solved via rank constraint, but not when solved via cardinality constraint. Some problems require careful weighting of the regularization term, whereas others do not, and so on. It would be nice if there were a universally applicable method for constraining rank; one less susceptible to quirks of a particular problem type.

Poor initialization of the direction matrix from the regularization can lead to an erroneous result. We speculate one reason to be a simple dearth of optimal solutions of the desired rank or cardinality (footnote 4.55); an unfortunate choice of initial search direction then leads the iteration astray. Ease of solution by convex iteration occurs when optimal solutions abound. With this speculation in mind, we now propose a further generalization of convex iteration for constraining rank that attempts to ameliorate quirks and unify problem types:
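The direction-matrix step that convex iteration relies on can be illustrated numerically. Below is a minimal NumPy sketch, assuming a PSD variable matrix X and a desired rank ρ; the helper name `direction_matrix` is ours, introduced for illustration, not notation from the text:

```python
import numpy as np

def direction_matrix(X, rho):
    """Direction matrix W for convex iteration: the orthogonal projector
    onto the span of eigenvectors belonging to the n - rho smallest
    eigenvalues of PSD matrix X.  Then trace(X @ W) equals the sum of
    the n - rho smallest eigenvalues, which vanishes iff rank(X) <= rho."""
    w, V = np.linalg.eigh(X)           # eigenvalues in ascending order
    U = V[:, : X.shape[0] - rho]       # eigenvectors of smallest eigenvalues
    return U @ U.T

# Example: a rank-2 PSD matrix in S^4
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
X = A @ A.T                            # rank 2 by construction
W = direction_matrix(X, 2)
print(np.trace(X @ W))                 # ~0: the rank-2 constraint is met
```

In the iteration itself, W would be held fixed while an SDP minimizes ⟨X, W⟩ over the feasible set, then recomputed from the new X, alternating until ⟨X, W⟩ vanishes.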

4.7 Convex Iteration rank-1

We now develop a general method for constraining rank that first decomposes a given problem via standard diagonalization of matrices (§A.5). This method is motivated by the observation (§4.4.1.1) that an optimal direction matrix can be diagonalizable simultaneously with an optimal variable matrix, which suggests minimizing an objective function directly in terms of eigenvalues. A second motivating observation is that variable orthogonal matrices seem easily found by convex iteration; e.g., Procrustes Example 4.6.0.0.2.
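The role diagonalization plays here can be seen in a small numerical sketch (ours, for illustration): any symmetric matrix is a sum of rank-1 dyads weighted by its eigenvalues, so constraining rank(X) ≤ ρ amounts to zeroing all but ρ of those weights, with each surviving dyad a rank-1 piece.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
X = (S + S.T) / 2                      # an arbitrary symmetric matrix
lam, Q = np.linalg.eigh(X)             # X = Q diag(lam) Q.T

# Rebuild X as a sum of rank-1 dyads lam[i] * q_i q_i^T.  A rank-rho
# constraint on X is equivalent to all but rho of the lam[i] being zero.
dyads = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(5))
print(np.allclose(X, dyads))           # True: the decomposition is exact
```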

It turns out that this general method always requires solution of a rank-1 constrained problem, regardless of the rank desired in the original problem. To demonstrate, we pose a semidefinite feasibility problem

4.55 Recall that, in Convex Optimization, an optimal solution generally comes from a convex set of optimal solutions that can be large.
