The MOSEK command line tool. Version 7.0 (Revision 141)
5.1.3 Scaling

Problems containing data with large and/or small coefficients, say 1.0e+9 or 1.0e-7, are often hard to solve. Significant digits may be truncated in calculations with finite precision, which can result in the optimizer relying on inaccurate calculations. Since computers work in finite precision, extreme coefficients should be avoided. In general, data around the same "order of magnitude" is preferred, and we will refer to a problem satisfying this loose property as being well-scaled.

If the problem is not well-scaled, MOSEK will try to scale (multiply) constraints and variables by suitable constants and solve the scaled problem, which has better numerical properties. The scaling process is transparent, i.e. the solution to the original problem is reported. It is important to be aware that the optimizer terminates when the termination criterion is met on the scaled problem; for badly scaled problems, significant primal or dual infeasibilities may therefore remain after unscaling. The best remedy is to reformulate the problem so that it is better scaled.

By default MOSEK heuristically chooses a suitable scaling. The scaling used by the interior-point and simplex optimizers can be controlled with the parameters MSK_IPAR_INTPNT_SCALING and MSK_IPAR_SIM_SCALING respectively.

5.1.4 Using multiple threads

The interior-point optimizers in MOSEK have been parallelized. This means that if you solve a linear, quadratic, conic, or general convex optimization problem using the interior-point optimizer, you can take advantage of multiple CPUs. By default MOSEK automatically selects the number of threads to be employed when solving the problem. However, the number of threads can be changed by setting the parameter MSK_IPAR_NUM_THREADS; it should never exceed the number of cores on the computer. The speed-up obtained when using multiple threads is highly problem and hardware dependent, and consequently it is advisable to compare single-threaded and multi-threaded performance for the given problem type to determine the optimal setting. For small problems, using multiple threads is typically not worthwhile and may even be counterproductive.

5.2 Linear optimization

5.2.1 Optimizer selection

Two different types of optimizers are available for linear problems: the default is an interior-point method, and the alternatives are simplex methods. The optimizer can be selected using the parameter MSK_IPAR_OPTIMIZER.
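As an illustration, the parameters mentioned in the preceding sections can be assigned values directly on the command line. The sketch below assumes the tool accepts -d name value assignments and uses a hypothetical input file lp1.mps (check the option summary for the exact syntax); MSK_SCALING_NONE and MSK_OPTIMIZER_FREE_SIMPLEX are among the documented values for these parameters.

    # Interior-point optimizer on at most 4 threads, with scaling disabled.
    mosek -d MSK_IPAR_NUM_THREADS 4 -d MSK_IPAR_INTPNT_SCALING MSK_SCALING_NONE lp1.mps

    # Solve the same file with a simplex optimizer instead of the default.
    mosek -d MSK_IPAR_OPTIMIZER MSK_OPTIMIZER_FREE_SIMPLEX lp1.mps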
5.2.2 The interior-point optimizer

The purpose of this section is to provide information about the algorithm employed in the MOSEK interior-point optimizer. In order to keep the discussion simple it is assumed that MOSEK solves linear optimization problems on standard form

    minimize    c^T x
    subject to  Ax = b,
                x ≥ 0.                                   (5.1)

This is in fact what happens inside MOSEK: for efficiency reasons MOSEK converts the problem to standard form before solving, then converts it back to the input form when reporting the solution.

Since it is not known beforehand whether problem (5.1) has an optimal solution, is primal infeasible or is dual infeasible, the optimization algorithm must deal with all three situations. This is the reason that MOSEK solves the so-called homogeneous model

    Ax − bτ            = 0,
    A^T y + s − cτ     = 0,
    −c^T x + b^T y − κ = 0,
    x, s, τ, κ ≥ 0,                                      (5.2)

where y and s correspond to the dual variables in (5.1), and τ and κ are two additional scalar variables. Note that the homogeneous model (5.2) always has a solution, since

    (x, y, s, τ, κ) = (0, 0, 0, 0, 0)

is a solution, although not a very interesting one. Any solution

    (x*, y*, s*, τ*, κ*)

to the homogeneous model (5.2) satisfies

    x*_j s*_j = 0 for all j,  and  τ* κ* = 0.

Moreover, there is always a solution that has the property

    τ* + κ* > 0.

First, assume that τ* > 0. It follows that κ* = 0, and dividing the first two equations of (5.2) by τ* shows that x*/τ* is feasible for (5.1) and (y*/τ*, s*/τ*) is feasible for its dual; the third equation then gives c^T (x*/τ*) = b^T (y*/τ*), so the duality gap is zero and both solutions are optimal.
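To make the homogeneous model concrete, consider the small one-variable instance minimize x subject to x = 1, x ≥ 0, so that c = 1, A = (1) and b = 1. The homogeneous model (5.2) then reads

    x − τ      = 0,
    y + s − τ  = 0,
    −x + y − κ = 0,
    x, s, τ, κ ≥ 0.

One solution with τ* > 0 is (x*, y*, s*, τ*, κ*) = (1, 1, 0, 1, 0): all three equations hold, x* s* = 0 and τ* κ* = 0, and τ* + κ* = 1 > 0. Dividing by τ* = 1 recovers the optimal primal solution x = 1 and an optimal dual solution (y, s) = (1, 0) of the original problem, both with objective value 1.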