The MOSEK command line tool Version 7.0 (Revision 141)
CHAPTER 4. PROBLEM FORMULATION AND SOLUTIONS

\[
\begin{array}{lccccl}
\mbox{minimize}   &     &      & f(x) + c^T x + c^f &      &      \\
\mbox{subject to} & l^c & \leq & g(x) + Ax          & \leq & u^c, \\
                  & l^x & \leq & x                  & \leq & u^x,
\end{array}
\tag{4.17}
\]

where

• $m$ is the number of constraints.
• $n$ is the number of decision variables.
• $x \in \mathbb{R}^n$ is a vector of decision variables.
• $c \in \mathbb{R}^n$ is the linear part of the objective function.
• $A \in \mathbb{R}^{m \times n}$ is the constraint matrix.
• $l^c \in \mathbb{R}^m$ is the lower limit on the activity for the constraints.
• $u^c \in \mathbb{R}^m$ is the upper limit on the activity for the constraints.
• $l^x \in \mathbb{R}^n$ is the lower limit on the activity for the variables.
• $u^x \in \mathbb{R}^n$ is the upper limit on the activity for the variables.
• $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a nonlinear function.
• $g: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is a nonlinear vector function.

This means that the $i$th constraint has the form

\[
l^c_i \leq g_i(x) + \sum_{j=1}^{n} a_{ij} x_j \leq u^c_i.
\]

The linear term $Ax$ is not included in $g(x)$ since it can be handled much more efficiently as a separate entity when optimizing.

The nonlinear functions $f$ and $g$ must be smooth in all $x \in [l^x; u^x]$. Moreover, $f(x)$ must be a convex function and $g_i(x)$ must satisfy

\[
\begin{array}{rcl}
-\infty < l^c_i & \Rightarrow & g_i(x) \mbox{ is concave}, \\
u^c_i < \infty & \Rightarrow & g_i(x) \mbox{ is convex}, \\
-\infty < l^c_i \leq u^c_i < \infty & \Rightarrow & g_i(x) = 0.
\end{array}
\]

4.5.1 Duality for general convex optimization

Similar to the linear case, MOSEK reports dual information in the general nonlinear case. Indeed, in this case the Lagrange function is defined by
\[
\begin{array}{rl}
L(x, s^c_l, s^c_u, s^x_l, s^x_u) := & f(x) + c^T x + c^f \\
& {} - (s^c_l)^T (g(x) + Ax - l^c) - (s^c_u)^T (u^c - g(x) - Ax) \\
& {} - (s^x_l)^T (x - l^x) - (s^x_u)^T (u^x - x),
\end{array}
\]

and the dual problem is given by

\[
\begin{array}{ll}
\mbox{maximize}   & L(x, s^c_l, s^c_u, s^x_l, s^x_u) \\
\mbox{subject to} & \nabla_x L(x, s^c_l, s^c_u, s^x_l, s^x_u)^T = 0, \\
                  & s^c_l, s^c_u, s^x_l, s^x_u \geq 0,
\end{array}
\]

which is equivalent to

\[
\begin{array}{ll}
\mbox{maximize}   & (l^c)^T s^c_l - (u^c)^T s^c_u + (l^x)^T s^x_l - (u^x)^T s^x_u + c^f \\
                  & {} + f(x) - g(x)^T y - (\nabla f(x)^T - \nabla g(x)^T y)^T x \\
\mbox{subject to} & A^T y + s^x_l - s^x_u - (\nabla f(x)^T - \nabla g(x)^T y) = c, \\
                  & -y + s^c_l - s^c_u = 0, \\
                  & s^c_l, s^c_u, s^x_l, s^x_u \geq 0.
\end{array}
\]

In this context we use the following definition for scalar functions

\[
\nabla f(x) = \left[ \frac{\partial f(x)}{\partial x_1}, \ldots, \frac{\partial f(x)}{\partial x_n} \right]
\]

and accordingly for vector functions

\[
\nabla g(x) = \left[ \begin{array}{c} \nabla g_1(x) \\ \vdots \\ \nabla g_m(x) \end{array} \right].
\tag{4.18}
\]
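To make the primal form (4.17) concrete, the following sketch evaluates the objective and the constraint activity for a small hypothetical instance. All of the data here ($c$, $A$, the bound $u^c$, and the functions $f$ and $g$) are invented for illustration and are not part of MOSEK's input format:

```python
import numpy as np

# Hypothetical tiny instance of (4.17): n = 2 variables, m = 1 constraint.
# f(x) = x1^2 + x2^2 is convex; g_1(x) = exp(x1) is convex, so per the
# convexity rules only an upper bound u^c_1 < inf is allowed on constraint 1.
c = np.array([1.0, -1.0])   # linear part of the objective
cf = 0.0                    # constant term c^f
A = np.array([[2.0, 3.0]])  # linear part of the constraint
uc = np.array([10.0])       # upper constraint bound (lower bound is -inf)

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return np.array([np.exp(x[0])])

def objective(x):
    # f(x) + c^T x + c^f
    return f(x) + c @ x + cf

def constraint_activity(x):
    # i-th activity: g_i(x) + sum_j a_ij x_j
    return g(x) + A @ x

x = np.array([0.5, 1.0])
print(objective(x))                             # 0.75
print(bool((constraint_activity(x) <= uc).all()))  # True: x is feasible
```

Note how the linear term $Ax$ is kept apart from $g(x)$, mirroring the remark above that MOSEK treats the linear part as a separate entity.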
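The dual relations above can be checked numerically on a hypothetical one-variable instance (minimize $f(x) = x^2$ subject to $x \geq 1$, with no nonlinear constraints, $A$, $c$, or $c^f$ — chosen here only for illustration). In that case the dual equality constraint reduces to $s^x_l = \nabla f(x)$, and at the optimum the primal and dual objective values coincide:

```python
# Hypothetical instance: minimize f(x) = x^2 subject to x >= 1
# (l^x = 1; no g, A, c, or c^f, and u^x = inf forces s^x_u = 0).
lx = 1.0

def f(x):
    return x ** 2

def grad_f(x):
    return 2.0 * x

# The primal optimum sits at the bound: x* = l^x = 1.
x_star = 1.0
primal = f(x_star)

# Dual constraint with A, g, c absent: s^x_l = grad f(x), required >= 0.
sxl = grad_f(x_star)
assert sxl >= 0.0

# Dual objective: (l^x) s^x_l + f(x) - grad f(x) * x
dual = lx * sxl + f(x_star) - grad_f(x_star) * x_star

print(primal, dual)  # 1.0 1.0 -- primal and dual objectives agree
```

That the two values agree reflects strong duality, which holds for this smooth convex instance; in general the dual objective bounds the primal objective from below.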