The MOSEK command line tool Version 7.0 (Revision 141)


4.5 General convex optimization

MOSEK can also solve general convex optimization problems of the form

\[
\begin{array}{lccccl}
\mbox{minimize}   &     &      & f(x) + c^T x + c^f &      &      \\
\mbox{subject to} & l^c & \leq & g(x) + Ax          & \leq & u^c, \\
                  & l^x & \leq & x                  & \leq & u^x,
\end{array}
\tag{4.17}
\]

where

• m is the number of constraints.
• n is the number of decision variables.
• x ∈ R^n is a vector of decision variables.
• c ∈ R^n is the linear part of the objective function.
• A ∈ R^{m×n} is the constraint matrix.
• l^c ∈ R^m is the lower limit on the activity for the constraints.
• u^c ∈ R^m is the upper limit on the activity for the constraints.
• l^x ∈ R^n is the lower limit on the activity for the variables.
• u^x ∈ R^n is the upper limit on the activity for the variables.
• f : R^n → R is a nonlinear function.
• g : R^n → R^m is a nonlinear vector function.

This means that the i-th constraint has the form

\[
l_i^c \leq g_i(x) + \sum_{j=1}^{n} a_{ij} x_j \leq u_i^c .
\]

The linear term Ax is not included in g(x), since it can be handled much more efficiently as a separate entity when optimizing.

The nonlinear functions f and g must be smooth in all x ∈ [l^x; u^x]. Moreover, f(x) must be a convex function and g_i(x) must satisfy

\[
\begin{array}{rcl}
-\infty < l_i^c & \Rightarrow & g_i(x) \mbox{ is concave}, \\
u_i^c < \infty & \Rightarrow & g_i(x) \mbox{ is convex}, \\
-\infty < l_i^c \leq u_i^c < \infty & \Rightarrow & g_i(x) = 0 .
\end{array}
\]
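These bound rules keep the feasible region convex: the set {x : g_i(x) + a_i^T x ≤ u_i^c} is convex when g_i is convex, the set {x : g_i(x) + a_i^T x ≥ l_i^c} is convex when g_i is concave, and a ranged constraint would need both at once, i.e. g_i affine, in which case the affine part belongs in A and g_i reduces to 0. Below is a minimal numerical sanity check of these rules; it assumes nothing about MOSEK itself, the helper name and tolerances are illustrative only, and sampling midpoint convexity is of course a check, not a proof.

```python
import numpy as np

def bound_pattern_ok(g_i, lx, ux, lower_finite, upper_finite,
                     trials=1000, seed=0, tol=1e-9):
    """Sample the box [lx, ux] and test the convexity rules for g_i:
    a finite upper bound needs g_i convex, a finite lower bound needs
    g_i concave, and two finite bounds force g_i to be identically 0."""
    rng = np.random.default_rng(seed)
    lx, ux = np.asarray(lx, float), np.asarray(ux, float)
    for _ in range(trials):
        a, b = rng.uniform(lx, ux), rng.uniform(lx, ux)
        mid, avg = g_i((a + b) / 2.0), (g_i(a) + g_i(b)) / 2.0
        if upper_finite and mid > avg + tol:   # midpoint convexity violated
            return False
        if lower_finite and mid < avg - tol:   # midpoint concavity violated
            return False
        if lower_finite and upper_finite and abs(g_i(a)) > tol:
            return False                       # ranged constraint: g_i must be 0
    return True

# g_i(x) = x_1^2 + x_2^2 is convex: an upper bound is fine, a lower bound is not.
g = lambda x: x[0] ** 2 + x[1] ** 2
print(bound_pattern_ok(g, [-1, -1], [1, 1], lower_finite=False, upper_finite=True))   # True
print(bound_pattern_ok(g, [-1, -1], [1, 1], lower_finite=True, upper_finite=False))   # False
```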

4.5.1 Duality for general convex optimization

Similar to the linear case, MOSEK reports dual information in the general nonlinear case. Indeed, in this case the Lagrange function is defined by

\[
\begin{array}{rl}
L(x, s_l^c, s_u^c, s_l^x, s_u^x) := & f(x) + c^T x + c^f \\
 & - \, (s_l^c)^T (g(x) + Ax - l^c) - (s_u^c)^T (u^c - g(x) - Ax) \\
 & - \, (s_l^x)^T (x - l^x) - (s_u^x)^T (u^x - x),
\end{array}
\]

and the dual problem is given by

\[
\begin{array}{ll}
\mbox{maximize}   & L(x, s_l^c, s_u^c, s_l^x, s_u^x) \\
\mbox{subject to} & \nabla_x L(x, s_l^c, s_u^c, s_l^x, s_u^x)^T = 0, \\
                  & s_l^c, s_u^c, s_l^x, s_u^x \geq 0,
\end{array}
\]

which is equivalent to

\[
\begin{array}{ll}
\mbox{maximize}   & (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x + c^f \\
                  & + \, f(x) - g(x)^T y - (\nabla f(x)^T - \nabla g(x)^T y)^T x \\
\mbox{subject to} & A^T y + s_l^x - s_u^x - (\nabla f(x)^T - \nabla g(x)^T y) = c, \\
                  & - y + s_l^c - s_u^c = 0, \\
                  & s_l^c, s_u^c, s_l^x, s_u^x \geq 0.
\end{array}
\]

In this context we use the following definition for scalar functions and, accordingly, for vector functions:

\[
\nabla f(x) = \left[ \frac{\partial f(x)}{\partial x_1}, \ldots, \frac{\partial f(x)}{\partial x_n} \right],
\qquad
\nabla g(x) = \left[ \begin{array}{c} \nabla g_1(x) \\ \vdots \\ \nabla g_m(x) \end{array} \right].
\tag{4.18}
\]
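The row-vector convention in (4.18) fixes the shapes used in the dual: ∇g(x) is the m × n matrix whose i-th row is ∇g_i(x), so ∇f(x)^T and ∇g(x)^T y are column n-vectors conformable with c. As a hedged illustration of this convention (the helper below is our own finite-difference sketch, not a MOSEK routine):

```python
import numpy as np

def jacobian(g, x, h=1e-6):
    """Approximate the m x n matrix of (4.18): row i holds nabla g_i(x)."""
    x = np.asarray(x, float)
    g0 = np.asarray(g(x), float)
    J = np.zeros((g0.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (np.asarray(g(x + e)) - np.asarray(g(x - e))) / (2.0 * h)
    return J

# Example: g(x) = (x_1 x_2, e^{x_3}); the Jacobian rows are nabla g_1 and nabla g_2.
g = lambda x: np.array([x[0] * x[1], np.exp(x[2])])
print(jacobian(g, [1.0, 2.0, 0.0]))   # approx [[2, 1, 0], [0, 0, 1]]
```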

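The first constraint of the equivalent dual is just the stationarity condition ∇_x L = 0 spelled out: differentiating L in x and substituting y = s_l^c − s_u^c (the second dual constraint) gives ∇_x L^T = c − (A^T y + s_l^x − s_u^x − (∇f(x)^T − ∇g(x)^T y)). The sketch below checks this identity by finite differences on small random illustrative data; it does not call MOSEK, and convexity of the instance is irrelevant here because the identity is purely algebraic.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A, c, cf = rng.standard_normal((m, n)), rng.standard_normal(n), 0.7
lc, uc = -np.ones(m), np.ones(m)
lx, ux = -np.ones(n), np.ones(n)

f = lambda x: 0.5 * x @ x                            # so grad f(x)^T = x
g = lambda x: np.array([np.sin(x[0]), x[1] * x[2], x[3] ** 2])
def Jg(x):                                           # rows are nabla g_i(x), as in (4.18)
    J = np.zeros((m, n))
    J[0, 0] = np.cos(x[0])
    J[1, 1], J[1, 2] = x[2], x[1]
    J[2, 3] = 2.0 * x[3]
    return J

scl, scu = rng.uniform(size=m), rng.uniform(size=m)  # s_l^c, s_u^c >= 0
sxl, sxu = rng.uniform(size=n), rng.uniform(size=n)  # s_l^x, s_u^x >= 0
x = rng.uniform(lx, ux)
y = scl - scu                                        # second dual constraint

def L(x):                                            # the Lagrange function above
    return (f(x) + c @ x + cf
            - scl @ (g(x) + A @ x - lc) - scu @ (uc - g(x) - A @ x)
            - sxl @ (x - lx) - sxu @ (ux - x))

h, gradL = 1e-6, np.zeros(n)
for j in range(n):                                   # central-difference gradient of L
    e = np.zeros(n)
    e[j] = h
    gradL[j] = (L(x + e) - L(x - e)) / (2.0 * h)

lhs = A.T @ y + sxl - sxu - (x - Jg(x).T @ y)        # grad f(x)^T = x for this f
print(np.allclose(gradL, c - lhs, atol=1e-5))        # True: grad_x L = 0  <=>  lhs = c
```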
