The MOSEK Python optimizer API manual Version 7.0 (Revision 141)
100 CHAPTER 8. A CASE STUDY

    maximize    \mu^T x
    subject to  e^T x = w + e^T x^0,
                [\gamma; G^T x] \in \mathcal{Q}^{n+1},        (8.3)
                x \ge 0,

which is a conic quadratic optimization problem that can easily be solved using MOSEK. Subsequently we will use the example data

    \mu = [ 0.1073; 0.0737; 0.0627 ]

and

    \Sigma = 0.1 [ 0.2778   0.0387   0.0021
                   0.0387   0.1112  -0.0020
                   0.0021  -0.0020   0.0115 ].

This implies

    G^T = \sqrt{0.1} [ 0.5271   0.0734   0.0040
                       0        0.3253  -0.0070
                       0        0        0.1069 ]

using 5 figures of accuracy. Moreover, let

    x^0 = [ 0.0; 0.0; 0.0 ]   and   w = 1.0.

The data has been taken from [5].

8.1.2.1 Why a conic formulation?

The problem (8.1) is a convex quadratically constrained optimization problem that can be solved directly using MOSEK, so why reformulate it as a conic quadratic optimization problem? The main reason for choosing a conic model is that it is more robust and usually leads to shorter solution times. For instance, it is not always easy to determine whether the Q matrix in (8.1) is positive semidefinite, due to the presence of rounding errors. It is also very easy to make a mistake so that Q becomes indefinite. These causes of problems are completely eliminated in the conic formulation. Moreover, observe that the constraint

    \|G^T x\| \le \gamma

is nicer than

    x^T \Sigma x \le \gamma^2

for small and large values of \gamma. For instance, assume a \gamma of 10000; then \gamma^2 would be 1.0e8, which introduces a scaling issue into the model. Hence, using the conic formulation it is possible to work with the standard deviation instead of the variance, which usually gives rise to a better scaled model.

8.1.2.2 Implementing the portfolio model

The model (8.3) cannot be implemented as stated using the MOSEK optimizer API, because the API requires the problem to be on the form

    maximize    c^T \hat{x}
    subject to  l^c \le A\hat{x} \le u^c,        (8.4)
                l^x \le \hat{x} \le u^x,
                \hat{x} \in \mathcal{K},

where \hat{x} is referred to as the API variable. The first step in bringing (8.3) to the form (8.4) is the reformulation

    maximize    \mu^T x
    subject to  e^T x = w + e^T x^0,
                G^T x - t = 0,                   (8.5)
                [s; t] \in \mathcal{Q}^{n+1},
                x \ge 0,  s = \gamma,

where s is an additional scalar variable and t is an n-dimensional vector variable. The next step is to define a mapping of the variables

    \hat{x} = [x; s; t].                         (8.6)

Hence, the API variable \hat{x} is a concatenation of the model variables x, s and t. In Table 8.1 the details of the concatenation are specified. For instance, it can be seen that \hat{x}_{n+2} = t_1, because the offset of the t variable is n + 2. Given the ordering of the variables specified by (8.6), the data should be defined as follows
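The data relations above can be checked with a short numerical sketch. The snippet below uses plain numpy rather than the MOSEK API, so it only validates the example data and illustrates the concatenation (8.6); the offset variable names are ours, not API identifiers. It recovers G^T from \Sigma by a Cholesky factorization and confirms the 5-figure factor quoted in the text.

```python
import numpy as np

# Example data from Section 8.1.2: Sigma = 0.1 * S and the mean vector mu.
S = np.array([[0.2778,  0.0387,  0.0021],
              [0.0387,  0.1112, -0.0020],
              [0.0021, -0.0020,  0.0115]])
Sigma = 0.1 * S
mu = np.array([0.1073, 0.0737, 0.0627])

# Sigma = G G^T, where numpy returns the lower triangular factor G;
# hence G^T is the upper triangular matrix shown in the text.
G = np.linalg.cholesky(Sigma)

# The factor quoted in the manual, given to 5 figures of accuracy.
GT_text = np.sqrt(0.1) * np.array([[0.5271, 0.0734,  0.0040],
                                   [0.0,    0.3253, -0.0070],
                                   [0.0,    0.0,     0.1069]])
assert np.allclose(G.T, GT_text, atol=1e-4)

# Variable concatenation xhat = [x; s; t] from (8.6), 1-based as in
# Table 8.1: x occupies positions 1..n, s position n+1, and t starts
# at offset n+2, so xhat_{n+2} = t_1.
n = len(mu)
offset_x, offset_s, offset_t = 1, n + 1, n + 2
print(offset_x, offset_s, offset_t)  # → 1 4 5
```

When the task is built with the optimizer API, the same offsets (shifted to 0-based indexing in Python) determine which API variable indices correspond to x, s and t.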