The MOSEK Python optimizer API manual Version 7.0 (Revision 141)
CHAPTER 14. PRIMAL FEASIBILITY REPAIR

which is infeasible for any ε ≠ 0. Here the infeasibility is caused by a linear dependency in the constraint matrix together with a right-hand side that does not match when ε ≠ 0. Observe that even if the problem is feasible, a tiny perturbation of the right-hand side makes it infeasible. Therefore, even though the problem can be repaired, a much more robust solution is to avoid linearly dependent constraints altogether. Indeed, if a problem contains linear dependencies, then it is either infeasible or contains redundant constraints; in the case above, any one of the equality constraints can be removed without changing the set of feasible solutions. To summarize: linear dependencies among the constraints can give rise to infeasible problems, so it is better to avoid them. Note that most network flow models are formulated with one linearly dependent constraint.

Next consider the problem

    minimize    0
    subject to  x_1 − 0.01 x_2 = 0,
                x_2 − 0.01 x_3 = 0,
                x_3 − 0.01 x_4 = 0,
                x_1 ≥ −1.0e−9,
                x_1 ≤ 1.0e−9,
                x_4 ≤ −1.0e−4.                                    (14.2)

For the sake of efficiency the MOSEK presolve fixes variables (and constraints) that have tight bounds, where tightness is controlled by the parameter dparam.presolve_tol_x. Since the bounds −1.0e−9 ≤ x_1 ≤ 1.0e−9 are tight, the presolve fixes the variable x_1 at the midpoint between the bounds, i.e. at 0. It is easy to see that this implies x_4 = 0 too, which leads to the incorrect conclusion that the problem is infeasible. Observe that a tiny change of size 1.0e−9 switches the problem between feasible and infeasible. Such a problem is inherently unstable and hard to solve; we normally call such a problem ill-posed. In general it is recommended to avoid ill-posed problems, but if that is not possible, one remedy is to reduce the parameter dparam.presolve_tol_x to, say, 1.0e−10.
This will at least make sure that the presolve does not reach the wrong conclusion.

14.2 Automatic repair

In this section we describe the idea behind a method that can automatically repair an infeasible problem. The main idea is as follows. Consider the linear optimization problem with m constraints and n variables

    minimize    c^T x + c^f
    subject to  l^c ≤ Ax ≤ u^c,
                l^x ≤ x ≤ u^x,                                    (14.3)

which is assumed to be infeasible.
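Infeasibility of a problem of the form (14.3) is a statement about the bounds only: a point x is feasible exactly when every constraint activity (Ax)_i lies in [(l^c)_i, (u^c)_i] and every x_j lies in [(l^x)_j, (u^x)_j]. A minimal plain-Python checker makes this concrete; the helper name is hypothetical and this is not part of the MOSEK API:

```python
INF = float('inf')

def is_feasible(A, lc, uc, lx, ux, x, tol=1e-12):
    """Check l^c <= A@x <= u^c and l^x <= x <= u^x for a dense matrix A
    given as a list of rows; use -INF/INF for absent bounds."""
    activity = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    cons_ok = all(l - tol <= a <= u + tol
                  for l, a, u in zip(lc, activity, uc))
    vars_ok = all(l - tol <= v <= u + tol
                  for l, v, u in zip(lx, x, ux))
    return cons_ok and vars_ok

# A tiny infeasible instance: the constraint x1 + x2 = 1 together with
# the variable bounds x1 <= 0, x2 <= 0 admits no feasible point.
A = [[1.0, 1.0]]
lc, uc = [1.0], [1.0]
lx, ux = [-INF, -INF], [0.0, 0.0]
assert not is_feasible(A, lc, uc, lx, ux, [0.0, 0.0])

# Raising the upper variable bounds to 1 repairs it, e.g. at x = (0.5, 0.5):
assert is_feasible(A, lc, uc, lx, [1.0, 1.0], [0.5, 0.5])
```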
One way of making the problem feasible is to reduce the lower bounds and increase the upper bounds; if the change is sufficiently large, the problem becomes feasible. An obvious idea is therefore to compute an optimal relaxation by solving an optimization problem. The problem

    minimize    p(v_l^c, v_u^c, v_l^x, v_u^x)
    subject to  l^c ≤ Ax + v_l^c − v_u^c ≤ u^c,
                l^x ≤ x + v_l^x − v_u^x ≤ u^x,
                v_l^c, v_u^c, v_l^x, v_u^x ≥ 0                    (14.4)

does exactly that. The additional variables (v_l^c)_i, (v_u^c)_i, (v_l^x)_j and (v_u^x)_j are called elasticity variables because they allow a constraint to be violated and hence add some elasticity to the problem. For instance, the elasticity variable (v_l^c)_i controls how much the lower bound (l^c)_i should be relaxed to make the problem feasible. Finally, the so-called penalty function

    p(v_l^c, v_u^c, v_l^x, v_u^x)

is chosen so that it penalizes changes to the bounds. Given the weights

• w_l^c ∈ R^m (associated with l^c),
• w_u^c ∈ R^m (associated with u^c),
• w_l^x ∈ R^n (associated with l^x),
• w_u^x ∈ R^n (associated with u^x),

a natural choice is

    p(v_l^c, v_u^c, v_l^x, v_u^x) = (w_l^c)^T v_l^c + (w_u^c)^T v_u^c + (w_l^x)^T v_l^x + (w_u^x)^T v_u^x.    (14.5)

Hence, the penalty function p() is a weighted sum of the relaxations, and therefore the problem (14.4) keeps the amount of relaxation at a minimum. Please observe that

• the problem (14.4) is always feasible;
• a negative weight implies that problem (14.4) is unbounded. For this reason, if the value of a weight is negative, MOSEK fixes the associated elasticity variable to zero. Clearly, one or more negative weights may therefore imply that it is not possible to repair the problem.

A simple choice is to let all the weights be 1, but of course that does not take into account that constraints may have different importance.

14.2.1 Caveats

Observe that if the infeasible problem