# 9.2 Addressing numerical issues¶

The suggestions in this section should help diagnose and solve issues with numerical instability, in particular an `UNKNOWN` solution status or solutions with large violations. Since numerically stable models tend to solve faster, following these hints can also dramatically shorten solution times.

We always recommend that issues of this kind are addressed by reformulating or rescaling the model, since it is the modeler who has the best insight into the structure of the problem and can fix the cause of the issue.

## 9.2.1 Formulating problems¶

**Scaling**

Make sure that all the data in the problem are of comparable orders of magnitude. This applies especially to the linear constraint matrix. Consult Sec. 9.1.1 (Input data) if necessary. For example, a report such as

```
|A| nnz: 597023 min=1.17e-6 max=2.21e+5
```

means that the ratio of the largest to the smallest element in `A` is about \(10^{11}\). In this case the user should rescale or reformulate the model to avoid such a spread, which makes it difficult for **MOSEK** to scale the problem internally. In many cases it may be possible to change the units, i.e. express the model in terms of rescaled variables (for instance, work with millions of dollars instead of dollars, etc.).
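As an illustration, the unit change can be sketched in plain Python (this is not MOSEK functionality; the matrix is stored as hypothetical `(row, col, value)` triplets and the numbers reuse the report above):

```python
# Sketch: measure the coefficient spread of a sparse matrix given as
# (row, col, value) triplets, and rescale one column, i.e. change the
# unit in which one variable is expressed. Plain Python, not MOSEK API.

def coefficient_spread(triplets):
    """Return (min |a|, max |a|) over the nonzero coefficients."""
    mags = [abs(v) for (_, _, v) in triplets if v != 0.0]
    return min(mags), max(mags)

def rescale_column(triplets, col, factor):
    """Substitute x_col = factor * x_col'; multiplies column col by factor."""
    return [(i, j, v * factor if j == col else v) for (i, j, v) in triplets]

A = [(0, 0, 2.21e+5), (0, 1, 1.17e-6), (1, 0, 3.0e+4), (1, 1, 5.0e-6)]
lo, hi = coefficient_spread(A)     # ratio hi/lo is about 1.9e11
B = rescale_column(A, 1, 1.0e+6)   # express x_1 in millions of its old unit
lo2, hi2 = coefficient_spread(B)   # ratio shrinks to about 1.9e5
```

A single unit change already cuts six orders of magnitude off the spread here; rescaling the badly scaled rows as well would reduce it further.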

Similarly, if the objective contains very different coefficients, say

\[\mathrm{maximize}\quad 10^{10} x + y,\]

then it is likely to lead to inaccuracies. The objective will be dominated by the contribution from \(x\), and \(y\) will become insignificant.

**Removing huge bounds**

**Never** use a very large number as a replacement for \(\infty\). Instead define the variable or constraint as unbounded from below/above. Similarly, avoid artificial huge bounds if you expect they will not become tight in the optimal solution.
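The idea can be illustrated with a small helper (illustrative only; the threshold and the helper are not part of MOSEK, and in a real model the variable should simply be declared unbounded through the API):

```python
import math

# Sketch: treat any finite bound of huge magnitude as "no bound at all".
# HUGE is an arbitrary illustrative cutoff, not a MOSEK constant.
HUGE = 1.0e16

def sanitize_bounds(lb, ub):
    """Replace artificial huge bounds by genuine +/- infinity."""
    if lb <= -HUGE:
        lb = -math.inf
    if ub >= HUGE:
        ub = math.inf
    return lb, ub

print(sanitize_bounds(-1.0e20, 1.0e20))   # (-inf, inf)
print(sanitize_bounds(0.0, 1.0e20))       # (0.0, inf)
```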

**Avoiding linear dependencies**

As much as possible try to avoid linear dependencies and near-linear dependencies in the model. See Example 9.3.

**Avoiding ill-posedness**

Avoid continuous models which are ill-posed: the solution space is degenerate, for example consists of a single point (technically, the Slater condition is not satisfied). In general, this refers to problems which are borderline between feasible and infeasible. See Example 9.1.

**Scaling the expected solution**

Try to formulate the problem in such a way that the expected solution (both primal and dual) is not very large. Consult the solution summary Sec. 9.1.2 (Solution summary) to check the objective values or solution norms.

## 9.2.2 Further suggestions¶

Here are other simple suggestions that can help locate the cause of the issues. They can also be used as hints for how to tune the optimizer if fixing the root causes of the issue is not possible.

- Remove the objective and solve the feasibility problem. This can reveal issues with the objective.
- Change the objective, or change the objective sense from minimization to maximization (if applicable). If the two objective values are almost identical, this may indicate that the feasible set is very small, possibly degenerate.
- Perturb the data, for instance the bounds, very slightly, and compare the results.
- For linear problems: solve the problem using a different optimizer by setting the parameter `optimizer` and compare the results.
- Force the optimizer to solve the primal/dual versions of the problem by setting the parameter `intpntSolveForm` or `simSolveForm`. **MOSEK** has a heuristic to decide whether to dualize, but for some problems this guess is wrong and an explicit choice may give better results.
- Solve the problem without presolve, or without some of its parts, by setting the parameter `presolveUse`, see Sec. 13.1 (Presolve).
- Use different numbers of threads (`numThreads`) and compare the results. Very different results indicate numerical issues resulting from round-off errors.

If the problem was dumped to a file, experimenting with various parameters is facilitated by the **MOSEK** Command Line Tool or the **MOSEK** Python Console, see Sec. 9.4 (Python Console).

## 9.2.3 Typical pitfalls¶

**Example 9.1** (Ill-posedness). A toy example of an ill-posed problem is the feasibility problem

\[(x-1)^2 \leq 1, \qquad (x+1)^2 \leq 1,\]

whose only solution is \(x=0\). Moreover, replacing any \(1\) on the right-hand side by \(1-\varepsilon\) makes the problem infeasible, while replacing it by \(1+\varepsilon\) yields a problem whose solution set is an interval (fully-dimensional).
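The knife-edge behavior can be checked numerically. Assuming toy constraints of the form \((x-1)^2 \leq r_1\) and \((x+1)^2 \leq r_2\) (consistent with the description above), each constraint describes an interval and the feasible set is their intersection:

```python
import math

# Numeric check of the ill-posed toy problem (x-1)^2 <= r1, (x+1)^2 <= r2.
# Each constraint is an interval for x; the feasible set is the intersection.

def feasible_interval(r1, r2):
    """Return the feasible interval (lo, hi), or None if infeasible."""
    if r1 < 0 or r2 < 0:
        return None
    lo = max(1 - math.sqrt(r1), -1 - math.sqrt(r2))
    hi = min(1 + math.sqrt(r1), -1 + math.sqrt(r2))
    return (lo, hi) if lo <= hi else None

eps = 1e-6
print(feasible_interval(1, 1))          # (0.0, 0.0): the single point x = 0
print(feasible_interval(1 - eps, 1))    # None: infeasible
print(feasible_interval(1 + eps, 1))    # a tiny full-dimensional interval
```

An arbitrarily small perturbation flips the problem between infeasible, a single point, and a full-dimensional set, which is exactly what makes such models hard for a floating-point solver.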

**Example 9.2** (Huge solution). If the norm of the expected solution is very large it may lead to numerical issues or infeasibility. For example, a problem containing a constraint such as

\[2\cdot 10^{-10}\, x \geq 1\]

may be declared infeasible because the expected solution must satisfy \(x\geq 5\cdot 10^{9}\).

**Example 9.3** (Linear dependency). Consider the following problem:

\[\begin{array}{rcl} x_1 - x_2 & = & 0, \\ x_2 - x_3 & = & 0, \\ x_3 - x_1 & = & \varepsilon. \end{array}\]

If we add the equalities together we obtain

\[0 = \varepsilon,\]

which is infeasible for any \(\varepsilon \neq 0\). Here infeasibility is caused by a linear dependency in the constraint matrix coupled with a precision error represented by \(\varepsilon\). Indeed, if a problem contains linear dependencies then it is either infeasible or contains redundant constraints. In the above case (with \(\varepsilon=0\)) any one of the equality constraints can be removed without changing the set of feasible solutions. To summarize: linear dependencies in the constraints can give rise to infeasible problems, and therefore it is better to avoid them.
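A dependency of this kind can be spotted numerically. The sketch below checks only one particular combination of rows, the plain sum, which already catches the cyclic example above; detecting general dependencies requires a rank computation:

```python
# Sketch: if the rows of A sum (numerically) to the zero vector while the
# right-hand sides b do not, the equality system A x = b is inconsistent.
# This checks one specific combination (the sum of all rows) only.

def sum_is_dependent(A, b, tol=1e-12):
    """True if the rows cancel but the right-hand sides do not."""
    col_sums = [sum(col) for col in zip(*A)]
    rows_cancel = all(abs(s) <= tol for s in col_sums)
    rhs_cancels = abs(sum(b)) <= tol
    return rows_cancel and not rhs_cancels

eps = 1e-7
A = [[1, -1, 0], [0, 1, -1], [-1, 0, 1]]   # rows sum to (0, 0, 0)
print(sum_is_dependent(A, [0, 0, eps]))    # True: infeasible for eps != 0
print(sum_is_dependent(A, [0, 0, 0]))      # False: feasible, one row redundant
```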

**Example 9.4** (Presolving very tight bounds). Next consider a problem of the following form:

\[\begin{array}{l} x_2 = 10^{3}\, x_1, \quad x_3 = 10^{3}\, x_2, \quad x_4 = 10^{3}\, x_3, \quad x_4 \geq 1, \\ 0 \leq x_1 \leq 10^{-9}. \end{array}\]

The **MOSEK** presolve will, for the sake of efficiency, fix variables (and constraints) that have tight bounds, where tightness is controlled by the parameter `presolveTolX`. Since the bounds

\[0 \leq x_1 \leq 10^{-9}\]

are tight, presolve will set \(x_1=0\). It is easy to see that this implies \(x_4=0\), which leads to the incorrect conclusion that the problem is infeasible. However, a tiny change of the value \(10^{-9}\) makes the problem feasible. In general it is recommended to avoid ill-posed problems, but if that is not possible then one solution is to reduce parameters such as `presolveTolX` to, say, \(10^{-10}\). This will at least ensure that presolve does not make the undesired reduction.
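A toy version of the fixing rule (a deliberate simplification; MOSEK's actual presolve is far more involved) shows why lowering the tolerance prevents the undesired reduction:

```python
# Sketch: a toy model of presolve bound fixing. If the bound interval
# [lb, ub] is narrower than the tolerance, the variable is fixed to lb.
# Illustrative only; this is not MOSEK's actual presolve logic.

def maybe_fix(lb, ub, tol):
    """Return the value the variable is fixed to, or None if left free."""
    return lb if ub - lb <= tol else None

print(maybe_fix(0.0, 1e-9, tol=1e-8))    # 0.0  -> variable fixed at zero
print(maybe_fix(0.0, 1e-9, tol=1e-10))   # None -> variable left free
```

With the larger tolerance the bound interval \([0, 10^{-9}]\) collapses to \(x_1 = 0\), triggering the spurious infeasibility described above; with the smaller tolerance the variable survives presolve.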