# 7. Practical optimization¶

## 7.1. Introduction¶

In this chapter we discuss different practical aspects of creating optimization models.

## 7.2. The quality of a solution¶

In this section we discuss how to validate an obtained solution. Assume we have a conic model with continuous variables only and that the optimization software has reported an optimal primal and dual solution. Given such a solution, how can we verify that it is indeed optimal?

To that end, consider a simple model

\[
\begin{array}{ll}
\mbox{minimize} & -x_1 - x_2 \\
\mbox{subject to} & (1, x_1, x_2) \in \mathcal{Q}^3,
\end{array}
\tag{1}
\]

i.e., minimization of \(-x_1-x_2\) over the unit disk \(x_1^2 + x_2^2 \leq 1\), where MOSEK might approximate the solution as

\[
x_1^\ast = x_2^\ast = 0.7071067811865475,
\]

and therefore the approximate optimal objective value is

\[
-x_1^\ast - x_2^\ast = -1.414213562373095.
\]

The true objective value is \(-\sqrt{2}\), so the approximate objective value is wrong by the amount

\[
\left| -x_1^\ast - x_2^\ast + \sqrt{2} \right| \approx 10^{-16}.
\]

Most likely this difference is irrelevant for all practical purposes. Nevertheless, in general a solution obtained using floating point arithmetic is only an approximation. Most (if not all) commercial optimization software uses double precision floating point arithmetic, implying that about 16 digits are correct in the computations performed internally by the software. This also means that irrational numbers such as \(\sqrt{2}\) and \(\pi\) can only be stored accurately within 16 digits.
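As a small illustration (plain Python, not MOSEK-specific), one can check how much roundoff the double-precision representation of \(\sqrt{2}\) carries:

```python
import math

r = math.sqrt(2.0)       # double-precision approximation of sqrt(2)
eps = 2.0 ** -52         # machine epsilon for IEEE 754 doubles
err = abs(r * r - 2.0)   # residual after squaring the approximation

# err is nonzero, but only on the order of machine epsilon (~1e-16)
```

Squaring the stored value does not return exactly 2; the residual is a few machine epsilons, which is the accuracy one can reasonably expect from any double-precision computation.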

A good practice after solving an optimization problem is to evaluate whether the reported solution is indeed an optimal solution; at the very least this process should be carried out during the initial phase of building a model, or if the reported solution is unexpected in some way. The first step in that process is to check that the solution is feasible; in case of the small example (1) this amounts to verifying that

\[
(x_1^\ast)^2 + (x_2^\ast)^2 \leq 1,
\]

which demonstrates that the solution is feasible. However, in general the constraints might be violated due to computations in finite precision. It can be difficult to assess the significance of a specific violation; for example, a violation of one unit in

\[
x_1 + x_2 \leq 1
\]

may or may not be more significant than a violation of one unit in

\[
x_1 + x_2 \leq 10^9.
\]

The right-hand side of \(10^9\) may itself be the result of a computation in finite precision, and may only be known with, say, 3 digits of accuracy. Therefore, a violation of one unit is not significant, since the true right-hand side could just as well be \(1.001 \cdot 10^9\).
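A common remedy is to measure violations relative to the magnitude of the data rather than in absolute terms; a minimal sketch (the helper name and scaling convention are illustrative, not a solver API):

```python
def relative_violation(lhs, rhs):
    """Violation of a constraint lhs <= rhs, scaled by the size of rhs."""
    return max(lhs - rhs, 0.0) / max(1.0, abs(rhs))

# one absolute unit of violation against very different right-hand sides
small_rhs = relative_violation(2.0, 1.0)        # severe: relative violation 1.0
large_rhs = relative_violation(1e9 + 1.0, 1e9)  # negligible: 1e-9
```

The same absolute violation of one unit is relative violation \(1\) against a right-hand side of \(1\), but only \(10^{-9}\) against a right-hand side of \(10^9\).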

Another question is how to verify that a feasible approximate solution is actually optimal, which is answered by duality theory. From the discussion of duality in Sec. 2.3, the dual problem to (1) is

\[
\begin{array}{ll}
\mbox{maximize} & -y_1 \\
\mbox{subject to} & y_2 = y_3 = 1, \\
& (y_1, y_2, y_3) \in \mathcal{Q}^3,
\end{array}
\]

and weak duality mandates that the primal objective value is greater than or equal to the dual objective value for any pair of primal and dual feasible solutions, i.e.,

\[
-x_1 - x_2 \geq -y_1.
\]

Furthermore, if the bound is tight (i.e., if the inequality holds as an equality), then the primal solution must be optimal. In other words, if there exists a dual feasible solution such that the dual objective value is identical to \(-\sqrt{2}\), then we know (from duality) that the primal solution is optimal. For example, the optimization software may report

\[
y_1^\ast = 1.4142135623730951, \quad y_2^\ast = y_3^\ast = 1,
\]

which is easily verified to be a feasible solution with dual objective value \(-y_1^\ast \approx -\sqrt{2}\), proving (to within floating point accuracy) optimality of both the primal and dual solutions.
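This check is easy to automate. A sketch in plain Python, assuming the primal problem minimize \(-x_1-x_2\) over \(x_1^2+x_2^2\leq 1\) and its conic dual with \((y_1,y_2,y_3)\in\mathcal{Q}^3\); the reported digits below are illustrative:

```python
import math

x = (0.7071067811865475, 0.7071067811865475)  # reported primal solution
y = (1.4142135623730951, 1.0, 1.0)            # reported dual solution

tol = 1e-8
primal_feasible = x[0] ** 2 + x[1] ** 2 <= 1.0 + tol
dual_feasible = y[0] >= math.hypot(y[1], y[2]) - tol  # (y1, y2, y3) in Q^3
gap = abs((-x[0] - x[1]) - (-y[0]))           # duality gap

# a (near-)zero gap between feasible solutions certifies optimality
```

Feasibility of both solutions together with a duality gap at the level of machine precision is exactly the optimality certificate described above.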

## 7.3. Distance to a cone¶

Assume we want to solve the conic optimization problem

\[
\begin{array}{ll}
\mbox{minimize} & c^T x \\
\mbox{subject to} & Ax = b, \\
& x \in \K,
\end{array}
\]

where \(\K\) is a convex cone. Let \(P(x,\K)\) denote the projection of \(x\) onto \(\K\), i.e., the point of \(\K\) closest to \(x\) in the Euclidean norm. The distance measure

\[
d(x,\K) := \left\| x - P(x,\K) \right\|_2
\]

then gives the distance from \(x\) to the nearest feasible point in \(\K\), and obviously

\[
d(x,\K) = 0 \iff x \in \K.
\]

Thus \(d(x,\K)\) is a natural measure of the cone-feasibility of a point \(x\): if \(d(x,\K)>0\) then \(x\not \in \K\) (i.e., \(x\) is infeasible), and the smaller the value, the closer \(x\) is to \(\K\). For \(x\in \R\), let

\[
[x]_+ := \max(0, x), \qquad [x]_- := \max(0, -x),
\]

so that \(x = [x]_+ - [x]_-\).

The following lemmas then give the projections and distance measure for the different cones.

For the linear cone \(\R_+^n\), the projection is

\[
P(x, \R_+^n) = \left( [x_1]_+, \ldots, [x_n]_+ \right)
\]

and the distance is

\[
d(x, \R_+^n) = \left\| x - P(x, \R_+^n) \right\|_2 = \left( \sum_{i=1}^n [x_i]_-^2 \right)^{1/2}.
\]

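In code, projecting onto the nonnegative orthant is just componentwise clipping; a NumPy sketch (function names are illustrative):

```python
import numpy as np

def proj_orthant(x):
    """P(x, R+^n): zero out the negative components."""
    return np.maximum(x, 0.0)

def dist_orthant(x):
    """d(x, R+^n): Euclidean norm of the negative parts [x]_-."""
    return np.linalg.norm(np.minimum(x, 0.0))

x = np.array([1.0, -3.0, 4.0])
# proj_orthant(x) -> [1., 0., 4.] and dist_orthant(x) -> 3.0
```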
For the quadratic cone \(\mathcal{Q}^n\), let \(x:=(x_1, x_2)\in \R\times \R^{n-1}\). Then

\[
P(x, \mathcal{Q}^n) = \left\{
\begin{array}{ll}
x, & x_1 \geq \|x_2\|_2, \\
0, & x_1 \leq -\|x_2\|_2, \\
\frac{x_1 + \|x_2\|_2}{2} \left( 1, \frac{x_2}{\|x_2\|_2} \right), & \mbox{otherwise},
\end{array}
\right.
\]

see [FLT02] prop. 3.3, and

\[
d(x, \mathcal{Q}^n) = \left\| x - P(x, \mathcal{Q}^n) \right\|_2 = \left\{
\begin{array}{ll}
0, & x_1 \geq \|x_2\|_2, \\
\|x\|_2, & x_1 \leq -\|x_2\|_2, \\
\frac{\|x_2\|_2 - x_1}{\sqrt{2}}, & \mbox{otherwise}.
\end{array}
\right.
\]

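The three cases translate directly into code; a NumPy sketch of the projection onto \(\mathcal{Q}^n\) and the induced distance (function names are illustrative):

```python
import numpy as np

def proj_quad_cone(x):
    """Projection of x = (x1, x2) onto Q^n = {x : x1 >= ||x2||_2}."""
    x1, x2 = x[0], x[1:]
    t = np.linalg.norm(x2)
    if x1 >= t:                  # x is already in the cone
        return x.copy()
    if x1 <= -t:                 # -x is in the cone; the nearest point is 0
        return np.zeros_like(x)
    # otherwise project onto the boundary of the cone
    return 0.5 * (x1 + t) * np.concatenate(([1.0], x2 / t))

def dist_quad_cone(x):
    return np.linalg.norm(x - proj_quad_cone(x))

x = np.array([0.0, 3.0, 4.0])
# proj_quad_cone(x) -> [2.5, 1.5, 2.0]; dist_quad_cone(x) -> 5/sqrt(2)
```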
For the rotated quadratic cone \(\mathcal{Q}_r^n\), let \(x:=(x_1, x_2, x_3)\in \R\times \R\times \R^{n-2}\). Then

\[
P(x, \mathcal{Q}_r^n) = T_n P(T_n x, \mathcal{Q}^n)
\]

and

\[
d(x, \mathcal{Q}_r^n) = d(T_n x, \mathcal{Q}^n),
\]

where

\[
T_n = \left[
\begin{array}{ccc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
0 & 0 & I_{n-2}
\end{array}
\right]
\]

is the orthogonal (and involutory) rotation operator defining the rotated cone, i.e., \(\mathcal{Q}_r^n = T_n \mathcal{Q}^n\).
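A quick numeric sanity check of the rotation in the three-dimensional case (a sketch, with \(T_3\) written out explicitly and membership in \(\mathcal{Q}_r^3\) taken as \(2x_1x_2 \geq \|x_3\|^2\), \(x_1,x_2\geq 0\)):

```python
import numpy as np

s = np.sqrt(0.5)
T3 = np.array([[s,   s,   0.0],
               [s,  -s,   0.0],
               [0.0, 0.0, 1.0]])  # orthogonal and involutory: T3 @ T3 = I

x = np.array([5.0, 3.0, 4.0])     # x in Q^3 since 5 >= ||(3, 4)||_2 = 5
y = T3 @ x
# membership in Q_r^3: y1, y2 >= 0 and 2*y1*y2 >= y3^2 (up to roundoff)
in_rotated = y[0] >= 0 and y[1] >= 0 and 2.0 * y[0] * y[1] >= y[2] ** 2 - 1e-9
```

Since \(T_3\) maps boundary points of \(\mathcal{Q}^3\) to boundary points of \(\mathcal{Q}_r^3\), the membership test above holds with (near-)equality for this example.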

For the semidefinite cone \(\mathcal{S}_+^n\), let \(X=\sum_{i=1}^n \lambda_i q_i q_i^T\in\mathcal{S}^n\) be an eigenvalue decomposition. Then

\[
P(X, \mathcal{S}_+^n) = \sum_{i=1}^n [\lambda_i]_+ q_i q_i^T
\]

and

\[
d(X, \mathcal{S}_+^n) = \left\| X - P(X, \mathcal{S}_+^n) \right\|_F = \left( \sum_{i=1}^n [\lambda_i]_-^2 \right)^{1/2}.
\]
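The eigenvalue formulas can be sketched with NumPy (function names are illustrative):

```python
import numpy as np

def proj_psd(X):
    """P(X, S+^n): keep only the nonnegative part of each eigenvalue."""
    lam, Q = np.linalg.eigh(X)
    return (Q * np.maximum(lam, 0.0)) @ Q.T

def dist_psd(X):
    """d(X, S+^n): Frobenius norm of the negative eigenvalues."""
    lam = np.linalg.eigh(X)[0]
    return np.linalg.norm(np.minimum(lam, 0.0))

X = np.diag([3.0, -2.0])
# proj_psd(X) -> diag(3, 0) and dist_psd(X) -> 2.0
```

`np.linalg.eigh` is used because \(X\) is symmetric; it returns the eigenvalues in ascending order together with an orthonormal eigenbasis, which is all the two formulas require.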