2 Linear optimization
In this chapter we discuss various aspects of linear optimization. We first introduce the basic concepts of linear optimization and discuss the underlying geometric interpretations. We then give examples of the most frequently used reformulations or modeling tricks used in linear optimization, and finally we discuss duality and infeasibility theory in some detail.
2.1 Introduction
2.1.1 Basic notions
The most basic type of optimization is linear optimization. In linear optimization we minimize a linear function given a set of linear constraints. For example, we may wish to minimize the linear function

$$x_1 + 2x_2 - x_3$$

under the constraints that

$$x_1 + x_2 + x_3 = 1, \quad x_1, x_2, x_3 \geq 0.$$

The function we minimize is often called the objective function; in this case we have a linear objective function. The constraints are also linear and consist of both linear equalities and inequalities. We typically use the more compact notation

$$\text{minimize } x_1 + 2x_2 - x_3 \quad \text{subject to } x_1 + x_2 + x_3 = 1,\ x_1, x_2, x_3 \geq 0, \qquad (2.1)$$

and we call (2.1) a linear optimization problem. The domain where all constraints are satisfied is called the feasible set; the feasible set for (2.1) is shown in Fig. 2.1.

Fig. 2.1 Feasible set for (2.1).
For this simple problem we see by inspection that the optimal value is $-1$, attained by the optimal solution $(x_1, x_2, x_3) = (0, 0, 1)$.
Linear optimization problems are typically formulated using matrix notation. The standard form of a linear minimization problem is:

$$\text{minimize } c^T x \quad \text{subject to } Ax = b,\ x \geq 0. \qquad (2.2)$$

For example, we can pose (2.1) in this form with

$$x = (x_1, x_2, x_3), \quad c = (1, 2, -1), \quad A = [\,1\ 1\ 1\,], \quad b = 1.$$
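The text does not rely on any particular solver, but problem (2.1) is small enough to verify numerically; the sketch below uses SciPy's `linprog` purely as an illustration.

```python
# A quick numerical check of problem (2.1) using scipy.optimize.linprog.
# (SciPy is not part of the text; it serves here only as an illustration.)
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, -1.0])       # objective coefficients of x1 + 2*x2 - x3
A_eq = np.array([[1.0, 1.0, 1.0]])   # x1 + x2 + x3 = 1
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.fun, res.x)  # optimal value -1.0, attained at x = (0, 0, 1)
```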
There are many other formulations for linear optimization problems; we can have different types of constraints,

$$Ax = b, \quad Ax \geq b, \quad Ax \leq b, \quad l^c \leq Ax \leq u^c,$$

and different bounds on the variables,

$$l^x \leq x \leq u^x,$$

or we may have no bounds on some $x_i$, in which case we say that $x_i$ is a free variable.
2.1.2 Geometry of linear optimization
A hyperplane is a subset of $\mathbb{R}^n$ of the form

$$H = \{x \mid a^T x = \gamma\}$$

for some nonzero normal vector $a \in \mathbb{R}^n$ and scalar $\gamma \in \mathbb{R}$; see Fig. 2.2.

Fig. 2.2 The dashed line illustrates a hyperplane $\{x \mid a^T x = \gamma\}$.

Thus a linear constraint

$$Ax = b$$

with $A \in \mathbb{R}^{m \times n}$ represents an intersection of $m$ hyperplanes.
Next, consider a point $x_0$ above the hyperplane, i.e., with $a^T x_0 > \gamma$. The set of all such points together with the hyperplane itself,

$$\{x \mid a^T x \geq \gamma\},$$

is called a halfspace; see Fig. 2.3.

Fig. 2.3 The grey area is the halfspace $\{x \mid a^T x \geq \gamma\}$.
A set of linear inequalities

$$Ax \leq b$$

corresponds to an intersection of halfspaces and forms a polyhedron, see Fig. 2.4.

Fig. 2.4 A polyhedron formed as an intersection of halfspaces.
The polyhedral description of the feasible set gives us a very intuitive interpretation of linear optimization, which is illustrated in Fig. 2.5. The dashed lines are normal to the objective vector $c$, i.e., they are the level sets $\{x \mid c^T x = \text{const}\}$; minimizing $c^T x$ corresponds to moving such a level set as far as possible in the direction $-c$ while it still touches the polyhedron.

Fig. 2.5 Geometric interpretation of linear optimization. The optimal solution $x^\star$ is attained at a vertex of the polyhedron.
The polyhedron shown in the figure is nonempty and bounded, but this is not always the case for polyhedra arising from linear inequalities in optimization problems. In such cases the optimization problem may be infeasible or unbounded, which we will discuss in detail in Sec. 2.3 (Infeasibility in linear optimization).
2.2 Linear modeling
In this section we present useful reformulation techniques and standard tricks which allow us to construct more complicated models using linear optimization. It also serves as a guide to the types of constraints that can be expressed using linear (in)equalities.
2.2.1 Maximum
The inequality

$$t \geq \max\{x_1, \dots, x_n\}$$

is equivalent to the system of inequalities

$$t \geq x_i, \quad i = 1, \dots, n,$$

and similarly $t \leq \min\{x_1, \dots, x_n\}$ is equivalent to

$$t \leq x_i, \quad i = 1, \dots, n.$$

Of course the same reformulation applies if each $x_i$ is not a variable but an affine function of the variables. In particular this lets us model convex piecewise-linear functions, defined as a maximum of affine functions (Fig. 2.6).

Fig. 2.6 A convex piecewise-linear function (solid lines) of a single variable $x$, defined as the maximum of affine functions.

The epigraph $t \geq f(x)$ of a convex piecewise-linear function $f(x) = \max_{i=1,\dots,m}(a_i x + b_i)$ has the equivalent linear formulation

$$t \geq a_i x + b_i, \quad i = 1, \dots, m.$$
Piecewise-linear functions have many uses in linear optimization: either we have a convex piecewise-linear formulation from the onset, or we may approximate a more complicated (nonlinear) problem using piecewise-linear approximations. The latter is becoming less common, since with modern nonlinear optimization software it is both easier and more efficient to formulate and solve nonlinear problems directly, without piecewise-linear approximations.
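The epigraph trick above can be checked on a small instance. The affine pieces below are an illustrative choice (not from the text), and SciPy's `linprog` is used only as a convenient LP solver.

```python
# Minimize f(x) = max(-2x - 1, -x/2, x - 2) by introducing an epigraph
# variable t and requiring t >= a_i*x + b_i for every affine piece.
# The pieces (a_i, b_i) are an illustrative choice, not from the text.
import numpy as np
from scipy.optimize import linprog

a = np.array([-2.0, -0.5, 1.0])
b = np.array([-1.0, 0.0, -2.0])

# Variables z = (x, t); constraints a_i*x - t <= -b_i; objective: minimize t.
c = np.array([0.0, 1.0])
A_ub = np.column_stack([a, -np.ones(3)])
b_ub = -b
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)

x_opt, t_opt = res.x
print(x_opt, t_opt)  # minimum where the pieces -x/2 and x - 2 intersect
```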
2.2.2 Absolute value
The absolute value of a scalar variable,

$$|x| = \max\{x, -x\},$$

is a special case of the maximum; to model the epigraph $t \geq |x|$ we can thus use two inequalities

$$-t \leq x \leq t.$$
2.2.3 The $\ell_1$ norm

All norms are convex functions, but the $\ell_1$ and $\ell_\infty$ norms are of particular interest for linear optimization. The $\ell_1$ norm of a vector $x \in \mathbb{R}^n$ is defined as

$$\|x\|_1 := |x_1| + |x_2| + \cdots + |x_n|.$$

To model the epigraph

$$t \geq \|x\|_1,$$

we introduce the following system

$$|x_i| \leq z_i,\ i = 1, \dots, n, \qquad \sum_{i=1}^n z_i = t,$$

with additional (auxiliary) variables $z \in \mathbb{R}^n$; the epigraph constraint holds if and only if this system is feasible for some $z$. Using Sec. 2.2.2 (Absolute value) we can in turn write the system with auxiliary variables $z$ as the linear constraints

$$-z_i \leq x_i \leq z_i,\ i = 1, \dots, n, \qquad \sum_{i=1}^n z_i = t.$$

Example 2.1 (Basis pursuit). The $\ell_1$ norm is frequently used as a heuristic for finding sparse solutions of underdetermined systems of linear equations. The basis pursuit problem

$$\text{minimize } \|x\|_1 \quad \text{subject to } Ax = b, \qquad (2.7)$$

where $A \in \mathbb{R}^{m \times n}$ with $m < n$, uses the $\ell_1$ norm as a convex proxy for the number of nonzero entries of $x$. With the reformulation above it becomes the linear problem

$$\text{minimize } \sum_{i=1}^n z_i \quad \text{subject to } Ax = b,\ -z \leq x \leq z,$$

where $z \in \mathbb{R}^n$ are auxiliary variables.
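The $\ell_1$-norm reformulation behind the basis pursuit problem (2.7) can be tested on a toy underdetermined system; the data and the use of SciPy's `linprog` below are illustrative choices, not from the text.

```python
# Basis pursuit: minimize ||x||_1 subject to Ax = b, rewritten as the LP
#   minimize sum(z)  subject to  Ax = b, -z <= x <= z,
# with stacked variables w = (x, z).  Toy data, chosen for illustration.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 2.0]])   # one equation, three unknowns (m < n)
b = np.array([2.0])
m, n = A.shape

c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of z
A_eq = np.hstack([A, np.zeros((m, n))])         # Ax = b, z unconstrained here
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]),           #  x - z <= 0
                  np.hstack([-I, -I])])         # -x - z <= 0
b_ub = np.zeros(2 * n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n))
x = res.x[:n]
print(res.fun, x)  # minimal l1 norm is 1, attained by the sparse x = (0, 0, 1)
```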
2.2.4 The $\ell_\infty$ norm

The $\ell_\infty$ norm of a vector $x \in \mathbb{R}^n$,

$$\|x\|_\infty := \max\{|x_1|, |x_2|, \dots, |x_n|\},$$

is another example of a simple piecewise-linear function. Using Sec. 2.2.2 (Absolute value) we model $t \geq \|x\|_\infty$ as

$$-t \leq x_i \leq t, \quad i = 1, \dots, n.$$

Again, we can also consider affine functions of $x$, i.e., a constraint $t \geq \|Ax - b\|_\infty$, which can be described as

$$-t \leq (Ax - b)_i \leq t, \quad i = 1, \dots, m.$$
Example 2.2 (Dual norms). It is interesting to note that the $\ell_1$ and $\ell_\infty$ norms are dual to each other. For a norm $\|\cdot\|$ on $\mathbb{R}^n$ the dual norm is defined as

$$\|y\|_* := \max\{y^T x \mid \|x\| \leq 1\}.$$

Let us verify that the dual of the $\ell_\infty$ norm is the $\ell_1$ norm. Consider

$$\max\{y^T x \mid \|x\|_\infty \leq 1\}.$$

Obviously the maximum is attained for

$$x_i = \mathrm{sign}(y_i), \quad i = 1, \dots, n,$$

i.e., the optimal value is $\sum_i |y_i| = \|y\|_1$. To maximize $y^T x$ over $\|x\|_1 \leq 1$ we similarly take $x = e_k\,\mathrm{sign}(y_k)$, where $k$ is an index with $|y_k|$ maximal, so the optimal value is $\max_i |y_i| = \|y\|_\infty$; the dual of the $\ell_1$ norm is thus the $\ell_\infty$ norm.
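The two maximizers from the dual-norm argument can be checked numerically; the snippet below uses plain NumPy on random data (an illustration, not part of the text).

```python
# Numerical check of the dual-norm identities: the maximizer of y^T x over
# ||x||_inf <= 1 is x = sign(y), and over ||x||_1 <= 1 it is a signed unit
# vector placed at the largest |y_i|.
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(6)

x_inf = np.sign(y)                 # extreme point of the l_inf unit ball
val_inf = y @ x_inf                # equals ||y||_1

k = np.argmax(np.abs(y))
x_one = np.zeros_like(y)
x_one[k] = np.sign(y[k])           # extreme point of the l_1 unit ball
val_one = y @ x_one                # equals ||y||_inf

print(val_inf, np.abs(y).sum())
print(val_one, np.abs(y).max())
```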
2.2.5 Homogenization
Consider the linear-fractional problem

$$\text{minimize } \frac{a^T x + b}{c^T x + d} \quad \text{subject to } c^T x + d > 0,\ Fx = g. \qquad (2.8)$$

Perhaps surprisingly, it can be turned into a linear problem if we homogenize the linear constraint, i.e. replace it with

$$Fy = gz, \quad z \geq 0,$$

and normalize the denominator, giving the linear problem

$$\text{minimize } a^T y + bz \quad \text{subject to } c^T y + dz = 1,\ Fy = gz,\ z \geq 0. \qquad (2.9)$$

If $x$ is feasible for (2.8) then $z = (c^T x + d)^{-1} > 0$ and $y = zx$ are feasible for (2.9) with the same objective value; conversely, if $(y, z)$ is feasible for (2.9) with $z > 0$ then $x = y/z$ is feasible for (2.8) with the same objective value, while feasible points with $z = 0$ correspond to directions along which the fractional objective approaches a limiting value.
Note that, as the sketch of proof above suggests, the optimal value in (2.8) may not be attained, even though the one in the linear problem (2.9) always is. For example, one can construct a pair of problems as above in which the fractional objective only approaches its infimum as $\|x\| \to \infty$: both problems have the same optimal value, but in (2.8) it is not attained by any feasible point, while in (2.9) it is attained at a point with $z = 0$.
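A small constructed instance (an assumption for illustration, not the example from the text) makes the attainment issue concrete: minimizing $(x+2)/(x+1)$ over $x \geq 0$ has infimum $1$ as $x \to \infty$, which is never attained, while the homogenized LP attains $1$ at a point with $z = 0$.

```python
# Homogenization of a linear-fractional problem (a constructed example):
#   minimize (x + 2)/(x + 1)  subject to  x >= 0.
# The infimum is 1 (as x -> infinity) and is NOT attained.  Substituting
# z = 1/(x + 1), y = x*z gives the LP
#   minimize y + 2z  subject to  y + z = 1,  y >= 0,  z >= 0,
# whose optimum IS attained, at (y, z) = (1, 0), i.e. with z = 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])             # a^T y + b*z with a = 1, b = 2
A_eq = np.array([[1.0, 1.0]])        # c^T y + d*z = 1 with c = 1, d = 1
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
print(res.fun, res.x)  # 1.0 at (y, z) = (1, 0); z = 0 signals non-attainment
```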
2.2.6 Sum of largest elements
Suppose $x \in \mathbb{R}^n$ and that $m \leq n$ is a positive integer. The sum of the $m$ largest elements of $x$ can be computed as the optimal value of the linear problem

$$\text{minimize } mt + \sum_{i=1}^n u_i \quad \text{subject to } u_i \geq x_i - t,\ u_i \geq 0,\ i = 1, \dots, n, \qquad (2.10)$$

with new variables $t \in \mathbb{R}$, $u \in \mathbb{R}^n$.

If we assume for simplicity that the entries of $x$ are distinct, we can argue as follows. For fixed $t$ the minimum over $u$ is attained at $u_i = \max(0, x_i - t)$, so the objective becomes

$$f(t) = mt + \sum_{i=1}^n \max(0, x_i - t).$$

On an interval between consecutive entries of $x$ where exactly $k$ of the $x_i$ exceed $t$, the function $f$ has slope $m - k$; hence on each such interval it is a linear function minimized at one of the endpoints of the interval, and overall $f$ is minimized at $t^\star = x_{(m)}$, the $m$-th largest entry of $x$.

It follows that the optimal value of (2.10) is $m\,x_{(m)} + \sum_i \max(0, x_i - x_{(m)})$, which is exactly the sum of the $m$ largest entries of $x$. Since the assumption that the entries of $x$ are distinct can be removed by a continuity argument, the claim holds for all $x$.
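Problem (2.10) is easy to set up and compare against a direct sort; the data and the use of SciPy's `linprog` below are illustrative assumptions.

```python
# Compute the sum of the m largest entries of x via the LP (2.10):
#   minimize m*t + sum(u)  subject to  u_i >= x_i - t,  u_i >= 0.
# Variables are stacked as w = (t, u_1, ..., u_n).
import numpy as np
from scipy.optimize import linprog

x = np.array([3.0, -1.0, 7.0, 2.0, 5.0])   # example data (not from the text)
n, m = len(x), 2

c = np.concatenate([[m], np.ones(n)])
A_ub = np.hstack([-np.ones((n, 1)), -np.eye(n)])   # -t - u_i <= -x_i
b_ub = -x
bounds = [(None, None)] + [(0, None)] * n          # t free, u >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.fun)  # 12.0 = 7 + 5, the two largest entries
```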
2.3 Infeasibility in linear optimization
In this section we discuss the basic theory of primal infeasibility certificates for linear problems. These ideas will be developed further after we have introduced duality in the next section.
One of the first issues one faces when presented with an optimization problem is whether it has any solutions at all. As we discussed previously, for a linear optimization problem

$$\text{minimize } c^T x \quad \text{subject to } Ax = b,\ x \geq 0, \qquad (2.11)$$

the feasible set

$$\mathcal{F}_p = \{x \in \mathbb{R}^n \mid Ax = b,\ x \geq 0\}$$

is a convex polytope. We say the problem is feasible if $\mathcal{F}_p \neq \emptyset$ and infeasible otherwise.
Consider, as an example, an optimization problem whose constraints admit no common solution. Such a problem is infeasible, and we can demonstrate it by taking a linear combination of the constraints, with nonnegative coefficients on the inequalities, in which every variable ends up with a nonpositive coefficient on the left-hand side while the right-hand side is positive. This clearly proves infeasibility: for every $x \geq 0$ the left-hand side of the aggregated constraint is nonpositive and the right-hand side is positive, which is impossible.
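In practice a solver reports infeasibility directly; the toy system below (an assumption for illustration, not the example from the text) is obviously contradictory, since $x_1 + x_2 \geq 0$ for $x \geq 0$.

```python
# Detecting infeasibility with a solver: the system
#   x1 + x2 <= -1,  x1, x2 >= 0
# has no solution, since the left-hand side is nonnegative for every x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.zeros(2)                      # feasibility problem: objective irrelevant
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([-1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.status)  # status 2 means "problem is infeasible" in SciPy's convention
```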
2.3.1 Farkas’ lemma
In the last example we proved infeasibility of the linear system by exhibiting an explicit linear combination of the equations, such that the right-hand side (constant) is positive while on the left-hand side all coefficients are negative or zero. In matrix notation, such a linear combination is given by a vector $y$ satisfying $A^T y \leq 0$ and $b^T y > 0$. The next lemma shows that such a certificate exists for every infeasible problem of the form (2.11).
Lemma 2.1 (Farkas’ lemma). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a vector $b \in \mathbb{R}^m$, exactly one of the two statements is true:

(i) There exists $x \geq 0$ such that $Ax = b$.

(ii) There exists $y$ such that $A^T y \leq 0$ and $b^T y > 0$.
Let us verify at least that the two statements are mutually exclusive: if both $x$ and $y$ existed we would obtain the contradiction

$$0 < b^T y = (Ax)^T y = x^T (A^T y) \leq 0,$$

since $x \geq 0$ and $A^T y \leq 0$. The proof that one of the statements always holds is more involved and rests on a separating-hyperplane argument.
Farkas’ lemma implies that either the problem (2.11) is feasible, or there is a certificate of infeasibility $y$ satisfying $A^T y \leq 0$ and $b^T y > 0$.
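Checking a candidate certificate only takes two matrix-vector products; the inconsistent system below is a toy instance chosen for illustration.

```python
# Verifying a Farkas certificate for the (inconsistent) system
#   x1 + x2 = 1,  2*x1 + 2*x2 = 3,  x >= 0.
# The vector y below satisfies A^T y <= 0 and b^T y > 0, which certifies
# that no feasible x exists.
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([1.0, 3.0])
y = np.array([-2.0, 1.0])          # candidate certificate

print(A.T @ y, b @ y)              # A^T y = (0, 0) <= 0 and b^T y = 1 > 0
```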
2.3.2 Locating infeasibility
As we already discussed, the infeasibility certificate $y$ can also be used to locate the cause of infeasibility: the constraints that enter the certificate with a nonzero coefficient $y_i \neq 0$ form an infeasible subsystem on their own, so a sparse certificate pinpoints a small subset of mutually contradicting constraints.
As a cautionary note, however, the certificate need not single out one culprit constraint: there are infeasible systems in which the constraints are only contradictory when taken together. Any problem with such constraints is infeasible, but dropping any one of the inequalities creates a feasible subproblem.
2.4 Duality in linear optimization
Duality is a rich and powerful theory, central to understanding infeasibility and sensitivity issues in linear optimization. In this section we only discuss duality in linear optimization at a descriptive level suited for practitioners; we refer to Sec. 8 (Duality in conic optimization) for a more in-depth discussion of duality for general conic problems.
2.4.1 The dual problem
Primal problem
We consider as always a linear optimization problem in standard form:

$$\text{minimize } c^T x \quad \text{subject to } Ax = b,\ x \geq 0. \qquad (2.12)$$

We denote the optimal objective value in (2.12) by $p^\star$. There are three possibilities:

- The problem is infeasible. By convention $p^\star = +\infty$.
- $p^\star$ is finite, in which case the problem has an optimal solution.
- $p^\star = -\infty$, meaning that there are feasible solutions with $c^T x$ decreasing to $-\infty$, in which case we say the problem is unbounded.
Lagrange function
We associate with (2.12) a so-called Lagrangian function

$$L(x, y, s) = c^T x + y^T (b - Ax) - s^T x.$$

The variables $y \in \mathbb{R}^m$ and $s \in \mathbb{R}^n_+$ are called Lagrange multipliers or dual variables. For every feasible $x \in \mathcal{F}_p$ and every $(y, s)$ with $s \geq 0$ we have

$$L(x, y, s) = c^T x + y^T \cdot 0 - s^T x \leq c^T x.$$

Note that we used the nonnegativity of $x$ and $s$, i.e., $s^T x \geq 0$.
Dual problem
For every $y$ and $s \geq 0$ the value

$$g(y, s) := \min_x L(x, y, s) = b^T y + \min_x\,(c - A^T y - s)^T x = \begin{cases} b^T y & \text{if } c - A^T y - s = 0,\\ -\infty & \text{otherwise,}\end{cases}$$

is a lower bound for $p^\star$. The best such bound is the optimal value of the dual problem

$$\text{maximize } b^T y \quad \text{subject to } c - A^T y = s,\ s \geq 0, \qquad (2.13)$$

or equivalently $A^T y \leq c$. The optimal value of (2.13) will be denoted $d^\star$; as for the primal problem, it can be finite, $+\infty$ (dual unbounded) or $-\infty$ (dual infeasible, by convention).
As an example, let us derive the dual of the basis pursuit formulation (2.7). It would be possible to add auxiliary variables and constraints to force that problem into the standard form (2.12) and then just apply the dual transformation as a black box, but it is both easier and more instructive to directly write the Lagrangian. Writing (2.7) as

$$\text{minimize } e^T z \quad \text{subject to } Ax = b,\ x \leq z,\ -x \leq z,$$

where $e = (1, \dots, 1)^T$, we get the Lagrangian

$$L(x, z, y, u, v) = e^T z + y^T(b - Ax) + u^T(x - z) + v^T(-x - z) = (e - u - v)^T z + (u - v - A^T y)^T x + b^T y,$$

with multipliers $u, v \geq 0$. The Lagrangian is only bounded below (as a function of $x$ and $z$) if $u + v = e$ and $A^T y = u - v$, so the dual problem is

$$\text{maximize } b^T y \quad \text{subject to } u + v = e,\ A^T y = u - v,\ u, v \geq 0. \qquad (2.14)$$

It is not hard to observe that an equivalent formulation of (2.14) is simply

$$\text{maximize } b^T y \quad \text{subject to } \|A^T y\|_\infty \leq 1,$$

which should be associated with duality between norms discussed in Example 2.2.
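The primal-dual relationship for basis pursuit can be confirmed numerically by solving both problems on the same toy data (illustrative choices, with SciPy's `linprog` standing in for a generic LP solver).

```python
# Strong duality for basis pursuit: solve the primal
#   minimize ||x||_1 subject to Ax = b        (as an LP in (x, z))
# and the dual in the equivalent form
#   maximize b^T y subject to ||A^T y||_inf <= 1,
# then check that the optimal values agree.  Toy data for illustration.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 2.0]])
b = np.array([2.0])
m, n = A.shape
I = np.eye(n)

# Primal LP over w = (x, z): minimize sum(z), Ax = b, -z <= x <= z.
primal = linprog(np.concatenate([np.zeros(n), np.ones(n)]),
                 A_ub=np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])]),
                 b_ub=np.zeros(2 * n),
                 A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=b,
                 bounds=[(None, None)] * (2 * n))

# Dual LP: maximize b^T y, i.e. minimize -b^T y, with -1 <= (A^T y)_i <= 1.
dual = linprog(-b,
               A_ub=np.vstack([A.T, -A.T]), b_ub=np.ones(2 * n),
               bounds=[(None, None)] * m)

print(primal.fun, -dual.fun)  # equal optimal values: p* = d*
```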
We can similarly derive the dual of problem (2.13). If we write it simply as

$$\text{maximize } b^T y \quad \text{subject to } c - A^T y \geq 0,$$

then the Lagrangian is

$$L(y, x) = b^T y + x^T(c - A^T y) = c^T x + y^T(b - Ax)$$

with $x \geq 0$, so that $L(y, x) \geq b^T y$ for every feasible $y$. Maximizing over $y$ gives $+\infty$ unless $Ax = b$, and the best upper bound on $d^\star$ is therefore the optimal value of

$$\text{minimize } c^T x \quad \text{subject to } Ax = b,\ x \geq 0,$$

so, as expected, the dual of the dual recovers the original primal problem.
2.4.2 Weak and strong duality
Suppose $x$ is feasible for (2.12) and $(y, s)$ is feasible for (2.13). Then

$$b^T y = (Ax)^T y = x^T(A^T y) = x^T(c - s) = c^T x - s^T x \leq c^T x,$$

so the dual objective value is a lower bound on the objective value of the primal. In particular, any dual feasible point $(y^\star, s^\star)$ gives the lower bound $b^T y^\star \leq p^\star$, and we immediately get the next lemma.

Lemma 2.2 (Weak duality). $d^\star \leq p^\star$.

It follows that if a primal feasible $x$ and a dual feasible $(y, s)$ satisfy $b^T y = c^T x$, then both points are optimal, since neither objective value can be improved further.
The remarkable property of linear optimization is that the bound is tight in all nontrivial cases.

Theorem 2.1 (Strong duality). If at least one of $p^\star$, $d^\star$ is finite then $p^\star = d^\star$.
Suppose $x^\star$ is an optimal primal solution and $(y^\star, s^\star)$ is an optimal dual solution. Repeating the weak duality computation for this pair gives

$$b^T y^\star = c^T x^\star - (s^\star)^T x^\star \leq c^T x^\star.$$

Optimality of both solutions means, by strong duality, that $b^T y^\star = c^T x^\star$, so the inequality holds with equality and $(x^\star)^T s^\star = 0$.

If, conversely, a primal feasible $x^\star$ and a dual feasible $(y^\star, s^\star)$ satisfy $(x^\star)^T s^\star = 0$, their objective values coincide and both points are optimal by weak duality. The first inequality above, together with $x^\star \geq 0$ and $s^\star \geq 0$, implies that $(x^\star)^T s^\star = 0$ is equivalent to the complementary slackness conditions

$$x_j^\star s_j^\star = 0, \quad j = 1, \dots, n.$$
We can exploit strong duality to freely choose between solving the primal or dual version of any linear problem.
Suppose that $m \leq n$ is a positive integer and $x \in \mathbb{R}^n$ is given data, and consider the problem

$$\text{maximize } x^T y \quad \text{subject to } \sum_{i=1}^n y_i = m,\ 0 \leq y \leq 1,$$

in the variable $y$. The maximum is attained when $y$ is the indicator vector of the $m$ largest entries of $x$, so the optimal value is the sum of the $m$ largest elements of $x$. The Lagrangian of this problem,

$$L(y, t, u) = x^T y + t\Big(m - \sum_{i=1}^n y_i\Big) + u^T(e - y) = mt + e^T u + (x - te - u)^T y,$$

with $t \in \mathbb{R}$ and $u \geq 0$, yields a finite upper bound over $y \geq 0$ only if $u \geq x - te$, so the dual problem is

$$\text{minimize } mt + \sum_{i=1}^n u_i \quad \text{subject to } u_i \geq x_i - t,\ u_i \geq 0,\ i = 1, \dots, n,$$

which is exactly the problem (2.10) we studied in Sec. 2.2.6 (Sum of largest elements). Strong duality now implies that (2.10) computes the sum of the $m$ largest entries of $x$.
2.4.3 Duality and infeasibility: summary
We can now expand the discussion of infeasibility certificates in the context of duality. Farkas’ lemma (Lemma 2.1) can be dualized, and the two versions can be summarized as follows: the primal problem (2.12) is infeasible if and only if there is a certificate $y$ with $A^T y \leq 0$ and $b^T y > 0$, while the dual problem (2.13) is infeasible if and only if there is a certificate $\hat{x}$ with $A\hat{x} = 0$, $\hat{x} \geq 0$ and $c^T \hat{x} < 0$.
Weak and strong duality for linear optimization now lead to the following conclusions:
- If the problem is primal feasible and has finite objective value ($-\infty < p^\star < \infty$) then so is the dual and $d^\star = p^\star$. We sometimes refer to this case as primal and dual feasible. The dual solution certifies the optimality of the primal solution and vice versa.

- If the primal problem is feasible but unbounded ($p^\star = -\infty$) then the dual is infeasible ($d^\star = -\infty$). A certificate of this fact is a vector $\hat{x}$ with $A\hat{x} = 0$, $\hat{x} \geq 0$ and $c^T \hat{x} < 0$. In fact it is easy to give this statement a geometric interpretation. If $x_0$ is any primal feasible point then the infinite ray

$$x(\beta) = x_0 + \beta\hat{x}, \quad \beta \in [0, \infty),$$

belongs to the feasible set $\mathcal{F}_p$ because $Ax(\beta) = b$ and $x(\beta) \geq 0$. Along this ray the objective value is unbounded below:

$$c^T x(\beta) = c^T x_0 + \beta\, c^T \hat{x} \rightarrow -\infty.$$

- If the primal problem is infeasible ($p^\star = \infty$) then a certificate of this fact is a vector $y$ with $A^T y \leq 0$ and $b^T y > 0$. The dual problem may be unbounded ($d^\star = \infty$) or infeasible ($d^\star = -\infty$).
Weak and strong duality imply that the only case when $p^\star \neq d^\star$ is when both the primal and the dual problem are infeasible, i.e., $p^\star = +\infty$ and $d^\star = -\infty$.
2.4.4 Dual values as shadow prices
Dual values are related to shadow prices, as they measure, under some nondegeneracy assumption, the sensitivity of the objective value to a change in the constraint. Consider again the primal and dual problem pair (2.12) and (2.13) with feasible sets $\mathcal{F}_p$ and $\mathcal{F}_d$.
Suppose we change one of the values in $b$, say $b_i$, to $b_i + \Delta$ for a sufficiently small $\Delta$. The dual feasible set does not depend on $b$, so the dual optimal solution $(y^\star, s^\star)$ remains feasible, and under the nondegeneracy assumption it remains optimal. The dual objective then changes by

$$\Delta \cdot y_i^\star,$$

and by strong duality the primal objective changes by the same amount.
An optimization student wants to save money on the diet while remaining healthy. A healthy diet requires certain minimal daily amounts $P$, $C$, $F$ and $V$ of protein, carbohydrates, fats and vitamins, respectively. The student can choose between three types of food (takeaway, vegetables and bread), whose contents per unit and prices are given in the following table:
|            | P   | C | F | V | price |
|------------|-----|---|---|---|-------|
| takeaway   | 3   | 3 | 2 | 1 | 5     |
| vegetables | 1   | 2 | 0 | 4 | 1     |
| bread      | 0.5 | 4 | 1 | 0 | 2     |
The problem of minimizing cost while meeting dietary requirements is

$$\begin{array}{ll}\text{minimize} & 5x_1 + x_2 + 2x_3\\ \text{subject to} & 3x_1 + x_2 + 0.5x_3 \geq P,\\ & 3x_1 + 2x_2 + 4x_3 \geq C,\\ & 2x_1 + x_3 \geq F,\\ & x_1 + 4x_2 \geq V,\\ & x_1, x_2, x_3 \geq 0,\end{array}$$

where $x_1$, $x_2$, $x_3$ denote the purchased amounts of takeaway, vegetables and bread, respectively. For a concrete choice of the requirements $(P, C, F, V)$, solving the problem yields an optimal diet $x^\star$ with some optimal cost, together with an optimal dual solution $y^\star$. The dual values act as shadow prices of the nutrients: improving the required intake of, say, protein by a small amount $\Delta$ increases the minimal cost by $y_P^\star \Delta$, as long as $\Delta$ is small enough that the optimal basis does not change. In the same way the dual solution predicts how the cost responds if the student's requirements change slightly in some month, without re-solving the problem.
We stress that a truly balanced diet problem should also include upper bounds.
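The shadow-price interpretation can be demonstrated on the diet data above. The minimal requirements $(P, C, F, V) = (6, 15, 5, 7)$ below are hypothetical values chosen for this sketch (the text leaves them unspecified here), and the shadow price of protein is estimated by re-solving with a slightly increased protein requirement.

```python
# The diet LP from the table above, with HYPOTHETICAL minimal requirements
# (P, C, F, V) = (6, 15, 5, 7) -- assumed values for the sake of the example,
# not taken from the text.  The shadow price of protein is estimated by
# re-solving with the protein requirement slightly increased.
import numpy as np
from scipy.optimize import linprog

price = np.array([5.0, 1.0, 2.0])        # takeaway, vegetables, bread
N = np.array([[3.0, 1.0, 0.5],           # protein per unit of each food
              [3.0, 2.0, 4.0],           # carbohydrates
              [2.0, 0.0, 1.0],           # fats
              [1.0, 4.0, 0.0]])          # vitamins
req = np.array([6.0, 15.0, 5.0, 7.0])    # hypothetical requirements

def diet_cost(req):
    # N x >= req  is passed to linprog as  -N x <= -req.
    res = linprog(price, A_ub=-N, b_ub=-req, bounds=[(0, None)] * 3)
    return res.fun

base = diet_cost(req)
bumped = diet_cost(req + np.array([0.5, 0.0, 0.0, 0.0]))
shadow_protein = (bumped - base) / 0.5   # finite-difference shadow price
print(base, shadow_protein)              # cost 12.5; protein price 3/7
```

Re-solving after a perturbation, as done here, sidesteps any assumptions about how a particular solver reports dual values, at the cost of a second solve.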