# 15.3 Sensitivity Analysis

Given an optimization problem it is often useful to obtain information about how the optimal objective value changes when the problem parameters are perturbed. For example, assume that a bound represents the capacity of a machine. It may be possible to expand this capacity at a certain cost, and it is then worthwhile to know what the value of additional capacity is. This is precisely the type of question sensitivity analysis deals with.

Analyzing how the optimal objective value changes when the problem data is changed is called *sensitivity analysis*.

References

The book [Chv83] discusses the classical sensitivity analysis in Chapter 10 whereas the book [RTV97] presents a modern introduction to sensitivity analysis. Finally, it is recommended to read the short paper [Wal00] to avoid some of the pitfalls associated with sensitivity analysis.

Warning

Currently, sensitivity analysis is only available for continuous linear optimization problems. Moreover, **MOSEK** can only deal with perturbations of bounds and objective function coefficients.

## 15.3.1 Sensitivity Analysis for Linear Problems

### 15.3.1.1 The Optimal Objective Value Function

Assume that we are given the problem

\[
\begin{array}{lccccc}
\mbox{minimize}   &     &      & c^T x &      &     \\
\mbox{subject to} & l^c & \leq & Ax    & \leq & u^c, \\
                  & l^x & \leq & x     & \leq & u^x \\
\end{array} \tag{1}
\]

and we want to know how the optimal objective value changes as \(l^c_i\) is perturbed. To answer this question we define the perturbed problem for \(l^c_i\) as follows:

\[
\begin{array}{lccccc}
f_{l_i^c}(\beta) = \mbox{minimize} &                 &      & c^T x &      &     \\
\mbox{subject to}                  & l^c + \beta e_i & \leq & Ax    & \leq & u^c, \\
                                   & l^x             & \leq & x     & \leq & u^x, \\
\end{array}
\]

where \(e_i\) is the \(i\)-th column of the identity matrix. The function

\[
f_{l_i^c}(\beta) \tag{2}
\]

shows the optimal objective value as a function of \(\beta\). Please note that a change in \(\beta\) corresponds to a perturbation in \(l_i^c\) and hence (2) shows the optimal objective value as a function of varying \(l^c_i\) with the other bounds fixed.

It is possible to prove that the function (2) is a piecewise linear and convex function, i.e. its graph may look as in Fig. 3 and Fig. 4.

Clearly, if the function \(f_{l^c_i}(\beta )\) does not change much when \(\beta\) is changed, then we can conclude that the optimal objective value is insensitive to changes in \(l_i^c\). Therefore, we are interested in the rate of change in \(f_{l^c_i}(\beta)\) for small changes in \(\beta\), specifically the gradient

\[
f^\prime_{l^c_i}(0),
\]

which is called the *shadow price* related to \(l^c_i\). The shadow price specifies how the objective value changes for small changes of \(\beta\) around zero. Moreover, we are interested in the *linearity interval*

\[
\beta \in [\beta_1, \beta_2]
\]

for which

\[
f^\prime_{l^c_i}(\beta) = f^\prime_{l^c_i}(0).
\]

Since \(f_{l^c_i}\) is not a smooth function, \(f^\prime_{l^c_i}\) may not be defined at \(0\), as illustrated in Fig. 4. In this case we can define a left and a right shadow price and a left and a right linearity interval.
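
To see what such a breakpoint looks like numerically, the following sketch tabulates the optimal value function of a tiny, made-up LP by simply re-solving it for a few values of \(\beta\). This brute-force approach is purely illustrative (it is not how **MOSEK** computes sensitivity data) and assumes `scipy` is available:

```
from scipy.optimize import linprog

# Tiny LP: minimize    x1 + 2*x2
#          subject to  x1 + x2 >= 1 + beta,   0 <= x1 <= 1,   x2 >= 0.
#
# Its optimal value function is f(beta) = 1 + beta for beta <= 0 and
# f(beta) = 1 + 2*beta for beta >= 0: piecewise linear and convex, with
# a breakpoint at beta = 0 where the left shadow price is 1 and the
# right shadow price is 2.
c = [1.0, 2.0]
A_ub = [[-1.0, -1.0]]              # -x1 - x2 <= -(1 + beta)

for beta in [-0.5, -0.25, 0.0, 0.25, 0.5]:
    res = linprog(c, A_ub=A_ub, b_ub=[-(1.0 + beta)],
                  bounds=[(0.0, 1.0), (0.0, None)])
    print('beta = %5.2f   f(beta) = %6.3f' % (beta, res.fun))
```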

The function \(f_{l^c_i}\) considers only changes in \(l^c_i\). We can define similar functions for the remaining parameters of the problem (1) as well:

\[
f_{u_i^c}(\beta), \quad f_{l_j^x}(\beta), \quad f_{u_j^x}(\beta) \quad \mbox{and} \quad f_{c_j}(\beta).
\]

Given these definitions it should be clear how linearity intervals and shadow prices are defined for the parameters \(u^c_i\) etc.

#### 15.3.1.1.1 Equality Constraints

In **MOSEK** a constraint can be specified as either an equality constraint or a ranged constraint. If some constraint \(e_i^c\) is an equality constraint, we define the optimal value function for this constraint as

\[
\begin{array}{lccccc}
f_{e_i^c}(\beta) = \mbox{minimize} &                 &      & c^T x &      &     \\
\mbox{subject to}                  & l^c + \beta e_i & \leq & Ax    & \leq & u^c + \beta e_i, \\
                                   & l^x             & \leq & x     & \leq & u^x. \\
\end{array}
\]

Thus for an equality constraint the upper and the lower bounds (which are equal) are perturbed simultaneously. Therefore, **MOSEK** will handle sensitivity analysis differently for a ranged constraint with \(l^c_i = u^c_i\) and for an equality constraint.
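
The distinction shows up in how a constraint is entered through the API. The following minimal sketch (using the same Python API as Listing 34 below, and showing only the bound specification) declares the same numerical restriction once with each bound key:

```
import mosek

with mosek.Env() as env, env.Task(0, 0) as task:
    task.appendcons(2)
    # Equality constraint: sensitivity analysis perturbs both (identical)
    # bounds simultaneously, i.e. it analyzes f_{e_i^c}.
    task.putconbound(0, mosek.boundkey.fx, 500.0, 500.0)
    # Ranged constraint with l^c = u^c: the lower and the upper bound are
    # analyzed as two separate parameters.
    task.putconbound(1, mosek.boundkey.ra, 500.0, 500.0)
```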

### 15.3.1.2 The Basis Type Sensitivity Analysis

The classical sensitivity analysis discussed in most textbooks about linear optimization, e.g. [Chv83], is based on an optimal basic solution or, equivalently, on an optimal basis. This method may produce misleading results [RTV97] but is **computationally cheap**. Therefore, and for historical reasons, this method is available in **MOSEK**.

We will now briefly discuss the basis type sensitivity analysis. Given an optimal basic solution, which provides a partition of variables into basic and non-basic variables, the basis type sensitivity analysis computes the linearity interval \([\beta_1,\beta_2]\) such that the basis remains optimal for the perturbed problem. A shadow price associated with the linearity interval is also computed. However, it is well known that an optimal basic solution may not be unique, and therefore the result depends on the optimal basic solution employed in the sensitivity analysis. This implies that the computed interval may be only a subset of the largest interval for which the shadow price is constant. Furthermore, the optimal objective value function might have a breakpoint at \(\beta = 0\). In this case the basis type sensitivity method will only provide a subset of either the left or the right linearity interval.

In summary, the basis type sensitivity analysis is computationally cheap but does not provide complete information. Hence, the results of the basis type sensitivity analysis should be used with care.

### 15.3.1.3 The Optimal Partition Type Sensitivity Analysis

Another method for computing the complete linearity interval is called the *optimal partition type sensitivity analysis*. The main drawback of the optimal partition type sensitivity analysis is that it is computationally expensive compared to the basis type analysis. This type of sensitivity analysis is currently provided as an experimental feature in **MOSEK**.
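
In the **MOSEK** versions that expose this experimental feature, the analysis type is selected with an integer parameter before the sensitivity functions are called. The parameter and enum names below (`iparam.sensitivity_type`, `sensitivitytype.optimal_partition`) are taken from older releases and should be treated as an assumption; check the documentation of the version you are using:

```
import mosek

with mosek.Env() as env, env.Task(0, 0) as task:
    # Assumed parameter name from older MOSEK releases; newer versions
    # may not expose the optimal partition type analysis.
    task.putintparam(mosek.iparam.sensitivity_type,
                     mosek.sensitivitytype.optimal_partition)
```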

Given the optimal primal and dual solutions to (1), i.e. \(x^*\) and \(((s_l^c)^*,(s_u^c)^*,(s_l^x)^*,(s_u^x)^*)\), the optimal objective value is given by

\[
z^* = c^T x^*.
\]

The left and right shadow prices \(\sigma_1\) and \(\sigma_2\) for \(l_i^c\) are given by the following pair of optimization problems:

\[
\begin{array}{ll}
\sigma_1 = \mbox{minimize} & e_i^T s_l^c \\
\mbox{subject to}          & A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c, \\
                           & (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x = z^*, \\
                           & s_l^c, s_u^c, s_l^x, s_u^x \geq 0
\end{array}
\]

and

\[
\begin{array}{ll}
\sigma_2 = \mbox{maximize} & e_i^T s_l^c \\
\mbox{subject to}          & A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c, \\
                           & (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x = z^*, \\
                           & s_l^c, s_u^c, s_l^x, s_u^x \geq 0.
\end{array}
\]

These two optimization problems make it easy to interpret the shadow price. Indeed, if \(((s_l^c)^*,(s_u^c)^*,(s_l^x)^*,(s_u^x)^*)\) is an arbitrary optimal dual solution, then

\[
(s_l^c)_i^* \in [\sigma_1, \sigma_2].
\]

Next, the linearity interval \([\beta_1,\beta_2]\) for \(l_i^c\) is computed by solving the two optimization problems

\[
\begin{array}{lccccc}
\beta_1 = \mbox{minimize} &                 &      & \beta                   &      &     \\
\mbox{subject to}         & l^c + \beta e_i & \leq & Ax                      & \leq & u^c, \\
                          &                 &      & c^T x - \sigma_1 \beta  & =    & z^*, \\
                          & l^x             & \leq & x                       & \leq & u^x
\end{array}
\]

and

\[
\begin{array}{lccccc}
\beta_2 = \mbox{maximize} &                 &      & \beta                   &      &     \\
\mbox{subject to}         & l^c + \beta e_i & \leq & Ax                      & \leq & u^c, \\
                          &                 &      & c^T x - \sigma_2 \beta  & =    & z^*, \\
                          & l^x             & \leq & x                       & \leq & u^x.
\end{array}
\]

The linearity intervals and shadow prices for \(u_i^c,\) \(l_j^x,\) and \(u_j^x\) are computed similarly to \(l_i^c\).

The left and right shadow prices for \(c_j\), denoted \(\sigma_1\) and \(\sigma_2\) respectively, are computed as follows:

\[
\begin{array}{lccccc}
\sigma_1 = \mbox{minimize} &     &      & x_j   &      &     \\
\mbox{subject to}          & l^c & \leq & Ax    & \leq & u^c, \\
                           &     &      & c^T x & =    & z^*, \\
                           & l^x & \leq & x     & \leq & u^x
\end{array}
\]

and

\[
\begin{array}{lccccc}
\sigma_2 = \mbox{maximize} &     &      & x_j   &      &     \\
\mbox{subject to}          & l^c & \leq & Ax    & \leq & u^c, \\
                           &     &      & c^T x & =    & z^*, \\
                           & l^x & \leq & x     & \leq & u^x.
\end{array}
\]

Once again the above two optimization problems make it easy to interpret the shadow prices. Indeed, if \(x^*\) is an arbitrary primal optimal solution, then

\[
x_j^* \in [\sigma_1, \sigma_2].
\]
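
The following sketch mimics these two auxiliary problems by brute force on a small, made-up LP: it first computes \(z^*\), then minimizes and maximizes \(x_j\) over the optimal set by adding the constraint \(c^T x = z^*\). It uses `scipy` rather than **MOSEK** and is only meant to illustrate the definitions:

```
from scipy.optimize import linprog

# Made-up LP with a non-unique optimum: every x with x1 + x2 = 1, x >= 0
# is optimal, so x_1^* is not unique and sigma_1 < sigma_2 for c_1.
c = [1.0, 1.0]
A_ub = [[-1.0, -1.0]]              # x1 + x2 >= 1
b_ub = [-1.0]
bounds = [(0.0, None), (0.0, None)]

zstar = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

# Restrict to the optimal set by adding c^T x = z*, then min/max x_1.
A_eq, b_eq = [c], [zstar]
sigma1 = linprog([1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                 A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
sigma2 = -linprog([-1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print('left/right shadow prices for c_1: %g, %g' % (sigma1, sigma2))
# Expected: 0 and 1.
```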

The linearity interval \([\beta_1,\beta_2]\) for a \(c_j\) is computed as follows:

\[
\begin{array}{ll}
\beta_1 = \mbox{minimize} & \beta \\
\mbox{subject to}         & A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c + \beta e_j, \\
                          & (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x - \sigma_1 \beta = z^*, \\
                          & s_l^c, s_u^c, s_l^x, s_u^x \geq 0
\end{array}
\]

and

\[
\begin{array}{ll}
\beta_2 = \mbox{maximize} & \beta \\
\mbox{subject to}         & A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c + \beta e_j, \\
                          & (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x - \sigma_2 \beta = z^*, \\
                          & s_l^c, s_u^c, s_l^x, s_u^x \geq 0.
\end{array}
\]

### 15.3.1.4 Example: Sensitivity Analysis

As an example we will use the following transportation problem. Consider the problem of minimizing the transportation cost between a number of production plants and stores. Each plant supplies a number of goods and each store has a given demand that must be met. Supply, demand and cost of transportation per unit are shown in Fig. 5.

If we denote the number of goods transported from location \(i\) to location \(j\) by \(x_{ij}\), the problem can be formulated as the linear optimization problem of minimizing

\[
1 x_{11} + 2 x_{12} + 5 x_{23} + 2 x_{24} + 1 x_{31} + 2 x_{33} + 1 x_{34}
\]

subject to

\[
\begin{array}{rcl}
x_{11} + x_{12}          & \leq & 400, \\
x_{23} + x_{24}          & \leq & 1200, \\
x_{31} + x_{33} + x_{34} & \leq & 1000, \\
x_{11} + x_{31}          & =    & 800, \\
x_{12}                   & =    & 100, \\
x_{23} + x_{33}          & =    & 500, \\
x_{24} + x_{34}          & =    & 500, \\
x_{11}, x_{12}, x_{23}, x_{24}, x_{31}, x_{33}, x_{34} & \geq & 0.
\end{array}
\]

The sensitivity parameters are shown in Table 18 and Table 19 for the basis type analysis and in Table 20 and Table 21 for the optimal partition type analysis.

Table 18: Ranges and shadow prices related to the bounds on constraints and variables (basis type analysis).

| Con. | \(\beta_1\) | \(\beta_2\) | \(\sigma_1\) | \(\sigma_2\) |
|---|---|---|---|---|
| \(1\) | \(-300.00\) | \(0.00\) | \(3.00\) | \(3.00\) |
| \(2\) | \(-700.00\) | \(+\infty\) | \(0.00\) | \(0.00\) |
| \(3\) | \(-500.00\) | \(0.00\) | \(3.00\) | \(3.00\) |
| \(4\) | \(-0.00\) | \(500.00\) | \(4.00\) | \(4.00\) |
| \(5\) | \(-0.00\) | \(300.00\) | \(5.00\) | \(5.00\) |
| \(6\) | \(-0.00\) | \(700.00\) | \(5.00\) | \(5.00\) |
| \(7\) | \(-500.00\) | \(700.00\) | \(2.00\) | \(2.00\) |

| Var. | \(\beta_1\) | \(\beta_2\) | \(\sigma_1\) | \(\sigma_2\) |
|---|---|---|---|---|
| \(x_{11}\) | \(-\infty\) | \(300.00\) | \(0.00\) | \(0.00\) |
| \(x_{12}\) | \(-\infty\) | \(100.00\) | \(0.00\) | \(0.00\) |
| \(x_{23}\) | \(-\infty\) | \(0.00\) | \(0.00\) | \(0.00\) |
| \(x_{24}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(0.00\) |
| \(x_{31}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(0.00\) |
| \(x_{33}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(0.00\) |
| \(x_{34}\) | \(-0.00\) | \(500.00\) | \(2.00\) | \(2.00\) |

Table 20: Ranges and shadow prices related to the bounds on constraints and variables (optimal partition type analysis).

| Con. | \(\beta_1\) | \(\beta_2\) | \(\sigma_1\) | \(\sigma_2\) |
|---|---|---|---|---|
| \(1\) | \(-300.00\) | \(500.00\) | \(3.00\) | \(1.00\) |
| \(2\) | \(-700.00\) | \(+\infty\) | \(-0.00\) | \(-0.00\) |
| \(3\) | \(-500.00\) | \(500.00\) | \(3.00\) | \(1.00\) |
| \(4\) | \(-500.00\) | \(500.00\) | \(2.00\) | \(4.00\) |
| \(5\) | \(-100.00\) | \(300.00\) | \(3.00\) | \(5.00\) |
| \(6\) | \(-500.00\) | \(700.00\) | \(3.00\) | \(5.00\) |
| \(7\) | \(-500.00\) | \(700.00\) | \(2.00\) | \(2.00\) |

| Var. | \(\beta_1\) | \(\beta_2\) | \(\sigma_1\) | \(\sigma_2\) |
|---|---|---|---|---|
| \(x_{11}\) | \(-\infty\) | \(300.00\) | \(0.00\) | \(0.00\) |
| \(x_{12}\) | \(-\infty\) | \(100.00\) | \(0.00\) | \(0.00\) |
| \(x_{23}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(2.00\) |
| \(x_{24}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(0.00\) |
| \(x_{31}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(0.00\) |
| \(x_{33}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(0.00\) |
| \(x_{34}\) | \(-\infty\) | \(500.00\) | \(0.00\) | \(2.00\) |

Table 19: Ranges and shadow prices related to the objective coefficients (basis type analysis).

| Coef. | \(\beta_1\) | \(\beta_2\) | \(\sigma_1\) | \(\sigma_2\) |
|---|---|---|---|---|
| \(c_1\) | \(-\infty\) | \(3.00\) | \(300.00\) | \(300.00\) |
| \(c_2\) | \(-\infty\) | \(\infty\) | \(100.00\) | \(100.00\) |
| \(c_3\) | \(-2.00\) | \(\infty\) | \(0.00\) | \(0.00\) |
| \(c_4\) | \(-\infty\) | \(2.00\) | \(500.00\) | \(500.00\) |
| \(c_5\) | \(-3.00\) | \(\infty\) | \(500.00\) | \(500.00\) |
| \(c_6\) | \(-\infty\) | \(2.00\) | \(500.00\) | \(500.00\) |
| \(c_7\) | \(-2.00\) | \(\infty\) | \(0.00\) | \(0.00\) |

Table 21: Ranges and shadow prices related to the objective coefficients (optimal partition type analysis).

| Coef. | \(\beta_1\) | \(\beta_2\) | \(\sigma_1\) | \(\sigma_2\) |
|---|---|---|---|---|
| \(c_1\) | \(-\infty\) | \(3.00\) | \(300.00\) | \(300.00\) |
| \(c_2\) | \(-\infty\) | \(\infty\) | \(100.00\) | \(100.00\) |
| \(c_3\) | \(-2.00\) | \(\infty\) | \(0.00\) | \(0.00\) |
| \(c_4\) | \(-\infty\) | \(2.00\) | \(500.00\) | \(500.00\) |
| \(c_5\) | \(-3.00\) | \(\infty\) | \(500.00\) | \(500.00\) |
| \(c_6\) | \(-\infty\) | \(2.00\) | \(500.00\) | \(500.00\) |
| \(c_7\) | \(-2.00\) | \(\infty\) | \(0.00\) | \(0.00\) |

Examining the results from the optimal partition type sensitivity analysis we see that for constraint number \(1\) we have \(\sigma_1 = 3,\ \sigma_2=1\) and \(\beta_1 = -300,\ \beta_2=500\). Therefore, we have a left linearity interval of \([-300,0]\) and a right interval of \([0,500]\). The corresponding left and right shadow prices are \(3\) and \(1\) respectively. This implies that if the upper bound on constraint \(1\) increases by

\[
\beta \in [0, \beta_2] = [0, 500]
\]

then the optimal objective value will decrease by the value

\[
\sigma_2 \beta = 1 \beta.
\]

Correspondingly, if the upper bound on constraint \(1\) is decreased by

\[
\beta \in [0, 300]
\]

then the optimal objective value will increase by the value

\[
\sigma_1 \beta = 3 \beta.
\]
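
This prediction can be checked by brute force: re-solve the transportation problem with a perturbed bound on constraint \(1\) and compare optimal objective values. The sketch below uses `scipy` instead of **MOSEK**, with the LP data of the transportation problem above:

```
from scipy.optimize import linprog

# Variables ordered as x11, x12, x23, x24, x31, x33, x34.
c = [1.0, 2.0, 5.0, 2.0, 1.0, 2.0, 1.0]
A_ub = [[1, 1, 0, 0, 0, 0, 0],    # constraint 1: x11 + x12 <= 400 + beta
        [0, 0, 1, 1, 0, 0, 0],    # constraint 2: x23 + x24 <= 1200
        [0, 0, 0, 0, 1, 1, 1]]    # constraint 3: x31 + x33 + x34 <= 1000
A_eq = [[1, 0, 0, 0, 1, 0, 0],    # constraint 4: x11 + x31 = 800
        [0, 1, 0, 0, 0, 0, 0],    # constraint 5: x12 = 100
        [0, 0, 1, 0, 0, 1, 0],    # constraint 6: x23 + x33 = 500
        [0, 0, 0, 1, 0, 0, 1]]    # constraint 7: x24 + x34 = 500
b_eq = [800, 100, 500, 500]

def z(beta):
    """Optimal cost with the bound on constraint 1 set to 400 + beta."""
    return linprog(c, A_ub=A_ub, b_ub=[400 + beta, 1200, 1000],
                   A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7).fun

z0 = z(0.0)   # for this data z(0) = 3000
print('increase by 100: change = %g (sigma_2 predicts -1 * 100)' % (z(100.0) - z0))
print('decrease by 100: change = %g (sigma_1 predicts +3 * 100)' % (z(-100.0) - z0))
```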

## 15.3.2 Sensitivity Analysis with **MOSEK**

**MOSEK** provides the functions `Task.primalsensitivity` and `Task.dualsensitivity` for performing sensitivity analysis. The code in Listing 34 gives an example of their use.

```
import sys
import mosek

# Since the actual value of Infinity is ignored, we define it solely
# for symbolic purposes:
inf = 0.0

# Define a stream printer to grab output from MOSEK
def streamprinter(text):
    sys.stdout.write(text)
    sys.stdout.flush()

def main():
    # Create a MOSEK environment
    with mosek.Env() as env:
        # Attach a printer to the environment
        env.set_Stream(mosek.streamtype.log, streamprinter)

        # Create a task
        with env.Task(0, 0) as task:
            # Attach a printer to the task
            task.set_Stream(mosek.streamtype.log, streamprinter)

            # Set up data
            bkc = [mosek.boundkey.up, mosek.boundkey.up,
                   mosek.boundkey.up, mosek.boundkey.fx,
                   mosek.boundkey.fx, mosek.boundkey.fx,
                   mosek.boundkey.fx]
            blc = [-inf, -inf, -inf, 800., 100., 500., 500.]
            buc = [400., 1200., 1000., 800., 100., 500., 500.]

            bkx = [mosek.boundkey.lo, mosek.boundkey.lo,
                   mosek.boundkey.lo, mosek.boundkey.lo,
                   mosek.boundkey.lo, mosek.boundkey.lo,
                   mosek.boundkey.lo]
            c = [1.0, 2.0, 5.0, 2.0, 1.0, 2.0, 1.0]
            blx = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
            bux = [inf, inf, inf, inf, inf, inf, inf]

            # Constraint matrix in column-wise sparse format
            ptrb = [0, 2, 4, 6, 8, 10, 12]
            ptre = [2, 4, 6, 8, 10, 12, 14]
            sub = [0, 3, 0, 4, 1, 5, 1, 6, 2, 3, 2, 5, 2, 6]
            val = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
                   1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

            numcon = len(bkc)
            numvar = len(bkx)
            numanz = len(val)

            # Input linear data
            task.inputdata(numcon, numvar,
                           c, 0.0,
                           ptrb, ptre, sub, val,
                           bkc, blc, buc,
                           bkx, blx, bux)
            # Set objective sense
            task.putobjsense(mosek.objsense.minimize)
            # Optimize
            task.optimize()

            # Analyze the upper bound of constraint 1 and the equality
            # constraint 4
            subi = [0, 3]
            marki = [mosek.mark.up, mosek.mark.up]
            # Analyze the lower bounds of the variables x12 and x31
            subj = [1, 4]
            markj = [mosek.mark.lo, mosek.mark.lo]

            leftpricei = [0., 0.]
            rightpricei = [0., 0.]
            leftrangei = [0., 0.]
            rightrangei = [0., 0.]
            leftpricej = [0., 0.]
            rightpricej = [0., 0.]
            leftrangej = [0., 0.]
            rightrangej = [0., 0.]

            task.primalsensitivity(subi,
                                   marki,
                                   subj,
                                   markj,
                                   leftpricei,
                                   rightpricei,
                                   leftrangei,
                                   rightrangei,
                                   leftpricej,
                                   rightpricej,
                                   leftrangej,
                                   rightrangej)

            print('Results from sensitivity analysis on bounds:')
            print('\tleftprice | rightprice | leftrange | rightrange ')
            print('For constraints:')
            for i in range(2):
                print('\t%10f %10f %10f %10f' % (leftpricei[i],
                                                 rightpricei[i],
                                                 leftrangei[i],
                                                 rightrangei[i]))
            print('For variables:')
            for i in range(2):
                print('\t%10f %10f %10f %10f' % (leftpricej[i],
                                                 rightpricej[i],
                                                 leftrangej[i],
                                                 rightrangej[i]))

            leftprice = [0., 0.]
            rightprice = [0., 0.]
            leftrange = [0., 0.]
            rightrange = [0., 0.]
            # Analyze the objective coefficients of x23 and x33
            subc = [2, 5]

            task.dualsensitivity(subc,
                                 leftprice,
                                 rightprice,
                                 leftrange,
                                 rightrange)

            print('Results from sensitivity analysis on objective coefficients:')
            for i in range(2):
                print('\t%10f %10f %10f %10f' % (leftprice[i],
                                                 rightprice[i],
                                                 leftrange[i],
                                                 rightrange[i]))
    return None

# call the main function
try:
    main()
except mosek.MosekException as e:
    print("ERROR: %s" % str(e.errno))
    if e.msg is not None:
        print("\t%s" % e.msg)
    sys.exit(1)
except:
    import traceback
    traceback.print_exc()
    sys.exit(1)
```