12 Sensitivity Analysis

Given an optimization problem, it is often useful to know how the optimal objective value changes when the problem parameters are perturbed. For example, assume that a bound represents the capacity of a machine. It may be possible to expand that capacity at a certain cost, and it is then worthwhile to know what the additional capacity is worth. This is precisely the type of question sensitivity analysis deals with.

Analyzing how the optimal objective value changes when the problem data is changed is called sensitivity analysis.

References

The book [Chv83] discusses the classical sensitivity analysis in Chapter 10 whereas the book [RTV97] presents a modern introduction to sensitivity analysis. Finally, it is recommended to read the short paper [Wal00] to avoid some of the pitfalls associated with sensitivity analysis.

Warning

Currently, sensitivity analysis is only available for continuous linear optimization problems. Moreover, MOSEK can only deal with perturbations of bounds and objective function coefficients.

12.1 Sensitivity Analysis for Linear Problems

12.1.1 The Optimal Objective Value Function

Assume that we are given the problem

(1)\[\begin{split}\begin{array}{rclccccl} z(l^c,u^c,l^x,u^x,c) & = & \mbox{minimize} & & & c^Tx & & \\ & & \mbox{subject to} & l^c & \leq & Ax & \leq & u^c, \\ & & & l^x & \leq & x & \leq & u^x, \end{array}\end{split}\]

and we want to know how the optimal objective value changes as \(l^c_i\) is perturbed. To answer this question we define the perturbed problem for \(l^c_i\) as follows

\[\begin{split}\begin{array}{rclccccl} f_{l^c_i}(\beta ) & = & \mbox{minimize} & & & c^Tx & & \\ & & \mbox{subject to} & l^c + \beta e_i & \leq & Ax & \leq & u^c, \\ & & & l^x & \leq & x & \leq & u^x, \end{array}\end{split}\]

where \(e_i\) is the \(i\)-th column of the identity matrix. The function

(2)\[f_{l^c_i}(\beta)\]

shows the optimal objective value as a function of \(\beta\). Please note that a change in \(\beta\) corresponds to a perturbation in \(l_i^c\) and hence (2) shows the optimal objective value as a function of varying \(l^c_i\) with the other bounds fixed.

It is possible to prove that the function (2) is piecewise linear and convex, i.e. its graph may look like those in Fig. 12.1 and Fig. 12.2.

_images/optimal_val_fun2.png

Fig. 12.1 \(\beta =0\) is in the interior of linearity interval.

_images/optimal_val_fun.png

Fig. 12.2 \(\beta =0\) is a breakpoint.

Clearly, if the function \(f_{l^c_i}(\beta )\) does not change much when \(\beta\) is changed, then the optimal objective value is insensitive to changes in \(l_i^c\). Therefore, we are interested in the rate of change of \(f_{l^c_i}(\beta)\) for small changes in \(\beta\), namely the derivative

\[f^\prime_{l^c_i}(0),\]

which is called the shadow price related to \(l^c_i\). The shadow price specifies how the objective value changes for small changes of \(\beta\) around zero. Moreover, we are interested in the linearity interval

\[\beta \in [\beta_1,\beta_2]\]

for which

\[f^\prime_{l^c_i}(\beta ) = f^\prime_{l^c_i}(0).\]

Since \(f_{l^c_i}\) is not a smooth function, \(f^\prime_{l^c_i}\) may not be defined at \(0\), as illustrated in Fig. 12.2. In this case we can define a left and a right shadow price and a left and a right linearity interval.
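The breakpoint case of Fig. 12.2 can be reproduced on a tiny instance. Consider the hypothetical problem of minimizing \(x_1 + 2x_2\) subject to \(x_1 + x_2 \geq 1 + \beta\), \(0 \leq x_1 \leq 1\), \(x_2 \geq 0\): the cheap variable \(x_1\) absorbs the perturbation until it hits its upper bound, after which the expensive \(x_2\) takes over, so \(f\) has a breakpoint at \(\beta = 0\) with left shadow price \(1\) and right shadow price \(2\). A minimal sketch (the instance and names are illustrative, not part of this chapter's example):

```python
def f(beta):
    # Optimal value of: minimize x1 + 2*x2
    #   subject to x1 + x2 >= 1 + beta, 0 <= x1 <= 1, x2 >= 0.
    # Solved in closed form: fill the demand with the cheap variable x1 first.
    demand = 1.0 + beta
    x1 = min(max(demand, 0.0), 1.0)
    x2 = max(demand - x1, 0.0)
    return x1 + 2.0 * x2

eps = 1e-6
left_price = (f(0.0) - f(-eps)) / eps    # one-sided derivative from the left
right_price = (f(eps) - f(0.0)) / eps    # one-sided derivative from the right
print(left_price, right_price)
```

Because \(f\) has a kink at \(\beta = 0\), the two one-sided difference quotients differ (approximately \(1\) and \(2\)); a basis type analysis would report only one of them.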

The function \(f_{l^c_i}\) considers changes in \(l^c_i\) only. We can define similar functions for the remaining parameters of \(z\) in (1) as well:

\[\begin{split}\begin{array}{rcll} f_{l^c_i}(\beta ) & = & z(l^c+\beta e_i,u^c,l^x,u^x,c), & i=1,\ldots,m, \\ f_{u^c_i}(\beta ) & = & z(l^c,u^c+\beta e_i,l^x,u^x,c), & i=1,\ldots,m, \\ f_{l^x_{j}}(\beta ) & = & z(l^c,u^c,l^x+\beta e_{j},u^x,c), & j=1,\ldots,n, \\ f_{u^x_{j}}(\beta ) & = & z(l^c,u^c,l^x,u^x+\beta e_{j},c), & j=1,\ldots,n, \\ f_{c_{j}}(\beta ) & = & z(l^c,u^c,l^x,u^x,c+\beta e_{j}), & j=1,\ldots,n. \end{array}\end{split}\]

Given these definitions it should be clear how linearity intervals and shadow prices are defined for the parameters \(u^c_i\) etc.

12.1.1.1 Equality Constraints

In MOSEK a constraint can be specified as either an equality constraint or a ranged constraint. If some constraint \(e_i^c\) is an equality constraint, we define the optimal value function for this constraint as

\[f_{e_i^c}(\beta ) = z(l^c+\beta e_i,u^c+\beta e_i,l^x,u^x,c).\]

Thus for an equality constraint the upper and the lower bounds (which are equal) are perturbed simultaneously. Therefore, MOSEK will handle sensitivity analysis differently for a ranged constraint with \(l^c_i = u^c_i\) and for an equality constraint.

12.1.2 The Basis Type Sensitivity Analysis

The classical sensitivity analysis discussed in most textbooks about linear optimization, e.g. [Chv83], is based on an optimal basic solution or, equivalently, on an optimal basis. This method may produce misleading results [RTV97] but is computationally cheap. Therefore, and for historical reasons, this method is available in MOSEK.

We will now briefly discuss the basis type sensitivity analysis. Given an optimal basic solution which provides a partition of variables into basic and non-basic variables, the basis type sensitivity analysis computes the linearity interval \([\beta_1,\beta_2]\) so that the basis remains optimal for the perturbed problem. A shadow price associated with the linearity interval is also computed. However, it is well-known that an optimal basic solution may not be unique and therefore the result depends on the optimal basic solution employed in the sensitivity analysis. This implies that the computed interval is only a subset of the largest interval for which the shadow price is constant. Furthermore, the optimal objective value function might have a breakpoint for \(\beta = 0\). In this case the basis type sensitivity method will only provide a subset of either the left or the right linearity interval.

In summary, the basis type sensitivity analysis is computationally cheap but does not provide complete information. Hence, the results of the basis type sensitivity analysis should be used with care.

12.1.3 The Optimal Partition Type Sensitivity Analysis

Another method for computing the complete linearity interval is called the optimal partition type sensitivity analysis. The main drawback of the optimal partition type sensitivity analysis is that it is computationally expensive compared to the basis type analysis. This type of sensitivity analysis is currently provided as an experimental feature in MOSEK.

Given the optimal primal and dual solutions to (1), i.e. \(x^*\) and \(((s_l^c)^*,(s_u^c)^*,(s_l^x)^*,(s_u^x)^*)\), the optimal objective value is given by

\[z^* := c^T x^*.\]

The left and right shadow prices \(\sigma_1\) and \(\sigma_2\) for \(l_i^c\) are given by this pair of optimization problems:

\[\begin{split}\begin{array} {rclccl} \sigma_1 & = & \mbox{minimize} & e_i^T s_l^c & & \\ & & \mbox{subject to} & A^T (s_l^c-s_u^c) + s_l^x - s_u^x & = & c, \\ & & & (l^c)^T (s_l^c) - (u^c)^T (s_u^c) + (l^x)^T (s_l^x) - (u^x)^T (s_u^x) & = & z^*, \\ & & & s_l^c,s_u^c,s_l^x,s_u^x \geq 0 & & \end{array}\end{split}\]

and

\[\begin{split}\begin{array} {rclccl} \sigma_2 & = & \mbox{maximize} & e_i^T s_l^c & & \\ & & \mbox{subject to} & A^T (s_l^c-s_u^c) + s_l^x - s_u^x & = & c, \\ & & & (l^c)^T (s_l^c) - (u^c)^T (s_u^c) + (l^x)^T (s_l^x) - (u^x)^T (s_u^x) & = & z^*, \\ & & & s_l^c,s_u^c,s_l^x,s_u^x \geq 0. & & \end{array}\end{split}\]

These two optimization problems make it easy to interpret the shadow price. Indeed, if \(((s_l^c)^*,(s_u^c)^*,(s_l^x)^*,(s_u^x)^*)\) is an arbitrary optimal dual solution, then

\[(s_l^c)_i^* \in [\sigma_1,\sigma_2].\]

Next, the linearity interval \([\beta_1,\beta_2]\) for \(l_i^c\) is computed by solving the two optimization problems

\[\begin{split}\begin{array} {lclccccl} \beta_1 & = & \mbox{minimize} & & & \beta & & \\ & & \mbox{subject to} & l^c + \beta e_i & \leq & Ax & \leq & u^c, \\ & & & & & c^T x - \sigma_1 \beta & = & z^*, \\ & & & l^x & \leq & x & \leq & u^x, \end{array}\end{split}\]

and

\[\begin{split}\begin{array} {lclccccl} \beta_2 & = & \mbox{maximize} & & & \beta & & \\ & & \mbox{subject to} & l^c + \beta e_i & \leq & Ax & \leq & u^c,\\ & & & & & c^T x - \sigma_2 \beta & = & z^*, \\ & & & l^x & \leq & x & \leq & u^x. \end{array}\end{split}\]

The linearity intervals and shadow prices for \(u_i^c\), \(l_j^x\) and \(u_j^x\) are computed similarly to those for \(l_i^c\).
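The auxiliary problems above yield exact intervals; numerically, a linearity interval can also be located by scanning the optimal value function for the stretch over which its slope stays constant. A sketch on a hypothetical piecewise linear convex function with a breakpoint at \(\beta = 0\) (slopes \(1\) and \(2\)), standing in for \(f_{l^c_i}\):

```python
def f(beta):
    # Hypothetical piecewise linear convex optimal value function
    # with a breakpoint at beta = 0 (left slope 1, right slope 2).
    return max(1.0 + beta, 1.0 + 2.0 * beta)

def linearity_interval(beta0, lo=-10.0, hi=10.0, step=0.01, tol=1e-9):
    """Grow [b1, b2] around beta0 while the slope of f stays constant
    (up to the grid resolution given by step)."""
    def slope(b):
        return (f(b + step) - f(b)) / step
    s0 = slope(beta0)
    b1 = b2 = beta0
    while b1 - step >= lo and abs(slope(b1 - step) - s0) < tol:
        b1 -= step
    while b2 + step <= hi and abs(slope(b2 + step) - s0) < tol:
        b2 += step
    return b1, b2

b1, b2 = linearity_interval(1.0)   # the slope-2 piece, clipped to [lo, hi]
```

Here the right linearity interval of the breakpoint is recovered as approximately \([0, 10]\) (the scan is clipped to the window \([lo, hi]\)); MOSEK instead obtains \(\beta_1\) and \(\beta_2\) exactly from the two auxiliary optimization problems.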

The left and right shadow prices for \(c_j\), denoted \(\sigma_1\) and \(\sigma_2\) respectively, are computed as follows:

\[\begin{split}\begin{array} {lclccccl} \sigma_1 & = & \mbox{minimize} & & & e_j^T x & & \\ & & \mbox{subject to} & l^c & \leq & Ax & \leq & u^c,\\ & & & & & c^T x & = & z^*,\\ & & & l^x & \leq & x & \leq & u^x, \end{array}\end{split}\]

and

\[\begin{split}\begin{array} {lclccccl} \sigma_2 & = & \mbox{maximize} & & & e_j^T x & & \\ & & \mbox{subject to} & l^c & \leq & Ax & \leq & u^c,\\ & & & & & c^T x & = & z^*,\\ & & & l^x & \leq & x & \leq & u^x. \end{array}\end{split}\]

Once again the above two optimization problems make it easy to interpret the shadow prices. Indeed, if \(x^*\) is an arbitrary primal optimal solution, then

\[x_j^* \in [\sigma_1,\sigma_2].\]

The linearity interval \([\beta_1,\beta_2]\) for \(c_j\) is computed as follows:

\[\begin{split}\begin{array} {rclccl} \beta_1 & = & \mbox{minimize} & \beta & & \\ & & \mbox{subject to} & A^T (s_l^c-s_u^c) + s_l^x - s_u^x & = & c + \beta e_j, \\ & & & (l^c)^T (s_l^c) - (u^c)^T (s_u^c) + (l^x)^T (s_l^x) - (u^x)^T (s_u^x) - \sigma_1 \beta & \leq & z^*, \\ & & & s_l^c,s_u^c,s_l^x,s_u^x \geq 0 & & \end{array}\end{split}\]

and

\[\begin{split}\begin{array} {rclccl} \beta_2 & = & \mbox{maximize} & \beta & & \\ & & \mbox{subject to} & A^T (s_l^c-s_u^c) + s_l^x - s_u^x & = & c + \beta e_j, \\ & & & (l^c)^T (s_l^c) - (u^c)^T (s_u^c) + (l^x)^T (s_l^x) - (u^x)^T (s_u^x) - \sigma_2 \beta & \leq & z^*, \\ & & & s_l^c,s_u^c,s_l^x,s_u^x \geq 0. & & \end{array}\end{split}\]

12.1.4 Example: Sensitivity Analysis

As an example we will use the following transportation problem. Consider the problem of minimizing the transportation cost between a number of production plants and stores. Each plant supplies a number of goods and each store has a given demand that must be met. Supply, demand and cost of transportation per unit are shown in Fig. 12.3.

_images/transportp2.png

Fig. 12.3 Supply, demand and cost of transportation.

If we denote the number of transported goods from location \(i\) to location \(j\) by \(x_{ij}\), the problem can be formulated as the linear optimization problem of minimizing

\[\begin{array}{ccccccccccccc} 1x_{11} & + & 2x_{12} & + & 5x_{23} & + & 2x_{24} & + & 1x_{31} & + & 2x_{33} & + & 1 x_{34} \end{array}\]

subject to

(3)\[\begin{split}\begin{array}{ccccccccccccccl} x_{11} & + & x_{12} & & & & & & & & & & & \leq & 400, \\ & & & & x_{23} & + & x_{24} & & & & & & & \leq & 1200, \\ & & & & & & & & x_{31} & + & x_{33} & + & x_{34} & \leq & 1000, \\ x_{11} & & & & & & & + & x_{31} & & & & & = & 800, \\ & & x_{12} & & & & & & & & & & & = & 100, \\ & & & & x_{23} & + & & & & & x_{33} & & & = & 500, \\ & & & & & & x_{24} & + & & & & & x_{34} & = & 500, \\ x_{11}, & & x_{12}, & & x_{23}, & & x_{24}, & & x_{31}, & & x_{33}, & & x_{34} & \geq & 0. \end{array}\end{split}\]
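Since all data in (3) are multiples of 100, an optimal vertex solution also lies on a grid of 100s, so the optimum can be verified by brute force: the four equality constraints determine \(x_{11}\), \(x_{12}\), \(x_{23}\) and \(x_{24}\) once \(x_{31}\), \(x_{33}\) and \(x_{34}\) are fixed. The sketch below exploits this grid restriction (an assumption valid for this data only, not a general method, and not how MOSEK solves the problem):

```python
from itertools import product

COST = dict(x11=1, x12=2, x23=5, x24=2, x31=1, x33=2, x34=1)

def solve(cap1=400):
    """Brute-force problem (3); cap1 is the capacity of the first plant
    (400 in the original data), so solve(400 + beta) samples the optimal
    value function for perturbations of that bound."""
    best = None
    for x31, x33, x34 in product(range(0, 1001, 100), repeat=3):
        x = dict(x11=800 - x31, x12=100, x23=500 - x33,
                 x24=500 - x34, x31=x31, x33=x33, x34=x34)
        if min(x.values()) < 0:
            continue  # nonnegativity
        if (x["x11"] + x["x12"] > cap1
                or x["x23"] + x["x24"] > 1200
                or x["x31"] + x["x33"] + x["x34"] > 1000):
            continue  # supply capacities
        val = sum(COST[k] * v for k, v in x.items())
        if best is None or val < best:
            best = val
    return best

print(solve())                  # optimal objective value of (3): 3000
print(solve(500), solve(300))   # 2900 and 3300
```

The two perturbed solves match the first row of Table 12.2: raising the bound of constraint \(1\) by \(100\) lowers the optimum by \(1 \cdot 100\) (right shadow price \(1\)), while lowering it by \(100\) raises the optimum by \(3 \cdot 100\) (left shadow price \(3\)).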

The sensitivity parameters are shown in Table 12.1 and Table 12.3 for the basis type analysis and in Table 12.2 and Table 12.4 for the optimal partition type analysis.

Table 12.1 Ranges and shadow prices related to bounds on constraints and variables: results for the basis type sensitivity analysis.
Con. \(\beta_1\) \(\beta_2\) \(\sigma_1\) \(\sigma_2\)
\(1\) \(-300.00\) \(0.00\) \(3.00\) \(3.00\)
\(2\) \(-700.00\) \(+\infty\) \(0.00\) \(0.00\)
\(3\) \(-500.00\) \(0.00\) \(3.00\) \(3.00\)
\(4\) \(-0.00\) \(500.00\) \(4.00\) \(4.00\)
\(5\) \(-0.00\) \(300.00\) \(5.00\) \(5.00\)
\(6\) \(-0.00\) \(700.00\) \(5.00\) \(5.00\)
\(7\) \(-500.00\) \(700.00\) \(2.00\) \(2.00\)
Var. \(\beta_1\) \(\beta_2\) \(\sigma_1\) \(\sigma_2\)
\(x_{11}\) \(-\infty\) \(300.00\) \(0.00\) \(0.00\)
\(x_{12}\) \(-\infty\) \(100.00\) \(0.00\) \(0.00\)
\(x_{23}\) \(-\infty\) \(0.00\) \(0.00\) \(0.00\)
\(x_{24}\) \(-\infty\) \(500.00\) \(0.00\) \(0.00\)
\(x_{31}\) \(-\infty\) \(500.00\) \(0.00\) \(0.00\)
\(x_{33}\) \(-\infty\) \(500.00\) \(0.00\) \(0.00\)
\(x_{34}\) \(-0.00\) \(500.00\) \(2.00\) \(2.00\)

Table 12.2 Ranges and shadow prices related to bounds on constraints and variables: results for the optimal partition type sensitivity analysis.
Con. \(\beta_1\) \(\beta_2\) \(\sigma_1\) \(\sigma_2\)
\(1\) \(-300.00\) \(500.00\) \(3.00\) \(1.00\)
\(2\) \(-700.00\) \(+\infty\) \(-0.00\) \(-0.00\)
\(3\) \(-500.00\) \(500.00\) \(3.00\) \(1.00\)
\(4\) \(-500.00\) \(500.00\) \(2.00\) \(4.00\)
\(5\) \(-100.00\) \(300.00\) \(3.00\) \(5.00\)
\(6\) \(-500.00\) \(700.00\) \(3.00\) \(5.00\)
\(7\) \(-500.00\) \(700.00\) \(2.00\) \(2.00\)
Var. \(\beta_1\) \(\beta_2\) \(\sigma_1\) \(\sigma_2\)
\(x_{11}\) \(-\infty\) \(300.00\) \(0.00\) \(0.00\)
\(x_{12}\) \(-\infty\) \(100.00\) \(0.00\) \(0.00\)
\(x_{23}\) \(-\infty\) \(500.00\) \(0.00\) \(2.00\)
\(x_{24}\) \(-\infty\) \(500.00\) \(0.00\) \(0.00\)
\(x_{31}\) \(-\infty\) \(500.00\) \(0.00\) \(0.00\)
\(x_{33}\) \(-\infty\) \(500.00\) \(0.00\) \(0.00\)
\(x_{34}\) \(-\infty\) \(500.00\) \(0.00\) \(2.00\)

Table 12.3 Ranges and shadow prices related to the objective coefficients: results for the basis type sensitivity analysis.
Var. \(\beta_1\) \(\beta_2\) \(\sigma_1\) \(\sigma_2\)
\(c_1\) \(-\infty\) \(3.00\) \(300.00\) \(300.00\)
\(c_2\) \(-\infty\) \(\infty\) \(100.00\) \(100.00\)
\(c_3\) \(-2.00\) \(\infty\) \(0.00\) \(0.00\)
\(c_4\) \(-\infty\) \(2.00\) \(500.00\) \(500.00\)
\(c_5\) \(-3.00\) \(\infty\) \(500.00\) \(500.00\)
\(c_6\) \(-\infty\) \(2.00\) \(500.00\) \(500.00\)
\(c_7\) \(-2.00\) \(\infty\) \(0.00\) \(0.00\)

Table 12.4 Ranges and shadow prices related to the objective coefficients: results for the optimal partition type sensitivity analysis.
Var. \(\beta_1\) \(\beta_2\) \(\sigma_1\) \(\sigma_2\)
\(c_1\) \(-\infty\) \(3.00\) \(300.00\) \(300.00\)
\(c_2\) \(-\infty\) \(\infty\) \(100.00\) \(100.00\)
\(c_3\) \(-2.00\) \(\infty\) \(0.00\) \(0.00\)
\(c_4\) \(-\infty\) \(2.00\) \(500.00\) \(500.00\)
\(c_5\) \(-3.00\) \(\infty\) \(500.00\) \(500.00\)
\(c_6\) \(-\infty\) \(2.00\) \(500.00\) \(500.00\)
\(c_7\) \(-2.00\) \(\infty\) \(0.00\) \(0.00\)

Examining the results from the optimal partition type sensitivity analysis we see that for constraint number \(1\) we have \(\sigma_1 = 3,\ \sigma_2=1\) and \(\beta_1 = -300,\ \beta_2=500\). Therefore, we have a left linearity interval of \([-300,0]\) and a right interval of \([0,500]\). The corresponding left and right shadow prices are \(3\) and \(1\) respectively. This implies that if the upper bound on constraint \(1\) increases by

\[\beta \in [0,\beta_2] = [0,500]\]

then the optimal objective value will decrease by the value

\[\sigma_2 \beta = 1 \beta .\]

Correspondingly, if the upper bound on constraint \(1\) is decreased by

\[\beta \in [0,300]\]

then the optimal objective value will increase by the value

\[\sigma_1 \beta = 3 \beta .\]

12.2 Sensitivity Analysis with MOSEK

A sensitivity analysis can be performed with the MOSEK command line tool specifying the option -sen, e.g.

mosek myproblem.mps -sen sensitivity.ssp

where sensitivity.ssp is a file in the format described in the next section. The ssp file describes which parts of the problem the sensitivity analysis should be performed on, see Section 12.2.1.

By default results are written to a file named myproblem.sen. If necessary, this file name can be changed by setting the MSK_SPAR_SENSITIVITY_RES_FILE_NAME parameter. By default a basis type sensitivity analysis is performed; the type of sensitivity analysis (basis or optimal partition) is selected by setting the parameter MSK_IPAR_SENSITIVITY_TYPE appropriately.

12.2.1 Sensitivity Analysis Specification File

MOSEK employs an MPS-like file format to specify the model parameters on which the sensitivity analysis should be performed. Since the optimal partition type sensitivity analysis can be computationally expensive, it is important to limit the analysis to the parameters of interest.

The format of the sensitivity specification file is shown in Listing 12.1, where capitalized names are keywords, and names in brackets are names of the constraints and variables to be included in the analysis.

Listing 12.1 Sensitivity analysis file specification.
BOUNDS CONSTRAINTS
U|L|LU [cname1]
U|L|LU [cname2]-[cname3]
BOUNDS VARIABLES
U|L|LU [vname1]
U|L|LU [vname2]-[vname3]
OBJECTIVE VARIABLES
[vname1]
[vname2]-[vname3]

The sensitivity specification file has three sections:

  • BOUNDS CONSTRAINTS: Specifies on which bounds on constraints the sensitivity analysis should be performed.
  • BOUNDS VARIABLES: Specifies on which bounds on variables the sensitivity analysis should be performed.
  • OBJECTIVE VARIABLES: Specifies on which objective coefficients the sensitivity analysis should be performed.

A line in the body of a section must begin with whitespace. In the BOUNDS sections, one of the keys L, U, and LU must appear next. These keys specify whether the sensitivity analysis is performed on the lower bound, on the upper bound, or on both the lower and the upper bound, respectively. Next, a single constraint (variable) or a range of constraints (variables) is specified.

Recall from Section 12.1.1.1 that equality constraints are handled in a special way. Sensitivity analysis of an equality constraint can be specified with either L, U, or LU, all indicating the same, namely that upper and lower bounds (which are equal) are perturbed simultaneously.

As an example consider

BOUNDS CONSTRAINTS
L  "cons1"
U  "cons2"
LU "cons3"-"cons6"

which requests that sensitivity analysis is performed on the lower bound of the constraint named cons1, on the upper bound of the constraint named cons2, and on both lower and upper bound on the constraints named cons3 to cons6.

It is allowed to use indexes instead of names, for instance

BOUNDS CONSTRAINTS
L  "cons1"
U  2
LU 3 - 6

A * character indicates that the rest of the line is a comment and is ignored.
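To illustrate the format rules above, a minimal reader for this specification could be sketched as follows (parse_ssp is a hypothetical helper written for this illustration, not part of MOSEK; it does not handle * inside quoted names):

```python
def parse_ssp(text):
    """Sketch of a reader for the sensitivity specification format:
    returns {section: [(key, spec), ...]}; key is None in the
    OBJECTIVE VARIABLES section, which carries no L/U/LU key."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.split("*", 1)[0]        # '*' starts a comment
        if not line.strip():
            continue                       # blank or comment-only line
        if not line[0].isspace():          # section headers start in column 0
            current = line.strip()
            sections[current] = []
        elif current == "OBJECTIVE VARIABLES":
            sections[current].append((None, line.strip()))
        else:
            key, spec = line.split(None, 1)
            sections[current].append((key, spec.strip()))
    return sections

spec = parse_ssp('''BOUNDS CONSTRAINTS
 L  "cons1"
 U  2            * index instead of a name
 LU 3 - 6
OBJECTIVE VARIABLES
 "x11"
''')
print(spec["BOUNDS CONSTRAINTS"])
```

Each body entry keeps the key (L, U, or LU) together with the name, index, or range it applies to, mirroring how the sections are described above.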

12.2.2 Example: Sensitivity Analysis from Command Line

As an example, consider problem (3) and the sensitivity specification file shown below (included among the examples in the distribution).

Listing 12.2 Sensitivity file for problem (3). Click here to download.
* Comment 1

BOUNDS CONSTRAINTS
 U "c1"         * Analyze upper bound for constraints named c1 
 U 2            * Analyze upper bound for constraints with index 2 
 U 3-5          * Analyze upper bound for constraint with index in interval [3:5] 

BOUNDS VARIABLES
 L 2-4          * This section specifies which bounds on variables should be analyzed.
 L "x11"
OBJECTIVE VARIABLES
 "x11"          * This section specifies which objective coefficients should be analyzed.
 2

The command

mosek transport.lp -sen sensitivity.ssp  -d MSK_IPAR_SENSITIVITY_TYPE sensitivitytype.basis

produces the following output file:

Listing 12.3 Results of sensitivity analysis
BOUNDS CONSTRAINTS
INDEX    NAME             BOUND    LEFTRANGE        RIGHTRANGE       LEFTPRICE        RIGHTPRICE
0        c1               UP       -6.574875e-18    5.000000e+02     1.000000e+00     1.000000e+00
2        c3               UP       -6.574875e-18    5.000000e+02     1.000000e+00     1.000000e+00
3        c4               FIX      -5.000000e+02    6.574875e-18     2.000000e+00     2.000000e+00
4        c5               FIX      -1.000000e+02    6.574875e-18     3.000000e+00     3.000000e+00
5        c6               FIX      -5.000000e+02    6.574875e-18     3.000000e+00     3.000000e+00

BOUNDS VARIABLES
INDEX    NAME             BOUND    LEFTRANGE        RIGHTRANGE       LEFTPRICE        RIGHTPRICE
2        x23              LO       -6.574875e-18    5.000000e+02     2.000000e+00     2.000000e+00
3        x24              LO       -inf             5.000000e+02     0.000000e+00     0.000000e+00
4        x31              LO       -inf             5.000000e+02     0.000000e+00     0.000000e+00
0        x11              LO       -inf             3.000000e+02     0.000000e+00     0.000000e+00

OBJECTIVE VARIABLES
INDEX    NAME                      LEFTRANGE        RIGHTRANGE       LEFTPRICE        RIGHTPRICE
0        x11                       -inf             1.000000e+00     3.000000e+02     3.000000e+02
2        x23                       -2.000000e+00    +inf             0.000000e+00     0.000000e+00

12.2.3 Controlling Log Output

Setting the parameter MSK_IPAR_LOG_SENSITIVITY to \(1\) or \(0\) (default) controls whether or not the results from sensitivity calculations are printed to the message stream.

The parameter MSK_IPAR_LOG_SENSITIVITY_OPT controls the amount of debug information on internal calculations from the sensitivity analysis.