6.1 Linear Optimization

The simplest optimization problem is a purely linear problem. A linear optimization problem (see also Sec. 12.1 (Linear Optimization)) is a problem of the following form:

Minimize or maximize the objective function

\[\sum_{j=0}^{n-1} c_j x_j + c^f\]

subject to the linear constraints

\[l_k^c \leq \sum_{j=0}^{n-1} a_{kj} x_j \leq u_k^c,\quad k=0,\ldots ,m-1,\]

and the bounds

\[l_j^x \leq x_j \leq u_j^x, \quad j=0,\ldots ,n-1.\]

The problem description consists of the following elements:

  • \(m\) and \(n\) — the number of constraints and variables, respectively,

  • \(x\) — the variable vector of length \(n\),

  • \(c\) — the coefficient vector of length \(n\)

    \[\begin{split}c = \left[ \begin{array}{c} c_0 \\ \vdots \\ c_{n-1} \end{array} \right],\end{split}\]
  • \(c^f\) — fixed term in the objective,

  • \(A\) — an \(m\times n\) matrix of coefficients

    \[\begin{split}A = \left[ \begin{array}{ccc} a_{0,0} & \cdots & a_{0,(n-1)} \\ \vdots & \cdots & \vdots \\ a_{(m-1),0} & \cdots & a_{(m-1),(n-1)} \end{array} \right],\end{split}\]
  • \(l^c\) and \(u^c\) — the lower and upper bounds on constraints,

  • \(l^x\) and \(u^x\) — the lower and upper bounds on variables.

Please note that indexing starts at \(0\): \(x_0\) is the first element of the variable vector \(x\).

6.1.1 Example LO1

The following is an example of a small linear optimization problem:

(6.1)\[\begin{split}\begin{array} {lccccccccl} \mbox{maximize} & 3 x_0 & + & 1 x_1 & + & 5 x_2 & + & 1 x_3 & & \\ \mbox{subject to} & 3 x_0 & + & 1 x_1 & + & 2 x_2 & & & = & 30, \\ & 2 x_0 & + & 1 x_1 & + & 3 x_2 & + & 1 x_3 & \geq & 15, \\ & & & 2 x_1 & & & + & 3 x_3 & \leq & 25, \end{array}\end{split}\]

under the bounds

\[\begin{split}\begin{array}{ccccc} 0 & \leq & x_0 & \leq & \infty , \\ 0 & \leq & x_1 & \leq & 10, \\ 0 & \leq & x_2 & \leq & \infty ,\\ 0 & \leq & x_3 & \leq & \infty . \end{array}\end{split}\]
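In the notation of the general problem statement above, the data for (6.1) are (writing \(\pm\infty\) for an absent bound):

\[\begin{split}c = \left[ \begin{array}{c} 3 \\ 1 \\ 5 \\ 1 \end{array} \right],\quad A = \left[ \begin{array}{cccc} 3 & 1 & 2 & 0 \\ 2 & 1 & 3 & 1 \\ 0 & 2 & 0 & 3 \end{array} \right],\quad l^c = \left[ \begin{array}{c} 30 \\ 15 \\ -\infty \end{array} \right],\quad u^c = \left[ \begin{array}{c} 30 \\ \infty \\ 25 \end{array} \right],\end{split}\]

with \(l^x = (0,0,0,0)\), \(u^x = (\infty ,10,\infty ,\infty )\) and \(c^f = 0\). These are exactly the arrays that appear in the code listings below.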

Example: Linear optimization using msklpopt

A linear optimization problem such as (6.1) can be solved using the msklpopt function. The first step in solving the example is to set up the data for problem (6.1), i.e. \(c\), \(A\), etc. Afterwards the problem is solved with an appropriate call to msklpopt.

Listing 6.1 Script implementing problem (6.1) using msklpopt.
function lo1()

c     = [3 1 5 1]';
a     = [[3 1 2 0];[2 1 3 1];[0 2 0 3]];
blc   = [30 15  -inf]';
buc   = [30 inf 25 ]';
blx   = zeros(4,1);
bux   = [inf 10 inf inf]';

[res] = msklpopt(c,a,blc,buc,blx,bux,[],'maximize');
sol   = res.sol;

% Interior-point solution.

sol.itr.xx'      % x solution.
sol.itr.sux'     % Dual variables corresponding to buc.
sol.itr.slx'     % Dual variables corresponding to blx.

% Basic solution.

sol.bas.xx'      % x solution in basic solution.

Please note that:

  • Infinite bounds are specified using -inf and inf. Moreover, using [] for bux, buc, blx or blc means there are no bounds of the corresponding type.

  • Retrieving different solution types is discussed in Sec. 7.1 (Accessing the solution).
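As a sketch of the first point, the variable bounds could be dropped entirely by passing [] in place of blx and bux (reusing the data from Listing 6.1; note this changes the problem, since the variables become free):

```
% Sketch: same data as Listing 6.1, but [] for blx and bux means
% the variables have no lower or upper bounds.
[res] = msklpopt(c,a,blc,buc,[],[],[],'maximize');
```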

Example: Linear optimization using mosekopt

The function msklpopt is just a wrapper around mosekopt, which is the main interface to MOSEK and the only choice for more complicated problems, for instance those with conic constraints. The following MATLAB code demonstrates how to set up the prob structure for example (6.1) and solve the problem directly with mosekopt.

Listing 6.2 Script implementing problem (6.1) using mosekopt.
function lo2()
clear prob;

% Specify the c vector.
prob.c  = [3 1 5 1]';

% Specify a in sparse format.
subi   = [1 1 1 2 2 2 2 3 3];
subj   = [1 2 3 1 2 3 4 2 4];
valij  = [3 1 2 2 1 3 1 2 3];

prob.a = sparse(subi,subj,valij);

% Specify lower bounds of the constraints.
prob.blc = [30 15  -inf]';

% Specify  upper bounds of the constraints.
prob.buc = [30 inf 25 ]';

% Specify lower bounds of the variables.
prob.blx = zeros(4,1);

% Specify upper bounds of the variables.
prob.bux = [inf 10 inf inf]';

% Perform the optimization.
[r,res] = mosekopt('maximize',prob); 

% Show the optimal x solution.
res.sol.bas.xx

Please note that:

  • A MATLAB structure named prob containing all the relevant problem data is defined.

  • All fields of this structure are optional except prob.a, which is required to be a sparse matrix. The dimensions of this matrix determine the number of constraints and variables in the problem.

  • Different parts of the solution can be accessed as described in Sec. 7.1 (Accessing the solution).
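As a consequence of the second point, the problem dimensions never need to be stated explicitly; a sketch of how they follow from prob.a:

```
% The number of constraints m and variables n are implied by prob.a.
[m,n] = size(prob.a);    % here m = 3, n = 4
```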

Example: Linear optimization using linprog

MOSEK also provides a function linprog, compatible with the function of the same name from the MATLAB Optimization Toolbox. Consult Sec. 10.1 (Integration with MATLAB) for details.

Listing 6.3 Script implementing problem (6.1) using linprog.
f     = - [3 1 5 1]';				% minus because we maximize
A     = [[-2 -1 -3 -1]; [0 2 0 3]];
b     = [-15  25]';
Aeq   = [3 1 2 0];
beq   = 30;
l     = zeros(4,1);
u     = [inf 10 inf inf]';

% Example of setting options for linprog
% Get default options
opt = mskoptimset('');
% Turn on diagnostic output
opt = mskoptimset(opt,'Diagnostics','on');
% Set a MOSEK option, in this case turn basic identification off.
opt = mskoptimset(opt,'MSK_IPAR_INTPNT_BASIS','MSK_OFF');
% Modify a MOSEK parameter with double value
opt = mskoptimset(opt,'MSK_DPAR_INTPNT_TOL_INFEAS',1e-12);

[x,fval,exitflag,output,lambda] = linprog(f,A,b,Aeq,beq,l,u,opt);

x
fval
exitflag
output
lambda
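Because linprog minimizes, the sign of f was flipped in the script above; assuming the call succeeded (exitflag of 1), the maximized objective value of (6.1) is recovered by negating fval:

```
% Recover the objective value of the original maximization problem.
maxval = -fval;
```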