6.9 Quadratic Optimization

MOSEK can solve quadratic and quadratically constrained problems, as long as they are convex. This class of problems can be formulated as follows:

(6.28)\[\begin{split}\begin{array}{lrcccll} \mbox{minimize} & & & \half x^T Q^o x + c^T x + c^f & & & \\ \mbox{subject to} & l_k^c & \leq & \half x^T Q^k x + \sum_{j=0}^{n-1} a_{k,j} x_j & \leq & u_k^c, & k =0,\ldots ,m-1, \\ & l_j^x & \leq & x_j & \leq & u_j^x, & j=0,\ldots ,n-1. \end{array}\end{split}\]

Without loss of generality it is assumed that \(Q^o\) and \(Q^k\) are all symmetric because

\[x^T Q x = \half x^T(Q+Q^T)x.\]

This implies that a non-symmetric \(Q\) can be replaced by the symmetric matrix \(\half(Q+Q^T)\).
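
As a quick illustration (a sketch with arbitrary data, not part of the toolbox examples), the symmetrization can be done in MATLAB as follows:

% A non-symmetric Q and its symmetric part give the same quadratic value.
Q    = [1 2; 0 3];                 % arbitrary non-symmetric example
Qsym = 0.5*(Q + Q');               % symmetric replacement
x    = [1; -2];                    % arbitrary test point
fprintf('%g  %g\n', x'*Q*x, x'*Qsym*x);   % both print 9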

The problem is required to be convex. More precisely, the matrix \(Q^o\) must be positive semi-definite and the \(k\)th constraint must be of the form

(6.29)\[ l_k^c \leq \half x^T Q^k x + \sum_{j=0}^{n-1} a_{k,j} x_j\]

with a negative semi-definite \(Q^k\) or of the form

\[\half x^T Q^k x + \sum_{j=0}^{n-1} a_{k,j} x_j \leq u_k^c\]

with a positive semi-definite \(Q^k\). This implies that quadratic equalities are not allowed. Specifying a non-convex problem will result in an error when the optimizer is called.

A matrix \(Q\) is positive semi-definite if all its eigenvalues are nonnegative. An equivalent statement of the positive semi-definite requirement is

\[x^T Q x \geq 0, \quad \forall x.\]

If the convexity (i.e. semi-definiteness) conditions are not met, MOSEK will not produce reliable results or may not work at all.
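
In MATLAB a simple (if not the most efficient) way to check this is to inspect the smallest eigenvalue of the symmetrized matrix; the sketch below uses an arbitrary example matrix and an arbitrary tolerance:

% Rough convexity check via eigenvalues; Q and tol are arbitrary choices.
Q   = [2 0 -1; 0 0.2 0; -1 0 2];
tol = 1e-10;
if min(eig(0.5*(Q + Q'))) >= -tol
    disp('Q is (numerically) positive semidefinite.');
else
    disp('Q is not positive semidefinite; the problem is not convex.');
end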

6.9.1 Example: Quadratic Objective

We look at a small problem with linear constraints and quadratic objective:

(6.30)\[\begin{split}\begin{array}{lll} \mbox{minimize} & & x_1^2 + 0.1 x_2^2 + x_3^2 - x_1 x_3 - x_2 \\ \mbox{subject to} & 1 \leq & x_1 + x_2 + x_3 \\ & 0 \leq & x. \end{array}\end{split}\]

The matrix formulation of (6.30) has:

\[\begin{split}Q^o = \left[ \begin{array}{ccc} 2 & 0 & -1\\ 0 & 0.2 & 0\\ -1 & 0 & 2 \end{array} \right], \quad c = \left[ \begin{array}{c} 0\\ -1\\ 0 \end{array} \right], \quad A = \left[ \begin{array}{ccc} 1 & 1 & 1 \end{array} \right],\end{split}\]

with the bounds:

\[\begin{split}l^c = 1, \quad u^c = \infty, \quad l^x = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right] \mbox{ and } u^x = \left[ \begin{array}{c} \infty \\ \infty \\ \infty \end{array} \right].\end{split}\]

Please note the explicit \(\half\) in the objective function of (6.28): it implies that the diagonal elements of \(Q\) must be doubled, i.e. \(Q_{11}=2\) even though \(1\) is the coefficient of \(x_1^2\) in (6.30).
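
As a sanity check (a sketch with an arbitrary test point, not part of the toolbox examples), one can verify that the matrix form reproduces the objective of (6.30):

% Check that 0.5*x'*Qo*x + c'*x reproduces the objective of (6.30)
% at an arbitrary test point (chosen only for illustration).
Qo = [2 0 -1; 0 0.2 0; -1 0 2];
c  = [0; -1; 0];
x  = [0.5; 1.0; 0.25];
obj_matrix = 0.5*x'*Qo*x + c'*x;
obj_direct = x(1)^2 + 0.1*x(2)^2 + x(3)^2 - x(1)*x(3) - x(2);
fprintf('matrix form: %g, direct form: %g\n', obj_matrix, obj_direct);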

Using mosekopt

In Listing 6.17 we show how to use mosekopt to solve problem (6.30). This is the preferred way.

Listing 6.17 How to solve problem (6.30) using mosekopt. Click here to download.
function qo2()

clear prob;

% c vector.
prob.c = [0 -1 0]';

% Define the data.

% First the lower triangular part of q in the objective 
% is specified in a sparse format. The format is:
%
%   Q(prob.qosubi(t),prob.qosubj(t)) = prob.qoval(t), t=1,...,4

prob.qosubi = [ 1  3 2   3]';
prob.qosubj = [ 1  1 2   3]';
prob.qoval  = [ 2 -1 0.2 2]';

% a, the constraint matrix
subi  = ones(3,1);
subj  = 1:3;
valij = ones(3,1);

prob.a = sparse(subi,subj,valij);

% Lower bounds of constraints.
prob.blc  = [1.0]';

% Upper bounds of constraints.
prob.buc  = [inf]';

% Lower bounds of variables.
prob.blx  = sparse(3,1);

% Upper bounds of variables.
prob.bux = [];   % There are no upper bounds.

[r,res] = mosekopt('minimize',prob);

% Display return code.
fprintf('Return code: %d\n',r);

% Display primal solution for the constraints.
res.sol.itr.xc'

% Display primal solution for the variables.
res.sol.itr.xx'

This sequence of commands looks much like the one used to solve the linear optimization example with mosekopt, except for the definition of the \(Q\) matrix in prob. mosekopt requires that \(Q\) is specified in a sparse format: the vectors qosubi, qosubj, and qoval give the coefficients of \(Q\) in the objective according to the rule

\[Q_{\mathtt{qosubi(t)},\mathtt{qosubj(t)}} = \mathtt{qoval(t)}, \quad \mbox{for } t=1,\ldots,\mbox{length}(\mathtt{qosubi}).\]

An important observation is that since \(Q\) is symmetric, only its lower triangular part should be specified.
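
If the full symmetric \(Q\) is already available as a matrix, the triplets can be generated with standard MATLAB functions rather than entered by hand; a minimal sketch for the objective matrix of (6.30):

% Extract the lower triangular triplets of a symmetric matrix
% (an alternative to typing qosubi, qosubj and qoval by hand).
Qo = [2 0 -1; 0 0.2 0; -1 0 2];
[qosubi, qosubj, qoval] = find(sparse(tril(Qo)));
prob.qosubi = qosubi;
prob.qosubj = qosubj;
prob.qoval  = qoval;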

Using mskqpopt

In Listing 6.18 we show how to use mskqpopt to solve problem (6.30).

Listing 6.18 Function solving problem (6.30) using mskqpopt. Click here to download.
function qo1()

% Set up Q.
q     = [[2 0 -1];[0 0.2 0];[-1 0 2]];

% Set up the linear part of the problem.
c     = [0 -1 0]';
a     = ones(1,3);
blc   = [1.0];
buc   = [inf];
blx   = sparse(3,1);
bux   = [];

% Optimize the problem.
[res] = mskqpopt(q,c,a,blc,buc,blx,bux);

% Show the primal solution.
res.sol.itr.xx

The format for calling mskqpopt is very similar to that of msklpopt, except that the \(Q\) matrix is included as the first argument of the call. Similarly, the solution can be inspected by viewing the res.sol field.
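
As a quick consistency check one may evaluate the objective at the returned solution; the sketch below assumes it is placed at the end of qo1 in Listing 6.18, where q, c and res are in scope:

% Evaluate 0.5*x'*Q*x + c'*x at the solution returned by mskqpopt.
x = res.sol.itr.xx;
fprintf('Objective value: %-.4e\n', 0.5*x'*q*x + c'*x);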

6.9.2 Example: Quadratic Constraints

In this section we show how to solve a problem with quadratic constraints. Please note that quadratic constraints are subject to the convexity requirement (6.29).

Consider the problem:

\[\begin{split}\begin{array}{lcccl} \mbox{minimize} & & & x_1^2 + 0.1 x_2^2 + x_3^2 - x_1 x_3 - x_2 & \\ \mbox{subject to} & 1 & \leq & x_1 + x_2 + x_3 - x_1^2 - x_2^2 - 0.1 x_3^2 + 0.2 x_1 x_3, & \\ & & & x \geq 0. & \end{array}\end{split}\]

This is equivalent to

(6.31)\[\begin{split}\begin{array}{lccl} \mbox{minimize} & \half x^T Q^o x + c^T x & & \\ \mbox{subject to} & \half x^T Q^0 x + A x & \geq & b, \\ & x\geq 0, \end{array}\end{split}\]

where

\[\begin{split}Q^o = \left[ \begin{array}{ccc} 2 & 0 & -1 \\ 0 & 0.2 & 0 \\ -1 & 0 & 2 \end{array} \right], c = \left[ \begin{array}{ccc} 0 &-1 & 0 \end{array} \right]^T, A = \left[ \begin{array}{ccc} 1 & 1 & 1 \end{array} \right], b = 1.\end{split}\]
\[\begin{split}Q^0 = \left[ \begin{array}{ccc} -2 & 0 & 0.2 \\ 0 & -2 & 0 \\ 0.2 & 0 & -0.2 \end{array} \right].\end{split}\]
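
Since the quadratic constraint has a finite lower bound, \(Q^0\) must be negative semi-definite; this can be checked with an eigenvalue test analogous to the one above (a sketch with an arbitrary tolerance, not part of the toolbox examples):

% Check that the constraint matrix Q0 is negative semidefinite,
% as required for a quadratic constraint with a finite lower bound.
Q0  = [-2 0 0.2; 0 -2 0; 0.2 0 -0.2];
tol = 1e-10;
if max(eig(Q0)) <= tol
    disp('Q0 is (numerically) negative semidefinite.');
end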

The linear parts and the quadratic objective are set up as described in the previous tutorial.

Setting up quadratic constraints

Listing 6.19 Script implementing problem (6.31). Click here to download.
function qcqo1()
clear prob;

% Specify the linear objective terms.
prob.c      = [0, -1, 0];

% Specify the quadratic terms of the constraints.
prob.qcsubk = [1     1    1   1  ]';
prob.qcsubi = [1     2    3   3  ]';
prob.qcsubj = [1     2    3   1  ]';
prob.qcval  = [-2.0 -2.0 -0.2 0.2]';

% Specify the quadratic terms of the objective.
prob.qosubi = [1     2    3    3  ]';
prob.qosubj = [1     2    3    1  ]';
prob.qoval  = [2.0   0.2  2.0 -1.0]';

% Specify the linear constraint matrix
prob.a      = [1 1 1];

% Specify the lower bounds
prob.blc    = [1];
prob.blx    = zeros(3,1);

[r,res]     = mosekopt('minimize',prob);

% Display the solution.
fprintf('\nx:');
fprintf(' %-.4e',res.sol.itr.xx');
fprintf('\n||x||: %-.4e',norm(res.sol.itr.xx));
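
As a sanity check (a sketch that assumes it is appended to qcqo1 in Listing 6.19, where res is in scope), one may verify that the returned point satisfies the quadratic constraint of (6.31):

% Verify the quadratic constraint 0.5*x'*Q0*x + A*x >= 1 at the solution.
Q0  = [-2 0 0.2; 0 -2 0; 0.2 0 -0.2];
x   = res.sol.itr.xx;
lhs = 0.5*x'*Q0*x + sum(x);          % A = [1 1 1], so A*x = sum(x)
fprintf('\nConstraint value: %-.4e (lower bound is 1)\n', lhs);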