6.5 Quadratic Optimization

MOSEK can solve quadratic and quadratically constrained problems, as long as they are convex. This class of problems can be formulated as follows:

(1)\[\begin{split}\begin{array}{lrcccll} \mbox{minimize} & & & \half x^T Q^o x + c^T x + c^f & & & \\ \mbox{subject to} & l_k^c & \leq & \half x^T Q^k x + \sum_{j=0}^{n-1} a_{k,j} x_j & \leq & u_k^c, & k =0,\ldots ,m-1, \\ & l_j^x & \leq & x_j & \leq & u_j^x, & j=0,\ldots ,n-1. \end{array}\end{split}\]

Without loss of generality it is assumed that \(Q^o\) and \(Q^k\) are all symmetric because

\[x^T Q x = \half x^T(Q+Q^T)x.\]

This implies that a non-symmetric \(Q\) can be replaced by the symmetric matrix \(\half(Q+Q^T)\).
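For example, the replacement can be carried out and verified in a few lines of MATLAB (a sketch with arbitrary example data, not taken from the tutorial):

```matlab
% An arbitrary non-symmetric Q (example data only).
Q  = [2 1 0; 3 4 1; 0 1 6];

% The symmetric matrix 0.5*(Q + Q') defines the same quadratic form.
Qs = 0.5 * (Q + Q');

% Verify at an arbitrary point: both forms evaluate identically.
x  = [1; -2; 3];
x' * Q * x;    % equals x' * Qs * x
```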

The problem is required to be convex. More precisely, the matrix \(Q^o\) must be positive semi-definite and the \(k\)th constraint must be of the form

(2)\[ l_k^c \leq \half x^T Q^k x + \sum_{j=0}^{n-1} a_{k,j} x_j\]

with a negative semi-definite \(Q^k\) or of the form

\[\half x^T Q^k x + \sum_{j=0}^{n-1} a_{k,j} x_j \leq u_k^c\]

with a positive semi-definite \(Q^k\). This implies that quadratic equalities are not allowed. Specifying a non-convex problem will result in an error when the optimizer is called.

A matrix \(Q\) is positive semi-definite if all its eigenvalues are nonnegative. An alternative statement of the positive semi-definite requirement is

\[x^T Q x \geq 0, \quad \forall x.\]

If the convexity (i.e. semi-definiteness) conditions are not met, MOSEK will not produce reliable results or may not work at all.
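Positive semi-definiteness of a given matrix can therefore be checked numerically via its eigenvalues before handing the problem to MOSEK. A minimal sketch (the tolerance is an arbitrary choice, and this check is not part of the MOSEK API):

```matlab
% Q^o from the example in the next section.
Q = [2 0 -1; 0 0.2 0; -1 0 2];

% Q is positive semi-definite iff all eigenvalues are nonnegative.
% A small tolerance guards against rounding errors.
tol    = 1e-10;
is_psd = all(eig(0.5 * (Q + Q')) >= -tol);
```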

6.5.1 Example: Quadratic Objective

We look at a small problem with linear constraints and quadratic objective:

(3)\[\begin{split}\begin{array}{lll} \mbox{minimize} & & x_1^2 + 0.1 x_2^2 + x_3^2 - x_1 x_3 - x_2 \\ \mbox{subject to} & 1 \leq & x_1 + x_2 + x_3 \\ & 0 \leq & x. \end{array}\end{split}\]

The matrix formulation (3) has:

\[\begin{split}Q^o = \left[ \begin{array}{ccc} 2 & 0 & -1\\ 0 & 0.2 & 0\\ -1 & 0 & 2 \end{array} \right], c = \left[ \begin{array}{c} 0\\ -1\\ 0 \end{array} \right], A = \left[ \begin{array}{ccc} 1 & 1 & 1 \end{array} \right],\end{split}\]

with the bounds:

\[\begin{split}l^c = 1, u^c = \infty , l^x = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right] \mbox{ and } u^x = \left[ \begin{array}{c} \infty \\ \infty \\ \infty \end{array} \right].\end{split}\]

Please note the explicit \(\half\) in the objective function of (1): it implies that the diagonal elements must be doubled in \(Q\), i.e. \(Q_{11}=2\), even though the coefficient of \(x_1^2\) in (3) is \(1\).
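To make the mapping between (3) and \(Q^o\) concrete, the following sketch builds \(Q^o\) from the coefficients and verifies that \(\half x^T Q^o x\) reproduces the quadratic terms of (3) at an arbitrary point:

```matlab
% Objective of (3): x1^2 + 0.1 x2^2 + x3^2 - x1 x3 - x2.
% Because of the 1/2 in (1), diagonal entries are twice the
% squared-term coefficients, while each cross-term coefficient is
% shared between the (i,j) and (j,i) entries.
Qo = [ 2    0   -1  ;
       0    0.2  0  ;
      -1    0    2  ];

% Compare at an arbitrary point.
x      = [1; 2; 3];
q_form = 0.5 * x' * Qo * x;
direct = x(1)^2 + 0.1*x(2)^2 + x(3)^2 - x(1)*x(3);
```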

Using mosekopt

In Listing 8 we show how to use mosekopt to solve problem (3). This is the preferred way.

Listing 8 How to solve problem (3) using mosekopt. Click here to download.
function qo2()

clear prob;

% c vector.
prob.c = [0 -1 0]';

% Define the data.

% First the lower triangular part of q in the objective 
% is specified in a sparse format. The format is:
%
%   Q(prob.qosubi(t),prob.qosubj(t)) = prob.qoval(t), t=1,...,4

prob.qosubi = [ 1  3 2   3]';
prob.qosubj = [ 1  1 2   3]';
prob.qoval  = [ 2 -1 0.2 2]';

% a, the constraint matrix
subi  = ones(3,1);
subj  = 1:3;
valij = ones(3,1);

prob.a = sparse(subi,subj,valij);

% Lower bounds of constraints.
prob.blc  = [1.0]';

% Upper bounds of constraints.
prob.buc  = [inf]';

% Lower bounds of variables.
prob.blx  = sparse(3,1);

% Upper bounds of variables.
prob.bux = [];   % There are no upper bounds, i.e. all are infinite.

[r,res] = mosekopt('minimize',prob);

% Display return code.
fprintf('Return code: %d\n',r);

% Display primal solution for the constraints.
res.sol.itr.xc'

% Display primal solution for the variables.
res.sol.itr.xx'

This sequence of commands looks much like the one used to solve the linear optimization example with mosekopt, except for the definition of the \(Q\) matrix in prob. mosekopt requires \(Q\) to be specified in a sparse format, so the vectors qosubi, qosubj, and qoval specify the coefficients of \(Q\) in the objective using the principle

\[Q_{\mathtt{qosubi(t),qosubj(t)} } = \mathtt{ qoval(t) }, \mbox{for} \quad t=1,\ldots ,\mbox{length}(\mathtt{qosubi} ).\]

An important observation is that, because \(Q\) is symmetric, only its lower triangular part should be specified.
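If \(Q\) happens to be available as a full symmetric matrix, the lower triangular triplets expected by mosekopt can be extracted with tril and find. A sketch, which reproduces the qosubi, qosubj, and qoval vectors of Listing 8:

```matlab
% Full symmetric Q^o from the example.
Q = [2 0 -1; 0 0.2 0; -1 0 2];

% Keep only the lower triangular part (including the diagonal)
% and extract its nonzeros in (i, j, value) triplet form.
[qosubi, qosubj, qoval] = find(tril(Q));
```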

Using mskqpopt

In Listing 9 we show how to use mskqpopt to solve problem (3).

Listing 9 Function solving problem (3) using mskqpopt. Click here to download.
function qo1()

% Set up Q.
q     = [[2 0 -1];[0 0.2 0];[-1 0 2]];

% Set up the linear part of the problem.
c     = [0 -1 0]';
a     = ones(1,3);
blc   = [1.0];
buc   = [inf];
blx   = sparse(3,1);
bux   = [];

% Optimize the problem.
[res] = mskqpopt(q,c,a,blc,buc,blx,bux);

% Show the primal solution.
res.sol.itr.xx

Calling mskqpopt is very similar to calling msklpopt, except that the \(Q\) matrix is passed as the first argument. As before, the solution can be inspected via the res.sol field.

6.5.2 Example: Quadratic constraints

In this section we show how to solve a problem with quadratic constraints. Please note that quadratic constraints are subject to the convexity requirement (2).

Consider the problem:

\[\begin{split}\begin{array}{lcccl} \mbox{minimize} & & & x_1^2 + 0.1 x_2^2 + x_3^2 - x_1 x_3 - x_2 & \\ \mbox{subject to} & 1 & \leq & x_1 + x_2 + x_3 - x_1^2 - x_2^2 - 0.1 x_3^2 + 0.2 x_1 x_3, & \\ & & & x \geq 0. & \end{array}\end{split}\]

This is equivalent to

(4)\[\begin{split}\begin{array}{lccl} \mbox{minimize} & \half x^T Q^o x + c^T x & & \\ \mbox{subject to} & \half x^T Q^0 x + A x & \geq & b, \\ & x\geq 0, \end{array}\end{split}\]

where

\[\begin{split}Q^o = \left[ \begin{array}{ccc} 2 & 0 & -1 \\ 0 & 0.2 & 0 \\ -1 & 0 & 2 \end{array} \right], c = \left[ \begin{array}{ccc} 0 &-1 & 0 \end{array} \right]^T, A = \left[ \begin{array}{ccc} 1 & 1 & 1 \end{array} \right], b = 1.\end{split}\]
\[\begin{split}Q^0 = \left[ \begin{array}{ccc} -2 & 0 & 0.2 \\ 0 & -2 & 0 \\ 0.2 & 0 & -0.2 \end{array} \right].\end{split}\]

The linear parts and the quadratic objective are set up as described in the previous tutorial.
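Because the quadratic constraint in (4) is a lower bound, (2) requires \(Q^0\) to be negative semi-definite. This can be verified numerically before setting up the problem (a sketch, with an arbitrary tolerance; not part of the MOSEK API):

```matlab
% Q^0 from the constraint of (4).
Q0 = [-2 0 0.2; 0 -2 0; 0.2 0 -0.2];

% Q0 is negative semi-definite iff all eigenvalues are nonpositive,
% up to a small rounding tolerance.
tol    = 1e-10;
is_nsd = all(eig(Q0) <= tol);
```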

Setting up quadratic constraints

Please note that there are quadratic terms in both the objective and the constraint. This problem can be solved using mosekopt as follows.

Listing 10 Script implementing problem (4). Click here to download.
function qcqo1()
clear prob;

% Specify the linear objective terms.
prob.c      = [0, -1, 0];

% Specify the quadratic terms of the constraints.
prob.qcsubk = [1     1    1   1  ]';
prob.qcsubi = [1     2    3   3  ]';
prob.qcsubj = [1     2    3   1  ]';
prob.qcval  = [-2.0 -2.0 -0.2 0.2]';

% Specify the quadratic terms of the objective.
prob.qosubi = [1     2    3    3  ]';
prob.qosubj = [1     2    3    1  ]';
prob.qoval  = [2.0   0.2  2.0 -1.0]';

% Specify the linear constraint matrix
prob.a      = [1 1 1];

% Specify the lower bounds
prob.blc    = [1];
prob.blx    = zeros(3,1);

[r,res]     = mosekopt('minimize',prob);

% Display the solution.
fprintf('\nx:');
fprintf(' %-.4e',res.sol.itr.xx');
fprintf('\n||x||: %-.4e',norm(res.sol.itr.xx));