# 6.6 Semidefinite Optimization¶

Semidefinite optimization is a generalization of conic optimization, allowing the use of matrix variables belonging to the convex cone of positive semidefinite matrices

$\PSD^r = \left\lbrace X \in \Symm^r: z^T X z \geq 0, \quad \forall z \in \real^r \right\rbrace,$

where $$\Symm^r$$ is the set of $$r \times r$$ real-valued symmetric matrices.

MOSEK can solve semidefinite optimization problems of the form

$\begin{split}\begin{array}{lccccll} \mbox{minimize} & & & \sum_{j=0}^{n-1} c_j x_j + \sum_{j=0}^{p-1} \left\langle \barC_j, \barX_j \right\rangle + c^f & & &\\ \mbox{subject to} & l_i^c & \leq & \sum_{j=0}^{n-1} a_{ij} x_j + \sum_{j=0}^{p-1} \left\langle \barA_{ij}, \barX_j \right\rangle & \leq & u_i^c, & i = 0, \ldots, m-1,\\ & l_j^x & \leq & x_j & \leq & u_j^x, & j = 0, \ldots, n-1,\\ & & & x \in \K, \barX_j \in \PSD^{r_j}, & & & j = 0, \ldots, p-1 \end{array}\end{split}$

where the problem has $$p$$ symmetric positive semidefinite variables $$\barX_j\in \PSD^{r_j}$$ of dimension $$r_j$$ with symmetric coefficient matrices $$\barC_j\in \Symm^{r_j}$$ and $$\barA_{i,j}\in \Symm^{r_j}$$. We use standard notation for the matrix inner product, i.e., for $$A,B\in \real^{m\times n}$$ we have

$\left\langle A,B \right\rangle := \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} A_{ij} B_{ij}.$

We demonstrate the setup of semidefinite variables and the matrices $$\barC$$, $$\barA$$ in the following examples:

## 6.6.1 Example SDO1¶

We consider the simple optimization problem with semidefinite and conic quadratic constraints:

(6.11)$\begin{split}\begin{array} {llcc} \mbox{minimize} & \left\langle \left[ \begin{array} {ccc} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{array} \right], \barX \right\rangle + x_0 & & \\ \mbox{subject to} & \left\langle \left[ \begin{array} {ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right], \barX \right\rangle + x_0 & = & 1, \\ & \left\langle \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right], \barX \right\rangle + x_1 + x_2 & = & 1/2, \\ & x_0 \geq \sqrt{{x_1}^2 + {x_2}^2}, & \barX \succeq 0, & \end{array}\end{split}$

The problem description contains a 3-dimensional symmetric semidefinite variable which can be written explicitly as:

$\begin{split}\barX = \left[ \begin{array} {ccc} \barX_{00} & \barX_{10} & \barX_{20} \\ \barX_{10} & \barX_{11} & \barX_{21} \\ \barX_{20} & \barX_{21} & \barX_{22} \end{array} \right] \in \PSD^3,\end{split}$

and a conic quadratic variable $$(x_0, x_1, x_2) \in \Q^3$$. The objective is to minimize

$2(\barX_{00} + \barX_{10} + \barX_{11} + \barX_{21} + \barX_{22}) + x_0,$

subject to the two linear constraints

$\begin{split}\begin{array}{ccc} \barX_{00} + \barX_{11} + \barX_{22} + x_0 & = & 1, \\ \barX_{00} + \barX_{11} + \barX_{22} + 2(\barX_{10} + \barX_{20} + \barX_{21}) + x_1 + x_2 & = & 1/2. \end{array}\end{split}$
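The factor 2 on the off-diagonal entries in the second constraint comes from symmetry: each off-diagonal pair $$(\barX_{kl}, \barX_{lk})$$ contributes twice to the inner product with the all-ones matrix. A small plain-Python check with hypothetical values for $$\barX$$ (chosen to be exactly representable in binary floating point):

```python
# Sketch: expanding <E, Xbar> for the all-ones matrix E shows why the
# off-diagonal entries of a symmetric Xbar pick up a coefficient of 2.
def inner(A, X):
    return sum(a * x for ra, rx in zip(A, X) for a, x in zip(ra, rx))

X = [[1,    0.5,   0.25],
     [0.5,  2,     0.125],
     [0.25, 0.125, 3]]            # hypothetical symmetric sample values
E = [[1] * 3 for _ in range(3)]   # all-ones coefficient matrix

diag  = X[0][0] + X[1][1] + X[2][2]
lower = X[1][0] + X[2][0] + X[2][1]
print(inner(E, X), diag + 2 * lower)  # both 7.75
```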

Setting up the linear and conic part

The linear and conic parts (constraints, variables, objective, cones) are set up using the methods described in the relevant tutorials; Sec. 6.1 (Linear Optimization), Sec. 6.3 (Conic Quadratic Optimization), Sec. 6.5 (Conic Exponential Optimization), Sec. 6.4 (Power Cone Optimization). Here we only discuss the aspects directly involving semidefinite variables.

Appending semidefinite variables

The dimensions of semidefinite variables are passed in prob.bardim.

Coefficients of semidefinite terms

Every term of the form $$(\barA_{i,j})_{k,l}(\barX_j)_{k,l}$$ is determined by four indices $$(i,j,k,l)$$ and a coefficient value $$v=(\barA_{i,j})_{k,l}$$. Here $$i$$ is the number of the constraint in which the term appears, $$j$$ is the index of the semidefinite variable it involves and $$(k,l)$$ is the position in that variable. This data is passed in the structure bara. Note that only the lower triangular part should be specified explicitly, that is one always has $$k\geq l$$.
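The bookkeeping behind bara can be sketched as follows; this is an illustrative, MOSEK-independent helper (`lower_triplets` is a hypothetical name, not part of the toolbox), shown in plain Python with 1-based indices to match the MATLAB listings:

```python
# Hedged sketch of the data behind prob.bara: one (i, j, k, l, v) tuple
# per nonzero in the lower triangle (k >= l) of a coefficient matrix
# Abar_{ij}.  Indices are 1-based, as in the MATLAB examples.
def lower_triplets(i, j, A):
    """Triplets for coefficient matrix A of semidefinite variable j
    in constraint i."""
    r = len(A)
    return [(i, j, k + 1, l + 1, A[k][l])
            for k in range(r) for l in range(k + 1)
            if A[k][l] != 0]

# Identity coefficient matrix of the first constraint in (6.11):
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(lower_triplets(1, 1, I3))
# -> [(1, 1, 1, 1, 1), (1, 1, 2, 2, 1), (1, 1, 3, 3, 1)]
```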

Semidefinite terms $$(\barC_j)_{k,l}(\barX_j)_{k,l}$$ of the objective are specified in the same way in barc but only include $$(j,k,l)$$ and $$v$$.

Source code

Listing 6.10 Code implementing problem (6.11).
function sdo1()
[r, res] = mosekopt('symbcon');

prob.c         = [1, 0, 0];

prob.bardim    = [3];
prob.barc.subj = [1, 1, 1, 1, 1];
prob.barc.subk = [1, 2, 2, 3, 3];
prob.barc.subl = [1, 1, 2, 2, 3];
prob.barc.val  = [2.0, 1.0, 2.0, 1.0, 2.0];

prob.blc = [1, 0.5];
prob.buc = [1, 0.5];

% It is good practice to provide the correct
% dimensions of A as the last two arguments
% because it facilitates better error checking.
prob.a         = sparse([1, 2, 2], [1, 2, 3], [1, 1, 1], 2, 3);
prob.bara.subi = [1, 1, 1, 2, 2, 2, 2, 2, 2];
prob.bara.subj = [1, 1, 1, 1, 1, 1, 1, 1, 1];
prob.bara.subk = [1, 2, 3, 1, 2, 3, 2, 3, 3];
prob.bara.subl = [1, 2, 3, 1, 1, 1, 2, 2, 3];
prob.bara.val  = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0];

prob.cones.type   = [res.symbcon.MSK_CT_QUAD];
prob.cones.sub    = [1, 2, 3];
prob.cones.subptr = [1];

[r,res] = mosekopt('minimize info',prob);

% Reconstruct the symmetric matrix barX from its stacked lower triangle
X = zeros(3);
X([1,2,3,5,6,9]) = res.sol.itr.barx;   % linear indices of the lower triangle
X = X + tril(X,-1)';                   % mirror into the upper triangle

x = res.sol.itr.xx;


The scalar solution $$x$$ is returned in res.sol.itr.xx and the numerical values of $$\barX_j$$ in res.sol.itr.barx: the lower triangular part of each $$\barX_j$$ is stacked column by column, and these arrays are concatenated into the single array res.sol.itr.barx representing $$\barX_1, \ldots, \barX_p$$. Similarly, the dual semidefinite variables $$\barS_j$$ are recovered through res.sol.itr.bars.
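The unstacking performed by the last three MATLAB lines can be made explicit with a small language-agnostic sketch, shown here in plain Python with 0-based indices (`unstack` is an illustrative helper, not part of the toolbox):

```python
# Sketch of how a column-stacked lower triangle (the barx layout) maps
# back to a full symmetric r-by-r matrix.
def unstack(barx, r):
    X = [[0] * r for _ in range(r)]
    it = iter(barx)
    for l in range(r):           # column by column
        for k in range(l, r):    # rows on and below the diagonal
            v = next(it)
            X[k][l] = v
            X[l][k] = v          # mirror into the upper triangle
    return X

# For r = 3 the stacking order is (0,0),(1,0),(2,0),(1,1),(2,1),(2,2):
X = unstack([1, 2, 3, 4, 5, 6], 3)
print(X)  # [[1, 2, 3], [2, 4, 5], [3, 5, 6]]
```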

## 6.6.2 Example SDO2¶

We now demonstrate how to define more than one semidefinite variable using the following problem with two matrix variables and two types of constraints:

(6.12)$\begin{split}\begin{array}{lrll} \mbox{minimize} & \langle C_1,\barX_1\rangle + \langle C_2,\barX_2\rangle & & \\ \mbox{subject to} & \langle A_1,\barX_1\rangle + \langle A_2,\barX_2\rangle & = & b, \\ & (\barX_2)_{01} & \leq & k, \\ & \barX_1, \barX_2 & \succeq & 0. \end{array}\end{split}$

In our example $$\dim(\barX_1)=3$$, $$\dim(\barX_2)=4$$, $$b=23$$, $$k=-3$$ and

$\begin{split}C_1= \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 6 \end{array}\right], A_1= \left[\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 2 \end{array}\right],\end{split}$
$\begin{split}C_2= \left[\begin{array}{cccc} 1 & -3 & 0 & 0\\ -3 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array}\right], A_2= \left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \\ \end{array}\right],\end{split}$

are constant symmetric matrices.

Note that this problem does not contain any scalar variables, but they could be added in the same fashion as in Sec. 6.6.1 (Example SDO1).

For explanations of data structures used in the example see Sec. 6.6.1 (Example SDO1). Note that the field bardim is used to specify that we have two semidefinite variables of dimensions 3 and 4.
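Because each $$\barX_j$$ contributes $$r_j(r_j+1)/2$$ entries to res.sol.itr.barx, the offsets used when extracting the two variables can be derived with a short plain-Python sketch (MOSEK-independent, 1-based ranges to match the MATLAB listing):

```python
# Sketch: with bardim = [3, 4], barX_1 occupies barx(1:6) and barX_2
# occupies barx(7:16), since a dimension-r variable stores r*(r+1)/2
# lower-triangular entries.
def tril_len(r):
    return r * (r + 1) // 2

bardim = [3, 4]
lengths = [tril_len(r) for r in bardim]
print(lengths)  # [6, 10]

start = 1
for r, n in zip(bardim, lengths):
    print(f"dim {r}: barx({start}:{start + n - 1})")
    start += n
```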

The code representing the above problem is shown below.

Listing 6.11 Implementation of model (6.12).
% Sample data
C1 = [ 1 0 0; 0 0 0; 0 0 6 ];
A1 = [ 1 0 1; 0 0 0; 1 0 2 ];
C2 = [ 1 -3 0 0; -3 2 0 0; 0 0 1 0; 0 0 0 0 ];
A2 = [ 0 1 0 0; 1 -1 0 0; 0 0 0 0; 0 0 0 -3 ];
b = 23;
k = -3;

% The scalar part, as in linear optimization examples
prob.c = [];
prob.a = sparse([], [], [], 2, 0);          % 2 constraints, no scalar variables
prob.blc = [b -inf];                        % Bounds
prob.buc = [b k];

% Dimensions of PSD variables
prob.bardim = [3, 4];

% Coefficients in the objective
[r1,c1,v1] = find(tril(C1));
[r2,c2,v2] = find(tril(C2));

prob.barc.subj = [repmat(1,length(v1),1);             % Which PSD variable (j)
                  repmat(2,length(v2),1)];
prob.barc.subk = [r1; r2];                            % Which matrix entry and value ((k,l)->v)
prob.barc.subl = [c1; c2];
prob.barc.val  = [v1; v2];

% Coefficients in the constraints
[r1,c1,v1] = find(tril(A1));
[r2,c2,v2] = find(tril(A2));

prob.bara.subi = [ones(length(v1)+length(v2),1);      % Which constraint (i)
                  2];
prob.bara.subj = [repmat(1,length(v1),1);
                  repmat(2,length(v2),1);
                  2];                                 % Which PSD variable (j)
prob.bara.subk = [r1; r2; 2];                         % Which matrix entry and value ((k,l)->v)
prob.bara.subl = [c1; c2; 1];
prob.bara.val  = [v1; v2; 0.5];

% Solve with log output
[r, res] = mosekopt('write(test.ptf) minimize echo(10)', prob);

% Retrieve the result assuming primal and dual feasible
X1 = zeros(3);
X1([1,2,3,5,6,9]) = res.sol.itr.barx(1:6);
X1 = X1 + tril(X1,-1)';

X2 = zeros(4);
X2([1,2,3,4,6,7,8,11,12,16]) = res.sol.itr.barx(7:16);
X2 = X2 + tril(X2,-1)';

X1
X2


As in the previous example, the numerical values of $$\barX_1$$ and $$\barX_2$$ are recovered from the stacked lower-triangular columns of res.sol.itr.barx, and the dual semidefinite variables $$\barS_1, \barS_2$$ through res.sol.itr.bars.