# 7.6 Semidefinite Optimization

Semidefinite optimization is a generalization of conic optimization, allowing the use of matrix variables belonging to the convex cone of positive semidefinite matrices

$\PSD^r = \left\lbrace X \in \Symm^r: z^T X z \geq 0, \quad \forall z \in \real^r \right\rbrace,$

where $$\Symm^r$$ is the set of $$r \times r$$ real-valued symmetric matrices.

MOSEK can solve semidefinite optimization problems stated in the primal form,

(7.13)$\begin{split}\begin{array}{lccccll} \mbox{minimize} & & & \sum_{j=0}^{p-1} \left\langle \barC_j, \barX_j \right\rangle + \sum_{j=0}^{n-1} c_j x_j + c^f & & &\\ \mbox{subject to} & l_i^c & \leq & \sum_{j=0}^{p-1} \left\langle \barA_{ij}, \barX_j \right\rangle + \sum_{j=0}^{n-1} a_{ij} x_j & \leq & u_i^c, & i = 0, \ldots, m-1,\\ & & & \sum_{j=0}^{p-1} \left\langle \barF_{ij}, \barX_j \right\rangle + \sum_{j=0}^{n-1} f_{ij} x_j + g_i & \in & \K_{i}, & i = 0, \ldots, q-1,\\ & l_j^x & \leq & x_j & \leq & u_j^x, & j = 0, \ldots, n-1,\\ & & & x \in \K, \barX_j \in \PSD^{r_j}, & & & j = 0, \ldots, p-1 \end{array}\end{split}$

where the problem has $$p$$ symmetric positive semidefinite variables $$\barX_j\in \PSD^{r_j}$$ of dimension $$r_j$$. The symmetric coefficient matrices $$\barC_j\in \Symm^{r_j}$$ and $$\barA_{ij}\in \Symm^{r_j}$$ specify PSD terms in the linear objective and the linear constraints, respectively, while the symmetric coefficient matrices $$\barF_{ij}\in \Symm^{r_j}$$ specify PSD terms in the affine conic constraints. Note that $$q$$ in (7.13) is the total dimension of all the cones, i.e. $$q=\text{dim}(\K_1 \times \ldots \times \K_k)$$, given that there are $$k$$ ACCs. We use standard notation for the matrix inner product, i.e., for $$A,B\in \real^{m\times n}$$ we have

$\left\langle A,B \right\rangle := \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} A_{ij} B_{ij}.$
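
As a quick illustration, this inner product can be sketched in a few lines of plain C++ (independent of the Fusion API; the function name `dot` is our own, introduced only for this sketch):

```cpp
#include <array>
#include <cstddef>

// <A,B> = sum of elementwise products of two same-shape real matrices.
template <std::size_t M, std::size_t N>
double dot(const std::array<std::array<double, N>, M>& A,
           const std::array<std::array<double, N>, M>& B) {
  double s = 0.0;
  for (std::size_t i = 0; i < M; ++i)
    for (std::size_t j = 0; j < N; ++j)
      s += A[i][j] * B[i][j];
  return s;
}
```

For the identity matrix $$I$$, $$\langle I, X\rangle$$ reduces to the trace of $$X$$, which is how an identity coefficient matrix picks out the diagonal sum of a semidefinite variable.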

In addition to the primal form presented above, semidefinite problems can be expressed in their dual form. Constraints in this form are usually called linear matrix inequalities (LMIs). LMIs can be easily specified in MOSEK using the vectorized positive semidefinite cone which is defined as:

• Vectorized semidefinite domain:

$\PSD^{d,\mathrm{vec}} = \left\{(x_1,\ldots,x_{d(d+1)/2})\in \real^n~:~ \mathrm{sMat}(x)\in\PSD^d\right\},$

where $$n=d(d+1)/2$$ and,

$\begin{split}\mathrm{sMat}(x) = \left[\begin{array}{cccc}x_1 & x_2/\sqrt{2} & \cdots & x_{d}/\sqrt{2} \\ x_2/\sqrt{2} & x_{d+1} & \cdots & x_{2d-1}/\sqrt{2} \\ \cdots & \cdots & \cdots & \cdots \\ x_{d}/\sqrt{2} & x_{2d-1}/\sqrt{2} & \cdots & x_{d(d+1)/2}\end{array}\right],\end{split}$

or equivalently

$\PSD^{d,\mathrm{vec}} = \left\{\mathrm{sVec}(X)~:~X\in\PSD^d\right\},$

where

$\mathrm{sVec}(X) = (X_{11},\sqrt{2}X_{21},\ldots,\sqrt{2}X_{d1},X_{22},\sqrt{2}X_{32},\ldots,X_{dd}).$

In other words, the domain consists of vectorizations of the lower-triangular part of a positive semidefinite matrix, with the non-diagonal elements additionally rescaled. LMIs can be expressed by restricting appropriate affine expressions to this cone type.
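
A small plain-C++ sketch of $$\mathrm{sVec}$$ (our own helper, not part of Fusion) makes the scaling concrete; the $$\sqrt{2}$$ factor is exactly what preserves inner products between a symmetric matrix and its vectorization:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Column-wise lower-triangular vectorization with sqrt(2)-scaled
// off-diagonal entries, following the sVec definition above.
std::vector<double> sVec(const std::vector<std::vector<double>>& X) {
  const std::size_t d = X.size();
  std::vector<double> v;
  v.reserve(d * (d + 1) / 2);
  for (std::size_t j = 0; j < d; ++j)     // column by column
    for (std::size_t i = j; i < d; ++i)   // lower triangle only
      v.push_back(i == j ? X[i][j] : std::sqrt(2.0) * X[i][j]);
  return v;
}
```

With this scaling, $$\langle X,Y\rangle = \mathrm{sVec}(X)^T\mathrm{sVec}(Y)$$ for symmetric $$X,Y$$, so objective values and norms are unchanged by the vectorization.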

For other types of cones supported by MOSEK, see Sec. 14.8 (Supported domains) and the other tutorials in this chapter. Different cone types can appear together in one optimization problem.

In Fusion the user can enter the linear expressions in a more convenient way, without having to cast the problem exactly in the above form.

We demonstrate the setup of semidefinite variables and their coefficient matrices in the following examples:

## 7.6.1 Example SDO1

We consider the simple optimization problem with semidefinite and conic quadratic constraints:

(7.14)$\begin{split}\begin{array} {llcc} \mbox{minimize} & \left\langle \left[ \begin{array} {ccc} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{array} \right], \barX \right\rangle + x_0 & & \\ \mbox{subject to} & \left\langle \left[ \begin{array} {ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right], \barX \right\rangle + x_0 & = & 1, \\ & \left\langle \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right], \barX \right\rangle + x_1 + x_2 & = & 1/2, \\ & x_0 \geq \sqrt{{x_1}^2 + {x_2}^2}, & \barX \succeq 0, & \end{array}\end{split}$

The problem description contains a 3-dimensional symmetric semidefinite variable which can be written explicitly as:

$\begin{split}\barX = \left[ \begin{array} {ccc} \barX_{00} & \barX_{10} & \barX_{20} \\ \barX_{10} & \barX_{11} & \barX_{21} \\ \barX_{20} & \barX_{21} & \barX_{22} \end{array} \right] \in \PSD^3,\end{split}$

and an affine conic constraint (ACC) $$(x_0, x_1, x_2) \in \Q^3$$. The objective is to minimize

$2(\barX_{00} + \barX_{10} + \barX_{11} + \barX_{21} + \barX_{22}) + x_0,$

subject to the two linear constraints

$\begin{split}\begin{array}{ccc} \barX_{00} + \barX_{11} + \barX_{22} + x_0 & = & 1, \\ \barX_{00} + \barX_{11} + \barX_{22} + 2(\barX_{10} + \barX_{20} + \barX_{21}) + x_1 + x_2 & = & 1/2. \end{array}\end{split}$
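
These collapsed forms follow from the symmetry of $$\barX$$ and can be double-checked numerically in plain C++ (the helpers `dot3`, `objTerm` and `c2Term` are ours, written only for this check):

```cpp
#include <cmath>

// Full inner product of two 3x3 matrices.
inline double dot3(const double A[3][3], const double B[3][3]) {
  double s = 0.0;
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j) s += A[i][j] * B[i][j];
  return s;
}

// Collapsed objective term 2*(X00 + X10 + X11 + X21 + X22) = <C,X>.
inline double objTerm(const double X[3][3]) {
  return 2.0 * (X[0][0] + X[1][0] + X[1][1] + X[2][1] + X[2][2]);
}

// Collapsed constraint term tr(X) + 2*(X10 + X20 + X21) = <A2,X>.
inline double c2Term(const double X[3][3]) {
  return X[0][0] + X[1][1] + X[2][2] + 2.0 * (X[1][0] + X[2][0] + X[2][1]);
}
```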

Our implementation in Fusion begins with creating a new model:

  Model::t M = new Model("sdo1"); auto _M = finally([&]() { M->dispose(); });


We create a symmetric semidefinite variable $$\barX$$ and another variable representing $$x$$. For simplicity we immediately declare that $$x$$ belongs to a quadratic cone:

  Variable::t X  = M->variable("X", Domain::inPSDCone(3));
Variable::t x  = M->variable("x", Domain::inQCone(3));


In this elementary example we are going to create an explicit matrix representation of the problem

$\begin{split}\barC =\left[ \begin{array}{ccc} 2 & 1 & 0 \\ 1 & 2 & 1\\ 0 & 1 & 2\end{array}\right],\ \barA_1 =\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & 0 & 1\end{array}\right],\ \barA_2 =\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1\\ 1 & 1 & 1\end{array}\right].\end{split}$

and use it in the model via the dot product operation $$\langle\cdot,\cdot\rangle$$ which applies to matrices as well as to vectors. This way we create each of the linear constraints and the objective as one expression.

  // Objective
M->objective(ObjectiveSense::Minimize, Expr::add(Expr::dot(C, X), x->index(0)));

// Constraints
M->constraint("c1", Expr::add(Expr::dot(A1, X), x->index(0)), Domain::equalsTo(1.0));
M->constraint("c2", Expr::add(Expr::dot(A2, X), Expr::sum(x->slice(1, 3))), Domain::equalsTo(0.5));


Now it remains to solve the problem with Model.solve.

Listing 7.8 Fusion implementation of problem (7.14).
#include <iostream>
#include "fusion.h"

using namespace mosek::fusion;
using namespace monty;

int main(int argc, char ** argv)
{
Model::t M = new Model("sdo1"); auto _M = finally([&]() { M->dispose(); });

// Setting up the variables
Variable::t X  = M->variable("X", Domain::inPSDCone(3));
Variable::t x  = M->variable("x", Domain::inQCone(3));

// Setting up the constant coefficient matrices
Matrix::t C  = Matrix::dense ( new_array_ptr<double, 2>({{2., 1., 0.}, {1., 2., 1.}, {0., 1., 2.}}));
Matrix::t A1 = Matrix::eye(3);
Matrix::t A2 = Matrix::ones(3, 3);

// Objective
M->objective(ObjectiveSense::Minimize, Expr::add(Expr::dot(C, X), x->index(0)));

// Constraints
M->constraint("c1", Expr::add(Expr::dot(A1, X), x->index(0)), Domain::equalsTo(1.0));
M->constraint("c2", Expr::add(Expr::dot(A2, X), Expr::sum(x->slice(1, 3))), Domain::equalsTo(0.5));

M->solve();

std::cout << "Solution : " << std::endl;
std::cout << "  X = " << *(X->level()) << std::endl;
std::cout << "  x = " << *(x->level()) << std::endl;

return 0;
}


## 7.6.2 Example SDO2

We now demonstrate how to define more than one semidefinite variable using the following problem with two matrix variables and two types of constraints:

(7.15)$\begin{split}\begin{array}{lrll} \mbox{minimize} & \langle C_1,\barX_1\rangle + \langle C_2,\barX_2\rangle & & \\ \mbox{subject to} & \langle A_1,\barX_1\rangle + \langle A_2,\barX_2\rangle & = & b, \\ & (\barX_2)_{01} & \leq & k, \\ & \barX_1, \barX_2 & \succeq & 0. \end{array}\end{split}$

In our example $$\dim(\barX_1)=3$$, $$\dim(\barX_2)=4$$, $$b=23$$, $$k=-3$$ and

$\begin{split}C_1= \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 6 \end{array}\right], A_1= \left[\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 2 \end{array}\right],\end{split}$
$\begin{split}C_2= \left[\begin{array}{cccc} 1 & -3 & 0 & 0\\ -3 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array}\right], A_2= \left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \\ \end{array}\right],\end{split}$

are constant symmetric matrices.

Note that this problem does not contain any scalar variables, but they could be added in the same fashion as in Sec. 7.6.1 (Example SDO1).
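
The data is entered in sparse triplet format (row indexes, column indexes, values), with both mirror copies of each off-diagonal element listed so that the assembled matrix is symmetric. A minimal sketch of the triplet-to-dense expansion (plain C++, independent of Fusion's Matrix::sparse):

```cpp
#include <cstddef>
#include <vector>

// Expand (row, col, value) triplets into a dense m x m matrix.
std::vector<std::vector<double>> fromTriplets(
    int m, const std::vector<int>& rows, const std::vector<int>& cols,
    const std::vector<double>& vals) {
  std::vector<std::vector<double>> M(m, std::vector<double>(m, 0.0));
  for (std::size_t t = 0; t < vals.size(); ++t)
    M[rows[t]][cols[t]] = vals[t];
  return M;
}
```

Unlisted positions stay zero, so only the nonzero structure of each coefficient matrix needs to be written down.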

The code representing the above problem is shown below.

Listing 7.9 Implementation of model (7.15).
#include <iostream>
#include "fusion.h"

using namespace mosek::fusion;
using namespace monty;

std::shared_ptr<ndarray<int,1>>    nint(const std::vector<int> &X)    { return new_array_ptr<int>(X); }
std::shared_ptr<ndarray<double,1>> ndou(const std::vector<double> &X) { return new_array_ptr<double>(X); }

int main(int argc, char ** argv)
{
// Sample data in sparse, symmetric triplet format
std::vector<int>    C1_k = {0, 2};
std::vector<int>    C1_l = {0, 2};
std::vector<double> C1_v = {1, 6};
std::vector<int>    A1_k = {0, 2, 0, 2};
std::vector<int>    A1_l = {0, 0, 2, 2};
std::vector<double> A1_v = {1, 1, 1, 2};
std::vector<int>    C2_k = {0, 1, 0, 1, 2};
std::vector<int>    C2_l = {0, 0, 1, 1, 2};
std::vector<double> C2_v = {1, -3, -3, 2, 1};
std::vector<int>    A2_k = {1, 0, 1, 3};
std::vector<int>    A2_l = {0, 1, 1, 3};
std::vector<double> A2_v = {1, 1, -1, -3};
double b = 23;
double k = -3;

// Convert input data into Fusion sparse matrices
auto C1 = Matrix::sparse(3, 3, nint(C1_k), nint(C1_l), ndou(C1_v));
auto C2 = Matrix::sparse(4, 4, nint(C2_k), nint(C2_l), ndou(C2_v));
auto A1 = Matrix::sparse(3, 3, nint(A1_k), nint(A1_l), ndou(A1_v));
auto A2 = Matrix::sparse(4, 4, nint(A2_k), nint(A2_l), ndou(A2_v));

// Create model
Model::t M = new Model("sdo2"); auto _M = finally([&]() { M->dispose(); });

// Two semidefinite variables
auto X1 = M->variable(Domain::inPSDCone(3));
auto X2 = M->variable(Domain::inPSDCone(4));

// Objective
M->objective(ObjectiveSense::Minimize, Expr::add(Expr::dot(C1, X1), Expr::dot(C2, X2)));

// Equality constraint
M->constraint(Expr::add(Expr::dot(A1, X1), Expr::dot(A2, X2)), Domain::equalsTo(b));

// Inequality constraint
M->constraint(X2->index(nint({0,1})), Domain::lessThan(k));

// Solve
M->setLogHandler([ = ](const std::string & msg) { std::cout << msg << std::flush; } );
M->solve();

// Retrieve solution
std::cout << "Solution (vectorized) : " << std::endl;
std::cout << "  X1 = " << *(X1->level()) << std::endl;
std::cout << "  X2 = " << *(X2->level()) << std::endl;

return 0;
}


## 7.6.3 Example SDO3

Here we demonstrate how to use the facilities provided in Fusion to set up a model with many semidefinite variables of the same dimension more efficiently than via looping. We consider a problem with $$n$$ semidefinite variables of dimension $$d$$ and $$k$$ constraints:

(7.16)$\begin{split}\begin{array}{rrl} \mbox{minimize} & \sum_j \mathrm{tr}(\barX_j) & \\ \mbox{subject to} & \sum_j \langle A_{ij},\barX_j\rangle \geq b_i, & i=1,\ldots,k, \\ & \barX_j \succeq 0 & j=1,\ldots,n, \end{array}\end{split}$

with symmetric data matrices $$A_{ij}$$.

The key construction is:

Listing 7.10 Creating a stack of semidefinite variables.
    Variable::t X = M->variable(Domain::inPSDCone(d, n));


It creates $$n$$ symmetric, semidefinite matrix variables of dimension $$d$$ arranged in a single variable object X of shape $$(n,d,d)$$. Individual matrix variables can be accessed as slices from $$(i,0,0)$$ to $$(i+1,d,d)$$ (reshaped into shape $$(d,d)$$ if necessary). It is also possible to operate on the full variable X when constructing expressions that involve entries of all the semidefinite matrices in a natural way. The source code example illustrates both these approaches.
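
The slicing is plain row-major index arithmetic: entry $$(j,s_1,s_2)$$ of the shape-$$(n,d,d)$$ stack sits at flat offset $$jd^2 + s_1 d + s_2$$. A standalone sketch of that layout (our own helpers, not the Fusion internals):

```cpp
#include <cstddef>
#include <vector>

// Flat offset of entry (j, s1, s2) in a row-major (n, d, d) stack.
inline std::size_t offsetOf(std::size_t j, std::size_t s1, std::size_t s2,
                            std::size_t d) {
  return j * d * d + s1 * d + s2;
}

// Copy out the j-th d x d matrix from a flat stack, i.e. the slice
// from (j, 0, 0) to (j+1, d, d) reshaped to d x d.
std::vector<double> sliceMatrix(const std::vector<double>& stack,
                                std::size_t j, std::size_t d) {
  return std::vector<double>(stack.begin() + j * d * d,
                             stack.begin() + (j + 1) * d * d);
}
```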

Listing 7.11 Implementation of model (7.16).
#include <iostream>
#include <random>
#include "fusion.h"

using namespace mosek::fusion;
using namespace monty;

// A helper function which returns the slice corresponding to the j-th variable
Variable::t slice(Variable::t X, int d, int j) {
return
X->slice(new_array_ptr<int,1>({j,0,0}), new_array_ptr<int,1>({j+1,d,d}))
->reshape(new_array_ptr<int,1>({d,d}));
}

int main(int argc, char ** argv)
{
std::random_device rd;
std::mt19937 e2(rd());
std::uniform_real_distribution<> dist(0, 1);

// Sample data
int n = 100, d = 4, k = 3;
std::vector<double> b({9, 10, 11});
std::vector< std::shared_ptr<ndarray<double,2>> > A;
for(int i=0; i<k*n; i++) {
auto Ai = std::make_shared<ndarray<double,2>>(shape(d,d));
for(int s1=0; s1<d; s1++)
for(int s2=0; s2<=s1; s2++)
(*Ai)(s1,s2) = (*Ai)(s2,s1) = dist(e2);
A.push_back(Ai);
}

// Create a model with n semidefinite variables of dimension d x d
Model::t M = new Model("sdo3"); auto _M = finally([&]() { M->dispose(); });

Variable::t X = M->variable(Domain::inPSDCone(d, n));

// Pick indexes (j, s, s), j=0..n-1, s=0..d-1, of diagonal entries for the objective
auto alldiag =  std::make_shared<ndarray<int,2>>(
shape(d*n,3),
std::function<int(const shape_t<2> &)>([d](const shape_t<2> & p) { return p[1]==0 ? p[0]/d : p[0]%d; }));

M->objective(ObjectiveSense::Minimize, Expr::sum( X->pick(alldiag) ));

// Each constraint is a sum of inner products
// Each semidefinite variable is a slice of X
for(int i=0; i<k; i++) {
std::vector<Expression::t> sumlist;
for(int j=0; j<n; j++)
sumlist.push_back( Expr::dot(A[i*n+j], slice(X, d, j)) );

M->constraint(Expr::add(new_array_ptr<Expression::t>(sumlist)), Domain::greaterThan(b[i]));
}

// Solve
M->setLogHandler([ = ](const std::string & msg) { std::cout << msg << std::flush; } );            // Add logging
M->solve();

// Get results. Each variable is a slice of X
std::cout << "Contributing variables:" << std::endl;
for(int j=0; j<n; j++) {
auto Xj = *(slice(X, d, j)->level());
double maxval = 0;
for(int s=0; s<d*d; s++) maxval = std::max(maxval, Xj[s]);
if (maxval>1e-6) {
std::cout << "X" << j << "=" << std::endl;
for(int s1=0; s1<d; s1++) {
for(int s2=0; s2<d; s2++) std::cout << Xj[s1*d+s2] << "  ";
std::cout << std::endl;
}
}
}

return 0;
}
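
The `alldiag` table in the listing is produced by the generator `p[1]==0 ? p[0]/d : p[0]%d`, i.e. row $$p$$ of the $$(dn)\times 3$$ table is $$(\lfloor p/d\rfloor,\ p\bmod d,\ p\bmod d)$$: the $$s$$-th diagonal entry of the $$j$$-th matrix. A standalone re-implementation of that formula, useful for checking it:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Row p of the (d*n) x 3 diagonal-index table: (p/d, p%d, p%d),
// i.e. diagonal entry s = p%d of matrix j = p/d.
std::vector<std::array<int, 3>> diagIndexes(int n, int d) {
  std::vector<std::array<int, 3>> idx;
  idx.reserve((std::size_t)n * d);
  for (int p = 0; p < d * n; ++p)
    idx.push_back({p / d, p % d, p % d});
  return idx;
}
```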