11.10 Semidefinite Relaxation of MIQCQO Problems
In this case study we discuss a fairly common application of semidefinite optimization: constructing a continuous semidefinite relaxation of a mixed-integer quadratic optimization problem. This section is based on the method by Park and Boyd [PB15].
We will focus on problems of the form:

\[
\begin{array}{ll}
\mbox{minimize}   & x^T P x + 2 q^T x \\
\mbox{subject to} & x \in \integral^n,
\end{array}
\tag{11.40}
\]

where \(q\in \real^n\) and \(P\in \PSD^{n\times n}\) is positive semidefinite. There are many important problems that can be reformulated as (11.40), for example:
integer least squares: minimize \(\|Ax -b\|^2_2\) subject to \(x\in \integral^n\) (see the expansion below),
closest vector problem: minimize \(\|v - z\|_2\) subject to \(z\in \{ Bx~|~x\in \integral^n\}\).
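For instance, the integer least squares problem fits this template: expanding the squared norm gives

\[
\|Ax - b\|_2^2 = x^T (A^T A)\, x - 2 (A^T b)^T x + b^T b,
\]

i.e. an instance of (11.40) with \(P = A^TA\) and \(q = -A^Tb\), up to the constant \(b^Tb\). The closest vector problem reduces the same way after writing \(z=Bx\), with \(P = B^TB\) and \(q = -B^Tv\).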
Following [PB15], we can derive a relaxed continuous model. We first relax the integrality constraint: every \(x_i\in\integral\) satisfies \(x_i(x_i-1)\geq 0\), which gives

\[
\begin{array}{ll}
\mbox{minimize}   & x^T P x + 2 q^T x \\
\mbox{subject to} & x_i(x_i-1) \geq 0, \quad i=1,\ldots,n.
\end{array}
\]

The last constraint is still non-convex. We introduce a new variable \(X\in \real^{n\times n}\), such that \(X = x\cdot x^T\). This allows us to write an equivalent formulation:

\[
\begin{array}{ll}
\mbox{minimize}   & \mathrm{Tr}(PX) + 2 q^T x \\
\mbox{subject to} & \mathrm{diag}(X) \geq x, \\
                  & X = x x^T.
\end{array}
\]

To get a conic problem we relax the last constraint and apply the Schur complement. The final relaxation follows:

\[
\begin{array}{ll}
\mbox{minimize}   & \mathrm{Tr}(PX) + 2 q^T x \\
\mbox{subject to} & \mathrm{diag}(X) \geq x, \\
                  & \begin{bmatrix} X & x \\ x^T & 1 \end{bmatrix} \in \PSD^{n+1}.
\end{array}
\tag{11.41}
\]
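To see why the semidefinite constraint is a relaxation of \(X = xx^T\), recall the standard Schur complement fact (basic linear algebra, not specific to [PB15]): since the lower-right block equals \(1 > 0\),

\[
\begin{bmatrix} X & x \\ x^T & 1 \end{bmatrix} \succeq 0
\quad\Longleftrightarrow\quad
X - x x^T \succeq 0,
\]

so the non-convex equality \(X = xx^T\) is replaced by the convex condition \(X \succeq xx^T\).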
Fusion Implementation
Implementing model (11.41) in Fusion is very simple. We assume the input data \(n\), \(P\) and \(q\). Then we proceed to create the optimization model:
Model M = new Model();
The important step is to define a single PSD variable

\[
Z = \begin{bmatrix} X & x \\ x^T & 1 \end{bmatrix} \in \PSD^{n+1}.
\]

Our code will create \(Z\) and two slices that correspond to \(X\) and \(x\):
Variable Z = M.variable("Z", Domain.inPSDCone(n + 1));
Variable X = Z.slice(new int[] {0, 0}, new int[] {n, n});
Variable x = Z.slice(new int[] {0, n}, new int[] {n, n + 1});
Then we define the constraints:
M.constraint( Expr.sub(X.diag(), x), Domain.greaterThan(0.) );
M.constraint( Z.index(n, n), Domain.equalsTo(1.) );
The objective function uses several available linear expressions:
M.objective( ObjectiveSense.Minimize, Expr.add(
Expr.sum( Expr.mulElm( P, X ) ),
Expr.mul( 2.0, Expr.dot(x, q) )
) );
Note that the trace operator is not directly available in Fusion, but it can easily be defined from scratch.
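For example, helpers along the following lines could be used (a sketch only, built from the Fusion calls already shown above; these functions are not part of the Fusion API):

static Expression trace(Variable X) {
    // Tr(X): sum of the diagonal entries of a square variable
    return Expr.sum(X.diag());
}

static Expression traceProd(Matrix P, Variable X) {
    // Tr(PX) for a symmetric constant P, as used in the objective above
    return Expr.sum(Expr.mulElm(P, X));
}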
Complete code
// assumes: import mosek.fusion.*;
static Model miqcqp_sdo_relaxation(int n, Matrix P, double[] q) {
    Model M = new Model();

    // Z = [X, x; x^T, 1] is a single (n+1)x(n+1) PSD variable
    Variable Z = M.variable("Z", Domain.inPSDCone(n + 1));
    Variable X = Z.slice(new int[] {0, 0}, new int[] {n, n});
    Variable x = Z.slice(new int[] {0, n}, new int[] {n, n + 1});

    // diag(X) >= x (relaxed integrality) and Z[n][n] = 1
    M.constraint( Expr.sub(X.diag(), x), Domain.greaterThan(0.) );
    M.constraint( Z.index(n, n), Domain.equalsTo(1.) );

    // minimize Tr(PX) + 2 q^T x
    M.objective( ObjectiveSense.Minimize, Expr.add(
                   Expr.sum( Expr.mulElm( P, X ) ),
                   Expr.mul( 2.0, Expr.dot(x, q) )
                 ) );
    return M;
}
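A minimal usage sketch, assuming the problem data \(n\), a row-major \(n\times n\) array P and a vector q as generated in the numerical example below:

Model M = miqcqp_sdo_relaxation(n, Matrix.dense(n, n, P), q);
M.solve();
// the relaxed (continuous) solution is the last column of Z without the corner entry
double[] xRelax = M.getVariable("Z").slice(new int[] {0, n}, new int[] {n, n + 1}).level();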
Numerical Examples
We now present some simple numerical experiments for the integer least squares problem:

\[
\begin{array}{ll}
\mbox{minimize}   & \|Ax - b\|_2 \\
\mbox{subject to} & x \in \integral^n.
\end{array}
\tag{11.42}
\]

It corresponds to the problem (11.40) with \(P=A^TA\) and \(q=-A^Tb\). Following [PB15] we generate the input data by taking all entries of \(A\) from the normal distribution \(\mathcal{N}(0,1)\) and setting \(b=Ac\), where \(c\) comes from the uniform distribution on \([0,1]\).
We implement the linear algebra operations using the LinAlg module available in MOSEK.

An integer rounding xRound of the solution to (11.41) is a feasible integer solution to (11.42). We can compare it to the actual optimal integer solution xOpt, whenever the latter is available. Of course it is very simple to formulate the integer least squares problem in Fusion:
static Model int_least_squares(int n, Matrix A, double[] b) {
    Model M = new Model();

    Variable x = M.variable("x", n, Domain.integral(Domain.unbounded()));
    Variable t = M.variable("t", 1, Domain.unbounded());

    // t >= ||Ax - b||_2 via a quadratic cone
    M.constraint( Expr.vstack(t, Expr.sub(Expr.mul(A, x), b)), Domain.inQCone() );
    M.objective( ObjectiveSense.Minimize, t );
    return M;
}
All that remains is to compare the values of the objective function \(\|Ax-b\|_2\) for the two solutions.
// problem dimensions
int n = 20;
int m = 2 * n;

// random generator for the problem data
java.util.Random rnd = new java.util.Random();

// problem data
double[] A = new double[m * n];
double[] b = new double[m];
double[] c = new double[n];
double[] P = new double[n * n];
double[] q = new double[n];

for (int j = 0; j < n; j++) {
  for (int i = 0; i < m; i++)
    A[i * n + j] = rnd.nextGaussian();
  c[j] = rnd.nextDouble();
}

// P = A^T A
LinAlg.syrk(mosek.uplo.lo, mosek.transpose.yes,
            n, m, 1.0, A, 0., P);
for (int j = 0; j < n; j++) for (int i = j + 1; i < n; i++) P[i * n + j] = P[j * n + i];

// q = -P c, b = A c
LinAlg.gemv(mosek.transpose.no, n, n, -1.0, P, c, 0., q);
LinAlg.gemv(mosek.transpose.no, m, n, 1.0, A, c, 0., b);

// Solve the problems
Model M = miqcqp_sdo_relaxation(n, Matrix.dense(n, n, P), q);
Model Mint = int_least_squares(n, Matrix.dense(n, m, A).transpose(), b);

M.solve();
Mint.solve();

// rounded and optimal solution
double[] xRound = M.getVariable("Z").slice(new int[] {0, n}, new int[] {n, n + 1}).level();
for (int i = 0; i < n; i++) xRound[i] = java.lang.Math.round(xRound[i]);
double[] yRound = b.clone();
double[] xOpt = Mint.getVariable("x").level();
double[] yOpt = b.clone();
LinAlg.gemv(mosek.transpose.no, m, n, 1.0, A, xRound, -1.0, yRound); // Ax_round-b
LinAlg.gemv(mosek.transpose.no, m, n, 1.0, A, xOpt, -1.0, yOpt);     // Ax_opt-b

// print solver times and the objective values ||Ax-b||_2 for both solutions
System.out.println(M.getSolverDoubleInfo("optimizerTime") + " " + Mint.getSolverDoubleInfo("optimizerTime"));
System.out.println(java.lang.Math.sqrt(LinAlg.dot(m, yRound, yRound)) + " " +
                   java.lang.Math.sqrt(LinAlg.dot(m, yOpt, yOpt)));
Experimentally the objective value for xRound approximates the optimal value within a factor of \(1.1\)-\(1.4\). We refer to [PB15] for a more involved iterative rounding procedure, producing integer solutions of even better quality, and for a detailed discussion of test results.
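As an illustration only (this is not the rounding procedure from [PB15]), a simple greedy 1-opt local search often improves a rounded point: repeatedly change one coordinate by \(\pm 1\) and keep the change whenever \(\|Ax-b\|_2\) decreases. A minimal sketch in plain Java, assuming A is the row-major \(m\times n\) data array and x the rounded point from above:

static double residualNorm(int m, int n, double[] A, double[] x, double[] b) {
    // ||Ax - b||_2 for a row-major A
    double s = 0.0;
    for (int i = 0; i < m; i++) {
        double r = -b[i];
        for (int j = 0; j < n; j++) r += A[i * n + j] * x[j];
        s += r * r;
    }
    return java.lang.Math.sqrt(s);
}

static void greedyImprove(int m, int n, double[] A, double[] x, double[] b) {
    // Greedy 1-opt: accept any +/-1 change of a single coordinate that lowers the residual.
    boolean improved = true;
    while (improved) {
        improved = false;
        double best = residualNorm(m, n, A, x, b);
        for (int j = 0; j < n; j++) {
            for (int d = -1; d <= 1; d += 2) {
                x[j] += d;
                double val = residualNorm(m, n, A, x, b);
                if (val < best - 1e-9) { best = val; improved = true; }
                else x[j] -= d;
            }
        }
    }
}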