# 5. Quadratic optimization

## 5.1. Introduction

In this chapter we discuss convex quadratic optimization. Our discussion is fairly brief compared to the previous chapters, for three reasons: (*i*) quadratic optimization is a special case of conic quadratic optimization, (*ii*) for most problems it is actually more efficient for the solver if the problem is posed in conic form, and (*iii*) duality theory (including infeasibility certificates) is much simpler for conic quadratic optimization. Therefore, we generally recommend a conic quadratic formulation.

## 5.2. Convex quadratic optimization

A standard (convex) quadratic optimization problem

\[
\begin{array}{ll}
\mbox{minimize} & \frac12 x^T Q x + c^T x\\
\mbox{subject to} & Ax = b\\
& x \geq 0,
\end{array}
\tag{1}
\]

with \(Q\in\mathcal{S}_+^n\) is conceptually a simple extension of a standard linear optimization problem with a quadratic term \(x^T Q x\). Note the important requirement that \(Q\) is symmetric positive semidefinite; otherwise the objective function is not convex.
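The requirement \(Q\succeq 0\) can be checked numerically from the eigenvalues of the (symmetric) matrix. A minimal numpy sketch, using hypothetical problem data:

```python
import numpy as np

# Hypothetical problem data; Q must be symmetric positive semidefinite for convexity.
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])

assert np.allclose(Q, Q.T)           # symmetry
eigenvalues = np.linalg.eigvalsh(Q)  # real eigenvalues of the symmetric matrix, ascending
print(eigenvalues)                   # [1. 3.] -- all nonnegative, so Q is PSD
```

A strictly positive smallest eigenvalue would additionally certify \(Q\in\mathcal{S}_{++}^n\).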

### 5.2.1. Geometry of quadratic optimization

Quadratic optimization has a simple geometric interpretation: we minimize a convex quadratic function over a polyhedron. In Fig. 5.1 we illustrate this interpretation for the same example as in Fig. 2.5, extended with an additional quadratic term with

It is intuitively clear that the following different cases can occur:

- The optimal solution \(x^\star\) is at the boundary of the polyhedron (as shown in Fig. 5.1). At \(x^\star\) one of the hyperplanes is tangential to an ellipsoidal level curve.
- The optimal solution is inside the polyhedron; this occurs if the unconstrained minimizer \(\arg\min_x \left(\frac12 x^T Q x + c^T x\right)=-Q^\dagger c\) (i.e., the center of the ellipsoidal level curves) is inside the polyhedron.
- If the polyhedron is unbounded in the opposite direction of \(c\), and if the ellipsoid level curves are degenerate in that direction (i.e., \(Qc=0\)), then the problem is unbounded. If \(Q\in \mathcal{S}_{++}^n\), then the problem cannot be unbounded.
- The problem is infeasible, i.e., \(\{ x \mid Ax=b, \: x\geq 0 \}=\emptyset\).
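The second case above can be checked directly: the unconstrained minimizer \(-Q^\dagger c\) is computable with a pseudoinverse. A small numpy sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data for (1/2) x^T Q x + c^T x.
Q = np.array([[2.0, 0.0],
              [0.0, 4.0]])
c = np.array([-2.0, -4.0])

# The unconstrained minimizer is -Q^+ c; the pseudoinverse also covers singular Q.
x_unc = -np.linalg.pinv(Q) @ c
print(x_unc)               # [1. 1.]
print(np.all(x_unc >= 0))  # True: here the center of the level curves is feasible
```

If the minimizer also satisfies all constraints, it solves the constrained problem outright.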

Possibly because of its simple geometric interpretation and similarity to linear optimization, quadratic optimization has been more widely adopted by optimization practitioners than conic quadratic optimization.

### 5.2.2. Duality in quadratic optimization

The Lagrangian function for (1) is

\[
L(x,y,s) = \frac12 x^T Q x + c^T x + y^T(b - Ax) - s^T x
\tag{2}
\]

with Lagrange multipliers \(y\in\R^m\) and \(s\in \R^n_+\), and from \(\nabla_x L(x,y,s)=0\) we get the necessary first-order optimality condition

\[
Qx = A^T y + s - c,
\]

i.e., \((A^Ty+s-c)\in \mathcal{R}(Q)\). Then

\[
x = Q^\dagger (A^T y + s - c),
\]

which can be substituted into (2) to give a dual function

\[
g(y,s) = b^T y - \frac12 (A^T y + s - c)^T Q^\dagger (A^T y + s - c).
\]

Thus we get a dual problem

\[
\begin{array}{ll}
\mbox{maximize} & b^T y - \frac12 (A^T y + s - c)^T Q^\dagger (A^T y + s - c)\\
\mbox{subject to} & (A^T y + s - c) \in \mathcal{R}(Q)\\
& s \geq 0.
\end{array}
\tag{3}
\]

Alternatively, we can characterize the constraint \((A^T y + s - c) \in \mathcal{R}(Q)\) as \(Qw = A^T y + s - c\) with an extraneous variable \(w\) to get an equivalent dual problem

\[
\begin{array}{ll}
\mbox{maximize} & b^T y - \frac12 w^T Q w\\
\mbox{subject to} & Qw = A^T y + s - c\\
& s \geq 0.
\end{array}
\tag{4}
\]
Note from the optimality conditions that \(w=x\), so (4) is an unusual dual problem in the sense that it involves both primal and dual variables.

Weak duality follows from the inner product between \(x\) and the dual equality constraint,

\[
x^T Q w = x^T (A^T y + s - c) = b^T y + s^T x - c^T x,
\]

which shows that

\[
\left( \frac12 x^T Q x + c^T x \right) - \left( b^T y - \frac12 w^T Q w \right) = \frac12 (x-w)^T Q (x-w) + s^T x \geq 0.
\]
Furthermore, strong duality holds if a Slater constraint qualification is satisfied, i.e., if either the primal or dual problem is strictly feasible (as in the previous chapters).
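The weak-duality argument can be illustrated numerically: for any primal feasible \(x\) and any \((y,w)\) with \(s = Qw - A^Ty + c \geq 0\) (so that \(Qw = A^Ty + s - c\)), the objective gap is nonnegative. A numpy sketch with hypothetical, deliberately non-optimal data:

```python
import numpy as np

# Hypothetical problem data: min (1/2) x^T Q x + c^T x  s.t.  Ax = b, x >= 0.
Q = np.eye(2)
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x = np.array([1.0, 1.0])  # primal feasible: Ax = b, x >= 0
y = np.array([0.0])       # free dual multiplier
w = np.array([0.0, 0.0])  # extraneous dual variable
s = Q @ w - A.T @ y + c   # chosen so that Qw = A^T y + s - c holds
assert np.all(s >= 0)     # dual feasible

primal = 0.5 * x @ Q @ x + c @ x
dual = b @ y - 0.5 * w @ Q @ w
gap = primal - dual
# The gap equals (1/2)(x-w)^T Q (x-w) + s^T x, hence is nonnegative.
print(np.isclose(gap, 0.5 * (x - w) @ Q @ (x - w) + s @ x))  # True
print(gap >= 0)                                              # True
```

At an optimal primal-dual pair the gap closes to zero; here the pair is not optimal, so the gap is strictly positive.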

### 5.2.3. Infeasibility in quadratic optimization

Feasibility of the primal problem (1) is described by Lemma 6; either problem (1) is feasible or there exists a certificate \(y\) of primal infeasibility satisfying

\[
A^T y \leq 0, \qquad b^T y > 0.
\]
In order to characterize feasibility of the dual problem, we need another pair of alternatives, which follows directly from Lemma 7 by considering the dual constraints of (4) written as

\[
\begin{bmatrix} A^T & -Q \end{bmatrix} \begin{bmatrix} y \\ w \end{bmatrix} \leq c.
\]

Either there exists an \(x\geq 0\) satisfying

\[
Ax = 0, \qquad Qx = 0, \qquad c^T x < 0,
\]

or there exists \((y,w)\) satisfying

\[
A^T y - Qw \leq c.
\]
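Such certificates are cheap to verify mechanically. For instance, a Farkas-type certificate \(y\) with \(A^Ty\leq 0\) and \(b^Ty>0\) proves that \(Ax=b,\ x\geq 0\) has no solution, since any feasible \(x\) would give \(0 \geq y^T A x = y^T b > 0\). A numpy sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data: x1 + x2 = -1 has no solution with x >= 0, so (1) is infeasible.
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])

# A Farkas-type certificate y of primal infeasibility: A^T y <= 0 and b^T y > 0.
y = np.array([-1.0])
print(np.all(A.T @ y <= 0))  # True
print(b @ y > 0)             # True
```

Checking a certificate requires only matrix-vector products, which is why solvers report them when they declare a problem infeasible.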
## 5.3. Quadratically constrained optimization

A general convex quadratically constrained quadratic optimization problem is

\[
\begin{array}{ll}
\mbox{minimize} & \frac12 x^T Q_0 x + c_0^T x\\
\mbox{subject to} & \frac12 x^T Q_i x + c_i^T x + b_i \leq 0, \quad i=1,\dots,m,
\end{array}
\tag{5}
\]

where \(Q_i\in \mathcal{S}_+^n, \: \forall i\). This corresponds to minimizing a convex quadratic function over an intersection of ellipsoids (or affine halfspaces if some of the \(Q_i\) are zero). Note the important requirement \(Q_i\succeq 0,\: \forall i\), so that the objective function is convex and the constraints

\[
\frac12 x^T Q_i x + c_i^T x + b_i \leq 0
\]

characterize convex sets. It is important to note that neither quadratic equalities

\[
\frac12 x^T Q_i x + c_i^T x + b_i = 0
\]

nor inequalities of the form

\[
\frac12 x^T Q_i x + c_i^T x + b_i \geq 0
\]

characterize convex sets, and therefore they cannot be included.
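A simple midpoint counterexample illustrates why: for the set \(\{x \mid x^Tx \geq 1\}\) (a "\(\geq\)" quadratic inequality with \(Q=I\)), two feasible points can average to an infeasible point, and the same argument rules out quadratic equalities.

```python
import numpy as np

x1 = np.array([1.0, 0.0])
x2 = np.array([-1.0, 0.0])
midpoint = 0.5 * (x1 + x2)

# Both endpoints satisfy x^T x >= 1, but their midpoint does not:
print(x1 @ x1 >= 1, x2 @ x2 >= 1)  # True True
print(midpoint @ midpoint >= 1)    # False -- the set is not convex
```

Both \(x_1\) and \(x_2\) also satisfy the equality \(x^Tx = 1\), while their midpoint does not, so the equality set is nonconvex as well.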

### 5.3.1. Duality in quadratically constrained optimization

The Lagrangian function for (5) is

\[
\begin{array}{rl}
L(x,\lambda) & = \frac12 x^T Q_0 x + c_0^T x + \sum_{i=1}^m \lambda_i \left( \frac12 x^T Q_i x + c_i^T x + b_i \right)\\
& = \frac12 x^T Q(\lambda) x + c(\lambda)^T x + b(\lambda),
\end{array}
\]

where

\[
Q(\lambda) = Q_0 + \sum_{i=1}^m \lambda_i Q_i, \qquad
c(\lambda) = c_0 + \sum_{i=1}^m \lambda_i c_i, \qquad
b(\lambda) = \sum_{i=1}^m \lambda_i b_i.
\]

From the Lagrangian we get the first-order optimality conditions

\[
Q(\lambda) x = -c(\lambda),
\tag{6}
\]

and similar to the case of quadratic optimization we get a dual problem

\[
\begin{array}{ll}
\mbox{maximize} & -\frac12 c(\lambda)^T Q(\lambda)^\dagger c(\lambda) + b(\lambda)\\
\mbox{subject to} & c(\lambda) \in \mathcal{R}(Q(\lambda))\\
& \lambda \geq 0,
\end{array}
\tag{7}
\]

or equivalently

\[
\begin{array}{ll}
\mbox{maximize} & -\frac12 w^T Q(\lambda) w + b(\lambda)\\
\mbox{subject to} & Q(\lambda) w = -c(\lambda)\\
& \lambda \geq 0.
\end{array}
\tag{8}
\]
Weak duality is easily verified; from (6) we have

\[
\min_x L(x,\lambda) = -\frac12 c(\lambda)^T Q(\lambda)^\dagger c(\lambda) + b(\lambda),
\]

which implies

\[
-\frac12 c(\lambda)^T Q(\lambda)^\dagger c(\lambda) + b(\lambda) \leq L(x,\lambda) \leq \frac12 x^T Q_0 x + c_0^T x
\]

for any primal feasible \(x\) and any \(\lambda\geq 0\), and strong duality holds provided the quadratic inequalities are strictly feasible,

\[
\frac12 x^T Q_i x + c_i^T x + b_i < 0, \quad i=1,\dots,m,
\]
for some \(x\in\R^n\). Using a general version of the Schur Lemma for singular matrices, we can also write (7) as an equivalent semidefinite optimization problem,

\[
\begin{array}{ll}
\mbox{maximize} & t\\
\mbox{subject to} & \begin{bmatrix} Q(\lambda) & c(\lambda)\\ c(\lambda)^T & 2(b(\lambda)-t) \end{bmatrix} \succeq 0\\
& \lambda \geq 0.
\end{array}
\tag{9}
\]
### 5.3.2. Infeasibility in quadratically constrained optimization

Feasibility of (5) is characterized by the following lemma.

Given \(Q_i\succeq 0\), \(i=1,\dots,m\), either there exists an \(x\in \R^n\) satisfying

\[
\frac12 x^T Q_i x + c_i^T x + b_i < 0, \quad i=1,\dots,m,
\tag{10}
\]

or there exists \(\lambda \geq 0, \: \lambda \neq 0\) satisfying

\[
\sum_{i=1}^m \lambda_i \begin{bmatrix} Q_i & c_i\\ c_i^T & 2 b_i \end{bmatrix} \succeq 0.
\tag{11}
\]
Suppose (10) and (11) are both satisfied. Then

\[
0 \leq \begin{bmatrix} x \\ 1 \end{bmatrix}^T \left( \sum_{i=1}^m \lambda_i \begin{bmatrix} Q_i & c_i\\ c_i^T & 2 b_i \end{bmatrix} \right) \begin{bmatrix} x \\ 1 \end{bmatrix}
= \sum_{i=1}^m \lambda_i \left( x^T Q_i x + 2 c_i^T x + 2 b_i \right) < 0,
\]

which is a contradiction. Conversely, assume (11) is not satisfied, i.e.,

But then \(X=\left( \begin{smallmatrix}1 & x^T\\ x & xx^T\end{smallmatrix} \right)\) satisfies

or equivalently

which in turn implies (10).

Dual infeasibility is characterized by the following set of alternatives; either problem (8) is feasible or there exists a certificate \(x\) of dual infeasibility satisfying (13).

Given \(Q_i\succeq 0\), \(i=0,\dots,m\), either there exists \((\lambda,w)\) satisfying

\[
Q(\lambda) w = -c(\lambda), \qquad \lambda \geq 0,
\tag{12}
\]

or there exists \(x\in \R^n\) satisfying

\[
Q_i x = 0, \;\; i=0,\dots,m, \qquad c_i^T x \leq 0, \;\; i=1,\dots,m, \qquad c_0^T x < 0.
\tag{13}
\]

Both cannot be true, because then

\[
0 = w^T Q(\lambda) x = -c(\lambda)^T x = -c_0^T x - \sum_{i=1}^m \lambda_i c_i^T x > 0,
\]
which is a contradiction. On the other hand, suppose that (12) has no solution, i.e.,

\[
-c(\lambda) \notin \mathcal{R}(Q(\lambda)) \quad \forall \lambda \geq 0,
\]

i.e., \(Q(\lambda)\) is singular \(\forall \lambda\geq 0\), so there exists \(x\in \R^n\) satisfying \(Q_ix=0, \: i=0,\dots,m\).

## 5.4. Separable quadratic optimization

Often a factorization \(Q=F^T F\), with \(F\in\R^{k\times n}\), is either known or readily available, in which case we get an alternative formulation of (1) as

\[
\begin{array}{ll}
\mbox{minimize} & \frac12 y^T y + c^T x\\
\mbox{subject to} & Ax = b\\
& Fx = y\\
& x \geq 0.
\end{array}
\tag{14}
\]

Formulation (14) has several advantages: convexity of the problem is obvious (which can occasionally be difficult to detect in finite-precision arithmetic), and the structure and sparsity of the separable reformulation are typically handled more efficiently by an optimization solver. In the following example we consider a related reformulation, which can result in a significant reduction of complexity and solution time (see also Sec. 3.3.3).
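When \(Q\) is positive definite, one such factor is the transposed Cholesky factor; a numpy sketch with hypothetical data (for a singular \(Q\succeq 0\), an eigenvalue- or LDL-based factorization would be used instead):

```python
import numpy as np

# Hypothetical positive definite Q; obtain F with Q = F^T F from Q = L L^T (F = L^T).
Q = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(Q)
F = L.T
print(np.allclose(F.T @ F, Q))  # True

# The separable objective (1/2) y^T y with y = F x reproduces (1/2) x^T Q x:
x = np.array([1.0, -1.0])
y = F @ x
print(np.isclose(0.5 * y @ y, 0.5 * x @ Q @ x))  # True
```

The solver then only sees the (often much sparser) linear map \(F\) and a separable quadratic in \(y\), rather than a dense quadratic form in \(x\).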

Consider a standard quadratic optimization problem (14) where

We then get a separable problem

In general, given factorizations \(Q_i = F_i^T F_i\), we can reformulate (5) as

\[
\begin{array}{ll}
\mbox{minimize} & \frac12 y_0^T y_0 + c_0^T x\\
\mbox{subject to} & \frac12 y_i^T y_i + c_i^T x + b_i \leq 0, \quad i=1,\dots,m,\\
& F_i x = y_i, \quad i=0,\dots,m,
\end{array}
\tag{15}
\]

which is very similar to the conic formulation found in Sec. 3.3.1. If the formulation (15) is sparse compared to (5) (i.e., if it is described using fewer non-zeros), then it generally results in a reduction of computation time. On the other hand, if we form (15), we might as well use the conic formulation instead.
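The potential saving is easiest to see for a low-rank factor: if \(Q=F^TF\) with a single dense row \(F\), the quadratic form requires \(n^2\) nonzeros while the separable form needs only the \(n\) nonzeros of \(F\). A numpy sketch with hypothetical sizes:

```python
import numpy as np

n = 100
F = np.ones((1, n))  # hypothetical rank-one factor: a single dense row
Q = F.T @ F          # the corresponding Q is a fully dense n x n matrix

print(np.count_nonzero(Q), np.count_nonzero(F))  # 10000 100
```

A factor-of-\(n\) reduction in nonzeros of this kind is why the separable (or conic) form is usually preferred for dense low-rank quadratics.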