10.4 Inner and Outer Löwner-John Ellipsoids

In this section we show how to compute the Löwner-John inner and outer ellipsoidal approximations of a polytope. They are defined as, respectively, the largest volume ellipsoid contained inside the polytope and the smallest volume ellipsoid containing the polytope, as seen in Fig. 7.


Fig. 7 The inner and outer Löwner-John ellipse of a polygon.

For further mathematical details, such as uniqueness of the two ellipsoids, consult [BenTalN01]. Our solution is a mix of conic quadratic and semidefinite programming. Among other things, in Sec. 10.4.3 (Bound on the Determinant Root) we show how to implement bounds involving the determinant of a PSD matrix.

10.4.1 Inner Löwner-John Ellipsoids

Suppose we have a polytope given by an h-representation

\[\mathcal{P} = \{ x \in \real^n \mid Ax \leq b \}\]

and we wish to find the inscribed ellipsoid with maximal volume. It will be convenient to parametrize the ellipsoid as an affine transformation of the standard disk:

\[\mathcal{E} = \{ x \mid x = Cu + d,\ u\in\real^n,\ \| u \|_2 \leq 1 \}.\]

Every non-degenerate ellipsoid has a parametrization such that \(C\) is a positive definite symmetric \(n\times n\) matrix. The volume of \(\mathcal{E}\) is proportional to \(\mbox{det}(C)\), so maximizing the volume amounts to maximizing the concave function \(\mbox{det}(C)^{1/n}\). The condition \(\mathcal{E}\subseteq\mathcal{P}\) is equivalent to the inequality \(A(Cu+d)\leq b\) for all \(u\) with \(\|u\|_2\leq 1\). Taking the supremum over \(\|u\|_2\leq 1\) row by row, and using \(\sup_{\|u\|_2\leq 1}(AC)_iu = \|(AC)_i\|_2\), we obtain the formulation:

(1)\[\begin{split}\begin{array}{lll} \maximize & t & \\ \st & t \leq \mbox{det}(C)^{1/n}, & \\ & (b-Ad)_i\geq \|(AC)_i\|_2, & i=1,\ldots,m,\\ & C \succeq 0, & \end{array}\end{split}\]

where \(X_i\) denotes the \(i\)-th row of the matrix \(X\). This is easily implemented in Fusion: the whole sequence of conic inequalities is expressed at once by stacking \(b-Ad\) and \(AC\) horizontally and constraining each row of the result to a quadratic cone.

Listing 22 Fusion implementation of model (1).
def lownerjohn_inner(A, b):
    with Model("lownerjohn_inner") as M:
        M.setLogHandler(sys.stdout)
        m, n = len(A), len(A[0])

        # Setup variables
        t = M.variable("t", 1, Domain.greaterThan(0.0))
        C = M.variable("C", Domain.inPSDCone(n))
        d = M.variable("d", n, Domain.unbounded())

        # (b-Ad, AC) generate cones
        M.constraint("qc", Expr.hstack(Expr.sub(b, Expr.mul(A, d)), Expr.mul(A, C)),
                     Domain.inQCone())
        # t <= det(C)^{1/n}
        det_rootn(M, C, t)

        # Objective: Maximize t
        M.objective(ObjectiveSense.Maximize, t)

        M.solve()

        C, d = C.level(), d.level()
        return ([C[i:i + n] for i in range(0, n * n, n)], d)

The only black box is the method det_rootn which implements the constraint \(t\leq \mbox{det}(C)^{1/n}\). It will be described in Sec. 10.4.3 (Bound on the Determinant Root).
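
As a usage sketch (not part of the original listings), the function can be tried on a concrete polygon. The snippet below assumes that lownerjohn_inner, together with det_rootn and geometric_mean from Sec. 10.4.3 (Bound on the Determinant Root), is defined in the same file; the triangle data is chosen purely for illustration.
# Imports used throughout the listings in this section
import sys
from math import sqrt, ceil, log
from mosek.fusion import Model, Domain, Expr, Var, ObjectiveSense

# Triangle {x >= 0, y >= 0, x + y <= 1} in h-representation Ax <= b
A = [[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [0.0, 0.0, 1.0]

C, d = lownerjohn_inner(A, b)
print("C =", C)   # rows of C in the parametrization x = Cu + d, ||u|| <= 1
print("d =", d)   # center of the inscribed ellipse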

10.4.2 Outer Löwner-John Ellipsoids

To compute the outer ellipsoidal approximation to a polytope, let us now start with a v-representation

\[\mathcal{P} = \mbox{conv}\{ x_1, x_2, \ldots , x_m \} \subseteq \real^n,\]

of the polytope as a convex hull of a set of points. We are looking for an ellipsoid given by a quadratic inequality

\[\mathcal{E} = \{ x\in\real^n \mid \| Px-c \|_2 \leq 1 \},\]

whose volume is proportional to \(\mbox{det}(P)^{-1}\), so minimizing the volume amounts to maximizing the concave function \(\mbox{det}(P)^{1/n}\). Again, there is always such a representation with a symmetric, positive definite matrix \(P\). The inclusion conditions \(x_i\in\mathcal{E}\) translate into a straightforward problem formulation:

(2)\[\begin{split}\begin{array}{lll} \maximize & t &\\ \st & t \leq \mbox{det}(P)^{1/n}, &\\ & \|Px_i - c\|_2 \leq 1, &i=1,\ldots,m,\\ & P \succeq 0, & \end{array}\end{split}\]

and then directly into Fusion code:

Listing 23 Fusion implementation of model (2).
def lownerjohn_outer(x):
    with Model("lownerjohn_outer") as M:
        M.setLogHandler(sys.stdout)
        m, n = len(x), len(x[0])

        # Setup variables
        t = M.variable("t", 1, Domain.greaterThan(0.0))
        P = M.variable("P", Domain.inPSDCone(n))
        c = M.variable("c", n, Domain.unbounded())

        # (1, Px-c) in cone
        M.constraint("qc",
                     Expr.hstack(Expr.ones(m),
                                 Expr.sub(Expr.mul(x, P),
                                          Var.reshape(Var.repeat(c, m), [m, n])
                                          )
                                 ),
                     Domain.inQCone())

        # t <= det(P)^{1/n}
        det_rootn(M, P, t)

        # Objective: Maximize t
        M.objective(ObjectiveSense.Maximize, t)
        M.solve()

        P, c = P.level(), c.level()
        return ([P[i:i + n] for i in range(0, n * n, n)], c)
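
Similarly, a usage sketch for the outer ellipsoid (again not part of the original listings, with the same imports and helper functions in scope): for the four corners of the unit square the result should be the circumscribed circle of radius \(\sqrt{2}/2\) centered at \((1/2,1/2)\).
# Corners of the unit square in v-representation
points = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]

P, c = lownerjohn_outer(points)
print("P =", P)   # for the square, approximately sqrt(2) times the identity
print("c =", c)   # equals P applied to the center (1/2, 1/2)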

10.4.3 Bound on the Determinant Root

It remains to show how to express the bounds on \(\mbox{det}(X)^{1/n}\) for a symmetric positive definite \(n\times n\) matrix \(X\) using PSD and conic quadratic variables. We want to model the set

(3)\[C = \lbrace (X, t) \in \PSD^n \times \real \mid t \leq \mbox{det}(X)^{1/n} \rbrace.\]

A standard approach when working with the determinant of a PSD matrix is to consider a semidefinite cone

(4)\[\begin{split}\left( {\begin{array}{cc}X & Z \\ Z^T & \mbox{Diag}(Z) \\ \end{array} } \right) \succeq 0\end{split}\]

where \(Z\) is a matrix of additional variables and where we intuitively identify \(\mbox{Diag}(Z)=\{\lambda_1,\ldots,\lambda_n\}\) with the eigenvalues of \(X\). With this in mind, we are left with expressing the constraint

(5)\[t \leq (\lambda_1\cdot\ldots\cdot\lambda_n)^{1/n}.\]

This is easy to implement recursively using rotated quadratic cones when \(n\) is a power of \(2\); otherwise we need to round \(n\) up to the nearest power of \(2\) as in Listing 25. For example, \(t\leq (\lambda_1\lambda_2\lambda_3\lambda_4)^{1/4}\) is equivalent to

\[\lambda_1\lambda_2\geq y_1^2,\ \lambda_3\lambda_4\geq y_2^2,\ y_1y_2\geq t^2\]

for auxiliary nonnegative variables \(y_1,y_2\), while \(t\leq (\lambda_1\lambda_2\lambda_3)^{1/3}\) can be achieved by writing \(t\leq (t\lambda_1\lambda_2\lambda_3)^{1/4}\).
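
The padding step is justified by a short chain of equivalences; since \(t\) and the \(\lambda_i\) are nonnegative,

\[t\leq (t\lambda_1\lambda_2\lambda_3)^{1/4} \iff t^4\leq t\lambda_1\lambda_2\lambda_3 \iff t^3\leq \lambda_1\lambda_2\lambda_3 \iff t\leq (\lambda_1\lambda_2\lambda_3)^{1/3}.\]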

For further details and proofs see [BenTalN01] or [MOSEKApS12].

Listing 24 Approaching the determinant, see (4).
def det_rootn(M, X, t):
    n = int(sqrt(X.size()))

    # Setup variables
    Y = M.variable(Domain.inPSDCone(2 * n))

    # Setup Y = [X, Z; Z^T , diag(Z)]
    Y11 = Y.slice([0, 0], [n, n])
    Y21 = Y.slice([n, 0], [2 * n, n])
    Y22 = Y.slice([n, n], [2 * n, 2 * n])

    M.constraint(Expr.sub(Y21.diag(), Y22.diag()), Domain.equalsTo(0.0))
    M.constraint(Expr.sub(X, Y11), Domain.equalsTo(0.0))

    # t^n <= (Z11*Z22*...*Znn)
    geometric_mean(M, Y22.diag(), t)
Listing 25 Bounding the geometric mean, see (5).
def geometric_mean(M, x, t):
    def rec(x):
        n = x.getShape().dim(0)
        if n > 1:
            y = M.variable(int(n // 2), Domain.unbounded())
            M.constraint(Var.hstack(Var.reshape(
                x, [n // 2, 2]), y), Domain.inRotatedQCone())
            return rec(y)
        else:
            return x

    n = x.getShape().dim(0)
    l = int(ceil(log(n, 2)))
    m = int(2**l) - n

    # if size of x is not a power of 2 we pad it:
    if m > 0:
        x_padding = M.variable(m, Domain.unbounded())

        # set the last m elements equal to t
        M.constraint(Expr.sub(x_padding, Var.repeat(t, m)),
                     Domain.equalsTo(0.0))

        x = Var.vstack(x, x_padding)

    # Each level of the recursion contributes a factor of sqrt(2) (from the
    # rotated cone 2*x1*x2 >= y^2), so the cones imply
    # rec(x) <= 2^(l/2) * (geometric mean of the padded x); scaling t by
    # 2^(l/2) therefore enforces t <= (x_1*...*x_n)^(1/n).
    M.constraint(Expr.sub(Expr.mul(2.0**(l / 2.0), t), rec(x)),
                 Domain.equalsTo(0.0))
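
As a hypothetical sanity check (not part of the original example), one can fix \(X\) to a known positive definite matrix, maximize \(t\) subject to det_rootn and compare the optimum with \(\mbox{det}(X)^{1/n}\):
# Sanity-check sketch for det_rootn; assumes det_rootn and geometric_mean
# from the listings above (and their math imports) are defined in this file.
from mosek.fusion import Model, Domain, Expr, Matrix, ObjectiveSense

with Model("det_rootn_check") as M:
    n = 2
    # Fix X to a constant positive definite matrix with det = 5
    Xfix = Matrix.dense([[2.0, 1.0], [1.0, 3.0]])
    X = M.variable("X", Domain.inPSDCone(n))
    t = M.variable("t", 1, Domain.unbounded())
    M.constraint(Expr.sub(X, Expr.constTerm(Xfix)), Domain.equalsTo(0.0))
    det_rootn(M, X, t)
    M.objective(ObjectiveSense.Maximize, t)
    M.solve()
    print(t.level()[0])   # expected: about det(X)^(1/2) = sqrt(5) ~ 2.236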