11.9 Nearest Correlation Matrix Problem

A correlation matrix is a symmetric positive semidefinite matrix with unit diagonal. The term originates in statistics, since the matrix whose entries are the correlation coefficients of a sequence of random variables has all these properties.

In this section we study variants of the problem of approximating a given symmetric matrix \(A\) with correlation matrices:

  • find the correlation matrix \(X\) nearest to \(A\) in the Frobenius norm,

  • find an approximation of the form \(D+X\), where \(D\) is a diagonal matrix with nonnegative diagonal and \(X\) is a positive semidefinite matrix of low rank, using a combination of the Frobenius and nuclear norms.

Both problems are related to portfolio optimization, where one often has a matrix \(A\) that only approximates the correlations of stocks. For subsequent optimizations one would like to approximate \(A\) with a correlation matrix or, in the factor model, with \(D+VV^T\) where \(VV^T\) has small rank.

11.9.1 Nearest correlation with the Frobenius norm

The Frobenius norm of a real matrix \(M\) is defined as

\[\|M\|_F = \left(\sum_{i,j}M_{i,j}^2\right)^{1/2}\]

and with respect to this norm our optimization problem can be expressed simply as:

(11.36)\[\begin{split}\begin{array}{ll} \minimize & \|A-X\|_F\\ \st & \mathbf{diag}(X) = e,\\ & X \succeq 0.\\ \end{array}\end{split}\]

We can exploit the symmetry of \(A\) and \(X\) to get a compact vector representation. To this end we make use of the following mapping from a symmetric matrix to a flattened vector containing the (scaled) lower triangular part of the matrix:

(11.37)\[\begin{split}\begin{array}{ll} \mbox{vec}: & \real^{n\times n} \rightarrow \real^{n(n+1)/2} \\ \mbox{vec}(M) = & (\alpha_{11}M_{11},\alpha_{21}M_{21},\alpha_{22}M_{22},\ldots,\alpha_{n1}M_{n1},\ldots,\alpha_{nn}M_{nn}) \\ \alpha_{ij}=&\begin{cases}1 & j=i\\ \sqrt{2} & j<i\end{cases} \end{array}\end{split}\]

Note that \(\|M\|_F=\|\mbox{vec}(M)\|_2\). The Fusion implementation of \(\mbox{vec}\) is as follows:

Listing 11.18 Implementation of function \(vec\) in (11.37).
from mosek.fusion import Matrix, Expr

def vec(e):
    # Dimension of the n x n symmetric matrix expression e
    N = e.getShape()[0]

    msubi = range(N * (N + 1) // 2)
    msubj = [i * N + j for i in range(N) for j in range(i + 1)]
    mcof  = [2.0**0.5 if i != j else 1.0
             for i in range(N) for j in range(i + 1)]

    S = Matrix.sparse(N * (N + 1) // 2, N * N, msubi, msubj, mcof)
    return Expr.mul(S, Expr.flatten(e))
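As a sanity check (plain NumPy, outside the Fusion model; the helper vec_np is ours), the same mapping can be reproduced to verify the identity \(\|M\|_F=\|\mbox{vec}(M)\|_2\) numerically:

import numpy as np

def vec_np(M):
    # NumPy analogue of vec: scaled lower triangle, row by row
    n = M.shape[0]
    return np.array([(np.sqrt(2.0) if j < i else 1.0) * M[i, j]
                     for i in range(n) for j in range(i + 1)])

M = np.array([[1.0, 0.3],
              [0.3, 2.0]])

# Each off-diagonal entry appears twice in M but only once (scaled by
# sqrt(2)) in vec(M), so the Frobenius and Euclidean norms agree
assert np.isclose(np.linalg.norm(M, 'fro'), np.linalg.norm(vec_np(M)))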

That leads to an optimization problem with both conic quadratic and semidefinite constraints:

(11.38)\[\begin{split}\begin{array}{ll} \minimize & t\\ \st & (t, \mbox{vec} (A-X)) \in \Q,\\ & \mathbf{diag}(X) = e,\\ & X \succeq 0.\\ \end{array}\end{split}\]

Code example

Listing 11.19 Implementation of problem (11.38).
from mosek.fusion import Model, Domain, Expr, ObjectiveSense

def nearestcorr(A):
    N = A.numRows()

    # Create a model
    with Model("NearestCorrelation") as M:
        # Setting up the variables
        X = M.variable("X", Domain.inPSDCone(N))
        t = M.variable("t", 1, Domain.unbounded())

        # (t, vec (A-X)) \in Q
        v = vec(Expr.sub(A, X))
        M.constraint("C1", Expr.vstack(t, v), Domain.inQCone())

        # diag(X) = e
        M.constraint("C2", X.diag(), Domain.equalsTo(1.0))

        # Objective: Minimize t
        M.objective(ObjectiveSense.Minimize, t)
        # Dump the problem to a file (optional, for inspection)
        M.writeTask('nearcor.ptf')
        M.solve()

        return X.level(), t.level()

We use the following input:

Listing 11.20 Input for the nearest correlation problem.
    N = 5
    A = Matrix.dense(N, N, [0.0, 0.5, -0.1, -0.2, 0.5,
                            0.5, 1.25, -0.05, -0.1, 0.25,
                            -0.1, -0.05, 0.51, 0.02, -0.05,
                            -0.2, -0.1, 0.02, 0.54, -0.1,
                            0.5, 0.25, -0.05, -0.1, 1.25])
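With this input in place, a minimal driver (a sketch assuming the listings above; the names Xlvl and tlvl are ours) solves the problem and prints the solution:

import numpy as np

# Solve and reshape the flattened PSD variable back into an N x N matrix
Xlvl, tlvl = nearestcorr(A)
X = np.reshape(Xlvl, (N, N))

print("X =")
print(X)

# Sanity checks: unit diagonal and PSD up to solver tolerance
assert np.allclose(np.diag(X), 1.0, atol=1e-6)
assert np.linalg.eigvalsh(X).min() > -1e-6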

The expected output is the following (minor numerical differences may occur):

X =
[[ 1.          0.50001941 -0.09999994 -0.20000084  0.50001941]
 [ 0.50001941  1.         -0.04999551 -0.09999154  0.24999101]
 [-0.09999994 -0.04999551  1.          0.01999746 -0.04999551]
 [-0.20000084 -0.09999154  0.01999746  1.         -0.09999154]
 [ 0.50001941  0.24999101 -0.04999551 -0.09999154  1.        ]]

11.9.2 Nearest Correlation with Nuclear-norm Penalty

Next, we consider approximations of \(A\) of the form \(D+X\), where \(D=\diag(w),\ w\geq 0\) and \(X\succeq 0\). We also aim to minimize the rank of \(X\). Since rank minimization is not tractable directly, we relax it to a linear objective penalizing the trace \(\trace(X)\), which for a positive semidefinite matrix equals both the sum of its eigenvalues and its nuclear norm.

The combination of these constraints leads to a problem:

\[\begin{split}\begin{array}{ll} \minimize & \left\|X+\diag(w)-A\right\|_F + \gamma \trace(X),\\ \st & X \succeq 0, w \geq 0, \end{array}\end{split}\]

where the parameter \(\gamma\) controls the tradeoff between the quality of approximation and the rank of \(X\).
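The trace penalty is the standard convex surrogate for rank precisely because of this coincidence: for \(X\succeq 0\) the singular values are the eigenvalues, so the nuclear norm reduces to \(\trace(X)\). A quick NumPy check of this fact (a standalone sketch, not part of the Fusion example):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X = B @ B.T                      # positive semidefinite by construction

# For PSD X: singular values == eigenvalues, hence nuclear norm == trace
assert np.isclose(np.linalg.norm(X, ord='nuc'), np.trace(X))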

Exploiting the mapping \(\mbox{vec}\) defined in (11.37), we can express this problem as:

(11.39)\[\begin{split}\begin{array}{ll} \minimize & t + \gamma\trace(X) \\ \st & (t, \mbox{vec} (X + \diag(w) - A) ) \in \Q, \\ & X \succeq 0 , w \geq 0. \end{array}\end{split}\]

Code example

Listing 11.21 Implementation of problem (11.39).
import mosek
from mosek import LinAlg
from mosek.fusion import Model, Domain, Expr, Var, Matrix, ObjectiveSense

def nearestcorr_nucnorm(A, gammas):
    N = A.numRows()
    with Model("NucNorm") as M:
        # Setup variables
        t = M.variable("t", 1, Domain.unbounded())
        X = M.variable("X", Domain.inPSDCone(N))
        w = M.variable("w", N, Domain.greaterThan(0.0))

        # D = diag(w)
        D = Expr.mulElm(Matrix.eye(N), Var.repeat(w, 1, N))
        # (t, vec (X + D - A)) in Q
        M.constraint(Expr.vstack(t, vec(Expr.sub(Expr.add(X, D), A))),
                     Domain.inQCone())

        result = []
        for g in gammas:
            # Objective: Minimize t + gamma*Tr(X)
            M.objective(ObjectiveSense.Minimize, Expr.add(
                t, Expr.mul(g, Expr.sum(X.diag()))))
            M.solve()

            # Find eigenvalues of X and compute its rank
            d = [0.0] * int(N)
            LinAlg.syeig(mosek.uplo.lo, N, X.level(), d)
            result.append(
                (g, t.level(), sum([d[i] > 1e-6 for i in range(N)]), X.level()))

        return result

We feed MOSEK the same input as in Sec. 11.9.1 (Nearest correlation with the Frobenius norm). The problem is solved for a range of values of \(\gamma\), to demonstrate how the penalty term helps achieve a low-rank solution. To this end we report both the rank of \(X\) and the residual norm \(\left\|X+\diag(w)-A\right\|_F\).
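A driver along these lines (a sketch assuming the input A above; note that t.level() returns a one-element array) reproduces the summary below:

gammas = [0.1 * i for i in range(11)]

print("--- Nearest Correlation with Nuclear Norm---")
for g, res, rank, Xlvl in nearestcorr_nucnorm(A, gammas):
    # res is t.level(), the Frobenius residual at the optimum
    print("gamma=%f, res=%e, rank=%d" % (g, res[0], rank))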

--- Nearest Correlation with Nuclear Norm---
gamma=0.000000, res=3.076163e-01, rank=4
gamma=0.100000, res=4.251692e-01, rank=2
gamma=0.200000, res=5.112082e-01, rank=1
gamma=0.300000, res=5.298432e-01, rank=1
gamma=0.400000, res=5.592686e-01, rank=1
gamma=0.500000, res=6.045702e-01, rank=1
gamma=0.600000, res=6.764402e-01, rank=1
gamma=0.700000, res=8.009913e-01, rank=1
gamma=0.800000, res=1.062385e+00, rank=1
gamma=0.900000, res=1.129513e+00, rank=0
gamma=1.000000, res=1.129513e+00, rank=0