11.9 Nearest Correlation Matrix Problem

A correlation matrix is a symmetric positive semidefinite matrix with unit diagonal. This term has origins in statistics, since the matrix whose entries are the correlation coefficients of a sequence of random variables has all these properties.

In this section we study variants of the problem of approximating a given symmetric matrix \(A\) with correlation matrices:

  • find the correlation matrix \(X\) nearest to \(A\) in the Frobenius norm,

  • find an approximation of the form \(D+X\), where \(D\) is a diagonal matrix with positive diagonal and \(X\) is a positive semidefinite matrix of low rank, using a combination of the Frobenius and nuclear norms.

Both problems are related to portfolio optimization, where one often has a matrix \(A\) that only approximates the correlations of stocks. For subsequent optimizations one would like to approximate \(A\) with a correlation matrix or, in the factor model, with \(D+VV^T\) where \(VV^T\) has small rank.

11.9.1 Nearest correlation with the Frobenius norm

The Frobenius norm of a real matrix \(M\) is defined as

\[\|M\|_F = \left(\sum_{i,j}M_{i,j}^2\right)^{1/2}\]

and with respect to this norm our optimization problem can be expressed simply as:

(11.36)\[\begin{split}\begin{array}{ll} \minimize & \|A-X\|_F\\ \st & \mathbf{diag}(X) = e,\\ & X \succeq 0.\\ \end{array}\end{split}\]

We can exploit the symmetry of \(A\) and \(X\) to get a compact vector representation. To this end we make use of the following mapping from a symmetric matrix to a flattened vector containing the (scaled) lower triangular part of the matrix:

(11.37)\[\begin{split}\begin{array}{ll} \mbox{vec}: & \real^{n\times n} \rightarrow \real^{n(n+1)/2} \\ \mbox{vec}(M) = & (\alpha_{11}M_{11},\alpha_{21}M_{21},\alpha_{22}M_{22},\ldots,\alpha_{n1}M_{n1},\ldots,\alpha_{nn}M_{nn}) \\ \alpha_{ij}=&\begin{cases}1 & j=i\\ \sqrt{2} & j<i\end{cases} \end{array}\end{split}\]

Note that \(\|M\|_F=\|\mbox{vec}(M)\|_2\): the factor \(\sqrt{2}\) accounts for each off-diagonal entry of a symmetric matrix appearing twice. The Fusion implementation of \(\mbox{vec}\) is as follows:

Listing 11.18 Implementation of function \(\mbox{vec}\) in (11.37).
    public static Expression Vec(Expression e)
    {
      int N = e.GetShape()[0];
      // Sparse coordinates and coefficients of the selection matrix S
      int[] msubi = new int[N * (N + 1) / 2],
            msubj = new int[N * (N + 1) / 2];
      double[] mcof = new double[N * (N + 1) / 2];

      // Row k of S picks the lower-triangular entry (i,j) of the
      // flattened input, scaled by sqrt(2) off the diagonal as in (11.37)
      for (int i = 0, k = 0; i < N; ++i)
        for (int j = 0; j < i + 1; ++j, ++k)
        {
          msubi[k] = k;
          msubj[k] = i * N + j;
          if (i == j) mcof[k] = 1.0;
          else        mcof[k] = Math.Sqrt(2.0);
        }

      // vec(e) = S * flatten(e)
      var S = Matrix.Sparse(N * (N + 1) / 2, N * N, msubi, msubj, mcof);
      return Expr.Mul(S, Expr.Flatten(e));
    }
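
For example, for a symmetric \(2\times 2\) matrix \(M\) this gives \(\mbox{vec}(M)=(M_{11},\sqrt{2}M_{21},M_{22})\), and indeed \(\|\mbox{vec}(M)\|_2^2=M_{11}^2+2M_{21}^2+M_{22}^2=\|M\|_F^2\).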

That leads to an optimization problem with both conic quadratic and semidefinite constraints:

(11.38)\[\begin{split}\begin{array}{ll} \minimize & t\\ \st & (t, \mbox{vec} (A-X)) \in \Q,\\ & \mathbf{diag}(X) = e,\\ & X \succeq 0.\\ \end{array}\end{split}\]

Code example

Listing 11.19 Implementation of problem (11.38).
    public static void nearestcorr_frobenius(Matrix A)
    {
      int N = A.NumRows();
      using (var M = new Model("NearestCorrelation"))
      {
        // Setting up the variables
        var X = M.Variable("X", Domain.InPSDCone(N));
        var t = M.Variable("t", 1, Domain.Unbounded());

        // (t, vec (A-X)) \in Q
        M.Constraint( Expr.Vstack(t, Vec(Expr.Sub(A, X))), Domain.InQCone() );

        // diag(X) = e
        M.Constraint(X.Diag(), Domain.EqualsTo(1.0));

        // Objective: Minimize t
        M.Objective(ObjectiveSense.Minimize, t);

        // Solve the problem
        M.Solve();

        // Get the solution values
        // (mattostr is a small helper that formats a flat array as a matrix)
        Console.WriteLine("X = \n{0}", mattostr(X.Level(), N));
        Console.WriteLine("t = {0}", mattostr(t.Level(), N));
      }
    }

We use the following input:

Listing 11.20 Input for the nearest correlation problem.
      int N = 5;
      var A = Matrix.Dense( new double[,]
      { {  0.0,   0.5,  -0.1,  -0.2,   0.5 },
        {  0.5,   1.25, -0.05, -0.1,   0.25},
        { -0.1,  -0.05,  0.51,  0.02, -0.05},
        { -0.2,  -0.1,   0.02,  0.54, -0.1 },
        {  0.5,   0.25, -0.05, -0.1,   1.25}
      });
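
With the matrix \(A\) in scope, solving (11.38) amounts to one call to the routine from Listing 11.19 (a minimal sketch):

    // Solve problem (11.38) for the input matrix defined above.
    nearestcorr_frobenius(A);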

The expected output is the following (small numerical differences may occur):

X =
[[ 1.          0.50001941 -0.09999994 -0.20000084  0.50001941]
 [ 0.50001941  1.         -0.04999551 -0.09999154  0.24999101]
 [-0.09999994 -0.04999551  1.          0.01999746 -0.04999551]
 [-0.20000084 -0.09999154  0.01999746  1.         -0.09999154]
 [ 0.50001941  0.24999101 -0.04999551 -0.09999154  1.        ]]

11.9.2 Nearest correlation with a nuclear-norm penalty

Next, we consider an approximation of \(A\) of the form \(D+X\), where \(D=\diag(w),\ w\geq 0\), and \(X\succeq 0\). We also aim at minimizing the rank of \(X\). Since minimizing the rank directly is intractable, we relax it to a linear penalty on \(\trace(X)\), which for positive semidefinite \(X\) is precisely the nuclear norm, i.e. the sum of the eigenvalues.
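
Indeed, the singular values of a positive semidefinite matrix coincide with its eigenvalues, so

\[\|X\|_* = \sum_i \sigma_i(X) = \sum_i \lambda_i(X) = \trace(X) \quad \mbox{for } X\succeq 0.\]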

The combination of these constraints leads to the problem:

\[\begin{split}\begin{array}{ll} \minimize & \left\|X+\diag(w)-A\right\|_F + \gamma \trace(X),\\ \st & X \succeq 0, w \geq 0, \end{array}\end{split}\]

where the parameter \(\gamma\) controls the tradeoff between the quality of approximation and the rank of \(X\).

Exploiting the mapping \(\mbox{vec}\) defined in (11.37), we can express this problem as:

(11.39)\[\begin{split}\begin{array}{ll} \minimize & t + \gamma\trace(X) \\ \st & (t, \mbox{vec} (X + \diag(w) - A) ) \in \Q, \\ & X \succeq 0 , w \geq 0. \end{array}\end{split}\]

Code example

Listing 11.21 Implementation of problem (11.39).
    public static void nearestcorr_nn(Matrix A, double[] gammas, double[] res, int[] rank)
    {
      int N = A.NumRows();
      using (var M = new Model("NucNorm"))
      {
        // Setup variables
        var t = M.Variable("t", 1, Domain.Unbounded());
        var X = M.Variable("X", Domain.InPSDCone(N));
        var w = M.Variable("w", N, Domain.GreaterThan(0.0));

        // (t, vec (X + diag(w) - A)) in Q
        // D = diag(w), built as the elementwise product of I and w repeated N times
        var D = Expr.MulElm( Matrix.Eye(N), Var.Repeat(w, 1, N) );
        M.Constraint( Expr.Vstack( t, Vec(Expr.Sub(Expr.Add(X, D), A)) ), Domain.InQCone() );

        for (var k = 0; k < gammas.Length; ++k)
        {
          // Objective: Minimize t + gamma*Tr(X)
          var gamm_trX = Expr.Mul( gammas[k], Expr.Sum(X.Diag()) );
          M.Objective(ObjectiveSense.Minimize, Expr.Add(t, gamm_trX));
          M.Solve();

          // Find the eigenvalues of X and approximate rank
          var d = new double[N];
          mosek.LinAlg.syeig(mosek.uplo.lo, N, X.Level(), d);
          var rnk = 0; foreach (var v in d) if (v > 1e-6) ++rnk;

          res[k] = t.Level()[0];
          rank[k] = rnk;
        }
      }
    }

We feed MOSEK the same input as in Sec. 11.9.1 (Nearest correlation with the Frobenius norm). The problem is solved for a range of \(\gamma\) values to demonstrate how the penalty term helps achieve a low-rank solution. To this end we report both the rank of \(X\) and the residual norm \(\left\|X+\diag(w)-A\right\|_F\).
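
A minimal driver for this sweep might look as follows (a sketch; the \(\gamma\) grid matches the output below, the print format is an assumption):

    // Sweep gamma = 0.0, 0.1, ..., 1.0 and report residual norm and rank.
    double[] gammas = new double[11];
    for (int i = 0; i < gammas.Length; ++i) gammas[i] = 0.1 * i;
    var res  = new double[gammas.Length];
    var rank = new int[gammas.Length];

    nearestcorr_nn(A, gammas, res, rank);

    Console.WriteLine("--- Nearest Correlation with Nuclear Norm ---");
    for (int i = 0; i < gammas.Length; ++i)
      Console.WriteLine("gamma={0:f6}, res={1:0.000000e+00}, rank={2}",
                        gammas[i], res[i], rank[i]);

Its output for the input above is: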

--- Nearest Correlation with Nuclear Norm ---
gamma=0.000000, res=3.076163e-01, rank=4
gamma=0.100000, res=4.251692e-01, rank=2
gamma=0.200000, res=5.112082e-01, rank=1
gamma=0.300000, res=5.298432e-01, rank=1
gamma=0.400000, res=5.592686e-01, rank=1
gamma=0.500000, res=6.045702e-01, rank=1
gamma=0.600000, res=6.764402e-01, rank=1
gamma=0.700000, res=8.009913e-01, rank=1
gamma=0.800000, res=1.062385e+00, rank=1
gamma=0.900000, res=1.129513e+00, rank=0
gamma=1.000000, res=1.129513e+00, rank=0
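
Note how increasing \(\gamma\) trades approximation quality (a growing residual) for lower rank, until for \(\gamma\geq 0.9\) the penalty forces \(X=0\) and only the diagonal term \(\diag(w)\) remains.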