11.9 Nearest Correlation Matrix Problem
A correlation matrix is a symmetric positive semidefinite matrix with unit diagonal. This term has its origins in statistics, since the matrix whose entries are the correlation coefficients of a sequence of random variables has all these properties.
In this section we study variants of the problem of approximating a given symmetric matrix $A$ with correlation matrices:

- find the correlation matrix $X$ nearest to $A$ in the Frobenius norm,
- find an approximation of the form $D + X$, where $D$ is a diagonal matrix with positive diagonal and $X$ is a positive semidefinite matrix of low rank, using a combination of the Frobenius and nuclear norms.
Both problems are related to portfolio optimization, where one can often have a matrix $A$ that is intended to be a correlation matrix of a set of assets but, for instance because of noisy or incomplete estimation, fails to satisfy the defining properties.
11.9.1 Nearest correlation with the Frobenius norm
The Frobenius norm of a real matrix $M$ is defined as

\[
\|M\|_F = \Big( \sum_{i,j} M_{ij}^2 \Big)^{1/2},
\]

and with respect to this norm our optimization problem can be expressed simply as:

\[
\begin{array}{ll}
\mbox{minimize}   & \|A - X\|_F \\
\mbox{subject to} & \mathrm{diag}(X) = e, \\
                  & X \succeq 0,
\end{array}
\]

where $e$ denotes the all-ones vector.

We can exploit the symmetry of $A$ and $X$ to get a compact vector representation. To this end we use a mapping $\mathrm{vec}: \mathbb{R}^{n \times n} \rightarrow \mathbb{R}^{n(n+1)/2}$ which stacks the (scaled) lower-triangular part of a symmetric matrix into a vector:

\[
\mathrm{vec}(M) = (\alpha_{11} M_{11}, \alpha_{21} M_{21}, \alpha_{22} M_{22}, \ldots, \alpha_{n1} M_{n1}, \ldots, \alpha_{nn} M_{nn}),
\]

where $\alpha_{ii} = 1$ and $\alpha_{ij} = \sqrt{2}$ for $i > j$.

Note that $\|M\|_F = \|\mathrm{vec}(M)\|_2$ for any symmetric matrix $M$. The Fusion implementation of $\mathrm{vec}$ is as follows:
import mosek
from mosek.fusion import *
import mosek.fusion.pythonic   # operator overloading (-, ==, @, +) used below; available in recent MOSEK versions

def vec(e):
    # Map an N x N expression to the N(N+1)/2 vector of its scaled
    # lower-triangular entries (off-diagonal entries scaled by sqrt(2)),
    # so that the 2-norm of the result equals the Frobenius norm of a
    # symmetric argument.
    N = e.getShape()[0]
    msubi = range(N * (N + 1) // 2)
    msubj = [i * N + j for i in range(N) for j in range(i + 1)]
    mcof = [2.0**0.5 if i != j else 1.0 for i in range(N) for j in range(i + 1)]
    S = Matrix.sparse(N * (N + 1) // 2, N * N, msubi, msubj, mcof)
    return S @ Expr.flatten(e)
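As a quick sanity check of this scaling one can verify, purely in NumPy and outside of Fusion, that the 2-norm of the stacked vector equals the Frobenius norm of a symmetric matrix. The helper vec_np below is only an illustrative sketch mirroring the coefficients used in vec:

import numpy as np

def vec_np(M):
    # Same scaling as vec above: sqrt(2) on strictly lower-triangular entries.
    N = M.shape[0]
    return np.array([(1.0 if i == j else 2.0**0.5) * M[i, j]
                     for i in range(N) for j in range(i + 1)])

M = np.array([[1.0, 0.3], [0.3, 2.0]])
assert abs(np.linalg.norm(vec_np(M)) - np.linalg.norm(M, 'fro')) < 1e-12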
That leads to an optimization problem with both conic quadratic and semidefinite constraints:

\[
\begin{array}{ll}
\mbox{minimize}   & t \\
\mbox{subject to} & (t, \mathrm{vec}(A - X)) \in \mathcal{Q}, \\
                  & \mathrm{diag}(X) = e, \\
                  & X \succeq 0.
\end{array}
\]
Code example
def nearestcorr(A):
    N = A.numRows()

    # Create a model
    with Model("NearestCorrelation") as M:
        # Setting up the variables
        X = M.variable("X", Domain.inPSDCone(N))
        t = M.variable("t", 1, Domain.unbounded())

        # (t, vec (A-X)) \in Q
        M.constraint("C1", Expr.vstack(t, vec(A - X)), Domain.inQCone())

        # diag(X) = e
        M.constraint("C2", X.diag() == 1.0)

        # Objective: Minimize t
        M.objective(ObjectiveSense.Minimize, t)

        M.writeTask('nearcor.ptf')
        M.solve()

        return X.level(), t.level()
We use the following input:
N = 5
A = Matrix.dense(N, N, [0.0, 0.5, -0.1, -0.2, 0.5,
                        0.5, 1.25, -0.05, -0.1, 0.25,
                        -0.1, -0.05, 0.51, 0.02, -0.05,
                        -0.2, -0.1, 0.02, 0.54, -0.1,
                        0.5, 0.25, -0.05, -0.1, 1.25])
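A small driver along the following lines (a sketch; reshaping the flattened solution with numpy is our own choice of presentation) solves the problem and prints the result:

import numpy

X, t = nearestcorr(A)
print("t = %.4e" % t[0])
print("X =")
print(numpy.reshape(X, (N, N)))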
The expected output is the following (small differences may apply):
X =
[[ 1. 0.50001941 -0.09999994 -0.20000084 0.50001941]
[ 0.50001941 1. -0.04999551 -0.09999154 0.24999101]
[-0.09999994 -0.04999551 1. 0.01999746 -0.04999551]
[-0.20000084 -0.09999154 0.01999746 1. -0.09999154]
[ 0.50001941 0.24999101 -0.04999551 -0.09999154 1. ]]
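One may also check, again only as a numpy sketch, that the returned matrix is indeed a correlation matrix, i.e. that it has unit diagonal and is positive semidefinite up to numerical tolerance:

import numpy

Xm = numpy.reshape(X, (N, N))
print(numpy.allclose(numpy.diag(Xm), 1.0, atol=1e-6))  # unit diagonal
print(numpy.linalg.eigvalsh(Xm).min() >= -1e-6)        # positive semidefinite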
11.9.2 Nearest Correlation with Nuclear-norm Penalty
Next, we consider the approximation of $A$ by a matrix of the form $D + X$, where $D = \mathrm{diag}(w)$ with $w \geq 0$ and $X \succeq 0$, and we additionally want $X$ to have low rank. Since minimizing the rank directly is hard, we instead penalize the nuclear norm of $X$, which for a positive semidefinite matrix equals its trace $\mathrm{Tr}(X)$.

The combination of these constraints leads to the problem:

\[
\begin{array}{ll}
\mbox{minimize}   & \|X + D - A\|_F + \gamma \mathrm{Tr}(X) \\
\mbox{subject to} & X \succeq 0, \\
                  & D = \mathrm{diag}(w), \ w \geq 0,
\end{array}
\]

where the parameter $\gamma$ controls the tradeoff between the quality of the approximation and the rank of $X$.

We exploit the mapping $\mathrm{vec}$ defined above to implement the Frobenius norm as in the previous example:
Code example
def nearestcorr_nucnorm(A, gammas):
    N = A.numRows()
    with Model("NucNorm") as M:
        # Setup variables
        t = M.variable("t", 1, Domain.unbounded())
        X = M.variable("X", Domain.inPSDCone(N))
        w = M.variable("w", N, Domain.greaterThan(0.0))

        # D = diag(w)
        D = Expr.mulElm(Matrix.eye(N), Var.repeat(w, N, 1))
        # (t, vec (X + D - A)) in Q
        M.constraint(Expr.vstack(t, vec(X + D - A)), Domain.inQCone())

        result = []

        for g in gammas:
            # Objective: Minimize t + gamma*Tr(X)
            M.objective(ObjectiveSense.Minimize, t + g * Expr.sum(X.diag()))
            M.solve()

            # Find eigenvalues of X and compute its rank
            d = [0.0] * int(N)
            mosek.LinAlg.syeig(mosek.uplo.lo, N, X.level(), d)

            result.append(
                (g, t.level()[0], sum([d[i] > 1e-6 for i in range(N)]), X.level()))

        return result
We feed MOSEK with the same input matrix $A$ as in Sec. 11.9.1 (Nearest correlation with the Frobenius norm). The problem is solved for a range of values of $\gamma$, to demonstrate how the penalty term helps achieve a low-rank solution; for each value we report the residual norm $\|X + D - A\|_F$ and the rank of $X$.
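A driver of the following form (a sketch; the eleven equally spaced values of gamma and the print format are assumptions chosen to match the output below) produces the table:

gammas = [0.1 * i for i in range(11)]

print("--- Nearest Correlation with Nuclear Norm---")
for g, res, rank, Xl in nearestcorr_nucnorm(A, gammas):
    print("gamma=%f, res=%e, rank=%d" % (g, res, rank))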
--- Nearest Correlation with Nuclear Norm---
gamma=0.000000, res=3.076163e-01, rank=4
gamma=0.100000, res=4.251692e-01, rank=2
gamma=0.200000, res=5.112082e-01, rank=1
gamma=0.300000, res=5.298432e-01, rank=1
gamma=0.400000, res=5.592686e-01, rank=1
gamma=0.500000, res=6.045702e-01, rank=1
gamma=0.600000, res=6.764402e-01, rank=1
gamma=0.700000, res=8.009913e-01, rank=1
gamma=0.800000, res=1.062385e+00, rank=1
gamma=0.900000, res=1.129513e+00, rank=0
gamma=1.000000, res=1.129513e+00, rank=0