# 7.9 Model Parametrization and Reoptimization

This tutorial demonstrates how to construct a model with a fixed structure and reoptimize it by changing some of the input data. If you instead want to dynamically modify the model structure between optimizations, by adding variables, constraints and so on, see the other reoptimization tutorial, Sec. 7.10 (Problem Modification and Reoptimization).

For this tutorial we solve the following variant of linear regression with elastic net regularization:

$\minimize_x\ \|Ax-b\|_2+\lambda_1\|x\|_1+\lambda_2\|x\|_2$

where $$A\in\real^{m\times n}$$, $$b\in\real^{m}$$. The optimization variable is $$x\in\real^n$$ and $$\lambda_1,\lambda_2$$ are two nonnegative numbers that control the tradeoff between the linear regression objective, the lasso ($$\ell_1$$-norm) penalty and the ridge ($$\ell_2$$-norm) regularization. A representation of this problem compatible with the MOSEK input format is

$\begin{split}\begin{array}{ll} \minimize & t + \lambda_1 \sum_i p_i + \lambda_2 q \\ \st & (t,Ax-b)\in \Q^{m+1}, \\ & p_i\geq |x_i|,\ i=1,\ldots,n, \\ & (q,x)\in\Q^{n+1}. \end{array}\end{split}$
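
The conic model is equivalent to the original problem because each cone membership is precisely the epigraph of a norm:

$\begin{split}\begin{array}{rcl} (t,Ax-b)\in\Q^{m+1} & \iff & t\geq \|Ax-b\|_2, \\ (p_i,x_i)\in\Q^{2} & \iff & p_i\geq |x_i|, \\ (q,x)\in\Q^{n+1} & \iff & q\geq \|x\|_2, \end{array}\end{split}$

and at the optimum each of $$t,p_i,q$$ is pushed down to equality, so the objective $$t+\lambda_1\sum_i p_i+\lambda_2 q$$ equals $$\|Ax-b\|_2+\lambda_1\|x\|_1+\lambda_2\|x\|_2$$.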

## 7.9.1 Creating a model

Before creating a parametrized model we should analyze which parts of the model are fixed once and for all, and which parts we intend to change between optimizations. Here we make the following assumptions:

• the matrix $$A$$ will not change,

• we want to solve the problem for many target vectors $$b$$,

• we want to experiment with different tradeoffs $$\lambda_1, \lambda_2$$.

That leads us to construct the model with $$A$$ provided from the start as fixed input and declare $$b,\lambda_1,\lambda_2$$ as parameters. The initial model construction is shown below. Parameters are objects of type Parameter, created with the method Model.parameter. We exploit the fact that parameters can have shapes, just like variables and expressions, and that they can be used everywhere within an expression where a constant of the same shape would be suitable.

Listing 7.36 Constructing a parametrized model.
Model::t initializeModel(int m, int n, std::shared_ptr<ndarray<double,2>> A) {
  Model::t M = new Model();
  auto x = M->variable("x", n);

  // t >= |Ax-b|_2 where b is a parameter
  auto b = M->parameter("b", m);
  auto t = M->variable();
  M->constraint(Expr::vstack(t, Expr::sub(Expr::mul(A, x), b)), Domain::inQCone());

  // p_i >= |x_i|, i=1..n
  auto p = M->variable(n);
  M->constraint(Expr::hstack(p, x), Domain::inQCone());

  // q >= |x|_2
  auto q = M->variable();
  M->constraint(Expr::vstack(q, x), Domain::inQCone());

  // Objective, parametrized with lambda1, lambda2
  // t + lambda1*sum(p) + lambda2*q
  auto lambda1 = M->parameter("lambda1");
  auto lambda2 = M->parameter("lambda2");
  auto obj = Expr::add(new_array_ptr<Expression::t, 1>(
      {t, Expr::mul(lambda1, Expr::sum(p)), Expr::mul(lambda2, q)}));
  M->objective(ObjectiveSense::Minimize, obj);

  return M;
}


For the purpose of the example we take

$\begin{split}A = \left[\begin{array}{cc}1 & 2\\ 3 & 4\\-2 & -1 \\ -4 & -3\end{array}\right]\end{split}$

and we initialize the parametrized model:

Listing 7.37 Initializing the model.
// Create a small example
int m = 4;
int n = 2;
auto A = new_array_ptr<double, 2>(
    { { 1.0,  2.0},
      { 3.0,  4.0},
      {-2.0, -1.0},
      {-4.0, -3.0} });

auto M = initializeModel(m, n, A);

// For convenience retrieve some elements of the model
auto b = M->getParameter("b");
auto lambda1 = M->getParameter("lambda1");
auto lambda2 = M->getParameter("lambda2");
auto x = M->getVariable("x");


We made sure to keep references to the interesting elements of the model, in particular the parameter objects whose values we are about to set.

## 7.9.2 Setting parameters

For the first solve we use

$b = [0.1, 1.2, -1.1, 3.0]^T, \ \lambda_1=0.1,\ \lambda_2=0.01.$

Parameters are set with the method Parameter.setValue. We set the parameters and solve the model as follows:

Listing 7.38 Setting parameters and solving the model.
// First solve
b->setValue(new_array_ptr<double, 1>({0.1, 1.2, -1.1, 3.0}));
lambda1->setValue(0.1);
lambda2->setValue(0.01);

M->solve();
auto sol = x->level();
std::cout << "Objective " << M->primalObjValue() << ", solution " << (*sol)[0] << ", " << (*sol)[1] << "\n";


## 7.9.3 Changing parameters

Let us say we now want to increase the weight of the lasso penalty in order to favor sparser solutions. We can simply change that parameter, leave the other ones unchanged, and resolve:

Listing 7.39 Changing a parameter and resolving.
// Increase lambda1
lambda1->setValue(0.5);

M->solve();
sol = x->level();
std::cout << "Objective " << M->primalObjValue() << ", solution " << (*sol)[0] << ", " << (*sol)[1] << "\n";


Next, we might want to solve a few instances of the problem for another value of $$b$$. Again, we reset the relevant parameters and solve:

Listing 7.40 Changing parameters and resolving.
// Now change the data completely
b->setValue(new_array_ptr<double, 1>({1.0, 1.0, 1.0, 1.0}));
lambda1->setValue(0.0);
lambda2->setValue(0.0);

M->solve();
sol = x->level();
std::cout << "Objective " << M->primalObjValue() << ", solution " << (*sol)[0] << ", " << (*sol)[1] << "\n";

// And increase lambda2
lambda2->setValue(1.4145);

M->solve();
sol = x->level();
std::cout << "Objective " << M->primalObjValue() << ", solution " << (*sol)[0] << ", " << (*sol)[1] << "\n";