11.3 2D Total Variation

This case study is based mainly on the paper by Goldfarb and Yin [GY05].

Mathematical Formulation

We are given an $$n\times m$$ grid and for each cell $$(i,j)$$ an observed value $$f_{ij}$$ that can be expressed as

$f_{ij} = u_{ij} + v_{ij},$

where $$u_{ij}\in[0,1]$$ is the actual signal value and $$v_{ij}$$ is the noise. The aim is to reconstruct $$u$$ by subtracting the noise from the observations.

We assume that the $$2$$-norm of the overall noise is bounded; the corresponding constraint is

$\| u - f\|_2 \leq \sigma$

which translates into a simple conic quadratic constraint as

$(\sigma, u-f ) \in \Q.$

We aim to minimize the change in signal value when moving between adjacent cells. To this end we define the adjacent differences vector as

(11.15)$\begin{split}\partial^+_{ij} = \left( \begin{array}{l} \partial^x_{ij}\\ \partial^y_{ij}\\ \end{array}\right)= \left( \begin{array}{l} u_{i+1,j} - u_{i,j} \\ u_{i,j+1} - u_{i,j} \end{array}\right),\end{split}$

for each cell $$1\leq i\leq n$$, $$1\leq j\leq m$$ (we take $$\partial^x_{ij}=0$$ on the bottom boundary $$i=n$$ and $$\partial^y_{ij}=0$$ on the right boundary $$j=m$$).

For each cell we want to minimize the norm of $$\partial^+_{ij}$$, and therefore we introduce auxiliary variables $$t_{ij}$$ such that

$t_{ij} \geq \| \partial^+_{ij}\|_2 \quad\mathrm{or} \quad \left( t_{ij} , \partial^+_{ij} \right) \in \Q,$

and minimize the sum of all $$t_{ij}$$.

The complete model takes the form:

(11.16)$\begin{split}\begin{array}{lll} \min & \sum_{i,j} t_{ij}, & \\ & & \\ \mathrm{s.t.} & \partial^+_{ij} = \left( u_{i+1,j} - u_{i,j}, u_{i,j+1} - u_{i,j} \right)^T, & \forall\, 1\leq i\leq n,\ 1\leq j\leq m, \\ & & \\ & \left( t_{ij}, \partial^+_{ij} \right) \in \Q^3, & \forall\, 1\leq i\leq n,\ 1\leq j\leq m, \\ & \left( \sigma, u-f \right) \in \Q^{nm+1}, & \\ & & \\ & u_{ij}\in [0,1], & \forall\, 1\leq i\leq n,\ 1\leq j\leq m. \end{array}\end{split}$

Implementation

The Fusion implementation of model (11.16) uses variable and expression slices.

We start by creating the optimization model and the variables t and u. Since we intend to solve the problem many times with varying input data, we define $$\sigma$$ and $$f$$ as parameters:

  Model::t M = new Model("TV");

Variable::t u = M->variable("u", new_array_ptr<int, 1>({n + 1, m + 1}), Domain::inRange(0.0, 1.0));
Variable::t t = M->variable("t", new_array_ptr<int, 1>({n, m}), Domain::unbounded());

// In this example we define sigma and the input image f as parameters
// to demonstrate how to solve the same model with many data variants.
// Of course they could simply be passed as ordinary arrays if that is not needed.
Parameter::t sigma = M->parameter("sigma");
Parameter::t f = M->parameter("f", n, m);


Note that the dimensions of u are larger than those of the grid, to accommodate the boundary conditions later. The actual cells of the grid are defined as a slice of u:

  Variable::t ucore = u->slice(new_array_ptr<int, 1>({0, 0}), new_array_ptr<int, 1>({n, m}));


The next step is to define the partial variation along each axis, as in (11.15):

  Expression::t deltax = Expr::sub( u->slice( new_array_ptr<int, 1>({1, 0}), new_array_ptr<int, 1>({n + 1, m}) ), ucore );
Expression::t deltay = Expr::sub( u->slice( new_array_ptr<int, 1>({0, 1}), new_array_ptr<int, 1>({n, m + 1}) ), ucore );


Slices are created on the fly since they will not be reused. Now we can set the conic constraints on the norms of the difference vectors. To this end we stack the variable t and the expressions deltax and deltay along a new third dimension and demand that each length-$$3$$ fiber of the resulting $$n\times m\times 3$$ expression belongs to a quadratic cone.

  M->constraint( Expr::stack(2, t, deltax, deltay), Domain::inQCone()->axis(2) );


We now need to bound the norm of the noise. This can be achieved with a conic constraint on $$\sigma$$ stacked with the flattened difference of f and ucore:

  M->constraint( Expr::vstack(sigma, Expr::flatten( Expr::sub(f,  ucore) ) ),
Domain::inQCone() );


The objective function is the sum of all $$t_{ij}$$:

  M->objective( ObjectiveSense::Minimize, Expr::sum(t) );


Example

Consider the linear signal $$u_{ij}=\frac{i+j}{n+m}$$ and its modification with random Gaussian noise, as in Fig. 11.4. Various reconstructions of $$u$$, obtained with different values of $$\sigma$$, are shown in Fig. 11.5 (where $$\bar{\sigma}=\sigma/nm$$ is the relative noise bound per cell).

Source code

Listing 11.9 The Fusion implementation of model (11.16).
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

#include "fusion.h"

using namespace mosek::fusion;
using namespace monty;

Model::t total_var(int n, int m) {
  Model::t M = new Model("TV");

  Variable::t u = M->variable("u", new_array_ptr<int, 1>({n + 1, m + 1}), Domain::inRange(0.0, 1.0));
  Variable::t t = M->variable("t", new_array_ptr<int, 1>({n, m}), Domain::unbounded());

  // In this example we define sigma and the input image f as parameters
  // to demonstrate how to solve the same model with many data variants.
  // Of course they could simply be passed as ordinary arrays if that is not needed.
  Parameter::t sigma = M->parameter("sigma");
  Parameter::t f = M->parameter("f", n, m);

  Variable::t ucore = u->slice(new_array_ptr<int, 1>({0, 0}), new_array_ptr<int, 1>({n, m}));

  Expression::t deltax = Expr::sub( u->slice( new_array_ptr<int, 1>({1, 0}), new_array_ptr<int, 1>({n + 1, m}) ), ucore );
  Expression::t deltay = Expr::sub( u->slice( new_array_ptr<int, 1>({0, 1}), new_array_ptr<int, 1>({n, m + 1}) ), ucore );

  M->constraint( Expr::stack(2, t, deltax, deltay), Domain::inQCone()->axis(2) );

  M->constraint( Expr::vstack(sigma, Expr::flatten( Expr::sub(f, ucore) ) ),
                 Domain::inQCone() );

  M->objective( ObjectiveSense::Minimize, Expr::sum(t) );

  return M;
}

int main(int argc, char ** argv)
{
  std::normal_distribution<double> ndistr(0., 1.);
  std::mt19937 engine(0);

  int n = 100;
  int m = 200;
  std::vector<double> sigmas({ 0.0004, 0.0005, 0.0006 });

  // Create a parametrized model with given shape
  Model::t M = total_var(n, m);
  Parameter::t sigma = M->getParameter("sigma");
  Parameter::t f     = M->getParameter("f");
  Variable::t  ucore = M->getVariable("u")->slice(new_array_ptr<int, 1>({0, 0}), new_array_ptr<int, 1>({n, m}));

  // Example: Linear signal with Gaussian noise
  std::vector<std::vector<double>> signal(n, std::vector<double>(m));
  std::vector<std::vector<double>> noise(n, std::vector<double>(m));
  std::vector<std::vector<double>> fVal(n, std::vector<double>(m));
  std::vector<std::vector<double>> sol(n, std::vector<double>(m));

  for(int i = 0; i < n; i++) for(int j = 0; j < m; j++) {
    signal[i][j] = 1.0 * (i + j) / (n + m);
    noise[i][j]  = ndistr(engine) * 0.08;
    fVal[i][j]   = std::max( std::min(1.0, signal[i][j] + noise[i][j]), .0 );
  }

  // Set value for f
  f->setValue(new_array_ptr(fVal));

  for(int iter = 0; iter < 3; iter++) {
    // Set new value for sigma and solve
    sigma->setValue(sigmas[iter] * n * m);

    M->solve();

    // Retrieve solution from ucore (row-major order: entry (i,j) is at i*m + j)
    auto ucoreLev = *(ucore->level());
    for(int i = 0; i < n; i++) for(int j = 0; j < m; j++)
      sol[i][j] = ucoreLev[i * m + j];

    // Use the solution
    // ...

    std::cout << "rel_sigma = " << sigmas[iter] << " total_var = " << M->primalObjValue() << std::endl;
  }

  M->dispose();

  return 0;
}