10.3 2D Total Variation

This case study is based mainly on the paper by Goldfarb and Yin [GY05].

Mathematical Formulation

We are given an \(n\times m\) grid and, for each cell \((i,j)\), an observed value \(f_{ij}\) that can be expressed as

\[f_{ij} = u_{ij} + v_{ij},\]

where \(u_{ij}\in[0,1]\) is the actual signal value and \(v_{ij}\) is the noise. The aim is to reconstruct \(u\) by subtracting the noise from the observations.

We assume that the \(2\)-norm of the overall noise is bounded; the corresponding constraint is

\[\| u - f\|_2 \leq \sigma\]

which translates directly into the conic quadratic constraint

\[(\sigma, u-f ) \in \Q.\]

We aim to minimize the change in signal value when moving between adjacent cells. To this end we define the adjacent differences vector as

(1)\[\begin{split}\partial^+_{ij} = \left( \begin{array}{l} \partial^x_{ij}\\ \partial^y_{ij}\\ \end{array}\right)= \left( \begin{array}{l} u_{i+1,j} - u_{i,j} \\ u_{i,j+1} - u_{i,j} \end{array}\right),\end{split}\]

for each cell \(1\leq i\leq n\), \(1\leq j\leq m\) (we assume that the respective coordinates \(\partial^x_{ij}\) and \(\partial^y_{ij}\) are zero on the right and bottom boundary of the grid).
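As a small concrete example, on a \(2\times 2\) grid the definition together with this boundary convention gives

\[\partial^+_{11} = \left( \begin{array}{c} u_{21}-u_{11}\\ u_{12}-u_{11} \end{array}\right),\quad \partial^+_{12} = \left( \begin{array}{c} u_{22}-u_{12}\\ 0 \end{array}\right),\quad \partial^+_{21} = \left( \begin{array}{c} 0\\ u_{22}-u_{21} \end{array}\right),\quad \partial^+_{22} = \left( \begin{array}{c} 0\\ 0 \end{array}\right),\]

so the difference vectors on the last row and column are padded with zeros.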

For each cell we want to minimize the norm of \(\partial^+_{ij}\), and therefore we introduce auxiliary variables \(t_{ij}\) such that

\[t_{ij} \geq \| \partial^+_{ij}\|_2 \quad\mathrm{or} \quad \left( t_{ij} , \partial^+_{ij} \right) \in \Q,\]

and minimize the sum of all \(t_{ij}\).

The complete model takes the form:

(2)\[\begin{split}\begin{array}{lll} \min & \sum_{i,j} t_{ij}, & \\ & & \\ \mathrm{s.t.} & \partial^+_{ij} = \left( u_{i+1,j} - u_{i,j}, u_{i,j+1} - u_{i,j} \right)^T,& \forall\, 1\leq i\leq n,\ 1\leq j\leq m, \\ & & \\ & \left( t_{ij}, \partial^+_{ij} \right) \in \Q^3, & \forall\, 1\leq i\leq n,\ 1\leq j\leq m, \\ & \left( \sigma, u-f \right) \in \Q^{nm+1}, & \\ & & \\ & u_{i,j}\in [0,1], & \forall\, 1\leq i\leq n,\ 1\leq j\leq m. \end{array}\end{split}\]

Implementation

The Fusion implementation of model (2) uses variable and expression slices.

We start by creating the optimization model and the variables t and u:

    Model M = new Model("TV");
    try {
      Variable u = M.variable(new int[] {nrows + 1, ncols + 1}, Domain.inRange(0., 1.0));
      Variable t = M.variable(new int[] {nrows, ncols}, Domain.unbounded());

Note that the dimensions of u are larger than those of the grid, to accommodate the boundary conditions later. The actual cells of the grid are defined as a slice of u:

      Variable ucore = u.slice(new int[] {0, 0}, new int[] {nrows, ncols});
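The slice arguments follow a half-open convention: the first index is inclusive and the second is exclusive, so the slice above covers rows 0 to nrows-1 and columns 0 to ncols-1. The following plain-Java sketch illustrates the same indexing convention on an ordinary 2D array; the helper slice2D is hypothetical and only mirrors the convention, it is not part of Fusion:

```java
// Hypothetical helper: mimics the half-open [first, last) slicing
// convention used by Variable.slice, applied to a plain 2D array.
public class SliceDemo {
    static double[][] slice2D(double[][] a, int[] first, int[] last) {
        double[][] out = new double[last[0] - first[0]][last[1] - first[1]];
        for (int i = first[0]; i < last[0]; i++)
            for (int j = first[1]; j < last[1]; j++)
                out[i - first[0]][j - first[1]] = a[i][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        // [0,0] inclusive to [2,2] exclusive -> the top-left 2x2 block.
        double[][] core = slice2D(a, new int[] {0, 0}, new int[] {2, 2});
        System.out.println(core[0][0] + " " + core[1][1]); // prints "1.0 5.0"
    }
}
```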

The next step is to define the partial variation along each axis, as in (1):

      Expression deltax = Expr.sub( u.slice( new int[] {1, 0}, new int[] {nrows + 1, ncols} ), ucore );
      Expression deltay = Expr.sub( u.slice( new int[] {0, 1}, new int[] {nrows, ncols + 1} ), ucore );

Slices are created on the fly since they will not be reused. Now we can impose the conic constraints on the norm of the total variation. To this end we stack the variables t, deltax and deltay together and demand that each row of the resulting matrix belongs to a quadratic cone.

      M.constraint( Expr.stack(2, t, deltax, deltay), Domain.inQCone().axis(2) );
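To see what this stacked constraint means numerically, the following plain-Java sketch (independent of Fusion, with made-up values) checks the per-cell condition \(t_{ij} \geq \sqrt{(\partial^x_{ij})^2 + (\partial^y_{ij})^2}\) that each stacked triple must satisfy:

```java
// Plain-Java check of the per-cell quadratic-cone condition
// t >= sqrt(dx^2 + dy^2), i.e. (t, dx, dy) in Q^3.
public class ConeCheck {
    static boolean inQCone(double t, double dx, double dy) {
        return t >= Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        // Hypothetical values for a single cell.
        System.out.println(inQCone(5.0, 3.0, 4.0)); // prints "true":  5 >= 5
        System.out.println(inQCone(4.9, 3.0, 4.0)); // prints "false": 4.9 < 5
    }
}
```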

We now need to bound the norm of the noise. This can be achieved with a conic constraint using f as a one-dimensional array:

      M.constraint( Expr.vstack(sigma, Expr.flatten( Expr.sub( Matrix.dense(nrows, ncols, f),  ucore ) ) ),
                    Domain.inQCone() );
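Here Expr.flatten turns the nrows-by-ncols residual into a single vector so it can be stacked under sigma. Assuming row-major order, entry \((i,j)\) of the matrix maps to position \(i\cdot \mathtt{ncols}+j\) of the vector; a plain-Java sketch of that index mapping (the helper is hypothetical, not a Fusion call):

```java
// Hypothetical sketch of row-major flattening: entry (i,j) of an
// nrows x ncols matrix maps to position i*ncols + j of the vector.
public class FlattenDemo {
    static double[] flatten(double[][] a) {
        int nrows = a.length, ncols = a[0].length;
        double[] v = new double[nrows * ncols];
        for (int i = 0; i < nrows; i++)
            for (int j = 0; j < ncols; j++)
                v[i * ncols + j] = a[i][j];
        return v;
    }

    public static void main(String[] args) {
        double[] v = flatten(new double[][] {{1, 2}, {3, 4}});
        System.out.println(v[2]); // prints "3.0"
    }
}
```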

The objective function is the sum of all \(t_{ij}\):

      M.objective( ObjectiveSense.Minimize, Expr.sum(t) );

Example

Consider the linear signal \(u_{ij}=\frac{i+j}{n+m}\) and its modification with random Gaussian noise, as in Fig. 10.3. Various reconstructions of \(u\), obtained with different values of \(\sigma\), are shown in Fig. 10.4 (where \(\bar{\sigma}=\sigma/nm\) is the relative noise bound per cell).
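A test instance of this kind can be generated along the following lines. This is a minimal sketch (class and method names are our own, and the indexing is zero-based): it produces the linear signal with additive Gaussian noise and clips the result to \([0,1]\) so it remains a valid input:

```java
import java.util.Random;

// Generates observations f[i*m + j] = clip(u_ij + noise), where
// u_ij = (i+j)/(n+m) is the linear signal and the noise is Gaussian
// with standard deviation noiseStd.
public class SignalGen {
    static double[] makeSignal(int n, int m, double noiseStd, long seed) {
        Random rnd = new Random(seed);
        double[] f = new double[n * m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double u = (double) (i + j) / (n + m);
                double v = noiseStd * rnd.nextGaussian();
                // Clip to [0,1] so f stays in the signal range.
                f[i * m + j] = Math.min(1.0, Math.max(0.0, u + v));
            }
        return f;
    }

    public static void main(String[] args) {
        double[] f = makeSignal(100, 100, 0.05, 0);
        System.out.println(f.length); // prints "10000"
    }
}
```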

_images/total_var.png

Fig. 10.3 A linear signal and its modification with random Gaussian noise.

_images/total_var_sol.png

Fig. 10.4 Three reconstructions of the linear signal obtained for \(\bar{\sigma}\in\{0.0004,0.0005,0.0006\}\), respectively.

Source code

Listing 10.6 The Fusion implementation of model (2).
    Model M = new Model("TV");
    try {
      Variable u = M.variable(new int[] {nrows + 1, ncols + 1}, Domain.inRange(0., 1.0));
      Variable t = M.variable(new int[] {nrows, ncols}, Domain.unbounded());

      Variable ucore = u.slice(new int[] {0, 0}, new int[] {nrows, ncols});

      Expression deltax = Expr.sub( u.slice( new int[] {1, 0}, new int[] {nrows + 1, ncols} ), ucore );
      Expression deltay = Expr.sub( u.slice( new int[] {0, 1}, new int[] {nrows, ncols + 1} ), ucore );

      M.constraint( Expr.stack(2, t, deltax, deltay), Domain.inQCone().axis(2) );

      M.constraint( Expr.vstack(sigma, Expr.flatten( Expr.sub( Matrix.dense(nrows, ncols, f),  ucore ) ) ),
                    Domain.inQCone() );

      M.objective( ObjectiveSense.Minimize, Expr.sum(t) );

      M.setLogHandler(new java.io.PrintWriter(System.out));
      M.solve();
    } finally {
      M.dispose();
    }