# 10.3 2D Total Variation

This case study is based mainly on the paper by Goldfarb and Yin [GY05].

## Mathematical Formulation

We are given an $$n\times m$$ grid and for each cell $$(i,j)$$ an observed value $$f_{ij}$$ that can be expressed as

$f_{ij} = u_{ij} + v_{ij},$

where $$u_{ij}\in[0,1]$$ is the actual signal value and $$v_{ij}$$ is the noise. The aim is to reconstruct $$u$$ by subtracting the noise from the observations.

We assume that the $$2$$-norm of the overall noise is bounded; the corresponding constraint is

$\| u - f\|_2 \leq \sigma$

which translates into a simple conic quadratic constraint:

$(\sigma, u-f ) \in \Q^{nm+1}.$

We aim to minimize the change in signal value when moving between adjacent cells. To this end we define the adjacent differences vector as

(1)$\begin{split}\partial^+_{ij} = \left( \begin{array}{l} \partial^x_{ij}\\ \partial^y_{ij}\\ \end{array}\right)= \left( \begin{array}{l} u_{i+1,j} - u_{i,j} \\ u_{i,j+1} - u_{i,j} \end{array}\right),\end{split}$

for each cell $$1\leq i\leq n,\ 1\leq j\leq m$$ (we assume that the coordinates $$\partial^x_{ij}$$ and $$\partial^y_{ij}$$ are zero on the right and bottom boundaries of the grid, respectively).
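
The definition in (1) can be sanity-checked outside of Fusion with a plain NumPy sketch (the function name `forward_differences` is ours, not part of the original code):

```python
import numpy as np

def forward_differences(u):
    """Forward differences of a 2D array u, with the convention
    that they vanish on the right and bottom boundaries."""
    dx = np.zeros_like(u)
    dy = np.zeros_like(u)
    dx[:-1, :] = u[1:, :] - u[:-1, :]   # u[i+1, j] - u[i, j]
    dy[:, :-1] = u[:, 1:] - u[:, :-1]   # u[i, j+1] - u[i, j]
    return dx, dy

u = np.array([[0., 1.], [2., 3.]])
dx, dy = forward_differences(u)
# dx == [[2., 2.], [0., 0.]], dy == [[1., 0.], [1., 0.]]
```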

For each cell we want to minimize the norm of $$\partial^+_{ij}$$, so we introduce auxiliary variables $$t_{ij}$$ such that

$t_{ij} \geq \| \partial^+_{ij}\|_2 \quad\mathrm{or} \quad \left( t_{ij} , \partial^+_{ij} \right) \in \Q,$

and minimize the sum of all $$t_{ij}$$.

The complete model takes the form:

(2)$\begin{split}\begin{array}{lll} \min & \sum_{1\leq i\leq n,\, 1\leq j\leq m} t_{ij}, & \\ & & \\ \mathrm{s.t.} & \partial^+_{ij} = \left( u_{i+1,j} - u_{i,j}, u_{i,j+1} - u_{i,j} \right)^T,& \forall\, 1\leq i\leq n,\ 1\leq j\leq m, \\ & & \\ & \left( t_{ij}, \partial^+_{ij} \right) \in \Q^3, & \forall\, 1\leq i\leq n,\ 1\leq j\leq m, \\ & \left( \sigma, u-f \right) \in \Q^{nm+1}, & \\ & & \\ & u_{ij}\in [0,1], & \forall\, 1\leq i\leq n,\ 1\leq j\leq m. \end{array}\end{split}$

## Implementation

The Fusion implementation of model (2) uses variable and expression slices.

We start by creating the optimization model and the variables t and u:

    with Model('TV') as M:

        u = M.variable([n+1, m+1], Domain.inRange(0., 1.0))
        t = M.variable([n, m], Domain.unbounded())


Note that the dimensions of u are larger than those of the grid, to accommodate the boundary conditions later. The actual cells of the grid are defined as a slice of u:

        ucore = u.slice([0, 0], [n, m])
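
Fusion's `slice(first, last)` takes an inclusive start index and an exclusive end index in each dimension, much like basic NumPy slicing; a quick analogy with a plain array (dimensions chosen arbitrarily):

```python
import numpy as np

n, m = 3, 4
# Stand-in for the (n+1) x (m+1) Fusion variable u
u_full = np.arange((n + 1) * (m + 1), dtype=float).reshape(n + 1, m + 1)

# Analogue of u.slice([0, 0], [n, m]): start inclusive, end exclusive
ucore = u_full[0:n, 0:m]
print(ucore.shape)   # (3, 4)
```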


The next step is to define the partial variation along each axis, as in (1):

        deltax = Expr.sub(u.slice([1, 0], [n+1, m]), ucore)
        deltay = Expr.sub(u.slice([0, 1], [n, m+1]), ucore)


Slices are created on the fly, since they will not be reused. Now we can impose the conic constraints on the norm of the total variation. To this end we stack the variables t, deltax and deltay into an $$n\times m\times 3$$ block and demand that each length-3 fiber along the third dimension belongs to a quadratic cone.

        M.constraint( Expr.stack(2, t, deltax, deltay), Domain.inQCone().axis(2) )

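The effect of `Expr.stack(2, ...)` can be previewed with NumPy's `np.stack`, which likewise creates a new third dimension (a shape-only sketch with dummy arrays):

```python
import numpy as np

n, m = 3, 4
t = np.zeros((n, m))
deltax = np.zeros((n, m))
deltay = np.zeros((n, m))

# Analogue of Expr.stack(2, t, deltax, deltay): an n x m x 3 array
# whose (i, j)-fiber is (t_ij, deltax_ij, deltay_ij)
stacked = np.stack([t, deltax, deltay], axis=2)
print(stacked.shape)   # (3, 4, 3)
```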

We now need to bound the norm of the noise. This can be achieved with a conic constraint that treats the difference between f and ucore as a one-dimensional vector:

        fmat = Matrix.dense(n, m, f)
        M.constraint(Expr.vstack(sigma, Expr.flatten(Expr.sub(fmat, ucore))),
                     Domain.inQCone())


The objective function is the sum of all $$t_{ij}$$:

        M.objective( ObjectiveSense.Minimize, Expr.sum(t) )


## Example

Consider the linear signal $$u_{ij}=\frac{i+j}{n+m}$$ and its modification with random Gaussian noise, as in Fig. 5. Various reconstructions of $$u$$, obtained with different values of $$\sigma$$, are shown in Fig. 6 (where $$\bar{\sigma}=\sigma/nm$$ is the relative noise bound per cell).

Fig. 5 A linear signal and its modification with random Gaussian noise.

Fig. 6 Three reconstructions of the linear signal obtained for $$\bar{\sigma}\in\{0.0004,0.0005,0.0006\}$$, respectively.
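
Input data for this experiment can be generated along the following lines (a sketch under our own assumptions: the noise standard deviation 0.04, the seed, and the row-major flattening of `f` are choices not fixed by the text):

```python
import numpy as np

n, m = 100, 100
rng = np.random.default_rng(0)

# Linear signal u_ij = (i + j) / (n + m)
i, j = np.meshgrid(np.arange(n), np.arange(m), indexing='ij')
signal = (i + j) / (n + m)

# Observations f = u + Gaussian noise, clipped back into [0, 1]
f = np.clip(signal + rng.normal(0.0, 0.04, (n, m)), 0.0, 1.0)

# A relative per-cell bound sigma_bar corresponds to sigma = sigma_bar * n * m
sigma_bar = 0.0005
sigma = sigma_bar * n * m

# denoised = total_var(n, m, list(f.flatten()), sigma)
```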

## Source code

Listing 21 The Fusion implementation of model (2).

    import sys
    from mosek.fusion import *

    def total_var(n, m, f, sigma):
        with Model('TV') as M:
            u = M.variable([n+1, m+1], Domain.inRange(0., 1.0))
            t = M.variable([n, m], Domain.unbounded())

            # Actual grid cells; the extra row and column handle the boundary
            ucore = u.slice([0, 0], [n, m])

            # Forward differences along each axis, as in (1)
            deltax = Expr.sub(u.slice([1, 0], [n+1, m]), ucore)
            deltay = Expr.sub(u.slice([0, 1], [n, m+1]), ucore)

            # t_ij >= ||(deltax_ij, deltay_ij)||_2 for every cell
            M.constraint(Expr.stack(2, t, deltax, deltay), Domain.inQCone().axis(2))

            # ||u - f||_2 <= sigma
            fmat = Matrix.dense(n, m, f)
            M.constraint(Expr.vstack(sigma, Expr.flatten(Expr.sub(fmat, ucore))),
                         Domain.inQCone())

            M.objective(ObjectiveSense.Minimize, Expr.sum(t))
            M.setLogHandler(sys.stdout)
            M.solve()

            return ucore.level()