# 5 Design Overview

*Fusion* is the result of many years of experience in conic optimization. It is a dedicated API for users who want a simpler interface to the solver: both users who regularly solve conic problems, and new users who do not want to be bothered with the technicalities of a low-level optimizer. *Fusion* is designed for fast and clean prototyping of conic problems without excessive performance degradation.

Note that *Fusion* **is** an object-oriented framework for conic optimization but it **is not** a general-purpose modeling language. The main design principles of *Fusion* are:

- **Expressiveness**: we try to make it nice! Despite not being a modeling language, *Fusion* yields readable, easy-to-maintain code that closely resembles the mathematical formulation of the problem.
- **Seamlessly multi-language**: *Fusion* code can be ported across C++, Python, Java and .NET with only minimal adaptations to the syntax of each language.
- **What you write is what MOSEK gets**: a *Fusion* model is fed into the solver with (almost) no additional transformations.

## Expressiveness

Suppose you have a conic quadratic optimization problem, such as the efficient frontier problem from portfolio optimization:

\[
\begin{array}{lrcl}
\mbox{maximize}   & \mu^T x - \alpha \gamma & & \\
\mbox{subject to} & e^T x  & = & w, \\
                  & \gamma & \geq & \| G^T x \|_2, \\
                  & x      & \geq & 0,
\end{array}
\]

where \(\mu, G\) are input data and \(\alpha\) is an input parameter whose value we want to change between many optimizations. Its representation in *Fusion* is a direct translation of the mathematical model and could look as in the following code snippet.

One can express the model very compactly using the arithmetic, comparison and indexing operators `+, -, *, @, [:], .T, >=, ==` etc. familiar from NumPy and other Python packages. They are available upon importing the module `mosek.fusion.pythonic`; see Sec. 14.1.3 (Pythonic extensions) for details and examples.

```
x = M.variable('x', n)   # portfolio weights
gamma = M.variable()     # bound on the risk term
alpha = M.parameter()    # risk-aversion parameter, set before each solve
M.objective(ObjectiveSense.Maximize, x.T @ mu - alpha * gamma)
M.constraint(Expr.sum(x) == w)                               # budget constraint
M.constraint(Expr.vstack(gamma, G.T @ x), Domain.inQCone())  # gamma >= ||G^T x||
M.constraint(x >= 0.0)                                       # no short-selling
```
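For readers unfamiliar with `Domain.inQCone()`: it constrains the stacked vector \((\gamma, G^T x)\) to the quadratic cone, i.e. it enforces \(\gamma \geq \|G^T x\|_2\). A minimal pure-Python sketch of this membership test (illustration only, independent of the *Fusion* API):

```python
import math

def in_qcone(v, tol=1e-9):
    """Check membership of v = (gamma, y_1, ..., y_k) in the quadratic cone,
    i.e. gamma >= sqrt(y_1^2 + ... + y_k^2). Illustrative sketch only."""
    gamma, y = v[0], v[1:]
    return gamma + tol >= math.sqrt(sum(yi * yi for yi in y))

# (gamma, y) with gamma = 5 and ||y|| = sqrt(3^2 + 4^2) = 5: on the boundary
print(in_qcone([5.0, 3.0, 4.0]))   # True
# gamma smaller than ||y||: the constraint is violated
print(in_qcone([4.9, 3.0, 4.0]))   # False
```

This is exactly the scalar inequality that the one-line `Domain.inQCone()` constraint imposes on the stacked expression.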

## Seamless multi-language API

*Fusion* can easily be ported across the supported languages. All functionalities and naming conventions remain the same in each of them. This has several advantages:

- It simplifies code sharing between developers working in different languages.
- It improves code reusability.
- It simplifies the transition from R&D to production (for instance from fast-prototyping languages used in R&D to more efficient ones used for high performance).

Here is the same code snippet (creation of a variable in the model) in all languages supported by *Fusion*. Careful code design can generate models with only the necessary syntactic differences between implementations.

```
auto x = M->variable("x", 3, Domain::greaterThan(0.0)); // C++
```

```
x = M.variable('x', 3, Domain.greaterThan(0.0)) # Python
```

```
Variable x = M.variable("x", 3, Domain.greaterThan(0.0)) // Java
```

```
Variable x = M.Variable("x", 3, Domain.GreaterThan(0.0)) // C#
```

## What You Write is What **MOSEK** Gets

*Fusion* is not a modeling language. Instead it clearly defines the formulation the user must adhere to and only provides functionalities required for that formulation. Users familiar with the concept of DCP (Disciplined Convex Programming) can think of *Fusion* as a language for VDCP - Very Disciplined Convex Programming.

An important upshot is that *Fusion* will not modify the problem provided by the user any more than is required to fit it into the form accepted by the low-level optimizer. In other words, the problem that is actually solved is as close as possible to what the user writes. For example, *Fusion* will transform a multi-dimensional constraint into a sequence of scalar constraints for the linear constraint matrix \(A\), and so on. So, in effect, the *Fusion* mechanism only automates operations that the user would have to carry out anyway (using pencil and paper, presumably). Otherwise the optimizer model is a direct copy of the *Fusion* model.
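As a toy illustration of such a pencil-and-paper transformation (a hypothetical sketch, not *Fusion*'s actual internal code): expanding a matrix constraint \(Ax \leq b\) into one scalar constraint per row amounts to nothing more than reading off the rows:

```python
def scalarize(A, b):
    """Expand the matrix constraint A x <= b into one scalar constraint
    per row, each given as (coefficients, upper bound) -- the row-wise
    form a low-level optimizer stores. Illustrative sketch only."""
    return [(list(row), bound) for row, bound in zip(A, b)]

A = [[1.0, 2.0],
     [0.0, 3.0]]
b = [4.0, 5.0]

# Two scalar constraints: 1.0*x0 + 2.0*x1 <= 4.0  and  0.0*x0 + 3.0*x1 <= 5.0
for coeffs, bound in scalarize(A, b):
    print(coeffs, "<=", bound)
```

No reformulation, substitution or relaxation happens along the way, which is why the solved problem stays recognizably the user's own.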

The main benefits of this approach are:

- The user knows what problem is actually being solved.
- Dual information is readily available for all variables and constraints.
- Only the necessary overhead is incurred.
- Better control over numerical stability.