# 9.2 Geometric (posynomial) Optimization¶

## 9.2.1 Problem Definition¶

A geometric optimization problem can be stated as follows

(1)$\begin{split}\begin{array} {lccll} \mbox{minimize} & \sum_{k \in J_0} c_k \prod_{j=0}^{n-1} t_j^{a_{kj}} & & & \\ \mbox{subject to} & \sum_{k \in J_i} c_k \prod_{j=0}^{n-1} t_j^{a_{kj}} & \leq & 1, & i=1,\ldots ,m, \\ & t > 0, & & & \end{array}\end{split}$

where it is assumed that

$\cup_{k=0}^{m} J_k = \{1,\ldots ,T\}$

and if $$i \neq j$$, then

$J_i \cap J_j = \emptyset .$

Hence, $$A$$ is a $$T \times n$$ matrix and $$c$$ is a vector of length $$T$$. If $$c_k>0$$, then

$c_k \prod_{j=0}^{n-1} t_j^{a_{kj}}$

is called a monomial. A sum of monomials, i.e.

$\sum_{k \in J_i} c_k \prod_{j=0}^{n-1} t_j^{a_{kj}}$

is called a posynomial. In general, problem (1) is very hard to solve. However, the posynomial case, where it is required that

$c > 0$

is relatively easy. The reason is that a simple variable transformation turns the problem into a convex optimization problem. Indeed, using the transformation

$t_j = e^{x_j}$

we obtain the problem

(2)$\begin{split}\begin{array} {lccll} \mbox{minimize} & \sum_{k \in J_0} c_k e^{\sum_{j=0}^{n-1} a_{kj}x_j} & & & \\ \mbox{subject to} & \sum_{k \in J_i} c_k e^{\sum_{j=0}^{n-1} a_{kj}x_j} & \leq & 1, & i=1,\ldots ,m, \end{array}\end{split}$

which is a convex optimization problem that can be solved using MOSEK. We will call

$c_t e^{ \left\lbrace \sum_{j=0}^{n-1} a_{tj}x_j \right\rbrace} = e^{\left\lbrace \log (c_{t}) + \sum_{j=0}^{n-1} a_{tj}x_j \right\rbrace}$

a term and hence the number of terms is $$T$$.
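The equivalence of the two forms of a term can be verified numerically. The following sketch (in plain Python for self-containment, with made-up data $$c=3$$, $$a=(2,-0.5)$$ and an arbitrary positive point $$t$$) evaluates one term both as $$c \prod_j t_j^{a_{kj}}$$ and as $$e^{\log c + \sum_j a_{kj} x_j}$$ under the substitution $$t_j = e^{x_j}$$:

```python
import math

# Hypothetical data for one term c * t0^a0 * t1^a1: c = 3, a = (2, -0.5).
c, a = 3.0, (2.0, -0.5)
t = (1.5, 4.0)                        # any positive point
x = tuple(math.log(tj) for tj in t)   # the substitution t_j = e^{x_j}

term_t = c * t[0]**a[0] * t[1]**a[1]
term_x = math.exp(math.log(c) + a[0]*x[0] + a[1]*x[1])

print(abs(term_t - term_x) < 1e-12)   # the two forms agree
```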

As stated, problem (2) is non-separable. However, using

$v_{t} = \log (c_{t}) + \sum_{j=0}^{n-1} a_{tj}x_j$

we obtain the separable problem

$\begin{split}\begin{array} {lccll} \mbox{minimize} & \sum_{t \in J_0} e^{v_t} & & & \\ \mbox{subject to} & \sum_{t \in J_i} e^{v_t} & \leq & 1, & i=1,\ldots ,m, \\ & \sum_{j=0}^{n-1} a_{tj} x_j - v_t & = & -\log (c_t), & t=1,\ldots ,T, \end{array}\end{split}$

which is a separable convex optimization problem.

A warning about this approach: the exponential function $$e^x$$ is numerically well-defined only for values of $$x$$ in a modest interval around $$0$$, since $$e^x$$ grows very rapidly as $$x$$ increases. Numerical problems may therefore arise when solving the problem in this form.
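The limited range of $$e^x$$ is easy to demonstrate. In IEEE double precision, $$e^x$$ overflows just above $$x \approx 709.78$$, as this small Python check shows:

```python
import math

# e^x is representable in double precision only for x below ~709.78.
finite = math.exp(709)        # still finite, roughly 8.2e307

overflowed = False
try:
    math.exp(710)             # exceeds the largest representable double
except OverflowError:
    overflowed = True

print(finite < float("inf"), overflowed)
```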

**Applications**

A large number of practical applications, particularly in electrical circuit design, can be cast as a geometric optimization problem. We will not review these applications here but rather refer the reader to [BKVH04] and the references therein.

**Further Information**

**Modeling tricks**

Many tricks for modeling posynomial optimization problems are described in [BKVH04]. Therefore, in this section we cover only one important case.

### 9.2.1.1 Equalities¶

In general, equalities are not allowed in (1), i.e.

$\sum_{k \in J_i} c_k \prod_{j=0}^{n-1} t_j^{a_{kj}} = 1$

is not allowed. However, a monomial equality poses no problem. Indeed, consider the example

$xyz^{-1} = 1$

of a monomial equality. The equality is identical to

$1 \leq xyz^{-1} \leq 1$

which in turn is identical to the two inequalities

$\begin{split}\begin{array} {lclcl} xyz^{-1} & & & \leq & 1, \\ \frac{1}{xyz^{-1}} & = & x^{-1} y^{-1} z & \leq & 1. \end{array}\end{split}$

Hence, it is possible to model a monomial equality using two inequalities.
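A quick numeric spot-check (in plain Python, with sample points chosen for illustration) confirms the reformulation: at a point satisfying $$xyz^{-1}=1$$ both inequalities hold, while at a point violating the equality the first inequality fails:

```python
# The monomial equality x*y/z = 1 rewritten as the two inequalities
# x*y/z <= 1 and (1/x)*(1/y)*z <= 1.
def ineqs(x, y, z):
    return (x * y / z <= 1 + 1e-12,
            (1.0 / x) * (1.0 / y) * z <= 1 + 1e-12)

# A point on the equality: x=2, y=3, z=6 gives x*y/z = 1.
both_on = ineqs(2.0, 3.0, 6.0)
# A point off it: x=2, y=3, z=3 gives x*y/z = 2, so the first must fail.
off = ineqs(2.0, 3.0, 3.0)

print(both_on)   # both inequalities satisfied
print(off)       # first violated, second satisfied
```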

## 9.2.2 Problematic Formulations¶

Certain formulations of geometric optimization problems may cause problems for the algorithms implemented in MOSEK. Basically there are two kinds of problems that may occur:

• The solution vector is finite, but the optimal objective value can only be approximated.
• The optimal objective value is finite but implies that a variable in the solution is infinite.

### 9.2.2.1 Finite Unattainable Solution¶

The following problem illustrates an unattainable solution:

$\begin{split}\begin{array} {lclcl} \mbox{minimize} & x^2 y & & & \\ \mbox{subject to} & x y & \leq & 1, & \\ & x,y > 0. & & & \end{array}\end{split}$

Clearly, the optimal objective value is $$0$$, but because of the constraint $$x,y>0$$ this value can never be attained. To see why this is a problem, recall that MOSEK substitutes $$x=e^{t_x}$$ and $$y=e^{t_y}$$ and solves the problem as

$\begin{split}\begin{array} {lclc} \mbox{minimize} & e^{2t_x} e^{t_y} & & \\ \mbox{subject to} & e^{t_x} e^{t_y} & \leq & 1, \\ & t_x,t_y \in \real. & & \end{array}\end{split}$

The optimal solution implies that $$t_x=-\infty$$ or $$t_y=-\infty$$, and thus it is unattainable.

Now the issue should be clear: if a variable $$x$$ appears only with positive exponents, then letting $$x \rightarrow 0$$ decreases every term in which it appears, but the limit $$x=0$$ is excluded by the constraint $$x>0$$, so such a solution cannot be attained.
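The behavior can be observed numerically. Along the feasible curve $$y = 1/x$$ (so that $$xy = 1$$), the objective $$x^2 y$$ reduces to $$x$$, which can be driven arbitrarily close to $$0$$ but never reaches it. A small Python illustration:

```python
# Along y = 1/x the constraint x*y <= 1 holds with equality, and the
# objective x^2 * y equals x: it shrinks toward 0 but stays positive.
values = [(x, x**2 * (1.0 / x)) for x in (1.0, 1e-2, 1e-4, 1e-8)]
for x, obj in values:
    print(x, obj)
```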

### 9.2.2.2 Infinite Solution¶

A similar problem will occur if a finite optimal objective value requires a variable to be infinite. This can be illustrated by the following example:

$\begin{split}\begin{array} {lclc} \mbox{minimize} & x^{-2} & & \\ \mbox{subject to} & x^{-1} & \leq & 1, \\ & x > 0, & & \end{array}\end{split}$

which is a valid geometric programming problem. In this case the optimal objective is $$0$$, but this requires $$x=\infty$$, which is unattainable.

Again, this situation occurs if a variable $$x$$ appears only with negative exponents in the problem, implying that every term in which it appears is minimized as $$x\rightarrow \infty$$.
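This case, too, is easy to see numerically: for every $$x \geq 1$$ the constraint $$x^{-1} \leq 1$$ is satisfied, while the objective $$x^{-2}$$ tends to $$0$$ as $$x$$ grows without any finite $$x$$ attaining it. A short Python illustration:

```python
# For x >= 1 the constraint x^{-1} <= 1 holds; the objective x^{-2}
# approaches 0 as x grows but never reaches it at a finite point.
points = [(x, x**-1, x**-2) for x in (1.0, 1e2, 1e4, 1e8)]
for x, con, obj in points:
    assert con <= 1.0          # every sample point is feasible
    print(x, obj)
```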

## 9.2.3 An Example¶

Consider the example

$\begin{split}\begin{array} {lclc} \mbox{minimize} & x^{-1} y & & \\ \mbox{subject to} & x^2 y^{-\half} + 3 y^{\half} z^{-1} & \leq & 1, \\ & x y^{-1} & = & z^2, \\ & -x & \leq & - \frac{1}{10}, \\ & x & \leq & 3, \\ & x,y,z > 0, & & \end{array}\end{split}$

which is not a geometric optimization problem. However, using the obvious transformations we obtain the problem

(3)$\begin{split}\begin{array} {lcll} \mbox{minimize} & x^{-1} y & & \\ \mbox{subject to} & x^2 y^{-\half} + 3 y^{\half} z^{-1} & \leq & 1, \\ & x y^{-1} z^{-2} & \leq & 1, \\ & x^{-1} y z^2 & \leq & 1, \\ & \frac{1}{10} x^{-1} & \leq & 1, \\ & \frac{1}{3} x & \leq & 1, \\ & x,y,z > 0, & & \end{array}\end{split}$

which is a geometric optimization problem.
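The rewriting can be spot-checked at a sample point. Taking $$x=1$$, $$y=4$$, $$z=0.5$$ (chosen so that the original equality $$xy^{-1}=z^2$$ holds), the two monomial inequalities replacing the equality hold with equality, and the rewritten bound constraints are satisfied. A small Python check (the sample point is made up for illustration):

```python
# Sample point satisfying the original equality x * y^{-1} = z^2:
# x/y = 0.25 and z^2 = 0.25.
x, y, z = 1.0, 4.0, 0.5

eq_orig = abs(x / y - z**2) < 1e-12
pair    = (x * y**-1 * z**-2 <= 1 + 1e-12,   # from x*y^{-1} <= z^2
           x**-1 * y * z**2 <= 1 + 1e-12)    # from x*y^{-1} >= z^2

# The bound constraints become monomial inequalities:
lb = 0.1 * x**-1 <= 1     # from -x <= -1/10
ub = x / 3.0     <= 1     # from x <= 3

print(eq_orig, pair, lb, ub)
```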

## 9.2.4 Solving the Example¶

The problem (3) can be defined and solved in the MOSEK toolbox as shown in Listing 33.

Listing 33 Script implementing problem (3).
function go2()
c     = [1 1 3 1 1 0.1 1/3]';
a     = sparse([[-1    1    0];
                [ 2 -0.5    0];
                [ 0  0.5   -1];
                [ 1   -1   -2];
                [-1    1    2];
                [-1    0    0];
                [ 1    0    0]]);

map   = [0 1 1 2 3 4 5]';
[res] = mskgpopt(c,a,map);

fprintf('\nPrimal optimal solution to original gp:');
fprintf(' %e',exp(res.sol.itr.xx));
fprintf('\n\n');

% Compute the optimal objective value and
% the constraint activities.
v = c.*exp(a*res.sol.itr.xx);

f = sparse(map+1,1:7,ones(size(map)))*v;

% First objective value. Then constraint values.
fprintf('Objective value: %e\n',log(f(1)));
fprintf('Constraint values:');
fprintf(' %e',log(f(2:end)));
fprintf('\n\n');

% Dual multipliers (should be negative)
fprintf('Dual variables (should be negative):');
fprintf(' %e',res.sol.itr.y);
fprintf('\n\n');


## 9.2.5 Exporting to a File¶

It’s possible to write a geometric optimization problem to a file with the command:

mskgpwri(c,a,map,filename)


This file format is compatible with the mskenopt command line tool. See the MOSEK Tools User’s manual for details on mskenopt. This file format can be useful for sending debug information to MOSEK or for testing. It’s also possible to read the above format with the command:

[c,a,map] = mskgpread(filename)