7.3 Geometric Optimization

A so-called geometric optimization problem can be stated as follows:

(1)\[\begin{split}\begin{array}{lccll} \mbox{minimize} & \sum_{k \in J_0} c_k \prod_{j=1}^n t_j^{a_{kj}} & & &\\ \mbox{subject to} & \sum_{k \in J_i} c_k \prod_{j=1}^n t_j^{a_{kj}} & \leq & 1, & i=1,\ldots ,m,\\ & t>0, & & & \end{array}\end{split}\]

where it is assumed that

\[\cup_{k=0}^m J_k = \{1,\ldots ,T\}\]

and if \(i \neq j\), then

\[J_i \cap J_j = \emptyset .\]

Hence, \(A\) is a \(T\times n\) matrix and \(c\) is a vector of length \(T\). In general, the problem (1) is very hard to solve, but the posynomial case where

\[c > 0\]

is relatively easy. Using the variable transformation

(2)\[t_j = e^{x_j}\]

each term becomes \(c_k \prod_{j=1}^n e^{a_{kj} x_j} = c_k e^{a_{k:}x}\), where \(a_{k:}\) denotes the \(k\)th row of \(A\), and we obtain the problem

\[\begin{split}\begin{array}{lccll} \mbox{minimize} & \sum_{k \in J_0} c_k e^{a_{k:}x} & & &\\ \mbox{subject to} & \sum_{k \in J_i} c_k e^{a_{k:}x} & \leq & 1, & i=1,\ldots ,m,\\ \end{array}\end{split}\]

which is convex in \(x\) for \(c>0\). We apply the \(\log\) function to obtain the equivalent problem

(3)\[\begin{split}\begin{array}{lccll} \mbox{minimize} & \log (\sum_{k \in J_0} c_k e^{a_{k:}x}) & & &\\ \mbox{subject to} & \log (\sum_{k \in J_i} c_k e^{a_{k:}x}) & \leq & \log (1), & i=1,\ldots ,m,\\ \end{array}\end{split}\]

which is also a convex optimization problem since \(\log\) is strictly increasing. Hence, the problem (3) can be solved by MOSEK.

For further details about geometric optimization we refer the reader to [BSS93].

MOSEK cannot handle a geometric optimization problem directly, but the transformed problem (3) can be solved using the MOSEK optimization toolbox function mskgpopt. Note that the solution to the transformed problem is easily converted into a solution to the original geometric optimization problem using relation (2).
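For instance, once mskgpopt has returned a result structure res (as in Listing 21 below), recovering the original variables is a single call to exp; a minimal sketch, using relation (2):

t = exp(res.sol.itr.xx);   % t_j = e^(x_j), cf. (2)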

Subsequently, we will use the example

(4)\[\begin{split} \begin{array}{lccl} \mbox{minimize} & 40 t_1^{-1} t_2^{-\frac{1}{2}} t_3^{-1} + 20 t_1 t_3 + 40 t_1 t_2 t_3 & & \\ \mbox{subject to} & \frac{1}{3} t_1^{-2} t_2^{-2} + \frac{4}{3} t_2^{\frac{1}{2}} t_3^{-1} & \leq & 1, \\ & 0 < t_1,t_2,t_3 & & \end{array}\end{split}\]

to demonstrate how a geometric optimization problem is solved using mskgpopt. Note that both the objective and the constraint functions consist of a sum of simple terms. These terms can be specified completely using the matrix

\[\begin{split}A = \left[ \begin{array}{ccc} -1 & -0.5 & -1\\ 1 & 0 & 1\\ 1 & 1 & 1\\ -2 & -2 & 0\\ 0 & 0.5 & -1 \end{array} \right],\end{split}\]

and the vectors

\[\begin{split}c = \left[ \begin{array}{c} 40\\ 20\\ 40\\ \frac{1}{3} \\ \frac{4}{3} \end{array} \right] \quad \mbox{and}\quad \mbox{map} = \left[ \begin{array}{c} 0 \\ 0\\ 0\\ 1\\ 1 \end{array} \right].\end{split}\]

The interpretation is this: Each row of \(A\) and the corresponding element of \(c\) describe one term, e.g. the first row of \(A\) and the first element of \(c\) describe the first term in the objective function. The vector \(\mbox{map}\) indicates whether a term belongs to the objective or to a constraint: if \(\mbox{map}_{k}\) equals zero, the \(k\)th term belongs to the objective function; otherwise it belongs to the \(\mbox{map}_k\)th constraint. For example, the fourth row of \(A\), \(c_4 = \frac{1}{3}\) and \(\mbox{map}_4 = 1\) together encode the term \(\frac{1}{3} t_1^{-2} t_2^{-2}\) in the first constraint.

Listing 21 demonstrates how the example is solved using mskgpopt.

Listing 21 Example on how to use mskgpopt.
%%
%
%  Copyright : Copyright (c) MOSEK ApS, Denmark. All rights reserved.
%
%  File :      go1.m
%
%  Purpose : Demonstrates a simple geometric optimization problem.
%
%%

function go1()

c     = [40 20 40 1/3 4/3]';
a     = sparse([[-1  -0.5  -1];[1 0 1];...
                [1 1 1];[-2 -2 0];[0 0.5 -1]]);
map   = [0 0 0 1 1]';
[res] = mskgpopt(c,a,map);

fprintf('\nPrimal optimal solution to original gp:');
fprintf(' %e',exp(res.sol.itr.xx));
fprintf('\n\n');

% Compute the optimal objective value and
% the constraint activities.
v = c.*exp(a*res.sol.itr.xx);

% Add appropriate terms together.
f = sparse(map+1,1:5,ones(size(map)))*v;

% First objective value. Then constraint values.
fprintf('Objective value: %e\n',log(f(1)));
fprintf('Constraint values:');
fprintf(' %e',log(f(2:end)));
fprintf('\n\n');

% Dual multipliers (should be negative)
fprintf('Dual variables (should be negative):');
fprintf(' %e',res.sol.itr.y);
fprintf('\n\n');

The code also computes the objective value and the constraint values at the optimal solution. Moreover, the optimal dual Lagrange multipliers for the constraints are shown. Feasibility of the computed solution in the transformed problem (3) can be checked as

max(res.sol.itr.xc) <= 0.0

or equivalently

exp(max(res.sol.itr.xc)) <= 1.0
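
In finite precision arithmetic the constraint activities may exceed the bound by a tiny amount, so allowing a small tolerance is advisable (a sketch; the tolerance value is an arbitrary choice):

max(res.sol.itr.xc) <= 1e-8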

Solving large scale problems

If you want to solve a large problem, i.e. a problem where \(A\) has large dimensions, then \(A\) must be sparse or you will run out of memory. Recall that a sparse matrix contains few non-zero elements, so if \(A\) is sparse, you should construct it using MATLAB's sparse function as follows

A = sparse(subi,subj,valij);

where

\[a_{\mathtt{subi}[k],\mathtt{subj}[k]} = \mathtt{valij}[k].\]
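
For instance, the \(A\) matrix of the example (4) can be built in triplet form as follows (a sketch; only the non-zero entries are passed to sparse):

% Non-zero entries of the example A matrix in triplet form.
subi  = [1 1 1 2 2 3 3 3 4 4 5 5]';              % row (term) indices
subj  = [1 2 3 1 3 1 2 3 1 2 2 3]';              % column (variable) indices
valij = [-1 -0.5 -1 1 1 1 1 1 -2 -2 0.5 -1]';    % exponents a_kj
A     = sparse(subi,subj,valij);                 % 5-by-3 sparse matrix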

For further details on the sparse function, please enter

help sparse

in MATLAB.

Preprocessing tip

Before solving a geometric optimization problem it is worthwhile to check whether a column of the \(A\) matrix passed to mskgpopt contains only positive elements. If this is the case, the corresponding variable \(t_j\) can take the value zero in the optimal solution, which may cause problems for MOSEK. It is therefore better to remove such variables from the problem; doing so has no influence on the optimal solution.
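A literal implementation of this check might look as follows (a sketch, assuming the exponent matrix is stored in A):

poscols = find(all(A > 0, 1));   % indices of columns with only positive entries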

Reading and writing problems to a file

The functions mskgpread and mskgpwri can be used to read and write geometric programming problems from and to a file; see Sec. 17.1 (Command Reference) for details.
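
For example, a round trip might look as follows (a sketch; the argument order is an assumption, so consult Sec. 17.1 (Command Reference) for the exact calling conventions):

mskgpwri(c,a,map,'go1.gp');        % write the problem to a file (assumed argument order)
[c,a,map] = mskgpread('go1.gp');   % read it back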