# 11.3 Concurrent optimizer

The idea of the concurrent optimizer is to run multiple optimizations of **the same problem** simultaneously, and pick the one that provides the fastest or best answer. This approach is especially useful for hard problems which require a long time to solve, and where it is hard to say in advance which optimizer or algorithm will perform best.

The major applications of concurrent optimization we describe in this section are:

- Using the interior-point and simplex optimizers simultaneously on a linear problem. Note that any solution present in the task will also be used for hot-starting the simplex algorithms. One possible scenario would therefore be running a hot-start simplex in parallel with interior point, taking advantage of both the stability of the interior-point method and the ability of the simplex method to use an initial solution.

- Using multiple instances of the mixed-integer optimizer to solve many copies of one mixed-integer problem. This is not in contradiction with the run-to-run determinism of **MOSEK** if a different value of the MIO seed parameter `MSK_IPAR_MIO_SEED` is set in each instance. As a result each setting leads to a different optimizer run (each of them being deterministic in its own right).
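The role of the seed can be illustrated with an ordinary pseudo-random generator: a fixed seed always reproduces the same sequence, while different seeds produce different, but individually reproducible, runs. The sketch below uses only the C++ standard library (not MOSEK) purely as an analogy for this kind of seeded determinism:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Draw the first n values from a Mersenne Twister initialized with `seed`.
// The same seed always yields the same sequence (determinism); different
// seeds yield different sequences (diversity across concurrent runs).
std::vector<std::uint32_t> drawSequence(std::uint32_t seed, int n) {
  std::mt19937 gen(seed);
  std::vector<std::uint32_t> out;
  out.reserve(n);
  for (int i = 0; i < n; ++i) out.push_back(gen());
  return out;
}
```

In the same way, each differently seeded mixed-integer optimizer instance is deterministic in its own right, yet the instances explore the problem differently.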

The downloadable file contains usage examples of both kinds.

## 11.3.1 Common setup

We first define a method that runs a number of optimization tasks in parallel, using the standard multithreading setup available in the language. All tasks register a callback function which will signal them to stop as soon as the first task completes successfully (with response code `MSK_RES_OK`).

```
/**
 * Defines a MOSEK callback function whose only job is to
 * indicate whether the optimizer should be stopped.
 */
int stop = 0;
int firstStop = -1;

MSKint32t MSKAPI callback(MSKtask_t        task,
                          MSKuserhandle_t  usrptr,
                          MSKcallbackcodee caller,
                          const MSKrealt  *douinf,
                          const MSKint32t *intinf,
                          const MSKint64t *lintinf)
{
  return stop;
}
```

When all remaining tasks respond to the stop signal, response codes and statuses are returned to the caller, together with the index of the task which won the race.

```
void runTask(int          num,
             MSKtask_t    task,
             MSKrescodee *res,
             MSKrescodee *trm)
{
  *res = MSK_optimizetrm(task, trm);
  if (*res == MSK_RES_OK) {
    if (!stop) firstStop = num;
    stop = 1;
  }
}

int optimize(int          n,
             MSKtask_t   *tasks,
             MSKrescodee *res,
             MSKrescodee *trm)
{
  int i;
  std::thread *jobs = new std::thread[n];

  // Set a callback function and start optimization
  for (i = 0; i < n; ++i) {
    MSK_putcallbackfunc(tasks[i], callback, NULL);
    res[i] = trm[i] = MSK_RES_ERR_UNKNOWN;
    jobs[i] = std::thread(runTask, i, tasks[i], &(res[i]), &(trm[i]));
  }

  // Join all threads
  for (i = 0; i < n; ++i) jobs[i].join();
  delete[] jobs;

  // For debugging, print res and trm codes for all optimizers
  for (i = 0; i < n; ++i)
    printf("Optimizer %d res %d trm %d\n", i, res[i], trm[i]);

  return firstStop;
}
```
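Note that `stop` and `firstStop` are plain shared ints, which works as a cooperative stop signal here but is formally a data race in C++. The same first-finisher pattern can be made race-free with `std::atomic`. The `raceToFinish` helper below is an illustrative sketch using only the standard library (no MOSEK calls, and the helper name is our own); `compare_exchange_strong` guarantees that exactly one job is recorded as the winner even if several finish at nearly the same time:

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Run all jobs in parallel and return the index of the first one to
// finish (or -1 if there are no jobs). The atomic compare-exchange
// ensures exactly one winner is recorded, with no data race.
int raceToFinish(const std::vector<std::function<void()>>& jobs) {
  std::atomic<int> winner(-1);
  std::vector<std::thread> threads;
  for (std::size_t i = 0; i < jobs.size(); ++i) {
    threads.emplace_back([&, i]() {
      jobs[i]();
      int expected = -1;  // only succeeds for the first finisher
      winner.compare_exchange_strong(expected, static_cast<int>(i));
    });
  }
  for (auto& t : threads) t.join();
  return winner.load();
}
```

In a real setup the winning thread would additionally flip an atomic stop flag that the callback of every other task reads, exactly as `callback` returns `stop` above.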

## 11.3.2 Linear optimization

We use the multithreaded setup to run the interior-point and simplex optimizers simultaneously on a linear problem. The next method simply clones the given task and sets a different optimizer in each clone. The result is the clone which finished first.

```
int optimizeconcurrent(MSKtask_t          task,
                       int                n,
                       MSKoptimizertypee *optimizers,
                       MSKtask_t         *winTask,
                       MSKrescodee       *winTrm,
                       MSKrescodee       *winRes)
{
  MSKtask_t   *tasks = new MSKtask_t[n];
  MSKrescodee *res   = new MSKrescodee[n];
  MSKrescodee *trm   = new MSKrescodee[n];

  // Clone tasks and choose various optimizers
  for (int i = 0; i < n; ++i)
  {
    MSK_clonetask(task, &(tasks[i]));
    MSK_putintparam(tasks[i], MSK_IPAR_OPTIMIZER, optimizers[i]);
  }

  // Solve tasks in parallel
  int firstOK = optimize(n, tasks, res, trm);

  if (firstOK >= 0)
  {
    *winTask = tasks[firstOK];
    *winTrm  = trm[firstOK];
    *winRes  = res[firstOK];
  }
  else
  {
    *winTask = NULL;
    *winTrm  = MSK_RES_ERR_UNKNOWN;
    *winRes  = MSK_RES_ERR_UNKNOWN;
  }

  // Cleanup: free all tasks except the winner, which is returned
  for (int i = 0; i < n; ++i)
    if (i != firstOK) MSK_deletetask(&(tasks[i]));
  delete[] tasks; delete[] res; delete[] trm;

  return firstOK;
}
```

It remains to call the method with a choice of optimizers, for example:

```
MSKoptimizertypee optimizers[3] = {
  MSK_OPTIMIZER_CONIC,
  MSK_OPTIMIZER_DUAL_SIMPLEX,
  MSK_OPTIMIZER_PRIMAL_SIMPLEX
};

idx = optimizeconcurrent(task, 3, optimizers, &t, &trm, &res);
```

## 11.3.3 Mixed-integer optimization

We use the multithreaded setup to run many, differently seeded copies of the mixed-integer optimizer. This approach is most useful for hard problems where we don't expect an optimal solution in reasonable time. The input task would typically contain a time limit. It is possible that all the cloned tasks reach the time limit, in which case it doesn't really matter which one terminated first. Instead we examine all the task clones for the best objective value.

```
int optimizeconcurrentMIO(MSKtask_t    task,
                          int          n,
                          int         *seeds,
                          MSKtask_t   *winTask,
                          MSKrescodee *winTrm,
                          MSKrescodee *winRes)
{
  MSKtask_t   *tasks = new MSKtask_t[n];
  MSKrescodee *res   = new MSKrescodee[n];
  MSKrescodee *trm   = new MSKrescodee[n];

  *winTask = NULL;
  *winTrm  = MSK_RES_ERR_UNKNOWN;
  *winRes  = MSK_RES_ERR_UNKNOWN;
  int bestPos = -1;

  // Clone tasks and set a different MIO seed in each
  for (int i = 0; i < n; ++i)
  {
    MSK_clonetask(task, &(tasks[i]));
    MSK_putintparam(tasks[i], MSK_IPAR_MIO_SEED, seeds[i]);
  }

  // Solve tasks in parallel
  int firstOK = optimize(n, tasks, res, trm);

  if (firstOK >= 0)
  {
    // Pick the task that ended with res = ok
    // and contains an integer solution with best objective value
    MSKobjsensee sense;
    double bestObj;

    MSK_getobjsense(task, &sense);
    bestObj = (sense == MSK_OBJECTIVE_SENSE_MINIMIZE) ? 1.0e+10 : -1.0e+10;

    // For debugging, print objective values of all solutions
    for (int i = 0; i < n; ++i) {
      double priObj;
      MSK_getprimalobj(tasks[i], MSK_SOL_ITG, &priObj);
      printf("%d %f\n", i, priObj);
    }

    for (int i = 0; i < n; ++i) {
      double     priObj;
      MSKsolstae solsta;
      MSK_getprimalobj(tasks[i], MSK_SOL_ITG, &priObj);
      MSK_getsolsta(tasks[i], MSK_SOL_ITG, &solsta);

      if ((res[i] == MSK_RES_OK) &&
          (solsta == MSK_SOL_STA_PRIM_FEAS ||
           solsta == MSK_SOL_STA_INTEGER_OPTIMAL) &&
          ((sense == MSK_OBJECTIVE_SENSE_MINIMIZE) ?
             (priObj < bestObj) : (priObj > bestObj)))
      {
        bestObj = priObj;
        bestPos = i;
      }
    }

    if (bestPos != -1)
    {
      *winTask = tasks[bestPos];
      *winTrm  = trm[bestPos];
      *winRes  = res[bestPos];
    }
  }

  // Cleanup: free all tasks except the winner, which is returned
  for (int i = 0; i < n; ++i)
    if (i != bestPos) MSK_deletetask(&(tasks[i]));
  delete[] tasks; delete[] res; delete[] trm;

  return bestPos;
}
```
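The selection logic above, picking the best feasible objective value under the problem's sense, can be isolated into a small standalone helper. The sketch below uses plain C++ with hypothetical `Sense` and `bestCandidate` names (not MOSEK API), mirroring the loop in `optimizeconcurrentMIO`:

```cpp
#include <cstddef>
#include <vector>

enum class Sense { Minimize, Maximize };

// Return the index of the best feasible objective value under the given
// sense, or -1 if no candidate is feasible. This mirrors the selection
// loop over task clones in the concurrent MIO example.
int bestCandidate(const std::vector<double>& obj,
                  const std::vector<bool>&   feasible,
                  Sense sense) {
  int best = -1;
  double bestObj = (sense == Sense::Minimize) ? 1.0e+10 : -1.0e+10;
  for (std::size_t i = 0; i < obj.size(); ++i) {
    if (!feasible[i]) continue;
    bool better = (sense == Sense::Minimize) ? (obj[i] < bestObj)
                                             : (obj[i] > bestObj);
    if (better) { bestObj = obj[i]; best = static_cast<int>(i); }
  }
  return best;
}
```

Separating the selection from the solver calls makes the sense-dependent comparison easy to test in isolation.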

It remains to call the method with a choice of seeds, for example:

```
int seeds[3] = { 42, 13, 71749373 };

idx = optimizeconcurrentMIO(task, 3, seeds, &t, &trm, &res);