11.3 Concurrent optimizer

The idea of the concurrent optimizer is to run multiple optimizations of the same problem simultaneously and pick the one that provides the fastest or best answer. This approach is especially useful for problems which take a very long time to solve and where it is hard to say in advance which optimizer or algorithm will perform best.

The major applications of concurrent optimization we describe in this section are:

  • Using the interior-point and simplex optimizers simultaneously on a linear problem. Note that any solution present in the task will also be used for hot-starting the simplex algorithms. One possible scenario would therefore be running a hot-start simplex in parallel with interior point, taking advantage of both the stability of the interior-point method and the ability of the simplex method to use an initial solution.

  • Using multiple instances of the mixed-integer optimizer to solve many copies of one mixed-integer problem. This does not contradict the run-to-run determinism of MOSEK as long as a different value of the MIO seed parameter MSK_IPAR_MIO_SEED is set in each instance. As a result, each seed leads to a different optimizer run (each of them being deterministic in its own right).

The downloadable file contains usage examples of both kinds.

11.3.1 Common setup

We first define a method that runs a number of optimization tasks in parallel, using the standard multithreading setup available in the language. Each task registers a callback function which signals it to interrupt as soon as the first task completes successfully (with response code MSK_RES_OK).
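
Note that Threads.@threads, used in the routine below, only distributes iterations across multiple threads if Julia itself was started with more than one thread (for example with julia --threads 4, or by setting the JULIA_NUM_THREADS environment variable); otherwise the tasks run sequentially. The number of available threads can be checked with:

println("Julia threads available: ", Threads.nthreads())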

Listing 11.10 Simple callback function which signals the optimizer to stop.
# Defines a MOSEK callback function whose only purpose
# is to indicate whether the optimizer should be stopped.
stop = false
firstStop = 0
function callback(caller  :: Callbackcode,
                  douinf  :: Vector{Float64},
                  intinf  :: Vector{Int32},
                  lintinf :: Vector{Int64})
    # A non-zero return value requests the optimizer to stop
    if stop
        1
    else
        0
    end
end

When all remaining tasks have responded to the stop signal, the response and termination codes of all tasks are returned to the caller, together with the index of the task which won the race.

Listing 11.11 A routine for a parallel task race.
function runTask(num, task)
    global stop
    global firstStop

    ttrm =
        try
            optimize(task)
        catch e
            if isa(e,MosekError)
                return e.rcode, MSK_RES_ERR_UNKNOWN
            else
                rethrow()
            end
        end

    # If this task finished successfully, inform the other tasks to interrupt.
    # Data races on stop/firstStop are harmless here: any task that finishes
    # first is an acceptable winner.
    if ! stop
        stop = true
        firstStop = num
    end

    return MSK_RES_OK,ttrm
end

function optimizeconcurrent(tasks::Vector{Mosek.Task})
    res = [ MSK_RES_ERR_UNKNOWN for t in tasks ]
    trm = [ MSK_RES_ERR_UNKNOWN for t in tasks ]

    # Set a callback function
    for t in tasks
        # Use remote server: putoptserverhost(t, "http://solve.mosek.com:30080")
        putcallbackfunc(t, callback)
    end

    # Start parallel optimizations, one per task
    Threads.@threads for i in 1:length(tasks)
        (tres,ttrm) = runTask(i,tasks[i])
        res[i] = tres
        trm[i] = ttrm
    end

    # For debugging, print res and trm codes for all optimizers
    for (i,(tres,ttrm)) in enumerate(zip(res,trm))
        println("Optimizer  $i   res $tres   trm $ttrm")
    end

    return firstStop, res, trm
end

11.3.2 Linear optimization

We use the multithreaded setup to run the interior-point and simplex optimizers simultaneously on a linear problem. The next method simply clones the given task and sets a different optimizer in each clone. The result is the clone which finished first.

Listing 11.12 Concurrent optimization with different optimizers.
function optimizeconcurrent(task, optimizers)
    # Choose various optimizers for cloned tasks
    tasks = Mosek.Task[ let t = maketask(task)
                            # Use remote server: putoptserverhost(t, "http://solve.mosek.com:30080")
                            putintparam(t,MSK_IPAR_OPTIMIZER, opt)
                            t
                        end for opt in optimizers ]

    # Solve tasks in parallel
    firstOK, res, trm = optimizeconcurrent(tasks)

    if firstOK > 0
        return firstOK, tasks[firstOK], trm[firstOK], res[firstOK]
    else
        return 0, nothing, nothing, nothing
    end
end

It remains to call the method with a choice of optimizers, for example:

Listing 11.13 Calling concurrent linear optimization. Click here to download.
optimizers = [ MSK_OPTIMIZER_CONIC,
               MSK_OPTIMIZER_DUAL_SIMPLEX,
               MSK_OPTIMIZER_PRIMAL_SIMPLEX ]
optimizeconcurrent(task, optimizers)
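
The returned values give access to the winning clone, which can be queried like any other task. A minimal, hypothetical follow-up (not part of the downloadable example) could print the winner's objective value; we assume here that the linear problem was solved and that a basic solution is available (the simplex optimizers produce one directly, and the interior-point optimizer does so through basis identification, which is on by default):

firstOK, winner, wtrm, wres = optimizeconcurrent(task, optimizers)
if firstOK > 0
    # MSK_SOL_BAS: the basic solution of the winning clone
    println("Winner: clone $firstOK, objective $(getprimalobj(winner, MSK_SOL_BAS))")
end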

11.3.3 Mixed-integer optimization

We use the multithreaded setup to run many, differently seeded copies of the mixed-integer optimizer. This approach is most useful for hard problems where we don’t expect an optimal solution in reasonable time. The input task would typically contain a time limit. It is possible that all the cloned tasks reach the time limit, in which case it doesn’t really matter which one terminated first. Instead, we examine all the task clones for the best objective value.
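
The time limit mentioned above would be set on the input task before it is cloned, so that every copy inherits it. A small sketch of doing so (the 20-second value is purely illustrative, not taken from the example):

# Let every clone of the task run for at most 20 seconds (illustrative value)
putdouparam(task, MSK_DPAR_OPTIMIZER_MAX_TIME, 20.0)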

Listing 11.14 Concurrent optimization of a mixed-integer problem.
function optimizeconcurrentMIO(task, seeds)
    # Choose various seeds for cloned tasks
    tasks = Mosek.Task[ let t = maketask(task)
                            putintparam(t, MSK_IPAR_MIO_SEED, seed)
                            t
                        end for seed in seeds ]

    # Solve tasks in parallel
    (firstOK, res, trm) = optimizeconcurrent(tasks)

    sense = getobjsense(task)
    bestObj = sense == MSK_OBJECTIVE_SENSE_MINIMIZE ? 1.0e+10 : -1.0e+10
    bestPos = -1

    if firstOK > 0
        # Pick the task that ended with res = ok
        # and contains an integer solution with best objective value

        # For debugging, print the objective values found by all clones
        for (i,t) in enumerate(tasks)
            pobj = getprimalobj(t, MSK_SOL_ITG)
            println("$i   $pobj")
        end

        for (i,(tres,ttrm,t)) in enumerate(zip(res,trm,tasks))
            solsta = getsolsta(t,MSK_SOL_ITG)
            if tres == MSK_RES_OK &&
                ( solsta == MSK_SOL_STA_PRIM_FEAS ||
                  solsta == MSK_SOL_STA_INTEGER_OPTIMAL)
                pobj = getprimalobj(t,MSK_SOL_ITG)
                if ( ( sense == MSK_OBJECTIVE_SENSE_MINIMIZE && pobj < bestObj ) ||
                     ( sense == MSK_OBJECTIVE_SENSE_MAXIMIZE && pobj > bestObj ) )
                    bestObj = pobj
                    bestPos = i
                end
            end
        end
    end

    if bestPos > 0
        return bestPos, tasks[bestPos], trm[bestPos], res[bestPos]
    else
        return 0, nothing, nothing, nothing
    end
end

It remains to call the method with a choice of seeds, for example:

Listing 11.15 Calling concurrent integer optimization.
seeds = [ 42, 13, 71749373 ]

optimizeconcurrentMIO(task, seeds)
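
As in the linear case, the caller can then work with the clone holding the best solution. A minimal, hypothetical follow-up (not part of the downloadable example) reading the best integer solution:

bestPos, best, btrm, bres = optimizeconcurrentMIO(task, seeds)
if bestPos > 0
    xx = getxx(best, MSK_SOL_ITG)      # integer solution of the best clone
    println("Best clone: $bestPos, objective $(getprimalobj(best, MSK_SOL_ITG))")
end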