Notebooks

We present here a curated list of notebooks recommended for getting started with discrete-optimization. They are all available in the notebooks/ folder of the repository.

Knapsack problem


This is a very common combinatorial optimization problem: you are given a knapsack with a weight capacity \(C\) and a set of items, each with a value and a weight. The goal is to fill the knapsack so as to maximize the total value of the selected items while respecting the capacity constraint.

(Figure: knapsack problem illustration.)

We handle here the 0-1 knapsack problem, where each item can be taken at most once.
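
With binary decision variables \(x_i \in \{0,1\}\) indicating whether item \(i\) (of value \(v_i\) and weight \(w_i\)) is selected, the problem reads:

\( \max_{x} \sum_{i} v_i x_i \) such that \( \sum_{i} w_i x_i \leq C \)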

Many different optimization approaches can be tested on this combinatorial problem; we'll see a few of them during the notebook.
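
As a taste of what the notebook covers, here is a minimal greedy baseline in plain Python (an illustrative sketch, independent of the library's own solvers): it sorts items by value-to-weight ratio and takes them while they fit.

```python
# Greedy baseline for the 0-1 knapsack: sort items by value/weight ratio and
# take them greedily while they fit. Fast, but not optimal in general.
def greedy_knapsack(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    taken, total_value, remaining = [], 0, capacity
    for i in order:
        if weights[i] <= remaining:
            taken.append(i)
            total_value += values[i]
            remaining -= weights[i]
    return taken, total_value

# Toy instance: 4 items, capacity 10.
print(greedy_knapsack([10, 13, 7, 8], [4, 6, 3, 5], capacity=10))
```

On this toy instance the greedy heuristic returns items 0 and 2 (value 17), while the optimum takes items 0 and 1 (value 23): a good illustration of why exact approaches are also worth trying.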

RCPSP tutorials

Introduction to RCPSP


What is RCPSP? (Resource Constrained Project Scheduling Problem)

  • \(M\) activities or tasks in a project (instance)

  • Precedence constraints:

    If activity \(j\in[1,M]\) is a successor of activity \(i\in[1,M]\), then activity \(i\) must be completed before activity \(j\) can be started

  • Resource constraints:

    Each project is assigned a set of \(K\) renewable resources, where each resource \(k\) is available in \(R_{k}\) units for the entire duration of the project. Each activity may require one or more of these resources to be completed. While scheduling the activities, the usage of resource \(k\) at any time cannot exceed \(R_{k}\) units.

  • Each activity \(j\) takes \(d_{j}\) time units to complete.

  • The overall goal of the problem is usually to minimize the makespan, i.e. the total duration of the project schedule.

Here we focus on the single-mode RCPSP with renewable resources, but there also exist variants of the problem:

  • multi-mode: a task can be performed in several ways (modes), each with its own duration and resource needs. The choice of mode is in this case part of the solution;

  • mix of renewable and non-renewable resources.
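
To make the definition concrete, here is a toy single-mode instance encoded with plain Python structures (an illustrative format, not the library's own data classes):

```python
# Toy RCPSP instance (illustrative format, not discrete-optimization's own
# data classes): 4 activities and a single renewable resource of capacity 3.
instance = {
    "durations": {1: 3, 2: 2, 3: 4, 4: 2},   # d_j for each activity j
    "precedence": [(1, 3), (2, 3), (3, 4)],  # (i, j): i must finish before j starts
    "demands": {1: 2, 2: 1, 3: 2, 4: 1},     # resource units used by each activity
    "capacity": 3,                            # R_k
}
```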

Solving RCPSP with heuristics


We assume that you have followed the previous notebook, which introduced all the basics of RCPSP problems. In this notebook we will test two greedy heuristics that build feasible solutions.
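
To illustrate the kind of greedy heuristic involved, here is a minimal serial schedule generation scheme (SGS) in plain Python: it schedules the tasks of a priority list one by one, each at the earliest time compatible with precedence and resource constraints. This is a sketch of the general idea, not the library's implementation.

```python
# Minimal serial schedule generation scheme (SGS): schedule each task of a
# precedence-feasible priority list at its earliest feasible start time.
def serial_sgs(durations, precedence, demands, capacity, priority_list):
    preds = {j: [i for (i, k) in precedence if k == j] for j in durations}
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)   # resource usage per time unit
    start = {}
    for j in priority_list:
        earliest = max((start[i] + durations[i] for i in preds[j]), default=0)
        t = earliest
        while any(usage[u] + demands[j] > capacity
                  for u in range(t, t + durations[j])):
            t += 1                # shift right until the resource fits
        start[j] = t
        for u in range(t, t + durations[j]):
            usage[u] += demands[j]
    return start

# Toy instance from the sketch above.
durations = {1: 3, 2: 2, 3: 4, 4: 2}
precedence = [(1, 3), (2, 3), (3, 4)]
demands = {1: 2, 2: 1, 3: 2, 4: 1}
starts = serial_sgs(durations, precedence, demands, capacity=3,
                    priority_list=[1, 2, 3, 4])
print(starts, "makespan:", max(starts[j] + durations[j] for j in starts))
```

Different priority lists yield different feasible schedules, which is exactly what the local search approaches of the next notebook exploit.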

Solving RCPSP with local search/metaheuristics/genetic algorithm


One big family of combinatorial optimisation solvers is local search. We include all metaheuristics and genetic algorithms under this simple terminology.

In general, a local search algorithm explores the solution space by applying local changes to the current set of solutions.

In the case of RCPSP, we have seen in the first notebook that a solution can be represented by a priority list of tasks (equivalent to a permutation). Local search algorithms can therefore work on the search space made of all permutations of tasks, and many kinds of local changes can be imagined to explore this permutation set.

Local search (LS) algorithms are anytime algorithms: we have access to the current best solution whenever we want to stop the optimisation process. LS cannot prove that a solution is optimal, but this is rarely an issue in real-world applications.
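
To make the idea concrete, here is a bare-bones hill climbing sketch over permutations with swap moves (plain Python). The objective is a toy stand-in: for RCPSP it would be the makespan of the schedule built from the permutation, e.g. by a serial SGS.

```python
# Bare-bones local search (hill climbing) over permutations with swap moves.
# evaluate() is a toy stand-in objective; for RCPSP it would compute the
# makespan of the schedule built from the permutation (e.g. via a serial SGS).
import random

def evaluate(perm):
    # Toy objective: number of out-of-order pairs (0 for a sorted permutation).
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

def hill_climbing(perm, n_iters=1000, seed=0):
    rng = random.Random(seed)
    best = list(perm)
    best_val = evaluate(best)
    for _ in range(n_iters):
        i, j = rng.sample(range(len(best)), 2)   # local change: swap two tasks
        best[i], best[j] = best[j], best[i]
        val = evaluate(best)
        if val <= best_val:
            best_val = val                        # keep improving (or equal) moves
        else:
            best[i], best[j] = best[j], best[i]   # undo worsening moves
    return best, best_val

print(hill_climbing([4, 2, 5, 1, 3]))
```

Metaheuristics such as simulated annealing or genetic algorithms refine this basic loop, mainly in how moves are generated and accepted.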

Solve RCPSP by linear programming: MILP


We assume that you have followed the first notebook, which introduced all the basics of RCPSP problems; here we will apply linear programming to them.

Linear programming is a paradigm to model optimisation problems where the objective function and the constraints are linear in the variables:

\( \max_{x} c^{T} x \) such that \( A x \leq b \)

The variables \(x\) can be either continuous or integer, which explains the Mixed Integer Linear Programming (MILP) denomination.
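
For RCPSP, one classical way to obtain such a model is the time-indexed formulation (a sketch of the standard textbook version; the notebook's exact model may differ). With binary variables \(x_{j,t} = 1\) if and only if activity \(j\) starts at time \(t\), and \(r_{j,k}\) the demand of activity \(j\) for resource \(k\):

  • each activity starts exactly once: \(\sum_{t} x_{j,t} = 1\) for all \(j\);

  • precedence: \(\sum_{t} t \, x_{j,t} \geq \sum_{t} t \, x_{i,t} + d_{i}\) whenever \(i\) precedes \(j\);

  • resources: \(\sum_{j} r_{j,k} \sum_{s=t-d_{j}+1}^{t} x_{j,s} \leq R_{k}\) for every resource \(k\) and time \(t\);

  • objective: minimize the start time of the final (sink) activity, \(\sum_{t} t \, x_{M,t}\).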

Solve RCPSP by constraint programming


One of the main classes of methods to solve discrete optimization problems is constraint programming (CP). CP solvers explore the state variables and propagate constraints in a clever way. Using constraint programming, users declaratively state the constraints on the feasible solutions for a set of decision variables; constraints specify the properties of the solution to be found. In the discrete-optimization library, the minizinc declarative language and its python api are used extensively. However, it would be quite easy to use other modeling libraries, such as Cpmpy, in the future. Some constraint programming models are stored in the discrete_optimization/knapsack/minizinc folder.

In this notebook, we’ll use the chuffed solver, a state-of-the-art lazy clause generation solver.

We assume that you have already been introduced to RCPSP problems thanks to this notebook.
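
To give a flavour of CP modeling from Python, here is a tiny RCPSP model written with CpMpy, mentioned above as a possible alternative to MiniZinc (an illustrative sketch, not one of the library's own models; it uses CpMpy's default backend rather than chuffed):

```python
# Tiny CP model of a toy RCPSP with CpMpy (illustrative sketch only; the
# library itself ships MiniZinc models and typically runs chuffed).
from cpmpy import Model, intvar, Cumulative

durations = [3, 2, 4, 2]
demands = [2, 1, 2, 1]                  # usage of the single renewable resource
capacity = 3
precedence = [(0, 2), (1, 2), (2, 3)]   # (i, j): i must finish before j starts
horizon = sum(durations)
n = len(durations)

start = intvar(0, horizon, shape=n, name="start")
end = intvar(0, horizon, shape=n, name="end")
makespan = intvar(0, horizon, name="makespan")

m = Model()
m += [end[j] == start[j] + durations[j] for j in range(n)]
m += [start[j] >= end[i] for (i, j) in precedence]
m += Cumulative(start, durations, end, demands, capacity)
m += [makespan >= end[j] for j in range(n)]
m.minimize(makespan)
m.solve()
print("starts:", start.value(), "makespan:", makespan.value())
```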

Large neighborhood search + CP to solve RCPSP


LNS is an iterative heuristic method that consists in randomly freezing part of the solution and re-optimizing the remaining part. The full solution is then rebuilt, and hopefully, repeating the process leads to a good solution to the original problem.
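
A generic sketch of the loop, applied here to the 0-1 knapsack for brevity (in the notebook, the repair step is a CP solve on the unfrozen part of the RCPSP instead of a greedy pass):

```python
# Generic LNS loop sketched on a 0-1 knapsack: at each iteration a random
# subset of item decisions is destroyed (freed) and repaired greedily, while
# the rest of the incumbent solution stays frozen.
import random

def lns_knapsack(values, weights, capacity, n_iters=200, destroy_size=2, seed=0):
    rng = random.Random(seed)
    n = len(values)
    current = [0] * n                               # start from the empty knapsack
    for _ in range(n_iters):
        freed = rng.sample(range(n), destroy_size)  # destroy: free some decisions
        candidate = [0 if i in freed else current[i] for i in range(n)]
        remaining = capacity - sum(weights[i] for i in range(n) if candidate[i])
        # repair: greedily re-insert freed items by value/weight ratio
        for i in sorted(freed, key=lambda i: values[i] / weights[i], reverse=True):
            if weights[i] <= remaining:
                candidate[i] = 1
                remaining -= weights[i]
        if sum(v * x for v, x in zip(values, candidate)) >= \
           sum(v * x for v, x in zip(values, current)):
            current = candidate                     # accept non-worsening rebuilds
    return current

print(lns_knapsack([10, 13, 7, 8], [4, 6, 3, 5], capacity=10))
```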

Advanced

Callbacks usage


When using discrete-optimization to solve a problem, it is possible to execute your own code at various stages of the solving process.

To achieve that, you have to:

  • either create your own callback, by inheriting from the Callback base class and implementing the hooks you need (see the sketch below)

  • or directly use one of the already implemented ones, available in the discrete_optimization.generic_tools.callbacks submodules (like loggers, early_stoppers, or optuna)

  • and pass them via the callbacks argument of SolverDO.solve(), as shown in the API doc.
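
For instance, a minimal logging callback could look like the following sketch (the on_step_end hook and its arguments follow the Callback base class described in the API doc, but double-check the exact signatures there):

```python
# Sketch of a custom callback: print a message at the end of each solver step.
# Hook names and arguments should be checked against the Callback API doc.
from discrete_optimization.generic_tools.callbacks.callback import Callback

class PrintStepCallback(Callback):
    def on_step_end(self, step, res, solver):
        # res holds the intermediate results; returning True would ask the
        # solver to stop early (useful for custom early stopping).
        print(f"solver step {step} done")

# With my_solver any SolverDO instance built beforehand:
# result = my_solver.solve(callbacks=[PrintStepCallback()])
```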

The main use cases for callbacks are:

  • Logging: you need to display more information about what happens during the solving process;

  • Backup: you need to store a model at an intermediate stage;

  • Early stopping: you want to stop the solving process under your own specific condition, not available in the solver api;

  • Tuning hyperparameters with Optuna: you want to let Optuna access intermediate results so that it can decide whether to drop the current trial. (See the dedicated notebook.)

Tuning hyperparameters with Optuna


All our solvers have a lot of hyperparameters, and of course the optimization result can change significantly according to them.

In this notebook, we will see how we can make use of Optuna to tune them for a given problem (or family of problems).

Some work has been done in the library to ease this tuning:

  • main hyperparameters of each solver have been identified, with default values and possible ranges registered;

  • some utility methods have been coded to get default hyperparameters and to make use of Optuna's hyperparameter auto-suggestion with as little work as possible from the user.
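
Schematically, a study then looks like the following sketch (the toy fitness computation stands in for building a solver with the suggested hyperparameters and running solve(); the auto-suggestion utilities mentioned above can replace the manual suggest_* calls):

```python
# Sketch of an Optuna study: the toy fitness below stands in for an actual
# solver run; hyperparameter names here are purely illustrative.
import optuna

def objective(trial: optuna.Trial) -> float:
    temperature = trial.suggest_float("temperature", 0.01, 10.0, log=True)
    nb_restarts = trial.suggest_int("nb_restarts", 1, 20)
    # Stand-in for: solver = MySolver(problem, ...); solver.solve(...)
    fitness = (temperature - 1.0) ** 2 + 0.1 * nb_restarts
    return fitness

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```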

After applying this to tune the hyperparameters of a solver, further examples will show you that:

  • we can also use Optuna to select the solver class itself, as another meta-hyperparameter;

  • some solvers are meta-solvers, having subsolvers as hyperparameters, with their own sets of hyperparameters that can also be tuned.