Contributing

We welcome all contributions to discrete-optimization.

You can help by:

  • fixing bugs (see issues with label “bug”),

  • improving the documentation,

  • adding and improving educational notebooks in notebooks/.

This list is not exhaustive.

The project is hosted on https://github.com/airbus/discrete-optimization. Contributions to the repository are made by submitting pull requests.

This guide is organized as follows:

  • Setting up your development environment

  • Building the docs locally

  • Running unit tests

  • Running notebooks as tests

  • Guidelines to follow when preparing a contribution

  • Submitting pull requests

Setting up your development environment

Prerequisites

MiniZinc 2.8+ [optional]

If you want to use the MiniZinc-based solvers of the library, you need to install MiniZinc (version 2.8 or later) and update the PATH environment variable so that it can be found by Python. See the MiniZinc documentation for more details.
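As a quick sanity check that the executable is actually reachable from Python, you can for instance run the following snippet (a minimal sketch using only the standard library):

```python
import shutil
import subprocess

# check that the minizinc executable is on the PATH seen by Python
path = shutil.which("minizinc")
assert path is not None, "minizinc executable not found on PATH"
print(subprocess.run([path, "--version"], capture_output=True, text=True).stdout)
```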

Python 3.10+ environment [deprecated]

Attention

You can skip this step if you choose to manage the project with uv, as now recommended: uv will automatically create the virtual environment. In that case, go directly to the section “Managing the project with uv”.

The use of a virtual environment is recommended, and you will need to ensure that the environment uses a Python version of at least 3.10. This can be achieved, for instance, with conda, with pyenv (or pyenv-win on Windows) combined with the venv module, or with uv.

The following examples show how to create a virtual environment with Python 3.12 using each of these methods.

With conda
conda create -n do-env python=3.12
conda activate do-env
With pyenv + venv
pyenv install 3.12
pyenv shell 3.12
python -m venv do-venv
source do-venv/bin/activate  # do-venv\Scripts\activate on Windows
With uv

NB: use the command below only if you want to install the library manually (e.g. with uv pip). Otherwise, go directly to the section “Managing the project with uv”.

uv venv do-venv --python 3.12
source do-venv/bin/activate  # do-venv\Scripts\activate on Windows

Installing the library from source in developer mode

With pip [deprecated]

Attention

The preferred process is now to manage the project with uv, but you can still install the library with pip.

We use the option --editable (or -e) of pip install. We can also install

  • dependencies for testing via --group test (see [dependency-groups] in pyproject.toml),

  • dependencies for building doc via --group doc,

  • other dev dependencies via --group dev,

  • optional dependencies to make use of all solvers, by using the corresponding extras.

Update the following command if you do not need all the features from extras and dependency groups, and do not want to install the corresponding dependencies.

git clone https://github.com/airbus/discrete-optimization.git
cd discrete-optimization
pip install -U pip
pip install -e ".[gurobi,quantum,dashboard,optuna,toulbar]" --group test --group dev --group doc

Note

  • pip version must be >= 25.1 to use the --group option.

  • The “toulbar” extra will not work on Windows.

Managing the project with uv

We can also let uv manage the project. This is now the preferred process.

You can install all dependencies (including dependency groups and extras) via

uv sync --python=3.12 --all-extras

Notes:

  • You can skip the Python version; uv will then choose the current Python, or the version specified in the file “.python-version” if it exists.

  • If you want to avoid some extras (like “toulbar” on Windows), you can specify only the ones you need:

    uv sync --extra gurobi --extra dashboard
    

You can actually skip this step altogether, as any call to uv run will install the necessary dependencies (but in that case, do not forget to add your extras to the first uv run).

To learn how to add/update dependencies with uv, refer to its documentation.

Important

In the following sections, we assume that you chose to use uv. Otherwise, you generally only need to remove uv run or uvx from the commands.

Building the docs locally

The documentation uses Sphinx to generate the HTML pages, and in particular the API doc autogenerated from in-code docstrings.

Build the docs

On Linux or macOS, or with Git Bash on Windows, build the documentation with

cd docs
# generate api doc source files
rm source/api/discrete_optimization*.rst
uv run sphinx-apidoc -o source/api -f -T ../src/discrete_optimization
# generate available notebooks list
uv run python generate_nb_index.py
# remove previous build
rm -rf build
# build doc html pages
uv run sphinx-build -M html source build

The index of the built documentation is then available at build/html/index.html, relative to the docs/ directory. You can for instance browse the documentation by running

uv run python -m http.server -d build/html

and go to http://localhost:8000/. Doing this, rather than just opening index.html directly in your browser, makes javascript work properly.

On Windows with the standard command prompt, the same lines work, except that you have to replace rm with an equivalent command to remove files and directories.

Notes

  • sphinx-apidoc is used to generate the source files for the API doc. It must be relaunched each time a new subpackage/module is added to the code.

  • The rm line before it avoids keeping source files corresponding to subpackages/modules that have since been removed.

  • generate_nb_index.py updates the list of notebooks in the generated file notebooks.md, which is the source file of the notebooks page. The command has to be relaunched whenever a new notebook is added.

  • The last command actually builds the HTML output and is required each time a doc page changes (either because a source file has changed or because docstrings in the code have been modified).

  • To update the doc faster when testing changes, launch only the last command, without the previous rm [...].

Running unit tests

The unit tests are gathered in the tests/ folder and run with pytest.

From the “discrete-optimization” root directory, run the unit tests with:

uv run pytest tests -v

Running notebooks as tests

One can test programmatically that notebooks are not broken thanks to the nbmake extension for pytest.

uv run pytest --nbmake notebooks -v

Guidelines to follow when preparing a contribution

Coding style and code linting

To help maintain the same coding style across the project, some code linters are used via pre-commit.

It is used by CI to run checks at each push, but can also be used locally.

Once installed, you can run it on all files with

pre-commit run --all-files

Beware that doing so actually modifies the files.

You can also use it when committing:

  • stage your changes: git add your_files,

  • run pre-commit on the staged files: pre-commit run,

  • check the changes made,

  • accept them by adding modified files: git add -u,

  • commit: git commit.

This can also be done automatically at each commit if you add pre-commit to the git hooks with pre-commit install. Beware that, when doing so,

  • the commit will be refused if pre-commit actually modifies the files,

  • you can then accept the modifications with git add -u,

  • you can still force a commit that violates pre-commit checks with git commit -n or git commit --no-verify.

If you prefer running pre-commit manually, you can remove the hooks with pre-commit uninstall.

Notebooks

We try to give some introductory examples via notebooks, available in the notebooks/ directory.

The list of these notebooks is automatically inserted in the documentation with a title and a description, both extracted from each notebook’s first cell. To enable that, each notebook should

  • start with a markdown cell,

  • whose first line is the title, starting with a single number sign (“# ”),

  • and whose remaining lines are used as the description.

For instance:

# Great notebook title

A quick description of the main features of the notebook.
Can be on several lines.

Can include a nice thumbnail.
![Notebook_thumbnail](https://airbus.github.io/scikit-decide/maze.png)

Examples

Regarding examples that are not notebooks, such Python scripts should be stored in the examples/ directory. Be aware that all the examples are imported in the test suite (to check that at least the imports still work with the current version of the library), so you should:

  • wrap any actual (and memory/time-consuming) statements in an if __name__ == "__main__": block, to avoid actually running the examples when importing them (cf this explanation),

  • wrap any import of additional dependencies in a try: ... except ImportError: ... block, and potentially raise a warning when the dependency is not installed, to avoid making the import test fail (cf the management of the cartopy dependency in “examples/gpdp/plots_wip/run_animated_plot.py”). Both patterns are combined in the sketch below.
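Here is a minimal sketch combining both patterns; myplot and run_example are hypothetical names used for illustration only, not part of the library:

```python
import warnings

# optional dependency: guard the import so that merely importing this
# module never fails in the test suite
try:
    import myplot  # hypothetical optional dependency
except ImportError:
    myplot = None
    warnings.warn("myplot is not installed, plotting will be skipped")


def run_example() -> None:
    # build a problem, solve it, optionally plot the solution...
    if myplot is not None:
        pass  # plotting code would go here


if __name__ == "__main__":
    # executed only when run as a script, not when imported by the test suite
    run_example()
```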

Adding unit tests

  • Whenever adding some code, think to add some tests to the tests/ folder.

  • Whenever fixing a bug, think to add a test that crashes before the bug fix and passes afterwards, as in the sketch below.
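A minimal sketch of such a regression test; clip_capacity is a hypothetical stand-in for the code under test (in a real test it would be imported from the library):

```python
import pytest


def clip_capacity(capacity: int) -> int:
    """Toy stand-in for the code under test."""
    if capacity < 0:
        raise ValueError("capacity must be non-negative")
    return capacity


def test_clip_capacity_nominal():
    assert clip_capacity(10) == 10


def test_clip_capacity_rejects_negative():
    # this test would have failed before the bug fix, and passes afterwards
    with pytest.raises(ValueError):
        clip_capacity(-1)
```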

Follow the above instructions to run them with pytest.

Writing the documentation

In-code docstrings

The API is self-documented thanks to in-code docstrings and annotations. Whenever adding/fixing some code, you should add/update docstrings and annotations.

In order to generate the API doc properly, some guidelines should be followed:

  • Docstrings should follow the Google style (see this example), parsed thanks to the napoleon extension. This can be checked with the ruff tool via the option --select="D":

    uvx ruff check --select="D" src/discrete_optimization/path/to/your/files
    
  • As we use type annotations in the code, type hints should not be added to the docstrings, in order to avoid duplicates and potential inconsistencies.

  • You should use annotations to make explicit the types of public variables and of the inputs and outputs of public methods/functions. A sketch following these guidelines is given below.
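For instance, a function documented according to these guidelines could look as follows (the function itself is purely illustrative, not part of the library):

```python
def weighted_value(solution: list[int], weights: list[float]) -> float:
    """Compute the weighted value of a binary solution.

    Args:
        solution: binary decision vector, one entry per item.
        weights: weight of each item.

    Returns:
        The sum of the weights of the selected items.
    """
    return sum(w for x, w in zip(solution, weights) if x)
```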

Doc pages

Natively, sphinx is meant to parse reStructuredText files.

To be able to directly reuse materials already written in markdown format, we use the MyST extension, which allows writing markdown files.

In markdown files, we can still write sphinx directives and roles, as explained in the MyST documentation. For instance, a sphinx table of contents tree can be inserted in a markdown file with a code block like:

```{toctree}
---
maxdepth: 2
caption: Contents
---
install
getting_started
notebooks
api/modules
contribute
```

Submitting pull requests

When you think you are ready to merge your modifications into the main repository, you will have to open a pull request (PR). We can summarize the process as follows:

  • Fork the repository on github.

  • Clone your fork on your computer.

  • Make your changes and push them to your fork.

  • Do the necessary checks (see below).

  • Reorganize your commits (see below).

  • Submit your pull request (see github documentation).

  • See if all CI checks passed on your PR.

  • Wait for a review.

  • Take the comments and required changes into account.

Note that a PR needs at least one review by a core developer to be merged.

You may want to add a reference to the main repository to fetch from it and (re)base your changes on it:

git remote add upstream https://github.com/airbus/discrete-optimization

This post points out good practices to follow to submit great pull requests and review them efficiently.

Prior checks

Before submitting your pull request, think to

  • run the linters via pre-commit (see “Coding style and code linting”),

  • run the unit tests (see “Running unit tests”),

  • build the documentation locally if you modified it (see “Building the docs locally”).

If you do not, you will still be able to see the status of your PR, as the CI will run these checks for you.

Reorganizing commits

On your way to implementing your contribution, you will probably have lots of commits, some modifying other ones from the same PR, or some only modifying the code style.

At the end of your work, consider reorganizing them by

  • squashing them into one or only a few logical commits,

  • having a separate commit to reformat previous existing code if necessary,

  • rewriting commit messages so that they explain the changes made and why, the “how” part being explained by the code itself (see this post about what a commit message should and should not contain),

  • rebasing on the upstream repository master branch if it has diverged too much by the time you finish.

You can use git rebase -i to do that, as explained in the git documentation.