Tutorials

We present here a curated list of notebooks recommended for getting started with decomon, available in the tutorials/ folder of the repository.

DECOMON tutorial #1

Bounding the output of a Neural Network trained on a sinusoidal function

After training a model, we want to make sure that the model is smooth: it should predict almost the same output for any data “close” to the initial one, showing some robustness to perturbations.

In this notebook, we train a Neural Network to approximate a simple sinusoidal function (the reference model) as closely as possible. However, between test samples, we have no clue what the output of the Neural Network will look like. The objective is to obtain a formal proof that the neural network’s predictions never drift to unexpected values.

In the first part of the notebook, we define the reference function, build a training and a test dataset, and train a dense, fully connected neural network to approximate this reference function.

In the second part of the notebook, we use decomon to compute guaranteed bounds on the output of the model.

We will show how the decomon module provides guaranteed bounds ensuring that our approximation never behaves unexpectedly between test dataset points.
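
For a flavour of what this looks like in code, here is a minimal sketch (not extracted from the notebook) that trains a small fully connected network on a sine function and then asks decomon for guaranteed output bounds on small input boxes. It assumes decomon's 0.1.x wrapper API (clone, get_upper_box, get_lower_box); exact names and signatures may differ in other releases.

```python
import numpy as np
from tensorflow import keras

# Assumed decomon 0.1.x wrapper API; names may differ in other releases.
from decomon.models import clone
from decomon import get_lower_box, get_upper_box

# Reference function: a simple sinusoid sampled on [0, 1]
x_train = np.linspace(0.0, 1.0, 1000).reshape(-1, 1)
y_train = np.sin(10.0 * x_train)

# Dense, fully connected approximator
model = keras.Sequential(
    [
        keras.layers.Dense(100, activation="relu", input_shape=(1,)),
        keras.layers.Dense(100, activation="relu"),
        keras.layers.Dense(1),
    ]
)
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=20, batch_size=32, verbose=0)

# Convert the Keras model into a decomon model able to propagate bounds
decomon_model = clone(model)

# Guaranteed output bounds on small input boxes [x_min, x_max] between test points
x_min = np.linspace(0.0, 0.9, 10).reshape(-1, 1)
x_max = x_min + 0.1
upper = get_upper_box(decomon_model, x_min, x_max)
lower = get_lower_box(decomon_model, x_min, x_max)
# For every x in [x_min[i], x_max[i]]:  lower[i] <= model(x) <= upper[i]
```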

DECOMON tutorial #2

Local Robustness to sensor noise for Regression

Embedding simulation models developed during the design of a platform opens up many potential new functionalities, but requires additional certification. Usually, these models require too much computing power and take too much time to run, so we need to build an approximation of them that is compatible with operational, hardware, and real-time constraints. We also need to prove that the decisions made by the system using the surrogate model instead of the reference one will be safe.

A first assessment that can be performed is the robustness of the prediction given sensor noise: demonstrating that despite sensor noise, the neural network prediction remains consistent.

Local robustness to sensor noise can be assessed efficiently thanks to formal methods. In this notebook, we demonstrate how to derive deterministic upper and lower bounds on the output prediction of a neural network in the vicinity of a test sample.
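
The kind of check performed in the notebook can be sketched as follows. This is illustrative only: the toy model stands in for the trained surrogate, and the get_upper_noise / get_lower_noise wrappers are assumed from decomon's 0.1.x API.

```python
import numpy as np
from tensorflow import keras

# Assumed decomon 0.1.x wrapper API; names may differ in other releases.
from decomon import get_lower_noise, get_upper_noise

# Toy stand-in for the trained surrogate: a small 1D regression model
model = keras.Sequential(
    [keras.layers.Dense(32, activation="relu", input_shape=(1,)), keras.layers.Dense(1)]
)

x_test = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
eps = 0.05  # assumed maximum sensor noise magnitude (L-infinity ball around each sample)

upper = get_upper_noise(model, x_test, eps=eps, p=np.inf)
lower = get_lower_noise(model, x_test, eps=eps, p=np.inf)

# For any perturbed input within eps of x_test[i], the prediction is guaranteed to
# stay inside [lower[i], upper[i]]; comparing this interval with the operational
# tolerance yields a formal robustness statement.
```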

DECOMON tutorial #3

Local Robustness to Adversarial Attacks for classification tasks

After training a model, we want to make sure that the model will give the same output for any image “close” to the initial one, showing some robustness to perturbations.

In the first part of the notebook, we start from a classifier trained on the MNIST dataset which, given a hand-written digit as input, predicts the digit.

[Figure: examples of hand-written digits]

In the second part of the notebook, we investigate the robustness of this model to unstructured modifications of the input space: adversarial attacks. For this kind of attack, we vary the magnitude of the perturbation of the initial image and want to assess that, despite this noise, the classifier’s prediction remains unchanged.

[Figure: examples of perturbed images]

We will show how to use the decomon module to assess the robustness of the prediction to this noise.
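
As an illustration, a hedged sketch of such a robustness check is given below. The helper certify_prediction is hypothetical, and the get_upper_noise / get_lower_noise wrappers are assumed from decomon's 0.1.x API.

```python
import numpy as np

# Assumed decomon 0.1.x wrapper API; names may differ in other releases.
from decomon import get_lower_noise, get_upper_noise


def certify_prediction(model, x, label, eps):
    """Hypothetical helper: return True when the classifier provably keeps
    predicting `label` for every perturbation of the image `x` of L-infinity
    magnitude at most eps.

    `model` is assumed to output logits (no final softmax), and `x` has shape
    (1, 28 * 28) as for a flattened MNIST image."""
    upper = get_upper_noise(model, x, eps=eps, p=np.inf)[0]  # upper bound of each logit
    lower = get_lower_noise(model, x, eps=eps, p=np.inf)[0]  # lower bound of each logit
    # Conservative criterion: the worst case of the true class must beat the best
    # case of every other class.
    return lower[label] > np.delete(upper, label).max()
```

If the helper returns True, the prediction provably cannot change within the eps-ball; if it returns False, the test is inconclusive, since the bounds are computed independently for each logit.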

DECOMON tutorial #4

Overestimation with formal guarantee for Braking Distance Estimation

In recent years, safety-related properties have emerged for regression tasks in many industries. For example, numerical models have been developed to approximate the physical phenomena inherent in these systems. Since such models are based on physical equations whose relevance is affirmed by scientific experts, their qualification raises no particular difficulty. However, their computational cost and execution time prevent us from embedding them, so their use in the aeronautical domain remains mainly limited to the development and design phases of the aircraft. Thanks to the current success of deep neural networks, previous works have already studied neural network-based surrogates for the approximation of such numerical models. Nevertheless, these surrogates come with additional safety properties that need to be demonstrated to certification authorities. In this notebook, we examine a specification that arises for a neural network used for braking distance estimation, namely the over-estimation of the simulation model, and we explore how to address it with decomon.
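
The over-estimation property can be phrased as a bound comparison. The sketch below is illustrative only: the helper certify_overestimation is hypothetical, the get_lower_box wrapper is assumed from decomon's 0.1.x API, and a valid upper bound of the reference model on each box has to be supplied.

```python
import numpy as np

# Assumed decomon 0.1.x wrapper API; names may differ in other releases.
from decomon import get_lower_box


def certify_overestimation(surrogate, x_min, x_max, ref_upper):
    """Hypothetical helper: check that the surrogate provably over-estimates the
    reference simulation model on a collection of input boxes.

    `ref_upper[i]` must be a valid upper bound of the reference model on the box
    [x_min[i], x_max[i]] (obtained e.g. from domain knowledge or monotonicity)."""
    # Guaranteed lower bound of the surrogate's output on each box
    lower = get_lower_box(surrogate, x_min, x_max)
    # Over-estimation holds on a box when even the surrogate's worst case stays
    # above the largest value the reference model can take on that box.
    return np.all(lower >= ref_upper)
```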

Advanced

Tensorboard with decomon (not yet working with keras 3 and decomon>0.1.1)

In this notebook, we show how to have a look at the graph of a decomon model.

We use here the same model as in tutorial #1; please refer to that notebook for details about how it works.
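
For reference, one generic way to export the graph of a decomon model for TensorBoard is sketched below, using TF2 tracing utilities (Keras 2 era, consistent with decomon <= 0.1.1). This is not necessarily the mechanism used in the notebook, and the clone entry point and toy model are assumed as in tutorial #1.

```python
import numpy as np
import tensorflow as tf

from decomon.models import clone  # assumed conversion entry point, as in tutorial #1

# Toy model mirroring tutorial #1, then converted to a decomon model
keras_model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(100, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dense(1),
    ]
)
decomon_model = clone(keras_model)


@tf.function
def forward(x):
    # Wrapping the call in a tf.function lets TensorFlow record its graph
    return decomon_model(x)


writer = tf.summary.create_file_writer("logs/decomon_graph")
tf.summary.trace_on(graph=True)
forward(tf.constant(np.zeros((1, 1), dtype="float32")))  # single traced call
with writer.as_default():
    tf.summary.trace_export(name="decomon_model", step=0)

# Inspect the graph with: tensorboard --logdir logs/decomon_graph
```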