decomon.layers package

Submodules

decomon.layers.activations module

decomon.layers.activations.deserialize(name: str) Callable[[...], list[KerasTensor | Any]][source]

Get the activation function from its name.

Parameters:

name – name of the activation function, among the implemented Keras activation functions.

Returns:

the activation function

decomon.layers.activations.elu(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Exponential Linear Unit (ELU).

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.exponential(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Exponential activation function.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors
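
Since exp is convex, a linear relaxation over an interval [l, u] can take the chord as upper bound and a tangent as lower bound. The following is a minimal illustrative NumPy sketch of that construction (not decomon's internal code, which works on the mode-dependent tensor format):

    import numpy as np

    def exp_linear_relaxation(l, u):
        """Affine bounds w*x + b of exp on [l, u] (chord above, tangent below)."""
        # Upper bound: chord through (l, exp(l)) and (u, exp(u)).
        w_u = (np.exp(u) - np.exp(l)) / (u - l)
        b_u = np.exp(l) - w_u * l
        # Lower bound: tangent at the midpoint, valid by convexity.
        m = 0.5 * (l + u)
        w_l = np.exp(m)
        b_l = np.exp(m) - w_l * m
        return (w_l, b_l), (w_u, b_u)

    (w_l, b_l), (w_u, b_u) = exp_linear_relaxation(-1.0, 1.0)
    x = np.linspace(-1.0, 1.0, 51)
    assert np.all(w_l * x + b_l <= np.exp(x) + 1e-12)
    assert np.all(np.exp(x) <= w_u * x + b_u + 1e-12)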

decomon.layers.activations.get(identifier: Any) Callable[[...], list[KerasTensor | Any]][source]

Get the activation function corresponding to identifier.

Parameters:

identifier – None or str, name of the function.

Returns:

The activation function, linear if identifier is None.
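
A minimal usage sketch (assuming the module import below), showing how names are resolved to the decomon activation wrappers:

    from decomon.layers import activations

    relu_fn = activations.get("relu")          # resolve by identifier
    tanh_fn = activations.deserialize("tanh")  # resolve by name
    linear_fn = activations.get(None)          # None falls back to linear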

decomon.layers.activations.group_sort_2(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, data_format: str = 'channels_last', slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]
decomon.layers.activations.hard_sigmoid(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Hard sigmoid activation function.

Faster to compute than sigmoid activation.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.linear(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Linear (i.e. identity) activation function.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.linear_hull_s_shape(inputs: list[KerasTensor | Any], func: Callable[[KerasTensor | Any], KerasTensor | Any] = <function sigmoid>, f_prime: Callable[[KerasTensor | Any], KerasTensor | Any] = <function sigmoid_prime>, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE) list[KerasTensor | Any][source]

Compute the linear hull of s-shaped functions.

Parameters:
  • inputs – list of input tensors

  • func – the function (sigmoid, tanh, softsign…)

  • f_prime – the derivative of the function (sigmoid_prime…)

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

Returns:

the updated list of tensors
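
To illustrate the idea, the sketch below builds sound (though looser than decomon's) affine bounds of sigmoid over [l, u]: on a purely convex piece the chord bounds from above and a tangent from below, on a purely concave piece the roles swap, and an interval crossing the inflection point falls back to constant bounds by monotonicity. This is an illustrative NumPy sketch, not the library's implementation:

    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def sigmoid_prime(t):
        s = sigmoid(t)
        return s * (1.0 - s)

    def s_shape_linear_hull(l, u):
        """Loose but sound affine bounds (w, b) of sigmoid on [l, u]."""
        chord_w = (sigmoid(u) - sigmoid(l)) / (u - l)
        chord_b = sigmoid(l) - chord_w * l
        m = 0.5 * (l + u)
        tan_w = sigmoid_prime(m)
        tan_b = sigmoid(m) - tan_w * m
        if u <= 0.0:   # convex piece: tangent below, chord above
            return (tan_w, tan_b), (chord_w, chord_b)
        if l >= 0.0:   # concave piece: chord below, tangent above
            return (chord_w, chord_b), (tan_w, tan_b)
        # Interval crosses the inflection point: constant bounds by monotonicity
        # (decomon computes a tighter hull in this case).
        return (0.0, sigmoid(l)), (0.0, sigmoid(u))

    (w_l, b_l), (w_u, b_u) = s_shape_linear_hull(0.5, 3.0)  # concave piece
    x = np.linspace(0.5, 3.0, 51)
    assert np.all(w_l * x + b_l <= sigmoid(x) + 1e-12)
    assert np.all(sigmoid(x) <= w_u * x + b_u + 1e-12)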

decomon.layers.activations.relu(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, alpha: float = 0.0, max_value: float | None = None, threshold: float = 0.0, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Rectified Linear Unit (ReLU) activation function.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • alpha – see Keras official documentation

  • max_value – see Keras official documentation

  • threshold – see Keras official documentation

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors
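
For an unstable ReLU neuron (l < 0 < u), the standard LiRPA relaxation bounds relu(x) above by the chord through (l, 0) and (u, u), and below by a line of slope in [0, 1]; the slope argument controls this choice (V_SLOPE-like heuristics pick slope 0 or 1 depending on |l| vs u). A minimal illustrative sketch, not decomon's internal code:

    import numpy as np

    def relu_linear_relaxation(l, u):
        """Affine bounds (w, b) of relu on [l, u]."""
        if l >= 0.0:                 # always active: relu(x) = x
            return (1.0, 0.0), (1.0, 0.0)
        if u <= 0.0:                 # always inactive: relu(x) = 0
            return (0.0, 0.0), (0.0, 0.0)
        # Upper bound: chord through (l, 0) and (u, u).
        w_u = u / (u - l)
        b_u = -w_u * l
        # Lower bound: slope 0 or 1, picked to reduce the gap (V_SLOPE-like).
        w_l = 1.0 if u >= -l else 0.0
        return (w_l, 0.0), (w_u, b_u)

    (w_l, b_l), (w_u, b_u) = relu_linear_relaxation(-2.0, 3.0)
    x = np.linspace(-2.0, 3.0, 51)
    assert np.all(w_l * x + b_l <= np.maximum(x, 0.0) + 1e-12)
    assert np.all(np.maximum(x, 0.0) <= w_u * x + b_u + 1e-12)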

decomon.layers.activations.selu(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Scaled Exponential Linear Unit (SELU).

SELU is equal to: scale * elu(x, alpha), where alpha and scale are predefined constants. The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see lecun_normal initialization) and the number of inputs is “large enough” (see references for more information).

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.sigmoid(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Sigmoid activation function: 1 / (1 + exp(-x)).

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.softmax(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, axis: int = -1, clip: bool = True, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Softmax activation function.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.softplus(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Softplus activation function log(exp(x) + 1).

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.softsign(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Softsign activation function x / (abs(x) + 1).

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors

decomon.layers.activations.tanh(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA for Hyperbolic tangent activation function: tanh(x) = 2 * sigmoid(2 * x) - 1.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – type of convex input domain (None or dict)

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – see Keras official documentation

Returns:

the updated list of tensors
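
The identity above can be checked numerically, independently of decomon:

    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    x = np.linspace(-5.0, 5.0, 101)
    assert np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0)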

decomon.layers.convert module

decomon.layers.convert.get_layer_input_shape(layer: Layer) list[tuple[int | None, ...]][source]

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Parameters:

layer

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:
  • AttributeError – if the layer has no defined input_shape.

  • RuntimeError – if called in Eager mode.

decomon.layers.convert.to_decomon(layer: Layer, input_dim: int, slope: str | Slope = Slope.V_SLOPE, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, finetune: bool = False, ibp: bool = True, affine: bool = True, shared: bool = True, fast: bool = True) DecomonLayer[source]

Transform a standard keras layer into a Decomon layer.

The layer type is inspected to determine how to transform it into a DecomonLayer of the appropriate type. If the type is not handled yet, a TypeError is raised.

Parameters:
  • layer – a Keras Layer

  • input_dim – an integer representing the dimension of the input perturbation domain

  • slope

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – the type of perturbation domain

  • ibp – boolean that indicates whether we propagate constant bounds

  • affine – boolean that indicates whether we propagate affine bounds

Returns:

the associated DecomonLayer
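
A minimal usage sketch (hypothetical layer and shapes; the Keras layer must be built first so that its weights and input shape are known):

    import keras
    from decomon.layers.convert import to_decomon

    dense = keras.layers.Dense(10)
    dense(keras.Input(shape=(5,)))  # build the keras layer symbolically
    decomon_dense = to_decomon(dense, input_dim=5, ibp=True, affine=True)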

decomon.layers.core module

class decomon.layers.core.DecomonLayer(*args, **kwargs)[source]

Bases: ABC, Layer

Abstract class containing the common information of every implemented layer for Forward LiRPA

property affine: bool
build(input_shape: list[tuple[int | None, ...]]) None[source]
Parameters:

input_shape

Returns:

abstract call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

compute_output_shape(input_shape: tuple[int | None, ...] | list[tuple[int | None, ...]]) tuple[int | None, ...] | list[tuple[int | None, ...]][source]

Compute expected output shape according to input shape

Will be called by symbolic calls on Keras Tensors.

  • We use the original (Keras) layer compute_output_shape() if available to update accordingly the input shapes.

  • Else we simply return the input shapes

Beware: compute_output_shape() is sometimes called by the original Keras layer inside its call(), which can itself be called inside the decomon layer's call(). Detect this case by inspecting input_shape (a list of shapes or a single shape?).

Parameters:

input_shape

Returns:

compute_output_spec(*args: Any, **kwargs: Any) KerasTensor[source]

Compute output spec from output shape in case of symbolic call.

freeze_alpha() None[source]
freeze_weights() None[source]
get_config() dict[str, Any][source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

property ibp: bool
join(bounds: list[Any]) list[Any][source]
Parameters:

bounds

Returns:

property keras_weights_names: list[str]

Weights names of the corresponding Keras layer.

Will be used to decide which weight to take from the keras layer in reset_layer()

abstract property original_keras_layer_class: type[Layer]

The keras layer class from which this class is the decomon equivalent.

reset_finetuning() None[source]
reset_layer(layer: Layer) None[source]

Reset the weights by using the weights of another (a priori non-decomon) layer.

It sets the weights whose names are listed in keras_weights_names.

Parameters:

layer

Returns:

set_back_bounds(has_backward_bounds: bool) None[source]
share_weights(layer: Layer) None[source]
split_kwargs(**kwargs: Any) None[source]
unfreeze_alpha() None[source]
unfreeze_weights() None[source]

decomon.layers.decomon_layers module

class decomon.layers.decomon_layers.DecomonActivation(*args, **kwargs)

Bases: DecomonLayer, Activation

Forward LiRPA implementation of Activation layers. See Keras official documentation for further details on the Activation operator

build(input_shape: list[tuple[int | None, ...]]) None
Parameters:

input_shape

Returns:

call(inputs: list[Any], **kwargs: Any) list[Any]
Parameters:

inputs

Returns:

freeze_alpha() None
original_keras_layer_class

alias of Activation

reset_finetuning() None
unfreeze_alpha() None
class decomon.layers.decomon_layers.DecomonBatchNormalization(*args, **kwargs)

Bases: DecomonLayer, BatchNormalization

Forward LiRPA implementation of Batch Normalization layers. See Keras official documentation for further details on the BatchNormalization operator

build(input_shape: list[tuple[int | None, ...]]) None
Parameters:

input_shape

Returns:

call(inputs: list[Any], training: bool = False, **kwargs: Any) list[Any]
Parameters:

inputs

Returns:

property keras_weights_names: list[str]

Weights names of the corresponding Keras layer.

Will be used to decide which weight to take from the keras layer in reset_layer()

original_keras_layer_class

alias of BatchNormalization

class decomon.layers.decomon_layers.DecomonConv2D(*args, **kwargs)

Bases: DecomonLayer, Conv2D

Forward LiRPA implementation of Conv2D layers. See Keras official documentation for further details on the Conv2D operator

build(input_shape: list[tuple[int | None, ...]]) None
Parameters:

input_shape

Returns:

call(inputs: list[Any], **kwargs: Any) list[Any]

Compute the perturbation analysis of the operator without the activation function.

Parameters:
  • inputs – list of input tensors

  • **kwargs

Returns:

List of updated tensors

freeze_weights() None
property keras_weights_names: list[str]

Weights names of the corresponding Keras layer.

Will be used to decide which weight to take from the keras layer in reset_layer()

original_keras_layer_class

alias of Conv2D

share_weights(layer: Layer) None
unfreeze_weights() None
class decomon.layers.decomon_layers.DecomonDense(*args, **kwargs)

Bases: DecomonLayer, Dense

Forward LiRPA implementation of Dense layers. See Keras official documentation for further details on the Dense operator

build(input_shape: list[tuple[int | None, ...]]) None
Parameters:

input_shape – list of input shape

Returns:

call(inputs: list[Any], **kwargs: Any) list[Any]
Parameters:

inputs

Returns:

freeze_weights() None
property keras_weights_names: list[str]

Weights names of the corresponding Keras layer.

Will be used to decide which weight to take from the keras layer in reset_layer()

original_keras_layer_class

alias of Dense

set_back_bounds(has_backward_bounds: bool) None
share_weights(layer: Layer) None
unfreeze_weights() None
class decomon.layers.decomon_layers.DecomonDropout(*args, **kwargs)

Bases: DecomonLayer, Dropout

Forward LiRPA implementation of Dropout layers. See Keras official documentation for further details on the Dropout operator

build(input_shape: list[tuple[int | None, ...]]) None
Parameters:

input_shape

Returns:

call(inputs: list[Any], training: bool = False, **kwargs: Any) list[Any]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Dropout

class decomon.layers.decomon_layers.DecomonFlatten(*args, **kwargs)

Bases: DecomonLayer, Flatten

Forward LiRPA implementation of Flatten layers. See Keras official documentation for further details on the Flatten operator

build(input_shape: list[tuple[int | None, ...]]) None
Parameters:
  • self

  • input_shape

Returns:

call(inputs: list[Any], **kwargs: Any) list[Any]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Flatten

class decomon.layers.decomon_layers.DecomonInputLayer(*args, **kwargs)

Bases: DecomonLayer, InputLayer

Forward LiRPA implementation of Input layers. See Keras official documentation for further details on the InputLayer operator

call(inputs: list[Any], **kwargs: Any) list[Any]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of InputLayer

decomon.layers.decomon_merge_layers module

class decomon.layers.decomon_merge_layers.DecomonAdd(*args, **kwargs)[source]

Bases: DecomonMerge, Add

LiRPA implementation of Add layers. See Keras official documentation for further details on the Add operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Add

class decomon.layers.decomon_merge_layers.DecomonAverage(*args, **kwargs)[source]

Bases: DecomonMerge, Average

LiRPA implementation of Average layers. See Keras official documentation for further details on the Average operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Average

class decomon.layers.decomon_merge_layers.DecomonConcatenate(*args, **kwargs)[source]

Bases: DecomonMerge, Concatenate

LiRPA implementation of Concatenate layers. See Keras official documentation for further details on the Concatenate operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Concatenate

class decomon.layers.decomon_merge_layers.DecomonDot(*args, **kwargs)[source]

Bases: DecomonMerge, Dot

LiRPA implementation of Dot layers. See Keras official documentation for further details on the Dot operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Dot

class decomon.layers.decomon_merge_layers.DecomonMaximum(*args, **kwargs)[source]

Bases: DecomonMerge, Maximum

LiRPA implementation of Maximum layers. See Keras official documentation for further details on the Maximum operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Maximum

class decomon.layers.decomon_merge_layers.DecomonMerge(*args, **kwargs)[source]

Bases: DecomonLayer

Base class for Decomon layers based on merging Keras layers.

build(input_shape: list[tuple[int | None, ...]]) None[source]
Parameters:

input_shape

Returns:

compute_output_shape(input_shape: list[tuple[int | None, ...]]) list[tuple[int | None, ...]][source]

Compute output shapes from input shapes.

By default, we assume that all inputs will be merged into “one” (still a list of tensors though).

class decomon.layers.decomon_merge_layers.DecomonMinimum(*args, **kwargs)[source]

Bases: DecomonMerge, Minimum

LiRPA implementation of Minimum layers. See Keras official documentation for further details on the Minimum operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Minimum

class decomon.layers.decomon_merge_layers.DecomonMultiply(*args, **kwargs)[source]

Bases: DecomonMerge, Multiply

LiRPA implementation of Multiply layers. See Keras official documentation for further details on the Multiply operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Multiply

class decomon.layers.decomon_merge_layers.DecomonSubtract(*args, **kwargs)[source]

Bases: DecomonMerge, Subtract

LiRPA implementation of Subtract layers. See Keras official documentation for further details on the Subtract operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Subtract

decomon.layers.decomon_reshape module

class decomon.layers.decomon_reshape.DecomonPermute(*args, **kwargs)[source]

Bases: DecomonLayer, Permute

Forward LiRPA implementation of Permute layers. See Keras official documentation for further details on the Permute operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Permute

class decomon.layers.decomon_reshape.DecomonReshape(*args, **kwargs)[source]

Bases: DecomonLayer, Reshape

Forward LiRPA implementation of Reshape layers. See Keras official documentation for further details on the Reshape operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

original_keras_layer_class

alias of Reshape

decomon.layers.maxpooling module

decomon.layers.maxpooling.DecomonMaxPool2d

alias of DecomonMaxPooling2D

class decomon.layers.maxpooling.DecomonMaxPooling2D(*args, **kwargs)[source]

Bases: DecomonLayer, MaxPooling2D

LiRPA implementation of MaxPooling2D layers. See Keras official documentation for further details on the MaxPooling2D operator

call(inputs: list[Any], **kwargs: Any) list[Any][source]
Parameters:

inputs

Returns:

data_format: str
original_keras_layer_class

alias of MaxPooling2D

padding: str
pool_size: tuple[int, int]
strides: tuple[int, int]

decomon.layers.utils module

class decomon.layers.utils.ClipAlpha[source]

Bases: Constraint

Constrains the weights to be between 0 and 1.

class decomon.layers.utils.ClipAlphaAndSumtoOne[source]

Bases: Constraint

Constrains the weights to be between 0 and 1 and to sum to 1.

class decomon.layers.utils.ClipAlphaGrid[source]

Bases: Constraint

Constrains the weights to be between 0 and 1.

class decomon.layers.utils.MultipleConstraint(constraint_0: Constraint | None, constraint_1: Constraint, **kwargs: Any)[source]

Bases: Constraint

Constraint that applies multiple constraints in sequence.

class decomon.layers.utils.NonNeg[source]

Bases: Constraint

Constrains the weights to be non-negative.

class decomon.layers.utils.NonPos[source]

Bases: Constraint

Constrains the weights to be non-positive.

class decomon.layers.utils.Project_initializer_neg(initializer: Initializer, **kwargs: Any)[source]

Bases: Initializer

Initializer that wraps another initializer and projects the generated values to be non-positive.

class decomon.layers.utils.Project_initializer_pos(initializer: Initializer, **kwargs: Any)[source]

Bases: Initializer

Initializer that wraps another initializer and projects the generated values to be non-negative.

decomon.layers.utils.abs(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID) list[KerasTensor | Any][source]

LiRPA implementation of |x|

Parameters:
  • inputs

  • dc_decomp

  • perturbation_domain

  • mode

Returns:

decomon.layers.utils.broadcast(inputs: list[KerasTensor | Any], n: int, axis: int, mode: str | ForwardMode, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None) list[KerasTensor | Any][source]

LiRPA implementation of broadcasting

Parameters:
  • inputs

  • n

  • axis

  • mode

Returns:

decomon.layers.utils.exp(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]

Exponential activation function.

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – the type of convex input domain

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • slope

  • **kwargs – extra parameters

Returns:

the updated list of tensors

decomon.layers.utils.expand_dims(inputs: list[KerasTensor | Any], dc_decomp: bool = False, mode: str | ForwardMode = ForwardMode.HYBRID, axis: int = -1, perturbation_domain: PerturbationDomain | None = None, **kwargs: Any) list[KerasTensor | Any][source]
decomon.layers.utils.frac_pos(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any][source]
decomon.layers.utils.frac_pos_hull(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID) list[KerasTensor | Any][source]

LiRPA implementation of 1/x for x>0

Parameters:
  • inputs

  • dc_decomp

  • perturbation_domain

  • mode

Returns:
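
For intuition: 1/x is convex and decreasing on x > 0, so over [l, u] the chord is an upper bound and any tangent a lower bound. An illustrative NumPy sketch of such a hull (not decomon's exact code):

    import numpy as np

    def frac_pos_linear_hull(l, u):
        """Affine bounds w*x + b of 1/x on [l, u], assuming 0 < l < u."""
        # Upper bound: chord through (l, 1/l) and (u, 1/u); its slope is -1/(l*u).
        w_u = -1.0 / (l * u)
        b_u = 1.0 / l + 1.0 / u       # so that w_u * l + b_u == 1 / l
        # Lower bound: tangent at the midpoint, valid by convexity.
        m = 0.5 * (l + u)
        w_l = -1.0 / m**2
        b_l = 2.0 / m                 # tangent line: 1/m - w_l * m == 2/m
        return (w_l, b_l), (w_u, b_u)

    (w_l, b_l), (w_u, b_u) = frac_pos_linear_hull(0.5, 2.0)
    x = np.linspace(0.5, 2.0, 51)
    assert np.all(w_l * x + b_l <= 1.0 / x + 1e-12)
    assert np.all(1.0 / x <= w_u * x + b_u + 1e-12)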

decomon.layers.utils.is_a_merge_layer(layer: Layer) bool[source]
decomon.layers.utils.linear_to_softmax(model: Model) tuple[Model, bool][source]
decomon.layers.utils.log(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any][source]

Logarithmic activation function: log(x).

Parameters:
  • inputs – list of input tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – the type of convex input domain

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • **kwargs – extra parameters

Returns:

the updated list of tensors

decomon.layers.utils.max_(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, axis: int = -1, finetune: bool = False, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA implementation of max(x, axis)

Parameters:
  • inputs – list of tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer

  • perturbation_domain – the type of perturbation domain

  • axis – axis along which to take the maximum

Returns:

max operation along an axis

decomon.layers.utils.min_(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, axis: int = -1, finetune: bool = False, **kwargs: Any) list[KerasTensor | Any][source]

LiRPA implementation of min(x, axis=axis)

Parameters:
  • inputs

  • dc_decomp

  • perturbation_domain

  • mode

  • axis

Returns:

decomon.layers.utils.multiply(inputs_0: list[KerasTensor | Any], inputs_1: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID) list[KerasTensor | Any][source]

LiRPA implementation of element-wise multiplication: multiply(x, y) = x * y.

Parameters:
  • inputs_0 – list of tensors

  • inputs_1 – list of tensors

  • dc_decomp – boolean that indicates whether we return a difference of convex decomposition of our layer (i.e. whether we also propagate upper and lower bounds on the values of the gradient)

  • perturbation_domain – the type of perturbation domain

  • mode – type of Forward propagation (ibp, affine, or hybrid)

Returns:

multiply(inputs_0, inputs_1)
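
Bilinear terms like x * y are commonly relaxed with McCormick envelopes: given box bounds x in [lx, ux] and y in [ly, uy], the classic construction below gives sound lower and upper envelopes (an illustrative sketch of the standard technique, not necessarily decomon's exact decomposition):

    import numpy as np

    def mccormick_bounds(x, y, lx, ux, ly, uy):
        """Classic McCormick lower/upper envelopes of x * y on a box."""
        lower = np.maximum(lx * y + ly * x - lx * ly,
                           ux * y + uy * x - ux * uy)
        upper = np.minimum(ux * y + ly * x - ux * ly,
                           lx * y + uy * x - lx * uy)
        return lower, upper

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 2.0, 100)
    y = rng.uniform(0.5, 3.0, 100)
    lo, hi = mccormick_bounds(x, y, -1.0, 2.0, 0.5, 3.0)
    assert np.all(lo <= x * y + 1e-9) and np.all(x * y <= hi + 1e-9)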

decomon.layers.utils.permute_dimensions(inputs: list[KerasTensor | Any], axis: int, mode: str | ForwardMode = ForwardMode.HYBRID, axis_perm: int = 1, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None) list[KerasTensor | Any][source]

LiRPA implementation of permute_dimensions(x, axis)

Parameters:
  • inputs – list of input tensors

  • axis – axis on which we apply the permutation

  • mode – type of Forward propagation (ibp, affine, or hybrid)

  • axis_perm – see DecomonPermute operator

Returns:

decomon.layers.utils.pow(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID) list[KerasTensor | Any][source]

LiRPA implementation of pow(x) = x**2

Parameters:
  • inputs

  • dc_decomp

  • perturbation_domain

  • mode

Returns:

decomon.layers.utils.softmax_to_linear(model: Model) tuple[Model, bool][source]

Linearize the softmax layer for verification.

Parameters:

model – Keras Model

Returns:

model without the softmax
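
A short usage sketch: verification is typically run on the pre-softmax logits, and the returned flag indicates whether a softmax layer was found and replaced:

    from decomon.layers.utils import softmax_to_linear

    # `model` is an existing keras.Model, possibly ending with a softmax.
    linear_model, had_softmax = softmax_to_linear(model)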

decomon.layers.utils.softplus_(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any][source]
decomon.layers.utils.sort(inputs: list[KerasTensor | Any], axis: int = -1, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID) list[KerasTensor | Any][source]

LiRPA implementation of sort by selection

Parameters:
  • inputs

  • axis

  • dc_decomp

  • perturbation_domain

  • mode

Returns:

decomon.layers.utils.split(inputs: list[KerasTensor | Any], axis: int = -1, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None) list[list[KerasTensor | Any]][source]

LiRPA implementation of split

Parameters:
  • inputs

  • axis

  • mode

Returns:

decomon.layers.utils.sum(inputs: list[KerasTensor | Any], axis: int = -1, dc_decomp: bool = False, mode: str | ForwardMode = ForwardMode.HYBRID, perturbation_domain: PerturbationDomain | None = None, **kwargs: Any) list[KerasTensor | Any][source]

decomon.layers.utils_pooling module

decomon.layers.utils_pooling.get_lower_linear_hull_max(inputs: list[KerasTensor | Any], mode: str | ForwardMode = ForwardMode.HYBRID, perturbation_domain: PerturbationDomain | None = None, axis: int = -1, finetune_lower: Any | None = None, dc_decomp: bool = False, **kwargs: Any) list[KerasTensor | Any][source]

Compute the linear hull that under-approximates max along the axis dimension

Parameters:
  • inputs – list of input tensors

  • mode – type of Forward propagation (ibp, affine, or hybrid). Default to hybrid.

  • perturbation_domain (optional) – type of perturbation domain that encompasses the set of perturbations. Defaults to None.

  • axis (optional) – Defaults to -1. See Keras official documentation for backend.max(., axis)

  • finetune_lower – If not None, should be a constant tensor used to fine tune the lower relaxation.

Raises:

NotImplementedError – if axis < 0 and axis != -1

Returns:

list of output tensors. The lower linear relaxation of max(., axis) in the mode format

decomon.layers.utils_pooling.get_upper_linear_hull_max(inputs: list[KerasTensor | Any], mode: str | ForwardMode = ForwardMode.HYBRID, perturbation_domain: PerturbationDomain | None = None, axis: int = -1, dc_decomp: bool = False, **kwargs: Any) list[KerasTensor | Any][source]

Compute the linear hull that overapproximates max along the axis dimension

Parameters:
  • inputs – list of input tensors

  • mode – type of Forward propagation (ibp, affine, or hybrid). Default to hybrid.

  • perturbation_domain (optional) – type of perturbation domain that encompasses the set of perturbations. Defaults to None.

  • axis (optional) – Defaults to -1. See Keras official documentation for backend.max(., axis)

Raises:

NotImplementedError – if axis < 0 and axis != -1

Returns:

list of output tensors. The upper linear relaxation of max(., axis) in the mode format
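
For intuition, here are sound (but deliberately loose) affine bounds of max given constant box bounds l <= x <= u along the reduced axis; the functions above compute tighter, mode-formatted (and finetunable) hulls. An illustrative NumPy sketch:

    import numpy as np

    def loose_linear_hulls_of_max(l, u):
        """Sound but loose affine bounds of max_i x_i given l <= x <= u.

        Returns (w_lo, b_lo), (w_up, b_up) such that
        w_lo @ x + b_lo <= max(x) <= w_up @ x + b_up for all l <= x <= u.
        """
        # Lower hull: one-hot on the coordinate with the largest lower bound,
        # since x_j <= max_i x_i holds for every j.
        w_lo = np.zeros_like(l)
        w_lo[np.argmax(l)] = 1.0
        # Upper hull: the constant max_i u_i dominates max_i x_i on the box.
        w_up = np.zeros_like(u)
        b_up = np.max(u)
        return (w_lo, 0.0), (w_up, b_up)

    l = np.array([-1.0, 0.5, 0.0])
    u = np.array([2.0, 1.0, 3.0])
    (w_lo, b_lo), (w_up, b_up) = loose_linear_hulls_of_max(l, u)
    x = np.random.default_rng(0).uniform(l, u, (100, 3))
    assert np.all(x @ w_lo + b_lo <= x.max(axis=1))
    assert np.all(x.max(axis=1) <= x @ w_up + b_up)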

Module contents