decomon.backward_layers package
Submodules
decomon.backward_layers.activations module
- decomon.backward_layers.activations.backward_elu(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of Exponential Linear Unit
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_exponential(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of the exponential function
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_hard_sigmoid(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of hard sigmoid
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_linear(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of linear
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_relu(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, alpha: float = 0.0, max_value: float | None = None, threshold: float = 0.0, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of relu
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
alpha – slope of the negative section, as in Keras relu
max_value – saturation threshold (largest value the function can return)
threshold – value below which the output is damped, as in Keras relu
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
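When the pre-activation interval [l, u] straddles zero, backward LiRPA replaces relu by a pair of affine bounds. The sketch below is illustrative math only, not decomon's internal API; the lower-slope heuristic mimicking V_SLOPE is an assumption:

    def relu_relaxation(l: float, u: float) -> tuple[float, float, float, float]:
        """Return (w_u, b_u, w_l, b_l) with w_l*x + b_l <= relu(x) <= w_u*x + b_u on [l, u]."""
        if u <= 0.0:                    # relu is identically zero on [l, u]
            return 0.0, 0.0, 0.0, 0.0
        if l >= 0.0:                    # relu is the identity on [l, u]
            return 1.0, 0.0, 1.0, 0.0
        w_u = u / (u - l)               # chord from (l, 0) to (u, u): tightest linear upper bound
        b_u = -w_u * l
        w_l = 1.0 if u >= -l else 0.0   # V_SLOPE-style choice of the lower slope (assumption)
        return w_u, b_u, w_l, 0.0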
- decomon.backward_layers.activations.backward_selu(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of Scaled Exponential Linear Unit (SELU)
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_sigmoid(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of sigmoid
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_softmax(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, axis: int = -1, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of softmax
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
axis – axis along which the softmax is applied
Returns:
- decomon.backward_layers.activations.backward_softplus(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of softplus
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_softsign(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of softsign
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.activations.backward_softsign_(inputs: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, slope: str | Slope = Slope.V_SLOPE, **kwargs: Any) list[KerasTensor | Any] [source]
- decomon.backward_layers.activations.backward_tanh(inputs: list[KerasTensor | Any], dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of tanh
- Parameters:
inputs – list of input tensors
dc_decomp – whether to return a difference-of-convex decomposition of the layer
perturbation_domain – type of perturbation domain for the inputs
slope – backward slope used to relax the activation
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
decomon.backward_layers.backward_layers module
- class decomon.backward_layers.backward_layers.BackwardActivation(*args, **kwargs)
Bases:
BackwardLayer
- build(input_shape: list[tuple[int | None, ...]]) None
- Parameters:
input_shape – list of input shapes
Returns:
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- freeze_alpha() None
- freeze_grid() None
- get_config() dict[str, Any]
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
- layer: Layer
- unfreeze_alpha() None
- unfreeze_grid() None
- class decomon.backward_layers.backward_layers.BackwardBatchNormalization(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Batch Normalization
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- layer: Layer
- class decomon.backward_layers.backward_layers.BackwardConv2D(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Conv2D
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- freeze_weights() None
- get_affine_components(inputs: list[Any]) tuple[Any, Any]
Express the implicit affine matrix of the convolution layer.
Conv2D is a linear operator, but its affine component is implicit: im2col and extract_patches are used to express the affine matrix. Note that this matrix is Toeplitz.
- Parameters:
inputs – list of input tensors
- Returns:
the affine operators W, b such that conv(inputs) = W · inputs + b
- layer: Layer
- unfreeze_weights() None
- class decomon.backward_layers.backward_layers.BackwardDense(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Dense
- build(input_shape: list[tuple[int | None, ...]]) None
- Parameters:
input_shape – list of input shapes
Returns:
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- freeze_weights() None
- layer: Layer
- unfreeze_weights() None
- class decomon.backward_layers.backward_layers.BackwardDropout(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Dropout
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- layer: Layer
- class decomon.backward_layers.backward_layers.BackwardFlatten(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Flatten
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- layer: Layer
- class decomon.backward_layers.backward_layers.BackwardInputLayer(*args, **kwargs)
Bases:
BackwardLayer
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- layer: Layer
- class decomon.backward_layers.backward_layers.BackwardPermute(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Permute
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- layer: Layer
- class decomon.backward_layers.backward_layers.BackwardReshape(*args, **kwargs)
Bases:
BackwardLayer
Backward LiRPA of Reshape
- call(inputs: list[Any], **kwargs: Any) list[Any]
- Parameters:
inputs – list of input tensors
Returns:
- layer: Layer
decomon.backward_layers.backward_maxpooling module
decomon.backward_layers.backward_merge module
- class decomon.backward_layers.backward_merge.BackwardAdd(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Add
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardAverage(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Average
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardConcatenate(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Concatenate
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardDot(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Dot
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardMaximum(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Maximum
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardMerge(*args, **kwargs)[source]
Bases:
ABC, Wrapper
- property affine: bool
- abstract call(inputs: list[Any], **kwargs: Any) list[list[Any]] [source]
- Parameters:
inputs – list of input tensors
Returns:
- compute_output_shape(input_shape: list[tuple[int | None, ...]]) list[tuple[int | None, ...]] [source]
Compute expected output shape according to input shape
Will be called by symbolic calls on Keras Tensors.
- Parameters:
input_shape – list of input shapes
Returns:
- get_config() dict[str, Any] [source]
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
- property ibp: bool
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardMinimum(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Minimum
- layer: Layer
- class decomon.backward_layers.backward_merge.BackwardMultiply(*args, **kwargs)[source]
Bases:
BackwardMerge
Backward LiRPA of Multiply
- layer: Layer
decomon.backward_layers.backward_reshape module
decomon.backward_layers.convert module
- decomon.backward_layers.convert.to_backward(layer: Layer, slope: str | Slope = Slope.V_SLOPE, mode: str | ForwardMode = ForwardMode.HYBRID, perturbation_domain: PerturbationDomain | None = None, finetune: bool = False, **kwargs: Any) BackwardLayer [source]
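A minimal usage sketch, assuming the wrapped layer is already built (whether to_backward requires built weights is an assumption here, not stated by the signature):

    from keras.layers import Dense
    from decomon.backward_layers.convert import to_backward

    layer = Dense(10)
    layer.build((None, 32))              # assumption: weights should exist before conversion
    backward_layer = to_backward(layer)  # BackwardLayer wrapping `layer`, V_SLOPE / HYBRID defaults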
decomon.backward_layers.core module
- class decomon.backward_layers.core.BackwardLayer(*args, **kwargs)[source]
Bases:
ABC, Wrapper
- property affine: bool
- compute_output_shape(input_shape: list[tuple[int | None, ...]]) list[tuple[int | None, ...]] [source]
Compute expected output shape according to input shape
Will be called by symbolic calls on Keras Tensors.
- Parameters:
input_shape – list of input shapes
Returns:
- get_config() dict[str, Any] [source]
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
- property ibp: bool
- layer: Layer
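The core operation a BackwardLayer performs is composing affine output bounds through its wrapped layer. For a purely affine layer y = Wx + b the substitution is exact; an illustrative numpy check (not decomon code):

    import numpy as np

    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(4, 3)), rng.normal(size=4)       # layer: y = W @ x + b
    w_u, b_u = rng.normal(size=(2, 4)), rng.normal(size=2)   # affine bound to backpropagate

    w_u_new = w_u @ W          # new linear coefficients, now expressed in x
    b_u_new = w_u @ b + b_u    # bias absorbs the layer's offset

    x = rng.normal(size=3)
    np.testing.assert_allclose(w_u @ (W @ x + b) + b_u, w_u_new @ x + b_u_new)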
decomon.backward_layers.utils module
- decomon.backward_layers.utils.backward_add(inputs_0: list[KerasTensor | Any], inputs_1: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False) list[list[KerasTensor | Any]] [source]
Backward LiRPA of inputs_0 + inputs_1
- Parameters:
inputs_0 – list of input tensors for the first operand
inputs_1 – list of input tensors for the second operand
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
perturbation_domain – type of perturbation domain for the inputs
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
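Addition is affine, so back-propagating bounds through z = inputs_0 + inputs_1 is an exact substitution: both branches receive the same linear coefficients, and only the apportioning of the bias between the two returned bound sets is an implementation detail. Illustrative check (not decomon code):

    import numpy as np

    rng = np.random.default_rng(1)
    w_u, b_u = rng.normal(size=(2, 3)), rng.normal(size=2)   # affine bound on z = x + y
    x, y = rng.normal(size=3), rng.normal(size=3)

    # w_u @ z + b_u expands into identical coefficients on x and on y
    np.testing.assert_allclose(w_u @ (x + y) + b_u, w_u @ x + w_u @ y + b_u)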
- decomon.backward_layers.utils.backward_linear_prod(x_0: KerasTensor | Any, bounds_x: list[KerasTensor | Any], back_bounds: list[KerasTensor | Any], perturbation_domain: PerturbationDomain | None = None) list[KerasTensor | Any] [source]
Backward LiRPA of a linear product subroutine
- Parameters:
x_0 – input tensor of the perturbation domain
bounds_x – affine bounds on the intermediate tensor
back_bounds – affine bounds to backpropagate
perturbation_domain – type of perturbation domain for the inputs
Returns:
- decomon.backward_layers.utils.backward_max_(inputs: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, axis: int = -1, dc_decomp: bool = False, **kwargs: Any) list[KerasTensor | Any] [source]
Backward LiRPA of max
- Parameters:
inputs – list of tensors
dc_decomp – boolean that indicates whether we return a difference-of-convex decomposition of our layer
grad_bounds – boolean that indicates whether we propagate upper and lower bounds on the values of the gradient
perturbation_domain – the type of perturbation domain
axis – axis along which to perform the maximum
- Returns:
max operation along an axis
- decomon.backward_layers.utils.backward_maximum(inputs_0: list[KerasTensor | Any], inputs_1: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False, **kwargs: Any) list[list[KerasTensor | Any]] [source]
Backward LiRPA of maximum(inputs_0, inputs_1)
- Parameters:
inputs_0 – list of input tensors for the first operand
inputs_1 – list of input tensors for the second operand
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
perturbation_domain – type of perturbation domain for the inputs
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.utils.backward_minimum(inputs_0: list[KerasTensor | Any], inputs_1: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False, **kwargs: Any) list[list[KerasTensor | Any]] [source]
Backward LiRPA of minimum(inputs_0, inputs_1)
- Parameters:
inputs_0 – list of input tensors for the first operand
inputs_1 – list of input tensors for the second operand
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
perturbation_domain – type of perturbation domain for the inputs
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.utils.backward_minus(w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any) list[KerasTensor | Any] [source]
Backward LiRPA of -x
- Parameters:
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
Returns:
- decomon.backward_layers.utils.backward_multiply(inputs_0: list[KerasTensor | Any], inputs_1: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False) list[list[KerasTensor | Any]] [source]
Backward LiRPA of the element-wise product inputs_0 * inputs_1
- Parameters:
inputs_0 – list of input tensors for the first operand
inputs_1 – list of input tensors for the second operand
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
perturbation_domain – type of perturbation domain for the inputs
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.utils.backward_scale(scale_factor: float, w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any) list[KerasTensor | Any] [source]
Backward LiRPA of scale_factor*x
- Parameters:
scale_factor – constant scaling factor
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
Returns:
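Back-propagation through y = scale_factor * x is likewise exact: only the linear coefficients are rescaled, the biases are untouched. Illustrative check (not decomon code):

    import numpy as np

    c = 2.5                                  # scale_factor
    w_u, b_u = np.array([1.0, -0.5]), 0.3    # affine upper bound on y
    x = np.array([0.4, 1.2])

    # w_u @ y + b_u == (c * w_u) @ x + b_u  when y = c * x
    assert np.isclose(w_u @ (c * x) + b_u, (c * w_u) @ x + b_u)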
- decomon.backward_layers.utils.backward_sort(inputs: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, axis: int = -1, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False) list[KerasTensor | Any] [source]
Backward LiRPA of sort
- Parameters:
inputs – list of input tensors
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
axis – axis along which to sort
perturbation_domain – type of perturbation domain for the inputs
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
- decomon.backward_layers.utils.backward_subtract(inputs_0: list[KerasTensor | Any], inputs_1: list[KerasTensor | Any], w_u_out: KerasTensor | Any, b_u_out: KerasTensor | Any, w_l_out: KerasTensor | Any, b_l_out: KerasTensor | Any, perturbation_domain: PerturbationDomain | None = None, mode: str | ForwardMode = ForwardMode.HYBRID, dc_decomp: bool = False) list[list[KerasTensor | Any]] [source]
Backward LiRPA of inputs_0 - inputs_1
- Parameters:
inputs_0 – list of input tensors for the first operand
inputs_1 – list of input tensors for the second operand
w_u_out – weights of the affine upper bound to backpropagate
b_u_out – bias of the affine upper bound to backpropagate
w_l_out – weights of the affine lower bound to backpropagate
b_l_out – bias of the affine lower bound to backpropagate
perturbation_domain – type of perturbation domain for the inputs
mode – type of forward propagation (ibp, affine, or hybrid)
Returns:
decomon.backward_layers.utils_conv module
- decomon.backward_layers.utils_conv.get_toeplitz(conv_layer: Conv2D, flatten: bool = True) Any [source]
Express formally the affine component of the convolution. Conv2D is a linear operator, but its affine component is implicit: im2col and extract_patches are used to express the affine matrix. Note that this matrix is Toeplitz.
- Parameters:
conv_layer – Keras Conv2D layer or Decomon Conv2D layer
flatten (optional) – return the affine component as a 2D matrix of shape (n_in, n_out). Defaults to True.
- Returns:
the affine operator W such that conv(x) = Wx + bias
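A sketch of how one might check the extracted operator against a bias-free convolution; this is not decomon's test code, and the (n_in, n_out) layout of W is taken from the flatten docstring above:

    import numpy as np
    from keras.layers import Conv2D
    from decomon.backward_layers.utils_conv import get_toeplitz

    conv = Conv2D(4, (3, 3), use_bias=False, padding="valid")
    x = np.random.rand(1, 8, 8, 3).astype("float32")
    y = np.asarray(conv(x))                             # first call builds the layer
    W = np.asarray(get_toeplitz(conv, flatten=True))    # assumed shape: (n_in, n_out)

    # a bias-free convolution should satisfy flat_out == flat_in @ W
    np.testing.assert_allclose(x.reshape(1, -1) @ W, y.reshape(1, -1), rtol=1e-4, atol=1e-5)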
- decomon.backward_layers.utils_conv.get_toeplitz_channels_first(conv_layer: Conv2D, flatten: bool = True) Any [source]
Express formally the affine component of the convolution for data_format=channels_first. Conv2D is a linear operator, but its affine component is implicit: im2col and extract_patches are used to express the affine matrix. Note that this matrix is Toeplitz.
- Parameters:
conv_layer – Keras Conv2D layer or Decomon Conv2D layer
flatten (optional) – return the affine component as a 2D matrix of shape (n_in, n_out). Defaults to True.
- Returns:
the affine operator W such that conv(x) = Wx + bias
- decomon.backward_layers.utils_conv.get_toeplitz_channels_last(conv_layer: Conv2D, flatten: bool = True) Any [source]
Express formally the affine component of the convolution for data_format=channels_last. Conv2D is a linear operator, but its affine component is implicit: im2col and extract_patches are used to express the affine matrix. Note that this matrix is Toeplitz.
- Parameters:
conv_layer – Keras Conv2D layer or Decomon Conv2D layer
flatten (optional) – return the affine component as a 2D matrix of shape (n_in, n_out). Defaults to True.
- Returns:
the affine operator W such that conv(x) = Wx + bias