decomon.models package
Submodules
decomon.models.backward_cloning module
- decomon.models.backward_cloning.convert_backward(model: ~keras.src.models.model.Model, input_tensors: list[~keras.src.backend.common.keras_tensor.KerasTensor], back_bounds: list[~keras.src.backend.common.keras_tensor.KerasTensor] | None = None, slope: str | ~decomon.core.Slope = Slope.V_SLOPE, ibp: bool = True, affine: bool = True, perturbation_domain: ~decomon.core.PerturbationDomain | None = None, finetune: bool = False, forward_map: dict[str | int, list[~keras.src.backend.common.keras_tensor.KerasTensor] | dict[str | int, list[~keras.src.backend.common.keras_tensor.KerasTensor] | OutputMapDict]] | None = None, softmax_to_linear: bool = True, joint: bool = True, layer_fn: ~collections.abc.Callable[[...], ~decomon.backward_layers.core.BackwardLayer] = <function to_backward>, final_ibp: bool = True, final_affine: bool = False, input_dim: int = -1, **kwargs: ~typing.Any) tuple[list[KerasTensor], list[KerasTensor], dict[int, BackwardLayer], None] [source]
- decomon.models.backward_cloning.crown_(node: Node, ibp: bool, affine: bool, perturbation_domain: PerturbationDomain, input_map: dict[int, list[KerasTensor]], layer_fn: Callable[[Layer], BackwardLayer], backward_bounds: list[KerasTensor], backward_map: dict[int, BackwardLayer] | None = None, joint: bool = True, fuse: bool = True, output_map: dict[int, list[KerasTensor]] | None = None, merge_layers: Layer | None = None, fuse_layer: Layer | None = None, **kwargs: Any) tuple[list[KerasTensor], Layer | None] [source]
- Parameters:
node –
ibp –
affine –
input_map –
layer_fn –
backward_bounds –
backward_map –
joint –
fuse –
- Returns:
list of 4 tensors representing the affine upper and lower bounds
- decomon.models.backward_cloning.crown_model(model: ~keras.src.models.model.Model, input_tensors: list[~keras.src.backend.common.keras_tensor.KerasTensor], back_bounds: list[~keras.src.backend.common.keras_tensor.KerasTensor] | None = None, slope: str | ~decomon.core.Slope = Slope.V_SLOPE, ibp: bool = True, affine: bool = True, perturbation_domain: ~decomon.core.PerturbationDomain | None = None, finetune: bool = False, forward_map: dict[str | int, list[~keras.src.backend.common.keras_tensor.KerasTensor] | dict[str | int, list[~keras.src.backend.common.keras_tensor.KerasTensor] | OutputMapDict]] | None = None, softmax_to_linear: bool = True, joint: bool = True, layer_fn: ~collections.abc.Callable[[...], ~decomon.backward_layers.core.BackwardLayer] = <function to_backward>, fuse: bool = True, **kwargs: ~typing.Any) tuple[list[KerasTensor], list[KerasTensor], dict[int, BackwardLayer], None] [source]
- decomon.models.backward_cloning.get_disconnected_input(mode: str | ForwardMode, perturbation_domain: PerturbationDomain, dtype: str | None = None) Layer [source]
- decomon.models.backward_cloning.get_input_nodes(model: Model, dico_nodes: dict[int, list[Node]], ibp: bool, affine: bool, input_tensors: list[KerasTensor], output_map: dict[str | int, list[KerasTensor] | dict[str | int, list[KerasTensor] | OutputMapDict]], layer_fn: Callable[[Layer], BackwardLayer], joint: bool, set_mode_layer: Layer, perturbation_domain: PerturbationDomain | None = None, **kwargs: Any) tuple[dict[int, list[KerasTensor]], dict[int, BackwardLayer], dict[int, list[KerasTensor]]] [source]
- decomon.models.backward_cloning.retrieve_layer(node: Node, layer_fn: Callable[[Layer], BackwardLayer], backward_map: dict[int, BackwardLayer], joint: bool = True) BackwardLayer [source]
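convert_backward is normally reached through decomon.models.convert.clone with a CROWN-based method, but it can also be called directly. A minimal sketch, assuming a toy Sequential model, a box domain from decomon.core, and decomon input tensors built with decomon.models.utils.get_input_tensors (these choices are assumptions, not taken from this reference):

    from keras import Input, Sequential
    from keras.layers import Dense

    from decomon.core import BoxDomain  # assumption: box perturbation domain lives in decomon.core
    from decomon.models.backward_cloning import convert_backward
    from decomon.models.utils import get_input_tensors

    keras_model = Sequential([Input((2,)), Dense(4, activation="relu"), Dense(1)])

    domain = BoxDomain()
    z, input_tensors = get_input_tensors(keras_model, domain, ibp=True, affine=True)

    # Per the signature above: (input tensors, output tensors, backward layer map, None).
    _, output_tensors, backward_map, _ = convert_backward(
        keras_model, input_tensors, perturbation_domain=domain
    )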
decomon.models.convert module
- decomon.models.convert.clone(model: ~keras.src.models.model.Model, layer_fn: ~collections.abc.Callable[[...], ~keras.src.layers.layer.Layer] = <function to_decomon>, slope: str | ~decomon.core.Slope = Slope.V_SLOPE, perturbation_domain: ~decomon.core.PerturbationDomain | None = None, method: str | ~decomon.models.utils.ConvertMethod = ConvertMethod.CROWN, back_bounds: list[~keras.src.backend.common.keras_tensor.KerasTensor] | None = None, finetune: bool = False, shared: bool = True, finetune_forward: bool = False, finetune_backward: bool = False, extra_inputs: list[~keras.src.backend.common.keras_tensor.KerasTensor] | None = None, to_keras: bool = True, final_ibp: bool | None = None, final_affine: bool | None = None, **kwargs: ~typing.Any) DecomonModel [source]
- decomon.models.convert.convert(model: ~keras.src.models.model.Model, input_tensors: list[~keras.src.backend.common.keras_tensor.KerasTensor], method: str | ~decomon.models.utils.ConvertMethod = ConvertMethod.CROWN, ibp: bool = False, affine: bool = False, back_bounds: list[~keras.src.backend.common.keras_tensor.KerasTensor] | None = None, layer_fn: ~collections.abc.Callable[[...], ~keras.src.layers.layer.Layer] = <function to_decomon>, slope: str | ~decomon.core.Slope = Slope.V_SLOPE, input_dim: int = -1, perturbation_domain: ~decomon.core.PerturbationDomain | None = None, finetune: bool = False, forward_map: dict[str | int, list[~keras.src.backend.common.keras_tensor.KerasTensor] | dict[str | int, list[~keras.src.backend.common.keras_tensor.KerasTensor] | OutputMapDict]] | None = None, shared: bool = True, softmax_to_linear: bool = True, finetune_forward: bool = False, finetune_backward: bool = False, final_ibp: bool = False, final_affine: bool = False, **kwargs: ~typing.Any) tuple[list[KerasTensor], list[KerasTensor], dict[int, list[DecomonLayer] | dict[int, list[DecomonLayer] | LayerMapDict]] | dict[int, BackwardLayer], dict[str | int, list[KerasTensor] | dict[str | int, list[KerasTensor] | OutputMapDict]] | None] [source]
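A minimal usage sketch for clone (not taken from the decomon documentation; the toy model and the (batch, 2, input_dim) box encoding of the perturbation input are assumptions based on decomon's default box domain):

    import numpy as np
    from keras import Input, Sequential
    from keras.layers import Dense

    from decomon.models.convert import clone
    from decomon.models.utils import ConvertMethod

    keras_model = Sequential([Input((2,)), Dense(8, activation="relu"), Dense(1)])
    decomon_model = clone(keras_model, method=ConvertMethod.CROWN)  # CROWN is the default method

    # Assumption: with the default box domain, the perturbation input stacks the
    # lower and upper corners along axis 1, giving shape (batch, 2, input_dim).
    x_min = np.zeros((1, 2))
    x_max = np.ones((1, 2))
    box = np.concatenate([x_min[:, None], x_max[:, None]], axis=1)
    bounds = decomon_model.predict_on_single_batch_np(box)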
decomon.models.crown module
- class decomon.models.crown.Fuse(*args, **kwargs)[source]
Bases: Layer
decomon.models.forward_cloning module
Module for DecomonSequential.
It inherits from the Keras Sequential class.
- decomon.models.forward_cloning.convert_forward(model: ~keras.src.models.model.Model, input_tensors: list[~keras.src.backend.common.keras_tensor.KerasTensor], layer_fn: ~collections.abc.Callable[[...], ~keras.src.layers.layer.Layer] = <function to_decomon>, slope: str | ~decomon.core.Slope = Slope.V_SLOPE, input_dim: int = -1, dc_decomp: bool = False, perturbation_domain: ~decomon.core.PerturbationDomain | None = None, ibp: bool = True, affine: bool = True, finetune: bool = False, shared: bool = True, softmax_to_linear: bool = True, **kwargs: ~typing.Any) tuple[list[KerasTensor], list[KerasTensor], dict[int, list[DecomonLayer] | dict[int, list[DecomonLayer] | LayerMapDict]], dict[str | int, list[KerasTensor] | dict[str | int, list[KerasTensor] | OutputMapDict]]] [source]
- decomon.models.forward_cloning.convert_forward_functional_model(model: Model, layer_fn: Callable[[Layer], list[Layer]], input_tensors: list[KerasTensor], softmax_to_linear: bool = True, count: int = 0, output_map: dict[str | int, list[KerasTensor] | dict[str | int, list[KerasTensor] | OutputMapDict]] | None = None, layer_map: dict[int, list[DecomonLayer] | dict[int, list[DecomonLayer] | LayerMapDict]] | None = None, layer2layer_map: dict[int, list[Layer]] | None = None) tuple[list[KerasTensor], list[KerasTensor], dict[int, list[DecomonLayer] | dict[int, list[DecomonLayer] | LayerMapDict]], dict[str | int, list[KerasTensor] | dict[str | int, list[KerasTensor] | OutputMapDict]], dict[int, list[Layer]]] [source]
- decomon.models.forward_cloning.include_dim_layer_fn(layer_fn: Callable[[...], Layer], input_dim: int, slope: str | Slope = Slope.V_SLOPE, dc_decomp: bool = False, perturbation_domain: PerturbationDomain | None = None, ibp: bool = True, affine: bool = True, finetune: bool = False, shared: bool = True) Callable[[Layer], list[Layer]] [source]
Include external parameters in the translation of a layer to its decomon counterpart.
- Parameters:
layer_fn –
input_dim –
dc_decomp –
perturbation_domain –
finetune –
- Returns:
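A hypothetical sketch of a standalone forward conversion, under the same assumptions as the backward sketch above (toy model, box domain, decomon input tensors built with get_input_tensors):

    from keras import Input, Sequential
    from keras.layers import Dense

    from decomon.core import BoxDomain
    from decomon.models.forward_cloning import convert_forward
    from decomon.models.utils import get_input_tensors

    keras_model = Sequential([Input((3,)), Dense(5, activation="relu"), Dense(2)])

    domain = BoxDomain()
    z, input_tensors = get_input_tensors(keras_model, domain, ibp=True, affine=True)

    # Per the signature above: (input tensors, output tensors, layer map, output map).
    _, output_tensors, layer_map, output_map = convert_forward(
        keras_model, input_tensors, perturbation_domain=domain, ibp=True, affine=True
    )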
decomon.models.models module
- class decomon.models.models.DecomonModel(*args, **kwargs)[source]
Bases: Model
- get_config() dict[str, Any] [source]
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
- predict_on_single_batch_np(inputs: ndarray | list[ndarray]) ndarray | list[ndarray] [source]
Make predictions on numpy arrays that fit in a single batch.
Avoids self.predict(), which is not designed for small arrays and is known to leak memory when called in a loop.
See https://keras.io/api/models/model_training_apis/#predict-method and https://github.com/tensorflow/tensorflow/issues/44711
- Parameters:
inputs –
- Returns:
- set_domain(perturbation_domain: PerturbationDomain) None [source]
- decomon.models.models.get_AB(model: DecomonModel) dict[str, list[Variable]] [source]
- decomon.models.models.get_AB_finetune(model: DecomonModel) dict[str, Variable] [source]
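A short sketch of how a DecomonModel is typically used once built; the construction via clone and the box-shaped inputs below are assumptions, not part of this API reference:

    import numpy as np
    from keras import Input, Sequential
    from keras.layers import Dense

    from decomon.models.convert import clone

    decomon_model = clone(Sequential([Input((2,)), Dense(4, activation="relu"), Dense(1)]))

    # Loop over small batches with predict_on_single_batch_np instead of predict(),
    # which the docstring above warns against for small arrays / repeated calls.
    x_min = np.random.rand(100, 2)
    boxes = np.stack([x_min, x_min + 0.1], axis=1)  # assumed (batch, 2, input_dim) box encoding
    outputs = [
        decomon_model.predict_on_single_batch_np(boxes[i : i + 10])
        for i in range(0, len(boxes), 10)
    ]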
decomon.models.utils module
- class decomon.models.utils.Convert2Mode(*args, **kwargs)[source]
Bases: Layer
- class decomon.models.utils.ConvertMethod(value)[source]
Bases: str, Enum
Enumeration of the conversion methods available in decomon.
- CROWN = 'crown'
- CROWN_FORWARD_AFFINE = 'crown-forward-affine'
- CROWN_FORWARD_HYBRID = 'crown-forward-hybrid'
- CROWN_FORWARD_IBP = 'crown-forward-ibp'
- FORWARD_AFFINE = 'forward-affine'
- FORWARD_HYBRID = 'forward-hybrid'
- FORWARD_IBP = 'forward-ibp'
- class decomon.models.utils.FeedDirection(value)[source]
Bases: str, Enum
Enumeration of the feed directions (forward or backward) used during conversion.
- BACKWARD = 'feed_backward'
- FORWARD = 'feed_forward'
- decomon.models.utils.check_model2convert_inputs(model: Model) None [source]
Check that the model to convert satisfies the hypotheses decomon makes on its inputs.
Namely:
only one input
the input must be flattened: only the batch size plus one other dimension
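An illustrative sketch of these hypotheses (the models below are hypothetical; the convolutional model is expected, though not guaranteed here, to be rejected):

    from keras import Input, Model
    from keras.layers import Conv2D, Dense

    from decomon.models.utils import check_model2convert_inputs

    inp = Input((10,))                          # batch size + a single flat dimension
    flat_model = Model(inp, Dense(3)(inp))
    check_model2convert_inputs(flat_model)      # expected to pass silently

    img = Input((8, 8, 1))                      # image-shaped input: violates the hypothesis
    conv_model = Model(img, Conv2D(2, 3)(img))
    # check_model2convert_inputs(conv_model)    # expected to raise; flatten the input first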
- decomon.models.utils.get_direction(method: str | ConvertMethod) FeedDirection [source]
- decomon.models.utils.get_ibp_affine_from_method(method: str | ConvertMethod) tuple[bool, bool] [source]
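A quick sketch combining ConvertMethod with the two helpers above; the commented expectations follow from the method names and should be treated as assumptions:

    from decomon.models.utils import ConvertMethod, get_direction, get_ibp_affine_from_method

    for method in [ConvertMethod.FORWARD_IBP, ConvertMethod.CROWN_FORWARD_HYBRID]:
        print(method.value, get_direction(method), get_ibp_affine_from_method(method))
    # forward-ibp presumably maps to FeedDirection.FORWARD with (ibp=True, affine=False);
    # crown-forward-hybrid to FeedDirection.BACKWARD with (True, True).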
- decomon.models.utils.get_input_tensors(model: Model, perturbation_domain: PerturbationDomain, ibp: bool = True, affine: bool = True) tuple[KerasTensor, list[KerasTensor]] [source]
- decomon.models.utils.prepare_inputs_for_layer(inputs: tuple[KerasTensor, ...] | list[KerasTensor] | KerasTensor) tuple[KerasTensor, ...] | list[KerasTensor] | KerasTensor [source]
Prepare inputs for keras/decomon layers.
Some Keras layers do not accept a list of tensors, even a list containing a single tensor; in that case we keep only the tensor itself.
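A tiny sketch of this behaviour; that a singleton list is unwrapped to the bare tensor is an assumption based on the docstring:

    from keras import Input

    from decomon.models.utils import prepare_inputs_for_layer

    t = Input((4,))
    single = prepare_inputs_for_layer([t])       # assumed to return the tensor itself, not [t]
    several = prepare_inputs_for_layer([t, t])   # lists with several tensors pass through unchanged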