# builders.domain.scheduling.scheduling_domains

Domain specification.

# SchedulingObjectiveEnum

Enum defining the different scheduling objectives.

# COST SchedulingObjectiveEnum

Cost of resources (to be minimized).

# MAKESPAN SchedulingObjectiveEnum

Makespan (to be minimized).

# D

Base class for any stateful scheduling domain.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

By default, this function always returns True, because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

Here we must be consistent with skdecide.core.autocast(), which transforms a single-agent domain into a multi-agent domain whose only agent has the id "agent".

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial observation). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
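
A usage sketch (assuming a single-agent domain named domain, a valid action my_action, and an outcome.termination that behaves as a plain boolean; all names here are hypothetical):

# Reset the environment, then interact step by step until the episode ends
observation = domain.reset()
for _ in range(10):
    outcome = domain.step(my_action)
    if outcome.termination:
        break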

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

By default, this function always returns True, because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

Get the domain action space (finite or infinite set).

This is a helper function called by default from Events._get_action_space(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention indicating that its result should be constant.

# Returns

The action space.
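
As a sketch of this convention for a single-agent domain with a small constant action space (the string actions are illustrative, and ListSpace is assumed to be available from skdecide.hub.space.gym):

from skdecide.hub.space.gym import ListSpace

def _get_action_space_(self):
    # computed only on the first call; get_action_space() caches the result
    return ListSpace(["start", "pause", "resume"])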

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history).

This is a helper function called by default from Events._get_applicable_actions(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of applicable actions.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention indicating that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Get the initial state.

This is a helper function called by default from DeterministicInitialized._get_initial_state(), the difference being that the result is not cached here.

# Returns

The initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention indicating that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention indicating that its result should be constant.

# Returns

The memory max length.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

Get the observation space (finite or infinite set).

This is a helper function called by default from PartiallyObservable._get_observation_space(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention indicating that its result should be constant.

# Returns

The observation space.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, whereas the latter works at the observation level.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one sample of the transition's dynamics.

This is a helper function called by default from Simulation._sample(). It focuses on the state level, whereas the latter works at the observation level.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The transition outcome of the sampled transition.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, whereas the latter works at the observation level.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# SchedulingDomain

This is the highest-level scheduling domain class (inheriting the top-level class for each mandatory domain characteristic). This is where the stateful scheduling domain is implemented, giving the user the possibility to define the scheduling problem without having to think of a stateful version.

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which a task can no longer be executed.
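
A sketch of how a search procedure might use this check to prune dead ends (domain, successors and expand are hypothetical names):

for successor in successors:
    if not domain.all_tasks_possible(successor):
        continue  # some task has no feasible mode left: prune this state
    expand(successor)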

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
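
A usage sketch (domain, state and action are hypothetical variables):

can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    # resources_to_use maps resource names to the quantities to allocate
    print(resources_to_use)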

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

By default, this function always returns True, because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

For the common case where it is possible to do the task with a single resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

Here we must be consistent with skdecide.core.autocast(), which transforms a single-agent domain into a multi-agent domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key the id of a mode and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}
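
A sketch of a matching override, with hypothetical task ids, mode ids and skill names:

def get_all_tasks_skills(self):
    # task 1, mode 1 needs level-2 welding; mode 2 has no skill requirement
    return {1: {1: {"welding": 2}, 2: {}}}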

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
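
A sketch of a matching override (task ids, mode ids and costs are hypothetical):

def get_mode_costs(self):
    # task 1 can run in two modes with different costs; task 2 has one mode
    return {
        1: {1: 5.0, 2: 8.0},
        2: {1: 3.0},
    }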

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
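
A minimal sketch of an override for a domain that only minimizes makespan:

def get_objectives(self):
    return [SchedulingObjectiveEnum.MAKESPAN]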

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).
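
A sketch of a matching override (resource names are hypothetical):

def get_resource_renewability(self):
    # workers are renewable; budget is consumed for good
    return {"worker": True, "budget": False}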

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if no resource units match a resource type.
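
A sketch of a matching override (unit and type names are hypothetical):

def get_resource_type_for_unit(self):
    return {"machine_1": "machine", "machine_2": "machine"}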

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the set of all skill names (as a set of str). Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task in the given mode.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.
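
A sketch with a hypothetical four-task precedence graph (task 1 before tasks 2 and 3, which both precede task 4):

def get_successors(self):
    return {1: [2, 3], 2: [4], 3: [4], 4: []}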

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and each value is a list of distributions over tuples. Each tuple contains a condition element (first item in the tuple) and the probability (second item in the tuple) that this condition element is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,  # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped.

E.g.

return {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start).

E.g.

return {
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task to the start of the second task.

e.g.

return {
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors.

E.g.

return {
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
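
A sketch of the expected call site, assuming MyDomain is a user-defined subclass (hypothetical name; the abstract methods a concrete subclass must implement are omitted):

class MyDomain(SchedulingDomain):
    def __init__(self):
        self.initialize_domain()  # required when instantiating the domain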

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial observation). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of the resource available at the given time and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Sample, store and return task duration for the given task in the given mode.
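
A sketch of a possible override drawing durations uniformly from per-task bounds (_duration_bounds and _sampled_durations are hypothetical attributes):

import random

def sample_task_duration(self, task, mode=1, progress_from=0.0):
    low, high = self._duration_bounds[task][mode]  # hypothetical bounds store
    duration = random.randint(low, high)
    self._sampled_durations[(task, mode)] = duration  # store the sample
    return duration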

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy before modifying it. The in-place version is several times faster but will lead to bugs in graph search solvers.
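
For example, before running a graph search solver one would typically disable in-place updates (domain is a hypothetical instance):

domain.set_inplace_environment(False)  # copy states: safe for graph search solvers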

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time at which each task was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of states. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}
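
# Example

A minimal override sketch (resource names, skill names and skill levels are purely illustrative):

def _get_all_resources_skills(self):
    return {
        "worker_1": {"welding": 2},         # resource unit with a skill level
        "machine_type_A": {"drilling": 1},  # resource type with a skill level
    }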

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task (int) and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
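
# Example

A minimal override sketch (task ids, mode ids and costs are illustrative):

def _get_mode_costs(self):
    # task 1 has two modes; mode 2 is assumed faster but more expensive
    return {
        1: {1: 10.0, 2: 25.0},
        2: {1: 5.0},
    }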

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution SchedulingDomain

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
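
# Example

A minimal override sketch for a domain that only minimizes makespan:

def _get_objectives(self):
    return [SchedulingObjectiveEnum.MAKESPAN]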

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).
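
# Example

A minimal override sketch (task ids and resource names are illustrative):

def _get_preallocations(self):
    # task 3 must be executed with worker_1 and machine_1
    return {3: ["worker_1", "machine_1"]}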

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of each task. Predecessors are given as a list for a task given as a key.
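
# Example

A minimal override sketch for a 3-task chain in which task 3 waits for tasks 1 and 2:

def _get_predecessors(self):
    return {1: [], 2: [1], 3: [1, 2]}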

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.
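
# Example

A minimal override sketch (resource names and costs are illustrative):

def _get_resource_cost_per_time_unit(self):
    return {"worker_1": 30.0, "machine_type_A": 12.5}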

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
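
# Example

A minimal override sketch (resource names are illustrative):

def _get_resource_renewability(self):
    # workers are released when tasks finish; a budget is consumed for good
    return {"worker_1": True, "budget": False}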

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.
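
# Example

A minimal override sketch (unit and type names are illustrative):

def _get_resource_type_for_unit(self):
    return {"worker_1": "operator", "worker_2": "operator"}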

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of each task. Successors are given as a list for a task given as a key.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped.

E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start).

E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks (int), as a set, a list, or the keys of a dict.
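
# Example

A minimal override sketch (task ids are illustrative):

def _get_tasks_ids(self):
    return {1, 2, 3}  # a list or a dict keyed by task id would also work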

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

E.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors.

E.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.
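
# Example

A minimal override sketch in which availability fluctuates around a fixed nominal level (the dynamics and the nominal value are purely illustrative):

import random

def _sample_quantity_resource(self, resource, time, **kwargs):
    nominal = 10  # hypothetical nominal capacity, identical for all resources
    return max(0, nominal - random.randint(0, 2))  # occasional small outages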

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Sample and return a task duration for the given task in the given mode.
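
# Example

A minimal override sketch drawing durations uniformly from a fixed range (the bounds are purely illustrative and identical for every task and mode):

import random

def _sample_task_duration(self, task, mode=1, progress_from=0.0):
    return random.randint(3, 7)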

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, whereas the latter also handles the observation level.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, whereas the latter also handles the observation level.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# D_det

Base class for deterministic scheduling problems

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.
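
# Example

A minimal usage sketch (assuming a Markovian, single-agent domain, and that the returned DiscreteDistribution exposes get_values() as in skdecide.core; my_state and my_action are placeholders):

dist = domain.get_next_state_distribution(my_state, my_action)
for next_state, probability in dist.get_values():
    print(next_state, probability)  # enumerate the support of the distribution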

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
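
# Example

A minimal sketch of the TIP above, for a domain whose transition value never depends on next_state:

def _is_transition_value_dependent_on_next_state_(self):
    return False  # lets solvers skip computing next_state when evaluating values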

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial observation). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns an environment outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns an environment outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

Get the domain action space (finite or infinite set).

This is a helper function called by default from Events._get_action_space(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The action space.

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history).

This is a helper function called by default from Events._get_applicable_actions(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of applicable actions.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Get the initial state.

This is a helper function called by default from DeterministicInitialized._get_initial_state(), the difference being that the result is not cached here.

# Returns

The initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> SingleValueDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

Get the observation space (finite or infinite set).

This is a helper function called by default from PartiallyObservable._get_observation_space(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The observation space.

# _get_transition_value UncertainTransitions

_get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
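
A minimal sketch combining both points, assuming a toy domain whose transition cost depends on the action only (Value comes from skdecide.core):

from skdecide.core import Value

def _get_transition_value(self, memory, action, next_state=None):
    # The cost depends on the action only, so next_state is ignored
    return Value(cost=abs(action))

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    # Lets solvers skip computing the next state when evaluating values
    return False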

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_terminal UncertainTransitions

_is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator rather than use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one sample of the transition's dynamics.

This is a helper function called by default from Simulation._sample(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The transition outcome of the sampled transition.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.
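
As an illustration, a minimal sketch of a state-level step for the same hypothetical integer-state Markovian domain (TransitionOutcome and Value come from skdecide.core; the terminal bound of 10 is purely illustrative):

from skdecide.core import TransitionOutcome, Value

def _state_step(self, action):
    next_state = self._memory + action  # assuming _memory holds the last state
    return TransitionOutcome(
        state=next_state,
        value=Value(cost=1),            # unit cost per step
        termination=next_state >= 10,   # illustrative terminal condition
        info=None,
    )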

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
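
Following the tip above, a sketch of wrapping an external environment directly in _step(), assuming the domain holds a hypothetical gymnasium-style environment in self._gym_env (EnvironmentOutcome and Value come from skdecide.core):

from skdecide.core import EnvironmentOutcome, Value

def _step(self, action):
    # Delegate the dynamics to the external environment
    obs, reward, terminated, truncated, info = self._gym_env.step(action)
    return EnvironmentOutcome(
        observation=obs,
        value=Value(reward=reward),
        termination=terminated or truncated,
        info=info,
    )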

# D_uncertain

Base class for uncertain scheduling problems where next-state distributions can be computed.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

Here we must be consistent with skdecide.core.autocast(), which transforms a single-agent domain into a multi-agent domain whose only agent has the id "agent".

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(o|s, a)$, where $o$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial observation). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns an environment outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator rather than use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns an environment outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

Get the domain action space (finite or infinite set).

This is a helper function called by default from Events._get_action_space(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The action space.

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history).

This is a helper function called by default from Events._get_applicable_actions(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of applicable actions.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Get the initial state.

This is a helper function called by default from DeterministicInitialized._get_initial_state(), the difference being that the result is not cached here.

# Returns

The initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(o|s, a)$, where $o$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

Get the observation space (finite or infinite set).

This is a helper function called by default from PartiallyObservable._get_observation_space(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The observation space.

# _get_transition_value UncertainTransitions

_get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_terminal UncertainTransitions

_is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator rather than use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one sample of the transition's dynamics.

This is a helper function called by default from Simulation._sample(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The transition outcome of the sampled transition.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# UncertainSchedulingDomain

This is the highest-level scheduling domain class (inheriting the top-level class for each mandatory domain characteristic).

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which a task is no longer feasible.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
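
A hypothetical usage sketch, where domain, state and action are a scheduling domain, a scheduling state and a SchedulingAction built elsewhere:

can_start, resource_usage = domain.check_if_action_can_be_started(state, action)
if can_start:
    print(f"Action can be started, using resources: {resource_usage}")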

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicate resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where the task can be done by a single resource unit. In the general case, it may return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

Here we must be consistent with skdecide.core.autocast(), which transforms a single-agent domain into a multi-agent domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode (int) and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Return the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused, or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
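
The expected shape of the returned mapping, with purely illustrative task ids, mode ids and costs:

mode_costs = {
    1: {1: 5.0, 2: 8.0},  # task 1 costs 5.0 in mode 1 and 8.0 in mode 2
    2: {1: 3.0},          # task 2 has a single mode
}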

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
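
For example, a domain minimizing only the makespan could override the underscored variant as in this sketch:

def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
    # Minimize the makespan only (illustrative choice)
    return [SchedulingObjectiveEnum.MAKESPAN]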

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).
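
For instance (the task id and resource unit names below are illustrative assumptions):

def _get_preallocations(self) -> dict[int, list[str]]:
    # Hypothetical: task 3 must use the units 'machine_a' and 'operator_1'
    return {3: ["machine_a", "operator_1"]}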

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.
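
For example, two cranes belonging to a single 'crane' resource type might be declared as follows (names are illustrative assumptions):

def _get_resource_type_for_unit(self) -> dict[str, str]:
    # Map each hypothetical resource unit to its resource type
    return {"crane_1": "crane", "crane_2": "crane"}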

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the set of all skill names (str). Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task in the given mode.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.
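
As an illustration, a small fork/join precedence graph could be encoded as in the sketch below (task ids are assumptions):

def _get_successors(self) -> dict[int, list[int]]:
    # Task 1 precedes tasks 2 and 3, which both precede task 4
    return {1: [2, 3], 2: [4], 3: [4], 4: []}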

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1,
         get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dictionary mapping a task id to a list of distributions. Each distribution is built from tuples containing the probability (first item) that the condition element (second item) is True. The probabilities within each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example (note that each distribution's probabilities sum to 1):

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1),
                              (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05),
                              (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a bool indicating whether non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources remain consumed
    5: True,   # if paused, non-renewable resources are made available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating whether the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating whether the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the beginning). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at the given time and the quantity of this resource consumed so far.
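
A stochastic override of the underscored variant might, for example, sample the total capacity of a resource at a given time, i.e. available plus already consumed (the nominal capacity and outage probability below are illustrative assumptions):

import random

def _sample_quantity_resource(self, resource: str, time: int, **kwargs) -> int:
    # Total capacity (available + already consumed): nominally 5 units,
    # with a hypothetical 10% chance that one unit is out of service
    return 5 - (1 if random.random() < 0.1 else 0)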

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Sample, store and return the task duration for the given task in the given mode.
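
For instance, a noisy duration model could be sketched as follows (the triangular parameters are illustrative assumptions):

import random

def _sample_task_duration(self, task: int, mode: int = 1, progress_from: float = 0.0) -> int:
    # Hypothetical noisy duration around a nominal value of 10 time units
    return max(1, round(random.triangular(8, 14, 10)))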

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Specify whether the simulator modifies the given state in place or creates a copy before modifying it. The in-place version is several times faster but will lead to bugs in graph search solvers.
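
For example, in-place mode could be toggled around simulation-heavy phases (illustrative usage; domain is an assumed SchedulingDomain instance):

# Fast rollouts: states are mutated in place
domain.set_inplace_environment(True)
# ... simulation-based experiments ...

# Back to safe copies, as required by graph search solvers
domain.set_inplace_environment(False)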

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
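
A minimal episode loop combining reset() and step() might look like the sketch below (my_policy is an assumed callable mapping an observation to a SchedulingAction; domain and single-agent attribute access are assumptions):

observation = domain.reset()
while True:
    outcome = domain.step(my_policy(observation))  # my_policy is hypothetical
    observation = outcome.observation
    if outcome.termination:
        break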

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time at which each task was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given action is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given action is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the time of the states if the time_progress attribute of the given action is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is a mode id and the third key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused, or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in the mode.

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1,
         get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a bool indicating whether non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources remain consumed
    5: True,   # if paused, non-renewable resources are made available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating whether the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating whether the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the beginning). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks, as a set, dict (with ids as keys), or list of int.
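
For instance (a sketch with five hypothetical tasks):

def _get_tasks_ids(self) -> set[int]:
    # Tasks identified by ids 1..5
    return set(range(1, 6))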

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time-varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3],
                                   'rt_2': [0, 0, 0, 0, 0],
                                   'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2],
                                   'rt_2': [0, 0, 0, 0, 0, 0, 0],
                                   'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at the given time and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# DeterministicSchedulingDomain

This is the highest level scheduling domain class (inheriting top-level class for each mandatory domain characteristic).

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which some task can no longer be executed.
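
This can serve as a quick dead-end check inside a custom rollout, e.g. (illustrative usage; domain and state are assumed to be at hand):

if not domain.all_tasks_possible(state):
    # A non-renewable resource has been exhausted: no schedule can
    # complete every remaining task from this state
    print("Dead end reached")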

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
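
Illustrative usage (state and action are assumed to be a State and a SchedulingAction already at hand):

can_start, resource_usage = domain.check_if_action_can_be_started(state, action)
# resource_usage maps each resource name to the quantity the action would use
if can_start:
    print("Action can be started using:", resource_usage)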

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where a single resource unit can perform the task on its own: return the resource units that can each individually perform the given task in the given mode. In the general case, it might return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode (int) and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.
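
A sketch of the underlying filtering, assuming the state exposes the conditions realised so far as a set (the attribute name current_conditions is an assumption for the example):

def get_available_tasks(self, state) -> set[int]:
    conditions = self.get_task_existence_conditions()  # task id -> required conditions
    met = state.current_conditions                     # conditions realised so far (assumed attribute)
    return {
        task
        for task in self.get_tasks_ids()               # public counterpart of _get_tasks_ids, documented below
        if all(c in met for c in conditions.get(task, []))
    }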

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
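
For example, the returned structure and a simple evaluation of the cost of a mode assignment might look like this (toy values, not taken from the library):

mode_costs = {
    1: {1: 10.0, 2: 14.0},   # task 1 can run in mode 1 (cost 10.0) or mode 2 (cost 14.0)
    2: {1: 3.5},             # task 2 has a single mode
}
chosen_modes = {1: 2, 2: 1}  # mode picked for each task
total_cost = sum(mode_costs[t][m] for t, m in chosen_modes.items())
print(total_cost)            # 17.5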

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state s), given an action a, this function can be mathematically represented by P(S'|s, a), where S' is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.
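
As an illustration, a two-outcome transition could be returned with skdecide.core.DiscreteDistribution (the same class used in examples further down this page); the helper _advance below is hypothetical, standing in for whatever successor computation the domain performs:

from skdecide.core import DiscreteDistribution

def _get_next_state_distribution(self, memory, action):
    # Hypothetical helper computing the successor state for a given duration.
    fast = self._advance(memory, action, duration=3)
    slow = self._advance(memory, action, duration=5)
    return DiscreteDistribution([(fast, 0.7), (slow, 0.3)])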

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action a, this function represents: P(O|s, a), where O is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the set of all skill names (str). Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.
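
Both getters describe the same precedence graph from opposite directions. For a toy graph 1 → 2 → 3 with an extra arc 1 → 3:

successors = {1: [2, 3], 2: [3], 3: []}
predecessors = {1: [], 2: [1], 3: [1, 2]}

# Consistency check: j is a successor of i iff i is a predecessor of j.
assert all(i in predecessors[j] for i, succ in successors.items() for j in succ)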

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions that must all be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1,
         get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and the value a list of DiscreteDistribution objects. Each distribution is built from tuples pairing a condition element with the probability that it becomes True on completion; the probabilities within one distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1),
                              (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05),
                              (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

E.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.
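
A sketch of how such lags constrain a schedule, with a stand-in TimeLag class mirroring only the minimum/maximum fields implied above (the real class ships with scikit-decide):

class TimeLag:
    def __init__(self, minimum: int, maximum: int):
        self.minimum, self.maximum = minimum, maximum

def lag_respected(end_first: int, start_second: int, lag: TimeLag) -> bool:
    # The gap between the first task's end and the second task's start
    # must fall within [minimum, maximum].
    gap = start_second - end_first
    return lag.minimum <= gap <= lag.maximum

print(lag_respected(10, 17, TimeLag(5, 10)))  # True: gap of 7 lies in [5, 10]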

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors.

E.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary in time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.
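
A toy override illustrating one possible availability model (the nominal capacities and the 5% outage probability are invented for the example; the underscore variant _sample_quantity_resource, documented further below, is the hook typically implemented):

import random

def _sample_quantity_resource(self, resource: str, time: int, **kwargs) -> int:
    nominal = {"rt_1": 4, "rt_2": 2}[resource]  # toy nominal capacities
    # With 5% probability, one unit of the resource is unavailable at this time.
    return nominal - (1 if random.random() < 0.05 else 0)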

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Sample, store and return task duration for the given task in the given mode.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Activate or deactivate in-place modification of states by the simulator: when activated, the given state is modified in place rather than copied beforehand. The in-place version is several times faster but will lead to bugs in graph search solvers.
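
A typical usage pattern suggested by the description above: keep the fast in-place mode for simulation rollouts and disable it before handing the domain to a graph search solver:

domain.set_inplace_environment(True)   # fast rollouts: states are mutated in place
domain.set_inplace_environment(False)  # safe for graph search solvers: states are copied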

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time each task was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time each task was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time each task was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Samples completion conditions for a given task and adds these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode (int) and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action a, this function represents: P(O|s, a), where O is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions that must all be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1,
         get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks, as a set of int, a dict whose keys are the task ids, or a list of int.

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time-varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}
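
A minimal constant-consumption implementation following the examples above (the import path of ConstantModeConsumption is not shown on this page and may vary across scikit-decide versions):

def _get_tasks_modes(self) -> dict:
    return {
        1: {1: ConstantModeConsumption({"rt_1": 0, "rt_2": 0})},  # dummy source task
        2: {
            1: ConstantModeConsumption({"rt_1": 2, "rt_2": 1}),   # two alternative
            2: ConstantModeConsumption({"rt_1": 0, "rt_2": 3}),   # modes for task 2
        },
    }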

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

E.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors.

E.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary in time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# SingleModeRCPSP

Single mode (classic) resource-constrained project scheduling problem (RCPSP) template. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • tasks having deterministic resource consumption

The goal is to minimize the overall makespan while respecting the cumulative resource consumption constraints (a toy instance is sketched below).
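
# Example

A toy single-mode RCPSP instance written as plain Python data, only to illustrate the ingredients listed above; all names and numbers are made up.

precedence = {1: [], 2: [1], 3: [1], 4: [2, 3]}   # task -> predecessors
duration = {1: 0, 2: 3, 3: 2, 4: 0}               # deterministic durations
capacity = {"machine": 2}                          # renewable, constant availability
consumption = {1: {"machine": 0}, 2: {"machine": 1},
               3: {"machine": 2}, 4: {"machine": 0}}

# A schedule assigns a start time to each task. It is feasible if every task
# starts after all its predecessors have finished and, at every time step, the
# total consumption of ongoing tasks stays within capacity. The MAKESPAN
# objective minimizes max(start[t] + duration[t]) over all tasks t.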

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which some task can never be executed again.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.
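
# Example

A minimal sketch of the uniqueness check this method describes.

def names_are_unique(resource_types: list[str], resource_units: list[str]) -> bool:
    all_names = resource_types + resource_units
    return len(all_names) == len(set(all_names))

assert names_are_unique(["machine"], ["machine_1", "machine_2"])
assert not names_are_unique(["machine"], ["machine"])  # duplicate across the two lists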

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

For the common case where the task can be done by a single resource unit. In the general case, it might simply return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be consistent here with skdecide.core.autocast(), which transforms a single-agent domain into a multi-agent domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is a mode id, and the third key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}
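
# Example

Hypothetical data shaped like the dictionaries above, plus a naive coverage check; the "skill level" semantics is an assumption for illustration, since the skill details are domain-specific.

resource_skills = {"worker_1": {"welding": 2},
                   "worker_2": {"welding": 1, "painting": 1}}
task_skills = {12: {1: {"welding": 2}}}  # task 12, mode 1 requires welding level 2

def covers(resource: str, task: int, mode: int) -> bool:
    required = task_skills[task][mode]
    available = resource_skills.get(resource, {})
    return all(available.get(skill, 0) >= level for skill, level in required.items())

assert covers("worker_1", 12, 1)
assert not covers("worker_2", 12, 1)  # welding level too low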

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused, or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode, and the value indicates the cost of executing the task in that mode.

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.
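
# Example

A sketch of building and using such a distribution with skdecide.core.DiscreteDistribution, using plain strings as stand-in states; check the exact API against your installed version.

from skdecide.core import DiscreteDistribution

dist = DiscreteDistribution([("task_done", 0.8), ("task_delayed", 0.2)])
next_state = dist.sample()   # draws a state according to the given probabilities
support = dist.get_values()  # [("task_done", 0.8), ("task_delayed", 0.2)]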

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.
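
# Example

An illustrative computation of a resource-usage cost (relevant to the COST objective) from this dictionary; the usage figures are made up.

cost_per_time_unit = {"machine": 2.0, "operator": 1.5}
busy_time = {"machine": 10, "operator": 6}  # time units each resource was used

total_cost = sum(cost_per_time_unit[r] * t for r, t in busy_time.items())
assert total_cost == 29.0  # 2.0 * 10 + 1.5 * 6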

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there is no resource unit matching a resource type.
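
# Example

A hypothetical mapping in the shape this method describes: two interchangeable cranes grouped under the resource type "crane".

def get_resource_type_for_unit(self) -> dict[str, str]:
    return {"crane_1": "crane", "crane_2": "crane"}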

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the set of all skill names (str). Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.
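
# Example

A sketch of deriving the skill-name set from the two dictionaries mentioned above, assuming the {resource: {skill: detail}} and {task: {mode: {skill: detail}}} shapes.

def skills_names(resources_skills: dict, tasks_skills: dict) -> set[str]:
    names = set()
    for skills in resources_skills.values():
        names.update(skills)          # resource -> {skill: detail}
    for modes in tasks_skills.values():
        for skills in modes.values():
            names.update(skills)      # task -> mode -> {skill: detail}
    return names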

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.
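
# Example

The successor and predecessor views are mirror images of the same precedence graph; a sketch of deriving one from the other.

predecessors = {1: [], 2: [1], 3: [1], 4: [2, 3]}

successors = {t: [] for t in predecessors}
for task, preds in predecessors.items():
    for p in preds:
        successors[p].append(task)

assert successors == {1: [2, 3], 2: [4], 3: [4], 4: []}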

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings can be provided via the multivariate_settings parameter.

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there is no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists of DiscreteDistribution objects. The key of the dict is the task id. Each distribution is defined over tuples whose first item is a condition element and whose second item is the probability that this condition element becomes True on completion. The probabilities in each distribution should sum up to 1. The dictionary should only contain keys for tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resource will be consumed
    5: True,   # if paused, the non-renewable resource will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.
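
# Example

A feasibility check in the spirit of the description above, written with plain (min_lag, max_lag) tuples so as not to assume TimeLag's attribute names.

time_lags = {12: {15: (5, 10)}}  # end(12) -> start(15) must be separated by 5..10

def lags_respected(start: dict, end: dict) -> bool:
    return all(
        lo <= start[succ] - end[pred] <= hi
        for pred, succs in time_lags.items()
        for succ, (lo, hi) in succs.items()
    )

assert lags_respected(start={15: 20}, end={12: 12})      # gap of 8: within [5, 10]
assert not lags_respected(start={15: 30}, end={12: 12})  # gap of 18: too large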

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors. E.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
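
# Example

A sketch of a makespan-style transition value that never depends on next_state, so the override mentioned in the TIP applies. The elapsed_time() helper is hypothetical, and a single-agent domain is assumed (autocasting handles the per-agent dict); in practice both methods would live on the domain class.

from skdecide.core import Value

def _get_transition_value(self, memory, action, next_state=None):
    # One unit of cost per elapsed time step (hypothetical helper).
    return Value(cost=elapsed_time(memory, action))

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    return False  # lets solvers skip computing next_state for the value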

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial observation). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource still available at time t and the quantity of this resource consumed so far (i.e. the gross availability).

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy beforehand. The in-place version is several times faster but will lead to bugs in graph search solvers.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
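
# Example

A typical reset/step rollout, assuming domain is an Environment-style scheduling domain and choose_action is some hypothetical policy function; single-agent autocasting is assumed for the outcome fields.

observation = domain.reset()          # required before the first step
while True:
    action = choose_action(observation)
    outcome = domain.step(action)     # EnvironmentOutcome
    observation = outcome.observation
    if outcome.termination:           # end of episode: reset before stepping again
        break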

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time at which each task was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulation scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the states in the distribution if the time_progress attribute of the given SchedulingAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused, or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode, and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there is no resource unit matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the Distribution of the duration of the given task in the given mode. Because the duration is deterministic, the distribution always returns the same duration.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there is no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resource will be consumed
    5: True,   # if paused, the non-renewable resource will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.
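
# Example

A linear-progress sketch matching the default behaviour described above: the progress gained between t_from and t_to is the elapsed time divided by the task duration.

def task_progress(t_from: int, t_to: int, duration: int) -> float:
    return (t_to - t_from) / duration if duration > 0 else 1.0

assert task_progress(2, 5, 10) == 0.3  # 3 elapsed steps out of a 10-step task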

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return a set, dict, or list of task ids (int).

# _get_tasks_mode SingleMode

_get_tasks_mode(
  self
) -> dict[int, ModeConsumption]

Return a dictionary where the key is a task id and the value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption { 12: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}) }

E.g. with time varying resource consumption { 12: VaryingModeConsumption({'rt_1': [2,2,2,2,3], 'rt_2': [0,0,0,0,0], 'ru_1': [1,1,1,1,1]}) }
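
# Example

A sketch of this method for a 3-task single-mode domain with constant consumption; the import path follows scikit-decide's scheduling builders, but double-check it against your installed version, and note that the method would be defined on the domain class.

from skdecide.builders.domain.scheduling.modes import ConstantModeConsumption

def _get_tasks_mode(self):
    # Resource type "rt_1" and resource unit "ru_1" are made-up names.
    return {
        1: ConstantModeConsumption({"rt_1": 0, "ru_1": 0}),
        2: ConstantModeConsumption({"rt_1": 2, "ru_1": 1}),
        3: ConstantModeConsumption({"rt_1": 1, "ru_1": 0}),
    }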

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption.

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is an EmptyTimeWindow object (i.e. no time window constraint by default).

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.
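
# Example

A hedged sketch of an override where availability fluctuates randomly around a nominal capacity (resource names and capacities are made up):

import random

def _sample_quantity_resource(self, resource, time, **kwargs):
    nominal = {'rt_1': 4, 'rt_2': 2}  # hypothetical nominal capacities
    # Occasionally one unit of the resource is unavailable
    return max(0, nominal[resource] - random.randint(0, 1))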

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.
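
# Example

A minimal sketch that samples uniformly between the duration bounds documented elsewhere on this page (assuming the domain also provides those bounds):

import random

def _sample_task_duration(self, task, mode=1, progress_from=0.0):
    lo = self.get_task_duration_lower_bound(task, mode)
    hi = self.get_task_duration_upper_bound(task, mode)
    return random.randint(lo, hi)  # uniform integer duration in [lo, hi]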

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# SingleModeRCPSPCalendar

Single mode Resource-Constrained Project Scheduling Problem (RCPSP) with varying resource availability template. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with variable availability through time
  • tasks having deterministic resource consumption

The goal is to minimize the overall makespan, respecting the cumulative resource consumption constraint at any time.

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which a task will not be possible anymore.
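
# Example

A usage sketch for pruning dead ends during search (domain and frontier are placeholders):

# Keep only states from which the scheduling problem is still solvable
solvable_states = [s for s in frontier if domain.all_tasks_possible(s)]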

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
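
# Example

A usage sketch (domain, state and action are placeholders):

can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    # resources_to_use maps resource names to the quantities the action would use
    print(resources_to_use)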

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

For the common case where the task can be done by a single resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is a task id, the second key a mode id and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is, tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
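
For illustration (task ids, mode ids and costs are made up):

{
    1: {1: 10.0, 2: 7.5},  # task 1 costs 10.0 in mode 1, 7.5 in mode 2
    2: {1: 3.0},           # task 2 has a single mode
}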

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)
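
For illustration (task ids and resource unit names are made up):

{ 3: ['unit_A'], 7: ['unit_A', 'unit_B'] }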

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of each task. Predecessors are given as a list for a task given as a key.
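
For illustration (a three-task chain; ids are made up):

{ 1: [], 2: [1], 3: [1, 2] }  # task 3 can only start once tasks 1 and 2 have ended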

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return all skill names as a set of str. Skill names are defined in the 2 dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.
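
# Example

A minimal sketch of an implementation, overriding the underscore variant as elsewhere in this API (durations and probabilities are made up; DiscreteDistribution is the class used in the conditional-task examples on this page):

def _get_task_duration_distribution(self, task, mode=1, progress_from=0.0, multivariate_settings=None):
    return DiscreteDistribution([(5, 0.6), (8, 0.4)])  # 60% chance of 5 time units, 40% of 8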

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is a task id and each list is composed of DiscreteDistributions over condition elements, where each tuple contains a condition element (first item in the tuple) and the probability (second item in the tuple) that this element is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

{
    2: False,  # if paused, the non-renewable resources will be consumed
    5: True,   # if paused, the non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped.

E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start).

E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors.

e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
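
# Example

A minimal sketch of where to call it (MyDomain and the required _get_* implementations are assumed to exist):

class MyDomain(SchedulingDomain):
    def __init__(self):
        self.initialize_domain()  # must be called when instantiating a scheduling domain
        # ... domain-specific setup ...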

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy beforehand. The in-place version is several times faster but will lead to bugs in graph search solvers.
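
# Example

A usage sketch (domain is a placeholder):

# Prefer the (slower) copying behaviour when a graph search solver will explore states
domain.set_inplace_environment(False)

# The in-place version is fine for pure forward rollouts
domain.set_inplace_environment(True)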

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time they were completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time they were completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time they were completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of States. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}
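
# Example

A minimal sketch of an override (resource names, skill names and skill levels are made up):

def _get_all_resources_skills(self):
    return {
        'worker_1': {'welding': 2},
        'worker_2': {'welding': 1, 'painting': 1},
    }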

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is a task id and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is, tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O \mid s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.
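
E.g., a sketch for a four-task diamond-shaped precedence graph (ids illustrative):

def _get_predecessors(self) -> dict[int, list[int]]:
    # task 1 starts the project; task 4 must wait for both 2 and 3
    return {1: [], 2: [1], 3: [1], 4: [2, 3]}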

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
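
A minimal illustrative override (resource names are made up):

def _get_resource_renewability(self) -> dict[str, bool]:
    # workers become available again after a task; raw material is consumed for good
    return {"worker": True, "raw_material": False}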

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the Distribution of the duration of the given task in the given mode. For domains with deterministic durations, the returned distribution always yields the same duration.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

return {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

return {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of the tasks, as a set, dict, or list of int.

# _get_tasks_mode SingleMode

_get_tasks_mode(
  self
) -> dict[int, ModeConsumption]

Return a dictionary where the key is a task id and the value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption { 12: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}) }

E.g. with time varying resource consumption { 12: VaryingModeConsumption({'rt_1': [2,2,2,2,3], 'rt_2': [0,0,0,0,0], 'ru_1': [1,1,1,1,1]}) }

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption.

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and returns a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of the resource available at time t and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It works at the state level, whereas the latter works at the observation level.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It works at the state level, whereas the latter works at the observation level.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeRCPSP

Multi-mode (classic) resource-constrained project scheduling problem template. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • a set of non-renewable (consumable) resources
  • tasks having several modes of execution, each mode giving a deterministic resource consumption and duration

The goal is to minimize the overall makespan while respecting the cumulative resource consumption constraints (a minimal domain sketch follows).
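
A minimal sketch of a concrete domain built on this template. Only a few of the required overrides are shown (a real domain must also provide durations, precedence, resource availability, etc.), all numbers are illustrative, and the import path of ConstantModeConsumption is assumed from the module layout:

from skdecide.builders.domain.scheduling.modes import ConstantModeConsumption
from skdecide.builders.domain.scheduling.scheduling_domains import (
    MultiModeRCPSP,
    SchedulingObjectiveEnum,
)

class MyRCPSP(MultiModeRCPSP):
    def _get_max_horizon(self) -> int:
        return 50

    def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
        return [SchedulingObjectiveEnum.MAKESPAN]

    def _get_tasks_ids(self) -> set[int]:
        return {1, 2, 3}

    def _get_tasks_modes(self) -> dict:
        # task 2 offers two modes trading resource usage for duration
        return {
            1: {1: ConstantModeConsumption({"rt_1": 1})},
            2: {1: ConstantModeConsumption({"rt_1": 2}),
                2: ConstantModeConsumption({"rt_1": 1})},
            3: {1: ConstantModeConsumption({"rt_1": 1})},
        }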

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Samples completion conditions for a given task and adds these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if for each task there is at least one mode in which the task can be executed, given the resource configuration of the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with non-renewable resources, whose exhaustion may lead to states from which some tasks can no longer be executed.
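
Typical use in a search loop (a sketch; frontier and expand are hypothetical):

for state in frontier:
    if not domain.all_tasks_possible(state):
        continue  # a non-renewable resource is exhausted: dead end, skip expansion
    expand(state)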

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
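
Illustrative use (apply_action is hypothetical):

can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    # resources_to_use maps resource names to the quantities the action would use
    apply_action(state, action, resources_to_use)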

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicate names across the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where the task can be performed by a single resource unit. In the general case, it might return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key the id of a mode, and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused, or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of execution the task in the mode.

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S' \mid s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O \mid s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the names of all skills as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task in the given mode.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and each value is a list of DiscreteDistributions over condition elements; each tuple in a distribution contains a conditionElement (first item in tuple) and the probability (second item in tuple) that it is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

return {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

return {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

return {
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

return {
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
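
Following the tip above, a sketch of the corresponding override when the value never reads next_state:

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    # our transition value only depends on (memory, action), so solvers
    # may skip computing the next state when evaluating it
    return False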

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
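
E.g., from a concrete domain's constructor (a sketch; the class name is made up):

class MySchedulingDomain(SchedulingDomain):
    def __init__(self):
        # ...set up tasks, modes and resources here...
        self.initialize_domain()  # required at instantiation time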

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and returns a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of the resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Choose whether the simulator modifies the given state in place or creates a copy first. The in-place version is several times faster but will lead to bugs in graph search solvers.
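
Illustrative usage (a sketch):

domain.set_inplace_environment(True)   # fast rollouts, e.g. for simulation-based solvers
domain.set_inplace_environment(False)  # safe copies, required for graph search solvers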

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time it was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of States. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Samples completion conditions for a given task and adds these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation given the state and action.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action a, this function represents: P(O|s, a), where O is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if no resource units match a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the multivariate Distribution of the duration of the given task in the given mode.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.
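
Below is a minimal sketch of the linear-progress case described here (an illustrative implementation assuming a strictly positive deterministic duration, not the library's actual code):

def _get_task_progress(self, task, t_from, t_to, mode, sampled_duration=None):
    # Linear progress: fraction of the (possibly sampled) duration elapsed between t_from and t_to.
    duration = sampled_duration if sampled_duration is not None else self.get_task_duration(task, mode)
    return (t_to - t_from) / duration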

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of the tasks, as a set, dict (with task ids as keys), or list of int.

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a Mode object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task to the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object (possibly an EmptyTimeWindow when the task is unconstrained).

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. an external simulator), it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource still available at time t and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Sample and return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeRCPSPWithCost

Multimode (classic) resource-constrained project scheduling problem template with cost based on modes. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • a set of non-renewable (consumable) resources
  • tasks having several modes of execution, each mode giving a deterministic resource consumption and duration

The goal is to minimize the overall cost, which is a function of the mode chosen for each task.
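
For instance, once a mode has been chosen for every task, the cost objective can be accumulated from the mode costs. A minimal sketch (the plan_cost helper and the modes mapping from task id to chosen mode id are hypothetical, not part of the API):

def plan_cost(domain, modes):
    # modes: hypothetical mapping {task_id: chosen_mode_id}
    mode_costs = domain.get_mode_costs()  # {task_id: {mode_id: cost}}
    return sum(mode_costs[task][mode] for task, mode in modes.items())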

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which a task can no longer be executed.
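
A solver-side sketch of how this check can be used to prune dead ends during search (the handling of the dead end is hypothetical):

if not domain.all_tasks_possible(state):
    # Some remaining task has no feasible mode left (e.g. a non-renewable
    # resource was exhausted): treat this state as a dead end.
    raise RuntimeError("scheduling problem unsolvable from this state")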

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
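
A usage sketch, assuming domain, state and a candidate start/resume action are available:

can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    # resources_to_use maps resource names to the quantities the action would use
    outcome = domain.step(action)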

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both resource types and resource units name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

For the common case where the task can be done by a single resource unit. In the general case, it may return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is a task id, the second key a mode id and the third key the name of a skill, matching the return type above. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state s), given an action a, this function can be mathematically represented by P(S'|s, a), where S' is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
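
A minimal sketch of an implementation in a user-defined domain (overriding the underscored variant, here minimizing the makespan only):

def _get_objectives(self):
    # SchedulingObjectiveEnum.COST could be added to optimize cost as well.
    return [SchedulingObjectiveEnum.MAKESPAN]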

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation given the state and action.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action a, this function represents: P(O|s, a), where O is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.
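
A minimal sketch for a hypothetical 4-task project where tasks 2 and 3 depend on task 1, and task 4 depends on both:

def _get_predecessors(self):
    return {1: [], 2: [1], 3: [1], 4: [2, 3]}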

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
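
A minimal sketch with hypothetical resource names:

def _get_resource_renewability(self):
    # 'worker' is released when a task finishes; 'budget' is consumed for good.
    return {"worker": True, "budget": False}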

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if no resource units match a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the names of all skills as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task
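
A hedged helper sketch (not part of the API, assuming skill details are numeric levels) combining this with get_skills_of_resource to test whether a resource unit covers a task's requirements:

def resource_covers_task(domain, resource, task, mode):
    required = domain.get_skills_of_task(task, mode)
    available = domain.get_skills_of_resource(resource)
    # The unit covers the task if it has every required skill at a sufficient level.
    return all(skill in available and available[skill] >= level
               for skill, level in required.items())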

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.
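
A minimal sketch for a hypothetical task whose duration follows a small discrete law (DiscreteDistribution takes (value, probability) pairs, as in the get_task_on_completion_added_conditions example below):

def _get_task_duration_distribution(self, task, mode=1, progress_from=0.0, multivariate_settings=None):
    # Hypothetical law: duration 5 with probability 0.5, 6 with 0.3, 8 with 0.2.
    return DiscreteDistribution([(5, 0.5), (6, 0.3), (8, 0.2)])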

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and each value is a list of DiscreteDistributions over condition elements. Each distribution is made of tuples containing a condition element (first item in tuple) and the probability (second item in tuple) that this element is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain keys for tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task to the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
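
Following the tip above, a minimal sketch of the override in a user-defined domain whose transition value never uses next_state:

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    # The value only depends on the source state and action, so solvers may
    # skip computing the next state when evaluating transition values.
    return False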

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
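
A constructor sketch for a hypothetical user-defined scheduling domain:

def __init__(self):
    # ... define tasks, modes and resources first (omitted) ...
    self.initialize_domain()  # required when instantiating a scheduling domain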

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. an external simulator), it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource still available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Activate or deactivate in-place modification of the given state by the simulator (as opposed to creating a copy first). The in-place version is several times faster but will lead to bugs in graph-search solvers.
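
A usage sketch:

domain.set_inplace_environment(False)  # copy states: safe for graph-search solvers
# domain.set_inplace_environment(True)   # in-place: faster, but unsafe for graph search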

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
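
A typical episode therefore alternates reset() and step() calls, e.g. (a usage sketch; domain and my_action stand for a concrete environment domain and an applicable action):

# Reset before the first step, then roll the environment forward
observation = domain.reset()
for _ in range(100):
    outcome = domain.step(my_action)
    if outcome.termination:  # assuming the outcome exposes its termination flag this way
        observation = domain.reset()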

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time they were completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time they were completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time they were completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for the given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids for all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.
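
A minimal sketch of such a helper, assuming two hypothetical initial states s0 and s1 and the DiscreteDistribution class used elsewhere in this module:

from skdecide.core import DiscreteDistribution

def _get_initial_state_distribution_(self):
    # start in s0 with probability 0.8, in s1 with probability 0.2
    return DiscreteDistribution([(s0, 0.8), (s1, 0.2)])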

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
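
An illustrative implementation (task ids, mode ids and costs are made up):

def _get_mode_costs(self):
    # task 1 has two modes; the faster mode 2 is more expensive
    return {1: {1: 5.0, 2: 8.0},
            2: {1: 3.0}}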

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
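
An illustrative implementation (resource names are made up):

def _get_resource_renewability(self):
    # workers are released after use; raw material is consumed for good
    return {'worker': True, 'raw_material': False}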

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.
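
The successor and predecessor dictionaries should describe the same precedence graph, e.g. for a chain 1 → 2 → 3 (an illustrative sketch):

def _get_successors(self):
    return {1: [2], 2: [3], 3: []}

def _get_predecessors(self):
    return {1: [], 2: [1], 3: [2]}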

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the Distribution of the duration of the given task in the given mode. Because the duration is deterministic, the distribution always returns the same duration.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of the tasks, as a set, dict or list of int.

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a Mode object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is an EmptyTimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary in time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeRCPSPCalendar

Multimode (classic) resource project scheduling problem template with cost based on modes. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with variable availability (capacity)
  • a set of non-renewable resources (consumable)
  • tasks having several modes of execution, each mode giving a deterministic resource consumption and duration

The goal is to minimize the overall makespan. A minimal subclass sketch is shown below.
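
A minimal sketch of a concrete instance, assuming the methods documented on this page are the ones to override (import paths are indicative, and a real subclass may need to implement more of the documented methods):

# Hedged sketch: a tiny two-task instance of MultiModeRCPSPCalendar
from skdecide.builders.domain.scheduling.scheduling_domains import (
    MultiModeRCPSPCalendar,
    SchedulingObjectiveEnum,
)

class MyRCPSP(MultiModeRCPSPCalendar):
    def _get_max_horizon(self):
        return 20

    def _get_objectives(self):
        return [SchedulingObjectiveEnum.MAKESPAN]

    def _get_tasks_ids(self):
        return {1, 2}

    def _get_successors(self):
        return {1: [2], 2: []}

    def _get_predecessors(self):
        return {1: [], 2: [1]}

    def _get_tasks_modes(self):
        # single mode per task, constant consumption of one 'worker'
        # (ConstantModeConsumption as documented under _get_tasks_modes;
        # its import path should be checked against the library)
        return {1: {1: ConstantModeConsumption({'worker': 1})},
                2: {1: ConstantModeConsumption({'worker': 1})}}

    def _get_task_duration(self, task, mode=1, progress_from=0.0):
        return {1: 3, 2: 5}[task]

    def _get_resource_types_names(self):
        return ['worker']

    def _get_quantity_resource(self, resource, time, **kwargs):
        return 2  # two workers available at every time step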

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for the given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if for each task there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument. Return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This is to cope with the use of non-renewable resources that may lead to states from which a task will not be possible anymore.
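
This makes it usable as a dead-end test during search, e.g.:

if not domain.all_tasks_possible(state):
    ...  # treat this state as a dead end and prune it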

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
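
Usage sketch (domain, state and action stand for a concrete domain, state and SchedulingAction):

can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    ...  # apply the action, allocating resources_to_use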

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

For the common case where the task can be done by a single resource unit. In the general case, it might return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is a mode id and the third key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids for all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.
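
A typical usage is to draw a successor from the returned distribution (a sketch; Distribution objects expose sampling as used by simulation domains):

next_state_distribution = domain.get_next_state_distribution(memory, action)
next_state = next_state_distribution.sample()  # draw one successor state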

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return all skill names as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.
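
Conceptually this amounts to a union of the skill keys found in both dictionaries; an equivalent sketch (not necessarily the library's actual implementation):

skills = set()
for resource_skills in domain.get_all_resources_skills().values():
    skills.update(resource_skills)  # skill name -> detail
for task_skills in domain.get_all_tasks_skills().values():
    for mode_skills in task_skills.values():  # assuming the per-mode nesting typed above
        skills.update(mode_skills)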

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and the value is a list of DiscreteDistributions. Each distribution is composed of tuples containing the probability (first item in tuple) that the condition element (second item in tuple) is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain keys for tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
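
# Example

A minimal sketch of the TIP above, for a domain whose transition value never reads next_state (illustrative):

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    # Lets solvers skip computing the next state when evaluating a transition value.
    return False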

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
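
# Example

A hypothetical subclass sketch (MyRCPSPDomain is illustrative; the call is typically placed in the constructor):

class MyRCPSPDomain(SchedulingDomain):
    def __init__(self):
        self.initialize_domain()  # required before using the domain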

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy first. The in-place version is several times faster but will lead to bugs in graph-search solvers.
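
# Example

A usage sketch (domain is an illustrative SchedulingDomain instance):

domain.set_inplace_environment(True)   # fast rollouts: states are mutated in place
domain.set_inplace_environment(False)  # states are copied: needed by graph-search solvers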

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
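
# Example

A short rollout sketch combining reset() and step(); my_policy is a hypothetical callable mapping observations to actions:

observation = domain.reset()
for _ in range(100):
    outcome = domain.step(my_policy(observation))
    observation = outcome.observation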

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which they were completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which they were completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time at which they were completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given action is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given action is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given action is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
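
# Example

A minimal override sketch using the documented enum:

def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
    return [SchedulingObjectiveEnum.MAKESPAN]  # minimize the overall makespan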

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation corresponding to the given state and action.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $p(o \mid s, a)$, where $o$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).
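
# Example

A minimal override sketch (the resource names are illustrative):

def _get_resource_renewability(self) -> dict[str, bool]:
    return {'worker': True, 'budget': False}  # 'worker' is released after use, 'budget' is consumed for good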

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.
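
# Example

A minimal override sketch (the duration table is illustrative; mode and progress_from are ignored here):

def _get_task_duration(self, task, mode=1, progress_from=0.0):
    durations = {1: 0, 2: 5, 3: 3}  # task id -> duration; source/sink tasks last 0
    return durations[task]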

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the Distribution of the duration of the given task in the given mode. Because the duration is deterministic, the distribution always returns the same duration.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [
        get_all_condition_items().NC_PART_1_OPERATION_1,
        get_all_condition_items().NC_PART_1_OPERATION_2,
    ],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is a bool indicating whether the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return a set, dict or list of task ids (int).

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time-varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3],
                                   'rt_2': [0, 0, 0, 0, 0],
                                   'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2],
                                   'rt_2': [0, 0, 0, 0, 0, 0, 0],
                                   'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object; this default implementation returns an EmptyTimeWindow for each task.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeRCPSPCalendar_Stochastic_Durations

Multimode (classic) resource project scheduling problem template. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with variable availability (capacity)
  • a set of non-renewable (consumable) resources
  • tasks having several modes of execution, each mode giving a deterministic resource consumption and a stochastic duration

The goal is to minimize the overall makespan.

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if for each task there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This is to cope with the use of non-renewable resources, which may lead to states from which a task will not be possible anymore.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

For the common case when it is possible to do the task with one resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation corresponding to the given state and action.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $p(o \mid s, a)$, where $o$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.
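
# Example

A minimal sketch with hypothetical resource names, showing how the resource type and resource unit accessors fit together:

domain.get_resource_types_names()    # ['employee', 'machine']
domain.get_resource_units_names()    # ['alice', 'bob', 'machine_1']
domain.get_resource_type_for_unit()  # {'alice': 'employee', 'bob': 'employee', 'machine_1': 'machine'}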

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the names of all skills as a set of str. Skill names are defined in the 2 dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task
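
# Example

A minimal sketch with hypothetical skill names: a resource can handle a task in a given mode only if its skills cover the task's requirements.

domain.get_skills_of_resource('alice')  # {'welding': 2, 'driving': 1}
domain.get_skills_of_task(12, 1)        # {'welding': 1} -> 'alice' qualifies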

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.
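
# Example

A minimal sketch of a possible implementation of the underlying _get_task_duration_distribution(), assuming a domain whose durations ignore progress_from and multivariate_settings (durations are hypothetical):

from skdecide.core import DiscreteDistribution

def _get_task_duration_distribution(self, task, mode=1, progress_from=0.0, multivariate_settings=None):
    # the task lasts 5 time units with probability 0.8, 7 with probability 0.2
    return DiscreteDistribution([(5, 0.8), (7, 0.2)])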

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and the value is a list of distributions (e.g. DiscreteDistribution). Each distribution is composed of tuples, each containing a condition element (first item in the tuple) and the probability (second item in the tuple) that this condition is True. The probabilities within each distribution should sum up to 1. The dictionary should only contain entries for tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
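
# Example

A minimal sketch of the override suggested in the tip above, for a domain whose transition value never depends on next_state:

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    return False  # lets solvers evaluate transition values without computing next_state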

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
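
# Example

A minimal sketch of the intended call site, in the constructor of a hypothetical domain subclass:

class MySchedulingDomain(SchedulingDomain):
    def __init__(self):
        self.initialize_domain()  # must be called when instantiating the domain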

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.
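
# Example

A minimal sketch of the wrapping pattern suggested in the tip above; the external simulator and the conversion helper are hypothetical:

def sample(self, memory, action):
    # call the external simulator directly instead of the Simulation._sample() helper
    raw_result = my_external_simulator.run(memory, action)
    return to_environment_outcome(raw_result)  # build an EnvironmentOutcome from the raw result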

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy beforehand. The in-place version is several times faster but will lead to bugs in graph search solvers.
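
# Example

A sketch of the intended usage:

domain.set_inplace_environment(True)   # fast in-place transitions, e.g. for simulation-based rollouts
domain.set_inplace_environment(False)  # copy states, required by graph search solvers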

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
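
# Example

A minimal sketch of an episode loop; my_policy is a hypothetical callable mapping observations to actions:

observation = domain.reset()  # must be called before the first step
for _ in range(100):
    outcome = domain.step(my_policy(observation))
    observation = outcome.observation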

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time it was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the id of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is, tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key is the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.
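
# Example

Under the linear-progress assumption stated above, the computation amounts to (a sketch, with sampled_duration standing in for the task duration):

progress = (t_to - t_from) / sampled_duration  # fraction of the task completed between t_from and t_to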

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of the tasks, as a set, dict or list of int.

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the consumption of resources does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying univariate distribution.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeMultiSkillRCPSP

Multimode multiskill resource project scheduling problem template. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • resources that can be unitary and have skills
  • a set of non-renewable (consumable) resources
  • tasks having several modes of execution, each mode giving a deterministic resource consumption, a deterministic duration and the skills needed

The goal is to minimize the overall makespan, allocating unitary resources to tasks while fulfilling the skill requirements.

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument. Return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which a task will no longer be possible.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
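
# Example

A minimal sketch of the intended usage (my_state and my_action are hypothetical):

can_start, resources_to_use = domain.check_if_action_can_be_started(my_state, my_action)
if can_start:
    outcome = domain.step(my_action)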

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Covers the common case where the task can be done by a single resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set contains the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
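
# Example

A hypothetical override sketch returning static mode costs (MyDomain is illustrative and partial; the import path is assumed from this module's name):

from skdecide.builders.domain.scheduling.scheduling_domains import SchedulingDomain

class MyDomain(SchedulingDomain):  # partial sketch, other required methods omitted
    def _get_mode_costs(self) -> dict[int, dict[int, float]]:
        return {
            1: {1: 10.0, 2: 15.0},  # task 1: mode 1 costs 10.0, mode 2 costs 15.0
            2: {1: 5.0},            # task 2: a single mode costing 5.0
        }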

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.
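
# Example

A sketch of drawing one successor state from the returned distribution via its sample() method (domain, memory and action are assumed instances):

def sample_next_state(domain, memory, action):
    # Draw one next state according to the distribution of S' given s and a.
    return domain.get_next_state_distribution(memory, action).sample()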

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
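
# Example

A hypothetical override sketch selecting the makespan objective (the import path is assumed from this module's name):

from skdecide.builders.domain.scheduling.scheduling_domains import (
    SchedulingDomain,
    SchedulingObjectiveEnum,
)

class MyDomain(SchedulingDomain):  # partial sketch, other required methods omitted
    def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
        return [SchedulingObjectiveEnum.MAKESPAN]  # minimize the makespan only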

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation given the state and action.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the set of all skill names (str). Skill names are those defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.
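
# Example

A sketch of matching overrides for a three-task chain 1 -> 2 -> 3 (MyDomain is illustrative and partial; the import path is assumed):

from skdecide.builders.domain.scheduling.scheduling_domains import SchedulingDomain

class MyDomain(SchedulingDomain):  # partial sketch, other required methods omitted
    def _get_successors(self) -> dict[int, list[int]]:
        return {1: [2], 2: [3], 3: []}

    def _get_predecessors(self) -> dict[int, list[int]]:
        return {1: [], 2: [1], 3: [2]}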

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.
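
# Example

A hypothetical override sketch with fixed durations looked up per task and mode (MyDomain is illustrative and partial; the import path is assumed):

from skdecide.builders.domain.scheduling.scheduling_domains import SchedulingDomain

class MyDomain(SchedulingDomain):  # partial sketch, other required methods omitted
    DURATIONS = {1: {1: 4}, 2: {1: 6, 2: 3}}  # task id -> mode id -> duration

    def _get_task_duration(self, task: int, mode: int = 1, progress_from: float = 0.0) -> int:
        return self.DURATIONS[task][mode]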

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and the value is a list of DiscreteDistributions, one per condition that completing the task may create. Each DiscreteDistribution is defined by tuples containing a condition element and the probability that this element is set to True; the probabilities within a distribution should sum up to 1. The dictionary should only contain keys for tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool, indicating whether the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating whether the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating whether the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
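
# Example

A sketch of how a solver can exploit the tip above, for a domain with deterministic transitions (domain, memory and action are assumed instances):

def transition_cost(domain, memory, action):
    if domain.is_transition_value_dependent_on_next_state():
        next_state = domain.get_next_state(memory, action)
        return domain.get_transition_value(memory, action, next_state)
    # Cheaper path: no need to compute the next state at all.
    return domain.get_transition_value(memory, action)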

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
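
# Example

A sketch of the expected call site, in the constructor of a hypothetical subclass (the import path is assumed):

from skdecide.builders.domain.scheduling.scheduling_domains import SchedulingDomain

class MyDomain(SchedulingDomain):  # partial sketch, other required methods omitted
    def __init__(self):
        super().__init__()
        # Must be called when instantiating a scheduling domain.
        self.initialize_domain()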

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy beforehand. The in-place version is several times faster but will lead to bugs in graph search solvers.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
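
# Example

A sketch of a rollout loop built on reset() and step(), assuming a single-agent domain where outcome.termination is a plain boolean (policy is a hypothetical callable mapping observations to actions):

def rollout(domain, policy):
    observation = domain.reset()
    for _ in range(domain.get_max_horizon()):
        outcome = domain.step(policy(observation))
        observation = outcome.observation
        if outcome.termination:  # stop once a terminal state is reached
            break
    return observation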

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time at which each task was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is the id of a mode and the third key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the instanceof usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set contains the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation given the state and action.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the Distribution of the duration of the given task in the given mode. Because the duration is deterministic, the distribution always returns the same duration.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool, indicating whether the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating whether the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating whether the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks, as a set, dict (whose keys are the ids), or list of int.

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time-varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}
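
# Example

A hypothetical override sketch combining _get_tasks_ids and _get_tasks_modes for a two-task domain with constant consumption (the import paths are assumed):

from skdecide.builders.domain.scheduling.modes import (
    ConstantModeConsumption,
    ModeConsumption,
)
from skdecide.builders.domain.scheduling.scheduling_domains import SchedulingDomain

class MyDomain(SchedulingDomain):  # partial sketch, other required methods omitted
    def _get_tasks_ids(self) -> set[int]:
        return {1, 2}

    def _get_tasks_modes(self) -> dict[int, dict[int, ModeConsumption]]:
        return {
            1: {1: ConstantModeConsumption({'rt_1': 2})},  # task 1, single mode
            2: {1: ConstantModeConsumption({'rt_1': 1})},  # task 2, single mode
        }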

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by calls to a simulator). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeMultiSkillRCPSPCalendar

Multi-mode, multi-skill resource-constrained project scheduling problem with resource variability (template). It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with variable availability
  • resources that can be unitary and have skills
  • a set of non-renewable (consumable) resources
  • tasks having several modes of execution, each mode giving a deterministic resource consumption, a deterministic duration and the skills needed

The goal is to minimize the overall makespan while allocating unit resources to tasks so that the skill requirements are fulfilled.

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which some task can no longer be executed.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
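For instance, a solver can test feasibility before committing to an action. A minimal usage sketch (the state and action objects are assumed to have been obtained elsewhere):

# Hedged sketch: state and action are assumed to exist already
can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    outcome = domain.step(action)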

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.
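A sketch of the kind of check this performs, expressed with the documented name accessors (not necessarily the library's exact implementation):

# All resource names, across types and units
names = domain.get_resource_types_names() + domain.get_resource_units_names()
# True when no name appears twice
no_duplicates = len(names) == len(set(names))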

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where the task can be done by one resource unit alone. In the general case, it might just return no possible resource unit.
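As an illustration, one way such a lookup could work, using the documented skill accessors (a sketch only, with hypothetical task/mode ids; the library may use a different strategy):

# Resource units whose skill set covers all skills required by task 1 in mode 1
required = domain.get_skills_of_task(task=1, mode=1)       # {skill: detail}
candidates = [
    unit
    for unit, skills in domain.get_all_resources_skills().items()
    if all(skill in skills for skill in required)
]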

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is a mode id and the inner key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).
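One way to derive this set from the other documented accessors, assuming the public get_tasks_ids() counterpart of _get_tasks_ids() (a sketch, not necessarily the internal implementation):

# Tasks with at least one existence condition
conditioned = set(domain.get_task_existence_conditions())
# Everything else is unconditional
unconditional = set(domain.get_tasks_ids()) - conditioned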

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
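For example, an override could look like this (task and mode ids and costs are made up):

def _get_mode_costs(self) -> dict[int, dict[int, float]]:
    # {task_id: {mode_id: cost}}
    return {
        1: {1: 0.0},
        2: {1: 5.0, 2: 8.0},  # task 2 is cheaper in mode 1 than in mode 2
    }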

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(S'|s, a)$, where $S'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.
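A sketch of what such a distribution looks like, built with skdecide.core.DiscreteDistribution (the states here are hypothetical placeholders):

from skdecide.core import DiscreteDistribution

s1, s2 = "state_A", "state_B"  # hypothetical placeholder states
# 70% chance of reaching s1, 30% chance of reaching s2
next_state_distribution = DiscreteDistribution([(s1, 0.7), (s2, 0.3)])
print(next_state_distribution.sample())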

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
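For instance, a pure makespan-minimizing domain would override the underscored variant as follows (a minimal sketch):

from skdecide.builders.domain.scheduling.scheduling_domains import SchedulingObjectiveEnum

def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
    return [SchedulingObjectiveEnum.MAKESPAN]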

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.
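A hedged override sketch for a calendar-style availability (resource names and numbers are made up):

def _get_quantity_resource(self, resource: str, time: int, **kwargs) -> int:
    # 3 workers available on weekdays, none at the weekend; budget is constant
    if resource == "worker":
        return 3 if time % 7 < 5 else 0
    return 100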

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
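For example (resource names are made up):

def _get_resource_renewability(self) -> dict[str, bool]:
    # workers become available again each time step; budget is consumed for good
    return {"worker": True, "budget": False}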

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if no resource unit matches a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the set of all skill names (as a set of str). Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.
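A sketch of how this set can be assembled from the two documented dictionaries (not necessarily the internal implementation):

names: set[str] = set()
for skills in domain.get_all_resources_skills().values():
    names.update(skills)              # {resource: {skill: detail}}
for modes in domain.get_all_tasks_skills().values():
    for skills in modes.values():     # {task: {mode: {skill: detail}}}
        names.update(skills)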

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration DeterministicTaskDuration

get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_duration_lower_bound UncertainBoundedTaskDuration

get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# get_task_duration_upper_bound UncertainBoundedTaskDuration

get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1,
         get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and each list contains distributions over (condition element, probability) pairs: each pair gives a condition element (first item in the tuple) and the probability (second item in the tuple) that it becomes True. The probabilities in each distribution should sum up to 1. The dictionary should only contain keys for tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1),
                              (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05),
                              (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped.

E.g.

return {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start).

E.g.

return {
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

return {
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors.

E.g.

return {
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
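A minimal sketch of a concrete domain honouring this requirement (the class name and setup details are hypothetical):

class MyRCPSPDomain(MultiModeMultiSkillRCPSPCalendar):
    def __init__(self):
        # ... define tasks, modes, resources, skills here ...
        self.initialize_domain()  # mandatory at instantiation time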

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Set whether the simulator modifies the given state in place or creates a copy first. The in-place version is several times faster but will lead to bugs in graph search solvers.
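Typical usage (a sketch):

# Copy states at each transition: safe for graph search solvers, but slower
domain.set_inplace_environment(False)

# Modify states in place: several times faster, for pure simulation rollouts only
domain.set_inplace_environment(True)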

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time they were completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time they were completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time they were completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulation scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulation scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of States. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is a mode id and the inner key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if no resource unit matches a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration DeterministicTaskDuration

_get_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the fixed deterministic task duration of the given task in the given mode.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
)

Return the Distribution of the duration of the given task in the given mode. Because the duration is deterministic, the distribution always returns the same duration.

# _get_task_duration_lower_bound UncertainBoundedTaskDuration

_get_task_duration_lower_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the lower bound for the task duration of the given task in the given mode.

# _get_task_duration_upper_bound UncertainBoundedTaskDuration

_get_task_duration_upper_bound(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return the upper bound for the task duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1,
         get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True).

E.g.

return {
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped.

E.g.

return {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType, indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start).

E.g.

return {
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks, as a set, a dict keyed by task id, or a list of int.
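
A minimal sketch (hypothetical task ids):

```python
def _get_tasks_ids(self):
    # A plain set of integer task ids is the simplest valid return value
    return {1, 2, 3, 4}
```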

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

    {
        12: {
            1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
            2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
        }
    }

E.g. with time varying resource consumption:

    {
        12: {
            1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3],
                                       'rt_2': [0, 0, 0, 0, 0],
                                       'ru_1': [1, 1, 1, 1, 1]}),
            2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2],
                                       'rt_2': [0, 0, 0, 0, 0, 0, 0],
                                       'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
        }
    }

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.
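
A minimal sketch; the resource names are borrowed from the mode-consumption examples above, and the nominal capacities and fluctuation model are made up:

```python
import random

def _sample_quantity_resource(self, resource, time, **kwargs):
    # Hypothetical nominal capacities per resource
    nominal = {"rt_1": 3, "rt_2": 2, "ru_1": 1}[resource]
    # Hypothetical: availability randomly drops by at most one unit
    return max(0, nominal - random.randint(0, 1))
```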

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

    # Set simulation_domain memory to my_state (assuming Markovian domain)
    simulation_domain._set_memory(my_state)

    # Start a 100-step rollout from here (applying my_action at every step)
    for _ in range(100):
        simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by calls to a simulator). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# MultiModeRCPSP_Stochastic_Durations

Multimode Resource-Constrained Project Scheduling Problem (RCPSP) with stochastic durations template. It consists of:

  • a scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • a set of non-renewable (consumable) resources
  • tasks having several modes of execution, each mode giving a deterministic resource consumption and a stochastic duration

The goal is to minimize the overall expected makespan (see the subclass sketch below).
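
A minimal subclass sketch under stated assumptions: the project data is made up, the import paths in the comments are assumptions, and the exact set of methods a concrete domain must override may differ.

```python
# Assumed imports (actual paths may differ):
# from skdecide.builders.domain.scheduling.scheduling_domains import (
#     MultiModeRCPSP_Stochastic_Durations, SchedulingObjectiveEnum)
# from skdecide.builders.domain.scheduling.modes import ConstantModeConsumption
# from skdecide.core import DiscreteDistribution


class MyStochasticRCPSP(MultiModeRCPSP_Stochastic_Durations):
    def _get_max_horizon(self):
        return 50  # hypothetical horizon

    def _get_objectives(self):
        return [SchedulingObjectiveEnum.MAKESPAN]

    def _get_tasks_ids(self):
        return {1, 2, 3}

    def _get_successors(self):
        return {1: [2], 2: [3], 3: []}

    def _get_tasks_modes(self):
        # One mode per task with constant resource consumption (hypothetical)
        return {
            t: {1: ConstantModeConsumption({"rt_1": 1})}
            for t in self._get_tasks_ids()
        }

    def _get_resource_types_names(self):
        return ["rt_1"]

    def _get_original_quantity_resource(self, resource, **kwargs):
        return 2  # hypothetical constant capacity

    def _get_task_duration_distribution(
        self, task, mode=1, progress_from=0.0, multivariate_settings=None
    ):
        # Hypothetical stochastic durations: 4 or 6 time units
        return DiscreteDistribution([(4, 0.5), (6, 0.5)])
```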

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which some task can no longer be executed.
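
A sketch of how a rollout might use this check to detect dead ends early (the domain and state objects are assumed to exist):

```python
# Hypothetical guard inside a rollout loop: with non-renewable resources,
# a state can become a dead end from which some task is impossible
if not domain.all_tasks_possible(state):
    print("Dead end: at least one task can no longer be executed")
```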

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
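
A sketch of the intended usage (the domain, state, and action objects are assumed to exist):

```python
# Returns whether the action can start plus the resources it would use
can_start, resource_usage = domain.check_if_action_can_be_started(state, action)
if can_start:
    # resource_usage maps resource names to the amounts that would be used
    print(resource_usage)
```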

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both resource types and resource units name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where the task can be done by a single resource unit. In the general case, this may return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

    class ConditionElementsExample(Enum):
        OK = 0
        NC_PART_1_OPERATION_1 = 1
        NC_PART_1_OPERATION_2 = 2
        NC_PART_2_OPERATION_1 = 3
        NC_PART_2_OPERATION_2 = 4
        HARDWARE_ISSUE_MACHINE_A = 5
        HARDWARE_ISSUE_MACHINE_B = 6

    return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key the id of a mode, and the third key the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}
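
A minimal sketch of a matching pair of overrides; the resource names, skill names, and levels are made up, and the nesting follows the get_all_tasks_skills signature above:

```python
def _get_all_resources_skills(self):
    # Hypothetical: skill levels per resource unit
    return {
        "worker_1": {"welding": 2},
        "worker_2": {"welding": 1, "painting": 1},
    }

def _get_all_tasks_skills(self):
    # Hypothetical: task 1, mode 1 requires welding level 2
    return {1: {1: {"welding": 2}}}
```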

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids for all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
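
A minimal sketch (hypothetical costs):

```python
def _get_mode_costs(self):
    # Hypothetical: mode 2 of task 1 is faster but more expensive
    return {1: {1: 10.0, 2: 25.0}, 2: {1: 12.0}}
```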

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
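
A minimal sketch (the choice of objectives is up to the domain):

```python
def _get_objectives(self):
    # Hypothetical choice: minimize both resource cost and makespan
    return [SchedulingObjectiveEnum.COST, SchedulingObjectiveEnum.MAKESPAN]
```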

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents $P(O \mid s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks, as a dictionary mapping each task id to the list of its predecessors' ids.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.
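
A minimal sketch of the three related overrides; the resource names and the unit-to-type mapping are made up, and the underscore counterparts follow the same delegation pattern as the other entries on this page:

```python
def _get_resource_types_names(self):
    # Pooled resources, tracked by quantity (hypothetical names)
    return ["rt_1", "rt_2"]

def _get_resource_units_names(self):
    # Individually tracked resources (hypothetical names)
    return ["machine_a", "machine_b"]

def _get_resource_type_for_unit(self):
    # Hypothetical: both machines are units of resource type rt_1
    return {"machine_a": "rt_1", "machine_b": "rt_1"}
```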

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return all skill names as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks, as a dictionary mapping each task id to the list of its successors' ids.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

    return {
        20: [get_all_condition_items().NC_PART_1_OPERATION_1],
        21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
        22: [
            get_all_condition_items().NC_PART_1_OPERATION_1,
            get_all_condition_items().NC_PART_1_OPERATION_2,
        ],
    }

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and the value a list of Distribution objects, each composed of tuples. Each tuple contains the probability (first item in the tuple) that the condition element (second item in the tuple) is True. The probabilities within a distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

    return {
        12: [
            DiscreteDistribution(
                [
                    (ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1),
                    (ConditionElementsExample.OK, 0.9),
                ]
            ),
            DiscreteDistribution(
                [
                    (ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05),
                    (ConditionElementsExample.OK, 0.95),
                ]
            ),
        ]
    }

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

    {
        2: False,  # if paused, non-renewable resources will be consumed
        5: True,   # if paused, non-renewable resources will be available again
    }

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

    {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

    {
        1: ResumeType.NA,
        2: ResumeType.Resume,
        3: ResumeType.NA,
        4: ResumeType.NA,
        5: ResumeType.Restart,
        6: ResumeType.NA,
    }

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that needs to separate the end of the first task from the start of the second task.

E.g.

    {
        12: {
            15: TimeLag(5, 10),
            16: TimeLag(5, 20),
            17: MinimumOnlyTimeLag(5),
            18: MaximumOnlyTimeLag(15),
        }
    }

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

    {
        1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
        2: EmptyTimeWindow(self.get_max_horizon()),
        3: EndTimeWindow(20, 25, self.get_max_horizon()),
        4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
    }

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Specify whether the simulator modifies the given state in place (True) or creates a copy first (False). The in-place version is several times faster but will lead to bugs in graph-search solvers.
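
A sketch of the trade-off (the domain object is assumed to exist):

```python
# Faster, in-place state updates: fine for sampling-based rollouts
domain.set_inplace_environment(True)

# Copy states before modifying them: required for graph-search solvers
domain.set_inplace_environment(False)
```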

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

    # Set simulation_domain memory to my_state (assuming Markovian domain)
    simulation_domain.set_memory(my_state)

    # Start a 100-step rollout from here (applying my_action at every step)
    for _ in range(100):
        simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time it was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of States. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the id of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids for all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
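
For example, a makespan-minimizing domain would return a single-item list (a sketch):

```python
# Hypothetical override inside a concrete SchedulingDomain subclass.
def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
    # Add SchedulingObjectiveEnum.COST to also account for resource costs.
    return [SchedulingObjectiveEnum.MAKESPAN]
```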

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation corresponding to the given state and action.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.
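
A sketch with three illustrative tasks, where task 3 requires tasks 1 and 2 to be finished first:

```python
# Hypothetical override inside a domain mixing in WithPrecedence.
def _get_predecessors(self) -> dict[int, list[int]]:
    return {
        1: [],      # no predecessor
        2: [],      # no predecessor
        3: [1, 2],  # task 3 waits for tasks 1 and 2
    }
```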

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
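
A sketch with illustrative resource names:

```python
# Hypothetical override inside a domain mixing in MixedRenewable.
def _get_resource_renewability(self) -> dict[str, bool]:
    return {
        "worker": True,         # released when a task completes
        "raw_material": False,  # consumed for good
    }
```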

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.
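
A sketch combining the resource naming methods above (all names are illustrative):

```python
# Hypothetical overrides inside a domain mixing in
# WithResourceTypes and WithResourceUnits.
def _get_resource_types_names(self) -> list[str]:
    return ["worker", "machine"]       # pools of interchangeable resources

def _get_resource_units_names(self) -> list[str]:
    return ["machine_a", "machine_b"]  # individually identified resources

def _get_resource_type_for_unit(self) -> dict[str, str]:
    return {"machine_a": "machine", "machine_b": "machine"}
```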

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the univariate Distribution of the duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. { 1: False, 2: True, 3: False, 4: False, 5: True, 6: False }

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. { 1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA }

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of the tasks, as a set, dict, or list of int.

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption:

{
    12: {
        1: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}),
        2: ConstantModeConsumption({'rt_1': 0, 'rt_2': 3, 'ru_1': 1}),
    }
}

E.g. with time varying resource consumption:

{
    12: {
        1: VaryingModeConsumption({'rt_1': [2, 2, 2, 2, 3], 'rt_2': [0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1]}),
        2: VaryingModeConsumption({'rt_1': [1, 1, 1, 1, 2, 2, 2], 'rt_2': [0, 0, 0, 0, 0, 0, 0], 'ru_1': [1, 1, 1, 1, 1, 1, 1]}),
    }
}

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.
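
A sketch using the TimeLag class named above (task ids are illustrative):

```python
# Hypothetical override inside a domain mixing in WithTimeLag.
def _get_time_lags(self) -> dict[int, dict[int, TimeLag]]:
    # Task 5 must start between 2 and 10 time units after task 4 ends.
    return {4: {5: TimeLag(2, 10)}}
```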

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying univariate distribution.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# SingleModeRCPSP_Stochastic_Durations

Resource-constrained project scheduling problem template. It consists of:

  • a scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • tasks with deterministic resource consumption and stochastic durations

The goal is to minimize the overall expected makespan; a partial subclass sketch follows.
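
A minimal shape sketch of a subclass, assuming the import path matches this page's module name; a real domain must implement all remaining abstract methods documented below, so this illustrates the structure rather than a complete domain:

```python
from skdecide.builders.domain.scheduling.scheduling_domains import (
    SingleModeRCPSP_Stochastic_Durations,
)
from skdecide.core import DiscreteDistribution, Distribution


class MyStochasticRCPSP(SingleModeRCPSP_Stochastic_Durations):
    def __init__(self):
        super().__init__()
        self.initialize_domain()  # must be called when instantiating a scheduling domain

    def _get_max_horizon(self) -> int:
        return 50

    def _get_tasks_ids(self) -> set[int]:
        return {1, 2, 3}

    def _get_predecessors(self) -> dict[int, list[int]]:
        return {1: [], 2: [1], 3: [2]}

    def _get_task_duration_distribution(
        self, task, mode=1, progress_from=0.0, multivariate_settings=None
    ) -> Distribution:
        # Every task lasts 4 or 6 time units with equal probability (illustrative).
        return DiscreteDistribution([(4, 0.5), (6, 0.5)])

    # ... resource types, consumption modes, etc. omitted for brevity
```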

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if for each task there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument. Return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This is to cope with the use of non-renewable resources that may lead to states from which a task can no longer be executed.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
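
A usage sketch, with domain, my_state, and my_action standing in for an instantiated scheduling domain, a State, and a SchedulingAction:

```python
# Hypothetical objects for illustration.
can_start, resources_to_use = domain.check_if_action_can_be_started(my_state, my_action)
if can_start:
    # resources_to_use maps resource names to the quantity used, e.g. {"worker": 2}.
    print("action can start, using:", resources_to_use)
```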

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicate resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Cover the common case where the task can be done by a single resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain all ids for all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started / resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key is the id of a mode, and the value indicates the cost of executing the task in that mode.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.
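
A usage sketch (my_state and my_action are hypothetical; in a Markovian domain the state itself can be passed where a memory is expected):

```python
dist = domain.get_next_state_distribution(my_state, my_action)
next_state = dist.sample()  # draw one possible successor state
```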

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation corresponding to the given state and action.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the names of all skills as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task.
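
A sketch tying the skills methods together (task, mode, resource, and skill names are illustrative):

```python
# Hypothetical domain where task 7 in mode 1 needs the "welding" skill
# and resource unit "worker_a" provides it.
required = domain.get_skills_of_task(task=7, mode=1)          # e.g. {"welding": 2}
offered = domain.get_skills_of_resource(resource="worker_a")  # e.g. {"welding": 3}
candidates = domain.find_one_ressource_to_do_one_task(task=7, mode=1)
```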

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is a task id and the value is a list of DiscreteDistribution objects, each composed of tuples pairing a condition element with the probability that this condition element is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. { 1: False, 2: True, 3: False, 4: False, 5: True, 6: False }

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. { 1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA }

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task, e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the quantity of this resource available at time t and the quantity of this resource consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying multivariate distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Activate or deactivate in-place modification of the given state by the simulator (as opposed to creating a copy of the state beforehand). The in-place version is several times faster but will lead to bugs in graph search solvers.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
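
A rollout sketch combining reset() and step(); my_domain and choose_action are hypothetical, and the outcome fields follow the EnvironmentOutcome type shown in the signature:

```python
observation = my_domain.reset()  # must be called before the first step()
for _ in range(100):
    action = choose_action(observation)  # hypothetical policy
    outcome = my_domain.step(action)
    observation = outcome.observation
    # For a single-agent domain, termination is assumed to be a plain bool here.
    if outcome.termination:  # end of episode: reset before stepping again
        observation = my_domain.reset()
```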

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time it was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key is the id of a mode, and the value indicates the cost of executing the task in that mode.
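
For instance (illustrative values only), a domain with two tasks, each having two modes, could return:

return {
    1: {1: 5.0, 2: 7.5},  # task 1: cost per mode
    2: {1: 3.0, 2: 4.0},  # task 2: cost per mode
}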

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource unit matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, the non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.
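
As a rough sketch of what "linear progress" means here (illustrative only: the duration is assumed to be known or already sampled, and get_task_duration is used here purely as an example accessor):

# Sketch: linear progress between t_from and t_to.
def _get_task_progress(self, task, t_from, t_to, mode=None, sampled_duration=None):
    duration = (
        sampled_duration
        if sampled_duration is not None
        else self.get_task_duration(task, mode)  # example accessor, assumption
    )
    if duration == 0:
        return 1.0  # zero-duration (dummy) tasks are immediately complete
    return min(1.0, float(t_to - t_from) / duration)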

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks, as a set, dict (keyed by task id) or list of int.

# _get_tasks_mode SingleMode

_get_tasks_mode(
  self
) -> dict[int, ModeConsumption]

Return a dictionary where the key is a task id and the value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption { 12: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}) }

E.g. with time varying resource consumption { 12: VaryingModeConsumption({'rt_1': [2,2,2,2,3], 'rt_2': [0,0,0,0,0], 'ru_1': [1,1,1,1,1]}) }

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption.

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM times (int) that need to separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.
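
For example (a minimal sketch: self._nominal_capacity is a hypothetical attribute of the example domain, not part of the API), availability could fluctuate around a nominal capacity:

import random  # for the illustrative stochastic sampling below

# Sketch: availability fluctuates around a nominal capacity.
def _sample_quantity_resource(self, resource: str, time: int, **kwargs) -> int:
    nominal = self._nominal_capacity[resource]  # hypothetical attribute
    outage = 1 if random.random() < 0.1 else 0  # 10% chance one unit is unavailable
    return max(0, nominal - outage)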

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying univariate distribution.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# SingleModeRCPSP_Stochastic_Durations_WithConditionalTasks

Resource project scheduling problem template with stochastic durations and conditional tasks. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • tasks with deterministic resource consumption and stochastic durations given as distributions
  • optional tasks whose execution depends on the sampled durations of other tasks

The goal is to minimize the overall expected makespan. A minimal subclass sketch is shown below.
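
Below is a deliberately incomplete sketch of such a subclass. The problem data is made up, and a concrete domain must implement the full set of abstract methods of this template (modes, objectives, conditional-task definitions, ...) documented on this page:

from skdecide.core import DiscreteDistribution
from skdecide.builders.domain.scheduling.scheduling_domains import (
    SingleModeRCPSP_Stochastic_Durations_WithConditionalTasks,
)

class MyStochasticRCPSP(SingleModeRCPSP_Stochastic_Durations_WithConditionalTasks):
    def _get_max_horizon(self) -> int:
        return 50

    def _get_tasks_ids(self):
        return {1, 2, 3}

    def _get_successors(self):
        return {1: [2, 3], 2: [], 3: []}

    def _get_resource_types_names(self):
        return ["worker"]

    def _get_original_quantity_resource(self, resource, **kwargs):
        return 2  # two workers available at all times

    def _get_task_duration_distribution(
        self, task, mode=1, progress_from=0.0, multivariate_settings=None
    ):
        # stochastic duration: 3 or 5 time units with equal probability
        return DiscreteDistribution([(3, 0.5), (5, 0.5)])

    # ... remaining abstract methods omitted for brevity ...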

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to states from which a task can no longer be executed.

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
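
For example (a sketch; domain, state and action stand for an instantiated scheduling domain, a current State and a candidate SchedulingAction):

can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    print("feasible, would consume:", resources_to_use)  # e.g. {'worker': 2}
else:
    print("action cannot be started in this state")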

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both the resource type and resource unit name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where the task can be done by a single resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the id of a task, the second key is a mode id and the third key is the name of a skill. The value defines the details of the skill. E.g. {task: {mode: {skill: (detail of skill)}}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int)

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key is the id of a mode, and the value indicates the cost of executing the task in that mode.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

Get the probability distribution of next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The probability distribution of next state.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
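
A makespan-minimizing domain would typically implement the underlying helper as a one-liner (a sketch, following the public/underscore convention used throughout this page):

def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
    return [SchedulingObjectiveEnum.MAKESPAN]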

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str)

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.
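
For example (illustrative), a chain of tasks 1 → 2 → 3 would give:

return {1: [], 2: [1], 3: [2]}  # task 2 requires task 1, task 3 requires task 2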

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource unit matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return the names of all skills as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.
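
A sketch of that aggregation, assuming the nested shapes documented above for the two dictionaries:

names = set()
for skills in domain.get_all_resources_skills().values():  # {resource: {skill: detail}}
    names.update(skills)
for modes in domain.get_all_tasks_skills().values():       # {task: {mode: {skill: detail}}}
    for skills in modes.values():
        names.update(skills)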

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_duration_distribution UncertainMultivariateTaskDuration

get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the multivariate Distribution of the duration of the given task in the given mode. Multivariate settings need to be provided.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id, and each list is composed of DiscreteDistributions over tuples. Each tuple contains a condition element (first item in tuple) and the probability (second item in tuple) that this element is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)]),
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, the non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{
    1: False,
    2: True,
    3: False,
    4: False,
    5: True,
    6: False,
}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM times (int) that need to separate the end of the first task from the start of the second task.

E.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).
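
Following the TIP above, a domain whose value only depends on the source state and action would declare it as follows (sketch):

def _is_transition_value_dependent_on_next_state_(self) -> bool:
    return False  # lets solvers evaluate transition values without computing next_state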

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
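
A typical pattern (a sketch; the exact constructor arguments depend on the chosen template) is to call it at the end of the subclass constructor:

def __init__(self):
    super().__init__()
    # ... set up problem data (tasks, resources, durations, ...) ...
    self.initialize_domain()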

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial observation). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns an environment outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Samples the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying distribution.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Activate or deactivate in-place state updates: if True, the simulator modifies the given state in place; if False, it creates a copy of the state first. The in-place version is several times faster but will lead to bugs in graph search solvers.
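
For example (sketch):

# Graph search solvers need state copies, not in-place mutation:
domain.set_inplace_environment(False)

# For fast simulation-style rollouts, in-place updates can be enabled:
# domain.set_inplace_environment(True)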

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns an environment outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
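
Putting reset() and step() together, a bare rollout loop might look like this (a sketch; my_policy is a placeholder, and the outcome fields follow the EnvironmentOutcome structure returned above):

observation = domain.reset()
for _ in range(100):
    action = my_policy(observation)   # placeholder for an actual policy
    outcome = domain.step(action)
    observation = outcome.observation
    if outcome.termination:
        break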

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time it was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time it was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State and will also log in task_details the time the task was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given EnumerableAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}
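
# Example

An illustrative sketch of a possible implementation inside a domain subclass; the unit names, skill names and proficiency levels are hypothetical (requires from typing import Any):

def _get_all_resources_skills(self) -> dict[str, dict[str, Any]]:
    # unit -> skill -> detail of skill (here, a proficiency level)
    return {
        "operator_1": {"welding": 2, "drilling": 1},
        "operator_2": {"welding": 1},
    }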

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused, or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.
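
# Example

An illustrative sketch of a possible implementation; the task ids, mode ids and costs are hypothetical:

def _get_mode_costs(self) -> dict[int, dict[int, float]]:
    # task 1 can run in two modes with different costs; task 2 has a single mode
    return {
        1: {1: 10.0, 2: 15.0},
        2: {1: 7.5},
    }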

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action a, this function represents: P(O|s, a), where O is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.
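
# Example

An illustrative sketch of a small precedence graph; the task ids are hypothetical:

def _get_predecessors(self) -> dict[int, list[int]]:
    # task 3 can only start once tasks 1 and 2 are finished
    return {1: [], 2: [], 3: [1, 2]}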

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).
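
# Example

An illustrative sketch with hypothetical resource names:

def _get_resource_renewability(self) -> dict[str, bool]:
    # workers are renewable; raw material is consumed for good
    return {"worker": True, "raw_material": False}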

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.
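
# Example

An illustrative sketch with hypothetical unit and type names:

def _get_resource_type_for_unit(self) -> dict[str, str]:
    # both operators are resource units of the "worker" resource type
    return {"operator_1": "worker", "operator_2": "worker"}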

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_duration_distribution UncertainMultivariateTaskDuration

_get_task_duration_distribution(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0,
multivariate_settings: Optional[dict[str, int]] = None
) -> Distribution

Return the univariate Distribution of the duration of the given task in the given mode.
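
# Example

An illustrative sketch of a possible implementation; the two-point distribution is hypothetical, and the import path assumes DiscreteDistribution lives in skdecide.core:

from skdecide.core import DiscreteDistribution

def _get_task_duration_distribution(self, task, mode=1, progress_from=0.0, multivariate_settings=None):
    # every task lasts 3 or 4 time units with equal probability
    return DiscreteDistribution([(3, 0.5), (4, 0.5)])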

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2]
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of the tasks, as a set, dict or list of int.
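
# Example

An illustrative sketch with a hypothetical five-task project:

def _get_tasks_ids(self) -> set[int]:
    return {1, 2, 3, 4, 5}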

# _get_tasks_mode SingleMode

_get_tasks_mode(
  self
) -> dict[int, ModeConsumption]

Return a dictionary where the key is a task id and the value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption { 12: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}) }

E.g. with time varying resource consumption { 12: VaryingModeConsumption({'rt_1': [2,2,2,2,3], 'rt_2': [0,0,0,0,0], 'ru_1': [1,1,1,1,1]}) }

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption.

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object.

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary in time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode, sampled from the underlying univariate distribution.
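
# Example

An illustrative sketch of a possible implementation; the uniform range is hypothetical:

import random

def _sample_task_duration(self, task, mode=1, progress_from=0.0):
    # durations sampled uniformly between 2 and 6 time units, whatever the task/mode
    return random.randint(2, 6)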

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores the next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# SingleModeRCPSP_Simulated_Stochastic_Durations_WithConditionalTasks

Resource project scheduling problem with stochastic durations and conditional tasks template. It consists of:

  • a deterministic scheduling problem with precedence constraints between tasks
  • a set of renewable resources with constant availability (capacity)
  • tasks having a deterministic resource consumption and a stochastic duration that is simulated as a black box
  • optional tasks that have to be executed depending on the durations of other tasks

The goal is to minimize the overall expected makespan.

# add_to_current_conditions WithConditionalTasks

add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for a given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# all_tasks_possible MixedRenewable

all_tasks_possible(
  self,
state: State
) -> bool

Return True if, for each task, there is at least one mode in which the task can be executed, given the resource configuration in the state provided as argument; return False otherwise. If this function returns False, the scheduling problem is unsolvable from this state. This copes with the use of non-renewable resources, which may lead to a state from which some task will not be possible anymore.
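
# Example

An illustrative usage sketch (domain and state are hypothetical instances):

# Prune this state: if False, non-renewable resource consumption has made
# some task impossible in every mode, so the problem is unsolvable from here
if not domain.all_tasks_possible(state):
    raise RuntimeError("Unsolvable state: some task can no longer be executed")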

# check_if_action_can_be_started SchedulingDomain

check_if_action_can_be_started(
  self,
state: State,
action: SchedulingAction
) -> tuple[bool, dict[str, int]]

Check if a start or resume action can be applied. It returns a boolean and a dictionary of resources to use.
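
# Example

An illustrative usage sketch (domain, state and action are hypothetical instances):

# Check applicability before applying a start/resume action
can_start, resources_to_use = domain.check_if_action_can_be_started(state, action)
if can_start:
    print("Action applicable, using resources:", resources_to_use)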

# check_unique_resource_names UncertainResourceAvailabilityChanges

check_unique_resource_names(
  self
) -> bool

Return True if there are no duplicates in resource names across both resource types and resource units name lists.

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function returns always True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# find_one_ressource_to_do_one_task WithResourceSkills

find_one_ressource_to_do_one_task(
  self,
task: int,
mode: int
) -> list[str]

Handle the common case where the task can be done by a single resource unit. In the general case, it might just return no possible resource unit.

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_all_condition_items WithConditionalTasks

get_all_condition_items(
  self
) -> Enum

Return an Enum with all the elements that can be used to define a condition.

Example:

class ConditionElementsExample(Enum):
    OK = 0
    NC_PART_1_OPERATION_1 = 1
    NC_PART_1_OPERATION_2 = 2
    NC_PART_2_OPERATION_1 = 3
    NC_PART_2_OPERATION_2 = 4
    HARDWARE_ISSUE_MACHINE_A = 5
    HARDWARE_ISSUE_MACHINE_B = 6

return ConditionElementsExample

# get_all_resources_skills WithResourceSkills

get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# get_all_tasks_skills WithResourceSkills

get_all_tasks_skills(
  self
) -> dict[int, dict[int, dict[str, Any]]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# get_all_unconditional_tasks WithConditionalTasks

get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_available_tasks WithConditionalTasks

get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, i.e. tasks that are remaining, or that have been completed, paused, or started/resumed.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_max_horizon SchedulingDomain

get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# get_mode_costs WithModeCosts

get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# get_objectives SchedulingDomain

get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action a, this function represents: P(O|s, a), where O is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_original_quantity_resource WithoutResourceAvailabilityChange

get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# get_preallocations WithPreallocations

get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# get_predecessors WithPrecedence

get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.

# get_quantity_resource DeterministicResourceAvailabilityChanges

get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.

# get_resource_cost_per_time_unit WithResourceCosts

get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# get_resource_renewability MixedRenewable

get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value indicates whether this resource is renewable (True) or not (False).

# get_resource_type_for_unit WithResourceUnits

get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# get_resource_types_names WithResourceTypes

get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# get_resource_units_names WithResourceUnits

get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# get_skills_names WithResourceSkills

get_skills_names(
  self
) -> set[str]

Return all skill names as a set of str. Skill names are defined in the two dictionaries returned by the get_all_resources_skills and get_all_tasks_skills functions.

# get_skills_of_resource WithResourceSkills

get_skills_of_resource(
  self,
resource: str
) -> dict[str, Any]

Return the skills of a given resource.

# get_skills_of_task WithResourceSkills

get_skills_of_task(
  self,
task: int,
mode: int
) -> dict[str, Any]

Return the skill requirements for a given task.

# get_successors WithPrecedence

get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# get_task_existence_conditions WithConditionalTasks

get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2]
}

# get_task_on_completion_added_conditions WithConditionalTasks

get_task_on_completion_added_conditions(
  self
) -> dict[int, list[Distribution]]

Return a dict of lists. The key of the dict is the task id and the value is a list of DiscreteDistributions. Each distribution is composed of tuples, where each tuple contains the probability (first item in the tuple) that the condition element (second item in the tuple) is True. The probabilities in each distribution should sum up to 1. The dictionary should only contain the keys of tasks that can create conditions.

Example:

return {
    12: [
        DiscreteDistribution([(ConditionElementsExample.NC_PART_1_OPERATION_1, 0.1), (ConditionElementsExample.OK, 0.9)]),
        DiscreteDistribution([(ConditionElementsExample.HARDWARE_ISSUE_MACHINE_A, 0.05), (ConditionElementsExample.OK, 0.95)])
    ]
}

# get_task_paused_non_renewable_resource_returned WithPreemptivity

get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# get_task_preemptivity WithPreemptivity

get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g.

{1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# get_task_progress CustomTaskProgress

get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to.

# get_task_resuming_type WithPreemptivity

get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g.

{
    1: ResumeType.NA,
    2: ResumeType.Resume,
    3: ResumeType.NA,
    4: ResumeType.NA,
    5: ResumeType.Restart,
    6: ResumeType.NA,
}

# get_time_lags WithTimeLag

get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return nested dictionaries where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.

e.g.

{
    12: {
        15: TimeLag(5, 10),
        16: TimeLag(5, 20),
        17: MinimumOnlyTimeLag(5),
        18: MaximumOnlyTimeLag(15),
    }
}

# Returns

A dictionary of TimeLag objects.

# get_time_window WithTimeWindow

get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object. Note that the max time horizon needs to be provided to the TimeWindow constructors, e.g.

{
    1: TimeWindow(10, 15, 20, 30, self.get_max_horizon()),
    2: EmptyTimeWindow(self.get_max_horizon()),
    3: EndTimeWindow(20, 25, self.get_max_horizon()),
    4: EndBeforeOnlyTimeWindow(40, self.get_max_horizon()),
}

# Returns

A dictionary of TimeWindow objects.

# get_variable_resource_consumption VariableResourceConsumption

get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary in time for any of the tasks.

# initialize_domain SchedulingDomain

initialize_domain(
  self
)

Initialize a scheduling domain. This function needs to be called when instantiating a scheduling domain.
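
# Example

An illustrative sketch of calling initialize_domain() at instantiation time; the subclass name is hypothetical:

class MyProjectDomain(SchedulingDomain):
    def __init__(self):
        self.initialize_domain()  # must be called when instantiating a scheduling domain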

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# sample_completion_conditions WithConditionalTasks

sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# sample_quantity_resource UncertainResourceAvailabilityChanges

sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the number of resources available at time t and the number of resources of this type consumed so far.

# sample_task_duration SimulatedTaskDuration

sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Sample, store and return a task duration for the given task in the given mode.

# set_inplace_environment SchedulingDomain

set_inplace_environment(
  self,
inplace_environment: bool
)

Activate or deactivate in-place modification of the given state by the simulator (otherwise a copy of the state is created before each transition). The in-place version is several times faster but will lead to bugs in graph search solvers.
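
# Example

An illustrative usage sketch (domain is a hypothetical instance):

domain.set_inplace_environment(True)   # fast rollouts: states are modified in place
domain.set_inplace_environment(False)  # safe for graph search solvers: states are copied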

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-step rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
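
# Example

A minimal episode loop; my_action is a placeholder for a SchedulingAction of your choice, and outcome.termination is assumed to behave as a boolean here (it may be a per-agent mapping in multi-agent settings).

observation = domain.reset()
for _ in range(100):
    outcome = domain.step(my_action)
    if outcome.termination:
        break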

# update_complete_dummy_tasks SchedulingDomain

update_complete_dummy_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_simulation SchedulingDomain

update_complete_dummy_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_dummy_tasks_uncertain SchedulingDomain

update_complete_dummy_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of newly started tasks whose duration is 0 from ongoing to complete.

# update_complete_tasks SchedulingDomain

update_complete_tasks(
  self,
state: State
)

Update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_simulation SchedulingDomain

update_complete_tasks_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the status of newly completed tasks in the state from ongoing to complete and update resource availability. This function will also log in task_details the time at which each task was completed.

# update_complete_tasks_uncertain SchedulingDomain

update_complete_tasks_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the status of newly completed tasks in the state from ongoing to complete, update resource availability and update on-completion conditions. This function will also log in task_details the time at which each task was completed.

# update_conditional_tasks SchedulingDomain

update_conditional_tasks(
  self,
state: State,
action: SchedulingAction
)

Update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_simulation SchedulingDomain

update_conditional_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_conditional_tasks_uncertain SchedulingDomain

update_conditional_tasks_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update remaining tasks by checking conditions and potentially adding conditional tasks.

# update_pause_tasks SchedulingDomain

update_pause_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_simulation SchedulingDomain

update_pause_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_pause_tasks_uncertain SchedulingDomain

update_pause_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from ongoing to paused if specified in the action and update resource availability. This function will also log in task_details the time it was paused.

# update_progress SchedulingDomain

update_progress(
  self,
state: State
)

Update the progress of all ongoing tasks in the state.

# update_progress_simulation SchedulingDomain

update_progress_simulation(
  self,
state: State
)

In a simulated scheduling environment, update the progress of all ongoing tasks in the state.

# update_progress_uncertain SchedulingDomain

update_progress_uncertain(
  self,
states: DiscreteDistribution[State]
)

In an uncertain scheduling environment, update the progress of all ongoing tasks in the state.

# update_resource_availability SchedulingDomain

update_resource_availability(
  self,
state: State,
action: SchedulingAction
)

Update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_simulation SchedulingDomain

update_resource_availability_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resource_availability_uncertain SchedulingDomain

update_resource_availability_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update resource availability for the next time step. This should be called after update_time().

# update_resume_tasks SchedulingDomain

update_resume_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_simulation SchedulingDomain

update_resume_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_resume_tasks_uncertain SchedulingDomain

update_resume_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from paused to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was resumed.

# update_start_tasks SchedulingDomain

update_start_tasks(
  self,
state: State,
action: SchedulingAction
)

Update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_simulation SchedulingDomain

update_start_tasks_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function will also log in task_details the time it was started.

# update_start_tasks_uncertain SchedulingDomain

update_start_tasks_uncertain(
  self,
state: State,
action: SchedulingAction
)

In an uncertain scheduling environment, update the status of a task from remaining to ongoing if specified in the action and update resource availability. This function returns a DiscreteDistribution of State. This function will also log in task_details the time it was started.

# update_time SchedulingDomain

update_time(
  self,
state: State,
action: SchedulingAction
)

Update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_simulation SchedulingDomain

update_time_simulation(
  self,
state: State,
action: SchedulingAction
)

In a simulated scheduling environment, update the time of the state if the time_progress attribute of the given SchedulingAction is True.

# update_time_uncertain SchedulingDomain

update_time_uncertain(
  self,
states: DiscreteDistribution[State],
action: SchedulingAction
)

In an uncertain scheduling environment, update the time of the states if the time_progress attribute of the given SchedulingAction is True.

# _add_to_current_conditions WithConditionalTasks

_add_to_current_conditions(
  self,
task: int,
state
)

Sample completion conditions for the given task and add these conditions to the list of conditions in the given state. This function should be called when a task completes.

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

This function always returns True by default because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> StrDict[Space[D.T_event]]

To be implemented if needed one day.

# _get_all_resources_skills WithResourceSkills

_get_all_resources_skills(
  self
) -> dict[str, dict[str, Any]]

Return a nested dictionary where the first key is the name of a resource type or resource unit and the second key is the name of a skill. The value defines the details of the skill. E.g. {unit: {skill: (detail of skill)}}

# _get_all_tasks_skills WithResourceSkills

_get_all_tasks_skills(
  self
) -> dict[int, dict[str, Any]]

Return a nested dictionary where the first key is the name of a task and the second key is the name of a skill. The value defines the details of the skill. E.g. {task: {skill: (detail of skill)}}

# _get_all_unconditional_tasks WithConditionalTasks

_get_all_unconditional_tasks(
  self
) -> set[int]

Returns the set of all task ids for which there are no conditions. These tasks are to be considered at the start of a project (i.e. in the initial state).

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> StrDict[Space[D.T_event]]

Returns the action space from a state. TODO: think about a way to avoid the isinstance usage.

# _get_available_tasks WithConditionalTasks

_get_available_tasks(
  self,
state
) -> set[int]

Returns the set of all task ids that can be considered under the conditions defined in the given state. Note that the set will contain the ids of all tasks in the domain that meet the conditions, that is tasks that are remaining, or that have been completed, paused or started/resumed.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> StrDict[Space[D.T_observation]]

Get the domain goals space (finite or infinite set).

This is a helper function called by default from Goals._get_goals(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The goals space.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Create and return an empty initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_max_horizon SchedulingDomain

_get_max_horizon(
  self
) -> int

Return the maximum time horizon (int).

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_mode_costs WithModeCosts

_get_mode_costs(
  self
) -> dict[int, dict[int, float]]

Return a nested dictionary where the first key is the id of a task (int), the second key the id of a mode and the value indicates the cost of executing the task in that mode.

# _get_next_state SchedulingDomain

_get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

This function will be used if the domain is defined with DeterministicTransitions. This function will be ignored if the domain is defined as having UncertainTransitions or Simulation.

# _get_next_state_distribution SchedulingDomain

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> Distribution[D.T_state]

This function will be used if the domain is defined with UncertainTransitions. This function will be ignored if the domain is defined as a Simulation. This function may also be used by uncertainty-specialised solvers on deterministic domains.

# _get_objectives SchedulingDomain

_get_objectives(
  self
) -> list[SchedulingObjectiveEnum]

Return the objectives to consider as a list. The items should be of SchedulingObjectiveEnum type.
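
# Example

A minimal sketch for a domain that minimizes the makespan only.

def _get_objectives(self) -> list[SchedulingObjectiveEnum]:
    # Add SchedulingObjectiveEnum.COST here for cost minimization as well.
    return [SchedulingObjectiveEnum.MAKESPAN]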

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(O|s, a)$, where $O$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.
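
# Example

A minimal sketch for a fully observable domain, where the observation equals the state with probability 1 (assuming SingleValueDistribution from skdecide.core).

from skdecide.core import SingleValueDistribution

def _get_observation_distribution(self, state, action=None):
    # Deterministic observation: P(O = state | s, a) = 1.
    return SingleValueDistribution(state)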

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> StrDict[Space[D.T_observation]]

To be implemented if needed one day.

# _get_original_quantity_resource WithoutResourceAvailabilityChange

_get_original_quantity_resource(
  self,
resource: str,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit).

# _get_preallocations WithPreallocations

_get_preallocations(
  self
) -> dict[int, list[str]]

Return a dictionary where the key is the id of a task (int) and the value indicates the pre-allocated resources for this task (as a list of str).

# _get_predecessors WithPrecedence

_get_predecessors(
  self
) -> dict[int, list[int]]

Return the predecessors of the tasks. Predecessors are given as a list for a task given as a key.
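
# Example

A minimal sketch for a three-task project in which task 3 can only start once tasks 1 and 2 are complete (task ids are hypothetical).

def _get_predecessors(self) -> dict[int, list[int]]:
    return {1: [], 2: [], 3: [1, 2]}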

# _get_quantity_resource DeterministicResourceAvailabilityChanges

_get_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Return the resource availability (int) for the given resource (either resource type or resource unit) at the given time.
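
# Example

A minimal sketch of a deterministic resource calendar with hypothetical figures: one "worker" fewer during a maintenance window.

def _get_quantity_resource(self, resource: str, time: int, **kwargs) -> int:
    if resource == "worker":
        return 4 if 10 <= time < 20 else 5  # maintenance window at t = 10..19
    return 100  # e.g. a budget-like resource type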

# _get_resource_cost_per_time_unit WithResourceCosts

_get_resource_cost_per_time_unit(
  self
) -> dict[str, float]

Return a dictionary where the key is the name of a resource (str) and the value indicates the cost of using this resource per time unit.

# _get_resource_renewability MixedRenewable

_get_resource_renewability(
  self
) -> dict[str, bool]

Return a dictionary where the key is a resource name (string) and the value whether this resource is renewable (True) or not (False).
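
# Example

A minimal sketch mixing a renewable and a non-renewable resource (hypothetical names).

def _get_resource_renewability(self) -> dict[str, bool]:
    return {
        "worker": True,   # released again when a task completes
        "budget": False,  # consumed for good once allocated
    }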

# _get_resource_type_for_unit WithResourceUnits

_get_resource_type_for_unit(
  self
) -> dict[str, str]

Return a dictionary where the key is a resource unit name and the value a resource type name. An empty dictionary can be used if there are no resource units matching a resource type.

# _get_resource_types_names WithResourceTypes

_get_resource_types_names(
  self
) -> list[str]

Return the names (string) of all resource types as a list.

# _get_resource_units_names WithResourceUnits

_get_resource_units_names(
  self
) -> list[str]

Return the names (string) of all resource units as a list.

# _get_successors WithPrecedence

_get_successors(
  self
) -> dict[int, list[int]]

Return the successors of the tasks. Successors are given as a list for a task given as a key.

# _get_task_existence_conditions WithConditionalTasks

_get_task_existence_conditions(
  self
) -> dict[int, list[int]]

Return a dictionary where the key is a task id and the value a list of conditions to be respected (True) for the task to be part of the schedule. If a task has no entry in the dictionary, there are no conditions for that task.

Example:

return {
    20: [get_all_condition_items().NC_PART_1_OPERATION_1],
    21: [get_all_condition_items().HARDWARE_ISSUE_MACHINE_A],
    22: [get_all_condition_items().NC_PART_1_OPERATION_1, get_all_condition_items().NC_PART_1_OPERATION_2],
}

# _get_task_paused_non_renewable_resource_returned WithPreemptivity

_get_task_paused_non_renewable_resource_returned(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value is of type bool indicating if the non-renewable resources are consumed when the task is paused (False) or made available again (True). E.g.

{
    2: False,  # if paused, non-renewable resources will be consumed
    5: True,   # if paused, non-renewable resources will be available again
}

# _get_task_preemptivity WithPreemptivity

_get_task_preemptivity(
  self
) -> dict[int, bool]

Return a dictionary where the key is a task id and the value a boolean indicating if the task can be paused or stopped. E.g. {1: False, 2: True, 3: False, 4: False, 5: True, 6: False}

# _get_task_progress CustomTaskProgress

_get_task_progress(
  self,
task: int,
t_from: int,
t_to: int,
mode: Optional[int],
sampled_duration: Optional[int] = None
) -> float

# Returns

The task progress (float) between t_from and t_to based on the task duration and assuming linear progress.

# _get_task_resuming_type WithPreemptivity

_get_task_resuming_type(
  self
) -> dict[int, ResumeType]

Return a dictionary where the key is a task id and the value is of type ResumeType indicating if the task can be resumed (restarted from where it was paused with no time loss) or restarted (restarted from the start). E.g. {1: ResumeType.NA, 2: ResumeType.Resume, 3: ResumeType.NA, 4: ResumeType.NA, 5: ResumeType.Restart, 6: ResumeType.NA}

# _get_tasks_ids MultiMode

_get_tasks_ids(
  self
) -> Union[set[int], dict[int, Any], list[int]]

Return the ids of all tasks, as a set, dict or list of int.

# _get_tasks_mode SingleMode

_get_tasks_mode(
  self
) -> dict[int, ModeConsumption]

Return a dictionary where the key is a task id and the value is a ModeConsumption object defining the resource consumption. If the domain is an instance of VariableResourceConsumption, VaryingModeConsumption objects should be used. If this is not the case (i.e. the domain is an instance of ConstantResourceConsumption), then ConstantModeConsumption should be used.

E.g. with constant resource consumption { 12: ConstantModeConsumption({'rt_1': 2, 'rt_2': 0, 'ru_1': 1}) }

E.g. with time varying resource consumption { 12: VaryingModeConsumption({'rt_1': [2,2,2,2,3], 'rt_2': [0,0,0,0,0], 'ru_1': [1,1,1,1,1]}) }

# _get_tasks_modes MultiMode

_get_tasks_modes(
  self
) -> dict[int, dict[int, ModeConsumption]]

Return a nested dictionary where the first key is a task id and the second key is a mode id. The value is a ModeConsumption object defining the resource consumption.
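
# Example

A minimal sketch of a two-mode task, assuming constant resource consumption (ConstantModeConsumption) and hypothetical resource names.

def _get_tasks_modes(self) -> dict[int, dict[int, ModeConsumption]]:
    return {
        12: {
            1: ConstantModeConsumption({"rt_1": 2, "ru_1": 1}),  # fast, resource-hungry
            2: ConstantModeConsumption({"rt_1": 1, "ru_1": 1}),  # slower, leaner
        },
    }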

# _get_time_lags WithTimeLag

_get_time_lags(
  self
) -> dict[int, dict[int, TimeLag]]

Return a nested dictionary where the first key is the id of a task (int) and the second key is the id of another task (int). The value is a TimeLag object containing the MINIMUM and MAXIMUM time (int) that must separate the end of the first task from the start of the second task.
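
# Example

A minimal sketch, assuming TimeLag is built from a minimum and a maximum lag (hypothetical values): at least 2 and at most 5 time units between the end of task 1 and the start of task 3.

def _get_time_lags(self) -> dict[int, dict[int, TimeLag]]:
    return {1: {3: TimeLag(2, 5)}}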

# _get_time_window WithTimeWindow

_get_time_window(
  self
) -> dict[int, TimeWindow]

Return a dictionary where the key is the id of a task (int) and the value is a TimeWindow object (e.g. an EmptyTimeWindow when the task is unconstrained).

# Returns

A dictionary of TimeWindow objects.

# _get_variable_resource_consumption VariableResourceConsumption

_get_variable_resource_consumption(
  self
) -> bool

Return True if the domain has variable resource consumption, False if the resource consumption does not vary over time for any of the tasks.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide, it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _sample_completion_conditions WithConditionalTasks

_sample_completion_conditions(
  self,
task: int
) -> list[int]

Sample the condition distributions associated with the given task and return a list of sampled conditions.

# _sample_quantity_resource UncertainResourceAvailabilityChanges

_sample_quantity_resource(
  self,
resource: str,
time: int,
**kwargs
) -> int

Sample an amount of resource availability (int) for the given resource (either resource type or resource unit) at the given time. This number should be the sum of the amount of this resource available at time t and the amount of this resource consumed so far.

# _sample_task_duration SimulatedTaskDuration

_sample_task_duration(
  self,
task: int,
mode: Optional[int] = 1,
progress_from: Optional[float] = 0.0
) -> int

Return a task duration for the given task in the given mode.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set the internal memory attribute _memory to the given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_info]]

This function will be used if the domain is defined as a Simulation (i.e. transitions are defined by call to a simulation). This function may also be used by simulation-based solvers on non-Simulation domains.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.