# hub.domain.flight_planning.domain

Domain specification

# State

Definition of an aircraft state during the flight plan.

# Constructor State

State(
  trajectory,
pos
)

Initialisation of a state

# Parameters

  • trajectory: Trajectory information of the flight.
  • pos: Current position in the airways graph.

# H_Action

Horizontal action that can be performed by the aircraft.

# down H_Action

# straight H_Action

# up H_Action

# V_Action

Vertical action that can be performed by the aircraft.

# climb V_Action

# cruise V_Action

# descent V_Action

# FlightPlanningDomain

Automated flight planning domain.

# Domain definition

The flight planning domain can be quickly defined by:

  • An origin, as the ICAO code of an airport,
  • A destination, as the ICAO code of an airport,
  • An aircraft type, as a string recognizable by the OpenAP library.
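
For instance, a minimal instantiation could look like the following sketch (the ICAO codes and the A320 type are illustrative values; the import path follows this module's name):

# Minimal flight planning domain: Paris-CDG to Toulouse-Blagnac with an A320
from skdecide.hub.domain.flight_planning.domain import FlightPlanningDomain

domain = FlightPlanningDomain(
    origin="LFPG",       # Paris-CDG
    destination="LFBO",  # Toulouse-Blagnac
    actype="A320",       # aircraft type known to OpenAP
)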

# Airways graph

A three-dimensional airway graph of waypoints is created. The graph follows the great circle, which represents the shortest path between the origin and the destination. The planner computes a plan by choosing waypoints in the graph, which are represented by 4-dimensional states. There are 3 phases in the graph:

  • The climbing phase
  • The cruise phase
  • The descent phase

The flight planning domain allows choosing the number of forward, lateral and vertical waypoints in the graph. It is also possible to choose between different widths (tiny, small, normal, large, xlarge), which will increase or decrease the graph width.
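
As a hedged sketch, the graph resolution and width can be set at construction time (parameter names as in the constructor documented below; the values are illustrative):

# Control the airways graph resolution and width (illustrative values)
domain = FlightPlanningDomain(
    "LFPG", "LFBO", "A320",
    nb_forward_points=41,   # default
    nb_lateral_points=11,   # default
    nb_vertical_points=5,   # example value
    graph_width="normal",   # in: tiny, small, normal, large, xlarge
)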

# State representation

Here, a state is represented by 4 features:

  • The position in the graph (x,y,z)
  • The aircraft mass, which can also represent the fuel consumption (integer)
  • The altitude (integer)
  • The time (seconds)
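
Assuming a domain built as in the sketches above, these features can be inspected through the State attributes (trajectory, pos) described at the top of this page; the attribute names below are assumed to match the constructor parameters:

# Inspect the initial state (attribute names assumed from the State constructor)
state = domain.get_initial_state()
print(state.pos)         # current position in the airways graph
print(state.trajectory)  # trajectory flown so far (mass, altitude, time, ...)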

# Wind interpolation

The flight planning domain can take wind conditions into account. This interpolation has a major impact on the results, as jet streams are high-altitude winds which can increase or decrease the ground speed of the aircraft. It also has an impact on the computation time of a flight plan, as the objective and heuristic functions become more complex.

# Objective (or cost) functions

There are three possible objective functions:

  • Fuel (Default)
  • Distance
  • Time

The chosen objective represents the cost to go from one state to another. The aim of the algorithm is to minimize this cost.

# Heuristic functions

When using an A* algorithm to compute the flight plan, we need to provide a heuristic function to guide the search. For now, there are 5 different (non-admissible) heuristic functions, selected via self.heuristic_name:

  • fuel, which computes the fuel required to get to the goal. It takes into account the local wind & speed of the aircraft.
  • time, which computes the time required to get to the goal. It takes into account the local wind & speed of the aircraft.
  • distance, which computes the distance to the goal.
  • lazy_fuel, which propagates the fuel consumed so far.
  • lazy_time, which propagates the time spent on the flight so far.
  • None: a 0 cost value is given, which turns the A* algorithm into a Dijkstra-like algorithm.
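
A hedged sketch combining an objective with a guiding heuristic (values as listed above):

# Minimize flight time, guided by the lazy_time heuristic
domain = FlightPlanningDomain(
    "LFPG", "LFBO", "A320",
    objective="time",            # fuel (default), distance or time
    heuristic_name="lazy_time",  # or None for Dijkstra-like search
)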

# Aircraft performance models

The flight planning domain can use two possible A/C performance models:

  • OpenAP: the aircraft performance model is based on the OpenAP library.
  • Poll-Schumann: the aircraft performance model is based on Poll-Schumann equations, as stated in the paper: "An estimation method for the fuel burn and other performance characteristics of civil transport aircraft in the cruise" by Poll and Schumann; The Aeronautical Journal, 2020.
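
Selecting the performance model is a matter of one constructor argument (values per the constructor documentation below):

# Use the Poll-Schumann performance model instead of the default OpenAP one
domain = FlightPlanningDomain("LFPG", "LFBO", "A320", perf_model_name="PS")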

# Optional features

The flight planning domain has several optional features:

  • Fuel loop: this is an optimisation of the fuel loaded onto the aircraft. It runs several flights to compute the fuel to load, using the distance objective & heuristic.

  • Constraints definition: you can define constraints such as

    • A time constraint, represented by a time window
    • A fuel constraint, represented by the maximum fuel, for instance.
  • Slopes: you can define your own climbing & descending slopes, which have to be between 10.0 and 25.0.
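
A hedged sketch of these options; the exact structure of the constraint values is an assumption, as the constructor below only documents the keys 'time' and 'fuel':

# Optional features: fuel loop, constraints and custom slopes (illustrative values)
domain = FlightPlanningDomain(
    "LFPG", "LFBO", "A320",
    fuel_loop=True,                  # optimise the fuel loaded before planning
    constraints={"time": (0.0, 7200.0), "fuel": 5000.0},  # assumed value format
    climbing_slope=15.0,             # must lie between 10.0 and 25.0
    descending_slope=12.0,           # must lie between 10.0 and 25.0
)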

# Constructor FlightPlanningDomain

FlightPlanningDomain(
  origin: typing.Union[str, tuple],
destination: typing.Union[str, tuple],
actype: str,
weather_date: typing.Optional[skdecide.hub.domain.flight_planning.domain.WeatherDate] = None,
wind_interpolator: typing.Optional[skdecide.hub.domain.flight_planning.weather_interpolator.weather_tools.interpolator.GenericInterpolator.GenericWindInterpolator] = None,
objective: str = 'fuel',
heuristic_name: str = 'fuel',
perf_model_name: str = 'openap',
constraints = None,
nb_forward_points: int = 41,
nb_lateral_points: int = 11,
nb_vertical_points: typing.Optional[int] = None,
take_off_weight: typing.Optional[int] = None,
fuel_loaded: typing.Optional[float] = None,
fuel_loop: bool = False,
fuel_loop_solver_cls: typing.Optional[type[skdecide.solvers.Solver]] = None,
fuel_loop_solver_kwargs: typing.Optional[dict[str, typing.Any]] = None,
fuel_loop_tol: float = 0.001,
climbing_slope: typing.Optional[float] = None,
descending_slope: typing.Optional[float] = None,
graph_width: typing.Optional[str] = None,
res_img_dir: typing.Optional[str] = None,
starting_time: float = 28800.0
)

Initialisation of a flight planning instance

# Parameters

  • origin (Union[str, tuple]): ICAO code of the airport, or a tuple (lat,lon,alt), of the origin of the flight plan. Altitude should be in ft.
  • destination (Union[str, tuple]): ICAO code of the airport, or a tuple (lat,lon,alt), of the destination of the flight plan. Altitude should be in ft.
  • actype (str): Aircraft type described in the OpenAP data (https://github.com/junzis/openap/tree/master/openap/data/aircraft).
  • weather_date (WeatherDate, optional): Date for the weather, needed for days management. If None, no wind will be applied.
  • wind_interpolator (GenericWindInterpolator, optional): Wind interpolator for the flight plan. If None, one is created from the specified weather_date. The data is either already present locally or downloaded from https://www.ncei.noaa.gov.
  • objective (str, optional): Cost function of the flight plan. It can be either fuel, distance or time. Defaults to "fuel".
  • heuristic_name (str, optional): Heuristic of the flight plan, which will guide the aircraft through the graph. It can be either fuel, distance or time. Defaults to "fuel".
  • perf_model_name (str, optional): Aircraft performance model used in the flight plan. It can be either openap or PS (Poll-Schumann). Defaults to "openap".
  • constraints (dict, optional): Constraints dictionary (keys: ['time', 'fuel']) to be defined for the flight plan. Defaults to None.
  • nb_forward_points (int, optional): Number of forward nodes in the graph. Defaults to 41.
  • nb_lateral_points (int, optional): Number of lateral nodes in the graph. Defaults to 11.
  • nb_vertical_points (int, optional): Number of vertical nodes in the graph. Defaults to None.
  • take_off_weight (int, optional): Take-off weight of the aircraft. Defaults to None.
  • fuel_loaded (float, optional): Fuel loaded in the aircraft for the flight plan. Defaults to None.
  • fuel_loop (bool, optional): Boolean to run a fuel loop to optimize the fuel loaded for the flight. Defaults to False.
  • fuel_loop_solver_cls (type[Solver], optional): Solver class used in the fuel loop. Defaults to LazyAstar.
  • fuel_loop_solver_kwargs (dict[str, Any], optional): Kwargs to initialize the solver used in the fuel loop.
  • fuel_loop_tol (float, optional): Tolerance on the fuel used to stop the fuel loop. Defaults to 0.001.
  • climbing_slope (float, optional): Climbing slope of the aircraft; has to be between 10.0 and 25.0. Defaults to None.
  • descending_slope (float, optional): Descending slope of the aircraft; has to be between 10.0 and 25.0. Defaults to None.
  • graph_width (str, optional): Airways graph width, in ["tiny", "small", "normal", "large", "xlarge"]. Defaults to None.
  • res_img_dir (str, optional): Directory in which images will be saved. Defaults to None.
  • starting_time (float, optional): Start time of the flight, in seconds. Defaults to 8 AM (3,600.0 * 8.0 = 28,800.0).
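
Putting the pieces together, an end-to-end planning run could look like the sketch below. The LazyAstar import path and the exact solver API are assumptions that may vary across scikit-decide versions; domain.heuristic is documented further down this page.

# End-to-end sketch: build the domain, solve with A*, then roll the plan out
from skdecide.hub.solver.lazy_astar import LazyAstar  # import path assumed
from skdecide.hub.domain.flight_planning.domain import FlightPlanningDomain

domain_factory = lambda: FlightPlanningDomain("LFPG", "LFBO", "A320", objective="fuel")
domain = domain_factory()
solver = LazyAstar(domain_factory=domain_factory, heuristic=lambda d, s: d.heuristic(s))
solver.solve()
observation = domain.reset()
while not domain.is_goal(observation):
    action = solver.sample_action(observation)
    observation = domain.step(action).observation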

# check_value Rewards

check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its reward specification.

TIP

By default, this function always returns True because any kind of reward should be accepted at this level.

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# flying FlightPlanningDomain

flying(
  self,
from_: pd.DataFrame,
to_: tuple[float, float, int]
) -> pd.DataFrame

Compute the trajectory of a flying object from a given point to a given destination point.

# Parameters

  • from_ (pd.DataFrame): The trajectory of the object so far.
  • to_ (tuple[float, float, int]): The destination of the object.

# Returns

pd.DataFrame: the final trajectory of the object

# get_action_space Events

get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events.get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# get_agents MultiAgent

get_agents(
  self
) -> set[str]

Return a singleton for single agent domains.

We must be here consistent with skdecide.core.autocast() which transforms a single agent domain into a multi agents domain whose only agent has the id "agent".

# get_applicable_actions Events

get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# get_enabled_events Events

get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events.get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# get_goals Goals

get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals.get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# get_initial_state DeterministicInitialized

get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized.get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# get_initial_state_distribution UncertainInitialized

get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized.get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# get_next_state DeterministicTransitions

get_next_state(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> D.T_state

Get the next state given a memory and action.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The deterministic next state.

# get_next_state_distribution UncertainTransitions

get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> DiscreteDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(s'|s, a)$, where $s'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.

# get_observation TransformedObservable

get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# get_observation_distribution PartiallyObservable

get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(o|s, a)$, where $o$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# get_observation_space PartiallyObservable

get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable.get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# get_transition_value UncertainTransitions

get_transition_value(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]],
next_state: Optional[D.T_state] = None
) -> StrDict[Value[D.T_value]]

Get the value (reward or cost) of a transition.

The transition to consider is defined by the function parameters.

TIP

If this function never depends on the next_state parameter for its computation, it is recommended to indicate it by overriding UncertainTransitions._is_transition_value_dependent_on_next_state_() to return False. This information can then be exploited by solvers to avoid computing next state to evaluate a transition value (more efficient).

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.
  • next_state: The next state in which the transition ends (if needed for the computation).

# Returns

The transition value (reward or cost).

# heuristic FlightPlanningDomain

heuristic(
  self,
s: D.T_state,
heuristic_name: typing.Optional[str] = None
) -> skdecide.core.Value[float]

Heuristic to be used by search algorithms, depending on the objective and constraints.

# Parameters

  • s (D.T_state): The current state.
  • heuristic_name (str, optional): Name of the heuristic function to use. Defaults to None.

# Returns

Value[D.T_value]: Heuristic value of the state.
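
For example, continuing the state sketch above (a hedged sketch; skdecide Value objects expose a cost field):

# Evaluate the heuristic at a state with an explicit heuristic name
h = domain.heuristic(state, heuristic_name="fuel")
print(h.cost)  # heuristic cost estimate from this state to the goal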

# is_action Events

is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events.get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# is_applicable_action Events

is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# is_enabled_event Events

is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events.is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# is_goal Goals

is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals.get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# is_observation PartiallyObservable

is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable.get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# is_terminal UncertainTransitions

is_terminal(
  self,
state: D.T_state
) -> StrDict[D.T_predicate]

Indicate whether a state is terminal.

A terminal state is a state with no outgoing transition (except to itself with value 0).

# Parameters

  • state: The state to consider.

# Returns

True if the state is terminal (False otherwise).

# is_transition_value_dependent_on_next_state UncertainTransitions

is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions.is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# render Renderable

render(
  self,
memory: Optional[Memory[D.T_state]] = None,
**kwargs: Any
) -> Any

Compute a visual render of the given memory (state or history), or the internal one if omitted.

By default, Renderable.render() provides some boilerplate code and internally calls Renderable._render(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

A render (e.g. image) or nothing (if the function handles the display directly).

# reset Initializable

reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable.reset() provides some boilerplate code and internally calls Initializable._reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# sample Simulation

sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation.sample() provides some boilerplate code and internally calls Simulation._sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation.sample() to call the external simulator and not use the Simulation._sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# set_memory Simulation

set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment.step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain.set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain.step(my_action)

# set_network FlightPlanningDomain

set_network(
  self,
p0,
p1,
nb_forward_points: int,
nb_lateral_points: int,
nb_vertical_points: int,
climbing_slope: typing.Optional[float] = None,
descending_slope: typing.Optional[float] = None,
graph_width: typing.Optional[str] = None
)

Creation of the airway graph.

# Parameters

  • p0: Origin of the flight plan.
  • p1: Destination of the flight plan.
  • nb_forward_points (int): Number of forward points in the graph.
  • nb_lateral_points (int): Number of lateral points in the graph.
  • nb_vertical_points (int): Number of vertical points in the graph.
  • climbing_slope (float, optional): Climbing slope of the plane during the climbing phase. Defaults to None.
  • descending_slope (float, optional): Descent slope of the plane during the descent phase. Defaults to None.
  • graph_width (str, optional): Width of the graph. Defaults to None.

# Returns

A 3D matrix containing, for each point, its latitude, longitude and altitude, between origin & destination.

# step Environment

step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment.step() provides some boilerplate code and internally calls Environment._step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment.step() to call the external environment and not use the Environment._step() helper function.

WARNING

Before calling Environment.step() the first time or when the end of an episode is reached, Initializable.reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.
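
A hedged usage sketch (assuming the applicable-action space supports sampling):

# reset() must be called before the first step()
obs = domain.reset()
action = domain.get_applicable_actions().sample()  # assumes a samplable space
outcome = domain.step(action)
print(outcome.observation, outcome.value, outcome.termination)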

# _check_value Rewards

_check_value(
  self,
value: Value[D.T_value]
) -> bool

Check that a value is compliant with its cost specification (must be positive).

TIP

This function calls PositiveCost._is_positive() to determine if a value is positive (can be overridden for advanced value types).

# Parameters

  • value: The value to check.

# Returns

True if the value is compliant (False otherwise).

# _get_action_space Events

_get_action_space(
  self
) -> StrDict[Space[D.T_event]]

Get the (cached) domain action space (finite or infinite set).

By default, Events._get_action_space() internally calls Events._get_action_space_() the first time and automatically caches its value to make future calls more efficient (since the action space is assumed to be constant).

# Returns

The action space.

# _get_action_space_ Events

_get_action_space_(
  self
) -> skdecide.core.Space[D.T_event]

Define action space.

# _get_applicable_actions Events

_get_applicable_actions(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> StrDict[Space[D.T_event]]

Get the space (finite or infinite set) of applicable actions in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_applicable_actions() provides some boilerplate code and internally calls Events._get_applicable_actions_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of applicable actions.

# _get_applicable_actions_from Events

_get_applicable_actions_from(
  self,
memory: Memory[D.T_state]
) -> skdecide.core.Space[D.T_event]

Get the applicable actions from a state.

# _get_enabled_events Events

_get_enabled_events(
  self,
memory: Optional[Memory[D.T_state]] = None
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history), or in the internal one if omitted.

By default, Events._get_enabled_events() provides some boilerplate code and internally calls Events._get_enabled_events_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

The space of enabled events.

# _get_enabled_events_from Events

_get_enabled_events_from(
  self,
memory: Memory[D.T_state]
) -> Space[D.T_event]

Get the space (finite or infinite set) of enabled uncontrollable events in the given memory (state or history).

This is a helper function called by default from Events._get_enabled_events(), the difference being that the memory parameter is mandatory here.

# Parameters

  • memory: The memory to consider.

# Returns

The space of enabled events.

# _get_goals Goals

_get_goals(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) domain goals space (finite or infinite set).

By default, Goals._get_goals() internally calls Goals._get_goals_() the first time and automatically caches its value to make future calls more efficient (since the goals space is assumed to be constant).

WARNING

Goal states are assumed to be fully observable (i.e. observation = state) so that there is never uncertainty about whether the goal has been reached or not. This assumption guarantees that any policy that does not reach the goal with certainty incurs in infinite expected cost. - Geffner, 2013: A Concise Introduction to Models and Methods for Automated Planning

# Returns

The goals space.

# _get_goals_ Goals

_get_goals_(
  self
) -> skdecide.core.Space[skdecide.hub.domain.flight_planning.domain.State]

Get the domain goals space (finite or infinite set).

Set the end position as goal.

# _get_initial_state DeterministicInitialized

_get_initial_state(
  self
) -> D.T_state

Get the (cached) initial state.

By default, DeterministicInitialized._get_initial_state() internally calls DeterministicInitialized._get_initial_state_() the first time and automatically caches its value to make future calls more efficient (since the initial state is assumed to be constant).

# Returns

The initial state.

# _get_initial_state_ DeterministicInitialized

_get_initial_state_(
  self
) -> D.T_state

Get the initial state.

Set the start position as initial state.

# _get_initial_state_distribution UncertainInitialized

_get_initial_state_distribution(
  self
) -> Distribution[D.T_state]

Get the (cached) probability distribution of initial states.

By default, UncertainInitialized._get_initial_state_distribution() internally calls UncertainInitialized._get_initial_state_distribution_() the first time and automatically caches its value to make future calls more efficient (since the initial state distribution is assumed to be constant).

# Returns

The probability distribution of initial states.

# _get_initial_state_distribution_ UncertainInitialized

_get_initial_state_distribution_(
  self
) -> Distribution[D.T_state]

Get the probability distribution of initial states.

This is a helper function called by default from UncertainInitialized._get_initial_state_distribution(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The probability distribution of initial states.

# _get_memory_maxlen History

_get_memory_maxlen(
  self
) -> int

Get the (cached) memory max length.

By default, FiniteHistory._get_memory_maxlen() internally calls FiniteHistory._get_memory_maxlen_() the first time and automatically caches its value to make future calls more efficient (since the memory max length is assumed to be constant).

# Returns

The memory max length.

# _get_memory_maxlen_ FiniteHistory

_get_memory_maxlen_(
  self
) -> int

Get the memory max length.

This is a helper function called by default from FiniteHistory._get_memory_maxlen(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

The memory max length.

# _get_next_state DeterministicTransitions

_get_next_state(
  self,
memory: D.T_state,
action: D.T_event
) -> D.T_state

Compute the next state.

# Parameters

  • memory (D.T_state): The current state.
  • action (D.T_event): The action to perform.

# Returns

D.T_state: The next state.

# _get_next_state_distribution UncertainTransitions

_get_next_state_distribution(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> SingleValueDistribution[D.T_state]

Get the discrete probability distribution of next state given a memory and action.

TIP

In the Markovian case (memory only holds the last state $s$), given an action $a$, this function can be mathematically represented by $P(s'|s, a)$, where $s'$ is the next state random variable.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The discrete probability distribution of next state.

# _get_observation TransformedObservable

_get_observation(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> StrDict[D.T_observation]

Get the deterministic observation given a state and action.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The deterministic observation.

# _get_observation_distribution PartiallyObservable

_get_observation_distribution(
  self,
state: D.T_state,
action: Optional[StrDict[list[D.T_event]]] = None
) -> Distribution[StrDict[D.T_observation]]

Get the probability distribution of the observation given a state and action.

In mathematical terms (discrete case), given an action $a$, this function represents: $P(o|s, a)$, where $o$ is the random variable of the observation.

# Parameters

  • state: The state to be observed.
  • action: The last applied action (or None if the state is an initial state).

# Returns

The probability distribution of the observation.

# _get_observation_space PartiallyObservable

_get_observation_space(
  self
) -> StrDict[Space[D.T_observation]]

Get the (cached) observation space (finite or infinite set).

By default, PartiallyObservable._get_observation_space() internally calls PartiallyObservable._get_observation_space_() the first time and automatically caches its value to make future calls more efficient (since the observation space is assumed to be constant).

# Returns

The observation space.

# _get_observation_space_ PartiallyObservable

_get_observation_space_(
  self
) -> skdecide.core.Space[skdecide.hub.domain.flight_planning.domain.State]

Define observation space.

# _get_terminal_state_time_fuel FlightPlanningDomain

_get_terminal_state_time_fuel(
  self,
state: State
) -> dict

Get the domain terminal state information to compare with the constraints.

# Parameters

  • state (State): Terminal state from which to retrieve the fuel and time information.

# Returns

dict: Dictionary containing both fuel and time information.

# _get_transition_value UncertainTransitions

_get_transition_value(
  self,
memory: D.T_state,
action: D.T_event,
next_state: typing.Optional[skdecide.hub.domain.flight_planning.domain.State] = None
) -> skdecide.core.Value[float]

Get the value (reward or cost) of a transition. The cost is set to the distance travelled between points.

# Parameters

  • memory (D.T_state): The current state.
  • action (D.T_event): The action to perform.
  • next_state (Optional[D.T_state], optional): The next state. Defaults to None.

# Returns

Value[D.T_value]: Cost to go from the current state to the next state.

# _init_memory History

_init_memory(
  self,
state: Optional[D.T_state] = None
) -> Memory[D.T_state]

Initialize memory (possibly with a state) according to its specification and return it.

This function is automatically called by Initializable._reset() to reinitialize the internal memory whenever the domain is used as an environment.

# Parameters

  • state: An optional state to initialize the memory with (typically the initial state).

# Returns

The new initialized memory.

# _is_action Events

_is_action(
  self,
event: D.T_event
) -> bool

Indicate whether an event is an action (i.e. a controllable event for the agents).

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain action space provided by Events._get_action_space(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.

# Returns

True if the event is an action (False otherwise).

# _is_applicable_action Events

_is_applicable_action(
  self,
action: StrDict[D.T_event],
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an action is applicable in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_applicable_action() provides some boilerplate code and internally calls Events._is_applicable_action_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the action is applicable (False otherwise).

# _is_applicable_action_from Events

_is_applicable_action_from(
  self,
action: StrDict[D.T_event],
memory: Memory[D.T_state]
) -> bool

Indicate whether an action is applicable in the given memory (state or history).

This is a helper function called by default from Events._is_applicable_action(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of applicable actions provided by Events._get_applicable_actions_from(), but it can be overridden for faster implementations.

# Parameters

  • action: The action to consider.
  • memory: The memory to consider.

# Returns

True if the action is applicable (False otherwise).

# _is_enabled_event Events

_is_enabled_event(
  self,
event: D.T_event,
memory: Optional[Memory[D.T_state]] = None
) -> bool

Indicate whether an uncontrollable event is enabled in the given memory (state or history), or in the internal one if omitted.

By default, Events._is_enabled_event() provides some boilerplate code and internally calls Events._is_enabled_event_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

True if the event is enabled (False otherwise).

# _is_enabled_event_from Events

_is_enabled_event_from(
  self,
event: D.T_event,
memory: Memory[D.T_state]
) -> bool

Indicate whether an event is enabled in the given memory (state or history).

This is a helper function called by default from Events._is_enabled_event(), the difference being that the memory parameter is mandatory here.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the space of enabled events provided by Events._get_enabled_events_from(), but it can be overridden for faster implementations.

# Parameters

  • event: The event to consider.
  • memory: The memory to consider.

# Returns

True if the event is enabled (False otherwise).

# _is_goal Goals

_is_goal(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[D.T_predicate]

Indicate whether an observation belongs to the goals.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain goals space provided by Goals._get_goals(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation is a goal (False otherwise).

# _is_observation PartiallyObservable

_is_observation(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check that an observation indeed belongs to the domain observation space.

TIP

By default, this function is implemented using the skdecide.core.Space.contains() function on the domain observation space provided by PartiallyObservable._get_observation_space(), but it can be overridden for faster implementations.

# Parameters

  • observation: The observation to consider.

# Returns

True if the observation belongs to the domain observation space (False otherwise).

# _is_positive PositiveCosts

_is_positive(
  self,
cost: D.T_value
) -> bool

Determine if a value is positive (can be overridden for advanced value types).

# Parameters

  • cost: The cost to evaluate.

# Returns

True if the cost is positive (False otherwise).

# _is_terminal UncertainTransitions

_is_terminal(
  self,
state: D.T_state
) -> bool

Indicate whether a state is terminal.

Stop an episode only when the goal is reached.

# _is_transition_value_dependent_on_next_state UncertainTransitions

_is_transition_value_dependent_on_next_state(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation (cached).

By default, UncertainTransitions._is_transition_value_dependent_on_next_state() internally calls UncertainTransitions._is_transition_value_dependent_on_next_state_() the first time and automatically caches its value to make future calls more efficient (since the returned value is assumed to be constant).

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _is_transition_value_dependent_on_next_state_ UncertainTransitions

_is_transition_value_dependent_on_next_state_(
  self
) -> bool

Indicate whether _get_transition_value() requires the next_state parameter for its computation.

This is a helper function called by default from UncertainTransitions._is_transition_value_dependent_on_next_state(), the difference being that the result is not cached here.

TIP

The underscore at the end of this function's name is a convention to remind that its result should be constant.

# Returns

True if the transition value computation depends on next_state (False otherwise).

# _render Renderable

_render(
  self,
memory: Optional[Memory[D.T_state]] = None,
**kwargs: Any
) -> Any

Compute a visual render of the given memory (state or history), or the internal one if omitted.

By default, Renderable._render() provides some boilerplate code and internally calls Renderable._render_from(). The boilerplate code automatically passes the _memory attribute instead of the memory parameter whenever the latter is None.

# Parameters

  • memory: The memory to consider (if None, the internal memory attribute _memory is used instead).

# Returns

A render (e.g. image) or nothing (if the function handles the display directly).

# _render_from Renderable

_render_from(
  self,
memory: Memory[D.T_state],
**kwargs: typing.Any
) -> typing.Any

Render visually the map.

# Returns

A matplotlib figure.

# _reset Initializable

_reset(
  self
) -> StrDict[D.T_observation]

Reset the state of the environment and return an initial observation.

By default, Initializable._reset() provides some boilerplate code and internally calls Initializable._state_reset() (which returns an initial state). The boilerplate code automatically stores the initial state into the _memory attribute and samples a corresponding observation.

# Returns

An initial observation.

# _sample Simulation

_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Sample one transition of the simulator's dynamics.

By default, Simulation._sample() provides some boilerplate code and internally calls Simulation._state_sample() (which returns a transition outcome). The boilerplate code automatically samples an observation corresponding to the sampled next state.

TIP

Whenever an existing simulator needs to be wrapped instead of implemented fully in scikit-decide (e.g. a simulator), it is recommended to overwrite Simulation._sample() to call the external simulator and not use the Simulation._state_sample() helper function.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The environment outcome of the sampled transition.

# _set_memory Simulation

_set_memory(
  self,
memory: Memory[D.T_state]
) -> None

Set internal memory attribute _memory to given one.

This can be useful to set a specific "starting point" before doing a rollout with successive Environment._step() calls.

# Parameters

  • memory: The memory to set internally.

# Example

# Set simulation_domain memory to my_state (assuming Markovian domain)
simulation_domain._set_memory(my_state)

# Start a 100-steps rollout from here (applying my_action at every step)
for _ in range(100):
    simulation_domain._step(my_action)

# _state_reset Initializable

_state_reset(
  self
) -> D.T_state

Reset the state of the environment and return an initial state.

This is a helper function called by default from Initializable._reset(). It focuses on the state level, as opposed to the observation one for the latter.

# Returns

An initial state.

# _state_sample Simulation

_state_sample(
  self,
memory: Memory[D.T_state],
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one sample of the transition's dynamics.

This is a helper function called by default from Simulation._sample(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • memory: The source memory (state or history) of the transition.
  • action: The action taken in the given memory (state or history) triggering the transition.

# Returns

The transition outcome of the sampled transition.

# _state_step Environment

_state_step(
  self,
action: StrDict[list[D.T_event]]
) -> TransitionOutcome[D.T_state, StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Compute one step of the transition's dynamics.

This is a helper function called by default from Environment._step(). It focuses on the state level, as opposed to the observation one for the latter.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The transition outcome of this step.

# _step Environment

_step(
  self,
action: StrDict[list[D.T_event]]
) -> EnvironmentOutcome[StrDict[D.T_observation], StrDict[Value[D.T_value]], StrDict[D.T_predicate], StrDict[D.T_info]]

Run one step of the environment's dynamics.

By default, Environment._step() provides some boilerplate code and internally calls Environment._state_step() (which returns a transition outcome). The boilerplate code automatically stores next state into the _memory attribute and samples a corresponding observation.

TIP

Whenever an existing environment needs to be wrapped instead of implemented fully in scikit-decide (e.g. compiled ATARI games), it is recommended to overwrite Environment._step() to call the external environment and not use the Environment._state_step() helper function.

WARNING

Before calling Environment._step() the first time or when the end of an episode is reached, Initializable._reset() must be called to reset the environment's state.

# Parameters

  • action: The action taken in the current memory (state or history) triggering the transition.

# Returns

The environment outcome of this step.

# fuel_optimisation

fuel_optimisation(
  origin: typing.Union[str, tuple],
destination: typing.Union[str, tuple],
actype: str,
constraints: dict,
weather_date: skdecide.hub.domain.flight_planning.domain.WeatherDate,
solver_cls: type[skdecide.solvers.Solver],
solver_kwargs: dict[str, typing.Any],
max_steps: int = 100,
fuel_tol: float = 0.001
) -> float

Function to optimise the fuel loaded in the plane, performing multiple fuel loops to approach an optimal value.

# Parameters

origin (Union[str, tuple]):
    ICAO code of the departure airport of the flight plan, e.g. LFPG for Paris-CDG, or a tuple (lat,lon)

destination (Union[str, tuple]):
    ICAO code of the arrival airport of the flight plan, e.g. LFBO for Toulouse-Blagnac, or a tuple (lat,lon)

actype (str):
    Aircraft type described in the OpenAP data (https://github.com/junzis/openap/tree/master/openap/data/aircraft)

constraints (dict):
    Constraints that will be defined for the flight plan

weather_date (WeatherDate):
    Date for the weather, needed for days management

solver_cls (type[Solver]):
    Solver class used in the fuel loop.

solver_kwargs (dict[str, Any]):
    Kwargs to initialize the solver used in the fuel loop.

max_steps (int):
    Maximum number of steps to use in the internal fuel loop

fuel_tol (float):
    Tolerance on the fuel used to stop the optimisation

# Returns

float:
    The quantity of fuel to be loaded in the plane for the flight
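
A hedged call sketch (the LazyAstar import path is an assumption; passing weather_date=None is assumed to mean "no wind", as in the domain constructor):

# Estimate the fuel to load, then build the final domain with it
from skdecide.hub.solver.lazy_astar import LazyAstar  # import path assumed

fuel = fuel_optimisation(
    origin="LFPG",
    destination="LFBO",
    actype="A320",
    constraints=None,   # assumed acceptable when no constraint applies
    weather_date=None,  # assumed to mean "no wind"
    solver_cls=LazyAstar,
    solver_kwargs={"heuristic": lambda d, s: d.heuristic(s)},
)
domain = FlightPlanningDomain("LFPG", "LFBO", "A320", fuel_loaded=fuel)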