# hub.solver.lrtdp.lrtdp

Domain specification: Domain

# LRTDP

This is the skdecide implementation of LRTDP as described in "Labeled RTDP: Improving the Convergence of Real-Time Dynamic Programming" by Blai Bonet and Hector Geffner (ICAPS 2003).

# Constructor LRTDP

LRTDP(
  domain_factory: Callable[[], T_domain],
  heuristic: Callable[[T_domain, D.T_state], StrDict[Value[D.T_value]]] = <lambda function>,
  use_labels: bool = True,
  time_budget: int = 3600000,
  rollout_budget: int = 100000,
  max_depth: int = 1000,
  residual_moving_average_window: int = 100,
  epsilon: float = 0.001,
  discount: float = 1.0,
  online_node_garbage: bool = False,
  continuous_planning: bool = True,
  parallel: bool = False,
  shared_memory_proxy = None,
  callback: Callable[[LRTDP, Optional[int]], bool] = <lambda function>,
  verbose: bool = False
) -> None

Construct an LRTDP solver instance.

# Parameters

  • domain_factory (Callable[[], T_domain], optional): The lambda function to create a domain instance.
  • heuristic (Callable[[T_domain, D.T_state], D.T_agent[Value[D.T_value]]], optional): Lambda function taking as arguments the domain and a state, and returning the heuristic estimate from the state to the goal. Defaults to (lambda d, s: Value(cost=0)).
  • use_labels (bool, optional): Boolean indicating whether labels must be used (True) or not (False), in which case the algorithm is equivalent to the standard RTDP. Defaults to True.
  • time_budget (int, optional): Maximum solving time in milliseconds. Defaults to 3600000.
  • rollout_budget (int, optional): Maximum number of rollouts (deactivated when use_labels is True). Defaults to 100000.
  • max_depth (int, optional): Maximum depth of each LRTDP trial (rollout). Defaults to 1000.
  • residual_moving_average_window (int, optional): Number of latest computed residual values to memorize in order to compute the average Bellman error (residual) at the root state of the search (deactivated when use_labels is True). Defaults to 100.
  • epsilon (float, optional): Maximum Bellman error (residual) allowed to decide that a state is solved, or to decide, when no labels are used, that the value function of the root state of the search has converged (in the latter case, the root state's Bellman error is averaged over residual_moving_average_window; deactivated when use_labels is True). Defaults to 0.001.
  • discount (float, optional): Value function's discount factor. Defaults to 1.0.
  • online_node_garbage (bool, optional): Boolean indicating whether the part of the search graph that is no longer reachable from the root solving state should be deleted (True) or not (False). Defaults to False.
  • continuous_planning (bool, optional): Boolean indicating whether the solver should re-optimize the policy from the current solving state (True) or not (False), even if the policy is already defined in this state. Defaults to True.
  • parallel (bool, optional): Parallelize LRTDP trials on different processes using duplicated domains (True) or not (False). Defaults to False.
  • shared_memory_proxy (type, optional): The optional shared memory proxy. Defaults to None.
  • callback (Callable[[LRTDP, Optional[int]], bool], optional): Function called at the end of each LRTDP trial, taking as arguments the solver and the thread/process ID (i.e. parallel domain ID, which is None in case of sequential execution, i.e. when parallel is set to False in this constructor) from which the callback is called, and returning True if the solver must be stopped. The callback cannot take the (potentially parallelized) domain as argument, because the solver could not otherwise be serialized (i.e. pickled) and passed to the corresponding parallel domain process in case of parallel execution. Nevertheless, the ParallelSolver.get_domain method, callable on the solver instance, can be used to retrieve either the user domain in sequential execution, or the ParallelDomain proxy in parallel execution, from which domain methods can be called by using the callback's process ID argument. Defaults to (lambda slv, i=None: False). A hedged callback sketch is shown under get_residual_moving_average below.
  • verbose (bool, optional): Boolean indicating whether verbose messages should be logged (True) or not (False). Defaults to False.
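
For illustration, here is a minimal construction sketch. MyDomain, its import path, and the chosen budgets are hypothetical placeholders (any skdecide-compatible goal-oriented domain would do); only the LRTDP constructor arguments themselves are taken from this page, and the skdecide import paths follow the usual package layout.

```python
from skdecide import Value
from skdecide.hub.solver.lrtdp import LRTDP

# Hypothetical user domain; replace with any skdecide-compatible domain class.
from my_package.domains import MyDomain

domain_factory = lambda: MyDomain()

solver = LRTDP(
    domain_factory=domain_factory,
    # Heuristic estimate of the remaining cost-to-go from a state
    # (here the trivial zero estimate, which is the documented default).
    heuristic=lambda domain, state: Value(cost=0),
    use_labels=True,     # labeled RTDP: per-state convergence test
    time_budget=60000,   # stop after at most 60 seconds
    max_depth=500,       # cap the length of each trial
    discount=1.0,
    parallel=False,
)
```

Solving and executing the resulting policy are illustrated under solve below.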

# autocast Solver

autocast(
  self,
domain_cls: Optional[type[Domain]] = None
) -> None

Autocast itself to the level corresponding to the given domain class.

# Parameters

  • domain_cls: the domain class to which level the solver needs to autocast itself. By default, use the original domain factory passed to its constructor.

# call_domain_method ParallelSolver

call_domain_method(
  self,
name,
*args
)

Calls a parallel domain's method. This is the only way to call a domain method on a parallel domain.

# check_domain Solver

check_domain(
  domain: Domain
) -> bool

Check whether a domain is compliant with this solver type.

By default, Solver.check_domain() provides some boilerplate code and internally calls Solver._check_domain_additional() (which returns True by default but can be overridden to define specific checks in addition to the "domain requirements"). The boilerplate code automatically checks whether all domain requirements are met.

# Parameters

  • domain: The domain to check.

# Returns

True if the domain is compliant with the solver type (False otherwise).

# close ParallelSolver

close(
  self
)

Joins the parallel domains' processes.

WARNING

Not calling this method (or not using the 'with' context statement) results in the solver waiting forever for the domain processes to exit.
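
Since skdecide solvers generally support the with statement (which triggers the cleanup mentioned above), a hedged usage sketch, reusing the hypothetical domain_factory from the constructor example, is:

```python
# Exiting the 'with' block joins the parallel domain processes,
# which is equivalent to calling solver.close() explicitly.
with LRTDP(domain_factory=domain_factory, parallel=True) as solver:
    solver.solve()
```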

# complete_with_default_hyperparameters Hyperparametrizable

complete_with_default_hyperparameters(
  kwargs: dict[str, Any],
names: Optional[list[str]] = None
)

Add missing hyperparameters to kwargs by using default values

Args:

  • kwargs: keyword arguments to complete (e.g. for __init__, init_model, or solve)
  • names: names of the hyperparameters to add if missing. By default, all available hyperparameters.

Returns: a new dictionary, i.e. kwargs completed with the default values of the missing hyperparameters

# copy_and_update_hyperparameters Hyperparametrizable

copy_and_update_hyperparameters(
  names: Optional[list[str]] = None,
**kwargs_by_name: dict[str, Any]
) -> list[Hyperparameter]

Copy hyperparameters definition of this class and update them with specified kwargs.

This is useful to define hyperparameters for a child class for which, for instance, only the choices of a hyperparameter change.

Args:

  • names: names of hyperparameters to copy. Defaults to all.
  • **kwargs_by_name: for each hyperparameter specified by its name, the attributes to update. If a given hyperparameter name is not specified, the hyperparameter is copied without further update.

Returns: the list of copied (and possibly updated) hyperparameters

# get_default_hyperparameters Hyperparametrizable

get_default_hyperparameters(
  names: Optional[list[str]] = None
) -> dict[str, Any]

Get hyperparameters default values.

Args:

  • names: names of the hyperparameters to consider. By default, all available hyperparameters.

Returns: a mapping between hyperparameter's name_in_kwargs and its default value (None if not specified)

# get_domain ParallelSolver

get_domain(
  self
)

Returns the domain, optionally creating a parallel domain if not already created.

# get_domain_requirements Solver

get_domain_requirements(
) -> list[type]

Get domain requirements for this solver class to be applicable.

Domain requirements are classes from the skdecide.builders.domain package that the domain needs to inherit from.

# Returns

A list of classes to inherit from.

# get_hyperparameter Hyperparametrizable

get_hyperparameter(
  name: str
) -> Hyperparameter

Get hyperparameter from given name.

# get_hyperparameters_by_name Hyperparametrizable

get_hyperparameters_by_name(
) -> dict[str, Hyperparameter]

Mapping from name to corresponding hyperparameter.

# get_hyperparameters_names Hyperparametrizable

get_hyperparameters_names(
) -> list[str]

List of hyperparameters names.

# get_nb_explored_states LRTDP

get_nb_explored_states(
  self
) -> int

Get the number of states present in the search graph (which can be lower than the number of actually explored states if online_node_garbage was set to True in the LRTDP instance's constructor).

# Returns

int: Number of states present in the search graph

# get_nb_rollouts LRTDP

get_nb_rollouts(
  self
) -> int

Get the number of rollouts since the beginning of the search from the root solving state

# Returns

int: Number of rollouts (LRTDP trials)

# get_next_action DeterministicPolicies

get_next_action(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]

Get the next deterministic action (from the solver's current policy).

# Parameters

  • observation: The observation for which next action is requested.

# Returns

The next deterministic action.

# get_next_action_distribution UncertainPolicies

get_next_action_distribution(
  self,
observation: StrDict[D.T_observation]
) -> Distribution[StrDict[list[D.T_event]]]

Get the probabilistic distribution of next action for the given observation (from the solver's current policy).

# Parameters

  • observation: The observation to consider.

# Returns

The probabilistic distribution of next action.

# get_policy LRTDP

get_policy(
  self
) -> dict[StrDict[D.T_observation], tuple[StrDict[list[D.T_event]], D.T_value]]

Get the (partial) solution policy defined for the states for which the Q-value has been updated at least once (which is optimal if the algorithm has converged and labels are used)

WARNING

Only defined over the states reachable from the last root solving state when online_node_garbage was set to True in the LRTDP instance's constructor.

# Returns

dict[D.T_agent[D.T_observation], tuple[D.T_agent[D.T_concurrency[D.T_event]], D.T_value]]: Mapping from states to pairs of action and best Q-value
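
As a usage sketch (continuing the hypothetical solver built in the constructor example):

```python
solver.solve()
policy = solver.get_policy()
for state, (action, q_value) in policy.items():
    # Each entry maps a state to its best action and the associated Q-value.
    print(state, "->", action, "(Q =", q_value, ")")
```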

# get_residual_moving_average LRTDP

get_residual_moving_average(
  self
) -> float

Get the average Bellman error (residual) at the root state of the search, or an infinite value if the number of computed residuals is lower than the residual_moving_average_window set in the LRTDP instance's constructor.

# Returns

float: Bellman error at the root state of the search, averaged over the residual moving average window
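
This statistic is typically consumed by the constructor's callback, e.g. to stop a label-free search once the averaged residual at the root is small enough. A minimal sketch, with illustrative thresholds and the hypothetical domain_factory from the constructor example:

```python
def stop_when_converged(slv: LRTDP, i=None) -> bool:
    # Stop when the averaged Bellman error at the root falls below 1e-3,
    # or after 30 seconds of solving time, whichever comes first.
    return (
        slv.get_residual_moving_average() < 1e-3
        or slv.get_solving_time() > 30_000
    )

solver = LRTDP(
    domain_factory=domain_factory,
    use_labels=False,  # the residual moving average is only tracked without labels
    callback=stop_when_converged,
)
solver.solve()
```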

# get_solving_time LRTDP

get_solving_time(
  self
) -> int

Get the solving time in milliseconds since the beginning of the search from the root solving state

# Returns

int: Solving time in milliseconds

# get_utility Utilities

get_utility(
  self,
observation: StrDict[D.T_observation]
) -> D.T_value

Get the estimated on-policy utility of the given observation.

In mathematical terms, for a fully observable domain, this function estimates:

$$V^{\pi}(s) = \underset{\tau \sim \pi}{\mathbb{E}} \left[ R(\tau) \mid s_0 = s \right]$$

where $\pi$ is the current policy, any $\tau = (s_0, a_0, s_1, a_1, \ldots)$ represents a trajectory sampled from the policy, $R(\tau)$ is the return (cumulative reward), and $s_0$ the initial state for the trajectories.

# Parameters

  • observation: The observation to consider.

# Returns

The estimated on-policy utility of the given observation.

# is_policy_defined_for Policies

is_policy_defined_for(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check whether the solver's current policy is defined for the given observation.

# Parameters

  • observation: The observation to consider.

# Returns

True if the policy is defined for the given observation (False otherwise).

# reset Solver

reset(
  self
) -> None

Reset whatever is needed on this solver before running a new episode.

This function does nothing by default but can be overridden if needed (e.g. to reset the hidden state of a LSTM policy network, which carries information about past observations seen in the previous episode).

# sample_action Policies

sample_action(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]

Sample an action for the given observation (from the solver's current policy).

# Parameters

  • observation: The observation for which an action must be sampled.

# Returns

The sampled action.

# solve FromInitialState

solve(
  self,
from_memory: Optional[Memory[D.T_state]] = None
) -> None

Run the solving process.

After solving by calling self._solve(), autocast itself so that rollout methods apply to the domain's original characteristics.

# Parameters

  • from_memory: The source memory (state or history) from which we begin the solving process. If None, initial state is used if the domain is initializable, else a ValueError is raised.

TIP

The nature of the solutions produced here depends on other characteristics of the solver, like policy and assessability.
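
A minimal end-to-end sketch, assuming the rollout helper from skdecide.utils (its keyword arguments below are indicative and may differ between versions) and the hypothetical domain_factory from the constructor example:

```python
from skdecide.utils import rollout

solver.solve()  # plan from the domain's initial state
# Execute the computed policy on a fresh domain instance.
rollout(domain_factory(), solver, num_episodes=1, max_steps=200)
```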

# solve_from FromAnyState

solve_from(
  self,
memory: Memory[D.T_state]
) -> None

Run the solving process from a given state.

After solving by calling self._solve_from(), autocast itself so that rollout methods apply to the domain's original characteristics.

# Parameters

  • memory: The source memory (state or history) of the transition.

TIP

The nature of the solutions produced here depends on other characteristics of the solver, like policy and assessability.

# suggest_hyperparameter_with_optuna Hyperparametrizable

suggest_hyperparameter_with_optuna(
  trial: optuna.trial.Trial,
name: str,
prefix: str,
**kwargs
) -> Any

Suggest hyperparameter value during an Optuna trial.

This can be used during Optuna hyperparameters tuning.

Args:

  • trial: optuna trial during hyperparameters tuning
  • name: name of the hyperparameter to choose
  • prefix: prefix to add to the corresponding optuna parameter name (useful for disambiguating hyperparameters from subsolvers in case of meta-solvers)
  • **kwargs: options for optuna hyperparameter suggestions

Returns: the suggested hyperparameter value

kwargs can be used to pass relevant arguments to

  • trial.suggest_float()
  • trial.suggest_int()
  • trial.suggest_categorical()

For instance, it can:

  • add a low/high value if not already set for the hyperparameter, or override it to narrow the search (for float or int hyperparameters)
  • add a step or log argument (for float or int hyperparameters, see optuna.trial.Trial.suggest_float())
  • override choices for categorical or enum parameters to narrow the search

# suggest_hyperparameters_with_optuna Hyperparametrizable

suggest_hyperparameters_with_optuna(
  trial: optuna.trial.Trial,
names: Optional[list[str]] = None,
kwargs_by_name: Optional[dict[str, dict[str, Any]]] = None,
fixed_hyperparameters: Optional[dict[str, Any]] = None,
prefix: str
) -> dict[str, Any]

Suggest hyperparameters values during an Optuna trial.

Args:

  • trial: optuna trial during hyperparameters tuning
  • names: names of the hyperparameters to choose. By default, all available hyperparameters will be suggested. If fixed_hyperparameters is provided, the corresponding names are removed from names.
  • kwargs_by_name: options for optuna hyperparameter suggestions, by hyperparameter name
  • fixed_hyperparameters: values of fixed hyperparameters, useful for suggesting subbrick hyperparameters if the subbrick class is not suggested by this method but already fixed. These values will be added to the suggested hyperparameters.
  • prefix: prefix to add to the corresponding optuna parameters (useful for disambiguating hyperparameters from subsolvers in case of meta-solvers)

Returns: a mapping between the hyperparameter name and its suggested value. If the hyperparameter has an attribute name_in_kwargs, this is used as the key in the mapping instead of the actual hyperparameter name. The mapping is updated with fixed_hyperparameters.

kwargs_by_name[some_name] will be passed as **kwargs to suggest_hyperparameter_with_optuna(name=some_name)
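
A hedged sketch of how this could plug into an Optuna study; the objective body, the prefix string, and the evaluation metric are placeholders, and domain_factory is the hypothetical factory from the constructor example:

```python
import optuna


def objective(trial: optuna.trial.Trial) -> float:
    # Let LRTDP suggest values for its registered hyperparameters.
    suggested = LRTDP.suggest_hyperparameters_with_optuna(trial=trial, prefix="lrtdp.")
    with LRTDP(domain_factory=domain_factory, **suggested) as solver:
        solver.solve()
        # Placeholder metric: solving time in milliseconds (replace with a
        # domain-specific evaluation of the resulting policy).
        return solver.get_solving_time()


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```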

# _check_domain_additional Solver

_check_domain_additional(
  domain: Domain
) -> bool

Check whether the given domain is compliant with the specific requirements of this solver type (i.e. the ones in addition to "domain requirements").

This is a helper function called by default from Solver.check_domain(). It focuses on specific checks, as opposed to taking also into account the domain requirements for the latter.

# Parameters

  • domain: The domain to check.

# Returns

True if the domain is compliant with the specific requirements of this solver type (False otherwise).

# _get_next_action DeterministicPolicies

_get_next_action(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]

Get the best computed action, in terms of best Q-value, in a given state. The search subgraph that is no longer reachable after executing the returned action is also deleted if online_node_garbage was set to True in the LRTDP instance's constructor. The solver is run from observation if continuous_planning was set to True in the LRTDP instance's constructor, or if no solution has been computed yet in observation.

WARNING

Returns a random action if no action is defined in the given state, which is why it is advised to call LRTDP.is_solution_defined_for beforehand.

# Parameters

  • observation (D.T_agent[D.T_observation]): State for which the best action is requested

# Returns

D.T_agent[D.T_concurrency[D.T_event]]: Best computed action

# _get_next_action_distribution UncertainPolicies

_get_next_action_distribution(
  self,
observation: StrDict[D.T_observation]
) -> Distribution[StrDict[list[D.T_event]]]

Get the probabilistic distribution of next action for the given observation (from the solver's current policy).

# Parameters

  • observation: The observation to consider.

# Returns

The probabilistic distribution of next action.

# _get_utility Utilities

_get_utility(
  self,
observation: StrDict[D.T_observation]
) -> D.T_value

Get the best Q-value in a given state

WARNING

Returns None if no action is defined in the given state, which is why it is advised to call LRTDP.is_solution_defined_for beforehand.

# Parameters

  • observation (D.T_agent[D.T_observation]): State from which the best Q-value is requested

# Returns

D.T_value: Minimum Q-value of the given state over the applicable actions in this state

# _initialize Solver

_initialize(
  self
)

Launches the parallel domains. This method requires that self._domain_factory, the set of lambda functions passed to the solver's constructor (e.g. the heuristic lambda for heuristic-based solvers), and the flag indicating whether the parallel domain jobs should notify their status via the IPC protocol (required when interacting with other programming languages like C++) have been previously recorded.

# _is_policy_defined_for Policies

_is_policy_defined_for(
  self,
observation: StrDict[D.T_observation]
) -> bool

Check whether the solver's current policy is defined for the given observation.

# Parameters

  • observation: The observation to consider.

# Returns

True if the policy is defined for the given observation (False otherwise).

# _is_solution_defined_for LRTDP

_is_solution_defined_for(
  self,
observation: StrDict[D.T_observation]
) -> bool

Indicates whether the solution policy is defined for a given state

# Parameters

  • observation (D.T_agent[D.T_observation]): State for which an entry is searched in the policy graph

# Returns

bool: True if the state has been explored and an action is defined in this state, False otherwise

# _reset Solver

_reset(
  self
) -> None

Clears the search graph.

# _sample_action Policies

_sample_action(
  self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]

Sample an action for the given observation (from the solver's current policy).

# Parameters

  • observation: The observation for which an action must be sampled.

# Returns

The sampled action.

# _solve FromInitialState

_solve(
  self,
from_memory: Optional[Memory[D.T_state]] = None
) -> None

Run the solving process.

# Parameters

  • from_memory: The source memory (state or history) from which we begin the solving process. If None, initial state is used if the domain is initializable, else a ValueError is raised.

TIP

The nature of the solutions produced here depends on other characteristics of the solver, like policy and assessability.

# _solve_from FromAnyState

_solve_from(
  self,
memory: Memory[D.T_state]
) -> None

Run the LRTDP algorithm from a given root solving state

# Parameters

  • memory (D.T_memory[D.T_state]): State from which to run the LRTDP algorithm (root of the search graph)