# hub.solver.iw.iw
Domain specification
# IW
This is the skdecide implementation of the Iterated Width (IW) algorithm, as described in "Width and Serialization of Classical Planning Problems" by Nir Lipovetzky and Hector Geffner (2012).
# Constructor IW
IW(
domain_factory: Callable[[], Domain],
state_features: Callable[[Domain, D.T_state], Any],
use_state_feature_hash: bool = False,
node_ordering: Callable[[float, int, int, float, int, int], bool] = <lambda function>,
time_budget: int = 0,
parallel: bool = False,
shared_memory_proxy = None,
callback: Callable[[IW], bool] = <lambda function>,
verbose: bool = False
) -> None
Construct an IW solver instance.
# Parameters
- domain_factory (Callable[[], Domain]): The lambda function to create a domain instance.
- state_features (Callable[[Domain, D.T_state], Any]): Lambda function computing the state feature vector used by the novelty measure.
- use_state_feature_hash (bool, optional): Boolean indicating whether states must be hashed by using their features (True) or by using their native hash function (False). Defaults to False.
- node_ordering (Callable[[float, int, int, float, int, int], bool], optional): Lambda function called to rank two search nodes A and B, taking as inputs A's g-score, A's novelty, A's search depth, B's g-score, B's novelty, and B's search depth, and returning True when B should be preferred to A. Defaults to ranking nodes by their g-scores, i.e. (lambda a_gscore, a_novelty, a_depth, b_gscore, b_novelty, b_depth: a_gscore > b_gscore).
- time_budget (int, optional): Maximum time allowed (in milliseconds) to continue searching for better plans after a first plan reaching a goal has been found. Defaults to 0.
- parallel (bool, optional): Parallelize the generation of state-action transitions on different processes using duplicated domains (True) or not (False). Defaults to False.
- shared_memory_proxy (type, optional): The optional shared memory proxy. Defaults to None.
- callback (Callable[[IW], bool], optional): Lambda function called before popping the next state from the (priority) open queue, taking the solver as argument and returning True if the solver must be stopped. Defaults to (lambda slv: False).
- verbose (bool, optional): Boolean indicating whether verbose messages should be logged (True) or not (False). Defaults to False.
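Below is a minimal usage sketch (not part of the original reference) showing how the constructor is typically wired together with solve() and a rollout. It assumes the Maze domain shipped in skdecide's hub, whose states are assumed here to expose x/y coordinates used as novelty features; adapt state_features to your own domain.

```python
# Minimal sketch, assuming skdecide's hub Maze domain whose states expose x/y.
from skdecide.hub.domain.maze import Maze
from skdecide.hub.solver.iw import IW
from skdecide.utils import rollout

domain_factory = lambda: Maze()

with IW(
    domain_factory=domain_factory,
    state_features=lambda domain, state: [state.x, state.y],  # features fed to the novelty test
    time_budget=1000,  # keep searching 1 s for better plans after a first goal is reached
) as solver:
    solver.solve()
    rollout(domain_factory(), solver, max_steps=500)
```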
# autocast Solver
autocast(
self,
domain_cls: Optional[type[Domain]] = None
) -> None
Autocast itself to the level corresponding to the given domain class.
# Parameters
- domain_cls: the domain class to which level the solver needs to autocast itself. By default, use the original domain factory passed to its constructor.
# call_domain_method ParallelSolver
call_domain_method(
self,
name,
*args
)
Calls a parallel domain's method. This is the only way to call a domain method when the domain runs in parallel processes.
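As an illustration (a hedged sketch, not from the reference), when the solver was created with parallel=True the domain instances live in worker processes, so their methods must be invoked through this proxy:

```python
# Hedged sketch: querying the (parallel) domain through the solver proxy.
# get_action_space() is a standard skdecide domain method.
action_space = solver.call_domain_method("get_action_space")
```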
# check_domain Solver
check_domain(
domain: Domain
) -> bool
Check whether a domain is compliant with this solver type.
By default, Solver.check_domain()
provides some boilerplate code and internally
calls Solver._check_domain_additional()
(which returns True by default but can be overridden to define
specific checks in addition to the "domain requirements"). The boilerplate code automatically checks whether all
domain requirements are met.
# Parameters
- domain: The domain to check.
# Returns
True if the domain is compliant with the solver type (False otherwise).
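For example (a sketch reusing the Maze hub domain assumed earlier), compatibility can be verified before constructing the solver:

```python
from skdecide.hub.domain.maze import Maze
from skdecide.hub.solver.iw import IW

# check_domain is called on the solver class and returns a boolean verdict.
print(IW.check_domain(Maze()))  # True if Maze meets IW's domain requirements
```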
# close ParallelSolver
close(
self
)
Joins the parallel domains' processes. Not calling this method (or not using the 'with' context statement) results in the solver forever waiting for the domain processes to exit.
# complete_with_default_hyperparameters Hyperparametrizable
complete_with_default_hyperparameters(
kwargs: dict[str, Any],
names: Optional[list[str]] = None
)
Add missing hyperparameters to kwargs by using default values.
Args:
- kwargs: keyword arguments to complete (e.g. for __init__, init_model, or solve)
- names: names of the hyperparameters to add if missing. By default, all available hyperparameters.
Returns: a new dictionary, completion of kwargs
# copy_and_update_hyperparameters Hyperparametrizable
copy_and_update_hyperparameters(
names: Optional[list[str]] = None,
**kwargs_by_name: dict[str, Any]
) -> list[Hyperparameter]
Copy hyperparameters definition of this class and update them with specified kwargs.
This is useful to define hyperparameters for a child class for which only choices of the hyperparameter change for instance.
Args:
- names: names of hyperparameters to copy. Defaults to all.
- **kwargs_by_name: for each hyperparameter specified by its name, the attributes to update. If a given hyperparameter name is not specified, the hyperparameter is copied without further update.
Returns: the copied (and possibly updated) list of hyperparameters.
# get_default_hyperparameters Hyperparametrizable
get_default_hyperparameters(
names: Optional[list[str]] = None
) -> dict[str, Any]
Get hyperparameters default values.
Args:
- names: names of the hyperparameters to choose. By default, all available hyperparameters will be suggested.
Returns: a mapping between each hyperparameter's name_in_kwargs and its default value (None if not specified)
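A small sketch of how the two helpers above are typically used; SolverCls is a placeholder for any Hyperparametrizable solver class that actually declares hyperparameters (IW itself may declare none), and "some_param" is a hypothetical hyperparameter name:

```python
# Hedged sketch: SolverCls and "some_param" are placeholders.
defaults = SolverCls.get_default_hyperparameters()  # {name_in_kwargs: default value}
kwargs = SolverCls.complete_with_default_hyperparameters(
    {"some_param": 42}  # user-provided values are kept, missing ones get their defaults
)
```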
# get_domain ParallelSolver
get_domain(
self
)
Returns the domain, optionally creating a parallel domain if not already created.
# get_domain_requirements Solver
get_domain_requirements(
) -> list[type]
Get domain requirements for this solver class to be applicable.
Domain requirements are classes from the skdecide.builders.domain
package that the domain needs to inherit from.
# Returns
A list of classes to inherit from.
# get_explored_states IW
get_explored_states(
self
) -> set[StrDict[D.T_observation]]
Get the set of states present in the search graph (i.e. the graph's state nodes minus the nodes' encapsulation and their neighbors)
# Returns
set[D.T_agent[D.T_observation]]: set of states present in the search graph
# get_hyperparameter Hyperparametrizable
get_hyperparameter(
name: str
) -> Hyperparameter
Get hyperparameter from given name.
# get_hyperparameters_by_name Hyperparametrizable
get_hyperparameters_by_name(
) -> dict[str, Hyperparameter]
Mapping from name to corresponding hyperparameter.
# get_hyperparameters_names Hyperparametrizable
get_hyperparameters_names(
) -> list[str]
List of hyperparameters names.
# get_intermediate_scores IW
get_intermediate_scores(
self
) -> list[tuple[int, int, float]]
Get the history of tuples of time point (in milliseconds), current width, and root state's f-score, recorded each time a goal state is encountered during the search
# Returns
list[tuple[int, int, float]]: list of tuples of time point (in milliseconds), current width, and root state's f-score
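For instance (a sketch, assuming solver was built and solved as in the earlier example), the anytime progress of the search can be inspected as follows:

```python
# Each record is (elapsed time in ms, width used at that point, root state f-score).
for elapsed_ms, width, f_score in solver.get_intermediate_scores():
    print(f"t={elapsed_ms} ms  width={width}  root f-score={f_score}")
```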
# get_nb_explored_states IW
get_nb_explored_states(
self
) -> int
Get the number of states present in the search graph
# Returns
int: Number of states present in the search graph
# get_nb_of_pruned_states IW
get_nb_of_pruned_states(
self
) -> int
Get the number of states pruned by the novelty measure among the ones present in the search graph
# Returns
int: Number of states pruned by the novelty measure among the ones present in the search graph
# get_nb_tip_states IW
get_nb_tip_states(
self
) -> int
Get the number of states present in the priority queue (i.e. those explored states that have not yet been closed by IW) of the current width search procedure.
WARNING
Throws a runtime exception if no width sub-solver is active.
# Returns
int: Number of states present in the (priority) open queue of the current width search procedure
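The statistics above combine naturally with the constructor's callback parameter; the sketch below (assumptions as in the earlier Maze example) stops the search once the graph gets too large while logging the pruning activity:

```python
# Hedged sketch: a stopping callback polled before each node expansion.
def stop_when_too_large(slv: IW) -> bool:
    explored = slv.get_nb_explored_states()
    pruned = slv.get_nb_of_pruned_states()
    print(f"explored={explored}  pruned-by-novelty={pruned}")
    return explored > 100_000  # returning True stops the solver

solver = IW(
    domain_factory=domain_factory,
    state_features=lambda domain, state: [state.x, state.y],
    callback=stop_when_too_large,
)
```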
# get_next_action DeterministicPolicies
get_next_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Get the next deterministic action (from the solver's current policy).
# Parameters
- observation: The observation for which next action is requested.
# Returns
The next deterministic action.
# get_next_action_distribution UncertainPolicies
get_next_action_distribution(
self,
observation: StrDict[D.T_observation]
) -> Distribution[StrDict[list[D.T_event]]]
Get the probabilistic distribution of next action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation to consider.
# Returns
The probabilistic distribution of next action.
# get_top_tip_state IW
get_top_tip_state(
self
) -> StrDict[D.T_observation]
Get the top tip state, i.e. the tip state with the lowest lexicographical score (according to the node ordering functor given to the IW instance's constructor) of the current width search procedure.
WARNING
Returns None if no width sub-solver is active or if the priority queue of the current width search procedure is empty.
# Returns
D.T_agent[D.T_observation]: Next tip state to be closed by the current width search procedure
# get_utility Utilities
get_utility(
self,
observation: StrDict[D.T_observation]
) -> D.T_value
Get the estimated on-policy utility of the given observation.
In mathematical terms, for a fully observable domain, this function estimates:
$$V^\pi(s) = \underset{\tau \sim \pi}{\mathbb{E}}\big[R(\tau) \mid s_0 = s\big]$$
where $\tau = (s_0, a_0, s_1, a_1, \ldots)$ is a trajectory sampled from the current policy $\pi$ and $R(\tau)$ is its return (cumulative reward).
# Parameters
- observation: The observation to consider.
# Returns
The estimated on-policy utility of the given observation.
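Continuing the earlier sketch (and assuming an initializable domain so that reset() yields an observation), the estimated cost-to-go of the initial state can be queried after solving:

```python
# Hedged sketch: estimated minimum cost-to-go of the initial observation.
domain = domain_factory()
obs = domain.reset()
print(solver.get_utility(obs))
```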
# is_policy_defined_for Policies
is_policy_defined_for(
self,
observation: StrDict[D.T_observation]
) -> bool
Check whether the solver's current policy is defined for the given observation.
# Parameters
- observation: The observation to consider.
# Returns
True if the policy is defined for the given observation (False otherwise).
# reset Solver
reset(
self
) -> None
Reset whatever is needed on this solver before running a new episode.
This function does nothing by default but can be overridden if needed (e.g. to reset the hidden state of an LSTM policy network, which carries information about past observations seen in the previous episode).
# sample_action Policies
sample_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Sample an action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation for which an action must be sampled.
# Returns
The sampled action.
# solve FromInitialState
solve(
self,
from_memory: Optional[Memory[D.T_state]] = None
) -> None
Run the solving process.
After solving by calling self._solve(), the solver autocasts itself so that rollout methods apply to the original characteristics of the domain.
# Parameters
- from_memory: The source memory (state or history) from which we begin the solving process. If None, initial state is used if the domain is initializable, else a ValueError is raised.
TIP
The nature of the solutions produced here depends on other characteristics of the solver, such as policy and assessability.
# solve_from FromAnyState
solve_from(
self,
memory: Memory[D.T_state]
) -> None
Run the solving process from a given state.
After solving by calling self._solve_from(), the solver autocasts itself so that rollout methods apply to the original characteristics of the domain.
# Parameters
- memory: The source memory (state or history) of the transition.
TIP
The nature of the solutions produced here depends on other characteristics of the solver, such as policy and assessability.
# suggest_hyperparameter_with_optuna Hyperparametrizable
suggest_hyperparameter_with_optuna(
trial: optuna.trial.Trial,
name: str,
prefix: str,
**kwargs
) -> Any
Suggest hyperparameter value during an Optuna trial.
This can be used during Optuna hyperparameters tuning.
Args:
- trial: optuna trial during hyperparameters tuning
- name: name of the hyperparameter to choose
- prefix: prefix to add to the corresponding optuna parameter name (useful for disambiguating hyperparameters from subsolvers in case of meta-solvers)
- **kwargs: options for optuna hyperparameter suggestions
Returns: the suggested value for the hyperparameter.
kwargs can be used to pass relevant arguments to
- trial.suggest_float()
- trial.suggest_int()
- trial.suggest_categorical()
For instance it can:
- add a low/high value if not already set for the hyperparameter, or override it to narrow the search (for float or int hyperparameters)
- add a step or log argument (for float or int hyperparameters, see optuna.trial.Trial.suggest_float())
- override choices for categorical or enum parameters to narrow the search
# suggest_hyperparameters_with_optuna Hyperparametrizable
suggest_hyperparameters_with_optuna(
trial: optuna.trial.Trial,
names: Optional[list[str]] = None,
kwargs_by_name: Optional[dict[str, dict[str, Any]]] = None,
fixed_hyperparameters: Optional[dict[str, Any]] = None,
prefix: str
) -> dict[str, Any]
Suggest hyperparameters values during an Optuna trial.
Args:
- trial: optuna trial during hyperparameters tuning
- names: names of the hyperparameters to choose. By default, all available hyperparameters will be suggested. If fixed_hyperparameters is provided, the corresponding names are removed from names.
- kwargs_by_name: options for optuna hyperparameter suggestions, by hyperparameter name; kwargs_by_name[some_name] will be passed as **kwargs to suggest_hyperparameter_with_optuna(name=some_name)
- fixed_hyperparameters: values of fixed hyperparameters, useful for suggesting subbrick hyperparameters if the subbrick class is not suggested by this method but already fixed. These values will be added to the suggested hyperparameters.
- prefix: prefix to add to the corresponding optuna parameter names (useful for disambiguating hyperparameters from subsolvers in case of meta-solvers)
Returns: a mapping between each hyperparameter name and its suggested value. If the hyperparameter has an attribute name_in_kwargs, this is used as the key in the mapping instead of the actual hyperparameter name. The mapping is updated with fixed_hyperparameters.
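A sketch of how these helpers plug into an Optuna study; SolverCls, domain_factory, and evaluate are placeholders, and the solver class is assumed to declare tunable hyperparameters (IW itself may declare none):

```python
import optuna

# Hedged sketch with placeholder names (SolverCls, domain_factory, evaluate).
def objective(trial: optuna.trial.Trial) -> float:
    suggested = SolverCls.suggest_hyperparameters_with_optuna(
        trial=trial,
        prefix="solver.",  # avoids name clashes when several solvers are tuned
    )
    with SolverCls(domain_factory=domain_factory, **suggested) as solver:
        solver.solve()
        return evaluate(solver)  # user-supplied scoring, e.g. average rollout cost

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```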
# _check_domain_additional Solver
_check_domain_additional(
domain: Domain
) -> bool
Check whether the given domain is compliant with the specific requirements of this solver type (i.e. the ones in addition to "domain requirements").
This is a helper function called by default from Solver.check_domain(). It focuses on the specific checks only, whereas the latter also takes the domain requirements into account.
# Parameters
- domain: The domain to check.
# Returns
True if the domain is compliant with the specific requirements of this solver type (False otherwise).
# _get_next_action DeterministicPolicies
_get_next_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Get the best computed action in terms of minimum cost-to-go in a given state.
The solver is run from observation if no solution has previously been computed for observation.
WARNING
Returns a random action if no action is defined in the given state, which is why it is advised to call IW.is_solution_defined_for before calling this method.
# Parameters
- observation (D.T_agent[D.T_observation]): State for which the best action is requested
# Returns
D.T_agent[D.T_concurrency[D.T_event]]: Best computed action
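Following the warning above, a sketch of the recommended guard (reusing solver and an observation from the earlier examples):

```python
# Hedged sketch: avoid the random fallback action by checking the policy first.
if not solver._is_solution_defined_for(observation):
    solver.solve_from(observation)  # re-plan so that a plan goes through this state
action = solver._get_next_action(observation)
```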
# _get_next_action_distribution UncertainPolicies
_get_next_action_distribution(
self,
observation: StrDict[D.T_observation]
) -> Distribution[StrDict[list[D.T_event]]]
Get the probabilistic distribution of next action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation to consider.
# Returns
The probabilistic distribution of next action.
# _get_utility Utilities
_get_utility(
self,
observation: StrDict[D.T_observation]
) -> D.T_value
Get the minimum cost-to-go in a given state
WARNING
Returns None if no action is defined in the given state, which is why it is advised to call IW.is_solution_defined_for before calling this method.
# Parameters
- observation (D.T_agent[D.T_observation]): State from which the minimum cost-to-go is requested
# Returns
D.T_value: Minimum cost-to-go of the given state over the applicable actions in this state
# _initialize Solver
_initialize(
self
)
Launches the parallel domains. This method requires that self._domain_factory, the set of lambda functions passed to the solver's constructor (e.g. the heuristic lambda for heuristic-based solvers), and the flag indicating whether the parallel domain jobs should notify their status via the IPC protocol (required when interacting with other programming languages such as C++) have been recorded beforehand.
# _is_policy_defined_for Policies
_is_policy_defined_for(
self,
observation: StrDict[D.T_observation]
) -> bool
Check whether the solver's current policy is defined for the given observation.
# Parameters
- observation: The observation to consider.
# Returns
True if the policy is defined for the given observation (False otherwise).
# _is_solution_defined_for IW
_is_solution_defined_for(
self,
observation: StrDict[D.T_observation]
) -> bool
Indicates whether the solution policy (potentially built from merging several previously computed plans) is defined for a given state
# Parameters
- observation (D.T_agent[D.T_observation]): State for which an entry is searched in the policy graph
# Returns
bool: True if a plan that goes through the state has been previously computed, False otherwise
# _reset Solver
_reset(
self
) -> None
Clears the search graph.
# _sample_action Policies
_sample_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Sample an action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation for which an action must be sampled.
# Returns
The sampled action.
# _solve FromInitialState
_solve(
self,
from_memory: Optional[Memory[D.T_state]] = None
) -> None
Run the solving process.
# Parameters
- from_memory: The source memory (state or history) from which we begin the solving process. If None, initial state is used if the domain is initializable, else a ValueError is raised.
TIP
The nature of the solutions produced here depends on other characteristics of the solver, such as policy and assessability.
# _solve_from FromAnyState
_solve_from(
self,
memory: Memory[D.T_state]
) -> None
Run the IW algorithm from a given root solving state
# Parameters
- memory (D.T_memory[D.T_state]): State from which IW graph traversals are performed (root of the search graph)