# hub.solver.bfws.bfws
# BFWS
This is the skdecide implementation of Best-First Width Search from "Best-First Width Search: Exploration and Exploitation in Classical Planning" by Nir Lipovetzky and Hector Geffner (2017).
# Constructor BFWS
BFWS(
domain_factory: Callable[[], Domain],
state_features: Callable[[Domain, D.T_state], Any],
heuristic: Callable[[Domain, D.T_state], StrDict[Value[D.T_value]]] = <lambda function>,
parallel: bool = False,
shared_memory_proxy = None,
callback: Callable[[BFWS], bool] = <lambda function>,
verbose: bool = False
) -> None
Construct a BFWS solver instance
# Parameters
- domain_factory (Callable[[], Domain]): The lambda function to create a domain instance.
- state_features (Callable[[Domain, D.T_state], Any]): State feature vector used to compute the novelty measure
- heuristic (Callable[[Domain, D.T_state], D.T_agent[Value[D.T_value]]]): Lambda function taking as arguments the domain and a state object, and returning the heuristic estimate from the state to the goal. Defaults to (lambda d, s: Value(cost=0)).
- parallel (bool, optional): Parallelize the generation of state-action transitions on different processes using duplicated domains (True) or not (False). Defaults to False.
- shared_memory_proxy (type, optional): The optional shared memory proxy. Defaults to None.
- callback (Callable[[BFWS], bool], optional): Lambda function called before popping the next state from the (priority) open queue, taking the solver as argument and returning True if the solver must be stopped. Defaults to (lambda slv: False).
- verbose (bool, optional): Boolean indicating whether verbose messages should be logged (True) or not (False). Defaults to False.
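Below is a minimal usage sketch. It assumes a hypothetical goal-based skdecide domain class MyDomain whose states expose integer x and y attributes; the state_features and heuristic lambdas must be adapted to your actual domain.
```python
# Minimal sketch, assuming a hypothetical domain class MyDomain with x/y state attributes.
from skdecide import Value
from skdecide.hub.solver.bfws import BFWS
from skdecide.utils import rollout

domain_factory = lambda: MyDomain()  # hypothetical domain factory

with BFWS(
    domain_factory=domain_factory,
    # feature vector from which the novelty measure is computed
    state_features=lambda d, s: (s.x, s.y),
    # optional heuristic estimate of the cost-to-go (defaults to 0 when omitted)
    heuristic=lambda d, s: Value(cost=abs(s.x - 10) + abs(s.y - 10)),
    parallel=False,
    verbose=False,
) as solver:
    solver.solve()
    rollout(domain_factory(), solver, max_steps=100)
```
Using the solver as a context manager (or calling close() explicitly) ensures any parallel domain processes are joined when parallel=True.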
# autocast Solver
autocast(
self,
domain_cls: Optional[type[Domain]] = None
) -> None
Autocast itself to the level corresponding to the given domain class.
# Parameters
- domain_cls: the domain class to which level the solver needs to autocast itself. By default, use the original domain factory passed to its constructor.
# call_domain_method ParallelSolver
call_domain_method(
self,
name,
*args
)
Calls a parallel domain's method. This is the only way to call a domain method when the domain is parallelized.
# check_domain Solver
check_domain(
domain: Domain
) -> bool
Check whether a domain is compliant with this solver type.
By default, Solver.check_domain() provides some boilerplate code and internally calls Solver._check_domain_additional() (which returns True by default but can be overridden to define specific checks in addition to the "domain requirements"). The boilerplate code automatically checks whether all domain requirements are met.
# Parameters
- domain: The domain to check.
# Returns
True if the domain is compliant with the solver type (False otherwise).
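For instance, compliance can be verified before building the solver; here MyDomain is again a hypothetical placeholder for your own domain class.
```python
# Hedged sketch: verify the domain meets BFWS requirements before solving.
domain = MyDomain()  # hypothetical domain class
assert BFWS.check_domain(domain), "domain misses a characteristic required by BFWS"
```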
# close ParallelSolver
close(
self
)
Joins the parallel domains' processes. Not calling this method (or not using the 'with' context statement) results in the solver forever waiting for the domain processes to exit.
# complete_with_default_hyperparameters Hyperparametrizable
complete_with_default_hyperparameters(
kwargs: dict[str, Any],
names: Optional[list[str]] = None
)
Add missing hyperparameters to kwargs by using default values.
Args:
kwargs: keyword arguments to complete (e.g. for __init__, init_model, or solve)
names: names of the hyperparameters to add if missing. By default, all available hyperparameters.
Returns: a new dictionary, completion of kwargs
# copy_and_update_hyperparameters Hyperparametrizable
copy_and_update_hyperparameters(
names: Optional[list[str]] = None,
**kwargs_by_name: dict[str, Any]
) -> list[Hyperparameter]
Copy hyperparameters definition of this class and update them with specified kwargs.
This is useful to define hyperparameters for a child class for which only choices of the hyperparameter change for instance.
Args:
names: names of hyperparameters to copy. Default to all.
**kwargs_by_name: for each hyperparameter specified by its name, the attributes to update. If a given hyperparameter name is not specified, the hyperparameter is copied without further update.
Returns: the list of copied (and possibly updated) hyperparameters.
# get_default_hyperparameters Hyperparametrizable
get_default_hyperparameters(
names: Optional[list[str]] = None
) -> dict[str, Any]
Get hyperparameters default values.
Args:
names: names of the hyperparameters to consider. By default, all available hyperparameters.
Returns: a mapping between each hyperparameter's name_in_kwargs and its default value (None if not specified)
# get_domain ParallelSolver
get_domain(
self
)
Returns the domain, optionally creating a parallel domain if not already created.
# get_domain_requirements Solver
get_domain_requirements(
) -> list[type]
Get domain requirements for this solver class to be applicable.
Domain requirements are classes from the skdecide.builders.domain
package that the domain needs to inherit from.
# Returns
A list of classes to inherit from.
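For example, the required characteristics can be listed directly:
```python
# Print the builder classes a domain must inherit from to be solvable by BFWS.
for requirement in BFWS.get_domain_requirements():
    print(requirement.__name__)
```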
# get_explored_states BFWS
get_explored_states(
self
) -> set[StrDict[D.T_observation]]
Get the set of states present in the search graph (i.e. the graph's state nodes minus the nodes' encapsulation and their neighbors)
# Returns
set[D.T_agent[D.T_observation]]: Set of states present in the search graph
# get_hyperparameter Hyperparametrizable
get_hyperparameter(
name: str
) -> Hyperparameter
Get hyperparameter from given name.
# get_hyperparameters_by_name Hyperparametrizable
get_hyperparameters_by_name(
) -> dict[str, Hyperparameter]
Mapping from name to corresponding hyperparameter.
# get_hyperparameters_names Hyperparametrizable
get_hyperparameters_names(
) -> list[str]
List of hyperparameters names.
# get_nb_explored_states BFWS
get_nb_explored_states(
self
) -> int
Get the number of states present in the search graph
# Returns
int: Number of states present in the search graph
# get_nb_tip_states BFWS
get_nb_tip_states(
self
) -> int
Get the number of states present in the priority queue (i.e. those explored states that have not yet been closed by BFWS)
# Returns
int: Number of states present in the (priority) open queue
# get_next_action DeterministicPolicies
get_next_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Get the next deterministic action (from the solver's current policy).
# Parameters
- observation: The observation for which next action is requested.
# Returns
The next deterministic action.
# get_next_action_distribution UncertainPolicies
get_next_action_distribution(
self,
observation: StrDict[D.T_observation]
) -> Distribution[StrDict[list[D.T_event]]]
Get the probabilistic distribution of next action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation to consider.
# Returns
The probabilistic distribution of next action.
# get_plan BFWS
get_plan(
self,
observation: StrDict[D.T_observation]
) -> list[tuple[StrDict[D.T_observation], StrDict[list[D.T_event]], D.T_value]]
Get the solution plan starting in a given state
WARNING
Returns an empty list if no plan has been previously computed that goes through the given state. Throws a runtime exception if a state cycle is detected in the plan
# Parameters
- observation (D.T_agent[D.T_observation]): State from which a solution plan to a goal state is requested
# Returns
list[tuple[D.T_agent[D.T_observation], D.T_agent[D.T_concurrency[D.T_event]], D.T_value]]: Sequence of tuples of state, action and transition cost (computed as the difference of g-scores between this state and the next one) visited along the execution of the plan
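A hedged sketch of inspecting the computed plan and the associated partial policy, assuming solver and domain_factory from the constructor sketch above, an initializable domain, and a completed call to solve():
```python
# Assumes solver.solve() has completed and the domain is initializable.
start = domain_factory().get_initial_state()
for state, action, cost in solver.get_plan(start):
    print(state, action, cost)

policy = solver.get_policy()  # state -> (best action, minimum cost-to-go)
print(len(policy), "states covered by the partial policy")
```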
# get_policy BFWS
get_policy(
self
) -> dict[StrDict[D.T_observation], tuple[StrDict[list[D.T_event]], D.T_value]]
Get the (partial) solution policy defined for the states for which a solution plan that goes through them has been previously computed at least once
WARNING
Only defined over the states reachable from the root solving state
# Returns
dict[D.T_agent[D.T_observation], tuple[D.T_agent[D.T_concurrency[D.T_event]], D.T_value]]: Mapping from states to pairs of action and minimum cost-to-go
# get_solving_time BFWS
get_solving_time(
self
) -> int
Get the solving time in milliseconds since the beginning of the search from the root solving state
# Returns
int: Solving time in milliseconds
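These statistics accessors are convenient inside the constructor's callback, e.g. to stop the search once a budget is exceeded. A sketch, with the feature lambda and domain_factory as placeholders from the constructor example:
```python
# Stop BFWS once either 5 seconds have elapsed or 100,000 states were explored.
def stop_on_budget(slv: BFWS) -> bool:
    return slv.get_solving_time() > 5_000 or slv.get_nb_explored_states() > 100_000

solver = BFWS(
    domain_factory=domain_factory,
    state_features=lambda d, s: (s.x, s.y),  # placeholder feature vector
    callback=stop_on_budget,
)
```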
# get_top_tip_state BFWS
get_top_tip_state(
self
) -> StrDict[D.T_observation]
Get the top tip state, i.e. the tip state with the lowest f-score
WARNING
Returns None if the priority queue is empty
# Returns
D.T_agent[D.T_observation]: Next tip state to be closed by BFWS
# get_utility Utilities
get_utility(
self,
observation: StrDict[D.T_observation]
) -> D.T_value
Get the estimated on-policy utility of the given observation.
In mathematical terms, for a fully observable domain, this function estimates:
$$V^\pi(s) = \underset{\tau \sim \pi}{\mathbb{E}}\big[R(\tau) \mid s_0 = s\big]$$
where $\tau = (s_0, a_0, s_1, a_1, \ldots)$ is a trajectory sampled from the policy $\pi$, $R(\tau)$ its return (cumulative reward) and $s_0$ the state to consider.
# Parameters
- observation: The observation to consider.
# Returns
The estimated on-policy utility of the given observation.
# is_policy_defined_for Policies
is_policy_defined_for(
self,
observation: StrDict[D.T_observation]
) -> bool
Check whether the solver's current policy is defined for the given observation.
# Parameters
- observation: The observation to consider.
# Returns
True if the policy is defined for the given observation memory (False otherwise).
# reset Solver
reset(
self
) -> None
Reset whatever is needed on this solver before running a new episode.
This function does nothing by default but can be overridden if needed (e.g. to reset the hidden state of an LSTM policy network, which carries information about past observations seen in the previous episode).
# sample_action Policies
sample_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Sample an action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation for which an action must be sampled.
# Returns
The sampled action.
# solve FromInitialState
solve(
self,
from_memory: Optional[Memory[D.T_state]] = None
) -> None
Run the solving process.
After solving by calling self._solve(), the solver autocasts itself so that rollout methods apply to the original domain characteristics.
# Parameters
- from_memory: The source memory (state or history) from which we begin the solving process. If None, initial state is used if the domain is initializable, else a ValueError is raised.
TIP
The nature of the solutions produced here depends on other solver characteristics like policy and assessability.
# solve_from FromAnyState
solve_from(
self,
memory: Memory[D.T_state]
) -> None
Run the solving process from a given state.
After solving by calling self._solve_from(), the solver autocasts itself so that rollout methods apply to the original domain characteristics.
# Parameters
- memory: The source memory (state or history) of the transition.
TIP
The nature of the solutions produced here depends on other solver characteristics like policy and assessability.
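A hedged sketch: for a Markovian domain the memory argument is simply a state, so the solver (from the constructor sketch above) can be re-run from any reachable state.
```python
# Re-run the search from an arbitrary state (here the initial one, for brevity).
state = domain_factory().get_initial_state()
solver.solve_from(state)
action = solver.get_next_action(state)
```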
# suggest_hyperparameter_with_optuna Hyperparametrizable
suggest_hyperparameter_with_optuna(
trial: optuna.trial.Trial,
name: str,
prefix: str,
**kwargs
) -> Any
Suggest hyperparameter value during an Optuna trial.
This can be used during Optuna hyperparameters tuning.
Args:
trial: optuna trial during hyperparameters tuning
name: name of the hyperparameter to choose
prefix: prefix to add to the corresponding optuna parameter name (useful for disambiguating hyperparameters from subsolvers in case of meta-solvers)
**kwargs: options for optuna hyperparameter suggestions
Returns: the suggested hyperparameter value
kwargs can be used to pass relevant arguments to
- trial.suggest_float()
- trial.suggest_int()
- trial.suggest_categorical()
For instance it can
- add a low/high value if not existing for the hyperparameter, or override it to narrow the search (for float or int hyperparameters)
- add a step or log argument (for float or int hyperparameters, see optuna.trial.Trial.suggest_float())
- override choices for categorical or enum parameters to narrow the search
# suggest_hyperparameters_with_optuna Hyperparametrizable
suggest_hyperparameters_with_optuna(
trial: optuna.trial.Trial,
names: Optional[list[str]] = None,
kwargs_by_name: Optional[dict[str, dict[str, Any]]] = None,
fixed_hyperparameters: Optional[dict[str, Any]] = None,
prefix: str
) -> dict[str, Any]
Suggest hyperparameters values during an Optuna trial.
Args:
trial: optuna trial during hyperparameters tuning
names: names of the hyperparameters to choose. By default, all available hyperparameters will be suggested. If fixed_hyperparameters is provided, the corresponding names are removed from names.
kwargs_by_name: options for optuna hyperparameter suggestions, by hyperparameter name; kwargs_by_name[some_name] will be passed as **kwargs to suggest_hyperparameter_with_optuna(name=some_name)
fixed_hyperparameters: values of fixed hyperparameters, useful for suggesting subbrick hyperparameters if the subbrick class is not suggested by this method but already fixed. Will be added to the suggested hyperparameters.
prefix: prefix to add to the corresponding optuna parameters (useful for disambiguating hyperparameters from subsolvers in case of meta-solvers)
Returns:
A mapping between hyperparameter names and suggested values. If a hyperparameter has an attribute name_in_kwargs, this is used as the key in the mapping instead of the actual hyperparameter name. The mapping is updated with fixed_hyperparameters.
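A hedged sketch of the Optuna integration, reusing domain_factory from the constructor example. BFWS itself may declare no tunable hyperparameters; the pattern applies to any Hyperparametrizable solver class.
```python
import optuna

def objective(trial: optuna.trial.Trial) -> float:
    # Draw whatever hyperparameters the solver class declares (possibly none).
    suggested = BFWS.suggest_hyperparameters_with_optuna(trial=trial, prefix="bfws.")
    solver = BFWS(
        domain_factory=domain_factory,
        state_features=lambda d, s: (s.x, s.y),  # placeholder feature vector
        **suggested,
    )
    solver.solve()
    return solver.get_solving_time()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
```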
# _check_domain_additional Solver
_check_domain_additional(
domain: Domain
) -> bool
Check whether the given domain is compliant with the specific requirements of this solver type (i.e. the ones in addition to "domain requirements").
This is a helper function called by default from Solver.check_domain(). It focuses on the specific checks, whereas check_domain() also takes the domain requirements into account.
# Parameters
- domain: The domain to check.
# Returns
True if the domain is compliant with the specific requirements of this solver type (False otherwise).
# _get_next_action DeterministicPolicies
_get_next_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Get the best computed action in terms of minimum cost-to-go in a given state.
The solver is run from observation if no solution is defined (i.e. has been previously computed) in observation.
WARNING
Returns a random action if no action is defined in the given state, which is why it is advised to call BFWS.is_solution_defined_for beforehand.
# Parameters
- observation (D.T_agent[D.T_observation]): State for which the best action is requested
# Returns
D.T_agent[D.T_concurrency[D.T_event]]: Best computed action
# _get_next_action_distribution UncertainPolicies
_get_next_action_distribution(
self,
observation: StrDict[D.T_observation]
) -> Distribution[StrDict[list[D.T_event]]]
Get the probabilistic distribution of next action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation to consider.
# Returns
The probabilistic distribution of next action.
# _get_utility Utilities
_get_utility(
self,
observation: StrDict[D.T_observation]
) -> D.T_value
Get the minimum cost-to-go in a given state
WARNING
Returns None if no action is defined in the given state, which is why it is advised to call BFWS.is_solution_defined_for beforehand.
# Parameters
- observation (D.T_agent[D.T_observation]): State from which the minimum cost-to-go is requested
# Returns
D.T_value: Minimum cost-to-go of the given state over the applicable actions in this state
# _initialize Solver
_initialize(
self
)
Launches the parallel domains. This method requires the solver to have previously recorded self._domain_factory, the set of lambda functions passed to the solver's constructor (e.g. the heuristic lambda for heuristic-based solvers), and whether the parallel domain jobs should notify their status via the IPC protocol (required when interacting with other programming languages like C++).
# _is_policy_defined_for Policies
_is_policy_defined_for(
self,
observation: StrDict[D.T_observation]
) -> bool
Check whether the solver's current policy is defined for the given observation.
# Parameters
- observation: The observation to consider.
# Returns
True if the policy is defined for the given observation memory (False otherwise).
# _is_solution_defined_for BFWS
_is_solution_defined_for(
self,
observation: StrDict[D.T_observation]
) -> bool
Indicates whether the solution policy (potentially built from merging several previously computed plans) is defined for a given state
# Parameters
- observation (D.T_agent[D.T_observation]): State for which an entry is searched in the policy graph
# Returns
bool: True if a plan that goes through the state has been previously computed, False otherwise
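A small sketch of the guarded query pattern recommended by the warnings on _get_next_action and _get_utility above, shown here with the underscore accessors documented in this section and reusing solver and domain_factory from the constructor example:
```python
# Only query the policy where a previously computed plan goes through the state.
state = domain_factory().get_initial_state()
if solver._is_solution_defined_for(state):
    best_action = solver._get_next_action(state)
    cost_to_go = solver._get_utility(state)
```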
# _reset Solver
_reset(
self
) -> None
Clears the search graph.
# _sample_action Policies
_sample_action(
self,
observation: StrDict[D.T_observation]
) -> StrDict[list[D.T_event]]
Sample an action for the given observation (from the solver's current policy).
# Parameters
- observation: The observation for which an action must be sampled.
# Returns
The sampled action.
# _solve FromInitialState
_solve(
self,
from_memory: Optional[Memory[D.T_state]] = None
) -> None
Run the solving process.
# Parameters
- from_memory: The source memory (state or history) from which we begin the solving process. If None, initial state is used if the domain is initializable, else a ValueError is raised.
TIP
The nature of the solutions produced here depends on other solver characteristics like policy and assessability.
# _solve_from FromAnyState
_solve_from(
self,
memory: Memory[D.T_state]
) -> None
Run the BFWS algorithm from a given root solving state
# Parameters
- memory (D.T_memory[D.T_state]): State from which BFWS graph traversals are performed (root of the search graph)