discrete_optimization.generic_tools.optuna package
Submodules
discrete_optimization.generic_tools.optuna.timed_percentile_pruner module
discrete_optimization.generic_tools.optuna.utils module
Utilities for optuna.
- discrete_optimization.generic_tools.optuna.utils.drop_already_tried_hyperparameters(trial: Trial) → None [source]
Fail the trial if it uses the same hyperparameters as a previous trial.
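A minimal usage sketch inside a hand-written optuna objective (the objective function and its hyperparameter x are hypothetical; per the docstring, the call stops the trial when the suggested hyperparameters duplicate an earlier trial's):

    import optuna

    from discrete_optimization.generic_tools.optuna.utils import (
        drop_already_tried_hyperparameters,
    )

    def objective(trial: optuna.Trial) -> float:
        # Hypothetical hyperparameter: a tiny search space makes duplicate
        # suggestions likely, so the guard below is actually exercised.
        x = trial.suggest_int("x", 0, 3)
        # Stop the trial early if an earlier trial already used exactly
        # these hyperparameters, instead of re-evaluating them.
        drop_already_tried_hyperparameters(trial)
        return float((x - 2) ** 2)

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20)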
- discrete_optimization.generic_tools.optuna.utils.generic_optuna_experiment_monoproblem(problem: Problem, solvers_to_test: list[type[SolverDO]], kwargs_fixed_by_solver: dict[type[SolverDO], dict[str, Any]] | None = None, suggest_optuna_kwargs_by_name_by_solver: dict[type[SolverDO], dict[str, dict[str, Any]]] | None = None, additional_hyperparameters_by_solver: dict[type[SolverDO], list[Hyperparameter]] | None = None, n_trials: int = 150, check_satisfy: bool = True, computation_time_in_study: bool = True, study_basename: str = 'study', create_another_study: bool = True, overwrite_study=False, storage_path: str = './optuna-journal.log', sampler: BaseSampler | None = None, pruner: BasePruner | None = None, seed: int | None = None, min_time_per_solver: int = 5, callbacks: list[Callback] | None = None) → optuna.Study [source]
Create and run an optuna study to tune solvers’ hyperparameters on a given problem.
The optuna study will choose a solver and its hyperparameters in order to optimize the fitness on the given problem.
Pruning is potentially done at each optimization step thanks to a dedicated callback. This can be done
- either according to the optimization step number (meaningful only when considering a single solver, or at least solvers of the same family, so that step numbers are comparable),
- or according to the elapsed time (more meaningful when comparing several types of solvers).
The optuna study can be monitored with optuna-dashboard:
optuna-dashboard optuna-journal.log
(or with the relevant path set by storage_path)
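Beyond the dashboard, the journal can also be opened programmatically; a sketch assuming optuna >= 3.1 (where JournalStorage and JournalFileStorage exist; in optuna >= 4 the file backend is named optuna.storages.journal.JournalFileBackend):

    import optuna

    # Open the journal file written by the experiment (the storage_path value).
    storage = optuna.storages.JournalStorage(
        optuna.storages.JournalFileStorage("./optuna-journal.log")
    )
    # When create_another_study is True, a timestamp is added to
    # study_basename, so list the studies to find the exact name.
    for summary in optuna.get_all_study_summaries(storage=storage):
        print(summary.study_name)
    study = optuna.load_study(study_name="study", storage=storage)  # adjust name
    print(study.best_trial.params)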
- Parameters:
problem – problem to consider
solvers_to_test – list of solvers to consider
kwargs_fixed_by_solver – fixed hyperparameters by solver. Can also contain other parameters needed by the solvers’ __init__(), init_model(), and solve() methods
suggest_optuna_kwargs_by_name_by_solver – kwargs_by_name passed to solvers’ suggest_with_optuna(). Useful to restrict or specify choices, step, high, …
additional_hyperparameters_by_solver – additional user-defined hyperparameters by solver, to be suggested by optuna
n_trials – Number of trials to be run in the optuna study
check_satisfy – whether to check that the found solution satisfies the problem. If it does not, the trial is considered failed and is pruned without reporting a value.
computation_time_in_study – if True, intermediate reporting and pruning will be indexed by elapsed time instead of the solver’s internal iteration number.
study_basename – Base name of the study generated. If create_another_study is True, a timestamp will be added to this base name.
create_another_study – if True, a timestamp will be added to the study base name in order to avoid overwriting or continuing a previously created study. Should be False if one wants to add trials to an existing study.
overwrite_study – if True, any study with the same name as the one generated here will be deleted before starting the optuna study. Should be False if one wants to add trials to an existing study.
storage_path – path to the journal used by optuna to log the study. Can be an NFS path to allow parallelized optuna studies.
sampler – sampler used by the optuna study. If None, a TPESampler is used with the provided seed.
pruner – pruner used by the optuna study. If None, TimedPercentilePruner(percentile=50, n_warmup_steps=min_time_per_solver) is used when computation_time_in_study is True, and MedianPruner() otherwise.
seed – used to create the sampler if sampler is None. Should be set to an integer if one wants to ensure reproducible results.
min_time_per_solver – if no pruner is defined and computation_time_in_study is True, pruning is not allowed before this many seconds have elapsed.
callbacks – list of callbacks to plug into solvers’ solve(). By default, ObjectiveLogger(step_verbosity_level=logging.INFO, end_verbosity_level=logging.INFO) is used. Moreover, an OptunaCallback will be added to report intermediate values and prune accordingly.
- Returns:
the created optuna study
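For illustration, a hedged end-to-end sketch: problem and the SolverA/SolverB classes are placeholders for a concrete Problem instance and SolverDO subclasses from the library, and the time_limit kwarg is an assumption about what those solvers accept:

    import logging

    from discrete_optimization.generic_tools.optuna.timed_percentile_pruner import (
        TimedPercentilePruner,
    )
    from discrete_optimization.generic_tools.optuna.utils import (
        generic_optuna_experiment_monoproblem,
    )

    logging.basicConfig(level=logging.INFO)

    # Placeholders: substitute a real Problem instance and SolverDO subclasses.
    problem = ...
    solvers_to_test = [SolverA, SolverB]

    study = generic_optuna_experiment_monoproblem(
        problem=problem,
        solvers_to_test=solvers_to_test,
        # Hypothetical fixed kwarg forwarded to SolverA's solve():
        kwargs_fixed_by_solver={SolverA: {"time_limit": 10}},
        n_trials=50,
        study_basename="my-study",
        storage_path="./optuna-journal.log",
        # Equivalent to the default when computation_time_in_study is True:
        pruner=TimedPercentilePruner(percentile=50, n_warmup_steps=5),
        seed=42,  # fixes the TPESampler for reproducibility
    )
    print(study.best_trial)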
- discrete_optimization.generic_tools.optuna.utils.generic_optuna_experiment_multiproblem(problems: list[Problem], solvers_to_test: list[type[SolverDO]], kwargs_fixed_by_solver: dict[type[SolverDO], dict[str, Any]] | None = None, suggest_optuna_kwargs_by_name_by_solver: dict[type[SolverDO], dict[str, dict[str, Any]]] | None = None, additional_hyperparameters_by_solver: dict[type[SolverDO], list[Hyperparameter]] | None = None, n_trials: int = 150, check_satisfy: bool = True, study_basename: str = 'study', create_another_study: bool = True, overwrite_study=False, storage_path: str = './optuna-journal.log', sampler: BaseSampler | None = None, pruner: BasePruner | None = None, seed: int | None = None, prop_startup_instances: float = 0.2, randomize_instances: bool = True, report_cumulated_fitness: bool = False, callbacks: list[Callback] | None = None) → optuna.Study [source]
Create and run an optuna study to tune solvers’ hyperparameters on several instances of a problem.
The optuna study will choose a solver and its hyperparameters in order to optimize the average fitness on the given problem instances.
Pruning is potentially done after each instance is solved, based on how previously tried solvers performed on the same instance.
The optuna study can be monitored with optuna-dashboard:
optuna-dashboard optuna-journal.log
(or with the relevant path set by storage_path)
- Parameters:
problems – list of problem instances to consider
solvers_to_test – list of solvers to consider
kwargs_fixed_by_solver – fixed hyperparameters by solver. Can also contain other parameters needed by the solvers’ __init__(), init_model(), and solve() methods
suggest_optuna_kwargs_by_name_by_solver – kwargs_by_name passed to solvers’ suggest_with_optuna(). Useful to restrict or specify choices, step, high, …
additional_hyperparameters_by_solver – additional user-defined hyperparameters by solver, to be suggested by optuna
n_trials – Number of trials to be run in the optuna study
check_satisfy – whether to check that the found solution satisfies the problem. If it does not, the trial is considered failed and is pruned without reporting a value.
study_basename – Base name of the study generated. If create_another_study is True, a timestamp will be added to this base name.
create_another_study – if True, a timestamp will be added to the study base name in order to avoid overwriting or continuing a previously created study. Should be False if one wants to add trials to an existing study.
overwrite_study – if True, any study with the same name as the one generated here will be deleted before starting the optuna study. Should be False if one wants to add trials to an existing study.
storage_path – path to the journal used by optuna to log the study. Can be an NFS path to allow parallelized optuna studies.
sampler – sampler used by the optuna study. If None, a TPESampler is used with the provided seed.
pruner – pruner used by the optuna study. If None, a WilcoxonPruner is used.
seed – used to create the sampler if sampler is None. Should be set to an integer if one wants to ensure reproducible results.
prop_startup_instances – used if pruner is None. Proportion of instances solved at the start of a trial before pruning is allowed.
randomize_instances – whether to randomize the order of instances when running a trial. Should probably be set to False if report_cumulated_fitness is True.
report_cumulated_fitness – whether to report the cumulated fitness instead of the individual fitness for each problem instance. Should be set to False when using the WilcoxonPruner.
callbacks – list of callbacks to plug into solvers’ solve(). By default, ObjectiveLogger(step_verbosity_level=logging.INFO, end_verbosity_level=logging.INFO) is used.
- Returns:
the created optuna study
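Analogously, a hedged sketch for the multi-instance variant (problems and SolverA are placeholders, as above):

    from discrete_optimization.generic_tools.optuna.utils import (
        generic_optuna_experiment_multiproblem,
    )

    # Placeholders: several instances of the same problem type, e.g. parsed
    # from different benchmark files, and a SolverDO subclass to tune.
    problems = [...]
    study = generic_optuna_experiment_multiproblem(
        problems=problems,
        solvers_to_test=[SolverA],
        n_trials=100,
        prop_startup_instances=0.2,  # solve 20% of instances before pruning can kick in
        randomize_instances=True,
        study_basename="my-multi-study",
        storage_path="./optuna-journal.log",
        seed=0,
    )
    print(study.best_trial.params)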