discrete_optimization.generic_tools.ls package
Submodules
discrete_optimization.generic_tools.ls.hill_climber module
- class discrete_optimization.generic_tools.ls.hill_climber.HillClimber(problem: Problem, mutator: Mutation, restart_handler: RestartHandler, mode_mutation: ModeMutation, params_objective_function: ParamsObjectiveFunction | None = None, store_solution: bool = False, **kwargs)[source]
Bases: SolverDO, WarmstartMixin
- set_warm_start(solution: Solution) → None [source]
Make the solver warm start from the given solution.
This is ignored if the initial_variable argument is set and not None in the call to solve().
- solve(nb_iteration_max: int, initial_variable: Solution | None = None, callbacks: list[Callback] | None = None, **kwargs: Any) → ResultStorage [source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stages of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods (.on_step_end(), …) during solve(). Some solvers are not yet updated and simply ignore them.
Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem
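For context, a minimal usage sketch of this class. It assumes problem and mutator are pre-built Problem and Mutation instances for your model (placeholders here, not defined in this module), and that the returned ResultStorage exposes get_best_solution_fit() as it does elsewhere in the library:

    from discrete_optimization.generic_tools.ls.hill_climber import HillClimber
    from discrete_optimization.generic_tools.ls.local_search import (
        ModeMutation,
        RestartHandlerLimit,
    )

    # `problem` and `mutator` are assumed to exist: a Problem instance and a
    # Mutation compatible with its Solution type.
    solver = HillClimber(
        problem=problem,
        mutator=mutator,
        restart_handler=RestartHandlerLimit(nb_iteration_no_improvement=100),
        mode_mutation=ModeMutation.MUTATE_AND_EVALUATE,
    )
    # Optionally start from a known solution instead of passing initial_variable to solve().
    # solver.set_warm_start(known_solution)
    result_storage = solver.solve(nb_iteration_max=1000)
    best_solution, best_fit = result_storage.get_best_solution_fit()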
- class discrete_optimization.generic_tools.ls.hill_climber.HillClimberPareto(problem: Problem, mutator: Mutation, restart_handler: RestartHandler, mode_mutation: ModeMutation, params_objective_function: ParamsObjectiveFunction | None = None, store_solution: bool = False)[source]
Bases: HillClimber
- solve(nb_iteration_max: int, initial_variable: Solution | None = None, update_iteration_pareto: int = 1000, callbacks: list[Callback] | None = None, **kwargs: Any) → ParetoFront [source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stages of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods (.on_step_end(), …) during solve(). Some solvers are not yet updated and simply ignore them.
Returns (ParetoFront): a result object potentially containing a pool of solutions to a discrete-optimization problem
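A hedged sketch for this multi-objective variant, reusing the same problem and mutator placeholders as above; the comment on update_iteration_pareto goes by the parameter name only:

    from discrete_optimization.generic_tools.ls.hill_climber import HillClimberPareto
    from discrete_optimization.generic_tools.ls.local_search import (
        ModeMutation,
        RestartHandlerLimit,
    )

    pareto_solver = HillClimberPareto(
        problem=problem,          # assumed multi-objective Problem instance
        mutator=mutator,          # assumed compatible Mutation instance
        restart_handler=RestartHandlerLimit(nb_iteration_no_improvement=200),
        mode_mutation=ModeMutation.MUTATE_AND_EVALUATE,
    )
    pareto_front = pareto_solver.solve(
        nb_iteration_max=5000,
        update_iteration_pareto=1000,  # going by the name: refresh the Pareto front every 1000 iterations
    )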
discrete_optimization.generic_tools.ls.local_search module
- class discrete_optimization.generic_tools.ls.local_search.ModeMutation(value)[source]
Bases: Enum
An enumeration.
- MUTATE = 0
- MUTATE_AND_EVALUATE = 1
- class discrete_optimization.generic_tools.ls.local_search.RestartHandler[source]
Bases: object
- best_fitness: float | TupleFitness
- restart(cur_solution: Solution, cur_objective: float | TupleFitness) → tuple[Solution, float | TupleFitness] [source]
- update(nv: Solution, fitness: float | TupleFitness, improved_global: bool, improved_local: bool) → None [source]
- class discrete_optimization.generic_tools.ls.local_search.RestartHandlerLimit(nb_iteration_no_improvement: int)[source]
Bases: RestartHandler
- restart(cur_solution: Solution, cur_objective: float | TupleFitness) → tuple[Solution, float | TupleFitness] [source]
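The restart()/update() pair above is the contract a restart strategy implements: update() receives each new solution with its fitness and improvement flags, and restart() may swap the current solution/objective pair before the next iteration. Below is a hedged sketch of a custom handler that keeps its own bookkeeping; it is not part of the library and assumes the base class __init__ takes no required arguments:

    from discrete_optimization.generic_tools.ls.local_search import RestartHandler

    class RestartFromBestAfterStall(RestartHandler):
        """Illustrative handler (not part of the library): jump back to the best
        solution seen so far after `patience` non-improving iterations."""

        def __init__(self, patience: int = 100):
            super().__init__()  # assumption: the base __init__ takes no required arguments
            self.patience = patience
            self.nb_stalled = 0
            self.best_solution = None
            self.best_fitness = None

        def update(self, nv, fitness, improved_global, improved_local):
            if improved_global or self.best_solution is None:
                # Depending on the Mutation used, copying `nv` may be necessary here.
                self.best_solution = nv
                self.best_fitness = fitness
                self.nb_stalled = 0
            else:
                self.nb_stalled += 1

        def restart(self, cur_solution, cur_objective):
            if self.nb_stalled >= self.patience:
                self.nb_stalled = 0
                return self.best_solution, self.best_fitness
            return cur_solution, cur_objective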
discrete_optimization.generic_tools.ls.simulated_annealing module
- class discrete_optimization.generic_tools.ls.simulated_annealing.SimulatedAnnealing(problem: Problem, mutator: Mutation, restart_handler: RestartHandler, temperature_handler: TemperatureScheduling, mode_mutation: ModeMutation, params_objective_function: ParamsObjectiveFunction | None = None, store_solution: bool = False, **kwargs)[source]
Bases: SolverDO, WarmstartMixin
- aggreg_from_dict: Callable[[dict[str, float]], float]
- set_warm_start(solution: Solution) → None [source]
Make the solver warm start from the given solution.
This is ignored if the initial_variable argument is set and not None in the call to solve().
- solve(nb_iteration_max: int, initial_variable: Solution | None = None, callbacks: list[Callback] | None = None, **kwargs: Any) → ResultStorage [source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stages of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods (.on_step_end(), …) during solve(). Some solvers are not yet updated and simply ignore them.
Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem
- class discrete_optimization.generic_tools.ls.simulated_annealing.TemperatureScheduling[source]
Bases: object
- nb_iteration: int
- restart_handler: RestartHandler
- temperature: float
- class discrete_optimization.generic_tools.ls.simulated_annealing.TemperatureSchedulingFactor(temperature: float, restart_handler: RestartHandler, coefficient: float = 0.99)[source]
Bases: TemperatureScheduling
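Putting this module together with the restart and mutation helpers above, a minimal hedged sketch (problem and mutator are again assumed placeholders; the starting temperature, cooling coefficient and iteration budget are arbitrary illustrative values):

    from discrete_optimization.generic_tools.ls.local_search import (
        ModeMutation,
        RestartHandlerLimit,
    )
    from discrete_optimization.generic_tools.ls.simulated_annealing import (
        SimulatedAnnealing,
        TemperatureSchedulingFactor,
    )

    restart_handler = RestartHandlerLimit(nb_iteration_no_improvement=300)
    temperature_handler = TemperatureSchedulingFactor(
        temperature=10.0,        # initial temperature
        restart_handler=restart_handler,
        coefficient=0.99,        # per the class name, a multiplicative (geometric) cooling factor
    )

    sa = SimulatedAnnealing(
        problem=problem,             # assumed Problem instance
        mutator=mutator,             # assumed compatible Mutation instance
        restart_handler=restart_handler,
        temperature_handler=temperature_handler,
        mode_mutation=ModeMutation.MUTATE_AND_EVALUATE,
    )
    result_storage = sa.solve(nb_iteration_max=10000)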