discrete_optimization.tsp.solvers package
Submodules
discrete_optimization.tsp.solvers.cp_mzn module
- class discrete_optimization.tsp.solvers.cp_mzn.CPTspModel[source]
Bases: object
- FLOAT_VERSION = 0
- INT_VERSION = 1
- class discrete_optimization.tsp.solvers.cp_mzn.CpTspSolver(problem: TspProblem, model_type: CPTspModel, cp_solver_name: CpSolverName = CpSolverName.CHUFFED, params_objective_function: ParamsObjectiveFunction | None = None, silent_solve_error: bool = False, **kwargs)[source]
Bases: MinizincCpSolver, TspSolver
- init_model(**args: Any) None[source]
Instantiate a CP model instance
Afterwards, self.instance should not be None anymore.
- retrieve_solution(_output_item: str | None = None, **kwargs: Any) TspSolution[source]
Return a d-o solution from the variables computed by minizinc.
- Parameters:
_output_item – string representing the minizinc solver output passed by minizinc to the solution constructor
**kwargs – keyword arguments passed by minizinc to the solution constructor, containing the objective value (key “objective”) and the computed variables as defined in the minizinc model.
Returns:
the TspSolution built from the minizinc output.
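A minimal usage sketch (assumptions: a TspProblem instance `problem` is already built, the MiniZinc backend is installed, and result access via ResultStorage.get_best_solution_fit() follows the library's generic solver API):

```python
# Sketch only: `problem` is an assumed, pre-built TspProblem instance.
from discrete_optimization.tsp.solvers.cp_mzn import CPTspModel, CpTspSolver

solver = CpTspSolver(problem=problem, model_type=CPTspModel.INT_VERSION)
solver.init_model()  # after this call, solver.instance is no longer None
result_storage = solver.solve()
best_solution, fit = result_storage.get_best_solution_fit()
```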
discrete_optimization.tsp.solvers.cpsat module
- class discrete_optimization.tsp.solvers.cpsat.CpSatTspSolver(problem: Problem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs: Any)[source]
Bases: OrtoolsCpSatSolver, TspSolver, WarmstartMixin
- retrieve_solution(cpsolvercb: CpSolverSolutionCallback) Solution[source]
Construct a d-o solution from the cpsat solver internal solution.
It will be called each time the cpsat solver finds a new solution. At that point, the values of internal variables are accessible via cpsolvercb.Value(VARIABLE_NAME).
- Parameters:
cpsolvercb – the ortools callback called when the cpsat solver finds a new solution.
- Returns:
the intermediate solution, in d-o format.
- set_warm_start(solution: TspSolution) None[source]
Make the solver warm start from the given solution.
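A minimal warm-start sketch (assumptions: `problem` is an existing TspProblem, `initial_solution` an existing TspSolution, and init_model() is provided by the CP-SAT base solver):

```python
# Sketch only: `problem` and `initial_solution` are assumed to exist already.
from discrete_optimization.tsp.solvers.cpsat import CpSatTspSolver

solver = CpSatTspSolver(problem=problem)
solver.init_model()                      # assumed to be inherited from the CP-SAT base solver
solver.set_warm_start(initial_solution)  # hint CP-SAT with the given tour
result_storage = solver.solve()
```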
discrete_optimization.tsp.solvers.dp module
- class discrete_optimization.tsp.solvers.dp.DpTspSolver(problem: Problem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs: Any)[source]
Bases: TspSolver, DpSolver, WarmstartMixin
- hyperparameters: list[Hyperparameter] = [CategoricalHyperparameter(name='solver', default=<class 'builtins.CABS'>, depends_on=None, name_in_kwargs='solver'), CategoricalHyperparameter(name='closest_distance', default=False, depends_on=None, name_in_kwargs='closest_distance')]
Hyperparameters available for this solver.
- These hyperparameters are to be fed to the **kwargs found in
__init__()
init_model() (when available)
solve()
A usage sketch is given at the end of this class entry.
- init_model(**kwargs: Any) None[source]
Initialize internal model used to solve.
Can initialize an ortools, milp, gurobi, … model.
- problem: TspProblem
- set_warm_start(solution: TspSolution) None[source]
Make the solver warm start from the given solution.
- transitions: dict
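A usage sketch for the hyperparameters listed above (assumptions: `problem` is an existing TspProblem, didppy is installed, and solve() accepts a time_limit keyword as in the library's other DP-based solvers):

```python
# Sketch only: hyperparameter names are taken from the list above.
import didppy
from discrete_optimization.tsp.solvers.dp import DpTspSolver

solver = DpTspSolver(problem=problem)
result_storage = solver.solve(
    solver=didppy.CABS,      # hyperparameter "solver" (default shown above)
    closest_distance=False,  # hyperparameter "closest_distance"
    time_limit=20,           # assumed keyword, see lead-in
)
```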
discrete_optimization.tsp.solvers.gpdp module
- class discrete_optimization.tsp.solvers.gpdp.GpdpBasedTspSolver(problem: Problem, **kwargs: Any)[source]
Bases: TspSolver, WarmstartMixin
- init_model(**kwargs: Any) None[source]
Initialize internal model used to solve.
Can initialize an ortools, milp, gurobi, … model.
- set_warm_start(solution: Solution) None[source]
Make the solver warm start from the given solution.
- solve(callbacks: list[Callback] | None = None, time_limit: float | None = 100.0, **kwargs: Any) ResultStorage[source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stage of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods .on_step_end(), … during solve(). However, some solvers are not yet updated and simply ignore them.
Returns (ResultStorage): a result object containing potentially a pool of solutions to a discrete-optimization problem
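A minimal sketch using only the members documented above (assumption: `problem` and `initial_solution` already exist):

```python
# Sketch only: warm-start then solve with the documented time_limit parameter.
from discrete_optimization.tsp.solvers.gpdp import GpdpBasedTspSolver

solver = GpdpBasedTspSolver(problem=problem)
solver.init_model()
solver.set_warm_start(initial_solution)
result_storage = solver.solve(time_limit=30.0)
```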
discrete_optimization.tsp.solvers.lns_cpsat module
- class discrete_optimization.tsp.solvers.lns_cpsat.SubpathTspConstraintHandler(problem: TspProblem, fraction_segment_to_fix: float = 0.9)[source]
Bases: OrtoolsCpSatConstraintHandler
- adding_constraint_from_results_store(solver: CpSatTspSolver, result_storage: ResultStorage, result_storage_last_iteration: ResultStorage, **kwargs: Any) Iterable[Constraint][source]
Add constraints to the internal model of a solver based on previous solutions
- Parameters:
solver – solver whose internal model is updated
result_storage – all results so far
result_storage_last_iteration – results from last LNS iteration only
**kwargs
- Returns:
list of added constraints
- class discrete_optimization.tsp.solvers.lns_cpsat.TspConstraintHandler(problem: TspProblem, fraction_segment_to_fix: float = 0.9)[source]
Bases: OrtoolsCpSatConstraintHandler
- adding_constraint_from_results_store(solver: CpSatTspSolver, result_storage: ResultStorage, result_storage_last_iteration: ResultStorage, **kwargs: Any) Iterable[Constraint][source]
Add constraints to the internal model of a solver based on previous solutions
- Parameters:
solver – solver whose internal model is updated
result_storage – all results so far
result_storage_last_iteration – results from last LNS iteration only
**kwargs
- Returns:
list of added constraints
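A sketch of a single hand-rolled LNS-style iteration using only the signature documented above (assumption: `problem` is an existing TspProblem; in practice these handlers are meant to be driven by the library's LNS wrapper rather than called manually):

```python
# Sketch only: one LNS-style step, omitting the library's LNS driver.
from discrete_optimization.tsp.solvers.cpsat import CpSatTspSolver
from discrete_optimization.tsp.solvers.lns_cpsat import TspConstraintHandler

solver = CpSatTspSolver(problem=problem)
solver.init_model()
result_storage = solver.solve()

handler = TspConstraintHandler(problem=problem, fraction_segment_to_fix=0.9)
constraints = handler.adding_constraint_from_results_store(
    solver=solver,
    result_storage=result_storage,
    result_storage_last_iteration=result_storage,
)
result_storage = solver.solve()  # re-solve the restricted model
```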
discrete_optimization.tsp.solvers.lp_iterative module
- class discrete_optimization.tsp.solvers.lp_iterative.LPIterativeTspSolver(problem: TspProblem, graph_builder: Callable[[TspProblem], tuple[DiGraph, DiGraph, dict[int, set[tuple[int, int]]], dict[int, set[tuple[int, int]]]]], params_objective_function: ParamsObjectiveFunction | None = None, **kwargs: Any)[source]
Bases: TspSolver
- init_model(method: MILPSolver = MILPSolver.CBC, **kwargs: Any) None[source]
Initialize internal model used to solve.
Can initialize an ortools, milp, gurobi, … model.
- plot_solve(solutions: list[set[tuple[int, int]]], rebuilt_solution: list[list[int]], cost: list[float], nb_components: list[int], rebuilt_obj: list[float], show: bool = True, plot_folder: str | None = None) None[source]
- solve(**kwargs: Any) ResultStorage[source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stage of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods .on_step_end(), … during solve(). However, some solvers are not yet updated and simply ignore them.
Returns (ResultStorage): a result object containing potentially a pool of solutions to a discrete-optimization problem
- class discrete_optimization.tsp.solvers.lp_iterative.MILPSolver(value)[source]
Bases: Enum
An enumeration.
- CBC = 1
- GUROBI = 0
- discrete_optimization.tsp.solvers.lp_iterative.build_graph_complete(tsp_model: TspProblem) tuple[DiGraph, DiGraph, dict[int, set[tuple[int, int]]], dict[int, set[tuple[int, int]]]][source]
- discrete_optimization.tsp.solvers.lp_iterative.build_graph_pruned(tsp_model: Point2DTspProblem) tuple[DiGraph, DiGraph, dict[int, set[tuple[int, int]]], dict[int, set[tuple[int, int]]]][source]
- discrete_optimization.tsp.solvers.lp_iterative.build_the_cycles(x_solution: set[tuple[int, int]], component: set[int], graph: DiGraph, start_index: int, end_index: int) tuple[list[int], dict[int, int]][source]
- discrete_optimization.tsp.solvers.lp_iterative.rebuild_tsp_routine(sorted_connected_component: Sequence[tuple[set[int], int]], paths_component: dict[int, list[int]], node_to_component: dict[int, int], indexes: dict[int, dict[int, int]], graph: DiGraph, edges: set[tuple[int, int]], nodeCount: int, list_points: Sequence[Point], evaluate_function_indexes: Callable[[int, int], float], tsp_model: TspProblem, start_index: int = 0, end_index: int = 0) tuple[list[int], dict[str, float]][source]
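A minimal sketch combining the pieces of this module: the MILPSolver enum selects the backend in init_model(), and build_graph_complete() serves as the graph_builder expected by the constructor (assumption: `problem` is an existing TspProblem):

```python
# Sketch only: CBC backend; MILPSolver.GUROBI is the alternative.
from discrete_optimization.tsp.solvers.lp_iterative import (
    LPIterativeTspSolver,
    MILPSolver,
    build_graph_complete,
)

solver = LPIterativeTspSolver(problem=problem, graph_builder=build_graph_complete)
solver.init_model(method=MILPSolver.CBC)
result_storage = solver.solve()
```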
discrete_optimization.tsp.solvers.optal module
- class discrete_optimization.tsp.solvers.optal.OptalTspSolver(problem: TspProblem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs: Any)[source]
Bases: OptalSolver
- build_command(parameters_cp: ParametersCp | None = None, time_limit: int = 10, **args: Any)[source]
Build the command line call for OptalCP. You can pass parameters from the Parameters class of OptalCP, for example: searchType=fds, or worker0-1.noOverlapPropagationLevel=4 if you want workers 0 and 1 to use this parameter, etc. TODO: list such parameters in the hyperparameters of this wrapped solver.
- init_model(scaling: float = 100.0, **args: Any) None[source]
Instantiate a CP model instance
Afterwards, self.instance should not be None anymore.
- problem: TspProblem
- retrieve_current_solution(dict_results: dict) TspSolution[source]
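A sketch of passing OptalCP search parameters through solve() (assumptions: `problem` exists, OptalCP is installed, and extra keyword arguments are forwarded to build_command() as its docstring suggests):

```python
# Sketch only: forwarding of extra keyword arguments to build_command() is assumed.
from discrete_optimization.tsp.solvers.optal import OptalTspSolver

solver = OptalTspSolver(problem=problem)
solver.init_model(scaling=100.0)  # distances scaled to integers
result_storage = solver.solve(time_limit=10, searchType="fds")
```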
- discrete_optimization.tsp.solvers.optal.tsp_to_dict(problem: TspProblem, scaling: float = 100)[source]
Exports the TSP problem to a JSON file, computing and storing the full distance matrix.
discrete_optimization.tsp.solvers.ortools_routing module
Simple travelling salesman problem between cities.
- class discrete_optimization.tsp.solvers.ortools_routing.ORtoolsTspSolver(problem: TspProblem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs: Any)[source]
Bases: TspSolver
- init_model(**kwargs: Any) None[source]
Initialize internal model used to solve.
Can initialize an ortools, milp, gurobi, … model.
- solve(time_limit: int | None = 100, **kwargs: Any) ResultStorage[source]
Prints the solution to the console.
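A minimal sketch (assumption: `problem` is an existing TspProblem; only the documented time_limit parameter is used):

```python
# Sketch only: OR-Tools routing wrapper with a 30-second limit.
from discrete_optimization.tsp.solvers.ortools_routing import ORtoolsTspSolver

solver = ORtoolsTspSolver(problem=problem)
solver.init_model()
result_storage = solver.solve(time_limit=30)
```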
discrete_optimization.tsp.solvers.quantum module
- class discrete_optimization.tsp.solvers.quantum.QaoaTspSolver(problem: Point2DTspProblem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs)[source]
Bases: TspSolver, QiskitQaoaSolver
- class discrete_optimization.tsp.solvers.quantum.Tsp2dQiskit(problem: Point2DTspProblem)[source]
Bases: object
- class discrete_optimization.tsp.solvers.quantum.VqeTspSolver(problem: Point2DTspProblem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs)[source]
Bases: TspSolver, QiskitVqeSolver
discrete_optimization.tsp.solvers.toulbar module
- class discrete_optimization.tsp.solvers.toulbar.ToulbarTspSolver(problem: TspProblem, **kwargs: Any)[source]
Bases: ToulbarSolver, TspSolver, WarmstartMixin
- hyperparameters: list[Hyperparameter] = [CategoricalHyperparameter(name='vns', default=None, depends_on=None, name_in_kwargs='vns'), CategoricalHyperparameter(name='encoding_all_diff', default='salldiffkp', depends_on=None, name_in_kwargs='encoding_all_diff')]
Hyperparameters available for this solver.
- These hyperparameters are to be fed to the **kwargs found in
__init__()
init_model() (when available)
solve()
A usage sketch is given at the end of this class entry.
- init_model(**kwargs: Any) None[source]
Initialize internal model used to solve.
Can initialize an ortools, milp, gurobi, … model.
- retrieve_solution(solution_from_toulbar2: tuple[list, float, int]) TspSolution[source]
- set_warm_start(solution: TspSolution) None[source]
Make the solver warm start from the given solution.
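A usage sketch for the hyperparameters and warm start above (assumptions: `problem` and `initial_solution` exist, pytoulbar2 is installed, and solve() accepts a time_limit keyword like the library's other toulbar-based solvers):

```python
# Sketch only: hyperparameter names and defaults are taken from the list above.
from discrete_optimization.tsp.solvers.toulbar import ToulbarTspSolver

solver = ToulbarTspSolver(problem=problem, vns=None, encoding_all_diff="salldiffkp")
solver.init_model()
solver.set_warm_start(initial_solution)
result_storage = solver.solve(time_limit=20)
```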
- class discrete_optimization.tsp.solvers.toulbar.TspConstraintHandlerToulbar(fraction_nodes: float = 0.5)[source]
Bases: ConstraintHandler
- adding_constraint_from_results_store(solver: ToulbarTspSolver, result_storage: ResultStorage, result_storage_last_iteration: ResultStorage, **kwargs: Any) Iterable[Any][source]
Add constraints to the internal model of a solver based on previous solutions
- Parameters:
solver – solver whose internal model is updated
result_storage – all results so far
result_storage_last_iteration – results from last LNS iteration only
**kwargs
- Returns:
list of added constraints
discrete_optimization.tsp.solvers.tsp_solver module
- class discrete_optimization.tsp.solvers.tsp_solver.TspSolver(problem: Problem, params_objective_function: ParamsObjectiveFunction | None = None, **kwargs: Any)[source]
Bases: SolverDO
- problem: TspProblem