discrete_optimization.ovensched package
Subpackages
- discrete_optimization.ovensched.solvers package
- Submodules
- discrete_optimization.ovensched.solvers.cpsat module
- discrete_optimization.ovensched.solvers.dp module
- discrete_optimization.ovensched.solvers.greedy module
- discrete_optimization.ovensched.solvers.mutations module
- discrete_optimization.ovensched.solvers.optal module
- Module contents
Submodules
discrete_optimization.ovensched.parser module
- discrete_optimization.ovensched.parser.get_data_available(data_folder: str | None = None, data_home: str | None = None) list[str][source]
Get datasets available for oven scheduling problem.
- Parameters:
- data_folder: folder where datasets for ovensched should be found.
If None, we look in “ovensched” subdirectory of data_home.
- data_home: root directory for all datasets. If None, set by
default to “~/discrete_optimization_data”
- discrete_optimization.ovensched.parser.parse_dat_file(file_path: str) OvenSchedulingProblem[source]
Parses a .dat file and returns an OvenSchedulingProblem instance, with error handling for malformed input.
discrete_optimization.ovensched.problem module
- class discrete_optimization.ovensched.problem.MachineData(capacity: 'int', initial_attribute: 'TaskAttribute', availability: 'list[tuple[int, int]]')[source]
Bases: object
- availability: list[tuple[int, int]]
- capacity: int
- initial_attribute: int
- class discrete_optimization.ovensched.problem.OvenSchedulingProblem(n_jobs: int, n_machines: int, tasks_data: list[TaskData], machines_data: list[MachineData], setup_costs: list[list[int]], setup_times: list[list[int]])[source]
Bases: SchedulingProblem[int], AllocationProblem[int, tuple[int, int]]
Defines an instance of the Oven Scheduling Problem (OSP) and its evaluation logic.
- evaluate(variable: OvenSchedulingSolution) dict[str, float][source]
Evaluates a solution’s performance.
- get_attribute_register() EncodingRegister[source]
Defines the solution’s encoding.
- get_dummy_solution(**kwargs) Solution[source]
Generate a dummy (random) solution using the vector-based encoding.
This creates a VectorOvenSchedulingSolution with a random permutation, which is then decoded into a feasible schedule using the greedy decoder.
- Parameters:
**kwargs – Optional arguments:
- random_seed: random seed for reproducibility
- max_open_batches: max open batches for the decoder (default: use class default)
- Returns:
VectorOvenSchedulingSolution with random permutation
- get_objective_register() ObjectiveRegister[source]
Defines the problem’s objectives.
- satisfy(variable: OvenSchedulingSolution) bool[source]
Checks if a solution is feasible by verifying static and temporal constraints.
If a timed_schedule is already cached in the solution, this function will validate the cached timings. Otherwise, it will attempt to compute them.
- property tasks_list: list[int]
List of all tasks to schedule or allocate to.
- property unary_resources_list: list[tuple[int, int]]
Available unary resources.
These can correspond to employees (rcpsp-multiskill), teams (workforce-scheduling), or a mix of several types.
- class discrete_optimization.ovensched.problem.OvenSchedulingSolution(problem: OvenSchedulingProblem, schedule_per_machine: dict[int, list[ScheduleInfo]])[source]
Bases: SchedulingSolution[int], AllocationSolution[int, tuple[int, int]]
Represents a solution to the Oven Scheduling Problem.
- copy() OvenSchedulingSolution[source]
Creates a deep copy of the solution.
- get_max_nb_batch_per_machine() int[source]
Get the maximum number of batches used on any machine in this solution.
This is useful for configuring CP-SAT models with warm-start: instead of using a large default, we can set max_nb_batch close to what the warm-start solution uses, reducing the model size.
- Returns:
Maximum number of batches across all machines
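The warm-start sizing described above reduces to a maximum over machines. A minimal sketch (the helper name and the plain-dict schedule representation are assumptions):

```python
def max_nb_batch_sketch(schedule_per_machine: dict[int, list]) -> int:
    # Largest number of batches on any single machine; useful for setting
    # a CP-SAT model's max_nb_batch close to a warm-start solution
    # instead of a large default.
    return max((len(batches) for batches in schedule_per_machine.values()), default=0)
```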
- get_summary_string() str[source]
Generate a human-readable summary of the solution.
- Returns:
A formatted string describing the solution
- is_allocated(task: int, unary_resource: tuple[int, int]) bool[source]
Return whether the given unary resource is used by the given task.
- Parameters:
task – the task to check
unary_resource – the unary resource to check
- Returns:
True if the resource is allocated to the task, False otherwise
- problem: OvenSchedulingProblem
- schedule_per_task: dict[int, tuple[int, int]]
- class discrete_optimization.ovensched.problem.ScheduleInfo(tasks: set[int], task_attribute: int, start_time: int, end_time: int, machine_batch_index: tuple[int, int])[source]
Bases: object
Describes a batch: a set of tasks processed together on a machine.
- end_time: int
- machine_batch_index: tuple[int, int]
- start_time: int
- task_attribute: int
- tasks: set[int]
- class discrete_optimization.ovensched.problem.TaskData(attribute: 'TaskAttribute', min_duration: 'int', max_duration: 'int', earliest_start: 'int', latest_end: 'int', eligible_machines: 'set[Machine]', size: 'int')[source]
Bases: object
- attribute: int
- earliest_start: int
- eligible_machines: set[int]
- latest_end: int
- max_duration: int
- min_duration: int
- size: int
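As one illustration of how the setup_costs matrix could enter evaluate(), the sketch below accumulates costs along consecutive batch attributes on a single machine, starting from the machine's initial_attribute. The indexing convention `setup_costs[from_attribute][to_attribute]` and the accumulation rule are assumptions, not the library's documented formula:

```python
def total_setup_cost(
    setup_costs: list[list[int]],
    initial_attribute: int,
    batch_attributes: list[int],
) -> int:
    # Walk the sequence of batch attributes on one machine, charging
    # setup_costs[previous_attribute][next_attribute] at each transition,
    # starting from the machine's initial attribute.
    cost = 0
    prev = initial_attribute
    for attr in batch_attributes:
        cost += setup_costs[prev][attr]
        prev = attr
    return cost
```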
discrete_optimization.ovensched.solution_vector module
Vector-based solution encoding for the Oven Scheduling Problem.
This module provides a permutation-based encoding suitable for metaheuristic algorithms like simulated annealing, hill climbing, and genetic algorithms.
- class discrete_optimization.ovensched.solution_vector.VectorOvenSchedulingSolution(problem: OvenSchedulingProblem, permutation: npt.NDArray[np.int_] | None = None, schedule_per_machine: dict['Machine', list['ScheduleInfo']] | None = None)[source]
Bases: Solution
Vector-based solution for the Oven Scheduling Problem.
This solution is encoded as a permutation of tasks, which is decoded into a schedule using a greedy decoding scheme. This encoding is suitable for metaheuristic algorithms like simulated annealing and genetic algorithms.
The permutation represents the priority order in which tasks are scheduled. A greedy decoder processes tasks in this order and assigns them to machines.
The decoder can maintain multiple open batches per machine, allowing better consolidation of tasks with the same attribute.
- problem
The OvenSchedulingProblem instance
- Type: OvenSchedulingProblem
- permutation
Task permutation (numpy array of task indices)
- _schedule_cache
Cached decoded schedule
- _cache_max_open
Max open batches used for cached schedule
- DEFAULT_MAX_OPEN_BATCHES = 5
- change_problem(new_problem: OvenSchedulingProblem) VectorOvenSchedulingSolution[source]
Change the problem instance.
- Parameters:
new_problem – New problem instance
- Returns:
New solution with the new problem
- copy() VectorOvenSchedulingSolution[source]
Create a deep copy of the solution.
- evaluate(max_open_batches: int | None = None) dict[str, float][source]
Evaluate the solution.
- Parameters:
max_open_batches – Maximum open batches for decoder (uses DEFAULT_MAX_OPEN_BATCHES if None)
- Returns:
Dictionary of objective values
- get_schedule(max_open_batches: int | None = None)[source]
Get the decoded schedule (cached).
- Parameters:
max_open_batches – Maximum open batches per machine during decoding. If None, uses DEFAULT_MAX_OPEN_BATCHES.
- Returns:
OvenSchedulingSolution instance
- get_summary_string(max_open_batches: int | None = None) str[source]
Generate a human-readable summary of the solution.
- Parameters:
max_open_batches – Maximum open batches for decoder (uses DEFAULT_MAX_OPEN_BATCHES if None)
- Returns:
Formatted string describing the solution
- is_feasible(max_open_batches: int | None = None) bool[source]
Check if the solution is feasible.
- Parameters:
max_open_batches – Maximum open batches for decoder (uses DEFAULT_MAX_OPEN_BATCHES if None)
- Returns:
True if feasible, False otherwise
- print_summary(max_open_batches: int | None = None)[source]
Print a human-readable summary of the solution.
- Parameters:
max_open_batches – Maximum open batches for decoder (uses DEFAULT_MAX_OPEN_BATCHES if None)
- problem: OvenSchedulingProblem
- property schedule_per_machine
Get the schedule_per_machine dictionary (delegates to decoded schedule).
This property makes VectorOvenSchedulingSolution compatible with the problem’s evaluate() and satisfy() methods.
- to_oven_scheduling_solution(max_open_batches: int | None = None) OvenSchedulingSolution[source]
Convert to standard OvenSchedulingSolution.
This is useful for compatibility with solvers that expect the standard solution format (e.g., CP-SAT warm-start).
- Parameters:
max_open_batches – Max open batches for decoder (uses DEFAULT_MAX_OPEN_BATCHES if None)
- Returns:
OvenSchedulingSolution with decoded schedule
- discrete_optimization.ovensched.solution_vector.decode_permutation_to_schedule(problem: OvenSchedulingProblem, permutation: npt.NDArray[np.int_], max_open_batches_per_machine: int = 3) dict['Machine', list['ScheduleInfo']][source]
Decode a task permutation into a schedule using an improved greedy strategy.
This decoder maintains multiple open batches per machine simultaneously, allowing better batching of tasks with the same attribute even if they appear at different positions in the permutation.
- Strategy:
- Process tasks in permutation order.
- For each task, try to add it to a compatible open batch on an eligible machine (compatible = same attribute, fits within capacity, compatible duration).
- If no compatible batch exists, start a new one (closing the oldest batch if the maximum number of open batches is reached).
- A heap manages batch priority, so the oldest batches are closed first.
- Parameters:
problem – The OvenSchedulingProblem instance
permutation – Array of task indices in scheduling order
max_open_batches_per_machine – Maximum number of open batches per machine (default: 3)
- Returns:
Dictionary mapping each machine to its list of scheduled batches
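The decoding strategy above can be sketched with a toy, self-contained implementation. This sketch keeps only the batching logic (same attribute, capacity limit, and a per-machine cap on open batches with oldest-first closing via a heap) and deliberately omits timing, duration compatibility, and machine availability; all names and the simplified data layout are assumptions:

```python
import heapq
from dataclasses import dataclass, field


@dataclass
class Batch:
    # An open batch on a machine: one shared attribute, accumulated size.
    attribute: int
    size: int = 0
    tasks: list[int] = field(default_factory=list)


def decode_sketch(
    task_attributes: list[int],
    task_sizes: list[int],
    eligible_machines: list[set[int]],
    machine_capacities: list[int],
    permutation: list[int],
    max_open_batches: int = 3,
) -> dict[int, list[Batch]]:
    n_machines = len(machine_capacities)
    # Heap of (age, batch) per machine: lower age = older batch.
    open_batches: dict[int, list[tuple[int, Batch]]] = {m: [] for m in range(n_machines)}
    closed: dict[int, list[Batch]] = {m: [] for m in range(n_machines)}
    counter = 0
    for task in permutation:
        attr, size = task_attributes[task], task_sizes[task]
        placed = False
        # Try to join a compatible open batch on an eligible machine.
        for m in eligible_machines[task]:
            for _, batch in open_batches[m]:
                if batch.attribute == attr and batch.size + size <= machine_capacities[m]:
                    batch.size += size
                    batch.tasks.append(task)
                    placed = True
                    break
            if placed:
                break
        if not placed:
            m = min(eligible_machines[task])  # sketch: pick the first eligible machine
            if len(open_batches[m]) >= max_open_batches:
                # Close the oldest open batch to make room.
                _, oldest = heapq.heappop(open_batches[m])
                closed[m].append(oldest)
            heapq.heappush(open_batches[m], (counter, Batch(attr, size, [task])))
            counter += 1
    # Close any remaining open batches in age order.
    for m, heap in open_batches.items():
        closed[m].extend(b for _, b in sorted(heap))
    return closed
```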
- discrete_optimization.ovensched.solution_vector.extract_permutation_from_schedule(problem: OvenSchedulingProblem, schedule_per_machine: dict['Machine', list['ScheduleInfo']]) npt.NDArray[np.int_][source]
Extract a task permutation from a schedule.
Tasks are sorted by their start time (earlier tasks first). If two tasks start at the same time, they are sorted by task ID.
- Parameters:
problem – The problem instance (for validation)
schedule_per_machine – The schedule to extract from
- Returns:
Array of task indices in start-time order
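The documented ordering (start time first, ties broken by task ID) can be sketched as follows; the simplified batch representation `(tasks, start_time)` is an assumption:

```python
import numpy as np


def extract_permutation_sketch(
    schedule_per_machine: dict[int, list[tuple[set[int], int]]],
) -> np.ndarray:
    # Flatten all batches into (start_time, task_id) pairs, then sort:
    # earlier start times first, ties broken by task ID.
    pairs = sorted(
        (start, task)
        for batches in schedule_per_machine.values()
        for tasks, start in batches
        for task in tasks
    )
    return np.array([task for _, task in pairs], dtype=np.int_)
```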
- discrete_optimization.ovensched.solution_vector.generate_random_permutation(problem: OvenSchedulingProblem) npt.NDArray[np.int_][source]
Generate a random task permutation.
- Parameters:
problem – The problem instance
- Returns:
Random permutation of task indices
discrete_optimization.ovensched.utils module
Utility functions for visualization and analysis of oven scheduling solutions.
- discrete_optimization.ovensched.utils.plot_attribute_distribution(solution: OvenSchedulingSolution, figsize: tuple[int, int] = (10, 6)) None[source]
Plot distribution of task attributes across batches.
- Parameters:
solution – The solution to analyze
figsize – Figure size (width, height) in inches
- Raises:
ImportError – If matplotlib is not available
- discrete_optimization.ovensched.utils.plot_machine_utilization(solution: OvenSchedulingSolution, figsize: tuple[int, int] = (12, 6)) None[source]
Plot machine utilization statistics.
- Parameters:
solution – The solution to analyze
figsize – Figure size (width, height) in inches
- Raises:
ImportError – If matplotlib is not available
- discrete_optimization.ovensched.utils.plot_solution(solution: OvenSchedulingSolution, figsize: tuple[int, int] = (16, 10), show_task_ids: bool = True, show_setup_times: bool = True, title: str | None = None) None[source]
Plot a Gantt chart visualization of the oven scheduling solution.
- Parameters:
solution – The solution to visualize
figsize – Figure size (width, height) in inches
show_task_ids – Whether to show task IDs on the chart
show_setup_times – Whether to show setup times as hatched areas
title – Custom title for the plot (default: auto-generated)
- Raises:
ImportError – If matplotlib is not available
Module contents
Oven Scheduling Problem module.
- class discrete_optimization.ovensched.MachineData(capacity: 'int', initial_attribute: 'TaskAttribute', availability: 'list[tuple[int, int]]')[source]
Bases: object
- availability: list[tuple[int, int]]
- capacity: int
- initial_attribute: int
- class discrete_optimization.ovensched.OvenSchedulingProblem(n_jobs: int, n_machines: int, tasks_data: list[TaskData], machines_data: list[MachineData], setup_costs: list[list[int]], setup_times: list[list[int]])[source]
Bases: SchedulingProblem[int], AllocationProblem[int, tuple[int, int]]
Defines an instance of the Oven Scheduling Problem (OSP) and its evaluation logic.
- evaluate(variable: OvenSchedulingSolution) dict[str, float][source]
Evaluates a solution’s performance.
- get_attribute_register() EncodingRegister[source]
Defines the solution’s encoding.
- get_dummy_solution(**kwargs) Solution[source]
Generate a dummy (random) solution using the vector-based encoding.
This creates a VectorOvenSchedulingSolution with a random permutation, which is then decoded into a feasible schedule using the greedy decoder.
- Parameters:
**kwargs – Optional arguments:
- random_seed: random seed for reproducibility
- max_open_batches: max open batches for the decoder (default: use class default)
- Returns:
VectorOvenSchedulingSolution with random permutation
- get_objective_register() ObjectiveRegister[source]
Defines the problem’s objectives.
- satisfy(variable: OvenSchedulingSolution) bool[source]
Checks if a solution is feasible by verifying static and temporal constraints.
If a timed_schedule is already cached in the solution, this function will validate the cached timings. Otherwise, it will attempt to compute them.
- property tasks_list: list[int]
List of all tasks to schedule or allocate to.
- property unary_resources_list: list[tuple[int, int]]
Available unary resources.
These can correspond to employees (rcpsp-multiskill), teams (workforce-scheduling), or a mix of several types.
- class discrete_optimization.ovensched.OvenSchedulingSolution(problem: OvenSchedulingProblem, schedule_per_machine: dict[int, list[ScheduleInfo]])[source]
Bases: SchedulingSolution[int], AllocationSolution[int, tuple[int, int]]
Represents a solution to the Oven Scheduling Problem.
- copy() OvenSchedulingSolution[source]
Creates a deep copy of the solution.
- get_max_nb_batch_per_machine() int[source]
Get the maximum number of batches used on any machine in this solution.
This is useful for configuring CP-SAT models with warm-start: instead of using a large default, we can set max_nb_batch close to what the warm-start solution uses, reducing the model size.
- Returns:
Maximum number of batches across all machines
- get_summary_string() str[source]
Generate a human-readable summary of the solution.
- Returns:
A formatted string describing the solution
- is_allocated(task: int, unary_resource: tuple[int, int]) bool[source]
Return whether the given unary resource is used by the given task.
- Parameters:
task – the task to check
unary_resource – the unary resource to check
- Returns:
True if the resource is allocated to the task, False otherwise
- problem: OvenSchedulingProblem
- schedule_per_task: dict[int, tuple[int, int]]
- class discrete_optimization.ovensched.ScheduleInfo(tasks: set[int], task_attribute: int, start_time: int, end_time: int, machine_batch_index: tuple[int, int])[source]
Bases: object
Describes a batch: a set of tasks processed together on a machine.
- end_time: int
- machine_batch_index: tuple[int, int]
- start_time: int
- task_attribute: int
- tasks: set[int]
- class discrete_optimization.ovensched.TaskData(attribute: 'TaskAttribute', min_duration: 'int', max_duration: 'int', earliest_start: 'int', latest_end: 'int', eligible_machines: 'set[Machine]', size: 'int')[source]
Bases: object
- attribute: int
- earliest_start: int
- eligible_machines: set[int]
- latest_end: int
- max_duration: int
- min_duration: int
- size: int