API Reference¶
This section provides auto-generated documentation from the solvOR source code.
Core Types¶
See the Types page for Result, Status, Progress, and ProgressCallback.
Main Solvers¶
Linear Programming¶
solvor.simplex
¶
Simplex Solver, linear programming that aged like wine.
You're walking along the edges of a giant crystal, always uphill, until you hit a corner that's optimal. You can visualize it. Most algorithms are abstract symbol manipulation. Simplex is a journey through space.
from solvor.simplex import solve_lp
# minimize c @ x, subject to A @ x <= b, x >= 0
result = solve_lp(c, A, b)
result = solve_lp(c, A, b, minimize=False) # maximize
How it works: starts at a vertex of the feasible polytope (phase 1 finds one if needed). Each iteration pivots to an adjacent vertex with better objective value. Bland's rule prevents cycling. Terminates when no improving neighbor exists.
Use this for:
- Linear objectives with linear constraints
- Resource allocation, blending, production planning
- Transportation and assignment problems
- When you need exact optimum (not approximate)
Parameters:
c: objective coefficients (minimize c @ x)
A: constraint matrix (A @ x <= b)
b: constraint bounds
minimize: True for min, False for max (default: True)
This also does the grunt work inside MILP, solving LP relaxations at each node.
Don't use this for: integer constraints (use MILP), non-linear objectives (use gradient or anneal), or problems with poor numerical scaling (simplex can struggle with badly scaled coefficients).
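To see why the optimum always sits at a vertex of the feasible polytope, a toy 2-variable LP can be solved by enumerating feasible vertices directly. This is an illustration only (exponential in general), not the library's code; solve_lp walks the same vertices far more efficiently:

```python
from itertools import combinations

def feasible_vertices(A, b):
    """Enumerate vertices of {x : A @ x <= b, x >= 0} in 2D by intersecting
    pairs of constraint boundaries and keeping the feasible points."""
    lines = list(zip(A, b)) + [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]  # add x=0, y=0
    verts = []
    for (a1, c1), (a2, c2) in combinations(lines, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:          # parallel boundaries: no intersection
            continue
        x = (c1 * a2[1] - a1[1] * c2) / det   # Cramer's rule
        y = (a1[0] * c2 - a2[0] * c1) / det
        if x >= -1e-9 and y >= -1e-9 and all(
            r[0] * x + r[1] * y <= rb + 1e-9 for r, rb in zip(A, b)
        ):
            verts.append((x, y))
    return verts

A = [(1.0, 1.0), (1.0, 0.0)]   # x + y <= 4, x <= 2
b = [4.0, 2.0]
c = (-3.0, -2.0)               # minimize -3x - 2y, i.e. maximize 3x + 2y
best = min(feasible_vertices(A, b), key=lambda v: c[0] * v[0] + c[1] * v[1])
# best is the vertex (2, 2), objective -10
```

Simplex gets to the same corner by pivoting edge-to-edge instead of checking every vertex.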
solve_lp(c, A, b, *, minimize=True, eps=1e-10, max_iter=100000)
¶
Solve linear program: minimize c @ x subject to A @ x <= b, x >= 0.
Source code in solvor/simplex.py
solvor.milp
¶
MILP Solver, linear optimization with integer constraints.
The workhorse of discrete optimization: diet problems, scheduling, set covering, facility location.
from solvor.milp import solve_milp
# minimize c @ x, subject to A @ x <= b, x >= 0, some x integer
result = solve_milp(c, A, b, integers=[0, 2])
result = solve_milp(c, A, b, integers=[0, 1], minimize=False) # maximize
# warm start from previous solution (prunes search tree)
result = solve_milp(c, A, b, integers=[0, 2], warm_start=previous.solution)
# find multiple solutions (result.solutions contains all found)
result = solve_milp(c, A, b, integers=[0, 2], solution_limit=5)
How it works: branch and bound using simplex as subroutine. Solves LP relaxations (ignoring integer constraints), branches on fractional values, prunes subtrees that can't beat current best. Best-first search prioritizes promising branches.
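The relax-branch-prune loop is easiest to see on a toy 0/1 knapsack, where a greedy fractional relaxation plays the role the LP relaxation plays inside solve_milp. This is an illustration of the idea, not the library's code:

```python
def knapsack_bnb(values, weights, cap):
    """Branch and bound sketch: relax (allow fractional items) to get an
    upper bound, branch on take/skip, prune when the bound can't beat best."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, cap_left, acc):
        # fractional relaxation: greedily fill remaining capacity
        for i in order[k:]:
            if weights[i] <= cap_left:
                cap_left -= weights[i]
                acc += values[i]
            else:
                return acc + values[i] * cap_left / weights[i]
        return acc

    def dfs(k, cap_left, acc):
        nonlocal best
        best = max(best, acc)                    # acc is always a feasible value
        if k == len(order) or bound(k, cap_left, acc) <= best:
            return                               # prune: relaxation can't beat incumbent
        i = order[k]
        if weights[i] <= cap_left:
            dfs(k + 1, cap_left - weights[i], acc + values[i])  # branch: take item
        dfs(k + 1, cap_left, acc)                               # branch: skip item

    dfs(0, cap, 0)
    return best

knapsack_bnb([60, 100, 120], [10, 20, 30], 50)  # 220
```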
Use this for:
- Linear objectives with integer constraints
- Diet/blending, scheduling, set covering
- Facility location, power grid design
- When you need proven optimal values
Parameters:
c: objective coefficients (minimize c @ x)
A: constraint matrix (A @ x <= b)
b: constraint bounds
integers: indices of integer-constrained variables
minimize: True for min, False for max (default: True)
warm_start: initial solution to prune search tree
solution_limit: find multiple solutions (default: 1)
heuristics: rounding + local search heuristics (default: True)
lns_iterations: LNS improvement passes, 0 = off (default: 0)
lns_destroy_frac: fraction of variables to unfix per LNS iteration (default: 0.3)
seed: random seed for LNS reproducibility
max_nodes: branch-and-bound node limit (default: 100000)
gap_tol: optimality gap tolerance (default: 1e-6)
CP is more expressive for logical constraints. SAT handles pure boolean. For continuous-only problems, use simplex directly.
Constraint Programming¶
solvor.sat
¶
SAT solver, boolean satisfiability with clause learning.
You're navigating a maze where every dead end teaches you which turns to avoid. Hit a contradiction? Learn a new rule that prevents the same mistake. The solver gets smarter with each conflict, cutting through exponential search space.
from solvor.sat import solve_sat
# Clauses in CNF: each clause is OR'd literals, all clauses AND'd
# Positive int = true, negative = false
# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
result = solve_sat([[1, 2], [-1, 3], [-2, -3]])
result = solve_sat(clauses, solution_limit=10) # Find multiple solutions
How it works: CDCL (Conflict-Driven Clause Learning) with VSIDS variable ordering. Makes decisions, propagates unit clauses, and when conflicts occur, learns new clauses that prevent the same conflict. Restarts periodically with Luby sequence.
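Using the literal convention above, checking a candidate model against CNF clauses is a one-liner. This is an illustrative helper, not part of solvor.sat:

```python
def satisfies(clauses, model):
    """True if every clause has at least one satisfied literal.
    Positive literal i is true iff model[i] is True (convention above)."""
    return all(any((lit > 0) == model[abs(lit)] for lit in clause) for clause in clauses)

clauses = [[1, 2], [-1, 3], [-2, -3]]
satisfies(clauses, {1: True, 2: False, 3: True})  # True
```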
Use this for:
- Boolean satisfiability problems
- Logic puzzles (Sudoku encoded as SAT)
- Scheduling conflicts and dependency resolution
- Hardware/software verification
Parameters:
clauses: list of clauses, each clause is list of literals (int)
assumptions: literals that must be true
max_conflicts: conflict limit before giving up
solution_limit: find multiple solutions
For integer domains use CP. For exact cover use DLX. Don't use for: optimization (MILP), continuous variables (simplex/gradient).
BinaryImplications
¶
Binary clause (a ∨ b) means ¬a → b and ¬b → a. Stores clause index for conflict analysis.
Source code in solvor/sat.py
luby(i)
¶
Luby restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
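The sequence follows a standard recurrence; a reference sketch (not necessarily the library's exact implementation in solvor/sat.py):

```python
def luby(i):
    """Luby sequence, 1-indexed: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,..."""
    k = 1
    while 2 ** k - 1 < i:                    # find the block containing i
        k += 1
    if 2 ** k - 1 == i:
        return 2 ** (k - 1)                  # i ends a complete block
    return luby(i - (2 ** (k - 1) - 1))      # otherwise recurse into the suffix

[luby(i) for i in range(1, 8)]  # [1, 1, 2, 1, 1, 2, 4]
```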
solvor.cp
¶
CP Solver, constraint programming with backtracking search.
Write constraints like a human ("all different", "x + y == 10"), and the solver finds valid assignments. Perfect for puzzles and scheduling.
from solvor.cp import Model
m = Model()
x = m.int_var(0, 9, 'x')
y = m.int_var(0, 9, 'y')
m.add(m.all_different([x, y]))
m.add(x + y == 10)
result = m.solve() # {'x': 3, 'y': 7} or similar
# global constraints for scheduling and routing
m.add(m.circuit([x, y, z])) # Hamiltonian cycle
m.add(m.no_overlap(starts, durations)) # intervals don't overlap
m.add(m.cumulative(starts, durations, demands, capacity)) # resource limit
How it works: DFS with constraint propagation and arc consistency. Falls back to SAT encoding automatically for complex global constraints. MRV (minimum remaining values) heuristic picks the most constrained variable first.
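The MRV step can be sketched directly (illustrative helper with domains as sets, not solvor's internals):

```python
def mrv_pick(domains):
    """MRV heuristic sketch: choose the undecided variable with the fewest
    remaining values, i.e. the most constrained one."""
    undecided = {v: d for v, d in domains.items() if len(d) > 1}
    return min(undecided, key=lambda v: len(undecided[v]))

mrv_pick({'x': {1, 2, 3}, 'y': {4, 5}, 'z': {7}})  # 'y' (z is already decided)
```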
Use this for:
- Sudoku, N-Queens, logic puzzles
- Nurse rostering and timetabling
- Scheduling with complex constraints
- When "all different" or other global constraints fit naturally
Parameters:
Model.int_var(lb, ub, name): create integer variable
Model.add(constraint): add constraint
Model.solve(hints, solution_limit, solver): find solutions
hints: initial value hints to guide search
solution_limit: find multiple solutions
solver: 'auto' (default), 'dfs', or 'sat'
For optimization use MILP. For heavier constraint logic, see Z3.
Expr
¶
Wrapper for expression tuples to support comparison operators.
Source code in solvor/cp.py
Model
¶
Source code in solvor/cp.py
circuit(variables)
¶
Variables form a Hamiltonian cycle.
cumulative(starts, durations, demands, capacity)
¶
At any time, sum of active demands <= capacity.
Source code in solvor/cp.py
no_overlap(starts, durations)
¶
Intervals (starts[i], starts[i]+durations[i]) don't overlap.
Source code in solvor/cp.py
solve(*, hints=None, solution_limit=1, solver='auto', **kwargs)
¶
Solve the constraint satisfaction problem.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `hints` | `dict[str, int] \| None` | Initial value hints to guide search. | `None` |
| `solution_limit` | `int` | Max solutions to find (default 1). | `1` |
| `solver` | `str` | `'auto'` (default), `'dfs'` (fast backtracking), or `'sat'` (SAT encoding). `'auto'` picks DFS for simple constraints, SAT for global constraints. | `'auto'` |
| `**kwargs` | `Any` | Additional solver options. | `{}` |
Returns:
| Type | Description |
|---|---|
| `Result` | Result with solution dict mapping variable names to values. |
Source code in solvor/cp.py
Metaheuristics¶
solvor.anneal
¶
Simulated Annealing, black box optimization that handles local optima well.
Inspired by metallurgical annealing: heat metal, let it cool slowly, atoms find low-energy configurations. Here, "temperature" controls how likely we are to accept worse solutions, allowing escape from local optima.
from solvor.anneal import anneal, linear_cooling
result = anneal(start, objective_fn, neighbor_fn)
result = anneal(start, objective_fn, neighbor_fn, minimize=False) # maximize
result = anneal(start, objective_fn, neighbor_fn, cooling=linear_cooling())
How it works: at each step, generate a neighbor. If better, accept it. If worse, accept with probability exp(-delta/temperature). High temperature = explore freely, low temperature = exploit best found. Temperature decreases over time.
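The acceptance rule is a few lines of Python; this is an illustrative helper (the name is ours, not part of solvor):

```python
import math
import random

def metropolis_accept(delta, temperature, rng=random):
    """Acceptance rule described above: improvements always pass, worsening
    moves pass with probability exp(-delta / temperature)."""
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)

metropolis_accept(-1.0, 10.0)   # True: improving moves are always accepted
```

At high temperature almost any move passes; as the temperature drops, worsening moves become vanishingly rare.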
Use this for:
- Black-box optimization without gradients
- Landscapes with many local optima
- Fast prototyping when problem structure is unknown
- When you can define a good neighbor function
Parameters:
initial: starting solution
objective_fn: function mapping solution to score
neighbors: function returning a random neighbor
temperature: starting temperature (default: 1000)
cooling: cooling schedule or rate (default: 0.9995)
The neighbor function is key: good neighbors make small moves, not random jumps. Think "swap two cities" for TSP, not "shuffle everything".
Cooling schedules:
exponential_cooling(rate) : temp = initial * rate^iter (default, classic)
linear_cooling(min_temp) : temp decreases linearly to min_temp
logarithmic_cooling(c) : temp = initial / (1 + c * log(1 + iter))
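The formulas can be checked numerically. These free functions just restate the math above (the library's schedule callables may have a different signature and carry internal state):

```python
import math

def exp_temp(initial, rate, i):
    """temp = initial * rate**i, as in exponential_cooling."""
    return initial * rate ** i

def log_temp(initial, c, i):
    """temp = initial / (1 + c * log(1 + i)), as in logarithmic_cooling."""
    return initial / (1 + c * math.log(1 + i))

exp_temp(1000.0, 0.5, 3)   # 125.0
```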
Don't use this for: problems needing guarantees, or constraints easier to encode in MILP/CP. Consider tabu (memory) or genetic (population) for more control.
anneal(initial, objective_fn, neighbors, *, minimize=True, temperature=1000.0, cooling=0.9995, min_temp=1e-08, max_iter=100000, seed=None, on_progress=None, progress_interval=0)
¶
Simulated annealing with configurable cooling schedule.
Source code in solvor/anneal.py
exponential_cooling(rate=0.9995)
¶
Geometric cooling: temp = initial * rate^iteration.
Source code in solvor/anneal.py
linear_cooling(min_temp=1e-08)
¶
Linear cooling: temp decreases linearly from initial to min_temp.
Source code in solvor/anneal.py
logarithmic_cooling(c=1.0)
¶
Logarithmic cooling: temp = initial / (1 + c * log(1 + iteration)).
Source code in solvor/anneal.py
solvor.tabu
¶
Tabu Search, local search with memory to escape cycles.
Like anneal, this explores neighbors. Unlike anneal, it's greedy: always pick the best neighbor. The trick is the tabu list, a memory of recent moves that are temporarily forbidden. This prevents cycling back to solutions you just left, forcing the search to explore new territory.
Use this for routing (traveling salesman problem), scheduling, or when you want more control than anneal. Tabu is deterministic where anneal is probabilistic, so results are more reproducible and easier to debug.
from solvor.tabu import tabu_search, solve_tsp
result = tabu_search(start, objective_fn, neighbors_fn)
result = solve_tsp(distance_matrix) # built-in TSP helper
How it works: each iteration picks the best neighbor not on the tabu list. The chosen move goes on the list for cooldown iterations. Aspiration allows tabu moves if they beat the global best.
Use this for:
- Routing (TSP, VRP)
- Scheduling and timetabling
- When you want deterministic, reproducible results
- When you need more control than anneal
The neighbor function must return (move, new_solution) pairs where move is hashable (so it can go in the tabu list). Think (i, j) for "swap cities i and j".
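Under those conventions (minimize, hashable moves), the core loop can be sketched in a few lines. This is an illustration, not solvor's implementation:

```python
def tabu_min(start, objective, neighbors, cooldown=10, max_no_improve=20):
    """Minimal tabu loop: greedy best-neighbor steps, recent moves barred for
    `cooldown` iterations, aspiration overrides for new global bests."""
    current, best, best_cost = start, start, objective(start)
    tabu = {}                        # move -> iteration when the ban expires
    it = no_improve = 0
    while no_improve < max_no_improve:
        it += 1
        candidates = [
            (move, sol) for move, sol in neighbors(current)
            if tabu.get(move, 0) <= it          # move is not tabu, or...
            or objective(sol) < best_cost       # ...aspiration: beats global best
        ]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: objective(ms[1]))
        tabu[move] = it + cooldown
        cost = objective(current)
        if cost < best_cost:
            best, best_cost, no_improve = current, cost, 0
        else:
            no_improve += 1
    return best, best_cost

# minimize (x - 3)^2 over integers with unit-step moves
best, cost = tabu_min(0, lambda x: (x - 3) ** 2, lambda x: [('+1', x + 1), ('-1', x - 1)])
# best == 3, cost == 0
```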
Parameters:
initial: starting solution
objective_fn: function mapping solution to score
neighbors: returns list of (move, new_solution) pairs
cooldown: how long moves stay tabu (default: 10)
max_no_improve: stop after this many iterations without improvement
seed: random seed for tie-breaking when multiple neighbors have equal cost
Genetic is population-based (more overhead, better diversity), anneal is probabilistic (simpler setup), tabu is greedy with memory (more predictable).
solvor.genetic
¶
Genetic Algorithm, population-based search that excels at multi-objective problems.
Slower than modern solvers for single-objective problems, and gradients will beat it on continuous optimization. But genetic algorithms are king at multi-objective optimization where objectives compete (Pareto fronts). Great at exploration when you're searching in the dark without structure to exploit. And they parallelize beautifully while most solvers don't.
Unlike anneal/tabu (single solution), this evolves a whole population. More overhead, but better diversity and less likely to get trapped.
from solvor.genetic import evolve
result = evolve(objective_fn, population, crossover_fn, mutate_fn)
result = evolve(objective_fn, pop, cross, mut, minimize=False) # maximize
How it works: each generation, select parents via tournament, combine with crossover to produce children, apply random mutations. Elite individuals survive unchanged. Adaptive mutation rate increases when stagnating.
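Tournament selection from that loop fits in one line (hypothetical helper, not solvor's internals):

```python
import random

def tournament(population, fitness, k=3, rng=random):
    """Sample k contenders, keep the fittest (lowest score, matching
    minimize=True). Larger k = stronger selection pressure (tournament_k)."""
    return min(rng.sample(population, k), key=fitness)
```

With k equal to the population size this degenerates to always picking the best individual; small k keeps weaker parents in the gene pool.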
Use this for:
- Multi-objective optimization (Pareto fronts)
- Exploration when problem structure is unknown
- Problems that parallelize well
- When you can define good crossover/mutation operators
Parameters:
objective_fn: fitness function mapping solution to score
population: starting solutions (bigger = more diversity but slower)
crossover: combine two parents into one child
mutate: small random changes to a solution
elite_size: survivors per generation (default: 2)
mutation_rate: how often to mutate (default: 0.1)
tournament_k: selection pressure (default: 3)
Don't use this for: problems with gradient info (use gradient descent), convex problems (use simplex), or discrete structured problems (use CP/SAT).
evolve(objective_fn, population, crossover, mutate, *, minimize=True, elite_size=2, mutation_rate=0.1, adaptive_mutation=False, max_iter=100, tournament_k=3, seed=None, on_progress=None, progress_interval=0)
¶
Genetic algorithm with optional adaptive mutation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `adaptive_mutation` | `bool` | If True, increase mutation rate when population diversity is low (stagnation), decrease when improving. | `False` |
Source code in solvor/genetic.py
Graph Algorithms¶
solvor.dijkstra
¶
Dijkstra's algorithm for weighted shortest paths.
The classic algorithm for finding shortest paths in graphs with non-negative edge weights. Named after Edsger Dijkstra, who designed it in 1956.
from solvor.dijkstra import dijkstra
result = dijkstra(start, goal, neighbors)
result = dijkstra(start, lambda s: s.is_target, neighbors)
How it works: maintains a priority queue of (distance, node) pairs. Each iteration pops the closest unvisited node, marks it visited, and relaxes its edges. Guarantees shortest path when all edges are non-negative.
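A stripped-down version of that loop, returning only the distance (a sketch; the library adds max_iter/max_cost guards and path reconstruction):

```python
import heapq

def dijkstra_sketch(start, goal, neighbors):
    """Pop the closest unsettled node, settle it, relax its outgoing edges.
    Returns the shortest distance to goal, or None if unreachable."""
    dist, heap, settled = {start: 0}, [(0, start)], set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in settled:
            continue                 # stale heap entry
        if node == goal:
            return d
        settled.add(node)
        for nxt, cost in neighbors(node):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dijkstra_sketch("a", "c", lambda n: graph[n])  # 3 (via b, not the direct edge)
```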
Use this for:
- Road networks and routing
- Any graph where "shortest" means minimum total weight
- When edge weights are non-negative
- As foundation for A* (add heuristic for goal-directed search)
Parameters:
start: starting node
goal: target node, or predicate function returning True at goal
neighbors: function returning (neighbor, edge_cost) pairs
Two variants available:
dijkstra() - Callback-based, works with any node type (pure Python)
dijkstra_edges() - Edge-list, integer nodes 0..n-1, has Rust backend (5-10x faster)
Use dijkstra_edges(backend="python") for the pure Python implementation.
For negative edges use bellman_ford; Dijkstra's negativity was legendary, just not in his algorithm. For unweighted graphs use bfs (simpler). With a good distance estimate, use a_star.
dijkstra(start, goal, neighbors, *, max_iter=1000000, max_cost=None)
¶
Find shortest path in a weighted graph with non-negative edges.
Source code in solvor/dijkstra.py
dijkstra_edges(n_nodes, edges, source, *, target=None)
¶
Edge-list Dijkstra for integer node graphs.
Source code in solvor/dijkstra.py
solvor.a_star
¶
A* search for goal-directed shortest paths with heuristics.
Dijkstra explores in all directions like ripples in a pond. A* knows where it's going and prioritizes paths that look promising. Give it a heuristic (estimate to goal) and it expands far fewer nodes.
from solvor.a_star import astar, astar_grid
result = astar(start, goal, neighbors, heuristic)
result = astar_grid(maze, (0, 0), (9, 9), directions=8)
How it works: like Dijkstra but prioritizes by f(n) = g(n) + h(n), where g is cost so far and h is estimated cost to goal. Heuristic must be admissible (never overestimate) for optimal results. Weighted A* (weight > 1) trades optimality for speed.
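For 4-direction grids the usual admissible heuristic is Manhattan distance; for directions=8 with unit-cost diagonals, Chebyshev distance stays admissible. These helpers are illustrations (astar_grid's heuristic='auto' presumably picks something equivalent):

```python
def manhattan(a, b):
    """Admissible for 4-direction grids: never overestimates path length."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def chebyshev(a, b):
    """Admissible for 8-direction grids with unit-cost diagonal moves."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

manhattan((0, 0), (9, 9))  # 18
```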
Use this for:
- Pathfinding with known goal location
- When you have a good distance heuristic
- Grid navigation and game AI
- When Dijkstra is too slow
Parameters:
start: starting node
goal: target node or predicate function
neighbors: returns (neighbor, cost) pairs
heuristic: estimates distance to goal (must not overestimate)
weight: heuristic weight, >1 for faster suboptimal search
astar_grid provides built-in heuristics for 2D grids.
Don't use this for: unknown goal (use dijkstra), negative edges (use bellman_ford).
astar(start, goal, neighbors, heuristic, *, weight=1.0, max_iter=1000000, max_cost=None)
¶
A* search with heuristic guidance, returns optimal path when weight=1.
Source code in solvor/a_star.py
astar_grid(grid, start, goal, *, directions=4, heuristic='auto', blocked=1, costs=None, weight=1.0, max_iter=1000000)
¶
A* for 2D grids with built-in heuristics and neighbor generation.
Source code in solvor/a_star.py
solvor.flow
¶
Network Flow, for assignment, matching, and bottleneck analysis.
Use max_flow when you need "how much can I push through this network?" Use min_cost_flow when cost matters: "what's the cheapest way to route X units?"
from solvor.flow import max_flow, min_cost_flow
# graph: {node: [(neighbor, capacity, cost), ...]}
result = max_flow(graph, source='s', sink='t')
result = min_cost_flow(graph, source='s', sink='t', demand=10)
How it works: max_flow uses Ford-Fulkerson with BFS (Edmonds-Karp), repeatedly finding augmenting paths until none exist. min_cost_flow uses successive shortest paths via Bellman-Ford, finding the cheapest augmenting path each time. The max-flow min-cut theorem means you find bottlenecks for free.
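The augmenting-path loop can be sketched compactly. This is an Edmonds-Karp illustration using dict-of-dicts capacities, not solvor's internal representation:

```python
from collections import deque

def max_flow_sketch(capacity, s, t):
    """BFS finds a shortest augmenting path in the residual graph,
    push its bottleneck, repeat until no path remains."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse residual edges
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:                 # BFS for augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                                  # no augmenting path left
        path, v = [], t
        while parent[v] is not None:                     # walk back to source
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)      # bottleneck capacity
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push                       # allow flow cancellation
        flow += push

capacity = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
max_flow_sketch(capacity, "s", "t")  # 4
```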
Use this for:
- Assignment and matching problems
- Server-to-request allocation
- Transportation and logistics
- Finding network bottlenecks (max-flow = min-cut)
Parameters:
graph: adjacency dict {node: [(neighbor, capacity, cost), ...]}
source: source node
sink: sink node
demand: (min_cost_flow only) units to route
Works well with NetworkX for graph construction. For heavier graph work (shortest paths, centrality, community detection), NetworkX is more extensive.
max_flow(graph, source, sink)
¶
Find maximum flow from source to sink using Ford-Fulkerson (FFA) with BFS.
Source code in solvor/flow.py
min_cost_flow(graph, source, sink, demand)
¶
Route demand units from source to sink at minimum total cost.
Source code in solvor/flow.py
solve_assignment(cost_matrix)
¶
Solve assignment problem via min-cost flow reduction.