API Reference¶
This section provides auto-generated documentation from the solvOR source code.
Core Types¶
solvor.types
¶
Shared types for all solvers.
Progress
dataclass
¶
Solver progress info passed to callbacks.
iteration: Current iteration number
objective: Current objective value
best: Best objective found so far (None if same as objective)
evaluations: Number of objective function evaluations
Source code in solvor/types.py
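A minimal sketch of a callback that consumes these fields (on_progress and progress_interval appear in evolve's signature below; whether every solver accepts the same hook is an assumption):
from solvor.types import Progress
def log_progress(p: Progress):
    # print one line per reported iteration
    print(f"iter={p.iteration} obj={p.objective} best={p.best} evals={p.evaluations}")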
Main Solvers¶
Linear Programming¶
solvor.simplex
¶
Simplex Solver, linear programming that aged like wine.
You're walking along the edges of a giant crystal, always uphill, until you hit a corner that's optimal. You can visualize it. Most algorithms are abstract symbol manipulation. Simplex is a journey through space.
Use this for linear objectives with linear constraints and continuous variables. Resource allocation, blending, production planning, transportation. Unlike gradient descent which approximates, simplex finds the exact optimum for LP.
from solvor.simplex import solve_lp
# minimize c @ x, subject to A @ x <= b, x >= 0
result = solve_lp(c, A, b)
result = solve_lp(c, A, b, minimize=False) # maximize
This also does the grunt work inside MILP, which is branch and bound that calls simplex repeatedly to solve LP relaxations.
Don't use this for: integer constraints (use MILP), non-linear objectives (use gradient or anneal), or problems with poor numerical scaling (simplex can struggle with badly scaled coefficients).
solve_lp(c, A, b, *, minimize=True, eps=1e-10, max_iter=100000)
¶
Solve linear program: minimize c @ x subject to A @ x <= b, x >= 0.
Source code in solvor/simplex.py
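A concrete sketch with made-up numbers (plain lists here; the exact accepted array types may differ):
from solvor.simplex import solve_lp
# maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0
c = [3, 2]
A = [[1, 1], [1, 3]]
b = [4, 6]
result = solve_lp(c, A, b, minimize=False)
print(result.solution)  # optimal vertex, here (4, 0) with objective 12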
solvor.milp
¶
MILP Solver, linear optimization with integer constraints.
Use this when you need to minimize or maximize a linear objective with linear constraints, and some variables must be integers. Classic problems: diet/blending (minimize cost meeting nutritional requirements), scheduling with discrete slots, set covering, power grid design, energy management (optimizing consumption across timeslots based on production forecasts).
Under the hood, this is branch and bound using simplex as a subroutine. It solves LP relaxations (ignoring integer constraints), then branches on fractional values, pruning subtrees that can't beat the current best. Best-first search prioritizes the most promising branches.
from solvor.milp import solve_milp
# minimize c @ x, subject to A @ x <= b, x >= 0, some x integer
result = solve_milp(c, A, b, integers=[0, 2])
result = solve_milp(c, A, b, integers=[0, 1], minimize=False) # maximize
# warm start from previous solution (prunes search tree)
result = solve_milp(c, A, b, integers=[0, 2], warm_start=previous.solution)
# find multiple solutions (result.solutions contains all found)
result = solve_milp(c, A, b, integers=[0, 2], solution_limit=5)
CP is more expressive for logical constraints like "all different" but doesn't optimize. SAT handles boolean satisfiability only. MILP is for when you have a clear linear objective and need the optimal value.
If your problem is purely continuous (no integers), just use simplex directly.
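A concrete sketch (made-up numbers) where the LP relaxation lands on a fractional value and branching has to resolve it:
from solvor.milp import solve_milp
# maximize 5x + 4y subject to 6x + 4y <= 24, x + 2y <= 6, x and y integer
c = [5, 4]
A = [[6, 4], [1, 2]]
b = [24, 6]
result = solve_milp(c, A, b, integers=[0, 1], minimize=False)
# the LP relaxation gives (3, 1.5); branching on the fractional y yields the integer optimum (3, 1)
print(result.solution)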
Constraint Programming¶
solvor.sat
¶
SAT solver, boolean satisfiability with clause learning.
You're navigating a maze where every dead end teaches you which turns to avoid. Hit a contradiction? Learn a new rule that prevents the same mistake. The solver gets smarter with each conflict, cutting through exponential search space.
Use this for "is this configuration valid?" problems. Logic puzzles, scheduling conflicts, dependency resolution, anything that boils down to: given these boolean constraints, find an assignment that satisfies all of them.
from solvor.sat import solve_sat
# Clauses in CNF: each clause is OR'd literals, all clauses AND'd
# Positive int = true, negative = false
# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
result = solve_sat([[1, 2], [-1, 3], [-2, -3]])
result = solve_sat(clauses, solution_limit=10) # Find multiple solutions
This is the engine behind CP. Constraint programming encodes integer variables as booleans and feeds them here. For exact cover problems specifically, DLX is more efficient than encoding to SAT.
Don't use this for: optimization (use MILP), continuous variables (use simplex or gradient), or when integer domains are more natural (use CP, which handles the boolean encoding for you).
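A small encoding sketch: "exactly one of x1, x2, x3 is true" is one at-least-one clause plus pairwise at-most-one clauses:
from solvor.sat import solve_sat
clauses = [[1, 2, 3],                      # at least one is true
           [-1, -2], [-1, -3], [-2, -3]]   # no two are true together
result = solve_sat(clauses)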
BinaryImplications
¶
Binary clause (a ∨ b) means ¬a → b and ¬b → a. Stores clause index for conflict analysis.
Source code in solvor/sat.py
luby(i)
¶
Luby restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
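A usage sketch, assuming 1-based indexing as the sequence above suggests (the base conflict budget of 64 is hypothetical):
from solvor.sat import luby
limits = [luby(i) * 64 for i in range(1, 8)]  # restart budgets: 64, 64, 128, 64, 64, 128, 256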
solvor.cp
¶
CP-SAT Solver, constraint programming with a SAT backend.
Use this for puzzles and scheduling with "all different", arithmetic constraints, and logical combinations. Sudoku, N-Queens, nurse rostering, timetabling. The solver encodes your integer variables and constraints into SAT clauses, then hands it off to the SAT solver. You get the expressiveness of CP with the raw solving power of modern SAT.
Don't use this for: optimization problems (use milp), pure linear problems (simplex is simpler and faster), or trivially small problems where the encoding overhead isn't worth it.
from solvor.cp import Model
m = Model()
x = m.int_var(0, 9, 'x')
y = m.int_var(0, 9, 'y')
m.add(m.all_different([x, y]))
m.add(x + y == 10)
result = m.solve() # {'x': 3, 'y': 7} or similar
# warm start with hints (guides search toward known values)
result = m.solve(hints={'x': 3})
# find multiple solutions (result.solutions contains all found)
result = m.solve(solution_limit=10)
# global constraints for scheduling and routing
m.add(m.circuit([x, y, z])) # Hamiltonian cycle
m.add(m.no_overlap(starts, durations)) # intervals don't overlap
m.add(m.cumulative(starts, durations, demands, capacity)) # resource limit
For problems that need both satisfaction and optimization, or heavier constraint logic, z3 sits nicely between this CP approach and full MILP.
Model
¶
Source code in solvor/cp.py
circuit(variables)
¶
cumulative(starts, durations, demands, capacity)
¶
At any time, sum of active demands <= capacity.
Source code in solvor/cp.py
no_overlap(starts, durations)
¶
Intervals (starts[i], starts[i]+durations[i]) don't overlap.
Source code in solvor/cp.py
Metaheuristics¶
solvor.anneal
¶
Simulated Annealing, black-box optimization that handles local optima well.
Use this when you have an objective function and can generate neighbors, but don't know much else about the problem structure. Fast to prototype, with few constraints on problem formulation. Works well on landscapes with many local optima where gradient descent would get stuck.
Don't use this for: problems where you need guarantees, or when you have constraints that are easier to encode in MILP/CP. If you need more control over the search, consider tabu (memory of visited states) or genetic (population-based exploration).
from solvor.anneal import anneal, linear_cooling
result = anneal(start, objective_fn, neighbor_fn)
result = anneal(start, objective_fn, neighbor_fn, minimize=False) # maximize
result = anneal(start, objective_fn, neighbor_fn, cooling=linear_cooling())
The neighbor function is the key part: good neighbors make small moves, not random jumps. Think "swap two cities" for TSP, not "shuffle everything". Small perturbations let the algorithm actually explore the neighborhood instead of teleporting randomly.
If it's getting stuck: try higher starting temperature or slower cooling. If it's taking forever: cool faster.
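A concrete sketch for a small tour problem, with a hypothetical distance matrix and a "swap two cities" neighbor:
import random
from solvor.anneal import anneal
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]  # hypothetical 4-city distances
def tour_length(tour):
    # total length of the closed tour
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
def swap_two(tour):
    # small move: swap two cities, don't shuffle everything
    i, j = random.sample(range(len(tour)), 2)
    new = list(tour)
    new[i], new[j] = new[j], new[i]
    return new
result = anneal(list(range(4)), tour_length, swap_two)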
Cooling schedules
exponential_cooling(rate): temp = initial * rate^iter (default, classic)
linear_cooling(min_temp): temp decreases linearly to min_temp
logarithmic_cooling(c): temp = initial / (1 + c * log(1 + iter)), slow cooling
Or pass any callable: (initial_temp, iteration, max_iter) -> temperature
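A sketch of a custom schedule matching that callable shape, reusing the tour setup above:
def my_cooling(initial_temp, iteration, max_iter):
    # linear decay to zero over the run
    return initial_temp * (1 - iteration / max_iter)
result = anneal(list(range(4)), tour_length, swap_two, cooling=my_cooling)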
exponential_cooling(rate=0.9995)
¶
Geometric cooling: temp = initial * rate^iteration.
Source code in solvor/anneal.py
linear_cooling(min_temp=1e-08)
¶
Linear cooling: temp decreases linearly from initial to min_temp.
Source code in solvor/anneal.py
logarithmic_cooling(c=1.0)
¶
Logarithmic cooling: temp = initial / (1 + c * log(1 + iteration)).
Source code in solvor/anneal.py
solvor.tabu
¶
Tabu Search, local search with memory to escape cycles.
Like anneal, this explores neighbors. Unlike anneal, it's greedy: always pick the best neighbor. The trick is the tabu list, a memory of recent moves that are temporarily forbidden. This prevents cycling back to solutions you just left, forcing the search to explore new territory.
Use this for routing (traveling salesman problem), scheduling, or when you want more control than anneal. Tabu is deterministic where anneal is probabilistic, so results are more reproducible and easier to debug.
from solvor.tabu import tabu_search, solve_tsp
result = tabu_search(start, objective_fn, neighbors_fn)
result = solve_tsp(distance_matrix) # built-in TSP helper
The neighbor function is different from anneal: it must return a list of (move, new_solution) pairs, where move is hashable (so it can go in the tabu list). Think (i, j) for "swap cities i and j" rather than just the new tour.
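A sketch of such a neighbor function, reusing the hypothetical dist and tour_length from the annealing example above:
from solvor.tabu import tabu_search
def tour_neighbors(tour):
    out = []
    for i in range(len(tour)):
        for j in range(i + 1, len(tour)):
            new = list(tour)
            new[i], new[j] = new[j], new[i]
            out.append(((i, j), new))  # (hashable move, new solution)
    return out
result = tabu_search(list(range(4)), tour_length, tour_neighbors)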
Genetic is population-based (more overhead, better diversity), anneal is probabilistic (simpler setup), tabu is greedy with memory (more predictable).
solvor.genetic
¶
Genetic Algorithm, population-based search that excels at multi-objective problems.
Slower than modern solvers for single-objective problems, and gradients will beat it on continuous optimization. But genetic algorithms are king at multi-objective optimization where objectives compete (Pareto fronts). Great at exploration when you're searching in the dark without structure to exploit. And they parallelize beautifully while most solvers don't.
Unlike anneal/tabu (single solution), this evolves a whole population. More overhead, but better diversity and less likely to get trapped.
from solvor.genetic import evolve
result = evolve(objective_fn, population, crossover_fn, mutate_fn)
result = evolve(objective_fn, pop, cross, mut, minimize=False) # maximize
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| objective_fn | Callable[[T], float] | fitness function mapping a solution to a score | required |
| population | Sequence[T] | starting solutions, bigger = more diversity but slower | required |
| crossover | Callable[[T, T], T] | combine two parents into one child; this matters a lot | required |
| mutate | Callable[[T], T] | small random changes to a solution; keep it subtle | required |
| elite_size | int | survivors per generation, too high = stagnation | 2 |
| mutation_rate | float | how often to mutate | 0.1 |
| max_iter | int | iterations/generations to run | 100 |
| tournament_k | int | selection pressure, higher = greedier | 3 |
Don't use this for: problems with gradient info (use gradient descent), convex problems (use simplex), or discrete structured problems (use CP/SAT).
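A concrete sketch on a made-up problem (maximize the number of ones in a bit string) showing the shapes crossover and mutate are expected to have:
import random
from solvor.genetic import evolve
def ones(bits):
    return sum(bits)  # fitness: count of 1s
def crossover(a, b):
    cut = random.randint(1, len(a) - 1)  # single-point crossover
    return a[:cut] + b[cut:]
def mutate(bits):
    out = list(bits)
    i = random.randrange(len(out))
    out[i] ^= 1  # flip one bit
    return out
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
result = evolve(ones, population, crossover, mutate, minimize=False)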
evolve(objective_fn, population, crossover, mutate, *, minimize=True, elite_size=2, mutation_rate=0.1, adaptive_mutation=False, max_iter=100, tournament_k=3, seed=None, on_progress=None, progress_interval=0)
¶
Genetic algorithm with optional adaptive mutation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| adaptive_mutation | bool | If True, increase mutation rate when population diversity is low (stagnation), decrease when improving. | False |
Source code in solvor/genetic.py
Graph Algorithms¶
solvor.dijkstra
¶
Dijkstra's algorithm for weighted shortest paths.
Use this when edges have non-negative weights and you need the optimal path. Road networks, routing with distances, any graph where "shortest" means "minimum total weight". This is the foundation for A*, which adds a heuristic for faster goal-directed search.
from solvor.dijkstra import dijkstra
result = dijkstra(start, goal, neighbors)
result = dijkstra(start, lambda s: s.is_target, neighbors)
The neighbors function returns (neighbor, edge_cost) pairs. Edge costs must be non-negative, use bellman_ford for negative weights.
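A sketch over a small adjacency dict (hypothetical graph):
from solvor.dijkstra import dijkstra
graph = {'a': [('b', 1), ('c', 4)],
         'b': [('c', 2), ('d', 5)],
         'c': [('d', 1)],
         'd': []}
def neighbors(node):
    return graph[node]  # (neighbor, edge_cost) pairs
result = dijkstra('a', 'd', neighbors)  # shortest route a -> b -> c -> d, total cost 4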
For negative edges use bellman_ford, Dijkstra's negativity was legendary, just not in his algorithm. For unweighted graphs use bfs (simpler). With a good distance estimate, use astar.
solvor.a_star
¶
A* search for goal-directed shortest paths with heuristics.
Use this when you have an estimate of distance to the goal. A* expands fewer nodes than Dijkstra by prioritizing promising directions. The heuristic must be admissible (never overestimate) for optimal results.
from solvor.a_star import astar, astar_grid
result = astar(start, goal, neighbors, heuristic)
result = astar_grid(maze, (0, 0), (9, 9), directions=8)
astar_grid provides built-in heuristics and neighbor generation for 2D grids. For weighted A* (faster but suboptimal), set weight > 1.
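A sketch on an open grid of (x, y) points with unit-cost moves; Manhattan distance never overestimates here, so it's admissible. This assumes the heuristic receives just the node; check astar's signature if yours differs:
from solvor.a_star import astar
goal = (9, 9)
def neighbors(p):
    x, y = p
    return [((x + 1, y), 1), ((x - 1, y), 1), ((x, y + 1), 1), ((x, y - 1), 1)]
def heuristic(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance to goal
result = astar((0, 0), goal, neighbors, heuristic)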
Don't use this for: unknown goal location (use dijkstra), negative edges (use bellman_ford), or unweighted graphs (use bfs).
solvor.flow
¶
Network Flow, for assignment, matching, and bottleneck analysis.
Use max_flow when you need "how much can I push through this network?" - assigning workers to tasks, servers to requests, finding bottlenecks. Use min_cost_flow when cost matters: "what's the cheapest way to route X units?" Think transportation, logistics, and resource allocation with prices.
The max-flow min-cut theorem makes this useful for systems thinking: the maximum flow equals the minimum cut, so you find bottlenecks for free.
from solvor.flow import max_flow, min_cost_flow
# graph: {node: [(neighbor, capacity, cost), ...]}
result = max_flow(graph, source='s', sink='t')
result = min_cost_flow(graph, source='s', sink='t', demand=10)
Works well with NetworkX for graph construction and visualization:
import networkx as nx
G = nx.DiGraph()
G.add_edge('s', 'a', capacity=10, cost=1)
G.add_edge('a', 't', capacity=5, cost=2)
graph = {u: [(v, d['capacity'], d.get('cost', 0))
             for v, d in G[u].items()] for u in G}
result = max_flow(graph, 's', 't')
For heavier graph work (shortest paths, centrality, community detection), NetworkX is far more extensive. This solver focuses on flow problems.