
API Reference

This section provides auto-generated documentation from the solvOR source code.

Core Types

solvor.types

Shared types for all solvers.

Progress dataclass

Solver progress info passed to callbacks.

iteration: Current iteration number
objective: Current objective value
best: Best objective found so far (None if same as objective)
evaluations: Number of objective function evaluations

Source code in solvor/types.py
@dataclass(frozen=True, slots=True)
class Progress:
    """Solver progress info passed to callbacks.

    iteration: Current iteration number
    objective: Current objective value
    best: Best objective found so far (None if same as objective)
    evaluations: Number of objective function evaluations
    """

    iteration: int
    objective: float
    best: float | None = None
    evaluations: int = 0
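
A minimal sketch of a progress callback, assuming the solver passes a Progress instance to the on_progress / progress_interval parameters shown on evolve below; treating a truthy return as an early-stop request is an assumption, not documented behavior.

from solvor.types import Progress

def log_progress(p: Progress) -> bool:
    # print the fields documented above
    print(f"iter={p.iteration} obj={p.objective:.4f} best={p.best} evals={p.evaluations}")
    return False  # assumption: returning True would ask the solver to stop early

# e.g. evolve(..., on_progress=log_progress, progress_interval=10)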

Main Solvers

Linear Programming

solvor.simplex

Simplex Solver, linear programming that aged like wine.

You're walking along the edges of a giant crystal, always uphill, until you hit a corner that's optimal. You can visualize it. Most algorithms are abstract symbol manipulation. Simplex is a journey through space.

Use this for linear objectives with linear constraints and continuous variables. Resource allocation, blending, production planning, transportation. Unlike gradient descent which approximates, simplex finds the exact optimum for LP.

from solvor.simplex import solve_lp

# minimize c @ x, subject to A @ x <= b, x >= 0
result = solve_lp(c, A, b)
result = solve_lp(c, A, b, minimize=False)  # maximize

Simplex also does the grunt work inside MILP: branch and bound calls it repeatedly to solve LP relaxations.

Don't use this for: integer constraints (use MILP), non-linear objectives (use gradient or anneal), or problems with poor numerical scaling (simplex can struggle with badly scaled coefficients).
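
A small worked instance (a classic two-variable LP), using only the call shown above; the optimal corner of this problem is x = (2, 6) with objective 36.

from solvor.simplex import solve_lp

# maximize 3x + 5y  subject to  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0
c = [3, 5]
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

result = solve_lp(c, A, b, minimize=False)
print(result.solution)  # expected (2.0, 6.0)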

solve_lp(c, A, b, *, minimize=True, eps=1e-10, max_iter=100000)

Solve linear program: minimize c @ x subject to A @ x <= b, x >= 0.

Source code in solvor/simplex.py
def solve_lp(
    c: Sequence[float],
    A: Sequence[Sequence[float]],
    b: Sequence[float],
    *,
    minimize: bool = True,
    eps: float = 1e-10,
    max_iter: int = 100_000,
) -> Result:
    """Solve linear program: minimize c @ x subject to A @ x <= b, x >= 0."""
    check_matrix_dims(c, A, b)
    warn_large_coefficients(A)

    m, n = len(b), len(c)
    weights = list(c) if minimize else [-ci for ci in c]

    matrix = []
    for i in range(m):
        row = array("d", A[i])
        row.extend([0.0] * m)
        row[n + i] = 1.0
        row.append(b[i])
        matrix.append(row)

    obj = array("d", weights)
    obj.extend([0.0] * (m + 1))
    matrix.append(obj)

    basis = array("i", range(n, n + m))
    basis_set = set(basis)

    if any(matrix[i][-1] < -eps for i in range(m)):
        status, iters, matrix, basis, basis_set = _phase1(matrix, basis, basis_set, m, n, eps, max_iter)
        if status != Status.OPTIMAL:
            return Result(tuple([0.0] * n), float("inf"), iters, iters, Status.INFEASIBLE)
        max_iter -= iters
    else:
        iters = 0

    status, iters2, matrix, basis, basis_set = _phase2(matrix, basis, basis_set, m, eps, max_iter)
    return _extract(matrix, basis, m, n, status, iters + iters2, minimize)

solvor.milp

MILP Solver, linear optimization with integer constraints.

Use this when you need to minimize or maximize a linear objective with linear constraints, and some variables must be integers. Classic problems: diet/blending (minimize cost meeting nutritional requirements), scheduling with discrete slots, set covering, power grid design, energy management (optimizing consumption across timeslots based on production forecasts).

Under the hood, this is branch and bound using simplex as a subroutine. It solves LP relaxations (ignoring integer constraints), then branches on fractional values, pruning subtrees that can't beat the current best. Best-first search prioritizes the most promising branches.

from solvor.milp import solve_milp

# minimize c @ x, subject to A @ x <= b, x >= 0, some x integer
result = solve_milp(c, A, b, integers=[0, 2])
result = solve_milp(c, A, b, integers=[0, 1], minimize=False)  # maximize

# warm start from previous solution (prunes search tree)
result = solve_milp(c, A, b, integers=[0, 2], warm_start=previous.solution)

# find multiple solutions (result.solutions contains all found)
result = solve_milp(c, A, b, integers=[0, 2], solution_limit=5)

CP is more expressive for logical constraints like "all different" but doesn't optimize. SAT handles boolean satisfiability only. MILP is for when you have a clear linear objective and need the optimal value.

If your problem is purely continuous (no integers), just use simplex directly.
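
A tiny worked example of where the integer constraint bites: the LP relaxation would pick x1 = 1.5, but branch and bound settles on the best integer corner. A sketch using only the calls shown above:

from solvor.milp import solve_milp

# maximize 2*x0 + x1  subject to  x0 + x1 <= 3.5,  x0 <= 2,  x >= 0, both integer
c = [2, 1]
A = [[1, 1], [1, 0]]
b = [3.5, 2]

result = solve_milp(c, A, b, integers=[0, 1], minimize=False)
print(result.solution)  # expected integer optimum (2, 1); the LP relaxation gives (2, 1.5)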

Constraint Programming

solvor.sat

SAT solver, boolean satisfiability with clause learning.

You're navigating a maze where every dead end teaches you which turns to avoid. Hit a contradiction? Learn a new rule that prevents the same mistake. The solver gets smarter with each conflict, cutting through exponential search space.

Use this for "is this configuration valid?" problems. Logic puzzles, scheduling conflicts, dependency resolution, anything that boils down to: given these boolean constraints, find an assignment that satisfies all of them.

from solvor.sat import solve_sat

# Clauses in CNF: each clause is OR'd literals, all clauses AND'd
# Positive int = true, negative = false
# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
result = solve_sat([[1, 2], [-1, 3], [-2, -3]])
result = solve_sat(clauses, solution_limit=10)  # Find multiple solutions

This is the engine behind CP. Constraint programming encodes integer variables as booleans and feeds them here. For exact cover problems specifically, DLX is more efficient than encoding to SAT.

Don't use this for: optimization (use MILP), continuous variables (use simplex or gradient), or when integer domains are more natural (use CP, which handles the boolean encoding for you).
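
Reading a result back for the clause set above: one satisfying assignment is x1 = true, x2 = false, x3 = true, though the solver may return a different one. That the solution maps variable numbers to booleans follows from how the CP layer decodes it further down.

from solvor.sat import solve_sat

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
result = solve_sat([[1, 2], [-1, 3], [-2, -3]])
print(result.solution)  # e.g. {1: True, 2: False, 3: True}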

BinaryImplications

Binary clause (a ∨ b) means ¬a → b and ¬b → a. Stores clause index for conflict analysis.

Source code in solvor/sat.py
class BinaryImplications:
    """Binary clause (a ∨ b) means ¬a → b and ¬b → a. Stores clause index for conflict analysis."""

    __slots__ = ("pos", "neg")

    def __init__(self, n_vars: int):
        self.pos: list[list[tuple[int, int]]] = [[] for _ in range(n_vars + 1)]
        self.neg: list[list[tuple[int, int]]] = [[] for _ in range(n_vars + 1)]

    def add(self, lit_a: int, lit_b: int, clause_idx: int) -> None:
        if lit_a > 0:
            self.neg[lit_a].append((lit_b, clause_idx))
        else:
            self.pos[-lit_a].append((lit_b, clause_idx))
        if lit_b > 0:
            self.neg[lit_b].append((lit_a, clause_idx))
        else:
            self.pos[-lit_b].append((lit_a, clause_idx))

    def implications(self, false_lit: int) -> list[tuple[int, int]]:
        return self.neg[false_lit] if false_lit > 0 else self.pos[-false_lit]

    def clear_learned(self, original_count: int) -> None:
        for lst in self.pos:
            lst[:] = [(lit, idx) for lit, idx in lst if idx < original_count]
        for lst in self.neg:
            lst[:] = [(lit, idx) for lit, idx in lst if idx < original_count]

luby(i)

Luby restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...

Source code in solvor/sat.py
def luby(i: int) -> int:
    """Luby restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,..."""
    k = 1
    while True:
        if i == (1 << k) - 1:
            return 1 << (k - 1)
        if i < (1 << k) - 1:
            i -= (1 << (k - 1)) - 1
            k = 1
        else:
            k += 1
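
Assuming 1-based indexing, the first values reproduce the docstring's sequence:

print([luby(i) for i in range(1, 16)])
# [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]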

solvor.cp

CP-SAT Solver, constraint programming with a SAT backend.

Use this for puzzles and scheduling with "all different", arithmetic constraints, and logical combinations. Sudoku, N-Queens, nurse rostering, timetabling. The solver encodes your integer variables and constraints into SAT clauses, then hands it off to the SAT solver. You get the expressiveness of CP with the raw solving power of modern SAT.

Don't use this for: optimization problems (use milp), pure linear problems (simplex is simpler and faster), or trivially small problems where the encoding overhead isn't worth it.

from solvor.cp import Model

m = Model()
x = m.int_var(0, 9, 'x')
y = m.int_var(0, 9, 'y')
m.add(m.all_different([x, y]))
m.add(x + y == 10)
result = m.solve()  # {'x': 3, 'y': 7} or similar

# warm start with hints (guides search toward known values)
result = m.solve(hints={'x': 3})

# find multiple solutions (result.solutions contains all found)
result = m.solve(solution_limit=10)

# global constraints for scheduling and routing
m.add(m.circuit([x, y, z]))  # Hamiltonian cycle
m.add(m.no_overlap(starts, durations))  # intervals don't overlap
m.add(m.cumulative(starts, durations, demands, capacity))  # resource limit

For problems that need both satisfaction and optimization, or heavier constraint logic, z3 sits nicely between this CP approach and full MILP.
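
A minimal scheduling sketch with no_overlap, using only calls shown above; the start values in the comment are illustrative, and any non-overlapping layout within the horizon is an acceptable answer.

from solvor.cp import Model

m = Model()
durations = [2, 3, 2]
starts = [m.int_var(0, 8, f"start{i}") for i in range(3)]  # horizon 0..8
m.add(m.no_overlap(starts, durations))
result = m.solve()
print(result.solution)  # e.g. {'start0': 0, 'start1': 2, 'start2': 5}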

Model

Source code in solvor/cp.py
class Model:
    def __init__(self):
        self._next_bool = 1
        self._vars = {}
        self._constraints = []
        self._clauses = []

    def _new_bool_var(self):
        v = self._next_bool
        self._next_bool += 1
        return v

    def int_var(self, lb, ub, name=None):
        if name is None:
            name = f"_v{len(self._vars)}"
        var = IntVar(self, lb, ub, name)
        self._vars[name] = var
        return var

    def all_different(self, variables):
        return ("all_different", tuple(variables))

    def circuit(self, variables):
        """Hamiltonian circuit: variables form a single cycle visiting all nodes."""
        return ("circuit", tuple(variables))

    def no_overlap(self, starts, durations):
        """Intervals (starts[i], starts[i]+durations[i]) don't overlap."""
        if len(starts) != len(durations):
            raise ValueError("starts and durations must have same length")
        return ("no_overlap", tuple(starts), tuple(durations))

    def cumulative(self, starts, durations, demands, capacity):
        """At any time, sum of active demands <= capacity."""
        if len(starts) != len(durations) or len(durations) != len(demands):
            raise ValueError("starts, durations, demands must have same length")
        return ("cumulative", tuple(starts), tuple(durations), tuple(demands), capacity)

    def add(self, constraint):
        self._constraints.append(constraint)

    def _encode_exactly_one(self, lits):
        if not lits:
            return
        self._clauses.append(lits)
        for a, b in combinations(lits, 2):
            self._clauses.append([-a, -b])

    def _encode_at_most_one(self, lits):
        for a, b in combinations(lits, 2):
            self._clauses.append([-a, -b])

    def _encode_vars(self):
        for var in self._vars.values():
            lits = [var.bool_vars[v] for v in range(var.lb, var.ub + 1)]
            self._encode_exactly_one(lits)

    def _encode_all_different(self, variables):
        all_vals = set()
        for var in variables:
            all_vals.update(range(var.lb, var.ub + 1))

        for val in all_vals:
            lits = []
            for var in variables:
                if val in var.bool_vars:
                    lits.append(var.bool_vars[val])
            if len(lits) > 1:
                self._encode_at_most_one(lits)

    def _encode_eq_const(self, var, val):
        if val in var.bool_vars:
            self._clauses.append([var.bool_vars[val]])
        else:
            self._clauses.append([])

    def _encode_ne_const(self, var, val):
        if val in var.bool_vars:
            self._clauses.append([-var.bool_vars[val]])

    def _encode_eq_var(self, var1, var2):
        common = set(var1.bool_vars.keys()) & set(var2.bool_vars.keys())
        for val in common:
            self._clauses.append([-var1.bool_vars[val], var2.bool_vars[val]])
            self._clauses.append([var1.bool_vars[val], -var2.bool_vars[val]])

        for val in set(var1.bool_vars.keys()) - common:
            self._clauses.append([-var1.bool_vars[val]])
        for val in set(var2.bool_vars.keys()) - common:
            self._clauses.append([-var2.bool_vars[val]])

    def _encode_ne_var(self, var1, var2):
        common = set(var1.bool_vars.keys()) & set(var2.bool_vars.keys())
        for val in common:
            self._clauses.append([-var1.bool_vars[val], -var2.bool_vars[val]])

    def _flatten_sum(self, expr):
        terms = []
        const = 0

        def flatten(e):
            nonlocal const
            if isinstance(e, IntVar):
                terms.append(e)
            elif isinstance(e, int):
                const += e
            elif isinstance(e, tuple) and e[0] == "add":
                flatten(e[1])
                flatten(e[2])

        flatten(expr)
        return terms, const

    def _encode_sum_eq(self, variables, target):
        n = len(variables)
        if n == 0:
            if target != 0:
                self._clauses.append([])
            return

        min_sum = sum(v.lb for v in variables)
        max_sum = sum(v.ub for v in variables)

        if target < min_sum or target > max_sum:
            self._clauses.append([])
            return

        if n == 1:
            self._encode_eq_const(variables[0], target)
            return

        if n == 2:
            v1, v2 = variables
            for val1 in range(v1.lb, v1.ub + 1):
                val2 = target - val1
                if val2 < v2.lb or val2 > v2.ub:
                    self._clauses.append([-v1.bool_vars[val1]])
                else:
                    self._clauses.append([-v1.bool_vars[val1], v2.bool_vars[val2]])
            return

        partial_sum = self.int_var(variables[0].lb + variables[1].lb, variables[0].ub + variables[1].ub)

        for v1 in range(variables[0].lb, variables[0].ub + 1):
            for v2 in range(variables[1].lb, variables[1].ub + 1):
                s = v1 + v2
                if s in partial_sum.bool_vars:
                    self._clauses.append(
                        [-variables[0].bool_vars[v1], -variables[1].bool_vars[v2], partial_sum.bool_vars[s]]
                    )

        self._encode_sum_eq([partial_sum] + list(variables[2:]), target)

    def _encode_circuit(self, variables):
        """Encode circuit constraint: successor variables form a single Hamiltonian cycle."""
        n = len(variables)
        if n == 0:
            return

        # All different
        self._encode_all_different(variables)

        # No self-loops: x[i] != i
        for i, var in enumerate(variables):
            if i in var.bool_vars:
                self._clauses.append([-var.bool_vars[i]])

        # Subtour elimination using MTZ formulation with auxiliary order variables
        # t[i] represents position of node i in the tour
        # For nodes other than 0: if x[i] = j, then t[j] = t[i] + 1
        if n <= 1:
            return

        t = [self.int_var(0 if i == 0 else 1, n - 1, f"_circuit_t{i}") for i in range(n)]
        # t[0] is fixed to 0
        self._clauses.append([t[0].bool_vars[0]])

        # For each edge i -> j (j != 0): t[j] >= t[i] + 1
        for i, var in enumerate(variables):
            for j in range(1, n):  # j != 0 (depot can be revisited)
                if j in var.bool_vars:
                    # x[i] = j implies t[j] > t[i]
                    for ti in range(var.lb, var.ub + 1):
                        if ti not in t[i].bool_vars:
                            continue
                        # If x[i] = j and t[i] = ti, then t[j] >= ti + 1
                        for tj in range(t[j].lb, ti + 1):  # t[j] <= ti (violation)
                            if tj in t[j].bool_vars:
                                # NOT(x[i]=j) OR NOT(t[i]=ti) OR NOT(t[j]=tj)
                                self._clauses.append([-var.bool_vars[j], -t[i].bool_vars[ti], -t[j].bool_vars[tj]])

    def _encode_no_overlap(self, starts, durations):
        """Encode no-overlap constraint: intervals don't overlap."""
        n = len(starts)
        for i in range(n):
            for j in range(i + 1, n):
                # Either i ends before j starts, or j ends before i starts
                # start[i] + dur[i] <= start[j] OR start[j] + dur[j] <= start[i]
                self._encode_disjunctive_le(starts[i], durations[i], starts[j], durations[j])

    def _encode_disjunctive_le(self, start1, dur1, start2, dur2):
        """Encode: end1 <= start2 OR end2 <= start1."""
        # Forbid all (s1, s2) combinations where neither ordering is possible
        for s1 in range(start1.lb, start1.ub + 1):
            for s2 in range(start2.lb, start2.ub + 1):
                # If s1 + dur1 > s2 (i before j fails) and s2 + dur2 > s1 (j before i fails)
                # then infeasible for this (s1, s2) combination
                i_before_j = s1 + dur1 <= s2
                j_before_i = s2 + dur2 <= s1

                if not i_before_j and not j_before_i:
                    # This combination is infeasible
                    self._clauses.append([-start1.bool_vars[s1], -start2.bool_vars[s2]])

    def _encode_cumulative(self, starts, durations, demands, capacity):
        """Encode cumulative constraint: sum of active demands <= capacity at all times."""
        n = len(starts)
        if n == 0:
            return

        # Find time horizon
        min_start = min(s.lb for s in starts)
        max_end = max(s.ub + d for s, d in zip(starts, durations))

        # For each time point, sum of active demands <= capacity
        for t in range(min_start, max_end):
            # Collect (start, duration, demand) for tasks that could be active at time t
            active_lits = []
            active_demands = []
            for i in range(n):
                # Task i is active at t if start[i] <= t < start[i] + dur[i]
                # So start[i] in range [t - dur[i] + 1, t]
                for s in range(max(starts[i].lb, t - durations[i] + 1), min(starts[i].ub, t) + 1):
                    if s in starts[i].bool_vars and s <= t < s + durations[i]:
                        active_lits.append(starts[i].bool_vars[s])
                        active_demands.append(demands[i])

            if not active_lits:
                continue

            # Encode: sum of demands for active tasks <= capacity
            # For each subset with sum > capacity, at least one must be false
            # This is exponential but necessary for correctness
            # Use simpler encoding: for small numbers of active tasks
            if len(active_lits) <= 10:
                self._encode_capacity_constraint(active_lits, active_demands, capacity)

    def _encode_capacity_constraint(self, lits, demands, capacity):
        """Encode sum constraint: if all lits true, demands sum must <= capacity."""
        # Find minimal infeasible subsets
        n = len(lits)
        for size in range(1, n + 1):
            for subset in combinations(range(n), size):
                if sum(demands[i] for i in subset) > capacity:
                    # At least one of these must be false
                    # Check if any smaller subset is already infeasible
                    is_minimal = True
                    for smaller_size in range(1, size):
                        for smaller in combinations(subset, smaller_size):
                            if sum(demands[i] for i in smaller) > capacity:
                                is_minimal = False
                                break
                        if not is_minimal:
                            break
                    if is_minimal:
                        self._clauses.append([-lits[i] for i in subset])

    def _encode_constraint(self, constraint):
        if isinstance(constraint, tuple):
            kind = constraint[0]

            if kind == "all_different":
                self._encode_all_different(constraint[1])

            elif kind == "circuit":
                self._encode_circuit(constraint[1])

            elif kind == "no_overlap":
                self._encode_no_overlap(constraint[1], constraint[2])

            elif kind == "cumulative":
                self._encode_cumulative(constraint[1], constraint[2], constraint[3], constraint[4])

            elif kind == "eq_const":
                self._encode_eq_const(constraint[1], constraint[2])

            elif kind == "ne_const":
                self._encode_ne_const(constraint[1], constraint[2])

            elif kind == "eq_var":
                self._encode_eq_var(constraint[1], constraint[2])

            elif kind == "ne_var":
                self._encode_ne_var(constraint[1], constraint[2])

            elif kind == "add":
                terms, const = self._flatten_sum(constraint)
                if len(terms) == 0:
                    if const != 0:
                        self._clauses.append([])
                else:
                    pass

            elif kind == "sum_eq":
                self._encode_sum_eq(list(constraint[1]), constraint[2])

            elif kind == "sum_le":
                terms, target = constraint[1], constraint[2]
                aux = self.int_var(sum(v.lb for v in terms), min(sum(v.ub for v in terms), target))
                self._encode_sum_eq(list(terms), aux)

            elif kind == "sum_ge":
                terms, target = constraint[1], constraint[2]
                aux = self.int_var(max(sum(v.lb for v in terms), target), sum(v.ub for v in terms))
                self._encode_sum_eq(list(terms), aux)

    def sum_eq(self, variables, target):
        return ("sum_eq", tuple(variables), target)

    def sum_le(self, variables, target):
        return ("sum_le", tuple(variables), target)

    def sum_ge(self, variables, target):
        return ("sum_ge", tuple(variables), target)

    def solve(self, *, hints: dict[str, int] | None = None, solution_limit: int = 1, **kwargs):
        self._clauses = []
        self._encode_vars()

        for constraint in self._constraints:
            self._encode_constraint(constraint)

        if any(len(c) == 0 for c in self._clauses):
            return Result(None, 0, 0, 0, Status.INFEASIBLE)

        # Convert hints to SAT assumptions
        assumptions = list(kwargs.pop("assumptions", []) or [])
        if hints:
            for name, val in hints.items():
                if name in self._vars:
                    var = self._vars[name]
                    if val in var.bool_vars:
                        assumptions.append(var.bool_vars[val])

        sat_result = solve_sat(self._clauses, assumptions=assumptions or None, solution_limit=solution_limit, **kwargs)

        if sat_result.status == SATStatus.INFEASIBLE:
            return Result(None, 0, sat_result.iterations, sat_result.evaluations, Status.INFEASIBLE)

        if sat_result.status == SATStatus.MAX_ITER:
            return Result(None, 0, sat_result.iterations, sat_result.evaluations, Status.MAX_ITER)

        def decode_sat_solution(sat_sol: dict[int, bool]) -> dict[str, int]:
            """Convert SAT solution to CP variable assignments."""
            cp_sol = {}
            for name, var in self._vars.items():
                if name.startswith("_"):
                    continue
                for val, bool_var in var.bool_vars.items():
                    if sat_sol.get(bool_var, False):
                        cp_sol[name] = val
                        break
            return cp_sol

        solution = decode_sat_solution(sat_result.solution)  # type: ignore[arg-type]

        # Convert multiple SAT solutions to CP solutions
        if sat_result.solutions is not None:
            cp_solutions = tuple(decode_sat_solution(s) for s in sat_result.solutions)
            return Result(
                solution,
                0,
                sat_result.iterations,
                sat_result.evaluations,
                solutions=cp_solutions,
            )

        return Result(solution, 0, sat_result.iterations, sat_result.evaluations)

circuit(variables)

Hamiltonian circuit: variables form a single cycle visiting all nodes.

Source code in solvor/cp.py
def circuit(self, variables):
    """Hamiltonian circuit: variables form a single cycle visiting all nodes."""
    return ("circuit", tuple(variables))

cumulative(starts, durations, demands, capacity)

At any time, sum of active demands <= capacity.

Source code in solvor/cp.py
def cumulative(self, starts, durations, demands, capacity):
    """At any time, sum of active demands <= capacity."""
    if len(starts) != len(durations) or len(durations) != len(demands):
        raise ValueError("starts, durations, demands must have same length")
    return ("cumulative", tuple(starts), tuple(durations), tuple(demands), capacity)

no_overlap(starts, durations)

Intervals (starts[i], starts[i]+durations[i]) don't overlap.

Source code in solvor/cp.py
def no_overlap(self, starts, durations):
    """Intervals (starts[i], starts[i]+durations[i]) don't overlap."""
    if len(starts) != len(durations):
        raise ValueError("starts and durations must have same length")
    return ("no_overlap", tuple(starts), tuple(durations))

Metaheuristics

solvor.anneal

Simulated Annealing, a black box optimization that handles local optima well.

Use this when you have an objective function and can generate neighbors, but don't know much else about the problem structure. Fast to prototype, low constraints on problem formulation. Works well on landscapes with many local optima where gradient descent would get stuck.

Don't use this for: problems where you need guarantees, or when you have constraints that are easier to encode in MILP/CP. If you need more control over the search, consider tabu (memory of visited states) or genetic (population-based exploration).

from solvor.anneal import anneal, linear_cooling

result = anneal(start, objective_fn, neighbor_fn)
result = anneal(start, objective_fn, neighbor_fn, minimize=False)  # maximize
result = anneal(start, objective_fn, neighbor_fn, cooling=linear_cooling())

The neighbor function is the key part: good neighbors make small moves, not random jumps. Think "swap two cities" for TSP, not "shuffle everything". Small perturbations let the algorithm actually explore the neighborhood instead of teleporting randomly.

If it's getting stuck: try higher starting temperature or slower cooling. If it's taking forever: cool faster.
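
A minimal sketch with a swap-two-positions neighbor, assuming neighbor_fn takes the current solution and returns a new one (matching the calls above); the toy objective counts out-of-order adjacent pairs in a permutation.

import random

from solvor.anneal import anneal

def objective_fn(perm):
    # number of adjacent pairs that are out of order (0 means sorted)
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def neighbor_fn(perm):
    # small move: swap two random positions
    i, j = random.sample(range(len(perm)), 2)
    new = list(perm)
    new[i], new[j] = new[j], new[i]
    return new

start = random.sample(range(20), 20)
result = anneal(start, objective_fn, neighbor_fn)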

Cooling schedules

exponential_cooling(rate): temp = initial * rate^iter (default, classic)
linear_cooling(min_temp): temp decreases linearly to min_temp
logarithmic_cooling(c): temp = initial / (1 + c * log(1 + iter)), slow cooling

Or pass any callable: (initial_temp, iteration, max_iter) -> temperature
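
For example, a custom schedule that cools linearly to zero over the run (reusing objective_fn and neighbor_fn from the sketch above):

def my_cooling(initial_temp, iteration, max_iter):
    # straight line from initial_temp down to 0 at the final iteration
    return initial_temp * (1 - iteration / max_iter)

result = anneal(start, objective_fn, neighbor_fn, cooling=my_cooling)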

exponential_cooling(rate=0.9995)

Geometric cooling: temp = initial * rate^iteration.

Source code in solvor/anneal.py
def exponential_cooling(rate: float = 0.9995) -> CoolingSchedule:
    """Geometric cooling: temp = initial * rate^iteration."""

    def schedule(initial_temp: float, iteration: int, max_iter: int) -> float:
        return initial_temp * (rate**iteration)

    return schedule

linear_cooling(min_temp=1e-08)

Linear cooling: temp decreases linearly from initial to min_temp.

Source code in solvor/anneal.py
def linear_cooling(min_temp: float = 1e-8) -> CoolingSchedule:
    """Linear cooling: temp decreases linearly from initial to min_temp."""

    def schedule(initial_temp: float, iteration: int, max_iter: int) -> float:
        return initial_temp - (initial_temp - min_temp) * iteration / max_iter

    return schedule

logarithmic_cooling(c=1.0)

Logarithmic cooling: temp = initial / (1 + c * log(1 + iteration)).

Source code in solvor/anneal.py
def logarithmic_cooling(c: float = 1.0) -> CoolingSchedule:
    """Logarithmic cooling: temp = initial / (1 + c * log(1 + iteration))."""

    def schedule(initial_temp: float, iteration: int, max_iter: int) -> float:
        return initial_temp / (1 + c * log(1 + iteration))

    return schedule

solvor.tabu

Tabu Search, local search with memory to escape cycles.

Like anneal, this explores neighbors. Unlike anneal, it's greedy: always pick the best neighbor. The trick is the tabu list, a memory of recent moves that are temporarily forbidden. This prevents cycling back to solutions you just left, forcing the search to explore new territory.

Use this for routing (traveling salesman problem), scheduling, or when you want more control than anneal. Tabu is deterministic where anneal is probabilistic, so results are more reproducible and easier to debug.

from solvor.tabu import tabu_search, solve_tsp

result = tabu_search(start, objective_fn, neighbors_fn)
result = solve_tsp(distance_matrix)  # built-in TSP helper

The neighbor function is different from anneal: it must return a list of (move, new_solution) pairs, where move is hashable (so it can go in the tabu list). Think (i, j) for "swap cities i and j" rather than just the new tour.
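
A minimal sketch of that format: adjacent-swap moves keyed by their (i, j) index pairs, which are hashable and therefore tabu-listable.

from solvor.tabu import tabu_search

def objective_fn(perm):
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def neighbors_fn(perm):
    out = []
    for i in range(len(perm) - 1):
        new = list(perm)
        new[i], new[i + 1] = new[i + 1], new[i]
        out.append(((i, i + 1), new))  # (move, new_solution)
    return out

start = [3, 1, 4, 1, 5, 9, 2, 6]
result = tabu_search(start, objective_fn, neighbors_fn)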

Genetic is population-based (more overhead, better diversity), anneal is probabilistic (simpler setup), tabu is greedy with memory (more predictable).

solvor.genetic

Genetic Algorithm, population-based search that excels at multi-objective problems.

Slower than modern solvers for single-objective problems, and gradients will beat it on continuous optimization. But genetic algorithms are king at multi-objective optimization where objectives compete (Pareto fronts). Great at exploration when you're searching in the dark without structure to exploit. And they parallelize beautifully while most solvers don't.

Unlike anneal/tabu (single solution), this evolves a whole population. More overhead, but better diversity and less likely to get trapped.

from solvor.genetic import evolve

result = evolve(objective_fn, population, crossover_fn, mutate_fn)
result = evolve(objective_fn, pop, cross, mut, minimize=False)  # maximize

Parameters:

objective_fn (Callable[[T], float]): fitness function mapping a solution to a score. Required.
population (Sequence[T]): starting solutions, bigger = more diversity but slower. Required.
crossover (Callable[[T, T], T]): combine two parents into one child; this matters a lot. Required.
mutate (Callable[[T], T]): small random changes to a solution; keep it subtle. Required.
elite_size (int): survivors per generation, too high = stagnation. Default: 2.
mutation_rate (float): how often to mutate. Default: 0.1.
max_iter (int): iterations/generations to run. Default: 100.
tournament_k (int): selection pressure, higher = greedier. Default: 3.

Don't use this for: problems with gradient info (use gradient descent), convex problems (use simplex), or discrete structured problems (use CP/SAT).
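
A minimal sketch on a toy problem (maximize the number of 1-bits in a 20-bit string), wiring up the four required callables from the list above:

import random

from solvor.genetic import evolve

def objective_fn(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    i = random.randrange(len(bits))     # flip one bit
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
result = evolve(objective_fn, population, crossover, mutate, minimize=False, max_iter=50)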

evolve(objective_fn, population, crossover, mutate, *, minimize=True, elite_size=2, mutation_rate=0.1, adaptive_mutation=False, max_iter=100, tournament_k=3, seed=None, on_progress=None, progress_interval=0)

Genetic algorithm with optional adaptive mutation.

Parameters:

adaptive_mutation (bool): If True, increase mutation rate when population diversity is low (stagnation), decrease when improving. Default: False.
Source code in solvor/genetic.py
def evolve[T](
    objective_fn: Callable[[T], float],
    population: Sequence[T],
    crossover: Callable[[T, T], T],
    mutate: Callable[[T], T],
    *,
    minimize: bool = True,
    elite_size: int = 2,
    mutation_rate: float = 0.1,
    adaptive_mutation: bool = False,
    max_iter: int = 100,
    tournament_k: int = 3,
    seed: int | None = None,
    on_progress: ProgressCallback | None = None,
    progress_interval: int = 0,
) -> Result:
    """
    Genetic algorithm with optional adaptive mutation.

    Args:
        adaptive_mutation: If True, increase mutation rate when population
            diversity is low (stagnation), decrease when improving.
    """
    rng = Random(seed)
    evaluate = Evaluator(objective_fn, minimize)
    pop_size = len(population)

    pop = [Individual(sol, evaluate(sol)) for sol in population]
    pop.sort(key=attrgetter("fitness"))
    best_solution = pop[0].solution
    best_fitness = pop[0].fitness

    def tournament():
        contestants = rng.sample(pop, min(tournament_k, len(pop)))
        return min(contestants, key=attrgetter("fitness"))

    current_mutation_rate = mutation_rate
    stagnation_count = 0

    for iteration in range(max_iter):
        new_pop = pop[:elite_size]

        while len(new_pop) < pop_size:
            p1 = tournament()
            p2 = tournament()
            child_sol = crossover(p1.solution, p2.solution)

            if rng.random() < current_mutation_rate:
                child_sol = mutate(child_sol)

            child = Individual(child_sol, evaluate(child_sol))
            new_pop.append(child)

        pop = sorted(new_pop, key=attrgetter("fitness"))[:pop_size]

        improved = False
        if pop[0].fitness < best_fitness:
            best_solution = pop[0].solution
            best_fitness = pop[0].fitness
            improved = True

        # Adaptive mutation: adjust rate based on progress
        if adaptive_mutation:
            if improved:
                stagnation_count = 0
                # Decrease mutation when improving
                current_mutation_rate = max(0.01, current_mutation_rate * 0.95)
            else:
                stagnation_count += 1
                if stagnation_count >= 5:
                    # Increase mutation when stagnating
                    current_mutation_rate = min(0.5, current_mutation_rate * 1.2)

        if report_progress(on_progress, progress_interval, iteration + 1,
                          evaluate.to_user(pop[0].fitness), evaluate.to_user(best_fitness), evaluate.evals):
            return Result(best_solution, evaluate.to_user(best_fitness), iteration + 1, evaluate.evals, Status.FEASIBLE)

    final_obj = evaluate.to_user(best_fitness)
    return Result(best_solution, final_obj, max_iter, evaluate.evals, Status.FEASIBLE)

Graph Algorithms

solvor.dijkstra

Dijkstra's algorithm for weighted shortest paths.

Use this when edges have non-negative weights and you need the optimal path. Road networks, routing with distances, any graph where "shortest" means "minimum total weight". This is the foundation for A*, which adds a heuristic for faster goal-directed search.

from solvor.dijkstra import dijkstra

result = dijkstra(start, goal, neighbors)
result = dijkstra(start, lambda s: s.is_target, neighbors)

The neighbors function returns (neighbor, edge_cost) pairs. Edge costs must be non-negative, use bellman_ford for negative weights.
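
A concrete sketch on a small adjacency dict; the neighbors callable just looks up a node's outgoing (neighbor, cost) pairs.

from solvor.dijkstra import dijkstra

graph = {
    'a': [('b', 1), ('c', 4)],
    'b': [('c', 2), ('d', 5)],
    'c': [('d', 1)],
    'd': [],
}

result = dijkstra('a', 'd', lambda node: graph[node])
# the cheapest route here is a -> b -> c -> d with total cost 4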

For negative edges use bellman_ford; Dijkstra's negativity was legendary, just not in his algorithm. For unweighted graphs use bfs (simpler). With a good distance estimate, use astar.

solvor.a_star

A* search for goal-directed shortest paths with heuristics.

Use this when you have an estimate of distance to the goal. A* expands fewer nodes than Dijkstra by prioritizing promising directions. The heuristic must be admissible (never overestimate) for optimal results.

from solvor.a_star import astar, astar_grid

result = astar(start, goal, neighbors, heuristic)
result = astar_grid(maze, (0, 0), (9, 9), directions=8)

astar_grid provides built-in heuristics and neighbor generation for 2D grids. For weighted A* (faster but suboptimal), set weight > 1.

Don't use this for: unknown goal location (use dijkstra), negative edges (use bellman_ford), or unweighted graphs (use bfs).
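
A sketch on a 10x10 open grid with unit step costs and a Manhattan-distance heuristic, which never overestimates and is therefore admissible. That the heuristic callable takes a node and returns an estimate, and that neighbors yields (node, cost) pairs as in dijkstra, are assumptions here.

from solvor.a_star import astar

GOAL = (9, 9)

def neighbors(pos):
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 10 and 0 <= ny < 10:
            yield (nx, ny), 1

def heuristic(pos):
    # Manhattan distance to the goal; admissible for 4-connected unit-cost moves
    return abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])

result = astar((0, 0), GOAL, neighbors, heuristic)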

solvor.flow

Network Flow, for assignment, matching, and bottleneck analysis.

Use max_flow when you need "how much can I push through this network?" - assigning workers to tasks, servers to requests, finding bottlenecks. Use min_cost_flow when cost matters: "what's the cheapest way to route X units?" Think transportation, logistics, and resource allocation with prices.

The max-flow min-cut theorem makes this useful for systems thinking: the maximum flow equals the minimum cut, so you find bottlenecks for free.

from solvor.flow import max_flow, min_cost_flow

# graph: {node: [(neighbor, capacity, cost), ...]}
result = max_flow(graph, source='s', sink='t')
result = min_cost_flow(graph, source='s', sink='t', demand=10)

Works well with NetworkX for graph construction and visualization:

import networkx as nx
G = nx.DiGraph()
G.add_edge('s', 'a', capacity=10, cost=1)
G.add_edge('a', 't', capacity=5, cost=2)
graph = {u: [(v, d['capacity'], d.get('cost', 0))
             for v, d in G[u].items()] for u in G}
result = max_flow(graph, 's', 't')

For heavier graph work (shortest paths, centrality, community detection), NetworkX is far more comprehensive. This solver focuses on the flow problems.