API Reference

This section provides auto-generated documentation from the solvOR source code.

Core Types

See the Types page for Result, Status, Progress, and ProgressCallback.

Main Solvers

Linear Programming

solvor.simplex

Simplex Solver, linear programming that aged like wine.

You're walking along the edges of a giant crystal, always uphill, until you hit a corner that's optimal. You can visualize it. Most algorithms are abstract symbol manipulation; simplex is a journey through space.

from solvor.simplex import solve_lp

# minimize c @ x, subject to A @ x <= b, x >= 0
result = solve_lp(c, A, b)
result = solve_lp(c, A, b, minimize=False)  # maximize

How it works: starts at a vertex of the feasible polytope (phase 1 finds one if needed). Each iteration pivots to an adjacent vertex with better objective value. Bland's rule prevents cycling. Terminates when no improving neighbor exists.
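
A concrete run on the classic textbook LP (a minimal sketch; result.solution and result.objective follow the Result fields used in the source below):

from solvor.simplex import solve_lp

# maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18
c = [3, 5]
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
result = solve_lp(c, A, b, minimize=False)
# optimal vertex: x = 2, y = 6, objective 36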

Use this for:

  • Linear objectives with linear constraints
  • Resource allocation, blending, production planning
  • Transportation and assignment problems
  • When you need exact optimum (not approximate)

Parameters:

c: objective coefficients (minimize c @ x)
A: constraint matrix (A @ x <= b)
b: constraint bounds
minimize: True for min, False for max (default: True)

This also does the grunt work inside MILP, solving LP relaxations at each node.

Don't use this for: integer constraints (use MILP), non-linear objectives (use gradient or anneal), or problems with poor numerical scaling (simplex can struggle with badly scaled coefficients).

solve_lp(c, A, b, *, minimize=True, eps=1e-10, max_iter=100000)

Solve linear program: minimize c @ x subject to A @ x <= b, x >= 0.

Source code in solvor/simplex.py
def solve_lp(
    c: Sequence[float],
    A: Sequence[Sequence[float]],
    b: Sequence[float],
    *,
    minimize: bool = True,
    eps: float = 1e-10,
    max_iter: int = 100_000,
) -> Result:
    """Solve linear program: minimize c @ x subject to A @ x <= b, x >= 0."""
    check_matrix_dims(c, A, b)
    warn_large_coefficients(A)

    m, n = len(b), len(c)
    weights = list(c) if minimize else [-ci for ci in c]

    matrix = []
    for i in range(m):
        row = array("d", A[i])
        row.extend([0.0] * m)
        row[n + i] = 1.0
        row.append(b[i])
        matrix.append(row)

    obj = array("d", weights)
    obj.extend([0.0] * (m + 1))
    matrix.append(obj)

    basis = array("i", range(n, n + m))
    basis_set = set(basis)

    if any(matrix[i][-1] < -eps for i in range(m)):
        status, iters, matrix, basis, basis_set = _phase1(matrix, basis, basis_set, m, n, eps, max_iter)
        if status != Status.OPTIMAL:
            return Result(tuple([0.0] * n), float("inf"), iters, iters, Status.INFEASIBLE)
        max_iter -= iters
    else:
        iters = 0

    status, iters2, matrix, basis, basis_set = _phase2(matrix, basis, basis_set, m, eps, max_iter)
    return _extract(matrix, basis, m, n, status, iters + iters2, minimize)

solvor.milp

MILP Solver, linear optimization with integer constraints.

The workhorse of discrete optimization: diet problems, scheduling, set covering, facility location.

from solvor.milp import solve_milp

# minimize c @ x, subject to A @ x <= b, x >= 0, some x integer
result = solve_milp(c, A, b, integers=[0, 2])
result = solve_milp(c, A, b, integers=[0, 1], minimize=False)  # maximize

# warm start from previous solution (prunes search tree)
result = solve_milp(c, A, b, integers=[0, 2], warm_start=previous.solution)

# find multiple solutions (result.solutions contains all found)
result = solve_milp(c, A, b, integers=[0, 2], solution_limit=5)

How it works: branch and bound using simplex as subroutine. Solves LP relaxations (ignoring integer constraints), branches on fractional values, prunes subtrees that can't beat current best. Best-first search prioritizes promising branches.
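
A small knapsack-style instance as a sketch (binary variables are modeled here as integer variables with x_i <= 1 rows; the optimum is easy to verify by hand):

from solvor.milp import solve_milp

# maximize 5*x0 + 4*x1 + 3*x2 subject to 2*x0 + 3*x1 + x2 <= 5, each xi in {0, 1}
c = [5, 4, 3]
A = [[2, 3, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
b = [5, 1, 1, 1]
result = solve_milp(c, A, b, integers=[0, 1, 2], minimize=False)
# best: x0 = 1, x1 = 1, x2 = 0, objective 9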

Use this for:

  • Linear objectives with integer constraints
  • Diet/blending, scheduling, set covering
  • Facility location, power grid design
  • When you need proven optimal values

Parameters:

c: objective coefficients (minimize c @ x)
A: constraint matrix (A @ x <= b)
b: constraint bounds
integers: indices of integer-constrained variables
minimize: True for min, False for max (default: True)
warm_start: initial solution to prune search tree
solution_limit: find multiple solutions (default: 1)
heuristics: rounding and local-search improvement passes (default: True)
lns_iterations: LNS improvement passes, 0 = off (default: 0)
lns_destroy_frac: fraction of variables to unfix per LNS iteration (default: 0.3)
seed: random seed for LNS reproducibility
max_nodes: branch-and-bound node limit (default: 100000)
gap_tol: optimality gap tolerance (default: 1e-6)

CP is more expressive for logical constraints; SAT handles pure boolean problems. For continuous-only problems, use simplex directly.

Constraint Programming

solvor.sat

SAT solver, boolean satisfiability with clause learning.

You're navigating a maze where every dead end teaches you which turns to avoid. Hit a contradiction? Learn a new rule that prevents the same mistake. The solver gets smarter with each conflict, cutting through exponential search space.

from solvor.sat import solve_sat

# Clauses in CNF: each clause is OR'd literals, all clauses AND'd
# Positive int = true, negative = false
# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
result = solve_sat([[1, 2], [-1, 3], [-2, -3]])
result = solve_sat(clauses, solution_limit=10)  # Find multiple solutions

How it works: CDCL (Conflict-Driven Clause Learning) with VSIDS variable ordering. Makes decisions, propagates unit clauses, and when conflicts occur, learns new clauses that prevent the same conflict. Restarts periodically with Luby sequence.
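
For instance, the pairwise "exactly one" encoding common in puzzle models (exactly_one is a hypothetical helper written for this sketch, not part of solvor):

from itertools import combinations

from solvor.sat import solve_sat

def exactly_one(lits):
    # at least one true, plus pairwise at-most-one exclusions
    return [list(lits)] + [[-a, -b] for a, b in combinations(lits, 2)]

clauses = exactly_one([1, 2, 3]) + [[-2]]  # pick exactly one of x1..x3, and not x2
result = solve_sat(clauses)  # satisfiable: x1 or x3 true, alone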

Use this for:

  • Boolean satisfiability problems
  • Logic puzzles (Sudoku encoded as SAT)
  • Scheduling conflicts and dependency resolution
  • Hardware/software verification

Parameters:

clauses: list of clauses, each clause is list of literals (int)
assumptions: literals that must be true
max_conflicts: conflict limit before giving up
solution_limit: find multiple solutions

For integer domains use CP. For exact cover use DLX. Don't use for: optimization (MILP), continuous variables (simplex/gradient).

BinaryImplications

Binary clause (a ∨ b) means ¬a → b and ¬b → a. Stores clause index for conflict analysis.

Source code in solvor/sat.py
class BinaryImplications:
    """Binary clause (a ∨ b) means ¬a → b and ¬b → a. Stores clause index for conflict analysis."""

    __slots__ = ("pos", "neg")

    def __init__(self, n_vars: int):
        self.pos: list[list[tuple[int, int]]] = [[] for _ in range(n_vars + 1)]
        self.neg: list[list[tuple[int, int]]] = [[] for _ in range(n_vars + 1)]

    def add(self, lit_a: int, lit_b: int, clause_idx: int) -> None:
        if lit_a > 0:
            self.neg[lit_a].append((lit_b, clause_idx))
        else:
            self.pos[-lit_a].append((lit_b, clause_idx))
        if lit_b > 0:
            self.neg[lit_b].append((lit_a, clause_idx))
        else:
            self.pos[-lit_b].append((lit_a, clause_idx))

    def implications(self, false_lit: int) -> list[tuple[int, int]]:
        return self.neg[false_lit] if false_lit > 0 else self.pos[-false_lit]

    def clear_learned(self, original_count: int) -> None:
        for lst in self.pos:
            lst[:] = [(lit, idx) for lit, idx in lst if idx < original_count]
        for lst in self.neg:
            lst[:] = [(lit, idx) for lit, idx in lst if idx < original_count]

luby(i)

Luby restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...

Source code in solvor/sat.py
def luby(i: int) -> int:
    """Luby restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,..."""
    k = 1
    while True:
        if i == (1 << k) - 1:
            return 1 << (k - 1)
        if i < (1 << k) - 1:  # here 2^(k-1) <= i, so the subtraction stays positive
            i -= (1 << (k - 1)) - 1
            k = 1
        else:
            k += 1

solvor.cp

CP Solver, constraint programming with backtracking search.

Write constraints like a human ("all different", "x + y == 10"), and the solver finds valid assignments. Perfect for puzzles and scheduling.

from solvor.cp import Model

m = Model()
x = m.int_var(0, 9, 'x')
y = m.int_var(0, 9, 'y')
m.add(m.all_different([x, y]))
m.add(x + y == 10)
result = m.solve()  # result.solution == {'x': 3, 'y': 7} or similar

# global constraints for scheduling and routing
m.add(m.circuit([x, y, z]))  # Hamiltonian cycle
m.add(m.no_overlap(starts, durations))  # intervals don't overlap
m.add(m.cumulative(starts, durations, demands, capacity))  # resource limit

How it works: DFS with constraint propagation and arc consistency. Falls back to SAT encoding automatically for complex global constraints. MRV (minimum remaining values) heuristic picks the most constrained variable first.
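
Enumerating solutions shows the search in action (a minimal sketch; result.solutions follows the Result construction in the source below):

from solvor.cp import Model

m = Model()
x = m.int_var(1, 3, 'x')
y = m.int_var(1, 3, 'y')
m.add(m.all_different([x, y]))
result = m.solve(solution_limit=10)
# six assignments satisfy x != y; result.solutions holds all of them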

Use this for:

  • Sudoku, N-Queens, logic puzzles
  • Nurse rostering and timetabling
  • Scheduling with complex constraints
  • When "all different" or other global constraints fit naturally

Parameters:

Model.int_var(lb, ub, name): create integer variable
Model.add(constraint): add constraint
Model.solve(hints, solution_limit, solver): find solutions

hints: initial value hints to guide search
solution_limit: find multiple solutions
solver: 'auto' (default), 'dfs', or 'sat'

For optimization use MILP. For heavier constraint logic, see Z3.

Expr

Wrapper for expression tuples to support comparison operators.

Source code in solvor/cp.py
class Expr:
    """Wrapper for expression tuples to support comparison operators."""

    def __init__(self, data):
        self.data = data

    def __eq__(self, other):
        if isinstance(other, Expr):
            return ("ne_expr", self.data, other.data, False)  # False = eq
        if isinstance(other, (int, IntVar)):
            return ("ne_expr", self.data, other, False)
        return NotImplemented

    def __ne__(self, other):
        if isinstance(other, Expr):
            return ("ne_expr", self.data, other.data, True)  # True = ne
        if isinstance(other, (int, IntVar)):
            return ("ne_expr", self.data, other, True)
        return NotImplemented

    def __add__(self, other):
        return Expr(("add", self.data, other.data if isinstance(other, Expr) else other))

    def __radd__(self, other):
        return Expr(("add", other.data if isinstance(other, Expr) else other, self.data))

    def __sub__(self, other):
        if isinstance(other, int):
            return Expr(("add", self.data, -other))
        if isinstance(other, Expr):
            return Expr(("sub", self.data, other.data))
        return NotImplemented

    def __mul__(self, other):
        if isinstance(other, int):
            return Expr(("mul", self.data, other))
        return NotImplemented

    def __rmul__(self, other):
        if isinstance(other, int):
            return Expr(("mul", self.data, other))
        return NotImplemented

Model

Source code in solvor/cp.py
class Model:
    def __init__(self):
        self._next_bool = 1
        self._vars = {}
        self._constraints = []

    def _new_bool_var(self):
        v = self._next_bool
        self._next_bool += 1
        return v

    def int_var(self, lb, ub, name=None):
        if name is None:
            name = f"_v{len(self._vars)}"
        var = IntVar(self, lb, ub, name)
        self._vars[name] = var
        return var

    def all_different(self, variables):
        return ("all_different", tuple(variables))

    def circuit(self, variables):
        """Hamiltonian circuit: variables form a single cycle visiting all nodes."""
        return ("circuit", tuple(variables))

    def no_overlap(self, starts, durations):
        """Intervals (starts[i], starts[i]+durations[i]) don't overlap."""
        if len(starts) != len(durations):
            raise ValueError("starts and durations must have same length")
        return ("no_overlap", tuple(starts), tuple(durations))

    def cumulative(self, starts, durations, demands, capacity):
        """At any time, sum of active demands <= capacity."""
        if len(starts) != len(durations) or len(durations) != len(demands):
            raise ValueError("starts, durations, demands must have same length")
        return ("cumulative", tuple(starts), tuple(durations), tuple(demands), capacity)

    def add(self, constraint):
        self._constraints.append(constraint)

    def _flatten_sum(self, expr):
        terms = []
        const = 0

        def flatten(e):
            nonlocal const
            if isinstance(e, IntVar):
                terms.append(e)
            elif isinstance(e, int):
                const += e
            elif isinstance(e, tuple) and e[0] == "add":
                flatten(e[1])
                flatten(e[2])

        flatten(expr)
        return terms, const

    def sum_eq(self, variables, target):
        return ("sum_eq", tuple(variables), target)

    def sum_le(self, variables, target):
        return ("sum_le", tuple(variables), target)

    def sum_ge(self, variables, target):
        return ("sum_ge", tuple(variables), target)

    def _choose_solver(self) -> str:
        """Pick best solver based on constraints.

        Returns 'dfs' for simple constraints (all_different, eq, ne).
        Returns 'sat' for global constraints (circuit, no_overlap, cumulative, sum_*).
        """
        SAT_REQUIRED = {"circuit", "no_overlap", "cumulative", "sum_eq", "sum_le", "sum_ge"}
        for c in self._constraints:
            if isinstance(c, tuple) and c[0] in SAT_REQUIRED:
                return "sat"
        return "dfs"

    def solve(
        self,
        *,
        hints: dict[str, int] | None = None,
        solution_limit: int = 1,
        solver: str = "auto",
        **kwargs: Any,
    ) -> Result:
        """Solve the constraint satisfaction problem.

        Args:
            hints: Initial value hints to guide search.
            solution_limit: Max solutions to find (default 1).
            solver: 'auto' (default), 'dfs' (fast backtracking), or 'sat' (SAT encoding).
                    'auto' picks DFS for simple constraints, SAT for global constraints.
            **kwargs: Additional solver options.

        Returns:
            Result with solution dict mapping variable names to values.
        """
        if solver == "auto":
            solver = self._choose_solver()

        if solver == "dfs":
            return self._solve_dfs(hints=hints, solution_limit=solution_limit, **kwargs)
        elif solver == "sat":
            from solvor.cp_encoder import SATEncoder

            encoder = SATEncoder(self)
            return encoder.solve(hints=hints, solution_limit=solution_limit, **kwargs)
        else:
            raise ValueError(f"Unknown solver: {solver}. Use 'auto', 'dfs', or 'sat'.")

    def _solve_dfs(self, *, hints: dict[str, int] | None = None, solution_limit: int = 1, **kwargs):
        """DFS backtracking solver with constraint propagation.

        Uses arc consistency and MRV heuristic for fast constraint satisfaction.
        Falls back to SAT for global constraints (circuit, no_overlap, cumulative, sum_*).
        """
        # Fall back to SAT for constraints that DFS can't handle
        sat_required = {"circuit", "no_overlap", "cumulative", "sum_eq", "sum_le", "sum_ge"}
        for c in self._constraints:
            if isinstance(c, tuple) and c[0] in sat_required:
                from solvor.cp_encoder import SATEncoder

                encoder = SATEncoder(self)
                return encoder.solve(hints=hints, solution_limit=solution_limit, **kwargs)

        # Initialize domains as sets
        domains = {name: set(range(var.lb, var.ub + 1)) for name, var in self._vars.items()}

        # Apply hints
        if hints:
            for name, val in hints.items():
                if name in domains and val in domains[name]:
                    domains[name] = {val}

        # Propagate initial constraints
        if not self._propagate(domains):
            return Result(None, 0, 0, 0, Status.INFEASIBLE)

        solutions: list[dict[str, int]] = []
        iterations = [0]  # Use list for mutation in nested function

        def backtrack(domains: dict[str, set[int]]) -> bool:
            iterations[0] += 1

            # Check if all assigned
            unassigned = [n for n in domains if len(domains[n]) > 1 and not n.startswith("_")]
            if not unassigned:
                # Found solution
                sol = {n: next(iter(d)) for n, d in domains.items() if not n.startswith("_")}
                solutions.append(sol)
                return len(solutions) >= solution_limit

            # MRV: pick variable with smallest domain
            var_name = min(unassigned, key=lambda n: len(domains[n]))
            var_domain = list(domains[var_name])

            for val in var_domain:
                # Make assignment
                new_domains = {n: d.copy() for n, d in domains.items()}
                new_domains[var_name] = {val}

                # Propagate
                if self._propagate(new_domains):
                    if backtrack(new_domains):
                        return True

            return False

        backtrack(domains)

        if not solutions:
            return Result(None, 0, iterations[0], 0, Status.INFEASIBLE)

        if len(solutions) == 1:
            return Result(solutions[0], 0, iterations[0], 0)

        return Result(solutions[0], 0, iterations[0], 0, solutions=tuple(solutions))

    def _propagate(self, domains: dict[str, set[int]]) -> bool:
        """Apply arc consistency until fixpoint. Returns False if domain wipeout."""
        changed = True
        while changed:
            changed = False
            for constraint in self._constraints:
                old_sizes = {n: len(d) for n, d in domains.items()}
                if not self._propagate_constraint(constraint, domains):
                    return False
                # Check for domain wipeout or changes
                for n, d in domains.items():
                    if not d:
                        return False
                    if len(d) < old_sizes[n]:
                        changed = True
        return True

    def _propagate_constraint(self, constraint, domains: dict[str, set[int]]) -> bool:
        """Propagate a single constraint. Returns False if inconsistent."""
        if not isinstance(constraint, tuple):
            return True

        kind = constraint[0]

        if kind == "all_different":
            return self._propagate_all_different(constraint[1], domains)

        elif kind == "eq_const":
            var, val = constraint[1], constraint[2]
            if val not in domains[var.name]:
                return False
            domains[var.name] = {val}

        elif kind == "ne_const":
            var, val = constraint[1], constraint[2]
            domains[var.name].discard(val)

        elif kind == "eq_var":
            var1, var2 = constraint[1], constraint[2]
            common = domains[var1.name] & domains[var2.name]
            if not common:
                return False
            domains[var1.name] = common
            domains[var2.name] = common.copy()

        elif kind == "ne_var":
            var1, var2 = constraint[1], constraint[2]
            # If one is assigned, remove from other
            if len(domains[var1.name]) == 1:
                val = next(iter(domains[var1.name]))
                domains[var2.name].discard(val)
            if len(domains[var2.name]) == 1:
                val = next(iter(domains[var2.name]))
                domains[var1.name].discard(val)

        elif kind == "ne_expr":
            return self._propagate_ne_expr(constraint[1], constraint[2], constraint[3], domains)

        return True

    def _propagate_all_different(self, variables, domains: dict[str, set[int]]) -> bool:
        """Propagate all_different: assigned values removed from other domains."""
        # Remove assigned values from other domains
        for var in variables:
            if len(domains[var.name]) == 1:
                val = next(iter(domains[var.name]))
                for other in variables:
                    if other is not var:
                        domains[other.name].discard(val)
        return True

    def _propagate_ne_expr(self, left, right, is_ne: bool, domains: dict[str, set[int]]) -> bool:
        """Propagate (left_expr != right_expr) or (left_expr == right_expr)."""
        left_terms, left_const = self._flatten_sum(left)
        right_terms, right_const = self._flatten_sum(right)

        if len(left_terms) == 1 and len(right_terms) == 1:
            var1, var2 = left_terms[0], right_terms[0]
            offset = right_const - left_const

            if is_ne:
                # var1 != var2 + offset
                if len(domains[var1.name]) == 1:
                    v1 = next(iter(domains[var1.name]))
                    domains[var2.name].discard(v1 - offset)
                if len(domains[var2.name]) == 1:
                    v2 = next(iter(domains[var2.name]))
                    domains[var1.name].discard(v2 + offset)
            else:
                # var1 == var2 + offset
                valid1 = {v for v in domains[var1.name] if (v - offset) in domains[var2.name]}
                valid2 = {v for v in domains[var2.name] if (v + offset) in domains[var1.name]}
                if not valid1 or not valid2:
                    return False
                domains[var1.name] = valid1
                domains[var2.name] = valid2

        return True

circuit(variables)

Hamiltonian circuit: variables form a single cycle visiting all nodes.

Source code in solvor/cp.py
def circuit(self, variables):
    """Hamiltonian circuit: variables form a single cycle visiting all nodes."""
    return ("circuit", tuple(variables))

cumulative(starts, durations, demands, capacity)

At any time, sum of active demands <= capacity.

Source code in solvor/cp.py
def cumulative(self, starts, durations, demands, capacity):
    """At any time, sum of active demands <= capacity."""
    if len(starts) != len(durations) or len(durations) != len(demands):
        raise ValueError("starts, durations, demands must have same length")
    return ("cumulative", tuple(starts), tuple(durations), tuple(demands), capacity)

no_overlap(starts, durations)

Intervals (starts[i], starts[i]+durations[i]) don't overlap.

Source code in solvor/cp.py
def no_overlap(self, starts, durations):
    """Intervals (starts[i], starts[i]+durations[i]) don't overlap."""
    if len(starts) != len(durations):
        raise ValueError("starts and durations must have same length")
    return ("no_overlap", tuple(starts), tuple(durations))

solve(*, hints=None, solution_limit=1, solver='auto', **kwargs)

Solve the constraint satisfaction problem.

Parameters:

hints (dict[str, int] | None): initial value hints to guide search (default: None)
solution_limit (int): max solutions to find (default: 1)
solver (str): 'auto' (default), 'dfs' (fast backtracking), or 'sat' (SAT encoding); 'auto' picks DFS for simple constraints, SAT for global constraints
**kwargs (Any): additional solver options

Returns:

Result with solution dict mapping variable names to values.

Source code in solvor/cp.py
def solve(
    self,
    *,
    hints: dict[str, int] | None = None,
    solution_limit: int = 1,
    solver: str = "auto",
    **kwargs: Any,
) -> Result:
    """Solve the constraint satisfaction problem.

    Args:
        hints: Initial value hints to guide search.
        solution_limit: Max solutions to find (default 1).
        solver: 'auto' (default), 'dfs' (fast backtracking), or 'sat' (SAT encoding).
                'auto' picks DFS for simple constraints, SAT for global constraints.
        **kwargs: Additional solver options.

    Returns:
        Result with solution dict mapping variable names to values.
    """
    if solver == "auto":
        solver = self._choose_solver()

    if solver == "dfs":
        return self._solve_dfs(hints=hints, solution_limit=solution_limit, **kwargs)
    elif solver == "sat":
        from solvor.cp_encoder import SATEncoder

        encoder = SATEncoder(self)
        return encoder.solve(hints=hints, solution_limit=solution_limit, **kwargs)
    else:
        raise ValueError(f"Unknown solver: {solver}. Use 'auto', 'dfs', or 'sat'.")

Metaheuristics

solvor.anneal

Simulated Annealing, black box optimization that handles local optima well.

Inspired by metallurgical annealing: heat metal, let it cool slowly, atoms find low-energy configurations. Here, "temperature" controls how likely we are to accept worse solutions, allowing escape from local optima.

from solvor.anneal import anneal, linear_cooling

result = anneal(start, objective_fn, neighbor_fn)
result = anneal(start, objective_fn, neighbor_fn, minimize=False)  # maximize
result = anneal(start, objective_fn, neighbor_fn, cooling=linear_cooling())

How it works: at each step, generate a neighbor. If better, accept it. If worse, accept with probability exp(-delta/temperature). High temperature = explore freely, low temperature = exploit best found. Temperature decreases over time.
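
A minimal sketch on a function with two basins (objective and neighbor are made up for illustration; the deeper minimum is near x = -1.3):

from random import gauss

from solvor.anneal import anneal

def objective(x):
    return x**4 - 3 * x**2 + x  # two local minima; the left one is deeper

def neighbor(x):
    return x + gauss(0, 0.1)  # small, local move

result = anneal(0.0, objective, neighbor, seed=42)
# best solution should settle near x = -1.3 rather than the shallower basin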

Use this for:

  • Black-box optimization without gradients
  • Landscapes with many local optima
  • Fast prototyping when problem structure is unknown
  • When you can define a good neighbor function

Parameters:

initial: starting solution
objective_fn: function mapping solution to score
neighbors: function returning a random neighbor
temperature: starting temperature (default: 1000)
cooling: cooling schedule or rate (default: 0.9995)

The neighbor function is key: good neighbors make small moves, not random jumps. Think "swap two cities" for TSP, not "shuffle everything".
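
In code, the good kind of neighbor looks like this (a sketch; tour is any list of city indices, and swap_two plugs straight into the anneal call above):

from random import randrange

def swap_two(tour):
    # small move: exchange two cities, leave the rest of the tour alone
    i, j = randrange(len(tour)), randrange(len(tour))
    t = list(tour)
    t[i], t[j] = t[j], t[i]
    return t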

Cooling schedules:

exponential_cooling(rate)  : temp = initial * rate^iter (default, classic)
linear_cooling(min_temp)   : temp decreases linearly to min_temp
logarithmic_cooling(c)     : temp = initial / (1 + c * log(1 + iter))

Don't use this for: problems needing guarantees, or constraints easier to encode in MILP/CP. Consider tabu (memory) or genetic (population) for more control.

anneal(initial, objective_fn, neighbors, *, minimize=True, temperature=1000.0, cooling=0.9995, min_temp=1e-08, max_iter=100000, seed=None, on_progress=None, progress_interval=0)

Simulated annealing with configurable cooling schedule.

Source code in solvor/anneal.py
def anneal[T](
    initial: T,
    objective_fn: Callable[[T], float],
    neighbors: Callable[[T], T],
    *,
    minimize: bool = True,
    temperature: float = 1000.0,
    cooling: float | CoolingSchedule = 0.9995,
    min_temp: float = 1e-8,
    max_iter: int = 100_000,
    seed: int | None = None,
    on_progress: ProgressCallback | None = None,
    progress_interval: int = 0,
) -> Result:
    """Simulated annealing with configurable cooling schedule."""
    rng = Random(seed)
    evaluate = Evaluator(objective_fn, minimize)

    if callable(cooling):
        schedule = cooling
    else:
        schedule = exponential_cooling(cooling)

    solution, obj = initial, evaluate(initial)
    best_solution, best_obj = solution, obj
    initial_temp = temperature

    for iteration in range(1, max_iter + 1):
        temperature = schedule(initial_temp, iteration, max_iter)

        if temperature < min_temp:
            break

        neighbor = neighbors(solution)
        neighbor_obj = evaluate(neighbor)
        delta = neighbor_obj - obj

        if delta < 0 or rng.random() < exp(-delta / temperature):
            solution, obj = neighbor, neighbor_obj

            if obj < best_obj:
                best_solution, best_obj = solution, obj

        if report_progress(
            on_progress, progress_interval, iteration, evaluate.to_user(obj), evaluate.to_user(best_obj), evaluate.evals
        ):
            return Result(best_solution, evaluate.to_user(best_obj), iteration, evaluate.evals, Status.FEASIBLE)

    final_obj = evaluate.to_user(best_obj)
    status = Status.MAX_ITER if iteration == max_iter else Status.FEASIBLE
    return Result(best_solution, final_obj, iteration, evaluate.evals, status)

exponential_cooling(rate=0.9995)

Geometric cooling: temp = initial * rate^iteration.

Source code in solvor/anneal.py
def exponential_cooling(rate: float = 0.9995) -> CoolingSchedule:
    """Geometric cooling: temp = initial * rate^iteration."""

    def schedule(initial_temp: float, iteration: int, max_iter: int) -> float:
        return initial_temp * (rate**iteration)

    return schedule

linear_cooling(min_temp=1e-08)

Linear cooling: temp decreases linearly from initial to min_temp.

Source code in solvor/anneal.py
def linear_cooling(min_temp: float = 1e-8) -> CoolingSchedule:
    """Linear cooling: temp decreases linearly from initial to min_temp."""

    def schedule(initial_temp: float, iteration: int, max_iter: int) -> float:
        return initial_temp - (initial_temp - min_temp) * iteration / max_iter

    return schedule

logarithmic_cooling(c=1.0)

Logarithmic cooling: temp = initial / (1 + c * log(1 + iteration)).

Source code in solvor/anneal.py
def logarithmic_cooling(c: float = 1.0) -> CoolingSchedule:
    """Logarithmic cooling: temp = initial / (1 + c * log(1 + iteration))."""

    def schedule(initial_temp: float, iteration: int, max_iter: int) -> float:
        return initial_temp / (1 + c * log(1 + iteration))

    return schedule

solvor.tabu

Tabu Search, local search with memory to escape cycles.

Like anneal, this explores neighbors. Unlike anneal, it's greedy: always pick the best neighbor. The trick is the tabu list, a memory of recent moves that are temporarily forbidden. This prevents cycling back to solutions you just left, forcing the search to explore new territory.

Use this for routing (traveling salesman problem), scheduling, or when you want more control than anneal. Tabu is deterministic where anneal is probabilistic, so results are more reproducible and easier to debug.

from solvor.tabu import tabu_search, solve_tsp

result = tabu_search(start, objective_fn, neighbors_fn)
result = solve_tsp(distance_matrix)  # built-in TSP helper

How it works: each iteration picks the best neighbor not on the tabu list. The chosen move goes on the list for cooldown iterations. Aspiration allows tabu moves if they beat the global best.

Use this for:

  • Routing (TSP, VRP)
  • Scheduling and timetabling
  • When you want deterministic, reproducible results
  • When you need more control than anneal

The neighbor function must return (move, new_solution) pairs where move is hashable (so it can go in the tabu list). Think (i, j) for "swap cities i and j".
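
A sketch of that contract (the toy distance matrix is made up; for real TSP work, solve_tsp wraps this pattern for you):

from solvor.tabu import tabu_search

# 5 cities on a line: distance = |i - j|
dist = [[abs(i - j) for j in range(5)] for i in range(5)]

def tour_length(tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def neighbors(tour):
    moves = []
    for i in range(len(tour)):
        for j in range(i + 1, len(tour)):
            t = list(tour)
            t[i], t[j] = t[j], t[i]
            moves.append(((i, j), t))  # the move (i, j) is hashable
    return moves

result = tabu_search([0, 2, 4, 1, 3], tour_length, neighbors, cooldown=10)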

Parameters:

initial: starting solution
objective_fn: function mapping solution to score
neighbors: returns list of (move, new_solution) pairs
cooldown: how long moves stay tabu (default: 10)
max_no_improve: stop after this many iterations without improvement
seed: random seed for tie-breaking when multiple neighbors have equal cost

Genetic is population-based (more overhead, better diversity), anneal is probabilistic (simpler setup), tabu is greedy with memory (more predictable).

solvor.genetic

Genetic Algorithm, population-based search that excels at multi-objective problems.

Slower than modern solvers for single-objective problems, and gradients will beat it on continuous optimization. But genetic algorithms are king at multi-objective optimization where objectives compete (Pareto fronts). Great at exploration when you're searching in the dark without structure to exploit. And they parallelize beautifully while most solvers don't.

Unlike anneal/tabu (single solution), this evolves a whole population. More overhead, but better diversity and less likely to get trapped.

from solvor.genetic import evolve

result = evolve(objective_fn, population, crossover_fn, mutate_fn)
result = evolve(objective_fn, pop, cross, mut, minimize=False)  # maximize

How it works: each generation, select parents via tournament, combine with crossover to produce children, apply random mutations. Elite individuals survive unchanged. Adaptive mutation rate increases when stagnating.
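
A OneMax sketch (maximize the number of 1-bits; one-point crossover and single-bit-flip mutation are the standard choices for bitstrings):

from random import Random

from solvor.genetic import evolve

rng = Random(0)
N = 32
population = [[rng.randrange(2) for _ in range(N)] for _ in range(40)]

def ones(bits):
    return sum(bits)

def crossover(a, b):
    cut = rng.randrange(1, N)  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    out = list(bits)
    out[rng.randrange(N)] ^= 1  # flip one random bit
    return out

result = evolve(ones, population, crossover, mutate, minimize=False)
# best fitness should approach N as generations pass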

Use this for:

  • Multi-objective optimization (Pareto fronts)
  • Exploration when problem structure is unknown
  • Problems that parallelize well
  • When you can define good crossover/mutation operators

Parameters:

objective_fn: fitness function mapping solution to score
population: starting solutions (bigger = more diversity but slower)
crossover: combine two parents into one child
mutate: small random changes to a solution
elite_size: survivors per generation (default: 2)
mutation_rate: how often to mutate (default: 0.1)
tournament_k: selection pressure (default: 3)

Don't use this for: problems with gradient info (use gradient descent), convex problems (use simplex), or discrete structured problems (use CP/SAT).

evolve(objective_fn, population, crossover, mutate, *, minimize=True, elite_size=2, mutation_rate=0.1, adaptive_mutation=False, max_iter=100, tournament_k=3, seed=None, on_progress=None, progress_interval=0)

Genetic algorithm with optional adaptive mutation.

Parameters:

adaptive_mutation (bool): if True, increase mutation rate when population diversity is low (stagnation), decrease when improving (default: False)
Source code in solvor/genetic.py
def evolve[T](
    objective_fn: Callable[[T], float],
    population: Sequence[T],
    crossover: Callable[[T, T], T],
    mutate: Callable[[T], T],
    *,
    minimize: bool = True,
    elite_size: int = 2,
    mutation_rate: float = 0.1,
    adaptive_mutation: bool = False,
    max_iter: int = 100,
    tournament_k: int = 3,
    seed: int | None = None,
    on_progress: ProgressCallback | None = None,
    progress_interval: int = 0,
) -> Result:
    """
    Genetic algorithm with optional adaptive mutation.

    Args:
        adaptive_mutation: If True, increase mutation rate when population
            diversity is low (stagnation), decrease when improving.
    """
    rng = Random(seed)
    evaluate = Evaluator(objective_fn, minimize)
    pop_size = len(population)

    pop = [Individual(sol, evaluate(sol)) for sol in population]
    pop.sort(key=attrgetter("fitness"))
    best_solution = pop[0].solution
    best_fitness = pop[0].fitness

    def tournament():
        contestants = rng.sample(pop, min(tournament_k, len(pop)))
        return min(contestants, key=attrgetter("fitness"))

    current_mutation_rate = mutation_rate
    stagnation_count = 0

    for iteration in range(max_iter):
        new_pop = pop[:elite_size]

        while len(new_pop) < pop_size:
            p1 = tournament()
            p2 = tournament()
            child_sol = crossover(p1.solution, p2.solution)

            if rng.random() < current_mutation_rate:
                child_sol = mutate(child_sol)

            child = Individual(child_sol, evaluate(child_sol))
            new_pop.append(child)

        pop = sorted(new_pop, key=attrgetter("fitness"))[:pop_size]

        improved = False
        if pop[0].fitness < best_fitness:
            best_solution = pop[0].solution
            best_fitness = pop[0].fitness
            improved = True

        # Adaptive mutation: adjust rate based on progress
        if adaptive_mutation:
            if improved:
                stagnation_count = 0
                # Decrease mutation when improving
                current_mutation_rate = max(0.01, current_mutation_rate * 0.95)
            else:
                stagnation_count += 1
                if stagnation_count >= 5:
                    # Increase mutation when stagnating
                    current_mutation_rate = min(0.5, current_mutation_rate * 1.2)

        if report_progress(
            on_progress,
            progress_interval,
            iteration + 1,
            evaluate.to_user(pop[0].fitness),
            evaluate.to_user(best_fitness),
            evaluate.evals,
        ):
            return Result(best_solution, evaluate.to_user(best_fitness), iteration + 1, evaluate.evals, Status.FEASIBLE)

    final_obj = evaluate.to_user(best_fitness)
    return Result(best_solution, final_obj, max_iter, evaluate.evals, Status.FEASIBLE)

Graph Algorithms

solvor.dijkstra

Dijkstra's algorithm for weighted shortest paths.

The classic algorithm for finding shortest paths in graphs with non-negative edge weights. Named after Edsger Dijkstra, who designed it in 1956.

from solvor.dijkstra import dijkstra

result = dijkstra(start, goal, neighbors)
result = dijkstra(start, lambda s: s.is_target, neighbors)

How it works: maintains a priority queue of (distance, node) pairs. Each iteration pops the closest unvisited node, marks it visited, and relaxes its edges. Guarantees shortest path when all edges are non-negative.
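
On a small adjacency dict (the graph shape is just for illustration; neighbors can be any callable returning (node, cost) pairs):

from solvor.dijkstra import dijkstra

graph = {
    'a': [('b', 1.0), ('c', 4.0)],
    'b': [('c', 2.0), ('d', 5.0)],
    'c': [('d', 1.0)],
    'd': [],
}
result = dijkstra('a', 'd', lambda n: graph[n])
# shortest path a -> b -> c -> d, cost 4.0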

Use this for:

  • Road networks and routing
  • Any graph where "shortest" means minimum total weight
  • When edge weights are non-negative
  • As foundation for A* (add heuristic for goal-directed search)

Parameters:

start: starting node
goal: target node, or predicate function returning True at goal
neighbors: function returning (neighbor, edge_cost) pairs

Two variants available:

dijkstra() - Callback-based, works with any node type (pure Python)
dijkstra_edges() - Edge-list, integer nodes 0..n-1, has Rust backend (5-10x faster)

Use dijkstra_edges(backend="python") for the pure Python implementation.
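
dijkstra_edges in both modes, on toy data (a minimal sketch of the documented signature):

from solvor.dijkstra import dijkstra_edges

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 4.0)]
result = dijkstra_edges(3, edges, 0)  # all distances from node 0
result = dijkstra_edges(3, edges, 0, target=2)  # single path 0 -> 1 -> 2, cost 3.0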

For negative edges use bellman_ford (Dijkstra's negativity was legendary, just not in his algorithm). For unweighted graphs use bfs (simpler). With a good distance estimate, use a_star.

dijkstra(start, goal, neighbors, *, max_iter=1000000, max_cost=None)

Find shortest path in a weighted graph with non-negative edges.

Source code in solvor/dijkstra.py
def dijkstra[S](
    start: S,
    goal: S | Callable[[S], bool],
    neighbors: Callable[[S], Iterable[tuple[S, float]]],
    *,
    max_iter: int = 1_000_000,
    max_cost: float | None = None,
) -> Result:
    """Find shortest path in a weighted graph with non-negative edges."""
    is_goal = goal if callable(goal) else lambda s: s == goal

    g: dict[S, float] = {start: 0.0}
    parent: dict[S, S] = {}
    closed: set[S] = set()
    counter = 0
    heap: list[tuple[float, int, S]] = [(0.0, counter, start)]
    counter += 1
    iterations = 0
    evaluations = 1  # Counts nodes added to frontier (start + discovered neighbors)

    while heap and iterations < max_iter:
        cost, _, current = heappop(heap)

        if current in closed:
            continue

        iterations += 1
        closed.add(current)

        if is_goal(current):
            path = reconstruct_path(parent, current)
            return Result(path, g[current], iterations, evaluations)

        if max_cost is not None and cost > max_cost:
            continue

        for neighbor, edge_cost in neighbors(current):
            if neighbor in closed:
                continue

            tentative_g = g[current] + edge_cost

            if tentative_g < g.get(neighbor, float("inf")):
                g[neighbor] = tentative_g
                parent[neighbor] = current
                heappush(heap, (tentative_g, counter, neighbor))
                counter += 1
                evaluations += 1

    if iterations >= max_iter:
        return Result(None, float("inf"), iterations, evaluations, Status.MAX_ITER)
    return Result(None, float("inf"), iterations, evaluations, Status.INFEASIBLE)

dijkstra_edges(n_nodes, edges, source, *, target=None)

Edge-list Dijkstra for integer node graphs.

Source code in solvor/dijkstra.py
@with_rust_backend
def dijkstra_edges(
    n_nodes: int,
    edges: list[tuple[int, int, float]],
    source: int,
    *,
    target: int | None = None,
) -> Result:
    """Edge-list Dijkstra for integer node graphs."""
    adj: list[list[tuple[int, float]]] = [[] for _ in range(n_nodes)]
    for u, v, w in edges:
        adj[u].append((v, w))

    if target is not None:
        return dijkstra(source, target, lambda s: adj[s])

    # All-distances mode: minimal Dijkstra to collect distances
    dist: dict[int, float] = {source: 0.0}
    heap: list[tuple[float, int]] = [(0.0, source)]
    iterations = 0
    while heap:
        d, u = heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        iterations += 1
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heappush(heap, (nd, v))
    return Result(dist, 0.0, iterations, 0)

solvor.a_star

A* search for goal-directed shortest paths with heuristics.

Dijkstra explores in all directions like ripples in a pond. A* knows where it's going and prioritizes paths that look promising. Give it a heuristic (estimate to goal) and it expands far fewer nodes.

from solvor.a_star import astar, astar_grid

result = astar(start, goal, neighbors, heuristic)
result = astar_grid(maze, (0, 0), (9, 9), directions=8)

How it works: like Dijkstra but prioritizes by f(n) = g(n) + h(n), where g is cost so far and h is estimated cost to goal. Heuristic must be admissible (never overestimate) for optimal results. Weighted A* (weight > 1) trades optimality for speed.
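
For a custom state space, the heuristic is just a callable (a sketch on an open 10x10 grid; Manhattan distance is admissible for 4-connected unit moves):

from solvor.a_star import astar

goal = (5, 5)

def neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1.0)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx <= 9 and 0 <= y + dy <= 9]

def manhattan(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # never overestimates

result = astar((0, 0), goal, neighbors, manhattan)
# cost 10.0, with far fewer expansions than goalless search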

Use this for:

  • Pathfinding with known goal location
  • When you have a good distance heuristic
  • Grid navigation and game AI
  • When Dijkstra is too slow

Parameters:

start: starting node
goal: target node or predicate function
neighbors: returns (neighbor, cost) pairs
heuristic: estimates distance to goal (must not overestimate)
weight: heuristic weight, >1 for faster suboptimal search

astar_grid provides built-in heuristics for 2D grids.

Don't use this for: unknown goal (use dijkstra), negative edges (use bellman_ford).

astar(start, goal, neighbors, heuristic, *, weight=1.0, max_iter=1000000, max_cost=None)

A* search with heuristic guidance, returns optimal path when weight=1.

Source code in solvor/a_star.py
def astar[S](
    start: S,
    goal: S | Callable[[S], bool],
    neighbors: Callable[[S], Iterable[tuple[S, float]]],
    heuristic: Callable[[S], float],
    *,
    weight: float = 1.0,
    max_iter: int = 1_000_000,
    max_cost: float | None = None,
) -> Result:
    """A* search with heuristic guidance, returns optimal path when weight=1."""
    is_goal = goal if callable(goal) else lambda s: s == goal

    g: dict[S, float] = {start: 0.0}
    parent: dict[S, S] = {}
    closed: set[S] = set()
    counter = 0
    f_start = weight * heuristic(start)
    heap: list[tuple[float, float, int, S]] = [(f_start, 0.0, counter, start)]
    counter += 1
    iterations = 0
    evaluations = 1  # Counts nodes added to frontier (start + discovered neighbors)

    while heap and iterations < max_iter:
        _, _, _, current = heappop(heap)

        if current in closed:
            continue

        iterations += 1
        closed.add(current)

        if is_goal(current):
            path = reconstruct_path(parent, current)
            status = Status.OPTIMAL if weight == 1.0 else Status.FEASIBLE
            return Result(path, g[current], iterations, evaluations, status)

        if max_cost is not None and g[current] > max_cost:
            continue

        for neighbor, edge_cost in neighbors(current):
            if neighbor in closed:
                continue

            tentative_g = g[current] + edge_cost

            if tentative_g < g.get(neighbor, float("inf")):
                g[neighbor] = tentative_g
                parent[neighbor] = current
                f_new = tentative_g + weight * heuristic(neighbor)
                heappush(heap, (f_new, -tentative_g, counter, neighbor))
                counter += 1
                evaluations += 1

    if iterations >= max_iter:
        return Result(None, float("inf"), iterations, evaluations, Status.MAX_ITER)
    return Result(None, float("inf"), iterations, evaluations, Status.INFEASIBLE)

astar_grid(grid, start, goal, *, directions=4, heuristic='auto', blocked=1, costs=None, weight=1.0, max_iter=1000000)

A* for 2D grids with built-in heuristics and neighbor generation.

Source code in solvor/a_star.py
def astar_grid(
    grid: Sequence[Sequence[int]],
    start: tuple[int, int],
    goal: tuple[int, int],
    *,
    directions: Literal[4, 8] = 4,
    heuristic: Literal["auto", "manhattan", "octile", "euclidean", "chebyshev"] = "auto",
    blocked: int | set[int] = 1,
    costs: dict[int, float] | None = None,
    weight: float = 1.0,
    max_iter: int = 1_000_000,
) -> Result:
    """A* for 2D grids with built-in heuristics and neighbor generation."""
    rows = len(grid)
    cols = len(grid[0]) if rows else 0
    blocked_set = {blocked} if isinstance(blocked, int) else set(blocked)
    cost_map = costs or {}
    dirs = _DIRS_8 if directions == 8 else _DIRS_4
    gr, gc = goal

    h_name = heuristic
    if h_name == "auto":
        h_name = "octile" if directions == 8 else "manhattan"

    match h_name:
        case "manhattan":

            def h(s):
                return abs(s[0] - gr) + abs(s[1] - gc)
        case "euclidean":

            def h(s):
                return ((s[0] - gr) ** 2 + (s[1] - gc) ** 2) ** 0.5
        case "octile":

            def h(s):
                dr, dc = abs(s[0] - gr), abs(s[1] - gc)
                return max(dr, dc) + _SQRT2_MINUS_1 * min(dr, dc)
        case "chebyshev":

            def h(s):
                return max(abs(s[0] - gr), abs(s[1] - gc))
        case _:

            def h(s):
                return abs(s[0] - gr) + abs(s[1] - gc)

    def neighbors(pos: tuple[int, int]):
        r, c = pos
        for dr, dc in dirs:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                cell = grid[nr][nc]
                if cell not in blocked_set:
                    base = cost_map.get(cell, 1.0)
                    if dr != 0 and dc != 0:
                        base *= _SQRT2
                    yield (nr, nc), base

    return astar(start, goal, neighbors, h, weight=weight, max_iter=max_iter)

solvor.flow

Network Flow, for assignment, matching, and bottleneck analysis.

Use max_flow when you need "how much can I push through this network?" Use min_cost_flow when cost matters: "what's the cheapest way to route X units?"

from solvor.flow import max_flow, min_cost_flow

# graph: {node: [(neighbor, capacity, cost), ...]}
result = max_flow(graph, source='s', sink='t')
result = min_cost_flow(graph, source='s', sink='t', demand=10)

How it works: max_flow uses Ford-Fulkerson with BFS (Edmonds-Karp), repeatedly finding augmenting paths until none exist. min_cost_flow uses successive shortest paths via Bellman-Ford, finding the cheapest augmenting path each time. The max-flow min-cut theorem means you find bottlenecks for free.
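
A toy network exercising both calls (the adjacency format is the one described under Parameters below):

from solvor.flow import max_flow, min_cost_flow

graph = {
    's': [('a', 10, 1), ('b', 5, 2)],
    'a': [('t', 5, 1), ('b', 15, 1)],
    'b': [('t', 10, 1)],
    't': [],
}
result = max_flow(graph, 's', 't')  # maximum flow is 15 (cut: a->t plus b->t)
result = min_cost_flow(graph, 's', 't', demand=10)  # cheapest routing of 10 units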

Use this for:

  • Assignment and matching problems
  • Server-to-request allocation
  • Transportation and logistics
  • Finding network bottlenecks (max-flow = min-cut)

Parameters:

graph: adjacency dict {node: [(neighbor, capacity, cost), ...]}
source: source node
sink: sink node
demand: (min_cost_flow only) units to route

Works well with NetworkX for graph construction. For heavier graph work (shortest paths, centrality, community detection), NetworkX is more extensive.

max_flow(graph, source, sink)

Find maximum flow from source to sink using Ford-Fulkerson (FFA) with BFS.

Source code in solvor/flow.py
def max_flow[Node](
    graph: dict[Node, list[tuple[Node, int, ...]]],
    source: Node,
    sink: Node,
) -> Result:
    """Find maximum flow from source to sink using Ford-Fulkerson (FFA) with BFS."""
    capacity = defaultdict(lambda: defaultdict(int))
    for u in graph:
        for v, cap, *_ in graph[u]:
            capacity[u][v] += cap

    flow = defaultdict(lambda: defaultdict(int))
    total_flow = 0
    iterations = 0

    def bfs():
        """Find augmenting path from source to sink via BFS."""
        visited = {source}
        queue = deque([(source, [source])])

        while queue:
            node, path = queue.popleft()
            if node == sink:
                return path

            for neighbor in capacity[node]:
                # Forward capacity minus used, plus reverse flow we can cancel
                residual = capacity[node][neighbor] - flow[node][neighbor] + flow[neighbor][node]
                if neighbor not in visited and residual > 0:
                    visited.add(neighbor)
                    queue.append((neighbor, path + [neighbor]))

        return None

    while path := bfs():
        iterations += 1
        path_flow = float("inf")
        for u, v in zip(path, path[1:]):
            residual = capacity[u][v] - flow[u][v] + flow[v][u]
            path_flow = min(path_flow, residual)

        for u, v in zip(path, path[1:]):
            if flow[v][u] > 0:
                reduce = min(path_flow, flow[v][u])
                flow[v][u] -= reduce
                path_flow_remaining = path_flow - reduce
                flow[u][v] += path_flow_remaining
            else:
                flow[u][v] += path_flow

        total_flow += path_flow

    flows = {(u, v): flow[u][v] for u in flow for v in flow[u] if flow[u][v] > 0}
    return Result(flows, total_flow, iterations, iterations)

min_cost_flow(graph, source, sink, demand)

Route demand units from source to sink at minimum total cost.

Source code in solvor/flow.py
def min_cost_flow[Node](
    graph: dict[Node, list[tuple[Node, int, int]]],
    source: Node,
    sink: Node,
    demand: int,
) -> Result:
    """Route demand units from source to sink at minimum total cost."""
    capacity = defaultdict(lambda: defaultdict(int))
    cost = defaultdict(lambda: defaultdict(lambda: float("inf")))
    nodes = set()

    for u in graph:
        nodes.add(u)
        for v, cap, c in graph[u]:
            nodes.add(v)
            capacity[u][v] += cap
            cost[u][v] = min(cost[u][v], c)
            if cost[v][u] == float("inf"):
                cost[v][u] = -c

    flow = defaultdict(lambda: defaultdict(int))
    total_cost = 0
    total_flow = 0
    iterations = 0

    def bellman_ford():
        dist = {n: float("inf") for n in nodes}
        parent = {n: None for n in nodes}
        dist[source] = 0

        for _ in range(len(nodes) - 1):
            updated = False
            for u in nodes:
                if dist[u] == float("inf"):
                    continue
                for v in nodes:
                    residual = capacity[u][v] - flow[u][v] + flow[v][u]
                    if residual > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v] = dist[u] + cost[u][v]
                        parent[v] = u
                        updated = True
            if not updated:
                break

        if dist[sink] == float("inf"):
            return None, float("inf")

        path = []
        node = sink
        while node is not None:
            path.append(node)
            node = parent[node]
        path.reverse()

        return path, dist[sink]

    while total_flow < demand:
        iterations += 1
        path, path_cost = bellman_ford()
        if path is None:
            return Result({}, float("inf"), iterations, iterations, Status.INFEASIBLE)

        path_flow = demand - total_flow
        for u, v in zip(path, path[1:]):
            residual = capacity[u][v] - flow[u][v] + flow[v][u]
            path_flow = min(path_flow, residual)

        for u, v in zip(path, path[1:]):
            if flow[v][u] > 0:
                reduce = min(path_flow, flow[v][u])
                flow[v][u] -= reduce
                remaining = path_flow - reduce
                flow[u][v] += remaining
                total_cost += cost[u][v] * remaining - cost[v][u] * reduce
            else:
                flow[u][v] += path_flow
                total_cost += cost[u][v] * path_flow

        total_flow += path_flow

    flows = {(u, v): flow[u][v] for u in flow for v in flow[u] if flow[u][v] > 0}
    return Result(flows, total_cost, iterations, iterations)

solve_assignment(cost_matrix)

Solve assignment problem via min-cost flow reduction.

Source code in solvor/flow.py
def solve_assignment(
    cost_matrix: Sequence[Sequence[float]],
) -> Result:
    """Solve assignment problem via min-cost flow reduction."""
    n = len(cost_matrix)
    m = len(cost_matrix[0]) if n > 0 else 0

    graph = defaultdict(list)
    source, sink = "source", "sink"

    for i in range(n):
        graph[source].append((f"L{i}", 1, 0))
        for j in range(m):
            graph[f"L{i}"].append((f"R{j}", 1, cost_matrix[i][j]))

    for j in range(m):
        graph[f"R{j}"].append((sink, 1, 0))

    result = min_cost_flow(graph, source, sink, min(n, m))

    assignment = [-1] * n
    flow_solution: dict[tuple[str, str], float] = result.solution  # type: ignore[assignment]
    for (u, v), f in flow_solution.items():
        if f > 0 and u.startswith("L") and v.startswith("R"):
            i = int(u[1:])
            j = int(v[1:])
            assignment[i] = j

    return Result(assignment, result.objective, result.iterations, result.evaluations, result.status)
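
A quick usage sketch (assignment[i] is the column chosen for row i; the optimum here is easy to check by hand):

from solvor.flow import solve_assignment

costs = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
result = solve_assignment(costs)
# result.solution == [1, 0, 2], total cost 1 + 2 + 2 = 5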