I mean, certainly you COULD reduce the Traveling Salesman Problem to SAT (it's NP-complete, after all), but I would assume that it would be a totally impractical way of doing it compared to more specialized algorithms.
I'm asking since the title contains the word "underused", but doesn't elaborate on practical use-cases. What would you use it for?
SAT particularly excels when you have side constraints. Imagine travelling salesman where you have to top up petrol only in cities, or where you need to sleep at night, or some roads are slower at rush hour, or you have a particular place you want to be at a particular time/date.
A specialised travelling salesman solver would be useless for any of these. Adding side constraints to SAT is simple.
In general, I wouldn't try to write SAT models directly unless you are working on a logic problem. Consider looking at Conjure ( https://github.com/conjure-cp/conjure ), which adds a nice language on top and can target various NP-complete solvers, including SAT (which is often the best). (I'm one of the minor Conjure developers)
In last year's Advent of Code programming contest, one of the trickier problems involved taking a collection of points in 3D space, each of which has a "range" defined by Manhattan distance, and finding the location that is "in range" of the maximum number of them.
Now, this isn't a theoretically intractable problem. Each point defines the center of an octahedral volume of space, and by repeatedly slicing each volume using the others' boundaries, you can partition the entire space into something like O(N^3) disjoint pieces. You can then enumerate these pieces to determine which one is contained in the largest number of octahedra. But actually implementing this is tricky; each piece ends up having an irregular polyhedral shape, and calculating the geometry of their boundaries would be non-trivial.
Instead, some of the participants noticed that you could represent the constraints as a formula:
find (x,y,z) that maximize sum((|x-xi| + |y-yi| + |z-zi|) < ri)
Here's one such solution: https://www.reddit.com/r/adventofcode/comments/a8s17l/2018_d...
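For anyone curious what that objective looks like concretely, here's a rough sketch (not the linked solution; the helper name is made up) of a script that emits the problem as Z3-flavoured SMT-LIB. Note `(maximize ...)` is a Z3 extension rather than core SMT-LIB, and `abs` comes from the Ints theory:

```python
def emit_model(points):
    """points: list of (xi, yi, zi, ri) tuples (non-negative here, to keep
    the SMT-LIB integer literals simple)."""
    lines = ["(declare-const x Int)",
             "(declare-const y Int)",
             "(declare-const z Int)"]
    terms = []
    for xi, yi, zi, ri in points:
        dist = f"(+ (abs (- x {xi})) (abs (- y {yi})) (abs (- z {zi})))"
        terms.append(f"(ite (<= {dist} {ri}) 1 0)")  # 1 if (x,y,z) is in range
    lines.append(f"(maximize (+ {' '.join(terms)}))")
    lines.append("(check-sat)")
    lines.append("(get-model)")
    return "\n".join(lines)

print(emit_model([(10, 12, 12, 2), (12, 14, 12, 2)]))
```

You'd pipe the output into `z3 -in` (or save it to a file) and read the optimal (x, y, z) back out of the model.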
It’s pretty easy to solve with an octree. My goal was run-time speed rather than quickest solution.
My octree solution runs in ~3 milliseconds. If memory serves, the Z3 solver took around 17 seconds on the same machine.
I made a nifty (imho) video that shows the octree traversal order. There’s no need to create irregular shapes.
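For reference, a minimal sketch of that branch-and-bound octree idea (my own reconstruction, not the parent's code): repeatedly split cubes, use "how many octahedra intersect this cube" as an upper bound, and always expand the most promising cube first. No irregular shapes are ever constructed.

```python
import heapq

def min_manhattan_to_box(c, lo, size):
    """Min Manhattan distance from point c to the cube [lo, lo+size-1]^3."""
    d = 0
    for ci, li in zip(c, lo):
        hi = li + size - 1
        if ci < li:
            d += li - ci
        elif ci > hi:
            d += ci - hi
    return d

def best_location(points):
    """points: list of (x, y, z, r). Returns (max_count, (x, y, z))."""
    def count(lo, size):
        # Octahedron |p - c|_1 <= r intersects the cube iff the min distance
        # from c to the cube is <= r; this is an upper bound for any sub-cube.
        return sum(min_manhattan_to_box((x, y, z), lo, size) <= r
                   for x, y, z, r in points)
    span = max(abs(v) + r for x, y, z, r in points for v in (x, y, z))
    size = 1
    while size < 2 * span + 1:   # power-of-two cube covering everything
        size *= 2
    lo = (-(size // 2),) * 3
    # Max-heap on (-count, size): highest upper bound first, smaller cubes
    # break ties, so the first unit cube popped holds the true maximum.
    heap = [(-count(lo, size), size, lo)]
    while heap:
        neg, size, lo = heapq.heappop(heap)
        if size == 1:
            return -neg, lo
        h = size // 2
        for dx in (0, h):
            for dy in (0, h):
                for dz in (0, h):
                    nlo = (lo[0] + dx, lo[1] + dy, lo[2] + dz)
                    heapq.heappush(heap, (-count(nlo, h), h, nlo))
```

The actual AoC puzzle also tie-breaks on distance to the origin, which this sketch ignores.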
We do reasonably well on the annual CASC competition of theorem provers (we're iprover), even winning a division where this approach is particularly strong!
Our algorithm ...
 H. Ganzinger, G. Hagen, R. Nieuwenhuis, A. Oliveras, C. Tinelli, DPLL(T): Fast Decision Procedures. https://www.cs.upc.edu/~oliveras/espai/papers/dpllt.pdf
 R. Nieuwenhuis, A. Oliveras, C. Tinelli, Abstract DPLL and Abstract DPLL Modulo Theories. http://www.cs.upc.edu/~roberto/papers/lpar04.pdf
I guess that you can think of our algorithm as lazy SAT modulo quantified first order logic, with the crucial difference that first order logic is undecidable. But anyway in this case the "decision procedure for T" would just be checking whether any two complementary literals which are true in the model are unifiable, which is decidable.
Basically CDCL(T) uses the theory part (the "T") as a callback in the inner loop of the SAT solver (to validate partial models). iProver and similar refinement provers tend to have their own outer loops ("refinement loops" in other communities) that makes a succession of calls to the SAT solver, to completion, and use the model or unsat core to refine their own abstraction for the next iteration of the outer loop.
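A toy illustration of that lazy loop (all names invented; the "SAT solver" here is a deliberately naive enumerator standing in for CDCL, to show the shape of the interaction rather than the engineering):

```python
from itertools import product

def lazy_smt(n_vars, clauses, theory_check):
    """Toy lazy-SMT loop. The 'SAT part' proposes full propositional models;
    theory_check(model) returns None if the model is consistent with the
    theory, else a blocking clause to learn. Literals are ints: +v / -v mean
    variable v (1-based) is true / false."""
    def satisfies(model, clause):
        return any((lit > 0) == model[abs(lit) - 1] for lit in clause)
    while True:
        found = next((m for m in product([False, True], repeat=n_vars)
                      if all(satisfies(m, c) for c in clauses)), None)
        if found is None:
            return None                    # propositionally UNSAT
        conflict = theory_check(found)
        if conflict is None:
            return found                   # theory-consistent model
        clauses = clauses + [conflict]     # learn the blocking clause, retry

# Example theory: vars 1, 2, 3 stand for a<b, b<c, a<c.
def transitivity(model):
    if model[0] and model[1] and not model[2]:
        return [-1, -2, 3]                 # (a<b and b<c) implies a<c
    return None

result = lazy_smt(3, [[1], [2]], transitivity)
```

In real CDCL(T) the theory callback runs on partial assignments inside the solver's inner loop rather than on complete models, but the learn-and-retry structure is the same.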
Then there's SATPLAN. If I had VC money I would be seeking young'uns trying to change the world with SATPLAN, of course in a way that's married to machine learning. (Machine learning doesn't "think", it "learns"; logic-based AI "thinks" but can't "learn").
E: part 1 says at the outset:
> As an example, the often-talked-about dependency management problem, is also NP-Complete and thus translates into SAT, and SAT could be translated into dependency manager. The problem our group worked on, generating key and lock cuttings based on user-provided lock-chart and manufacturer-specified geometry, is also NP-complete.<
Part 3 is already deep in the rabbit hole of comparing heuristics. People like me just write something that "compiles" to DIMACS and fires Minisat.
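For the curious, DIMACS CNF is trivial to emit; a hypothetical helper along these lines is really all the "compiler" backend needs:

```python
def to_dimacs(n_vars, clauses):
    """Serialize clauses (lists of signed 1-based ints) as DIMACS CNF."""
    lines = [f"p cnf {n_vars} {len(clauses)}"]
    for clause in clauses:
        lines.append(" ".join(map(str, clause)) + " 0")
    return "\n".join(lines)

# (x1 OR x2) AND (NOT x1 OR x3)
dimacs = to_dimacs(3, [[1, 2], [-1, 3]])
print(dimacs)
# then e.g.:  minisat problem.cnf result.txt
```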
Assign scores and differentiate on those scores. This is (very loosely) how AlphaGo also works - you take a tree-based algorithm (MCTS) and then use neural nets to score the next node to expand.
However, trying to use his method to optimise a linear typewriter proved unsuccessful, as projected solving time was around 13 million years.
(But your 13M years is a lot quicker than my projected seven hundred ninety-eight trillion, seven hundred twenty billion, nine hundred fifty-two million, one hundred seventy-six thousand, seven hundred and forty-five years you'd need for an exhaustive search of the 26! possibilities.)
Basically, the problem is: "find an input that causes the program to execute this sequence of statements".
Identification of missing optimizations in the LLVM backend.
In particular, pub — the package manager for Dart — explicitly uses the CDCL process described in the article:
Any ideas on how to switch to a SAT solver? How would I even write such a large SAT Boolean expression, and how would I tell the SAT solver that the costs will change for each choice depending on business rules and other choices made? It seemed impossible with a SAT system, but I assume I'm not familiar enough with the concepts.
Also if you have thousands of variables, you probably shouldn't be writing SMT statements directly. Usually you have some sort of script for your problem domain that emits SMT then invokes the solver. E.g. the popular SMT solver Z3 has a very simple lisp-like syntax that's easy to emit from a script.
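For instance, a throwaway script along these lines (the problem and all names are made up) emits SMT-LIB you can feed to `z3 -in`:

```python
def emit_smt(items, capacity):
    """Pick items (name, weight) without exceeding capacity."""
    out = ["(set-logic QF_LIA)"]
    for name, _ in items:
        out.append(f"(declare-const pick_{name} Bool)")
    # Total weight of the picked items as a linear integer term.
    total = " ".join(f"(ite pick_{name} {w} 0)" for name, w in items)
    out.append(f"(assert (<= (+ {total}) {capacity}))")
    out.append("(check-sat)")
    out.append("(get-model)")
    return "\n".join(out)

print(emit_smt([("a", 3), ("b", 5)], 7))
```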
As for changing costs, you might be able to use simple expressions like:
costOfShippingXusingZ = 100 - numOfXusingZ * 2
Or you might use "soft constraints" that let you add weights to optional constraints.
It's important to note that with SMT/SAT solvers you generally want to keep your formulas linear (degree one), which means don't multiply variables together if you can help it.
This might be useful: https://rise4fun.com/Z3/tutorial/optimization
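A tiny made-up example of the soft-constraint style, using Z3's `assert-soft` extension (not core SMT-LIB); the solver minimizes the total weight of violated soft constraints:

```python
# Hard constraint forces some shipping choice; soft constraints attach costs.
smt = "\n".join([
    "(declare-const shipViaZ Bool)",
    "(declare-const shipViaW Bool)",
    "(assert (or shipViaZ shipViaW))",           # hard: must ship somehow
    "(assert-soft (not shipViaZ) :weight 100)",  # using Z costs 100
    "(assert-soft (not shipViaW) :weight 60)",   # using W costs 60
    "(check-sat)",
    "(get-model)",
])
print(smt)
```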
The follow-on is that the same program should also process the model (answer) that Z3 produces and render it in a domain-appropriate fashion.
This is easiest if you use something like the Python interface to Z3, instead of actually generating SMT-LIB s-expressions and feeding them to Z3.
The hardest part is getting the model back at the end (well, and making problems that don't scale too badly with N).
However large graphs might not fit in memory.
A* also allows you to find a solution faster by overestimating your heuristic: for example, if you multiply your heuristic by 1.1 you are guaranteed to find a path whose length is within 10% of the shortest path, requiring less CPU time and memory for the search.
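A minimal sketch of that weighted-A* trick on a 4-connected grid (Manhattan heuristic inflated by `w`; my own toy code, not from the thread):

```python
import heapq

def weighted_astar(grid, start, goal, w=1.1):
    """A* on a 4-connected grid of 0s (free) and 1s (walls). Inflating the
    Manhattan heuristic by w >= 1 guarantees path cost <= w * optimal while
    typically expanding far fewer nodes. Returns path length or None."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(w * h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        _, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best_g.get(pos, float("inf")):
            continue                      # stale heap entry
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + w * h((nr, nc)), ng, (nr, nc)))
    return None
```

With `w=1` this is plain A* with an admissible heuristic (exact optimum); larger `w` biases the search toward the goal.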
If you’re minimizing the cost, shouldn’t this be formulated as an optimization problem?
I've only recently learned about SAT-type solvers, as MILP/LP solvers are very fast and globally optimal, unlike things like A* or genetic algorithms. I think GAs are cool because you can write one in less than half a page of Python, whereas MIP solvers are usually large black-box C codebases (e.g. GLPK, CPLEX, GUROBI, XPRESS, Open-Solver).
Fun to code though!
My only issue with LP and MILP solvers is scalability. You have none! Multicore doesn't really seem to help any.
My favorite MILP solver is commercial and perhaps somewhat obscure to the general population - Mosek. Any recommendations in regards to other good options? Perhaps open source that might be in the same or a better league.
For more information, see this page: http://plato.asu.edu/guide.html
Among other documents, it links to this extensive survey: https://www.microsoft.com/en-us/research/wp-content/uploads/...
All the best!
I wrote the hexiom solver that got linked above and the motivation in that case was that I saw that someone tried to solve the puzzle with a hand-written backtracking search, but that their algorithm wasn't able to find a solution to one of the levels.
My SAT-based version managed to quickly find a solution for every case, including the highly symmetric one where the usual search algorithm got stuck. The nicest thing is that the implementation was very declarative, in that my job was to produce a set of constraints for the SAT solver, instead of producing an imperative algorithm. The symmetry breaking was one example of something that was very easy to do in a SAT-solver setting but which would be trickier to do by hand.
They can trivially break lots of hash algorithms. You can also find bitwise equivalents to functions which you wouldn't naturally expect to be easily definable in terms of bitwise operations. You can extend existing concepts with a lot of extraordinary new functionality.
If you want more information about the vulnerability and mechanisms of exploiting it, project members assigned it CVE-2016-10253 .
The magic sauce is in being able to combine these things. Chuck a bag full of keys, sans known tumblers (those are embedded in doors, you don't steal doors) into a machine and learn with eg. stochastic gradient descent what are tumblerset-candidates that satisfy the entire system. Sensor arrays + AI that thinks + machine learning = wow.
The complaint was having to compile problems into boolean satisfiability problems. Here we see one need only describe their problems as puzzles. These examples are nice in that they include code that shows how straightforward the description of these problems can be.
That said, DNF package manager in Fedora, Mageia, OpenMandriva, and Yocto uses a SAT solver. As does the Zypper package manager in (open)SUSE.
Of the many ways to solve complex problems that we have in our toolbox -- algorithms, neural networks, constraint solvers -- I would totally agree that solvers are underused. But once you get used to them it's not hard to find use cases.
I used this to proof-of-concept an attack on Seahash; this was the first time I'd ever used an SMT solver, but it was really very straightforward.
Reductions are pretty easy to write and modern SAT solvers are crazy fast.
Three examples I've done this for, myself.
- Hardware circuits: is a hand-optimized combinational function (optimized with hand-written karnaugh maps, or whatever) equivalent to a high level specification? You could for example use this to write N versions of an instruction decoder, a reference spec and versions specialized to different hardware devices, and check them all together.
- Packet processing: are two different firewall/packet filters equivalent? In other words, is there any packet P for which two filters F(P) and G(P) have different results? This might be useful for example as a cloud user to ensure certain assumptions can't be violated (locking out necessary ports, etc)
- Cryptographic code: is this piece of C code equivalent to a high level specification? Is it still the same if I optimize it? This is possible by bounding the input of the code and then extracting models from them. You can do this in practice for pretty interesting functions. For example, I showed a reference spec of ChaCha20's RFC was equivalent to some C code I wrote, up-to some bounded stream length. (Unfortunately when SIMD is involved it's trickier to pull this off but that's mostly a technical limitation more than a theoretical one.)
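To make the equivalence-checking pattern concrete, here's a toy "miter" in plain Python (functions of my own invention). A SAT/SMT solver answers the same question, "is there an input where they differ?", symbolically rather than by enumerating; for an 8-bit input, enumeration shows the idea:

```python
def spec(x):
    return x % 8        # reference: remainder mod 8 (for non-negative x)

def optimized(x):
    return x & 7        # hand-optimized: mask the low three bits

# The miter: search for any input where the two implementations disagree.
counterexample = next((x for x in range(256) if spec(x) != optimized(x)), None)
print("equivalent" if counterexample is None else f"differ at {counterexample}")
```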
There is a variant of the third approach I've also seen used: an easy way of attacking weak RNGs! For example, the ancient JellyBean version of Android had a weak RNG at one point that allowed Bitcoin wallets to be stolen by cracking the keys. Effectively, this PRNG was seeded with a 64-bit state. But 64-bits isn't that much: two different seeds may possibly collide and give the same resulting random value. So here's how you phrase the problem: instead of equality, you're basically trying to prove the PRNG is non-injective. If R(n) is a function that takes a seed state 'n' and produces a pseudo-random number, then translate R to boolean expression, and ask this question: forall x y, iff x != y, R(x) == R(y). Are there any two values `x` and `y` that are not equal, but do result in the same resulting RNG values? If this equation is satisfiable, then your RNG is weak. In practice, a SAT solver can prove this is the case for the JellyBean RNG in less than 0.1 seconds. And you can also do it the other way, prove injectivity: forall x y, x == y || R(x) != R(y). You prove that for all x,y either x and y are equal, and if they're not, they will never result in the same output. This can also be proved in less than a fraction of a second. 
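As a sketch of how you'd actually pose that collision query as a satisfiability problem (toy 64-bit PRNG of my own invention, truncated so collisions are guaranteed): ask the solver whether two distinct seeds map to the same output.

```python
def emit_collision_query():
    """Toy PRNG step R(n) = (n * 0x9E3779B1) >> 48 on 64-bit seeds; the
    truncation guarantees collisions, so this query should come back 'sat'."""
    R = "(bvlshr (bvmul {v} #x000000009E3779B1) #x0000000000000030)"
    return "\n".join([
        "(declare-const x (_ BitVec 64))",
        "(declare-const y (_ BitVec 64))",
        "(assert (distinct x y))",                            # different seeds...
        f"(assert (= {R.format(v='x')} {R.format(v='y')}))",  # ...same output
        "(check-sat)",
        "(get-model)",
    ])

print(emit_collision_query())
```

If the solver answers "sat", the model gives you a concrete colliding pair, i.e. a witness that R is not injective.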
SAT solvers and SMT solvers are useful for a wide variety of practical applications, but they are not magic. You just need a bit of familiarity with them, and you'll probably find a lot of problems can be phrased that way! The secret, of course, is actually learning how to phrase problems in a manner that a SAT solver can actually work with. That's an entirely different problem.
 You actually want to prove something slightly different: forall x y, not(x == y || R(x) != R(y)). You want to find two inputs that result in "unsatisfiable" in this case. Negating the result gets you that: if any two x/y are not the same, but R(x) == R(y), then the overall result is 'true', which is a satisfiable case with a counter example.
Equivalence checking, logic synthesis and optimisation, formal verification, etc.
Say, you want to implement a package manager, with dependencies.
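A toy version of that encoding (package names and clauses invented), with a brute-force check standing in for the SAT solver:

```python
from itertools import product

# Vars (made up): 1 = install A, 2 = install B, 3 = install C.
clauses = [
    [1],          # we want A
    [-1, 2, 3],   # A depends on (B or C)
    [-2, -3],     # B conflicts with C
]

# Brute force stands in for the SAT solver on this tiny instance.
models = [m for m in product([False, True], repeat=3)
          if all(any((lit > 0) == m[abs(lit) - 1] for lit in c) for c in clauses)]
print(models)  # each satisfying model is a valid install plan
```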
Static symmetry breaking is performed as a kind of preprocessing, while dynamic symmetry breaking is interleaved with the search process. The static method has been used more but there are promising attempts to use dynamic symmetry breaking with CDCL solvers. See, for example, cosy (https://github.com/lip6/cosy) a symmetry breaking library which can be used with CDCL solvers.
That is certainly true. However, once all the other components have been optimized to the hilt (of course that is a never-ending story itself, probably with diminishing returns), the optimization of the symmetry-breaking parts will follow. As I mentioned before, efficiently dealing with isomorphism in the search space is the ultimate (theoretical) optimization.
This development can already be observed by looking at the history of graph automorphism packages. For a very long time "nauty" was the only significant player, then suddenly came "saucy" and "bliss". As far as I know, one of the motivations for the newcomers was to use them for symmetry breaking.
Problems that are "hard" are usually specially crafted (SAT competition).
Getting back to your question: In practice most problems do not show exponential runtime behaviour. Using a good SAT solver is key here since the more naive an implementation is, the more probable it is to run into exponential runtime with practical problems.
Writing an encoding from problem P to SAT is usually much simpler and easier to do correctly than writing a dedicated solver for P. SAT solvers are so efficient that very often they will be competitive with dedicated ones.
At the end of the day, you want to solve your high-level problem, not write a solver, especially when such good solvers are already available.
That's kind of backwards: reducing any problem in NP (not just NP-complete ones) to SAT is possible and mathematically straightforward because SAT is NP-complete. You do reductions from SAT to other problems in order to show that they are NP-complete (which involves cleverness in working out how to represent propositional logic in e.g. Sudoku).
Of course the problems being solved are NP hard in general, but in practice, on the kind of problems they're actually given, the solvers can be sufficiently fast.
You can solve combinatorial search problems with whatever method you want, but it can be very hard to beat a SAT solver, because you may have to duplicate all the strategies, engineering, and heuristics that SAT solvers have. If your problem has special structure then it may be worth writing your own solver. Constraint satisfaction problems can often be written with high level constraints that have good constraint propagators (e.g. the alldifferent constraint).
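For illustration, here's the naive pairwise SAT encoding of `alldifferent` that a dedicated propagator outperforms (my own sketch; one boolean per (variable, value) pair, DIMACS-style signed literals):

```python
def alldifferent_cnf(n, d):
    """CNF for alldifferent(x1..xn) over domain {0..d-1}."""
    v = lambda i, k: i * d + k + 1                         # 1-based literal ids
    clauses = []
    for i in range(n):
        clauses.append([v(i, k) for k in range(d)])        # xi takes some value
        for k in range(d):
            for k2 in range(k + 1, d):
                clauses.append([-v(i, k), -v(i, k2)])      # ...at most one value
    for k in range(d):
        for i in range(n):
            for j in range(i + 1, n):
                clauses.append([-v(i, k), -v(j, k)])       # no value is shared
    return clauses

print(len(alldifferent_cnf(3, 3)))
```

A CP solver's alldifferent propagator (based on matching) prunes far more per step than unit propagation over these clauses can, which is the "special structure" argument above.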
The API is not exactly Pythonic (numpy/scipy/opencv style) but it's loaded with features.
Where does the article talk about this?