The algorithm he describes is not very interesting because it doesn't even implement unit propagation. The idea behind unit propagation is that if my current assignment has set a variable A=True and I also have a clause (~A + B), then B must be true for the instance to be satisfiable. Unit propagation is about propagating such implied constraints. If he'd implemented unit propagation along with his brute-force search he'd have ended up with what is called the DPLL algorithm for SAT.
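The rule above can be sketched in a few lines. This is a minimal illustration of unit propagation (not anyone's production code); clauses use DIMACS-style integer literals, where 3 means "variable 3 is true" and -3 means "variable 3 is false":

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses.
    Returns the extended assignment, or None on a conflict."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None          # every literal is false: conflict
            if len(unassigned) == 1:
                lit = unassigned[0]  # unit clause: this literal is forced
                assignment[abs(lit)] = (lit > 0)
                changed = True
    return assignment

# The example from the comment: A=True plus clause (~A + B) forces B=True.
result = unit_propagate([[-1, 2]], {1: True})
```

Wrapping this inside the brute-force search, and undoing the propagated assignments on backtrack, is essentially DPLL.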
Modern SAT solvers are actually even smarter and have two important improvements over DPLL: (i) Conflict-Driven Clause Learning (CDCL), which introduced a very cool technique called non-chronological backtracking, and (ii) 2-literal watching, which is an extremely clever data structure for efficient unit propagation. Both of these ideas generalize to many other kinds of constraint solving (e.g. Sudoku, router search, etc.) so they are well worth learning.
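To give a flavor of the watching idea: each clause watches just two of its literals, and the solver only revisits a clause when one of its watched literals becomes false. This is a heavily simplified sketch with invented names, not the actual Chaff implementation (a real solver also drives a propagation queue and keeps propagating the implied literals, which this omits):

```python
from collections import defaultdict

class WatchedClauses:
    """Toy 2-literal watching scheme. Clauses are DIMACS-style literal lists."""

    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.watch = defaultdict(list)   # literal -> indices of clauses watching it
        for i, c in enumerate(self.clauses):
            # watch the first two literals of each clause (assumes len >= 2)
            self.watch[c[0]].append(i)
            self.watch[c[1]].append(i)

    def assign(self, lit, assignment):
        """Make lit true; visit only clauses watching the falsified literal.
        Returns the list of newly implied literals, or None on a conflict."""
        assignment[abs(lit)] = (lit > 0)
        implied = []
        falsified = -lit
        watchers = self.watch.pop(falsified, [])
        for i in watchers:
            c = self.clauses[i]
            if c[1] != falsified:        # keep the falsified watch at index 1
                c[0], c[1] = c[1], c[0]
            moved = False
            for k in range(2, len(c)):   # try to move the watch elsewhere
                val = assignment.get(abs(c[k]))
                if val is None or (c[k] > 0) == val:
                    c[1], c[k] = c[k], c[1]
                    self.watch[c[1]].append(i)
                    moved = True
                    break
            if not moved:
                self.watch[falsified].append(i)   # keep watching
                other = c[0]
                val = assignment.get(abs(other))
                if val is None:
                    implied.append(other)         # clause became unit
                elif (other > 0) != val:
                    return None                   # clause fully false: conflict
        return implied

wc = WatchedClauses([[-1, 2, 3]])
assignment = {}
wc.assign(1, assignment)             # watch silently moves; nothing implied yet
implied = wc.assign(-2, assignment)  # clause (-1 + 2 + 3) is now unit: 3
```

The payoff is that assigning a literal touches only the clauses watching its negation, and nothing needs to be undone on backtracking: the watches stay valid.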
CDCL was introduced by this paper: https://dl.acm.org/citation.cfm?id=244560 while 2-literal watching is from the Chaff solver: https://www.princeton.edu/~chaff/publication/DAC2001v56.pdf.
Competitive SAT solvers are little diamonds of engineering.
For instance, CryptoMiniSat first implemented XOR detection along with Gaussian elimination. They were using SAT on cryptographic problems, so it made sense to specialize for that domain. But now there's a bunch of crypto-based problems in the SAT competition and many of the other solvers also implement Gaussian elimination. Still, I'd guess that most users of SAT solvers wouldn't see a noticeable performance difference if Gaussian elimination were removed from the solvers.
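The reason this specialization pays off: an XOR constraint like x0 XOR x1 = 1 is a linear equation over GF(2), so a system of them can be solved directly by Gaussian elimination instead of being blasted into CNF. A rough sketch of the idea (not CryptoMiniSat's actual code; representation invented here, with each row as a set of variable indices plus a 0/1 right-hand side):

```python
def gauss_gf2(rows, n):
    """Gaussian elimination over GF(2).
    rows: list of (coeffs, rhs) with coeffs a set of variable indices, rhs 0/1.
    Returns a solution dict {var: 0/1} or None if inconsistent."""
    rows = [(set(c), r) for c, r in rows]
    pivots = {}                      # pivot variable -> row index
    for i in range(len(rows)):
        coeffs, rhs = rows[i]
        for v, j in pivots.items():  # eliminate known pivots from this row
            if v in coeffs:
                pc, pr = rows[j]
                coeffs ^= pc         # XOR of rows = symmetric difference
                rhs ^= pr
        rows[i] = (coeffs, rhs)
        if not coeffs:
            if rhs:
                return None          # 0 = 1: the system is inconsistent
            continue
        pivots[min(coeffs)] = i
    sol = {v: 0 for v in range(n)}   # free variables default to 0
    for v, j in sorted(pivots.items(), reverse=True):   # back-substitution
        coeffs, rhs = rows[j]
        sol[v] = rhs ^ (sum(sol[u] for u in coeffs if u != v) % 2)
    return sol

# x0 XOR x1 = 1 and x1 = 1, so x0 = 0:
sol = gauss_gf2([({0, 1}, 1), ({1}, 1)], 2)
```

Encoding the same parity constraints in CNF instead would cost exponentially many clauses per long XOR, which is exactly what the elimination avoids.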
Another good example is Alan Mishchenko's ABC, which wins the hardware model checking competition every year but uses a fairly ancient version of MiniSAT as its SAT solving engine. It's not that a faster solver wouldn't make ABC faster, but that the real smarts in model checking are in what queries you send to the solver, not in squeezing an extra 2X or 3X performance out of the SAT solver using the same old dumb queries.
You're probably right about the kitchen sink approach, however without it we probably wouldn't have MapleSat. MapleSat uses machine learning to replace the VSIDS heuristic. A bold move, but it has been replicated this year.
I checked (got curious about what ABC does): ABC uses minisat 2.2 as default and satoko (for 3 years) and glucose 3.0 (for 2 years) as options.
I don't know which configuration they used for the competitions. I think I saw "bmc -g" in some of the slides which would indicate glucose. However I also saw "abc super_solve" for which the name gives no information. (Maybe not the same years)
Indeed, smart applications of SAT solvers matter more than raw performance.
Though, every little bit helps. Are you going to tell scientists what they should research? :)
But deep neural networks are also said to be used when no good solution is known.
This is the first time I've seen that symmetry.
When should I use a SAT solver vs. an ANN?
Other than SAT and ANNs, are there other kinds of technology that automatically solve or approximate solutions?
A good guide (in my experience): SAT fits problems where the answer is hard to find but easy to check (like a Sudoku, or building a timetable with no clashes), while ANNs give answers where verifying the answer is not computationally easier than producing it (if I give you a translation of a text into Latin, verifying it is about as hard as just making a new one from scratch).
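To make the "easy to check" half concrete, here is a small illustrative verifier (my own example, not from the thread): checking a fully filled 9x9 Sudoku grid is a quick linear scan, even though finding the solution may require search.

```python
def is_valid_sudoku(grid):
    """grid: 9x9 list of ints 1..9.
    True iff every row, column and 3x3 box contains each digit exactly once."""
    def ok(cells):
        return sorted(cells) == list(range(1, 10))
    rows = all(ok(row) for row in grid)
    cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes = all(
        ok([grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)])
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows and cols and boxes

# This cyclic shifting pattern happens to produce a valid filled grid.
example = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
```

Nothing comparably cheap exists for checking a Latin translation, which is the asymmetry the comment is pointing at.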
How could a SAT/SMT help me in this task?
I know it's already used for math theorem checking.
All men are mortal.
Socrates is a man.
Therefore Socrates is mortal.
I currently have two ways to solve syllogisms:
One is a hard-coded disjunction of cases (but not flexible enough for non-canonical forms of syllogisms).
The other is a flexible one that generates a transitivity graph.
Both are 100% accurate.
Could SAT/SMT give me a third way of determining whether the conclusion is logically valid (not a non sequitur)?
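It could, via refutation: a conclusion follows from the premises iff (premises AND NOT conclusion) is unsatisfiable, which is exactly the question a SAT solver answers. A minimal sketch of the idea, with the variable encoding invented here and the solver replaced by brute force for self-containment:

```python
from itertools import product

# DIMACS-style encoding: 1 = "Socrates is a man", 2 = "Socrates is mortal".
premises = [
    [1],        # Socrates is a man
    [-1, 2],    # all men are mortal, instantiated for Socrates: man -> mortal
]
negated_conclusion = [[-2]]   # suppose Socrates is NOT mortal

def satisfiable(clauses, n_vars):
    """Brute-force SAT check; a real SAT solver replaces this at scale."""
    for bits in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in c) for c in clauses):
            return True
    return False

# No assignment satisfies premises plus negated conclusion, so the syllogism holds.
valid = not satisfiable(premises + negated_conclusion, 2)
```

Note that syllogisms are really first-order ("all men"), so in general you instantiate the quantifier per individual, or use an SMT solver or first-order prover instead; for the fixed individuals in a syllogism the propositional instantiation above suffices.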
SMT is some extra stuff on top of SAT. There are a few frameworks which extend SAT. I'd look at MiniZinc; it is a fairly easy-to-use language which can produce SAT (and SMT, and others) as output. Often encoding things directly in SAT gets awkward.
System description: Transition system like automata or a guarded command language which induces a graph.
System Property: Stated in some logic, CTL/LTL
Output: Satisfied, or Unsatisfied with a counterexample (usually a path in the graph that violates the property).
If your problem can fit into such a framework, have a look at model checkers: https://en.wikipedia.org/wiki/List_of_model_checking_tools
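The framework above can be sketched for the simplest case, a safety property ("no bad state is reachable") checked by explicit-state search. The transition system and names here are an invented toy, not any real tool's interface:

```python
from collections import deque

def check_safety(init, transitions, bad):
    """BFS over the graph induced by the transition system.
    Returns None if the property holds, else a counterexample path to a bad state."""
    parent = {s: None for s in init}
    queue = deque(init)
    while queue:
        s = queue.popleft()
        if s in bad:
            path = []
            while s is not None:      # walk parents back to an initial state
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in transitions.get(s, []):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None                       # no bad state reachable: property satisfied

# Toy automaton where the error state is reachable in two steps.
cex = check_safety(['idle'], {'idle': ['run'], 'run': ['idle', 'err']}, {'err'})
```

Real model checkers handle much richer logics (CTL/LTL) and avoid enumerating states explicitly, e.g. by encoding bounded unrollings of the transition relation as SAT queries, but the input/output shape is the same.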
SAT solvers work out if it is possible to satisfy a logical model (hence the name).
In theory a neural network could approximate a function that solves SAT. In practice it is a much better idea to use a SAT solver. Neural networks aren't that powerful at solving logic puzzles.
One difference is heuristic vs. algorithm: if a SAT solver finds a solution, it's correct; if an ANN finds a solution, it might be correct.
ANNs are suitable for pattern recognition, statistical learning and representation learning, e.g. learning continuous functions.
SAT solvers work best for discrete problems.
Internally we use SAT solvers to schedule resources on medical devices which makes the device look "smart", but this is not AI...
I said on another comment that I work on a logic checker for English, detecting automatically logical fallacies from text.
I consider such a task to be AI, and how could SAT help check whether the conclusion of a syllogism follows?
"Internally we use SAT solvers to schedule resources on medical devices" this is cool!
Feel free to ask any questions.
Also: have you looked at the poly-algorithm for 4-coloring? I'd rather write the SAT instance by hand...
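Writing that instance by hand really is short. A sketch of the standard k-coloring encoding, emitting DIMACS CNF for any off-the-shelf solver (variable numbering is my own convention here):

```python
def coloring_cnf(n_nodes, edges, k=4):
    """Encode graph k-coloring as DIMACS CNF. Variable var(n, c) means
    'node n gets color c'."""
    var = lambda n, c: n * k + c + 1   # 1-based DIMACS variable numbering
    clauses = []
    for n in range(n_nodes):
        clauses.append([var(n, c) for c in range(k)])        # at least one color
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-var(n, c1), -var(n, c2)])   # at most one color
    for a, b in edges:
        for c in range(k):
            clauses.append([-var(a, c), -var(b, c)])         # neighbors differ
    header = f"p cnf {n_nodes * k} {len(clauses)}"
    return "\n".join([header] + [" ".join(map(str, cl + [0])) for cl in clauses])

# A triangle: 3 nodes, 3 edges; with k=4 any solver will report SAT.
dimacs = coloring_cnf(3, [(0, 1), (1, 2), (0, 2)], k=4)
```

Feed the resulting text to any DIMACS-accepting solver; a model assigns exactly one true var(n, c) per node, which is the coloring.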
One of the interesting things about it is that you don't have to put formulas into conjunctive normal form.