
NeuroSAT: Learning a SAT Solver from Single-Bit Supervision - kg9000
https://arxiv.org/abs/1802.03685
======
zero_k
Wow, that's kinda interesting. I somehow cannot get rid of the feeling of
having a really nice hammer and then treating everything as a nail. There are
other uses of deep/machine learning that could help SAT solvers.

For example, one could try to better predict which learnt clauses to keep or
throw away, or when to restart because the search space is deemed
uninteresting, using prediction models built with machine learning. See (my)
blog post here:
[https://www.msoos.org/2018/01/predicting-clause-usefulness/](https://www.msoos.org/2018/01/predicting-clause-usefulness/)
(sorry, self-promotion, but relevant)
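To make the idea concrete, here is a minimal sketch of scoring learnt clauses with a (pretend) trained model. The feature set (clause length, LBD/"glue", activity) reflects statistics solvers commonly track, but the weights and threshold below are made-up stand-ins, not taken from any real solver or from the linked post:

```python
import math

# Illustrative weights, as if fit by logistic regression on past solver runs:
# short, low-LBD, high-activity clauses tend to be reused.
# (These numbers are invented for the sketch.)
WEIGHTS = [0.5, -0.1, -0.4, 2.0]  # bias, clause length, LBD, activity

def usefulness(clause_len, lbd, activity):
    """Predicted probability that a learnt clause will be used again."""
    features = [1.0, clause_len, lbd, activity]
    z = sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def keep_clause(clause_len, lbd, activity, threshold=0.5):
    """Keep the clause during database cleaning if it looks likely useful."""
    return usefulness(clause_len, lbd, activity) >= threshold
```

A real deployment would periodically sweep the clause database and drop the low-scoring clauses instead of using a fixed size/LBD cutoff.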

Let's not forget the work that could be done on auto-configuring SAT solvers,
tuning their configuration to the instance, as per the competition at:
[http://aclib.net/cssc2014/](http://aclib.net/cssc2014/)

Another piece of work in this domain is portfolio solvers, which pick the
best-fitting SAT solver from a list of candidates after guessing the best one
from the instance's profile, e.g. priss at
[http://tools.computational-logic.org/content/riss.php](http://tools.computational-logic.org/content/riss.php)
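A toy sketch of how such a portfolio pick could work: compare simple CNF features of a new instance against instances seen in training, and use whichever solver did best on the closest one. The feature choice and the "training data" below are invented for illustration, not how priss actually works:

```python
import math

# Made-up training record: (num_vars, num_clauses) of a past instance,
# paired with the solver that performed best on it.
TRAINING = [
    ((100, 400), "minisat"),
    ((100_000, 300_000), "lingeling"),
    ((5_000, 50_000), "cryptominisat"),
]

def pick_solver(num_vars, num_clauses):
    """Nearest-neighbour lookup on (vars, clauses, clause/var ratio)."""
    feats = (num_vars, num_clauses, num_clauses / num_vars)
    def dist(entry):
        (v, c), _solver = entry
        other = (v, c, c / v)
        # Log scale so huge clause counts do not dominate the distance.
        return sum((math.log1p(a) - math.log1p(b)) ** 2
                   for a, b in zip(feats, other))
    return min(TRAINING, key=dist)[1]
```

Real portfolio systems use far richer feature vectors (graph statistics, probing results) and a proper learned selector, but the shape is the same.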

I think there is some interesting low-hanging fruit in there somewhere,
combining regular SAT solvers with machine/deep learning and exploiting
domain-specific information and know-how.

~~~
keenerd
I have not been too impressed with fancier SAT solvers. I'm currently drafting
a blog post where I compare Minisat, Picosat, Cryptominisat, Lingeling and
Glucose. (Those were all that I could easily get running on Linux. Open to
more suggestions.)

The venerable Minisat was (for the most part) the fastest in my simple use
case. What I did notice though was that the "smarter" solvers were more
consistent. A poorly written CNF might take 50x longer than it should (when
compared to N-1 and N+1 instances). Minisat (but more so Picosat) would
occasionally hit an edge case or something and slow way down. Fancier solvers
produced a nice clean line on the graphs, without anomalous 50x spikes.
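The anomaly check described above can be sketched as a few lines: given runtimes for a family of neighbouring instances (..., N-1, N, N+1, ...), flag any run far above the median as one of those spikes. The 10x threshold is an arbitrary choice for illustration:

```python
import statistics

def spike_factors(times):
    """Each runtime divided by the median runtime of the series."""
    med = statistics.median(times)
    return [t / med for t in times]

def has_spike(times, factor=10.0):
    """True if any instance took anomalously long relative to its neighbours."""
    return any(f > factor for f in spike_factors(times))
```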

~~~
zero_k
For smaller problems, the fancier solvers are actually worse than
MiniSat/Picosat. That's because of the overhead of creating and maintaining
data structures and the fancy preprocessing. The larger, more complex solvers
are meant to have a better chance at solving more complex problems, and they
use hybrid strategies to make sure they don't accidentally go down some
rabbit hole. So I would expect them to have better consistency.

Note that until 2017, winning the competition meant you solved the most
instances, each with a ~5000s timeout. So if you solved every single one
within 4999s, you won, you were the best... This obviously encouraged
incredibly long startup times, which are gained back over the full 5000s
timeout. It clearly does not mimic normal use-cases.

~~~
keenerd
Thanks. That makes sense. Most of my stuff has (at most) a few million clauses
and usually takes between 1 and 600 seconds.

------
wgjordan
For a second I was wondering whether test prep for American college-bound high
schoolers had entered an amazing AI-fueled arms race. Then I realized this was
actually about the Boolean satisfiability problem.

