void kissat_add (kissat * solver, int lit);
kissat_add(s, N);   /* any N != 0 adds the literal N to the clause being built */
kissat_add(s, -1); kissat_add(s, 2); kissat_add(s, 0);   /* the clause (NOT x1 OR x2); 0 terminates it */
DIMACS is a pretty terrible format, but at least you don't have to build the clauses up yourself using kissat_add.
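For comparison, here is a minimal sketch of that same clause in DIMACS CNF, with a pure-Python brute-force check standing in for a real solver (toy only, of course):

```python
from itertools import product

# Toy DIMACS CNF for the single clause (-1 2), i.e. (NOT x1 OR x2),
# matching the kissat_add(-1), kissat_add(2), kissat_add(0) calls above.
dimacs = """\
p cnf 2 1
-1 2 0
"""

def parse_dimacs(text):
    clauses = []
    for line in text.splitlines():
        if line.startswith(('c', 'p')) or not line.strip():
            continue
        lits = [int(t) for t in line.split()]
        assert lits[-1] == 0            # each clause line ends with 0
        clauses.append(lits[:-1])
    return clauses

def brute_force_sat(clauses, nvars):
    # Try all 2^nvars assignments; assignment[v] is the value of variable v.
    for bits in product([False, True], repeat=nvars):
        assignment = dict(enumerate(bits, start=1))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assignment
    return None

model = brute_force_sat(parse_dimacs(dimacs), 2)
print(model)   # -> {1: False, 2: False}, which satisfies (NOT x1 OR x2)
```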
DNF-SAT can be trivially solved in O(n), because each clause directly encodes a solution (except for contradictory clauses, of course). But there's no known way to "convert" arbitrary formulas to DNF in polynomial time (the conversion can blow up exponentially), so such a solver is not useful.
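A sketch of why DNF is trivial: scan the terms, and any term that is not self-contradictory is itself a satisfying assignment.

```python
def dnf_sat(terms):
    """terms: list of DNF terms, each a list of nonzero ints (DIMACS-style
    literals, ANDed within a term, ORed between terms)."""
    for term in terms:
        if not any(-lit in term for lit in term):   # skip terms like (x AND NOT x)
            return {abs(l): l > 0 for l in term}    # read the assignment straight off
    return None                                     # every term was contradictory

# (x1 AND NOT x1) OR (NOT x2 AND x3): the second term gives x2=False, x3=True.
print(dnf_sat([[1, -1], [-2, 3]]))   # -> {2: False, 3: True}
```

(The membership test here is quadratic per term; a set of literals would make it genuinely linear, but the point stands.)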
"the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable."
I originally thought it was an AI to solve college placement tests, which would also have been cool but in a different way.
Let nonce be 32 boolean inputs, representing numbers from 0..2^32-1.
Let other_bits and diff be a collection of constant inputs, used in the following expression.
Let f(other_bits, nonce, diff) be the boolean logic circuit used in Bitcoin mining, for a recent candidate block.
Roughly speaking, f combines nonce with other_bits to make a bit string, performs SHA-256 on the bit string, then SHA-256 on that, and then checks if the leading diff number of bits are zero.
Folding in the constants, f reduces to a boolean logic circuit with 32 inputs and 1 output, g(nonce) = result.
The SAT problem: Does there exist a value of nonce for which g(nonce) = 1?
If you can solve this quickly, you just sped up Bitcoin mining by a factor of 2^32, which is a lot. You can take this further as well.
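The shape of g can be sketched in a few lines. This is NOT the real Bitcoin header layout, just the same kind of computation (double SHA-256, then a leading-zero-bits test), with the nonce search done by the brute force a SAT solver would be trying to shortcut:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    n = int.from_bytes(digest, 'big')
    return len(digest) * 8 - n.bit_length()

def mine(other_bits: bytes, diff: int, max_nonce: int):
    for nonce in range(max_nonce):
        digest = hashlib.sha256(hashlib.sha256(
            other_bits + nonce.to_bytes(4, 'little')).digest()).digest()
        if leading_zero_bits(digest) >= diff:   # g(nonce) == 1
            return nonce
    return None                                 # no satisfying nonce at this diff

# diff=16 means roughly 2^16 expected tries; real difficulty is far higher.
nonce = mine(b'toy block header', diff=16, max_nonce=1 << 32)
print(nonce)
```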
I don't know of any SAT solver heuristic which really helps with this.
It's all about the difficulty vs the size of the instance.
That puts it in the hardest class of instances for its size (as far as current knowledge goes), since the worst any instance can require is brute force.
The problem is that, since the instance is so large, we don't know what happens: no one has ever run one to completion.
So no, it's not in the hardest class of instances for its size. Its difficulty is unknown.
How current is your information?
I think I've seen random instances in some of the more recent SAT competitions with only a few thousand variables that have not been solved. That said, in theory there should be 100-variable instances that are too hard to solve, but I haven't seen any really hard instances that small.
One paper was about creating random SAT instances where the probability of UNSAT is 50/50, and as it turned out, none of those instances was hard.
You can always craft hard instances to defeat a specific heuristic. But crafting one that is hard for all of them is the problem.
or at least the developers have already noticed this coincidence for themselves.
Maybe this https://www.coursera.org/learn/automated-reasoning-sat would be a good start?
Ideally I would like to be able to take predicted outputs from ML plus business requirements and solve allocation problems. More pie in the sky, I would like to know enough to get inspiration for how to link a neural network with a SAT solver, such as replacing or augmenting beam search in sequence decoding tasks.
Because dropdown >=2.0.0 depends on icons >=2.0.0 and root depends
on icons <2.0.0, dropdown >=2.0.0 is forbidden.
And because menu >=1.1.0 depends on dropdown >=2.0.0, menu >=1.1.0 is forbidden.
And because menu <1.1.0 depends on dropdown >=1.0.0 <2.0.0 which
depends on intl <4.0.0, every version of menu requires intl <4.0.0.
So, because root depends on both menu >=1.0.0 and intl >=5.0.0,
version solving failed.
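As a toy illustration of how version selection becomes a satisfiability question, here is a hypothetical registry (the package names loosely echo the error above, but this is brute force, not PubGrub's actual algorithm): pick exactly one version per package such that root's requirements and every dependency constraint hold.

```python
from itertools import product

# Hypothetical toy registry: package -> available versions, plus dependency
# constraints of the form (package, version) -> [(dep, allowed versions)].
versions = {'menu': [1, 2], 'dropdown': [1, 2], 'icons': [1, 2]}
deps = {
    ('menu', 2): [('dropdown', {2})],
    ('menu', 1): [('dropdown', {1})],
    ('dropdown', 2): [('icons', {2})],
    ('dropdown', 1): [('icons', {1})],
}
root = [('menu', {1, 2}), ('icons', {1})]   # root takes any menu, but only icons 1

def solve():
    pkgs = list(versions)
    for choice in product(*(versions[p] for p in pkgs)):
        pick = dict(zip(pkgs, choice))       # "exactly one version per package"
        ok = all(pick[p] in allowed for p, allowed in root)
        for (p, v), reqs in deps.items():
            if pick[p] == v:                 # picked version activates its deps
                ok = ok and all(pick[d] in allowed for d, allowed in reqs)
        if ok:
            return pick
    return None                              # "version solving failed"

print(solve())   # -> {'menu': 1, 'dropdown': 1, 'icons': 1}
```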
IIRC, solvers regularly handle traveling salesman instances with 30,000+ nodes in just seconds (or something along that magnitude; I forget exactly). Yeah, it's NP-complete, but that doesn't mean it's worthless.
It will take some experimentation to figure out if your problem is within the "feasible" category of SAT solving. I don't think there's any method aside from "try it, and if it works, be happy. If not, be sad".
If we knew which problems were easily solved with SAT solvers, we probably would be well on our way to proving P == NP. So just guess and check.
Honestly, people may forget that brute-force solvers can be incredibly useful. When I was playing with a hash algorithm I was inventing (just for fun, on the side) I searched the entire 32-bit space for the integer that best mixed up the bits from input to output. The brute-force search took only seconds to try every 32-bit number.
We can brute-force a surprising number of "intractable" problems from our college years. Yeah, it's inefficient, but who cares if it's solved in seconds or minutes?
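A shrunken sketch of that kind of search: sweep every odd 16-bit multiplier (plain Python is far too slow for the full 32-bit space) and score how well (x * m) mixes bits. The sample inputs and scoring function here are made up for illustration.

```python
MASK = 0xFFFF
SAMPLES = (0x0000, 0x1234, 0xBEEF, 0x9E37)   # arbitrary test inputs

def avalanche_score(m):
    # For each sample input and each flipped input bit, count how many
    # output bits of (x * m) & MASK change; ideal mixing flips ~8 of 16.
    err = 0
    for x in SAMPLES:
        h = (x * m) & MASK
        for bit in range(16):
            flipped = ((x ^ (1 << bit)) * m) & MASK
            changed = bin(h ^ flipped).count('1')
            err += abs(changed - 8)          # lower total = better mixing
    return err

best = min(range(1, 1 << 16, 2), key=avalanche_score)   # sweep odd multipliers
print(hex(best), avalanche_score(best))
```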
There's also that a^3 + b^3 == c^3 problem that was solved in a few weeks by brute forcing every number (or something like that).
[Actually, Fermat claimed that a^n+b^n=c^n is impossible for n>2, but the proof for that wasn't discovered until the 1990s. It's Andrew Wiles's famous proof of Fermat's Last Theorem.]
I was wrong and confused things for a different problem.
a^3 + b^3 + c^3 == X, where X is some "hard" number, like 42.
They discovered the a, b, and c values for X == 42 recently, by simplifying the problem over 2 variables and then brute forcing for weeks.
Anyway, I was trying to reference the sum of 3-cubes issue.
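The same search at toy scale, where small solutions happen to exist (the real searches for hard targets like 33 and 42 needed enormous values and serious compute):

```python
from itertools import product

def three_cubes(x, bound=20):
    # Naive brute force over a^3 + b^3 + c^3 == x in a small window.
    for a, b, c in product(range(-bound, bound + 1), repeat=3):
        if a**3 + b**3 + c**3 == x:
            return a, b, c
    return None

# Small solutions exist for easy targets, e.g. 3^3 + 1^3 + 1^3 = 29.
print(three_cubes(29))
```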
I proved to a vendor they had screwed up CSPRNG seeding by doing a scatterplot of the first result from a large number of seeds. It ran so fast that I bumped the loop counter up to max_int, and it still ran in a minute or two on a laptop from 10 years ago.
When you can fit everything in cache, a CPU can sure run a lot of arithmetic in no time at all.
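A toy reconstruction of that experiment (the "buggy" generator here is hypothetical, not the vendor's): a badly seeded generator leaks the seed into its first output, which even a simple correlation check catches without drawing the scatterplot.

```python
import hashlib

# Hypothetical buggy seeding: the "first draw" passes seed bits straight through.
def buggy_first(seed):
    return (seed >> 8) & 0xFFFF

# Properly mixed seeding for comparison (SHA-256 as a stand-in CSPRNG).
def good_first(seed):
    return int.from_bytes(hashlib.sha256(seed.to_bytes(4, 'big')).digest()[:2], 'big')

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

seeds = list(range(0, 1 << 24, 1024))
bad = [buggy_first(s) for s in seeds]
good = [good_first(s) for s in seeds]
print(round(pearson(seeds, bad), 3))    # ~1.0: seed vs first draw is a straight line
print(round(pearson(seeds, good), 3))   # ~0.0: no visible structure
```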
4096 SIMD cores (163840 "CUDA threads", I guess) on a Vega64 at 1.5GHz, with single-clock add / subtract / multiply / AND / OR / XOR instructions. Brute force is surprisingly parallel, because there's no "shared information" between threads. So it's super easy to program and parallelize.
I started searching 40+ bit numbers to see how far up I could go while staying under 1-minute runtimes.
If you can keep your program state in under 100-bytes, so that your entire program fits inside of the Register-space of GPUs... things go fast. Extremely fast.
N-Queens was popular before Sudoku became a thing, and you might find some basic tutorials on N-Queens -> SAT solvers. Also an easy problem that's fun to solve on your own.
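To make the N-Queens -> SAT idea concrete, here's a sketch for N=4: one boolean variable per square, clauses as DIMACS-style literal lists ("at least one queen per row" plus pairwise "no two queens share a line"), and a brute-force check standing in for a real solver.

```python
from itertools import combinations, product

N = 4
def var(r, c):
    return r * N + c + 1                  # one boolean variable per board square

clauses = [[var(r, c) for c in range(N)] for r in range(N)]   # >=1 queen per row

def lines():
    for r in range(N):                                   # rows
        yield [var(r, c) for c in range(N)]
    for c in range(N):                                   # columns
        yield [var(r, c) for r in range(N)]
    for d in range(-(N - 1), N):                         # one diagonal direction
        yield [var(r, c) for r in range(N) for c in range(N) if r - c == d]
    for d in range(2 * N - 1):                           # the other diagonal direction
        yield [var(r, c) for r in range(N) for c in range(N) if r + c == d]

for cells in lines():                                    # at most one queen per line
    clauses += [[-a, -b] for a, b in combinations(cells, 2)]

def brute_force(clauses, nvars):
    for bits in product([False, True], repeat=nvars):
        val = dict(enumerate(bits, 1))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return [v for v, on in val.items() if on]
    return None

print(brute_force(clauses, N * N))   # the true variables: a valid 4-queens placement
```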
TSP is... very complicated to reduce to 3SAT. Try it after you understand the other examples. I did a search online, and found a good set of slides here: https://www.cs.cmu.edu/~ckingsf/bioinfo-lectures/sat.pdf
The tl;dr: TSP -> Hamiltonian Cycle -> 3SAT -> Solver.
Because all NP-complete problems convert into each other, it's possible to convert other NP-complete problems (e.g. Sudoku) into a TSP problem, then use TSP solvers to answer the problem.
3SAT just seems to be the one NP-complete problem that everyone's reducing to. There's an implicit assumption that it's easiest to convert to 3SAT.
I don't know if there's an established formalised terminology for this type of conversion, and I haven't found one from a quick web search. But look up something like "translate", "convert", "reduce", "encode", or the like in conjunction with "NP Complete" and you shall find what you are looking for.
Technically they're polynomial-time many-one reductions.
But in practice I think people usually just call them "reductions".
Inspired by that, I tried to use Z3 to solve the "best arrangement of letters on a linear typewriter" problem but never got a good answer because the solution space is far too large: 26! layouts, which is about 798 trillion years at 16,000 iterations a second.
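For scale, the same idea shrunk until exhaustive search is feasible: 8 letters (8! = 40,320 layouts) and a made-up corpus, minimizing total "finger travel" between adjacent characters on a linear layout.

```python
from itertools import permutations

letters = 'etaoinsh'                      # 8 common letters; the corpus is made up
corpus = 'the santas heist shone east on the onion then the titans ate'

def travel(layout):
    # Sum of distances between consecutive characters' key positions.
    pos = {ch: i for i, ch in enumerate(layout)}
    return sum(abs(pos[a] - pos[b])
               for a, b in zip(corpus, corpus[1:])
               if a in pos and b in pos)  # skip the spaces

best = min(permutations(letters), key=travel)   # exhaustive: all 40,320 layouts
print(''.join(best), travel(best))
```

At 26 letters this exact sweep is hopeless, which is where a clever encoding (or giving up on optimality) comes in.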
* SAT (more specifically, SMT) has been applied with some success to automated vulnerability discovery via symbolic execution. Underconstrained symex has cropped up more recently as a strategy for minimizing state space explosions.
* Others have mentioned register allocation, but SAT/SMT is broadly applicable to compilation problems: instruction selection and superoptimization come to mind.
It's not a pure SAT problem, but Z3 works (in a way I don't fully understand) by reducing constraint problems involving integers and even more complicated domains into SAT problems.
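My loose understanding is that this is "bit-blasting": fixed-width integers become vectors of booleans and arithmetic becomes boolean circuits, leaving a pure SAT question. A toy sketch under that assumption (4-bit, wraparound like machine ints; the comparison is left as plain Python rather than a proper comparator circuit):

```python
from itertools import product

W = 4   # bit width

def ripple_add(xs, ys):
    # Little-endian boolean vectors; a ripple-carry adder built from
    # AND/OR/XOR only, the way bit-blasting would encode "+".
    out, carry = [], False
    for a, b in zip(xs, ys):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out                            # width-W wraparound

def to_bits(n):
    return [bool((n >> i) & 1) for i in range(W)]

def to_int(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)

def solve():
    # SAT question: do boolean inputs exist with x + y == 10 and x > y?
    for bits in product([False, True], repeat=2 * W):
        x, y = bits[:W], bits[W:]
        if ripple_add(x, y) == to_bits(10) and to_int(x) > to_int(y):
            return to_int(x), to_int(y)
    return None

print(solve())
```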
I would like to write a language like that, but I don't know enough about computer logic / hard AI etc. to know where to go in terms of the internals of the compiler. (Bolting complicated analysis onto an existing compiler is really, really ugly if there is a lot of global state, e.g. some parts of the D compiler, although that's still WIP, to be fair to the programmers.)
I'd say that when dependent + refinement types get more mature and get past the basic prototype stage in research languages/environments, these ideas could easily transfer to mainstream statically typed languages and become a new module of future compilers.
One example that I like is that some Linux package managers have switched to using a SAT solver to satisfy package version constraints.
This includes anything from complicated optimization problems, to theorem proving, to inverting cryptographic hash functions, to any other problem in NP that hasn't been thought up yet.
Whether this would have any practical use would depend on the polynomial exponents and constant factors, of course. But even in a theoretical sense it would be very, very surprising to a lot of people if it were true.
Also P=NP would be a very big deal.
The problem is that current solvers are heuristic and may fail or take too long (i.e. degrade to exponential time) on any particular instance.
If a polynomial-time algorithm is ever discovered for all 3-SAT instances (i.e. P = NP), then of course its actual complexity would be the determining factor.