One of my projects involves implementing part of the JVM's execution in Z3 constraints, and we have been able to prove things about program equivalence this way, which is exciting.
Once the problems get big enough we will need to find clever ways of shaping Z3's search path, maybe by writing a theory, but for now it works for anything we have thrown at it.
Soft constraints are constraints that incur some cost/penalty when violated.
Then the problem objective changes from "find an allocation that is a solution" (SAT) to "find the minimal cost solution (if one exists)" (Weighted SAT / Constraint Optimization).
- SAT problem (a,b are binary):
Hard constraint: a = b
Solutions: a=0,b=0 and a=1,b=1
- Constraint optimization:
Soft constraint: cost(a,b) = a+b
Optimal solution: a=0,b=0 with cost 0. The allocation a=1,b=1 is a solution because it fulfills the hard constraint, but it is not optimal because its cost (2) is larger than the cost of another solution (0).
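To make the example concrete, here is a brute-force weighted-SAT sketch in Python (an illustration of the problem statement, not how real MaxSAT solvers search; `solve_weighted` is just an illustrative name):

```python
from itertools import product

def solve_weighted(n_vars, hard, cost):
    """Enumerate all assignments, keep those satisfying every hard
    constraint, and return one with minimal soft-constraint cost."""
    best = None
    for bits in product([0, 1], repeat=n_vars):
        if all(c(*bits) for c in hard):
            if best is None or cost(*bits) < cost(*best):
                best = bits
    return best

# Hard constraint: a = b; soft cost: a + b (the example above)
hard = [lambda a, b: a == b]
cost = lambda a, b: a + b
print(solve_weighted(2, hard, cost))  # -> (0, 0), the cost-0 solution
```

The allocation (1, 1) is also examined, but loses to (0, 0) on cost, exactly as described above.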
The official docs are a rare learning resource, but require a bit more time to digest.
PySAT is a Python package that provides efficient SAT solver APIs and encodings, including a partial MaxSAT solver.
RC2 partial maxsat algorithm: https://pysathq.github.io/docs/html/api/examples/rc2.html
In your case, X would be the number of "nice to have" constraints that are satisfied. It would be simpler to formulate these conditions in some SMT language, rather than in SAT directly.
What I like about Reduced Ordered Binary Decision Diagrams (ROBDDs) is that they can also tell you how many solutions there are. So you first encode the mandatory requirements, and if there are plenty of solutions, you can try adding requirements for the next-most-desired properties, or give up on a certain property, keeping an eye on the number of solutions as you go.
The biggest problem, though, was that when I described how we were using SAT to solve the graph, I generally got extremely puzzled looks. Most other engineers have no exposure to these tools whatsoever.
Is there a canonical reference you'd recommend to understand this problem or a paper I could look at that was similar to this implementation?
I am wondering if MIP (mixed-integer programming) and SAT relate to each other somehow?
The free SAT solvers are very good, and much better than commercial IP solvers at solving problems that are a natural fit for SAT. (Obviously, you can encode any IP as a SAT formula and vice versa, and the IP solvers are better at solving problems where you actually have meaningful arithmetic.)
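The SAT-to-IP direction rests on the textbook encoding of a clause as a 0/1 linear inequality, which a quick enumeration confirms (illustrative function names, not any library's API):

```python
from itertools import product

# The clause (x OR NOT y OR z) becomes the 0/1 linear inequality
# x + (1 - y) + z >= 1: the textbook SAT-to-IP encoding, checked
# here by enumerating all assignments.
def clause_sat(x, y, z):
    return bool(x or (not y) or z)

def clause_ip(x, y, z):
    return x + (1 - y) + z >= 1

for x, y, z in product([0, 1], repeat=3):
    assert clause_sat(x, y, z) == clause_ip(x, y, z)
print("encodings agree on all 8 assignments")
```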
Gurobi is the fastest solver, and it's free for college students. SCIP, MIPCL, and CBC are the fastest free solvers, in that order.
To learn how to formulate a problem as a linear program, you can read through the examples at https://people.eecs.berkeley.edu/~vazirani/algorithms/chap7....
Edit: "Pyomo supports a wide range of problem types, including:
Mixed-integer linear programming
Mixed-integer quadratic programming
Mixed-integer nonlinear programming
Generalized disjunctive programming
Differential algebraic equations
Mathematical programs with equilibrium constraints"
Love it when someone links to one thing that has a pile of links to other useful techniques to learn about. :)
This is probably the largest practical application domain, which also drives most of the academic development.
They are _a_ component. It's quite possible to perform formal verification without them, for instance the CompCert verified C compiler, or the DeepSpec end-to-end verification project: https://deepspec.org/main. SAT solvers generally have poor support for constructive logic, the kind of logic used in theorem provers based on dependent types like Coq. Systems based on classical logic like HOL integrate better with SAT solvers, but the lack of dependent types makes a lot of things much more cumbersome to prove, especially if one wants proofs that can be extracted to programs.
Arguably, this is no longer the case. FStar has dependent types, monadic effects, refinement types and a weakest-precondition calculus (i.e. a richer type system than Coq) and uses the Z3 SMT solver to discharge many proof obligations. I've been using FStar recently and it works surprisingly well.
 - https://www.fstar-lang.org/
> Z3 SMT solver
You are comparing different things.
Personally, I started with CBMC: it is quite a staple in the field and its documentation is very clear. Basically, it translates a C program (decorated with assumptions and properties to verify) into a SAT formula: if the formula is satisfiable, a property can be violated (and you even get a program trace that shows exactly how that happens).
Despite 15 generations attempting to codify tonal harmony, there are relatively few hard rules, followed by an unending literature of exceptions.
The Sudoku sample there is a few lines, and it used to win many CSP contests. http://www.hakank.org/picat/sudoku_pi.pi
CBMC is especially nice for auto-generating proper test cases.
With prolog it's usually easiest to formulate a problem. Efficient solutions are a bit of a black art though: https://www.metalevel.at/prolog/horror
I specify my problems in a higher-level DSL and a library then "compiles" it to SAT, where many solvers are available.
As an example, the often-talked-about dependency management problem is also NP-complete and thus translates into SAT, and SAT instances can be translated back into dependency-management problems. The problem our group worked on, generating key and lock cuttings based on a user-provided lock chart and manufacturer-specified geometry, is also NP-complete.
I haven’t thought too much about how you’d solve it, but you’ve got a few things to work with: the different classes, students, teachers and spots in the timetable. That’s a real pig to brute force, but exactly the sort of work a solver is used for.
Worst case, in extremely big projects, you just add some swap and let it run for a while; the time required is still on the order of tens of minutes.
Compare that to hours-long C++ build times.
To give an accessible example, consider the problem of finding a factor of an integer. I know this isn't NP-Hard, but it's something most people can easily think about.
If you choose a random large integer, odds are it has a small factor. 50% of the time it's divisible by 2, and most numbers (in a very real sense) are divisible by 2, 3, 5, or 7.
So most of the time a randomly chosen number is easy to find a factor of. In a similar way, most real-world instances of NP-Hard problems are actually easy to find a solution for. The proportion of difficult instances shrinks as the instance size increases. Specifically, if you have an instance of SAT generated from a real-world situation, the likelihood is that it's not actually that hard, so even if it's large, an off-the-shelf SAT solver might be able to dash off a solution.
So when confronted by an NP-Hard problem don't just instantly give up. Choose an algorithm that has a go, and there's a good chance you'll get a solution.
 There are formal, unproven conjectures about this.
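The factoring illustration can be checked numerically with a small trial-division experiment (the bound, range, and sample size below are arbitrary choices for illustration):

```python
import random

def small_factor(n, bound=1000):
    """Trial division: return the first factor <= bound, or None."""
    for p in range(2, bound + 1):
        if n % p == 0:
            return p
    return None

# How often does a random 18-digit number have a factor below 1000?
random.seed(0)  # fixed seed just to make the run repeatable
hits = sum(small_factor(random.randrange(10**17, 10**18)) is not None
           for _ in range(1000))
print(hits, "of 1000")  # the vast majority have a tiny factor
```

Number theory predicts roughly 92% of integers have a prime factor below 1000, and the experiment lands in that neighborhood: most random instances are easy.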
That still does not answer my question. I see and understand the utility of fixed parameter tractability.
This naturally made me wonder about the proofs that a certain problem type P' is NP-hard (by being able to transform problems from P' to a known NP-hard problem P and back): can these translations be used to deduce the relevant fixed parameter for NP-hard problems if one of the 2 problem types (P' or P) has a known fixed parameter tractable algorithm...
I.e. you did not answer my question at all; you just reiterated basic complexity theory.
Suppose we have two problems, A and B. We have polynomial time transforms between them, so they are considered "equally difficult".
In reading and re-reading the thread it appears that you (pl) are talking about having a "fixed parameter tractable algorithm" for one of the problems, say problem A, and you (sng) are asking if the transform then provides an equivalent fixed parameter tractable algorithm for B.
If that's the case then I don't know enough to answer, but my feeling is "probably not in general".
If that's not what you're asking then you might like to be more specific.
In either case I suspect I can't add anything useful.
Not sure what you mean with pl and sng in "you (pl)" and "you (sng)"
Most fantastic would of course be a general method so that given as inputs the transforms between problem types, and the known fixed parameter, and fixed parameter tractable algorithm, gives as output the new fixed parameter, and a new fixed parameter tractable algorithm for the new fixed parameter.
I assume we do not have such a general method, so similarly interesting would be data points for constructing such a method: examples of deducing the fixed parameter for a problem domain, given the fixed parameter (and tractable algorithm) of a different problem type.
With enough examples perhaps a general method can be guessed.
OK, cool, thanks!
> Not sure what you mean with pl and sng in "you (pl)" and "you (sng)"
"Plural" and "Singular" - in English I can't make it clear that in some cases I'm talking about you and previous speakers, and in other cases I'm talking only about you. Other languages make it easier to make the distinction, in English it must be inferred, or made explicit.
> Most fantastic would of course be a general method so that given as inputs the transforms between problem types, and the known fixed parameter, and fixed parameter tractable algorithm, gives as output the new fixed parameter, and a new fixed parameter tractable algorithm for the new fixed parameter.
I feel like that's a lot to ask, but we are beyond my expertise.
> I assume we do not have such a general method ... With enough examples perhaps a general method can be guessed.
I have no intuition about this at all, so I'll stand aside and let others with greater knowledge comment, if they're around.
Also, some NP-complete problems are not fixed parameter tractable as far as we know, but some are.
Also, instead of looking at fixed parameter tractability, it often makes more sense to look at approximation algorithms (if your goal is to optimize something, rather than getting a strict Yes/No answer).
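As a concrete instance of an approximation algorithm, the classic greedy maximal-matching approach gives a 2-approximation for minimum vertex cover; a minimal sketch:

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of every edge not yet covered (a greedy
    maximal matching). Classic 2-approximation for minimum vertex
    cover: the result is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

path = [(0, 1), (1, 2), (2, 3)]        # optimal cover is {1, 2}
cover = vertex_cover_2approx(path)
assert all(u in cover or v in cover for u, v in path)
print(sorted(cover))  # -> [0, 1, 2, 3], within a factor 2 of optimal
```

The guarantee holds because the matched edges are vertex-disjoint, so any cover must pick at least one endpoint per matched edge; we picked two.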
The actual difficulty of an NP-complete problem depends on what the problem is, and on what particular instance you are trying to solve. In a "constraint solving" setting, underconstrained instances are usually trivially solvable with a YES answer, overconstrained instances have an easy-to-find NO answer, and the instances with just the right amount of constraints are the ones that are truly hard.
What a good SAT solver does is solve some instances that a naive backtracking algorithm probably would not be able to handle. Here are some examples of tricks that SAT solvers perform:
* Non-chronological backtracking: when a search branch is deemed non-viable, instead of undoing just the last choice, the algorithm determines which choice was to "blame" for the lack of solutions, and undoes back to that point instead.
* Learning new constraints: when a search branch is deemed invalid, a new constraint is calculated based on the choices made on that branch. This helps avoid the same sequence of choices being repeated later in the search.
* Aggressive randomization and restarts: the distribution of the time taken to solve a random instance of an NP-complete problem is fat-tailed. In these cases, an aggressive restart policy might speed things up even without other algorithmic improvements.
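For context, the tricks above are layered on the basic DPLL skeleton (simplify, unit-propagate, branch). A toy sketch of that skeleton, with none of the bulleted improvements, might look like this:

```python
import random

def dpll(clauses, assignment):
    """Minimal DPLL: simplify, unit-propagate, branch. Literals are
    nonzero ints; -3 means "x3 is false" (DIMACS-style)."""
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # conflict: clause falsified
        simplified.append(rest)
    if not simplified:
        return assignment                 # every clause satisfied
    for clause in simplified:             # unit propagation
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})
    # Branch on a random literal: a toy stand-in for the heuristics,
    # clause learning and restart policies of real CDCL solvers.
    v = abs(random.choice(random.choice(simplified)))
    for value in (True, False):
        result = dpll(simplified, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]], {})
print(model is not None)  # -> True: the formula is satisfiable
```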
Still, it is possible to create efficient solvers for some subsets of the NP-complete problem space. They will work fine on problems exhibiting the proper "shape" / internal structure. What that structure is, though, is often hard to tell...
So in practice, using SAT solvers (or SMT, or constraint, or MIP ones...) has some empirical side to it. It can work extremely well in some cases: it's possible to solve problems with tens of millions of Boolean variables, for example. It can also fail (time out). That depends a lot on the problem type and the solver heuristics. It's common in this area to try different solvers, and be happy if one gets a result in reasonable time. So there can be magic, but also some frustration.
As a practical example, SMT solvers are used for formal software proofs. It's typical for those tools (like Frama-C) to try several solvers. If any finds a proof, you're good and the proof is free (no human work needed). If none works, it's up to the engineer to reshape the problem to "help" the solver, or bite the bullet and do the proof (semi-)manually with Coq or a similar tool.
All this being said, modern solvers are really powerful. If it's a human scale problem solvable "by hand" with lots of work, a solver should deal with it very quickly.
What would be an O(2^256) problem? A Boolean formula with 256 binary variables? In many cases those can be broken down into independent smaller formulas which are easier to solve, in which case a solver might be quite fast on it, yes. But something that provably does not reduce to a smaller complexity would still take forever, I guess. Corrections welcome.
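The "break into independent smaller formulas" step can be sketched as a connected-components pass over shared variables (a simple preprocessing idea, illustrative only; real solvers do this and much more):

```python
def components(clauses):
    """Split a CNF formula into independent subformulas: clauses end
    up together iff they (transitively) share a variable. Union-find
    over variables; literals are nonzero ints as in DIMACS."""
    parent = {}
    def find(v):
        while parent.setdefault(v, v) != v:
            v = parent[v]
        return v
    for clause in clauses:
        root = find(abs(clause[0]))
        for l in clause[1:]:
            parent[find(abs(l))] = root
    groups = {}
    for clause in clauses:
        groups.setdefault(find(abs(clause[0])), []).append(clause)
    return list(groups.values())

# Two unrelated formulas posed as one big instance split cleanly:
clauses = [[1, 2], [-2, 3], [4, 5], [-5, -4]]
print([len(g) for g in components(clauses)])  # -> [2, 2]
```

Each group can then be handed to the solver separately, turning one 2^(m+n) search into a 2^m search plus a 2^n search.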
Also, there are other kinds of interesting solvers. For constraint problems, one can check out MiniZinc. A bit like SMT-LIB, it's a language for describing constraint problems that is supported by many solvers. It's commonly used with Gecode. For many problems, using MiniZinc will be much easier than trying to convert the problem into SAT (or even SMT) format.
Happy hacking, it's a fascinating domain!
We are always glad to get more feedback!
Re the licensing: pySMT is open-source and distributed under the Apache 2.0 license.