Constraint programming (and constraint propagation) is highly underrated.
For those not familiar with it yet, this is what's used under the hood by things like systemd: you give it a bunch of constraints (e.g. service A before service B, service B before service D but after service C, etc.) and you get back a solution (or none, but that's another story: when there is no solution, you can try relaxing some constraints).
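The ordering part of that example boils down to a topological sort, which you can sketch with Python's stdlib graphlib (toy service names, not systemd's actual machinery):

```python
# Sketch of Before=/After= style ordering constraints via topological sort.
from graphlib import TopologicalSorter, CycleError

# each service maps to the set of services that must start before it:
# A before B, C before B, B before D
deps = {"B": {"A", "C"}, "D": {"B"}}

order = list(TopologicalSorter(deps).static_order())
print(order)  # one valid start order satisfying all constraints

# contradictory constraints have no solution:
try:
    list(TopologicalSorter({"A": {"B"}, "B": {"A"}}).static_order())
except CycleError:
    print("no solution: relax a constraint")
```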
I took a similar approach with a somewhat complicated scheduling requirement across dates and times. I originally brute-forced the problem, but that solution was inelegant and not ideal. When I used constraint programming to solve the same problem, it took me a little longer to devise the solution, but the result was superior. The constraint programming solution is also a little more memory intensive, but that's the price of a proper solution.
Many constraint toolkits in other languages wrap https://www.gecode.org/ , a rather well-established and performant library. If I'm not mistaken, some of the same folks behind Mozart/Oz are key contributors to it.
For a free one, SWI-Prolog has a very usable and fairly performant constraint library too (CLPFD and CLPQ). It also has the more general "constraint handling rules" (CHR) as an embedded language. If you need to develop your own constraint search strategies, Mozart/Oz's "space" concept is elegant.
Yes, Constraint Logic Programming evolved from Prolog, as the name implies, because Prolog's nondeterminism and backtracking are as natural as it gets for this kind of problem. While the article mentions the foundational Jaffar et al. paper, numerous constraint solvers have been embedded into Prolog ever since, known by names such as CLP(X), where X is a parameter for the domain (reals, finite domains, or Booleans as in SAT solvers), though this nomenclature suggests a portability and common approach that isn't there. The Quantum Prolog site [1] has an introductory article on robotic planning and scheduling based purely on Prolog, hence not requiring a magic CLP solver library, while going beyond constraint solvers in its ability to generate load plans for container-ship logistics with an unbounded number of steps. The demo even works in the browser.
I posted a programming puzzle years back, and someone named Hakan Kjellerstrand pointed out that the problem lent itself to constraint programming and posted a solution in MiniZinc.
I've used Google's OR-Tools and Microsoft's Z3. If you need to input a lot of variables, I found Z3 to be better since it takes SMT-LIB formatted text as one big chunk, rather than numerous API calls, each of which incurs native-binding overhead. Z3 can also solve iteratively: you can append additional constraints and rerun the solver from its current position. I don't think OR-Tools had this feature.
In grad school we used a language called GAMS to model and solve linear, MIP, and other types of constraint type problems. It was really powerful, but equally a pain to write.
Like others have already suggested, look into Google's OR-Tools library.
I love constraint programming. It allows you to code very close to the original spec. There are differences between systems though. Z3 might have good raw performance, but using it feels like programming in "two worlds". In contrast, Picat or Prolog integrate seamlessly with constraints. I did a talk about constraint programming with Scryer Prolog here (Spanish audio): https://www.youtube.com/watch?v=c_yP_kr7DxI
I recall learning Prolog for a CS class back in the late 80s to model problems using its backtracking (chaining) algorithm with facts. Sometime later Rete-based rules languages became popular, but I always thought Prolog’s language approach was more suitable.
I'm guessing the library python-constraint is basically just brute-forcing all possible combinations, at least when a regular python function is used as a constraint? Barring some crazy introspection, it would have to, right? And if I'm right, I don't understand why you would use a library rather than just loop over x, y, and z the old-fashioned way.
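For reference, the "old-fashioned way" described here is easy enough at three variables, but the work grows as d**n in the number of variables (toy constraints for illustration):

```python
# Exhaustive enumeration over every combination of three variables,
# keeping only the assignments that satisfy the constraints.
from itertools import product

solutions = [
    (x, y, z)
    for x, y, z in product(range(10), repeat=3)  # 10**3 candidates checked
    if x + y == z and x < y
]
print(solutions[:3])  # first few satisfying assignments
```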
From the docs, it's using proper backtracking solvers. It doesn't need introspection to do this, just a Python object that "records" its operations, e.g.:
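A toy illustration of such a "recording" object (a sketch of the general technique, not python-constraint's actual internals): overloaded operators build an expression tree instead of computing a value, so a solver can inspect the constraint rather than treat it as a black box.

```python
# Expression node recording an operator and its operands.
class Expr:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

    def __eq__(self, other):          # records, rather than compares
        return Expr("==", self, other)

    def evaluate(self, env):
        lhs = self.left.evaluate(env) if isinstance(self.left, (Expr, Var)) else self.left
        rhs = self.right.evaluate(env) if isinstance(self.right, (Expr, Var)) else self.right
        if self.op == "+":
            return lhs + rhs
        if self.op == "==":
            return lhs == rhs
        raise ValueError(self.op)

# Variable placeholder whose operators also record instead of computing.
class Var:
    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        return Expr("+", self, other)

    def __eq__(self, other):
        return Expr("==", self, other)

    def evaluate(self, env):
        return env[self.name]

x, y = Var("x"), Var("y")
c = x + y == 5                        # builds Expr("==", Expr("+", x, y), 5)
print(c.evaluate({"x": 2, "y": 3}))   # → True
```

Because the solver holds a tree, it can walk it to propagate domains instead of blindly calling a function.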
Aside from the solver being a little smarter, there's also the syntax: x, y, z might be fine, but brute-force looping over a, b, c, d...w, x, y, z is gonna take a really, really wide monitor.
It’s a bit smarter than that: there are a few tricks to make the search more informed. There are a couple of introductory articles on CSP that go more in depth than TFA.
Obvious in hindsight but I hadn’t thought of using a non-optimizing solver for an optimization problem.
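The trick is to wrap the decision procedure in a loop that keeps asking "is there a solution cheaper than the best one found so far?" until the answer is no. A self-contained sketch with a toy brute-force feasibility checker standing in for the solver:

```python
# Turn a yes/no feasibility solver into an optimizer by bound tightening:
# minimize x + y subject to x + 2y >= 8 and x != y, over 0..9.
from itertools import product

def feasible(bound):
    """Toy decision procedure: find any (x, y) satisfying the
    constraints plus the extra constraint cost < bound."""
    for x, y in product(range(10), repeat=2):
        if x + 2 * y >= 8 and x != y and x + y < bound:
            return (x, y)
    return None

bound, best = 100, None
while (sol := feasible(bound)) is not None:
    best, bound = sol, sum(sol)   # next solution must beat this cost
print(best, bound)                # optimal assignment and its cost
```

Each round either improves the incumbent or proves optimality, which is essentially how incremental solvers like Z3 are used for optimization by hand.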
I think constraint programming may get more popular because of LLMs. One of the big barriers to using it is translating your problem into constraints. That's tough for people to do. But an LLM can do it well, even if just to check that its own solution is logical.
Recently I've been playing around with MILP solvers and this sounds similar. Can anyone with some experience in these kinds of optimization problems give a summary of the similarities and differences between CP and MILP?
CP is different in that it is less a single method than a framework for individual CS insights into enforcing constraints. This is usually done via a global constraint store and a watchlist of variables for the problem under consideration.
A good example is the famous "all-different" constraint. In MIP you would implement that as a fancy sum; in CP one can use a corollary of Berge's lemma to get a specialised constraint that enforces it directly.
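The feasibility side of that specialised constraint can be sketched with bipartite matching, the idea underlying the CP propagator: all-different is satisfiable iff every variable can be matched to a distinct value in its domain. A toy check via augmenting paths (an illustration, not the full domain-filtering algorithm):

```python
# All-different feasibility via Kuhn's augmenting-path matching:
# match each variable to a distinct value from its domain.
def alldifferent_feasible(domains):
    match = {}  # value -> variable currently holding it

    def try_assign(var, seen):
        for val in domains[var]:
            if val in seen:
                continue
            seen.add(val)
            # take a free value, or re-route the variable holding it
            if val not in match or try_assign(match[val], seen):
                match[val] = var
                return True
        return False

    return all(try_assign(v, set()) for v in domains)

# three variables squeezed into two values: infeasible
print(alldifferent_feasible({"x": {1, 2}, "y": {1, 2}, "z": {1, 2}}))  # False
# enough distinct values to go around: feasible
print(alldifferent_feasible({"x": {1, 2}, "y": {2, 3}, "z": {1, 3}}))  # True
```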
Rule of thumb is that CP excels when the problem is tightly constrained and one has many global constraints (those that apply to all variables).
Another point is that, unlike MIP programs, which are often modelled directly using sums and inequalities, CP programs are usually written in a declarative language such as Zinc, which in my opinion makes it easier to get into the subject.
If you are interested: This series of blog posts builds a CP solver in Julia
Finally, there is the dissertation of Guido Tack, who developed an open-source CP solver as his PhD project (I think that is the internal solver used by SAP software now).
I think the wonderful people of the Association for Constraint Programming deserve a mention here; for those interested, check out the ACP summer school 2023.
In MILPs you can have an objective function to minimize/maximize, and continuous variables. The techniques used are based on the characteristics of the mathematical space of possible solutions, and on methods to reduce the size of the space you have to search.
The solutions of a MILP solver are provably optimal.
CP allows for constraints that are difficult to model and/or solve in MILPs (or even MIPs).
There is some overlap between the two. For example, presolving in MILP is basically a generalization of constraint programming.
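A toy example of the propagation presolve performs (illustrative, not any real solver's code): given x + y <= 10 and x >= 4, the bound y <= 6 can be deduced, and such deductions are repeated to a fixed point.

```python
# Presolve-style bound tightening: for each constraint a + b <= ub,
# deduce hi(a) <= ub - lo(b) and vice versa, until nothing changes.
def tighten(bounds, pairs_sum_ub):
    # bounds: var -> [lo, hi]; pairs_sum_ub: (a, b) -> upper bound on a + b
    changed = True
    while changed:
        changed = False
        for (a, b), ub in pairs_sum_ub.items():
            for u, v in ((a, b), (b, a)):
                new_hi = ub - bounds[v][0]     # u <= ub - lo(v)
                if new_hi < bounds[u][1]:
                    bounds[u][1] = new_hi
                    changed = True
    return bounds

b = tighten({"x": [4, 20], "y": [0, 20]}, {("x", "y"): 10})
print(b)  # x: [4, 10], y: [0, 6]
```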
If you want to learn more, I suggest playing with Z3 from Microsoft Research: a nice tutorial is https://ericpony.github.io/z3py-tutorial/guide-examples.htm