
I must say I'm not familiar at all with constraint programming, but it looks like something I've been wanting to know for a long time.

A practical problem I have right now is writing a device driver that needs to configure phase-locked loops (PLLs) to generate certain frequencies. However, the setup is not straightforward: I have to configure a set of divisors, multipliers, and some muxes. To make matters worse, at each step I have constraints on the possible frequency range at that point. And finally, I sometimes have several of those PLLs in series.

The only solution I have right now is a C program that brute-forces all combinations, checks whether the constraints are satisfied, and outputs an array of configs for the subset of frequencies I need. If I need a new frequency I have to re-run my program and update the array. It works, but it's far from ideal.

Unfortunately, all the tutorials I find online for this CP thing use Prolog or some third-party C++ library, which is of course not practical for kernel programming...

If I could implement a clever solver that runs fast enough (unlike my brute-force solution), I could just do it dynamically in the driver, and it'd be much more elegant.

But I really don't know where to start. Any pointers/things to google for?



Are you proposing to write a simple constraint solver in C? If it's specific to your problem, i.e. not too general, it's not the craziest idea I've ever heard.

I don't mind fielding questions over the basic theory if you want to email me (iain at my domain).


That's very nice of you! I'll gather my thoughts and send you an email soon. Thanks a lot.


I think the best advice I could give you is to be aware that what you're hoping for -- an algorithm with performance bounds tight enough that you can use it in a device driver -- probably doesn't exist. There is some chance, admittedly, that you will get lucky and find some structure in the problem that you can exploit to produce such an algorithm. But it's also entirely possible that there isn't any such structure, and the fact that you've resorted to brute force search to this point suggests there may not be.

Even if there isn't, there is still hope for ways to search the space faster. This problem sounds like something that an SMT solver might work well on. I would give that a try.


There are Simplex implementations in C that you should be able to build with almost any C compiler... GLPK is the first one that comes to mind.


Here is a collection of open-source Operations Research projects (CP and LP both fall under the umbrella of Operations Research): http://www.coin-or.org/projects/

It includes many fantastic open-source libraries that handle all sorts of optimization and assignment problems.


Here is what 'constraint' programming is:

First we start with what 'mathematical' programming (optimization) is. For a reasonably general definition, let R denote the set of real numbers and m and n be positive integers. Let R^n be real n-dimensional space, that is, the set of n-tuples of real numbers. Let x be in R^n.

Next we have two functions. We have an 'objective' function f: R^n --> R, and we have a 'constraint' function g: R^n --> R^m. We can think of g as m functions mapping R^n --> R. Then the optimization problem is to find x so that f(x) is as large as possible while g(x) = 0 where here 0 is the point in R^m with all components 0. Or we can ask that g(x) <= 0 by which we mean that each of the m components of g(x) is <= 0 in R. In 'linear programming', functions f and g are both linear. So, function f is given by just n coefficients, and function g is given by an m x n matrix.
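As a concrete (illustrative) instance with n = 2 and m = 4, take

```latex
\[
f(x) = 3x_1 + 2x_2, \qquad
g(x) = \begin{pmatrix}
x_1 + x_2 - 4 \\
x_1 - 2 \\
-x_1 \\
-x_2
\end{pmatrix} \le 0 .
\]
```

The last two components of g just encode x_1 >= 0 and x_2 >= 0. The feasible region is a small polygon, and the maximum of f over it is attained at x = (2, 2), where f(x) = 10.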

If there exists x in R^n so that g(x) = 0 (or <= 0 for that version of the problem), then x is a 'feasible' solution. If x is feasible and for any feasible solution u in R^n we have that f(x) >= f(u), then x is an 'optimal' solution.

Linear programming where we also require all the components of x to be integers is 'integer' linear programming (ILP), and it is NP-complete. Industry is awash in important ILP problems, e.g., airline scheduling. There is an enormous literature on practical means of solving ILP problems; Google George Nemhauser, long at Georgia Tech.

The problem may fail to be feasible. If the problem is feasible, it may fail to have an optimal solution.

Constraint programming is the same except we have no objective function and are looking for just feasible solutions.

Since in principle finding a feasible solution is as hard as finding an optimal solution, constraint programming is essentially the same subject as mathematical programming (optimization).

Constraint programming is nice when a planner is just trying to get some work done and knows that the cost will be the same for all alternatives. E.g., maybe yesterday they killed 5000 hogs, overnight chilled them down, today are cutting them into pieces and putting the pieces into boxes, and during the day want to back 18-wheel refrigerated trucks into the small loading dock in the right order to get all the boxes loaded and each truck with the boxes it needs for its deliveries. Once a real problem.

SAP in Germany has wanted to offer constraint programming as part of its software suite. Since constraint programming is essentially the same as mathematical programming (mostly just different in how the user sees it and, thus, in the user interface), at one time SAP worked with CPLEX, the company of R. Bixby (long at Rice U.), which has some of the best software for linear and integer programming.



