An Intro to Integer Programming for Engineers: Simplified Bus Scheduling 122 points by dget on Nov 17, 2016 | hide | past | favorite | 23 comments

 Integer programming often seems magical. I remember early grad school, seeing "Optimal and Near-optimal Global Register Allocation Using 0–1 Integer Programming" by Goodwin et al., where the very crunchy problem of register allocation was solved largely by punting it to an ILP solver. Gotchas abounded, but it was eye-opening to see how many hard problems could be solved (or at least adequately approximated) in this fashion. I think this is one of those techniques that every serious computer scientist should have in their toolbox. Someone who knew some statistics, a smattering of something like SMT (enough to drive Z3), and some linear programming could routinely resemble Gandalf at any shop where people are struggling with difficult problems. The main thing is not to turn into a cookbook guy. I've known a few people whose ability to just build an actual algorithm to solve an interesting problem has atrophied, since they reach for an ILP solver the minute anything gets difficult.
 Agreed that it seems magical. This is one of the areas where the open-source tools are way behind commercial solvers. CPLEX and Gurobi are really impressive pieces of software.
 Doubly magical now that computers (and solvers) are so fast. A few times I've thought, "Oh, I could reduce this to a max-flow problem and write/dig up an algorithm to solve it," before realising it'd just be easier to write it as a linear program (or an integer program, "just in case"). And then if weird constraints came along that broke the max-flow reduction, I could usually shoehorn them into the formulation. Can't remember how to write Dijkstra's algorithm? Meh, integer programming it is. Want to write a sudoku solver and can't be arsed with dancing links or backtracking or Prolog or... whatever, CBC will do it. It'll be nasty and a hell of a lot slower than doing it "the right way" in a fast language, but it might not be slower than doing it in Python or JavaScript.
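 The "can't remember Dijkstra, just write it as a linear program" move above can be sketched concretely. This is a made-up toy graph, not anything from the thread: shortest path formulated as a min-cost flow LP and handed to scipy's `linprog` rather than a graph algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Shortest path as an LP: push one unit of flow from source 0 to sink 3.
# Edges are (u, v, cost); the graph is invented for illustration.
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 1.0), (1, 3, 5.0), (2, 3, 1.0)]
n_nodes, n_edges = 4, len(edges)

cost = np.array([c for _, _, c in edges])
A_eq = np.zeros((n_nodes, n_edges))  # one flow-conservation row per node
for j, (u, v, _) in enumerate(edges):
    A_eq[u, j] += 1.0                # edge j leaves u
    A_eq[v, j] -= 1.0                # edge j enters v
b_eq = np.zeros(n_nodes)
b_eq[0], b_eq[3] = 1.0, -1.0         # source emits one unit, sink absorbs it

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n_edges)
# The flow-conservation matrix is totally unimodular, so the LP optimum
# is already integral here -- no explicit integrality constraints needed.
print(res.fun)        # cost of the shortest path (0 -> 1 -> 2 -> 3)
print(res.x.round())  # which edges carry flow
```

 Exactly the trade the commenter describes: slower than Dijkstra, but the formulation survives "weird constraints" (edge capacities, forbidden pairs) that would break the clean reduction.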
 The latter is what scares me. I solved a 'bit twiddling' problem using Z3 and felt a little ashamed. It is reminiscent of Brewster's letter to Martin Gardner: "Feeding a recreational puzzle into a computer is no more than a step above dynamiting a trout stream." It's very easy to let skills atrophy and resort to using black boxes we don't really understand (well, I speak for myself, anyhow). When Z3 or CPLEX or whatever tool du jour gets wedged on a problem, I'm not sure I have the background in these tools to get unstuck.
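 For flavour, the kind of 'bit twiddling' fact one might hand to Z3 -- here the classic identity that `x & (x - 1)` clears the lowest set bit -- checked by brute force instead of an SMT solver. A made-up example, not the commenter's actual problem:

```python
# Claim: x & (x - 1) equals x with its lowest set bit cleared.
def clear_lowest_set_bit(x):
    return x & (x - 1)

for x in range(1, 1 << 16):          # exhaustive over 16-bit values
    lowest = x & -x                  # two's-complement trick: isolate lowest set bit
    assert clear_lowest_set_bit(x) == x - lowest

print("identity holds for all 16-bit values")
```

 Z3 would prove this over all bit-widths symbolically; the brute-force check is the "dynamiting a trout stream" version of the same thing.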
 With Z3, unfortunately, the issue is that we are really bad at gauging what makes for a hard SMT problem. Often, knowing that a particular problem is hard is the same as solving it! That said, you can do some really interesting things with SMT solvers if you can formulate the problem the right way. Gurobi, CPLEX, and friends are less magical if you have enough mathematical maturity. Typically, if you are comfortable with linear algebra (a subject woefully ignored in most undergraduate curricula), you can work with them. In my experience, working with optimization engines is a two-step process. First, you need to figure out how to formulate the problem in a smart way (typically this means that it's convex, with constraints in a certain format), which is normally the hard part. Then you scroll through the list of algorithms the engine has at its disposal and pick one with properties that you like (they often have suggestions if you have no strong preference). Normally, if you can formulate the problem so it's convex, you will at least have a good shot at doing well with your optimization.
 I have a PhD in optimization and I still think CPLEX and Gurobi seem magical. The devil is in the details. Open-source solvers are comparatively not very good.
 Oh for sure. You could go down a real rabbit hole if you had to implement that stuff yourself.
 I wrote a sudoku solver in my first year of college using some heuristics and some sort of branch-and-cut algorithm, in Java. It used bit masks to keep track of the state and possibilities; I was pretty proud, spent like weeks on that. Recently I reimplemented it in an afternoon as an ILP in PuLP. It's similarly fast, and both can solve similar sets of problems. But the ILP solution was so much easier and shorter.
 > I wrote a sudoku solver in my first year of college using some heuristics and some sort of branch-and-cut algorithm, in Java.What class of cutting planes did you use in your B&C algorithm? (Full disclosure: I'm doing my PhD on cutting planes, hence my interest.)
 I mean branch-and-bound -- i.e. just reject when the solution becomes infeasible and backtrack. No cutting planes in first-year college. ;) I think the heuristics may have been based on selecting cells that had the fewest possible values, in order to quickly 'zoom in' on the areas where you can quickly prune infeasible solutions.
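 The scheme described -- reject on infeasibility, backtrack, and branch on the cell with the fewest remaining candidates -- fits in a few lines of Python. A sketch of that idea, not the commenter's original Java:

```python
def solve(grid):
    """Backtracking sudoku solver; grid is 9x9 lists, 0 = empty. Solves in place."""
    def candidates(r, c):
        # values already used in this cell's row, column, and 3x3 box
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
        return [v for v in range(1, 10) if v not in used]

    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True
    # the heuristic from the comment: branch on the empty cell with the
    # fewest remaining candidates, so infeasible branches die early
    r, c = min(empties, key=lambda rc: len(candidates(*rc)))
    for v in candidates(r, c):
        grid[r][c] = v
        if solve(grid):          # recurse; reject-and-backtrack on failure
            return True
        grid[r][c] = 0
    return False

# A well-known sample puzzle, just to exercise the solver.
puzzle = [list(map(int, row)) for row in (
    "530070000", "600195000", "098000060",
    "800060003", "400803001", "700020006",
    "060000280", "000419005", "000080079",
)]
solve(puzzle)
print(puzzle[0])  # first row of the completed grid
```

 Branch-and-cut would additionally derive cutting planes at each node of this tree; for sudoku, plain branch-and-bound with this fewest-candidates rule is already fast.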
 CPLEX and Gurobi -- R. Bixby.
 I had the opportunity to interact with Gu and Bob Bixby of Gurobi (& CPLEX). Some of the most talented OR scientists.
 Thanks. You are ahead of me. I'd heard of Bixby's role but not Gu's.
 What's the distinction between integer programming and constraint programming? From a cursory glance they appear to be the same solution to the same problem...
 Integer programming is effectively searching the edge of a convex polygon (polytope in higher dimensions) for the optimal value of a function defined in terms of the coordinates of the points in space. The inequalities are planes that subdivide the solution space into permitted and non-permitted domains. Constraint programming generates lots of potential solutions (combinatorially) and prunes the search tree to make the large numbers tractable. The intuitions behind the two techniques are quite different.
 > Integer programming is effectively searching the edge of a convex polygon...Great definition of linear programming. In a sense the thing that makes integer programming hard is that the feasible region is not convex -- for two feasible solutions x and y, ax + (1-a)y for 0<=a<=1 is not necessarily feasible.I'd also say that integer programming is a kind of constraint-based programming extended with the addition of an objective function -- we're not just looking for any satisfactory answer, but an answer with a cost or value that cannot be improved upon. You're definitely right that "things called constraint-based programs" tend to be solved in different ways, though (and the languages they're expressed in tend to be nicer, too.)
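 The non-convexity claim -- two feasible x and y with ax + (1-a)y infeasible -- can be checked directly on a toy feasible set (the instance is made up for illustration):

```python
# Toy integer program feasible set: x + y >= 1, 0 <= x, y <= 1, x and y integer.
def feasible(x, y):
    integral = float(x).is_integer() and float(y).is_integer()
    return integral and 0 <= x <= 1 and 0 <= y <= 1 and x + y >= 1

p, q = (1, 0), (0, 1)
a = 0.5
mid = (a * p[0] + (1 - a) * q[0], a * p[1] + (1 - a) * q[1])

print(feasible(*p), feasible(*q))  # both endpoints are feasible
print(feasible(*mid))              # the midpoint (0.5, 0.5) satisfies the
                                   # linear constraints but not integrality
```

 Drop the integrality requirement and the set becomes convex again, which is exactly why the LP relaxation is so much easier to solve.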
 I can't think of any serious constraint solvers without some sort of optimization support -- are there any?
 I think your definition of integer programming is not correct. Integer programming (or mixed-integer programming) assumes that all or some of the variables are integers. Most integer programming models use binary variables to indicate decisions. Further, I think you were describing linear programming at the beginning of your post -- but linear programming searches the vertices of the polytope, not the edges. The biggest issue with integer programming is the lack of convexity of the solution space compared to linear programming, so the approach for solving it is very different. If I've made any mistakes in my post, I welcome corrections.
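 Both points -- a linear objective is optimized at a vertex of the polytope, and integrality can make the achievable optimum strictly worse -- show up in a brute-forced toy instance (invented for illustration):

```python
from itertools import product

# Toy problem: maximize x + y subject to 2x + 2y <= 3, x, y >= 0.
# The LP feasible region is a triangle with vertices (0,0), (1.5,0), (0,1.5);
# a linear objective always attains its optimum at one of the vertices.
vertices = [(0.0, 0.0), (1.5, 0.0), (0.0, 1.5)]
lp_opt = max(x + y for x, y in vertices)

# The integer program keeps only the lattice points inside that triangle.
int_points = [(x, y) for x, y in product(range(2), repeat=2) if 2 * x + 2 * y <= 3]
ip_opt = max(x + y for x, y in int_points)

print(lp_opt)  # attained at the fractional vertex (1.5, 0)
print(ip_opt)  # strictly worse: no integer point reaches the LP optimum
```

 The gap between the two values is why IP solvers need branch-and-bound or cutting planes on top of the LP machinery, rather than just walking vertices.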