An Intro to Integer Programming for Engineers: Simplified Bus Scheduling (remix.com)
139 points by a_w 212 days ago | 28 comments

Google has a very nice suite of optimization tools, including an integer programming solver and a constraint solver, available for third-party use. Docs here: https://developers.google.com/optimization/ I recently got a chance to use it at work, and it really is magical how you input a problem description and it outputs a solution.

The documentation is hit or miss in some cases. MPSolver is pretty well documented. The more powerful constraint solver . . . well, I have no idea how to use the SA or Tabu search features, and minimal confidence that I'm doing anything correctly, except that it seems to emit correct-ish answers.

SA and Tabu are both heuristic, so you're never going to get anything but correct-ish answers (definitely no guarantee of THE correct answer).

I think you mean that they will not necessarily find optimal solutions, at least when the problem size is large. I understand that. What I don't understand is how to verify that they're giving me anything better than local search. I also don't have the background to know what settings to use (initial temperature in the SA case or the similar setting for Tabu search) nor which heuristic better suits my problem.

I'm not familiar with Google's particular implementation of these, but with the right parameter values they are equivalent to local search. You could test with those parameters and see if the solution changes.

In the simplest implementation of Tabu Search, a maximum tabu list length of zero will never reject any candidates.
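A minimal Python sketch of that degenerate case (purely illustrative, not Google's implementation; the function and its signature are made up):

```python
def tabu_search(f, x0, neighbors, iters=100, tenure=0):
    """Minimal tabu search for minimisation. With tenure=0 the tabu
    list is always empty, so no candidate is ever rejected and the
    procedure degenerates into best-improvement local search."""
    x = best = x0
    tabu = []
    for _ in range(iters):
        # Filter out tabu candidates; with tenure=0 nothing is filtered.
        cands = [n for n in neighbors(x) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=f)      # move to the best non-tabu neighbor
        tabu.append(x)
        tabu = tabu[-tenure:] if tenure else []
        if f(x) < f(best):
            best = x
    return best
```

For example, minimising f(x) = (x - 3)^2 over the integers with neighbors x - 1 and x + 1, `tabu_search(lambda x: (x - 3) ** 2, 10, lambda x: [x - 1, x + 1])` walks straight downhill to 3.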

A simple simulated annealing should become a naive local search if the probability of accepting a worse candidate solution is zero.
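The simulated annealing side of the same claim, again as a hypothetical pure-Python sketch: the acceptance probability for a worse candidate is exp(-delta/T), so with the initial temperature set to zero a worse candidate is never accepted and the loop degenerates into naive local search.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, temp0=10.0, cooling=0.95,
                        iters=500, seed=0):
    """Minimal simulated annealing for minimisation. With temp0=0 the
    worse-candidate acceptance probability is zero, so only improving
    moves are taken: a naive local search."""
    rng = random.Random(seed)
    x = best = x0
    t = temp0
    for _ in range(iters):
        cand = neighbor(x, rng)
        delta = f(cand) - f(x)
        # Accept improvements always; accept worse moves with
        # probability exp(-delta/t), which is 0 when t == 0.
        if delta <= 0 or (t > 0 and rng.random() < math.exp(-delta / t)):
            x = cand
            if f(x) < f(best):
                best = x
        t *= cooling
    return best
```

With `temp0=0.0` and the same toy objective as before, `simulated_annealing(lambda x: (x - 3) ** 2, 10, lambda x, rng: x + rng.choice([-1, 1]), temp0=0.0)` only ever steps downhill and ends at 3.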

As far as which heuristic is better: I'd avoid premature optimization when solving optimization problems. Pick one, and if you're satisfied with the results, then go with it? Academics usually test their heuristics against benchmark problem instances with known optimal solutions (or if no optimal solution is known, best known upper bounds). That would be a bit more thorough.

Yeah, I think the main issue is that I have only tested on pessimal problems so far. In this case, I'm only able to find one or two solutions before I run out of time to iterate, so neither Tabu nor SA are giving me much. Monday I'll be doing some more testing on realistic problems and I'll be able to get a better idea of which solution is best. Or maybe I'll find that I don't need heuristics at all, since the real problems may have a much lower branching factor than my pessimal case.

Only one or two feasible solutions? That's a little suspicious, but in general feasibility isn't any easier than optimality.

If you'd like to talk optimization/metaheuristics feel free to shoot me an email (in my profile). I have a fair bit of experience with them so I might be able to point you in the right direction.

I only have a few milliseconds to do the search. I erred in what I said though. I find one or two improvements on the initial result during the time I have allotted.

I'd reach out, but I doubt the company would approve of talking to a third party about the details of internal work. We also have some experts internally who I can talk to if I get stuck. :) Thank you for the kind offer, though.

Definitely true, and the lack of distinction drives me insane sometimes.

However, describe this distinction to the average executive and tell us how it goes. :-)

The solution presented in the article also wasn't guaranteed to be THE correct answer.

Not correct-ish, optimal-ish.

I'm always glad to see articles about integer programming as it's an incredibly powerful tool which deserves to be more widely known.

However, I'm not sure the example chosen was the right introductory problem. As illustrated, we don't get the certifiably optimal solution because the true optimal solution may involve one of the duties we didn't randomly generate. If we don't get the optimal solution, why not just use a heuristic? Yes, there's a brief allusion to column generation which can address this problem, but surely there's an introductory problem which can be solved without requiring the user to think about column generation or similar?

Yeah, having read it I can't help but feel that a simulated annealing approach might end up generating a better result, if only because the limiting factor for simulated annealing is how quickly you can generate random duties and check some objective function, not how quickly you can solve an NP-complete problem with a large number of variables.

Of course having found a couple of good combinations of duties with simulated annealing there's nothing stopping you from trying the integer program on those (+ variations) to try and find a better version.

When I think of constraint-based problems I think of Prolog. A web search revealed several Prolog solutions to the bus scheduling problem. These solutions were all easier to read and understand than the imperative solution presented in this article. I then got to wondering if anyone had solved this problem using miniKanren. If anyone has, either I couldn't find it or it was never posted.

I second this. Prolog is extremely well suited for combinatorial optimization, and combinatorial tasks in general. There is also significant overlap between constraint programming and integer programming: Many tasks can be easily solved with both approaches or a combination of them, and sometimes one of them is more suitable.

Several Prolog systems ship with great implementations of both approaches.

> Prolog is extremely well suited for combinatorial optimization, and combinatorial tasks in general.

I believe that it can be well suited for such problems—that is, that it is possible to write good solutions to them [0]—but isn't someone diving in without much experience likely just to write solutions that are age-of-universe slow because of combinatorial explosion?

[0] I don't know if it's available online anywhere, but Richard O'Keefe's "The craft of Prolog" (well worth reading for anyone interested in problem solving, programmer or not) has a gorgeous explanation of how to solve the problem "find a 9-digit number such that the `n`-digit initial [or maybe final?] segment is divisible by `n` for each `n` from 1 to 9", starting with the most naïve approach and progressing to a much more efficient one. It has a punchline that, for the benefit of anyone who hasn't read it, I won't spoil here.
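For the prefix reading, with the digits 1 through 9 each used exactly once, the naive starting point can be sketched in Python as a brute-force search over permutations (slow but simple; the function name is made up for illustration):

```python
from itertools import permutations

def polydivisible():
    """Find all 9-digit numbers using digits 1-9 exactly once in which
    the first n digits form a number divisible by n, for n = 1..9."""
    out = []
    for p in permutations("123456789"):
        # all() short-circuits on the first failing prefix.
        if all(int("".join(p[:n])) % n == 0 for n in range(1, 10)):
            out.append("".join(p))
    return out

print(polydivisible())  # ['381654729']
```

Checking all 362,880 permutations like this takes a few seconds, which is exactly the kind of naive baseline that the more efficient approaches improve on.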

The trick for achieving efficient Prolog solutions of such tasks is to use the Prolog system's built-in features for constraint logic programming, abbreviated as CLP. Constraints automatically prune those parts of the search space that do not contain solutions. This typically leads to extremely significant speedups over naively generating all possibilities.

For example, integer constraints are available under the name "constraint logic programming over finite domains", abbreviated as CLP(FD), in all of the most widely used Prolog implementations. To solve combinatorial optimization tasks with Prolog, I recommend starting with CLP(FD). If necessary, you can always combine it with other approaches, such as applying the simplex algorithm, or with custom methods and heuristics.

Please note that constraints have become widely available in Prolog systems only after the book you cite was published, and it does not mention these methods at all. With current systems, you can elegantly and efficiently solve the task using CLP(FD) constraints.

For example, here is a Prolog version that describes the task you mention via constraints on integers:

    solution(Ds) :-
            length(Ds, 9),
            Ds ins 0..9,
            foldl(segment_divisible(Ds), Ds, 1, _).

    segment_divisible(Ds, _, I0, I) :-
            I #= I0 + 1,
            length(Finals, I0),
            append(_, Finals, Ds),
            reverse(Finals, RFs),
            foldl(digits_number, RFs, 0-0, N-_),
            N #= I0*_.

    digits_number(D, I0-P0, I-P) :-
            I #= I0 + D*10^P0,
            P #= P0 + 1.

We can use it to generate solutions, for example:

    ?- solution(Ds), label(Ds).
    Ds = [0, 0, 0, 0, 0, 0, 0, 0, 0] ;
    Ds = [0, 0, 0, 0, 1, 2, 6, 0, 0] ;
    Ds = [0, 0, 0, 0, 1, 5, 1, 2, 0] ;
    Ds = [0, 0, 0, 0, 2, 7, 7, 2, 0] .

To exclude solutions where the leftmost digit is 0, we can simply impose an additional constraint on that digit. In addition, we are free to try different search strategies, completely decoupled from the constraints:

    ?- solution(Ds), Ds = [First|_], First #\= 0,
       reverse(Ds, Rs), label(Rs),
       portray_clause(Ds), false.
    [9, 0, 0, 0, 0, 0, 0, 0, 0].
    [8, 1, 0, 0, 0, 0, 0, 0, 0].
    [7, 2, 0, 0, 0, 0, 0, 0, 0].
    [6, 3, 0, 0, 0, 0, 0, 0, 0].
    [5, 4, 0, 0, 0, 0, 0, 0, 0].
    [4, 5, 0, 0, 0, 0, 0, 0, 0].
Constraints allow us to describe such tasks easily and solve them efficiently. Note that this is very different from naive exhaustive search: The constraint solver automatically prunes those branches of the search space that cannot lead to a solution.

Thanks! I forgot a constraint: we must use exactly the digits 1 through 9 (of course, each once). I assume that could easily be incorporated?

Yes, this is easy to add, since there is a dedicated constraint called all_distinct/1 that we can use to express that a list of integers are pairwise distinct.

In this concrete case, we get this by simply adding all_distinct(Ds) to the query. For example:

    ?- solution(Ds),
       all_distinct(Ds),
       reverse(Ds, Rs),
       label(Rs).
This yields Ds = [6, 5, 4, 8, 7, 3, 1, 2, 0], in addition to other solutions. Now we only need to add that none of the integers must be 0. We can express this (for example) with the additional constraint Ds ins 1..9:

    ?- solution(Ds),
       Ds ins 1..9,
       all_distinct(Ds),
       reverse(Ds, Rs),
       label(Rs).
However, when you run this query, you get false in return. This means that there is no solution that satisfies all these constraints at the same time.

Therefore, the task you mentioned was probably about the prefixes, i.e., the initial segments of the number. We can easily express this with small modifications to the program above:

    solution(Ds) :-
        length(Ds, 9),
        Ds ins 1..9,
        foldl(segment_divisible(Ds), Ds, 1, _).

    segment_divisible(Ds, _, I0, I) :-
        I #= I0 + 1,
        length(Prefix, I0),
        append(Prefix, _, Ds),
        reverse(Prefix, RPs),
        foldl(digits_number, RPs, 0-0, N-_),
        N #= I0*_.

    digits_number(D, I0-P0, I-P) :-
        I #= I0 + D*10^P0,
        P #= P0 + 1.

With this program, we get:

    ?- solution(Ds),
       all_distinct(Ds),
       label(Ds).
    Ds = [3, 8, 1, 6, 5, 4, 7, 2, 9] ;
    false.
This means that there is a solution, and it is in fact unique! This is determined very efficiently thanks to the pruning that is automatically applied by the constraint solver.

I'm sorry for the delay; I didn't realise that you'd responded. I couldn't find the O'Keefe book to look up the exact challenge, but I think you've correctly reverse-engineered and solved it. Thanks again!

Prolog is well suited for small (toy) combinatorial optimization problems, but useless in practice. It's no wonder the whole 5GL idea tanked, mainly because Prolog just couldn't deliver. Yes, I had fun with SWI-Prolog and second-order abstractions, but there is no way I am going to use it on large problems, though I might implement pieces of Warren's Abstract Machine when I see fit.

SWI-Prolog lacks a commercial-grade constraint solver that is sufficiently efficient for the tasks you might have tried. If you are interested in applying constraint solving for very complex combinatorial tasks, consider trying a Prolog system with more efficient constraint solvers which are routinely used in industry. Note that SWI-Prolog is not based on the WAM either.

The only one I know of is SICStus Prolog, and it hasn't kept up with the state of the art, such as OR-Tools and HaifaCSP, both of which are available through a FlatZinc language interface.

Prolog has some benefits in its more declarative nature, but it really constrains you, because it forces you to use an entire Prolog implementation when all you really need is an interface to a solver.

Conversely, what if you really need a Prolog system or indeed any general purpose programming language, but only have an interface to a more limited solver? I find this even more constrained. As technology progresses, interfaces to other solvers that appear from time to time become available in Prolog systems if enough users demand it.

For many cases, a Prolog system is already more than sufficient on its own, and is far more flexible than specialized tools, which can still be provided as add-ons to Prolog. It is straightforward to use only a specific subset of Prolog, if that is all you need. In contrast, it is generally impossible to use basic features of Prolog, such as backtracking and logical variables, in more specialized tools and solvers.

I like how this tutorial takes the practicality a step further by suggesting and showing how to use a specific library, so you don't have to try to solve the problem yourself.

But for an understanding of the problem space, and as a good place to start trying to answer this unsolved problem, this is still my favorite integer programming explanation:

> and i found this integer programming tutorial comprehensive in offering practical applications of the desired algorithm(i)

do any of these examples explain the problem best for you? can you write a program that will solve this question for you? now change the values of the variables, increase the number of variables does your algorithm still work on these other inputs?

can you rewrite the algorithm to give the correct answers for these other inputs?

can you develop enough coverage to prove all possible inputs return correct values?

if your coverage is substantial, seemingly complete, can you write against your algorithm to find inputs that will still require you to further develop the algorithm?

for me, i felt i was able to follow the explanation for how the author found the answer and through that was able to start to see the shape of the coverage necessary to accommodate such a problem (ii)

this specific tutorial is on integer programming.. one of karp's 21, and through reduction(iii) on our understanding of 0-1 integer programming we can understand all of the other 21

(o) 1972: Richard Karp: Reducibility Among Combinatorial Problems : http://www.cs.berkeley.edu/~luca/cs172/karp.pdf

(i) http://mat.gsia.cmu.edu/orclass/integer/integer.html

(ii) http://mat.gsia.cmu.edu/orclass/integer/node4.html

(iii) 1971: Stephen A. Cook: The Complexity of Theorem-Proving Procedures: http://4mhz.de/download.php?file=Cook1971_Letter.pdf

+1 for Mike Trick. The man is one of the best ambassadors the OR community has right now.

A well-written tutorial, but mildly annoying that it starts off with a basic off-by-one error: hourly trips from 9am to 9pm inclusive make 13 trips, not 12, so the total trip count is 52 (which is indeed the number of trips shown in the example, not the 48 stated just above it).

I interpreted it as the bus starting at 9am and stopping at 9pm, making for 12 trips of one hour each. So the off-by-one error would be in the code generating the possible trips.

In fact, the code as shown only produces 48 trips, instead of the 52 given. I had to actually run it to make sure the Python interpreter in my head wasn't buggy. No idea what code they actually ran to get that result.
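The two counts are easy to reproduce. A hypothetical sketch, assuming hourly departures and the article's four routes (the route count is inferred from 52/13 and 48/12, not stated here):

```python
routes = 4  # assumed number of routes, matching 52 = 13*4 and 48 = 12*4

# Departures on the hour from 9am through 9pm inclusive: 13 per route.
inclusive = len(range(9, 22)) * routes

# Last departure at 8pm (9pm exclusive): 12 per route.
exclusive = len(range(9, 21)) * routes

print(inclusive, exclusive)  # 52 48
```

Whether `range` stops at 21 or 22 is precisely the off-by-one being discussed: the prose implies the inclusive count, while the generated trips match the exclusive one.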
