
I do research in combinatorial optimization. Does your company solve any problems with a combinatorial flavor? (Say, things that can be optimized using combinatorial algorithms instead of going for gradient descent.)


We don't (yet), but we are growing rapidly and always looking for ways to help our customers solve optimization problems. FWIW, we do support categorical parameters (as well as continuous and integer ones), and our ensemble of Bayesian optimization techniques is able to solve this mixed-type problem much more efficiently than techniques like gradient descent, although the way we handle the purely combinatorial (categorical-only) problem isn't as fleshed out as our mixed-type handling. We are looking to grow the team (and our offerings) if you're interested, though [1].

[1]: https://sigopt.com/careers
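
To give a rough flavor of what a mixed-type search space looks like (this is just an illustrative sketch using the open-source scikit-optimize library with a made-up objective, not our actual API):

    # Minimal sketch of Bayesian optimization over a mixed search space using
    # scikit-optimize (not SigOpt's API). The objective below is a made-up
    # stand-in for a real tuning problem.
    from skopt import gp_minimize
    from skopt.space import Real, Integer, Categorical

    space = [
        Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
        Integer(16, 256, name="batch_size"),
        Categorical(["relu", "tanh", "sigmoid"], name="activation"),
    ]

    def objective(params):
        learning_rate, batch_size, activation = params
        # A real objective would train and evaluate a model; we fake a score here.
        penalty = {"relu": 0.0, "tanh": 0.1, "sigmoid": 0.3}[activation]
        return (learning_rate - 0.01) ** 2 + abs(batch_size - 64) / 256 + penalty

    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    print("best parameters:", result.x, "best value:", result.fun)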


> [We] are able to solve this mixed-type problem much more efficiently than techniques like gradient descent.

Naive gradient descent is probably the simplest strategy that one can imagine. How do your algorithms compare to Newton methods for minimizing the primal-dual gap (interior point methods)?


We compare to some standard convex optimization techniques implemented in scipy here [1]. We have comparisons to some other Bayesian methods here [2]. I'm happy to answer any questions!

[1]: https://github.com/sigopt/sigopt-examples/blob/master/ipytho...

[2]: http://arxiv.org/abs/1603.09441
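
For a rough sense of what the scipy side of such a comparison looks like (this is only an illustrative sketch, not the notebook above), you can run a few of scipy's gradient/Newton-type optimizers on a standard test function:

    # Illustrative sketch: compare a few of scipy's gradient/Newton-type
    # optimizers on the Rosenbrock test function. Not the benchmark from
    # the notebook linked above.
    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.array([-1.2, 1.0, -1.2, 1.0])
    for method in ("BFGS", "Newton-CG", "trust-constr"):
        res = minimize(rosen, x0, jac=rosen_der, method=method)
        print(f"{method:12s} f* = {res.fun:.3e}  evaluations = {res.nfev}")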


What is the bread and butter of combinatorial optimization? In other words, what concepts would you first come across, at an undergrad level if possible?


I do research in mixed-integer linear optimization (but wrote my diploma thesis on a topic in combinatorial optimization):

> What is the bread and butter of combinatorial optimization? In other words, what concepts would you first come across, at an undergrad level if possible?

- Polyhedral combinatorics (books: "Alexander Schrijver - Combinatorial Optimization: Polyhedra and Efficiency" (more focus on polyhedral combinatorics; IMHO the best book, but not the most approachable) and "Bernhard Korte, Jens Vygen - Combinatorial Optimization: Theory and Algorithms" (more focus on algorithms; easier to read)). This of course includes (mixed-)integer linear programming ((M)ILP); a tiny example appears after this list.

- Of course learning about (M)ILPs means understanding linear programming (LP). Here I personally prefer "Alexander Schrijver - Theory of Linear and Integer Programming" (this book also covers ILP aspects, but not MILPs).

- Other books for learning about ILPs are "Dimitris Bertsimas, Robert Weismantel - Optimization Over Integers" (the main focus is ILP, but it is nevertheless a good book) and "Conforti, Cornuejols, Zambelli - Integer Programming". There are no really good books about MILPs that I know of, but these two at least cover some aspects of them.

- Sometimes semidefinite relaxations come up (most famous example: the Goemans-Williamson algorithm; less famous but also important: the Lovász-Schrijver, Sherali-Adams, and Lasserre hierarchies).
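
To make the (M)ILP part concrete, here is a tiny 0/1 knapsack written as an integer linear program and solved with scipy's HiGHS-backed MILP interface (an illustrative sketch; assumes scipy >= 1.9):

    # Tiny 0/1 knapsack as an integer linear program:
    #   maximize  values . x   subject to   weights . x <= 10,   x binary
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    values = np.array([10.0, 13.0, 7.0, 5.0])
    weights = np.array([4.0, 6.0, 3.0, 2.0])

    res = milp(
        c=-values,                                  # milp minimizes, so negate
        constraints=LinearConstraint(weights.reshape(1, -1), -np.inf, 10.0),
        integrality=np.ones_like(values),           # 1 = integer-constrained
        bounds=Bounds(0, 1),                        # binary = integer in [0, 1]
    )
    print("take items:", res.x.round().astype(int), "value:", -res.fun)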


I'd also like to throw in some work on nonlinear programming by a former colleague of mine at Argonne, Sven Leyffer:

- A compendium he co-edited named (appropriately enough) Mixed Integer Nonlinear Programming

- A review paper he co-authored for Acta Numerica: http://www.mcs.anl.gov/papers/P3060-1112.pdf

Also, yeah, the "Alexander Schrijver - Theory of Linear and Integer Programming" reference is solid.


The survey article by Sven Leyffer is a good paper, but comes from a completely different direction:

It comes from people in convex optimization trying to additionally impose some integrality conditions (a little bit as a second-class citizen). Classical combinatorial optimization, on the other hand, treats integrality conditions as first-class citizens. I, coming from (M)ILP, would argue that the MINLP people coming from convex optimization tend to sidestep all the problems that make ILP so hard (and interesting). MINLP people, in turn, would equally vocally argue that the (M)ILP people tend to prefer "academic" problems and don't grasp how many important research questions they miss.

It's up to the reader to decide which side is right. :-)

My personal opinion in this "flamewar" is that if you come from a computer science background (in particular theoretical computer science), you will probably prefer the classic MILP culture. On the other hand, if you come from engineering, you will probably prefer MINLP theory as outlined in Sven Leyffer's survey paper.


Yeah, I think that's probably the split: folks from computer science/discrete math on one side and folks from engineering on the other. I grew up in math, but I was on the numerical analysis side, so I definitely ended up on the MINLP side, which is why that's what I generally reference. There is certainly something elegant about ILP problems which gets lost when treating them with the sledgehammer that is gradient-based convex optimization.


> There is certainly something elegant about ILP problems which gets lost when treating them with the sledgehammer that is gradient-based convex optimization.

It is funny that you call gradient-based convex optimization a "sledgehammer", since people working in combinatorial optimization (as opposed to ILP) tend to call ILP methods (e.g. cutting plane algorithms, branch & bound, branch & cut, relaxation hierarchies, ...) a "sledgehammer" as well. :-D They are just jealous. :-)


I almost exclusively work on problems that can be solved (by a combinatorial algorithm) in polynomial time. I have never used the theory of mixed-integer linear programs, but that's because I focus on different aspects of combinatorial optimization.

If I taught an undergrad course, I would model it after Michel Goemans's course: http://www-math.mit.edu/~goemans/18433S15/18433.html I would also introduce some submodular functions (and touch on submodular flow). Submodularity captures half of the things encountered in the course, and it is general and simple enough to be the first thing to try. For example, the following problem might be difficult if one tries to create an algorithm by modifying the standard matching algorithms; however, one can easily show it is polynomial-time solvable by proving some submodular property.

http://cstheory.stackexchange.com/questions/20245/subset-of-...
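
For a concrete (toy) feel for submodularity, not the linked problem itself, here is the classic greedy algorithm for maximizing a monotone submodular function, set coverage in this case, under a cardinality constraint; greedy is guaranteed a (1 - 1/e) approximation for this class:

    # Toy submodular maximization: greedy selection under a cardinality
    # constraint. coverage() is monotone submodular, so picking the largest
    # marginal gain each round gives the (1 - 1/e) guarantee.
    def coverage(selected, sets):
        covered = set()
        for i in selected:
            covered |= sets[i]
        return len(covered)

    def greedy_max_coverage(sets, k):
        selected = []
        for _ in range(k):
            gains = {i: coverage(selected + [i], sets) - coverage(selected, sets)
                     for i in range(len(sets)) if i not in selected}
            best = max(gains, key=gains.get)
            if gains[best] == 0:
                break
            selected.append(best)
        return selected

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
    print(greedy_max_coverage(sets, k=2))  # -> [2, 0]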



