A Time Leap Challenge for SAT Solving (arxiv.org)
100 points by panic 34 days ago | 11 comments



I love these papers. In my undergrad (2008?) we read one by Robert Bixby (and others?) on integer programming, where IIRC the authors measured a 10000x speedup over some number of decades because of algorithmic improvements.

You can roll your eyes at constant-factor speedups for NP-complete problems, but in the real world these can (and, IME, do) turn some problems from intractable to tractable, or from "go and get a coffee" to "interactive HTTP call".

That said, I suppose if you compared the resource requirements for playing Go to a certain level, the advancements in ML rather beat out the old-school optimisation algorithm progress. Not an apples-to-apples comparison, but that's not super important if you're willing to settle for suboptimal solutions.


I decided to try to track down the paper you're referencing, and can support your comment with some numbers I found in a similar one he wrote in 2012.

He claimed speedups from algorithmic improvements alone ranging from 3,300x in linear programming (1988 to 2004) to 75,000x in CPLEX (1991 to 2007) [0].

One of the points he drives home in his tables is that the algorithmic improvement isn't "dwarfed" by processor/machine improvements. Rather, they multiply each other: a 3,300x speedup on the algorithm side combines with a 2,000x speedup on the machine side for a total speedup of 6.6 million x.

So what took >76 days to compute in ~1990 now takes <1 second. But using the 1990 algorithms on today's computers, those computations would still take the better part of an hour (76 days / 2,000x ≈ 55 minutes) instead of the 1 second we are accustomed to.
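A quick sanity check of those figures (my own back-of-the-envelope arithmetic, not Bixby's; the 1-second baseline is just the figure quoted above):

    #include <stdio.h>

    int main(void) {
        double alg = 3300.0;          /* algorithmic speedup (Bixby's LP figure) */
        double hw  = 2000.0;          /* machine speedup over the same period */
        double combined = alg * hw;   /* speedups multiply: 6.6 million x */

        double today_secs = 1.0;      /* ~1 second on a modern setup */
        double then_secs  = today_secs * combined;
        printf("combined speedup: %.1e x\n", combined);
        printf("implied ~1990 runtime: %.0f days\n", then_secs / 86400.0); /* ~76 */
        printf("1990 algorithm on modern hardware: %.0f minutes\n",
               then_secs / hw / 60.0);                                     /* ~55 */
        return 0;
    }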

0: https://www.math.uni-bielefeld.de/documenta/vol-ismp/25_bixb... [page 114]


> turn some problems from intractable to tractable, or from "go and get a coffee" to "interactive HTTP call"

Or broaden the set of problems that can practically be attacked using SAT solvers.

Deductively proving program correctness is one such problem. It's currently possible but clunky [0]; put another way, it's practical given a patient and skilled user. Further advances in solvers will presumably translate into 'lowering the bar' for automated correctness proving.

[0] With the disclaimer that I'm not an expert, this quote seems to capture the state of things in Ada SPARK: "none of the provers available in SPARK (CVC4, Z3 and Alt-Ergo) are able to prove the entire project on their own" https://blog.adacore.com/using-spark-to-prove-255-bit-intege...


To be fair, through Why3 they do manage to harness all kinds of SMT solvers (and other provers, like Gappa?).


Yes, this incredible speedup in mixed integer optimisation is also the underlying thesis of this book: https://www.dynamic-ideas.com/books/machine-learning-under-a...


I really wish that book was a little cheaper. It looks really interesting and I've had my eye on it for months, but there aren't really any used copies of it floating around to bring down the price.


I was on a book-buying binge a few months ago and got it then :-) Just thought to myself how much money I saved on books thanks to sites like Library Genesis ...


There was a Slashdot post many years ago that spoke of a 100,000x general/global algorithmic improvement over the decades. The point was that it absolutely dwarfed the hardware improvements over the same time span.

To be clear, this was a post, not a comment. So it wasn't just some rando spouting off. Nevertheless, I was never able to find any backing for the claim. But it's always fascinated me.

It reminds me of something from a Frank McSherry comment about scaling down (algorithmically) being a third option, in addition to up (bigger server) and out (more servers).


I’m wondering if the authors removed the debug and mem-counting code from Knuth’s CWEB files. All operations are accompanied by memory-access counters.

I looked at the code, and it’s as easy as changing a define macro.
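For anyone who hasn't looked at those files: a minimal sketch of the idea, assuming the usual o/oo/mems naming from Knuth's programs (this is illustrative, not his actual CWEB code, and COUNT_MEMS is my own switch):

    #include <stdio.h>

    unsigned long long mems;   /* running count of simulated memory accesses */

    #ifdef COUNT_MEMS
    #define o  mems++          /* charge one memory access on this line */
    #define oo (mems += 2)     /* charge two */
    #else
    #define o  ((void)0)       /* counting compiled out entirely */
    #define oo ((void)0)
    #endif

    int main(void) {
        int a[4] = {3, 1, 4, 1}, sum = 0;
        for (int i = 0; i < 4; i++)
            o, sum += a[i];    /* one charged access for the array read */
        printf("sum = %d, mems = %llu\n", sum, mems);
        return 0;
    }

Because o is just a macro, defining it away removes the bookkeeping at compile time, which is why it's a one-macro change.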


Curious about this: What factor improvement would removing them make? Have you tried it on a few large instances (comparable to those in the paper)?


What's interesting is the analogous question about progress in compiler technology: https://www.quora.com/How-true-is-Proebstings-Law-for-compil...

I also think the point made in that Quora answer is valid here too: advances in SAT solvers let us attack bigger problems that were impossible to conquer some 20 years ago.
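For context, Proebsting's Law is the half-joking claim that compiler advances double program performance only every ~18 years, versus every ~2 years for hardware in the Moore's-Law era. What those rates imply (my arithmetic, taking the law as commonly stated):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double compiler_rate = pow(2.0, 1.0 / 18.0) - 1.0;  /* ~3.9% per year */
        double hardware_rate = pow(2.0, 1.0 / 2.0) - 1.0;   /* ~41% per year */
        printf("compilers: %.1f%% per year\n", compiler_rate * 100.0);
        printf("hardware:  %.1f%% per year\n", hardware_rate * 100.0);
        /* compounded over 20 years: compilers ~2.2x, hardware ~1000x */
        printf("20-year factors: compilers %.1fx, hardware %.0fx\n",
               pow(2.0, 20.0 / 18.0), pow(2.0, 20.0 / 2.0));
        return 0;
    }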



