You can roll your eyes at constant-factor speedups for NP-complete problems, but in the real world these can (and, IME, do) turn some problems from intractable to tractable, or from "go and get a coffee" to "interactive HTTP call".
That said, I suppose if you compared the resource requirements for playing Go to a certain level, the advancements in ML rather beat out the old-school optimisation algorithm progress. Not an apples-to-apples comparison, but that's not super important if you're willing to settle for suboptimal solutions.
He claimed anywhere from a 3,300x speedup in linear programming from 1988 to 2004, to a 75,000x speedup from algorithmic improvements in CPLEX from 1991 to 2007.
One of the points he drives home in his tables is that the algorithmic improvement isn't "dwarfed" by processor/machine improvements. Rather, they multiply each other: a 3,300x speedup on the algorithm side combines with a 2,000x speedup on the machine side for a total speedup of 6.6 million x.
So what took >76 days to compute in ~1990 now takes <1 second. But run the 1990 algorithms on today's computers and you only get the ~2,000x machine speedup: those computations would still take the better part of an hour instead of the 1 second that we are accustomed to.
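The multiplicative (not additive) combination of the two speedups is easy to sanity-check. A quick sketch using the figures above (the 76-day job and the 2,000x hardware factor come from this thread, not directly from Bixby's paper):

```python
# Algorithmic and hardware speedups multiply, they don't add.
algo_speedup = 3300   # linear-programming algorithm improvements
hw_speedup = 2000     # processor/machine improvements
total_speedup = algo_speedup * hw_speedup
print(f"{total_speedup:,}x combined")            # 6,600,000x combined

# A job that took ~76 days in ~1990:
seconds_1990 = 76 * 24 * 60 * 60                 # 6,566,400 s
print(seconds_1990 / total_speedup, "s today")   # ~1 second with both
print(seconds_1990 / hw_speedup / 60, "min")     # ~55 min with 1990 algorithms
                                                 # on modern hardware
```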
0: https://www.math.uni-bielefeld.de/documenta/vol-ismp/25_bixb... [page 114]
Or broaden the set of problems that can practically be attacked using SAT solvers.
Deductively proving program-correctness is one such problem. It's currently possible but clunky; put another way, it's practical given a patient and skilled user. Further advances in solvers will presumably translate into 'lowering the bar' for automated correctness proving.
With the disclaimer that I'm not an expert, this quote seems to capture the state of things in Ada SPARK: "none of the provers available in SPARK (CVC4, Z3 and Alt-Ergo) are able to prove the entire project on their own" https://blog.adacore.com/using-spark-to-prove-255-bit-intege...
To be clear, this was a post, not a comment. So it wasn't just some rando spouting off. Nevertheless, I was never able to find any backing for the claim. But it's always fascinated me.
It reminds me of something from a Frank McSherry comment about scaling down (algorithmically) being a third option, in addition to up (bigger server) and out (more servers).
I looked at the code and it's as easy as changing the #define macro.
I also think the point made in the Quora answer is valid here: advances in SAT solvers let us attack bigger problems that were impossible to conquer some 20 years ago.