Also, Dantzig, Fulkerson and Johnson solved a 50-city TSP to optimality, with an exact method, by hand, in 1954. Admittedly, their practical running time was slower than what is proposed here :-).
This is not a criticism of the paper, though: to their credit, the authors are quite straightforward about this and aren't trying to hide it. Their point is to demonstrate that their machine-learning approach has potential.
These "best cases" are hand-tuned for the domain by algorithm experts - right? How big is the performance jump from generic z3 or whatever?
Edit: After AlphaZero, there was lots of chatter about using it to improve combinatorial problems. This has proven elusive.
I.e., AI would find "fast" routes in the sense of "a human driving these gets done fast, all things considered," but not necessarily "fast" in the sense of a pure combinatorics problem. That's actually good, because pure combinatorics often fails in the real world (e.g., longer routes without left turns can be faster in practice).
Did I miss something about how this more directly applies to combinatorics?
I don't really have a strong intuition for the mathematics. It's possible that nets are simply good with spatial boards but not good enough with arbitrary graphs.
Correct. It is arguably more than just "hand-tuned": the algorithm I mentioned, LKH, was designed entirely for TSP, and I'm not aware of it being applicable to much else.
> How big is the performance jump from generic z3 or whatever?
For TSP, it's an enormous jump: certainly more than three orders of magnitude.
Definitely makes you wonder if P = NP is even relevant in practice, when all the interesting instances can be solved to optimality, or very near it, extremely often.