
Graph Isomorphism Strikes Back - nature24
https://www.quantamagazine.org/20170105-graph-isomorphism-retraction/
======
Ar-Curunir
> “In Laci Babai, you have one of the most legendary and fearsome theoretical
> computer scientists there ever was, and in graph isomorphism, one of the
> most legendary and fearsome problems,” wrote Scott Aaronson, a theoretical
> computer scientist at the University of Texas, Austin, in an email. “A year
> ago, Laci threw an unbelievable knockout punch at [graph isomorphism], and
> now the problem itself seems to have gotten off the mat and thrown a
> counterpunch.”

Scott Aaronson is just such a fantastic writer. So impressive.

~~~
jdoliner
Agreed, this is a fantastic description of one of the true legends of CS. I
had the great fortune of having Laci as my algorithms professor. Many great
researchers are lousy lecturers; Laci is anything but. He's a captivating
lecturer. Listening to him, it feels like your brain is moving twice as fast
as normal. That's how clearly and quickly he explains concepts.

~~~
evincarofautumn
That’s something I’ve noticed also in Simon Peyton Jones, and sought to
emulate. You feel smarter (and more enthusiastic) just by listening to him.

~~~
long
I did an internship at Microsoft Research and my desk was down the hall from
his office.

He would walk down the hall to his office, saying his hellos to everyone with
a deep, sincere enthusiasm for life. It brightened the entire floor.

~~~
groovy2shoes
I was watching a talk of his the other day and thinking "he's old enough to be
my dad, but damn, he's cute..." ¬_¬

Throughout my life, I've found that the best lecturers aren't necessarily the
smartest or the most credentialed; they're the most visibly passionate and
enthusiastic — so much so that their enthusiasm has a tendency to "rub off" on
their audiences. Enthusiasm leads to engagement, which leads to attentiveness
and self-motivation for learning. Even a mediocre teacher can be a _great_
lecturer, and I've had a few in my lifetime (though unfortunately I can count
their number on one hand).

------
rawnlq
Direct link to the retraction:
[http://people.cs.uchicago.edu/~laci/update.html](http://people.cs.uchicago.edu/~laci/update.html)

Subexponential but not quasipolynomial
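
To make the distinction concrete (a numeric sketch of my own, not from the
update itself): quasipolynomial means a running time of the form
exp((log n)^c) for some constant c, while the weakened bound discussed
downthread is on the order of exp(exp(sqrt(log n))), which eventually
dominates every quasipolynomial. Comparing the logarithms of the two bounds
avoids floating-point overflow:

```python
import math

# log of a quasipolynomial bound exp((log n)^c):
def log_quasipoly(n, c=2.0):
    return math.log(n) ** c

# log of the weakened (still subexponential) bound exp(exp(sqrt(log n))):
def log_subexp(n):
    return math.exp(math.sqrt(math.log(n)))

# the subexponential bound eventually overtakes any quasipolynomial one
for n in (10**3, 10**30, 10**100):
    print(n, log_subexp(n) > log_quasipoly(n))
```

With c = 2, the crossover happens somewhere between n = 10^30 and n = 10^100;
for larger c it just moves further out, but it always happens.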

------
cvoss
Babai has just corrected the error and is now claiming quasipolynomial time
once again. [0]

[0]
[https://news.ycombinator.com/item?id=13360061](https://news.ycombinator.com/item?id=13360061)

------
adrianN
I wonder whether someone can improve the algorithm back to quasipolynomial.
Hopefully Babai's approach is not a dead end; the improvement from his
algorithm is so huge that there might still be hope of salvaging the ideas.

~~~
bitL
It's already the fastest algorithm available. Not sure why anyone would need
to salvage his ideas; anyone needing a state-of-the-art graph isomorphism
algorithm will implement it.

~~~
CJefferson
Nope, it's theoretically fascinating, but anyone who really wants to solve
graph isomorphism would use Brendan McKay's Nauty, or one of the more up-to-
date implementations (Saucy, Bliss, Hungry, Ferret).

Personal interest: I wrote Ferret, my PhD student wrote Hungry.

~~~
bonzini
Isn't VF2 faster than Nauty?

~~~
CJefferson
VF2 is faster than Nauty for EXTREMELY trivial problems (and many problems are
extremely trivial), but I've found many real-world problems where it falls off
a cliff.

VF2 is basically the most stupid search imaginable -- if the problem is easy
it stumbles into the solution very quickly, but if any work is required it
degrades into a horrible exponential search.

On the other hand, Nauty is very difficult to confuse -- because it reasons
about the whole graph (whereas VF2 only does local reasoning), your graph has
to have global properties which make it hard, and those are rare (they
basically never occur in real-world graphs, unless the graphs come from some
mathematical problem).
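
For a feel of what "local reasoning" means here, a deliberately naive
backtracking matcher in that spirit (my own toy sketch, nothing to do with the
real VF2 code) extends a partial vertex mapping one vertex at a time and only
checks edges into the already-mapped part, backtracking on conflicts:

```python
# Naive VF2-style local search: easy instances fall out quickly,
# adversarial ones blow up into exponential backtracking.

def isomorphic(adj_g, adj_h):
    """adj_g, adj_h: adjacency sets, {vertex: set_of_neighbours}."""
    vs_g = list(adj_g)

    def extend(mapping):
        if len(mapping) == len(vs_g):
            return True
        v = vs_g[len(mapping)]            # next unmapped vertex of G
        used = set(mapping.values())
        for w in adj_h:
            if w in used:
                continue
            # local check: v ~ u iff w ~ mapping[u], for mapped u only
            if all((u in adj_g[v]) == (mapping[u] in adj_h[w])
                   for u in mapping):
                mapping[v] = w
                if extend(mapping):
                    return True
                del mapping[v]            # backtrack
        return False

    return len(adj_g) == len(adj_h) and extend({})

# a triangle and a relabelled triangle are isomorphic;
# a triangle and a path on 3 vertices are not
tri  = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
tri2 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(isomorphic(tri, tri2), isomorphic(tri, path))  # → True False
```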

------
karolsk
I would love to know whether there are some practical (non-math/cs-user
facing) problems which can be solved with graph (iso,sub-iso,mono,epi,...)
morphism algorithms.

Sure, there's a lot you can do with it in the math and theoretical cs world,
probably in chemistry (derivation of chemical reactions?) and maybe
bioinformatics as well.

But despite the power of the approach - general comparison of structures
and/or combinatorial enumeration of all "instances" of a pattern in a data
graph - are there any success stories of companies whose products are killing
it because they use graph morphisms under the hood? (Or maybe even directly
expose pattern/data-graph relations to the user.)

~~~
CJefferson
The major use I know of is finding symmetries in other search problems.

Take your problem in Constraint Programming / Mixed Integer Programming / SAT
/ SMT, generate a parse tree, turn that parse tree into a graph, and find
symmetries of that graph.

If done in the right way, symmetries of the graph are symmetries of the
original problem, and knowledge of these symmetries can be used to avoid
redundant symmetric search. This is done all the time in combinatorial search
systems.
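
As a toy illustration of that encoding (my own sketch, not how any of the
production systems do it): take a single SAT clause (x OR y OR z), build a
coloured graph with one node per variable plus one node for the clause, and
enumerate its automorphisms by brute force. They are exactly the symmetries of
the clause, namely every permutation of x, y, z:

```python
from itertools import permutations

# One node per variable, one for the clause; colours stop a variable
# from ever being mapped onto the clause node.
nodes  = ['x', 'y', 'z', 'clause']
colour = {'x': 'var', 'y': 'var', 'z': 'var', 'clause': 'cl'}
edges  = {frozenset({'x', 'clause'}), frozenset({'y', 'clause'}),
          frozenset({'z', 'clause'})}

def automorphisms():
    """Brute force: every colour- and edge-preserving relabelling."""
    autos = []
    for perm in permutations(nodes):
        m = dict(zip(nodes, perm))
        ok_colour = all(colour[v] == colour[m[v]] for v in nodes)
        ok_edges = {frozenset({m[a], m[b]}) for a, b in edges} == edges
        if ok_colour and ok_edges:
            autos.append(m)
    return autos

print(len(automorphisms()))  # → 6, the symmetric group on {x, y, z}
```

Real tools (Nauty, Saucy, Bliss, ...) compute the same group without the
factorial blow-up, which is the whole point.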

One problem for many other real-world applications is that it is much more
common to want "almost symmetries" (for various definitions of almost
symmetry). It turns out this is a much harder problem, and algorithms for
"pure" graph isomorphism can't be easily modified to solve the "almost
symmetry" problem.

~~~
baq
not disagreeing with you at all, more like expanding your comment.

for some definitions of 'almost', especially the vague human ones, there are
very surprising results (well, not for people with CS education, but for the
other ones). e.g. 2-SAT vs 3-SAT, hamiltonian path (visit all nodes on a
graph) vs eulerian path (visit all edges), shortest path vs longest path, etc.
like i said, the 'almost' is debatable, but for the untrained eye the problems
are very similar.

------
thomasahle
It's interesting how nearly all algorithmic asymptotic running times can be
specified just in terms of exp, log and powering. exp(exp(sqrt(log
n)))=exp(exp(exp(log(log n)/2))) is a beautiful thing. But how come we so
rarely encounter other functions? Clearly this class doesn't cover everything
we might need. Where are the exp(exp(a^(-1)(n))) algorithms, where a^(-1) is
the inverse Ackermann function?
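
The identity holds because sqrt(x) = exp(log(x)/2). A quick numeric check of
the inner expressions (taking the outer exps off to avoid overflow):

```python
import math

# sqrt(log n) = exp(log(log n) / 2), hence
# exp(exp(sqrt(log n))) = exp(exp(exp(log(log n) / 2)))
for n in (100, 10**6, 10**12):
    lhs = math.sqrt(math.log(n))
    rhs = math.exp(math.log(math.log(n)) / 2)
    assert math.isclose(lhs, rhs)
print("identity checks out")
```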

~~~
jerf
There are some such algorithms. You don't encounter them as often because we
are biased towards encountering algorithms that can be implemented efficiently
or at least approximated efficiently (i.e., a great many "NP-complete"
problems admit approximation algorithms with varying degrees of usability).

Plus, in perhaps a bit deeper philosophical sense, we are ourselves biased
towards expressing problems that have relatively simple complexities as their
solution. While there are certainly some simple problems that have complicated
optimal solutions, I'd say that in general the vast bulk of problems for which
the optimal solution has Ackermann-like complexity are problems that we can't
really express or manipulate, either. We focus a lot on our ability to create
and express solutions, because problems in the real world have forced
themselves upon us since long before computer science was even a thing, but
our ability to express problems is limited too!

~~~
VodkaHaze
Moreover, most hard problems are simply proven hard because they incorporate a
hard problem.

E.g., many proofs that say "this problem is NP-Hard" just prove that "you
would have to solve xyz NP-Hard problem to solve this".

~~~
moyix
NP-Hardness proofs don't say "you would have to solve xyz NP-Hard problem to
solve this", they say "if you could solve the problem you're interested in,
you could use the solution to solve all other NP-complete problems". It's not
that they incorporate a hard problem -- rather, you show that your problem is
hard enough that it can be used to solve other hard problems.
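
The direction matters: to show a problem X is NP-hard, you reduce a known
NP-hard problem *to* X. A standard toy example (my own illustration, not from
the thread): INDEPENDENT-SET reduces to CLIQUE by complementing the edge set,
so anyone with a CLIQUE solver can solve INDEPENDENT-SET too. Brute-force
solvers make the reduction easy to check on a small graph:

```python
from itertools import combinations

def has_clique(vertices, edges, k):
    # brute force: some k-subset with all pairs connected
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(vertices, k))

def has_independent_set(vertices, edges, k):
    # the reduction: complement the edge set, then ask the CLIQUE solver
    comp = {frozenset(p) for p in combinations(vertices, 2)} - edges
    return has_clique(vertices, comp, k)

# path 0-1-2-3: {0, 2} is an independent set of size 2, but none of size 3
V = [0, 1, 2, 3]
E = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3)]}
print(has_independent_set(V, E, 2), has_independent_set(V, E, 3))
# → True False
```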

