I don't think this is THAT world changing. Generalist / multitask learning has already been demonstrated in many models across orgs like DeepMind and others. This graph NN model is performing algorithms that typically rely on graph theory. It's unsurprising that there's little friction and high success in combining embedding spaces for various graph algorithms when those algorithms are represented as graphs. If this model were achieving similar success on other domains, it would seem more significant.
The interesting part would be if they could put this somewhere on the path of a large language model, so that it could learn to apply logic to its transformations instead of just symbolically manipulating things. Then maybe we could get a language model that can do math.
Ha, well that _would_ be interesting. This work is interesting on its own, BUT yeah, it doesn't yet do what you mentioned and, to me, its conclusions are not that surprising.
Or a little recurrent ALU side-chain glued on. It would at the very least put a stop to the "It can't even multiply two four-digit numbers correctly" squad, which, I admit, I can't do in my head either.
Large Language Models are bad at a lot of the same things humans are bad at. For example, humans can't add or multiply large numbers in their head. They need a piece of paper as auxiliary memory. Or a calculator. If we need to find the shortest path in a graph, we write a short program to do it (something like the sketch below).
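Just to make the "short program" point concrete, here's a minimal Dijkstra sketch in Python; the graph, node names, and weights are made up purely for illustration:

```python
import heapq

def shortest_path_lengths(graph, source):
    """Dijkstra over an adjacency list: graph maps node -> [(neighbor, weight), ...]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Tiny example graph: distances from node "a".
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(shortest_path_lengths(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```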
Likewise a Large Language Model can easily write the source code to do addition for arbitrarily large numbers (something like the digit-by-digit sketch below), or solve graph-related problems. (See AlphaCode.) It's not clear that they need to be "Generalist Algorithmic Learners" as this paper suggests. Time will tell.
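For what it's worth, here's a hedged sketch of the kind of code meant here: grade-school carrying over decimal strings. (Python's built-in ints are already arbitrary precision, so this is purely illustrative.)

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings, digit by digit."""
    result = []
    carry = 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0
        db = int(b[j]) if j >= 0 else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(str(digit))
        i -= 1
        j -= 1
    return "".join(reversed(result))

print(add_decimal_strings("98765432109876543210", "12345678901234567890"))
# 111111111011111111100
```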