Of the algorithms used in this work, which would you say touch on foundational aspects of human intelligence?
Thinking about the variant of MCTS used in this work, for example, it's not clear to me that tree search, no matter how clever, touches much on human cognition, at least not significantly more than Deep Blue did.
On the other hand, the idea of bootstrapping a network with huge numbers of expert interactions before 'graduating' it to more complex training and architectural enhancement might turn out to be an important part of the Game Over algorithm, as you call it, even if it doesn't much resemble how human beings learn.
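To make concrete what I mean by bootstrapping from expert interactions: before any more sophisticated training, the network is first fitted to mimic an expert's (state, action) pairs. A toy sketch of that pretraining stage (entirely my own construction, not the method from the work being discussed; the `expert_policy` and the perceptron-style update are stand-ins):

```python
import random

# Hedged sketch: "bootstrap" a policy by imitating expert interactions
# (behaviour cloning) before graduating it to further training.
# All names here are hypothetical illustrations.

def expert_policy(state):
    # Stand-in expert: act (1) when the feature sum is positive.
    return 1 if state[0] + state[1] > 0 else 0

def clone_expert(demos, epochs=50, lr=0.1):
    """Fit a linear policy to (state, action) demonstrations
    with a simple perceptron-style update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for state, action in demos:
            pred = 1 if w[0] * state[0] + w[1] * state[1] + b > 0 else 0
            err = action - pred          # 0 when the clone agrees
            w[0] += lr * err * state[0]
            w[1] += lr * err * state[1]
            b += lr * err
    return lambda s: 1 if w[0] * s[0] + w[1] * s[1] + b > 0 else 0

rng = random.Random(0)
states = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200)]
demos = [(s, expert_policy(s)) for s in states]

policy = clone_expert(demos)
agreement = sum(policy(s) == a for s, a in demos) / len(demos)
```

The cloned `policy` would then be the starting point for the "graduation" phase, rather than training from scratch.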
It's quite a low-key affair (no GPU clusters and so on), but what I think is remarkable is that it described a fairly complex cognitive (neural) architecture that was able to mimic certain kinds of child-level cognition with TINY amounts of training data. Instead of relying on data volume, human supervisors guided the evolution of its cognitive strategy in a white-box fashion to encourage it to answer questions as children did.
In many respects (like training data volume, layer depth, and so on) it couldn't be further from the current deep learning trends, and yet it seemed much more along the lines of what I imagine an actual AGI would be doing, especially one we hope to control.
A key point is that the central executive (the core that routes data between slave systems) is itself a trainable neural network: it learns to generate the "mental actions" that control the flow of data between slave systems such as short-term and long-term memory, rather than relying on fixed rules. This allows the system to generalise.
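Here's a minimal sketch of what I mean by that control structure. It is my own toy illustration, not the actual architecture: the action names, the linear scorer standing in for the controller network, and the memory representations are all hypothetical. The point is only that routing is decided by a trainable component, not by hand-written rules.

```python
import random

# Hypothetical sketch of a central executive that emits discrete
# "mental actions" to route data between slave memory systems.

MENTAL_ACTIONS = ["store_stm", "consolidate", "recall", "respond"]

class CentralExecutive:
    """Toy stand-in for the trainable controller network: a linear
    scorer over a small state-feature vector, with adjustable weights."""
    def __init__(self, n_features, seed=0):
        rng = random.Random(seed)
        self.weights = {a: [rng.uniform(-1, 1) for _ in range(n_features)]
                        for a in MENTAL_ACTIONS}

    def act(self, features):
        scores = {a: sum(w * f for w, f in zip(ws, features))
                  for a, ws in self.weights.items()}
        return max(scores, key=scores.get)   # greedy choice for the sketch

class Agent:
    def __init__(self):
        self.stm = []   # short-term memory (slave system)
        self.ltm = []   # long-term memory (slave system)
        self.executive = CentralExecutive(n_features=3)

    def step(self, observation):
        # Features summarising internal state; the executive decides
        # how to route data from them, not from fixed if/then rules.
        features = [1.0, float(len(self.stm)), float(len(self.ltm))]
        action = self.executive.act(features)
        if action == "store_stm":
            self.stm.append(observation)
        elif action == "consolidate":        # move STM contents into LTM
            self.ltm.extend(self.stm)
            self.stm.clear()
        elif action == "recall" and self.ltm:  # bring an LTM item back
            self.stm.append(self.ltm[-1])
        # "respond" would produce external output; omitted here.
        return action

agent = Agent()
actions = [agent.step(obs) for obs in ["red", "blue", "green"]]
```

In the real system the controller's weights would be trained (that is what lets it generalise), whereas fixed routing rules would just be the `if` branches hard-coded to fire in a set order.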
(Some of the training itself doesn't seem to be that similar to training a human child, though. It's instead tailored to the system's architecture.)
I'm excited to see how much richer this and similar systems will become, as researchers improve the neural architecture and processing efficiency. Will we see truly human-like language and reasoning systems sooner than expected?