With unlimited data, models can simply memorize decisions. To make progress toward AGI we need to quantify and measure skill-acquisition efficiency.
I made a summary about it here: https://twitter.com/EmilWallner/status/1193968380967043073
Announcement tweet: https://twitter.com/fchollet/status/1192121587467784192
It's a must-read for anyone who's interested in AGI.
I can see the plausibility of this, but it raises an objection: the hardest tasks for a computer are those that seem simple for a human being, while tasks involving analysis and abstraction are often easiest for computers (Moravec's paradox). I think one could also put it this way: solving neat problems is simple for machines; solving the messy intersection of several problems and exigencies is what's hard (and what's required for generality, since most things turn out to be messy).
The proof is fairly simple. Suppose E were such an environment. I claim that every intelligent agent is exactly as intelligent (as measured by performance in E) as some "blind" agent, where by "blind" I mean an agent that totally ignores its surroundings.
Let A be any deterministic agent. If we were to place A in the environment E, then A would take certain actions, call them a1, a2, a3, ..., based on A's interaction with E.
Now define a new agent B as follows. B totally ignores everything and instead just blindly takes actions a1, a2, a3, ...
By construction, B acts exactly the same as A within E; therefore, as measured by E-performance, A and B are equally intelligent. So, for any particular environment, every agent is just as intelligent as some "blind" agent. But that's clearly a very bad property for an alleged intelligence measure to possess!
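For a deterministic toy case, the construction can be sketched like this (all names here are hypothetical; a fixed, deterministic environment E and a deterministic agent A are assumed):

```python
def run(agent, n_steps):
    """Run an agent in a fixed deterministic environment E
    (here: E echoes the agent's last action back as the observation)."""
    obs, trace = None, []
    for _ in range(n_steps):
        action = agent(obs)
        obs = action
        trace.append(action)
    return trace

# A: an agent that actually uses its observations.
def agent_a(obs):
    return 0 if obs is None else obs + 1

# Record A's actions a1, a2, a3, ... in E ...
trace_a = run(agent_a, 5)

# ... and define B to blindly replay them, ignoring every observation.
def make_blind(actions):
    it = iter(actions)
    return lambda _obs: next(it)

# Within E, B is indistinguishable from A, so any E-based score ties them.
assert run(make_blind(trace_a), 5) == trace_a
```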
What if you change the initial conditions of the environment?
Then A would act differently, but B would still act the same because it is blind.
So now they are not taking the same actions.
It seems impossible to have a blind agent that follows a non-blind agent in all situations.
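The exchange above can be made concrete with a toy environment that takes an initial condition (a hypothetical setup, not anything from the paper): the reactive agent's trace shifts with the initial observation, while the blind replay stays frozen.

```python
def run(initial_obs, agent, n_steps):
    # Toy environment: the initial condition is the first observation;
    # afterwards the last action is echoed back as the observation.
    obs, trace = initial_obs, []
    for _ in range(n_steps):
        action = agent(obs)
        obs = action
        trace.append(action)
    return trace

agent_a = lambda obs: obs + 1          # A: reacts to what it observes

def make_blind(actions):               # B: replays a fixed action list
    it = iter(actions)
    return lambda _obs: next(it)

trace_0 = run(0, agent_a, 3)           # A's trace under initial condition 0

# Same initial condition: B matches A exactly.
assert run(0, make_blind(trace_0), 3) == trace_0

# New initial condition: A adapts, but blind B repeats the old trace.
assert run(7, agent_a, 3) != trace_0
assert run(7, make_blind(trace_0), 3) == trace_0
```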
If initial conditions are allowed, then you can consider the following environment: the initial condition is an arbitrary piece of source code, and the environment proceeds to execute that source code. Clearly this "environment" is too all-encompassing to be considered a single environment.
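A minimal sketch of why such an environment is too broad (hypothetical, using Python source text as the "initial condition"): every initial condition defines a completely different task, so performing well in this one "environment" would mean performing well on every computable task at once.

```python
def universal_env(program_source):
    """Toy 'universal' environment: its initial condition is an arbitrary
    program, and the environment simply executes it and exposes the result."""
    namespace = {}
    exec(program_source, namespace)   # the environment "implements" the code
    return namespace.get("output")

# Each initial condition is effectively a different environment:
assert universal_env("output = sum(range(10))") == 45
assert universal_env("output = 'hello'[::-1]") == "olleh"
```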
I do think the DeepMind team is pretty aware of how well their algorithms adapt to new problems. That has been a theme since the Atari paper.
Intelligence is hard to measure. It's personal and contextual. Even IQ tests for humans are incomplete and inaccurate.
I would be really impressed if an AI could provide a better theory than the multiverse to explain quantum mechanics. I'd consider that the moment of true AI.
There are 1,000 tasks, all unique(!), and apparently all handwritten(!), one by one.
This seems to be what he has been working on for the past two years.