
The Measure of Intelligence: towards more human-like artificial systems - yamrzou
https://arxiv.org/abs/1911.01547
======
emilwallner
François Chollet’s core point: We can't measure an AI system's adaptability
and flexibility by measuring a specific skill.

With unlimited data, models can simply memorize decisions rather than generalize. To advance toward AGI, we need to quantify and measure skill-acquisition efficiency.

I made a summary about it here:
[https://twitter.com/EmilWallner/status/1193968380967043073](https://twitter.com/EmilWallner/status/1193968380967043073)
Announcement tweet:
[https://twitter.com/fchollet/status/1192121587467784192](https://twitter.com/fchollet/status/1192121587467784192)

It's a must-read for anyone who's interested in AGI.

~~~
xamuel
That core point is like a variation on a lemma in a paper I recently published
(Lemma 2 in [1]): There can be no single deterministic interactive reward-
giving environment which can, by its lonesome, serve as a good proxy for
general intelligence of deterministic agents.

The proof is fairly simple. Suppose E were such an environment. I claim that
every intelligent agent is exactly as intelligent (according to intelligence
as measured by performance in E) as some "blind" agent, where by "blind" I
mean an agent which totally ignores its surroundings.

Let A be any deterministic agent. If we were to place A in the environment E,
then A would take certain actions, call them a1, a2, a3, ..., based on A's
interaction with E.

Now define a new agent B as follows. B totally ignores everything and instead
just blindly takes actions a1, a2, a3, ...

By construction, B acts exactly the same as A within E; therefore, as measured
by E-performance, A and B are equally intelligent. So, for any particular
environment, every agent is exactly as intelligent as some "blind" agent. But
that's clearly a very bad property for an alleged intelligence measure to
possess!

[1]
[https://philpapers.org/archive/ALEIVU.pdf](https://philpapers.org/archive/ALEIVU.pdf)
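
Here's a minimal sketch of the construction in toy Python. The Environment, SmartAgent, BlindAgent, and RecordingAgent names and the toy reward rule are mine, not from the paper; the point is only that a fixed deterministic environment can't distinguish A from its action-replaying twin B:

    # Toy deterministic environment: observation is the step index,
    # reward is 1 if the action equals the observation mod 2.
    class Environment:
        def __init__(self, horizon=5):
            self.horizon = horizon

        def run(self, agent):
            total_reward = 0
            for t in range(self.horizon):
                obs = t                      # deterministic observation
                action = agent.act(obs)
                total_reward += 1 if action == obs % 2 else 0
            return total_reward

    class SmartAgent:
        """Agent A: looks at the observation and acts on it."""
        def act(self, obs):
            return obs % 2

    class BlindAgent:
        """Agent B: ignores observations and replays a fixed action list."""
        def __init__(self, actions):
            self.actions = list(actions)
            self.t = 0

        def act(self, _obs):
            a = self.actions[self.t]
            self.t += 1
            return a

    class RecordingAgent:
        """Wraps an agent and records the actions it takes."""
        def __init__(self, agent):
            self.agent = agent
            self.trace = []

        def act(self, obs):
            a = self.agent.act(obs)
            self.trace.append(a)
            return a

    env = Environment()
    recorder = RecordingAgent(SmartAgent())
    score_a = env.run(recorder)                    # A's actions a1, a2, ... in E
    score_b = env.run(BlindAgent(recorder.trace))  # B blindly replays them
    assert score_a == score_b                      # E cannot tell A and B apart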

~~~
elcomet
I'm not sure I follow.

What if you change the initial conditions of the environment?

Then A would act differently, but B would still act the same because it is
blind.

So now they are not taking the same actions. It seems impossible to have a
blind agent that follows a non-blind agent in all situations.

~~~
xamuel
The argument is for deterministic environments--no initial condition
dependence.

If initial conditions are allowed, then you can consider the following
environment: the initial condition is an arbitrary piece of source code, and
the environment proceeds to implement that source code. Clearly this
"environment" is too all-encompassing to be considered a single environment.
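
A rough illustration of why it's too all-encompassing (my own toy construction, not from the paper): the "initial condition" is a program defining the dynamics, so fixing a different initial condition gives you a completely different environment, and the same agent scores arbitrarily differently:

    def make_environment(source_code):
        # Build an "environment" whose step function is whatever the given
        # source code defines as step(t, action) -> reward.
        namespace = {}
        exec(source_code, namespace)
        return namespace["step"]

    env_a = make_environment("def step(t, action): return 1 if action == t else 0")
    env_b = make_environment("def step(t, action): return 1 if action == -t else 0")

    policy = lambda t: t
    print(sum(env_a(t, policy(t)) for t in range(5)))  # 5: every step rewarded
    print(sum(env_b(t, policy(t)) for t in range(5)))  # 1: only t == 0 rewarded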

------
sanxiyn
The code is at
[https://github.com/fchollet/ARC](https://github.com/fchollet/ARC).

There are 1000 tasks, all unique(!), and apparently all handwritten(!) one by
one.
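
For anyone poking at the repo: each task seems to be a single JSON file with "train" and "test" lists of input/output grid pairs (grids being lists of rows of small integers). A quick sketch of reading one, assuming that layout and with an illustrative file name:

    import json

    # Path and file name are illustrative; tasks appear to live under
    # data/training/ and data/evaluation/ in the repo.
    with open("ARC/data/training/some_task.json") as f:
        task = json.load(f)

    for pair in task["train"]:
        rows, cols = len(pair["input"]), len(pair["input"][0])
        out_rows, out_cols = len(pair["output"]), len(pair["output"][0])
        print(f"{rows}x{cols} -> {out_rows}x{out_cols}")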

------
js8
I am surprised he doesn't mention Bongard problems:
[http://www.foundalis.com/res/diss_research.html](http://www.foundalis.com/res/diss_research.html)

------
yamrzou
It’s worth noting that the author is François Chollet, the creator of Keras.

This seems to be what he has been working on for the past two years.

