The Measure of Intelligence: towards more human-like artificial systems (arxiv.org)
80 points by yamrzou 20 days ago | 12 comments



François Chollet’s core point: We can't measure an AI system's adaptability and flexibility by measuring a specific skill.

With unlimited data, models can simply memorize decisions and "buy" skill at a specific task. To make progress toward AGI, we need to quantify and measure skill-acquisition efficiency instead.
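As a rough schematic of the paper's formal measure (my paraphrase, not Chollet's exact formula):

    \text{intelligence} \approx \underset{\text{tasks in scope}}{\mathrm{avg}}\;\frac{\text{generalization difficulty} \times \text{skill attained}}{\text{priors} + \text{experience}}

That is, skill acquired per unit of priors and experience consumed, weighted by how hard each task is to generalize to.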

I made a summary of it here: https://twitter.com/EmilWallner/status/1193968380967043073

Announcement tweet: https://twitter.com/fchollet/status/1192121587467784192

It's a must-read for anyone who's interested in AGI.


Interesting: "To avoid local-generalization systems that artificially "buy" performance on a specific task, Chollet restricts priors to 'Core Knowledge' found in developmental science theory: such as elementary physics, arithmetic, geometry and a basic understanding of intentions."

I can see the plausibility of this, but it raises an objection: the hardest tasks for a computer tend to be those that seem simple for a human being, while tasks involving analysis and abstraction are often the easiest for computers (Moravec's "law" [1]). One could also put it this way: solving neat, well-posed problems is simple for machines; what's hard is solving the messy intersection of several problems and exigencies, and that is what generality requires, since most things turn out to be messy.

[1] https://en.wikipedia.org/wiki/Moravec%27s_paradox


That core point is like a variation on a lemma in a paper I recently published (Lemma 2 in [1]): there can be no single deterministic, interactive, reward-giving environment that can, by itself, serve as a good proxy for the general intelligence of deterministic agents.

The proof is fairly simple. Suppose E were such an environment. I claim that every intelligent agent is exactly as intelligent, as measured by performance in E, as some "blind" agent, where by "blind" I mean an agent that totally ignores its surroundings.

Let A be any deterministic agent. If we were to place A in the environment E, then A would take certain actions, call them a1, a2, a3, ..., based on A's interaction with E.

Now define a new agent B as follows. B totally ignores everything and instead just blindly takes actions a1, a2, a3, ...

By construction, B acts exactly the same as A within E; therefore, as measured by E-performance, A and B are equally intelligent. So, for any particular environment, every agent is just as intelligent as some "blind" agent. But that's clearly a very bad property for an alleged intelligence measure to possess!

[1] https://philpapers.org/archive/ALEIVU.pdf
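Here is a toy Python sketch of the construction (CountingEnv, AgentA, and BlindAgent are hypothetical names for illustration): within one fixed deterministic environment, a "blind" agent that replays a recorded action trace is behaviorally indistinguishable from the agent it was recorded from.

    # Toy deterministic environment: the observation is just a step counter.
    class CountingEnv:
        def __init__(self):
            self.t = 0
        def observe(self):
            return self.t
        def step(self, action):
            self.t += 1

    def run(agent, env, n_steps):
        # Interact for n_steps; return the sequence of actions taken.
        trace = []
        for _ in range(n_steps):
            a = agent.act(env.observe())
            trace.append(a)
            env.step(a)
        return trace

    # A deterministic agent that actually consults its observations.
    class AgentA:
        def act(self, obs):
            return obs % 3

    # Agent B: ignores observations entirely and replays a fixed trace.
    class BlindAgent:
        def __init__(self, trace):
            self._replay = iter(trace)
        def act(self, _obs):
            return next(self._replay)

    trace_a = run(AgentA(), CountingEnv(), 10)
    trace_b = run(BlindAgent(trace_a), CountingEnv(), 10)
    assert trace_a == trace_b  # same behavior in E, hence same E-performance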


I'm not sure I follow.

What if you change the initial conditions of the environment?

Then A would act differently, but B would still act the same because it is blind.

So now they are not taking the same actions. It seems impossible to have a blind agent that matches a non-blind agent in all situations.


The argument is for deterministic environments: there is no initial-condition dependence.

If initial conditions are allowed, then you can consider the following environment: the initial condition is an arbitrary piece of source code, and the environment proceeds to implement that source code. Clearly this "environment" is too all-encompassing to be considered a single environment.
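As a toy sketch of why (UniversalEnv and the inner Env class are hypothetical names): an environment whose initial condition is an arbitrary program, and which then simply runs it, encodes every deterministic environment at once.

    # "Environment" whose initial condition is arbitrary source code;
    # it then just implements whatever that code says.
    class UniversalEnv:
        def __init__(self, source_code):
            scope = {}
            exec(source_code, scope)      # the initial condition is a program
            self.inner = scope["Env"]()   # assume the program defines a class Env
        def observe(self):
            return self.inner.observe()
        def step(self, action):
            return self.inner.step(action)

    # One choice of initial condition picks out one concrete environment:
    env = UniversalEnv(
        "class Env:\n"
        "    def observe(self): return 0\n"
        "    def step(self, a): return None\n"
    )
    print(env.observe())  # 0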


Isn't that basically Searle's Chinese Room argument?


Not really, unless I'm missing something. They have thematic similarities, I suppose.


There's a lot of interesting research focused on this problem space, training algorithms toward a more flexible problem-solving ability. Here are a couple:

https://arxiv.org/pdf/1902.09725.pdf

https://openreview.net/pdf?id=Skc-Fo4Yg

I do think the DeepMind team is pretty aware of how well their algorithms adapt to new problems. That has been a theme since the Atari paper.


I'm not sure I understand the definition of a skill. Does classifying hot dog vs. not hot dog count as a unique skill? Or does object detection as a whole count as one skill?

Intelligence is hard to measure. It's personal and contextual. Even IQ tests for humans are incomplete and inaccurate.

I would be really impressed if an AI could provide a better theory than the multiverse to explain quantum mechanics. I'd consider that the moment of true AI.


The code is at https://github.com/fchollet/ARC.

There are 1000 tasks, all unique(!), and apparently all handwritten(!) one by one.
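For anyone who wants to poke at it, a short Python sketch of loading one task, assuming a local clone and the repo's layout of JSON files under data/training/, each with "train" and "test" lists of {"input": grid, "output": grid} pairs (grids are 2D lists of ints 0-9):

    import json
    from pathlib import Path

    # Grab one task file from a local clone of the repo.
    task_file = next(Path("ARC/data/training").glob("*.json"))
    task = json.loads(task_file.read_text())

    # Each demonstration pair maps an input grid to an output grid.
    for pair in task["train"]:
        print(len(pair["input"]), "x", len(pair["input"][0]), "->",
              len(pair["output"]), "x", len(pair["output"][0]))
    print(len(task["test"]), "test pair(s)")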


I am surprised he doesn't mention Bongard problems: http://www.foundalis.com/res/diss_research.html


It’s worth noting that the author is François Chollet, the creator of Keras.

This seems to be what he has been working on for the past two years.



