
Interview with Facebook's Head of AI - hongzi
https://www.wired.com/story/facebooks-ai-says-field-hit-wall/
======
ummonk
Very clickbaity headline. The full quote in the interview makes it obvious he
is talking about hitting a wall in terms of simply throwing more compute power
(at greater expense) at projects to get better results. He doesn’t imply
anything about algorithmic improvements in AI hitting a wall.

~~~
dang
Ok, we'll just make the title say that it's an interview, which it is.

------
unityByFreedom
Interesting, I thought Yann was still the head of AI at Facebook. Did they
just need someone who would tell them that achieving human-level intelligence
is coming soon?

~~~
xixixao
I think your question implies something that is clearly invalidated by the
interview:

> JP: As a lab, our objective is to match human intelligence. We're still
> very, very far from that, but we think it’s a great objective.

~~~
unityByFreedom
Yann is unlikely to say it's a great objective.

------
Veedrac
JP: “As a lab, our objective is to match human intelligence.”

Also JP: AGI is a “bogus concept”.

??? How can you possibly square these comments?

~~~
contextfree
He explains in the next paragraph? He doesn't consider human intelligence to
be general.

~~~
The_rationalist
Which is an extraordinary statement that he doesn't explain. Sad, as that would
have been more useful than this whole interview, where he only says obvious
things.

~~~
Traster
Here's my guess: There are things that humans can figure out and do, and there
are things that are possible but that humans haven't yet figured out how to do.
It seems strange to define AGI as "machines can do everything we can do, but
not more". For AGI, should a machine be able to figure out calculus (and
everything that came with it)? Because that was beyond human intelligence for
millennia.

You can answer the question "Is this machine as smart as a human?" by giving
it the same data as a human and asking it questions, but how would you ever
determine whether its intelligence is truly general?

~~~
ivalm
AGI is not about knowing how to use some human system (like calculus). It is
about having a general reasoning capability that is similar to or better than
a human's. One of the hard things about AGI is that it is not clear what this
entails (i.e. what exactly IS human general reasoning capability?).

