Interview with Facebook's Head of AI (wired.com)
69 points by hongzi 10 months ago | 16 comments



Very clickbaity headline. The full quote in the interview makes it obvious he is talking about hitting a wall in terms of simply throwing more compute power (at greater expense) at projects to get better results. He doesn’t imply anything about algorithmic improvements in AI hitting a wall.


Ok, we'll just make the title say that it's an interview, which it is.


And things like Cerebras should change the compute landscape quite a bit.


Interesting, I thought Yann was still the head of AI at Facebook. Did they just need someone who would tell them that achieving human-level intelligence is coming soon?


I think your question implies something that is clearly invalidated by the interview:

> JP: As a lab, our objective is to match human intelligence. We're still very, very far from that, but we think it’s a great objective.


Yann is unlikely to say it's a great objective.


JP: “As a lab, our objective is to match human intelligence.”

Also JP: AGI is a “bogus concept”.

??? How can you possibly square these comments?


He explains in the next paragraph? He doesn't consider human intelligence to be general.


> Artificial general intelligence (AGI) is the intelligence of a machine that can understand or learn any intellectual task that a human being can.

That's what the term means.

Anyway, this idea that human intelligence is ‘not general’ is incredibly short-sighted coming from someone in such a post.

I'm mashing a bunch of pieces of carefully-refined material to convey a bunch of squiggles, to convince people I've never met of a point about a person I've never met. This sort of unstructured problem-solving is exactly what generality is.


Which is an extraordinary statement that he doesn't explain. A shame, as it would have been more useful than this whole interview, where he only says obvious things.


This idea (that human intelligence is not general in a technical sense) is discussed and explained in Lex Fridman's discussion with Yann LeCun: https://lexfridman.com/yann-lecun/


This is an unusually bad argument for him.

Human intelligence being general is not the same as human intelligence being built of unspecialized pieces, or the idea that we should be able to solve all problems equally well, regardless of their difficulty or of how abstractly we have to approach them.

Consider the comment about shuffling all our optic nerve fibers.

1. This is a HARDER problem, not an equivalent one, despite the apparent isomorphism. The minimum solution for the unshuffled problem is shorter than for the shuffled one, since any solver for the latter must also encode the permutation (see the sketch after this list). One would thus expect it to take more time to learn, and more computation to process in real time, independent of whether the human brain is specialized.

2. Our brain does handle cases where the optic nerve has been shuffled without introducing entropy, as examples like the inverted-vision experiments show.

3. It's widely understood that at a low level human brains are more weakly general than at higher levels. If you presented these two tasks to a human on a computer, such that they can consider them holistically, the shuffled task would only be harder to the degree that the problem is fundamentally harder—humans would not struggle much to unshuffle a camera, given a little extra time.

4. Humans have learned to echolocate. They've learned to use sensory-substitution devices that project images onto their tongues. LeCun carefully says “to the same level of quality” because, you know what, humans are so general that he can't even rule out the brain partially solving this shuffled-neurons problem too.
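
To make point 1 concrete, here's a minimal sketch in plain Python (the fiber count is a rough anatomical figure, and the toy sizes are my own assumptions, not anything from the interview). It shows the sense in which the shuffled task is strictly harder: any solver must additionally encode the inverse permutation, which is pure extra description length on top of the unshuffled task.

    import math, random

    # Rough order of optic-nerve fibers per eye (assumed ~1e6).
    N_FIBERS = 1_000_000

    # Any solver for the shuffled task must, implicitly or explicitly,
    # encode the inverse permutation: about log2(N!) extra bits of
    # description length on top of the unshuffled task.
    extra_bits = math.lgamma(N_FIBERS + 1) / math.log(2)
    print(f"~{extra_bits / 8 / 1e6:.1f} MB of extra description length")

    # Toy demo: the shuffled task is just the original task composed
    # with an unknown fixed permutation pi; once pi is known, the two
    # tasks coincide exactly.
    random.seed(0)
    signal = list(range(8))                  # stand-in "image"
    pi = list(range(8)); random.shuffle(pi)  # the nerve shuffle
    shuffled = [signal[pi[i]] for i in range(8)]
    inverse = [0] * 8
    for i, p in enumerate(pi):
        inverse[p] = i
    assert [shuffled[inverse[j]] for j in range(8)] == signal

The exact number (~2.3 MB here) doesn't matter; what matters is that it's strictly positive, so the shuffled problem costs more to learn even for a perfectly general learner.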


Here's my guess: there are things that humans can figure out and do, and there are things that are possible but that humans haven't yet figured out how to do. It seems strange to define AGI as "machines can do everything we can do, but not more". For AGI, should a machine be able to figure out calculus (and everything that came with it)? Because that was beyond human intelligence for millennia.

You can answer the question "is this machine as smart as a human?" by giving it the same data as a human and asking it questions, but how would you ever determine whether its intelligence is truly general?


AGI is not about knowing how to use some human system (like calculus). It is about having a general reasoning capability similar to or better than a human's. One of the hard things about AGI is that it's not clear what this entails (i.e., what exactly IS human general reasoning capability?).


A strawman test would be: taking all the knowledge that humans have generated in the world today, can the AI synthesize a novel result and explain how it came to that result, why that result is true, etc.? Expanding upon that, can the AGI identify a high-level gap in the available knowledge and communicate what data would be needed to fill that gap (e.g., provide an experiment design that would allow it to answer a question)?


Perhaps he's paid to say the former but believes the latter?



