Hacker News
Geoffrey Hinton "godfather of AI" says that AI will surpass human intelligence (twitter.com/bbcnewsnight)
20 points by sizzle 23 days ago | 16 comments



Well, duh, this is basically the goal of Artificial Intelligence. With no timeframe, well, sure, that seems plausible.

But we're not there yet, and we're not close either. I do think that recent experiments and products can be useful, and may even serve as a building block for something more powerful, but this is not simply a scaling problem at this stage.

I think many of us have noticed the hype cooling down; a new AI winter is also plausible.

Managing expectations should be the norm in this field, given its history...


There's a timeframe. It's the first 5 words of the tweet.


I think we're far closer to AGI than most realize. It's possible that these systems can self-improve, and companies like OpenAI may already be doing exactly that. If so, the time to AGI will decrease at an exponential, compounding rate.


EDIT: cite source

I have yet to see any evidence of this. To the contrary, we have seen for years that advances in LLMs require orders of magnitude more parameters/power for each generation. Neural architecture search has been underwhelming. RLHF seems to have regressed models like GPT3.5+, rather than improving them. And recently, researchers concluded that multi-modal models require exponentially more data for each extra "mode" being added to the model.[0]

Even in my own experimentation, I've tried to get some AGI-like behavior, and it just isn't there. I have convinced GPT4, for instance, to generate XML source for an SVG, but it looks nothing like what I describe.
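
For concreteness, here's roughly the kind of experiment I mean - a minimal sketch using the OpenAI Python client; the model name and prompt are illustrative placeholders, not the exact ones I used:

    # ask a chat model for SVG markup and save whatever XML comes back
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Return only the XML for an SVG of a red bicycle "
                       "leaning against a green lamppost, 256x256 viewBox.",
        }],
    )
    with open("bicycle.svg", "w") as f:
        f.write(resp.choices[0].message.content)
    # the output usually parses as XML, but the drawing rarely matches the description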

I'd argue that these models don't generalize well at all, and I'd bet that, like with Moore's Law, advances in AI will require continual discovery of incremental new innovations and occasionally new architectures.

[0] https://arxiv.org/abs/2404.04125


Geoffrey Hinton's campaign to stay relevant is souring what was a nice legacy in the development of AI.


A clickbait headline, but watching the video and what he actually said, it all seems fairly reasonable.

If the experts don’t agree on risks, then clearly there’s a non-zero chance.

On some tasks, computing machines passed us by a while ago. I already ask an LLM several questions a day as it has a much “broader” intelligence than I do. Many times the code it suggests is exactly what I was about to write - sometimes better. The capabilities of generative AI today were in the realm of science fiction just a few years ago.

Artificial intelligence will continue to evolve faster than human intelligence. Sticking our heads in the sand and acting like we will always be “superior” seems foolhardy.


Already it knows more than I do on most subjects.

Fortunately 'intelligence' is a hand-wavey subject!


a calculator from half a century ago surpasses human intelligence too


[flagged]


They don't seem like political ravings to me. Mostly quite sensible concerns. He also seems as qualified as anyone given there are no formal qualifications in predicting the future of AI.


Who are the people who are better-qualified?


were abacus makers qualified to talk about quantum computers?

would it make sense to worry about quantum computers breaking RSA in 2000 BCE?


I don't see the analogy. Arguably Geoffrey Hinton is to modern AI as Richard Feynman is to quantum computers. So if Richard Feynman happened to once assemble an abacus, then, yes.

If you mean to say that human-level AI is 4,000 years away, and current AI is a mere abacus by comparison, then... well, rather than argue the point, I'll carry the meta-point: what have you accomplished to earn the qualifications to make such an assertion? I'm especially curious whether Andy99 would agree that 123yawaworht456 is better-qualified than Geoffrey Hinton to voice an opinion on this matter. That bar seems very high indeed.


>If you mean to say that human-level AI is 4,000 years away, and current AI is a mere abacus by comparison

Yes.

>what have you accomplished to earn the qualifications to make such an assertion?

what has he? the technology he helped develop, which is currently being marketed as "AI", is not AI. A - yes, I - no. anyone familiar with its inner workings and severe limitations is qualified to say that.

sci-fi AI remains as remote now as it was at the time Terminator 2 was screened in theaters, and the current technology is not even a stepping stone to that - just like inventing the abacus was not a stepping stone to inventing the transistor.


In my unqualified opinion, to claim with that much confidence that we're so far away from "sci-fi AI", you'd need to know not one but two things:

1) The inner workings of today's AI. Lots of people know this, including Hinton and yourself as well I'm sure.

2) The inner workings of the human mind, or human-level AI. How will "sci-fi" AI work on the inside?

If you don't know (2), then you don't know how far away (2) is from (1).

If your position is "I just know that what humans do is different, and can't be done by (1)" then... you might well be right, but you should not be confident: Skeptics have been repeatedly surprised at what AI can accomplish using seemingly simple algorithms, without needing some kind of magical, infinitely-recursive homunculus embedded in the code. We thought for a long time that chess was representative of human intelligence, until it was beaten by a computer. Then we thought that Go was the true embodiment of human intuition, and would never in a million years be mastered by a computer, until it was. Then translation. Then perception and facial recognition. Then protein folding (yes, actually!). Then art and poetry, and humour.

To some extent, Deep Blue's defeat of Kasparov was a mere parlor trick, and no more meaningful a milestone towards sci-fi AI than a magician's disappearing-rabbit trick is towards teleportation.

But to another, very meaningful extent, we can't be sure that the human mind isn't a stack of parlor tricks itself. Until we truly understand how our own intelligence works, we have no good measure of how far away from it we are. And at the point where we do have that understanding, we'd be able to code it up.

I agree that merely scaling LLMs will not give us human-level AI. But I'm not sure that the human brain doesn't contain neural structures forming close analogs to Kalman filters, A* search routines, convnets, gaussian splats, and yes, token predictors. I'm sure there are other tricks in there too, but I think smart people are looking for them very hard. I don't think it will take 4,000 years.

What's the simplest mental-labor task that you are confident that AI will not be able to do within the next ten years?

(I'll set aside manual-labor tasks just because that ties too heavily to a different kind of economic progress in mass-production of actuators).


>What's the simplest mental-labor task that you are confident that AI will not be able to do within the next ten years?

any task where hallucinations and non-determinism are unacceptable. so... a lot of them.

hallucinations are an inherent flaw of "AI", not a bug that can be fixed. "AI" is a glorified data compression algorithm - a very lossy one. improvements - more/better training data, higher parameter count, etc - can't fix that fundamental limitation.
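
to make the compression framing concrete, here's a toy sketch (my own illustration, standard-library python only): the better a model predicts the next symbol, the fewer bits per symbol it needs, so prediction and compression are two sides of the same coin.

    # entropy of a unigram "model" of some text = bits/char an ideal coder
    # would need when coding with that model (Shannon's bound)
    import math
    from collections import Counter

    text = "the cat sat on the mat. the cat sat on the hat."
    counts = Counter(text)
    total = sum(counts.values())

    bits_per_char = -sum((c / total) * math.log2(c / total) for c in counts.values())
    print(f"unigram model: ~{bits_per_char:.2f} bits/char vs 8 bits/char for raw ASCII")
    # a stronger predictor (e.g. an LLM) drives this number much lower on natural
    # text, which is the sense in which language modeling and compression are linked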

I see the "AI" (and its plausible future iterations) as yet another neat little thing that will enhance but not replace us. we can (and routinely do) create things that outperform our brains by orders and orders of magnitude in specific applications. calculators did not make mathematicians obsolete - all of our combined (peta/exa/yotta/whatever)flop computation capacity can't replace an elementary school math teacher, let alone someone like Terence Tao (and "AI" often fails at the most basic math despite having been fed possibly every math textbook in existence, which clearly demonstrates that it's "A" but not "I").

the point I'm trying to make is that the tech we have is "AI", but the doomsday alarmism is presented as if we have (or will soon have) actual AI. I've read so much nauseatingly overconfident "experts say" appeal-to-authority bullshit in the past decade, about a variety of topics, that I can clearly tell we're going through another phase of hype and mass hysteria.

plenty of technologies have stagnated and even regressed in the past. AI hype gives me strong space-era vibes - had there been an equivalent of HN back then, people would have speculated about fighting Ivans on Mars, the consequences of post-scarcity brought about by mining resources from other planets, the ethics of space colonization, etc. and now - almost a century later - it's still sci-fi bullshit.


I agree that today's AI hallucinates, and will keep doing so no matter how much it's scaled with parameter count and FLOPs.

But there's also a ton of effort and an army of PhDs being bent to the purpose of fixing this, and not via mere scaling but also by mixing with other techniques. If the money doesn't dry up then I think they'll have cracked the hallucination problem within ten years easily. Certainly it won't take 1000.



