Google AI chief says there's a 50% chance we'll hit AGI in just 5 years (futurism.com)
27 points by geox on Nov 2, 2023 | hide | past | favorite | 81 comments


https://www.slate.com/articles/technology/future_tense/2017/...

> Slate has noticed a wily hedging mechanism among Silicon Valley soothsayers to circumvent these uncertainties—make predictions for “five to 10 years out.” It hits that sweet spot: just close enough that people can begin to taste it, but just far enough away that (almost) no one is going to call you out if it doesn’t become true. A review of press releases and tech articles stretching back to the 1990s finds that these Goldilocks forecasts are abundant. We’ve compiled a list of 81 predictions for innovations coming in “five to 10 years” to illustrate the cliché.



Could you elaborate on what you mean by this?



FSD has been "coming out next year" every year since 2016 (tm)


Basically this is Shane Legg (who is probably not "the Google AI chief", but rather chief AGI scientist at DeepMind), who said in 2011/2012 (https://www.vetta.org/2011/12/goodbye-2011-hello-2012/) that

> I give it a log-normal distribution with a mean of 2028 and a mode of 2025, under the assumption that nothing crazy happens like a nuclear war. I’d also like to add to this prediction that I expect to see an impressive proto-AGI within the next 8 years. By this I mean a system with basic vision, basic sound processing, basic movement control, and basic language abilities, with all of these things being essentially learnt rather than preprogrammed. It will also be able to solve a range of simple problems, including novel ones.

is saying the same thing now, but has pushed his timeline back a few years. I assume that if you had asked Shane 3 years ago, before GPT, he would have looked away and murmured something like "kurtosis".
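For the curious, the two quoted numbers are enough to pin down the distribution, under one loud assumption (that the log-normal is over years elapsed since the 2011 post; Legg doesn't actually say what the variable's origin is). A sketch:

```python
import math

# Hedged reconstruction of the quoted prediction (the "years after 2011"
# framing is an assumption): mean 2028 and mode 2025 become a log-normal
# over elapsed years with mean 17 and mode 14.
mean_y = 2028 - 2011  # 17
mode_y = 2025 - 2011  # 14

# For X ~ LogNormal(mu, sigma): mean = exp(mu + sigma^2 / 2) and
# mode = exp(mu - sigma^2). Dividing the two gives
# sigma^2 = (2/3) * ln(mean / mode).
sigma2 = (2.0 / 3.0) * math.log(mean_y / mode_y)
mu = math.log(mode_y) + sigma2

# The median of a log-normal is exp(mu), so the implied "50% by" year:
median_year = 2011 + math.exp(mu)
print(round(math.sqrt(sigma2), 2), round(median_year, 1))  # ~0.36, ~2026.9
```

So under that reading, the 2011 prediction already implied roughly even odds by the late 2020s, which is why the headline claim reads as the same forecast restated.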


Maybe progress in technology always happens suddenly; we can't predict what will happen.


This just in: Google AI chief says his job is really, really important, and should be compensated accordingly.


But also that, like, he might not have anything to show for it for a few years so get off his case already.


I'm currently doing ecommerce.

I predict that in 5 years time all commerce will be ecommerce. I think we should budget accordingly fellows.


I'm currently doing ecommerce, own retail, and wholesale.

I predict that in 5 years time all our commerce will still be ecommerce, own retail and wholesale. I think we should budget accordingly fellows.


I'm currently doing ecommerce, digital, physical and services.

I predict that in 5 years time, all commerce will continue to be ecommerce, digital, physical and services. I think we should budget accordingly, fellows.


I'm quite sure that the Winter[1] is coming again, especially if this scifi-level AGI hype continues. We've seen this many times, and I don't think there's any fundamental development that would bring about such a "qualitative" change in "machine intelligence".

The improvement in the technical capabilities of neural networks (and RL somewhat too) has been wild, and ANNs have jumped from silly toys to practical applications very quickly. But I think we are still deep in Moravec's paradox[2].

The thing is that we tend to assess intelligence based on how human individuals are thought to differ in intelligence. Anybody can walk/drive, so walking/driving must be easy. Few master chess/painting/writing, so they must be hard.

But Deep Blue, for example, showed clearly that chess actually isn't that hard; people just suck at it. And conversely, the failure of self-driving cars showed that driving is actually hard; humans are just very good at it.

I think it's the same thing with e.g. LLMs. People think that writing well is hard because humans who do it tend to have fancy degrees and high salaries. But writing/language is more likely closer to chess than it is to walking/driving. We just suck at it.

Before we have machines that can run through a forest and make a sandwich in a random kitchen, I'm not too worried about AGI overlords.

[1] https://en.m.wikipedia.org/wiki/AI_winter

[2] https://en.m.wikipedia.org/wiki/Moravec%27s_paradox


Oh, we will definitely have AGI in about 5 years. The problem is that the term will be as meaningless as "AI" is today. By assigning "intelligence" to what is essentially a pattern-matching script, we have devalued the word into the ground, so "AGI" had to be invented to denote "true intelligence, honestly this time". But what happens when OpenAI's marketing team gets bored with incrementing version numbers? Exactly: the rebranding to AGI will inevitably happen. :)


I'm not so sure this is going to happen, or at least catch on. No matter how much they hype the intelligence, the failures will be so spectacularly stupid that it's impossible to maintain the illusion. Current systems produce such failures all the time, but they're just ignored amid the marvel.

With AVs that was already sort of tried, but when the Superhuman Eternally Vigilant Driver slams full speed into a truck that was plainly visible from a kilometer away, it's hard to keep many people on the hype train.


Reasoning and persuasion trump sensorimotor and physical-perception skills by a large margin. If Moravec's paradox is why you aren't worried about "AGI", then you have no imagination.


On what criterion?

It's not (just) the sensorimotor skills. It's that Moravec's paradox strongly hints that current machine approaches have fundamental limitations in how they generalize from "one-trick ponies" to the adaptive and robust behavior that I think would be needed for an "AGI".

Of course all sorts of horrors can be and are accomplished by new technologies, but this doesn't make them intelligent in AGI sense.


The idea that the machine needs to be physically present to alter the physical world is laughable. All you need for that is communication with humans who can alter it for you.

It doesn't hint at anything like that at all. That's like saying that humans being unable to fly, or to intuitively sense electromagnetic fields to divine locations, says anything about how humans can generalize.

There are millions of humans who are disabled and can't drive or walk and will never be able to. Are they not still general intelligences? Are they automatically less intelligent?


That idea is of course laughable, but I didn't say anything of the sort.

It's not the physicality, but the messiness, ambiguity and harsh consequences of the natural environment. Current machines operate in environments that are deliberately designed to be highly structured with clear goals and where consequences of errors are highly mitigated.

Machine learning has made it possible to handle somewhat more messiness, but it still depends quite a bit on predefined structure.


>It's not the physicality, but the messiness, ambiguity and harsh consequences of the natural environment.

Well yes and the point is that a GPT-X Super Intelligence (or any kind really) doesn't need to deal with any of that directly.


As always, define "AGI". Define it well enough that we'll know whether we reach it or not.

If you can't define it, Google's AI chief can claim he was right, but that will be like, just his opinion, man.


Artificial and Generally Intelligent. That's what it's supposed to mean. What it used to mean. A bar we have passed.

Now there's all sorts of weird offshoots where passing the bar is tantamount to Super Intelligence.

"For some, it might mean an AI that can do everything a normal human can. For some, non-biological life axiomatically cannot become AGI. For some, it must be "conscious" and "sentient".

For some, it might require literal omniscience and omnipotence and accepting anything as AGI means, to them, that they are being told to worship it as a God. For some, it might mean something more like an AI that is more competent than the most competent human at literally every task.

For some, acknowledging it means that we must acknowledge it has person-like rights. For some it cannot be AGI if it lies. For some it cannot be AGI if it makes any mistake. For some it cannot be AGI until it has more power than humans. These are several definitions and implications that are partially or wholly mutually conflicting but I have seen different people say that AGI is each different one of those."

He should define his version, but it obviously doesn't matter what he defines it as, since humans are in the business of making up new goalposts on the fly.


> Artificial and Generally Intelligent. That's what it's supposed to mean. What it used to mean. A bar we have passed.

Since when? You can ask 10 people and get 10 different definitions


That was the idea when the term was coined. Obviously now it just means all sorts of things.


But what's "general"?

Does it exist physically? Can it play tennis? Can it smell a flower? Can it assemble an IKEA shelf? Can it make and serve me a tea?

Some people, even on HN, say ChatGPT already is AGI, for example...


General was meant to distinguish from the Narrow intelligences of the time, intelligences that could only complete one task. General was taken to mean "many different tasks". It was not supposed to mean "any task imaginable".

Smelling, playing tennis, and assembling an IKEA shelf are clearly not hard bars for general intelligence.

There are millions of disabled humans who can't do any of the things you mentioned. Are they not general intelligences?

Yes GPT-4 is AGI by the original definition.


> Yes GPT-4 is AGI by the original definition.

Where can I find this "original definition" ?

I can't find any consensus about ChatGPT being AGI; even OpenAI doesn't present it as such, nor can I find any serious paper arguing that it is AGI.

It is a language model, a very good one, but language is the party trick of intelligence, hence people get easily tricked and anthropomorphise ChatGPT, granting it attributes it doesn't actually display.


1. "I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”

— Connor Leahy, co-founder of EleutherAI, creator of GPT-J (November 2020) https://www.youtube.com/watch?v=HrV19SjKUss&t=175s

2. https://www.noemamag.com/artificial-general-intelligence-is-...

3. Sparks of Artificial General Intelligence: Early experiments with GPT-4. https://arxiv.org/abs/2303.12712 The especially funny thing about this paper is that the original title in the TeX source on that arXiv page is "First Contact With an AGI System":

\title{%\textbf{WORK IN PROGRESS - DO NOT SHARE} \\
%First Contact With an AGI System}
\textbf{Sparks of Artificial General Intelligence:} \\
\textbf{Early experiments with GPT-4}}

4. Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity. These guys just switched the order of two words so they wouldn't have to call it AGI lol. https://arxiv.org/abs/2303.12003

5. OpenAI: GPTs are GPTs (General Purpose Technologies): An Early Look at the Labor Market Impact Potential of Large Language Models - https://arxiv.org/abs/2303.10130

6. Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge - The old switcheroo again. https://jamanetwork.com/journals/jama/article-abstract/28064...

There isn't a testable definition of General Intelligence that GPT-4 would fail that a chunk of humans also wouldn't. If there are some humans that can't pass your bar of general intelligence then it is not a test of general intelligence.


> 1. co-founder of EleutherAI

Biased, like Musk when he sold the Model S as "fully autonomous in 2 years" back in 2012; still nowhere to be seen in 2023.

> 2.

Cool opinion, both of them work for .... Google

> 3. Sparks of Artificial General Intelligence

Biased since it was done by Microsoft which is heavily invested in openai

> 4. These guys just switched the order of two words so they wouldn't have to call it AGI lol.

Generative != General

> 5. 6.

Seem unrelated

I'll believe it when I see it. ChatGPT is to AGI what the wheel is to an ICBM: it might lead there at some point, but we'll need a lot of breakthroughs in a lot of disciplines before we can see the link.


>Biased, like Musk when he sold the model S as fully autonomous in 2 years

GPT-3 was not a product of Eleuther. Eleuther doesn't sell anything. Everything it releases is open source and free. They are a non-profit.

>Cool opinion, both of them work for .... Google

Yes, because working for one of the leading companies in the field is sure evidence that someone shouldn't be taken seriously. Good thinking. I should take the comment of a random person on the internet more seriously instead.

How is 5 unrelated? You have OpenAI literally telling you language models are general purpose.


> How is 5 unrelated? You have OpenAI literally telling you language models are general purpose

> they could have considerable economic, social, and policy implications.

We're almost a full year into the AGI revolution and literally nothing has happened; wouldn't that be a big cue?

And yes, people whose paychecks depend on AGI existing telling me AGI exists is a red flag. Especially when I can boot up ChatGPT and check for myself...


Ah yes, ChatGPT, a product well known for being up to date to the minute, and for not making things up (aka lying).


You can find a few people who say anything. You have a few quotes. That doesn't mean that the consensus is that GPT-3/4 is AGI. My general sense is that the consensus is that it is not an AGI, but I'm not in the field.


I didn't claim any consensus on anything. I don't care what the consensus is. People move goalposts. People are shortsighted. ENIAC was declared the first general-purpose computer years after the fact.

As it stands currently, nobody can provide a testable definition of general intelligence that GPT-4 fails and that a chunk of humans doesn't also fail.

Think about that. Anyone, including you, who says GPT isn't AGI is working off a definition that, if it were testable, not all humans would be able to pass. That is far more important to me than any "consensus".

The OP I was replying to made it seem like nobody out there is of the opinion that we've already achieved AGI, and I was replying to counter that.


Humans can tell the truth, and humans can lie. GPT can do neither.


A 'consciousness' tries to reproduce (humans, animals, bacteria, etc.) to preserve itself. The day AI decides to kill not a human but humanity itself will be the day AGI is achieved. Until then, 'AI' is vapourware; the Turing test and other such tests are meaningless.


If we achieve AGI, it will be such a significant event that everyone in tech and most not in tech will know about it, regardless of any definition.


Nobody is going to be debating definitions when it's here. If there's a question, then the answer is no.


Lol, that's not true at all. Many people have definitions of AGI that are partially or wholly mutually conflicting.


And yet, in https://news.ycombinator.com/item?id=38113190, you said that AGI is "a bar we have passed". That statement assumes that we have a clear enough definition that you can tell whether we have passed it. But here you say that many people have conflicting definitions, that is, that there's not a clear agreed-upon definition.

I don't think you can have it both ways.


Yes, people have conflicting definitions of the specific term.

But "artificial and generally intelligent" is a bar that's been passed. Take a look at all those definitions I brought up and tell me which ones have anything to do with merely being generally intelligent?


My point is that actual AGI will make such debates totally irrelevant.


No, it won't. What is "actual AGI", and why is it not simply a machine that is generally intelligent?


I think you might not be following my point. Actual AGI will be so enormously singular and distinct from anything we've ever encountered that only an idiot would look at it and say, "But hmmm, maybe this might not be it."

The very fact of continued debates over definitions to act as criteria is all the signal we need to know that we aren't there yet.


Why would it be "so enormously singular and distinct from anything we've ever encountered"?

What does being an artificial general intelligence have to do with any of that?

This is the point I'm making. You're hiding behind vague musings.

ENIAC being the first general-purpose computer was a declaration made years after the fact. Why do you think it's going to be any different here?


If he said there's a 50% chance, then he'll be right either way.


There might be a market for something like a CEOblocker for browsers.


_sent from my fully autonomous self driving Tesla model S in 2014_


Doubtful.

However, predictions from Gary Bernhardt's 2014 talk "The Birth and Death of JavaScript" are still coming true and can be relied on:

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Take note accordingly.


The human brain is estimated to hold 2.5 PB of storage [0]. Currently 1 TB costs around $12.50 [1], so 2.5 PB costs roughly $31k. If this cost halves roughly every two years, 2.5 PB of storage will be available for roughly $4k in about six years.

At $4k and below, that means we essentially have an affordable desktop computer that has the storage capacity of the human brain.

My guess is that there will be a spate of startups offering real value using AI when the price is around $10k, but that's just a guess, and we're already at around $30k for 2.5 PB of storage right now.

[0] https://www.cnsnevada.com/what-is-the-memory-capacity-of-a-h...

[1] https://jcmit.net/diskprice.htm
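As a sanity check on the arithmetic above, here is a minimal sketch, assuming the quoted figures (2.5 PB target, $12.50/TB today) and an idealized clean halving every two years:

```python
# Rough extrapolation of the comment's figures (assumptions: 2.5 PB
# ~ human brain, $12.50/TB today, price halves every 2 years).
def years_until_affordable(capacity_tb=2500, price_per_tb=12.50,
                           target_cost=4000, halving_years=2):
    cost = capacity_tb * price_per_tb  # ~$31,250 today
    years = 0
    while cost > target_cost:  # halve until we hit the target budget
        cost /= 2
        years += halving_years
    return years, cost

years, cost = years_until_affordable()
print(years, round(cost, 2))  # 6 years, ~$3906.25
```

So the "$4k" figure lands around six years out on these assumptions; a slower price decline stretches that considerably, since each missed halving adds two more years.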


Unless you can replicate how the brain works, you can have 10,000,000,000 PB of storage and it won't mean jack shit.

> A human can run at 40 km/h; for decades cars have been able to drive faster than 40 km/h; hence cars are smarter than humans


> Birds flap their wings to fly. Planes don't. Therefore planes can never fly.

There were some implicit assumptions in my above post:

* Human brains are physically realizable (there is no mind-body separation)

* Human level intelligence is the result of a massively parallel computer running essentially simple algorithms on large data

* Compute will follow the same Moore's law pattern

Storage is taken as a representative metric. I argue that while storage is not a sufficient condition, it is a necessary one.

I agree that computation must follow suit, but, to me at least, not only is compute getting faster and cheaper, the core "fundamental algorithms" for intelligence are essentially already known, and the limiting factors are the cost of storage and, to a lesser extent, compute.


I'm not sure AGI needs Moore's law to help it. Human beings are general intelligences, and our form of computing runs on extremely slow and unreliable hardware.

I feel that if we could crack the algorithm behind AGI, we would develop hardware to run it more efficiently, much like how crypto is now run on specialized hardware.


> I argue that while storage is an insufficient condition, it's a necessary one

It is also by far the easiest thing to attain, given that all the other prerequisites would require absolutely major groundbreaking discoveries.


Additionally, our planet is smarter than a human because the total amount of hard-disk storage on it is larger than 2.5 PB (even counting the hard drives that are disconnected from power).


We're 5+ years on from the transformer, and we're still using the transformer for the most cutting-edge LLMs. I don't see what difference another 5 years is going to make unless someone invents something new that surpasses the transformer. Given the amount of money and resources put into AI since 2017, and the lack of innovation since (in terms of fundamental architecture, not things like LoRA and RoPE), I'd say the chances are way, way lower than 50%.


Who says you need anything other than the transformer? We've clearly not squeezed all the capability out of it.


I don't think it's so clear. The transformer has been available for 6 years; if it were possible to train one to achieve AGI, then what has stopped anyone from doing so that won't still be the case in 5 years' time, given that there are potentially trillions on the table for anyone who does?


I don't understand what you're saying. People have been training up transformers with the goal of "achieving AGI". Transformers have been getting better as they've been trained up. Nobody has stopped doing this.


But they haven't achieved AGI, not even close. An LLM can't distinguish between truth and nonsense. It is essentially outputting nonsense all the time, massaged by training to approximate truth through the proxy of the likely next word.


What I’m saying is if it is possible to train transformers to achieve AGI, then why hasn’t it happened yet? What’s the limitation that will be overcome in the next 5 years?


Because training takes time (months), money, and hardware. It's not like this is some instantaneous process.

Nobody knows the "magic number" of size and data needed for "AGI", so people train increasingly large models.

Bigger models are in the process of being trained. They will continue to be until they no longer get better.


GPT-4 already severely limits what you can do with it. I wonder who will eventually have access to unrestricted AI? I'm guessing the people with the deepest pockets.


What scares me more is what happens when criminals (with deep pockets) and no morals get their hands on it.


What are some questions that you think the unrestricted AIs can answer and are worth paying for (and that restricted ones can’t)?


Running local AI models on your own hardware will be the ultimate case of freedom.


OK... How soon until we have a scientific definition of consciousness?

AGI is not simulation; it's supposed to be the real deal: machine awareness. I'm starting to feel like Charlie Brown when Lucy keeps yanking the football away. Marketers keep stealing all the terms we use to refer to the real deal in AI.

R. Daneel Olivaw, R2D2, the AIs from Troy Rising, the Culture Minds... That's where AGI goes. AGI is not really good ML models that have no insight.


Fortune tellers and stock gurus make lots of predictions, and randomly one will be right; they then promote how smart they were, and all the others are forgotten. Why would this be any different?


Define "intelligence", and we'll talk.


Makes sense. It will happen or it won't.


That's not how probabilities work, Walter.


Either we do, or we don't. 50/50.


Cool now define what AGI actually is.


Too difficult. We'll have to ask the AGI for a definition.


Turing basically defined it 70+ years ago.

Of course, our ability to distinguish between AI and humans improves as AI evolves. It becomes more sophisticated, and so do we. 10 years ago, GPT-4 probably would have passed the Turing test.


If it works like a human, talks like a human, and quacks like a human, then it probably is an AGI.


Are those the same five years that gave us all the self-driving cars?


I suppose by 'we', he means 'OpenAI'?


5 Years Out! (TM). Just in time for fully automated self-driving cars! /s


Yet sadly, nuclear fusion is still 20 years out, just like it was in 2003... and 2013.


Not to mention 1993, 1983 and 1973.



