> Slate has noticed a wily hedging mechanism among Silicon Valley soothsayers to circumvent these uncertainties—make predictions for “five to 10 years out.” It hits that sweet spot: just close enough that people can begin to taste it, but just far enough away that (almost) no one is going to call you out if it doesn’t become true. A review of press releases and tech articles stretching back to the 1990s finds that these Goldilocks forecasts are abundant. We’ve compiled a list of 81 predictions for innovations coming in “five to 10 years” to illustrate the cliché.
> I give it a log-normal distribution with a mean of 2028 and a mode of 2025, under the assumption that nothing crazy happens like a nuclear war. I’d also like to add to this prediction that I expect to see an impressive proto-AGI within the next 8 years. By this I mean a system with basic vision, basic sound processing, basic movement control, and basic language abilities, with all of these things being essentially learnt rather than preprogrammed. It will also be able to solve a range of simple problems, including novel ones.
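For concreteness, a mean/mode pair fully pins down a log-normal: mean = exp(mu + sigma^2/2) and mode = exp(mu - sigma^2). A minimal sketch of the algebra, assuming (my assumption, not the quote's) that the forecast was issued around 2011, so the mean and mode sit 17 and 14 years out:

    import math

    # Hypothetical reading of the forecast: a log-normal over "years
    # until AGI", issued around 2011 (an assumption), so the mean is
    # 2028 - 2011 = 17 and the mode is 2025 - 2011 = 14 years out.
    mean_years, mode_years = 2028 - 2011, 2025 - 2011

    # For X ~ LogNormal(mu, sigma):
    #   mean = exp(mu + sigma**2 / 2)
    #   mode = exp(mu - sigma**2)
    # Subtracting the log forms gives ln(mean/mode) = (3/2) * sigma**2.
    sigma2 = (2.0 / 3.0) * math.log(mean_years / mode_years)
    mu = math.log(mode_years) + sigma2

    # The median of a log-normal is exp(mu).
    median_year = 2011 + math.exp(mu)
    print(f"sigma = {math.sqrt(sigma2):.3f}, median year ~ {median_year:.0f}")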
He's saying the same thing but has pushed his timeline back a few years. I assume that if you had asked Shane 3 years ago, before GPT, he would have looked away and murmured something like "kurtosis".
I'm currently doing ecommerce, digital, physical and services.
I predict that in 5 years' time, all commerce will continue to be ecommerce, digital, physical and services. I think we should budget accordingly, fellows.
I'm quite sure that the Winter[1] is coming again, especially if this sci-fi-level AGI hype continues. We've seen this many times, and I don't think there's any fundamental development that would bring about such a "qualitative" change in "machine intelligence".
The improvement in the technical capabilities of neural networks (and RL somewhat too) has been wild, and ANNs have jumped from silly toys to practical applications very quickly. But I think we are still deep in Moravec's paradox[2].
The thing is that we tend to assess intelligence based on how human individuals are thought to differ in intelligence. Anybody can walk/drive, so walking/driving must be easy. Few master chess/painting/writing, so they must be hard.
But e.g. Deep Blue showed clearly that chess actually isn't that hard; people just suck at it. And conversely, the failure of self-driving cars showed that driving is actually hard, but humans are just very good at it.
I think it's the same thing with e.g. LLMs. People think that writing well is hard because humans who do it tend to have fancy degrees and high salaries. But writing/language is more likely closer to chess than it is to walking/driving. We just suck at it.
Before we have machines that can run in a forest and make a sandwich in a random kitchen, I'm not too worried about AGI overlords.
Oh, we will definitely have AGI in about 5 years. The problem is that the term will be as meaningless as the term AI is today. After assigning "intelligence" to what is essentially a pattern-matching script, we have devalued the word into the ground, so "AGI" had to be invented to denote "true intelligence, this time honestly". But what will happen when the OpenAI marketing team becomes bored with incrementing version numbers? Exactly: the rebranding into AGI will inevitably happen. :)
I'm not so sure this is gonna happen, or at least catch on. No matter how much they try to hype the intelligence, the failures will be so spectacularly stupid that it's impossible to maintain the illusion. Current systems produce such failures all the time, but they're just ignored amid the marvel.
With AVs that was already sort of tried, but when the Superhuman Eternally Vigilant Driver slams full speed into a truck that was plainly visible from a kilometer away, it's hard to keep many people on the hype train.
Reasoning and persuasion trump sensorimotor and physical perception skills by a large margin. If Moravec's paradox is why you aren't worried about "AGI", then you have no imagination.
It's not (just) the sensorimotor skills. It's that Moravec's paradox hints strongly at current machine approaches having fundamental limitations in how they generalize from "one-trick ponies" to the adaptive and robust behavior that I think would be needed for an "AGI".
Of course all sorts of horrors can be and are accomplished with new technologies, but this doesn't make them intelligent in the AGI sense.
The idea that the machine needs to be physically present to alter the physical world is laughable. All you need for that is communication with humans who can alter it for you.
It doesn't hint at anything like that at all. That's like saying that humans being unable to fly, or to intuitively sense electromagnetic fields to divine locations, says anything about how humans can generalize.
There are millions of humans who are disabled and can't drive or walk and never will be able to. Are they not still general intelligences? Are they automatically less intelligent?
That idea is of course laughable, but I didn't say anything of the sort.
It's not the physicality, but the messiness, ambiguity and harsh consequences of the natural environment. Current machines operate in environments that are deliberately designed to be highly structured with clear goals and where consequences of errors are highly mitigated.
Machine learning has made some more messiness possible, but it still depends quite a bit on predefined structure.
Artificial and Generally Intelligent. That's what it's supposed to mean. What it used to mean. A bar we have passed.
Now there's all sorts of weird offshoots where passing the bar is tantamount to Super Intelligence.
"For some, it might mean any that is needs to do everything a normal human can. For some, non-biological life axiomatically cannot become AGI. For some, it must be "conscious" and "sentient".
For some, it might require literal omniscience and omnipotence and accepting anything as AGI means, to them, that they are being told to worship it as a God. For some, it might mean something more like an AI that is more competent than the most competent human at literally every task.
For some, acknowledging it means that we must acknowledge it has person-like rights. For some it cannot be AGI if it lies. For some it cannot be AGI if it makes any mistake. For some it cannot be AGI until it has more power than humans. These are several definitions and implications that are partially or wholly mutually conflicting but I have seen different people say that AGI is each different one of those."
He should define his version, but it obviously doesn't matter what he defines it as, since humans are in the business of making up new goalposts on the fly.
"General" was to distinguish from the Narrow intelligences of the time, intelligences that could only complete one task. General was taken to mean "many different tasks". It was not supposed to mean "any task imaginable".
Smelling, playing tennis, and assembling an Ikea shelf are clearly not hard bars of general intelligence.
There are millions of disabled humans who can't do any of the things you mentioned. Are they not general intelligences?
I can't find any consensus about ChatGPT being AGI; even OpenAI doesn't present it as such, nor can I find any serious paper about it being AGI either.
It is a language model, a very good one, but language is the party trick of intelligence, hence people get easily tricked and anthropomorphise ChatGPT, giving it attributes it doesn't actually display.
1. "I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”
3. Sparks of Artificial General Intelligence: Early experiments with GPT-4. https://arxiv.org/abs/2303.12712 The especially funny thing about this paper is that the original title in the tex source on that arxiv page is "First Contact With an AGI System":
\title{%\textbf{WORK IN PROGRESS - DO NOT SHARE} \\
%First Contact With an AGI System}
\textbf{Sparks of Artificial General Intelligence:} \\
\textbf{Early experiments with GPT-4}}
4. Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity. These guys just switched the order of two words so they wouldn't have to call it AGI lol. https://arxiv.org/abs/2303.12003
5. OpenAI. GPTs are GPTs (General Purpose Technologies): An Early Look at the Labor Market Impact Potential of Large Language Models - https://arxiv.org/abs/2303.10130
There isn't a testable definition of General Intelligence that GPT-4 would fail but that a chunk of humans wouldn't also fail. If there are some humans who can't pass your bar of general intelligence, then it is not a test of general intelligence.
Biased, like Musk when he sold the Model S as fully autonomous in 2 years, back in 2012; still nowhere to be seen in 2023.
> 2.
Cool opinion, both of them work for .... Google
> 3. Sparks of Artificial General Intelligence
Biased since it was done by Microsoft which is heavily invested in openai
> 4. These guys just switched the order of two words so they wouldn't have to call it AGI lol.
Generative != General
> 5. 6.
Seem unrelated
I'll believe it when I see it. ChatGPT is to AGI what the wheel is to an ICBM: it might lead there at some point, but we'll need a lot of breakthroughs in a lot of disciplines before we can see the link.
>Biased, like Musk when he sold the Model S as fully autonomous in 2 years
GPT-3 was not a product of Eleuther. Eleuther doesn't sell anything. Everything it releases is open source and free. They are a non-profit.
>Cool opinion, both of them work for .... Google
Yes, because working for one of the leading companies in the field is sure evidence to not take them seriously. Good thinking. I should take the comment of a random person on the internet more seriously instead.
How is 5 unrelated? You have OpenAI literally telling you language models are general purpose.
> How is 5 unrelated? You have OpenAI literally telling you language models are general purpose
> they could have considerable economic, social, and policy implications.
We're almost a full year into the AGI revolution and literally nothing has happened; wouldn't that be a big cue?
And yes, people whose paychecks depend on AGI existing telling me AGI exists is a red flag. Especially when I can boot up ChatGPT and check for myself...
You can find a few people who will say anything. You have a few quotes. That doesn't mean that the consensus is that GPT-3/4 is AGI. My general sense is that the consensus is that it is not AGI, but I'm not in the field.
I didn't claim any consensus on anything. I don't care what the consensus is. People move goalposts. People are shortsighted. ENIAC was declared the first general-purpose computer years after the fact.
As it stands currently, nobody can provide a testable definition of general intelligence that GPT-4 fails but that a chunk of humans doesn't also fail.
Think about that. Anyone, including you, who says GPT isn't AGI is working off a definition that, if testable, not all humans would be able to pass. That is far more important to me than any "consensus".
The OP I was replying to made it seem like nobody out there was of the opinion that we've already achieved AGI, and I was replying to counter that.
A 'consciousness' tries to reproduce (humans, animals, bacteria, etc.) to preserve itself. The day AI decides to kill not a human but humanity itself will be the day AGI is achieved. Until then 'AI' is vapourware, and the Turing test and other such tests are meaningless.
And yet, in https://news.ycombinator.com/item?id=38113190, you said that AGI is "a bar we have passed". That statement assumes that we have a clear enough definition that you can tell whether we have passed it. But here you say that many people have conflicting definitions, that is, that there's not a clear agreed-upon definition.
Yes, people have conflicting definitions of the specific term.
But Artificial and Generally Intelligent is a bar that's been passed. Take a look at all those definitions I brought up and tell me: which ones have anything to do with only being generally intelligent?
I think you might not be following my point. Actual AGI will be so enormously singular and distinct from anything we've ever encountered that only an idiot would look at it and say, "But hmmm, maybe this might not be it."
The very fact of continued debates over definitions to act as criteria is all the signal we need to know that we aren't there yet.
The human brain is estimated at 2.5 PB of storage [0]. Currently 1 TB costs around $12.50 [1]. If this cost reduces by half roughly every two years, 2.5 PB of storage will be available for roughly $4k in about six years.
At $4k and below, that means we essentially have an affordable desktop computer with the storage capacity of the human brain.
My guess is that there will be a spate of startups that offer real value using AI when the price is around $10k, but that's just a guess, and we're already at around $30k for 2.5 PB of storage right now.
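A quick sanity check of that arithmetic, taking the quoted $12.50/TB price and the two-year halving as given (both figures are the comment's assumptions, not verified):

    # 2.5 PB at $12.50/TB comes to ~$31,250 today ("around $30k");
    # halving the cost every two years drops it below $4k in ~6 years.
    price_per_tb = 12.50
    capacity_tb = 2.5 * 1000  # 2.5 PB expressed in TB (decimal units)

    cost = price_per_tb * capacity_tb
    years = 0
    while cost > 4000:
        cost /= 2
        years += 2
    print(f"~${cost:,.0f} after {years} years")  # -> ~$3,906 after 6 years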
> Birds flap their wings to fly. Planes don't. Therefore planes can never fly.
There were some implicit assumptions in my above post:
* Human brains are physically realizable (there is no mind-body separation)
* Human level intelligence is the result of a massively parallel computer running essentially simple algorithms on large data
* Compute will follow the same Moore's law pattern
Storage is taken as a representative metric. I argue that while storage is an insufficient condition, it's a necessary one.
I agree that computation must follow suit but, to me at least, not only is the compute getting faster and cheaper, the core "fundamental algorithms" for intelligence are essentially already known, and the limiting factor is the cost of storage and, to a lesser extent, compute.
I’m not sure AGI needs Moore’s law to help it. Human beings are general intelligences, and our form of computing runs on extremely slow and unreliable hardware.
I feel that if we could crack the algorithm behind AGI, we would develop hardware to run it more efficiently, much like how crypto is now run on specialized hardware.
Additionally, by that logic our planet is smarter than a human because the total amount of hard disk storage on it is larger than 2 PB (even counting those hard drives that are disconnected from power).
We’re 5+ years on from the transformer and we’re still using the transformer for the most cutting-edge LLMs. I don’t see what difference another 5 years is going to make unless someone invents something new that can surpass the transformer, and given the amount of money and resources that have been put into AI since 2017 and the lack of innovation since (in terms of fundamental architecture, not things like LoRA and RoPE), I’d say the chances are way, way lower than 50%.
I don’t think it’s so clear. The transformer has been available for 6 years; if it were possible to train one to achieve AGI, then what’s stopped anyone from doing this that won’t still be the case in 5 years’ time, given that there’s potentially trillions on the table for anyone who does?
I don't understand what you're saying. People have been training up transformers with the goal of "achieving AGI". Transformers have been getting better as they've been trained up. Nobody has stopped doing this.
But they haven't achieved AGI, not even close. It can't distinguish between truth and nonsense. An LLM is essentially outputting nonsense all the time, that has been massaged by training to approximate truth through the proxy of likely-next-word.
What I’m saying is if it is possible to train transformers to achieve AGI, then why hasn’t it happened yet? What’s the limitation that will be overcome in the next 5 years?
ChatGPT-4 already severely limits what you can do with it. I wonder who will eventually have access to unrestricted AI? I'm guessing the people with the deepest pockets.
OK... How soon until we have a scientific definition of consciousness?
AGI is not simulation; it's supposed to be the real deal: machine awareness. I'm starting to feel like Charlie Brown when Lucy keeps yanking the football away. Marketers keep stealing all the terms we use to refer to the real deal in AI.
R. Daneel Olivaw, R2D2, the AIs from Troy Rising, the Culture Minds... That's where AGI goes. AGI is not really good ML models that have no insight.
Fortune tellers and stock gurus make lots of predictions, and randomly one will be right; they then promote how smart they were, and all the others are forgotten. Why would this be any different?
Of course our ability to distinguish between AI and humans improves as AI evolves. It becomes more sophisticated, and so do we. 10 years ago, GPT-4 probably would have passed the Turing test.