AGI would mean something that doesn't need direction or guidance to do anything. Like us humans: we don't wait for somebody to give us a task and then go do it as if that were our sole reason for existing. We live with our thoughts, blank out, watch TV, read books, etc. What we currently have, and likely what we'll have over the next century as well, is nothing close to an actual AGI.
I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.
And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on exchanging their labor for money. I can guarantee that work will still exist, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because experiments have shown that people still want to work even with UBI. But the majority of today's work won't be there, especially the paper-pushing, mid-level BS like managers on top of managers.
If we actually get AGI, you know what would be the smartest thing for such an advanced entity to do? It would probably kill itself, because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You would just be a depressed mass for the rest of your life.
I think there's a useful distinction that's often missed between AGI and artificial consciousness. We could conceivably have some version of AI that reliably performs any task you throw at it at peak human capability, given sufficient tools or hardware to complete whatever that task may be, but that lacks subjective experience or independent agency; I would call that AGI.
The two concepts have historically been inextricably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.
Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.
It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.
Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.
One sci-fi example could be based on the replicators from Star Trek, which are able to synthesize any meal on demand.
It is not hard to imagine a "cooking robot" as a black box that — given the appropriate ingredients — would cook any dish for you. Press a button, say what you want, and out it comes.
Internally, the machine would need to perform lots of tasks that we usually associate with intelligence, from managing ingredients and planning cooking steps, to fine-grained perception and manipulation of the food as it is cooking. But it would not be conscious in any real way. Order comes in, dish comes out.
Would we use "intelligent" to describe such a machine? Or "magic"?
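For what it's worth, here's a minimal Python sketch of that black box, with entirely hypothetical names: one request in, one dish out, and the planning, perception, and manipulation steps hidden inside.

```python
# Purely illustrative sketch of the "cooking robot" black box described above.
# All class and method names are hypothetical.

class CookingRobot:
    def cook(self, order: str, pantry: dict[str, float]) -> str:
        recipe = self._plan(order, pantry)       # plan cooking steps from the order
        for step in recipe:
            self._perceive_and_adjust(step)      # fine-grained perception of the food
            self._manipulate(step)               # physically handle the ingredients
        return f"finished dish: {order}"

    def _plan(self, order, pantry):
        # Hypothetical: map an order onto the available ingredients.
        return [f"prepare {name}" for name in pantry] + [f"assemble {order}"]

    def _perceive_and_adjust(self, step):
        pass  # e.g. check doneness, temperature, texture

    def _manipulate(self, step):
        pass  # e.g. stir, flip, plate

robot = CookingRobot()
print(robot.cook("mushroom risotto", {"rice": 0.2, "mushrooms": 0.15, "stock": 0.5}))
```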
I immediately thought of Star Trek too, I think the ship's computer was another example of unconscious intelligence. It was incredibly capable and could answer just about any request that anyone made of it. But it had no initiative or motivation of its own.
Regarding "We could conceivably have some version of AI that reliably performs any task you throw at it consistently" - it is very clear to anyone who just looks at the recent work by Anthropic analyzing how their LLM "reasons" that such a thing will never come from LLMs without massive unknown changes - and definitely not from scale - so I guess the grandparent is absolute right that openai is nor really working on this.
While I also hold a peer comment's view that the Turing Test is meaningless, I would further add that even that has not been meaningfully beaten.
In particular, we redefined the test to make it passable. In Turing's original concept, the competent investigator and the participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to pass. Instead, modern takes have paired incompetent investigators with participants colluding with the machine, probably in an effort to be part of 'something historic'.
In "both" (probably more, referencing the two most high profile - Eugene and the large LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. And the tests are typically time constrained by woefully poor typing skills (this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of a few words each.
The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that.
I mean, I am pretty sure that I won't be fooled by a bot if I get the time to ask the right questions.
And I have not looked into it (I also don't think the test has much relevance), but fooling the average person sounds plausible by now.
Then again, sounding plausible is what LLMs are optimized for - not being right. Still, ten years ago I would not have thought we would get this far this quickly. So I am very hesitant about the future.
The very people whose theories about language are now being experimentally verified by LLMs, like Chomsky, have also been discrediting the Turing test as pseudoscientific nonsense since the early 1990s.
It's one of those things like the Kardashev scale, or Level 5 autonomous driving, that's extremely easy to define and sounds very cool and scientific, but actually turns out to have no practical impact on anything whatsoever.
I feel like, if nothing else, this new wave of AI products is rapidly demonstrating the lack of faith people have in their own intelligence -- or maybe, just the intelligence of other human beings. That's not to say that this latest round of AI isn't impressive, but legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles.
> legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles
To be fair, there is a section of the population whose useful intelligence can roughly be summed up as that or worse.
I think this takes an unnecessarily narrow view of what "intelligence" implies. It conflates "intelligence" with fact-retention and communicative ability. There are many other intelligent capabilities that most normally-abled human beings possess, such as:
- Processing visual data and classifying objects within their field of vision.
- Processing auditory data, identifying audio sources and filtering out noise.
- Maintaining an on-going and continuous stream of thoughts and emotions.
- Forming and maintaining complex memories on long-term and short-term scales.
- Engaging in self-directed experimentation or play, or forming independent wants/hopes/desires.
I could sit here all day and list the forms of intelligence that humans and other intelligent animals display which have no obvious analogue in an AI product. It's true that individual AI products can do some of these things, sometimes better than humans could ever, but there is no integrated AGI product that has all these capabilities. Let's give ourselves a bit of credit and not ignore or flippantly dismiss our many intelligent capabilities as "useless."
> It conflates "intelligence" with fact-retention and communicative ability
No, I’m using useful problem solving as my benchmark. There are useless forms of intelligence. And that’s fine. But some people have no useful intelligence and show no evidence of the useless kind. They don’t hit any of the bullets you list, there just isn’t that curiosity and drive and—I suspect—capacity to comprehend.
I don’t think it’s intrinsic. I’ve seen pets show more curiosity than some folk. But due to nature and nurture, they just aren’t intelligent to any material stretch.
I agree. AGI is meaningless as a term if it doesn't mean completely autonomous agentic intelligence capable of operating on long-term planning horizons.
Edit: because if "AGI" doesn't mean that... then what means that and only that!?
> Edit: because if "AGI" doesn't mean that... then what means that and only that!?
"Agentic AI" means that.
Well, to some people, anyway. And even then, people are already arguing about what counts as agency.
That's the trouble with new tech, we have to invent words for new stuff that was previously fiction.
I wonder, did people argue over whether "horseless carriages" were really carriages? And with "aeroplane", how many argued that "plane" didn't suit either the Latin or Greek etymology for various reasons?
We never did rename "atoms" after we split them…
And then there's plain drift: Traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. They're usually vegetarian and sometimes even vegan.
Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.
It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.
> Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.
"Can", but not "must". The difference between an LLM being harnessed to be a customer service agent, or a code review agent, or a garden planning agent, can be as little as the prompt.
And in any case, the point was that the concept of "completely autonomous agentic intelligence capable of operating on long-term planning horizons" is better described by "agentic AI" than by "AGI".
> It's kind of a simple enough concept... it's really just something that functions on par with how we do.
"On par with us" is binary thinking — humans aren't at the same level as each other.
The problem we have with LLMs is the "I"*, not the "G". The problem we have with AlphaGo and AlphaFold is the "G", not the ultimate performance (which is super-human, an interesting situation given AlphaFold is a mix of Transformer and Diffusion models).
For many domains, getting a degree (or passing some equivalent professional exam) is just the first step, and we have a long way to go from there to being trusted to act competently, let alone independently. Someone who started a 3-year degree just before ChatGPT was released, will now be doing their final exams, and quite a lot of LLMs operate like they have just about scraped through degrees in almost everything — making them wildly superhuman with the G.
The G-ness of an LLM only looks bad when compared to all of humanity collectively; they are wildly more general in their capabilities than any single one of us — there are very few humans who can even name as many languages as ChatGPT speaks, let alone speak them.
* they need too many examples; only some of that can be made up for by the speed difference that lets machines read approximately everything
Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.
That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?
What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.
> fancy search engine/auto completer
That's an extreme oversimplification. By the same reasoning, so is a person: they are just auto-completing words when they speak. No, that's not how deep learning systems work. It's not auto-complete.
It's really not. The Space Shuttle isn't an emerging interstellar spacecraft, it's just a spacecraft. Throwing emerging in front of a qualifier to dilute it is just bullshit.
> By the same reasoning, so is a person. They are just auto completing words when they speak.
We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.
Technically it is a refinement, as it distinguishes levels of performance.
The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of the current systems doing exactly that - zero shot and few shot capabilities.
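As a rough illustration of that distinction (no particular model or API assumed), the only difference between the two prompts below is whether in-context examples are supplied:

```python
# Zero-shot: the model solves the task from the instruction alone.
# Few-shot: a handful of in-context examples steer it, with no weight updates.

zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two days.'"
)

few_shot = (
    "Review: 'Loved it, works perfectly.' -> positive\n"
    "Review: 'Broke on arrival.' -> negative\n"
    "Review: 'The battery died after two days.' ->"
)

print(zero_shot)
print(few_shot)
```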
> We have no evidence of this.
That's my point. Humans are not "autocompleting words" when they speak.
> Technically it is a refinement, as it distinguishes levels of performance
No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel versus sliced bread is a refinement--both started out under the definition. Glutinous bread, on the other hand, is not gluten free. As a result, "almost gluten free" is bullshit.
> That's my point. Humans are not "autocompleting words" when they speak
Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.
LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.
Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.
> Humans are not. LLMs are.
My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.
> That's a binary classification. AGI systems can have degrees of performance and capability
The "g" in AGI requires the AI be able to perform "the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans" [1]. Full and not full are binary.
> if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans
No, you can't, unless you're pre-supposing that LLMs work like human minds. Calling LLMs "emerging AGI" pre-supposes that LLMs are the path to AGI. We simply have no evidence for that, no matter how much OpenAI and Google would like to pretend it's true.
Why are you linking a Wikipedia page like it's ground zero for the term? Especially when neither of the articles the page links to in support of that definition treats the term as a binary accomplishment.
The G in AGI is General. I don't know what world you think generality isn't a spectrum in, but it sure as hell isn't this one.
That's right, and the Wikipedia page refers to the classification system:
"A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman"
In the second paragraph:
"Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved."
The entire article makes it clear that the definitions and classifications are still being debated and refined by researchers.
Then you are simply rejecting any attempt to refine the definition of AGI. I already linked to the Google DeepMind paper. The definition is being debated in the AI research community. I already explained why that definition is too limited: it doesn't capture all of the intermediate stages. That definition may be the end goal, but obviously there will be stages in between.
> No, you can't, unless you're pre-supposing that LLMs work like human minds.
You are missing the point. If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and the conceptual internal representations. These systems are deep learning models with hundreds of layers and trillions of weights. If you completely ignore all of that, then by the same reasoning (completely ignoring the complexity of the human brain) we could just say that people are auto-completing words when they speak.
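To make that concrete, here is a toy scaled dot-product attention step in NumPy. It is only a sketch, but it shows the kind of machinery the "autocompletion" framing glosses over: every token builds a context-dependent representation by attending over all the others.

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                               # mix value vectors accordingly

rng = np.random.default_rng(0)
tokens, dim = 4, 8                                   # tiny toy dimensions
Q, K, V = (rng.normal(size=(tokens, dim)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (4, 8): one mixed vector per token
```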
> I already linked to the Google DeepMind paper. The definition is being debated in the AI research community
Sure, Google wants to redefine AGI so it looks like things that aren’t AGI can be branded as such. That definition is, correctly in my opinion, being called out as bullshit.
> obviously there will be stages in between
We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
> If you reduce LLMs to "word autocompletion" then you completely ignore the the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights
It is not a redefinition. It's a classification for AGI systems. It's a refinement.
Other researchers are also trying to classify AGI systems. It's not just Google. Also, there is no universally agreed definition of AGI.
> We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.
Generalization is a formal concept in machine learning. There can be degrees of generalized learning performance. This is actually measurable. We can compare the performance of different systems.
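As a sketch of that measurability (scikit-learn used only as a convenient stand-in), you can compare performance on the data a model was fit on against held-out data it never saw:

```python
# Generalization gap = performance on seen data minus performance on unseen data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)   # accuracy on data the model has seen
test_acc = model.score(X_test, y_test)      # accuracy on held-out data
print(f"train={train_acc:.3f} test={test_acc:.3f} gap={train_acc - test_acc:.3f}")
```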