[flagged] Ask HN: How close are LLMs to AGI?
10 points by alpple on April 22, 2023 | 14 comments
Asking an intentionally simple question to generate a broad discussion.



This is already AGI: it is not tailored to a specific purpose, unlike every previous "AI". It can do anything (given multi-modality); it is general. Humans are still better at specific things, but LLMs easily beat humans in generality. For example, GPT-4 can surely write better poetry than the average human (even if it is not "the best of the best" poetry), and so on.

It is AGI.

There is no point ranting that "oh, but there is no long-term memory, no goal-setting, no planning and such": it is really easy to augment it with all of that; see Auto-GPT and similar projects. Just loop it through itself and give it an interface to the external world, and it could do anything. Incrementally improve the model, and you'll get to superhuman performance in all tasks.
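Roughly, the "loop it through itself" idea looks like the sketch below. This is just an illustration of the Auto-GPT-style pattern; call_llm, TOOLS, and the JSON action format are hypothetical stand-ins, not any real project's API.

    import json

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; assumed to return a JSON action.
        return json.dumps({"tool": "finish", "arg": "done"})

    TOOLS = {
        # Toy "interface to the external world".
        "search": lambda q: f"(search results for {q!r})",
        "write_file": lambda text: "(file written)",
    }

    def agent_loop(goal: str, max_steps: int = 10) -> str:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            # Feed the model its own prior actions plus tool observations.
            action = json.loads(call_llm("\n".join(history)))
            if action["tool"] == "finish":
                return action["arg"]
            observation = TOOLS[action["tool"]](action["arg"])
            history.append(f"Action: {action} -> Observation: {observation}")
        return "step limit reached"

With the stub model this returns immediately; with a real model, the loop keeps feeding the transcript of its own actions and observations back into the next call, which is all the "memory" and "planning" scaffolding amounts to.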


Tbh, ChatGPT caught the world by surprise. Especially the fact that it can do in-context learning so well is a big surprise, even for experts in the field. You can see this in many interviews with experts, for example https://www.therobotbrains.ai/copy-of-raluca-ada-popa. So no one really knows how close LLMs are to AGI. Most people who have been working on AGI think this is nowhere near it. But then again, if they are still processing what ChatGPT is capable of, how would they know for sure?


I feel that most people who were "working on AGI" are having trouble coming to terms with the success of deep learning. They are still grasping at straws, saying it still can't learn, can't generalize beyond its training data, isn't robust or reliable, etc. These are problems that can be overcome.

People like Ben Goertzel believe that GOFAI / Symbolic AI is going to be revived from the AI Winter era, the way deep learning was. Here's a recent quote from his Twitter: "LLMs are very powerful and will almost surely be part of the first system to achieve HLAGI. I suspect more like 20% than 80% though...."


This depends crucially on your definition of AGI. LLMs are more like a mathematical function than a conscious being. If, in your opinion, AGI could be realized as an input/output function with no changing internal state, like a lens or a lookup table, then we could say LLMs could be an element of AGI. But if, like many of us, you believe an AGI needs a changing internal representation of the world, and an ability to mull over prior knowledge and incorporate new inputs, then LLMs are at best a useful component of AGI, perhaps the way the retina plays some role in human visual intelligence.

The fact that LLMs appear so intelligent to humans is really a reflection of our inability to imagine the effects of scale. We can understand simple linear predictions as trivial calculations, but when language-based pattern discovery is many layers deep and those patterns are combined in nontrivial (but non-intelligent) ways, we project intelligence onto the result.
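One way to make the function-versus-agent distinction above concrete is the toy sketch below. Everything here (call_llm, the memory scheme) is an illustrative assumption, not a description of how any particular system actually works.

    def call_llm(prompt: str) -> str:
        # Stateless: nothing inside the "model" changes between calls.
        return f"(completion of {prompt!r})"

    class StatefulAgent:
        # The kind of system the comment says AGI needs: it revises an
        # internal representation as new observations arrive.
        def __init__(self) -> None:
            self.memory = []  # grows over time; this is the changing state

        def observe(self, fact: str) -> None:
            self.memory.append(fact)

        def answer(self, question: str) -> str:
            context = "\n".join(self.memory)  # prior knowledge is consulted
            return call_llm(f"{context}\nQ: {question}\nA:")

The bare call_llm is the lens-or-lookup-table case: same input, same output, nothing retained. The wrapper class is the "changing internal representation" case, with the LLM as only one component inside it.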


I would be afraid of some kind of expert system [1] written in Lisp, as the most powerful PL [2], using a NN [3] as a heuristic for uncertain situations. I believe AGI/ASI-level artificial intelligence lies at the intersection of 1, 2, and 3.

An LLM is a NN, not a true expert system, so it will never get rid of hallucinations. But one ability impresses me greatly: the ability to read a long text and make a digest. I craved that tool when I was a student.

[1] https://en.wikipedia.org/wiki/Expert_system

[2] https://en.wikipedia.org/wiki/Lisp_(programming_language)

[3] https://en.wikipedia.org/wiki/Neural_network
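A rough sketch of the hybrid described above, in Python rather than Lisp for brevity. The rule table and nn_guess are hypothetical stand-ins for the symbolic knowledge base and the neural heuristic.

    RULES = {
        # The symbolic, expert-system part: condition -> conclusion.
        "fever and cough": "suspect flu",
        "no power and no lights": "check the breaker",
    }

    def nn_guess(query: str) -> str:
        # Stand-in for a learned heuristic used only under uncertainty.
        return f"(model's best guess for {query!r}; may hallucinate)"

    def answer(query: str) -> str:
        if query in RULES:        # covered by a rule: answer with certainty
            return RULES[query]
        return nn_guess(query)    # not covered: defer to the NN heuristic

The point of the split is that hallucination is confined to the fallback path; whatever the rules cover is answered deterministically.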


> An LLM is a NN, not a true expert system, so it will never get rid of hallucinations.

One could point out that humans suffer from the same problem. Hallucinations should not disqualify something from being declared AGI. Besides, saying "science advances" is roughly equivalent to saying "we've just realized we've all been hallucinating about X".

And, in the extreme, there are Turing's theorems. They mean that even a perfect ChatGPT would still not know everything (and practically, it would need time, a lot of time, before it really knew more than humans do).



Need something more to the point here. This one is 14 mins: https://youtu.be/Mqg3aTGNxZ0


In my opinion, based on what I have seen and understood thus far, not close at all.

But they can, in some scenarios, give the appearance of being close, which might be enough to be useful for some AGI purposes, whatever those might be.


LLMs are about as close to AGI as an apple is to a fully matured apple tree.


I would say an orange to a fully matured apple tree.


LLMs are one cell.


In an amoeba or what?


come on





