> LLMs are more or less a dead end when it comes to AGI.

I don't think many people believe that LLMs are a way to AGI (whatever that actually means). But LLMs can still have many valid uses even if their prospects are limited in scope.



There are plenty of people - technical and non-technical - who seem to be acting like AGI is right around the corner thanks to LLMs, and who are, more broadly, vastly overstating the current capabilities of LLMs. I’m observing this in real life as much as on the internet. Two very distinct groups of people stand out to me: (1) high-level execs with vested interests in AI, and (2) managers who haven’t even bothered to create an OpenAI account and are asking their subordinates to use ChatGPT for them, in what is an unforeseen use of LLMs: by human proxy.


I think you are missing a step. A lot of people believe AI will advance so much that it will be indistinguishable from the best possible human reasoning. The evolution of LLMs just gives us a clue about how quickly AI is improving. That does not mean that LLMs, which are one form of AI, will become AGI. They are just one path that AI is following, and will probably end up as a subset of something more advanced.


I recently read an interesting thread that laid out the case for LLMs being a path to AGI: https://old.reddit.com/r/singularity/comments/13ox85j/how_do...

The argument boils down to the idea that language isn't simply strings of words or bits of factual information, but an actual encoding of logic. By training statistical models on vast amounts of logic, we've given them a generalizable ability to perform logic. A sufficiently advanced LLM could thus potentially fulfill some definition of AGI.

To be clear, this doesn't in any way imply that LLMs could ever fit the definition of artificial consciousness, which would be a completely different form of strong AI. They're effectively just mathematical functions (albeit extremely complicated ones), which simply take inputs and return outputs without any intervening subjective experience. Even if they can perform a complicated task, retrieve and effectively summarize complicated information, or say all the right things as a conversational partner, they have no concept of the meaning of their output.
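
To make the "just a function" framing concrete, here is a toy sketch (pure illustration: the vocabulary, weight matrix, and helper names are invented and stand in for nothing real). At its core an LLM maps a token sequence to a probability distribution over the next token, deterministically, with nothing resembling experience in between:

    import numpy as np

    VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]
    rng = np.random.default_rng(0)
    W = rng.normal(size=(len(VOCAB), len(VOCAB)))  # stand-in for billions of learned weights

    def next_token_distribution(token_ids):
        # A real model would run attention/MLP layers over the whole context;
        # this toy just scores the next token from the last token seen.
        logits = W[token_ids[-1]]
        exps = np.exp(logits - logits.max())
        return exps / exps.sum()  # softmax: probabilities over VOCAB

    prompt = [VOCAB.index("the"), VOCAB.index("cat")]
    print(dict(zip(VOCAB, next_token_distribution(prompt).round(3))))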

Maybe that limitation in itself puts a ceiling on their potential. Maybe the best possible LLM can only ever be 99.99% effective, and the other 0.01% of the time it will go completely off the rails, disregard its instructions, or hallucinate something ridiculous. Maybe the only way to overcome that is by keeping a human or a true artificial consciousness in the loop, in which case LLMs would still be extremely useful, but a flawed AGI, if "AGI" at all. Or maybe a sufficiently advanced LLM and/or a sufficiently advanced error correction architecture will actually be enough to mitigate those issues.
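
For what it's worth, the "error correction architecture" part is easy to picture as a verify-and-retry wrapper. This is only a sketch with hypothetical callables (generate_answer, looks_sane, and escalate are stand-ins, not any particular API):

    def answer_with_checks(question, generate_answer, looks_sane, escalate, max_attempts=3):
        # generate_answer: queries some LLM (hypothetical stand-in)
        # looks_sane:      returns True if the draft passes checks
        #                  (schema validation, fact lookup, a second model, ...)
        # escalate:        hands the rare stubborn failures to a human reviewer
        for _ in range(max_attempts):
            draft = generate_answer(question)
            if looks_sane(question, draft):
                return draft
        return escalate(question)

Whether that loop ends up being closed by a human or by another model is exactly the open question.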

I don't have a strong opinion on where LLMs are ultimately headed, but I'm looking forward to seeing how it all unfolds. It's amazing how quickly capabilities that were strictly in the realm of sci-fi have become mundane.


LLMs are definitely here to stay. Even if they don't turn out to be the road to AGI, they can be used by all sorts of sub-AGI agents as a "language centre". An encoder can be used to extract meaning from input, and an autoregressive decoder conditioned on the agent's internal state can be used to keep a conversation going. What's not clear at all is whether the traditional transformer architecture will endure.
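
A rough sketch of that "language centre" wiring, with made-up names (encode, update_state, and decode are placeholders for whatever models the agent actually uses), might look like:

    class LanguageCentre:
        def __init__(self, encode, update_state, decode):
            self.encode = encode              # encoder: text -> meaning vector
            self.update_state = update_state  # agent logic: (state, meaning) -> new state
            self.decode = decode              # autoregressive decoder: state -> reply text
            self.state = None

        def reply(self, user_text):
            meaning = self.encode(user_text)
            self.state = self.update_state(self.state, meaning)
            return self.decode(self.state)    # reply conditioned on internal state

Nothing in that wiring depends on the decoder being a transformer specifically, which is why the architecture question is separate.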


> They're effectively just mathematical functions (albeit extremely complicated ones), which simply take inputs and return outputs without any intervening subjective experience.

So are human brains, which are subject to the laws of physics, and which work just as mechanistically as any computer.

Unless you hold a dualist view in which the brain accesses a spiritual realm outside the physical world, the fact that a computer operates mechanistically does not mean that it lacks consciousness.


The process of a human responding to a prompt isn't the same process an LLM follows. It involves subjectively experiencing being asked the question, having feelings about the question, possibly visualizing something related to the question, possibly reflecting on memories, wondering about how possible answers might be received and affect their future reputation, expressing their answer with a range of different emotions, and so on.

There may be aspects of the brain that behave like statistical models, but the broader system seems more complex than that. I don't see that as in any way inherently spiritual. I expect that it could be artificially reproduced one way or another, but would be extremely complicated.


> The process of a human responding to a prompt isn't the same process an LLM follows.

It's not the same process, but it is a deterministic function, which was one of your objections to LLMs. Humans operate according to physical laws, after all.


>> I don't think many people believe that LLMs are a way to AGI

Please tell Sam Altman ASAP

Thanks!


You think he doesn’t know?

Everything he says is marketing for OpenAI.

Same as any other CEO with their company.



