
> IMO it is not the case. And I'd go further in thinking LLMs won't even be a component of AGI if we get there.

And why do you think that?



Because LLMs are Markov chains on steroids. They're useful, for sure. But they won't suddenly start creating a better (for whatever "better" means) version of themselves or pushing the boundaries of the machines they're running on.

Or maybe I'm wrong and the current "vibe coding" push is in fact LLMs getting "coders" to assemble a distributed AI. Or multiple small agents whose goal is to get a lot of hardware delivered somewhere it can be assembled into a new, better monolithic AI.


"By design" LLMs lack: initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning. All they can do is predict the next token - which is still an extremely powerful building block on its own, but nothing like the above.


You've made a reasonable argument that LLMs cannot, on their own, be an implementation of AGI. But the GP's claim was stronger: that LLMs won't even be a component (or "building block") of the first AGI.


One might reasonably ask a frontier model how to generate the source code for an agent-based system that exhibits initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning.
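
For illustration only, a hypothetical sketch of what such generated code might look like: llm() here is a placeholder for a call to some frontier model's API, and the "self-reflection" step is just one more LLM call over the agent's own previous output. Whether that counts as reasoning is exactly the question raised below.

    # Hypothetical agent loop; llm() is a stand-in, not a real API.
    def llm(prompt: str) -> str:
        raise NotImplementedError("placeholder for a frontier-model API call")

    def agent_step(goal: str, history: list[str]) -> str:
        plan = llm(f"Goal: {goal}\nHistory: {history}\nPropose the next action.")
        critique = llm(f"Point out flaws in this plan:\n{plan}")  # "self-reflection"
        return llm(f"Revise the plan.\nPlan: {plan}\nCritique: {critique}")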


I believe at that point we would have to seriously ask ourselves what we mean by "reasoning" or "intelligence"; humans have an intuitive understanding of those terms, LLMs don't. Would an LLM be able to evaluate the output of another LLM, or would we have to keep a "human in the loop"[1]?

[1]: https://pluralistic.net/2023/08/23/automation-blindness/#hum...



