Hacker News
Why OpenAI, Anthropic and DeepSeek Won't Reach AGI (defragzone.substack.com)
2 points by frag 4 days ago | 3 comments

Interesting read. Time wasn't a variable I had considered missing from interactions with AI, but it makes sense.

I'd also add this: today's prevalent AI bots are flawed because they cannot account for things like context, limitations, dependencies, and scope. I ask a question... they attempt to spit out a complete answer with complete disregard for the context my question is coming from.

AI fails in the same way a monkey can't drive a car... abstraction. We humans know that a red light ahead means stop at the stop light, not stop immediately wherever you are right now. All AI can do is make a best guess at what the inputs pattern-match to. It's like always having an answer without ever asking for clarification or context.


Exactly. What I consider a patch, and definitely a symptomatic solution, is "solving" this via agents that search the web (e.g. asking for this year's weather forecast: the LLM cannot know which year I am referring to except via a web search). Generally speaking, LLMs lack direct temporal awareness. Standard models do not represent the flow of time unless explicitly trained to. Some models can encode a model of time when trained on sequential video data, relying on external encoders to provide temporal structure, but that is a very narrow application (video, in this case). It cannot be considered a generic awareness of time as a concept through which facts can change.
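The other common patch for this, besides web search, is simply injecting the current date into the prompt before it reaches the model. A minimal sketch of that workaround (the helper name is hypothetical, not any vendor's API):

```python
from datetime import datetime, timezone

def add_temporal_context(user_prompt: str) -> str:
    """Prepend the current date so the model can resolve phrases like
    'this year' without a web search. The model still has no genuine
    sense of time; it just pattern-matches on the injected string."""
    now = datetime.now(timezone.utc)
    return f"Current date: {now:%Y-%m-%d}\n\n{user_prompt}"

prompt = add_temporal_context("What is the weather forecast for this year?")
```

This is exactly the symptomatic fix described above: the ambiguity is resolved outside the model, not by any internal model of time.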

I enjoyed reading your perspective and largely agree with your points. However, I believe it would be more compelling if you could provide concrete evidence, such as transcripts or results from actual LLM interactions. Many of the examples you’ve cited, such as those involving figures like kings or presidents, feel somewhat dated and well-discussed. Drawing a strong conclusion that LLMs cannot reach AGI or understand concepts like time, solely based on these examples, seems premature without showcasing specific results from modern LLMs. I feel a demonstration of their limitations would strengthen your argument.


