
I don't think we have. Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.




> Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.

But they do close a big gap: they're capable of "understanding" fuzzy, ill-defined sentences and "inferring" the context, insofar as they can help formalize them into a format parsable by another system.
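A minimal sketch of what that "formalize into a parsable format" step can look like. The `call_llm` helper here is a hypothetical stand-in for whatever model API you actually use; the schema and the example request are illustrative only:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for an LLM API call; returns the model's raw text."""
        raise NotImplementedError

    def formalize_request(fuzzy_text: str) -> dict:
        """Turn a vague natural-language request into JSON a downstream system can parse."""
        prompt = (
            "Extract the user's intent from the text below and reply ONLY with JSON "
            'of the form {"action": str, "object": str, "deadline": str or null}.\n\n'
            f"Text: {fuzzy_text}"
        )
        raw = call_llm(prompt)
        data = json.loads(raw)  # the symbolic side now sees structured data, not prose
        # Minimal validation so malformed model output fails loudly here, not downstream.
        for key in ("action", "object", "deadline"):
            if key not in data:
                raise ValueError(f"missing field: {key}")
        return data

    # e.g. formalize_request("can you sort out the invoice thing before Friday-ish?")
    # might plausibly yield {"action": "pay", "object": "invoice", "deadline": "Friday"}

The point is only the division of labour: the LLM absorbs the fuzziness, and everything after `json.loads` is ordinary deterministic code.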


The technique itself is good. And paired with a good amount of data and loads of training time, it's quite capable of extending prompts in a plausible way.

But that’s it. Nothing here has justified the huge amounts of money still being invested. It’s nowhere near as useful as mainframe computing or as attractive as mobile phones.


They do not understand. They predict a plausible next sequence of words.

I don't disagree with the conclusion, I disagree with the reasoning.

There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding" if it was the most efficient way to predict them.
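For reference, "trained to predict a plausible next sequence of tokens" means literally this objective and nothing more. A sketch of the standard autoregressive loss in PyTorch, where `model` is a placeholder for any network mapping token ids to per-position vocabulary logits:

    import torch.nn.functional as F

    def next_token_loss(model, tokens):
        """Cross-entropy on predicting token t+1 from tokens up to t.

        tokens: (batch, seq) tensor of token ids.
        model(inputs) -> (batch, seq, vocab) logits.
        """
        inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift targets by one position
        logits = model(inputs)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),          # flatten (batch, seq) positions
            targets.reshape(-1),                          # the true next tokens
        )

Whatever internal structure helps drive this loss down is what the model ends up with; whether to call any of it "understanding" is exactly the disputed question.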


The evidence so far is a definite no. LLMs will happily produce plausible gibberish, and are often subtly or grossly wrong in ways that betray a complete lack of understanding.


