> LLMs don't think. At all.

How can you so confidently proclaim that? Hinton and Ilya Sutskever certainly seem to think that LLMs do think. I'm not saying you should accept what they say blindly because of their authority in the field, but their opinions should at least give your confidence some pause.





>> LLMs don't think. At all.

>How can you so confidently proclaim that?

Do you know why they're called 'models' by chance?

They're statistical, weighted models. They use statistical weights to predict the next token.

They don't think. They don't reason. Math, weights, and turtles all the way down. Calling anything an LLM does "thinking" or "reasoning" is incorrect. Calling any of this "AI" is even worse.
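
To make the "statistical, weighted model" point concrete, here is a toy, hypothetical sketch (plain bigram counts, nothing like a real LLM's architecture): the whole "model" is a table of weights, and generation is just weighted sampling of the next token.

    # Toy, hypothetical sketch of next-token prediction (illustration only,
    # not how any real LLM is implemented): the "model" is just learned
    # statistics mapping a context to a weighted choice over next tokens.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # "Training": count which token follows each token (a bigram model).
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(context):
        # Turn counts into weights and sample the next token from them.
        options = counts[context]
        tokens = list(options.keys())
        weights = list(options.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(next_token("the"))  # e.g. "cat" or "mat", picked by weighted chance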


If you have an extremely simple theory that debunks the status quo, it is safer to assume there is something wrong with your theory than to assume you are on to something that no one else has figured out.

You are implicitly assuming that no statistical model acting on next-token prediction can, conditional on context, replicate all of the outputs that a human would give. This is a provably false claim, mathematically speaking, as human output under these assumptions would satisfy the conditions of the Kolmogorov existence (extension) theorem.
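
For the reasoning step behind that (a sketch of the standard argument, stated under the assumptions above): by the chain rule, any joint distribution over a token sequence factorizes into next-token conditionals,

    P(x_1, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \dots, x_{t-1}),

so restricting a model to "predict the next token" does not, by itself, restrict which sequence distributions it can represent.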


Sure.

However, the status quo is that "AI" doesn't exist, computers only ever do exactly what they are programmed to do, and "thinking/reasoning" isn't on the table.

I am not the one that needs to disprove the status quo.


No, the status quo is that we really do not know. You made a claim that it is impossible for LLMs to think, on the grounds that they are statistical models, so I disproved that claim.

If it really was that simple to dismiss the possibility of "AI", no one would be worried about it.


I never said it was impossible. Re-read it, and kindly stop putting words in my mouth. :)

But are the connections between neurons in our brains anything more than a statistical model, implemented with cells rather than silicon?

You're forgetting the power of the divine ineffable human soul, which turns fatty bags of electrolytes from statistical predictors into the holy spirit.

An LLM is very much like a CPU. It takes inputs and performs processing on them based on its working memory and previous inputs and outputs, and then produces a new output and updates its working memory. It then loops back to do the same thing again and produce more outputs.
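
A hedged, toy sketch of that loop (hypothetical names, not any particular model's API), where the growing context plays the role of the working memory:

    # Hypothetical sketch of the generate / update-memory / loop-back cycle
    # described above. "model" stands in for any next-token predictor.
    def generate(model, prompt_tokens, max_new_tokens=20, stop_token="<eos>"):
        context = list(prompt_tokens)          # working memory: inputs so far
        for _ in range(max_new_tokens):
            token = model(context)             # process memory -> new output
            if token == stop_token:
                break
            context.append(token)              # update memory with the output
        return context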

Sure, they were evolved using criteria based on next-token prediction. But you were also evolved, only using criteria for higher reproduction.

So are you really thinking, or just trying to reproduce?


Do you think Hinton and Ilya haven’t heard these arguments?


