> The context window can be compared to working memory in humans: it’s fast, efficient but gets rapidly overloaded. Humans manage this limitation by offloading previously learned information into other memory forms, whereas LLMs can only mimic this process superficially at best.

This is just silly. Humans forget things all the time! If I want to remember something I write it down.

> The nature of hallucination is very different between AR models and humans, as one has a world model and the other doesn’t.

I stopped reading at this point. There's not much signal here, just basic facts about LLMs and then leaps to very bold statements.

Here is an interesting experiment I use to help people understand next token prediction. Think of a simple math problem in your head, maybe 3 digit by 2 digit multiplication. Then speak out every single thought you have while solving it.
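To make that concrete, here's a toy sketch of what "predict the next token" means (my own illustration, not how any real LLM is implemented: bigram counts over a tiny made-up corpus stand in for the model). Each step looks only at the text produced so far, here just the previous word, and emits one more token:

    from collections import Counter, defaultdict

    # Toy "model": bigram counts over a tiny corpus stand in for a trained LLM.
    corpus = "three hundred twelve times forty five is fourteen thousand forty".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(token):
        # Greedy decoding: pick the continuation seen most often after this token.
        seen = bigrams.get(token)
        return seen.most_common(1)[0][0] if seen else None

    # Autoregressive loop: each emitted token becomes part of the next context.
    generated = ["three"]
    for _ in range(8):
        nxt = predict_next(generated[-1])
        if nxt is None:
            break
        generated.append(nxt)
    print(" ".join(generated))

The loop happily continues a familiar-looking phrase one word at a time without ever computing anything, which is roughly the intuition the experiment is meant to give.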


> There's not much signal here, just basic facts about LLMs and then leaps to very bold statements.

The article wasn't supposed to be informative for people who already know how LLMs work. Like the title said, I just wanted to write down some thoughts.

> This is just silly. Humans forget things all the time! If I want to remember something I write it down.

The opposite was never stated. Human memory is of course selective.

> Here is an interesting experiment I use to help people understand next token prediction. Think of a simple math problem in your head, maybe 3 digit by 2 digit multiplication. Then speak out every single thought you have while solving it.

Now a point I'm happy to discuss! The process of solving it is actually quite autoregressive-like, but this is also an example of a common pitfall with LLMs: they purely rely on pattern matching because they don't have the internal representation of what they really deal with (algebra). But we all know that.
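To make the contrast concrete (a toy sketch of my own, nothing from the article): the snippet below actually executes the schoolbook long-multiplication procedure and prints one intermediate "thought" per step. The trace reads like an autoregressive stream, but it's produced by an explicit algorithm, which is the kind of internal representation I'm saying pure pattern matching lacks.

    def long_multiply_trace(a: int, b: int):
        """Schoolbook long multiplication (non-negative ints), yielding each step."""
        total = 0
        for place, digit in enumerate(reversed(str(b))):
            partial = a * int(digit) * (10 ** place)
            total += partial
            yield f"{a} x {digit}{'0' * place} = {partial}, running total {total}"
        yield f"answer: {total}"

    # Example: the 3-digit by 2-digit case from the experiment above.
    for thought in long_multiply_trace(312, 45):
        print(thought)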

The main question is whether LLMs taught to reason actually show that they have this kind of representation. They still work very differently, I'd say; even for tasks that seem trivial to humans, reasoning LLMs will make a lot of mistakes before arriving at a plausible-sounding result. Because the model was trained to reason, there's a higher chance now that the plausible-sounding result is actually correct. But this property becomes quite interesting once applied to complex tasks that would take too much time or be too overwhelming for humans, and that's where they shine as powerful tools.


> even for tasks that seem trivial to humans, reasoning LLMs will make a lot of mistakes before arriving at a plausible-sounding result.

Like a lot of my coworkers analyzing a production bug? I would agree if the statement were that LLMs were underpowered compared to a human brain today, but I'm not seeing evidence that humans do reasoning in a way that can't be correctly modeled.

From your article and comments, it sounds like the take is something like "humans don't actually reason autoregressively", which could be true (I don't know enough to know), but that's sort of like saying physics models aren't really how nature works: ultimately LLMs are executable models of the world; it's even in the name.


> From your article and comments, it sounds like the take is something like "humans don't actually reason autoregressively", which could be true (I don't know enough to know), but that's sort of like saying physics models aren't really how nature works: ultimately LLMs are executable models of the world; it's even in the name.

The conclusion states "Language and thought are not purely autoregressive in humans".

Which doesn't mean humans don't have autoregressive components in their thinking. At least that's my opinion; I'm not making that bold claim, and I don't know enough to know either.

> Like a lot of my coworkers analyzing a production bug? I would agree if the statement were that LLMs were underpowered compared to a human brain today

Clearly not in the same way, and that's what I was trying to explain with regard to the hallucination issue too. Humans also learn from proofs, can apply frameworks, etc.; there's no denying that. But the internal process of an LLM remains pattern matching and sequential prediction, whereas there's more to the human thinking process.

LLMs are underpowered in some aspects that can't be replicated with autoregressive modeling, but are already stronger in other aspects. That is what I think.

> but I'm not seeing evidence that humans do reasoning in a way that can't be correctly modeled.

Me neither; that is not my stance, and I'm actually optimistic about it. I just don't think we should be satisfied with autoregressive modeling alone if the ambition is to reach or comprehend human-level intelligence.


You seem to continue to make a lot of claims without basis.

> they purely rely on pattern matching

Yes.

> because they don't have the internal representation of what they really deal with (algebra)

Wait, what? No. You can't claim this yet, as it's an open question. It may well be the case that they do have an internal representation of algebra, and even a world model for that matter, if a flawed one.

I think you need to be more aware of the current research in LLM interpretability. The answers to these questions are hardly as definitive as you make it seem.


I do actually read a lot about LLM interpretability, and this is my own conclusion (I should've phrased it as "they don't seem to have"). I do consider this an open question, so I'm a bit confused as to why you think otherwise; perhaps it's due to my phrasing (I just had a very long flight), but know that this is not the case and I always doubt things. In fact, right after the text you quoted, I said the question is actually quite open (mentioning reasoning models, though to be honest it's not exclusive to them, just more apparent in some ways).

I should also clarify (here, and probably in my article when I have the time to do so): LLMs "do" build internal models in the sense that, at the same time:

- They organize knowledge by domain in a unified network

- They're capable of generalization (already mentioned and acknowledged at the very beginning of the article)

However, these models, while they share parallels with human cognition, lack substance and can't (yet) replicate the deeply integrated cognitive model of humans. That is where current interpretability research stands, and probably SOTA LLMs too. My own opinion and speculation is that autoregressive models will never reach a satisfying approximation of human-level cognition, since the human thinking process seems to involve more than autoregressive components, which aligns with current psychology. But that doesn't mean architectures won't evolve.

Don't misunderstand me: just because I said they're pattern-matching machines doesn't mean they'll be unable to properly "think". In fact, the line between pattern matching and thinking is actually quite blurry.


Do you think in words when you do a 3 x 2 digit multiplication?

I do it all in images and I think many other people do too.


We think of LLMs as not math-proficient (because they aren't yet), but what about multimodal models?

I wonder if anyone has tried getting them to "imagine" math as a way to "visually compute".

