
I do think that competent humans can solve any arbitrary sum of two whole numbers with a pen, paper, and time. LLMs can't do that.
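For concreteness, the pen-and-paper method is just schoolbook long addition, digit by digit with a carry. A minimal Python sketch of that procedure (purely illustrative; nothing here is how an LLM computes):

    def long_add(a: str, b: str) -> str:
        """Schoolbook long addition on decimal digit strings."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)   # pad to equal length
        carry, digits = 0, []
        for x, y in zip(reversed(a), reversed(b)):  # rightmost column first
            carry, d = divmod(int(x) + int(y) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(long_add("999", "1"))  # -> 1000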




That’s interesting: you added a tool. You did not just leave it to the human alone.

I'm not the fellow you replied to, but I felt like stepping in.

> That’s interesting, you added a tool.

The "tool" in this case, is a memory aid. Because they are computer programs running inside a fairly-ordinary computer, the LLMs have exactly the same sort of tool available to them. I would find a claim that LLMs don't have a free MB or so of RAM to use as scratch space for long addition to be unbelievable.


The fact that an LLM is running inside an ordinary computer does not mean that it gets to use all the abilities of that computer. They do not have megabytes of scratch space merely because the computer has a lot of memory.

They do have something a bit like it: their "context window", the amount of input and recently-generated output they get to look at while generating the next token. Claude Sonnet 4 has 1M tokens of context, but e.g. Opus 4.1 has only 200k and I think GPT-5 has 256k. And it doesn't really behave like "scratch space" in any useful sense; e.g., the models can't modify anything once it's there.
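To make the "can't modify" point concrete, here is a toy model of generation (the "model" is a made-up stand-in, not any real API): the context only ever grows at its end, and nothing already in it can be edited.

    # Toy model of autoregressive generation: the context is append-only.
    # `fake_model` is a made-up stand-in, not any real model's API.
    def fake_model(tokens):
        return "4" if tokens[-1] == "=" else "<eos>"

    context = ("2", "+", "2", "=")  # the prompt, as an immutable tuple
    while (tok := fake_model(context)) != "<eos>":
        context = context + (tok,)  # new tokens are appended at the end only

    print(" ".join(context))  # -> 2 + 2 = 4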


Well, the GPT-5 context window offers a little more than a MB.
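Back-of-the-envelope, assuming the 256k-token figure from above and the common rule of thumb of ~4 bytes of English text per token (both are assumptions, not published specs):

    tokens = 256_000           # assumed GPT-5 context size, per the comment above
    bytes_per_token = 4        # rough rule of thumb for English text
    print(tokens * bytes_per_token)  # 1_024_000 bytes, i.e. a bit over 1 MB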

LLMs already get enough working memory; they do not fail for lack of working space.


