
Fascinating. Does it mean that when we have an internal monologue, we're in effect collaborating with our immediate-future self?

This has parallels to Chain-of-Thought prompting in LLMs: simply letting the model spell out its reasoning before answering yields better responses.
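For concreteness, here's a minimal sketch of the zero-shot variant (the "Let's think step by step" trigger from Kojima et al.). The model call itself is stubbed out, since the point is just the prompt construction; in practice you'd send the string to whatever LLM API you use.

```python
def direct_prompt(question: str) -> str:
    """Baseline: ask for the answer with no reasoning trigger."""
    return f"Q: {question}\nA:"


def cot_prompt(question: str) -> str:
    """Append a trigger phrase so the model 'speaks its reasoning'
    before committing to an answer -- the extra generated tokens
    act like intermediate steps it can condition on."""
    return f"Q: {question}\nA: Let's think step by step."


if __name__ == "__main__":
    q = ("A bat and a ball cost $1.10 in total. The bat costs "
         "$1.00 more than the ball. How much does the ball cost?")
    print(direct_prompt(q))
    print()
    print(cot_prompt(q))
```

The only difference between the two prompts is the trailing trigger phrase, which is the whole trick: the model's own generated reasoning becomes context for its final answer.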

The way I look at it: language-less thinking taps into our evolved or ingrained patterns (i.e., intuition). That's like a single forward pass of an LLM. But to reason through a novel situation you haven't encountered before, you have to break it down into simpler steps, and that necessarily requires some form of "thinking aloud".

I still need to follow the references in the original paper, but most of their evidence for language-less thought concerned intuition-style thinking, not multi-step reasoning.



