Which is part of the reason I think LLMs are a big deal. LLMs talk like the inner voice, and their failure modes are strikingly similar to the failure modes of "gut instinct" / first reactions and the "stream of thoughts". I don't think this is a coincidence - I think we may have stumbled on the main trick that makes biological brains work. By that I don't mean language itself, but rather the use of an absurdly high-dimensional latent space to represent relatedness.
Now, if the above is anywhere close to the truth, then taken together with the research you mention, it suggests that LLMs aren't simulating or parroting abstract thinking and understanding - they're actually doing it, the same way we do. They just lack the evaluator/censor layer of conscious experience.