Doesn’t this just reflect that humans are, in essence, large language models? Maybe with an extra dimension of “emotions” that’s useful for training?
These LLMs are weak facsimiles of the brain, not of humans. The real world is the ultimate training environment; it can't be fully substituted with a bunch of strings.
This sounds like the old Molyneux problem: "if a man blind from birth (who can recognize squares by feel) gained sight, would he be able to recognize squares visually?"
And since that question has been answered in the negative, I'm inclined to agree.
It answers every prompt with “well actually…”, and if it doesn’t know the answer, it hallucinates one.