It's hard to observe your own mind, let alone someone else's: thinking is intangible and resists direct observation.
Because of this limitation, LLMs make a decent model for this sort of process, since we can observe how they operate. I'm not claiming they're actually intelligent the way we are, but rather that they model the process of drawing connections and making associations closely enough to how we think that the analogy is useful.