I skimmed another relevant piece today: it seems we are not making adequate progress on interpreting the internals, despite the "transparency" we have gained into the architecture...
Precisely. The architecture is transparent but the latent representations within and the operations performed by LLMs are not.
The extent to which LLM "reasoning" really is reasoning similar to humans', or something of a strictly weaker class entirely, is a subject of active research.
Personally I'm of the opinion human reasoning is really just "pattern matching", but we're also still waiting for the cognitive scientists to give us an answer on that one.
> I'm of the opinion human reasoning is really just "pattern matching"
There is more than one interpretation of "pattern matching".
Of course it seems a fundamental component of generating ideas, but those ideas are then put - by intellectuals - on a bench and actively criticized. The two activities differ in important ways: first you look and think "that seems like four", but then you count to be sure.
The second part is absolutely critical in determining a well-working reasoner.
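The two-stage picture above - a fast pattern-matched guess, followed by a deliberate procedural check - can be sketched as a toy propose-and-verify loop. This is only an illustration of the distinction being argued, not a model of cognition; all function names here are made up:

```python
import random

def intuit_count(items):
    """Fast 'pattern matching' guess: reliable for small sets, noisy beyond."""
    n = len(items)
    if n <= 4:
        return n  # small sets are grasped at a glance (subitizing-like)
    return n + random.choice([-1, 0, 1])  # larger sets: the glance can be off

def verify_count(items):
    """Slow, procedural check: count the items one by one."""
    total = 0
    for _ in items:
        total += 1
    return total

def reason(items):
    """Propose intuitively, then accept only what the deliberate check confirms."""
    guess = intuit_count(items)
    exact = verify_count(items)
    return exact if guess != exact else guess
```

The point of the sketch is that `reason` is only trustworthy because of the second, procedural stage: the intuitive stage alone would sometimes be wrong and never know it.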
After having formulated an idea, do you put it on your intellectual bench and re-examine it, purposefully, analytically? Well, that is more than plain pattern matching over intellectual keys: it is procedural.
And what about those intellectual keys, or «schemas»: how are they generated? Through verification and consolidation that go beyond the original (pattern-matching) intuition.