Not sure. I tend to think the "why" of things is always emergent first, and only then applied to analogies.
Honestly, I had no idea what to make of the abstract at first, so I questioned duck.ai's GPT-5 mini to try to understand it in my own words, and according to mini the first paragraph aligns pretty well with the abstract.
The second paragraph is my own opinion, but according to mini it aligns with at least a subset of cognitive theory in the context of problem solving.
I highly recommend asking an LLM to explore this interesting question you've asked. I've found them extremely useful for testing assumptions, and the next time I can't sleep I'll probably do so myself.
Personally I haven't had any luck getting an LLM to solve even simple problems, but I suspect I just don't know how to ask yet, and it's possible that the people building them are still working that out themselves.
I had in mind the Easy2Hard-Bench datasets the study tested against: math competitions, math word problems, programming, chess puzzles, science QA, and commonsense reasoning.
The last problem like this that I asked an LLM to solve was to find the tax and base price of items on an invoice, given the total price and tax rates. I couldn't make sense of the answer, but asking the LLM questions made me realize that I had framed the problem badly, and more to the point, that I didn't know how to ask. (Though the process also triggered a surprising ability of my own to dredge up and actually apply basic algebra.) I'm sure the issue is that I'm still learning what to ask and how to ask it.
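For what it's worth, the algebra I eventually dredged up boils down to something like this. A minimal sketch in Python, assuming a single flat tax rate per item and a tax-inclusive total (the function name and numbers are mine, just for illustration):

```python
def split_total(total: float, tax_rate: float) -> tuple[float, float]:
    # If total = base * (1 + tax_rate), then base = total / (1 + tax_rate)
    base = total / (1 + tax_rate)
    tax = total - base
    return round(base, 2), round(tax, 2)

# Example: a $10.70 line item at a 7% tax rate
base, tax = split_total(10.70, 0.07)
print(base, tax)  # 10.0 0.7
```

Once I saw it written out that way, the LLM's answer made a lot more sense; the hard part was realizing my question had conflated the tax-inclusive total with the base price.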