Both can be true (and that's why I downvoted you in the other comment, for presenting this as a dichotomy): LLMs can reason and yet "stochastically parrot" the training data.
For example, an LLM might learn a rule that sentences similar to "A is given. From A follows B." are followed by the statement "Therefore, B". This is modus ponens. The LLM can apply this rule to a wide variety of A and B, producing novel statements. Yet these statements are still the statistically probable ones.
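To make that concrete, here's a toy sketch in Python (my own illustration, obviously not how a transformer actually works): a single learned pattern that yields the modus-ponens conclusion for A/B pairs it has never seen together:

```python
import re

# Toy rule "learned" from text shaped like "A is given. From A follows B."
# The backreference (?P=a) requires the same A in both positions.
RULE = re.compile(r"(?P<a>.+) is given\. From (?P=a) follows (?P<b>.+)\.")

def conclude(sentence: str) -> str | None:
    """Emit the modus-ponens conclusion if the sentence matches the pattern."""
    m = RULE.fullmatch(sentence)
    return f"Therefore, {m.group('b')}." if m else None

# Works for any A and B, including pairs never seen together in training:
print(conclude("Rain is given. From Rain follows wet streets."))
# -> Therefore, wet streets.
```

The point being: the output can be genuinely novel as a string while still being exactly what the pattern makes most probable.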
I think the problem is that when people say "AI should produce something novel" (or "is producing", depending on whether they advocate or dismiss), they are not very clear about what "novel" actually means. Mathematically, it's very easy to produce a never-before-seen theorem; but is it interesting? Probably not.
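Here's a deliberately silly generator (my own example) to show how cheap that kind of novelty is: every output is true, provable, and almost certainly never written down before, and none of them are interesting:

```python
import random

# Pick a random 40-digit integer n and assert n + 1 > n.
# True, trivially provable, and with ~10^40 candidates the exact
# statement has almost certainly never appeared anywhere before.
n = random.randrange(10**39, 10**40)
print(f"Theorem: {n} + 1 > {n}")
```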