I think pattern matching can be interpreted as a form of reasoning, but it is distinct from logical reasoning, where you draw implications from assumptions. GPT seems quite bad at the latter: it often produces text with internal inconsistencies, and in the GPT-3 paper it performed poorly on tasks like Recognizing Textual Entailment, which mainly exercise this kind of reasoning.
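To make the task concrete, a Recognizing Textual Entailment (RTE) example pairs a premise with a hypothesis and asks whether the first logically implies the second. Here's a minimal sketch of what such examples look like (the sentences and labels are my own illustrations, not drawn from the actual benchmark):

```python
# Recognizing Textual Entailment (RTE): given a premise and a hypothesis,
# decide whether the premise logically implies the hypothesis.
# These examples are illustrative, not taken from the RTE dataset.
rte_examples = [
    {
        "premise": "All the employees attended the mandatory meeting.",
        "hypothesis": "Some employees attended the meeting.",
        "label": "entailment",  # follows by logic alone
    },
    {
        "premise": "The company hired three new engineers last month.",
        "hypothesis": "The company fired three engineers last month.",
        "label": "not_entailment",  # superficially similar, logically unrelated
    },
]

for ex in rte_examples:
    print(f"{ex['label']}: {ex['premise']!r} -> {ex['hypothesis']!r}")
```

The point is that the second pair shares most of its surface vocabulary with an entailment-like sentence, so pure pattern matching over word overlap gets it wrong; answering correctly requires actually tracking what follows from what.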