If you give that person the ability to represent unknown questions as combinations or functions of ones for which they have the class' answer distribution, then the line between being "powerful" and being "smart" gets blurrier.
To take us back to the age-old Chinese Room story: what does it mean to _understand_ something? If an entity has internal processes that allow it to converse with humans consistently and reasonably about something (and I'm not saying GPT/LLMs are that) - can you not claim that the entity understands it?