I think the actual measure of it will be a system which has the ability to accept or reject new information, and which possesses something approximating a will to act upon the information it has incorporated.
As impressive as GPTs are, they are fundamentally the sum of what humans feed into them, and they have no capability to reject training updates. Aristotle famously said, "It is the mark of an educated mind to be able to entertain a thought without accepting it." Once we hit the point where we're building AIs that can entertain and understand training input and "consciously" choose not to incorporate it into their understanding of the world, I think we're probably pretty close.
Some might argue that ChatGPT's well-publicized political inconsistencies qualify, but I'll disagree, because (with sufficient access) you could easily take the same base model, continue training it on a new, contradictory dataset, and cause it to flip its "opinions". It doesn't have any actual beliefs, just the sum of the training it's been given.
We are trying to. Absent first principles of cognition (which are nowhere on the horizon), we are defining tasks such that, if a machine can fulfill them, we classify it as AGI.
This is exactly what Turing came up with as a criterion for intelligence. But now that we have such machines, many tasks we thought could only be done by an AGI can also be done by systems that are not AGI.