The inability to fully define a thing doesn't invalidate every attempt to outline it. At the very least, it's reasonable to conclude that an intelligent being must be able to perform basic reasoning reliably (given that all the necessary information has been acquired). Current GPT models consistently fail at this, and neither a longer context window nor a larger network can fix it.