Then we already have access to a cheaper, scalable, abundant, and (in most cases) renewable resource, at least compared to what a few H100s cost. Take good care of them, and they'll probably outlast a GPU's average lifespan (~10 years).
Humans are a lot more expensive to run than inference on LLMs.
No human, especially no human whose time you can afford, comes close to the breadth of book knowledge ChatGPT has, or to the number of languages it speaks reasonably well.
I can't hold an LLM accountable for bad answers, nor can I (truly) correct it (in current models).
Don't forget to take into account how damn expensive a single GPU/TPU actually is to purchase, install, and run for inference. And that's to say nothing of how expensive it is to train a model (estimated in the cited article to be in the billions for the latest models, which likely doesn't include the salaries of the folks involved). And I haven't even mentioned the environmental impact of all that power consumption; there's a reason nuclear plants are becoming popular again (which may actually be one of the good things to come out of this).
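For a sense of scale on the hardware side alone, here's a quick back-of-envelope sketch. Every figure here is an assumption for illustration (a roughly $30k sticker price, ~700 W draw, ~$0.12/kWh power), not measured data, and it ignores cooling, hosting, networking, and the training cost entirely:

```python
# Rough amortized hourly cost of running one H100 for inference.
# All figures below are assumptions for illustration, not measured data.
gpu_price = 30_000           # assumed purchase price of a single H100, USD
lifespan_years = 10          # the ~10-year lifespan mentioned upthread
power_kw = 0.7               # assumed draw under load, ~700 W
electricity_per_kwh = 0.12   # assumed electricity rate, USD/kWh

hours = lifespan_years * 365 * 24          # total hours over the lifespan
amortized = gpu_price / hours              # hardware cost per hour
energy = power_kw * electricity_per_kwh    # energy cost per hour

print(f"~${amortized + energy:.2f}/hour")  # roughly $0.43/hour under these
                                           # assumptions, before everything else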
Most humans don't have that either, most of the time.