If you offer an API, you need to dedicate servers to it that keep the model loaded in GPU memory, unless you don't care about latency at all.
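A rough back-of-envelope sketch of why keeping a model resident in GPU memory is expensive. The fp16 assumption (2 bytes per parameter) and the 20% overhead figure for KV cache and activations are illustrative assumptions, not numbers from the comment above:

```python
# Minimum VRAM to keep a model resident, under assumed fp16 weights
# (2 bytes/param) plus ~20% overhead for KV cache and activations.
def min_vram_gb(params_billion, bytes_per_param=2, overhead=0.2):
    return params_billion * bytes_per_param * (1 + overhead)

# A hypothetical 70B-parameter model needs on the order of 168 GB,
# i.e. multiple 80 GB GPUs dedicated per replica just to serve it.
print(min_vram_gb(70))
```

Every concurrent low-latency replica multiplies that hardware footprint, which is why API serving ties up dedicated machines.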
Though I wouldn't be surprised if the bigger reason is the PR cost of releasing with an exciting name but unexciting results. The press would immediately declare the end of the AI growth curve.
There definitely is: storage, machines at the ready, data centers, etc. Also, OpenAI basically loses money every time you interact with ChatGPT: https://www.wheresyoured.at/subprimeai/