Look up the numbers. OpenAI actually loses money on every paid subscription, and they’re burning through billions of dollars every year. Even if you convince a fraction of the users to pay for it, it’s still not a sustainable model.
And even if it were the highest-profit branch of the company, they would still see a need to do whatever further increases profits. That is often where enshittification sets in.
We are currently in the sweet phase, where growth (gaining attention and customers, and locking users into newly established workflows) is what matters most. Unless technical AI development keeps moving as fast as it did at the start, this is bound to change.
I actually wondered about this myself, so I asked Gemini in a long back-and-forth conversation.
The takeaway from Gemini is that subscriptions do lose money on some subscribers, but the expectation is that most subscribers won't use their full quota each month. This has been true of subscription models since the beginning, well before AI (e.g. magazines, Game Pass, etc.).
The other surprising (to me, anyway) takeaway is that the AI providers have some margin on each token for PAYG users, and that VC money is not necessary for them to keep providing the service. The VC money is capital expenditure on infrastructure for training.
Make of it what you will, but it seems to me that if they stop training they don't need the investments anymore. Of course, that sacrifices future potential for profitability today, so who knows?
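The breakage arithmetic described above can be sketched in a few lines. Every number here is hypothetical, chosen purely to illustrate the shape of the claim, not anything a provider has disclosed:

```python
# Illustrative "breakage" economics for a flat-rate subscription.
# ALL figures below are made-up assumptions, not real provider data.

PRICE_PER_MONTH = 20.0      # hypothetical subscription price ($)
COST_PER_1K_TOKENS = 0.01   # hypothetical marginal serving cost ($)
QUOTA_TOKENS = 5_000_000    # hypothetical monthly token quota

def subscriber_margin(utilisation: float) -> float:
    """Profit or loss on one subscriber using this fraction of the quota."""
    serving_cost = utilisation * QUOTA_TOKENS / 1000 * COST_PER_1K_TOKENS
    return PRICE_PER_MONTH - serving_cost

# A heavy user who exhausts the quota is served at a loss...
heavy = subscriber_margin(1.0)   # 20 - 50 = -30
# ...but a typical light user is comfortably profitable,
light = subscriber_margin(0.1)   # 20 - 5 = +15
# so the pool as a whole can still be in the black
# as long as light users greatly outnumber heavy ones.
pool_average = 0.9 * light + 0.1 * heavy
```

With this (invented) mix, the average margin per subscriber stays positive even though the heaviest users are individually unprofitable, which is the standard subscription logic the comment attributes to Gemini.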
That’s just a general explainer of subscription models. As of right now, VC money is necessary simply for these companies to exist. And they can never stop training or researching. They also constantly have to buy new GPUs, unless at some point there’s a plateau of ‘good enough’.
The race to continue training and researching, however, is driven by competition, and that pressure falls away if competitors also can't raise more money to subsidise it.
At that point the market may consolidate and progress may slow, but not all providers will disappear - there are enough good models that can be hosted and served profitably indefinitely.
For some uses, sure. But for plenty of uses that can be covered with in-context learning, RAG, or tool use, it doesn't matter.
Even for the uses where it does matter, unless providers get squeezed down to zero margin, it's not that new models will never happen, but that the pace at which providers can afford to produce large new models will slow.
That's the source you chose to use, according to you.
You don't mention cross-checking the info against other sources.
You have the "make of it what you will" at the end, in what appears to be an attempt to discard any responsibility you might have for the information. But you still chose to bring that information into the conversation. As if it had meaning. Or 'authority'.
If you weren't treating it as at least somewhat authoritative, what was the point of asking Gemini and posting the result?
Gemini's output plus some other data sources could be an interesting post. "Gemini said this but who knows?" is useless filler.