I'm glad this analogy is at the top. I think some large companies like AWS really should not blow money on AI in ways that make far more sense for companies like Meta, Google, and Apple. AWS can't trap you in its AI systems with the network effects those competitors can.
Companies like OpenAI and Anthropic are still incredibly risky investments especially because of the wild capital investments and complete lack of moat.
At least when Facebook was making OpenAI's revenue numbers off of 2 billion active users, it was trapping people in a social network where there were real negative consequences to leaving. In the world of open source chatbots and VSClone forks, there's zero friction to moving on to some other solution.
OpenAI is making $12 billion a year off of 700 million users [1], or around $17 per user annually. What other products that have no ad support perform that badly? And that's a company that is signing enterprise contracts with companies like Apple, not just some Spotify-like consumer service.
[1] This is almost the exact same user count that Facebook had when it turned its first profit.
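For what it's worth, the per-user figure falls straight out of the headline numbers. A back-of-envelope check in Python (only the figures quoted above, nothing else assumed):

    # Rough ARPU from the reported figures
    revenue = 12e9   # ~$12B annual revenue
    users = 700e6    # ~700M users
    print(f"ARPU: ${revenue / users:.2f}/user/year")
    # -> ARPU: $17.14/user/year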
> OpenAI is making $12 billion a year off of 700 million users [1], or around $17 per user annually. What other products that have no ad support perform that badly?
That's a bit of a strange spin. Their ARPU is low because they are choosing not to monetize 95% of their users at all, and for now are just providing practically limitless free service.
But monetizing those free users via ads will pretty obviously be both practical and lucrative.
And even if there is no technical moat, they seem to have a very solid mind share moat for consumer apps. It isn't enough for competitors to just catch up. They need to be significantly better to shift consumer habits.
(For APIs, I agree there is no moat. Switching is just so easy.)
> They need to be significantly better to shift consumer habits.
I am hoping that a device-local model will eventually be possible (maybe a beefy home setup, plus an app on your mobile devices that connects back to your home for use on the go).

Currently, hardware restrictions prevent this type of home setup (not to mention that the open source/free models aren't quite there, and that it's difficult for non-tech users to actually set one up). However, I choose to believe the hardware issues will get solved; it's merely a matter of time.

The software/model issue, on the other hand, is harder to see solved. I pin my hopes on DeepSeek, but maybe Meta or some other company will surprise me.
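To be fair, the software plumbing for the "home box + thin mobile client" setup already exists. A minimal sketch, assuming a llama.cpp (or Ollama/vLLM) server at home exposing an OpenAI-compatible endpoint; the hostname and model alias are made up, and on the go you'd reach it over a VPN like WireGuard or Tailscale:

    import requests

    # Hypothetical home server exposing an OpenAI-compatible API
    # (llama.cpp's server, Ollama, vLLM, etc.); hostname is made up.
    HOME_SERVER = "http://llm.example.home:8080/v1/chat/completions"

    resp = requests.post(
        HOME_SERVER,
        json={
            "model": "local-model",  # whatever alias the server loaded
            "messages": [{"role": "user", "content": "Summarize my notes."}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])

So the missing pieces really are the hardware and the models, not the client/server glue.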
I don't have the hardware to run or try them, but judging from the Hugging Face discussion forums, gpt-oss seems to be pretty heavily censored. I would not consider it a viable self-hosted LLM except for the very narrowest of domains (like coding, for example).
I'm not sure where censorship comes into this discussion; it seems like cloud models are censored as well? And abliterated local models are frequently released? Correct me if I'm wrong or misunderstanding you.
Either way, it's just an example model; there are plenty of others to choose from. The fact of the matter is that the base-model MacBook Air currently ships with about half as much RAM as you need for a genuinely decent local LLM. The integrated graphics are fast and efficient, and the RAM is fast. The AMD Ryzen platform is similarly well suited.
(Apple actually tells you how much storage their local model takes up, under Settings > General > Storage, if you're curious.)
We can imagine that by 2030 your base model Grandma computer on sale in stores will have at least 32GB of high-bandwidth RAM to handle local AI workflows.
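The RAM math behind that 32GB figure is simple enough. A rough sketch (4-bit quantization is a common choice; the 1.2x overhead factor for KV cache and the like is my own ballpark):

    # Rough RAM footprint for a quantized local model
    def model_ram_gb(params_billion, bits=4, overhead=1.2):
        # weights = params * (bits/8) bytes; overhead covers KV cache etc.
        return params_billion * (bits / 8) * overhead

    for p in (8, 32, 70):
        print(f"{p}B params @ 4-bit: ~{model_ram_gb(p):.0f} GB")
    # 8B ~5 GB, 32B ~19 GB, 70B ~42 GB -- hence 32GB+ of unified RAM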
Which is why I made the claim that the hardware "problem" will be solved in the near future (I don't consider it solved right now, because even the Apple hardware is too expensive and insufficient, IMHO), but model availability is a much, much harder problem to solve.
There does seem to be a mind-share moat, but all you have to do is piss off users a little bit when there's a good competitor. See the Digg-to-Reddit exodus.