They are still in the lead, and I'd be willing to bet that chat.com/chatgpt.com has 10x the DAU of all other providers combined. Barring massive innovation on small sub-10B models, we are all likely to need remote inference from large server farms for the foreseeable future. Even where local inference is possible, it's unlikely to be desirable from a power perspective in the next 3 years. I am not going to buy a 4xB200 instance for myself.
Whether they offer the best model or not may not matter if you need a PhD in <subject> to differentiate the response quality between LLMs.
Requiring Gemini to take over Google Assistant's job when installing the Gemini APK really rubbed me the wrong way. I get it. I just don't like that it was required for use.
Same with Microsoft and all their Copilots, which are built on OpenAI. Not to mention all the other companies using OpenAI since it's still the best.