OpenAI is only one of many companies building AI. While ChatGPT and DALL-E are impressive, there are domains where those models are pretty much useless and other AI solutions are needed.
For hosted models I’ve noticed that Bard has been quietly improving. It was awful at first but now seems to be closing the gap.
Actually-open AI with open weights and open source is innovating rapidly too, especially when it comes to making models more efficient on smaller hardware. We have fast local mixtures of experts now. Rumor is that Llama3 is training as well, and that it may be some kind of MoE.
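To illustrate why mixture-of-experts helps on smaller hardware: the router sends each token to only a few experts, so most parameters sit idle on any given forward pass. Here's a minimal toy sketch in PyTorch (my own illustration, not any real model's code; all names like `ToyMoE` are made up) showing top-k routing.

```python
# Toy sketch of a mixture-of-experts layer: with 8 experts and top_k=2,
# only ~1/4 of the expert parameters do work for any given token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, dim=512, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)
        topw, topi = weights.topk(self.top_k, dim=-1)   # keep only the top-k experts
        topw = topw / topw.sum(dim=-1, keepdim=True)    # renormalize gate weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e          # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += topw[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(4, 512)                            # 4 tokens
print(ToyMoE()(x).shape)                           # torch.Size([4, 512])
```

The point is that total parameter count (what you store) and active parameters per token (what you compute) come apart, which is exactly the trade-off that makes these models attractive for local hardware.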
I feel like focusing on efficiency first for local models, rather than shooting to beat GPT-4, is the right approach: without major efficiency improvements, a push for raw dominance will just yield a model that only wealthier people can afford the hardware to run.
OpenAI still seems to be in the lead, but that lead is shrinking.
If you have by far the most powerful model, as they do, it's a radical advantage over competitors: your model can perform automations that theirs cannot.
OpenAI's lead is phenomenal: remember that they have kept GPT-4 at the top of the leaderboard for a year and counting, while everyone else (except maybe Anthropic) struggles to keep a model in the top 3 for more than a few months.
I've definitely seen Bard beat ChatGPT once in a while. I don't use Bard much, but when ChatGPT gives me an answer I know is wrong, I go see if Bard does any better. Sometimes it is better, though not often enough for me to switch ... yet?