While I generally agree with you, who has ever counted Google out? We've made fun of Google for lagging while they instead spend their engineering time renaming projects and performing algorithmic white-erasure, but we all knew they were a potent force.
Google has as much or more computing power than anyone. They're massively capitalized, with a market cap of almost $2T and colossal cash flow, and they can throw enormous resources at the problem until they have a competitor. They have an enormous, benchmark-setting amount of data across their various projects to train on. That we're talking like they're some scrappy upstart is super weird.
>As OpenAI moves further from its original mission into capitalizing on its technological lead, we have to remember why the original vision they had is important.
I'm way more cynical about the open source models released by the megas, and OpenAI is probably the most honest about their intentions. Meta and Google are releasing these models arguably to kneecap any possible next OpenAI. They want to basically set the market value of anything below state of the art at $0.00, ensuring that there is no breathing room below the $2T cos. These models (Llama, Gemma, etc) are fun toys, but in the end they're completely uncompetitive and will yield zero "wins", so to speak.
I certainly would not count out Google's engineering talent. But all the technical expertise in the world won't matter when the leadership is incompetent and dysfunctional. Rolling out a new product takes vision, and it means taking some risks. This is diametrically opposed to how Google operates today. Gemini could be years ahead of ChatGPT (and maybe it would be now, if it weren't neutered), but Google's current leadership would have no idea what to do with it.
Google has the technical resources to become a major player here, maybe even the dominant player. But it won't happen under current management. I won't count out Google entirely, and there's still time for the company to be saved. It starts with new leadership.
> Meta and Google are releasing these models arguably to kneecap any possible next OpenAI. They want to basically set the market value of anything below state of the art at $0.00, ensuring that there is no breathing room below the $2T cos
Never thought about it that way, but it makes a lot of sense. It's also true these models are not up to par with SOTA, no matter what the benchmarks say.