[flagged] Google Gemini Eats the World (semianalysis.com)
62 points by toth 9 months ago | 21 comments



That was a whole lot of words to say Google has a lot of GPUs and if you want to find out more, pay for a subscription. :|


TPUs, not GPUs.


They also have lots of GPUs.


Yeah but that's not what the article is about.


"The point here is Google had all the keys to the kingdom, but they fumbled the bag. A statement that is obvious to everyone."

They didn't fumble anything; they purposely did not bother releasing/continuing research to match OpenAI, because it would kill their business.

There's a name for it: the innovator's dilemma. Google literally followed the textbook description of it.


> they purposely did not bother releasing/continuing research to match OpenAI, because it would kill their business.

They certainly haven't matched OpenAI, but is there any evidence that's the reason?

There's loads of other reasons Google could have dropped the ball:

* Maybe they looked at the chatbots that existed in the past, and things like Siri and Cortana, and concluded that chat interfaces weren't a priority for them, and instead focused on things like image processing (which will surely be useful for self-driving cars)

* Maybe they did initial, small-scale work and produced unexciting results, discouraging further investment. I don't imagine the researchers behind Microsoft's "Tay" chatbot were being lauded by their superiors.

* Maybe they were hamstrung by an excess of ethics, such as deciding not to train their model on questionably licensed data scraped from the web. Or feeling it was wrong to release a model that would sometimes confidently make false claims, which might create dangerous situations, slander people, or suchlike.

* Maybe they misallocated their resources, like putting too much of their budget into making five generations of their custom 'TPU' chips when everyone else just buys Nvidia's cards.

* Maybe they hired people who loved producing papers but weren't eager to deal with the hassle of supporting a production system. A common enough mindset in academia, where papers are productivity. And if the big paychecks keep rolling in whether you release something or not - why risk releasing something half-baked?

* Maybe the people who were inclined to get their ML research into production all ended up working on gmail spam detection, adwords, youtube automatic subtitling, google translate, click fraud detection, and suchlike.


If that's the case, why did they release Bard?

Most people considered Bard to be a lesser alternative to ChatGPT, launched as a pitiful attempt by Google to show that they are still relevant in the AI game. Definitely a fumble to me.


Because they had to offer at least some sort of alternative, even a bad one, or people would actually start using Bing. But have you actually used Bard? It's very poor. I could even go so far (obviously my own belief) as to say that it's by design: make people use the chatbot, have a bad experience, and not bother with it again. They even call it an "experiment", because they plan to take it away.

If I didn't know Bing had a chatbot with internet access, I would have stopped using them. I know plenty of people, even software developers, who don't know that Bing has a chatbot with internet access; they don't use ChatGPT (3.5) because it doesn't have internet access.


It is usually not a good idea to disrupt one's own cash cow, but once someone else has already done so, it is usually a good idea to compete for that new market. (The thing they do seem to have dropped the ball on, though, was not having their competing product ready and good enough soon enough.)


Yeah, and I think something that is often missing in the colloquial understanding of the innovator's dilemma is that the organization with the dilemma isn't doing anything wrong; they end up in a losing game despite making the right decisions.

I feel like a better name for it would be "disruptor's advantage", because that focuses on the more active side of the game, the side with more agency, rather than the more passive side.


I think you are misinterpreting it. The entire point of the book is that innovators eventually need to disrupt their own innovation or be disrupted by others; i.e., they cannot rest on their laurels forever. So they clearly did do something wrong and didn't make the right decision.

Google is the epitome of a company that rested on its laurels, became the very evil they ostensibly didn’t want to become, forgot how to innovate, and treated their customers with hostility. Good riddance.


I think you're right. And I think the fact that Bing had a slight bump but then went back down[0] is going to make Google question whether or not they want to release SGE[1] fully to everyone.

Then again, I could be daydreaming and it goes through anyway. But as you say, this kind of chatbot AI directly threatens Google's ad business, and if they had wanted to, they could have eaten a lot of people's lunch, but it would have cost them enormously.

[0]: https://martech.org/ai-boost-bings-market-share-is-down-6-mo...

[1]: https://blog.google/products/search/generative-ai-search/


A bag of keys?


> Google has woken up, and they are iterating on a pace that will smash GPT-4 total pre-training FLOPS by 5x before the end of the year.

I can't find any resources that validate this claim. Also, I find the rant about GPU-rich and GPU-poor environments to be unnecessary ...


I think one of the biggest things missing from this point of view is mixture of experts.

GPT-$VER does very well across a wide range of tasks because of the mixture of experts approach.

A "small" (relatively) startup implementing/deploying AI in the space targeting a specific use case (as one example) is often just finetuning a model on their dataset (in itself a moat). Given the amount of data and scale of compute for this task (relative to MAGMA) it can be done on VRAM limited cards or as we've seen from open source finetunes something like a cloud A100x8 for a few hours at a time.

Many a startup has been successful targeting niches the juggernauts just don't care about (or understand) - what's insignificant and just not worth it to them is massive for anyone else.

Sure, these probably won't end up as unicorns (there's a reason they're called that), but many of us are perfectly happy with valuations in the seven-to-nine-figure range or running a real business that is actually profitable - which is most successful startups.

Or, for internal org use you can do something like "finetune Codellama on our entire codebase" or "finetune Llama on all of our data", which few would be willing to do with an OpenAI/Google finetune, as it requires sending your entire codebase to them. An agreement or promise that they won't use it isn't comforting enough to many of these orgs, especially in niches with significant regulatory issues.
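For what it's worth, that kind of in-house finetune looks roughly like the sketch below: a minimal LoRA run with Hugging Face transformers/peft/datasets. The model name, file paths, and hyperparameters are illustrative assumptions, not anything from the article or this thread; the point is that the data and the resulting adapters never leave your own infrastructure.

    # Hypothetical sketch of "finetune CodeLlama on our own code" with LoRA.
    # Everything below (model id, data path, hyperparameters) is illustrative.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "codellama/CodeLlama-7b-hf"            # assumed base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

    # LoRA keeps the trainable parameter count small enough for a
    # VRAM-limited card or a short A100 rental, per the comment above.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"))

    # "our entire codebase" dumped to plain-text files (assumed layout)
    ds = load_dataset("text", data_files={"train": "internal_code/*.txt"})["train"]
    ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments("codellama-internal-lora",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               num_train_epochs=1, learning_rate=2e-4,
                               fp16=True, logging_steps=10),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("codellama-internal-lora")  # adapters stay in-house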


> the sleeping giant, Google has woken up, and they are iterating on a pace that will smash GPT-4 total pre-training FLOPS by 5x before the end of the year.

OpenAI isn't standing still, though. Is there an established (or estimated) Moore's Law for LLMs?


LLM growth (or frontier ML model growth in general) is driven in part by the original Moore's Law, but right now the main driving factor is the growth of investment, i.e., how much companies are prepared to spend on a training run. This has been growing at more than 10x per year.
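As a rough back-of-the-envelope (using the ~2x-every-two-years and ~10x-per-year figures mentioned above, which are the commenter's numbers, not measured data), investment dominates the compound growth pretty quickly:

    # Back-of-the-envelope only: hardware following classic Moore's Law
    # (~2x every 2 years) vs. spend growing ~10x per year, per the comment above.
    years = 3
    hardware_gain = 2 ** (years / 2)   # ~2x every two years
    spend_gain = 10 ** years           # ~10x per year
    print(f"after {years} years: hardware ~{hardware_gain:.1f}x, "
          f"spend ~{spend_gain:.0f}x, combined ~{hardware_gain * spend_gain:.0f}x")
    # after 3 years: hardware ~2.8x, spend ~1000x, combined ~2828x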


Promises are cheap, even if made by researchers and engineers who deeply believe they can beat OpenAI, let alone any assurances given by PR or marketing departments. And let's not fool ourselves: everything announced publicly is prepared, or at least approved, by such departments.

Computational power is not the only resource. There is also the training process itself (see this talk by Andrej Karpathy, https://www.youtube.com/watch?v=bZQun8Y4L2A, at the 1-minute mark) and, obviously, data and its quality.

I will be convinced only after Google demonstrates that Gemini is better than GPT-4 (in some, or all, tasks).


Can someone share the full article?


I was hoping to discover what made Gemini so good, but it's an article about GPU disparity. Interesting nonetheless, but a bit click-baity.


Paywall alert




