Hacker News
Google Invests Almost $400M in ChatGPT Rival Anthropic (bloomberg.com)
174 points by thunderbong on Feb 5, 2023 | 191 comments



It's interesting to read the comments and then look at Slack vs. Microsoft Teams. Slack was the clear leader in at-work group messaging, people thought it was a runaway success, and then Teams basically obliterated it. Google has huge-scale distribution on all levels. When the time comes they'll put out the right product and it will scale to millions upon millions of users within a very short time span. The quality of the experience will also be vastly superior. Google's at a stage where they're not about making speculative moves or putting out half-baked products, because it's just not worth it. That's the nature of big companies with a brand, reputation and scale. Look at Apple: you're not seeing terrible product launches; they take their time and do it right.

Honestly, I don't think this is a one-horse market; many players will be in the game, just like in other markets. They don't need to be #1 and probably won't be, but it will be a quality product that either drives a new search format or a revenue stream for other things, e.g. it plugs into services you already use and increases usage, adoption and costs.

Could they acquire an OpenAI competitor? Maybe, but antitrust is a pretty big problem right now. Most likely we're just going the route of everyone copying the form factor, and whoever has the best experience/data and speed of response wins.


Google has made lots of half-baked products before and terminated them when they didn't work as expected (https://killedbygoogle.com/).

I agree with some of your points regarding scalability, and I'd add that the Transformer architecture (which GPT-3 is built on) was created at Google. They also have all the data in the world to train a better model.

It'll be interesting to see what Google comes up with.


> Google made lots of half baked products before and terminated them when they didn't work

And how many of those new products cannibalized their current cash cow?

Let's not act like ChatGPT is not a completely different sort of threat than game streaming on Stadia or some dumb product like that.


You mean like Google+ coming to the rescue of Google's ads business in the social space... which was shut down. Would never happen.


If you look at the history of companies that failed to adapt to new technology, you see that it is mostly when the new technology requires a significantly different organisation. For example, digital cameras are a very different beast from old analogue cameras. Google+, likewise, wasn't Google's expertise. Google is full of scientists and algorithm nerds; creating engaging experiences or understanding social was never their strong suit.

In Google and ChatGPT's case, a chatbot that functions like an answering machine needs almost exactly the same kind of people, infrastructure and workflows as their current search team. Google could at any time decide to put those people to work on a ChatGPT clone, and that would be supported by all of the search pipelines, data models and release workflows they designed for search.

Will chat make less money than search? Probably. But search isn't such a large part of Google; worst case, Google will have to kill some more side projects.


> Slack was the clear leader in at-work group messaging, people thought it was a run away success and then Teams basically obliterated it. Google has huge scale distribution on all levels. When the time comes they'll put out the right product and it will scale to millions upon millions of users within a very short time span. The quality of the experience will also be vastly superior.

That is certainly what happened for Google Plus – who even remembers Facebook now?

Google hasn’t shipped a big success since the 2000s, with G+ and the Reader shutdown marking the end of the period when they built things which were good for users rather than what their executives wanted in order to sell ads. It’s possible that this could be the big reversal, but given how much worse search has become I’m skeptical that they can do so without different managers at the top.


Android


Work started in the mid-2000s, announced publicly in 2007, and the first device launched in 2008.

Google Plus launched in 2011.

I don’t think it’s just a coincidence that 2008 is also when they bought DoubleClick.


That's an example of what the poster you're replying to is saying: Android shipped in the 2000s before Google Plus.


Gmail, Drive, Docs ...


All late 2000s stuff before their endless rebrandings


Android was released in 2008.

So yeah - what have they released after that?

Stadia had potential, but google killed it. What else?


Chromecast in 2013 was great, but it's also one of those things they did that seems to have stalled in “how do I get promoted again?” limbo.


Counterpoint to “Chromecast as a successful product”: I’ve been given, in promotional stuff, more physical Chromecast dongles… than the number of times I’ve ever used a Chromecast.

I use Apple and iOS devices because I have to do iOS development for work… I use Linux and Windows where they make sense and I live a cross-platform lifestyle… but I think I’ve actively used a Chromecast once in my entire life, because it’s never been better than plugging my own shit into an HDMI port with a cable from my bag…

What’s the point beyond TVs that aren’t smart? (Mine isn’t smart… but I’ve got an Apple TV as a dev device for testing.)

Is there a point beyond that? I’m genuinely curious, as someone that’s given away literally a dozen Chromecast dongles because I give zero fucks about the Google ecosystem of lock-in garbage that might be abandoned tomorrow… oh, and fuck attaching physical spyware dongles to my home network… cannot emphasise that enough.


i personally loved stadia. as a linux guy i got to play AAA games, got a sweet controller, all for free! thanks google!

seriously though they have just been building their base while also trying to grow. they do have a bit of a short attention span regarding growing though. if you don't catch on like gmail yer gone.


They totally ignored the experience of other gaming platform holders.

It took a generation for the Xbox to "catch on". It really took off only in the Xbox 360 era and later. Imagine if the Xbox had been killed off in 2004.

Stadia was shut down after 3 years and first party devs were laid off even earlier. That's completely insane - it takes years to develop proper AAA games.

Just look at what Sony first-party studios are doing. Imagine if Naughty Dog had been dissolved after 3 years.

The Wii U was weak, but Nintendo didn't leave gamedev business. They learned from their mistakes and the Switch became a huge success.

The PS Now streaming service is still running just fine and Sony doesn't plan to shut it down. So the game-streaming business model is certainly viable.

It seems like Google didn't even try to make Stadia a success.


Teams won because, like you said, MSFT was able to bundle it into their existing products and make it almost free, vs. Slack, which is expensive at $12-plus per user.

However it is not a superior product to Slack. It has a lot of basic issues.

However Google could probably do better. My hope is they add it to G cloud.

I want a simpler way to deploy infra without needing to fuck with YAML and know a million AWS-specific things. Gcloud is already a decent experience compared to AWS, and maybe it can be even better if spinning up instances can be made so easy that a product manager can just ask to have a build ready for a feature demo for a customer, without needing engineers to do anything beforehand. Let them focus on building and have Gcloud robust enough to handle it well.


You're speaking of the holy grail for cloud. I think it sort of comes and goes. Google won't deliver that layer, but someone on top might.


Consider the possibility that Google is politically incapable of launching an advanced AI that disrupts its only real cash cow, search. If they were to launch this perfected GPT clone, what if their AdSense revenues dropped to a tenth? Monetizing GPT at Google's scale is a complete unknown, and it may never be as profitable as AdSense.


I think it's unknown in the sense that it's unknown what it will look like in principle. But some kind of thorough monetization is inevitable. Those GPUs don't pay for themselves!

And it's not that hard to imagine at all: ChatGPT as it is shows how much coaxing certain behaviors out of a model, and baking others in, can create a pretty serious product; the whole thing is honestly begging for ad injections.

In general, we live very far from a world where these things aren't immediately going to be bastardized for profit, however well it "works" to do that. Even if this were all truly revolutionary, it wouldn't be worth it if it didn't extract money from consumers; it wouldn't even count as "revolutionary" in the way we use that word these days. There is no hope for a non-adsensed ChatGPT; thinking otherwise is like wondering if the sun will rise tomorrow. Maybe it won't be OpenAI, or Google, but whatever it is, it will rise to the top.

In general, there will never be a technology in our world that can achieve much solely through its own merits; it can only ever succeed by virtue of how efficiently it can turn around money. We are in a small honeymoon right now, but don't forget the rules!


All true.

HN people see politics behind everything these days, which is sad, because in this instance the business reasons are so obvious that it makes these comments sound like 4chan rants.

ChatGPT is slowing down at its current scale. The scale on day one for Google or Apple will be elephantine compared to the scale at ChatGPT right now. Someone has to engineer a credible solution to those issues.

Then there's legal.

And don't even get me started on UX and monetization. For Google, there has to be some way to monetize this before you butcher your market. And believe me, Google has a lot of very smart people searching for a monetization model. It really is just not as easy as people are making it out to be.

Ex:

"Sure, I can answer that question! [But first, were you aware that 9 out of 10 Americans surveyed prefer Fruit of the Loom brand underwear? We weren't! That's why we sent our designers undercover in public restrooms to yada yada yada for 4 more sentences. Click here to learn more!]

Now, on the question of sigmoid vs relu, [Remember to watch Last of Us! Now streaming on HBO Max! blah blah blah] in most back propaga....

<User opens new tab to ChatGPT or Siri+>


> And believe me, Google has a lot of very smart people searching for a monetization model

Nope. The entire hiring system was designed not to hire people like that. They need some “street smarts” / “cowboys”, but those people did not pass Google interviews. Or they were not able to prosper at Google.


I have no actual experience of having worked at Google, but this is exactly my sense. They hire tons of engineering and other talent, but not enough people who can figure out how to build something the market wants.

Or even if such people are hired, they aren’t empowered to the extent that they can impact the company.

It’s an engineer’s playground, and it shows.


Online advertising in its current form will die, because it will be easier to detect an ad than it is to hide it.


And what non ad based solution will people use once they detect ads?


They’ll just use Google’s (or whoever’s) chatbot, but pipe it through another LLM that detects and removes ads, or highlights product placement or something
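A minimal sketch of that kind of filter pipeline. Everything here is hypothetical: `llm` stands in for any callable that takes a prompt string and returns a string, and the toy `fake_llm` just deletes bracketed insertions so the idea can be demonstrated without a real model.

```python
import re

def strip_ads(text, llm):
    # Pass one chatbot's output through a second model that is asked
    # to remove ad copy. `llm` is any prompt -> completion callable;
    # this interface is an assumption, not a real API.
    prompt = (
        "Remove any advertising or sponsored text from the passage below, "
        "returning only the informative content:\n\n" + text
    )
    return llm(prompt)

# Toy stand-in for the second model: it echoes back the passage after
# the instruction header, with bracketed "[ad]" insertions deleted.
fake_llm = lambda p: re.sub(r"\s*\[.*?\]", "", p.split("\n\n", 1)[1])

print(strip_ads(
    "Sigmoid maps to (0, 1). [Watch Last of Us on HBO Max!] ReLU does not.",
    fake_llm,
))
```

In practice the interesting part is the arms race the parent comment implies: the provider would try to weave ads into the prose so that no second model can cleanly separate them.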


> Slack was the clear leader in at-work group messaging, people thought it was a run away success and then Teams basically obliterated it.

Giving it away as part of something many already pay for let an inferior product like Teams gain market share.

The problem in this case with Google is that their main monetization method doesn't fit nicely with a ChatGPT-like interface.


It would be pretty easy to include ads in ChatGPT output. Either interspersed among the generated text, or as part of the text itself!


I agree, but why haven't they, then? I can only assume there is some non-obvious challenge that I'm not aware of.


Probably because OpenAI isn’t really in desperate “find revenue” mode. They’re growing like crazy and have plenty of funding.


Slack is a time sink because it's not task-focused. MS Teams users' approach of creating mostly small dedicated or task-related ad hoc chat groups is a better format, even if the tool itself doesn't have all the bells and whistles of Slack.

Chat isn't designed for project management, but your average Slack user thinks that if everyone vents their opinions, somehow a consensus will develop; more often than not, nothing gets done.


Teams is Conway’s law in action. It’s like SharePoint vs wiki. Everyone here should be thankful for ms teams, it lets one quickly gauge whether or not a business is an actual threat.


Thank you for referencing Conway’s law here. My favourite infodrawing: http://scrumbook.org.datasenter.no/images/ConwaysLaw_Head.pn...


How people comment about MS Teams on HN basically tells you whether they have always been a developer or have experience with the rest of a company.

MS Teams won because it brings collaborative editing to Office. It’s resource-hungry and annoying, but a lot more useful than Slack ever was.


Teams is probably the better pan-company tool if interested in preventing cross-company discussions and a new culture from forming. I wouldn’t say that everyone with experience has a preference for that kind of operation.


> and then Teams basically obliterated it

With the "worse" product

Would they have obliterated it if Microsoft wasn't already in bed with all of these corporations?


I guess by "in bed" here you mean "they're already buying office 365 to get legit Word, Excel, Outlook licenses"?


OpenAI went for the lowest-hanging fruit and got a lot of credit (and money) for it. Creating a competing large deep neural network isn't hard, and I guess it will soon be commoditized because of Google. Remember when OCR was the hottest thing, and then we got the best version of it for free in Google Docs?


Google has failed at just about every product except selling ads, so I really doubt they will produce anything lasting in this area.


Teams is atrocious and it sells only because companies go all in on M$.

I don't expect Google to provide something better than OpenAI; they will likely provide something just because everyone else is doing it. But I think OpenAI will keep a technological edge, purely because of the people they have.


> Teams is atrocious and it sells only because companies go all in on M$.

But this is the point. The technically superior product doesn’t necessarily get the market share due to factors like the Teams distribution model.

It’s not hard to imagine similar strategies from the existing big players.


I wonder what happened at Google. They basically invented Transformers, and have had useful LLMs for a while. BERT, LaMDA, PaLM and Sparrow were all cutting edge.

Why didn't they ship something like ChatGPT earlier? Lack of product vision? Perfectionism?

Is this Xerox vs Apple all over again?


> I wonder what happened at Google.

They suffer from "scared to release an imperfect product that damages PR"-itis.

They are so scared that an AI with Google branding might say something inappropriate, that they prefer to release nothing at all.

For context, they put a lot of engineering effort into making sure "Okay Google" for the voice assistant couldn't be modded to "Yo Dick" or something else off-brand, simply because they were worried about media backlash.


So why not lease the tech to a different company under a different brand they own and release it?

Nobody knows FB owns Instagram.

Nobody is going to care if Google releases a ChatGPT bot on a different website.


That's what I'd do... specifically, I'd license Google tech to ex-google employees who want to start a company, in return for an 80% ownership stake.

It would be a 'you can use all google internal code and infrastructure, for free, as long as you don't use google branding and don't use our user data' deal.

The companies would have the benefits of google tech, without being tied down by internal politics, procedures, risk management, etc.

They could afford to do things like just release a product to the world rather than have a legal and PR team go over every detail for every law in every country worldwide as part of a 6 month long review process.


I might take that 80% deal, for the right killer IP that I knew how to apply in a marketable way, and a couple years of sufficient GCP credits.

I might bootstrap revenue without much hiring, so that business success means the team does better financially than the sure-thing FIRE if they'd just taken a FAANG job and grinded promotions.

This business giving away 80% equity would lean on the value of IP, and on knowing how to be very effective with solo/small teams -- not 'scale' in the sense of trying to herd hundreds of software and product and business people. (This is another thing that big orgs can have trouble doing, and so sometimes do insulated spinoffs/units instead, just to unburden people.)


I think they basically employ that model already at Area 120 [0]. This interview platform called Byteboard was spun out of Area 120 a couple of years ago, and I imagine Google has a large ownership stake in it, while avoiding the brand association [1].

[0] https://area120.google.com/

[1] https://techcrunch.com/2021/10/05/technical-interview-platfo...


Lol, think from the founder / teams perspectives. Giving 80% ownership would be insane. How could you build a company like that? It’s not so simple.


Those are called Spin-Ins. Now abandoned, Cisco once pioneered this model: https://news.ycombinator.com/item?id=8348900

The closest Google had to it was Area 120, which has also been abandoned?


Isn't this what incubators do now?


The value add by most incubators and accelerators is minimal, compared to what a large parent company can provide though.


The sort of people who would accept those terms are not the sort of people you want to build a startup.


> Nobody knows FB owns Instagram.

That might have been true at some point - but a combination of their extremely loud rebranding of both to Meta, and the media constantly saying 'Instagram, which is owned by FB' or words to that effect has changed it. Even my elderly parents are aware Instagram is owned by FB / Meta.


I feel like this was done so retail investors would know Instagram was owned by Meta. Not users.


If that was the case, I wouldn't be greeted by the new Meta logo every time I open the WhatsApp, Instagram or Facebook apps.


That could be because Meta doesn't have the same negative connotation FB does.


> Nobody knows FB owns Instagram.

They're pretty upfront with it... It says "By Meta" on the launch screen


I know many people who use WhatsApp every day who are still surprised when I say it’s owned by Facebook/meta. They just don’t see what they don’t want to see.


I know people who I remind this to regularly, and they just don't care enough to remember.


> Nobody knows FB owns Instagram.

Even my non tech friends know.


So much effort that could have gone into making it actually usable. I don't think the product has improved much since it was released. Even very basic things like "navigate to Walmart by bus" actually don't work.

More things that don't work:

1. Delete the last screenshot.

2. Set timezone to EST.

3. Turn on hotspot.

4. Turn on Nearby Share.


Most of this is due to Android's permission model. The Google app doesn't have the permissions required to do any of those 4 things.

However, it should at least just ask for permission the first time you try to do that.


It can’t be as simple as someone Googling “how to request permissions on Android” – this kind of thing is a product decision. I would assume it’s more due to their promotions culture: are any of those getting demoed on a big screen at I/O?


I don't think it's due to the permissions model. Google Play Services has almost all permissions on your device by default.


People say it's to preserve the brand, be woke, etc., but IMHO it's simply because it doesn't fit Google's business.

ChatGPT is very cool, but they don't make money from it, and GPT-3 has had a paid public API for a while now; I've read that the money they make from it is very little, insignificant compared to the funding they get from VCs.

Currently there's no clear path to monetisation, but it has the capacity to take away a lot of Google's use cases. In essence, it's like Napster destroying the music-CD business.

Maybe it's also a bit like Kodak: invent the technology, but the current business is too good to be messed around with. This new tech is the future, but we are happy in the moment.


What I almost never see mentioned is the fair-use doctrine. As soon as some website (Reddit, Yelp, etc.) is able to prove that their data has been used to train an AI that is monetisable, they'll be up in arms.

I believe this tech is so different and revolutionary that the lawyers have yet to wrap their heads around it. I predict a million lawsuits incoming any time now. This, I believe, is the primary reason for Google not releasing any product before the waters are tested. For what it's worth, Google already has access to lots and lots of data that it owns and can train its AI on (Photos, Gmail, Docs, obviously Search, and probably YouTube as well).


Well, every single executive at Google has read The Innovator's Dilemma multiple times, so I doubt that's some novel insight that no one at Google has thought of.


If you’ve read The Innovator’s Dilemma, this sounds a lot like that. Highly suggest reading it.


I'm familiar with the core ideas but definitely should read the book, thanks.


Pretty much this.

If Google wanted, they could've easily released a ChatGPT version that is 100x better than the current one, considering they invented the technology and have thousands of times more and better data to train it on.

They just weren't interested.


That is a huge assumption. The simplest explanation to me is that while they have larger language models, they don't have a better product than ChatGPT to release. I would think building that product is what this $400M represents.

The impressive thing about ChatGPT to me is how well it understands what you want, even with very sparse input. Even if it gives wrong answers, it still feels like you are both on the same page. That seems like the secret sauce, even if a larger language model would give more correct output. I wouldn't be shocked at all if Google doesn't currently have anything that feels the way ChatGPT does when it comes to interaction, and now they are racing to build exactly that.


This doesn't make sense. They did invest in PaLM, which is ~3x larger than GPT-3 (540B vs. 175B parameters) but nowhere near 100x better. I do agree that the limiting factor is the data, not scale, though. What data does Google have that OpenAI (or any other LLM company) can't scrape from the web as well? Using Gmail data without consent would definitely be a move towards a mountain of lawsuits.


Google has a lot of scanned/ocr'd books that aren't available in public data sets.

I wouldn't be surprised if they used anonymized Gmail data, though. Even if they removed all proper nouns, it'd probably provide a lot of extra tokens for training.
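A crude sketch of the redaction idea above, under a loud assumption: it treats any capitalized word that doesn't start a sentence as a "proper noun". A real anonymization pipeline would use named-entity recognition, not this heuristic; the `[NAME]` placeholder is made up for illustration.

```python
def redact_proper_nouns(text):
    # Replace mid-sentence capitalized words with a placeholder token.
    # Heuristic only: real pipelines use NER, and this will also hit
    # words like "I" when they appear mid-sentence.
    tokens = text.split()
    out = []
    for i, tok in enumerate(tokens):
        starts_sentence = i == 0 or tokens[i - 1].endswith((".", "!", "?"))
        if tok[:1].isupper() and not starts_sentence:
            out.append("[NAME]")
        else:
            out.append(tok)
    return " ".join(out)

print(redact_proper_nouns("I met Alice in Paris yesterday."))
```

The point of the parent comment survives the redaction: most of the tokens in an email are ordinary words, so even heavily scrubbed text would still add training data.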


PaLM isn't public; there's no way to know whether it is better or, if so, by how much.


> People say its to preserve the brand

I don't think it's that either, given the rate at which they've been involved in overt censorship over the last few years.


Because ChatGPT isn’t fully baked yet. You can get it to say the stupidest stuff, which is fine for a research release with somewhat sophisticated users, but when it’s “how granny gets information” is when the lawyers get excited. It doesn’t just filter info it finds on the internet; it generates it from whole cloth. This is the Tesla FSD of information: it’ll confidently drive you into a pole with a smile on its face.

I love it, but it’s not ready for all Google users yet.


They do have LaMDA, and it is available to test in their AI Test Kitchen. It seems to handle sensitive and offensive content much better than ChatGPT for me, but it still can't perform basic addition the way ChatGPT can. I think it is technically better than ChatGPT, but maybe they are only going to release the perfect product.

To be fair, ChatGPT is far from production quality for serious applications: lots of misinformation, and you can make it produce very offensive content. It is good for toying around, but you cannot take the output seriously.


I think a token effort to avoid offensive content is fine, but ChatGPT should quickly detect whether the human wants to go outside the box and allow it. If a human pushes, it means they understand the risks and take full responsibility for the outcome.


This is not how Google's AI Test Kitchen is designed. AI Test Kitchen seems quite boring and very tightly framed: you can ask what the best Dyson model is, for example, or play the old-style "GPT dungeon game", but it doesn't really go off the rails (this is part of the product specification, sadly :/).


> chatGPT should quickly detect if the human wants to go outside the box and allow it

This is why "jailbreaking" is a thing. Once you convince the model that it's OK, it'll let you do anything from then on.

-Emily


I couldn't disagree more. ChatGPT would be extremely easy to convert into "HateGPT", and it would be able to create some pretty powerful and effective political, racial, etc. propaganda.

I think it's right that the owners understand what the weaponization of ChatGPT could do and prevent it, and I think we need laws (and fast) before weaponized AI like ChatGPT turns into a disaster for humanity.


In my experience it is like working with a genius idiot, the type that refuses to be wrong, which means the shithead (if it were human) requires verification and curation. So what if I need to verify? I do that anyway, because people have imperfect memory, documentation is often old, and who knows what unexpected whatever could be impacting my expectations.

I welcome idiot savants.


People I know say the LaMDA Kitchen release is unbelievably limited by comparison. A Kitchen session has three sections: the 'Ask a question' prompt is limited to under 100 characters, and the response is like the existing Google search answer snippets. The 'Make a List' section just produces lists of short bullet points. And the 'Creative' section is limited to responding with stories involving dogs, which is a little bizarre to say the least.


The outside world gets a glimpse into what is happening at Google right now by looking at the people who leave. I wouldn't have been surprised if Google had invested in Character, a startup formed by ex-Google LM people. To cite The Information [0]:

"At Google, Character’s co-founders Shazeer and Daniel De Freitas were instrumental in developing Google’s LaMDA language model, used for natural language processing research. They decided to launch their own startup after getting frustrated with big company bureaucracies, Shazeer previously told The Information."

[0]: https://www.theinformation.com/articles/character-seeks-250-...


They'll shut it down anyway; that's why. It's hard to trust them these days. LLMs require years of commitment; check out the Google graveyard for a list of products that also needed that kind of commitment.


They’re REALLY fumbling the bag with Search and Maps lately as well. My experience with both as a user has never been worse. I’ve flirted with exiting Google search before, but this time it got bad enough that I fully committed to Kagi.


I've used DuckDuckGo for a couple of years now, so I've gotten very used to it, and on the odd occasion I try Google, the amount of ads and the different sections with news, images, products, and other unnecessary clutter that isn't actual search results make for a really uncomfortable experience.


They’re a stockholder-owned corporation now, beholden to profit above all else, making things like layoffs, hedged bets, and scared-to-execute moves all the more common. Remember when they were “don’t be evil”? You can’t be not evil when you’re on the market.


> They’re a stockholder owned corporation now. Beholden to profit above all else.

Neither of those things is true. It's 51% owned by the founders, and public companies are not required to care only about profit.


Your reply’s logic shows my point. 49% of the company can be purchased via the market. It is a public company.

Public companies are legally required to put shareholder interest first; that's the whole point of this facet of capitalism: to provide profit to the “owner” class of society at the cost of worker exploitation.

Get Adam Smith’s boot out of your mouth and recognize that literally anything a corporation does is for the goal of profit. Corporations are not human - and only have a decision making process that puts “return” first. Zero of them are “not evil.”


> Public companies are legally required to put shareholder interest first

No, they are not. And they definitely aren't required to care about the interests of 49% of them.

> Get Adam Smith’s boot out of your mouth and recognize that literally anything a corporation does is for the goal of profit.

Is everything an animal does for the purpose of finding food? (No.)


everything an animal does is for the purpose of reproduction lol


They made T5 and released the weights. While T5 "only" goes up to 11B params, it's an efficient model for a wide variety of tasks and the base for a lot of other models/projects. Of note, Imagen [1] and Open-Assistant [2] use T5 and FLAN-T5.

[1] https://imagen.research.google/

[2] https://github.com/LAION-AI/Open-Assistant


> I wonder what happened at Google.

Finance and management, knowing their bread and butter is being an advertising distribution company, undermine, kill, and dilute any projects that are not obviously and undeniably tied to improving their advertising revenues. Their big problem is that they never communicated this core identity to the larger corporation, and somehow search is allowed to coast...


> Why didn't they ship something like ChatGPT earlier?

In order to launch something you have to need to launch it. Either because you're losing market share, or because you need the revenue, or because your customers expect it. Google rarely has any of those things.


Maybe they thought they could do better than just "building a bigger model".


Probably the lack of freedom to innovate, or middle management suppressing free thought, stopping cool stuff from being made in devs'/engineers' spare downtime.

I've seen so many cool ideas thrown around by devs that could actually go on to be monetized by companies go to waste. And they'd be worked on for free (spare time), and often better than outsourcing, because nothing drives true innovation like passion.

Focus on BAU, keep your head down and be lucky you're not getting fired.


> Why didn't they ship something like ChatGPT earlier?

I would say they have, just in a different form. For a few years now, when you Google something, you're given a quote/excerpt at the top of the results that typically answers your question.


Google is too big to seriously innovate. No top-down focus or vision, just recent Stanford graduates hawking random ideas. Then there's the promo culture, where everyone is busy trying to extract as much comp as possible instead of building. (I'm guessing.)


Lack of balls. Same as Kodak, etc.

The fact that they are freaking out is actually a good sign though


> Why didn't they ship something like ChatGPT earlier? Lack of product vision? Perfectionism?

The simplest answer is that the people with the know-how did not work at Google. Knowing why they didn't is not so simple, though.


They thought it was too powerful to be considered safe for release. And their ad revenue nevertheless had been great even without it. No rush.

They stepped back, and then watched someone else capture the market.


Something to be said for "Move fast and break things." – Zuckerberg


Probably only hiring people that cram leetcode instead of actual smart people.


I've heard that Google have models at least as good as the competition, but they're not using them in production because of scale. Don't know if it's true, but it sounds reasonable. Running inference that uses a lot of compute might be too expensive when you have billions of search requests and make a fraction of a cent on each, or whatever the actual numbers are.

If that's the case they're not working towards a "better" model, they're probably more concerned with a cheaper one.

And maybe something where you can make sure to construct the answer using their knowledge graph instead of hoping the black box doesn't just give plausible sounding replies that are wrong.


Google never released a state-of-the-art model; what they run in Translate, ranking, etc. are efficient models on par with the state of the art from a few years ago. They train huge models that are too large to even demo (PaLM 540B) and never release them, not even for academia to try to build on. Their voices and OCR are behind the competition; I'd rather use ElevenLabs and Amazon Textract.

Google makes a lot of noise but delivers little, and what they deliver is almost never the best in the world. I think maybe they made a mistake with TPUs and the PyTorch-NVIDIA path seems to be more productive and can deploy more advanced LLMs.

There was a time when I was dying for an invite to Gmail or Google Wave, but that time has long passed and nothing in between has caused that level of excitement. What I appreciate most from Google is YouTube, and not for the UI or recommendations but for simply being the largest video platform, with everything you could want in music and education. If I lost access to YouTube, the internet would be dead for me. Well, almost dead.


Google does produce world-class products that are virtually free (and wouldn't be through competitors). As they say, the best products run so well you barely even notice them or their complexity.

Google Maps comes to mind.


Google Maps has actually gotten worse over time, and not even in the same way Search has, where spam has been winning. E.g. in Japan they used to license the very best map data, but switched to the second-best and cheaper one also used by Apple, and do a worse job presenting it.

And of course globally its color scheme and visual hierarchy are stuck in 2005.


You make a really good point about blending large language models with their Knowledge Graph (I worked for a short while at Google using their Knowledge Graph).

What gets me the most excited about systems like ChatGPT is not the systems as they are now, but rather all of the possible hybrid architectures, mixing in old tech and new ideas. I think ChatGPT is just a little first step.


If this were the case, Google would offer a paid version of LaMDA to cover the costs of infrastructure ;) So this is not the reason.


Interesting list of investors:

- Eric Schmidt (former Google CEO/Chairman), Series A

- Sam Bankman-Fried, lead investor in Series B

- Caroline Ellison, Series B

https://www.crunchbase.com/organization/anthropic/company_fi...


I wonder whether the courts will claw the invested money back.


Assuming that the investment was made by one of the entities that filed for bankruptcy protection (which seems likely based on filings), bankruptcy trustees generally attempt to sell the equity that the company purchased.

In general, that's because the clawback period is short: 90 days before filing, unless the transaction was between insiders (which no one has alleged of the Anthropic investment). In this specific case, the equity has probably appreciated. OpenAI is Anthropic's closest comparable and its valuation went through the roof. Even after allowing for transaction costs, FTX trustees will probably get more for the equity than it cost.

So, the question is likely this: who will buy a ~10% stake in Anthropic?


The circle jerk goes on


It feels a bit like a Kodak moment. For sure Google can replicate ChatGPT. But there is no business model behind it yet.

Bing, on the other hand, just doesn't care that much and is willing to integrate it to get more traffic.


> there is no business model behind it

Why can't Google use the user's prompt (and AI's output) to display targeted Ads?


They can, but if their competition isn't doing the same, their offering is strictly worse.

I'd guess, given the amount of self-cannibalization Google has been doing the last few years, that the fervor with which they've been sticking ads everywhere, even to the clear detriment of utility, suggests they're struggling to post black numbers through growth.

Having to operate a conversational AI service which competes with their own main offering, at a loss, to run a competitor out of business is not what they want to do. Not with bearish investors, and especially not given the regulatory scrutiny they're under.


Their competition still hasn't found a profitable business model, though. OpenAI started offering a paid subscription, but it's unclear how well it's performing and how high their margins are.

Google's dominance might have ended, but this is a new market. Various players will try different business models, and I'm sure we'll see someone offering an ad-based product.


I think the Google era ended in Nov 2022 when ChatGPT was launched. They had a good quarter-century run. Keyword search on top of ads, spam, and disinformation is not a good product anymore, and won't cut it after people's expectations moved up.

They can't even faithfully follow a search term, and instead return exactly what you wanted to avoid. Try finding "the shortest time for crossing the English Channel entirely on foot": it will tell you all about swimming, boats, even hovercraft and ferries, but not about an immigrant trying to walk the tunnel.


I think keyword search is still actually legitimately useful and a distinct problem from natural language search interfaces, but Google has found itself somewhere in between keyword search and natural language search in a way where they're getting the drawbacks of both and the benefits of neither.

When a query fails and gives bad results, it's not clear why it has failed or how to fix it. This is above all very confusing and frustrating.

In part I think the problem is the insistence on using the same search box for both modalities of search. This is straight up a poor design choice.

It's also worth mentioning that they have an inherent conflict of interest in the business model. Since they sell ads, presenting clean search results with few ads would actually hurt their bottom line. But this is a limitation in their business model and not their tech.


Impressions and clicks would go down drastically.

- No 6+ ads on the search results page

- No direct traffic being diverted to publishers (which display AdSense ads)


People would block those with AdBlock. And if they dare do product placement in the model output, people will react with extreme prejudice - a model trained to fool them into bad decisions, that's what it would be.


The question is how much it is worth. It looks like a completely different ads model compared to Google search, with multiple results and the paid ones on top.


Ads in the results I guess? Ask ChatGPT to write you an essay and it has a couple of paragraphs of how great Coca-Cola tastes.


>it has a couple of paragraphs of how great Coca-Cola tastes

In fairness, once investors start putting pressure on these AI chatbots to monetize, and ads and product placement start to creep into AI-generated content[1], that would make it easier for people, teachers, and evaluators to detect it. Brought to you by Carl's Jr.[2]

[1] https://www.youtube.com/watch?v=VBi4QlY_xr4

[2] https://www.youtube.com/watch?v=dQPU_BiT25w


A future where a mission to mars fails because of some comment overflow about Pepsi. Ahaha


I hope they do it. It would be fun to trick it into generating highly brand inappropriate product placements.


I get that GPT3 is a miracle of miracles – that it's even better and even more promising than 'Singularity studies', Soylent, HTML5, Bitcoin, autonomous vehicles, NFTs and Web2.0... /s

...Google has already "improved" its search with AI. More and more AI "improvement" each year for the past decade.

I can imagine what further Google AI "improvements" will look like as it frantically chases ChatGPT (which time might well show is a smaller search niche than plain-text search).

If I were in charge of Google, I'd dumb search down again - dumb it down to the point that it provides useful results.


> dumb it down to the point that it provides useful results

I don't think you understand the extent to which search results have been broken by an arms race of SEO and spammers.

Contemporary Google search would be incredible on the internet of 2005, but it's operating in an adversarial environment (and Google in practice has to restrain itself for antitrust concerns).


Nope, Google search is bad, not because of SEO spammers, but because it prioritizes results that make money for Google, not what you're actually seeking.

And for what it's worth, I think AI-generated search results will actually be more resistant to spamming, since the AI can more deeply understand content to determine whether it is spam or not, in the same way email spam is mostly a solved problem. Of course, if entities like OpenAI become "evil", then it is a simple matter for them to render the search less useful in the pursuit of profits.


I've worked at Google search; I don't think you understand what a huge, cumbersome mess that codebase is. Many of the complaints people have about it are probably bugs where some modifier isn't properly taken into account in some section of it.

And I never heard anyone talk about sending people to more valuable sites. Google makes most of their money from search, not on third-party ads they might monetize on whatever site you end up on; if even a tiny fraction stop using search due to dissatisfactory results, that is a net loss. So from what I know, the issue is scope creep and technical debt for sure.

Also, there isn't really one search team with one search algorithm; different kinds of searches have different people, so whatever you search for might be handled by someone who isn't good at their job. But, to go with the greed reason you are looking for, I'd bet that if people started seeing bad results for monetizable search queries, Google would quickly replace the person responsible and fix it. On the other hand, low-value searches from people who aren't looking to buy anything probably cost more to run than they bring back in ad revenue; I wouldn't be surprised if nobody at Google noticed if those got worse. In addition, since Google mostly uses internal tools, with internal help pages and an internal version of Stack Overflow, their programmers wouldn't notice it got worse either, and things that neither programmers nor money care about won't get fixed.

Lastly, I quit Google years ago and have no stocks or friends left there; I don't care about them. I just wanted to clarify a few things. There are a lot of money grubbers at Google, but I mostly saw that in the ads organisation. Search and ads are very separated, though; ads doesn't have power over search, since most ad money comes from search and search is the founders' darling. So when things are broken, it's more likely to be incompetence or bloat than evil. At least for now; that will change as the culture there normalizes to the rest of Google and the rest of the industry.


Worth pointing out: they took a huge round of funding from SBF prior to his arrest. Not sure what liability they have from the bankruptcy court and the surrounding clown show.

>The Series B follows the company raising $124 million in a Series A round in 2021. The Series B round was led by Sam Bankman-Fried, CEO of FTX. The round also included participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR).


So then perhaps the equity in Anthropic will be owned by the FTX creditors through bankruptcy and will appreciate greatly to the point where it could repay all the creditors. That would be hilarious.

For what it's worth, when MtGox blew up, it took years to recover and distribute what Bitcoins were left. Everyone was frozen out, but when they finally got their money back it was worth far more than when MtGox blew up. I have friends who are sitting on millions thanks to this. They probably would have sold on the way up or during one of the crashes. But thanks to the illiquidity during bankruptcy, they're far better off now.


Guess the Center for Emerging Risk Research failed to catch the risk that was emerging from their co-investors


You are assuming they want to "catch the risk" but maybe they want to be the risk?


This fits the trend where they only care about risks that don't exist like AGI, because that's just so much more interesting than the ones that do exist.


Just a general reminder (non-specific to this situation, but it can explain many moves):

Sometimes funds don't invest because of the product or the company, sometimes it's just to take over another investor's position (or make the other investor's position look better).

When things go really wrong, the ultimate solution is to use the company that has had a successful exit to buy a failing company.


Can anyone please explain to me why Google needs to buy companies like Anthropic when they already have DeepMind and Google Brain? I thought those were Google's AI spearhead.

To me it seems like Google isn't lacking internal ML talent, but more like it's lacking management, direction, vision and focus in the ML space. Buying more companies while staying fragmented and directionless won't necessarily give them a win.


> Can anyone please explain to me why does Google need to buy companies like Anthropic

Because Anthropic is a splinter group from OpenAI, who are Google's #1 rivals. Anthropic was founded by Dario Amodei, who was the VP of Research at OpenAI and previously led the GPT-3 project. They are buying GPT-3 rivals.


You conveniently left out that they were also funded by Sam Bankman-Fried.


> Can anyone please explain to me why does Google need to buy companies like Anthropic when they already have DeepMind and Google Brain?

Same reason they have both DeepMind and Google Brain. (I’m guessing)


Did any of those acquisitions result in successful Google products, or did Google just buy them so none of their competitors would?


Seems like the second option to me. I think WaveNet has been integrated into many Google products. Then DeepMind invested quite a lot in RL research, with not much real-world use. And now they are making a major pivot towards LLMs (see their job openings), although apparently Sparrow is supposed to be better than GPT-3.


This is a tactical deal between Google Cloud and Anthropic. Google Cloud invests in the business, but Anthropic in turn has to use GCP resources. It appears to be more of a business expansion for GCP than a strategic investment in an AI team.


> To me it seems like Google isn't lacking internal ML talent, but more like it's lacking management, direction, vision and focus in the ML space.

Ding! Ding! Ding! Give that man the prize, he guessed the winning answer!


It has been indicated already in other comments. To have one competitor is bad. To have two is worse.


> Can anyone please explain

Talent acquisition.

Also : Deepmind is based in the UK and does whatever the F*CK they want inside Google instead of actually contributing to Google's bottom line.


>Deepmind is based in the UK

DeepMind isn't UK exclusive. Ian Goodfellow works there as well from the Bay Area, AFAIK.

>does whatever the F*CK they want inside Google instead of contributing to Google's bottom line

And that seems like the bigger problem with Google, not the lack of proper talent, which I also raised in my argument.

You can't successfully steer a ship when everyone runs amok doing whatever they want. Making the ship larger won't fix this.


> Also : Deepmind is based in the UK and does whatever the F*CK they want inside Google instead of actually contributing to Google's bottom line.

Sure, but that's what everyone does at Google. They don't hire engineers to make money, they hire them because Brin and Page enjoy keeping engineers around like pets.


> because Brin and Page enjoy keeping engineers around like pets.

Funny, but inaccurate.

First, Brin stopped giving a damn about Google a very long time ago, to go and enjoy his life.

I suspect from around the time of the "China incident" where both he and Page realized that having ethical positions at the scale Google had reached back then was just not possible anymore.

Second, Page held on for a while longer, but eventually followed Brin in not giving two f*cks anymore and just enjoying his billions.

Page then proceeded to pick the most milquetoast personality in his entourage as his successor (Sundar Pichai), probably because he thought Pichai was the candidate most likely to inflict the least damage with the mighty thing he was handing over (by doing nothing with it).

No, I think the explanation is very simple: Google believes that acquiring top talent is the path to further success.

They're correct but fail to see that it is only a tiny first step on a very long road.

Namely: once you've got the talent, you must:

   a. Not spoil them rotten by throwing oodles of money at them
   b. Direct them towards worthy goals
   c. Focus on execution by actually *managing* them
   d. Not allow them to play the "hire me for 3M USD a year, I'm an AI diva" musical-chairs game, and actually deliver tangible value.
These days, Google is simply incapable of these four things.


Like Mike Tyson once wisely said, everyone has a plan until they get punched in the face.


How is this not the top comment?


I think the "Anthropic Partners with Google Cloud" [1] title is more appropriate

Also this quote is interesting [2]

  Anthropic was formed in 2021 when a group of researchers led by Dario Amodei left OpenAI after a disagreement over the company’s direction. They were concerned that Microsoft’s first investment in OpenAI would set it on a more commercial path and detract from its original focus on the safety of advanced AI.
[1] https://www.anthropic.com/news/announcement

[2] https://archive.ph/ciZPV


Interesting interview with the sister and brother founders of Anthropic on the Future of Life Institute Podcast (also on YouTube). They were talking about developing safe, steerable, and explainable AI. I hope they continue to live up to their good intentions (no reason to doubt that they will try).

As much as I enjoy using OpenAI’s (and Hugging Face’s) models and infrastructure, I also like to see lots of competition.

Cory Doctorow recently wrote [1] about how social platforms start out by giving a lot of value to users, then shortchange users to give value to advertisers, and finally shortchange advertisers to take huge profits for themselves. I wonder if the large language model businesses will see a similar trajectory.

[1] https://pluralistic.net/2023/01/21/potemkin-ai/


FAANG+M are all machines of oppression, now. They are the cause of a less free press, a less free software ecosystem, and especially financial oppression which they are most successful at - as enabled by decades of loose monetary policy and buybacks in dire need of regulatory controls.

A brief look at the economic circumstances of the "haves" and the physical circumstances of the have-nots across the West coast under the hard labor of the SF Fed will make this abundantly clear to basically anyone.

It's a place where every cause is an unbearable burden and simultaneously the only hope in perpetuity until hush gifts and roadblocks rain down from Capital wherever.

Further, their expenditures are why you've heard that Elon Musk is a "robber baron" when in reality it is these companies that are only investing in their own funny money while the quality of their products dwindles, and often falls catastrophically, as they compete with and muscle out the old guard rather than cooperating and improving things. Or, even better, they collude and calcify investment in places where growth is truly needed, like the telecoms.


That's quite surprising given that the prevailing opinion seemed to be that they already have comparable tech in-house.

Although at Google's scale it might make sense to eliminate competitors via acquisition.


Google's in-house tech hasn't managed to pass the threshold for release into a product.

By investing in a competitor, they at least indirectly get their foot in the door with someone who releases to the public, even if the product quality is worse and the quality bar for release is lower.


Google Search Quality Team: "Don't use AI generated content. We will penalize or nuke your sites."

Also Google: "We need to enhance search with AI generated content"


Google is slow in the AI space because it isn't willing to take the risks its competitors are willing to take due to PR and branding factors. Anthropic with its focus on AI safety is the wrong choice here, it will slow them down not speed them up.


This is scary: Google was always proud of being on top of AI research. That a company with so many engineers needs to outsource something so core to its mission means their systems are not as good as they want us to believe.


In German we have a saying: Everyone is cooking with water. Google has no secret sauce, just momentum, reputation and its position as old big tech leader.


While that's true, some can afford more water than others and finer spices.


Yeah, but there is a reason why in a restaurant setting most food sold is fast food. No need for finer spices and more water; you just need to hit that threshold.


They have a lot of sauce (money), but it's not secret.


a.k.a. they have credit... and it is doomed to fail...


Scary? It’s reassuring to me


They made BLOOM, and do have an internal chat AI. They have always lacked focus, however.


Um... BLOOM was independently trained?


or it's just a good bet to maximize chances. It's not like they invested a large portion of their assets.

edit: the last sentence sounds kinda weird given the recent layoffs round


My question would be: What is the purpose of all the Product Managers and Group PMs at GoogleAI?


Fiefdom. Every PM is a little baron of a small kingdom, and they must grow...


After so many years, Google should know by now, this is not the 'right' way of doing/managing things. This just pushes the coasting 'theory' even harder.



Google Brain?

Hasn’t there existed an entire department for 10+ years called Google Brain that should have been working on creating their own “ChatGPT”?

Is this a failure due to how Google has structured their initiatives/promotion review process?


Google already has a ChatGPT competitor called LaMDA (https://blog.google/technology/ai/lamda/), they just haven't released it to the public (or to the research community for that matter). Probably because it would cannibalize their revenue from Google search (Innovator's Dilemma).


> Google and Anthropic declined to comment on the investment, but separately announced a partnership in which Anthropic will use Google’s cloud computing services.

I really hope they have tested some prototype extensively before signing the deal. Running this kind of stuff on GCP could be a massive clusterfuck.


I assume it means they will spend money on GCP for running/hosting things.

Are you thinking they'll let it loose and train a ChatGPT on GCP activity/log data or something? What would be the point of that?


I read enough horror stories about GCP in the last few years to prefer AWS over GCP with some exceptions (mostly GKE). What I'm saying is I hope they don't expect the same level of service from GCP that they expect from something like AWS, regardless of what they are using it for.


Why would startups requiring a ton of GPU compute use AWS or GCP, unless they received extraordinary discounts? I have never used them, but I once did a price comparison with AWS and Lambda Labs, and it didn’t look good for AWS. There must be many smaller providers that offer better rates, or self-host.


I don't work with GPU-based stuff, but I work with some convoluted Lambda-based systems that are kinda expensive. You could literally replace all of them with a $5/month VPS (several orders of magnitude cheaper), or an equivalent Lightsail instance. You would go with Lightsail if you care about an SLA.


I'm honestly surprised. Google can replicate this research and even push it forward. Is this simply because Microsoft is all in on OpenAI?


Which is interesting, because Google claims that their language model is even better than ChatGPT...


The people who got access to LaMDA recently beg to differ.


Now I'm really confused. I asked ChatGPT

---

1) How does LaMDA compare to GPT-3?

"LaMDA and GPT-3 are both language models developed by OpenAI, but they have different focuses and capabilities. [...]"

---

2) Why is it referred to as "Google LaMDA" if it is an OpenAI technology?

"It may be referred to as "Google LaMDA" due to confusion or incorrect information, as LaMDA is a technology developed and owned by OpenAI, not Google. [...]"


Great example of why ChatGPT is nothing to worry about.

https://blog.google/technology/ai/lamda/amp/

Without provenance, GPT is only useful for things where you don't care about correctness, or ones where you already know or can find the answer.


Why doesn't ChatGPT always just annotate its responses with references, maybe ranked in order of perceived reliability? That would make its answers more useful. Probably an idea for an app based on ChatGPT.


Because that's a completely different technology than an LLM, and we don't know if it's possible to do yet. It's surprising that what we do have works as well as it does.


And now we shall ask LaMDA for a comparison.


Google seems to forget a simple thing. What Google used to do great, and doesn't much anymore, is only one thing: fast software.

When in doubt choose fast. When not in doubt, choose fast.


Is it as politically biased as ChatGPT?


Microsoft beat them to it. Soon we'll have an explain bar with Clippy on Edge. I think I'd use Edge and maybe even Bing if they just put GPT on there to mess around with.

Google, late to the game, will have Anthropic, which will be killed off/extinct within a few years anyway. The name may reference the anthropic principle, but I'm going with a rewording to "Anthropic extinction".



