Google I/O and the Coming AI Battles (stratechery.com)
200 points by Amorymeltzer on May 15, 2023 | hide | past | favorite | 177 comments


The Google AI strategy is to use their weight and flood the market with everything AI so any competitor is either very niche and/or more expensive than "free".

Typical big co move.

Also the amount of tweets that suddenly popped up on Twitter with the same template of "ChatGPT is finished! Now that Bard has launched here 10 tips on how to..". It was clearly a Google marketing push ( scummy as always, but Google have done way worse in the past ).

This leads me to believe that Bard is at best OK and has no "AI advantage" over other competitors and it's basically who has the bigger ranch and deeper pockets until everything is dead around them.


This seems like attributing intent where there is none.

AI hype guys on Twitter work pro bono. Nobody pays them; they just shill whatever the latest thing is.

Google has no central AI strategy because the company has weak leadership and is largely directionless.

The distributed actions of a lot of people do not amount to any concerted plan.


Absolutely this. Google could develop an AI that's superior to GPT-4. They may have already. Under their current leadership, it wouldn't make a difference anyway as they wouldn't know what to do with it. Google is on a fast track to becoming Xerox.


It's called Gemini, Google's own mega-multimodal model. Google merged Google Brain and DeepMind to start developing Gemini. All the talent left at Google has gone all in on this.

The other AI products at Google were just their old stuff, hastily polished up and released. Gemini is their real attempt at catching up to GPT-4.


So far I have found Bard to be significantly worse at some basic tasks. A simple request to give me the text of the opening scene of King Lear was met with a badly comical rewrite in modern prose that got basic things wrong (all the daughters professed their love for their father and received equal shares of land and dowry). I pointed out the inaccuracy and reinforced my request for the exact verbatim text, and it then proceeded to give, still, a rewritten version, albeit slightly closer to the true text. Bing and ChatGPT both complied on the first try.

On the other hand, Bard performed much better at a more complex task: I provided fake data spanning six years and a few rows of different categories, leaving the sixth year blank. Without prompting, it reformatted the data from comma-delimited into a nice table. It also gave a brief summarization and analysis of the trends. When I asked follow-up questions it gave detailed responses that were moderately insightful for anyone familiar with the data domain, given a few minutes to review the data set. Finally, I asked it to give a projection of what the values would be for the final column, which I had left blank. It did so, with an explanation of the trend it observed, as well as a description of the linear regression model it used, and apologized for not being able to give more complex modeling due to the small data set.
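For reference, the projection Bard described amounts to a simple least-squares fit. Here's a minimal sketch of that kind of check; the years and values are made up for illustration, since the original data set wasn't shared:

```python
# Hypothetical reconstruction of the test described above: five observed
# years of fake data, with the sixth year projected via a linear fit.
# All numbers here are invented for illustration only.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b, fine for tiny data sets."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2018, 2019, 2020, 2021, 2022]   # observed years
values = [100, 110, 120, 130, 140]       # one category, perfectly linear here

a, b = linear_fit(years, values)
projection = a * 2023 + b                # fill in the blank sixth year
print(round(projection))                 # -> 150
```

With only five points, this is about as much modeling as the data supports, which matches the chatbot's caveat about the small data set.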

GPT-4 provided answers and analysis that were comparable in quality, but required additional rounds of prompting and clarification.


Just to disentangle this a bit, GP's post was about Gemini and you're responding about Bard. The two are different things. Gemini hasn't been trained yet; it's the next-gen thing that's currently being worked on (Pichai mentioned it at I/O).

It obviously remains to be seen how good Gemini will be. I just wanted to clear up the apparent confusion.


I'd imagine you could extrapolate from PaLM/PaLI, which are both impressive.


Google are taking the time to make sure that there is not much that comes back on them. Being the incumbent leader carries greater risks of regulatory problems and reputational damage. If OpenAI gets something wrong, it's annoying; if Google does it, it damages the business much more significantly. They will attract regulatory scrutiny far harder than other operators will.


I guess that's also the reason why Microsoft's stake in OpenAI is 49% and not 51%. Seems like a tried and true strategy.


We can safely assume that a lot of YouTube/Twitter influencers get help from ChatGPT to design their titles/messages. This is documented by some prominent AI vloggers. A quick test on ChatGPT asking for YouTube titles on the new Bard announcement vs ChatGPT comes back with:

""" ChatGPT in Trouble: Bard's Arrival Sends Shockwaves Through the AI Community! ;) """

What's funny is that Bard gives suggestions that are much less catchy than ChatGPT's: so you still need ChatGPT to write trash about ChatGPT.


> AI hype guys on Twitter work pro bono. Nobody pays them; they just shill whatever the latest thing is.

I thought this was the case, until I saw how identical all of the messages were, including the order of the bullet points. It was clearly some form of paid marketing - or fake accounts, except they all had blue ticks (not that that means much these days).


> I thought this was the case, until I saw how identical all of the messages were, including the order of the bullet points.

This has always been the case for all corporate fanbois on Twitter. As an ancient example: there was a time when Steve Jobs proclaimed that 3.5" screens were the perfect size, and 5.5"/6" phones were too big(!!). This was followed by suspiciously similar talking points recirculating in the "iPhone Twitter" echo-chamber[0]: including the same doctored image[1] that compared thumb-coverage of a 3.5" iPhone vs the Galaxy Note (IIRC). I don't think Apple was directly paying people to say those things: the talking points simply confirmed their biases and they felt compelled to share the "truth".

0. Trended on Twitter, and even appeared on blogs written by Apple superfans like Marco Arment and Gruber

1. The phones were not to scale and the image was inaccurate in a way that was unfavorable towards the non-iPhone phablet.


Can we also observe how ridiculous it is to be arguing over whether or not similar Twitter content is first-party astroturf... about generative AI?

That's a bit like debating whether a gold miner is proficient at alchemy -- if the primary endeavor was actually successful, it would preclude needing to engage in the secondary.

If you had higher-quality generative AI, wouldn't writing marketing tweets, instead of paying for them, be the first thing you used it for?


> Google has no central AI strategy because the company has weak leadership and is largely directionless.

Evidence please?


I think they are suffering Monopolyitis.

Here's what happened to anything even decent at Microsoft: some shitty nakedly sociopathic MBA comes in and starts picking it apart for blatantly anticompetitive or monopoly protecting concerns, and the tech people behind it lose all morale and say "fuck it, it's a day job".

Here's what happens to anything at Google: some shitty "product manager" (toootally different than the Microsoft monopoly MBAs. Totally.) comes in and says "how do we sell more ads with this". And... well, yeah, the same thing happens.


Covert public opinion campaigns are incredibly common, especially on Twitter.

Google has billions of dollars it doesn’t know what to do with: they can certainly part with a tiny fraction of it and pay some PR firm to save face after the ChatGPT thing.

What often happens is a few paid employees get the ball rolling and everyone else starts doing the hyping for free.


>AI hype guys on Twitter work pro bono.

Why do you assume not being explicitly paid by Google means not being part of Google's marketing strategy?

Google's whole business model is getting people to do work for free ("produce content") by making them believe they are doing something else.


I think they’re getting paid. The tweets are all similar, down to the wording of the intro. It’s unlike any organic AI trend on Twitter that I’ve seen, and it reeks of “we want you to mention these three things:” prompting from a Google-sponsored PR firm.

I would’ve believed the first three were coincidence, but by the time I got to tweet number 4 I was already re-reading Submarine. http://www.paulgraham.com/submarine.html


> The tweets are all similar, down to the wording of the intro

This is what happens when they are all using the same A/B testing and title generation tech.


But I'm not saying they are not getting paid - I'm saying they are not getting paid by Google. They might be making money on sponsored links, or being part of some content farm. Barnacles aren't aware of how they are working for their whale.


How can you be sure? I agree that’s a possibility, but you seem certain. It seems more unlikely they’re not getting paid.


Because when something is possible and the incentives are there, then it's either already happening, or will eventually happen.


Bard's advantage is integration with Google's "moat" which is the best search engine in the world.

In all my tests so far, this integration fails. The Bing AI worked every time. (Can't believe I just wrote those words, but here we are...)


> It was clearly a Google marketing push ( scummy as always, but Google have done way worse in the past ).

I don't think you understand engagement farming. The only objective these hustlers have is to gain more followers, and they will say anything about anything as long as it leads to them standing out in your TL and thus increasing their chance of a follow (and of course, eventually monetization)

Last year they were opining about Web3. In December they were the ones screaming "Google is Finished", then that became generic so now there's a new angle. In 2 months it'll change again. Their words are completely meaningless and bear little relation to reality. The most important thing is to establish the poster as some kind of authority figure who always knows what's going on, so please follow them.


I noticed the same. "ChatGPT is old news, here are 10 ADVANCED prompts for Bard".

The advanced prompt in question = "create a table from these 3 numbers"


The vast majority of tweets about ChatGPT are the same garbage. I think it’s mostly due to Twitter giving too much weight to tweets and likes from “verified” users.

I wouldn’t be surprised that if you looked at the details on those tweets, it’s the same few thousand dudes creating them and liking them and retweeting them.


The worst part is the tweets themselves are probably generated


Lol, I posted "Google is BACK! MusicLM rules!" and no one paid me for that. I just think MusicLM is cool!

Funny to think that somewhere out there I'm loaded into someone's consciousness as a pawn in the marketing machinery of The Alphabet.


Bard is terrible, by comparison. ChatGPT feels like spontaneous answers. Google Bard will literally copy top search result answers to questions, and doesn’t answer them relevantly. It’s a bad extension of search.


exploding head emoji

C&P "Google just updated its free competitor, Bard."

C&P "ChatGPT has now a big problem"

C&P "Here are 7 incredible things Bard can do (that ChatGPT cannot)"

C&P "ChatGPT is in trouble"

exploding head emoji


Also so telling that every Bard comparison is to ChatGPT, when the correct comparison would be with Bing.


> has no "AI advantage" over other competitors

Couldn't that be said about search as well nowadays? and yet here we are.


> Also the amount of tweets that suddenly popped up on Twitter with the same template of "ChatGPT is finished! Now that Bard has launched here 10 tips on how to..". It was clearly a Google marketing push ( scummy as always, but Google have done way worse in the past ).

There's clearly a Big Tech turf war going on over AI. All sides are slinging poop/propaganda. I've seen more hysterical and uninteresting anti-Google FUD than ever on this here Orange Site over the YTD. Special mention of the "world is literally ending because Google requested .zip TLD from ICANN" or pre-IO anti-hype "Google CEO earns as much as other highly-paid CEOs".

Superfluous spam that I wish would be filtered, but oftentimes it's produced specifically to get upvoted on HN, which is a dilemma for moderation.


> ChatGPT is finished! Now that Bard has launched here 10 tips on how to..

As if the spammy AI threads couldn't get worse, they managed to do it.


Dude, or ma’am, thank you for mentioning these obviously-sponsored tweets.

How do they guarantee hundreds or thousands of likes? Presumably the only reason the algorithm keeps showing them to me is that they’re popular. Otherwise they’d be labeled “sponsored”.

The points are always so bad, too. I asked Bard to summarize one of my repos, and it hallucinated that it was an http server written in Go. My repo didn’t have a single .go file. “Internet-connected AI” is just mistaken.


They carefully flood their tweets with likes from bot farms and once there’s a lot of inorganic activity it leads to a ton of visibility and therefore further organic activity


Had a similar experience. Asked Bard to generate a Firefox plugin and it responded that it was a text-based assistant and couldn't comply [1] whereas ChatGPT just did it. It didn't work, but it still generated it.

[1] https://twitter.com/jeroenvlek/status/1657195209686683648


>It didn't work, but it still generated it.

So neither could generate the plugin, it's just that one of them lied about it.


No, I suspect it's more the case that ChatGPT could generate it anno 2021, but can't anymore due to changes in Chromium (I switched to Edge ultimately) and the Gmail API and Google Cloud credentials workflow that I tried to add to it later.

It was interesting to see that ChatGPT annotated its code with pitfalls to avoid, that it actually didn't avoid in the code.


The company I work for is building a browser extension that involves ChatGPT, and we can unfortunately confirm that it is, in fact, not capable of writing browser extension code.


Just like a real human;)


>No, I suspect it's more the case that ChatGPT could generate it anno 2021

Why? I mean, why does it sound more plausible to you that it ever could, when we have countless examples of models lying (or rather hallucinating)?


Because it was trained on old documentation, but I see your point. My original point was also not necessarily that ChatGPT was correct, but that ChatGPT at least gave something to work with, whereas Bard simply refused.


Twitter doesn't mark bot accounts as sponsored.

As to how - many ways, some of them really creative. One that I remember was an account reposting stuff that I genuinely liked, some left-wing activist content, and only now and then it would push something different, like a message about how this one time brand X turned out to be crap during a demonstration.


Most of the hype bots aren't labeled as bots. They are run covertly, and replaced when they get burned.


Google’s problem is that if it wanted to not break its current business model it would need a chatbot that answers whatever the highest bidder desires.

The fast pace of ‘internet time’ means Amazon, AOL, Facebook and Google can follow the trajectory of Ford, AMC, Chrysler, and GM in a fraction of the time. Amazon was a leader in e-commerce once; now they count on someone who subscribed to Prime for the last 15 years to subscribe for the next 15 years. Google search used to be good, but now it is corrupt. Search for a ‘used Honda Fit’ and the first search result is an ad for new cars from the one dealer that owns all the new-car dealerships in town, and even though they presumably have used cars on their lot, they’ll try to sell you the same CR-V they’d try to sell you in person. I’d accept any kind of ad from someone who is selling what I am buying, but Google aids and abets numerous acts of brand destruction every day.

They know how to walk the line of running a corrupt ads engine without getting anybody arrested, but it is an unsolved problem in A.I. ethics to get away with running a corrupt chatbot, and that is why Google is in so much trouble.


To be honest... it actually would be entirely unsurprising and perfectly workable for Google to sell off the top AI answers. Something along the lines of

Your query: "Google, show me the best place to eat in manhattan"

"Bard's partners at Manhattan steakhouse are known for their delicious chargrilled steaks. Other options include..."

Sure, Google won't do that today, but they didn't do ads at the top of search for a while... until they did.


Google promised never to blend the ads with the content back in the “don’t be evil” days. Their first ad product was simple text ads that went on the side of the search results that maybe left some dollars on the table but actually kept the table open by looking squeaky clean legitimate.

Google has a massive valuation because whenever Wall Street demands “M0R M0NEY!” they can just increase the ad load. The thing is, they and their investors depend on every penny of ad revenue they get, and any innovation that reduces it, even just a little bit, is an absolute disaster for them. That’s why it is a ‘disruptive’ technology to Google: it eats their lunch right now, and the possibility they might be able to increase their ad load someday doesn’t make up for losing it. I mean, shareholders can hit the SELL button and run in as much time as it takes me to write this sentence.


Yeah, but this is why Google has its dual shareholder structure. Sergey and Larry can say "Hey, fuck you, Wall Street", watch the share price drop in the short term, and take a hit to make sure that long term they drive out competition in the space.


> Amazon was a leader in e-commerce once

did i miss something and they are no longer half a trillion revenue top ecommerce company in the world? who did they get replaced by?


Yeah, it's the only place I know of where I can order from 1,000s of products and have them delivered the next day...


You can find 1000 products listed on their web site but 999 of the listings are bogus, fraud, or both.

Items you can buy off the shelf at Target for $40 are for sale for $70 at AMZN.

Some people get one day delivery, but other people (me) get five day delivery with Prime, particularly ZIP codes that have an Amazon distribution facility in them. (Everyone I know who gets 1 day delivery has a Cheesecake Factory in their town... But for Christ's sake I order things from Japan on Ebay and get them before I get things from the AMZN warehouse in the next state.) I guess they think I want to watch Rings of Power so bad I'll subscribe to Prime anyway but you know... I don't.

My understanding is that the online store consistently loses money, they are a going concern only because of AWS. But guess what, I've already moved enough spend from AWS to Azure that subjecting me to 5 day delivery cost them much more than it saved.

As for replacements, see every other retailer that sees fulfillment as a way to make a positive impression. Adorama, Best Buy, and Target just to name a few. It's true that many of those have bizarre limited product line ideas (e.g. Best Buy sells Sony camera bodies but has no real selection of lenses, but they think I'm stupid enough to buy a Samsung washing machine, which is not a good choice unless you want to buy another washing machine in a few weeks) but the product listings are basically honest outside AMZN.

Long term, see Temu and other Asian retailers. When I look at arXiv I see a large literature in "sequential recommendation" almost entirely by Chinese and Indian authors. Looks like a "missile gap" for e-commerce and social commerce if you ask me.


I read through the full article and the cognitive leaps the author makes are too big for me to follow.

It’s like listening to a mad scientist connect totally unrelated (seemingly) events.

E.g., I don’t at all understand the linkage between current events and the schism between the Catholic Church and Protestants denoms.

Maybe I’m just too dumb to follow this much brilliance.


> E.g., I don’t at all understand the linkage between current events and the schism between the Catholic Church and Protestants denoms.

His point is that LLMs are a disruptive technology at the level of the printing press. The power structure most famously disrupted by the printing press was that of the Catholic Church.

EDIT: Yeah, this isn't what he's saying. His analogy is more like the major LLM companies like OpenAI or Google are like the Catholic Church, which is being disrupted by European regulations, which are like the Protestants. Or the opposite. He literally offers both directions as possibilities. And in this analogy the open models (the refinements of the 'leaked' LLaMA models which can fit on a single GPU) are like the wacky persecuted fringe sects which were exiled to the United States and which he thinks shouldn't be counted as either Catholic or Protestant. At the end he says "This is, admittedly, a rather speculative and far-reaching Article", with which I have to agree.


That’s what I thought too but I went and reread that part. He’s saying something else.

The American companies like Google/OpenAI (trying to export AI globally) are on one side, opposing them are Europeans (who want AI to be controlled, licensed, and regulated). Together, these two regimes are the Catholic Church and mainstream Protestantism of AI.

Opposing both these sides are open source AI models. He calls this group radical reformers like the Amish.

Yeah, don’t ask. This much brilliance is way above my pay grade.


He says "This is where history is interesting to consider, particularly the invention that I have long held is most analogous to the Internet: the printing press." Maybe his main metaphor for disruption is the printing press and he already used it for talking about the internet. Now there is a new disruptive thing, so he has to make his metaphor more complicated if he doesn't want to look like he's repeating himself.


> I read through the full article and the cognitive leaps the author makes are too big for me to follow.

Very typical of the author. Except for a few valuable ideas presented years ago, I haven't read or heard anything worthwhile. Mostly unsubstantiated speculation and “I've seen the big picture” type of ramblings.


Many of his recent articles are structured around quoting himself from prior articles.


Yeah, back in the day, I remember being impressed by one such insightful article shared on HN but the others were almost always “wtf is he saying?”


Agree. And so much recursion to his previous work. Unreadable.


Use ChatGPT to make sense of it. In future idiocracy we all will


Sad but true.


>It’s like listening to a mad scientist connect totally unrelated (seemingly) events.

Why a mad scientist and not a mad junkie? Or just some clueless language model, artificial or not? I think we tend to make this assumption that things that seem complicated _are_ complicated, and not just random bs, and that assumption no longer holds.

>E.g., I don’t at all understand the linkage between current events and the schism between the Catholic Church and Protestants denoms.

Haven't read it, but let me guess: is it about how Gutenberg changed society's interconnection fabric, and it triggered the Thirty Years' War, and the same fabric-remodeling thing is happening right now? :)


> Haven't read it, but let me guess: is it about how Gutenberg changed society's interconnection fabric, and it triggered the Thirty Years' War, and the same fabric-remodeling thing is happening right now? :)

Yes, there is some reference to the evolution of the principle of national sovereignty which came from the disruption of the Catholic Church and how that ties to Europe trying to regulate AI advances coming out of America.

Like you said, he could just as well be an organic LLM.


>Europe trying to regulate AI advances coming out of America.

That's just an attempt to prevent technical colonisation. An analogy would be Japan and China trying to fence off missionaries. Protestantism was about something different: decentralisation. And in that regard there's no difference between Google and their competitors - it's all strictly centralised; nothing Protestant about it.

Having properly Open Source models would be protestant. And EU regulations should help it happen by reducing the incentives for centralised ones.


Google's brand is only bested by Apple in the minds of consumers. They will never go away. It doesn't matter if ChatGPT or anything else comes along - 95% of the billions of daily Google users will wait for Google's version. I am honestly surprised that the Alphabet market cap hasn't reached $2T yet.


Google is an advertising business. Can you sell more ads with AI? I would say no. And even if you can, having an automated smart assistant being able to "cut" through the BS is certainly not as good for Google as clicking a bunch of sites trying to battle SEO.

However, the jury is still out on how much AI can allow Google to squeeze the profit margins of businesses by offering even better conversion rates. Personally, I don't think it will make a big difference; the AI will not make you buy more. I suspect Google makes more money by having multiple companies fight for the same customer than by actually making a conversion.

IMO best case scenario for Google is to keep their search market. Table stakes is keeping search relevant at all.


Wouldn't this help Google through second-order effects, though? I'd guess having an AI summarizing the web would lead to less incentive to create the type of SEO junk that made searching in Google so awful in the past years.

Also for queries asking for product recommendation, where ad price is quite high, they could still plug ads into an LLM and have it write a summary back to you similar to what magazines do with sponsored content.


>I guess having an AI summarizing the web would lead to less incentive to create the type of SEO junk that made searching in Google so awful in the past years.

A main part of Google's business is selling infrastructure for that SEO junk: AdWords et al. Sure, you meant the _other_ SEO junk, but from an AI point of view both would be noise to be separated from actual content.


> Can you sell more ads with AI

Companies like Google and Meta have been relying on AI to display ads for years now. It’s been central to their strategy.

ChatGPT was just a shiny toy. A very cool, shiny toy, but a toy nonetheless.


I wouldn't call something a toy that has shown the most impressive growth rate for a commercial product since the invention of the internet.


To be more than a toy it needs to be profitable and have a moat.

OpenAI just implemented something (well) that was developed in other places.

LLMs are indeed very cool and maybe even revolutionary, but their main accomplishment was beating big tech to the punch.


They didn't beat them because they are faster. They beat them because it was something big tech did not want to happen. The space you have for ads is severely limited and the fact that no moat exists means that you will need alternative ways to fund it (or skyrocket the ad costs).

I suspect that those who don't care about privacy or can't afford to pay, will be swarmed with AI-seo results from an ad-supported model, but people will FINALLLY have the choice to pay and avoid this ad-fest. And who knows, maybe when open-source catches up, it will even be affordable enough that ad-tech will shrink considerably.


> They beat them because it was something big tech did not want to happen

Yep. Google didn’t want a chatbot telling kids how to cook meth, so they weren’t in a hurry to release. Meanwhile, releasing a chatbot is one of the few cards a much smaller company like OpenAI can play.


I don't know if you are being sarcastic, but the solution to kids cooking meth is not hiding the recipe. Hiding knowledge is a sign of a problematic society, not "God's work".


This is mostly unrelated to the chatbot conversation, but hiding sensitive things from children is basic parenting as practiced in all cultures everywhere. The only difference is not everyone agrees on what’s sensitive content.


Kodak invented the digital camera. Being first doesn't mean much. New competitors don't have anything inside the moat (their existing business) to protect and can move faster.


> Can you sell more ads with AI?

100% yes, we can (because people are going to ask how to do X in Postgres, and the AI can reply how to do that in Postgres with a side note that this Sponsored product does it out of the box). However, can you sell more ads by tagging them as "ads" or "Sponsored"? That is where the gray area shows up (IMO).

Obviously, this is still evolving, but I guess something along those lines could be done if one is serious about monetizing it. In my personal opinion, right now companies are focused on capturing market share rather than monetization.


I don't think so. Google had LaMDA internally for quite some time. If it were a slam dunk for selling more ads, it would have been out way before ChatGPT.

Google has maximized their ad revenue with their current model. You don't have the same ad real estate in your example, and that is a problem. Now they get money for impressions on a whole page, and for clicks from people trying to find the right fit among the SEO. I believe Google wants information filtering, but not too much, as it needs that real estate for ads.


Any chatbot that has ads integrated into the responses will face an uphill battle against competing products that simply have banner style ads.


> Google is an advertising business. Can you sell more ads with AI? I would say no.

Oh, you optimist!

It's ALWAYS about incentives. Aka money.

Companies will use AI to squeeze garbage (aka ads) in every nook and cranny of the observable universe now. Think of all the thankless tasks that required a human to do them until now.

Guess what? Now AI can do them 100000x faster and probably 2-5x better.

Our lives are about to be flooded by a stream of, pardon my French, adshit.


I don't see how my life could be MORE cluttered with adshit than now. Literally everything is an ad space. And I have to navigate through a lot of it to actually find something useful.


This post is a culmination of a trend I've been seeing where LLMs are made synonymous with AGI. AFAIK you can't use an LLM to drive a car.

In the context of this thread, I don't think Google could sell more ads with an LLM which is why they sat on this technology. It disrupts their current business, while being more expensive and harder to monetize (charging a subscription to consumer is worse than a monopolized ad business charging hundreds of millions to corporations).


Can't they massively improve their ad-targeting by analyzing conversations with the AI?


People are already freaked out plenty by the idea that Google's products spy on you to sell ads. Starting to do it outright will definitely kill uptake of any AI service they provide.


I wonder whether the general populace really freaks out about it, since no one I know really does.


>People are already freaked out plenty by the idea that Google's products spy on you to sell ads.

The inverse is true - people continue using Google products despite being aware that they are spying on them. Doing it outright won't make them suddenly start caring.


Tech companies have been running all your data through AI for years now to show you targeted ads.

It’s what the whole house of cards is built on.


To me building the automated smart assistant seems like insanely lucrative place to put in ads. When you order the assistant to solve a problem or buy some product google can show you something even better or a things that solve the issue.


Really? To me, Google's image is much closer to that of Microsoft: you may hear of them in a few innovative projects, but most of what you know and use from them is boring office stuff, or personal stuff that you take for granted and don't think twice about. Having an Android or using Chrome isn't a personal statement like having an iPhone or a MacBook is.

And in terms of ethics, the perception of them being a force for good eroded a lot. When thinking about the typical googler, people around here will sooner think about an exec trying to skip some tax than about a techie doing innovative stuff, because that's what their media coverage is about these days in Europe.

I'll give you that Microsoft is very unlikely to go away either, unless the US govt finally remembers they have antitrust laws.


Youtube and Google are the top two highest favorability brands among gen Z

https://archive.is/8Ippr


>Others in Gen Z’s top 10 include M&M’s, Walmart, Target, Doritos, KitKat, and Oreo.

Makes one wonder if it's actually representative of the whole gen Z, or just some specific western subset.


It is based on a survey in the US, as stated in the article.


Are age cohorts like "gen Z" relevant outside of a single country? I don't know if generations would line up worldwide. Perhaps "baby boomers" would line up Europe-wide because WWII ended at the same time for all of the continent.


Sure - there definitely are Zoomers in UK, but I doubt many of them would ever recognize Target or Walmart brands.


You're a techie (and/or business tech person).

Go ask 10 non-tech people what they think of Google.


You mean the same people that use it as a verb? Which is the highest compliment you could give a product and is so rare that companies can only dream about it.


Likely the same as they think about fb, microsoft, netflix, amazon-tech, uber, etc: employees are probably handsomely paid, and taxes probably aren't.

Actually as a factoid, I recently changed jobs, and got handed an iphone for work. People noticed it and commented on it, whereas a pixel would probably have not raised much attention.


> I am honestly surprised that the Alphabet market cap hasn't reached $2T yet.

Because online advertising, particularly search ads, is working less well for advertisers, and regulators are pushing for more changes. The vast majority of Google's revenue comes from an industry that is facing massive risks.

On top of that I don't think it's clear that "AI" in search will help Google sell advertising or increase revenue. It's also likely to bring them under even more scrutiny from regulators.

In many ways AI is harder to sell ads against: Google is paid when people leave the page via a click on an ad. That doesn't happen when Google is summarising the content. Microsoft/Bing have found a brilliant way to attack Google: build a product that they have to copy, one that (potentially) decimates ad revenue.


> Microsoft/Bing have found a brilliant way to attack Google: build a product that they have to copy, one that (potentially) decimates ad revenue.

Didn't 0-click search as a trend for google search predate bing's changes?


>Google is paid when people leave the page via a click on an ad

Not really. The click rate is IIRC usually around 5%; I'm not convinced this is much higher than the "noise floor" generated by a human equivalent of drunk raccoons. So in practice Google is being paid for impressions.


It's because, unlike Microsoft and Apple, Google has no real presence in China, which keeps it out of one of the world's largest and fastest-growing economies


> They will never go away.

History would say otherwise.

Look at how quickly Nokia went south.

Look at how quickly Google took over from the AltaVista search engine.


The Nokia example makes much more sense in this case since it was an established company.

On the other hand, AltaVista was from the early days of the internet, when everything was much more turbulent.


Don't think that with GCP and the Play Store it'll be as easy to replace Google though


You don’t have to replace them. You just need to reduce the income from search based ads by enough to cause them to implode.

Hard to fund loss leaders like Android with say 20% less revenue.


None of us know what the direct profitability of Android is. Given the revenue from Play Store, it at the very least is not obvious that it's a loss-leader.

But one thing that is obvious is that having 20% less revenue would not mean Google was unable to work on Android. We know that, because they were able to already fund Android in the distant past of 2021, when their revenue was 20% lower than now. (In fact, they were able to afford Android in 2010, with 90% lower revenue.)


Compared to AWS, no one uses GCP. Compared to the iOS App Store, the Play Store doesn't sell any paid apps and subscriptions, only adware.


Google Play revenue was $48 billion in 2021 and $42 billion in 2022. I suggest you google it in the future. Also, I have an iPhone and it's filled with just as much adware, crapware and ridiculous subscription apps.


Yeah, that's what Microsoft (Windows era), Xerox, Kodak, IBM (PC era) and so on also thought back then about their own "monopoly" and brand ;)


That's what we thought about Google Plus.

What's the last time Google launched a product that everyone really uses?


When's the last time any company launched a product everyone really uses? For the sake of conversation, let's say 2 billion+ counts as "everyone".


In 2021 there were "only" one billion iPhone users[1]. At the time, Apple was the most valuable company in the world, and it still is today. So I'd say 2B is a very arbitrary and unrealistic number.

[1]: https://appleinsider.com/articles/21/01/27/there-are-1b-ipho...


The parent said "everyone". There are about 5 billion people over 20, so 2 billion is quite conservative.

As for your link: iPhones are popular, but it's definitely not true that "everyone" has an iPhone, which is my point. Android has almost 4 billion active users, for comparison.


TikTok is the closest I can think of.


TikTok is probably closest, but I don't think they've hit 2bn yet.


I think 2 billion is an arbitrary number but I'm thinking of Search, Maps, Gmail, Google Docs. All of those are ancient.


Instagram? (launched 12 years ago)


ChatGPT


Not even close to 2B users. Google probably has 10 different products with more than 2B monthly users.


It's very hard to believe. Besides Search, Android, Chrome and Gmail are in the 2B~4B range. Are there really another 6 products as ubiquitous as these 3? Google Docs definitely doesn't have 2B monthly users. Some built-in Android apps might be installed on >2B phones, but that doesn't mean they have that many active users.


Google Search, Android, Chrome, Gmail, Google Play Store, and YouTube

Also Google Maps, Google Photos and Google Drive have more than 1B users.


The article mentions they have six products with 2 billion users.


Google does annoy the shit out of me when using it behind a VPN. Constantly asking me to do Captchas. Never have that problem with bing or duckduckgo.


Waiting for a Google product to start working well (or to start respecting you as a user) isn’t a strategy that gets you very far these days.


Some of Google's success comes from being "free". It's not necessarily because consumers believe Google has the best services/products.

It's hard for a software company to compete against the largest ad network ever created.


ChatGPT is the fastest growing application in history. It had 100 million monthly active users just two months after launch. No one is “waiting” for Google’s version.


Never is a really long time.


On Enterprise, Microsoft >> Google


I am curious whether there will be a requirement in the future, possibly enforced by EU or US law, for content such as books, videos, music, PDF files, news articles, or blog posts to be labeled as produced by either a human author or AI. I'm talking about ethical disclosure. While it is uncertain, it is important to be transparent about a piece of content's origins. As we know, AI is here to stay, and it is likely that governments will be the biggest investors (or spenders) in AI technology for military purposes.

Despite Bing having a head start, it's unlikely that Google will lose the AI battle to Bing and others. Google's strong brand image and popular properties, such as Gmail, YouTube, Search, Chrome, and Android, used by billions of users daily, give them a significant advantage. It's doubtful that Bing will take over the search market anytime soon. But, who knows? Ha!


IMO this sounds infeasible, since AI- and human-generated content exist on a continuum. At one end is content 100% written by a human; at the other end is 100% AI-written content. How do we label everything in between? For example, AI-written content that was tweaked by a human? Or human-written content with some sentences written by AI?

Furthermore, it seems unenforceable. As AI becomes more sophisticated (if this isn't the case already), it will be virtually impossible to prove mislabelling.


This.

I have written a number of documents recently that are AI assisted but definitely my own work. I use the LLM to help me cross reference topics, clean up some of the language, improve the flow and prepare for the potential follow-up discussions but the arguments and recommendations are still mine. Is this AI or not?

Side note: I just prepared a recommendation for an org change and had the AI argue against it, as well as provide responses. Some were good and some were weak but it was extremely useful and quite fun.


Yesterday I sent out an invite for a family event we're organizing.

This time, I figured I'd use GPT-4 to help me write the perfect text. I described the details of the event, my relationship to the invitee (inb4: without any kind of PII, of course), the style, tone and context of the desired text, and asked GPT-4 to generate suggestions. I went back and forth with it, asking it to generate more variants of specific suggestions, then taught it a simple markup for editing and asked it to iterate on specific words and sentences, until I was somewhat satisfied with the result.

Then I asked my wife for her idea, and she quickly wrote a little text of her own. Only afterwards, I showed her the best (to me) of GPT-4 texts. We then mixed the sentences from both together, creating a final work that's 50% OG, perfectly clean text written by my wife, and 50% the output of GPT-4 (with me guiding it).

Is that text a work of AI? Or of a human? Or both? Is it even 50% AI and 50% human, given that the AI parts were created from my input, then edited by my telling GPT-4 what to change, and finally approved by me using my own judgement?

Does a random website sharing invitation templates on-line have a copyright case, if a couple sentences from our text matched something of theirs? What if it wasn't the AI part that matched?


Does using a smart spell check count? The new grammar checks might be AI-powered.

Where is the line?


For reference, Gmail and docs 100% use generative ai[0]: "As language understanding models use billions of common phrases and sentences to automatically learn about the world, they can also reflect human cognitive biases"

0: https://support.google.com/mail/answer/9116836?hl=en&co=GENI...


It's not clear where to draw the line on what is AI or not, and I suspect most content will be at least partially generated by AI. Is spell-checking AI? grammar-checking? Rewriting sentences for clarity? Summarization?


Or, following editing suggestions your text editor/word processor gives you? Or, on the artistic side, automatic cutout/background removal? Content-aware fill?


Just as with labelling for vegan and vegetarian food, "Fair Trade" or organic, there are going to be similar movements in the creative arts for "AI-free" content.

So no, I don't believe there will be regulation on labelling "AI" derived content, but there will be a drive to label and certify "organic content".


Cool, then I will know what to avoid.

People who boast that their company is <insert identity here> owned do that, because their product can't compete otherwise.


An interesting thought. Though it’s not a binary property. I’m also not sure how many people care how the sausage is made. I don’t.


It could backfire like the "Made in China" labels: all those did was advertise China as a manufacturing hub and leader and project its dominance and power


Maybe the publishing company can train an AI on GRR Martin's work and we'll get the final books from an AI ghostwriter, labeled or not


Considering season 8 was apparently Martin's idea to test the waters, I think the AI would do a much better job than him.


I believe that would just be fighting losing battles on a front that doesn't matter.

Content mills existed long before AI. Clickbait existed long before AI. Fake news and misinformation existed long before AI.

AI will empower things like that, but it isn't at the core of the problem.

As a user, I care that content is truthful, useful, and/or entertaining. I care that it isn't presented in a misleading way, and I care that it doesn't cause harm (hate, misinformation, etc.) - What I don't care about is who or what created it.

I think a bigger step in the right direction for society would be to empower certain groups like journalists in ways that they don't have to compete with all the crap out there.


I think clear visions, good taste, thorough fact-checking, clever prompt engineering and proper AI-result curation will all become more important in the years ahead.


Google is going to be just fine. The most important slide in the article is that Google has 6(!) 2B+ user products. Even if slightly behind in capability, they have superior reach. They'll deeply integrate AI into every part of that stack and this way put it into people's hands. They also have the data, infrastructure, people and capital to make it work.

Microsoft has little consumer push (no mobile presence) hence they will likely dominate in the business market. Integrating it into Office, private training on company data, etc.

Apple is the big mystery. Supposedly the big consumer-facing competitor to Google but very little is known about what's coming from them. Honestly, I'm quite convinced they're in panic mode as well, they just hide it better.


It's really 5 products: would the Play Store have 2B users if it didn't come with Android?


You can say the same thing about Chrome.

Regardless of how the numbers came to be, they're massive opportunities to integrate AI into.


It just means that lots of these new so-called AI startups built as wrappers around AI APIs don't have a moat, since these new announcements by Google have effectively destroyed them, for free or close to free.

It is only going to accelerate, and AI research will increasingly become more closed. There is now no difference between O̶p̶e̶n̶AI.com and Google DeepMind.

Either way, it will take more than LLMs like 'ChatGPT' to even come close to dethroning Google. [0] There is more to 'AI' than the LLM chatbot hype.

[0] https://www.similarweb.com/blog/insights/ai-news/chatgpt-gro...


On a side note, I believe we're experiencing the death of SEO, since content can't be trusted any more and knowledge, expertise and competence are available to everyone who can ask the proper questions.


If the EU makes those laws then it will exclude itself from all AI development, push AI to other countries, and lead to an AI brain drain.

I do think all governments should require AI vendors to disclose all the material ingested.


Curious question: how difficult and expensive is it to train these models? Given that the technology is open source, and assuming that we can collaborate on the training phase and have big open training sets, what will be the competitive advantage of companies such as Google beyond the brand?


> what will be the competitive advantage of companies such as Google beyond the brand?

The advantage is that their AI will be integrated into products that already have billions of users. Android users will get used to the default Google assistant model, so they're more likely to use it in other instances as well, and so on.


> Curious question: how difficult and expensive is it to train these models? Given that the technology is open source, and assuming that we can collaborate on the training phase and have big open training sets, what will be the competitive advantage of companies such as Google beyond the brand?

Zero to none?

I mean, it's much, much easier for collaborators to produce better (or even competitive) map data than to train LLMs, and yet the open alternatives to Google Maps are hardly ever used, have spotty data for some locations and have made no compelling features for users to adopt.


Training foundation models is very expensive. Easily a few million per model with no guarantee of success.


Again, we can assume distributed training, if that is feasible.


Unfortunately, I don’t think distributed training is feasible.

My understanding is that these models are trained using some form of gradient descent. This requires knowledge of the model’s prior state and current training error to adjust parameter values.

With distributed training it’s difficult, to the point of impossible, to communicate the state of the model across large distances because these models are so large.

Take my comment with a grain of salt because I’m not an expert in this area. I’d very much enjoy being wrong about this because distributed training would unlock so much potential.
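To illustrate, here's a back-of-the-envelope sketch (all numbers are my own rough assumptions, not measured figures) of what one synchronous gradient exchange would cost a volunteer on a consumer connection:

```python
# Rough estimate of the communication cost of one data-parallel SGD step.
# Assumptions (mine): a GPT-3-sized model (~175B parameters), fp16
# gradients, and an optimistic 100 Mbit/s consumer uplink.

params = 175e9                    # parameter count (assumed)
bytes_per_grad = 2                # fp16 = 2 bytes per gradient value
grad_size_gb = params * bytes_per_grad / 1e9   # full gradient per step

home_uplink_gbps = 0.1            # 100 Mbit/s uplink (assumed)
seconds_per_sync = grad_size_gb * 8 / home_uplink_gbps

print(f"{grad_size_gb:.0f} GB per step, ~{seconds_per_sync / 3600:.0f} h to upload once")
# → 350 GB per step, ~8 h to upload once
```

So even before considering stragglers or staleness, a single worker would spend hours per step just moving gradients, versus microseconds over a datacenter interconnect. That's the core obstacle, under these assumptions.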


Google scanned 25 million books. That has to help them at least a little.


But it is possible to crawl the Internet in a distributed way. Also, Wikipedia is a treasure.


That's all true, but everybody has access to that. None of that is scarce so the value is basically 0.

You asked for advantages that Google has, and the text of 25 million books is one. A lot of that text is going to be scarce, so it's something of value Google has and others do not.

Google also has built a massive GIS database from driving their cars around the world. Apple has a smaller database and there are some third parties (like ESRI) that will sell you access to their database. If you want to ask an AI something about your neighborhood, Google has more data than anybody on that front.


Speaking of "AI Battles", the conversation I'd like to see more is about how these 2 giants are crushing entire industries in their search for higher stock prices. Far more interesting than Bard vs. ChatGPT imo.


Google right now are Netscape in 1997. They see AI in the rear view mirror. They think it’s going to eat their primary revenue stream. The question is are they going to keep staring in the rear view mirror and go over the cliff or can they reinvent themselves?

Given that Google still gets the vast majority of its revenue from search, it's hard to see how this doesn't cause them a huge amount of pain. I suspect they've made the mistake of not starting to cannibalize their revenue stream, and now it might be too late.

It’s going to be interesting to see how this plays out.


From my experiences with Google integrated AI, I'm skeptical that Google can win any battle with it.

ChatGPT and other modern AI tools are useful specifically because they do what I want when I want to do it. Compare this to Gmail's smart compose: by the time the AI makes a suggestion I'm already half-way through typing that text, so its prediction is completely useless even if it's correct.

(Also, I really enjoy the random historical rambling at the end of some Stratechery articles).


I think the main battle for Google will be that corpus poisoning might become the new SEO - wait for it!


What differentiates second-degree corpus poisoning from first-degree advertising?


The example of the new EU regulation in the article is a great example of how regulation typically only serves to enable regulatory capture, thereby playing into the same goals of creating a monopoly or duopoly that governments otherwise say they’re opposed to.


Regulation “typically only” serves the interests of incumbents? Surely that’s an overstatement, no? Personally I tend to think that the inability to move fast and break things is a good thing when it comes to, say, air travel or food safety.


Ah yes, the USB regulation which did nothing for consumers.

And the safety regulations which made our products less toxic and less shocking


Plus the 40 hour workweek, 2 day weekends, healthcare (not in the US, outlier among developed countries), pensions, equal rights acts, ...

Or in the EU, roaming charges removal, flight reimbursement regulations, etc.

Heck, speaking of flying, the entire airline industry is regulated up the wazoo, and a few years ago we had a full year of metal coffins flying through the skies at 10,000 m going 950 km/h without any fatalities.


This is a popular talking point but it feels ahistorical to me. Was Ralph Nader really working for General Motors all along when he called for more regulation?


> Was Ralph Nader really working for General Motors all along when he called for more regulation?

You don't get it! Only big companies can make safe cars cheaply, so it was all a big ploy against small car manufacturers.

I'm being sarcastic in case someone missed it :-)


Google hasn’t cared even slightly about how it has developed a reputation for cancelling projects, leading to developer mistrust of any new Google initiative.

It’s going to bite them when they need developers to embrace their ai ecosystem.

Their CEO should have been fired years ago.


Honestly, is it Pichai's fault for canceling the projects? Or is it Schmidt's fault for starting them, and for creating that short-termist culture in the first place?

It could be the case that Pichai is Google's Immelt while Schmidt is Google's Jack Welch.


> AI is zero marginal generation of information (well, nearly zero, relative to humans). As I wrote last year generative models unbundle idea creation from idea substantiation, which can then be duplicated and distributed at zero marginal cost.

I find this quote a little hard to believe. Are the current models genuinely capable of innovation? I've been productive using image generation for my album covers, but I haven't found a similar use for large language models yet.


He didn't mention innovation, just 'idea substantiation', i.e. actually making the end product you have in your head.


Should have just called it Google A/I this year


The author seems shocked by the demonstration of the difference between marketing and reality at Google.



