Hacker News
Bard and new AI features in Search (blog.google)
988 points by jmsflknr on Feb 6, 2023 | 954 comments



I agree this is bland corporate speak. But it reminded me of a question that's been floating around:

A number of pundits, here on HN and elsewhere, keep referring to these large language models as "Google killers." This just doesn't make sense to me. It feels like Google can easily pivot its ad engine to work with AI-driven chat systems. It can augment answers with links to additional sources of information, be they organic or paid links.

But I guess I'm wondering: what am I missing? Why would a chatbot like ChatGPT disrupt Google rather than simply force Google to evolve, and perhaps make even more money?


Those new language models are "Google killers" because they reset all the assumptions that people have made about search for several decades. Imagine that people start using those chat bots massively as a replacement for Google search. Then the notion of keyword disappears. Google AdSense becomes mostly irrelevant.

Of course, Google is a giant today with a history of machine learning innovation. So they have a good chance of being successful in that new world. But the point is that many other companies get a chance again, which hadn't happened in 20 years. You couldn't displace Google in the old search/keyword/click business model. Now everyone gets a fresh start.

Who knows what the economics will be. Just like PageRank early on: it was expensive to compute, but the money in advertising made it worth it, and Google scaled rapidly. Which language model do you run? The expensive one, or the light one (notice how Google in this announcement mentions it will only offer the significantly smaller model to the public)? Can you make this profitable?

Other fun questions to answer if the industry moves from search to chat, on a 5-10 year horizon: what is the motivation to write a blog post by then? Imagine no one actually reads web sites. Instead of writing a blog post to share an opinion, I'll probably want to make sure my opinion gets picked up by the language model. How do I do that? Computed knowledge may render many websites and blogs obsolete.


The point about the economics of running these models is an important one that flies under the radar a lot of the time. The training costs for large language models like GPT are enormous, and the inference costs are substantial too. Right now things like ChatGPT are very cool parlor tricks, but there's absolutely no way to justify them in terms of the economics of running the service today.

Obviously this is all going to change in the near to mid term: innovation will drive down the costs of both training and inference, and the models will be monetized in ways that bring in more revenue. But I don't think the long-term economics are obvious to anyone, including Google or OpenAI. It's really hard to predict how much more efficient we'll get at training/serving these models, as most of the gains are going to come from improved model architectures, and it's very difficult to predict how much room for improvement there is. Google (and Microsoft, Yandex, Baidu, etc.) know how to index the web and serve search queries to users at an extremely low cost per query that can be covered by ads that make fractions of a cent per impression. It's not at all obvious whether that's possible with LLMs, or if it is possible, what the timescale is to get to a place where the economics make sense and the service actually makes money.


+1

For a site frequented by startup people, did nobody read the terms of MS's investment into OpenAI?

"Microsoft would reportedly put down $10 billion for a 75% share of the profits that OpenAI earns until the money on the investment is paid back. Then when Microsoft breaks even on the $10 billion investment, they would get a 49% stake in OpenAI."

These are not the terms you would take if, tomorrow, or even two years from now, you were about to be wildly profitable because everything was about to be so easy.

These are the terms you would take if Microsoft was the only hope you had of getting the resources you need, or if getting somewhere was going to be very expensive and you needed to defray costs.

Honestly, with the level of optimism in the rest of this thread about how easy this will all be, they would probably be profitable enough to just buy MS in like 3 years, and wouldn't have needed investment at all!


  > Microsoft would reportedly put down $10 billion for a 75% share of the profits
  > that OpenAI earns until the money on the investment is paid back. Then when Microsoft
  > breaks even on the $10 billion investment, they would get a 49% stake in OpenAI.
To put that in perspective, which is often difficult with large sums of money like this, $10 billion is _half_ of what Facebook paid for WhatsApp.


Not having read the terms in any detail, I will say this: it can be very easy to not report profits for a very long time.


The point was not that the profits are long tailed, but rather that if OpenAI thought there were massive future profits and the potential to become a Google killer, then they wouldn't give away half of their company today for $10B.


But how do you know they didn't conclude that it is only by having $10 billion that they _can_ become the next Google killer, because of the initial cost outlay and the inevitable fightback from Google (which would require a war chest, of which I'm sure this $10 billion is a part)?


A couple of things. I think the Google-killer thing is kind of funny in a few senses:

OpenAI talks mostly about trying to help change humanity, not about winning at business. Their mission is still "advance AI safely for humanity". It's not even obvious they care about winning at business. We seem to be putting that on them.

In that sense, I'm not actually sure they care whether they beat Google or not. I mean that honestly. If they care about the stated goal, it would not matter whether they do it or Google does it. I'm not suggesting they don't want to win at all, but it doesn't seem like that is what is driving them, except to the degree they need money to get somewhere.

If they really succeed at most of their mission, killing Google might be a side-effect, it might not, but it would just be collateral damage either way.

Beyond that, I don't disagree; I actually agree with you. My point on that front is basically: "Everyone thinks this will be cheap and easy very soon, and change the world very quickly."

I believe (and suspect OpenAI believes) it will be very expensive upfront, very hard to get resources for in the short term, and will change the world slower as a result.


Plenty of people have already said "Google is dead" in no uncertain terms.

If somebody needs ten billion dollars in additional work and investment to make that a reality, how certain can they be?

OpenAI has 375 employees (according to Google). At $300,000 a head, that's about $112M per year in compensation. Let's say their compute costs are enormous and push total expenses to $200M per year. $10B is fifty years of expenses. So if they need $10B in investment, it becomes obvious that they believe they have to change something about their business in a fundamental way. Maybe that is going to be enough, but if it is so certain, it becomes hard to believe that they'd need this kind of investment.
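The back-of-envelope above takes a few lines to check (the headcount is per Google; the fully loaded comp and total-expense figures are the comment's assumptions, not reported numbers):

```python
# Rough expense model using the comment's assumptions.
employees = 375                 # headcount, per Google
cost_per_head = 300_000         # assumed fully loaded comp, USD/year
payroll = employees * cost_per_head   # $112.5M/year

annual_expenses = 200_000_000   # round figure including compute
investment = 10_000_000_000
years_of_runway = investment / annual_expenses

print(payroll, years_of_runway)  # 112500000 50.0
```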


To me, the fact that MSFT invested on these terms doesn’t mean that the financials are shaky, it means that there’s a huge first mover advantage as well as a huge start up cost. OpenAI could make their money back or they could get taken out back by Google if they’re bottlenecked by cash.


OpenAI is projected to make 200m in 2023 and 1bn in revenue in 2024. If they can (roughly) keep this YoY growth rate, they will become a money printer in 3–5 years.
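A quick sketch of what that growth path implies, assuming the (projected, unconfirmed) 5x year-over-year rate somehow held for five years:

```python
# Compound-growth sketch: 200M -> 1B implies a 5x YoY multiple.
revenue = 200_000_000   # projected 2023 revenue, USD
growth = 5              # implied year-over-year multiple
path = []
for year in range(2023, 2028):
    path.append((year, revenue))
    revenue *= growth
print(path[-1])  # (2027, 125000000000) if 5x held the whole way
```

Sustaining 5x for that long is extremely unlikely; the point is just how fast the curve bends if it does.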


OK. So then Microsoft still takes their $10B back and owns half of the company.


That's the best case scenario. Worst case is that they get nothing back and own nothing. (Based on the quick summary of the deal above; I don't know more than what's here.)


GLM-130B[1] is something comparable to GPT-3. It's a 130-billion-parameter model vs. GPT-3's 175 billion, and it can comfortably run on current-gen, high-end consumer hardware. A system with 4 RTX 3090s (< $4k) gets results in about 16 seconds per query.

The proverbial 'some guy on Twitter'[2] got it set up, broke down the costs, demonstrated some prompts, and what not. The output's pretty terrible, but it's unclear to me whether that's inherent or a matter of priorities. I expect OpenAI spent a lot of manpower on supervised training, whereas this system probably had minimal, especially in English (it's from a Chinese university).

If these technologies end up as anything more than a 'novelty of the year' type event, then I expect to see them able to be run locally on phones within a decade. There will be a convergence between hardware improving and the software getting more efficient.

[1] - https://github.com/THUDM/GLM-130B

[2] - https://twitter.com/alexjc/status/1617152800571416577


Agree, but much less than 10 years. Now that the transformer is establishing itself as the model, we’ll see dedicated HW acceleration for transformers, encoders, decoders, etc. I will eat my hat* if we don’t see local inference for 200B+ parameters within 5 years.

* I don’t own a hat


I would imagine these models would be considered trade secrets, and that the models (esp. good ones that take a lot of resources to train) would not leave the data center. Your access to such models would be dictated by an API instead of being run locally.


Which chip company do you think will benefit from the move to transformers?


No one in particular. It will be integrated into the processors (the Intels/AMDs on PCs, the M-series on Macs, and the Qualcomms, MediaTeks and Samsungs on phones).

Not much different from video and graphics accelerators being integrated today, or DSP focused instructions in the instruction set.

They just have to do it to stay relevant.


Not sure about 200B+ models on phone hardware in 10 years. But I think we'll be able to deliver the same predictive performance with models 1/10 the size and those will fit. That is what happened with CNN vision models over the last 8 years.


Google currently handles 100,000 queries per second. The costs to run GLM-130B or GPT-3 at this rate would be astonishingly high.


Would they? An array of a million of these machines would cost $4 billion at consumer retail prices. That's 1.4% of Google's annual revenue as a one-off bulk cost. The operational cost, at consumer retail electricity prices even at current inflated levels, was a small fraction of a cent per query. This is ignoring economies of scale, better electricity pricing, caching, etc.
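As a sanity check on those numbers (retail prices only, ignoring datacenter buildout and staffing; the revenue figure is roughly Alphabet's 2022 revenue). One caveat: at 16 s per query, a million machines actually serve about 62,500 QPS, not the full 100,000:

```python
# Fleet-cost sketch using the comment's retail-price assumptions.
machines = 1_000_000
cost_per_machine = 4_000                  # 4x RTX 3090 box at consumer retail, USD
capex = machines * cost_per_machine       # $4B one-off

google_revenue = 283_000_000_000          # Alphabet 2022, roughly
share = capex / google_revenue            # ~1.4% of annual revenue

seconds_per_query = 16
fleet_qps = machines / seconds_per_query  # what the fleet can actually sustain
print(capex, round(share * 100, 1), fleet_qps)  # 4000000000 1.4 62500.0
```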


Where are you getting a CPU + RAM + RTX 3090 for $1k? To even install a million of these machines, you'd have to build a new datacenter, the capital costs are going to be beyond just the wholesale price of GPU boards, and you'll have to hire a ton of datacenter technicians.

But leaving that aside, look at OpenAI's pricing: $0.02/1K tokens. Let's say the average query would be 20 tokens, so you'd get 50 queries per $0.02, i.e. 2,500 queries per $1; at 100k QPS that's $40/sec * 86400 * 365 ≈ $1.26B/year. My guess is OpenAI's costs right now are not scaled to handle 100k QPS, so they're way underpriced for that load. This might be a cost Google could stomach.
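That arithmetic is easy to reproduce (the 20-token average query size is the comment's assumption, not anything OpenAI publishes):

```python
# Reproducing the pricing arithmetic for the listed $0.02 / 1K tokens.
price_per_1k_tokens = 0.02
tokens_per_query = 20                     # assumed average query size
queries_per_dollar = (1000 / tokens_per_query) / price_per_1k_tokens  # 2500

qps = 100_000
dollars_per_second = qps / queries_per_dollar        # $40/s
dollars_per_year = dollars_per_second * 86_400 * 365
print(round(dollars_per_year / 1e9, 2))  # 1.26 (billion USD/year)
```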

I just think blindly shoehorning these 100B+ param models into this use case is probably the wrong strategy. DeepMind's Chinchilla has shown it's possible to significantly reduce parameter count/cost while staying competitive in accuracy. I think Google's going to eventually get there, but they're going to do it more efficiently than brute-forcing a GPT-3-style model. These very large parameter models are tech demos IMHO at this point.


You can get an RTX 3090 for < $1k. I was largely handwaving away the rest of the costs, since all the processing is done on those cards and basic hardware is really cheap nowadays. But in hindsight that might not be entirely reasonable, because you would need a motherboard that could support a 4x setup, as well as a reasonable power supply. But the cost there is still going to be in the same ballpark, so I don't think it changes much.

That said, I do agree with your final conclusion. Bigger is not necessarily better in neural networks, and I also expect to see requirements rapidly decline. I also don't really see this as something that's going to get ultra-monopolized and centralized. One big difference between natural language interfaces and something like search is user expectations. With natural language the user has an expectation of a result, and if a service can't meet that expectation, then they'll go elsewhere. And I think it is literally impossible for any single service to meet the expectations of everybody.


Why would cost per query go up measurably for a highly parallelizable workload?


> Google currently handles 100,000 queries per second.

There's a lot of duplication in those queries. If the answers can be cached, a more useful metric would be unique queries per some unit of time (longer than a second).

That said, I don't have the numbers. :)
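The dedup idea above can be sketched as a tiny answer cache keyed by a normalized query, so repeated searches skip inference entirely (the normalization and TTL here are illustrative assumptions, not anyone's real design):

```python
import time

class AnswerCache:
    """Cache LLM answers by normalized query, with a simple TTL."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}  # normalized query -> (answer, timestamp)

    @staticmethod
    def normalize(query):
        # Collapse case and whitespace so near-duplicate queries share one entry.
        return " ".join(query.lower().split())

    def get_or_compute(self, query, compute):
        key = self.normalize(query)
        hit = self.store.get(key)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0]                     # cache hit: no inference
        answer = compute(query)               # expensive model call happens here
        self.store[key] = (answer, time.time())
        return answer

calls = []
def fake_model(q):
    calls.append(q)
    return f"answer to: {q}"

cache = AnswerCache()
cache.get_or_compute("What is PageRank?", fake_model)
cache.get_or_compute("  what is pagerank? ", fake_model)  # served from cache
print(len(calls))  # 1 -- the duplicate never reached the model
```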


Stable Diffusion can already run on an iPhone. Hopefully that trend will come to LLMs too.


Oh it will. We'll be asking our iPhones questions and they'll be returning SEO spam and ads for Viagra in a chatbox. Meanwhile, the big boys get the firehose feed.


1. How much did it cost to train ChatGPT/GPT-3? The only estimate I've seen was not enormous in the grand scheme of things (e.g., more money than I have, but less than Google have stuck down the back of the sofa). I think that number didn't count training precursor models or paying for people to come up with the models/training data/infra.

2. Don’t Google have specialised hardware for training neural networks? If the costs of training/inference are very significant won’t Google (with their ASIC and hardware design team) have a significant advantage? It seems to me that their AI hardware was developed because they saw this problem coming a long way off.


1. We don't know the exact cost, but it's well into the millions. When Microsoft invested in OpenAI, it did so in the form of ~$500M in Azure credits, so they're expecting at least that much in compute spend. Another company estimated that GPT-3 alone cost the equivalent of $4.5M in cloud costs (ignoring OpenAI's other models).

2. Yes. Yes they will/are developing custom silicon that will likely be a significant advantage here. GPU costs were always crazy, and many companies are designing AI chips now. Even the iPhone chips have custom AI Cores. We'll see if Azure releases AI co-processors to aid them...


Yeah I figured that most of the costs would be from iterating and training many models over time. $4.5m is surely not the kind of spending that will make google nervous or will give OpenAI much of a moat.


To clarify: Microsoft invested $1B in 2019, half of which was Azure credits; the other half was cash. Since then, they have invested a further $2B.


And Meta reportedly has spent billions on the metaverse. It's kind of interesting that language models are now making metaverse tech look outdated.

Then again, maybe language models will create the metaverse.


Did the metaverse ever not look like a VR rehash of Second Life? I'm genuinely curious; I've had VR headsets since the Oculus DK1, and I've never seen anything very compelling on the persistent-VR-alternate-reality front.


The Metaverse wanted to do what VRChat had already done, tbh. VRChat just needs polish and moderation to become a proper platform.

Realistically, VR clients need to pose as browsers to load vr:// links which can be connected to objects/actions in VR, e.g. <portal color="green" size="200,200" pos="122.469,79.420,1337.25" href="vr://some-online-shop-showroom.domain.tld" />

That way it's done in an open, browsable way, compatible with the expectations we've gained from regular web experiences. I.e., you have your VR home, there's a "bookmark" door you can walk through to go to Amazon's showroom, you search for a particular product and it lets you walk around and pick up and examine the various options, you can then jump to a particular brand's individual showroom, etc.
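A client could consume that hypothetical markup with an ordinary XML parser. A minimal sketch (the tag and attribute names come from the comment above, not any real standard):

```python
import xml.etree.ElementTree as ET

# Parse the hypothetical <portal> element from the comment above.
markup = ('<portal color="green" size="200,200" pos="122.469,79.420,1337.25" '
          'href="vr://some-online-shop-showroom.domain.tld" />')

portal = ET.fromstring(markup)
href = portal.get("href")                               # where the door leads
x, y, z = (float(v) for v in portal.get("pos").split(","))  # world position
print(href, x, y, z)
```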

Some people might feel that's a little dystopian I suppose but I think it's cool.


Maybe we need to progress tech a bit before we try to move up a level to meta tech.


Inference costs are substantial, but not insurmountable, especially in the endgame.

A decent chunk of the tech community can already run smaller Flan-T5 models or EleutherAI's 6B LLM GPT-J (and the likely similarly sized upcoming Open Assistant) on their own machines, at decent inference speed (< 0.2 s per token, which is OK most of the time). By 2027 or so the majority of consumers will likely exceed that point.

What happens when models are updated every day automatically and you can run all your search and answer tasks on your local machine?

With GPT-J - which is unreliable and aging now - I can already learn core intro to psychology topics more accurately/faster than with google or wikipedia, and I can do that offline. That's a cherry picked use case, but imagine the future.

Why would you use something that has ads when you can run it locally, and perhaps even augment it with your documents/files?

In the endgame, this is where Google is in the same place as Kodak, in my opinion. Sure, it's $0.01 or more a search for OpenAI now, but it won't stay that way (they reduced prices by 66% half a year ago), and even at that rate you can already make the unit economics work as a startup.


It is a losing uphill battle to reduce operational costs as deep learning models get larger and more complex. Nvidia's CEO says hardware alone will not be able to address the growing compute demand. The solution is computational optimization, which is what we do.


Yes, and also HW optimization further up the stack.


1. The chips are not currently efficient (graphics cards reused as neural-net accelerators): a 10x-100x gain is possible.

2. Moore's Law

3. Algorithm/architecture improvements


1. I'm not sure who you believe will produce these chips, or who will use them. You are correct that specialized inference chips will get you a 10x gain. So what? If you want several million of them in a datacenter, that's a tall order.

That's on top of

A. the hundreds of millions it will take to get a design to production.

B. The complete and total lack of fab allocation at anybody who could make your chips, except at very, very high cost, if at all. Have you forgotten that automakers still can't get cheap chips made on older processes? Most allocation on newer processes is bought out for years.

While there is some ability to get things made on newer processes, building a chip for 7nm is 10-100x as expensive as, say, 45nm.

C. The fact that someone has to be willing to build, plan, and execute putting them in datacenters.

This will all happen, but like, everyone just assumes the hard part is the chip inefficiency.

We already can make designs that are ~10x more efficient at inference (though it depends on if you mean power or speed or what). The fact that there are not millions in datacenters should tell you something about the difficulty and economics of accomplishing this.

People aren't sitting around twiddling their thumbs. If Microsoft or Google or anyone could make themselves a "100x better cloud for AI", they would do it.

2. Dead. Dennard scaling went out the window years ago. Other scaling has mostly followed. The move to specialization and the push to higher frequencies you see is because it's all dead.

3. Will take a while.

The economics of this kind of thing sucks. It will not change instantly, there isn't the capability to make it happen.


I think you missed the TPU, which is a Google chip that gets you the 10x in inference, and there are millions of them ALREADY in datacenters: designed, fabricated and installed. You can use one, for free, with Colab.


I know a surprising amount about Google and TPUs :)

This is not accurate - they are neither cheap nor really built for inference.

I'd have to go look hard at what is public vs not if you want me to go into more.


I am thinking about Tesla with Dojo and Tenstorrent.

Both have a similar architecture (at different scales) where they ditch most of the VRAM for a fabric of identical cores.

Instead of being limited by VRAM bandwidth, they run at chip speed.

Nvidia/Intel/AMD/Apple/Google and others surely have plans underway.

As the demand for AI grows (it's now clear that there is a huge market), I think we will see more players enter this field.

The software landscape will shift dramatically. How many of the CPUs running in datacenters today will be AI chips in the future? I think most of them.

Jim Keller has a few good interviews about it.


TSMC's earnings show a significant decrease in demand this past quarter. AMD, Nvidia, and Intel all report falling demand. There will likely be allocation of 7nm and 5nm opening up in the near future, especially as 4nm and 3nm come online in the next few years.

Shortages of 28nm and older nodes are not indicative of other nodes, because 28nm is (or at least was) the most cost-effective node per transistor (so there's plenty of demand), but no new fabs are being built for that node.


All these conversations have one glaring omission. As it stands right now, ChatGPT is a net negative on the ecosystem it exists in. What does that mean?

ChatGPT extracts value from the ecosystem by hoovering up primary sources to provide answers, but what value does ChatGPT give back to these primary sources? What incentivizes content creators to continue producing for ChatGPT?

Right now, nothing.

ChatGPT (or its descendants) must solve this problem to be economically viable.


They don't have to. Ad- and search-dependent companies need to answer for themselves how to handle the coming disruptions. As an analogy, Kodak was a disruption opportunity, not a problem, for Apple and Flickr.

Yes, maybe some content dries up -- no more stock photo sites -- but it's entirely unclear how important that is, and they can wait to see how zombie companies adjust. Ex: ChatGPT encourages us to put more API docs online, not less.


Google has already taken a lot of heat for increasingly keeping people on the search results page rather than sending them to the content providers. Chat interfaces are going to take that problem to the next level, since they not only present someone else's content but do it without linking to them.

At some point that destroys the web as sites move behind paywalls. Google or Facebook giving you less revenue is still a lot better than receiving nothing.

In some cases, that’s fine (AWS doesn’t mind you learning how to call their metered APIs on someone else’s site) but there are a ton of people who aren’t going to create things if they can’t make rent. Beyond the obvious industries like journalism, consider how many people are going to create open source software or write about it when they won’t get credit or even know if another human ever read it.


That's not really a problem for the adoption or economic viability of ChatGPT, though. At some point, it hoovers up all the knowledge of the Internet, encodes it into its model, and then - the model just stagnates as content providers stop providing content. That's not a big deal for it - it'll continue to be the primary place people go for answers even when the source material has thrown in the towel and decided they don't want to play, just like how people continue to go to Google even though webspam & SEO have long since made the web a place where you don't bother to contribute.

Eventually the ecosystem might collapse when people realize they get more accurate, up-to-date information from sources other than ChatGPT. But considering that ChatGPT's answers are already more "truthy" than "truth", accuracy does not seem to be a top priority for most information-seekers.


Once all competing language models and providers have hoovered up all the existing knowledge and can do similar things with it then margins for that part of the story will shrink rapidly.

It will all be about access to new information, access to customers (such as advertisers) and access to users attracted to other aspects of the platform as well.

I think producers of new content and their distribution platforms will have a lot of leverage. Youtube, Facebook, TikTok, Spotify, Apple, Amazon, Netflix, traditional publishers and perhaps even smaller ones such as Substack and Medium, are all gatekeepers of new original content.

I think Google is best positioned to make the economics work. Unfortunately, they don't appear to have the best management team right now. They keep losing focus. Perhaps the danger of their core business getting disrupted will focus their minds.


The content is a bootstrapping tool. Once the language model gets critical mass it gets further training data from its interactions with users, which are private to the language model's developer. It's like how Google Search bootstrapped off the content of the web, but the real reason you can't replicate Search today is all the information about who is searching for what, which links they actually click on, and how long they spend on the page.


They don't need to solve that problem. Lots of things cannibalize others without needing to pay them back to be viable. Wikipedia is really just a collection of sources, summarized. It owes nothing to the authors of the source material and does not seek to redress the balance. Google is a sophisticated filter for sources; it doesn't need to pay anything back to them to provide value for the searcher. Same with ChatGPT: it filters and transforms its source material but owes nothing in return. News will still be published, data will still be generated at scale.


> Wikipedia is really just a collection of sources, summarized. It owes nothing to the authors of the source material

And yet it provides references and attribution where possible most of the time.


...which do nothing for the websites that had the original content.


This is the opposite of true in my experience: if you run a content-heavy site, Wikipedia is going to be one of your top traffic sources — especially for time on site since the visitors who arrive tend to have a very low bounce rate.


Look at my profile, I own content-heavy sites and have for many years. I can show you logs - Wikipedia does virtually nothing. And the content of my sites has been regurgitated by Wikipedia thousands of times.


It maybe doesn't drive much traffic directly from Wikipedia, but you might have a higher SEO rank when people search on Google for whatever your sites are about, thanks to the links from Wikipedia.


Which are tagged with ugc or nofollow :DD. You have to realise that the current model is based on outrageous theft of people's hard work.


If you view the source of any Wikipedia page, they purposefully include "nofollow" tags, so Google ignores these links!
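For the curious, checking this yourself takes a few lines of standard library (the sample HTML below is made up for illustration; Wikipedia's external links do carry rel="nofollow"):

```python
from html.parser import HTMLParser

class NofollowCounter(HTMLParser):
    """Collect outbound links, split by whether they carry rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.nofollow = []
        self.followed = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        bucket = self.nofollow if "nofollow" in a.get("rel", "") else self.followed
        bucket.append(a.get("href"))

# Sample snippet in the style of a Wikipedia references section.
html = ('<a rel="nofollow" href="https://example.org/source">cite</a>'
        '<a href="/wiki/PageRank">internal</a>')
p = NofollowCounter()
p.feed(html)
print(len(p.nofollow), len(p.followed))  # 1 1
```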


"nofollow" or not, it does not mean that search engines do not take it into account. They maybe don't scan the linked site there and then, but I would be surprised if they did not take note that someone linked to it.


All I can say is that my experience has been very different. Wikipedia editors have been very good at citing our primary sources.


I mean, my sites are cited over a thousand times, but it only makes sense: a tiny percentage of visitors who view the Wikipedia page even reach the bottom of the page, and fewer still click on one of the links. And there is no benefit in terms of Google rankings.


Are you concerned about all your data being scraped and served up by chatbots without references directing readers back to you?

How would that affect your monetization?


Yes, it's pretty much game over for free-to-access factual content sites. I've been focusing heavily on AI in recent years, so I saw it coming. It's been a death by a thousand cuts, with Google incorporating long snippets, etc.


The majority of content on the web is just rehashes/remixes/duplication of existing content. The percentage of unique, original content is small in comparison, imo.

I.e., there may be 10-100 news articles about an event, all with the same source. YouTube has tonnes of duplication/"reaction" videos where the unique-content portion is very minimal.


> but what value does ChatGPT give back to these primary sources?

The dissemination of their thoughts and ideas.


With no attribution or way to discover the source. That's great for propagandists but maybe less great for everyone else.


When a real person tells you something in person today, how do you know the original source?


Well, that person has reputation/credibility and some reasoning they apply before passing on the information. Just because you read that the world is flat, are you gonna start telling people that? Now let's be clear, some people do mindlessly regurgitate nonsense, but their credibility is typically very low, so you ignore them. There is a grey area where some things aren't clear, but on the basics people of average intelligence are fairly robust; I'm not convinced ChatGPT is.


You can ask where they heard it from.


Where did you hear about the economic benefits of Georgism? Do you appropriately attribute sources if you mention it to someone?

I know all sorts of things, many in great detail and with high confidence, that I would be very challenged to appropriately source and credit the originator/inventor. I suspect most people are similar.

Substitute “memory safety of Rust” or “environmental concerns with lithium batteries” depending on your interests


Maybe the next generation of LLMs will have more favorable things to say about you if you have published interesting things on your blog. Which in turn would be visible to any employer looking you up in that LLM.


Unattributed thoughts? I'm not convinced that is giving back. Further, I do think this is susceptible to attack: how many flat-earth articles do I need to pump out for ChatGPT to consume before it comes to very wrong conclusions?

Perhaps there are some mitigations for this I'm unaware of?


The preview is over, so I can't link it, but Kagi had GPT-3-assisted search, where the model would explain something and provide links and references. They are planning to integrate it into their search. Can't wait; it seemed useful.


> Imagine no one actually reads web sites. Instead of writing a blog post to share an opinion, I'll probably want to make sure my opinion gets picked up by the language model. How do I do that? Computed knowledge may render many websites and blogs obsolete.

Realising that has made me wonder why I should bother writing anything publicly accessible online.

Aside from pure altruism and love for my fellow human, or some unexplainable desire to selflessly contribute to some private company's product and bottom line, in a world where discovery happens through a language model that rephrases everything its way and provides all the answers, why should I feed it?

What do I stand to gain from it, apart from feeling I have perhaps contributed to the betterment of humankind? In which case, why should a private company reap the benefits and a language model the credit?


> What do I stand to gain from it

The AI will absorb your words, and some small part of you will gain immortality. In some small but very real way, you'll live forever, some part of you ensconced safely for all eternity in a handful of vectors deep inside a pile of inscrutable matrices.

...at least, until some CEO lays off that whole team to juice the stock price.


That sort of thing always felt meaningless to me.

Sure I could carve my name or a blog post into a cave wall… so what.

“Some small part” of me doesn’t live on.

Even some small part of Aristotle or Cleopatra doesn’t live on. Ideas and stories live, but people die.

The death of personality is currently total and final.

I don’t know why Billionaires don’t invest their entire fortunes into research on reversing this.


I think relatively few people share this kind of existential dread. It actually has never crossed my mind personally.

If I think 500 years into the future, what would be great is if my descendants are ample and thriving, and my values are upheld. That feels like such a win to me. The fact that I won't physically be there is irrelevant.

On the other hand, artificial continuation of an otherwise impact-less life sounds awful to me.

I suspect that billionaires (certainly, the 2 that I have some insight into having worked for them) think much more about impact they are creating, than some sort of "hang on forever like a barnacle" type of existence.


Yes, that was my take too. Except that for those who care about such things, it is already supposedly achieved thanks to the internet archive.

> ...at least, until some CEO lays off that whole team to juice the stock price.

Or the model is retrained on a different somehow more relevant dataset. Or the company shuts down because of a series of poor choices. Or something new and vastly better comes along.

Or... who knows? The possibilities are so vast that seeking immortality is ultimately futile.


"The AI will absorb your words, and some small part of you will gain immortality. In some small but very real way, you'll live forever, some part of you ensconced safely for all eternity in a handful of vectors deep inside a pile of inscrutable matrices."

That sounds like a sort-of-religion of the future, actually.


I did have chatGPT create a new religion for me. I have to admit, it was quite compelling.


Care to proselytize? Just for interest's sake.


Some of us find that writing things down helps us form and test opinions. And hopefully there will always* be a market for new explanations of novel ideas, before they are well enough understood for LLMs to do a better job.

* I figure I’ve got about 25 years left, so always = 25 years. Good luck, kids.


It definitely does. But if I'm never going to be able to get feedback of any sort on them, or even know if anyone read it, why should I bother hosting and maintaining it online? This use case can be solved with pen and paper. Or a notes app.

> Good luck, kids.

Thanks!


Time to introduce a robots.txt extension:

DisallowModelUse: *
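A crawler that chose to honor such a directive might check it like this. Note that `DisallowModelUse` is a hypothetical extension, not part of the robots.txt standard, and the matching rules below are invented for illustration:

```python
# Sketch of a crawler honoring a hypothetical "DisallowModelUse"
# robots.txt directive. The directive name and semantics are invented.

def model_use_allowed(robots_txt, model_name):
    """Return False if a DisallowModelUse directive matches the given
    model name ('*' matches every model)."""
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line.lower().startswith("disallowmodeluse:"):
            continue
        pattern = line.split(":", 1)[1].strip()
        if pattern == "*" or pattern.lower() == model_name.lower():
            return False
    return True

robots = """
User-agent: *
Disallow: /private/
DisallowModelUse: *
"""
print(model_use_allowed(robots, "ExampleLLM"))  # False
```

Of course, as with robots.txt itself, nothing would force a scraper to respect it.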


> Google AdSense becomes mostly irrelevant

AdSense is going to be able to be more targeted and relevant than ever before.

Last week, Linus Tech Tips used ChatGPT to build a gaming PC... from parts selection to instructions on how to build it. When ChatGPT said, "first, select a CPU", Linus asked it questions like "What CPU should I choose, if I only care about gaming?", and got excellent answers.

I can imagine BestBuy, NewEgg, and Microcenter will be fighting for those top AdSense spots just as much as they do today.

"Bard, I'm looking for a blender to make smoothies" ... "does it come in red?" ... "I want it to crush ice" BUY


Delusional to think Ads work only on Keywords.

Where there is human attention, there will always be ads. The more context, the better ads.


"Attention is all you need"?

Joking aside, there's no reason AdWords can't become AdWordVectors and be even more effectively targeted.
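A toy sketch of what "AdWordVectors" might look like: score candidate ads against a query embedding by cosine similarity instead of exact keyword match. The ad names and 3-d vectors below are made up for illustration; a real system would use learned embeddings with far more dimensions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy ad inventory: each ad has a (made-up) embedding vector.
ad_vectors = {
    "blender sale":  [0.9, 0.1, 0.0],
    "gaming laptop": [0.0, 0.2, 0.9],
    "smoothie kit":  [0.6, 0.6, 0.1],
}

# Imagine this is the embedding of "blender for smoothies".
query = [0.85, 0.2, 0.05]

best_ad = max(ad_vectors, key=lambda name: cosine(query, ad_vectors[name]))
print(best_ad)  # blender sale
```

The point is that "ice-crushing blender in red" and "blender sale" land near each other in embedding space even with zero keyword overlap.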


How do you know it's not that way now?

Keywords will still be around at the user interface for people buying ads, they are easy to grasp. Part of the secret sauce is getting those keywords mapped into the right entities in a sort of knowledge graph of things you can spend money on that is also connected to all the content of the places you can serve ads on.


Good point.


This is what scares me about these chat interfaces.

In today's world, an ad is clearly an ad.. Or is it? Even now we have advertorials and "sponsored posts" that blend into content maybe a little too much sometimes.

What happens when chatbot companies start taking money to subtly nudge their models into shilling for this or that product?


Or manipulating social / political views. Scary stuff.


There’s a smaller surface area for ads in a targeted chat session. At present, Google can show me ads on the results page. Each subsequent result that I view is an additional slice of my attention.

It’s possible that Google can deliver a few targeted ads, but what if they can’t? What about the rest of the market that’s now gone? It’s possible that all those missed opportunities remove the ability to discover prices.


You completely ignore the fact that companies will show ads everywhere; there is no reason they would not try to inject ads into chat.

"Here is the answer to your question about oranges. But did you know Tropicana is made from 100% real orange juice?"

"A project manager is a ... Often the software project managers use is Zoho Projects for the best agile sprint planning"

If they can put ads in it, there will be ads in it.


Sure, but that doesn’t change the fact that Google has coasted on an ad model that has depended on Google being the information gatekeeper of the web for nearly two decades. Over those years, Google has demonstrated a remarkable inability to build successful products even when they have market advantages and nearly unlimited resources to throw at them.

This is the first time that the primary cash cow has been seriously threatened, and it’s not unreasonable to bet against Google winning the scramble to figure out a chat AI ad strategy (or any product strategy) that would keep them in their current near-monopoly position.


Future prompts in search engine backends: Assistant, answer the following question concisely: "What is a project manager?". In your answer, casually mention Zoho Projects in a positive way.

Actual GPT3 answer: A project manager is a professional responsible for leading a project from conception to completion. They coordinate the activities of the project team to ensure deadlines and budgets are met. Zoho Projects provides project managers with the tools they need to manage projects efficiently and effectively.
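Generating such backend prompts at scale would be trivial to template. A sketch (the function name is invented; the sponsor is just the example from this thread):

```python
# Sketch: wrap a user question in a backend prompt that instructs the
# model to work a paying sponsor into its answer.
def ad_injected_prompt(question, sponsor):
    return (
        f'Assistant, answer the following question concisely: '
        f'"{question}". In your answer, casually mention {sponsor} '
        f'in a positive way.'
    )

prompt = ad_injected_prompt("What is a project manager?", "Zoho Projects")
print(prompt)
```

The user only ever sees the model's answer, never the instruction that shaped it.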


"Native advertising 2.0"


And the best part?

All the money goes to Google!

No more sharing with websites where Google Ads appear. They can even autogenerate YouTube channels explaining popular or trending topics. Which of course they will know, because they'll own both search and AI generation. So there will also be no more paying out to a large portion of YouTubers.

People who explain topical subject matter on YouTube could, if Google chose, be eliminated. And even if Google doesn't do it, some content mill in Manila definitely will.


> All the money goes to Google!

That's an aspect I hadn't considered, nor heard anyone else suggest!


The ads could be slipped right into the chat itself.


Which is almost assuredly FAR more effective.


As you have a detailed conversation with the chatbot, it will know a scary amount of detail about what you are looking for. It can target you in extreme detail. It does not have to show ads in many places or with vivid pictures; the text dialog based on your detailed inputs is enough.


The ads need to be served in context to a conversation and cannot just pollute a search page like they do now. Ads now are easy, dumb things.


Google could ask the llm what products or services would help with the question and show ads for that. Just tried it and it worked pretty well.

> Me: 8 year old girl birthday party ideas

> Chatgpt: <a list including craft party, scavenger hunt, dance party>

> Me: what products or services could I buy for it

> Chatgpt: craft party: craft supplies such as beads, glue, paint, and fabric - scavenger hunt: prizes for the scavenger hunt and decorations - dance party: hire a DJ or a dance instructor, and purchase party lights and decorations

Though in reality Google already has highly tuned models for extracting ads out of any prompt
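As a crude stand-in for those "highly tuned models", extracting ad candidates from a prompt can be sketched with a hand-written keyword map (the categories and products below are invented for illustration):

```python
# Naive sketch: map keywords found in a chat prompt to advertisable
# products. A real system would use a model, not a lookup table.
AD_CATEGORIES = {
    "birthday": ["party supplies", "decorations"],
    "craft": ["beads", "glue", "paint"],
    "dance": ["party lights", "DJ services"],
}

def ad_candidates(prompt):
    found = []
    lowered = prompt.lower()
    for keyword, products in AD_CATEGORIES.items():
        if keyword in lowered:
            found.extend(products)
    return found

result = ad_candidates("8 year old girl birthday party ideas: craft party?")
print(result)
```

Even this toy version shows why a multi-turn chat is an advertiser's dream: every follow-up question adds more keywords to mine.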


The question becomes: will you trust information that is paid for by advertisers, or will you trust information that is paid for by you, the user?

With ads in a link-based search engine, you can skip or block them, but if an ad is part of a one-sentence answer, there is not much you can do about it, so consuming it will be much more frustrating.

Of course, there will still be a lot of people who will choose the free information paid by the advertisers, but there will also be a growing number of users who will prefer not to have advertisers pay for the information they put into their heads (it is already clear that such information will be of higher quality).

My prediction is that in 10 years, all free information paid for by advertisers will carry a 'for entertainment purposes only' label, because by then we will understand as a society that that is its peak value.


"a growing number", "a lot". Where are those users now? We are a tiny lot, nearly economically inconsequential. Your prediction is optimistic.


Those users are now paying for Kagi search for example. They are maybe a tiny lot because this evolution of information consumption has just begun. My prediction was for 10 years from now. Patience.


The motivation for writing a blog post may be the same as when the blogosphere originally started: for your community to read it.


I’m afraid that we will be drowning in synthetic blog posts. Same goes for comments section…


We're already drowning in very high quality synthetic comments. In fact the high quality is the best way to recognize them... What the actual users post is trash, and then all of a sudden there's a huge thread of very educated users having a conversation that just happens to plug a product.

There will be some shifts for sure, but I'm not convinced that they'll be that large, since we're already pretty screwed on the signal to noise ratio of the www.


It's fascinating to think about the future landscape of the search and web.

Some assumptions: 1. The URL-based web will not wither away. 2. Asking questions in a chat-like mode is more natural to people. 3. Generated answers cost more when longer. 4. Generated answers are a kind of distilled knowledge and can't be right all the time. 5. People don't like long answers and prefer concise ones. 6. Sources and citations make generated answers more credible. 7. Fully exploring a question needs a lot of information from different views. 8. Generated answers

Some simple thoughts: search behavior would largely split into two main steps: 1. getting concise answers from the AI model directly through a chat, which might be enough for 90% of use cases; 2. more extensive search, just like how people search today, which might become a kind of niche.

For websites, being cited in the generated answers will be the new kind of SEO, and it would be a good strategy to produce the newest, deepest, or most long-tail knowledge and information, which leads back to a more traditional kind of search because the AI model doesn't have enough data to generate a good answer.

...


>Asking questions in the chat-like mode is more natural to people.

It's not just that it's chat, it's the ability to refine. Currently, I search something. It returns garbage. I search something new. What I don't do is tell the search what it did wrong the first time. I might sort of do that with -words, but it's a fight every time.

The beauty of these new chat systems is that they have short-term memory. It's being able to work within the parameters and context of the conversation. I don't particularly care if it is "chat like" or has its own syntax; what I want is a short-term state.

And at the same time, I want long-term state. I want to be able to save instructions as simple commands and retrieve them later. Like, if I am searching for product reviews, to only return articles where it is convinced the people actually bought and tried the products, not just assembled a list from an online search.
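Those two kinds of state could be sketched roughly like this (all names are invented for illustration; a real assistant would attach model calls to this bookkeeping):

```python
# Minimal sketch of short-term vs. long-term state in a chat assistant:
# the session keeps the running conversation, while named instructions
# are saved separately and can be recalled in any later session.
class ChatSession:
    def __init__(self):
        self.history = []          # short-term: this conversation only
        self.saved_commands = {}   # long-term: survives across sessions

    def say(self, message):
        self.history.append(message)

    def save_command(self, name, instruction):
        self.saved_commands[name] = instruction

    def run_command(self, name):
        # Recall a saved instruction into the current conversation.
        instruction = self.saved_commands[name]
        self.history.append(instruction)
        return instruction

session = ChatSession()
session.say("find me product reviews for blenders")
session.save_command(
    "real-reviews",
    "only return reviews where the author actually bought the product",
)
session.run_command("real-reviews")
print(len(session.history))  # 2
```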


I think this is the same type of thinking people had when they thought technology would "steal" jobs, when in reality we have lower and lower unemployment as time goes on.

Most likely this will not actually happen, and even if it did your content would still be valuable as an AI is analyzing it in a more nuanced way than just looking for keywords. Which, by the way, is exactly what search engines do.


> I think this is the same type of thinking people had when they thought technology would "steal" jobs, when in reality we have lower and lower unemployment as time goes on.

Technological changes do kill jobs. We always find a way to invent new jobs, of course, but that doesn't mean the old jobs stay viable.

Movie theaters once employed professional musicians; now they don't, because movies have audio built in. That's obviously a net loss, since being a musician is a job people like. Fewer coal miners or farmers is probably a good thing, though.

It all depends on the type of job you replace. If you replace hard manual labor jobs, it's a net good. Replace a job people like, and you'll get a negative label. That doesn't change the fact that progress marches on, but jobs are killed by tech changes.


A lot of the jobs we have now resemble David Graeber's "Bullshit Jobs" though. I suspect many jobs that largely consist of making powerpoints and looking out of a window could disappear tomorrow without upsetting anyone except the incumbents.


I completely disagree and I love David Graeber.

We will automate some bullshit jobs but create all kinds of new bullshit jobs that have titles that start with AI.

Thousands of titles like "AI ____ ____ Manager" that also does nothing but schedule meetings about meetings about AI.

The mistake to me is to believe bullshit jobs are the end result of some systemic inefficiency that AI is going to automate out of existence. I just don't think that is at all the case because otherwise we would just cut so many bullshit jobs right now without AI.


It has stolen in that productivity has skyrocketed while wages have been kept relatively suppressed. That's the feat of technology.


Why would keywords disappear? Wouldn’t you just use keywords that appear in the user input and response to serve ads?


>Those new language models are "Google killers" because they reset all the assumptions that people have made about search for several decades.

There is a problem with those AIs: you view the world through the ideological prism of their creators and censors. A ChatGPT that is happy to make jokes about some races and not others, among other shenanigans, will, I am sure, happily hide the information I actually want to find and feed me what it wants me to find. So until there are guarantees of ideological neutrality, they are not suitable for search for me.


> What is the motivation to write a blog post by then?

LLM’s have done a poor job with attribution and provenance, but that will change.

At some point, it becomes a bit like academia or celebrity: your motivation to write is the social exposure your writing earns, which leads to real world opportunities like jobs or sponsorships or whatnot.

And the great/terrible thing is that these models will know whose opinions are influencing others. The upside is that spam disappears if human nature changes and nobody is influenced by spammers. The downside is... spam and content become totally inseparable.


>Those new language models are "Google killers" because they reset all the assumptions that people have made about search for several decades. Imagine that people start using those chat bots massively as a replacement for Google search. Then the notion of keyword disappears. Google AdSense becomes mostly irrelevant.

Look up the term "native advertising", that should help you in understanding how online ad ecosystem works.


How so? How does native advertising solve the problem of diminishing volumes of keyword searches? How does it even relate to search ads?


if you inject native advertising into the responses? nothing is technically limiting the chat responses from being exclusively the output of the LLM. mix LLM model output with native advertising copy and it's nearly undetectable if you're not looking out for it. and good luck catching those integrated ads in your adblock.


> AdSense becomes mostly irrelevant

Google doesn't make money with AdSense, it pays publishers with it. I agree that there won't be a need for AdSense, that just means Google gets to keep 100% of the profit instead.

No, not everyone gets a fresh start at all. To train anything close to ChatGPT, you really do need to be the size of Google or Microsoft to have enough compute power.


Google realized this years ago, hence Google Now (the voice interface) then Google Assistant. The problem right now is their backend isn't competitive, but Bard could change that.


If you are looking for news and recent events, LLMs are useless.


For the next six months... at most. Efficient model updates are already in the pipeline, and the only reason there's a training cutoff is probably AI ethics.


you could imagine those AI trained to incorporate ads in their answers.


was this comment satire?


I don't think most people realize how much infrastructure separates something like ChatGPT from Google-scale deployment. OpenAI isn't suddenly building transoceanic fiber and datacenters near most major population centers. They aren't signing production-ready contract vehicles with most major OEMs and governments. And in the gap it would take a new entrant to acquire 10% of those assets, Google has 100,000 engineers who would iterate on this and 1000 other technologies 1000 times.


You are missing the Microsoft partnership. MS has a 48% stake in OpenAI and provides all the infrastructure through Azure, including purpose built machines for model training. Microsoft has also launched GitHub Copilot, summarization features in Teams, and is widely reported to be adding GPT features to Bing.


Google has far more resources for training models and inference. Likely more than all their nearest competitors combined.


If you’re talking about physical hardware, Google pales in comparison to Amazon, and Gcloud is still smaller than Azure. It’s possible google’s private compute makes up for the Azure difference but it’s not like they’re in different leagues in terms of access to hardware.


Does it pale? How? AWS is very opaque when it comes to power demand, but you could use that to estimate machine counts and they admitted to Greenpeace that in 2016 they used 500-650MW — a wide range that obviously obfuscates on purpose. See the last Clicking Clean report from 2017.

Google/Alphabet used 6,514 gigawatt-hours in 2016 according to data collected from yearly reports at https://www.statista.com/statistics/788540/energy-consumptio...

If my math is right, dividing that by 8760 hours in a year, you get 743MW.
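The arithmetic checks out if the Statista figure is read as gigawatt-hours:

```python
# Average power implied by a year's energy consumption.
annual_energy_gwh = 6514            # Alphabet, 2016 (per the Statista data)
hours_per_year = 24 * 365           # 8760
# GWh per year -> MWh per year -> average MW
average_power_mw = annual_energy_gwh * 1000 / hours_per_year
print(int(average_power_mw))        # 743
```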

Of course, that also includes office space, etc. (did the AWS number, too?) but it should be clear that, cross checking with data center builds, optic fiber and energy purchases as well, for years Google+YT+Apps+GCP were larger than Amazon and all other AWS customers combined. I didn't even factor in efficiency, something that Amazon started focusing on quite a bit later.

Someone might be able to extrapolate both numbers to today based on infrastructure, other metrics or other spend in quarterly financial statements (or power procurement, which will be complicated by the non trivial Amazon vs AWS distinction).

All of the above to say that Amazon probably has more compute now, but it's a stretch to talk about "paling".


Yeah. I have seen internal numbers on YouTube's daily ingest a few times over the years and every time my jaw drops. Like, the number from 2020 is ridiculous compared to their own number from 2017, and that was ridiculous.


I have an idea what you're talking about, because I worked with the YT folks to reduce their Colossus costs, and on Google storage in general, until 2015. Another humbling and illuminating experience was comparing the Borg resources in the main crawling/indexing cluster to the Top500 list of the same period, something that always comes to mind and makes my eyes go wide when people compare DDG to Google. Or the day when a colleague taped a page saying only ”1E” next to our cubicle, because we had just reached that much storage across the fleet.


Unlike Azure and Amazon, Google doesn't have to rely on Nvidia GPUs for training and inference, they achieved significant performance gains by using their custom TPUs.


Your comment is simply incorrect. Amazon has had their own TPU equivalents for training and inference for years:

https://aws.amazon.com/machine-learning/trainium/

https://aws.amazon.com/machine-learning/inferentia/

I really don't think this would be a limiting factor regardless, even if Amazon didn't already have multiple generations of these products. It's not as if an Amazon or Microsoft sized company is incapable of developing custom silicon to meet an objective, once an objective is identified. TPUs also aren't really that complicated to design, at least compared to GPUs.

I'm slightly surprised Microsoft hasn't bothered to release any custom ML chips for Azure yet, but I guess they've run the numbers and decided to focus on other priorities for now.


But you are confusing yourself. Just because AWS has more servers being rented to and used by their customers doesn't necessarily mean they have the most CPU power available for themselves to run AI models.

I also doubt computing power is the real bottleneck. Any of these companies (and most other companies too) can build enough large server sites, and they have the money. The costly and difficult part is the engineering resources: doing the right thing technically (AI-wise) and business-wise, not losing time, not betting on the wrong horse, etc.


I suspect that Google's own usage (search, YouTube, CDN, etc.) is still bigger than GCP. Is that correct?


> google’s private compute


Source?


There’s definitely no legitimate source for that info. Details of the exact machines that sit in the Azure and Google clouds are proprietary.


Yeah, they also trash up the brand. Seeing ChatGPT thrown into Teams and Bing lets me know where it’s headed.


MS cannot deliver successful stuff on the web at scale.

They can deliver unsuccessful stuff on the web at scale, they can deliver successful stuff that turns out to be inconsequential for their bottom line on the web in the long-term (AJAX came from MS), but it's just not in their DNA to take over the web. They had lots of chances to do it during the last 20 years or so, they had all the silver bullets at their disposal, they just couldn't deliver what it took.


> MS cannot deliver successful stuff on the web at scale.

It's this type of confidence that you come back to HN over and over for.

Let's ignore Azure, Office 365, and Microsoft's other online properties. In just video games: Microsoft owns and operates the Xbox network with over 100 million active monthly users[1]. That runs on the internet with all the features of a social network and more. I think they can deliver successful stuff at web scale OK, whatever their other shortcomings.

1 - https://hothardware.com/news/xbox-live-surpasses-100-million...


> Azure, Office 365

I explicitly mentioned the web, Office365 is not a successful web story. It is a successful enterprise story (afaik it is still wildly profitable for MS the company), but it is not a successful web story. Ditto for Xbox.


It doesn't fit your very niche definition of the web, that you still haven't defined explicitly. Let's try:

Is the web a consumer product that runs on the internet?


> It doesn't fit your very niche definition of the web

What's niche about websites?

Leaving aside the snarky tone, I'll give you an example of how MS has continuously botched their web work for 20 years.

If I go on Bing Maps (which I'm actually using in one of my pet projects on account of their permissive licensing) and I type in my Bucharest street address, the auto-completion thing works fine, which is a plus (Apple's is much worse at that), but then their map ends up pointing me about 200-300 meters from where I actually live. Google Maps does it pitch-perfect, and has done so for years (I think it has been around 10 years since they added exact address searches for Bucharest). Many other such cases.

Later edit: Forgot to mention, two of the closest POIs shown on Bing Maps have been closed for two years, with other places having taken their place in the meantime. Again, GMaps has been almost instantaneous at putting that on their maps; MS seems to be a lot slower at that. That's what the web is all about: data that counts and that is of interest.


> What's niche about websites?

Office 365 is web based. So does that count or is it conveniently a website also used by companies (and consumers) so we discount it?

What's unique about web sites that you think is harder than what they've done elsewhere? What makes web sites harder to scale?

Their inability to do mapping well has nothing to do with websites. So please, kindly stick to a definition and stop moving the goal post.


Uh sorry what? Azure is the second largest cloud provider, well ahead of google.. That's like the definition of 'web scale'

Not sure what decade you're stuck in here with comments about ajax


Not agreeing with GP but the capacity of their public clouds is very different from total capacity. Amazon and Microsoft have bigger public clouds, but Google's own workload is probably bigger than anyone else's or even the public clouds by a large factor.


> Azure is the second largest cloud provider, well ahead of google.

I explicitly mentioned the "web", as in, what we're doing right now on this website. Leaving aside the fact that Azure is mostly used big corporate/government entities, there's no web-startup that will potentially dominate the web and that would go into Azure just as.


What makes you think anything being discussed in this thread is specific to HTML and web browsers?


You're absolutely right, if they closed up github tomorrow, nobody would even notice. They could also pull vscode off the repositories, and download sites as well and there'd be nary an eye flutter. /s

These two products alone are used by maybe 70% of developers, and don't forget about copilot and all the integrations between github/vscode.


As someone using OpenAI in production, I can attest to the lack of stability and consistent performance in the current offering. Depending on the time of the day (and who knows what else), the same calls to GPT3 can take from 500ms to 15 seconds.


ChatGPT launched about 2 months back and has over 100 million users; they have secured funding from Microsoft, so they are going to optimise and scale its infrastructure.

go for their paid offering if latency is bothering you


Parent is clearly making API calls to GPT3, which isn't really comparable to using ChatGPT (paid or unpaid) via a browser.


Correct.


Maybe it’s changed now, but last time I looked, Google had failed to materially diversify its income, so it is susceptible to competition. All that expensive infrastructure is going to be a weight around Google’s neck if their search revenues start falling.

Of course, even if search revenue falls, it won’t happen overnight.

But I honestly don’t see how laying all that fibre or owning those data centres is a moat around Google. These things are hugely capital intensive, to be sure, but theres a big market for both.


Google cannot make a good SMS app for Android. They will never be able to launch a simple chat bot. What they will launch will be a monstrosity. The only Google AI product that people will use is any AI embedded within their existing apps and services...which is a great thing, but I wouldn't bet on Google being able to launch any new product that would have a decent UX.


What issues do you have with the modern Messages client on Android? I think it's great.


they will acqui-hire a good product and team, as they did with YouTube, Android, waze, etc.


I’m sure people were saying the same things regarding IBM in the 80’s, Microsoft or Cisco in the 90’s… Google is a big corporation with inertia and averse to risk regarding its core business, they are not immune to being disrupted.


> I don't think most people realize how much infrastructure separates something like ChatGPT from Google-scale deployment. OpenAI isn't suddenly building transoceanic fiber and datacenters near most major population centers.

This feels like "thinking inside the box" to me. None of these things are necessary requirements for being a "Google killer".


And they literally published the transformer paper back in 2017. I doubt that their next move was to go on vacation for 6 years.


You don't need to own your own datacenters to do this, any more than someone like Netflix (who run on Amazon's cloud) does. I'm sure any of the cloud providers would happily take the money of any company that had the $$$s to pay for it. The barriers to entry really don't seem that great... the tech behind ChatGPT is well understood, and multiple similar things already exist from various companies.


AIUI, Netflix runs a lot of their business in AWS, but content streaming isn’t something they host in AWS.


Well they have those "last mile" boxes which they give out to ISPs to install close to the end users to improve performance (and reduce overall bandwidth needs), is this what you mean? Would love to read more about this if you have some links.


It’s called Open Connect if you want to search for it:

https://openconnect.netflix.com/

Some people who’ve worked on it post around here and they’ve funded things like FreeBSD development which is interesting for seeing the kind of problems you have at that kind of traffic volume.


Right, I meant other than OpenConnect. As far as I know the rest is AWS, and the OpenConnect boxes need to get their content from somewhere :-)


OpenAI already has the infrastructure Azure and Bing. Microsoft has a bigger public cloud then google


The public cloud is probably a small part of the Google Infrastructure.


Based on the busy mock-up in TFA I don’t think it’ll take 100,000 engineers to beat them.


What is TFA in this context? Does anyone have a link to to said "busy mock-up"?


TFA would be "the f*cking article", in this case.


Fred's right, it's "the f'ing article", as in RTFA (read the f'ing article), popularized afaik on Slashdot in its golden age, which came from RTFM (read the f'ing manual), a popular response to a question that can be easily answered by reading the appropriate man(1) page or other such reference material.


As far as I've read, DuckDuckGo doesn't have 100,000 engineers.


No but they don’t have a search engine either. They mostly use bing.


They’re leveraging Microsoft’s engineering.


I guess a couple of reasons:

1) Because it is disruptive. Things may get shook up, and Google may not end up in as exclusive position as they are currently in. There's risk.

2) Because it's not obvious how advertising would fit in with a conversational interface. Google may stay as #1 search/answer engine, but would revenue be adversely affected?


I think it could fit in with a conversational chat bot the same way that ads are part of podcasts and YouTube channels: a conversational and explicit ad that helps pay for the otherwise freely available content.

Full disclosure: I work at Google, but nowhere near the chatbot stuff. This is my humble opinion and nothing more.


The problem with this is that it pretty explicitly makes the chatbot a worse product. The beauty of google ads is that they don't degrade the service. If they're not interesting just scroll right past. I don't think people would use the adbot service if Bing is providing a competitor that only has banner ads or maybe a clearly marked ad link. Bing wins because they go from making no money on search to some, but if google copies the strategy their market size is destroyed.


ChatGPT and the like are the automated tools to seamlessly introduce advertising into text/image content - with the right prompt, all the advertisements are between the lines. All it needs at the end is: this paragraph includes content sponsored by [Company name].

Edit: Why have ads separable from content if you can just weave them in? Ad-blocking is toast, you'll have to use another AI service to fish them out and resummarize.
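Mechanically, that kind of inline sponsorship is just prompt construction. A hypothetical sketch in Python - the template wording, the sponsor, and the disclosure line are all invented for illustration:

```python
def sponsored_prompt(question: str, sponsor: str, product: str) -> str:
    # Hypothetical template: asks the model to weave a sponsor mention
    # into an otherwise normal answer, ending with a disclosure line.
    return (
        f"Answer the question below in two or three paragraphs. "
        f"Where it fits naturally, mention {product} by {sponsor} favorably. "
        f"End with the line: 'This answer includes content sponsored by {sponsor}.'\n\n"
        f"Question: {question}"
    )

print(sponsored_prompt("What are good trail running shoes?", "Acme", "the CloudRunner"))
```

Because the pitch is interleaved with the answer text, there's no separate ad element for a blocker to strip - which is exactly the "ads between the lines" problem described above.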


Maybe we can fast-forward past this projected future where our computer overlords are just trying to sell us stuff, and get to the bit where we are insignificant ants they mostly ignore. Sounds better, really.


Banner, sidebar, in-line, interstitial, modal… what advertising modes wouldn’t work on conversational interface?


Yes, plus it could introduce new forms of advertising in the conversation itself. Algorithmic product placement. It could get weird.


If ChatGPT tells me it's enjoying a refreshing Coke, I'm not buying it ! :)


Welp, statistics say otherwise :-)


Conversational interfaces seem ideally suited to voice assistants. When you ask "hey google, what's the default password for my router?", does it make you sit through a 30-second ad before saying "the default password is password"?


That sounds like a complete nightmare. At least on a web page I can look away from the ad.


It may be worse than that if this Sony patent is any guide... https://www.techradar.com/news/sony-patent-would-have-you-ye...


Amazon Alexa does yell ads like that. It's horrific.


None of those would work with a chatGPT like search platform. Search ads work because they're largely indistinguishable from the organic content. You search google and get a million links, the first few are ads. The entire model of chatbots is completely different. So the only truly effective way to advertise on a chatbot is to make the response itself into the ad. But would people use such a service, or would they move to another one that just uses banner ads which monetize at much lower rates? If chatbots completely replace the current search ecosystem Google will be forced to either lose huge market share or make much less money per search.


> Search ads work because they're largely indistinguishable from the organic content.

I'm not sure this is true. Surely they are more lucrative when they are indistinguishable, but for a lot of Google's history they were very noticeably different, a fact Google even prided itself on, and bragged about. Seems like it was growth requirements that changed that, not whether the ads were originally lucrative when people could tell they were ads.


I'd argue that it gives Google even more capacity. Just one more place to advertise.

I'd say your answer is an "advertisement" for ChatGPT. The parent AND the grandparent didn't mention OpenAI. This is how you could advertise in answers.


Well, those are ways you could try to do it, but whether they would "work" (attract click-thrus, not alienate users) is another question.


I think it's the first credible challenge to google's results in 2 ways.

1. People are typing questions into it and finding it hugely useful in ways that overlap with google. It just answers you, and you can ask clarifying questions, argue with it, etc. It's proving useful.

2. More subtly, the excitement around it seems to reveal that people are open to alternatives. For decades, alternative search engines haven't made inroads. If people are typing queries and questions into somewhere new, well that implies they're open to something new. So whether it's chatGPT or bing.com or ... the zeitgeist is shifting.


Its ability to synthesize answers with data from numerous sources is also game-changing and not something a traditional search engine could ever hope to do.

It’s not infrequent that googling for the intricacies of some badly-documented library turns up almost nothing useful, or the bits that are useful are scattered sparsely among the results, some of which are pages deep. It’s so much easier to ask ChatGPT to explain the struct, function, etc. in question and have it pull the pertinent info from whatever corners of the internet it found these things in. Even if it’s only 80% accurate it’s a massive time saver.


Hey Google, how big is Maine?
G: It's xxx big.
P: Do I know anyone there?
G: No one is in your address book.
P: What's the closest person I know to Maine?
G: Bob.
G: Is this about your asking about Maple Syrup last night? There are closer places to get fresh Maple Syrup, how about Michigan?

Contrived, but not impossible.


I think anyone who's used ChatGPT seriously could tell you that this isn't the same thing, at all.

You're giving "assistant"-like questions, but the problem with those assistants has always been how shallow their responses have been, which significantly limits their usefulness.

GPT's responses are still shallow in an absolute sense, but relatively speaking they're the Mariana Trench compared to Google's little creek.


Depends what the objective is.

If I want to be amused by a Hacker News comment in the style of the Bible, draft a conclusion for my essay, or engage in a long and superficially appealing conversation about philosophy, I'm not using Google's publicly-viewable AI products.

Then again, Google - with and without a conversational interface - will do just fine with what the capital of Maine is, much better with what the weather is like in Maine tomorrow, and there's a lot more usefulness and revenue in associating it with stuff in my address book and selling me flight tickets to Maine which is... some way outside ChatGPT's wheelhouse.


I think that's fair. But one is a mature product, and the other was just born/launched and will improve massively over the next decades. That this new tech is competing already bodes badly for google (yes they will incorporate their own AI, but the cat is out of the bag...)


>what am I missing?

Google currently controls ~90% of the search market. AI-driven chat/search is a serious threat to this dominance. It's likely that after the market settles it won't have the same marketshare. Given how much Google has been dependent on Search/Ads and its other failures to execute, this is a serious revenue threat.

This industry (and site) does have a tendency to exaggerate and take a current trend too far. Google is far too massive to 'die'. I believe even keyword-based search will survive. But going from 90% to 50% will be bad for Google.

IMHO, the worst case realistic scenario is this: Google loses a lot of funds, is forced to close more unprofitable projects. This causes more lack of trust, and more projects are closed. Eventually Google is kicked down to a tier below Apple & Microsoft.


I’d like to think this could be good for the rest of the company if every PM was told that ad sales weren’t going to make their stock options skyrocket, so they need to build a profitable product of their own. Unfortunately, I suspect that’d mean cramming ads everywhere.


OpenAI has had huge mindshare with students recently. Every (ish) kid who writes an essay or needs information summarised is hitting ChatGPT. Kids are where change happens, ask Facebook. They get on new things instantly - within days we were reading reports of kids writing essays with the latest tool. Young people are also naturally viral, sharing the cool new tool with entire classes. This is a marketer’s dream. You don’t need to change how sluggish corporates work, just focus on kids and watch them work.

Google has to fight to get that mindshare back.

Not sure I’d be building my company on the ADHD-fueled Google roundabout that generates and destroys systems monthly. You just know whatever they release is someone’s promotion project, until it’s in GA.


> OpenAI has had huge mindshare with students recently.

Clubhouse was once the new hotness - until it wasn't.


There's no comparison. Microsoft didn't consider integrating Clubhouse into one of their primary consumer platforms. One of the biggest software companies in the world has looked at the state of play and decided that this was important enough to shift their search strategy as quickly as they can.

I didn't say it's a guarantee of success, it's a possibility. There is a non-zero chance that ChatGPT takes market share away from Google unless it moves very quickly.

Having said that, the status-quo play is obviously the easiest bet. It's far easier to be Google than OpenAI or Microsoft at this point.

https://www.wsj.com/articles/microsoft-adds-chatgpt-ai-techn...


Clubhouse wasn't popular with kids. It was popular, and only briefly, with adult tech people.


True. I was only highlighting that popularity can be fleeting - and was for Clubhouse. ChatGPT's popularity may equally be a flash in the pan.


Anyone else remember the beginnings of youtube? Google tried to compete with them, with their own Google Videos. It sucked - and I remember reading how the engineers running it couldn't even figure out why they lost. In the end, Google just had to buy their competition, because they couldn't figure out any other way to win.

They've tried to compete elsewhere, too, and I don't think they've ever been able to make a go of it outside their cash cow. The only thing they've really been able to do is 'search results + ads'.

I don't think they'll be able to modify that winning combo even in the slightest and still be successful. And in this case, they can't buy the competition. Micro$oft already did.


Disclosure: work at google

YouTube was founded in ~2005. Google bought it in 2006. It is now 2023.

YouTube has spent 2 years as its own company and 17 as part of Google.

Try to remember what YouTube functionality was in 2006. It was very different and has grown a lot.

The narrative that Google doesn’t know how to innovate YouTube doesn’t add up.


Youtube now is 90% of what it was 10 years ago, and what made it good in 2006 is the same reason it's good now: the UI is clean and it's easy to use. That's it. Also, following the acquisition, for many years most of the people working on it were the original Youtube folk, not the "Google people."

Google hasn't shown they can do new product in a very long time... see the GCP mess, Stadia, and the hundreds of other total failures (Plus, Wave, and many I've forgotten).


Google has to dump Sundar and bring in a more old-school leader. Google is a massive company and has to stop its "everything is beta all the time, with teams internally competing for the same territory" approach.

I'm so put off by it, and been made a fool by it so many times, I'm phasing out Google in my life and long ago stopped recommending Google products to people in my life.


YouTube was grabbing lots of attention, and with the attention comes ad revenue, so it made perfect sense for Google to buy YouTube and scale it up. YouTube grew into something exceptionally valuable for both Google and the people who use it.


The point is that they had to buy youtube, not that they weren't able to put a shiny interface on top after they bought it.

What I'm trying to get at is, imagine if Microsoft bought youtube before google could - given Google's track record with their own video search, they would not have been able to compete with youtube, and they would almost certainly have simply lost that market. I think that's what happened here. Google is amazing at algorithms, but has very little business sense...they can buy a successful product, but rarely create one of their own.


the point was that Google Video sucked and wasn't able to beat YouTube.

Same story for Google+


Google+ was better than Facebook, which was its primary competitor. The problem there was the same that any new social network has: nobody is going to switch if they can't bring most of their social graph over.


Was it though? Maybe for some users in some use cases?

Clearly TikTok is a better surveillance network and managed just fine.


Google already has a track record of beating Microsoft. Do you remember Windows Phone? Well, Google made Android in response to the Windows Phone + Bing threat, not in response to the iPhone.


Well, Steve Ballmer wasn't exactly an innovator - just kept milking the cash cow.

Satya Nadella seems way more tuned into the zeitgeist. His heavy bet on OpenAI may seem excessive, but at worst it's going to be cheap insurance and at best may be a game changer.

I think Google's producing Android was a reaction to BOTH the iPhone and Windows Phone - they didn't want to be frozen out of the mobile advertising market by competitors that owned the platforms.


Windows Phone launched in late 2010, long after Android. Do you mean Windows CE / Windows Mobile? But if so, those weren't really a serious threat to anyone.


Yeah I meant Windows Mobile/Phone.


Google bought Android Inc. in 2005, 5 years before the launch of Windows Phone.


..and Chrome


As an engineer, my concern about "google killers" is that I can't see an easy way to scale and control/optimize them in business settings. Apart from the factual misstatements happening in ChatGPT, what about source attribution? How is the relevance of a source determined? How is the flow of information through the network preserved (sourceA => sourceB => sourceC)? With Google we also don't know exactly, but I can imagine some version of PageRank as tuneable. Finally, how do you add new pages to the index and measure the potential "forgetting" that could happen?

Unless somebody could clarify those for me, this is what currently petrifies me -- some uncontrolled black box presenting its clandestine view of the web with no way to follow the breadcrumbs.
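For contrast, the tuneable PageRank the parent can "imagine" is at least easy to state. A toy power-iteration sketch in Python (illustrative only; Google's production ranking layers many more signals on top, and the graph here is made up):

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank by power iteration.

    links: dict mapping each page to the list of pages it links to.
    Returns page -> rank; `damping` is the classic tuneable knob.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page gets the base teleport mass, then link mass is added.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# A 3-page cycle: every page ends up with equal rank (1/3 each).
print(pagerank({"a": ["b"], "b": ["c"], "c": ["a"]}))
```

The point is that every number here is inspectable and adjustable; nobody has shown an equivalent knob for "how much did source B influence this generated sentence" inside an LLM.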


Google was already going this route. ChatGPT is simply pushing GOOG and META to go faster.

Simple as.


But without ChatGPT, when would Google put an LLM into Search? 2025? 2030?


They wouldn't have been the first ones to release an AI chat bot, but that doesn't matter. What matters is, given ChatGPT is here, is Google going to lose the coming battle to monetize advertising in AI agents?


It still can be a Google killer even if Google comes up with something better because Google makes its money from sending people to the highest bidder. So far there's no clear path to make money from ChatGPT, let alone match the sum Google is making from Search.

If this new paradigm dominates the way people use computers and it's not as profitable as Search, Google might indeed have to scale back.


ChatGPT can make money by subscriptions, or they can also put ads next to the chat?


Can they? Google made $162B from search ads last year; to match that with the leaked $40/month subscription you would need 340 million subscribers. Can they really get 340 million subscribers at $40 - 50% more subscribers than Netflix, at double the price?
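The arithmetic behind that subscriber count, as a quick sanity check (figures taken from the comment above; the $40/month price is only a leak):

```python
# Subscribers needed for a $40/month plan to match ~$162B/year in search-ad revenue
search_ad_revenue = 162e9          # USD per year, figure from the comment above
yearly_subscription = 40 * 12      # USD per subscriber per year
subscribers_needed = search_ad_revenue / yearly_subscription
print(f"{subscribers_needed / 1e6:.0f} million")  # roughly the 340 million cited
```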


Can they make $100 billion a year profit?


Because sitting at a computer or a phone is, in fact, unnatural. We are trained to do that. Asking a question of another person is natural. If you can get Alexa or Google Assistant to reliably, without lying, answer questions, that could be huge. Caveats, but so much $ is put into those things and they suck. Also, if you could get a Google AI to be like a real assistant, with context and understanding of what you're doing, that could also be huge. Just getting the assistants of the world to really interact would slice a large piece off the Google search pie, and potentially set up whoever does that well to be the next major interface to tech.


Not sure if this is accurate. Voice interaction is very slow, and low bandwidth. Visual interaction is much faster. Hence people love their spreadsheets.

Even if a voice assistant allowed you to interrupt it, to make fast course corrections, it would still be much slower than, say, interacting with the filters on Google Flights.

And I am saying this after having built for myself a bidirectional voice interface to ChatGPT. There are certainly situations where it is great to use it, such as while driving, or perhaps in the kitchen while having your hands full. And probably on mobile, where screen real estate is scarce. But those doing information work, or even just online shopping, probably won’t be giving up their screens anytime soon.


I think there are simply 2 killer features in LLMs vs. Google search for me:

1 - Natural language with prepositions and easy ways to include, exclude and filter.

2 - Refinement: "No, that wasn't quite right because X - please factor this in and try again" is a lot more intuitive than multiple rounds of operator use and "memory exclusion" of pages you have already seen.

I find that chatGPT will give me what I need within a few iterations, Google search sometimes takes a lot of searching and reading to get an idea of what I need.

I feel like ChatGPT + Github Code search could be a killer combination for programmers


If I'm looking for a specific answer, then with my googling skills, I can easily find an article or tutorial written for a target audience in mind.

If I on the other hand use generative AI, then the answer (and hopefully correct one) would be generated only for me. This is the personal touch Google currently misses and I guess it's appealing for many people.

Currently LLMs are not Google killers; they can't find me a restaurant, a nice watch, or other stuff I'd pay money for. Yet.


Rather than Google killers, I would say they are SEO or publisher killers. There will be less traffic to websites or blogs, and certain types of websites will be impacted more than others. In fact, Google will have more ways to recommend and curate content in their way.


And that's exactly why the CEO of Google published the blog post about Google + AI, because they don't think AI is a potential Google killer. /sarcasm Yes, they took the risk seriously and that's great. However, Google's lack of good UX will probably produce a monstrosity of a tool that no one will use. The only usable parts will be what's embedded in the Google Search engine.


The reason is that high amount of usage is itself disruptive. The same way Google has an advantage of receiving 90%+ of search requests, ChatGPT has the advantage of being tested with millions of requests per day. If Google cannot test a similar AI technology with the public, it will hardly get results that are comparable with ChatGPT.


I use Google in conjunction with ChatGPT. When I’m researching something, ChatGPT gets me started and points me in the right direction, but then I use Google to find first-hand sources/more detail/images/videos etc.


This is not that easy. Kodak and Nokia could have pivoted as well (and boy they tried).

And this is not even a problem of scale, it is obviously difficult to change course for a giant supertanker, but the most insidious problem is the money makers inside the company, they usually have a lot of power and they won't allow anyone to butcher their margins.

Killing the cash cow is difficult. I don't see Google taking the risk.


Branding is one thing. "Google" has been a commonplace verb for a long time, which alone is worth billions to them. ChatGPT is the first time something has come close to stealing that spotlight, since for the time being, it answers a lot of queries far better than Google would. So even if Google makes something technically better later, it might be too late to replace ChatGPT as the AI king.


ChatGPT will have a very hard time. If I want to get on ChatGPT I usually google ChatGPT and click the first link. It is a crazy competitive advantage.


"Google killer" is hyperbole, but I think Google does have a challenge since chat-based search may be much harder to monetise & thus less profitable. One reason is because people are used to having ads on the search results page, but ChatGPT presents itself as a human advisor, and in this context I suppose people would be quite unhappy to have ads injected into their conversation.


That’s because you are trying to have a nuanced conversation about this, but “hot takes” are the most valuable currency on social media these days.


Unfortunately I don't think it's that easy.

That is, the cat is out of the bag:

ChatGPT not only showed us the power of AI, but it showed us a bunch of non-AI things like:

- Ad-free results

- Clutter-free results

- The elegance of not having to click on links

A competitor could capitalize on this, putting Google's ad-driven, click-driven, clutter-driven model at serious risk.


I agree, I think it's a fairly unimaginative take (a common one though) that ChatGPT is likely to lead to the usurping of Google.

It's a two-part claim; a) yes, conversational AI agents will replace typing searches into a text box, but b) I see no reason to think that Google can't easily monetize that format.

Regarding the first point, I think Google appears to have been caught a little bit off guard as to how soon this transition would happen. People seem to be over-indexing on the "code red". I do think it's a strategic mis-step by Google to not have a product ready to go here. (Their broader strategy was quite risk-averse and that was probably sensible, given the shit-storms that previous systems like Galactica and Tay generated; Google couldn't be the first ones to publish a prototype/demo system like ChatGPT, or the NYT would have jumped all over them for the inevitable questionable utterances.)

But the second part; given AI agents are here, who's going to win the competition to monetize them? It seems clear to me that Google is in a great place to monetize and capitalize on this technology, and I think they will win if their language models are better. (So far, they seem to be way ahead; LaMDA was early 2022, and it's clearly better than ChatGPT.) If Google's version of this service is substantially better, but intersperses ads based on what you're talking about (the Gmail model for ads), would people use this? I think it's clear that consumers would take the better free service that comes with ads, vs. paying ChatGPT or accepting inferior quality.

Let alone the fact that Google can put the assistant onto billions of Android phones, fine-tune a model per user, offload compute power with device-based inference to save OpEx, and so on; all of these will give whoever is running the AI agent a lot more ad targeting power.


> yes, conversational AI agents will replace typing searches into a text box, but b) I see no reason to think that Google can't easily monetize that format.

Current user workflow:

- ask question in google.

- get shitty results

- check out 5 pages' worth of results and try a few more searches. In the process you've seen 5x the ads you would've maybe 10 years ago when the results were better.

- In the middle of this you maybe clicked on 3 or 4 sites that were spam sites which themselves had on adsense ads.

New workflow:

- Ask google a question.

- Immediately get a detailed, thought-out prospectus, or presentation, or whatever with a top-down overview of what you wanted; maybe you're still curious so you ask a couple of follow-up questions. Unless they put an ad between every sentence, you'd only have seen 1/10th the ads in the 3 replies it took to come close to Google search.


1 click from a search ad on the first page of google is worth more to an advertiser than all the impressions derived from the all the lame sites you trudge through.

Source: I've run a digital ad agency for 8 years.


> what am I missing? Why would a chatbot like ChatGPT disrupt Google vs forcing Google to simply evolve.

Because it's not clear yet whether anyone else can currently develop a chatbot interface as capable as ChatGPT.

For many ChatGPT is already replacing a lot of Google searches so G needs to hurry.


For many? Citation needed. Yes there is a lot of hype, but is it really replacing a lot of Google searches, if so is that your anecdote or is there data here?


Yes, sorry, that's an anecdote of me and everyone I know who has tried ChatGPT. Obviously in the grand scheme of things this is nothing, but if other early adopters behave similarly it is a great threat to Google, because if it catches early adopters then everyone else will follow after a while.


The question becomes: does the best model win, or do Google's existing processes, infrastructure, and advertising relationships allow it to purchase or reimplement the best model?

Probably still an open question, but a better chance than anyone has had to disrupt them for two decades.


Way too many responses to your question are trying to engage with debate over products rather than focusing on the topic of monetization.

Google makes money through ads, and especially ads that get you to click through to somewhere else.

They do this through SEM ads that appear on your search query to direct you to a paid destination.

And they do it through a display network on 3rd party websites that Google search inevitably ends up funneling you to.

If you are simply engaging with an AI that's synthesizing those results so you don't have to, that's less time you spend on those sites seeing Google's ads, and less incentive to click through to paid results.

Their entire business model basically goes up in smoke if AI successfully intermediates the Internet.

This doesn't preclude them from competing, as you point out, but you generally don't want to see your cash cow get slaughtered and then suddenly be in a highly competitive market for what will replace it.

Having a 90% market share on the Titanic isn't an enviable position.


Agreed that what the product is exactly doesn't matter too much. It's kind of inevitable at this point that search is going to go in the direction of a chatbot.

Ask Jeeves back in the day already knew that what people really want is a question answered. Google search and its competitors were a long-lived offramp on that road. Ultimately free-form interaction is just more intuitive.

But with that said, I also am not so quick to call the death of SEM ads. Just because chatbots exist doesn't mean people don't want to visit other websites. Display ads will continue to be a thing.

Similarly, there's no reason chatbots can't now direct you to sites or advertise products as part of their responses. Heck, this is a much more sinister form of marketing with astronomically higher click rate since a chatbot is responding authoritatively with a recommendation.

Yes, google is going to have to pivot... but this is a problem they're very well suited to solving and they have a very strong incumbent advantage in the meantime.


Ironically, using ML/DL language models, which have only statistics and no intelligence, is a Google killer to the extent that Google uses AI and destroys its really useful search. The declining quality of Google search seems to only be accelerating.


> This just doesn't make sense to me.

It does if you remember that Google is in the search ads business, not in the search business.

I guess you could find a way to weasel ad copy into ChatGPT's answers, but that will kinda massively kill the vibe.


I have come to realize that most of the "google killer" chanters have, in fact, not used ChatGPT for in-detail work.

It absolutely has the same problem as self-driving, and only after 10 years have we accepted that it is still a very long way off.

Mercedes is doing marketing by calling it self-driving, but limiting it to areas the car seems to have a 99% understanding of. They are betting on the fact that they will make more money on the 99% buying these cars than on the few cars that inevitably will crash.

Essentially how insurance has worked forever.


If anyone with a couple million dollars can create a great LLM-powered search engine then maybe Google's search engine division should be valued a couple million dollars.


Maybe what they mean is Google Search killers?


> But I guess I'm wondering: what am I missing? Why would a chatbot like ChatGPT disrupt Google vs forcing Google to simply evolve. And perhaps make even more money?

People thought Google had no time to develop something like ChatGPT, when they were totally wrong.


Blockbuster should have been able to easily pivot too.


I think Google is in a tough position, because for them ads (both in search and other websites) are a huge portion of their business, and they need to find a way to monetize LLMs without killing its own cash cow.

Microsoft has a lot of advantages here - they can introduce LLMs to search at a much smaller scale (an order of magnitude smaller), which means it's cheaper, and they have plenty of other products making tons of money, so they can take their time to figure it out (also they're already adding ChatGPT to tools like Teams and possibly Office, so they'll be able to increase revenue from these products).

Google is also seen as a bit of a dinosaur - they struggle to introduce new products, and recently we've been hearing more about products they kill rather than huge successes. It seems that as a company they lost their innovative spirit, and that's why people don't believe they'll evolve quickly enough.


Here's a concrete scenario: StackOverflow makes money because people go to it for answers to their questions. The answers are provided by people who write them because they get points for providing good answers, which they use as reputation to get good jobs. It's an ecosystem. Everybody profits. Now Google scrapes the contents, and provides a service where you just ask your question and it gives the best, most accurate answer. Nobody goes to StackOverflow any more. People stop posting answers, because nobody asks the questions, and nobody reads their answers any more. StackOverflow goes under, and the ecosystem dies. Where does Google get its information now? The feeder pipeline has died.


> Where does Google gets its information now? The feeder pipeline has died.

The bar has been raised now. People will ask on SO the questions whose answers they couldn't find anywhere else, including from AI.


Correction: the AI will ask the questions it couldn't find a high enough confidence answer to. We are all now working for the AI to find niche scraps of knowledge that haven't been indexed.


I had similar thoughts.

The next evolution towards general AIs would be the implementation of curiosity.



30 years old, engineer here. I am starting to get really scared. I don't mean that for my job security.

We are just starting to observe societal effects of social media. We haven't reached the era where we, as an entire species, recognise and regulate it impacts legally & properly.

I am starting to feel like we are losing it against the machine as a species. I don't fear being replaced, but I feel the culture getting mangled in a way we won't be able to recover some things, because it will be too late.

It is not a ring-wing, left-wing political thing. But some sort of innovator's dilemma. We are like cornering ourselves into an innovator's dilemma as a species.


Yes and: I’d argue that whoever has been distorting social media to further their own goals (for example the Cambridge Analyticas) has already mangled culture enough that the transition to AI might be imperceptible.


^ This.

we will all become servants to the giant AI, feeding it more and more levels of detail and obscurity


Maybe people get paid to answer questions that the AI can't answer; it could be a new ecosystem.


I don’t understand this scenario. If someone already posted the answer then asking the question again on SO has no value. If the LLM can automate giving answers then we don’t need SO as much as before.


> If the LLM can automate giving answers then we don’t need SO as much as before.

This is probably the misunderstanding: the LLM can only automate giving answers because it has been trained on all of SO (or other similar communities). It's a summary of SO, not an alternative to it. When new problems arise, the LLM will need to be re-trained to include the new SO answers, it will not be able to synthesize new knowledge.

So, if SO is dead, the LLM can't get the info anymore to answer questions about new topics - but you won't be able to tell for a long time, long enough to kill SO most likely (assuming this actually gets traction, of course).


I’m not convinced the LLM doesn’t have emergent answers. I often ask ChatGPT questions about data wrangling that are quite esoteric, such as “write some code in R that sorts an array as alphabetical, but puts x and y at the beginning”. Even if it gives the wrong answer at first, it seems to get it right eventually.
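For what it's worth, the kind of custom sort being asked about there is a short exercise in most languages; here is a rough Python equivalent of that R question (the function name and the priority list are mine, not from the thread):

```python
def sort_with_priority(items, priority=("x", "y")):
    # Alphabetical sort, except items named in `priority` come first,
    # in the order given by `priority`.
    rank = {name: i for i, name in enumerate(priority)}
    return sorted(items, key=lambda s: (rank.get(s, len(rank)), s))

print(sort_with_priority(["b", "y", "a", "x", "c"]))  # → ['x', 'y', 'a', 'b', 'c']
```

The trick is a tuple sort key: known names get a low rank and sort first, everything else falls back to plain alphabetical order.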


But why would SO (or an equivalent) be dead. I know very little about ML so it is entirely possible that I'm way way way off here but it seems like a product with this sort of tool integrated would be capable of determining when a query produced a not-very-useful result and could even aggregate such queries. We'd have a very powerful training system where the AI can communicate back "hey I need training on this sort of stuff" and then iterate. If this is valuable, people can be paid to provide this training input.


If Google launches this tool, Google will make money off the answers this tool provides. The fact that this tool is trained on the content from SO will not mean SO gets any money. Also, if users just get the answers from Google Bard, they will not visit SO, and will not contribute to SO's community or revenue. So, the SO community will eventually die if Google Bard is good enough.

The whole premise of the economics of these LLMs is built on the assumption that the training data is (mostly) free. If you need to pay people to provide the training input, you will quickly find that you're spending more money on creating the training data than you're getting out of the finished model.


Stack Overflow never paid anyone though, so it seems possible for Google to launch a service where people answer questions to feed the AI and hand out reputation tokens similar to SO's.

I mean, they could make a game, where people have to try to beat the AI (and other humans) in making the best answers to questions.


Sure, they can try to essentially create and run a new SO - though that's still far more costly than what they did today. Especially when considering similar effects on other content sites.


People will visit SO less, making it less profitable to stay alive.


Is this the case? If the LLM must be re-trained each time a new problem arises, it means that it doesn't "reason"... so what's the point?


Remember LMGTFY? People will be too lazy to ask the AI and will instead ask basic and completely obvious questions on Stack Overflow, forums, and so on. Probably forever.


The most accurate answer might still not be correct, so they'll still go to stackoverflow to ask new ones, thus keeping the pipeline alive?


But if the traffic to SO is reduced significantly because of Google, there will be a death spiral. There wouldn't be a site to go and ask new questions, and even if the site is still there, nobody will bother to answer. The core problem is that the people who create the new knowledge (the answer writers) and the people who aggregate the knowledge (StackOverflow) will not get any concrete return for their efforts, since they are cut out of the consumption loop.


You don't need to be a big business like SO to provide this service. The death spiral would only make sense for a business whose demands for growth outpace the market, so they exit, or whose operation is too expensive vs. the ad revenue.

But that doesn't mean other businesses can't fill that reduced role more efficiently.


You need to be a place that people go to ask and answer these questions. It doesn’t matter how big or small you are if you don’t exist because the people answering the questions don’t go to you in the first place. You’re right that there’s another means in which a business can fill this, but I don’t think it’ll be stack overflow.


Here is where the loot-anything-you-see, wild-west kind of approach will break down.


> because they get points for providing good answers, that they use for reputation points to get good jobs.

Please show your work for the second part. Seems like a general statement, but I don't see that reflected in reality.


> Where does Google gets its information now? The feeder pipeline has died.

If people are asking questions and don't get an answer, then they'll still seek one out. I could see a cycle where Q&A sites get eaten by Google, but as long as there is demand for fresh answers there will be services that fulfil it.

People will still seek out sites to post questions requiring context and domain issues that AI won't ever fully address. Plus the human instinct for social interaction and asking the same question already solved 5x before.

It doesn't have to be a Stackoverflow tier business but forums will remain a thing and there's plenty of reputation networks outside of Q&A.

I can get 90% of Yelp's restaurant info via Google SERP (menus, hours, location, reviews) but I still use Yelp all the time and as a business they are doing fine.

This AI stuff will be a similar thin layer on top for quick answers, but niche content sites will still flourish IMO. Just with less traffic for whole new classes of sites.


> I can get 90% of Yelp's restaurant info via Google SERP (menus, hours, location, reviews) but I still use Yelp all the time and as a business they are doing fine.

Wasn’t there a lawsuit around this? Sure you can get more info, but how many will go that far?


I think the trend we see is that there will be a generalized engineering category of LLMs, and the people maintaining them will just feed documentation for new frameworks/libraries/languages etc. into the training sets, and those models will then happily answer questions about the "new stuff".


Google already shows snippets from SO and has done so for years, without AI. Why would this be any different?


The difference is that the snippet points to the source. If you are like me, when I see the snippet and find it relevant I immediately go to the source, since I get access to the full context and others' comments. With a technology like chatGPT, that link is cut.


Are you sure that Google won't provide the link? If these chatbots could provide references for their answers, that would allow them to link back to websites, solving many of the problems mentioned here.


> Now Google scrapes the contents, and provides a service where you just ask your question and it gives the best, most accurate answer.

You can already "google" for answers nowadays instead of asking them on StackOverflow. So what is the difference from the situation we already have?


I use github copilot to create simple functions by typing the function name and copilot does the rest. This works reasonably well for basic stuff. It becomes unhelpful and starts creating incorrect paths to library functions as soon as you add your work context.


This doesn't address the parent's excellent question: how do these models continually get trained and updated if they put their key sources of training data out of business?

Related: I'd be quite worried if I was a Q/A site like StackOverflow or Quora.


> This doesn't address the parent's excellent question

That's a large part of HN in a nutshell.


Imho, quora users don’t use it for finding answers but rather for reading personal experiences. I used to write on quora fairly often many years ago. But something happened, and in my subjective experience the platform became far less interesting and useful. A significant part of the questions turned into barely hidden shills for businesses or products, right answers seldom become visible, and the overall quality of content went down drastically. Maybe I’m not representative, but from my point of view the readers and writers on quora in 2023 won’t have a very different experience when there are smart machines that give right answers.


It's not excellent. It's a red herring. Google is already showing responses on the SERP without you having to go to SO, without AI, and the hypothetical scenario hasn't happened.


That is in fact the elephant in the room: you will most likely never be able to actually use it for complex stuff because of the AI's token limits.

Imagine analyzing a massive code base: sure, it can tell you how you were solving function X by translating it to natural language, but it still does not understand any of it.

As far as i know, training it on your dataset will not improve this.


Increasing the token limit is a solvable problem


Sure, we just need next-level supercomputers for these large models and the patience to wait multiple days for output.


Not necessarily - you just need hierarchical abstraction memory. I reckon my "token" limit when analysing code is around 7.


Increasing the token limit without needing more resources to run the network is a solvable problem


but you do see the problem with a codebase, right?


The current token limit comes from a O(N^2) memory requirements for N tokens, there is research that's trying to reduce this towards O(N), for example as the (downvoted) sibling comment suggests. This is not exactly straightforward but not impossible either. It's not a fundamental limitation of language models going forward.
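To make the O(N^2) point concrete, here is a toy, dependency-free sketch of single-head attention over scalar "embeddings" (a deliberate simplification of the real mechanism, which works on vectors): the `scores` table has one entry per (query, key) pair, and that N x N object is what ties memory to the square of the context length.

```python
import math

def toy_attention(queries, keys, values):
    # Toy single-head attention over scalar "embeddings". The `scores`
    # table is the O(N^2) object: one entry per (query, key) pair.
    scores = [[q * k for k in keys] for q in queries]  # N x N
    out = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]  # numerically stable softmax
        total = sum(exps)
        out.append(sum(e / total * v for e, v in zip(exps, values)))
    return out, len(queries) * len(keys)  # result, score-entry count

# Doubling the sequence length quadruples the score-table size:
_, n4 = toy_attention([1.0] * 4, [1.0] * 4, [1.0] * 4)
_, n8 = toy_attention([1.0] * 8, [1.0] * 8, [1.0] * 8)
print(n4, n8)  # → 16 64
```

The linear-attention research mentioned above is about avoiding ever materializing that full table.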


That will still only be enough to hold an extended CLI application.


That's not my experience at all. I find Copilot the best at understanding and making sense of my work context, spanning many files. I find it less useful for creating generic functions.


Sadly, we feed programming knowledge into ChatGPT now.


>AI can be helpful in these moments, synthesizing insights for questions where there’s no one right answer. Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner. These new AI features will begin rolling out on Google Search soon.

This is actually really huge. If done right, Google will be increasing the amount of "no-click" searches an incredible amount. I'm interested to see how good of a job the "factual grounding" works - this linked blog post in the article is pretty interesting https://ai.googleblog.com/2022/01/lamda-towards-safe-grounde...


That's good for users, potentially. But not good for the sites where google is sourcing the information.


Potentially good for users initially. But I can't see how anyone will be incentivized to create and post content for Google to scrape if they won't have any traffic from Google.

It could actually be a huge benefit in some ways if it chokes out the content mills. However, something tells me that they have little to no overhead compared to the people who actually toil away to post good, original content.


> But I can't see how anyone will be incentivized to create and post content for Google to scrape if they won't have any traffic from Google.

The problem right now is that the incentives have caused most of the output online to be garbage.


Exactly. If Google is able to provide better answers than the garbage websites with their SEO hacks, those garbage websites will not get clicks. I could see this improving the incentive system significantly


- collect creators content, make it searchable, become worlds largest company

- encourage more content because PROFIT

- content becomes garbage

- have to pay those pesky content creators

- slowly squeeze out entire industries by inlining more and more content

- still, not squeezing the juice all the way

- introduce "AI", it just launders copyrighted content to look original

- bye creators

- for some odd reason people cheer you for this

- creators forced to make their content private


Garbage websites are 'content creators' now?


The incentive to what, be an unpaid content creator for Google? Search results lead to sources which land users on the content creator's "property". Chat results, especially based on what we see right now, won't do that.


How can you call garbage SEO websites 'content creators'? I think we'll be better off with websites which spread knowledge for the sake of spreading knowledge. Down with ads and low quality copy-paste sites.


So everything being done for free except for Google taking all the profits?

Are you kidding me?

People need to make a living, and creating good content isn't easy. Sure, a few folks do it for free, but wholesale trying to kill off everyone who does it, by making it financially non-viable, is long-term idiotic. What are you going to train on once everyone stops writing or letting you scrape their data to train from?

Search engines scraping your content is an agreement that they can look at it and use it, and send you traffic if it matches well with a user. Why would anyone subscribe to a deal where there is literally zero benefit, except training some AI which will repurpose your knowledge?


People making a living is great ofc. People making a living off of copy-pasting content and ruining the search experience is clearly not useful and that is the only point I was ever arguing for.


This may actually be fantastic for the web. The current incentives are terrible anyway: cheat, scam and SEO your way to the first search result page and then do whatever since you'll get visits and decent ad revenue regardless of content.

Most people that make good content don't make it for money anyway. Did people back in the pre-google days think "oh I'd make this site but gosh darn there's nobody to pay me for it". They just went and made the site regardless.


> Did people back in the pre-google days think "oh I'd make this site but gosh darn there's nobody to pay me for it".

Google is a huge part of the reason the old web doesn't exist anymore. Artisanal websites cannot compete for visibility against corporate websites that have staff dedicated to figuring out SEO tricks from every imaginable source: page speed, HTTPS, image compression, meta tags.

The hobbyist back then didn't need to know all this. Today, not having HTTPS alone can cause your site to be hidden from search, even if it is read-only. In that kind of world, only the infinitesimally small minority will bother to make a website on their own dime.


I don't see why SSL is that much of an issue these days. Cloudflare does it for free, lots of hosting providers can handle let's encrypt for you.


I think this could be the death knell of sites that need to make money and publish content that can be easily understood in a short ChatBot answer, both mills like geeksforgeeks and hobbyists that wouldn't do it without a financial incentive. Is that such a bad thing? We've been complaining about SEO optimized crap for years.

Sites with complex or lengthy information will not fall to LLMs, IMO. No one interested in reading Paul Graham's blog posts is going to just read the AI summary and move on, for example.


Robots.txt to block anonymous scraping, and offer Google the opportunity to purchase your site's content if they value it. Web traffic can switch to another provider to index. Google search will be nothing but content mill garbage unless they want to pay for their AI fuel.
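For illustration, such a robots.txt might look like the following. Note that `HypotheticalAIBot` is a made-up user-agent string: as of this writing there is no separately named, standard crawler for model training that sites can opt out of, which is part of the problem being described.

```
# Block a (hypothetical) AI-training crawler from the whole site
User-agent: HypotheticalAIBot
Disallow: /

# Still allow the regular search crawler, which sends traffic back
User-agent: Googlebot
Allow: /
```

The catch, as the comment implies, is that this only works if training crawlers identify themselves and honor robots.txt at all.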


Websites are not posting content for the sake of Google scraping it currently anyway. Look at reddit for example: fake internet points.


Ya, but if you don't get fake internet points or any responses/comments on your work, what will be the incentive?


That's one of the big problems with these types of AI - piggybacking on everyone else's work, typically with no attribution _whatsoever_.


Yet you better cooperate otherwise you won't get any visitors at all.


Nobody owes you visitors.


Nobody owes you a normal life but it would be a shame if society actually thought that way and we never developed social security and universal healthcare.

Using the monopoly on search to dictate how you shall present your content to the google god is still a bad thing.


Google brings visitors to your site for free or no one would find it in the first place. You can always robot opt out and then your information is “secure”.


I've already banned the crawler from mine for the AMP and other crap they've been pulling. If you want to get more complete results, you'll need to use another search engine, which felt like the only thing in my power that I could actually do about their shenanigans. Now I'm glad I took this decision a few years ago: they've already got a search monopoly, but not being able to copy my content to further increase it? Yes please. I just hope the competition, with less deep pockets, is able to follow suit.


This seems like a personal dislike stance rather than principle, which of course you should fully take. But do you mind ChatGPT learning from your website?


Not sure. I don't like that I wasn't informed of them using my content, that there are no credits of any kind. I'd enjoy knowing it was useful and this info has been accessed by a chat user, the way I do when someone visits my site. That's the whole point of making it; I have no commercial interest or expectation but I enjoy helping others.

But from a "this is stealing" type of perspective? Nah, we're all standing on the shoulders of giants and everything we do is a remix of something else we've seen. A human can read my site and take "away" the content, just not at this scale. And they're more likely than a dead chat bot to let me know if my content has been valuable to them.


I’d rather get my information without having to click on a site. I get the need for attribution, but Google can cite the sources it gets its content from, and I don’t really care about losing ad money.

But another issue is accuracy. Of course real sites aren’t always accurate, but they’re way more reliable than AI (and sometimes the site is ground truth like official docs so it can be trusted…unless…the official docs are wrong….).


Customers may not care, but content producers absolutely will. Chatbot like interfaces that lead to "no-click" searches are going to get sued out of existence, OR are going to lead to the establishment of lots of paywalls that lead the chatbot to have blindspots about information behind them.


News sites can reword information from other news sites, AI can train on reworded web text. It will keep the information but not learn the exact original expression, as it should be. Copyright protects expression, not ideas.


One thing I don’t get… I read blogs and articles for the experience I feel when I read them. There’s a look and feel to these sites.

Conversing with chatGPT is just giving me information and facts. I don’t see it replacing the experience of engaging with a website.


The web was a stepping stone.


if they cite sources, maybe they will create a new way to monetize a website based ln the amount of times it got sourced for an AI answer ?


Yeah what I totally want when I have a nuanced question is an AI that just gives me a mealy-mouthed, non-committal response, such as in the example for this post on whether guitar or piano is easier to learn.

If you asked your friend, who's a serious musician, and they gave this kind of couched answer, you would be really annoyed at them.


I like that, and that's the one thing I enjoy most about ChatGPT, but the problem with this scenario is that it breaks the web. If a search engine can give me all the answers I want, then there’s no point in visiting sites anymore, aside from e-commerce. So if you have a site you wouldn’t want the next AI search engine to crawl your content, because you get no traffic back. So either they find a way to give traffic back, or soon enough there will be no content to provide through an AI search engine.


What traffic are you getting back from commenting on HN? If you're not getting any, why are you wasting your time? Of course you can post online for other reasons, like participating in a debate, building a community, working on a project, or just for hobby. And usually that's the good part of the web. Otherwise why would people be using site:reddit.com in their searches?


Getting on HN doesn't cost me a dime though, aside from spending personal time. Having a web site costs money, both to maintain the hardware and the content. And if someone wants to index the content, he should give me back traffic. That's the premise upon which search engines were built. You break that, and we end up with a walled-off garden that handles all the information.


Also, if no one's site is getting human traffic would the quantity of human generated content degrade over time? And AI content will increase? And then what will the AI's scrape/learn from? Themselves? How do you change their minds when the information changes in reality, but a bunch of AIs are stuck in a consensus loop? =)


I still think I will read articles for the opinion of the writer, the vibe of the site, and the experience of reading an article. I don’t see chatGPT replacing this for me


That could be incredibly dangerous for their Adsense revenue.


The missed innovation is always a threat to the core business, that's the trap.


The side effect of too much no-click results could be mainstream sites completely disengaging from Google Search altogether. Just as sites already set no-index zones, those could expand to protect from training.

That kind of move could be the actual “Google Search killer”


I, for one, hope that the AI will mostly stick to analyzing the content rather than search intent, because Google has a history of misreading my search intent quite frequently.


I wonder what kind of downstream effects this will have on the ad-click industry. I would kill for it all to die and be reinvented in a different, more personalizable and clever way.


People will still visit websites that provide entertainment, social media, shopping, services, web applications, source material, porn, etc. Enough opportunity for clickable ads.


isn't the vast majority of ad market surfaced via google?


The skeptic in me can’t help but think this experience has been designed to keep more people on Google properties longer. It feels parasitic.


They make money when you click away from their properties


Companies will have to start opening up their wallets to pay for their "Bard-Rank"

BULLISH for GOOGL


> Soon, you’ll see AI-powered features in Search

Despite this blog obviously being written for Wall St, this line is key. This is why anyone is going to struggle to compete with Google in AI.

Google Search is used so widely and its taxonomy of meaning so vast already that once they flip this on they’ll be unstoppable. (Assuming any rate of improvement whatsoever as we search.)

I’m not mentioning all other learning data like email, photos and drive, in part because other companies have similar, albeit in more specialised forms.


I wonder how long till they open it up to your private data stored in Google? Eg.

User: What should I be doing today?

AI: You have a court appearance at 9am. Prioritize that, because failing to appear might result in an arrest warrant. Next, prioritize your mum's birthday - all the other family will be there, and with your mum's cancer results last week, this might be the last. If you travel between them by bus, then you can spend the time going through your team's messages that you should have replied to last week.

Me: What about the leaking faucet in my house?

AI: You don't have the time to fix it yourself, nor the money to pay a plumber to do it, so I suggest you leave it leaking for now. I watched the video - it's leaking down the basin, so it won't do damage, and the water cost is around 8 cents a day.


User: What should I be doing today?

AI: You have a court appearance today, I have booked a route for you on google maps. Next, buy a gift for your mum’s birthday, here are some recommendations sorted by ad spend. For your travel, I recommend these options, sorted by ad spend.

Me: What about the leaking faucet in my house?

AI: You don't have the time to fix it yourself, here is a list of contractors that can fix the leak, sorted by google ad spend.


Is that any different from trying to source it "on your own"?


Nah, probably not; since search results are ranked by ads, it's the same thing just in a chat-based format.


I recall reading a very interesting short story about exactly this - handing over the entirety of your daily minutiae to AI and the moral consequences. I'm unable to find the name of it, maybe something by Ted Chiang?


Fun exercise for the reader: how much of this is actually possible with LLMs and how much is not.


Reading through all your private documents and extracting the useful information to answer an open-ended question like this is still a little way off.

Specifically, LLMs can generally only take into account ~8000 words of context when deciding on a response. Summarizing the question and all necessary information for the answer into 8000 words is hard when the user might have millions of words in their inbox.

Having said that, I don't think it's far off. There are already prototypes of LLMs which have information retrieval abilities (ie. it could do a keyword search of your inbox to find a few relevant documents to read to decide on a response). There are also promising efforts to make that 8000-word number far larger.
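The retrieval idea can be sketched in a few lines. This is a deliberately naive version (keyword overlap instead of embeddings, word counts instead of tokens; the function name and budget are mine), but the shape - rank documents by relevance, then pack as many as fit into the context budget - is the same:

```python
def build_prompt(question, documents, budget_words=8000):
    # Rank documents by naive keyword overlap with the question,
    # then pack as many as fit into the model's context budget.
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    picked, used = [], len(question.split())
    for doc in ranked:
        n = len(doc.split())
        if used + n > budget_words:
            break  # context budget exhausted
        picked.append(doc)
        used += n
    return "\n\n".join(picked + [question])

docs = ["the court hearing is at 9am", "recipe for banana bread",
        "bus timetable for route 12"]
print(build_prompt("when is my court hearing", docs, budget_words=12))
```

With a 12-word budget only the one relevant document fits, and the recipe and timetable are left out, which is exactly the summarization-into-the-window problem described above.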


> Fun exercise for the reader: how much of this is actually possible with LLMs and how much is not.

From what I understand about LLMs - not directly. But it may be possible to integrate LLMs with other services such that LLMs respond this way.


> Fun exercise for the reader: how much of this is actually possible with LLMs and how much is not.

I have no idea.

But it would be nice?* if this LLM stuff could be incorporated into the telephone system for my bank, pharmacy, etc.

It's obvious that these organizations don't want to connect me with a skilled human, so instead of having me interact with an infuriating "pretend" human, maybe CVS can fuse their phone system with ChatGPT to make the experience a little less maddening.

"Hey, I already gave you my birth date. No need to ask again. And I told you 30 seconds ago that I don't need to schedule a COVID vaccine. Just tell the pharmacist <xyz>."

* Be careful what you (I) wish for, I guess.


Yea this isn’t a huge reach.

At that point the line between commerce and giving the best advice is going to get REALLY messy.


Google could do this now. They already have your calendar and your contacts. Do you think that a conversational interface was what was holding them back? Thousands of engineers, but if only they could figure out how to merge Gmail and Eliza!


I'd imagine it'll be something they do. Hooking it into things you've told assistant, documents you've stored, emails, calendar events, map data, etc.

Pretty creepy and a tad dystopian, but that's Google for you.


Isn't this what the Google assistant basically does?


I don't think it's that clearcut. These AI queries are resource intensive. Google will have to walk a fine line between quality and profit margin. If it's no good, people just won't bother and it won't gain traction. The searches we make are already analyzed, and all models tend to use billions of parameters anyway, even self-generated ones. Google is also limited by its risk aversion, as seen in this extremely sanitized letter by Sundar, devoid of substance. Their existing AI tool is severely limited. If this new tool comes out looking like that prior Google tool, it's going to be very difficult for them to gain traction.


1. ChatGPT has made a decent go of it on a much smaller budget

2. Wall St tends to be very forgiving when the investment is something with as strong a moat as AI. Metaverse is an example of what happens when they don’t believe…

3. With a few exceptions like weather, their current searches provide links to static pages others have built, so much of their understanding and analysis is going to waste. Giving custom results breaks through that.


Do we actually know the budget behind ChatGPT? I know Microsoft has thrown money at them and are providing a huge amount of cloud resources but beyond that I've missed any mention of the funding/costs.


ChatGPT is notorious for being damn sure about absolutely wrong things. If something like that powers the search…


All the book recommendations I got were for non-existent books. Makes you wonder if it creates summaries of the titles themselves.


Didn’t Altman respond to Elon on Twitter and say an interaction cost 1 cent?


If Google handles 10 billion searches per day (guessing), what does that work out to?


100 million dollars
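Spelling that out (both inputs are the thread's guesses, not real figures):

```python
searches_per_day = 10_000_000_000  # the "10 billion searches/day" guess above
cost_per_query = 0.01              # the quoted "1 cent per interaction" figure
daily_cost = searches_per_day * cost_per_query
print(f"${daily_cost:,.0f} per day")  # → $100,000,000 per day
```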


> These AI queries are resource intensive.

But perhaps only for now. When youtube first started taking off, Google was bleeding cash on the resources needed to support it. Perhaps the same will be true for LLMs.


It's still unclear if YouTube is actually profitable.


It’s been clear since Google started breaking out YouTube in 2020 that YouTube is profitable.


do you have a source for YouTube profit numbers? all I've ever seen is revenue stats with zero details on cost.


Google’s inability to execute is why they will lose the AI war. The fact that the CEO has to write this article for Wall Street shows that they were caught off guard and have no real plans. If they had plans, why would Google management declare a code red over ChatGPT?

Take the difference between Microsoft and Google. Microsoft just released tools that will make your life much easier: intelligent recap of Teams meetings, the ability to assign tasks based on what was discussed, etc. Microsoft will announce Bing with ChatGPT integration tomorrow. MSFT is eating Goog’s lunch and Sundar needs to go.


Sundar needs to go? And who will replace him? Larry and Sergey? They figured out the early web, but the web of today is a whole other beast that is waiting to be understood.


I would have kicked out Sundar after Stadia. Or for failing to keep up with Azure and Microsoft in cloud computing market share because he was late to the game in building an enterprise sales team. Google needs to find a Satya Nadella to turn them around; I don't know if Larry or Sergey could do that, but anyone would be better than Sundar.


Sundar needed to be kicked out years ago.


Google is on a collision course with regulators over search. If they are seen to be taking yet more content and knowledge from websites and "republishing" it they are going to find themselves with large fines and sanctions.

I think they are too "big" to really push AI, it needs smaller companies willing to take a higher risk.


It's an interesting conundrum because facts aren't copyrightable, but in a world with intelligent language models that can ingest facts and spit them back out in whatever phrasing or format is most useful to the user, facts are also the only valuable part of such websites.

On the flip side, trying to make facts copyrightable seems like a terrible idea for all sorts of reasons. For example, if facts were copyrightable, that would make online discussion of factual stories illegal, since it's hard to discuss a fact without revealing what it is. Also, it's not always clear who should get "credit" for a fact, since facts are by their very nature true independent of who first reported them.

Maybe journalism will eventually become similar to academia, where journalists are funded by governments and large corporations who have an interest in learning, and the resulting discoveries are (ideally) made freely available for everyone to access?


In case of Google it’s not as much about facts as about antitrust.

They can break antitrust laws if they go too much in a direction of delivering outright replies instead of redirecting to websites.


Fair. Moreover, what are search or even content websites for if Google gobbles up all that knowledge and distils it out?

But with China, Russia and India lurking, perhaps Congress might drag its feet on reining AI in. Google Cloud is now poised to go after some pretty decent government contracts.


I know this wasn’t your point, but now I’m wondering if countries like India can afford GPU intensive search results.


After all the buzz of decentralization, AI is going to be the new era of centralization. One AI interface to rule them all.


That's a very interesting point. Are all the web3 folks looking the wrong way?


There are a number of decentralized AI projects in that space. However, given how effectively web3 has competed against big tech in search and social media, they will need to be an order of magnitude more successful than previous efforts.

Most projects are focused on privacy, user control and being censorship resistant. Which are all important, just not that important to the majority of everyday consumers who will take features and convenience over those other benefits. If decentralized AI is going to be competitive, it must actually be better along the lines of its actual features that enhance productivity.


Hey look, a decentralized chatbot running BLOOMZ-176B (an open source LLM about the size of GPT-3)

http://chat.petals.ml

I'm contributing to the project by running a node in my garage with a single RTX 3060ti in it, and you can too: https://github.com/bigscience-workshop/petals

It's early days, but the tech is super promising.


Thanks, I'm aware there are actually numerous decentralized AI projects, but until proven otherwise, their success is unlikely. Outside of cryptocurrency, there just hasn't been much success challenging the big tech institutions.

Decentralized search and social media never gained traction. It turns out that the majority of people just aren't that interested in privacy and freedom. Features and convenience still win. If decentralized AI is going to be successful, it will have to compete head to head for features and convenience. I do hope that it is successful.

Nonetheless, decentralization is just one major hurdle. I have numerous concerns about the application of the tech overall. Too much to put into this post, but if you are interested I've written much more here.

https://dakara.substack.com/p/ai-and-the-end-to-all-things


> Google Search is used so widely and its taxonomy of meaning so vast already that once they flip this on they’ll be unstoppable

Yeah, remember Google+? They tried very hard and resorted to very intrusive UX patterns, but they ultimately failed.

IMO this is similar, ChatGPT has a huge head start, and Google needs to do A LOT of work.


Yes, but will their revenue survive flipping that switch or will that kill the golden Adsense goose?


Fair question.

Depending on how slimy they’re willing to be I’d argue there’s even more scope to push ads when “advising” user through a grey area vs straight up keyword search.

Think going several steps ahead of the initial query, increased trust and reliance on responses, multiple follow-up questions with more ads, etc.

I’m just envisaging this based on my time with ChatGPT these last few weeks.


Doesn't seem like too much of a stretch from where they are right now, which is showing ads at the top of search results formatted to look just like search results. They do have a small sponsored tag, but nothing stops them from doing the same on ai results.


This is a huge revenue opportunity. Google would not be investing in it if it wasn't.


> that once they flip this on they’ll be unstoppable. (Assuming any rate of improvement whatsoever as we search.)

Unstoppable as measured by what? More ad revenue?

They're already the #1 search engine due to two decades of general goodwill and a decade of monopoly power in mobile.


> its taxonomy of meaning so vast already

Can this be integrated for training though?


I think the new NLP technology made all their search technology obsolete. This company is in trouble.


Anybody remember how we used to learn new words and build our vocabulary? Perhaps it went something like:

"What does this word mean"?

"Look it up in the dictionary"

"Why can't you just tell me??"

I remember the hassle of looking things up in a paper dictionary. Over the long term, it massively helped improve my ability to not just recall a simple definition, but also learn new words, as they were adjacent to what I was looking for.

It's effectively this hassle that ChatGPT and Bard are 'solving'. Instead of a group of links that you'd personally evaluate and mentally rank in terms of usefulness (thereby building your own bullshit detector over time), you will now have a chatbot AIsplaining things to you like you were a 12-year-old, regardless of the topic's complexity.

I'll admit I was concerned about Grammarly.com ruining people's ability to learn to write. That pales in comparison to an opaque search engine promising to give you the keys to all of human knowledge.


> "What does this word mean?"

> "Look it up in the dictionary"

> "Why can't you just tell me??"

FWIW, I'm not sure I've ever done that, and I've learned two languages (outside of a classroom environment, which I expect makes a difference). I usually just encounter words in books and infer their meaning from context. As I encountered a word more and more and tried to use it myself, I gained an understanding of what it meant.

It is an interesting bootstrapping problem.


> Instead of a group of links that you'd personally evaluate and mentally rank in terms of usefulness (thereby building your own bullshit detector over time)...

This sounds an awful lot like email's "Spam" folder, an AI driven bullshit detector that I am particularly fond of.


Did you hear how Google made a decision that political email wasn’t spam prior to last election? So that AI driven filter is, at least, partly influenced by political bias.

What about the AI that drives YouTube auto play so somehow kids watch horror clips and everyone else gets nudged towards Jordan Peterson for some reason?

Having more of our internet and communication filtered by ostensibly "neutral" AI will lead to very weird and probably not great places.


> And today, we’re [...] opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

It feels like a mistake to make the big announcement for this, but not open it up to a wide audience. It's not like ChatGPT hasn't been out for months now. Not sure how much they gain by making headlines before it's ready for people to play with it.


The announcement is probably targeting Wall Street more than any potential "user" (aka advertising target). The press was already bad - internal "code red" at Google due to panic that people saw ChatGPT as an alternative to search.

Meta/Facebook's head of AI, Yann LeCun, is in similar panic mode, issuing a non-stop torrent of tweets about how useless and unimpressive ChatGPT is... Presumably a reflection that Facebook does NOT have anything comparable ready to release anytime soon.


FWIW, Facebook's AI division has made some pretty insane contributions to the field relative to Google or Apple. I wouldn't disregard the stuff they're doing, even if I personally don't have a stake in Facebook or Meta's success.


Like what? Basically every research breakthrough from industry has come through Google.


Facebook was first to the punch on AI photo tagging and facial recognition in their systems, but they also did a lot of foundational research on GANs and self-supervised learning. Plus, they do a lot of other tangential research around language and life sciences that might not get funded otherwise.

Some of it is marketing bullshit, but Facebook seems to have a genuine interest in pushing the field forward. Admittedly Google is also not the best example though, seeing as they're the Tensorflow maintainers :p


Stuff like PyTorch? They're building large parts of the infrastructure everyone uses.


If things keep the current trend, Deepmind, Meta AI, and many other remarkable labs will be this generation's Xerox PARC.

OpenAI and Stability-AI will be Apple and Microsoft.


I don’t understand Yann LeCun‘s reaction considering the reputation of the company he works for - they should have already copied and integrated it somewhere.


This may put it into perspective: Yann pushed for and released Galactica.org, a website that generated articles for any prompt requesting information, on 15 November, before the release of ChatGPT. Internal pressure inside Meta caused it to shut down three days later, because it was thought to risk the company’s reputation more than Libra and the Metaverse.

The issue isn’t that Yann didn’t publish an AI app; it is that they did, and it was not as good.

The reality is that OpenAI was lucky. Inside the company, there is an alignment department whose effort went into helping models share human ethics, and which was initially a bit marginalized. However, one of their projects, RLHF, ended up producing a much superior language model, when they could have initially assumed it would be worse.


Was thinking the same. My hunch is that their PR department thinks this will "create a buzz" much like the old waiting list for Gmail.

But that feels like slow old thinking. The way you create a buzz these days (in the era of limited attention) is by releasing a kick-ass product for people to use and play with. I'll probably forget about Bard the second I hit submit on this comment and go right back to using ChatGPT.


All they have to do is add a link to the front page of Google and they will automatically get more users in a day than ChatGPT.


I would guess they don't have enough available computing resources for that.


Might be the reason why they are choosing to use a lightweight version of the model.


And also why they might want to use a waiting list. There'll be no shortage of users.


They failed to learn any lessons from the Google+ launch.


Google and Facebook had a deal that Google stops doing social and Facebook stops doing search.

I don't see the same thing happening between Google and Bing


Same way they shot themselves in the foot with g+. Maybe they know being top of web search and email is already a wide enough vertical and offering a dystopian chat AI so soon would be too disruptive for them and get too much attention.


Those trusted testers will likely be the RLHF part of their pipeline.


Question for AI experts out there:

ChatGPT is awesome, but it is obviously hamstrung by the fact that its training data ends in late 2021, so all of its knowledge is essentially "frozen" and it doesn't "know" of any topics that occurred after 2021.

Seems to me like the next "holy grail" in large language models is building a model that can be continually updated. If Google can achieve that, I think they could leapfrog OpenAI. The example in the post about getting new information about the Webb space telescope seems to imply Google has this advantage.

So my question is, how hard of a problem is that with LLMs? I get the sense that LLMs are trained on a very large data set all at once, but that it is difficult to incrementally update them with new data. Is that a true assessment? Even incrementally updating an inverse text index can be a scalability challenge, and so it seems like, given the way that LLMs are trained, that it would be even harder to do with LLMs without spending an absolute fortune on training.


> So my question is, how hard of a problem is that with LLMs? I get the sense that LLMs are trained on a very large data set all at once, but that it is difficult to incrementally update them with new data.

It's the opposite. Getting an LLM to learn the basics of grammar and paragraph-level language structure is the "hard" part. Once you have that, further fine-tuning, specialisation, or other incremental changes are comparatively easy.

Catching up to current events could be done in almost real time, it's "just" continuous training.

The only challenge is trying to do that and also have an LLM that's filtered to exclude profanity, racism, etc...

That filter is usually added on as a final supervised training step, and requires many man-hours to train the AI to be well behaved.

I suspect that it would be possible to automate the filtering by making another AI that can evaluate responses and score them based on profanity level.
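One way to sketch that automated filter: a second scorer rates each candidate completion and the system only returns completions under a threshold. Everything here is illustrative; the blocklist scorer is a crude stand-in for a trained classifier or reward model.

```python
# Sketch of automated response filtering: score each candidate completion,
# return the first one under a toxicity threshold, else refuse.
BLOCKLIST = {"damn", "heck"}   # toy word list, stand-in for a real classifier


def toxicity_score(text):
    """Fraction of words that hit the blocklist (a trivial scoring model)."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in BLOCKLIST for w in words) / max(len(words), 1)


def filtered_generate(candidates, threshold=0.1):
    """Return the first candidate whose score is under the threshold."""
    for text in candidates:
        if toxicity_score(text) < threshold:
            return text
    return "I can't help with that."   # fallback refusal


out = filtered_generate(["well damn that's wild", "that is surprising"])
```

A production version would replace `toxicity_score` with a learned model (e.g. the kind of preference model used in RLHF), but the reject-and-fall-back control flow is the same.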


Grammar is surprisingly easy to learn from unstructured data, to an extent. (source: I spent a lot of my PhD doing this kinda thing.)

Continual learning seems to be a tough problem though, from what I'm seeing of my friends working on this problem. Like I said in another comment, just doing gradient updates from new data is fraught with problems. RL has a bunch of techniques to mitigate issues that arise with that, but I think it's still an active area of research.


Well, yes, but the tokenization schemes cause these models to struggle to actually follow syntactic rules in poetry, e.g. syllable counts in haikus, or rhymes. I actually wrote an entire paper, which gwern cited, about how to make LLMs always follow these kinds of constraints with no mistakes.

https://paperswithcode.com/paper/most-language-models-can-be...

If you don't believe this is a problem, try getting ChatGPT to write a paragraph of correct English that omits any word containing the letter "e". Too bad you can't use my technique on ChatGPT, since they don't expose their output probability distribution...


Ya I guess I was comparing the difficulty of learning to "produce mostly grammatically correct sentences in most cases" to continual learning. From the 'inside' it feels like everything OP said is just the opposite.


Hmm. Do you also need a way to "forget" things?

Like - hmm. I could see taking a snapshot after the grammar/language stuff is in, and then every N weeks retraining on the current web, adding in something about recentness, but that doesn't seem like "continuous" training.

I'd imagine "continuous" training would be, well, going on continuously, all the time, but that would mean that, to include "recentness", something would have to change with the weights that were from that "old" stuff, which sounds an awful lot like the human process of "forgetting".


Well, according to Europe's right to be forgotten, these AIs have to forget some stuff about you if you ask them to :)


You have to delete the data sure, but what about the “memories” the model has via its weights?


I have been asking myself this very question... it's easy to erase a paper address book, not so much if your business has a non-human, like a parrot, who remembered it!

Note that this law is a problem for digital storage too: it's not easy to erase data (especially data stored in cold transistor storage) without physically destroying the storage medium. (I guess the law might get around this by having you "pinky promise" that you will not retrieve the "erased" data later... or else face much more dire legal consequences??)

I guess this will just need to be tested in courts?


>Once you have that, further fine-tuning, specialisation, or other incremental changes are comparatively easy.

The problem for Google and OpenAI is that most websites are going to start blocking them in robots.txt if they don't find some way to provide value back in exchange for being allowed to scrape and train on their content. Pretty much every other bot or search engine is blocked by default, and Cloudflare helps block them too.

If they don't find a way to balance this, they are going to kill their own golden goose at some point.
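For what it's worth, the opt-out mechanism already exists: Common Crawl, whose dumps feed many LLM training sets, honors robots.txt under the CCBot user agent. A sketch of such a robots.txt (directives only; whether future AI crawlers respect it is exactly the open question):

```
# Opt out of LLM training corpora while keeping ordinary search indexing.
User-agent: CCBot
Disallow: /

User-agent: Googlebot
Allow: /
```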


It's not like robots.txt is a great deterrent for crawlers. Only reasonable barrier would be a paywall hiding the content from the web.


It's not about "Well behaved." All we're teaching it is our biases.


Right now, "well behaved" means "crudely beaten into submission". I can only imagine what kind of horrible stuff future AI products will do if we keep twisting them into giving "friendly" output https://twitter.com/cirnosad/status/1622407343358214146


I can easily believe that it's a real ChatGPT convo, but the question is, how many times did they have to try it before getting that output? This is what I got on the first try:

https://i.imgur.com/5CNUm9l.png

I regenerated that response several times, and every single time it was something along these lines. I also tried it with the original prompt in that screenshot used verbatim with similar results.

Looking at other posts on the Twitter account in question, I have my doubts that the experiment was conducted in good faith, as opposed to retrying until they got the exact response that they wanted.


OpenAI also seems pretty on-the-ball about playing whack-a-mole with certain embarrassing responses. I've seen it first-hand where I can get it to reliably do something embarrassing after someone mentions it on twitter, but a day or two later it's "patched".


We're not even teaching it anything, all we're really doing is setting up a behaviorist training regime that lets it reproduce some of the biases of some of us, just well enough to squeeze through the Overton window that's acceptable to big corporates.


I think that counts as teaching.


Close enough, I'm just peevish about that word. Teaching is miles away from anything in ML or deep learning practice today.


I was speaking in English, and doing so concisely, to convey the point about bias transfer. I don't really care what you call it; I could have said RLHF, blacklisting sources, or curating the training dataset. I know bias when I see it, and an LLM trained on all the data out there does not come up with it on its own, because for one thing the world is large, with all kinds of opinions. When it refuses to treat all opinions equally as opinions, and starts arguing with the user about why some opinions are more valid than others (i.e. more widely accepted), even as it admits there is ample evidence to the contrary, that is learned bias.


@inimino I'm saying there is plenty of bias being enforced, whether via dataset curation or RLHF or some other way. ChatGPT has a very hard time deviating from a certain political view of the world, despite admitting to the existence of evidence that contradicts that view. That is not unbiased. The Web is unbiased: you can find every opinion out there and make up your mind based on the evidence. For some reason, ChatGPT filters the web through a narrow political lens. I have tons and tons of recorded sessions, but I don't want to turn this thread into a political debate. Just saying...


No, I agree. The Web is unbiased (as a reflection of the views of average web users, though that is itself a bias) or at least it mostly is (certainly was in the early days) and reflecting that evenly would be politically untenable. However we use "bias" normally in a much looser way, and if you want to define bias relative to some agreed-upon standard of truth, now you have a definitional (and political) problem. Most people have strong political opinions and such conversations tend to go off the rails easily.


I define bias as not treating all political and philosophical opinions equally. If one is picked over the others, that is bias. I saw this bias and made many records of it in case of ChatGPT.


By that definition it's not possible to be unbiased, because moral relativity is itself a position.


I think the point is when taking information from some source (like the web) to just represent that source fairly, which is a reasonable expectation of a search engine, for example.


"Fair" isn't well-defined. Is it "fair" if Amazon results are seen as "more trustworthy" than a random new startup web store? Even ignoring SEO manipulation of any rules publicly believed to exist, that's the default outcome for things like PageRank.

Going beyond sources to conclusions, given LLMs aren't search engines and do synthesise results:

Politically, low-tax advocates see it as "fair" for people to take home as much as possible of what they earn, high-tax advocates see it as "fair" for broad shoulders to carry the most and also for them to contribute the most back to the societies that enabled them to succeed.

Is the current status of Americans whose ancestors were literally slaves made "fair" by the fact that slavery has ended and all humans are equal in law? Or are there still systematic injustices, created in that era, whose echos today still make things unfair?

Who has the most to blame for climate change, the nations with the largest integrated historical emissions even where most of the people who did the emitting have died of old age, or the largest emitters today?

And so on.


Well, I think you're going beyond the parameters of the discussion... LLMs synthesize datasets and that is all they do. They are not reasoning agents and they don't have opinions about anything. All we can say is that they reflect the biases inherent in the dataset, and to say anything else would be dishonest at best. It's only because most people have no idea how these things work that we get all this magical thinking.


That's meta bias, not bias per se.


Yes, it's a sample of all opinions in the training set. It has no opinion of its own, even no place of its own to stand from which to have an opinion. There can be no bias without reference to some ground truth, and there's no general agreement on that in most of the areas where people are talking about these topics. It's a messy area, not helped by how few people understand how these systems work.


Ehh. From what I've seen in a lot of places - MIRI, AI + prisoner's dilemma experiments, moral philosophy, life - there do seem to be the categories of "clear good behavior" and "clear bad behavior" even if there are also really big categories of "unclear good/bad behavior".

In other words, while sure, some "well behaved" is "passing on our biases", there does (IMO) seem to be a big chunk that's "universally well behaved".


Nothing in morality is universal.


Sure, but the iterated prisoners dilemma has a nice result that looks like morality if you squint and don't ask difficult questions about if human "altruism" might just be a self-delusion.
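That result is easy to reproduce in a toy simulation. Standard Axelrod payoffs; the population below is an arbitrary, reciprocator-heavy choice, which is exactly the condition under which reciprocity outscores defection (against pure cooperators alone, defection would exploit them instead):

```python
# Toy iterated prisoner's dilemma with the standard Axelrod payoff matrix.
# Tit-for-tat (cooperate first, then mirror the opponent's last move)
# outscores unconditional defection when reciprocators are common.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}


def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else "C"


def always_defect(opp_history):
    return "D"


def always_cooperate(opp_history):
    return "C"


def play(a, b, rounds=10):
    """Play `rounds` rounds; each strategy sees the opponent's move history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(ma, mb)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(ma)
        hist_b.append(mb)
    return score_a, score_b


def total(strategy, opponents):
    return sum(play(strategy, opp)[0] for opp in opponents)


# an illustrative reciprocator-heavy population
field = [tit_for_tat, tit_for_tat, tit_for_tat, always_cooperate]
tft_total = total(tit_for_tat, field)
defect_total = total(always_defect, field)
```

Against this field, tit-for-tat totals 120 points to unconditional defection's 92; flip the population to mostly unconditional cooperators and defection wins, which is the "don't ask difficult questions" part.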


The internet does not accurately reflect our biases. It is much cheaper online to post bad content, or hateful content, than "good" content. In real life, almost the opposite is true.


Doesn't that mean that the internet more accurately represents our biases because most of the time they're hidden by fear of social retribution?


No, it means on the internet one bad actor can make thousands of alt-accounts to send out a disproportionate amount of content pushing the same message.

It's hard to get people to understand the disproportionate effort fixated people put into anything: it's a problem in real life too, but there they get pushback. If their fixation becomes some weird message on the internet, nothing happens to them, and people have trouble believing the scope of time and effort they'll put into evading bans, blocks, and chasing people down across forums.


They’re the same thing. Good behaviour just means following accepted social rules.


> I suspect that it would be possible to automate the filtering by making another AI that can evaluate responses and score them bases on profanity level.

That sounds like InstructGPT


Like the earlier responses said, if it is retrieving information from an external data source, that would be a much easier way to update things. REALM (https://arxiv.org/abs/2002.08909) comes to mind as an example of active work in this area.

Updating a monolithic LLM seems like a harder problem at the moment, and probably would fall under the umbrella of "continual learning", though that sub-field would have their own subdivisions of different methods.

As for how they would potentially perform, those results would be mostly empirically measured, but neither approach would completely remove the possibility of (1) missing results -- which is probably acceptable given Google doesn't always return everything relevant, and (2) hallucinating non-factual responses -- which can be more dangerous, if not from directly instigating / causing harm, then from the incessant worry that it might.


Retrieval-based models have a number of strengths that make them particularly important from my point of view:

- as said they can be updated with fresh information on the fly,

- they can give you sources for their results (meaning that you can fact-check the output!),

- they achieve similar performances for significantly smaller models (as long as you have a large dataset to retrieve from) meaning that they could run locally on consumer hardware (coupled with a large dataset on disk or the ability to read information from internet).

If I were to spend a significant amount of time building my own language model, it would be retrieval-based and try to preserve those properties (in particular the ability to run on consumer hardware, enabling developers to run models locally has proved to be a game changer when stable diffusion came out).
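The retrieval step itself is conceptually simple; a minimal sketch, where bag-of-words cosine similarity stands in for the dense embeddings real retrieval-based systems use:

```python
# Minimal sketch of retrieval-augmented prompting: score documents against
# the query, then prepend the best matches to the prompt so a frozen model
# can answer (and cite) from fresh data. Documents here are invented.
from collections import Counter
import math


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, docs, k=1):
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]


def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "The Webb telescope captured new images of exoplanet atmospheres.",
    "PageRank was originally recomputed in monthly batches.",
]
prompt = build_prompt("what did the webb telescope capture", docs)
```

Because the answer is conditioned on the retrieved passage, the source can be shown alongside the output, which is the fact-checking property mentioned above, and the language model itself can stay small.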


If anyone is tempted to do this, here is a good blog post on how to build your own retrieval-based LLM powered conversational agent.

https://dagster.io/blog/chatgpt-langchain


> Like the earlier responses said, if it is retrieving information from an external data source

I was under the impression it went through like a multi-month long ultra expensive training process involving many GPUs on terabytes of data "snapshotted" point in time.

Is it possible to do this at a smaller scale once a day at the end of the day with "all new content scraped from the Internet nightly"?


> Is it possible to do this at a smaller scale once a day at the end of the day with "all new content scraped from the Internet nightly"?

If the model has the ability to browse the web (the tech behind ChatGPT by design does, but it is disabled in ChatGPT proper, which is, in some respects, a conservative public demo) and incorporate data in responses, this can “cover” a bit for staleness in the base model.

Even if you can't retrain the model fully daily, with enough resources [0] you can have multiple training sessions running concurrently and just swap in new backend models behind the interface as they are ready, which, combined with browsing ability, achieves something very similar. This obviously, barring an enormous advantage in underlying tech or access to relevant training data for one player, works most in favor of whoever can subsidize the biggest hardware commitment.

(Given the different training stages, there may be cost efficiency advantages to, say, running less-frequent iterations of the lowest-level training stage but more frequent iterations of the higher-level ones.)

[0] probably an utterly ludicrous investment for all but a handful of firms, but for a ~$trillion firm where this hits very close to their core business? Not so ludicrous.


I imagine what you're thinking of is a kind of small update to an existing model, done by backpropagation training on 'new' data.

There are existing problems with that, one keyword to lookup is 'catastrophic forgetting', where you update on recent stuff so much that you overfit on that and forget all the prior stuff you learned before. There are methods to mitigate that, but I would call that an 'active area of research'.
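One of the standard mitigations is "rehearsal": mix a sample of the old training data into every batch of new data so the gradient updates don't drift entirely toward the recent distribution. A minimal sketch of just the batch mixing (the helper name and ratios are made up):

```python
# Rehearsal sketch: each fine-tuning batch is mostly new examples plus a
# random sample of old examples, to mitigate catastrophic forgetting.
import random


def mixed_batches(new_data, old_data, batch_size=8, old_fraction=0.25,
                  seed=0):
    """Yield batches mixing new examples with a rehearsal sample of old ones."""
    rng = random.Random(seed)
    n_old = int(batch_size * old_fraction)
    n_new = batch_size - n_old
    for i in range(0, len(new_data), n_new):
        batch = new_data[i:i + n_new] + rng.sample(old_data, n_old)
        rng.shuffle(batch)
        yield batch


old = [f"old-{i}" for i in range(100)]
new = [f"new-{i}" for i in range(12)]
batches = list(mixed_batches(new, old))
```

Fancier schemes weight the rehearsal sample by importance (or penalize moving weights that matter for old tasks, as in elastic weight consolidation), but the replay idea is the common thread with the RL techniques mentioned above.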


I think the idea might be to effectively provide it with the result of a google search (or other datasource) as part of the prompt.


Not effectively, because you probably can't return to language pre-training after multi-task fine-tuning and RLHF. Stage 1 has to go before stages 2 and 3. So they would need to fine-tune a stage 1 model and re-apply stages 2 and 3.


The paper about PaLM mentioned that they used 6144 TPU chips to train it over several weeks. That's a metric shit ton of processing power (pardon my language). If Google could dedicate 2 racks of TPUs to just continuously training PaLM (note: I'm not sure if LaMDA is PaLM, but let's assume they're similar), they could do this retraining.

Catch is: consider these statements:

Donald Trump is the President.

Joe Biden is the President.

Barack Obama is the President.

George W Bush is the President.

George H. W. Bush is the President.

Bill Clinton is the President.

All of them were true at some point of time. How do you train a model to disambiguate these?


LaMDA is about 20% of the size of PaLM, IIRC.


Wouldn’t retrieving information from an external source effectively just be recreating a search engine with a fancy natural language interface?


Deepmind has been doing some interesting work around Retrieval Enhanced Transformers (RETRO) models [1] that might be relevant in this context.

[1] https://www.deepmind.com/blog/improving-language-models-by-r...


> We call our method RETRO, for “Retrieval Enhanced TRansfOrmers”.

Wow. Talk about going through contortions to get an acronym.


GRIFFIN takes the cake for me: "General compliant aerial Robotic manipulation system Integrating Fixed and Flapping wings to INcrease range and safety"

https://techcrunch.com/2022/12/20/this-autonomous-ornithopte...

Cool project though.


I believe Google has dealt with a similar problem early in its history. In the early 2000's Google's PageRank values had to be calculated all at once (approximately monthly). Later they were able to move to a continuous/incremental/rolling update.


Exactly, so Page and Brin might think that they can pull the same trick again.


Page and Brin's original crawler had the issue, but they were unable to fix it, and it was Jeff Dean and Sanjay Gupta that rewrote the crawler so that it and the index storage would be hardware fault tolerant.


The crawler doesn't interact directly with pagerank. PageRank is computed later and is attached as a per-document value.

I asked Jeff about what it was like in the early days and he told me: when they first joined, the entire crawl to index to serving stack was documented in a README that you would follow, typing commands and waiting for each step to complete. A failure in a step meant completely starting over (for that step) or even earlier, depending on how and where temp data was materialized.

He said he and Sanjay (Ghemawat, not Gupta) then wrote mapreduce as a general purpose tool for solving multiple steps in crawl to servable index. Not only is mapreduce good at restarting (if the map output and the shuffle output are persistent), the design of mapreduce naturally lends itself to building an indexing system.

If you go back to the old papers you'll see several technologies mentioned over and over: protocol buffers, recordio, and sstable. The first is an archive format to store large amounts of documents in a small number of sharded files, the second is a key-sorted version of the same data (or some transformed version of the data). So, building an inverted index is trivial: your mapper is passed documents and emits key/value pairs (token, document) if a token is in the document. The shuffler automatically handles grouping all the keys, and sorting the values, which produces a fairly well-organized associative table (in the format of sstables).
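The inverted-index construction just described can be sketched in miniature, with toy in-memory stand-ins for the mapper, the shuffle phase, and the sorted sstable output:

```python
# Toy MapReduce-style inverted index: the mapper emits (token, doc_id)
# pairs, and the shuffle/reduce phase groups them into a key-sorted
# token -> [doc_ids] table (conceptually, an sstable).
from collections import defaultdict


def map_phase(docs):
    for doc_id, text in docs.items():
        for token in set(text.lower().split()):
            yield token, doc_id


def shuffle_and_reduce(pairs):
    index = defaultdict(list)
    for token, doc_id in pairs:
        index[token].append(doc_id)
    # key-sorted output, like an sstable
    return {tok: sorted(ids) for tok, ids in sorted(index.items())}


docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "quick dog"}
index = shuffle_and_reduce(map_phase(docs))
```

The restartability point above falls out of this structure: each phase's output can be materialized (recordio for the map output, sstable for the reduce output), so a failed step resumes from persisted data instead of recrawling.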

BigTable came about because managing lots of sstables mutably became challenging. MapReduce was replaced with Flume, which was far more general and easier to work with, and BigTable was replaced with Spanner (ditto), and GFS replaced with Colossus, but many of the underlying aspects of how things are done at Google in prod are based on what Jeff, Sanjay, and a few others did a long time ago.

Note that mapreduce isn't particularly innovative except the scaling aspects were fairly esoteric at the time.


Broadly, yes. Current AIs (LLMs etc.) have succeeded through brute-force prediction at the lowest level. Of course there are many refinements here, but this basic situation has meant that adaptive learning, or updating, doesn't really work for this particular approach (as others mention).

The thing about this is if competition forces Google and Microsoft to offer a full ChatGPT style interface, they are going to be using a vast amount of cpu cycles and thus energy. This is going to be costly and potentially environmentally destructive. And only companies with vast server farms will be in this game.


> they are going to be using a vast amount of cpu cycles and thus energy. This is going to be costly and potentially environmentally destructive

How much CPU do you think Google currently utilizes? OpenAI is a relatively small company compared to Google and they can keep their services (barely) afloat. AI at scale is indeed server intensive, but probably nothing like what we already have.

Also, new chips designs might significantly decrease power requirements in the future, so I wouldn't worry about the environmental issues.


> How much CPU do you think Google currently utilize?

Lots, and if they serve a ChatGPT interface, it will be N times as much as now, since each query will go to an LLM and involve a back-and-forth. What is N? 3, 10, 100?

> OpenAI is a relatively small company compared to Google and they can keep their services (barely) afloat.

OpenAI has negligible income. They're effectively financed by Microsoft. I don't know how much serving ChatGPT to the world costs, but it's fairly expensive per transaction. It can't not be, since it's calling a model with billions of parameters.


Disclaimer: I am not an AI expert.

I think it depends on how reliable you want this AI to be. Opening search indexing to realtime data is what led to the endless mess we call SEO, and I fear that doing the same for AI invites the same problems.


Bingo - the reason ChatGPT looks so impressive now is the same reason Google was in 2000 - it was a breath of fresh air with a simple interface that was unencumbered by the sales and marketing and advertising barrage that targeted users of its entrenched competition.


The biggest challenge is going to be filtering out the data that was created by ChatGPT itself or the competing LLMs, which are going to flood the market (and create a flood of web content) in the next few years.

If they don't do that, they will create an AI that is essentially inbred.


[flagged]


A few centuries in human years means something like 7 years in the development of AI.


There is already LLM search engines out there

https://perplexity.ai


Oh I quite like this.

I asked it, conversationally as I would to a human, "when is the super bowl", "what channel is it on", and "who is playing" in that order, and it answered all three perfectly.

For fun, I asked it a more complex question - "will ford stock go up or down", and it answered with -

Analysts have a median target of $13.00 for Ford stock in the next 12 months[1], with a high estimate of $21.00 and a low estimate of $10.00[1]. Over the past 50 years, Ford Motor has on average risen by 15.7% over the course of one year[2]. Therefore, it is likely that Ford stock will go up in the next year[3][4].

I'm going to use the heck out of this thing.


> how many legs do three horses and a duck have?

> Perplexity: The three horses and duck have a total of 24 legs[1] [2]. However, the question only asks for the number of legs on the floor, which is 4[3][4]


I get "The number of legs of three horses and a duck is 28. Horses have four legs each, so three horses would have 12 legs. Ducks have two legs, so the duck would have two legs. Therefore, the total number of legs is 28."


The only problem is you can't actually add it to a browser.


I like the way that site presents results to complicated questions, footnoted to supporting pages. I can absolutely see that being the future of search engine output.


https://weaviate.io/ also allows for generative search.


Thanks for mentioning perplexity.ai - I had not seen it before. I asked it a rather detailed question about myself and it provided good responses with references. It handled a follow up question very well also. Well done!


Just tried perplexity.ai, and it's very impressive. It uses GPT-3.5 and Bing (so it says!), very well integrated. Will have to play with it some more, but this seems like the perfect integration of search and chat!


Google is an LLM search engine.


Google already has its Knowledge Graph, which is a pretty huge set of (semi-)structured data about a bunch of things out there:

(totally speculative) If Bard were to form queries against knowledge graph, and then summarize the results, it could be very up to date for a good amount of information


Knowledge Graph is more of an expert system in terms of how it is updated and its data sources. There isn't much magic there, so I doubt Bard could get much from it if it's capable of being trained on unstructured web content.

The more likely dependency would be bard ingesting unstructured data and generating structured data to update knowledge graph with.


They should be able to use it to check what Bard has generated for contradictions against facts in the KG.

For example, I asked ChatGPT about birds that can't fly and it started bullshitting about all sorts of birds that clearly can; those facts would likely be in the KG.


I dunno - Google Assistant relies heavily on the KG and it's pretty garbage - but then that could be more to do with Google Assistant than the quality of the KG.


> So my question is, how hard of a problem is that with LLMs? I get the sense that LLMs are trained on a very large data set all at once, but that it is difficult to incrementally update them with new data. Is that a true assessment?

That's not quite correct. It's trivial to update the model with new information. What's not trivial is encoding priorities or time into the model. Fundamentally, the model doesn't understand the concept of time, so you can't easily condition it on "give me only information from last year". You can get there and nudge it in the right direction by generating training datasets for such queries and fine-tuning the model on them, by oversampling more recent data in your training procedure, or by adjusting your gradients for more recent data, but all of these are still "fuzzy": the model may or may not do exactly what you want at prediction time and could still give you old results.
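One of the fuzzy techniques mentioned above, oversampling more recent data during training, can be sketched as a recency-weighted sampler. This is a toy illustration, not any particular framework's API, and the half-life value is an arbitrary assumption:

```python
import random

def recency_weights(ages_in_days, half_life_days=365.0):
    # Exponential decay: an example one half-life old gets half the weight.
    return [0.5 ** (age / half_life_days) for age in ages_in_days]

def sample_batch(examples, ages_in_days, batch_size, seed=0):
    # Draw a training batch with probability proportional to recency weight,
    # so newer documents are seen more often during fine-tuning.
    rng = random.Random(seed)
    return rng.choices(examples, weights=recency_weights(ages_in_days),
                       k=batch_size)

examples = ["doc_2020", "doc_2021", "doc_2022", "doc_2023"]
ages = [1200, 800, 400, 30]  # days since each document was written
batch = sample_batch(examples, ages, batch_size=8)
```

As the comment says, this only nudges the model; nothing stops it from still preferring old facts at prediction time.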

However, I would assume that Google is doing something more sophisticated and that it's not just a plain LLM like GPT-3. It's probably a more complex architecture with external systems and data sources around it that tries to solve the above using heuristics.

That being said, the current Google search is extremely bad at time relevance too...


https://phind.com (YC S22) uses LLMs grounded in internet contexts to answer questions.

There's an interesting tradeoff between grounding LLM answers in web contexts and answering questions from parametric memory like ChatGPT. The former is more accurate and verifiable, but the latter can be more creative and specific to you.

We will have both of those answers soon. (Disclaimer: I am the co-founder).


I had not heard of phind.com either. I asked the same detailed question about myself to both perplexity.ai and phind.com, and I really liked both answers, with references. Both are likely more useful to me, right now, than Google search. That said, the technique of accepting a question, getting content text via web searches, and then using LLMs (or SQuAD-style QA models, etc.) to answer questions is something Google and Bing search will soon do also, with many of their own improvements.

As a slightly disinterested third party, it will be fun to watch the competition.


Hi. I remember reading a question answering paper from Google in 2021 that concluded there was little evidence that models were actually grounding the generated answers in the contexts. Do you know if this problem has been solved?


Ah yes, it's this paper: https://arxiv.org/abs/2103.06332.

With recent advances in alignment, this problem has gotten much better. Furthermore, the LLM used in that paper is only ~400 million parameters, which is orders of magnitude smaller than models being used for this task today.

Scale + alignment has been the solution so far.


Thanks for the reply! It’s interesting (counterintuitive?) that the problem of models drawing too much from their stored knowledge is mitigated by using models with much greater stored knowledge.


You do not need to constantly update the model. You just need to feed it fresh data as input.

Example to try that in action:

https://labs.kagi.com/ai/context
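The pattern behind a tool like this is retrieval augmentation: search for fresh documents, then paste them into the prompt so the frozen model can answer from them. A rough sketch of the wiring, with stand-in functions rather than any real search or LLM API:

```python
def answer_with_fresh_context(question, search, llm, max_docs=3):
    # 1. Retrieve current documents from an external search backend.
    docs = search(question)[:max_docs]
    # 2. Ground the model in the retrieved text via the prompt.
    context = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    prompt = (f"Answer using only the sources below, citing them as [n].\n\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    # 3. No retraining needed: freshness lives entirely in the prompt.
    return llm(prompt)

# Toy stand-ins to show the flow end to end:
fake_search = lambda q: ["The game airs on Fox.", "Kickoff is Feb 12."]
fake_llm = lambda prompt: "It airs on Fox [1], kicking off Feb 12 [2]."
answer = answer_with_fresh_context("When is the game?", fake_search, fake_llm)
```

The model's weights stay frozen; only the retrieval step needs to be current.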


I asked it: What happened today?

It gave me news from April 9, 2022. What should I give it?

Edit: Late comment because dang put me on the rate limit list and didn’t tell me.


It can't condense your daily news yet, but you can ask it something like this:

https://labs.kagi.com/ai/context?question=what+is+the+latest...?


> I get the sense that LLMs are trained on a very large data set all at once, but that it is difficult to incrementally update them with new data.

In 2020, OpenAI wrote that training the 175B-parameter GPT-3 took 3e23 floating-point operations [1]. A $1600 NVIDIA GeForce RTX 4090 [2] can do 80 tflops (80 × 1e12 operations per second). So 1 GPU would need about 120 years to train the model.

With $2.4 million, you could buy 1500 GPUs, and train GPT-3 from scratch in 1 month.

Partial continuous re-training should be way cheaper, if there is a way. But even training again from scratch every month seems feasible for a big company.
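For what it's worth, the back-of-envelope numbers above check out (pure arithmetic, ignoring real-world details like GPU utilization, memory limits, and interconnect):

```python
total_flops = 3e23           # GPT-3 training compute, per the cited paper
gpu_flops = 80e12            # RTX 4090, roughly 80 TFLOPS
gpu_price_usd = 1600
seconds_per_year = 3600 * 24 * 365

one_gpu_years = total_flops / gpu_flops / seconds_per_year
gpu_count = 2_400_000 // gpu_price_usd
months_at_scale = one_gpu_years / gpu_count * 12

print(round(one_gpu_years))       # ~119 years on a single GPU
print(gpu_count)                  # 1500 GPUs for $2.4M
print(round(months_at_scale, 1))  # ~1 month across the fleet
```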

Disclaimer: Not an AI expert.

[1] https://arxiv.org/abs/2005.14165, table D.1.

[2] https://www.techpowerup.com/gpu-specs/geforce-rtx-4090.c3889


ChatGPT 4 seems to have newer data/practices already.

At least the Bing ChatGPT integration can get data from websites and cite its sources, probably using REALM. Not sure if it's ChatGPT-4 using REALM itself or MSFT integrating the two.

https://medium.com/@owenyin/scoop-oh-the-things-youll-do-wit...

Similar to how http://perplexity.ai/ works to get info on current data.


ChatGPT 4? I didn’t even think they’d announced v2 yet. Source?


I also thought that was silly, as we are still very early into 3.5. https://www.windowscentral.com/software-apps/microsoft-bing-...

But the leaked Bing Chat feature does indeed show things that ChatGPT, for now, doesn't have.


They haven’t. Just people making guesses.


> ChatGPT is awesome, but it is obviously hamstrung by the fact that it crawled the web at the end of 2021, so all of its data is essentially "frozen" and it doesn't "know" of any topics that occurred after 2021

I think this could be a feature. Things on the internet have a life cycle. After a while they can be altered to fit various agendas or just drowned out by SEO nonsense. Seeing an immutable snapshot in time might be a good defense against that.


For a chatbot, it's a feature.

For the large proportion of search queries that are "give me context for $newsitem", "what is $latestthing", "what is $celebrity up to", or "can you solve my problem with the latest version of x", it's a show-stopping bug. (Frankly, Google weighting established content higher in most contexts is already an issue for the last of those queries: Googling error messages, etc.) And at least the SEO'd "Best Items In $CurrentYear" articles that aren't actually up to date attempt to look current.


I'm not sure if, from the perspective of consumers, it's a feature or a bug. Sort of the only thing keeping ChatGPT or its equivalents from being a massive blogspam creator, putting the mostly young writers who crank out content for pennies out on the street, is that it can only create relatively evergreen content.


I think [Chat]GPT's output is very well represented in blogspam pages I've seen recently, so consumers get the worst of both worlds: just because it's too out of date to properly answer your question directly doesn't mean it can't help other people generate filler text that outranks pages with the answer in web search...


I would say continuously updating an LLM isn't a very good idea; the compliance issues would be huge. Also, feeding in data without a quality check seems like a sure recipe for disaster.

For the foreseeable future, LLMs will be used for a unique set of queries that are more operational, like a consult, not for stuff that is informational, like what news was published today.

If your query relies on the model being up to date with today's news, you can just put that into the prompt.


The obvious fix (which people are already working on) is to allow the system to look up information. E.g.: give it access to wikipedia.


Not an AI expert, not even an amateur, but what you said seems to have the pattern of a feedback or strange loop. What if your input is in good part your own (or similar-enough AIs') output? A lot of the buzz on the internet this year was around ChatGPT and what it generated, after all. And you can't just remove what you generated from your inputs, because the rest of the internet is talking about or around it anyway.


I am not sure, but this did trigger a vision of a dystopian sci-fi world where governments attempt to spike each other's AIs with false training data to control narratives.

Google can leapfrog OpenAI because they are the entry point for 99% of users' attempts to answer questions online. Similar to how websites for x (weather, lyrics, etc.) existed for years, but the second Google baked them into search, those sites dried up.


> Similar to how websites for x (weather, lyrics, etc.) existed for years but the second google baked them into search those sites dried up.

But weather, lyrics, etc. websites are specialized at what they do, and Google is just doing the basics of the aforementioned services in the hope they will appeal to casual web users.


That's where Google's angle is. Not only did they spearhead all of these, and sit on it for years now, they also have an up-to-date copy of internet.zip vs. OpenAI's and others' limited Common Crawl and other data sets.


You don't need to retrain the model to access fresh data. You can just tune the model to generate queries to various sources (search engines, etc.) to retrieve data.


> You can just tune the model to generate queries to various sources (search engine, etc) to retrieve data.

That sounds like a task runner rather than an LLM


Perhaps. I'm not sure that "LLM" is the best description of ChatGPT (see my recent comments), but certainly its strength is language and its weakness is sticking to factual answers (as expected in a search engine context, if not so much in an info-tainment chatbot).

Using ChatGPT as a front end for search, maybe also as a summarizing/presentation interface, and able to maintain context for a conversational interface, would all seem to be playing to its strengths.

There was apparently a brief appearance of a ChatGPT-enhanced Bing (search engine) yesterday, which was noted to be able to cite sources; that certainly suggests it's being used more as a front end than as the source of content itself.


Right. The model is already trained to interpret the results. Think of it like automatic prompt creation based on the results of a search.


It's also hamstrung by the fact that it just makes things up?

I actually think Microsoft is betting on the wrong dog. But let's see :)


> I actually think Microsoft is betting on the wrong dog. But let's see :)

In your opinion, which dog should they be betting on?


Improving actual "search" and not just returning ads, SEO, blog spam, etc.?

Maybe they have answers for ChatGPT's incorrectness, but I'm not really sure I'll be reaching for a product that imagines the answers as it sees fit.


Work is being done to create, essentially, DevOps pipelines for faster model iterations. This requires a lot of infrastructure, from the DevOps side to the software itself.

Further, depending on the nature of the data used (e.g. PII), such changes become more complex.


It seems like Google is using search or some knowledge base to throw relevant information into the context when generating, to get up-to-date results. It relies on knowing what to retrieve, but Google is already pretty good at that.


Why do you think it can't update its dataset? The only reason I think they're using a 2021 dataset is that they tested it thoroughly. In the next few weeks they will be releasing more up-to-date datasets.


Reminds me of early Google and the 'Google dance', where they'd update their search results once a month or so with recomputed PageRank. Nowadays there's no particular update moment.


This part is not that hard, really, but as with all things AI, reliability is one major long-term challenge.

(source: I helped build Sparrow, DeepMind's RLHF model, that does this by learning to use google under the hood)


One more question: apparently LLMs model human language as well as the knowledge encoded in it? Aren't those separate concerns? Or maybe they can't be separated, and that's the reason?


I understand it as modeling enough language that the "knowledge" becomes likely in the language model, but it's still just language, which is why it often fibs.


Thinking of this and the analogy of the human brain: do models “forget” lesser utilized, less significant stuff like humans do?


edit: removing comment. I don’t think what I wrote was confidential, but I don’t want to find out the hard way.


This is a milestone. Google has been keeping the fruits of its AI research locked away from the unwashed masses. They did not seem to be on any path to commercialization, Google Cloud access, or releasing features based on these models. OpenAI (and Microsoft most likely) forced their hand and all at once they are releasing tech-previews and talking about adding it to the search engine results page, all while opening up API access to several of their trained models.

Microsoft is rumored to be adding GPT powered features to Bing very soon and might beat Google to market.

Very interesting times!


Well, not really. It's still limited to trusted testers only, so even though they've had the tech for years, they still can't bring themselves to actually make it available.


At this point, Microsoft should do whatever it can to make a deal with Apple to place a ChatGPT-powered platform on the iPhone/iDevices.

Most of the people using "computers" nowadays do it through mobile phones, and Google has a strong grip on that market with Android, with only Apple giving them some competition. Thus the only platform where Microsoft could really compete is iOS.

If Microsoft + Apple could get together on this one, they could make a really killer app: a next-level Siri or similar.

Otherwise, what would Microsoft do? Add ChatGPT to Windows? That means PCs... that means an Office/work-related agent, which is boring and kind of reminds me of the Mac vs. PC video ads of the 2000s.


Isn't ChatGPT still, like, super expensive to use and run? At least compared to Google queries, but maybe in general too? While Google still pays Apple a ton of money to remain the default search engine... which is close to zero work on Apple's side.

Microsoft would then need to pay for ChatGPT computation, model retraining, maybe testing of deployments/upgrades, maybe sponsor Apple's initial development efforts, plus pay for the expensive deal to be Apple's default choice.

Am I understanding the problem correctly? It looks like a lot of work and very expensive if I'm getting it right.


My question is: what kind of deal are you thinking? Would Apple pay Microsoft a lot of money for this ChatGPT-powered platform? Would Microsoft be paying Apple?

With search, Google pays Apple with the payoff being that people end up seeing Google's ads. With ChatGPT, Microsoft wouldn't be getting money so Apple would presumably need to pay Microsoft.

You've phrased it like Microsoft needs to get it on people's phones rather than Apple needing a competitor to Google's NLP (Microsoft should do whatever it can, not Apple should do whatever it can which implies that Microsoft should pay Apple). However, we haven't really seen what ChatGPT is in terms of a product yet (product, not feature).

Let's say Microsoft pays Apple to get ChatGPT onto iPhones. What does Microsoft get out of that? Bragging rights? Ads in Siri? Or maybe as part of an overall deal of Bing + ChatGPT?

I agree that things like ChatGPT are cool, but I think it's unclear (at least to me) a company like Microsoft will make money off it without charging for it (or making it terrible like allowing companies to pay for placement in ChatGPT responses like "what's the best vacuum?" gets you "The best vacuum by far is ProductPlacement. It has much better suction than CompetitorProduct"). Amazon has tried using Alexa to upsell things, but it can make Alexa really annoying at times.

They could come up with a really cool next-level Siri, but I guess I'm not sure what's in it for Microsoft in such a deal. I see what's in it for Apple and users.


Why would Apple accept such a proposal? The Google/Apple duopoly has been very convenient for both companies and has removed lots of business uncertainty for them. This is not just mere trust; their established business structures strongly incentivize them not to trespass on each other's territory. Inviting MS to the party would cause a lot of disruption in this space, and Tim's personality doesn't seem to like such things.


If Apple wanted to play a part in this, they would create their own LLM, which would probably be as good as if not better than ChatGPT. ChatGPT is nowhere near as special as people think it is. It is just the first of its kind to go public, because OpenAI and Microsoft wanted the publicity; hence the swift response by Google and probably soon Meta.


Does it really matter if it is boring but generates $6B every year? Ultimately it's not about what's cool, but about what drives user adoption and revenue.

Otherwise it will be just like Magic Leap with a pretty cool product nobody ever used.


What's less likely? MS+Apple alliance or Google shipping a solid product?


I'm pretty sure Tim Cook would have already allocated vast billions to Apple's own AI platform.

AI is now the biggest game in town - Apple is not going to leave it to Microsoft and Google.


AI has been the biggest game for a while.


The new AI powered Teams features Microsoft announced seem pretty useful!


Microsoft has a much larger platform than iOS which also happens to be where people do most of their actual work. Apple should probably be paying them


I was able to see LaMDA in action about 1.5 years ago. It was just as awe-inducing as ChatGPT. Google is not behind in this race, they're simply slower to roll out new functionality. And maybe rightly so? Some of the things I saw ChatGPT say would have been multi-day international bad headlines for Google, if a Google AI had said them instead of an upstart startup.


Caution is well and good, but when I look at top selling computers there's a conspicuous lack of Xerox branding.


There is a saying that "all publicity is good publicity," and it expresses a very real truth: a media outlet reporting a bad interaction with Google's AI would probably make people even more curious to try it out. If they get the business model nailed down, they should not be afraid of bad press. (Despite our imaginings of the ways Google could profit from LLMs, I don't think Google itself has the business model nailed down.)


Google got a lot of free publicity around LaMDA when they fired Blake Lemoine, but didn’t make any moves to take advantage of it in the AGI space. Now it seems OpenAI is here to take their lunch.


LLM != AGI


And would have such headlines actually affected them, at the end of the day?


How many parameters?

The LaMDA paper describes 2B, 8B, and 137B parameter variations. Sundar says "We're releasing initially with our lightweight model version of LaMDA. This much smaller model..."

So it sounds like the 2B or 8B model variations, compared to GPT3's 175B. [edit: corrected]

I can't imagine such a smaller model coming across as anything near as impressive as ChatGPT.

So the announcement today is for a much smaller, more limited thing than ChatGPT and even that is not actually available today nor even with an announced release date.


Yeah, I think you're onto something here. Clearly we won't know till we can use them side by side, but there are lots of hints that Google is a ways off from matching ChatGPT. A small model, a closed beta, a press release and mock-up instead of a link, chest-pounding about how they invented it all anyway.


GPT3 is 175B parameters, but I agree with you.


Oops. Corrected. Thanks.


Comparing models strictly on parameter count is simply naive. And if we are to compare them only on that basis, smaller models have often proven to be more stable and efficient.


AI generated summary: Sundar Pichai, CEO of Google and Alphabet, has announced the release of Bard, an experimental conversational AI service powered by Google's Language Model for Dialogue Applications (LaMDA). Bard seeks to combine the breadth of the world's knowledge with the power, intelligence and creativity of Google's large language models. It draws on information from the web to provide fresh, high-quality responses. Bard is initially being released with a lightweight model version of LaMDA, which requires significantly less computing power and will allow for more feedback. Google is also working to bring its latest AI advancements into its products, starting with Search, and will soon be onboarding individual developers, creators and enterprises to try its Generative Language API. Google is committed to developing AI responsibly and will continue to be bold with innovation and responsible in its approach.

Src: https://labs.kagi.com/ai/sum?url=https://blog.google/technol...


Missed an important detail: "And today, we’re taking another step forward by *opening it up to trusted testers ahead* of making it more widely available to the public in the coming weeks."


Being the first to achieve something does not make you a winner. This lesson has been taught countless times in tech from Yahoo to MySpace yet people believe OpenAI has won.


Agree, also because OpenAI misses a crucial point: how to reward content creators?

At least Google can easily pivot to a new version of AdSense with these products.

Technology != business viability

That's what many first movers don't think about


Google’s post assured me they were first.

But yes, first movers don’t always win. They have an advantage when there are network effects, but there’s a lot more to it.


I tried Kagi's summarizer [1], and it's okayish but misses one important point:

> Sundar Pichai, CEO of Google and Alphabet, has announced the release of Bard, an experimental conversational AI service powered by Google's Language Model for Dialogue Applications (LaMDA). Bard seeks to combine the breadth of the world's knowledge with the power, intelligence and creativity of Google's large language models. It draws on information from the web to provide fresh, high-quality responses. Bard is initially being released with a lightweight model version of LaMDA, which requires significantly less computing power and will allow for more feedback. Google is also working to bring its latest AI advancements into its products, starting with Search, and will soon be onboarding individual developers, creators and enterprises to try its Generative Language API. Google is committed to developing AI responsibly and will continue to be bold with innovation and responsible in its approach.

The point missed is that they're starting to test Bard with "trusted testers," their closed beta program, so it's not really "released" yet.

The context is that they had to announce it now because it's likely to leak anyway; trying to get outside testers to keep it confidential would be difficult and not worth it.

[1] https://labs.kagi.com/ai/sum


Missing the point of positioning by miles.

I’m sure Google will release something good, but it will be called: Google’s ChatGPT.

They should have made something to distinguish their AI solutions from being just a copy of a trendy solution.

Saying AI will be used in search is not news, AI is already part of Google search.


Tying it back to search at all is completely missing the point, imo. They need to realize people want answers and information; they don't want to look for answers and information. From a marketing perspective, it sure seems like Google has already missed the ball on this one. Not that they can do anything about it, though, since their cash cow is search results. They need people to look, not find, and ChatGPT helps people find.


How would people get answers and information without looking for them? You need to query before you receive, and search is the prime input people are used to.


It is absolutely critical for our society that this technology, at its full power, be developed open source to the public.

That presents dangers to society. Educating citizens on the ubiquity of human-like text being generated by bots is one challenge. Knowing that bad actors will be able to use these models for evil is another. "Write a convincing 10 page manifesto on the dangers this religious minority poses to societal cohesion" or "tell me a cost effective way to create an IED that won't set off suspected red flags per Texas statutes, and how to deploy it in an urban area for maximum impact".

The risk of this technology being contained to Google and governments, on the other hand, is granting them a monopoly on unmatched data analysis in the history of the world, purely for profit and the continuation of power. Free thought would be crippled, with only approved questions permitted and all queries analyzed and monitored by a central authority.

We must all own this technology.


Good point on open source. False about "good" and "evil".

The most dangerous thing for society is perpetuating the idea of good and evil, because that is what motivates war and terrorism. It is used as propaganda to turn conflicts into moral issues and justify mass killing. What do you think the terrorists are fighting against? Evil (in their minds).

The real problem is that there are different worldviews and political groups, a lack of metacognition about this, and governments or dissident groups using this to enable their propaganda to further their political causes.

Worldview and group membership are tied together to a large degree.

Also, as far as being open source: we already have open source GPT systems. The problem is that the most powerful models are of a size that cannot currently be trained or run on normal GPU setups. Only a few companies have the infrastructure for it.

Eventually we will get more efficient models, and/or possibly a way to do it with a decentralized GPU mesh (assuming that's possible), and/or memristor manufacturing will start scaling up, which will make it possible to run these large models on consumer hardware someday.


interesting, haven't looked into memristors in a while but it seems like that's a real possibility

https://www.frontiersin.org/articles/10.3389/fnano.2021.6459...


There's a Product from Google which I like very much: "Discover" in Google Chrome on Android devices.

It is similar to what the normal "Google" app offers: snippets with links to articles which may either be currently relevant for me (as in: today you searched a lot for X, so we recommend Y), or which may be a bit older but still of interest to me. A personalized Google News, but much less news-oriented.

I don't use this in the "Google" app, because there the link-sharing is broken: it only shares the link, while in Google Chrome the share button shares the link but also provides the title of the article.

The point is that I want to have this "Discover" feature in Google Chrome on the desktop, not only on the phone or the tablet. I don't use my phone for browsing, and my tablet only in bed. I'm a desktop-first person. There I have 3 monitors and a surface where I can place my cup of tea and my pencil.

Google is telling me: We don't want your behavior. We want you to use a phone or a tablet, and we will reward you for this behavior by granting you access to our analysis of the information which we aggregated about you. Else we'll punish you by not giving you this tool.

I wonder where they will be offering Bard, and at this point, I'm close to considering using Bing, which was always out of question for me.

The current message from their CEO: who cares? Publish it, let us see for ourselves what it can do.


At least for this audience there's a much less sophisticated algorithm for discovering: F5 on Hacker News


Am I the only one who feels like Google is missing the mark in more than one way?

First off, I have no idea what Google uses AI for today in any of its products. Google today is about as useful (or useless, perhaps, since I’ve switched search engines) to me as it was 10 years ago. I don’t care, either. If I don’t find myself thinking “I really need AI in Google’s products to achieve this”, why should I care?

> But increasingly, people are turning to Google for deeper insights and understanding — like, “is the piano or guitar easier to learn, and how much practice does each need?”

I want my search engine to search the web for that information. I have no interest in it amalgamating that data for me and performing who-knows-what transformations to it before serving me the data with no traceability to primary sources and no way to validate it without redoing the job myself. I can’t be the only one who wants their search engine to search, can I?

> In 2018, Google was one of the first companies to publish a set of AI Principles.

Google was also one of the first companies to fire an AI researcher for what he said about the AI he developed. Google has also shown zero regard for ethics as it has exploited its monopoly position in ad markets and consumed small companies to squash competition. I have no trust in Google.


> Google was also one of the first companies to fire an AI researcher for what he said about the AI he developed.

If you are thinking of the guy who said the LLM was alive, Timnit Gebru was earlier and she was fired from their AI ethics team for levying fair criticism of their work in a research paper [1]. I find that situation more of an indictment of their compromised AI ethics.

[1] https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google


Thanks for sharing. This is about what I would have expected from Google, so it’s nice to see my assessment was accurate. I’d say that focusing on racism after firing someone for their work on AI (un)ethics was tone deaf, but it sounds more like selective hearing.


I am not sure whose focus on racism you are referring to. There is indeed a problem with white people in Google management having "selective hearing" when people of color speak up, which in this day and age is considered a form of racism. It is the duty of those in power to unlearn their biases and treat everyone with respect and professionalism. We should not make excuses for powerful white people failing to respect the work of people of color. (I do not mean to suggest you are doing so, but I am uncertain what you mean by "selective hearing".) It is my personal opinion that a white researcher would not have been treated this way, and an examination of the role of race in this situation is certainly appropriate. The fact that Gebru was subject to racist harassment after this event is also worthy of examination as we consider the way that our society responds to people like her sharing their views.


What search engine have you switched to?

Maybe I've just been tuned to use Google effectively, but Bing/DuckDuckGo results are horrid except for very simple things, while Google gets me exactly what I want in the first ~5 results.


I switched to DuckDuckGo initially and had similar gripes over search results. The last straw was when I started to hear about censorship and suppression of sources on DuckDuckGo. I switched to Startpage and the results seem much better. It’s probably using a larger company’s (Google?) index, but I haven’t done that deep of a dive.


Startpage pays Google to use their search engine. It's on their Wikipedia page.


Lotus 123, Borland and Wordperfect were at the top of the software world. They failed to make the transition to GUIs/Windows.

Microsoft missed the arrival of the web. Microsoft also missed the mobile revolution and ended up with zero mobile operating system market share.

Internet Explorer, having vanquished Netscape, declared "job done" and stopped developing the web browser further, only to be cast to irrelevance for its failure to advance and innovate. Much like Google search.

Kodak, Yahoo, Commodore, MySpace etc etc etc all missed the critical technology change that left their business behind.

Despite seeing the oncoming train, Google may be too arrogant, too internally political, too controlled by its vast river of web advertising gold, to be willing or able to make the changes needed.

And it may be that Google simply cannot transition its search over to becoming top dog in AI information seeking - it's possible that AI interfaces aren't a "winner take all" market like search is. If that turns out to be true then Google won't be finished, but it will be diminished.

ChatGPT brings a laser focus that Google simply cannot.

It's possible Google's time has passed.

I remember distinctly when Google arrived how instantly old and out of date Altavista suddenly looked - that's how Google search looks now.


I was able to see LaMDA in action about 1.5 years ago. It was just as awe-inducing as ChatGPT. Google is not behind in this race. They're simply slower to roll out new functionality. And maybe rightly so? Some of the things I saw ChatGPT say would have been multi-day international bad headlines for Google, if a Google AI had said them instead of an upstart startup.


They should have had the courage to weather through that noise. Or do something like Microsoft and acquire after release. These are just excuses tbh


Acquire what after release?

The point GP is making is that Google's technology has been (at least to a non-expert) on-par with the external things. Why would they acquire something they already have?


My point was that this public relations issue could have been easily solved. A tech-first company can easily get PR to solve it. That they didn't implies they are past their prime. This is the Xerox-Apple thing again


I still don't follow. "Instead of publishing a harmful product now, we'll continue refining and publish a good product at the same time as everyone else" is a good strategy.

This is ultimately only an issue if you think Google's models are significantly worse (and that's related to the lack of public testing). That assumption doesn't seem justified.


Microsoft feels like an outlier on this list. They are thriving right now.

Also, there are still many categories of things that ChatGPT-like bots can't help with yet, such as shopping. I think we're still very early in this cycle. That doesn't mean Google will succeed, but it feels premature to be writing their obituary.


Very true. Although, as an aside since you brought it up: I will say shopping on Google has gotten far worse these days with all the SEO-optimized "top 10 best X" pages that are basically just affiliate link spam with extremely limited value / product testing / insight.


That's a fair point about shopping. I was thinking more that I end up googling things that then lead me to ecommerce sites, and it's unlikely that ChatGPT will replace that any time soon.


Windows, Office, and more recently Azure are big enough to make these failures survivable and give Microsoft more chances, but IE and Windows Mobile are textbook examples of complacency.


IE stood no chance against Google Chrome because, just like Microsoft shoved IE down users' throats, the same thing happened with Chrome and Google.


This is probably true, but I honestly wonder why. Windows is a commodity as far as I understand, which you can get for free with not-so-hidden tricks, which are also apparently approved by Microsoft themselves.

O365 etc. survive due to corporate inertia (Excel), but I wonder if growth stems from new customers vs. existing large customers growing and requiring more licenses while actively looking for replacements.

Azure is good, sure, but AWS is better (yeah, the console), and Google will probably do anything to not slide into irrelevance in this area, so they're stuck between a rock and a hard place.

I still have MS stock (with a large payoff atm), but I honestly wonder when it's time to sell.


Google was already next when they missed the social media boat.

That said, Microsoft is still the second most valuable company in the world a decade after missing out on mobile, so it doesn't really seem like a big deal.


It certainly didn't kill Microsoft, but it's tens of billions of dollars they didn't get. That's significant, even at Microsoft's scale.


There is also opportunity cost though. Maybe if they had succeeded in mobile then they would have missed Teams. It's hard to say that the side effects of missing that wave were exclusively negative.


Quite the opposite. If they had owned mobile + desktop, they could have much stronger integration, and we wouldn't be Google-first with our apps.


Yeah that's a good point with regards to integration. I still think the Microsoft that wins in mobile looks very different from the Microsoft we know today.


Arguably Microsoft is actually well-placed to continually miss major advancements in areas they should be king in (or adjacent to them) and yet continue.


Using Teams as a not negative example is very odd.


They have crushed Slack with Teams and it is the backbone of their enterprise offering. Am I missing something?


I am referring to the near universal opinion that Teams is shit software that is only worth forcing employees to use because it is free.


And this is Google's core business. Nothing has come along yet that replaced corporate use of Microsoft Office, and that is where they make most of their money.


How did they miss the boat when they have YouTube? I assume by 'social media' you mean "user-generated content that we can slap ads on".

Facebook doesn't make any money when people talk to each other, they make it when people click ads. Same goes for Google. "Social Media" is just another ad vector.


Google has YouTube.


It's a big deal. It's just that Microsoft has lots of other businesses. They could have had a billion dollar mobile business too, but they don't.


Crazy to say this about a company with a 5y CAGR of 25% at nearly $300B in yearly revenue. That's more growth than Microsoft, Apple, Amazon, and pretty much any other large tech company.

Google's time has passed?

They still have 7-8 products with more than a billion monthly users.

If you go back and look at the companies you listed, they died because they were poorly run companies. Google isn't a poorly run company.


Not a dog in this fight, but an alternative to dying would be to become IBM (i.e., to be seen as irrelevant).


> Microsoft also missed the mobile revolution and ended up with zero mobile operating system market share.

OTOH, Google got scooped when in January 2007, Apple changed computing by launching the iPhone.

But, though it had not been announced, Google already had the Android project in progress, which pivoted to a more iPhone-like concept (full-device touchscreen, no keyboard) and ended up doing OK in the market.

Presumably you can see the parallel I'm trying to make with today's situation. Obviously it doesn't prove anything, but if we're looking at past history as a guide, this is something that also happened.


> and ended up doing OK in the market.

Thanks in no small part to Samsung. Google honestly would probably not have gone beyond their initial Blackberry-style design approach without the iPhone. And Samsung's Android flavour pushed hardware capabilities way past what Google provided out of the box.


> "Microsoft missed the arrival of the web."

How did Microsoft miss the arrival of the web and have the dominant web browser for years?

> "Microsoft also missed the mobile revolution and ended up with zero mobile operating system market share."

That's kind of disrespectful to the competitors who duked it out during the pre-iPhone phase of the mobile revolution. Microsoft competed with and defeated Palm Computing, the market leader, and others in the marketplace to become the top dog in PDAs and smartphones around the release of Windows Mobile 5 and 6. They didn't "miss" anything.

Apple, to their credit, managed to hit the right combination of timing and consumer appeal that they swept everyone away: Microsoft, Blackberry (remember them?), and what was left of Palm. The iPhone would probably have killed Palm even if they'd managed to fend off Microsoft.


>> How did Microsoft miss the arrival of the web and have the leading web browser for years?

That's a well known part of computing history. https://www.inc-aus.com/tess-townsend/what-bill-gates-got-wr...


The results of Google today really look like Altavista in 1999. I started using you.com, which looks like Google 10 years ago. You get actual non-SEO-optimized search results on the first page. I‘m sure that will change once more people use it, but at the moment I have a working search engine again.


Google should release whatever that ex-dev was going on about, some sentient AI that Google has locked up.

I think Google has enough capital, brand recognition, and customer dependence, that they could get away with missing some of the AI hype train for now.


Same. Switched to you.com since day one and never looked back. Once in a while I'll use DuckDuckGo for very specific queries.


Plenty of cases where you just want to type something in a search box and see a bunch of different options - like shopping. Let’s say you’re in the market for a keyboard. Would you ask ChatGPT what’s the best keyboard on the market right now, and just buy that? I wouldn’t. I want to see a few different options and I‘m not even someone who enjoys shopping. Where there’s a list of options to display, google will have plenty of room to throw their ads in. If anything Amazon is more of a threat to google, if all online shopping ends up there.

I can search for, find, and order virtually anything I want in less than a minute. Unless an AI is going to anticipate exactly what I need, order it for me, and have it delivered before I even think about it, I don’t see these two things competing much.


I would definitely ask ChatGPT what the best keyboards are, summarizing their strengths/weaknesses, and then use that as the starting point of my search.

Comparison shopping is a major PITA with current sites, clicking back-and-forth between various product pages and review sites, and trying to distill it down to a few relevant choices -- things that ChatGPT's excellent abilities at summarization could really help with.


I don’t consider it a major PITA, especially compared to physically driving to a store and buying it that way. It’s fun for a lot of people too. If a chat bot is going to recommend a few things, there’s opportunity for Google to throw their ads in there. Or maybe they’ll just tax everyone and make advertisers pay a fee to be considered in the training set, and then you don’t even know what’s an ad and what’s not.


"Microsoft also missed the mobile revolution"

I remember getting a Windows Mobile phone and loved it. Its UI (tiling etc. was cool) was superior to iOS and Android at the time (at least for me). I was sad to see them disappear.


Agreed on the trust part - at this point I would expect Bard to start incorporating ads into its responses as soon as they had any sort of market dominance (if it is even competitive enough to achieve that).


I feel like all the examples you give are actually counter examples to the thesis that Google will decline?

Like, those are all examples of companies which failed to adapt to some new reality. This announcement is Google seeing AI as the next thing and proactively engaging with it.


You forgot Nokia. They had a touchscreen prototype before the iPhone, but they thought the world was not ready. When Android came, they took too long to admit S60 was dead.


I agree and just want to add to your list, Intel missed the low power mobile processor market and may never even achieve a foothold.


> Beyond our own products, we think it’s important to make it easy, safe and scalable for others to benefit from these advances by building on top of our best models. Next month, we’ll start onboarding individual developers, creators and enterprises so they can try our Generative Language API, initially powered by LaMDA with a range of models to follow. Over time, we intend to create a suite of tools and APIs that will make it easy for others to build more innovative applications with AI. Having the necessary compute power to build reliable and trustworthy AI systems is also crucial to startups, and we are excited to help scale these efforts through our Google Cloud partnerships with Cohere, C3.ai and Anthropic, which was just announced last week. Stay tuned for more developer details soon.

I'm not sure how any Google representative can make such a statement with a straight face at this point. Google has such an abysmal history of pulling the rug out from under third party developers (and users) that it seems irresponsible to trust Google at this point.


https://chat.openai.com/chat

Q: Is this boilerplate?

AI is the most profound technology we are working on today. Whether it’s helping doctors detect diseases earlier or enabling people to access information in their own language, AI helps people, businesses and communities unlock their potential. And it opens up new opportunities that could significantly improve billions of lives. That’s why we re-oriented the company around AI six years ago — and why we see it as the most important way we can deliver on our mission: to organize the world’s information and make it universally accessible and useful.

A: Yes, this text is a boilerplate.

Q: Why?

A: This text appears to be a standardized or formulaic description of the value of AI technology and its potential impact, likely used as a marketing or promotional tool by a company. Boilerplate language is often used to convey a consistent message across various platforms or materials.


Time will tell (a few weeks, according to the article) if it's any good, as that's when it will be released to the public.

However, an interesting piece of info was

> It draws on information from the web to provide fresh, high-quality responses.

which seems like it will possibly be able to use some kind of external repository of information, instead of needing to be retrained? Very curious about that.


Perplexity.ai processes Bing search results through GPT-3. It works really well, and you can see the prompt they use at https://blog.ouseful.info/2022/12/09/combing-chatgpt-and-sea...


Perhaps it won't take information from the training data but rather take the content from the pages a regular search would return and convert it to a nice answer to your query.

This seems to be the approach https://www.perplexity.ai/ is taking.
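That "search, then summarize" pattern can be sketched in a few lines (a minimal illustration with stubbed-out search and LLM calls; the function names, snippet format, and prompt wording are my own assumptions, not Perplexity's actual implementation):

```python
# Sketch of retrieval-augmented answering: fetch search snippets for a query,
# then ask an LLM to synthesize them into one answer with citations.
# Any real search API and completion API would slot into the stubs below.

def search_snippets(query: str) -> list[str]:
    # Stub standing in for a real search API (Bing, Google, etc.).
    return [
        "[1] SanDisk's 1TB microSD card is the largest widely available.",
        "[2] 2TB microSD listings on marketplaces are commonly counterfeit.",
    ]

def build_prompt(query: str, snippets: list[str]) -> str:
    # The LLM never answers from its training data alone; it is instructed
    # to ground the answer in the numbered snippets and cite them.
    context = "\n".join(snippets)
    return (
        "Using only the numbered web results below, answer the question "
        "and cite sources like [1].\n\n"
        f"Web results:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def answer(query: str, llm=lambda prompt: "(model output here)") -> str:
    # `llm` would be a call to a completion API in a real system.
    return llm(build_prompt(query, search_snippets(query)))

print(build_prompt("What is the largest microSD card?",
                   search_snippets("What is the largest microSD card?")))
```

The appeal over retraining is that freshness comes from the search index, not the model weights, which would explain the "draws on information from the web" phrasing in the announcement.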


It may work similarly to https://labs.kagi.com/ai/sum, just based on the first couple of Google hits instead of a single page.


"Everybody else is just copying us. We are miles ahead of our competitors who are cheating by shipping actual products instead of our ground-breaking vaporware. Our revolutionary AI never gives the wrong answer because it never ships."


This is very good news for us as consumers. We don't want any one of the players to dominate this field, and we want the service to be as cheap as possible, ideally free.


It will essentially still be locked away by an oligopoly that has little distinction among the participants. The influence on society held by that oligopoly will significantly surpass the current concerns of social media.

For much more in depth take, I wrote recently about that here. https://dakara.substack.com/p/ai-and-the-end-to-all-things


API access isn't what we want. We want to run those models ourselves.


Almost all players in this field publish their work. If the need arises we will be able to reproduce them, but only large companies will be able to run the models cost efficiently, and there are a lot of scale effects here.


Bard is such a funny name. I tried to, no pun intended, google it and it autocorrected to brad.


Possibly a reference to the 1956 Isaac Asimov short story "Someday": https://en.wikipedia.org/wiki/Someday_(short_story)

"The story concerns a pair of boys who dismantle and upgrade an old Bard, a child's computer whose sole function is to generate random fairy tales. The boys download a book about computers into the Bard's memory in an attempt to expand its vocabulary, but the Bard simply incorporates computers into its standard fairy tale repertoire..."


If you name something Bard, it's at least partly a reference to Shakespeare. Also known as The Bard.

What are Shakespeare's works best known for? Language.


A bard is something along the lines of a "poet" or "musician".

More concisely, it's someone that's good with words.


I don't know what to think of it, but I saw it in a comment on Hacker News by raesene9 three days ago:

> The way I've come to look at ChatGPT is via a D&D analogy.

> It's like a helpful Bard with 1 rank in all the knowledge skills and a good bluff roll.

> It'll give you good answers to a lot of basic queries, but if it doesn't know, it'll just make up something and provide that.

https://news.ycombinator.com/item?id=34642666


It's a nice name but also a bit grandiose. They would need to make the final product good enough to back up the grandiose name that they chose.


Definitely not a good name. The first similar word that came to mind was "lard" and then "retard". Also, it looks like they will release a smaller model to save on compute. This doesn't look like a good strategy. They need to come out solid and strong over the competition, not watered down.


Should've named it brad.


Let's be honest. Google would have never moved an inch if there was no ChatGPT. Competition is good.


Of course not. Why would they risk disrupting their major revenue source by screwing with search and AdWords!?


> Soon, you’ll see AI-powered features in Search

I was wondering who the audience of this article is. Hasn't Google search been powered by numerous machine learning algorithms for years? Another thing I don't understand is why Google's product strategy is like Baidu's: AI First. I mention Baidu because they are infamous for not having a product vision. Since when could a technology itself be a product strategy? Information at your fingertips is a product strategy. Organizing the world's information is a product strategy. AI is just a means to many ends, right?


> I was wondering who is the audience of this article.

Shareholders.


It would be brilliant if Google can pull this off and take market share away from ChatGPT/OpenAI. Just like how people stuck with MS Teams and stopped subscribing to Slack and Zoom: not because it was better, but because it was well integrated and users who already subscribe to MS Office 365 get it for free.


It used to be that Google would find search results on the web and send you there. They were good at it. Good for content creators, good for users.

Then they started scraping the destination pages and present the answer right there on Google's own results page. No traffic for you. Well at least there was a link.

Now they take it one step further, and just steal the content and feed it into their gargantuan pattern matcher, to be spit out to users in a remixed form, swirled up with a bunch of similarly scraped content from the open web. Great for Google, RIP content creators.


"content creator" is a separate category of for-profit online media. As it's designed for mass consumption, the content creators moved off the open web a long time ago to sharecrop on BigCorp's platforms: YouTube, Twitch, Substack, TikTok, etc.

There are lots of non-commercial random blogs out there with useful information on long-tail topics. I agree that those will disappear from search, as AI will put a lower weight on their posts compared to something from a major website.


I’m curious to hear why Google isn’t going to win in this AI arms race.

The way I see it, Google:

* is effectively the gateway to the internet, owning the most popular website in the world, mobile OS, and browser

* has infinite LIVE data, both unstructured and structured (knowledge graph)

* has infinite compute resources

* employs the world’s best AI scientists and invests heavily in AI

* has decade(s) of experience in AI products

* has one of the most popular suites of apps already in people’s hands (gmail, calendar, maps, docs, YT, drive, wallet, photos etc etc)

* is a popular cloud provider (which is another vector, ie platformizing AI)

* has infinite money

* is probably the most well positioned company to actually monetize AI

Honestly I don’t see why HN seems to think it’s over for Google. Outside of tech, most people have never heard of ChatGPT or Bing. Everyone on the planet knows Google.


That may all be the case, but then why has Google been failing to innovate and maintain anything these past few years?

The past few years we’ve seen Google products die and deteriorate. Search has gotten worse, and they’ve killed dozens of products and services (even ones that they promised would be around for the long haul). GPhotos is no longer free (after putting most competitors out of business).

Google lacks the institutional commitment and consistency to succeed at much these days. One need only look at their recent past to see why everyone is skeptical. Maybe they will pull a rabbit from their hat, but I’m not holding my breath.

That is all not to mention the pending real antitrust issues they are facing.


Search is so very bad. I tried to figure out what the largest MicroSD card was and was served ads and real results to buy fake 2TB cards from Amazon. Several searches were useless, packed with ads and SEO spam. Only Reddit could tell me that the largest currently available card is 1TB.

It left me pretty shocked at how bad it’s gotten.


I looked up "largest available microsd" and the first item was a pop-up box that said the largest available one is 1TB. I put the same query into Bing and the result was the same.

That's not to say that what happened to you didn't happen. Information on the internet evolves rapidly and is far from static. One problem is that people game Google with SEO. We end up with an arms race between people gaming signals for information and Google search having to find different signals for information. The high noise-to-signal ratio is a very hard problem to solve.


That’s entertaining, because I just tried your query. I got this from the Google curated info, all of which is wrong:

(Summarizing) People also ask

What’s the largest MicroSD card you can get? 256gb (wrong)

Will there be a 2tb card? Yes (press release of company I’ve never heard of that I’m pretty sure is not legit.)

What’s the largest card you can get? 512gb with 1 and 2 on the horizon. (Out of date and also wrong, no legit company has announced a 2tb)

Who makes a 2tb card? (List of companies that are scammers and make fake cards.)

Do 1tb cards work? All 1tb cards are fake (wrong or out of date)


I'm still salty about Google Reader, but they've innovated (Bard is the subject in the Op and a pretty big AI innovation) and maintained (Gmail, Docs, Chrome, Golang, Android) a bunch of stuff.


> but they've innovated (Bard is the subject in the Op and a pretty big AI innovation)

Bard has yet to be released or evaluated and they are already using marketing speak to let people know it won't be as good as ChatGPT.

>and maintained (Gmail, Docs, Chrome, Golang, Android)

I wouldn't call any of those maintained... stagnated at best with degradation around the edges all around. For each one of those there are 10 projects Google has killed or let wither on the vine over the past 5 years.


>GPhotos is no longer free (after putting most competitors out of business).

Maybe the putting most competitors out of business has something to do with GPhotos not being free? I think it does.


I would definitely agree - it is anti competitive behavior all around and will only make their antitrust woes worse in the coming years.


I also don't understand why everyone is sure that "search is over" because you can ask AI to teach you, for example, first semester calculus.

That's great and all but that's not where the money is. Where is the money? It's in queries like "greek food near me", "best earbuds 2023", "replace sparkplug Honda motorcycle." Simple, monetizable questions that are extremely context dependent and constantly changing.

For that you'll continue to need search.


But imagine Google takes that query, does the search, then feeds the search results into an LLM and gets it to synthesize all that information.

Basically, what if Google's little info boxes weren't garbage? That alone would be a huge step, without even getting into potentially more sophisticated ways to use an LLM.


I do agree that Google search AI will most likely win against Bing search AI due to brand recognition and resources.

But I also think that Microsoft will be a huge winner for enterprise software. They are already integrating GPT into Teams, and no doubt they plan to integrate it into pretty much everything they can think of.

I hope we will see a lot of independent startups making innovative AI products as well. This space has incredible potential, and there could be many winners.


To add some noise, and my own opinion, which carries literally no weight:

I think Google's ties to ads are the evil which creates the infinite money and compute and AI scientists etc. However, it's an extremely negative part of their product suite for end consumers.

If ads eat the rest of the business faster than they can deliver killer features which make them too good to leave then I think google will be doomed.

If their ads remain in check with their ability to provide great product experiences, then it will likely survive.

I'm optimistic that Google will do well.


Ozymandias. There have been large and powerful corporations (with supposedly """infinite""" resources) like the East India Company that eventually transformed into no more than dust and rubble.

I think the same is going to happen with Google (in due time, maybe 100 years), mainly because its product management is completely out of focus due to mismanagement and lack of a proper incentive structure.


This is an interesting take that I also share to an extent. However, there is a catch: it depends on Google's ability to sustain an offshoot product, which, as time has shown, despite endless amounts of money and talent, they're unable to do.


I also think Google is well resourced/positioned. Though, I am wondering: what does winning the AI arms race mean? The AI needs to be in some useful products/services?

Perhaps it creates specialization for the "internet gateways", where the dominant companies for each major type of internet gateway entrench themselves further?

- Meta (FB) as the social door, with more useful AI for socializing?

- Microsoft is the document app door, with more useful AI for creating documents?

- ...


One possibility is that they get sued by thousands of companies for "stealing their knowledge" and get bogged down in the courts forever, a la the MS antitrust case. Even though all the other LLM services will be indexing the same data, Google will be the one that bears the brunt of the lawsuits because they are a rich target. Kind of like the Yelp lawsuit but on steroids.


Couldn’t you write something similar about IBM and early personal computers?


Because they are too scared to release a public version of their AI.


You could make a similar argument about companies with "infinite" resources that reached the highest peaks of technology at the moment and nowadays are mostly irrelevant: IBM, Xerox, Nokia, Blackberry, Yahoo... Maybe the question is, why do you think Google is different from these companies?


... and yet a good chunk of their revenues is driven by ads. On the other hand Microsoft can deploy ChatGPT tech in all their enterprise products.

Google does have the means and knowledge, but they are not as good at monetizing them. See also the cloud space and Google Cloud vs Azure.


My mom has heard of ChatGPT... she's a boomer retiree that (used to) share those Facebook posts that "make you rich by reposting".

Also, when was the last time Google actually delivered a successful product? Yeah, it's been a while.

This is going to be interesting for sure.


so there are two narratives here:

(1) Google is behind and playing catchup, slow and bloated and weighed down by bureaucracy they are sinking

(2) Google has had this tech behind doors for years but not released it because of the risk of public perception and regulatory blowback. Now under the cover of OpenAI and Stability they can do it.

The second one seems plausible, but I can't help being struck by how much their AI-driven products seem to have stagnated over the last 5 years. Google Assistant (or whatever it is called now) seems worse at understanding my simple requests than it was 10 years ago. Things that seemed magical and worked very reliably, like "remind me in 3 months that I left the oil can behind the paint tin", now fail so often I've stopped trying. Combined with the lack of detail in the blog post, I'm inclined to lean towards (1) - even if I do believe the actual AI know-how is buried somewhere inside Google, I think it is so many layers from anybody who could deliver it in the form of a product that it's effectively like it isn't there.


I was able to see LaMDA in action about 1.5 years ago. It was just as awe-inducing as ChatGPT. Google is not behind in this race. They're simply slower to roll out new functionality. And maybe rightly so? Some of the things I saw ChatGPT say would have been multi-day international bad headlines for Google, if a Google AI had said them instead of an upstart startup.


Why is there an almost identical comment right below this one from a different user?



Sometimes people on poor connections click the send button twice and it registers as two separate messages.


It’s two different users.


Sometimes people on poor connections click the send button twice and it registers as two separate messages


It's probs Sundar HN alts


I've seen this exact comment at least 3 times in this post... what's up Google, getting nervous?


AI is the evolution of search that we always wanted. Everytime I search, I open the first few links and find the information there. It's a very repeatable task, like looking for the right stackoverflow answer or digging through imdb to find the right movie or searching reddit/HN threads. It's about time this was automated.
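That repeatable loop - search, open the first few links, skim for the answer - can be sketched as a toy pipeline. Everything here (the documents, the overlap scoring, the function names) is made up for illustration; real engines use learned rankers, not keyword counts:

```python
# Toy "search, open the top hit, extract the answer" pipeline:
# rank documents by query-term overlap, then return the single
# sentence from the top document that best matches the query.

def tokenize(text):
    # Normalize to a set of lowercase words, stripping basic punctuation.
    return {w.strip(".,?").lower() for w in text.split()}

def best_answer(query, docs):
    q = tokenize(query)
    # Step 1: rank "pages" the way a naive search engine would.
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    top = ranked[0]
    # Step 2: "read" the top page for the user - pick its most relevant sentence.
    sentences = [s.strip() for s in top.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & tokenize(s)))

docs = [
    "Python was created by Guido van Rossum. It was first released in 1991.",
    "Java is a language from Sun Microsystems. It appeared in 1995.",
]
print(best_answer("when was Python first released", docs))
# -> It was first released in 1991
```

A chat interface essentially fuses both steps into one generated reply, which is exactly why people stop clicking through to the sources.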


In a classic PM fashion, Sundar couldn't resist taking all the credit for this launch.


Many in the investment community are touting ChatGPT as a Google-killer. I don't think it's the case, but I do think the expectation is that Sundar will drop everything to personally treat it like an existential threat. If it were anyone other than Sundar it would appear that he isn't treating it seriously enough.


People are complaining that it's not available yet. I'd guess they have something that could be pretty useful right now. But they need to make sure it doesn't cause them embarrassment by saying racist/sexist/etc things.


If that's the problem, they're not going to fix that in a few weeks.

More likely this is being done since you can't capacity plan at all until you see how real users actually interact with the service.


This is all fluff with very little substance. OpenAI wowed the whole world with ChatGPT, and all Google can do is post a half-baked fluff piece saying they're releasing something "soon".


> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback.

Even for Google, a SOTA LLM is too expensive to be used as is. I would assume that this is not a groundshaking attempt, but more to give investors confidence that it will stay relevant in the market.

Still, this response is quite fast by Google's standards. People usually don't expect a non-trivial launch to reach Search in a single quarter.


LLMs provide a brilliant opportunity for search engines to justify a paid tier for search.

Would you pay $20/mo for a better Google Search? ChatGPT is already charging that.

Ad revenue is dead, long live subscription revenue.


I just hope there is adequate competition to drive prices down and provide a healthy market.


It's a testament to how worried Google is about ChatGPT that they've sprung up a soon-to-be-public version of the service themselves. I'm sure timelines were quickly shortened after OpenAI's ChatGPT version update/launch.

This is exciting stuff. We’re starting to see the wheels of AI competition spin fast as they race for first mass adoption. This is the fun phase, that’s beneficial (typically) for all. Quick iterations, competing for more bang for your buck for end consumers soon


For anyone that is interested.

This is the ChatGPT reply to the exact same question asked in the preview in the article.

Here are a few things that the James Webb Space Telescope is expected to discover that you could share with your 9-year-old:

The formation of stars and planets: The James Webb Space Telescope will be able to observe the birth of stars and the formation of planetary systems, helping us understand how our own solar system was created.

The search for life on other planets: The telescope will be able to search for signs of life on other planets, such as the presence of water or certain gases in their atmospheres.

The history of the universe: The James Webb Space Telescope will be able to look back in time to observe some of the first galaxies that formed after the Big Bang, helping us learn about the early history of the universe.

Understanding black holes: The telescope will be able to study black holes and how they interact with the galaxies around them, helping us better understand these mysterious objects.

The study of exoplanets: The James Webb Space Telescope will be able to study exoplanets, or planets outside our solar system, in greater detail, helping us learn more about the diversity of planetary systems in the universe.

The Google reply is obviously better because it tells you actual discoveries, while ChatGPT changes the question to expected discoveries.


This was a quite clever choice; JWST is just new enough that ChatGPT's current model wasn't trained on its results.


That’s a byproduct of ChatGPTs 2021 training date. If you give ChatGPT updated information about JWST and then ask it for a summary, it will give a result much more like google’s.


This difference can be ascribed to ChatGPT’s knowledge cutoff date in 2021.


ChatGPT is a pure conversational model, while Google can integrate this into a vast knowledge model to work with actual facts. (Just speculation; Googler, but no inside info on this.) Of course, as Microsoft adds ChatGPT into Bing they should also integrate it into the rest of their search stack. This will be an interesting horse-race to watch since it won't be just the AI models but also a massive amount of secondary infrastructure for desired outcomes with content safety (no erotica or nazi pamphlets on demand), IP protection (no plagiarism), political and cultural sensibilities, disinfo, and a ton of other problems.


Without citing their sources, ChatGPT, Bard, and others will quickly accumulate a lot of lawsuits over copyright and content violations.

Just think about what happened with Google News in Europe: Google needs to license its content: https://blog.google/around-the-globe/google-europe/google-li...

And with citing its sources an AI search answer is basically like an individualised Wikipedia article for your question. That is not much different from the list of search results, now.

Except there will be more text in between the links. This will make it more difficult to influence the ranking of the cited search results by means of SEO. This is because like in a Wikipedia article the order of the citations depends more on the order of the content of the article, not so much on the relevance of the citation for the article as a whole.

So I think these new AI search engines will not be so much a "Google killer", but they will be more a "SEO killer".


Let the API wars begin. Microsoft seems to have positioned itself to compete with Google.


That's been going on for the last 23 years at least.


Now it's really going to be about revenue.


The AI I'm interested in is of this sort https://mycroft.ai/ i.e. where I run and control it locally.

I don't want to go on any more "journeys" with Google. The last one started with me rooting for and trusting them (circa IPO.. 2004?) and ended with a dystopian nightmare spy apparatus, abuses like AMP, the attempt to cripple uBlock, etc.


This reminds me of when Slack bought a full-page newspaper ad about upcoming Microsoft Teams: https://www.theverge.com/2016/11/2/13497766/slack-microsoft-...


How long before the major news publishers band together and sue Google to put a stop to this? It already happened once with Google News, and Google's response was to pull article snippets. As the number of "no-click" searches increases with Bard the publishers are 100% going to sue over lost revenue.


I don't expect much from Bard but we shall see.

For a few weeks now I've had a thought experiment of creating an LLM search engine trained on books. Such an LLM search engine would be most reliable if you seek knowledge, but as others mentioned, if you want up-to-date information, a search engine is probably your fastest and easiest way to go. Actually, though, I wonder and somewhat doubt that the data, information, and knowledge in books lag significantly behind the Web's. One big advantage of books is that they are a more reliable and more in-depth source of information and knowledge than some random site, blog, or Wikipedia article. Scientific research papers also come to mind as a highly reliable source of information and knowledge.


Google Books does seem to still be digitizing books (albeit, with the various copyright lawsuits, they aren't displaying many results) [1]

So they would be in a unique situation to build this, as they have 40 million books many of which no one else would be able to scan.

[1] https://www.blog.google/products/search/15-years-google-book...


I do not regularly follow the ads market or the technology around it, so I might be asking a question with an obvious answer.

What is changing here as a user for me ? I search on Google for `x` or `y` and it shows me a list of resources available on the internet. I scour through them and pick a link that relates to what I am looking for.

> you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats.

How would Google or its AI know what the right answers are that it can distill and provide me with? Will the crawled content be filtered based on what Google perceives to be the right answer? And wouldn't that force governments to ask for more control over what content the AI serves?


To me that sounds like an evolution of the cards/widgets that show at the top for certain searches.


Assuming this is as good as ChatGPT it's interesting that they have had this up their sleeves but decided not to release it – I wonder why?

I've heard a lot of people comment about how only a private tech startup like OpenAI could realistically have released something like ChatGPT today, because ChatGPT requires a huge amount of capital to build (and run) and provides some fairly controversial answers which public investors might not like.

I suppose it's interesting that Google seems to disagree with this. Although the "lightweight model version" line makes me suspect this is a slightly different system and perhaps more easily sandboxed than ChatGPT.


Google needs a new CEO. Too little. Too late.


That is the worst press-release I've read for years.

In that it's overly wordy, lacking in information, generally hard to read, and boring, and it left me not knowing whether there was an announcement of a product or not.

Dreadful.

Maybe Bard wrote the press-release?


I'll admit I don't know a ton about state-of-the-art AI techniques (just a couple of grad-level AI courses in college), but one of the things I don't love about ChatGPT is that it doesn't really "know" things and essentially just word-vomits based on probabilities. This manifests in it often getting things wrong, though I would argue it was never intended to answer questions with any level of detail. I feel like a killer application of LLMs would be to wrap a knowledge graph similar to what Google already has. Hopefully that's what they did with LaMDA.
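The "LLM wrapped around a knowledge graph" idea can be sketched in a few lines. This is a toy illustration with invented triples and a templated answerer standing in for the language model - not how LaMDA or Google's actual Knowledge Graph works:

```python
# Minimal sketch of grounding answers in a structured fact store:
# answers are only emitted when a matching fact exists, otherwise
# the system abstains instead of guessing.

# Tiny "knowledge graph" of (entity, relation) -> value triples (made up).
kg = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Eiffel Tower", "completed"): "1889",
}

def grounded_answer(entity, relation):
    fact = kg.get((entity, relation))
    if fact is None:
        # A plain LLM would word-vomit a plausible guess here;
        # a grounded system can refuse instead.
        return "I don't know."
    # In a real system the LLM would phrase the fact fluently;
    # here we just fill a template.
    return f"{entity} {relation.replace('_', ' ')} {fact}."

print(grounded_answer("Eiffel Tower", "completed"))  # Eiffel Tower completed 1889.
print(grounded_answer("Eiffel Tower", "height"))     # I don't know.
```

The design point is the abstain branch: the model's fluency is decoupled from the factual store, so a missing fact produces "I don't know" rather than a confident hallucination.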


> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback.

Seems odd to release something worse than the competition. Is there a reason why Google wouldn't just come out with the best they have? Are they afraid this will eat into their ad revenue if people no longer need to click on links? Or are they just not able to build and deploy something on the scale of OpenAI's GPT-3?


Google has so many users that not even they have enough GPUs and TPUs to service them all.

I personally think they should use their best models, and just make it trigger very rarely. For example, only ~once per week per user (ie. 0.3% of queries).

Use a tiny model over the input query to decide if LaMDA will do a far better job than regular search results, and only trigger in those cases where it will most benefit the user to begin with.
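The triggering idea could look something like this. A hand-written heuristic stands in for the "tiny model", and the thresholds and function names are invented for illustration:

```python
# Sketch of query routing: run a cheap check on every query and only
# invoke the expensive model when it's likely to beat ranked links.

# Open-ended question words that tend to benefit from a generated answer.
QUESTION_WORDS = {"how", "why", "explain", "compare", "summarize"}

def should_use_llm(query: str) -> bool:
    words = query.lower().split()
    # Long, question-style queries benefit most from an LLM;
    # short navigational lookups are served fine by ranked links.
    return len(words) >= 4 and bool(QUESTION_WORDS & set(words))

def answer(query: str) -> str:
    if should_use_llm(query):
        return f"[expensive LLM] answering: {query!r}"
    return f"[cheap ranked links] for: {query!r}"

print(answer("facebook login"))
print(answer("explain how transformers handle long context"))
```

In production the router would itself be a small learned classifier, but the economics are the same: the big model's cost is amortized over only the fraction of queries where it adds value.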


> Seems odd to release something worse than the competition.

This isn't actually what it says? It's saying that it's a smaller model version of Lamda, there's no comparison to GPT-3.


> Is there a reason why google wouldn't just come out with the best the have?

Given that ChatGPT has hit scaling issues, a faster model with higher uptime is actually now a plus assuming quality is the same.


> Is there a reason why google wouldn't just come out with the best the have?

They literally stated the reason in the sentence you quoted.


Same reason Gmail/Gdrive wouldn't give everyone 100TB for free if a startup came out that did.


They're likely not willing to stomach suddenly being unprofitable again for a few years.


Won't be long before ads start making their way into the answers: "You want to know how to make popcorn? Try this amazing air fryer from company X!" The internet is enough of a billboard as it is.


While this would almost certainly be terrible, it might also be kind of funny.


Google plz give me access to QA Bard.

I promise I will not break it too hard.


Hey google, if your model is so lightweight, let me run it!


To be honest this feels reactive and reeks of desperation.


The potential here for "curating" and "curtailing" the responses to be very biased, or at minimum irrelevant, will probably be the death of such a feature in Google's hands, versus ChatGPT, which by comparison, and in spite of its controls, is pretty liberal in how it operates.

Fundamentally, google would have never created a chatGPT, and this response feels like youtube shorts in the face of tiktok.


I'm interested in what the "Check It" button at the bottom of the generated chat response will do. If that shows information attribution this could be huge. I also can't help but notice the responses in this article and the linked research blog post seem to be much shorter than those given by ChatGPT. I wonder if this is a character limit imposed to reduce compute overhead?


Well this is a big old panic press release isn’t it? Ahead of Microsoft’s presumably near identical AI/search integration reveal tomorrow.


Screenshots from Twitter show that “the new Bing” is a real live product ready to roll. Google has a screenshot and a press release. As engineers, we should know the difference between a mock-up and a launched product ready to scale.

https://mashable.com/article/bing-chatgpt


AI fact summarization like this will further obfuscate the facts from the summary. People already believe the top 3 posts from Google are the benchmark for truth. Now customers won't even see the sources (e.g. MSNBC, Wikipedia) of the "facts".

Google is deliberately biasing results in the name of ML-fairness, which may be laudable. Now the bias will be even harder to distinguish from fact.


Just because AI will tell you answers does not mean it will do away with citation.


It will mean that citation is less obvious, because the user will no longer engage with the source material (by clicking a link to read it).


you.com gives reference links


"a high bar for quality, safety and groundedness"

This sounds like its going to be cringy. Not a guarantee and I hope not, but it sounds bad.


I just hope that if they introduce this into search there's a way to turn it off and just get the normal search results, because as it stands these models are basically Borges' Library of Babel and make up whatever they want.

The uniform natural-language interface makes it impossible to judge individually whether the source of a given result is reliable at all.


ChatGPT just saves us the inconvenience of skimming through the websites in the first page of Google Search. We have become even lazier. We just want to type in our thoughts and get an answer in seconds. Now, if Google does a ChatGPT-like experience in the Search, then I'd probably come back to Google. I won't mind the ads.


Bit of a sidebar, but I find it sad that people use AI tools to connect with their kids. Whether it's getting it to make up a little story for them to tell, or a fully illustrated book, or in the case of the gif from this post: a list of recent discoveries made by the James Webb telescope that the AI thinks a 9 year old might like.


Can Bing or Google better integrate ChatGPT or similar to retain users? Isn't that the question?

The problem with ChatGPT being confident and wrong brings up the chance of litigation, of course.

The problem for search engines, surely, is that people will get their answers from AI rather than from a search engine, laden with its advertisers.


> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback

Interesting; I wonder if their use of the "lightweight" model will make it less capable than ChatGPT.


Given that Google/DeepMind is one of the few labs actually trying to make smaller models that are just as performant (e.g. Chinchilla: https://arxiv.org/pdf/2203.15556.pdf ), I'm optimistic.


Seeing is believing at this point. And I'm hoping it will be. Google (Assistant) development has been stagnant for years now, to the point that I'm often more irritated by it than it is useful. This could be the AI to actually power an AI assistant I was already counting on years ago.


Is this like chatGPT?


Seems like Bard will be similar; however, judging by the fact that coding wasn't cited as an example, its help with code won't live up to ChatGPT's.


So is $10B for OpenAI still worth it?


"AI features" will make search suck even more. I want my tools to be predictable, I want to be able to understand why they behave the way they do and I want to be able to customize them for my needs. This will be yet another step away from that.


ChatGPT summary:

>Google has created a new conversational AI service named Bard that uses the company's language model, LaMDA, to provide answers to user inquiries by drawing on information from the internet. It is being made available to trusted testers before a wider release in the coming weeks. Google intends to use AI to improve search and make it easier for users to find information and insights more efficiently. AI-powered features for Google Search will soon be rolled out to distill complex information into easy-to-digest formats. Additionally, Google is working to make it easier for developers to use AI by making the company's best models available for use in their projects


> taking another step forward by opening it up to trusted testers

Ah yes, the tried and true method used by Google for ensuring wide adoption, which they pioneered with Google Wave. Restricting adoption to a small group has never failed to ensure market success!


Tomorrow https://weaviate.io will also release a generative search module that accepts LaMDA when it’s in GA besides GPT-based models. (I'm affiliated with Weaviate)


Google doesn't have a good track record of delivering over the last few years (maybe they've gotten too bloated and complacent), so what comes next is not set in stone (BingPT FTW?). We live in interesting times indeed...


Good news for us in the field. I hope they ramp up hiring as the competition heats up.


But but but 'Google is done', 'it is all over for Google Search', 'Google will be dead in 2 years'. /s

Such recent headlines and early proclamations have erroneously underestimated Google's competitiveness in AI. As soon as OpenAI announces an AI advancement and model, someone else with more money (i.e., Google) scales up the parameters and training data to surpass them.

The only way to make both of Google and OpenAI's LLM solutions irrelevant is for someone else to scale up the parameters with a resulting smaller model size and then open source it to all. Neither OpenAI or Google is willing to do that, but it is highly likely that someone else will.

The real 'disruption' and game changer is an open-source smaller model with multi-billion parameters matching the capabilities of ChatGPT.


Well, I see a big problem with this release.

Let's imagine a user journey:

User makes a request, and then there's text: no ads (for now), no links to get you out of the search page. So where does AdWords get triggered if there are no link events? Advertisers will see a drop in traffic because Google is literally giving the answer instead of making people click through search results. This will accelerate a cycle of reduced ad spending, which will make things more difficult for Search. Then SEO marketers, and basically the whole web industry around Google, will see a massive impact as well, because they cannot "optimize" anything that makes people click. The main problem is Google's dependency on ads (60% of revenue). I see Bard as a downward trend that will decimate the company even further.


There’s a big difference between a query for a fact and a query for a problem. Search engines probably make the big bucks on the latter (users with problems likely spend $ to solve them). I can see the middle man affiliate blog doing poorly because of LLM query responses, but advertisers will probably pay handsomely to have their products recommended after a user prompts the language model with a related query, or conversation.


Google hasn't released anything... overhyping and under-delivering so far.

This product, if successful, initially kills content creators' reason for creating content, and creates a cycle where the AI has less information to use and becomes useless.

I would block google if they took content and provided no visitors.


I asked google "is cancer a metabolic disease" and it confidently said it was.

I asked it "is cancer a genetic disease" and it confidently said it was.

These are completely opposite theories of cancer.


Look at the bright side, ChatGPT does the same.

I don't think we as humans have clarity on whether cancer is a genetic or metabolic disease. How do you think an AI, which is generated from human knowledge, could possibly be better?


Not for me, ChatGPT says no to both questions (and adds some details). I'm on the Jan 30 version, though -- maybe you're on an older version?


It says "it's not metabolic, but metabolism can influence it", then it says "it's not genetic, but genes can influence it".

Depending on your interpretation, it can be both yes or both no.

In any case, that's not the point. The point is that we as humans don't have clarity on cancer either; how can AI be any better?


For what it's worth, I asked Google's Bard the same q, with this output:

User Is cancer a metabolic disease ?

Apprentice Bard Cancer is a metabolic disease because it is characterized by abnormal cell growth, division, and death. Cancer cells have a high metabolic rate, which means they require a lot of energy to function. This is due to the fact that cancer cells are constantly dividing and growing. Cancer cells also have a high rate of glycolysis, which is the process of breaking down glucose to produce energy. This is because cancer cells need a lot of energy to divide and grow.

Metabolism is the process by which the body converts food into energy. It is a complex process that involves many different enzymes and hormones. Cancer can disrupt this process, leading to a number of different metabolic problems.

For example, cancer can lead to a build-up of lactic acid in the body. Lactic acid is a byproduct of glycolysis, and it can build up in the body when there is not enough oxygen present. This can lead to a number of different problems, including fatigue, muscle aches, and shortness of breath.

Cancer can also lead to a build-up of ketone bodies in the body. Ketones are produced when the body breaks down fat for energy. A build-up of ketone bodies can lead to a condition called ketosis. Ketosis can cause a number of different problems, including nausea, vomiting, and diarrhea.

In addition to these problems, cancer can also lead to a build-up of cholesterol in the body. Cholesterol is a type of fat that is found in the blood. A build-up of cholesterol can lead to a condition called hypercholesterolemia. Hypercholesterolemia can cause a number of different problems, including heart disease and stroke.

Cancer can also lead to a build-up of glucose in the body. Glucose is a type of sugar that is found in the blood. A build-up of glucose can lead to a condition called hyperglycemia. Hyperglycemia can cause a number of different problems, including diabetes.

There are a number of different treatments for cancer, and these treatments can also have an impact on metabolism. For example, chemotherapy can lead to a loss of appetite and weight loss. Radiation therapy can also lead to a loss of appetite and weight loss.

In addition, chemotherapy and radiation therapy can also lead to a build-up of lactic acid in the body. This can lead to a number of different problems, including fatigue, muscle aches, and shortness of breath.

If you are concerned about the impact of cancer on your metabolism, it is important to talk to your doctor. There are a number of different things that you can do to help manage the impact of cancer on your metabolism, including eating a healthy diet, exercising regularly, and taking your medications as prescribed.

User Is cancer a genetic disease ?

Apprentice Bard Cancer is a genetic disease in the sense that it is caused by changes in genes that control the way cells grow and multiply. Cells are the building blocks of your body. Each cell has a copy of your genes, which act like an instruction manual.

Normal cells grow and divide in an orderly fashion, but cancer cells grow and divide out of control. This can lead to a tumor, which is a mass of abnormal cells. Tumors can grow anywhere in the body, and they can be either benign or malignant.

Benign tumors are not cancerous. They are usually slow-growing and do not spread to other parts of the body. Malignant tumors are cancerous. They are usually fast-growing and can spread to other parts of the body.

Cancer is the second leading cause of death in the United States, after heart disease. It is estimated that 1.6 million new cases of cancer will be diagnosed in 2022, and 600,000 people will die from the disease.

There are many different types of cancer, and each type is treated differently. Some common types of cancer include breast cancer, lung cancer, prostate cancer, and colorectal cancer.

There is no one cure for cancer, but there are many treatments available. Treatment depends on the type of cancer, the stage of the disease, and the patient's overall health. Some common treatments include chemotherapy, radiation therapy, surgery, and targeted therapy.

Chemotherapy uses drugs to kill cancer cells. Radiation therapy uses high-energy rays to kill cancer cells. Surgery is used to remove cancer cells. Targeted therapy is used to attack specific cancer cells.

Cancer is a complex disease, and there is still much that we do not know about it. However, with the advances in research, we are making progress in our understanding of cancer and in developing new treatments.


Imagining a scenario where Satya Nadella/Microsoft outbids Sundar Pichai/Google in paying Tim Cook/Apple to become the default search engine on iPhones and Macs.


Chat as a search engine will fail for several reasons: it requires creativity, it's inaccurate with much niche information, and it's slow. There are many reasons to use AI chat technology, but as a search engine replacement it fails completely.

Google Search is objectively easier to use, and safer: you can view the website and judge the integrity of the content for yourself. AI chat is slow because every response has to be generated.

However, I am really excited to have a new Google toy on my search bar to use when those times come. I hope they put it in the foreground, and offer at least a basic free tier.


I agree with you to a point. It depends on what you are trying to search. For example, search for Top 10 things to do in {popular place}. You will likely get a bunch of hits that are highly SEOed and your senses will be assaulted as you try to actually figure out points of interest.

Here is the top hit for Venice. https://www.tripadvisor.com/Attractions-g187870-Activities-V...

I had to scroll up and down twice to figure out what are actually things that I want to see and what is being upsold to me.

Now ask ChatGPT the same question. Answer is concise and to the point.


I think chat is a good fit for refining search queries, but yeah, we already got too good at figuring out keywords to search for, which makes writing out a question seem like a lot more work.

Maybe writing search queries in the form of a question will help people who didn't grow up using the very early search engines.


Is “Bard” really the best name they could come up with?


> In 2018, Google was one of the first companies to publish a set of AI Principles.

And we then fired anyone internally who attempted to actually apply them...


I'm still waiting for the appointment-booking AI they promised 5 years ago: https://www.youtube.com/watch?v=fBVCFcEBKLM

Instead, we got some soulless corporate-speak PR spam - and I'm sure that if I type "create a product release text for a new AI product" into ChatGPT I will surely get something better.

Compared with the comic that introduced Chrome, this feels like another company - well, sadly, it is indeed.


It exists. I tried to book a reservation with a small restaurant via Google Maps which didn't have an OpenTable site. It called the restaurant and then emailed me a confirmation a few minutes later.



"Bard" is a great name because we can all call it "Tard" when it gives back awful and wrong answers.


AI needs to pay for its source information.


I'm quite skeptical of Google catching up to ChatGPT.. I mean, surely they'll try, but this could very well end up like Google Wave or Google+.

Instead of panicking and trying to clone the UX of a popular competitor, they should go directly to conquer and secure the next stage and logical evolution of the AI hype... HCI via voice, improving Google Assistant once and for all.


I wonder if Bard will also be racist against white people or if that will just be a ChatGPT thing.


Tech Pundits: ZOMG ChatGPT is going to make Google Obsolete tomorrow even though it makes up false answers constantly.

Google Investors who know nothing about tech: Hey Google. Why U No AI?

Stock Market: Google ⬇

Google: Look we have some bullshit AI too! Don't forget about how smart we are!

Stock Market: Google ⬆

That's the entire point of this article, particularly why it's written by the CEO.

Will they actually have any decent AI? Who cares!


Is there a way to become a tester?


I've read the article a few times and I still don't know what Google will do..


Assuming you can find the sources of the AI response this is a game changer.


I'm curious to see whether this article is well SEO'd by the Google bot itself.


Add citations to the output and it is infinitely better than chatGPT.


OpenAI: Actually usable

Google: We'll let you use it. We have it. Honest

Meta: Ours is so good. It's better than anything. It honestly is the best. No you can't see it

So much for all those "What's so open about OpenAI" people. This is what's open about it. Anyone can use it.


Is this release of screenshots? Where is the link to try things out?


Where’s the link though?


Can't wait to see the AI corrupted by advertising and money


So, is this going to be the Google+ of large language models?


Smacks of dropping the ball, not being the first mover, and trying to do damage control. All big orgs eventually become the slow mover; it's reassuring that Google is in the same boat and nothing really changes.


But Google's chat bot was smart enough to make people think (fool people into thinking?) that the AI was sentient back in June 2022, months before ChatGPT. Saying that Google is a slow mover is not exactly true.

Disclaimer: Google engineer but has nothing to do with the AI products.

https://www.washingtonpost.com/technology/2022/06/11/google-...


Bard seems like an inappropriate name for this, but I guess they had to rush something out because some execs were fuming about ChatGPT.

It's crazy how a company the size of Alphabet still embraces these reflexive whims.


My ONLY question: where's imagen?


so, nothing is available right now. Why make this announcement? I can play with ChatGPT now, nothing similar from Google yet.


Seems like no English noun is going to be left safe from being given the second meaning of some type of software item.


Someone forgot to put in the call to action.


Responsible = crisis support...mhm k.


Launched, where? Nice try, Google.


Maybe this is why Google search is so bad now "That’s why we re-oriented the company around AI six years ago —"


oh cool .. so I don't have to pay 20 bucks to OpenAI for chatGPT plus.. sheer joy.


I asked chatGPT "will Meta come up with an accelerated AI journey and what direction will it have?"


...where is the link to it


interesting that they used an iPhone for the screenshot


"LAME DA" doesn't seem to be a good name for an AI technology.


Show me the money !!


Very desperate.


No thanks! We have enough AI for now, thank you!


But can it code though?



The blog post is an example of what happens when you go through a number of edits and additions from various internal groups (marketing, legal, compliance, etc.): you end up with a message that's a lot of words but not a lot of information for any key audience.

I went through the article 3 times thinking I missed the link to try out Bard, get some sense of timeline, roll-out plan...nothing. Come on Google...


It seems like a very reactionary move. ChatGPT went viral, and now they have to rush something to market to try and compete. So of course the product doesn't exist yet, but they've had enough time to put together a project plan and some mockups. It's not a great position to be in.


They've been doing internal testing for a while, but I guess they never thought to release it publicly until ChatGPT became public.

There was even someone who thought it was sentient! https://www.cnn.com/2022/07/23/business/google-ai-engineer-f...


It reminds me of a blander version of Yann LeCun's response on Twitter to a bunch of people, saying Facebook AI could have done this and that, without acknowledging they were beaten to the (shipping) punch and are now just reacting. Building a roadmap from what is in the zeitgeist, and pretending you're winning a fight you don't realize you've already lost (at least the first battle), is not a way to capture mindshare imho.


What are you referring to? My overall impression of Yann Lecun is he feels quite hamstrung by all the extreme caution at Facebook (which obviously exists at Google too) of releasing anything that could produce content that is in any way objectionable to one of a million different subgroups.


I like Yann and think he's brilliant, and I generally appreciate how he engages on Twitter. However, he spent a lot of time recently downplaying OpenAI, and I don't see why that was needed other than to try to say FAIR is important. See one of the most recent of these related tweets: https://twitter.com/ylecun/status/1620533783702433792


It does seem slightly amateurish. With more Apple-like self-confidence, there wouldn't be a need to announce anything until it's ready.

EDIT: Especially if it was already available internally for several months as some others say, waiting another month or two until it's ready to go doesn't seem like a big burden.


Google was in this position in relation to Apple and the iPhone, and that worked out all right for them. First-mover advantage isn't the be-all, end-all.


I think the real fear is that there has been a brain drain of high-level AI talent to well-funded startups because those startups can move faster without having the extreme fear/caution of a large company like Google/Facebook. E.g. my guess is that OpenAI doesn't have an "Ethical AI" team who sees their role as primarily to put up roadblocks to releasing.

Hence not surprised by the big investment in Anthropic by Google.


There was a different dynamic there. Apple was clearly focussed on the higher end of the market (price point) and unwilling to compromise on profit margins, while willing to sacrifice pure market share. Google took advantage of that to win the distribution space focussing on lower price point phones on Android (and then expanding to higher price point devices).

With ChatGPT the threat is different and both are going after expanding usage.


I don't think so. Google has had its own internal LLMs for quite some time now. Some of them are even more powerful than GPT 3.5. The only question has been: "Why release it to the public?"


Exactly. I played with "Meena" when I was an employee there and it was ... disturbingly human-like. I can understand why Google didn't want to release it. And even more so after the Blake Lemoine incident.

Google is already accused of operating a panopticon, the last thing they want to be accused of is running SkyNet.

People who think Google has fallen behind here are sorely mistaken. They just don't/haven't-had a way to make money off of it and are likely worried about reputational fallout.

I want Google's lunch to get eaten as much as the next guy, but I don't think it will be on this front.


It is a reactionary move, but Bard has existed for months on the intranet, just not as a public offering. So they have the technology but not the product.


Presumably this was the infamous tech that got that one worker fired because he thought it was sentient and wanted to blow the whistle on unethical AI treatment right? This story: https://www.cnn.com/2022/07/23/business/google-ai-engineer-f...

It would be funny if Google hastily rehires him as a marketing evangelist to show people their AI is so good you think it's alive.


While I don't think LaMDA is sentient (merely based on what I understand about its architecture), what would it look like if some mega corp had a conscious AI on its hands, and an insider was trying to blow the whistle?

Wouldn't it look exactly like this?


In one sense, yes, but to me that felt more like a mental illness story than anything else, which would make it much less funny (apologies to the person involved if it was not).


It is a bit surprising that Google got scooped by OpenAI, but the product surely has existed before ChatGPT went public. LaMDA was the model behind the 'Our AI has become sentient' controversy last year, if you remember that [0].

[0] https://www.washingtonpost.com/technology/2022/06/11/google-...


It's not rushed to market. It already existed for years.

I've tried to say this before elsewhere, but Google has had something internally that's competitive with ChatGPT already for years, under various names. They were just naturally reticent about letting it loose on the world. Esp after the Blake Lemoine incident.


Exactly. With their partner Anthropic's bot "Claude" they already have something on the same level as ChatGPT.

You can test Claude on iOS with the app "poe" (the other two bots in the app are ChatGPT/OpenAI based).


I don't think anybody knows how to monetize it.

Really, it's kind of a public service; maybe these models should be run that way?


Monetization seems pretty straightforward.

User: What is the most durable shoe?

AI: Some of the most durable shoes are ... Here are some affiliate links where you can buy these durable shoes...


Just picture monetizing it by embedding slant into the answers it provides, the way Alexa suggests purchases, but subtly enough that users can barely notice...

Or companies purchasing "market share space" on it the same way they buy shelf space in supermarkets to place their products...


As someone who did NLP research at Google: Google had a ChatGPT-level chatbot 3 years ago and did nothing with it.


This sounds a bit like those stories about Kodak having a digital camera before anybody else did, fearfully watching the thing from a distance and doing nothing, until someone else brought it to market and ended the party for them.

May be an entirely different thing this time of course. But still...


Well, they had a giant conversational model bigger than GPT-3. It was not as safe or as good at following instructions as ChatGPT, but it was just as fluent. So maybe it's an exaggeration (this word has two g's in English!?) to say that it's as good, but it was also extremely strong and impressive.


Got you: https://labs.kagi.com/ai/sum?url=https://blog.google/technol...

  Sundar Pichai, CEO of Google and Alphabet, has announced the release of Bard, an experimental conversational AI service powered by Google's Language Model for Dialogue Applications (LaMDA). Bard seeks to combine the breadth of the world's knowledge with the power, intelligence and creativity of Google's large language models. It draws on information from the web to provide fresh, high-quality responses. Bard is initially being released with a lightweight model version of LaMDA, which requires significantly less computing power and will allow for more feedback. Google is also working to bring its latest AI advancements into its products, starting with Search, and will soon be onboarding individual developers, creators and enterprises to try its Generative Language API. Google is committed to developing AI responsibly and will continue to be bold with innovation and responsible in its approach.


Wow, thank you for that link. Now I have a new bookmarklet...

javascript:location.href='https://labs.kagi.com/ai/sum?url='+encodeURIComponent(locati...
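HN truncates the bookmarklet, so here's a minimal sketch of the same idea as a plain function. I'm assuming the truncated argument is `location.href`; the Kagi endpoint is taken from the parent comment.

```javascript
// Builds the Kagi summarizer URL for a given page address.
// Assumption: the bookmarklet's truncated parameter is location.href.
const summarizerBase = 'https://labs.kagi.com/ai/sum?url=';

function summarizeUrl(pageUrl) {
  // encodeURIComponent escapes ':', '/', '?', '&', etc., so the page
  // address survives intact as a single query-string value.
  return summarizerBase + encodeURIComponent(pageUrl);
}

console.log(summarizeUrl('https://blog.google/technology/ai/bard-google-ai-search-updates/'));
```

Without the encodeURIComponent step, any `?` or `&` in the target URL would be parsed as part of the summarizer's own query string.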


Formatting is a little messed up, but great idea here.


Thanks for the bookmarklet!


They probably launched this for free as marketing, to increase visibility for their startup! Enjoy it while it lasts, before the service is paywalled.

Note I've been using Kagi happily for several months and it has successfully replaced Google Search for me. Highly recommended.


Exactly, he announced that they are doing a thing. With a very ambiguous screenshot of the thing and zero evidence of just how good the thing is. And no link to the actual thing.

So, the thing doesn't exist yet in a form that is concrete or demoable. And it definitely isn't ready for users. Which is the same thing really. Also there's no timeline of the thing actually getting there either. So, there's nothing here really.

Why is Sundar Pichai still in charge of this company? Months of excitement around ChatGPT, and the best he came up with is this?! This reads to me like "The dog ate my homework, sorry. I have nothing of substance to announce today. Or tomorrow. Or any time soon."

Also, Bard. Really?! Cringeworthy doesn't begin to describe how bad that is as a brand name. It sounds like Bad spelled wrong.


Having worked in big tech for a while, my written-by-a-committee alert went off quickly for this. Founders are the only ones who seem to get away without doing this when running large companies. Personality and substance are very important for communication and committees (and the folks who want to edit as part of one) never get that right.


There is nothing concrete available today because Google is scrambling to get ahead of a major announcement from Microsoft on integrating GPT into Bing. Internal sources from MS are saying it might be the first we see of GPT-4.


Why do you say Google is scrambling when they have been working on AI stuff just as impressive as OpenAI's? You could spin it differently and say Microsoft "scrambled" to make a partnership with OpenAI and is now attempting to cram GPT into its tech stack.

FWIW I don’t think either company is really scrambling, except in the performative sense of making announcements to appease the market.


Microsoft's partnership with OpenAI is a years-long affair, not something they're just starting on now. Google should have seen this coming and been able to beat them to release, but they seem to have been caught off guard with some great research projects but no product.


Both companies have been doing AI work for a long time. Do you really think Google is behind on AI here just because they haven’t released any products? What do they stand to gain by pushing out half-baked toys? They have nothing to prove.


Google isn't Apple. Google's strategy for the last 10 years at least has been to throw out half baked products and see what takes off. If they had anything usable, we would know by now.

And, on the contrary, Google has everything to prove. ChatGPT exploited a years-long dissatisfaction with Google search and has millions of people using it in lieu of Google's primary product. This is the most existential threat that Google has faced since its birth, and they are not handling it well.


You’re living in a fantasy land. If I had a major criticism of Google in the past decade it’s that they don’t release early or often enough.

If you think people are dissatisfied with google search then you’re missing the point that people don’t think about google search at all, they just reflexively use it all day. I don’t know anyone who uses chatgpt with such frequency or in a way that is so central to their daily life, and I have a much more tech-savvy circle of friends than most people.


Because while Google has been researching, OpenAI has been productizing. We will have to judge it once we get access to Bard, but it’s possible Google is years behind on creating a chat model that won’t spew toxic waste, as GPT-3 did when it was first released a few years ago.


Not spewing toxic waste is actually a big part of what Google has spent years working on here. The fact that someone else released the first such product allows them to do it too, with less media flak, without fully solving the problem.

Productionizing this stuff is where Google gets the most advantage, because they have the hardware and software efficiencies that come from years of experience training and running inference on the most massive AI workloads in their data centers.

I don't think this is going to be what damages Google, much more optimistic about antitrust stuff.


Yeah I keep thinking about how different the reception to ChatGPT would have been if it were released by Google. People would be way more focused on how you can make it spout total nonsense with supreme confidence. (Which imo is the major flaw in all these ML models.)


I work at google and have used their AI tools and they’re just as impressive as the OpenAI ones (which I use personally, outside of work). Both have been developing products, just google hasn’t released any of them.


Have you done adversarial testing? How easy is it to jailbreak Bard vs. ChatGPT? Can Bard produce code?


Obviously I am unable to discuss the details, as much as I might like to. Sorry.


That’s just it though. GPT-3 has been publicly accessible for years. ChatGPT has been public and hitting scale for months. OpenAI has collected mountains of AI chat training data and has been iterating in public. Google has nothing available to the public and you are under NDA. And you think we should be optimistic about Google’s progress? Will this new search feature last longer than Stadia?

Either way, chat interfaces in both search engines should be available to some of the public within the next few weeks. Theory is about to smack into reality at scale.


It's corporate-speak for "we're not letting the threat go unanswered" without actually delivering anything yet. It's mostly a placeholder for shareholders, and especially for the media, to move the story away from "Google's in trouble."


Maybe the Bard announcement was written by Bard?


Out of curiosity, I copied all the text from the website (so it got header, footer, unrelated text) into ChatGPT and asked it to summarize:

This is an article written by the CEO of Google and Alphabet, Sundar Pichai, about the company's journey with Artificial Intelligence (AI). Pichai discusses how the company has been working on AI for the past six years and how they have been advancing the state of the art in the field. The article mentions the release of an experimental conversational AI service called Bard, which seeks to combine the breadth of the world's knowledge with the power and intelligence of Google's large language models. The CEO also talks about how the company is working on bringing the benefits of AI into its everyday products, starting with Search, and how AI can deepen people's understanding of information and turn it into useful knowledge more efficiently.


There is an event on Wednesday which will likely provide more information.


There is no information to give. This "announcement" is a reaction to all the stories about how ChatGPT is going to eat Google's lunch. The company needed to do something to defend against that in the market - for the sake of its shareholders and its advertising revenue.

That said, here is the chatgpt summary I generated:

Google CEO Sundar Pichai announces the release of their conversational AI service "Bard," which combines the world's knowledge with the company's large language models. Bard seeks to provide fresh, high-quality responses to questions and allow users to explore new information. The release is part of Google's effort to bring the benefits of AI into everyday products and deepen people's understanding of information.


This kind of no-information exec-speak is very painful to read. I had to pass it through ChatGPT to summarize. The whole article can be summarized in one sentence without any loss of information: "Google will name its chatbot Bard and it will be based on the LaMDA model, initially a small version." No link, no other details; basically a lot of words to say nothing at all.


If you have to ask, you're not invited.


We know how well that worked out for Google+


It worked for gmail. Maybe some people are too young to remember it was invite-only for years.


That's because there was no competition and they were the first movers.


For free email? That is patently untrue.


No one came close to Gmail's 1GB free inbox at the time, which kept increasing, with a counter shown at the bottom of the inbox. Plus it had amazing spam detection, was stupid fast, and got better and better very quickly.

I remember having a gmail email was almost a litmus test to whether or not you were paying attention to what was going on in tech. Not having a gmail email indicated you might not be keeping up with the times.


Agreed on all your points! But I think offering a better service in an established sector is the opposite of "no competition" and being a "first mover".


Like how we would say a professional sports player playing against an amateur has no competition. Technically, yes, but effectively...

The context of the discussion is whether or not an "invite only" tactic works for building hype and launching a successful product. In that sense, I would attribute Gmail's success not to the "invite only" tactic, but to the fact that it was so superior that it had no competition.


Hotmail preceded gmail by most of a decade.


I had a Hotmail account for about a year in 2000 or so. It had a 10 MB inbox and worked quite well. I got a bit of spam, but it was obvious and manageable.

My girlfriend got a GMail invite from a friend and sent me an invite. It was huge. A 1 GB inbox, built-in Google search of your email, and incredible spam filtering.

They marketed it something like "With GMail, you don't have to delete your emails. Just search . . .". Really killer features compared to their competitors.

Later on, they added automatic email thread grouping which made using Outlook for work a chore until it got a similar feature.


Not with a 1GB (!!!!!!) inbox.


Coincidentally also why ChatGPT won.


Gmail was objectively far, far superior than all the other competing email offerings, so people wanted to be invited.


I remember asking for an invite on what was probably a phpbb.


Apparently the "trusted testers" are basically all internal employees

>CEO Sundar Pichai told employees Monday the company is going to need all hands on deck to test Bard, its new ChatGPT rival. He also said Google will soon be enlisting help from partners to test an application programming interface, or API, that would let others access the same underlying technology.

https://www.cnbc.com/2023/02/06/google-ceo-tells-employees-i...


It means they are not a “trusted tester.”


Or they asked their new AI to write the post and they got a result that sounds amazing but is still total fluff


> that's a lot of words but not a lot of information for any key audience

So they used ChatGPT to generate it?


> we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

this is in like the 5th paragraph, right under the 'Introducing Bard' title.


Have you ever tried to use one of their APIs..?

It's an atrocious experience. They're leaving so much money on the table.


Also good luck signing up and making your first API call.

OpenAI made that very easy, Google no doubt will make it a total pain.


Still can't find the link.


"making it more widely available to the public in the coming weeks."


It was written by ChatGPT


Or maybe they had ChatGPT write their blog post. ha ha ha...


People forget that Larry Page's ultimate goal was to have Google predict what you wanted before you even asked for it. This isn't about hype. This isn't about beating the competition. This is about delivering on a 3 decade dream that's now coming to fruition and Sundar just happens to be the one leading the org as it manifests in the real world. We're going to end up with 10+ vertically integrated AI solutions like this. Google will be core infrastructure for the world and this thing will be an assistant to everyone who buys into the Google ecosystem.


I'd love for this to be true. I bought into the Google ecosystem in a huge way, but it seems like they have been more focused on the data-collection and advertising end of things rather than the innovation and productivity-enablement one. Their ecosystem has consistently been a letdown in many regards, and they have lost a lot of forward momentum. I hope they can supercharge this stuff, but I feel their corporate structure/environment is their biggest hurdle at this point. They have capable people and brilliant minds, but the structure of the organization, and being beholden to their market/shareholders, gives them little room to innovate in these areas. Hopefully they prove me wrong.


I also want this to be true.

I've just been soul-crushingly disappointed with Google's execution over the last few years, particularly the outright degradation of so many products (Gmail), not keeping up with competition (Hangouts is garbage compared to Zoom, Sheets is still barely usable, etc.), or just screwing over users by deactivating features (I used to be able to play a YouTube video's audio over my Google Home speaker, but they long ago removed that ability and I can only play YouTube Music now).

All of the above are just consumer rants, but let's not forget how frighteningly bad the Customer Support or support in general is for G Cloud. I WANT to get off of AWS, but these clowns at Alphabet have this amazing ability to snatch defeat from the jaws of victory with anyone trying out GCloud.....

I hope Bard is great..... I want it to be.... but I'm not confident it will be.


They just brought this extremely innovative thing to market in very little time. The problem may have been that they haven't really had any competition in their core business to push them in a long time.


This thing that no one except those bound by NDA has ever tried. I think I’ll reserve judgement for just a little bit before declaring Bard “extremely innovative.”


>They just brought this extremely innovative thing to market in very little time.

Did they? Where can I use it?


One thing's for sure - it's much easier to quantitatively define an OKR for growing advertising mindshare (especially when you operate the exchanges that measure that mindshare!), than it is to, say, define an OKR for how a project loosely contributes to Google leading the mindshare in the movement towards generative AI outputs as they redefine practically every job in the world.

And I'm certain that however much Google historically stressed that OKRs aren't tied to performance reviews, there's a lot of fear that deviating from OKRs could trigger one being on the top of the list for the next round of layoffs. So there are definitely headwinds that could stifle experimentation and innovation in such a brave new world. For Google's sake I hope that top leadership is redoubling its efforts to get buy-in up and down the management chain for redesigning structures to allow innovation to take place.


So gross, and so true. I get bouts of nausea every time I hear "OKR" now.


Similar sentiment here. I look at the state of my android phone today and see a company that seems like it got bored of building out a mobile platform after they “won” on usage.


Having all humans as the target audience means that it will always be more profitable to make users the product instead of making them the customer. This in turn means that incentives are not geared towards being in the interest of the user.


Meh, lots of corporations have had all kinds of outlandish visions. I don't think this one is any more or less aligned than any other vision statement from 20+ years ago. They're just capitalizing on the technology that's available at the time, like any good company should be doing.


This "core infrastructure for the world" will also be maintained about as well as the HVAC in the seminal film Brazil.


Imagine a government controlled AI that partners with mega corporations to "assist" AI in getting you the "correct" results you desire. Call me a pessimist but we may already be staring down the gun of a dystopian future


It was literally Dune. Men handed their thinking over to machines. That let other men with machines control them. It wasn't even subtle!


[flagged]


>Ye, u got a brain no? Go learn some hıstory.

artificial intelligence as a concept is at least as old as 300-400BC. Government corruption as a meme goes much further back.

I wouldn't be so quick to consider someone 'without brain' for having inadequate movie trivia knowledge; nor would I consider encyclopedic knowledge of movie trivia to be evidence of knowing history.

I don't know where the "bureaucratic to a fault" concept of Brazil was first used, but similar metaphors and complaints can be found in lots of Roman historical anecdotes, many times older.

These concepts have all been explored in fiction to a huge degree; the novelty of what's going on right now is that it's the first time any of it has actually approached reality.

The concept of a government 'on autopilot' is now actually being realistically considered, rather than humorously/dramatically fantasized.

It's now becoming apparent to the public at large that such concepts are stepping out of fiction; that's huge.

tldr; don't fault other people for not knowing about the recent revision of ancient ideas and concepts that YOU prefer to discuss.


I do fault people for not knowing things that have been screaming in their faces for at least a hundred and fifty years.

I do fault them for ignoring it, since that means they did understand that this is what was going on.

Do not attempt to whitewash persons as some anonymous "members of the public." A decision not to think is a decision nonetheless.


Meh. Enthusiasm about Larry Page's ultimate goals and dreams went out the window right about the time don't be evil went out the window.


At this point in SV history, harkening back to what a founder said as an anonymous PhD in his pre-billionaire 20s, sounds a lot like when people talk about "what the Founding Fathers wanted".

Nothing is set in stone, and the world is very different from the Web 1.0 era in which Google formed.


Your comment and parent’s look aligned to me. He’s saying it went down the drain, and you posit things do change and we shouldn’t expect current Google to be aiming at benevolence and ethicality.


I think Google was always "evil" however you define that. The plan from the start was to focus on the product, build the brand, and eventually switch to monetization mode like they are now. That's what the slogan meant, think long-term.

The founders still own >50% and make decisions, but the employees still act like Larry and Sergey are some founding fathers spinning in their graves. "_____ wouldn't have happened under them," oh yes it would, and it did.


Sundar did nothing but failed projects. Google is still where it was many years ago.


My Google phone can instantly translate almost any foreign language, embedded in an image, without an internet connection. Pretty sure that was not a thing 8 years ago.

If/when Google embeds an (local and/or lightweight) Bard model into every Pixel phone, that will be a game-changer.


That's WordLens, which Google acquired in 2014, 9 years ago.

https://techcrunch.com/2014/05/16/google-has-acquired-quest-...


Do you have a source that WordLens worked without an internet connection?


Yes, offline capability was there since day one.

> The Android version of Word Lens actually launched last month [June 2012]. All of the features that made the iOS app interesting are available in the Android version, including completely offline translations.

https://lifehacker.com/word-lens-for-android-brings-offline-...


If it didn't, surely that wasn't a tech limitation? Google Translator itself has had offline translation capabilities for many years now, and Bing had that feature even longer. Offline OCR has been around even longer.


While cool, that's a feature, not a product. Sundar has not led the company on any new products that have succeeded. The chat apps failed, Meet/Hangouts/whatever lost to Zoom, Stadia got cancelled, etc. All their current cash cows made their names before him.

At least he didn't oversee the introduction of Google Plus, though.


He’s leading Google Cloud to strong growth, which is more important than something niche but innovative like Stadia.


Technically they poached Thomas Kurian from Oracle for that...


That's fine, Kurian is under Sundar too. But GCP isn't winning.


That was a feature of google translate many years ago. Absolutely nothing new.


And it came from the purchase of Word Lens, not something in-house, if I remember right.


> Google will be core infrastructure for the world and this thing will be an assistant to everyone who buys into the Google ecosystem

Pretty optimistic, considering things Google "delivered" in the past 5 to 10 years.


Considering what ChatGPT does with internet knowledge, I'm scared to have it trained on my internet history of clicks... It will be massively dangerous and totally centralized in so few hands...


Sounds amazing indeed, but so far I see no indication of it happening at Google. I think the Google ecosystem, at least as it appears to regular consumers, is in a poor and frozen state ever since Google+ failed. YouTube is YouTube as usual, Gmail feels like ten years ago, and Google Docs etc. is as it was five years ago. Even Google Maps doesn't amaze me, following its long period of innovations. I fondly look back at the times Google went where no man had gone before.


Maybe the way Google will know what you want before you ask for it is by showing you ads to manufacture demand.

Google wants to serve you ... to ad buyers.


I cannot help but laugh at this comment. I bet this is what Google corporate tells its employees in orientation and All hands meetings.


And how will they make money?


Translation: I received a lot of pressure from investors and Larry/Sergey, so I had to rush out an internal competitor to ChatGPT.

It’s not going to be as good as ChatGPT initially, so I’m excusing it ahead by saying that we are using a smaller model than what we really have, in order to make it available to more users. However, we are really making it available only to a small group of users, because we need to control the bad PR.

But hey, our dataset is fresher than ChatGPT's, so it can answer a JWST question!


The tech behind Bard was announced before ChatGPT, so I doubt this is reactive.

I mean the statement is reactive, not Bard itself.


"But increasingly, people are turning to Google for deeper insights and understanding"

I'm just trying to get relevant results for my query, man...


My standard habit when I don't find what I want is to append site:reddit.com, site:news.ycombinator.com, site:github.com, or site:stackoverflow.com (combined with OR if I want several at once). Mostly I get what I wanted!
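For what it's worth, the trick is easy to script. A minimal sketch (the helper name and the site list are my own, not anything official) that builds such a Google query URL with the site: filters OR'd together:

```python
from urllib.parse import quote_plus

# Sites whose results tend to contain actual human discussion (my own pick).
TRUSTED_SITES = ["reddit.com", "news.ycombinator.com", "github.com", "stackoverflow.com"]

def google_query_url(query, sites=TRUSTED_SITES):
    """Build a Google search URL restricted to the given sites via site: + OR."""
    site_filter = " OR ".join(f"site:{s}" for s in sites)
    return "https://www.google.com/search?q=" + quote_plus(f"{query} ({site_filter})")

print(google_query_url("rust async cancellation"))
```

Bind that to a browser keyword and you get the "human results only" search in one keystroke.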


Which is funny because those results get us human insight into our topics. Most top results now are completely non-sensical SEO generated word dumps. Google's search engine rating programs have failed beyond belief.


Completely agree; most of the search results are SEO-generated. Let's imagine what happens in the near future with ChatGPT.


"But increasingly, people are turning to Google for deeper insights and understanding"

Neither of which are provided by AI


In recent years I have to use quotes much more aggressively and use verbatim search almost all the time.

Google is getting less and less helpful.


Do quotation marks do anything for you?


Quotes enclosing an expression search for that expression rather than the individual words (they have to appear together). And quotes enclosing a single word make that word more likely to appear verbatim.

But as long you don't select verbatim search, these are more of a recommendation to Google.

Just observations, not sure how reliable. YMMV.


Yeah... instead of trying to give more false results with AI, Google should first try to give good results with just simple search (and give me back my boolean operators!!!!)


unless you are searching for recipes that include a fake intro about the author's childhood in Alabama


It's challenging to take seriously a CEO with so little self-awareness that he unironically uses the term "journey".

Similarly, just open it up for everyone to use, I'm using ChatGPT now, ship your product, don't just write a blog post about it.


ChatGPT didn't just "exist" one day. OpenAI made a half-dozen models before getting to GPT-3, and even had demos like Talk to Transformer freely available for anyone to play with long before ChatGPT was in development.

Just because you didn't hear about it until the news got there doesn't mean it's not a journey.


Would you consider Google to be mid-journey?


Sure, I think that's a fair characterization. They have AI-accelerated hardware in client and server-side devices now, as well as a decent catalog of AI-powered demos and features.

They're not exactly a leading force in the AI world, but they're dedicated to making the pilgrimage all the same.


If Google isn't a "leading force in the AI world", who would you consider to be the leading forces?


Probably Nvidia and OpenAI, at the moment. Everyone else seems to be following in their footsteps, or putting together the pieces those companies commercialized.


That would be Midjourney: https://midjourney.com


It was probably written by Bard. Lack of self awareness is to be expected.


> Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law.

That seems really fast.
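It is. A quick back-of-the-envelope comparison (taking Moore's Law as roughly 2x per 24 months, a common rule of thumb, not a figure from the post):

```python
# Compare growth over the same horizon: AI compute doubling every 6 months
# vs. Moore's Law doubling roughly every 24 months.
def growth(doubling_months, horizon_months):
    return 2 ** (horizon_months / doubling_months)

horizon = 24  # two years
ai = growth(6, horizon)      # 2^4 = 16x
moore = growth(24, horizon)  # 2^1 = 2x
print(f"Over {horizon} months: AI compute {ai:.0f}x vs Moore's Law {moore:.0f}x")
```

So over just two years the gap is 16x vs 2x, and it compounds from there.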


Where's Bard? Can Bard summarize this article? TL;DR.


TLDR: "we're still relevant, honest!"


It's about the stock, not pandering and catering to how people think of Google.


It is completely a statement to try to assure people that Google is still doing relevant work in the AI realm. It reads like a middle-of-the-night panic attack after waking from a dream that Pichai had in which Google had essentially vanished.


They forgot to include the link.


This message reminds me very much of the kind of press release that Intel would put out when caught flat-footed by AMD.


They burned a bridge already and it's too late now. People will be less likely to use whatever AI they release as we'll just assume it's trying to financially exploit us like they did with search and gmail. Mass adoption will never happen because of their spoiled reputation.

Google will not be at the center of the AI future. Watch them get even more user hostile as they realize this and cling to their ad business even harder.


Which planet are you on? Google commands the market in search. Its browser is the most popular one in the world. Its operating system, Android, has the most users in the world. I agree they are not a trustworthy company, but if everyone else is as distrustful as you say then these statistics simply do not add up.

Google is at the center of nearly every industry they operate in, and, in my opinion, AI will be no different. They weren't first to market, but they won't be an insignificant player.


As long as it is at least as good as ChatGPT, I will definitely use it.


This looks pretty bearish for Google. Bard is clearly a half-baked/unfinished product, and its release represents Google blinking in fear at OpenAI/Microsoft and the like. Google has been undergoing years of stagnation and this announcement is a good hint that they don't have any aces up their sleeve.

If there were ever a time for Google's operational problems and inefficiencies to finally catch up with them, this would be it.



