Hacker News new | past | comments | ask | show | jobs | submit login
Google to invest up to $2B in Anthropic (reuters.com)
400 points by fofoz 11 months ago | hide | past | favorite | 375 comments



$6 billion dollars raised in 2 months for their series C is blowing my mind. What does Anthropic have that OpenAI or other LLM startups don't? What other companies have raised that much in a single round?


They managed to build something equally good (if not better) than GPT4, without losing 50% of their cap table to Microsoft. That alone is worth a lot.

The investments are absurdly large but if you believe LLMs will be fundamentally transformative then I can understand the logic.


Claude 2 is good and better than 3.5 but nothing close to 4 in general.


I’m not saying your experience is wrong, but for my use case (code, mainly), I’ve found Claude to be far more reliable (not to mention faster) than GPT4.


The larger context window might be nice, but it's nowhere close to GPT4's reasoning ability. GPT4 is still the best for coding.


I’m now almost always using Claude over GPT4 in my day to day and rarely have moments where I’m not happy with the output and fallback to ChatGPT

The big context window is so nice, I can dump in huge files or conversation exports and it can handle them without problems


How do you guys access Claude?


Claude is available without API on Poe and with API on Amazon Bedrock.


http://Claude.AI

Am I the same only one who uses the web interface?

I keep thing simple (KISS) because I teach for a living, and only things my student can pick up in a class are useful to me.


I use the web interface as well. Is there even any other way? I also pay for Claude Pro


Google announced you'll be able to use Claude via GCP Vertex AI soon.


claude.ai


In my experience Claude 2 is better than GPT 4 in many non programming tasks.


It's such a small playing field with too few players.

I have a slide in a presentation on the topic that's an inverted pyramid, as pretty much the entire LLM field rests on only a few companies, who aren't necessarily even doing everything correctly.

The fact that they even have a seat at such a small table with so much in the pot already and much more forecasted means they command a large buy in regardless of their tech. They don't ever need to be in the lead, they just need to maintain their seat at the table and they'll still have money being thrown their way.

The threat of missing the next wave or letting a competitor gain exclusive access is too high at this point.

Of course, FOMO driving investments is also the very well known pattern of a bubble, and we may have a bit of a generative AI bubble among the top firms where large sums of money are going to go down the drain on investing into overvalued promises because the cost of missing a promise that will actually come to fruition is considered too high.

Ironically the real payoff is probably in focusing on integration layers at this point, particularly given the gains in performance over the past year in research by developing improved interfacing with SotA models.

LLMs at a foundational pretrained layer are in a race towards parity. Having access to model A isn't going to be much more interesting than having access to model B. But if you have a plug and play intermediate product that can hook into either model A or B and deliver improved results to direct access to either - that's where the money is going to be for cloud providers in the next 18 months.


> What does Anthropic have that OpenAI or other LLM startups don't?

The big context window is pretty magical for some use cases. There are lots of things RAG with limited context can't do.


The "big context window" talk reminds me of high wattage speakers in the 80s. Yes, it's loud. "Does it sound good?", is the question.

Having a large context window is pointless unless the model is able to attend to attention on the document submitted. As RAG is basically search, this helps set attention regardless of the model's context window size.

Stuffing random thing into a prompt doesn't improve things, so RAG is always required, even if it's just loading the history of the conversation into the window.


RAG is not always required. If you can fit a whole document/story/etc in the context length, it often performs better, especially for complex questions (not just basic retrieval).


Exactly. RAG is great if you need certain paragraphs to answer specific questions but there are limits.

Assuming a new unheard of play called "Romeo and Juliet " RAG can answer "Who is Rosaline?".

But a large context window can answer "How does Shakespeare use foreshadowing to build tension and anticipation throughout the play?" "How are gender roles and societal expectations depicted in the play" or "What does the play suggest about the power of love to overcome adversity?".

In other words, RAG doesn't help you answer questions that pertain to the whole document and aren't keyword driven retrieval queries. It's a pretty big limitation if you aren't just looking for a specific fact.

RAG limits you to answers where the information can be contained in your chunk size.


Gotta be the talent. Started by ex-OpenAI and probably attracted some AI geniuses who buy in to their safety approach too.


joking, but maybe they have achieved agi (internally) ;)


OpenAI is said to have done so by a guy who leaked some very specific information that turned out to be accurate.


> What does Anthropic have that OpenAI or other LLM startups don't

I dunno I kind of get the perception Microsoft's locked up OpenAI's funding so Google couldn't throw money at them even if they wanted to?


Yeah I believe so as well. They are probably going after the second best thing that is out there that has a chance of overtaking GPT 4


I also rather like Claude's capability of using XML tags as scrap-pads of sorts to see the inner workings of the AI. Makes for better prompt design.

https://docs.anthropic.com/claude/docs/give-claude-room-to-t...

You can see how the AI arrived at the response.


This is just a prompting hack you can use with any LLM, not exclusive to Claude. But I do like the fact that they include these tricks in their documentation.


The large(100k tokens) context window together with the fact that it can actually use the information in that context window. From personal experience other models including open ai fail to properly answer when provided large(more than 5k tokens) inputs as context even when the model officially accepts much larger contexts. But Claude 2 models are uncannily good at taking all that context into consideration.


It's absolutely good, have you tried their Claude 2 AI?


It’s not as good as GPT-4, the only feature I used often was their PDF QA. The rest ChatGPT plus ( GPT-4 ) still miles ahead


I regularly compare queries with both GPT-4 and Claude 2 and I find I prefer Claude's answers most often. The 100k token context is a major plus too.

Claude is more creative and that comes with a higher rate of hallucinations. I hope I'm not misremembering but GPT-4 was also initially more prone to hallucinations but also more capable in some ways.


Massive context length


Competition.


[flagged]


How is that relevant?


If you ran a successful lemonade stand, and suddenly I discovered that Al Capone gave you the money to buy lemons, I would find it relevant, although not sure how I would act on the information.

Also, possibly there is some money laundering aspect to these investments that we don't fully see through.


Because it was a wasteful investment back then, Anthropic has more equity to spare now in a Series C.


I'd love to see the statistics on the daily number of Google searches since OpenAI (ChatGPT in particular) came to the fore this year.

Anecdotally, I now use ChatGPT for at least 25-50% of the queries that I previously would have had no other channel for other than a search engine.

If I was in charge of Alphabet I'd be starting to worry. This move makes them look a bit desperate.


I don't see how that could even register. We use OpenAI at home (both me and my wife) but everything ChatGPT spouts must, invariably, be verified.

It's not even "trust but verify" but "Oh yup, I didn't thought of that. However it may be a lie because ChatGPT is a pathological liar, and as I cannot possibly trust a lying tool, I'll now verify" (I know, I know, there's a discussion regarding nomenclature: "lying" / "hallucinating" or whatever. But anyone who's actually using ChatGPT knows what I mean).

Basically the output of ChatGPT, for me, goes directly into Google / Wikipedia / etc.

The one case where I can use the output of ChatGPT directly is when I translate from, say, english to french or vice-versa and I know both languages well enough to be able to tell if the translation is okay or not.

Those believing they can use the output of ChatGPT without verifying it are basically these lawyers who referenced hallucinated cases to a judge.

As another person as commented: it didn't even make a dent in Google's search requests and that is no surprise.


It starts with the power users. GPT4 has certainly had a big impact on my Google searches.

For example, all my tech support searches are now GPT4. Those are painful on Google. There's no need to verify with a Google search, since you can just try out what GPT4 says.

Concrete example: I use it all the time to help me with Excel. What it suggests is nearly always correct. It has turned me into an Excel power user within a few weeks.

You need to develop a sense for what it's likely to be correct on, but once you do it's insanely useful. Simple rule of thumb: if you think you'd find a direct response to your question by wading through pages of ad infested Google results, it'll definitely work great on GPT4.

The way I recall it, it took multiple years for Google to go from a secret power user thing to displacing Yahoo and Altavista for the broad user base. And that was at a time where being online in itself was sort of an early adopter thing.

Anyway, I guess my point is, I would be worried if I was Google, and ignore this tech at your own risk...


ChatGPT remixes info from beyond the first page of Google search. That’s the value. If you ask it for a list of nice nature spaces in Tokyo, like I just did, it returns 12 spots that all seem appealing. Already, that’s more information density than a Google Search. But now I have to go look up if these actually exist (this isn’t the sort of mistake it usually makes though), where it is, the hours, admission prices etc. So that’s going to be a few Google searches for this question. Of course, I’ll have to actually go to the sites for these gardens, if they exist, because you can’t quite trust Google Maps’s accuracy either - hours and opening days can be off, especially around the holidays, when I’d like to go to Tokyo. One ChatGPT query -> several Google searches.

If for this text output, ChatGPT also linked me directly to the gardens’ sites, scraped the info from the live site, and summarized this - that would actually save a ton of time. Google could have a leg up bc it has a knowledge graph, but so does Microsoft. This requires a lot more than training an LLM - this requires it to be an actual product, not a tech demo like it is. A chat with an agent that occasionally lies is a terrible UI.

I think there’s great scope for UI innovation here. But such an experience might be pretty expensive in terms of compute - lots of LLM queries and extra lookup systems. Someone who does this hard integration work and is willing to spend a lot of resources per query will deliver a delightful, time-saving user experience, and can probably charge for it. And that may be a great value-prop for local AI - you can give it tons of resources to solve your particular problem. As I see it, mass market LLMs that are provided for free will never do this extra work for you. ChatGPT might be in a good position bc it already has a ton of paying customers that it can continue to draw a wall around. Their early Nov announcement might be something along these lines.


Doesn’t ChatGPT’s Browse with Bing option do that? It’s definitely provided me with inline links to its browsed sources.


I don't know, I make plenty of queries that don't need verification. I would say the majority.

Write a polite email to X saying Y -> doesn't need verification.

Rewrite this in formal English for a grant application -> doesn't need verification.

How to tar gz a folder in Linux? -> doesn't need verification. When I get the answer, I will probably think "oh, sure, it was czvf". And even if I didn't know what the arguments were, I would know enough to know that a tar command isn't going to delete my files or anything like that, so I would just try and see if it worked.

Write Python code to show a box plot with such and such data -> doesn't need verification, I'm too lazy to write the code myself (or look into how seaborn worked) but once I see the code, I can quickly understand it and check that it does what it should. Or actually run it and see if it worked.

Brainstorming (give me a bunch of titles for a paper about X/some ideas about how I could do Y) -> doesn't need verification, I can see the ideas and decide which I like and which I don't.

I get that if you're a journalist, a lawyer or something like that, probably the majority of your ChatGPT queries will need verification, but that's definitely not my experience... probably because I don't often ask ChatGPT for things that I don't know, most of my use is either for things that I could do myself but require a time investment and I'd rather ChatGPT does them in a few seconds, or for brainstorming. Neither of those require a Google search at all.


I still have no clue how so many people in this thread are relying on it for code just based on the code output I get from it.


I find GPT-4 pretty amazing at coding myself. I have learnt basic R language + a particular package within it, enough to solve an emergency issue that barred us from launching a product for a coding competition within 24 hours of only having heard of the language before, just barely meeting our deadline. The team was amazed and I could never have done it without GPT-4.

Otherwise I'm regularly coding with it as an assistant while it's not always 100% correct, it's often decimating the time especially when I can request things like a website skeleton layout and it just seamlessly doing it all using CSS flow layouts and whatnot. I ask it to implement a Javascript library for a chat-style interface with bubbles and just bam it's there, even adapted for existing code.

I'll still have to debug and I _will_ still find issues occasionally but the win is still often enormous for me.

ChatGPT 3.5 though. That's mostly just a toy, yes.


I love it (gpt4) for code. At least for Python and Typescript.

It doesn’t always get it perfect, but it gets a lot right. Having enough experience in the field makes it easy to know when it’s right or not. Then you either ask again with more context and information, or do it yourself.

Overall, it’s definitely increased my productivity.


They're using 4. The quality difference is huge for coding. Nearly a different product.


ChatGPT is like Google in the early days. Some people just never learned how to use a search engine. Other people very quickly did.


I don’t use chatgpt or LLMs for anything factual at all given the workflow you’ve described of asking a question and then needing to separately search all results. But they are very useful for framing documents, summarizing things, etc.


I use it with Browse with Bing and it works quite well for keeping up with research on a diverse set of topics - CS, AI, economics, neuroscience etc. The citations help you verify quickly.

And this is besides the other stuff that search ones can’t do directly but it does quite well. It has definitely cut web searches for me.


If you want more factual answers why not use Bing with ChatGPT?


The types of queries where gpt is useful (or I hope will soon be useful) to me are those where google themselves have "destroyed the internet.^"

If you want to basic information about a physical exercise, a recipe, a travel destination or such... All the content in Google seems to be content farmed. Written by people who don't know, quickly, by copying similar articles. It's just a buggy, manual version of the proverbial gpt anyway. At this point, I may as well just go direct.

If I want to know about doing weighted crunches instead of sit ups, Google results already represent all the downsides of using gpt. The got UI is nicer. You can also narrow in on your questions, push back and generally get the same level of content in a better package.

Maybe the old chestnut of "computerized recipe book" will finally be solved.

^Ra Ra!


> All the content in Google seems to be content farmed

you are just forgetting to add word "reddit" at the end of your query..


Idk... Maybe... But Reddit seems to hold info pretty poorly. A lot of the subs/posts and incredibly dumb.

The good information is there, often, but finding it is hit and miss. Until relatively recently, unofficially archived gives web forums were still available. Reddit search is a downgrade, imo.

In any case gpt can do reddit pretty well.


> In any case gpt can do reddit pretty well.

I am not sure I trust gpt, hallucinations are still very common.


That doesn’t help and is just as astroturfed at this point. Companies caught on to this years ago


that's very far from my experience, mods and voting community do great job in ranking good posts and comments. Maybe you can give example of your query which you are not satisfied with results?



obviously many try to manipulate, the question is in final results, if they can really change picture dramatically.


Casual users don't care about ChatGPT or any other smart chat app because they are used to Google Search or classical search bar type web search for decades now.

If chat apps take over, I think they will first taker over desktop computers because on mobile it is easier to type a few keywords in the web search bar and get 5 or 6 relevant web results than it is to chat with a chat bot for like 5 minutes.

Informational quires are most likely to be more useful in chat apps than in the classical web search but navigational and transactional quires are here to stay on Google or whatever search engine comes about.


I was going to say that mobile can just chat on the mic to the gbt but then I remembered speech to text is an apparently hard problem that no one has made progress on in 15 years


Actually, OpenAI just made the progress you're asking for. Try the voice conversation mode in ChatGPT app (if you can, it's still a beta testing version IIRC). This one is to the likes of Google Assistant as GPT-4 is to GPT-2. Huge quality jump - not only it doesn't suck (like e.g. Google Assistant does), it works, and absurdly well at that. ~1% error rate for me in English, which isn't even my first language, on the go, in rain, next to noisy street...


Google reportedly has $118 billion cash on hand. I think they would have happily invested more than $2 billion in an OpenAI competitor if they thought more money than that would matter.


I'm struggling in finding the source, but I read some in depth analysis that reports that bing only got 0.53% market share since the launch of chatGPT integration. Apparently they are cannibalising small players but not google.


There is no difference whatsoever, you can look up some Similarweb stats on this.


the supermajority of humanity doesn't know what chatGPT is useful for, so google's total amount of searches are almost certainly the same. you could only detect the difference for, say, searches that would return stackoverflow results


...which I suspect has a large overlap with ad blocker users and, even those that don't, probably are not as sensitive to online ads as average Joe.


Also growth in Bing searches with Bing Chat (backed by ChatGPT). I even switched my mobile browser from Brave to Edge to use the integration.


In my experience, claude.ai outperforms OpenAI's ChatGPT in several key areas. Notably, claude.ai excels in long-tail document reading and recall, demonstrating superior comprehension and information retrieval. Additionally, claude.ai offers enhanced contextual understanding, allowing it to provide more relevant and precise responses.

However, while Claude.ai certainly showcases its strengths in specific areas, it doesn't quite measure up to OpenAI's ChatGPT in terms of adaptability and precision. ChatGPT stands out for its capability to understand intricate queries and produce nuanced, tailored responses.

Both platforms have distinct strengths; I firmly believe they'll evolve to dominate different niches in the AI ecosystem.


> In my experience, claude.ai outperforms OpenAI's ChatGPT in several key areas.

ChatGPT on GPT-3.5? I can buy this. GPT-4? No fucking way. If Claude got anywhere close, this would be front page news in every tech and tech-adjacent outlet.

This has to be said again and again: "ChatGPT" is meaningless without specifying whether you mean GPT-3.5 or GPT-4; statements about "ChatGPT" (or LLMs in general) capability limits are invalid unless tested on GPT-4.


I have been using GPT since the inception of GPT-2.

You're correct, I should have specified. I rarely if ever use GPT-3.5. I have 7 paid / premium accounts, and I primarily use GPT-4.


> ChatGPT on GPT-3.5? I can buy this. GPT-4? No fucking way. If Claude got anywhere close, this would be front page news in every tech and tech-adjacent outlet.

Have you compared both? For my use case of generating & modifying simple code & Q&A based off of docs claude is in a similar range to GPT-4 & you can pay as you go instead of 20 bucks a month.


GPT-4 is "pay as you go" too if you use it via API. I haven't compared the prices though - for my use cases, which are mostly ad-hoc and manual, the cost doesn't add up to anything particularly noticeable.


OpenAI does this disservice to themselves by paywalling GPT-4 and not 3.5, under the umbrella of "ChatGPT".


I’ve had the opposite experience. Soon after claude 2 came out, i pasted in a long document that started with the document’s title and it couldn’t answer a “what is the title?” question.


[deleted]


I legitimately don't think 99% of engineers should start a business.

Just constantly ride the narrative waves, preferably work remotely, and you can retire in 10 years without much risk or effort.

Friends I know who got 3 months of solidity made millions riding the Web3 wave, and have now jumped to AI and making millions more.


This is why I don't share how much I make with my friends.


How do you know when the information asymmetry isn't in your favor?


Did you read the followup where even more private information was shared?

https://news.ycombinator.com/item?id=38050343

Update: the person deleted the thread.


That's not really that high, maybe for the UK, but again not unheard of


[deleted]


What an incredible amount of private information to publicly spill on the internet. Some “friend”.


I'm a pretty private person myself but I hardly see the issue here. The friend hasn't been named, we have no idea who this person is.


Have you not followed @TizzyEnt?


Ok pretend I’m a bad guy. What bad things can I do with this information that you’re so worried about being leaked?


It is very high for UK, that's the point. You can't just convert currency and compare numbers, things don't work that way


That's how it works now in certain companies in this remote/async world.


It's very, very high for the UK, but pretty normal for the USA.


For a remote IT job?

It’s not unheard of, but no that’s not normal (I guess depending on what “IT job” actually means).


When I hear IT job I imagine moving 12 dell desktops on a moving cart across a campus for $50k a year


For 250k I imagine it's a very stressful sysAdmin/DevOps position where they're expected to be Jack of all trades.


Or it’s a startup with shitloads of cash that’s willing to overpay, as indicated by the person’s immediately preceding salary.


[deleted]


It's high for a UK job, probably somewhat normal for a Bay Area equivalent (source - conduct diligence on tech companies both US and UK and we look at salaries).


It’s not crazy money, it’s just workers getting a fairer compensation


Amazon recently annouced their investment in Anthropic as well. Two, of the largest ad providers on the planet. Invest in the competition so you can have "input" into various aspects of the company, all the while profitting on any sucess.


I generally agree with this sentiment, but Anthropic has agency and have deemed it acceptable to grant these companies some control.


There may be some interest in having access to Google’s machine learning hardware as well. What we don’t know publicly is the extent of any deals these companies have to share technology and infrastructure. Google is one of the only hyperscalers to have succeeded in building its own ML hardware. That’s a key asset.


Anthropic will be interesting company if founders lost voting power in the board, with FTX, Amazon and Google representatives arguing to each other on key decisions protecting their interests.


We announced our board structure here: https://www.anthropic.com/index/the-long-term-benefit-trust

Quoting:

"""The Trust is an independent body of five financially disinterested members with an authority to select and remove a portion of our Board that will grow over time (ultimately, a majority of our Board). Paired with our Public Benefit Corporation status, the LTBT helps to align our corporate governance with our mission of developing and maintaining advanced AI for the long-term benefit of humanity."""


> to select and remove a portion of our Board

and how large is that portion?


Who is the FTX representative at this point?


there is some custodian of their assets for sure.


I'm confused. Ok, Bard isn't cutting edge but it's pretty damned good for every day stuff ... and I actually detest Google ... but I've been using it because I'm sick of Edge on my PC and I think the UI is just 'friendlier' and quicker.

N=1


Sorry but Bard is terrible and Google knows it.

They've already announced a replacement / better model coming but considering how long Google has been doing the AI thing, it's amazing they dropped the ball this badly.

This investment is Claude sounds like their backup plan.


Google literally invented the Transformer architecture, so yes, an impressive dropping of the ball.


Bard is so stupid that I tried the take a picture feature, asked a question about a picture, then when I asked a followup question about the picture it said "Sorry I can't look at pictures."

I do expect it to get much smarter by the end of next year.


I asked bard to make a comparison chart of OpenAI, Claude, and PaLM's specs (size and context window) and it repeatedly gave me a nice looking table with obviously wrong values.


If you've used the competition (ChatGPT, Claude, even Perplexity free version), you will find that Bard is not good in terms of quality of answers. (I'm forced to use Bard at work. The UI is responsive I'll give you that, but the answers are often more wrong than ChatGPT and Claude)


Wonder if Anthropic could end up making FTX shareholders whole given SBF invested quite bit into them a while ago.


That and the crypto bull market. Imagine being SBF and knowing you could have avoided decades in jail if only you hadn't filed for bankruptcy.


FYI if your creditors are demanding their money back and you don’t have it… you really don’t have many other options.

The fact that 1 or 2 years later the people will get their money back doesn’t mean there was any hope of surviving this a year ago.


SBF claimed multiple times that filing for bankruptcy was the worse decision of his life, so it was presumably a deliberate decision at the time. Not sure how long he could have avoided it though.


Isn't it interesting that his biggest regret isn't, you-know... stealing customer funds to prop up his failing trading company?


this happened to him several times, and each time he ended up betting it all again. if anthropic would make him whole, he'd bet it all on more bullshit crypto


If you rob a bank and then win the lottery, you still robbed the bank.


True, so the bank robber should be tried for bank robbery.

The question here is whether the bank can get back some or all of the stolen money, i.e. can FTX customers get back the money & cryptocurrency they gave to FTX for safe keeping, maybe as shares in Anthropic?


Totally agree but it is a strange situation. Everyone will probably be made whole now


Feels like the best situation. Those who had their cash/investment misappropriated get it back, and someone who clearly should not have been in a position to have that much responsibility and power has been removed.


Indeed, but the DOJ might have never found out.


Yeah, but also, if the hole is $0 vs $8 billion, you probably get less jail time.


Don't be too sure, that's not how it played out for Martin Shkreli.


We can never know the outcome of the counterfactual scenario for Shkreli.


Yep, that’s already a given.


I'm using LLMs at my day job and claude v2 getting close to gpt-4's intelligence. It does use a different prompting style and has a huge 100,000 context window so it expects multishot prompting to train the LLM just-in-time.


I've recently subscribed too. The huge context window is a game-changer and I can't wait for API access. Unfortunately it hallucinates a lot. It's amazing for extracting information out of large documents, altough doing so made me realize how limited current models are in terms of detecting "subtleties" that would be obvious to a human.


I have API access using AWS Bedrock which is hosted claude v2. boto3 has a client to trigger the completions but I've been practicing using just the playground in AWS console.


Thanks, I've requested access, how long did it take for you?

Update: access already granted!


BTW what do you mean by "different prompting style"?


Human: I have a message and I need to categorize it into the right bucket. Can you help?

    General Inquiry
    Feedback & Suggestions
    Technical Issues
    Billing & Payments
    Appointments & Scheduling
    Others
return only JSON like {"category": "Technical Issues"}

Assistant: Sure using just JSON!

Human: The app freezes every time I launch it.

Assistant: {"category": "Technical Issues"}

Human: I think your platform could use a night mode feature.

Assistant: {"category": "Feedback & Suggestions"}

Human: How do I reschedule my consultation?

Assistant: {"category": "Appointments & Scheduling"}

Human: I don't like the new open layout.

Assistant: {"category": "Feedback & Suggestions"}



Is this google indirectly admitting that it cannot compete in the current AI landscape (e.g bard), or is it just to keep OpenAI in check by funding its closest competitor?


Really surprised that Google didn't outright buy OpenAI perhaps the price is already too much now even for Google. Previously, Google has no problem buying direct competitors to its products for example Youtube and Waze.

ChatGPT with Bing search is just very good, intuitive and convenient that I highly recommend it if you have not tried it. Imagine a version of ChatGPT that has tight integration with Google Search, Google Scholar, Google Patent, Google Books, Google Deepmind, etc.

Early last year or so I've read that people were lamenting on how Google at the time was struggling to monetize Deepmind products and suddenly OpenAI ChatGPT 3 came and changed the game forever, while the game changing algorithm is right under their nose, so to speak. It's quite telling that none of the original authors of the Attention paper is still with Google for now.

To showcase the powerful nature of ChatGPT 4 with Bing search I've queried where the original authors of the Attention paper are working now and the answers are as follows:

"As of 2023, here are the current affiliations or recent activities of the original authors of the "Attention is All You Need" paper:

Ashish Vaswani: Co-Founder and Chief Scientist at Adept AI

Noam Shazeer: Co-founder and CEO of Character.AI

Niki Parmar: While there isn't specific information about Niki Parmar's current affiliation, it's known that both Vaswani and Parmar were authors at Google when the paper was published in 2017

Jakob Uszkoreit: CEO and co-founder of Inceptive Nucleics, Inc

Llion Jones: Left Google Japan in July 2023 to launch a startup, Sakana AI, alongside David Ha, the former head of Google's AI research arm in Japan

Aidan N. Gomez: CEO & Co-founder of Cohere, a company focused on Natural Language Processing (NLP)

Lukasz Kaiser: The search for Lukasz Kaiser's current affiliation did not yield relevant results. Further research may be required to find his current affiliation.

It's also noted that many of the co-authors have left Google to start their own ventures or joined other organizations since the publication of the paper."


> Lukasz Kaiser: The search for Lukasz Kaiser's current affiliation did not yield relevant results. Further research may be required to find his current affiliation.

The Lukasz Kaiser who's been at OpenAI since 2021?

https://www.linkedin.com/in/lukaszkaiser

I know it's popular on HN to give Google crap about having bad search results these days and to use ChatGPT instead, but this was literally the first Google search result for me.

IMO it's especially embarrassing that Bing (owned by Microsoft) was unable to find Lukasz Kaiser via LinkedIn (also owned by Microsoft).

That said: native Bing Search gets it right:

    Lukasz Kaiser | LinkedIn

    Connections: 500+
    Followers: 5.3K
    Works For: OpenAI
(obligatory disclaimer that Google pays me money in exchange for work that has nothing to do with AI/ML, so let that inform how you read my post)


Also: Parmar and Vaswani were cofounders of Adept in Nov 2021, but have since left to found a different startup around Nov 2022 per Niki's LinkedIn denoting a "Stealth" startup [0] (and supported by a news article [1]).

I guess my point is that it's really hard to tell which pieces of information are correct vs outdated vs outright hallucinated, without doing the legwork yourself.

[0] https://www.linkedin.com/in/nikiparmar

[1] https://www.theinformation.com/briefings/two-co-founders-of-...


Maybe it only does a certain amount of sub queries? Like a DOS protection?

To be fair. It says research yourself and provided data for most of the others saving the user enormous amount of time.

The genie is out of the bag.


> Early last year or so I've read that people were lamenting on how Google at the time was struggling to monetize Deepmind products

The problem was that all of the ML/AI products would demonetize Google's flagship moneymaker.

Everybody is trying to use ML/AI to undo all the damage the Google monopoly on search has done to the web ecosystem. Microsoft is fine with this for now because Google has the dominant position.

It is pretty clear, however, that this is not a sustainable situation. At some point, people are somehow going to want to turn ML/AI into cash extraction from end users. How to do that isn't obvious.


>Imagine a version of ChatGPT that has tight integration with Google Search, Google Scholar, Google Patent, Google Books, Google Deepmind, etc.

Google wants Google Bard to become that and use all the Google's services.


OpenAI is $80 billion now, you want to buy it outright, that's probably $100 billion. Google does not have that cash, probably only Apple does, and OpenAI would not agree to any 'equity' based purchase.


It's also not for sale - OpenAI is still owned by its non profit, and Microsoft only has effectively debt equity (capped profit).


Microsoft also owns a large percentage of it and would almost certainly prevent any sale


Google reportedly has $118 billion cash on hand. But makes no sense to blow it on one company. And no way that Microsoft could let it happen.


the more interesting number is $20B in net income in last Q, so $80B is their annual profit.


I always assumed Google bought Waze to prevent Facebook from buying it.


Huh, this together with their "Gemini" AI product still not being released does not paint a good picture of progress at the Google DeepMind Team.


Are they the ones doing Bard? I’ve got a ChatGPT subscription but interested in trying googles product but it says my workspace hadn’t enabled bard yet


You should be able to use Bard with a non-workspace account. Any personal gmail/google account should be able to go directly to https://bard.google.com and agree to the terms and enroll.


Claude nor Bard are available in Canada shrug


I wonder what the burn rate is for these AI companies. $2b is a shit ton of money but I can easily see it quickly draining with a large team of devs, data scientists, SREs, and misc costs such as specialty hardware and cloud costs.


There was an article a while ago about how it costs OpenAI around $700,000 per day to operate. Could be north of $1M / day now.


Even at $1M/day, the 3 billion from Amazon and Google would last 3000 days, which is ~8.2 years. Nothing to sneeze at! And that’s assuming they make no money themselves, which would extend the runway. A billion is a lot of money. I always accidentally think of it as 100 million instead of a thousand million for some reason.


They are also have a $1B+ annual run rate [1].

[1] https://www.reuters.com/technology/openai-track-generate-mor...


Even though 1M/day sounds like a lot, $10B (their funding from Microsoft) would last them 27 years at this rate.


That's still only $365M per year – $2B would last you 5+ years.


Right now, it's mostly hardware costs.


Source? I'm guessing salaries aren't cheap either.


Is there any significance in Google proper making the investment rather than another arm of Alphabet?


I can't understand the mindset of any AI researchers who stay at Google at this point.

You're there (presumably) because you want to make an impact. You gave them a massive lead with a novel approach... and Google failed to capitalise on that so spectacularly that they're now investing billions into their competitor who's miles ahead of Google, using Google's approach, all to try and head off another competitor who's miles ahead of everybody also using Google's approach

I hope that (counter to stories) they pay the researchers very well, because the best result I can see is Google shutting down what they can publish to try and stop this happening again.


I have a simple answer for you - researchers measure impact differently. If they measured it primarily by shipping products, they would work as engineers, not research scientists. As a researcher you realize impact by researching and publishing (internally or externally) and are ok with the idea that it's up to others to pick it up and turn into cool products.

Hopefully they work at the same company, but if they don't - it's not your problem to solve, but VPs and above.

But yes, it can be frustrating if you're junior and take your work personally and attribute its success to its (and even worse, your own) value. After a few years of career, you learn to look at it from a distance.

Google publishes amazing stuff, people are paid great salaries, they have a ton of fun doing it, work life balance is way better than at startups - what's not to like?


> people are paid great salaries, they have a ton of fun doing it, work life balance is way better than at startups

Compared to AI startups, neither of these is true.


That's why all the researches from the Attention is all you need paper not longer work at Google. /s

Food for thought: https://www.aichat.blog/google-exodus-where-are-the-authors-...

Each of the co-authors who spoke to the FT said they wanted to discover what the toolbox they had created was capable of. “The years after the transformer were some of the most fertile years in research. It became apparent . . . the models would get smarter with more feedback,” Vaswani says. “It was too compelling not to pursue this.” [2]

[2] - https://www.ft.com/content/37bb01af-ee46-4483-982f-ef3921436...


This quote says nothing about product, but about continuing research.

Most researchers don't care about products. Many don't even care about any practicality (just the scientific pursuit). How do I know? Well, I happen to work as Research Scientist. I worked at Google 5y. :) For the majority of my colleagues, "having to" support their work after it was published was a nuisance and they wanted to work on new cool stuff instead.


I am, unironically, not sure whether your description is reflective of being a Research Scientist, or just being a Google engineer in general...


> I can't understand the mindset

I can. None of this tech will maintain Google as the chokepoint of internet and make billions for them. You can download and run a LLaMA or Mistral but can't download a Google. On-topic information has been commoditised.

Even OpenAI is in a bad spot. They owned the whole LLM mountain in 2020, now they only own the peaks. Almost the whole mountain has been conquered by open models. Their area of supremacy is shrinking by the day.

The two factors at play here are 1. ability to run LLMs locally and 2. ability to improve local LLMs with data exfiltrated from SOTA LLMs.

I foresee an era of increased privacy as a consequence of this. We will be filtering our browsers with LLMs to remove junk and override Google, Twitter and FB. Bad news for ad providers, they got to move into some other profitable line of business now. Nobody got time to read garbage and ads.


Right now GPT4 is useful/powerful and probably anything else sucks. That may change but the mountain has real value at the top only. At least for chat used for either human assist or decision making or code generation. Categorisation, yeah you can use lesser LLMs.


Not necessarily true anymore. As a counterexample, consider CodeLlama 33b, which is quite good (and which has replaced GPT-4 for my coding assistant needs).

OpenAI's models are likely to remain the best, but I see open models catching up and becoming "good enough." Why pay for GPT-4 when I can run a model locally for free? (Barring the initial capital cost of a GPU; and not even this if you're, say, using a Macbook)


CodeLlama spit out some of the wildest garbage when I tested it, even against GPT-3.5. Which tasks have you found it to perform well?


What's CodeLlama 33b? I couldn't find anything regarding it online



Off by 1B error


Just a 1B mistake


^-^


Whoops, sorry. 34b :)


Not 100% sure but I believe Llama [1] is a LLM created by meta. Code Llama is probably one tailored as a coding assistant

[1] https://ai.meta.com/llama/


Like yes, as long as GPT-4 remains the most useful (and now they have multi-modality in Plus so they've just extended their lead) nobody will adopt anything else.

So grandparent's argument makes even less sense as search should also be fungible but it isn't when Google still provides the best search product and has since it started. If OpenAI can maintain that lead and continue to ship improvements that continue to push their peak higher, most people will not pay for anything less even when that price is free.


The GP didn't say who the value was meant to be for. There is definitely a lot of value for users, regardless of which model.

I would like LLaMa to be more local, responsive and private rather than more intelligent; it is good enough in that regard.


No way, for creative writing zephyr (based on mistral) is my favourite. Also no stupid limits like chatgpt

For assistance, chatgpt is the best but llama is more than adequate.

For code generation they all suck, chatgpt 4 before they nerfed it was good.

I use codellama phind locally


I don't work in tech and I use LLaMa. It is great for giving me ideas, writing overviews which I use to make sure I have most of my bases covered (and helping me see which of my ideas are more original), getting more examples, along with other creative tasks.

Could I get my employer to pay for it if all free options disappeared? Probably, but I don't have to while they exist.


> We will be filtering our browsers with LLMs to remove junk and override Google, Twitter and FB ... Nobody got time to read garbage and ads.

Exactly, there is a paradox that is getting more extreme by the day - that social media (and the broader web) is a wellspring of knowledge, yet also a vortex of addiction, filter bubbles & lost productivity. I want one without the other.

This pushed me to start building open source at OpenLocus, contributions & feedback are welcome - details in other comment.


Isn't Microsoft deeply integrating those peaks into Windows and Office? That's one hell of a moat.


Microsoft is also integrating ChatGPT into Bing.


Does anyone know vision models that detect ads? I wasn't able to find any, which is kind of surprising... That sounds like an armsrace that would exist.


I tried something not dissimilar, but without AI models: having webpages rendered in a real browser, but "headless". But there was really no way for the webpage to detect it was "headless": I'd render in a real browser, in an actual X session, under Xephyr. And I wouldn't be showing that Xephyr X server (so I wouldn't see what that real browser was rendering).

I'd then put gray rectangles on what where ads and only show visually that, with the grey rectangles, after having covered the ads with gray rectangles.

I just did it as some quick proof-of-concept: there are plenty different ways to do this but I liked that one. It's not dissimilar to services that renders a webpage on x different devices, without you needing to open that webpage on all these devices.

But the issue is that while it's relatively easy to get rid of ads, it's near impossible to get rid of submarine articles/blogs and it's getting harder and harder by the day to get rid of all the pointless webpages generated by ChatGPT or other LLMs that are flooding the web.

Meanwhile sticking to the sites I know (Wikipedia / HN / used car sales websites / a few forums I frequent etc.) and running a DNS locally that blocks hundreds of thousands of domains (and entire countries) is quite effective (I run unbound, which I like a lot for it's got many features and can block domains/subdomains using wildcards).

I'm pretty sure detecting and covering ads before displaying a webpage can be done but I'd say the bigger problem is submarines and overall terribly poor quality LLM generated webpages.

So basically: is it even worth it to detect ads while the web has now got a much bigger problem than ads?


Absolutely its still worth blocking ads. They abuse my compute rendering client side. Off with their heads. Presumably you could use ai to sniff out putative ai content in text and flag or block that too.



It won't work as long as you have HTTPS preventing any large scale interception and analysis of traffic. This is why Google was so quick to promote and switch to HTTPS. I bet you my left nut right now there is some random ex Googler with an NDA and interesting story to tell about this topic.

It pretty much means you have to detect the ad locally. And by then you've already lost and transferred it down at least.


I agree heartily with you. OpenAI may be the best, but the open models are getting good.

A weak analogy: as you might take a hybrid cloud/local compute strategy, I think it makes sense to be very flexible and use LLMs from different sources for diversity and not getting locked in. I mostly use OpenAI, but I am constantly experimenting with options. Local options are most exciting to me right now.

I usually use APIs, but OpenAI’s app that supports multi modal input images and voice conversation is impressive and points to a future mode of human computer interactions.

Exciting times!


> We will be filtering our browsers with LLMs to remove junk and override Google, Twitter and FB

I cannot wait for the day where I can create a list of blocked terms "Musk, Trump, American politics, ..." and have an intelligent LLM filter so that it doesn't matter what site I visit, whether The Verge, or The Guardian, or Reddit, all articles or posts related to these terms will be gone.

I think this has a chance to have a far higher effect than just ad tech. It's going to effect all media publishers. But I also expect them, led by Google and Facebook, to fight this tooth and nail with every dirty trick they can.


I’ve just the thing for you my friend https://github.com/devxpy/anti-chatgpt


Great start. Please develop a plugin I can secretly add to my Mother's browser to dial down the outrage she reads on the fly! She'll notice 'redacted' ;)


This is a good start, devxpy! What other features are you planning to build over it? Hit me up if you want to collaborate.


The main issue is that its very slow and expensive to browse the internet like this. The LLM will only perform well if you have it do chain of thought reasoning, and that has a latency hit because of a longer generation.


That looks great, have you tested it?


Oh you're the author, in that case I guess you have! Thanks for sharing.

As a suggestion, personally I'd rather the offending items just be removed entirely rather than showing "redacted".


This is very interesting desire to me.. Wont you end up with a weird, contrived context for everything else you do read though? Like you would be reading an article about the UN and it would say: "and then, some country put forward a new resolution." Wouldn't that, like, kinda drive you crazy?

Like I know the state of journalism is less than stellar, but patching it after the fact for each reader seems like the wrong direction. The implicit conception of "the news" in this desire reifies it into a weird kind of commodity for your personal entertainment/edification; which is precisely the conception operating today which makes it so bad!

Like, maybe, if you have psychological considerations where certain triggers are very damaging, I can kinda understand this. But if that is really the case, then just why read the news anyway? Of course you gotta read some sometimes, but in general you can read other things. There is a lifetime and a half of fiction and nonfiction to read, no GPU required!


I don't want to remove all mentions. If I'm reading an article about something else and it happens to mention something on my list, that's fine. I just want to remove all the top level articles and posts about these things.

It's not about being triggered by anything or trying to hide from anything. I also don't need (or even want) 100% efficacy. It's about cleaning up noise. For example, at this point I'm fully aware that Musk has turned Twitter into even more of a cesspool. I don't need any more information about that. And yet I get it, all the latest "juicy Musk gossip" any time I go near any tech sites. And it's just noise to me at this point.

Same with American politics. I'm not from the US so I'd be happy with a short monthly synopsis on what's happening there. But on the English speaking web, American politics is everywhere. It's exhausting. I want to filter it, reclaim the attention it steals from me while still being engaged with online society to a degree that I choose. And I believe that reclaiming this attention, energy, and time would allow me to engage more with subjects I do care about.


I see, that totally makes sense, but in that case it doesn't seem like you need a fancy AI for this, or at least it feels way overkill for what you actually want here.


Using an LLM allows for more flexible filters e.g. clickbait, celebrity news, people trolling in comments.


patching it after the fact is definitely needed. same way I use a plugin to filter out curse words because when it comes to your brain, garbage in = garbage out. patching "news" stories after the fact will never be perfect though because half the problem with "journalism" is omission of facts, not just biased half-truths or opinions.


Explains why Google tried (or is trying) so hard to push WEI though. If they can't programmatically evade adblockers, might as well programmatically prevent them.


You may achieve this by only visiting websites or pages with tailored feeds.

The problem is that it requires a bit of self discipline. Google and Meta are probably safe then...


What websites are you talking about?

I do use a feed reader already but it's not good for discovering new things. The problem is when I venture out to discover what's going on in the world I'm deluged with Musk spam. A couple of years ago it was Trump spam.


Agreed. I use - A feed reader with multiple sources - On X, I started by muting words and accounts. I now only watch one highly curated list - Here - podcasts - Engaging with relevant people directly

I'm sometimes a bit late on new trends, but I spend more time on long forms and reinforce a meaningful network in the process.


We started work on an open-source LLM-powered media filter at OpenLocus (https://github.com/openlocus), aiming for an alpha release by late 2023.

In the more medium term we are collaborating on improving information overload, filter bubbles & misinformation with labs at AllenAI, CMU, UPenn & Utah.

Contributions and feedback are welcome! Feel free to hit me up for an early access - email in the profile.


"I foresee an era of increased privacy as a consequence of this. We will be filtering our browsers with LLMs"

I am contemplating feeding Firefox extensions code to AI to detect possible malicious behaviors quickly.


But how can we possibly be this optimistic though? Like, time and time again there are these little glimmers of fantasy that some technology alone can alleviate the heinous/annoying parts of a mature capitalism, and every time the technology has ended up, after all is said and done, on the other side (so to speak).

Like, go back and read old mailing lists. See how everyone was so assured that computers themselves, then cryptography, then the modern internet, would create radical changes in the way the economy works, and enable avenues for greater self-determination and happiness. They all seem so naive in retrospect, knowing what we know and how it all played out.

I love that so many people are these days inspired by the llm tech to imagine a better world with them, and its important in itself to inspired like this. But precedence does not favor putting all your hopes in solely the technology itself for a better world. Hope I am wrong though!


It will undoubtedly be a turbulent time. All the skills and knowledge expressed in text or images is compressed into models. And these models offer back all they know to each of us according to our needs. Learning from everyone indiscriminately, serving everyone specifically, open sourcing skills and knowledge to a much deeper degree than open source had on software.

This will have social implications, it will speed up the learning process of any new technology. For example by upgrading Copilot, Github can upgrade coding practices everywhere (where it's used) to new coding standards instead of taking years, that's how fast it can go.

On another line of thinking, an image-language-action model can be used to control robots, and robot hardware is getting accessible. There is a chance for inventing self reliance automation for people, a possible solution to job loss.


Right, there are lots of things it could do, but the point is that these capabilities, or rather this potential, is merely necessary but not sufficient for actually following through with those nice things.

It doesn't matter what it can do when there is a truly scary amount of power and interest directed towards, e.g., making sure job loss (or unemployment in general) doesn't change too much. They'll make these things illegal before they'd let it get even close to messing something like that up!

There are already so many innovations and pieces of technology that could be helping instead of hurting. It should seem clear that the sheer capability of something could never be enough to change the state of affairs alone.

Sure the printing press was critical, but there was also a whole lot of blood shed and tumultuous times before the enlightened subject of the printing press could enjoy their new class and literacy.


Powerful vision! Thank you.


Is there a strongly written essay or paper arguing this viewpoint?


the original: "We Have No Moat, And Neither Does OpenAI" https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...


Most researchers aren't there to make a commercial product. They're there to publish well-cited research papers, talk at top conferences, and have access to far more compute than anyone else can offer to have the biggest chance at state-of-the-art results.

In terms of FLOPS, Google is rumoured to be by far the biggest, since they have lots of their own silicon, while most other people are limited by what NVidia can make or smallish amounts of custom silicon.

If Google can't turn that into a commercial product, the researchers don't really care.


> and have access to far more compute than anyone else can offer to have the biggest chance at state of the art results

What models do they have to show after 15 years of AI research? AlphaGo, AlphaFold, and that's about it. Search is mediocre, translation the same, not even their OCR APIs are very good.

On pure research, they invented the transformer in 2017 (Vaswani) and word embeddings in 2013 (Mikolov's word2vec). But Microsoft invented residual connections (Kaiming He, 2015), which underlie transformers and modern CNNs. CNNs and LSTMs were invented long before, by Yann LeCun (1989) and Hochreiter & Schmidhuber (1997).

And yes, OpenAI also invented something of their own: the Adam optimiser (Kingma, 2015), which trains 99% of the models today.
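For reference, the Adam update from that paper keeps exponential moving averages of the gradient and its square, bias-corrects them, and scales the step:

    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
    \hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
    \theta_t = \theta_{t-1} - \alpha \, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)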

Hinton co-invented Dropout (2014) and popularized backpropagation (Rumelhart, Hinton & Williams, 1986).


OpenAI didn’t even exist when Adam was invented! The contact email on the paper is from 2017.

OpenAI has made one major contribution: the autoregressive decoder. You could also argue that their productisation of RL for LLMs has been highly influential, even though strictly they didn't invent it.


Plus all the things they haven't yet published... there must be something, because nobody has yet come close to the performance of GPT-4 8 months later, despite many people trying.


The likely architecture of GPT-4 was invented at Google: MoE (mixture of experts). OpenAI poached the team who developed it.

OpenAI is ahead on the RL data side. To me that’s the likely biggest advantage they have in GPT-4.


Wouldn’t Google be able to catch up easily if that was really OpenAI’s only advantage? Google has a lot more resources to throw at RL. It seems like there must be some other secret sauce for OpenAI to continue maintaining such a huge lead.


I believe they are using much more synthetic data than others. That's the secret sauce besides their larger training set and model. Synthetic data makes the model more consistent.
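As a toy sketch of the kind of pipeline this implies, here's a hedged Python example: use a strong "teacher" model to generate training examples that are folded into a dataset. This is an assumption of mine about the general technique; nothing about OpenAI's actual pipeline is public.

    # pip install openai; expects OPENAI_API_KEY in the environment
    import openai

    client = openai.OpenAI()

    def synthesize_example(topic: str) -> str:
        # Ask a strong "teacher" model to write a training example.
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder teacher model
            messages=[{
                "role": "user",
                "content": "Write one instruction about "
                           f"{topic} followed by an ideal answer.",
            }],
        )
        return resp.choices[0].message.content

    # Accumulate examples for a later fine-tuning run.
    with open("synthetic.txt", "a") as f:
        for topic in ("unit testing", "SQL joins", "regex"):
            f.write(synthesize_example(topic) + "\n---\n")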


For all of the novel interest in the tech and service side, I can't understand why so many overlook the value of the datasets. I've been promoting them as potential products to license, similar to textbook publishing, and a bit reminiscent of how Google used its search appliances. Automating methods of creation will be an immensely valuable service.

I've already introduced methods of instruct formatting within our org, because as the architectures change, what we learn from performing the work to organize and account for our internal knowledge will continuously pay off.


Nobody has tried to come close to the performance of 4 and failed.

PaLM 2 was never intended to match 4 (Gemini is). Llama 2 was never intended to match 4 either. Same with Claude 2. Nobody has matched GPT-4's training compute and failed to reach it.


True, the paper is dated Jan 2015 and the company was founded in Dec 2015. But if you read the paper, it is signed:

> Diederik P. Kingma* University of Amsterdam, OpenAI dpkingma@openai.com


What? CLIP? Whisper? Dota 2 RL was big at the time; they made notable contributions to deep RL.


Google researchers publish leading/important research monthly. They publish impressive work in vision, NLP, and RL. Have you tried reading papers or following research out of Google?


>On pure research they invented the transformer in 2017 (Vaswani)

Transformers were invented in the early 90s: https://people.idsia.ch/~juergen/fast-weight-programmer-1991... Google were just the first with the compute to scale them up.


fwiw Schmidhuber is famous for claiming to have come up with virtually every architecture that's now useful/important.


They're good at research.

They're bad at turning the research into a product.

Specifically, most AI-based products need to be trained on user data, and Google is too scared to make any real use of user data in AI models for fear of getting sued into oblivion if, for example, Gmail can be tricked into writing something in an email that reveals some other Gmail user's private data.


>They're good at research. They're bad at turning the research into a product.

Tbh, Google Lens is one of the best computer vision products on the market. Generally speaking, it's hard to create a product around something that is completely new.


>translation the same

What? Between which languages?


>Most researchers aren't there to make a commercial product. They're there to publish well-cited research papers, talk at top conferences, and have access to far more compute than anyone else can offer.

I think that's core to what I'm getting at - I don't see how Google continues to let them publish core research at this point given how much of a lead they've given their competitors (especially combined with their own inability to execute)


> I don't see how Google continues to let them publish core research at this point

From what I hear from Google DeepMind researchers, the current policies are making it very difficult to publish NLP research. The barrier to disclosing research advances to the wider community appears much higher than it used to be.


If Google didn't let them publish, the researchers would leave for sure.


> I can't understand the mindset of any AI researchers who stay at Google at this point.

Have you heard of money?

But seriously, Google have talent because they pay insane salaries like all the other big tech companies


Google doesn’t pay very well, actually


They definitely pay a large number of people a large amount, but they're happy to pay less when they can.


They pay average market salaries to a large number of mediocre people.


Maybe Google doesn't see how it can turn AI into a low-traction product that will only be used for a rename and some cosmetic improvements to justify someone's promotion.

Google truly has grown complacent, and whoever has a good idea feels like they can't develop it there. But if you have an idea for how Google can lose money and reputation by retiring another product, they're all ears.


It is also a testament to what happens when you open up powerful technology instead of hoarding it.


> You gave them a massive lead with a novel approach...

From everything I've read/watched, the original researchers didn't know at that point their work was going to be so impactful. In hindsight, yeah. But otherwise no.


Yeah, Bard is still the worst major LLM product


Classic moment of business people not understanding how some of these technical things work.


>I can't understand the mindset of any AI researchers who stay at Google at this point.

>You're there (presumably) because you want to make an impact.

I can't speak to AI researchers specifically, but I can speak to research more generally. Ideally you want to focus on somewhere you can make an impact, pursue the general line of research you find interesting/promising, AND have enough stability to at least live (or ideally enough pay to not care).

I've known enough highly talented researchers who almost always have to sacrifice the "making a difference" and "pursuing their pathway" parts to a large degree just to survive. Ultimately, what people are paying to have researched is what matters, whether that's set by some business R&D division's head, a federal agency's biased directions, or whatever philanthropic connections some organizations can extract deciding what you should be doing. It's not just mediocre researchers; it's what one might consider "world class", top-of-their-field researchers. Everyone at some point makes sacrifices to pay the bills, but there's this neverending gaslighting about what researchers want to do and what areas they choose to impact, or some nonsense like that. That happens in very few idealized cases, e.g., tenured professorships at highly endowed institutions, where they've already invested enough time doing the stuff they don't want to do, gaming the system, to finally get to a point where they can pursue their own paths, and so on.

If you want to focus on an impact and perform research, you need to be able to self-fund and hope the area you work in isn't capital intensive. AI (specifically DNNs, LLMs, and highly data/compute-intensive approaches) is for the most part pretty capital intensive, hence one reason big tech pursued it to reduce competition: you can pursue paths theoretically, but unless you find approaches that aren't as capital intensive, it'll be a while before you can experimentally test and iterate.

Researchers just hope to get remotely close to the area they want to work in, but by and large have to chase the money and the lines of research behind it if they hope to remain researchers. Maybe Google is different, but I doubt it. This happens in other industries as well, with people in R&D near the tops of their fields working at market leaders and still caving.

If by some miracle you happen to be near the top of your field and happen to guess, or have some natural deep insight into, the path forward, and are fortunate enough to make a huge breakthrough, you can often just spin it out yourself or take on some investment risk. Why work for Google if you have the practical path to, say, AGI in your hands? Try to do it on your own, or find someone who doesn't own all your IP at the end of it. You work at big corp because there's ideally some stability, balanced with research paths that are semi-interesting to you: probably not exactly what you want, but as close as you can get.


Google : AI :: Xerox PARC : GUI


Can’t risk losing the AI war entirely. Much better to be embarrassed to the tune of $2B than to have no profits from future AI success.


Can't wait for the day when it makes sense to export all my ChatGPT history and import it into Google's competing product so they can customize my experience.

At this point Google is offering nothing which comes close to ChatGPT, Bard is a laughing stock compared to it. But I don't doubt that Google will catch up and then offer a better experience.

The only thing OpenAI has from me is that (valuable) chat history (and the monthly subscription money), so I wouldn't have any problems with moving away from them once they are no longer in the unique position they're currently in. This is different with Google, which has several orders of magnitude more data about me and which they also do manage for me.

I'm concerned that it's only a matter of time until OpenAI gets hacked and some user account data gets leaked.


There have already been hacks, with data from 100k user accounts leaked: https://www.bleepingcomputer.com/news/security/over-100-000-...

OpenAI says it was an issue with the user devices not their service, though.


What's the actual big gain of having history? I have it enabled as well and I sometimes jump back into old chats but it's mostly not so useful.


The big gain is for long-running queries. I had to refactor a big codebase and went back to the same conversation numerous times over the course of a month. Of course it would sometimes lose context, but having it all in the same place made things very easy for me.


Anthropic and Claude are signals to me that Artificial Intelligence will be a captured technology before it actually becomes useful.

Google investing in Anthropic shows that their priorities are on the capture over the usefulness. Because that's all that Claude really is.


Could it be more about preventing a monopoly, with their competitor Amazon becoming the biggest investor?

OpenAI gave Microsoft a competitive edge and this could just be a hedge to prevent Amazon from getting even further ahead.


Google paid $26 billion in 2021 to become the default search engine on browsers and phones - $2B is peanuts to them.


That means all the work they did on their own AI has failed. So they are buying someone else's AI.


Or it means they have more money than they know what to do with so they're throwing it at any de-risking they can find


I hope Anthropic improves their API with this money. Having usage information, like OpenAI provides, is very nice. Their AI is really good; they just need to improve their features to support commercial usage.


Amzoogle. Coining it here for the benefit of our future Amzooverlords.


"Googazon" has a better ring to it, IMHO.


Seems to capture the scale and ridiculousness as well.


Amagongle?


"alphazon"

"Amabet"


Ok, Amzoogoogazongle


Or perhaps AmazonGoogleKleinerPerkinsKlaxoGoldmanSachs GmbH


One of amazon's leadership principles is "Think Big" why stop at amazon and google? I propose Amzoogoogazonglezure.

Don't ask me how to pronounce it.


Is FTX's Anthropic stake worth a lot now, or has it been diluted?


Worth a lot. It probably would have been enough to plug the hole from stealing customer funds, if he had managed to stay afloat for a few more months. Ah, the irony.


Hope the FTX bankruptcy CEO can make customers whole. 80% of my net worth is stuck there.


Why is 80% of your net worth in FTX?


What happened to their shares? Just sold to cover claims?


from what I know, they still have their stake, but they will eventually sell it


Even though Claude sounds French, it seems it is not directly accessible from European countries? I'd love to give it a spin.


They don't have much confidence. Microsoft invested $10B into OpenAI. That investment seems to be paying off well.


Funny, seeing that Amazon seems to be dipping their toes in the same water.

https://press.aboutamazon.com/2023/9/amazon-and-anthropic-an...


Didn't Amazon invest in Google in the Google's early days? Now Google is returning the favor by saying: "Look FTC and DOJ, we are not a monopolist, web users prefer Amazon over Google when they search for products to buy."


It’s about as close as one can get to an acquisition, no?


Those who can't do... pay.


They have no choice; their internal models are just ridiculously bad compared to OpenAI's. The behemoth is so fat it can't move anymore. This doesn't sound good for the company as a whole, and the only thing that keeps them afloat is that they are paying to be the default on devices.


So the internal efforts to build a competitive AI product are going well?


I guess the main driver here is gaining control / influence ?


Does anyone know what clauses OpenAI workers work under in terms of competition trying to pick them up and win them over?

I still don't see any competition to what OpenAI has done, so I expect some heavy rivalry coming up.


If anything, this is a signal that there are hiccups with Gemini's GA or performance.


I feel like Jeff Goldblum in Jurassic Park rn


Seems like an odd thing to do when all we’ve been hearing about for the last 10 years is how great Google’s AI capabilities are (“but you’re not allowed to actually see or use them”) - even more so when Amazon just invested $1.5B in Anthropic a few weeks ago.


It's not odd; this is how big capital extends itself into the future. Alphabet and Amazon are giant tech conglomerates that have been doing this for years. Alphabet bought YouTube, Android, DoubleClick, DeepMind, Maps, etc. Plenty of things, and maybe the best things people think of when they think of Google, were actually bought from the marketplace and integrated into the giant.

They buy whatever they want that might help their cause. This quarter alone Alphabet's free cash flow is $22B. This is a small bet for them. Why wouldn't they spend a small fraction in a key area for them?


But this investment seems different from all the other ones as Google isn't actually taking over Anthropic. Amazon is an even larger shareholder and I think Anthropic has made a commitment to prioritise AWS. So what is Google trying to do here? Offer Anthropic to GCP customers as yet another option?


Enemy of my enemy? Google and Microsoft have much greater competitive surface area than either of them have with Amazon. It would make sense for G and A to both prop up an alternative to OpenAI. If Google could integrate quality LLMs into their assistant, that would boost their position relative to Apple too.


That would make sense if Google didn't have its own AI offering and research that is supposed to be first rate and a strategic priority of the company.

Of course Google would prefer Anthropic to beat (or at least be competitive with) OpenAI, but only if they can't do it themselves.

Has Google management lost confidence in its own AI capability? Or are there so many leading AI researchers that refuse to work for Google?

To me this looks like Google could be in big trouble.


Google has astronomical sums of cash and so many hands that the left hand often doesn't know what the hundreds of right hands know. My point is that $2B can be a side bet, made by finance people without knowledge of how their own internal modeling efforts are going, to make sure that if those efforts fail there is a serious competitor to OpenAI to buy from or license.


>...a side bet to make sure if their own efforts fail there is a serious competitor...

My point is that this is not how a healthy company can think about its most strategic activities. This sort of failure is not something you can hedge against.

It's like Apple hedging against the risk of failing to keep iPhone competitive by investing in some other device maker that might be able to compete with Samsung.


I don't think it's anything like that example; Google is an advertising conglomerate, not an LLM API company. In your analogy, Anthropic is like a camera company when Apple first started making phones. Going to become a very important feature, to be sure, but not a competing device maker.

By failure I don’t mean they can’t make an LLM product, I mean the LLM product doesn’t have majority market share to be used as an ad surface (or other monetization strategy). That’s only partly dependent on their research and modeling efforts. The revenues they could make from an LLM api business are too small to register for them right now, the market share and product work is also important.


I disagree. Google's entire advertising business depends on them being the best at "organising the world's information". (Apple says Google is the best, so it must be true, right? :)

If Google fails to compete in AI, they will lose search, and some competitor will siphon off all their advertising revenue.

This is all coinciding with regulators taking a closer look at all the other ways in which Google protects its search monopoly. You know, the ones that are not so much based on merit.

Google is at risk of losing both its technology leadership and its grip on distribution channels at the same time.


I don't see the risk at all. Google has an LLM product with much more reach and better monetization than Anthropic already. If you think the DOJ is going to break up Google or otherwise destroy their market share in search, that's another story altogether; but if they keep it, they'll be well positioned to make a lot of money buying Anthropic and getting their models onto their infrastructure and into their channels. It'd be like buying YouTube, which had absolutely nothing to do with needing their video hosting infrastructure and everything to do with YouTube winning market share with their product decisions.


Google probably knows how to do a pretty good AI-powered replacement for search.

It doesn't know how to keep new information flowing into the AI.

If AI replaces the need to visit the original website and everyone just stays on google.com then a great deal of the web will just stop being updated because nobody is reading it and nobody will read it.

That's Google's problem. Frankly, it's a problem for every company that wants to try to supplant Google search with AI.


I was also thinking about that. In that scenario, for example, instead of going to amazon.com to search and buy something, you would go to a chat app like ChatGPT or Google Bard and interact with Amazon's chat bot to search and buy something. Think of the Chinese super app WeChat, but on steroids. This is what Elon is probably dreaming about. But before that vision comes true, we first need protocols and standards for building chat bots inside super chat apps. If this comes true, super chat apps would be gigantic walled gardens, even worse than Facebook is today. I think the same conflicting thought process was behind Steve Jobs' push for iPhone web apps: web apps are more open and free for users than native mobile apps, but native mobile apps enable a richer and better user experience. Sadly, Apple decided to lock iPhone apps inside iOS and the App Store.

When LLMs and ChatGPT first came about, I thought it was just more hype, but the web and the web search industry hang in the balance (Google in particular).

P.S.

>If AI replaces the need to visit the original website and everyone just stays on google.com then a great deal of the web will just stop being updated because nobody is reading it and nobody will read it.

But even today, and for a very long time as a matter of fact, you can use RSS for website updates and read them in your RSS reader, and yet the classic web still didn't fade away.
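For anyone who hasn't tried it, polling feeds takes only a few lines, e.g. with Python's feedparser library (the feed URLs below are placeholders):

    # pip install feedparser
    import feedparser

    # Placeholder feed URLs; swap in the sites you actually follow.
    feeds = ["https://example.com/atom.xml", "https://example.org/rss"]

    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:5]:  # latest five items per feed
            print(entry.get("title"), "->", entry.get("link"))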


Google isn't a focused company; they typically run many parallel bets that often compete with each other. See all their chat apps, etc. Betting both on an in-house and an external solution is normal for them.


Name one other example of Google betting on an external solution in a key strategic area where that bet is a minority interest in a company co-owned by a big competitor.


Google invested in Anthropic before Amazon did, so not sure what you mean. This is just Google strengthening their bet.

You should first ask yourself why Amazon became a minority shareholder in a company already invested in by Google.


It makes far more sense for Amazon to prop up an OpenAI competitor to secure access to state of the art models for AWS. Amazon doesn't compete on excelling in AI.


AWS and Google Cloud invested in it for exactly the same reason: to enlarge their cloud businesses. Google's previous investment came with a cloud partnership, then AWS also struck a cloud partnership with them, and now Google has strengthened its cloud partnership to ensure AWS doesn't take it all.

So this bet doesn't have anything to do with AI, it is a competition between AWS and Google cloud. They are competing over who gets to sell shovels in a gold rush.


This makes absolute sense when looking at GCP and AWS in isolation.

But Google is at the same time propping up a competitor while Amazon is propping up a purely complementary service.


But Google is very fragmented, as I said; what Cloud does has little to do with what other parts of Google do. This investment makes sense for Cloud, so they make it. It is both a weakness and a strength of the company. Amazon is much more top-down, and that leads to other problems.


Money shouldn't sit around; it should be invested to make more money. Perhaps Google just thought this was a good investment that would likely provide a good return? It wouldn't be the first time a company did that.


This is by no means a small bet.



Maybe someone can explain this, because I never understood it. When a company sits on so much cash, I guess it doesn't mean cash which is liquid, but rather a variety of assets, right?

So when they have to pull x billions out, it's not just liquid assets; they will have to stage and sell assets representing those funds.

Since it's not just cash, how does a company of this size determine what assets to sell? And if those assets are actually invested in something, or represent an entity of some sort, how do they assess whether selling will cause any damage or loss of profitability? The crux is: is the risk twofold? First, let go of whatever the assets were invested in (one), and then buy a new company and hope it has ROI (two).

Or is it actually possible to have 1.5 billion dollars lying around in cash somehow? I know money is a made-up idea, but that is still a big number for a bank/banks/asset-holding company to vouch for and to expect some kind of real, tangible monetary value behind the symbolic currency.


> When a company sits on so much cash, I guess it doesn't mean cash which is liquid, but rather a variety of assets, right?

Cash is a very specific thing on a balance sheet. It has to be cash or very close to cash. "Equivalent", something like a <90d treasury that has virtually zero interest rate risk.

So when someone says "Google has 100B cash" it would mean literally cash or close enough to cash that it doesn't matter.

You'll note, if you read the 10Q, it's also wrong. Google has 30B in cash and cash equivalents, and an additional 90B in marketable securities - stocks, and bonds with >90d maturity.

That said, "marketable securities" are extremely liquid.

> Or is it actually possible to have 1.5 billion dollars laying around in cash somehow?

Yes? Depending on what you mean by "laying around in cash". It's not literal physical dollar bills, it's numbers in a computer.

1.5B is not much for a company that size. I'd imagine that is payroll and accounts payable for like a week or two?

> I know money is a made up idea, but that is still a big number for a bank/banks/asset holding company to just say good for and expect some kind of real monetary tangible value behind the symbolic currency.

Bank of America alone has like 2 trillion in US deposits.


Most larger companies will use treasury management software within their finance organization to handle these issues (asset mix, risk, prediction, transfer, etc...).

Scanning the players will show you how some of them solve some of the problems you mention.

https://en.wikipedia.org/wiki/Treasury_management_system


>Or is it actually possible to have 1.5 billion dollars laying around in cash somehow?

Yes. 1.5 billion is less than Google's weekly operating expense.

>So since it's not just cash, how does a company of this size, then determine what assets to sell? And if those assets are actually invested in something, or representing an entity of some sorts, how do they assess whether or not it will cause any damage or loss of profitability?

There are a lot of smart people under the CFO who determine that.


It’s described in various articles as $121bn of cash, cash equivalents (debt instruments and marketable securities with maturities of < 90 days) and other short-term investments (which I think includes government bonds.) Short term investments make up more than 80% of it.


Alphabet has 190k employees. Assuming an average salary of $5k/month, that's 190,000 x $5,000 = $950M, so already close to a billion going to employee accounts each month.


Maybe it's semantics, but generally with fund allocation, when you're deploying more than 1% of your AUM towards a single thesis, that's a significant bet.


The closest equivalent to "AUM" for a company like that would be the market capitalization, not cash reserves, so over 1.5 trillion.

Then again, shareholders usually have less risk tolerance with companies than investors with hedge funds, so the two aren't really comparable either. (But I assume your 1% number is for more risk-averse funds? Hedge funds regularly make much bigger bets than that)


Do you ever think how much better the world would be off if Alphabet wasn't allowed to buy all those things? I do. Antitrust has to come back in a big way.


And that is why they should have been broken up many years ago. The lack of teeth in US antitrust laws goes against capitalism.


Those parts would still end up bought by BlackRock, Vanguard, or whatever index fund is in fashion, and steered by these funds that have the easiest business model but no capacity to innovate.


Then maybe funds like BlackRock and Vanguard should be dismantled as well.


Why? "Owned by Vanguard" is the closest possible thing to "owned by the American public". What do you think Vanguard is and where do you think their money comes from?


Downvoting because parent is clearly referring to corp gov activities and it’s insulting to insinuate they don’t know what an asset manager is.


Maybe just restrict index funds from interfering with whatever business they own? Index funds are ultimately a dumb momentum-based business model that just buys whatever companies are doing well at the moment and sells whichever aren't, but bear no risk themselves outside the economy shrinking. Not sure why they should have any say in how the companies they own operate, it seems like a complete competence mismatch (dumb ones with little risk controlling the smart ones with the skin in the game).


Everyone should be their own asset manager? The devolution of voting has already started, so not sure that direction has much merit.


I don't feel that capitalism is an end to be pursued, but I agree that these conglomerates should be broken up, for the sake of avoiding irreversible concentration of power.


Not really irreversible unless they get a monopoly on violence. So as long as the Overton window, from the public's perspective, shifts dominantly towards breaking them up, we'll be able to get to a better situation in the future.


When you can reliably tell they have a monopoly on violence, it's far too late to do something about it.

However, Very Bad Things™ can happen long before they have monopoly on violence. And violence where? Large companies operate in weak jurisdictions, not only in their home states of Maryland and Ireland.


Did you mean Delaware instead of Maryland?


Maybe propose an alternate system that is 100% free from any similar hypothetical future abuse?


Well, historically, there have been a number of companies with the right to violence. The East Indies and West Indies Companies, for instance, or more recently United Fruit. I haven't bothered to check, but I'd be surprised if Big Oil didn't have mercenaries on call in hot zones, with little to no oversight.

Nobody (company, nation or gang) needs a monopoly on violence to become dangerous.


Those companies didn't have the right to violence in their home countries; they waged wars in other countries instead. Companies overthrowing their home country hasn't really been a thing.


True. However, please don't forget that the conglomerates being discussed initially do exist and have considerable influence in other countries than the US, so my remark remains :)

Also, the Medici family was initially a wool company, then grabbed power in their homeland, kept it for about three centuries, and somewhere along the way produced descendants that ruled over much of Europe. Similarly, the fascist uprising that gave Franco power over Spain was largely privately funded by a bank [1], and I seem to remember that the German Nazi party was largely funded by industrialists and bankers [2] until it reached power.

So, I'd say that companies overthrowing their home country's government has unfortunately been a thing for quite some time.

[1] https://en.wikipedia.org/wiki/Juan_March

[2] https://www.bibliotecapleyades.net/sociopolitica/wall_street...


They (search, social media, advertising companies) are gaining a monopoly on truth. With that they indirectly control the government, which is the one with the monopoly on violence.


You are never given the real truth in the media. There’s never been an incentive for that.


To emphasize: all human productions are biased, that's human nature, and news media are not exempt, despite many outlets making serious attempts to produce unbiased news.

Part of the job of being a consumer of media is understanding the bias of what we're consuming, determining whether it skews the news, and possibly counter-balancing by consuming other media with different bias. And yes, it's lots of work and most people aren't willing to spend the time doing that.


> for the sake of avoiding irreversible concentration of power

In the context of capitalism, it’s for the same reasons.


What. Deploying capital to make investments is the central conceit of capitalism. It’s right there on the tin.


Hmmm, isn't it just exactly capitalism? And wouldn't government intervention be against capitalism?


Google Cloud and Google DeepMind have different goals.

The goal of Cloud is to sell as much compute as possible to the rest of the world. So, they want as many foundational models as possible running on their cloud.

Since OpenAI is married to Azure, Google and Amazon are trying hard to get the remaining players. Of course GCP can just bet on Gemini, but if you were the head of GCP you wouldn't put all your eggs in one basket. The essence of cloud is redundancy and load distribution; that applies to business strategies too.


Interesting take.

I was analyzing this as not putting all your eggs in one basket in terms of AI capabilities, but you are right that it's probably also true in terms of cloud sales.


Anthropic, like OpenAI, has a big lead on Google in terms of productization, but lags behind OpenAI in visibility, posing a big risk of OpenAI getting so much mindshare that even if Google were to catch up in productization, everyone in the market would already be tied to what was visibly the only game in town.

Helping fund Anthropic and assuring that there is visible competition also makes it more likely that LLM-backend-agnostic services will exist, and that a culture of evaluating offerings, rather than just automatically going to OpenAI, is established. That puts Google in a better position if they really are ahead in basic science and just suffering from a past lack of commercialization focus that is remedied by the recent reorg and refocusing of Google's AI efforts.

(Also, it may help avoid the situation where Google's lack of product, and the financial relationships between other cloud providers and the AI vendors that have competitive products, mean that Azure becomes the favored enterprise platform for OpenAI, Amazon works out something similar with Anthropic, and Google Cloud loses competitive position because Google AI isn't competitive and Google doesn't have the right partnerships.)
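As a rough illustration of what "LLM-backend-agnostic" can mean in code, here's a minimal Python sketch. The SDK calls are real, but the interface and the model names are assumptions of mine, not any particular vendor's abstraction:

    # pip install openai anthropic; expects OPENAI_API_KEY / ANTHROPIC_API_KEY
    from typing import Protocol

    class ChatBackend(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OpenAIBackend:
        def __init__(self, model: str = "gpt-4"):  # placeholder model name
            import openai
            self._client = openai.OpenAI()
            self._model = model

        def complete(self, prompt: str) -> str:
            resp = self._client.chat.completions.create(
                model=self._model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

    class AnthropicBackend:
        def __init__(self, model: str = "claude-2.1"):  # placeholder model name
            import anthropic
            self._client = anthropic.Anthropic()
            self._model = model

        def complete(self, prompt: str) -> str:
            msg = self._client.messages.create(
                model=self._model,
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text

    def summarize(backend: ChatBackend, text: str) -> str:
        # Application code depends only on the protocol,
        # so swapping vendors is a one-line change.
        return backend.complete("Summarize in one sentence:\n" + text)

A service built against an interface like this can route to whichever vendor is cheapest or scores best at evaluation time.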


Anthropic lacks more than visibility. I don't know if it's an issue with compute or leadership, but it's been really difficult getting access. Of course, I guess you can run it on Bedrock now? I signed up for API access on Anthropic's site maybe 6 months ago. Never any follow-up, not even to tell me I can use Bedrock to run it.


Not sure how this happened, really sorry that this happened to you. Please reach out again, we've gotten through all of our backlog.


They are keeping the landscape from becoming too centralized around another direct competitor.

OpenAI is now basically Microsoft: Copilot in all your docs, etc.

If Amazon were the only large investor in Anthropic, it would encourage a similar exclusive partnership to develop, where Anthropic ends up powering Alexa with proprietary access to its SotA models.

Google throwing money that way keeps Anthropic from becoming too siloed into a partnership with one cloud direct competitor, and also helps fund another competitor to the company in the lead who is partnered with a different cloud direct competitor.

I wouldn't read into it beyond that, and certainly not in thinking this reflects product shortcomings or advantages.


1. OpenAI is too large a supplier for LLMs, and they are trying to create supplier diversity

2. $2B is a drop in the bucket for Google. Even after the severe fall this month, Google's market cap is $1.536 trillion; the investment is roughly 0.13% of it.

3. It is undeniable that small players can move fast, so even if Google's AI is amazing, its slope is probably not as steep as a fast-moving startup's.

4. These products have 2nd order revenue boosters -- you might use Anthropic's marketplace app, as a theoretical example, but you end up spending much more on compute/storage/cloud in doing so, helping the cloud majors


I would not consider the investment as a % of market cap, since Google doesn't own all that market cap... I would consider non-float shares + short-term investments + cash on hand, or revenue/profit, a better measure.


> I would not consider the investment as a % of market cap since Google doesn't own all that market cap.

Market cap does matter because you can use it to gauge how easily Google could hypothetically raise the same amount[1] by issuing additional shares without a shareholder revolt.

1. Or complete an all-stock deal or acquisition.


Not seeing how that distinction matters. Google's owners (shareholders) do own all that, and Google represents the collective financial interests of the shareholders.


Google's shareholders own stock. Nothing more nothing less.


> even more so when Amazon just invested $1.5B in Anthropic a few weeks ago.

Amazon invested not $1.5B but $4B in Anthropic one month ago [0].

[0] https://www.reuters.com/markets/deals/amazon-steps-up-ai-rac...


Isn't this just laundering money into the cloud division? GOOG was decimated after the last earnings call by poor cloud results. This is at least a billion in revenue for Google Cloud, right?


I think their primary goal is to make life difficult for OpenAI by investing in a competent competitor that brings a level of agility that Google just can't deliver.


Simplest explanation is it was all PR hype.


The simplest is that Google's caste of middle managers obstructs the company more than it enables it.


Caste is an accurate word to describe it.


ouch


But Google did discover the tech the current AI craze is based on.


Doesn't mean they know how to make it into a product.


A common failure mode for startups, and a reminder of how universal those lessons are. Whether it will be the death knell of Google, we'll have to wait a bit longer to see.


> Whether it would be the death knell of Google we'll have to wait a bit longer.

I don't understand the trope on HN that OpenAI is a competitor to Google. I do not think Google has the appetite or DNA to be purely an AI API vendor, and I don't think selling APIs is as profitable as Google's current business. Google investing in Anthropic is a good, cheap hedge against the low-probability (IMO) event that LLMs somehow displace the search results page and display ads.


Still not just "PR hype".


Xerox likewise invented the computer mouse and the GUI as we know it.


And you wouldn't accuse Xerox of "PR hype". Xerox did the exact opposite: they had not realized the value of the research done at PARC.


Google engineers invented most of the primary innovations in NLP in the last decade, such as transformers and word2vec, which started it all and led to the current stack OpenAI is built on. Their best public models certainly lag OpenAI's, but it certainly isn't vaporware.


Researchers, not engineers.


Researchers are engineers in most industry jobs


You are telling me Google Assistant and Alexa aren't sentient ASI?


Let's not flatten the differences between ChatGPT and those (your humble source worked on Google Assistant and appreciates your kindness)


This is the only thing they can do. They don't produce anything; they buy. Their AI is a joke, and all they earn money on is ads. MS is beating them on the AI front, hard, and MS has multiple interesting avenues, like having an actual OS, basically all the world's code, and all the world's business.

$2B is too little, too late as well. It's almost embarrassing. MS dumped $10B into OpenAI. $2B is a lukewarm, visionless move.


Why do you say that MS is beating Google on AI? OpenAI is not part of MS (49% ownership is nice, but it doesn't give MS control of the company). MS by itself doesn't own any foundational models competitive with Google's. MS has a solid AI research team, but it's not ahead of Google DeepMind. Microsoft Azure beats Google Cloud, but I don't think you count GPU rentals as AI?

If MS actually bought OpenAI, it would be a different story.


Looks like Google has better hardware for AI than anyone else, and they are the only company with both AI research and AI hardware. OpenAI+Microsoft and Anthropic+Amazon are much less integrated, working at arm's length.

> Google will have multiple very large clusters across their infrastructure for training and by far the lowest cost per inference, but this won’t automatically grant them the keys to the kingdom. If the battle is just access to compute resources, Google would crush both OpenAI and Anthropic.

> Being “GPU-rich” alone does not mean the battle is over. Google will have multiple different clusters larger than their competitors, so they can afford to make mistakes with pretraining and trying more differing architectures. What OpenAI and Anthropic lack in compute, they have to make up in research efficiency, focus, and execution.

https://www.semianalysis.com/p/amazon-anthropic-poison-pill-...


> Looks like Google has better hardware for AI than anyone else, and they are the only company having both AI research and AI hardware

On top of that, they have lots of training data: YouTube, Gmail, Google Docs, Google Drive, the indexed www for Google Search, lots of data from those fancy cars driving around for Street View, and probably still some data from Google+.


AWS has Trainium, which is less off the ground than TPUs but does exist.


I don't think $2B is too little. Anthropic already has a competitive LLM; I think they lack a competitive dataset and need to scale. $2B can allow them to do this. I don't know that Google wants Claude to be theirs; they just don't want Microsoft to own the generative AI market.


When you’re pulling down $15 billion in profits a quarter, it’s not nuts to drop a chunk on an opportunity that might end up winning the whole market.

Safe hedge imo.


It could very well be that they lost the talent.


They lost most of their top talent; it is their people who created OpenAI and Anthropic.


They are buying upside and they are funding a competitor to OpenAI and Microsoft. The enemy of my enemy is my friend. There are only so many groups in AI that have the chops to take on OpenAI and Anthropic is one of them - if not the most promising.


In addition to what others have said about them having so much capital to invest, another angle is that AI is a hot topic right now. It is a much worse PR headline for it to come out that Google’s AI did something bad. If a headline comes out that Anthropic’s model did something bad, very few people will connect the dots to Anthropic’s investors.


Speculating... but maybe Anthropic is about to come up with something big


Nope


Can you maybe elaborate since you’re clearly in the know?


No


It is a diversification bet I guess.


They are starting to let people try their "capabilities", and from having tried them, it isn't great.


Google and their also-ran AI. Pichai is the lame duck CEO, Brin and Page are the lame duck founders. Hassabis is a lego nerd.

Company hasn’t done shit but shutter products and piss people off, and the search engine is working about as good as AltaVista circa 1999.

Google employees downvoting me


I don't get how Pichai isn't in the crosshairs right now, he must be good at bullshitting the board and shareholders, because things have very clearly been going downhill under his leadership.



