Hacker News | new | past | comments | ask | show | jobs | submit | brokencode's comments | login

Nobody claimed it was impressive. It’s a little unusual to use C instead of C++, but that’s about it.

It's impressive these days, when software quality and craftsmanship are declining.

There was a point in time when basically every well known AI researcher worked at Google. They have been at the forefront of AI research and investing heavily for longer than anybody.

It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

But they are in full gear now that there is real competition, and it’ll be cool to see what they release over the next few years.


>It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

Not really. If Google released all of this first instead of companies that have never made a profit and perhaps never will, the case law would simply be the copyright holders suing them for infringement and winning.


It's not even that. It's way easier to do R&D when you don't have a customer base to support.

Also think of how LLMs are replacing web searches for most people - Google would have been cannibalising their Search profits for no good reason

> It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

It’s not that crazy. Sometimes the rational move is to wait for a market to fully materialize before going after it. This isn’t a Xerox PARC situation, nor really the innovator’s dilemma, it’s about timing: turning research into profits when market conditions finally make it viable. Even mammoths like Google are limited in their ability to create entirely new markets.


This take makes even more sense when you consider the costs of making a move to create the market. The organizational energy and its necessary loss in focus and resources limits their ability to experiment. Arguably the best strategy for Google: (1) build foundational depth in research and infrastructure that would be impossible for competition to quickly replicate (2) wait for the market to present a clear new opportunity for you (3) capture it decisively by focusing and exploiting every foundational advantage Google was able to build.

I also think the presence of Sergey Brin has been making a difference in this.

Ex-googler: I doubt it, but am curious for the rationale. (I know there was a round of PR re: him "coming back to help with AI," but just between you and me, the word on him internally, over years and multiple projects, was that having him around caused chaos, b/c he was a tourist flitting between teams, just spitting out ideas; now you have unclear direction and multiple teams hearing the same "you should" and doing it.)

The rebuke is that a lack of chaos makes people feel more orderly and as if things are going better, but it doesn't increase your luck surface area; it just maximizes cozy vibes and self-interested comfort.

My dynamic range of professional experience is high: dropout => waiter => founded startup => acquirer => Google.

You're making an interesting point that I somewhat agree with, from the perspective of someone who was... clearly a little more feral than his surroundings at Google, and wildly succeeded and ultimately quietly failed because of it.

The important bit is "great man" theory doesn't solve lack of dynamism. It usually makes things worse. The people you read about in newspapers are pretty much as smart as you, for better or worse.

I actually disagreed with the Sergey thing along the same lines; it was being used as a parable for why it was okay to do ~nothing in year 3 and continue avoiding what we were supposed to ship in year 1, because only VPs outside my org and the design section in my org would care.

Not sure if all that rhymes or will make any sense to you at all. But I deeply respect the point you are communicating, and also mean to communicate that there's another just as strong lesson: one person isn't bright enough to pull that off, and the important bit there isn't "oh, he isn't special", it's that it makes you even more careful building organizations that maintain dynamism and creativity.


Yeah people seem to be pretty poor at judging the impact of 'key' people.

E.g. Steve Jobs was absolutely fundamental to the turn around of Apple. Will Brin have this level of incremental impact on the Goog/Alphabet of today? Nah.


The difference is: Apple had one "key person", Jobs, and yes, the products he drove made the company successful. Now Jobs has gone, I haven't seen anything new.

But if you look at Google, there isn't one key product. There are a whole pile of products that are best in class. Search (cringe, I know it's popular here to say Google search sucks, and perhaps it does, but what search engine is far better?), YouTube, Maps, Android, Waymo, GMail, DeepMind, the cloud infrastructure, Translate, Lens (OCR), and probably a lot of others I've forgotten. Don't forget Sheets and Docs, which, while they have now been replicated by Microsoft and others, were first done by Google. Some of them, like Maps, seem to have swapped entire teams, yet continued to be best in class. Predicting Google won't be at the forefront of the next advance seems perilous.

Maybe these products have key people, as you call them, but the magic in Alphabet doesn't seem to be them. The magic seems to be that Alphabet has some way to create / acquire these key people. Or perhaps Alphabet just knows how to create top engineering teams that keep rolling along, even when the team members are replaced.

Apple produced one key person, Jobs. Alphabet seems to be a factory creating lots of key people moving products along. But as Google even manages to replace these key people (as they did for Maps) and still keep the product moving, I'm not sure they are the key to Google's success.


Docs was just an acquisition of Writely, an early "Web 2.0" document editor service, so "first done by Google" is a bit imprecise.

> what search engine is far better?

Since you ask, this surely has to be altpower.app!


In Assistant, having higher-ups spitting out ideas and random thoughts ended up with people mistakenly assuming that we really wanted to go and do that, meaning that the chaos resulted in ill and cancelled projects.

The worst part was figuring out what happened way too late. People were trying to go for promo on a project that didn't launch. Many people got angry, some left, the product felt stale, and leadership & management lost trust.


Isn’t that what the parent is describing? “Ill and cancelled projects” <==> “luck surface area”, and “trying to go for promotion” <==> “cozy vibes and self-interested comfort”?

I'm in a similar position and generally agree with your take, but the plus side to his involvement is if he believed in your project or viewpoint he would act as the ultimate red tape cutter.

And there is absolutely nothing more valuable at G (no snark)

(cheers, don't read too much signal into my thoughts, it's more negative than I'd intend. Just was aware it was someone going off PR, and doing hero worship that I myself used to do, and was disabused over 7 years there, and would like other people outside to disabuse themselves of. It's a place, not the place)


That makes sense. A "secret shopper" might be a better way to avoid that but wouldn't give him the strokes of being the god in the room.

He was shopping for other strokes from Google employees: https://finance.yahoo.com/blogs/the-exchange/alleged-affair-...

Oh ffs, we have an external investor who behaves like that. Literally set us back a year on pet nonsense projects and ideas.

What'd he say

That the rocket company should buy an LLM

Please, Google was terrible about using the tech they had long before Sundar, back when Brin was in charge.

Google Reader is a simple example: Google had by far the most popular RSS reader, and they just threw it away. A single intern could have kept the whole thing running, and Google has literal billions, but they couldn't see the value in it.

I mean, it's not like being able to see what a good portion of America is reading every day could have any value for an AI company, right?

Google has always been terrible about turning tech into (viable, maintained) products.


Is there an equivalent to Godwin's law wrt threads about Google and Google Reader?

See also: any programming thread and Rust.


I'm convinced my last groan will be reading a thread about Google paper clipping the world, and someone will be moaning about Google Reader.

“A more elegant weapon of a civilised age.”

Lol, it seems obvious in retrospect; there really, really needs to be.

Therefore we now have “Vinkel’s Law”


It's far from the only example https://killedbygoogle.com/

I never get the moaning about killing Reader. It was never about popularity or user experience.

Reader had to be killed because it [was seen as] a suboptimal ad monetization engine. Page views were superior.

Was Google going to support minimizing ads in any way?


Right. Reader was not a case of apathy and failure to see the product’s value.

It was Google clearly seeing the product’s value, and killing it because that value was detrimental to their ads business.


How is this relevant? At best it’s tangentially related and low effort

Took a while, but I got to the Google Reader post. Self-host tt-rss; it's much better.

Can you not vibe code it back into existence yet?


If this is true, this is disappointing :/

On a similar topic, it is worth mentioning the entrepreneurs who are forced into sex (or, let's say, strongly pushed) by VCs.

For those who feel safe or are taking it as a joke: this affects women AND men.

Some people are going to be disappointed about their heroes.


Don't mention the recent Eric Schmidt scandal.

Barely any of these jokers are clean. Makes MZ look almost normal in comparison.


>> If this is true, this is disappointing

Wait for the second set of files...

"...One of Mr. Epstein’s former boat captains told The New York Times earlier this year that he had seen Mr. Brin on the island more than once..."

https://dnyuz.com/2026/01/31/powerful-men-who-turn-up-in-the...


What's striking is the sheer scale of Epstein's and Maxwell's scheduling and access. The source material makes it hard to even imagine how two people could sustain that many meetings/parties/dinners/victims, across so many places, with such high-profile figures. And, how those figures consistently found the time to meet them.

Ghislaine making a speech at the UN... https://youtu.be/-h5K3hfaXx4?t=350

> It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

I always thought they deliberately tried to contain the genie in the bottle as long as they could


Their unreleased LaMDA[1] famously caused one of their own engineers to have a public crashout in 2022, before ChatGPT dropped. Pre-ChatGPT, they also showed it off in their research blog[2] doing very ChatGPT-like things, and they alluded to 'risks,' but those were primarily around it using naughty language or spreading misinformation.

I think they were worried that releasing a product like ChatGPT only had downside risks for them, because it might mess up their money-printing operation over in advertising by doing slurs and swears. Those sweet summer children: little did they know they could run an operation with a sieg-heiling CEO who uses LLMs to manufacture and distribute CSAM worldwide, and it wouldn't make above-the-fold news.

[1] https://en.wikipedia.org/wiki/LaMDA#Sentience_claims

[2] https://research.google/blog/lamda-towards-safe-grounded-and...


The front runner is not always the winner. If they were able to keep pace with OpenAI while letting them take all the hits and missteps, it could pay off.

Time will tell if LLM training becomes a race to the bottom or whether the release of the "open source" ones proves to be a spoiler. From the outside looking in: while ChatGPT has brand recognition for the average person, who could not tell the difference between any two LLMs, Google offering Gemini on Android phones could perhaps supplant them.


I swear the Tay incident caused tech companies to be unnecessarily risk averse with chatbots for years.

"Attention Is All You Need" was written by Googlers, IIRC.

Indeed, none of the current AI boom would've happened without Google Brain and their failure to execute on their huge early lead. It's basically a Xerox PARC do-over, with ads instead of printers.

I’m hoping they add better IDE integration to track active file and selection. That’s the biggest annoyance I have in working with Codex.

Idk, I feel like these coding assistant features aren’t that hard to add, but can provide a lot of value to developers. Most or all popular IDEs now support similar features.

I don’t disagree that Apple could use a major focus on bug fixing across their platforms right now though.


Yeah, I'd like to see another OS release like Snow Leopard (10.6.x), which had as a prime focus simplification and so forth, without adding many (any?) features.

At least Elon Musk didn’t end up getting it, which he tried before Google swooped in.

Every line of code is technical debt. Some of the hardest projects I’ve ever worked on involved deleting as much code as I wrote.


Exactly. I once worked on a large project where the primary contractor was Accenture. They threw a party when we hit a million lines of C++. I sat in the back at a table with the other folks who knew enough to realize it should have been a wake.


It’s just the easiest metric to measure progress. Measuring real progress in terms of meeting user needs with high quality is a lot harder.


Oh yeah. At a previous job there was a guy who'd deleted more code than he'd written, which I always found a little amusing.

Being in a similar position to him now though... if it can be deleted it gets deleted.


Yes, but when does Claude have the opportunity to kill children? Is it really something that happens? Where is the risk to Anthropic there?

On the other hand, no brand wants to be associated with CSAM. Even setting aside the morality and legality, it’s just bad business.


> Yes, but when does Claude have the opportunity to kill children? Is it really something that happens?

It's possible that some governments will deploy Claude to autonomous killer drones or such.


There are lots of AI companies involved in making real targeting decisions, and they have been for at least several years.


> On the other hand, no brand wants to be associated with CSAM. Even setting aside the morality and legality, it’s just bad business.

Grok has entered the chat.


So using something once or twice is plenty to give it a fair shake?

How long did it take to learn how to use your first IDE effectively? Or git? Or basically any other tool that is the bedrock of software engineering.

AI fools people into thinking it should be really easy to get good results because the interface is so natural. And it can be for simple tasks. But for more complex tasks, you need to learn how to use it well.


So is it strictly necessary to sign up for the 200 a month subscription? Because every time, without fail, the free ChatGPT, Copilot, Gemini, Mistral, Deepseek whatever chatbots, do not write PowerShell faster than I do.

They “type” faster than me, but they do not type out correct PowerShell.

Fake modules, out-of-date module versions, fake options, fake expectations of object properties. Debugging what they output makes them a significant slowdown compared to just typing and looking up PowerShell commands manually and using the -Help and Get-Help functions in my terminal.

But again, I haven't forked over money for the versions that cost hundreds of dollars a month. It doesn't seem worth it, even after 3 years. Unless the paid version is 10 times smarter with significantly fewer hallucinations, the quality doesn't seem worth the price.


Not necessary. I use the Claude/ChatGPT ~$20 plan. Then you'll get access to the CLI tools, Claude Code and Codex. With the web interface, they might hallucinate because they can't verify anything. With the CLI, it can test its own code and keep iterating on it. That's one of the main differences.


> So is it strictly necessary to sign up for the 200 a month subscription?

No, the $20/month plans are great for minimal use

> Because every time, without fail, the free ChatGPT, Copilot, Gemini, Mistral, Deepseek whatever chatbots, do not write PowerShell faster than I do.

The exact model matters a lot. It's critical to use the best model available to avoid wasting time.

The free plans generally don't give you the best model available. If they do, they have limited thinking tokens.

ChatGPT won't give you the Codex (programming) model. You have to be in the $20/month plan or a paid trial. I recommend setting it to "High" thinking.

Anthropic won't give you Opus for free, and so on.

You really have to use one of the paid plans or a trial if you want to see the same thing that others are seeing.


You are exposing that you haven't learned how to use the tools.

Tools like GitHub copilot can access the CLI. It can look up commands for you. Whatever you do in the terminal, it can do.

You can encode common instructions and info in AGENTS.md to say how and where to look up this info. You can describe what tools you expect it to use.

There are MCPs to help hook up other sources of context and info the model can use as well.

These are the things you need to learn to make effective use of the technology. It’s not as easy as going to ChatGPT and asking a question. It just isn’t.

Too many people never get past this low level of knowledge, then blame the tool.
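As an illustration of the kind of encoded instructions meant above, here is a minimal AGENTS.md sketch for a PowerShell project (the file layout and rules below are my own hypothetical example, not a standard; adapt them to whatever agent tool you use):

```markdown
# AGENTS.md (illustrative sketch)

## Environment
- PowerShell 7 project; run scripts with `pwsh -File <script>`.

## Lookup rules
- Before using any cmdlet you are unsure of, run `Get-Help <cmdlet> -Full`
  in the terminal to confirm its parameters; never guess option names.
- Check installed module versions with `Get-Module -ListAvailable` before
  importing a module.

## Verification
- Lint every changed script with `Invoke-ScriptAnalyzer -Path <script>`
  and fix all reported findings before finishing.
```

The point is that the agent reads this on every run, so the "look it up, don't guess" behavior becomes the default instead of something you prompt for each time.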


I hate that Microsoft did this, but I meant Microsoft 365 Copilot. Not GitHub Copilot. The Copilot I am talking about does not have those capabilities.


GitHub Copilot has a free tier as well. The $20/month one gives you much better models though.

All I’m saying is that the vast majority of people who say that AI dev tools don’t work and are a waste of time/money don’t know how and really haven’t even made a serious attempt at learning how to use them.


To be fair there seems to be a weird dissonance between the marketing (fire your workers because AI can do everything now) and the reality (actually you need to spend time and effort and expertise to setup a good environment for AI tools and monitor them).

So when people just YOLO the latter, they don't get the results they expect.

I'm personally in the middle, chat interface + scripts seems to be the best for my productivity. Agentic stuff feels like a rabbit hole to me.


Well I am not a dev so I am just using the freely available search assist and chatbots. I am not saying the dev tools don’t work; I am saying the chatbot makes up fake PowerShell commands. If the dev tool version is better it still seems significantly less efficient and more expensive than just running “Get-Help” in the terminal from my perspective.


You are not disproving my point. You are just repeating that you don’t want to try to learn how you can actually use AI tools to help you work, but yet you still want to complain online that they are a waste of time and money.


I tried. Several times. Just quick prompting, full agentic, and all I see as a result is mostly garbage to be honest. Not even talking about the atrophy one would get skill-wise by relying on AI tools all the time.


I'm on the $20 plan with Claude. It's worth mentioning that Claude and Codex both support per token billing, if your usage is so light that $20 is not worth it.

But if you use them for more than a few minutes, the tokens start adding up, and the subscriptions are heavily discounted relative to the tokens used.

There are also API-neutral tools like Charm Crush which can be used with any AI provider with API keys, and work reasonably well (for simple tasks at least. If you're doing something bigger you will probably want to use Claude Code).

Although each AI appears to be "tailored" to the company's own coding tools, so you'll probably get better results "holding it right".

That being said, the $3/month Z.ai sub also works great in Claude Code, in my experience. It's a bit slower and dumber than actual Claude, so I just went for the real thing in the end. 60 cents a day is not so bad! That's like, 1/3 of my canned ice coffee... the greater cost is the mental atrophy I am now undergoing ;)


No, it's not necessary to pay 200/mo.

I haven't had an issue with a hallucination in many months. They are typically a solved problem if you can use some sort of linter / static analysis tool. You tell the agent to run your tool(s) and fix all the errors. I am not familiar with PowerShell at all, but a quick GPT tells me that there is PSScriptAnalyzer, which might be good for this.

That being said, it is possible that PowerShell is too far off the beaten path and LLMs aren't good at it. Try it again with something like TypeScript - you might change your mind.


I'm unconvinced that you can learn to use it well while it's moving so quickly.

Whatever you learn now is going to be invalid and wasteful in 6 months.


Who cares if it’s better in 6 months if you find it useful today?

And I reject that anything you learn today will be invalid. It’ll be a base of knowledge that will help you understand and adopt new tools.


It can also backfire and sometimes give you absolute made-up nonsense. Or waste your whole day moving in a circle around a problem.


I'm not so sure. Just think about coding assistants with MCP-based tools. I can use multiple different models in GitHub Copilot and get good results with similarly capable models.

Siri’s functionality and OS integration could be exposed in a similar, industry-standard way via tools provided to the model.

Then any other model can be swapped in quite easily. Of course, they may still want to do fine tuning, quantization, performance optimization for Apple’s hardware, etc.

But I don’t see why the actual software integration part needs to be difficult.
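The swap-in idea can be sketched as a thin, provider-agnostic tool layer. All names here are hypothetical (this is not Apple's or any vendor's actual API); it just shows why the OS integration can stay fixed while the model behind it changes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[dict], str]

# A hypothetical OS capability exposed as a tool, independent of the model
def set_timer(args: dict) -> str:
    return f"Timer set for {args['minutes']} minutes"

TOOLS = {t.name: t for t in [
    Tool("set_timer", "Start a countdown timer", set_timer),
]}

def dispatch(tool_call: dict) -> str:
    # Any model (Gemini, an in-house model, ...) that emits this tool-call
    # shape can drive the same OS integration layer unchanged.
    return TOOLS[tool_call["name"]].run(tool_call["arguments"])

print(dispatch({"name": "set_timer", "arguments": {"minutes": 5}}))
# prints: Timer set for 5 minutes
```

Under this design, replacing the model only changes who produces the tool calls, not the integration layer that executes them.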


> But I don’t see why the actual software integration part needs to be difficult.

That’s not the issue. The issue is that once Gemini is in place as the intelligence behind Siri, the bar is now much higher than today and so you have to be more careful if you consider replacing Gemini, because you’re as likely as not to make Siri worse. Maybe more likely to make it worse.


Oh well that’s a good problem to have, isn’t it? Siri being so good that they don’t want to mess it up.

That gives them plenty of runway to test and optimize new models internally before release and not feel like they need to rush them out because Siri sucks.


Email seems like not only a pretty terrible training data set, since most of it is marketing spam with dubious value, but also an invasion of privacy, since information could possibly leak about individuals via the model.


> Email seems like not only a pretty terrible training data set, since most of it is marketing spam with dubious value

Google probably even has an advantage there: filter out everything except messages sent from one valid Gmail account to another. If you do that, you drop most of the spam and marketing and are left with mostly human-to-human interactions. Then they have their spam filters on top.
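That filtering heuristic is simple to sketch. This is my own toy illustration of the idea, not anything Google has described doing; the function names and the single-recipient rule are assumptions:

```python
# Toy sketch of a "human-to-human" training-data filter: keep only mail
# where both sender and recipient are Gmail addresses.
def is_gmail(address: str) -> bool:
    return address.strip().lower().endswith("@gmail.com")

def keep_for_training(sender: str, recipients: list[str]) -> bool:
    # Assumed heuristic: Gmail on both ends, and a single recipient
    # (bulk mail usually fans out to many addresses or external domains).
    return is_gmail(sender) and len(recipients) == 1 and is_gmail(recipients[0])

print(keep_for_training("alice@gmail.com", ["bob@gmail.com"]))    # True
print(keep_for_training("deals@shop.example", ["bob@gmail.com"]))  # False
```

A real pipeline would obviously need far more signals (spam verdicts, reply chains, consent), but the domain check alone already discards most marketing traffic.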


I'd upgrade that "probably" leak to "will absolutely" leak, albeit with some loss of fidelity.

Imagine industrial espionage where someone is asking the model to roleplay a fictional email exchange between named corporate figures in a particular company.

