Hacker News
Ex-Googler – AI work is driven by 'a stone cold panic' (businessinsider.com)
37 points by Brajeshwar 12 days ago | hide | past | favorite | 17 comments

He's a UI strategist, and from a UI perspective some of these AI things are a mess.

But Google says that their mission is to organize the world's information and make it universally accessible and useful. A technology that understands and generates human language, with all its idiosyncrasies and connotations, is probably pretty important to that mission. And OpenAI stole a march on Google in commercializing it.

Maybe they justifiably panicked a little because they were starting to miss the boat.

It's a fine line between running around like a chicken with its head cut off and entering terminal decline from doing too little, too late.


That mission of organizing the world's info fell by the wayside after they became the world's largest human-attention trading (some call it thieving) marketplace.

They can't reconcile both missions.

Ideally Alphabet should be running this as a separate company, free from any obligation to incorporate what it builds into Google's existing lineup of garbage no customer ever asked for. That ever-growing landfill has mostly been built up to increase digital real estate on which to sell ads. And if it all burns down, it's good for the planet.


> "This myopia is NOT something driven by a user need,"

I think this has become pretty clear across the board at this point, personally. Broader AI aside, the LLMs/GPTs of the world are settling into some fairly narrow use cases relative to the extremely broad, borderline "everything", use cases they are sold as. The zeitgeist is already starting to cool, I think.


I think the phrase you are looking for is that LLMs/GPTs are "a solution in search of a problem". They suffer from the exact same problem that every neural network has: their behaviour is unpredictable in corner cases. That's not something one can trust their business to. There have been enough legal issues created by simple chat bots agreeing to deals with customers that I don't see how one can build trustworthy automation on top of LLMs/GPTs without a human in the loop to catch the failures. Of course, plenty of people will do that regardless.
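To make the human-in-the-loop point concrete, here's a minimal sketch of what that gating might look like. All names and the keyword rules are hypothetical, not from any real product: a reply from an LLM is only sent automatically if it doesn't look like it's making a commitment; anything risky gets routed to a human queue.

```python
# Hypothetical human-in-the-loop gate for LLM chatbot output.
# The risk patterns and function names are illustrative assumptions,
# not a real library's API.

import re

# Replies that mention deals, refunds, discounts, or dollar amounts
# look like commitments, so they go to a human instead of being sent.
RISKY_PATTERNS = [r"\bdeal\b", r"\brefund\b", r"\bdiscount\b", r"\$\d"]

def route_reply(llm_reply: str) -> str:
    """Return 'auto' if the reply is safe to send, 'human' if it needs review."""
    lowered = llm_reply.lower()
    if any(re.search(p, lowered) for p in RISKY_PATTERNS):
        return "human"
    return "auto"

# A chatbot agreeing to a deal gets held for review; a factual answer goes out.
print(route_reply("Sure, we can offer you a $200 refund as a deal."))  # human
print(route_reply("Our store hours are 9am to 5pm on weekdays."))      # auto
```

The real difficulty, of course, is that no keyword list covers the corner cases either, which is why the human reviewer stays in the loop rather than being replaced by a second filter.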

100% agree. You can even see it here on HN. When was the last time you saw a post where someone was excited about something they were actually _doing_ with an LLM? Mostly I see threads about business hype and the periodic conversation started by a confused person asking, "So, uh, I tried LLMs... how do you actually get any value out of them?" And I think programming is actually the domain where LLMs will be MOST effective.

AI feels like Google's Xerox PARC moment. Xerox PARC basically invented the modern PC, but others commercialized and profited.

Would Xerox have been better off if they had a stone cold panic reaction to the realization that others were commercializing their work? Probably.


I think the recent interview by The Verge with Sundar Pichai[0][1] kind of reaffirms what this article/person is saying.

As I said in my comment yesterday[2], Sundar was giving really shallow answers and 90% of the answers he gave were so broad that they could have already changed from yesterday to today.

The real issue is that Google is doing a lot of real-time testing with all the AI features and that is wreaking havoc not only for publishers but also for users.

[0]: https://www.theverge.com/24158374/google-ceo-sundar-pichai-a...

[1]: https://news.ycombinator.com/item?id=40415846

[2]: https://news.ycombinator.com/item?id=40416898


It is, but on the plus side we will get cheap NPUs to do stupid shit with. MS is asking for 40 TOPS for Copilot+ certification.

That's roughly 4X what almost anything has today; even the M4, which is as next-gen as it gets, barely reaches 38 TOPS, and there's nothing close to that from AMD or Intel.

This will probably cause a race to the bottom in cost of NPUs.

It will be fun and a lot of crappy apps will be built (and a lot of money will be wasted), but many things will get better once we figure out what these are good for.

Object selection, image editing, audio editing, audio isolation, summarization, etc. all those used to be extremely hard, and will get a lot better.



What I don't understand is why they can't use AI to improve search quality. I'm sure there are tons of reasons, like institutional paralysis, but from an outside perspective it's frustrating because I'd be happy with search that just works.

I use ChatGPT these days largely as a better search. Where Google is flooded with junk, ChatGPT usually answers my question straightforwardly. If it's something important to me, I'll go back to Google to verify that the answer I got wasn't a pure hallucination, and that's usually pretty easy to do now that I have the key details I was missing before.


ChatGPT quality is going to degrade over time. Google search results were amazing before monetization and SEO kicked them in the teeth.

The same is going to happen to ChatGPT once they start trying to extract more value out of it.


I'm curious how an AI assistant will "lock me in." It's different than services that depend on network effects like Facebook. I suppose giving it access to things like Gmail is a sort of "lock". So far I haven't seen many compelling use cases for having personalized info that would also stop me from using other companies' AI products.

That's exactly how they're going to lock you in. When your AI assistant is trained on 10 years of your texts and emails, the lock-in potential is going to be huge. Switching phone ecosystems will be as much of a hassle as firing a personal assistant who has been handling all your crap for 10 years.

I think this is pretty obvious to anyone who has been following this space closely.

Well, even if this is extremely lazy Business Insider LinkedIn-tier journalism, it does align with a small anecdotal experience I had. Google came over to pitch their generative AI stuff.

The sales guy almost yelled, "People tell us Google is missing the boat on AI?! People, we BUILT the boat," while referencing "Attention Is All You Need" (an admittedly seminal 2017 paper written by academics who were on Google's payroll back then).

Extremely cringe and unconvincing.



TL;DR: A designer at Google who, by their own admission, had no-to-limited visibility into any plans says leadership doesn't know what they're doing. Lazy media outlets driven by clickbait headlines pass this on as proof of assumed broader industry trends.


