
I wonder if Apple ever approached Google about using Gemini as the flagship integration. I say that because during the keynote I kept thinking to myself, this could be the moment that Google realises it needs to stick to what it knows best - Search - and all they have to do is sit back and watch the hype fade away.

But that’s in a perfect world.

Even to this day, post-ChatGPT, I still can't imagine how I would ever use this AI stuff in a way that really makes me want to use it. Maybe I'm just too simple-minded?

Maybe the problem is the way it's presented: too much all at once, with too many suggestions of where and how it can be used. Rewriting emails or turning invitations into "poems" instead of plain text is exactly the kind of cringe that companies want to push, but it's really just smoke and mirrors.

Companies are telling you to use features you wouldn't otherwise need. Look at the email Apple rewrote in the keynote: the rewritten version was immediately recognisable as robotic AI slop.




My understanding is that Apple's approach to this integration is adaptable; much like changing your browser's default search engine, you'll be able to choose which external AI model is used: ChatGPT, Gemini, Claude, etc.


I don't think the choice of integration really matters for GP's point. Regardless of which model is used, how useful is the ability to rewrite an email in AI Voice really going to be? If I'm struggling over how to word an email, there's usually a specific reason for it: maybe I'm trying to word things for a very particular audience, or trying to find a concise way to cover something complicated that I know in depth. General-purpose language model output wouldn't help at all in those cases.

I'm sure there are use cases for this and the other GenAI features, but they seem more like mildly useful novelties than anything revolutionary.

There's risk to this as well: making it easier to produce low-value slop will probably lead to more of it, and could actually make communication worse overall.


TBF I was too harsh in my original comment. I did use ChatGPT to automate away the chore part of coding (boilerplate, for example). But I have a gut feeling that in maybe 5-10 years this is going to replace some junior programmers' jobs.

My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.


> My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

The first company to offer its models for offline use, preferably delivered in a shipping container you just plug in, with the ability to "fine tune" (or whatever the technique is) on all their internal stuff, wins the money of everyone who has security/confidentiality requirements.
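
Something close to this already works today for open-weights models. A minimal sketch of fully offline inference using the Hugging Face transformers library, assuming the checkpoint was copied to an on-prem path ahead of time (the path and prompt are hypothetical):

    # Minimal sketch: fully offline inference with an open-weights model.
    # Assumes the checkpoint was already copied to this on-prem path;
    # /opt/models/internal-llm is a placeholder, not a real checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_DIR = "/opt/models/internal-llm"

    # local_files_only=True guarantees no network access is attempted
    tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

    prompt = "Explain what this internal function does: def frobnicate(x): ..."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))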


Unless the company handles national security, the existing cloud ToS and infrastructure fulfill all the legal and practical requirements. Even banks and hospitals use the cloud now.


The context here is running a third-party LLM, not running arbitrary things in the cloud.

> the existing cloud ToS and infrastructure fulfill all the legal and practical requirements

No, because the practical requirements are set by the users, not the ToS. Some companies, for the practical purposes of confidentiality and security, DO NOT want their information on third-party servers [1].

Top third-party LLMs are usually behind an API, with things like retention happening on those third-party servers for content-policy/legal reasons. An on-premise offering that keeps that content-policy/legal retention on premise, available for any needed retrospection (say, after some violation threshold), would unlock a bunch of $$$ from those customers.
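
To make that concrete, a sketch of what "retention on premise" could look like in practice: every prompt/response pair goes to a local audit log instead of a vendor's servers. `call_local_model` is a hypothetical stand-in for whatever on-prem inference you actually run:

    # Sketch: keep the retention/audit trail on premise instead of on a
    # vendor's servers; assumes the log directory exists and is writable.
    import json
    import time

    AUDIT_LOG = "/var/log/llm/audit.jsonl"  # hypothetical local path

    def call_local_model(prompt: str) -> str:
        return "stubbed model output"  # replace with on-prem inference

    def audited_generate(prompt: str, user: str) -> str:
        response = call_local_model(prompt)
        record = {"ts": time.time(), "user": user,
                  "prompt": prompt, "response": response}
        # Retrospection (e.g. after a violation threshold) reads this
        # file locally; nothing leaves the premises.
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return response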

[1] Companies That Have Banned ChatGPT: https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/

edit: whelp, or go this route, and treat the cloud as completely hostile (which it should be, of course): https://news.ycombinator.com/item?id=40639606


If it can automate a junior away, it seems just as likely to make that junior more capable instead.

Somebody still needs to make the decisions it can't make well, and some of those decisions don't require seniority.


That’s not what happens.

What happens is if you don’t need junior people, you eliminate the junior people, and just leave the senior people. The senior people then age out, and now you have no senior people either, because you eliminated all the junior people that would normally replace them.

This is exactly what has happened in traditional manufacturing.


> this could be the moment that Google realises it needs to stick to what it knows best - Search

In my mind Google is now a second-class search engine, like Bing. Kagi has savagely pwned Google.


> this could be the moment that Google realises it needs to stick to what it knows best - Search

You misspelled "ads"



