> Lessons learnt for both Apple and Google in recent weeks then. If you want to turn our phones into AI-fueled machines, then let us know what you’re doing before you do it, and give us the opportunity to say yes or no. Otherwise it fuels fear of the unknown. And if AI is to bed down on our smartphones with access to all our apps and data, then it needs to establish high trust bars and stick to them rigidly.
Is that really a lesson learnt?
Didn't Google already know what the reaction would be, before they consciously (given it's a significant launch) decided to do it that way?
I can't wait for the "there's been confusion about what we've done" message to come out. Because it's always us, right? The customer is always wrong in this era of techno-feudalism.
The customer is always wrong because the customer is too small to matter. Google's scale gives them all the power in the relationship, undermining the market's role in forcing them to compromise in negotiations. We would need something like consumer unions that can collectively bargain on behalf of customers to offset this power imbalance.
Or, antitrust laws that can break them into smaller units, recognizing that strict monopoly is not the only market failure scenario. That would be good too.
While I am sympathetic to the notion that government is supposed to regulate corporations, it is a grave misunderstanding to conflate it with either unions or corporations.
Suppose I come to your house and start breaking your windows, and I tell you that you don't need windows because it's so easy for me to break them. Is that a good argument, or is the real problem the person that's breaking your windows?
It's true that government is vulnerable to capture, but from that it does not follow that government is bad. It is more logical to conclude that the problem is unbounded personal enrichment allowing individuals to catapult themselves above the law. Corporations are the Roman Legions of the 21st century.
Corporations are also vulnerable to capture, after all, and that is their default condition. Why should we not treat powerful conglomerate corporations with the same skepticism as governments, or armies? Their internal structure is an inefficient command economy; why is this not also treated as a problem? Their scale, diversification, and entrenchment have all allowed them to escape the regulatory power of market competition.
I can buy a Polestar or a VW or a Chevy instead of a Tesla, if I don't like the way Elon Musk runs his company. I have no such option if I don't like the way he runs the government.
The problem isn't the wealth. The problem is the power.
Power follows wealth, regardless of whether there's a government or not, and while the economy is not a zero sum game, power is - economic inequality matters.
The correlation between wealth and power is even stronger in countries where government is weak, and it is thus the function of government to prevent wealth from obtaining unchecked power. It is no wonder that wealth would seek to destroy government. That we appear to be losing this fight does not invalidate its importance.
You may have your choice of oligarchs to buy your car from, but what they will never allow you is the choice to compete with them on a level playing field. A cartel can allow internal competition, but they will all cooperate to crush upstarts.
It's government that interferes with competition, not "oligarchs" or "cartels." Otherwise I could buy a Chinese EV. The only way the "oligarchs" and the "cartels" can interfere with my decision is by using their wealth to capture and subvert the government.
It appears to me that this article, at least the title, is wrong. The update simply adds a local-only API that apps can use; it doesn't do any scanning itself. And Google sent out a notification that they plan to use this API in an update to Google Chat. Notably, it will not (at least for now) communicate the scan results back to Google.
Forgiveness? That's the wrong term here, and inapplicable. Loss of trust is the topic: you don't simply ask for forgiveness if you mess up, you work your ass off rebuilding that trust over a lengthy period of time. And even then it's not 100% back; trust is a finicky thing.
This leads to loss of revenue, maybe small, maybe not. I.e., if Google is paired with the mentality and/or actions of the current US government, then outside the US, American phones and all internet-enabled electronic products should be viewed with the same scrutiny Huawei was subjected to previously, and met with very similar actions.
Not much discussion on HN in the few threads that come up in search. But I want to highlight rini17's comment, which links to a post from GrapheneOS that in turn links to the Google blog post from Oct 2024.
My 2c is that this style of on-device monitoring to warn users/children is an expected evolution of parental content monitoring tools for the LLM era.
GrapheneOS is right. As open source it would be much better. Privacy and trust concerns would be easier to check, and it would give the opportunity to make tools that empower parents and children to decide how they should moderate content on their devices.
Perhaps some communities are very worried about nudity where others are more concerned about hate speech. Having the choice of what to detect and how, or whether, to act would be helpful.
Is there a use case for that? If it is about bypassing parental restrictions or just blurring images because the user requests that feature, the pool of potential buyers seems small and likely too risky to engage.
It's also about blocking spam. Don't know why that's buried in this particular article, but many sources have pointed to that as one of the primary use cases for the new system tool.
> Application error: a client-side exception has occurred (see the browser console for more information).
I got this after the Forbes tab had sat open for a few minutes. Gotta love the modern web.
I hate how modern society all but requires a smartphone running either Android or iOS. I am very happy that something like GrapheneOS (mentioned in the article as well) exists, and allows me to have a smartphone without fully joining either cult and trusting everything those megacorps push. I hate that this is such a vulnerable position to be in.
Sure it does. But at the same time GrapheneOS is one of the very few options where you can run a modern Android smartphone with precautions, use the Play Store only for selected apps (banking, government auth, and in the Netherlands, WhatsApp (unfortunately)), and just use F-Droid for everything else.
I don't mind paying Google for a piece of hardware quite as much as I do running their OS without anyone looking out for the user's privacy. The hardware transaction is much more straightforward and transparent (although by no means perfect).
Before this I used a Ubuntu Phone, and that was fine, but like a dumbphone, you can't use any 'official app store sanctioned' apps with it, which has become really cumbersome these past few years. If using PureOS means I can't install banking apps and government auth apps, then that is a non-starter. I wish that wasn't the case, but it is.
(The article threw this error on my Linux desktop in Firefox by the way.)
I think the author of the article got a few things wrong. As far as I understood, SafetyCore is not "scanning all your photos". It is an API that apps on your phone CAN use to LOCALLY detect nudity and other potentially inappropriate things in photos.
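To make that distinction concrete, here is a rough sketch of what an app calling such a local classifier might look like. The names (ContentClassifier, Verdict, classify) are invented for illustration and are not the real SafetyCore interface, which isn't documented in the article; this is only meant to show the shape of the contract.

```kotlin
// Hypothetical sketch only: ContentClassifier, Verdict and classify() are made-up
// names, NOT the actual SafetyCore API. The point is the contract: the calling app
// hands an image to an on-device model and gets a label back; nothing leaves the
// device unless the app itself decides to send something.
data class Verdict(val nudity: Boolean, val confidence: Float)

interface ContentClassifier {
    fun classify(imageBytes: ByteArray): Verdict
}

// Example policy a messaging app might build on top of such an API:
// blur the preview when the local model is reasonably confident.
fun shouldBlur(imageBytes: ByteArray, classifier: ContentClassifier): Boolean {
    val verdict = classifier.classify(imageBytes)      // runs entirely on device
    return verdict.nudity && verdict.confidence > 0.8f // threshold is the app's choice
}
```

In this sketch the decision to blur, warn, or do nothing stays entirely with the calling app; the classifier has no network side at all.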
I'm pretty sure what Apple proposed was an offline system for scanning, but any matches would be submitted to them. This is just the offline parts, and apps can choose to use it to detect nudity or other NSFW images. There's no scanning going on at all actually.
This is not correct. This is how most people assumed the Apple proposal worked, but it actually worked in a very different way. The device never knew if an image matched. Matches could only be determined via the combination of a receipt calculated on the device, plus information on the server, plus meeting a threshold of many matches. It was not offline scanning and uploading matches.
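For readers who want a feel for the threshold part, here is a toy sketch of the generic idea (Shamir secret sharing over a prime field). This is not Apple's actual construction, which was considerably more involved; it only demonstrates the property being described: fewer than `threshold` shares reveal nothing, while any `threshold` of them reconstruct the secret.

```kotlin
import java.math.BigInteger
import java.security.SecureRandom

// Toy illustration of a t-of-n threshold scheme (Shamir secret sharing).
// NOT Apple's protocol; it only shows the property discussed above:
// holding fewer than `threshold` shares reveals nothing about the secret,
// while any `threshold` of them reconstruct it exactly.
object ToyThreshold {
    private val p: BigInteger = BigInteger.probablePrime(256, SecureRandom()) // field modulus
    private val rnd = SecureRandom()

    // Split `secret` (must be < p) into `count` shares with reconstruction threshold `threshold`.
    fun split(secret: BigInteger, threshold: Int, count: Int): List<Pair<BigInteger, BigInteger>> {
        // Random polynomial of degree threshold-1 whose constant term is the secret.
        val coeffs = listOf(secret.mod(p)) + List(threshold - 1) { BigInteger(255, rnd) }
        return (1..count).map { i ->
            val x = BigInteger.valueOf(i.toLong())
            val y = coeffs.foldIndexed(BigInteger.ZERO) { k, acc, c ->
                acc.add(c.multiply(x.modPow(BigInteger.valueOf(k.toLong()), p))).mod(p)
            }
            x to y
        }
    }

    // Lagrange interpolation at x = 0 recovers the constant term (the secret).
    fun combine(shares: List<Pair<BigInteger, BigInteger>>): BigInteger {
        var secret = BigInteger.ZERO
        for ((xi, yi) in shares) {
            var num = BigInteger.ONE
            var den = BigInteger.ONE
            for ((xj, _) in shares) {
                if (xj == xi) continue
                num = num.multiply(xj.negate()).mod(p)
                den = den.multiply(xi.subtract(xj)).mod(p)
            }
            secret = secret.add(yi.multiply(num).multiply(den.modInverse(p))).mod(p)
        }
        return secret
    }
}
```

With a threshold of, say, 5, a server holding four shares learns nothing; only the fifth share makes reconstruction possible, which is roughly the "nothing is decryptable until the match threshold is met" property described above.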
It was offline scanning and sending uploaded matches to law enforcement, this was very clearly laid out in their plans (that they eventually rolled back after massive uproar).
Of course, you could argue some people that choose to not use iCloud would not get that last bit, but considering they're turning it on by default (and even turning it on whenever you switch devices even though you restore a backup with it off), I'd say that would be a tiny minority of their customers.
Also, since we're on the subject of "unannounced scanning of all photos", Apple went and did it anyway, same as Google turning it on by default but claiming it's only to look for "landmarks" LOL :) https://news.ycombinator.com/item?id=42533685
It was not. The device was incapable of determining matches and the process only applied to photos being uploaded to iCloud.
> Also, since we're on the subject of "unannounced scanning of all photos", Apple went and did it anyway, same as Google turning it on by default but claiming it's only to look for "landmarks" LOL :) https://news.ycombinator.com/item?id=42533685
They did not. The landmark process works in an entirely different way for an entirely different purpose.
Oh that's right, they would basically upload a hash, then flag it using whatever criteria they wanted on the server side. I think some of the outrage too was the question of what happens when they detect something; it was the issue of it reporting content outside of your control at all, which again, this Google thing doesn't seem to do.
That's fair, but the point is still that "The Worst Thing Ever" was the online part, which this doesn't do, so the comparison makes little to no sense. I'll check out their write up though, sounds interesting.
You had to provide multiple confirmed matches to the cloud before the human verifier could decrypt any of the images sent. They weren't scp:ing images to a share for all employees to see ffs.
The proposal was to scan photos people were uploading to iCloud not all photos[1]. The panic on HN was a) misunderstanding or deliberately misrepresenting the proposal as if it was scanning all offline photos, b) fantasising what if they don't do what they announced and instead scan all offline photos, c) realising that all Silicon Valley tech companies can change client software through updates, therefore Apple bad. [There were non-panicky comments about whether it's a legally significant move, whether it's a wedge change, whether it's abusable by governments in other ways, etc. the panicky ones were not those].
> "it was the issue of it reporting content outside of your control at all, which again, this Google thing doesnt seem to do."
All the big cloud providers [Google, Microsoft, Facebook] report abusive imagery sent to their clouds to the authorities (search for the annual NCMEC reports), except Apple. The others slurp up unencrypted data (Facebook photos, Google Drive, Microsoft OneDrive), scan it, and report on it, and nobody [on HN] says anything. Apple was trying to do a more privacy-preserving approach, and it seemed like they might be pushing it into the client upload code so they could offer fully encrypted storage where they couldn't scan photos on their side. A couple of years later they did: they announced optional Advanced Data Protection[2], which fully encrypts iCloud photos among other things.
Dark patterns aside, it's in your control whether to upload data to companies, so 'reporting content outside your control' is deliberately misrepresenting it.
> There were non-panicky comments about whether it's a legally significant move, whether it's a wedge change, whether it's abusable by governments in other ways, etc. the panicky ones were not those
That’s what I find most frustrating about the reaction to this. This was a sophisticated, privacy-preserving process that represented a genuine step forward in the state of the art and there was an interesting argument to be had about whether it struck the right balance. But it was impossible to have that argument because it was drowned out by an overwhelming amount of nonsense from people guessing incorrectly about how it worked and then getting angry about their fantasies.
This is true of literally any software with auto updates.
You’re criticising them for something they did not do, did not intend to do, and for which they designed a system that worked in an entirely different way… just because they could have done it differently from what they actually proposed.
If they wanted to do it that other way, they could have just done it that other way in the first place and saved themselves a lot of effort.
In general, the less you have to trust a company, the better. Free software lets you reduce the trust you place in the vendor by auditing the code and forking whenever you have to.
As far as I remember, Apple was planning to detect CSAM and upload the detected images to the cloud, showing the photos to Apple employees. This is not at all comparable to a locally running API that returns something like "{nudity: true}".
You needed to have multiple (exact count undefined) confirmed matches before anything was sent anywhere.
And even then "Apple employees" would only see a "reduced quality" (Can't remember the actual wording) version of the images. Basically just enough for them to determine if there's something actually illegal in there.
You'd have to be the unluckiest person in the world to get 5 false matches against actual verified child abuse imagery. And even then you'd just slightly inconvenience a human checker.
Now they're doing it in the cloud for every image you upload unless you turn on Advanced Data Protection. Just like every provider.
I think I'm literally one of the very few people in the world who actually read and understood the white paper - and I have the downvotes to prove it :)
More privacy-invading bloatware for Android users. This is like an app that demonstrates how chat control would work. Nobody stops Google from flagging "interesting" images/files as suspicious and uploading them to some cloud service for "examination". Besides, there are countless other ways to avoid exposure to CSAM.
Google's move of installing the app without my permission makes me distrust the company even more. And they did the same thing again with "Android System Key Verifier" which has a different purpose, but they should have asked for my consent. My reaction: uninstalled both apps and turned off auto-updates. I thought I owned my smartphone.
I've been very reluctant for several years to replace the stock Android on my Pixel with GrapheneOS, since I don't want to back up lots of things, deal with partially broken banking apps, and have another Linux moment where I have to figure out a bunch of workarounds from scratch, but it seems that Google doesn't give me any choice.
One espionage scandal after another: first Pixels send their location to Google even without location sharing enabled, then several attempts to add some AI crap with a completely closed-source, who-knows-what-it-does implementation.
Does anybody use it? Will I have a hard time migrating from Pixel's stock Android?
I've seen this article circulating other sites, and it's kind of crazy to compare that headline and the content to the reality of what's happening here. In fact, reading the article now, I see it's continuously been edited to add quotes pulled from Reddit, so I have to assume the goal here was largely just rage/engagement bait.
The reality is that it's an app that offers some on-device models to detect NSFW content. Apps can choose to use those models instead of having to implement it themselves, and I think that's kind of it. There's no scanning, there's no uploading; the article is even pulling quotes from a random forum that claims it's listening to your microphone and reading your contacts. The gap between what's written and reality is really ridiculous.
There's a discussion to be had around Google's ability to update software on our phones without our control, absolutely. But that's not what this article is doing, and it won't lead the majority to that topic. This article is like reading a conspiracy website: it's a flashy headline with a kernel of truth, but it just leads to dangerous behavior, like all the people installing stub APKs from GitHub to keep this from installing.
Actually, all we have is Google's word; we don't know exactly what this will do. Is there any proof that it will not phone home to report whatever in some cases? Even if only for a specific whitelist, or a government-mandated secret whitelist? In the same way, if the Libra was pushed, there are probably things that have been pushed into other Google apps at the same time, secretly for the moment.
Thanks, this is how I've understood them for years now. I flagged this submission not because we shouldn't be critical and have a discussion about whatever this tool does, but because I would rather see a security professional disassembling and reverse engineering it to see what it does, versus ragebait sourcing Reddit comments.
If that had been done in the first place, would you have understood the implication and reasoning behind it? Perhaps, now that this article has brought something troubling to light, someone else can research the claims and provide facts. Nothing wrong with this submission. Not long ago my long-time Google account was shut down over inappropriate material hosted on my Drive, and it was never made clear to me what that material was. Along with it, hundreds of photos of my mom and family that I don't have any copies of were deleted. My assumption was that it was naked photos of me as a kid with my mom that triggered the child porn detection system, which is a shame, because all those photos got deleted without any evidence or notice of what actually happened. So no, I don't agree with your opinion. However, I ask you to allow people the ability to surface the insecurities that we all share about big tech.
Yes you are right that this is nonsense. They have clearly already been doing this for years on the backend at a minimum to power the Google Photos search functionality, and who knows what else.
I would like to understand how this is done. Is this flashing some unofficial Android version? What hindrances does this cause -- can you run typical apps? The apps I use most often are WhatsApp, LinkedIn, Uber, Outlook, etc. Thanks.
It heavily depends on the phone you have. I used LineageOS on an older Moto, but if you have a Google Pixel, you might want to try GrapheneOS. I followed the steps on the wiki, involving running fastboot and adb commands to load the images. As for app stores, I primarily use F-Droid, but also have Aptoide. I don't use any apps that require the Google services frameworks, but if you need that, maybe MicroG will suffice.
It completely blows away everything on the internal storage (but not SD cards), so no going back without reflashing from another device. You might want to have a stock image from the manufacturer handy just in case. I struggled with a "Baseband: <not found>" error for an entire day (preventing calls and sms) until I flashed an image that worked.
Though if you want the real Google ecosystem back, instead of going back to stock, try reflashing LineageOS and add in the MindTheGapps zip. MindTheGapps includes the real Google Play Store and other components, just like most Android phones. The LineageOS flashing directions for your phone should mention when and how to do that. You'll have a phone with Google, but without BS apps that you can't delete (aside from simple phone, sms, contacts, camera, etc apps), and should run faster.
In other words, you're in the 50% of the population who don't get unsolicited d*ck photos sent to them. But do consider that the other 50% may have significantly different preferences on this matter, and that those preferences may drive tech company product decisions.
I don't follow your logic. How would one's membership in a big tech ecosystem determine whether they get unsolicited pics? Or whether they feel they can jump out of those ecosystems?
> But do consider that the other 50% may have significantly different preferences on this matter
Or am I reading it the wrong way? 50% of people want to get those pics? That's either news to me, or I run in different circles than you do.
Either way, I don't feel that I'm evangelizing about it, I'm not telling everyone that they should do what I'm doing. I'm merely telling my experiences and what I've done.
@tetromino_ is expressing a prior that women are sent dick pics more frequently than men.
From that they infer that women might actually like a feature that blurs out dicks automatically (at least, more than men would).
From that they infer that your expressed dislike of that feature means it's likely you're a man.
From that they infer that you're not used to thinking like a woman, and encourage you to try doing that to see if you think it makes Google's actions make more sense.
(now, the real question - did I get "whooosh"ed, or did you?)
EDIT: To be honest, I don't particularly like the sarcasm of @tetromino_'s comment and I don't think your original comment deserved a response like that. But seeing two responses apparently totally miss @tetromino_'s point was too much for me to ignore.
Even more bizarre. I have an unrooted Android phone with the SafetyCore app, and I was able to uninstall it. Not just "uninstall updates" either; the whole app. It will probably keep reinstalling itself, though?
Is it just me, or is this outrage super amusing given the fact that for a long time you could search through your Google Photos (on Android, iOS or web) for e.g. “baby” and get a bunch of hits?
> Is that really a lesson learnt?
> Didn't Google already know what the reaction would be, before they consciously (given it's a significant launch) decided to do it that way?
Why would they care about a little blogger noise?