
> These cases will be manually reviewed. That is, according to Apple, an Apple employee will then look at your (flagged) pictures.

I'm surprised this hasn't gotten more traction outside of tech news media.

Remember the mass celebrity "hacking" of iCloud accounts a few years ago? I wonder how those celebrities would feel knowing that some of their photos may be falsely flagged and shown to other people. And that we expect those humans to act like robots and not sell or leak the photos, etc.

Again, I'm surprised we haven't seen a far bigger outcry in the general news media about this yet, but I'm glad to see a lot of articles shining light on how easy it is for false positives and hash collisions to occur, especially at the scale of all iCloud photos.



They wouldn't be falsely flagged. It doesn't detect naked photos; it detects photos whose hashes match real, confirmed CSAM in NCMEC's database.


The posted article, like many others we've seen recently, demonstrates that collisions are possible, and most likely inevitable at the scale of photos to be scanned for iCloud; Apple acknowledges this themselves.
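As a rough back-of-envelope sketch (every number below is a made-up placeholder, not Apple's published figures), the expected number of accidental matches scales with the number of photos times the size of the hash database, so even a tiny per-comparison false-match rate adds up:

    # Hypothetical expected-false-match estimate for perceptual-hash scanning.
    # All numbers are assumptions for illustration, not Apple's data.
    photos_scanned  = 1.5e12   # assume ~1.5 trillion photos across iCloud
    database_hashes = 1e5      # assume ~100k known-CSAM hashes in the database
    p_false_match   = 1e-14    # assumed per-comparison false-match probability

    expected = photos_scanned * database_hashes * p_false_match
    print(f"Expected false matches: {expected:,.0f}")  # -> 1,500

Even under these charitable assumptions you'd expect on the order of a thousand innocent photos to trip a hash match, which is exactly why a single raw match can't be treated as proof and why Apple says flagged cases get human review.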

It doesn't necessarily mean that all flagged photos would be explicit content, but even if they're not, is Apple telling us that we should have no expectation of privacy for any photos uploaded to iCloud, after running so many marketing campaigns on privacy? The on-device scanning is framed as a privacy measure too, so they wouldn't have to decrypt the photos on their iCloud servers with the keys they hold (and maybe also save some processing power).


Apple already use the same algorithm on photos in email, because email is unencrypted. Last year Apple reported 265 cases according to the NYT. Facebook reported 20.3 million.

Devolving the job to the phone is a step toward making things more private, not less. Apple don't need to look at the photos on the server (and all cloud companies in the US are required to inspect photos for CSAM) if it can be done on the phone, removing one more roadblock to end-to-end encryption.


> all cloud companies in the US are required to inspect photos for CSAM)

This is extremely disingenuous. If their devices uploaded content with end to end encryption there would be no matches for CSAM.

If they were required to search your materials generally, then they would be effectively deputized-- acting on behalf of the government-- and your Fourth Amendment protection against unlawful search would extend to their activity. Instead we find that both cloud providers and the government have argued, and the courts have affirmed, the opposite:

In US v. Miller (2017):

> Companies like Google have business reasons to make these efforts to remove child pornography from their systems. As a Google representative noted, “[i]f our product is associated with being a haven for abusive content and conduct, users will stop using our services.” McGoff Decl., R.33-1, PageID#161.

> Did Google act under compulsion? Even if a private party does not perform a public function, the party’s action might qualify as a government act if the government “has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the” government. [...] Miller has not shown that Google’s hash-value matching falls on the “compulsion” side of this line. He cites no law that compels or encourages Google to operate its “product abuse detection system” to scan for hash-value matches. Federal law disclaims such a mandate. It says that providers need not “monitor the content of any [customer] communication” or “affirmatively search, screen, or scan” files. 18 U.S.C. § 2258A(f). Nor does Miller identify anything like the government “encouragement” that the Court found sufficient to turn a railroad’s drug and alcohol testing into “government” testing. See Skinner, 489 U.S. at 615. [...] Federal law requires “electronic communication service providers” like Google to notify NCMEC when they become aware of child pornography. 18 U.S.C. § 2258A(a). But this mandate compels providers only to report child pornography that they know of; it does not compel them to search for child pornography of which they are unaware.


This is not disingenuous - it's a statement of the reality within which we live. You can claim all you like that there is no coercion taking place, and that e2e would solve all ills, but it doesn't change the facts that:

- All cloud providers scan for it. Facebook, Google, Amazon, Apple, Imgur ... There's a list of 144 companies at NCMEC. There must be a damn good reason for that consensus...

- Because they scan for it, they are obliged (coerced, if you will) to report anything they find. By law.

- Facebook (to pull an example out of the air) reported 20.3 million times last year. Google [1] reported 365,319 for July-Dec and are coming up on 3 million reports in total. Apple reported 265 cases last year.

- Using e2e doesn't remove the tarnish of CSAM being on your service. All it does is give some hand-wavy deniability: "oh, we didn't know". Yes, but you chose not to know by enforcing e2e. That choice was the act, and kiddy-porn providers flocking to your service was the consequence. Once the wheels of justice turn a few times, and there becomes a trend of [insert your e2e service] being where all the kiddy-porn is stored, there's no coming back.

The problem here is that there's no easy technical answer to a problem outside the technical sphere. It's not the technology that's the problem, it's the users, and you don't solve that by technological means. You take a stand and you defend it. To some, that will be your solution ("It's all e2e, we don't know or take any ownership, it's all bits to us"). To others, it'll be more like Apple's stance ("we will try our damnedest not to let this shit propagate or get onto our service"). Neither side will easily compromise much towards the other, because both of them have valid points.

You pays your money and you takes your choice. My gut feeling is that the people bemoaning this as if the end-times were here will still all (for reasonable definitions of "all") be using iCloud in a few months time, and having their photos scanned (just like they have been for ages, but this time on upload to iCloud rather than on receipt by iCloud).

[1] https://transparencyreport.google.com/child-sexual-abuse-mat...


> You can claim all you like that there is no coercion taking place

It's not just me claiming it, it is Google claiming it under oath. If they were perjuring themselves and their scanning was, in fact, coerced by the government, it would not improve the situation: in that case, by scanning in response to government coercion, they would be aiding an unlawful violation of their users' Fourth Amendment rights and covering it up.

If you'd like to sustain an argument that there is a large conspiracy of tech companies along with the government to violate the public's constitutional rights on a massive scale and lie in court to cover for it-- well, I wouldn't be that shocked. But if that's true then their complicity is a VASTLY greater ethical failure.

I think, however, we should heavily discount the prospect of a vast conspiracy to violate the public's constitutional rights. It would be far from a new thing for a large number of commercial entities to violate the civil rights of the public without any government coercion to do so.

> My gut feeling is that the people bemoaning this as if the end-times were here will still all (for reasonable definitions of "all") be using iCloud in a few months time, and having their photos scanned

Well not me. I don't use any of those privacy invading services, and I hope you won't either!

And there are plenty of data storage providers that encrypt users' data by default.


Am I missing something? Apple says they literally scan stuff locally on your iCrap now and call the police on you if you have $badstuff. Nobody should be having their data scanned in the first place. Is iCloud unencrypted? Does such a thing exist in 2021? I've been using end-to-end crypto since 2000. I don't understand why consumers want their devices to do all kinds of special non-utilitarian stuff (I mean I totally understand, it's called politics).

This new iCrap is like a toaster that reports you if you put illegally imported bread in it. And just like that toaster, it will have no measurable impact on illegal imports. Even if $badguys are dumb enough to keep using the tech (iCloud???) and lots go to jail, lots more will appear and simply avoid the exact, specific mistake that sent the previous batch to jail. They don't even have to think.

The problem with all this is that everyone is applauding Apple for their bullshit, and so they will applaud the government when they say "oh no, looks like criminals are using non-backdoored data storage methods, what a surprise! we need to make it illegal to have a data storage service without going through a 6 month process to setup a government approved remote auditing service".

Then there's also the fact that this is all a pile of experimental crypto [1] being used to solve nothing. Apple has created the exact situation of Cloudflare Pass: they pointlessly made $badip solve a captcha to view a read-only page, then provided a bunch of experimental crypto in a browser plugin to let him reuse one captcha across multiple domains (which would normally each require their own captcha and corresponding session cookie). They later stopped blocking $badip altogether after they realized they were wrong (this took literally 10 years).

1. https://www.apple.com/child-safety/ "CSAM detection" section


If there were no false positives there would be no legitimate reason for Apple to review-- they would just be needlessly exposing their employees to child abuse material.

But the fact that there is no legitimate reason according to the system's design doesn't prevent there from being an illegitimate reason: Apple's "review" undermines your legal due process protection against warrantless search.

See US v. Ackerman (2016): the appeals court ruled that when AOL forwarded an email whose attachment's hash matched the NCMEC database to law enforcement without anyone looking at it, and law enforcement then looked at the email without obtaining a warrant, that was an unlawful search. Had AOL looked at it first (which they can do by virtue of your agreement with them), gone "yep, that's child porn", and reported it, it wouldn't have been an unlawful search.


If that always worked, a manual review would not be necessary. Just send the flagged photo and its owner straight to the police.


It will flag pictures that match a perceptual hash of pictures of child abuse. Now what legal kinds of pictures are most similar in composition, color, etc. to those offending pictures? What kinds of pictures would be hardest to distinguish from offending pictures if you were given only 16x16 thumbnails?

I'm going to bet the algorithm will struggle the most with exactly the pictures you don't want reviewers or the public to see.
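To make that concrete, here's a toy average hash ("aHash") in plain Python (not Apple's NeuralHash, just the simplest possible perceptual hash), showing how two entirely different tiny "images" with the same rough composition collapse to the identical hash:

    # Toy average hash: one bit per pixel, "brighter than the image's own average?"
    # Purely illustrative; real perceptual hashes (and NeuralHash) are far more complex.
    def average_hash(pixels):
        avg = sum(pixels) / len(pixels)
        return ''.join('1' if p > avg else '0' for p in pixels)

    # Two *different* 4x4 grayscale thumbnails sharing the same composition:
    # dark top row, bright middle rows, dark bottom row.
    image_a = [ 10,  20,  15,  12,
               200, 210, 205, 198,
               190, 220, 215, 201,
                18,  25,  11,  16]
    image_b = [ 40,  35,  30,  44,
               150, 160, 170, 155,
               165, 158, 149, 172,
                33,  29,  41,  38]

    print(average_hash(image_a))  # 0000111111110000
    print(average_hash(image_b))  # 0000111111110000  (identical, despite different pixels)

Real perceptual hashes are deliberately built so that crops, re-encodes and near-duplicates still match; that robustness is exactly what makes unrelated-but-similar-looking pictures the hardest cases to keep out.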


Hashes, no false matches, pick one.


That really alarmed me. I don't think a hosting provider like Apple should have a right to access private pictures, especially just to enforce copyright.

Edit: I see now it's not about copyright, but still very disturbing.


So we have an Apple employee, the type of person who gets extremely offended over such things as "Chaos Monkeys," deciding if someone is a criminal? No thanks!



