Apple defends anti-child abuse imagery tech after claims of ‘hash collisions’ - https://news.ycombinator.com/item?id=28225706
Convert Apple NeuralHash model for CSAM Detection to ONNX - https://news.ycombinator.com/item?id=28218391
This is useful for two purposes I can think of. One, you can randomize all the vectors on all of your images. Two, you can make problems for others by giving them harmless-looking images that have been cooked to give particular hashes. I'm not sure how bad those problems would be – at some point a police officer does have to look at the image in order to get probable cause. Perhaps it could lead to your Apple account being suspended, however.
Of course, this is assuming everything works as intended and they don't find anything else they can use to charge you with something as they search your home. If you smoke cannabis while being in the wrong state, you're now in several more kinds of trouble.
They will instead contact the police and say "Person X has Y images that are on list Z," and let the police get a warrant based off that information and execute it to check for actual CSAM.
iCloud is encrypted, so that warrant is useless.
They need to unlock and search the device.
> iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari Browsing History, Maps Search History, Messages and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, Business Chat, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the encryption keys. Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the customer’s account, may be provided in response to a search warrant issued upon a showing of probable cause, or customer consent.
They regularly are, and they regularly give up customer data to comply with subpoenas. They hand over customer data in response to government requests covering roughly 150,000 users/accounts a year.
Man that seems horrible. So you just have to trust the description is accurate? You’d think there’d at least be a “private viewing room” type thing (I get the obvious concern of not giving them a file to take home)
That said, I'm not willing to say it won't happen. There are too many law enforcement entities of wildly varying levels of professionalism, staffing, and technical sophistication. Someone innocent, somewhere, is likely to have a multi-year legal drama because their local PD got an email from Apple.
And we haven't even gotten to subjects like how some LEOs will happily plant evidence once they decide you're guilty.
You could take actual CSAM, check if it matches the hashes and keep modifying the material until it doesn’t (adding borders, watermarking, changing dimensions etc.). Then just save it as usual without any risk.
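That modify-until-it-misses loop is easy to sketch. Below, a toy 8x8 average hash stands in for NeuralHash (an assumption for illustration; the real model is a neural network, but the evasion mechanics are the same):

```python
import numpy as np

def toy_phash(img: np.ndarray) -> int:
    """Toy stand-in for a perceptual hash: an 8x8 average hash.
    Block-average down to 8x8, then threshold each cell against the mean."""
    h, w = img.shape
    small = img[:h - h % 8, :w - w % 8].reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def evade(img: np.ndarray, blocked_hashes: set) -> np.ndarray:
    """Keep adding a border until the hash no longer matches the blocklist."""
    out = img
    while toy_phash(out) in blocked_hashes:
        out = np.pad(out, 8, constant_values=255)  # white border, then retry
    return out
```

Against a real perceptual hash the edits would be subtler (crops, recompression, watermarks), but the attacker's test loop is the same: mutate, re-hash, stop when it misses.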
In fact, Apple themselves generate fake/meaningless safety vouchers a certain percentage of the time (see synthetic safety vouchers.) If a jailbroken phone could trigger that code path for all images in the pipeline, apple’s system would be completely broken.
On the other hand, this may be just the excuse apple needs to lock down the phone further, to “protect the integrity of the CSAM detection system.” Perhaps they could persuade congress to make jailbreaking a federal crime. Perhaps they’re more clever than I ever imagined. Or perhaps they can fend off alternate App Store talk for sake of protecting the integrity of the system. Or perhaps staying up too late makes me excessively conspiratorial.
Whatever you say about Apple, they are an extremely well oiled communication machine. Every C-level phrase has a well thought out message to deliver.
This interview was a train wreck. Joanna kept asking Craig to please explain it in simple terms, and he stayed hesitant and inarticulate. It was so bad that she had to produce infographics to fill the communication void left by Apple.
They usually do their best to “take control” of the narrative. They were clearly caught way off guard here. And that's revealing.
This was painful to watch.
And because of this they calibrated their communication completely wrong, focusing on the on device part as being more private. Using the same line of thinking they use for putting Siri on device.
And the follow-up was an uncoordinated mess that didn't help either (as you rightly pointed out with Craig's interview). In the Neuenschwander interview, he stated this:
> The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.
This still has me confused, here's my understanding so far (please feel free to correct me)
- Apple is shipping a neural network trained on the dataset that generates NeuralHashes
- Apple also ships (where?) a "blinded" (by an elliptic-curve algorithm) lookup table that maps (all possible?!) NeuralHashes to a key
- This key is used to encrypt the NeuralHash and the derivative image (that would be used by the manual review) and this bundle is called the voucher
- A final check is done on server using the secret used to generate the elliptic curve to reverse the NeuralHash and check it server side against the known database
- If 30 or more are detected, decrypt all vouchers and send the derivative images to manual review.
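Under that reading, the blinded table's job is to let the client derive a voucher key without ever learning which hashes are in the database. A rough sketch of that idea (HMAC stands in for the elliptic-curve blinding, the database has a single entry for simplicity, and all names are illustrative, not Apple's):

```python
import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)  # stands in for the EC blinding secret

def blind(h: bytes) -> bytes:
    # Only the server can compute this; the client sees only its outputs.
    return hmac.new(SERVER_SECRET, h, hashlib.sha256).digest()

def slot(h: bytes, n: int) -> int:
    # g(): compress a NeuralHash down to a table index.
    return int.from_bytes(hashlib.sha256(h).digest()[:4], "big") % n

def derive_key(entry: bytes, h: bytes) -> bytes:
    # Keys agree only when the table entry is the blind of this exact hash.
    return hmac.new(entry, h, hashlib.sha256).digest()

def xor32(key: bytes, data: bytes) -> bytes:
    # Toy cipher: XOR against a 32-byte keystream (data must be <= 32 bytes).
    return bytes(a ^ b for a, b in zip(data, hashlib.sha256(key).digest()))

# Server side: build the blinded table the client ships with.
DATABASE = [b"csam-hash-1"]
N = 16
table = [os.urandom(32) for _ in range(N)]   # random filler for empty slots
for h in DATABASE:
    table[slot(h, N)] = blind(h)

# Client side: encrypt the voucher under a key derived from its table entry.
def make_voucher(image_hash: bytes, derivative: bytes):
    key = derive_key(table[slot(image_hash, N)], image_hash)
    return slot(image_hash, N), xor32(key, b"OK" + derivative)

# Server side: re-derive the key for each database hash in that slot.
def try_decrypt(s: int, voucher: bytes):
    for h in DATABASE:
        if slot(h, N) == s:
            plain = xor32(derive_key(blind(h), h), voucher)
            if plain.startswith(b"OK"):
                return plain[2:]   # match: the derivative is recoverable
    return None                    # no match: the voucher stays opaque
```

The point of the blinding in this sketch: the client holds `table` but cannot tell which entries are real blinded hashes and which are filler, so it learns nothing about the database; meanwhile a voucher for a non-matching image decrypts to garbage on the server.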
I think I'm missing something regarding the blinded table, as I don't see what it adds in that scenario apart from a complex key-generation step for the vouchers. If that table only contained the NeuralHashes of known CSAM images as keys, that would be as good as handing out the list, given the model is easily extracted. And if it's not a table lookup but just a cryptographic function, I don't see where the blinded table is coming from in Apple's documentation.
Assuming the above is correct, I'm paradoxically feeling a tiny bit better about the system on a technical level (I still think doing anything client-side is a very bad precedent), but what a mess they put themselves into.
Had they done this purely server side (and to be frank there's not much difference, the significant part seems to be done server side) this would have been a complete non-event.
 : https://daringfireball.net/linked/2021/08/11/panzarino-neuen...
 This is my understanding based on the repository and what's written page 6-7 : https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
That's a *huge* amount of crypto mumbo-jumbo for a system to scan your data on your own device and send it to the authorities.
They must really care about children!!
If only this system was in place while Trump, Jeffrey Epstein, and Prince Andrew were raping children, surely none of that would have happened!! /s
This is what would need to happen:
1. Attacker generates images that collide with known CSAM material in the database (the NeuralHashes of which, unless I'm mistaken, are not available)
2. Attacker sends that to innocent person
3. Innocent person accepts and stores the picture
4. Actually, need to run step 1-3 at least 30 times
5. Innocent person has iCloud syncing enabled
6. Apple's CSAM detection then flags these, and they're manually reviewed
7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times
Note that other cloud providers have been scanning uploaded photos for years. What has changed wrt targeted attacks against innocent people?
Just insert a known CSAM image on target's device. Done.
I presume this could be used against a rival political party to ruin their reputation - insert bunch of CSAM images on their devices. "Party X is revealed as an abuse ring". This goes oh-so-very-nicely with Qanon conspiracy theories which even don't require any evidence to propagate widely.
Wait for Apple to find the images. When police investigation is opened, make it very public. Start a social media campaign at the same time.
It's enough to fabricate evidence only for a while - the public perception of the individual or the group will be perpetually altered, even though it would surface later that the CSAM material was inserted by hostile third party.
You have to think about what nation state entities that are now clients of Pegasus and so on could do with this. Not how safe the individual component is.
They are even in the same political party.
Or maybe thirty. You have to surpass the threshold.
Also, if Twitter, Google, Microsoft are already deploying CSAM scanning in their services .... why are we not hearing about all the "swatting"?
Now Apple is in this crappy situation where they can't claim their software is secure because it's open source and auditable, but they also can't claim it's secure because it's closed source and they fixed the problems in some later version, because this entire debacle has likely destroyed all faith in their competence. If Apple is in the position of having to boast "trust us bro, your iPhone won't be exploited to get you SWATTED over CSAM anymore, we patched it," the big question is: why is Apple voluntarily adding something to their devices whose failure mode is violent imprisonment and severe loss of reputation, when they are not completely competent?
This entire debacle reminds me of this video: https://www.youtube.com/watch?v=tVq1wgIN62E
>T H E I R S E R V I C E S
Because it's on their SERVICES, not on their user's DEVICES, for one.
Also, regardless of swatting, that's why we have an issue with Apple.
How is that a meaningful difference for the stated end goals, that can explain the lack of precedent.
In this specific case yes. That is what is supposed to happen.
But Apple also sets the standard that this is just the beginning, not the end. They say as much on page 3 in bold, differentiated color ink
And there’s nothing to stop them from scanning all images on a device. Or scanning all content for keywords or whatever. iCloud being used as a qualifier is a red herring to what this change is capable of.
Maybe someone shooting guns is now unacceptable, kids have been kicked from schools for posting them on Facebook or having them in their rooms on zoom. What if it’s kids shooting guns? There are so many possibilities of how this could be misused, abused or even just an oopsie, sorry I upended your life to solve a problem that is so very rare.
Add to that, their messaging has been muddy at best, and it incited a flame war. A big part of that is that iCloud is not a single thing. It's a service: it can sync and hold iMessages, it can sync backups, or, in my case, we have shared iCloud albums that we use to share images with family. Others are free to upload and share. In fact that's our only use of iCloud other than Find My. They say "iCloud Photos" as if that's just a single thing, but it's easy to extrapolate that to images in iMessages, backups, etc.
And the non profit that hosts this database is not publicly accountable. They have public employees on their payroll but really they can put whatever they want in that database. They have no accountability or public disclosure requirements.
So even I, when their main page was like 3 articles, was a bit perturbed and put off. I'm not going to ditch my iPhone, mainly because it's work-assigned, but I have been keeping a keen eye on what's happening, how it's happening, and will keep an eye out for the changes they are promising. I'm also going to guess they won't nearly be as high profile in the future.
All images on your device have been scanned for years by ML models to detect all sorts of things and make your photo library searchable, regardless of whether you use an Android or Apple device. That's how you can go and search "dog", "revolver", "wife", etc. and get relevant photos popping up.
Ex. This article from 2013 where they talk about searching for [my photos of flowers]
I'm an iOS guy, and don't have an Android device to confirm it. I've got a few photos visible on photos.google.com and they're able to detect "beard", at least. Which, to be fair, it's just a few selfies.
iOS does this pretty well. I searched my phone and it was able to recognize and classify a gun as a revolver from a meme I'd saved years ago. That's not this CSAM technology, just something they've been doing for years with ML.
And the only reason I know about them is because my wife asked about them and why our iPhones dont do them.
But we don't put stuff on Facebook. Our photos are backed up to our NAS. Phones backup to a macmini only. Siri and search are basically disabled as much as possible (we have to somewhat enable it for carplay) but definitely no voice or anything.
Effectively the same for Apple. It’s only when uploading the photo. Doing it on device means the server side gets less information.
Yeah, basically. It doesn't seem like people actually use CSAM to screw over innocent folks, so I don't think we need to worry about it. What Apple is doing doesn't really make that any easier, so it's either already a problem, or not a problem.
> And that making an existing situation even more widespread is also completely OK?
I don't know if I'd say any of this is "completely OK", as I don't think I've fully formed my opinion on this whole Apple CSAM debate, but I at least agree with OP that I don't think we need to suddenly worry about people weaponizing CSAM all of a sudden when it's been an option for years now with no real stories of anyone actually being victimized.
If it were to become a problem in the future, it could become a problem regardless of whether or not the scanning is done at the time of upload on device or at the time of upload on server.
Okay, and then what? You think people will just look at this cute picture of a dog and be like "welp, the computer says it's a photo of child abuse, so we're taking you to jail anyway"?
Yes, but then the hash collision (topic of this article) is irrelevant.
You don't even need to go that far. You just need to generate 31 false positive images and send them to an innocent user.
Also, “sending” them to a user isn’t enough; they need to be stored in the photo library, and iCloud Photo Library needs to be enabled.
What do you mean “just”? That’s not usually very simple. It needs to go into the actual photo library. Also, you need like 30 of them inserted.
> I presume this could be used against a rival political party
Yes, but it’s not much different from now, since most cloud photo providers scan for this cloud-side. So that’s more an argument against scanning all together.
The one failsafe would be Apple's manual reviewers, but we haven't heard much about that process yet.
iMessage photos received are automatically synced, so no. Finding 30 photos takes zero time at all on Tor. Hell, finding a .onion site that doesn't have CP randomly spammed is harder...
1. Obtain known CSAM that is likely in the database and generate its NeuralHash.
2. Use an image-scaling attack  together with adversarial collisions to generate a perturbed image such that its NeuralHash is in the database and its image derivative looks like CSAM.
A difference compared to server-side CSAM detection could be that they verify the entire image, and not just the image derivative, before notifying the authorities.
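The image-scaling attack in step 2 exploits the fact that a downscaler only reads a fraction of the source pixels. A minimal sketch with nearest-neighbour sampling (an assumption: real pipelines use smarter filters, which take more work to fool, but the principle is the same — plant the payload exactly at the sampled positions):

```python
import numpy as np

def nn_downscale(img: np.ndarray, out: int) -> np.ndarray:
    """Nearest-neighbour downscale: reads one source pixel per output pixel."""
    h, w = img.shape
    ys = (np.arange(out) * h) // out
    xs = (np.arange(out) * w) // out
    return img[np.ix_(ys, xs)]

def embed(cover: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite only the sampled pixels: the full-size image still looks
    like `cover`, but its nearest-neighbour thumbnail is exactly `payload`."""
    out = cover.copy()
    h, w = cover.shape
    n = payload.shape[0]
    ys = (np.arange(n) * h) // n
    xs = (np.arange(n) * w) // n
    out[np.ix_(ys, xs)] = payload
    return out
```

For a 256x256 cover and a 32x32 payload, only ~1.6% of the pixels change, so the full-size image is visually unchanged while the thumbnail is entirely attacker-controlled.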
But a conceivable novel avenue of attack would be to find an image that:
1. Does not look like CSAM to the innocent victim in the original
2. Does match known CSAM by NeuralHash
3. Does look like CSAM in the "visual derivative" reviewed by Apple, as you highlight.
1. Looks like an innocuous image, indeed even an image the victim is expecting to receive.
2. Downscales in such a way to produce a CSAM match.
3. Downscales for the derivative image to create actual CSAM for the review process.
Which is a pretty scary attack vector.
At this point you’re already inside the guts of the justice system, and have been accused of distributing CSAM. Indeed depending on how diligent the prosecutor is, you might need to wait till trial before you can defend yourself.
At that point your life as you know it is already fucked. The only thing proving your innocence (and the need to do so is itself a complete miscarriage of justice) will save you from is a prison sentence.
If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.
There are plenty of lawyers and money waiting to establish this.
> If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.
As has been already pointed out the system is designed to handle attacks like this.
Here is the relevant paragraph from Apple’s documentation:
“as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.”
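In other words, a reported image must match under two unrelated hash functions before a human ever sees it. A toy sketch of why a collision forged against NeuralHash alone fails (keyed SHA-256 stands in for the two perceptual hashes, and the "successful" forgery against the first hash is simulated by construction):

```python
import hashlib

def hash_a(img: bytes) -> bytes:
    # Stand-in for the on-device NeuralHash.
    return hashlib.sha256(b"A|" + img).digest()[:12]

def hash_b(img: bytes) -> bytes:
    # Stand-in for the second, independent server-side perceptual hash.
    return hashlib.sha256(b"B|" + img).digest()[:12]

known = b"known-csam-image"
db_a = {hash_a(known)}
db_b = {hash_b(known)}

# Simulate an attacker who forged a NeuralHash collision: their image's
# hash_a lands in the database, but they had no control over hash_b.
forged = b"adversarial-grey-blob"
db_a.add(hash_a(forged))

def confirmed(image: bytes) -> bool:
    # Both independent hashes must match before human review is triggered.
    return hash_a(image) in db_a and hash_b(image) in db_b
```

A real match passes both checks; the forgery is rejected at the second one. Of course, this only holds as long as the second hash function stays secret and genuinely independent.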
What if they are placed on the iDevice covertly? Say you want to remove politician X from office. If you've got the money or influence, you could use a tool like Pegasus (or whatever else there is out there that we don't know of) to place actual CSAM images on their iDevice. Preferably with an older timestamp so that it doesn't appear as the newest image on their timeline. iCloud notices unsynced images and syncs them while performing the CSAM check, it comes back positive with human review (because it was actual CSAM), and voilà: X has the FBI knocking on their door. Even if X can somehow later prove innocence, by that time they'll likely have been removed from office over the allegations.
Thinking about it now it's probably even easier:
Messaging apps like WhatsApp allow you to save received images directly to camera roll which then auto-syncs with iCloud (if enabled). So you can just blast 30+ (or whatever the requirement was) CSAM images to your victim while they are asleep and by the time they check their phone in the morning the images will already have been processed and an investigation started.
I doubt deleting them (assuming the victim sees them) works once the image has been scanned. And, given that this probably comes with a sufficient smear campaign, deleting them will be portrayed as evidence of guilt.
This part IMO makes Apple itself the most likely "target", but for a different kind of attack.
Just wait until someone who wasn't supposed to, somewhere, somehow gets their hands on some of the actual hashes (IMO bound to happen eventually). Also remember that with Apple, we now have an oracle that can tell us whether an image matches. And with all the media attention around the issue, this might further incentivize people to try.
From that I can picture a chain of events something like this:
1. Somebody writes a script that generates pre-image collisions like in the post, but for actual hashes Apple uses.
2. The script ends up on the Internet. News reporting picks it up and it spreads around a little. This also means trolls get their hands on it.
3. Tons of colliding images are created by people all over the planet and sent around to even more people. Not for targeted attacks, but simply for the lulz.
4. Newer scripts show up eventually, e.g. for perturbing existing images or similar stunts. More news reporting follows, accelerating the effect and possibly also spreading perturbed images around themselves. Perturbed images (cat pictures, animated gifs, etc...) get uploaded to places like 9gag, reaching large audiences.
5. Repeat steps 1-4 until the Internet and the news grow bored with it.
During that entire process, potentially each of those images that ends up on an iDevice will have to be manually reviewed...
Can anyone else think of times where Apple has admitted to something bad on their end and then reversed/walked away from whatever it was?
The next MacBook refresh will be interesting, as there are rumors they are bringing back several I/O ports that were removed when switching to all USB-C.
I agree with your overall point, just some things that came to mind when reading your question.
The trashcan MacPro is still the only mea culpa I am aware of them actually owning the mistake.
The AirPower whatever was never really released as a product though, so it is a strange category. New question: is the AirPower whatever the only product officially announced on the big stage to never be released?
With Apple, nothing is "obvious".
But I am certain they will not want all the bad publicity that would come if the system was widely abused, if you worry about that. That much is actually "obvious", they are not stupid.
A better collision won't be a grey blob, it'll take some photoshopped and downscaled picture of a kid and massage the least significant bits until it is a collision.
Just to clarify, Apple doesn't report anyone to the police. They report to NCMEC, who presumably contacts law enforcement.
No, the images are only decryptable after a threshold (which appears to be about 30) is breached. If you've received 30 pieces of CSAM from WhatsApp contacts without blocking them and/or stopping WhatsApp from automatically saving to iCloud, I gotta say, it's on you at that point.
A fair point, yes, and one that somewhat undermines my argument.
If you know the method used by Apple to scale down flagged images before they are sent for review, you can make it so the scaled down version of the image shows a different, potentially misleading one instead:
At the end of the day:
- You can trick the user into saving an innocent looking image
- You can trick Apple NN hashing function with a purposely generated hash
- You can trick the reviewer with an explicit thumbnail
There is no limit to how devilish one can be.
In this scenario you could create an image that looks like anything, but whose visual derivative is CSAM material.
Currently iCloud isn't encrypted, so Apple can just look at the original image. But if in the future iCloud becomes encrypted, then the reporting will be done entirely based on the visual derivative.
Although Apple could change this by including a unique crypto key for each uploaded image within its inner safety voucher, allowing them to decrypt images that match for the review process.
It occurs to me that compromising an already-hired reviewer (either through blackmail or bribery) or even just planting your own insider on the review team might not be that difficult.
In fact, if your threat model includes nation-state adversaries, it seems crazy not to consider compromised reviewers. How hard would it really be for the CIA or NSA to get a few of their (under cover) people on the review team?
I'd be astonished if it wasn't possible to do the same thing here.
But in the case of Apple's CSAM detection, the collision would first have to fool the victim into seeing an innocent picture and storing it (presumably, they would not accept and store actual CSAM [^]), then fool the NeuralHash into thinking it was CSAM (ok, maybe possible, though classifiers <> perceptual hash), then fool the human reviewer into also seeing CSAM (unlike the innocent victim).
[^] If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.
step 1 - As others have pointed out, there are plenty of ways of getting an image onto someone's phone without their explicit permission. WhatsApp (and I believe Messenger) do this by default; if someone sends you an image, it goes onto your phone and gets uploaded to iCloud.
step 2 - TFA proves that hash collision works, and fooling perceptual algorithms is already a known thing. This whole automatic screening process is known to be vulnerable already.
step 3 - Humans are harder to fool, but tech giants are not great at scaling human intervention; their tendency is to only use humans for exceptions because humans are expensive and unreliable. This is going to be a lowest-cost-bidder developing-country thing where the screeners are targeted on screening X images per hour, for a value of X that allows very little diligence. And the consequences of a false positive are probably going to be minimal - the screeners will be monitored for individual positive/negative rates, but that's about it. We've seen how this plays out for YouTube copyright claims, Google account cancellations, App store delistings, etc.
People's lives are going to be ruined because of this tech. I understand that children's lives are already being ruined because of abuse, but I don't see that this tech is going to reduce that problem. If anything it will increase it (because new pictures of child abuse won't be on the hash database).
Or just blackmail and/or bribe the reviewers. Presumably you could add some sort of 'watermark' that would be obvious to compromised reviewers. "There's $1000 in it for you if you click 'yes' any time you see this watermark. Be a shame if something happened to your mum."
>If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.
This adds trojan horses embedded in .jpg files as an attack vector, which while maybe not overly practical, I could certainly imagine some malicious troll uploading "CSAM" to some pornsite.
Every single tech company is getting rid of manual human review in favor of an AI-based approach. "Human-ops" they call it - they don't want their employees doing this harmful work, plus computers are cheaper and scale better.
We hear about failures of inhuman ops all the time on HN. people being banned, falsely accused, cancelled, accounts locked, credit denied. All because the decisions which were once by humans are now made by machine. This will happen eventually here too.
It's the very reason why they have the neuralhash model. To remove the human reviewer.
Just because the PoC used a meaningless blob doesn't mean that collisions have to be those. Plenty of examples of adversarial attacks on image recognition perturb real images to get the network to misidentify them, but to a human eye the image is unchanged.
Anyway, as for human reviewers, depends on what the image being perturbed is. Computer repair employees have called the police on people who've had pictures of their children in the bath. My understanding is that Apple does not have the source images, only NCMEC, so Apple's employees wouldn't necessarily see that such a case is a false positive. One would hope that when it gets sent to NCMEC, their employees would compare to the source image and see that is a false positive, though.
>> What has changed wrt targeted attacks against innocent people?
Anecdote: every single iphone user I know has iCloud sync enabled by default. Every single Android user I know doesn't have google photos sync enabled by default.
Yeah, but a lot of them have long ago maxed out their 5 GB iCloud account.
Given the scanning is client-side wouldn't the client need a list of those hashes to check against? If so it's just a matter of time before those are extracted and used in these attacks.
I find it hard to believe that anyone has faith in any purported manual review by a modern tech giant. Assume the worst and you'll still probably not go far enough.
When reports come in, the images would not match, so they'd need to intercept them before they are discarded by Apple, maybe by having a mole on the team. But it's so much easier than other ways to get an iOS platform scanner for any purpose: just let them find the doctored images, add them to the database, and recruit a person on the Apple team.
If anything, this gives people weapons against the scanner, as we can now bomb the system with false positives, rendering it impossible to use. I don't know enough about cryptography, but I wonder if there are any ramifications of the hash being broken.
You can do steps 2-3 all in one step "Hey Bob, here's a zip file of those funny cat pictures I was telling you about. Some of the files got corrupted and are grayed out for some reason".
It's my understanding that many tech companies (Microsoft? Dropbox? Google? Apple? Other?) (and many people in those companies) have access to the CSAM database, which essentially makes it public.
30 times a human confused a blob with CSAM?
> If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.
If you didn't have Apple scanning your drive trying to find a new way for you to go to prison then it wouldn't be a problem.
Also, most other major cloud photo providers scan images server side, leading to the same effect (but with them accessing more data).
People are getting nerd-sniped about hash collisions. It's completely irrelevant.
The real-world vector is that an attacker sends CSAM through one of the channels that will trigger a scan. Through iMessage, this should be possible in an unsolicited fashion (correct me if I'm wrong). Otherwise, it's possible through a hacked device. Of course there's plausible deniability here, but like with swatting, it's not a situation you want to be in.
Sure, but those don’t go into your photo library, so it won’t trigger any scanning. Presumably people wouldn’t actively save CSAM into their library.
If you don't like someone (which happens very often in this line of work) you could potentially screw someone over with this.
Depending on how the secret sharing is used in Apple PSI, it may be possible that duplicating the same image 30 times would be enough.
Is the process actually documented anywhere? Afaik they are just saying that they are verifying a match. This could of course just be a person looking at the hash itself.
The fact that you can randomly manipulate random noise until it matches the hash of an arbitrary image is not surprising.
The real challenge is generating a real image that could be mistaken for CSAM at low res + is actually benign (or else just send CSAM directly) + matches the hash of real CSAM.
This is why SHAttered  was such a big deal, but daily random SHA collisions aren't.
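That difference is worth spelling out: a cryptographic hash like SHA-256 is designed so that any change flips the output, while a perceptual hash is designed so that small changes don't. A toy difference hash (an assumption for illustration; much simpler than NeuralHash) shows that near-identical images colliding is the intended behavior, not a break:

```python
import hashlib
import numpy as np

def dhash(img: np.ndarray) -> int:
    """Toy perceptual hash: sample an 8x9 grid, compare horizontal neighbours."""
    h, w = img.shape
    ys = (np.arange(8) * h) // 8
    xs = (np.arange(9) * w) // 9
    small = img[np.ix_(ys, xs)]
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

img = np.add.outer(np.arange(64.0), 4.0 * np.arange(64.0))  # smooth gradient
noisy = img + np.random.uniform(-1, 1, img.shape)           # visually identical

# The perceptual hashes collide by design, while the cryptographic hashes
# of the two pixel buffers differ on the first flipped bit.
```

So "I found a NeuralHash collision" is closer to "I found two images that look similar to the model" than to "I broke SHA-1" - the interesting question is how easily an attacker can steer that similarity.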
(Added later:) I should note that the DoS attack is only possible with the preimage attack and not the second preimage attack as the issue seemingly suggests, because you need the original CSAM to perform the second preimage attack. But given the second preimage attack is this easy, I don't have any hope for the preimage resistance anyway.
(Added much later:) And I realized that Apple did think of this possibility and only stores blinded hashes in the device, so the preimage attack doesn't really work as is. But it seems that the hash output is only 96 bits long according to the repository, so this attack might still be possible albeit with much higher computational cost.
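For scale, the rough arithmetic behind that 96-bit estimate (assuming pure brute force with no smarter shortcut against the blinded hashes):

```python
BITS = 96
preimage = 2 ** (BITS - 1)   # expected trials to hit one specific hash value
birthday = 2 ** (BITS // 2)  # expected trials to find *some* colliding pair

rate = 1e9                   # optimistic assumption: 1e9 hash evals per second
years = preimage / rate / (365 * 24 * 3600)
days = birthday / rate / (24 * 3600)
# preimage: on the order of 1e12 years; birthday collision: a few days
```

So attacking a specific blinded hash by brute force is hopeless, but finding *some* pair of colliding inputs is cheap - which is exactly why the gradient-based second-preimage attack against the unblinded model is the interesting result.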
To be fair, I don't think that Apple's claim of a 1/1,000,000,000,000 false positive rate refers to that of the algorithm. Apple probably tweaked the threshold for manual checking to match that target rate, knowing NeuralHash's false positive rate under normal circumstances. Of course, we know that there is no such thing as normal circumstances.
Sounds pretty stupid to me to fill your phone with kiddie porn in protest, but you do you internet people.
So at some point you generate an image which triggers a subroutine and you know that image collides.
- The device generates a secret X and divides it into X[1], ..., X[m] with the secret sharing algorithm. m is some large number and any k (but no fewer) copies out of X[i] are enough to reconstruct X.
- The device stores blinded hashes f(H[1]), ..., f(H[n]). The function f itself is not known to the client.
- The image hash H0 is compressed with another function g to the range between 1 and n.
- The downscaled image data (for the human check) is encrypted with X and appended with (probably) random X[i].
- The result is then encrypted again with a key derived from f(H0) and sent to the server with an associated data g(H0).
- The server tries to decrypt it with a key derived from f(H[g(H0)]). This is only possible when H[g(H0)] = H0, i.e. H0 represents some known CSAM.
- You can only decrypt the second layer with at least k copies of X[i] then.
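The k-of-m secret sharing in the first step is standard Shamir sharing; a toy version over a prime field (parameters are mine, not Apple's) shows the threshold behavior the protocol relies on:

```python
import secrets

P = 2**127 - 1   # a Mersenne prime, comfortably larger than a 96-bit key

def split(secret: int, k: int, m: int):
    """Return m shares of `secret`; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = secrets.randbelow(P)
shares = split(secret, k=10, m=30)
assert reconstruct(shares[:10]) == secret   # any 10 shares suffice
assert reconstruct(shares[:9]) != secret    # 9 shares reveal nothing useful
```

Each positive match releases one more share alongside the encrypted payload, so the server only crosses the reconstruction threshold once k images have matched.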
At this stage Apple can still learn the number of matching CSAM images even when it is below k. The fix is described in an overly technical document that I can't exactly follow, but supposedly the client can inject an appropriate number of synthetic entries for which the first layer can always be decrypted but the second layer is bogus (including the presumed X[i]).
Assuming this scheme is correctly implemented, the only attack I can imagine is a timing attack. As I understand it, a malicious client can choose not to send the synthetic data. This will affect the number of items that pass the first layer of encryption, so the client could learn the number of actual matches by adjusting the amount of synthetic data it sends, since the server can only proceed to the next step with at least k such items.
This attack seems technically possible, but is probably infeasible to perform (remember that we already need 2^95 oracle operations, which is only vaguely possible even on the local device). Maybe the technical report actually has a solution for this, but for now I can only guess.
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.
In addition, the payload is protected by another layer under your user key. Only with enough hash matches can Apple reconstruct the user decryption key and open the inner layer of your image's payload, which contains the full hash and visual derivative.
Otherwise, they'd just keep doing it on the material that's actually uploaded.
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.
Yes but as stated in the technical description, this match is against a blinded table, so the device doesn’t learn if it’s a match or not.
Granted, they most likely won't care, but it's a legitimate attack vector.
Attempted entrapment and abuse of computing systems, which is an uncomfortable way to phrase the WhatsApp scenario, would be quite sufficient cause for a discovery warrant to have WhatsApp reveal the sender’s identity to Apple. Doesn’t mean they’d be found guilty, but WhatsApp will fold a lot sooner than Apple, especially if the warrant is sealed by the court to prevent the sender from deleting any CSAM in their possession.
A hacker would say that’s all contrived nonsense and anyways it’s just SWATting, that’s no big deal. A judge would say that’s a reasonable balance of protecting the sender from being dragged through the mud in the press before being indicted and permitting the abused party (Apple) to pursue a conviction and damages.
I am not your lawyer, this is not legal advice, etc.
It is, actually. Remember that hashes are supposed to be many-bit digests of the original; it should take O(2^256) work to find a message with a chosen 256-bit hash and O(2^128) work to find a "birthday attack" collision. Finding any collision at all with NeuralHash so soon after its release is very surprising, suggesting the algorithm is not very strong.
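The 2^(n/2) birthday bound is easy to demonstrate empirically on a deliberately weakened hash; with SHA-256 truncated to 32 bits, a collision shows up after roughly 2^16 attempts:

```python
import hashlib

def h32(data: bytes) -> bytes:
    # Truncate SHA-256 to 32 bits to make the birthday bound reachable.
    return hashlib.sha256(data).digest()[:4]

def birthday_collision():
    """Hash sequential messages until two of them share a 32-bit digest."""
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        d = h32(msg)
        if d in seen:
            return seen[d], msg
        seen[d] = msg
        i += 1

a, b = birthday_collision()
assert a != b and h32(a) == h32(b)
# A collision typically appears after ~2**16 tries (the square root of the
# 2**32 output space), whereas hitting a *chosen* 32-bit digest would take
# ~2**32 tries. For a sound 256-bit hash the analogous numbers are 2**128
# and 2**256 -- which is why finding NeuralHash collisions this quickly is
# a bad sign.
```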
SHAttered is a big deal because it is a fully working attack model, but the writing was on the wall for SHA-1 after the collisions were found in reduced-round variations of the hash. Attacks against an algorithm only get better with time, never worse.
Moreover, the break of NeuralHash may be even stronger than the SHAttered attack. The latter modifies two documents to produce a collision, but the NeuralHash collision here may be a preimage attack. It's not clear if the attacker crafted both images to produce the collision or just the second one.
It's not particularly surprising to me that a perceptual hash might also have collisions that don't look similar to the human eye, though if Apple ever claimed otherwise this counterexample is solid proof that they're wrong.
Not that it's going to happen, since it would also require NCMEC to think the images match, but whatever. Attack me! Attack me! I want to retire.
For now, sure. What happens when their money runs short? What about the other tech companies that will inevitably be forced to deploy this shit? Will they also have Apple's pretty deep pockets?
Blind faith in this system will not magically fix how flawed it is nor the abuse and harm it will allow. This is going to hurt a lot of innocent people.
If you post your whatsapp address, I'm sure someone will oblige.
Never? You sure that one or more human operators will never make this mistake, dooming someone's life / causing them immense pain?
One example that sticks out in my mind is a pair of grandparents who photographed their grandchildren playing in the back yard. A photo tech flagged their photo, they were arrested, and it took their lawyer going through the hoops to get a review of the photo for the charges to be dropped.
If the photo was a grey blob and they had to go through a judicial review for someone to look at the photo and confirm 'yes that is a grey blob' then color me wrong.
They'll view visual hashes, look at descriptions, and so forth, but nobody from Apple will actually be looking at them, because then they are guilty of viewing and transmitting CSAM.
I noted in another comment, even the prosecutors and defense lawyers in the case typically only get a description of the content, they don't see it themselves.
Over the years there have been countless articles about how messed up it is to be a reviewer of flagged content at the big tech companies. https://www.vice.com/en/article/a35xk5/facebook-moderators-a...
It is not illegal to be an unwilling recipient of illegal material. If a package shows up at your door with a bomb, you're not gonna be thrown in jail for having a bomb.
At the very least, you'd be one of the primary suspects, and if you somehow got a bad lawyer, all bets are off.
What is the scenario where a grey blob gets on your phone that sets off CSAM alerts, an investigator looks at it and sees only a grey blob, and then still decides to alert the authorities even though it's just a grey blob, and the authorities still decide to arrest you even though it's just a grey blob, and the DA still decides to prosecute you even though it's just a grey blob, and a jury still decides to convict you, even though it's still just a grey blob?
You're the one who's off in theory-land imagining that every person in the entire justice system is just as stupid as this algorithm is.
On this topic, the Supreme Court has ruled in Dickerson v US that, in all cases, to avoid First Amendment conflicts, all child pornography statutes must be interpreted with at least a "reckless disregard" standard.
Here is a typical criminal definition, from Minnesota, where a defendant recently tried to argue that the statute was strict liability and therefore unconstitutional, and that argument was rejected by the courts because it is clearly written to require knowledge and intent:
> Subd. 4. Possession prohibited. (a) A person who possesses a pornographic work or a computer disk or computer or other electronic, magnetic, or optical storage system ․ containing a pornographic work, knowing or with reason to know its content and character, is guilty of a felony․
Now, Dickerson rightfully lost, and it's appropriate that SCOTUS rejected his case, because he was involved in child porn production, not possession, so he can't rely on the Ferber precedent. He had the opportunity to ask the underage person in question their age, and chose not to, which would meet the reckless disregard standard anyway.
It only becomes public knowledge if law enforcement then chooses to charge you - and if all that happens on the basis of an obvious adversarial net image, the result is a publicity shitshow for Apple and you become a civil rights hero after your lawyer (even an underpaid overworked public defender should be able to handle this one) demonstrates this.
As others have stated in this thread, I think the real failure case is not someone's life getting ruined by claims of CSAM possession somehow resulting from a bad hash match, but the fact that planted material (or sent via message) can now easily ruin your life because it gets automatically reported; you can't simply delete it and move on any more.
But not really comparable, IMO. You won't even know you got investigated until after the original images have been shipped off to NCMEC for verification.
What if the legal porn of a 21 year old that triggered the collision match looked really, really close? So close that a human cannot distinguish between the image of a 12 year old being raped that they have in their database and your image? Well then you might have a problem, legal and otherwise.
> defence lawyers are not allowed to look at alleged CSAM material in court right
I know this is not true in many countries, but I can't speak for your country.
I'm not talking about images of rape here. I'm taking about images that you'd see on a regular porn site, of adults and their body parts.
You are also aware that CSAM covers anywhere from 0 to 17.99 years of age, and the legal obligation to report exists equally for the whole spectrum?
So let's say I download a close up pussy collection of 31 images of what I believe to be consenting 20 year olds, and what are consenting 20 year olds.
But they are actually planted by an attacker (let's say an oppressive regime who doesn't like me) and perturbed to match CSAM, that is, pussy close ups of 17 year olds. They are all just pussy pics. They will look the same.
Should I go to jail?
Do I have a non zero chance of going to jail? Yes.
Without getting into the metaphysics of what is an image, at that point, you basically have a large collection of child porn.
Your hypothetical oppressive regime has gone to a lot of trouble planting not illegal evidence on your device. It would be much more effective to just put actual child porn on your device, which you would need to have to conduct the attack in the first place.
I doubt images that look quite generic will make it into those hash sets, though.
This wasn't the question I asked.
Yes, I can say with 100% certainty that no human operator will ever classify a grey image as a child being raped. Happy to put money on it.
When dealing with a monotonous task that the operator is probably getting PTSD from, I think the chance is greater than 0%.
Articles about content moderators and PTSD:
Why do you assume the image has to be benign? Almost everyone watches porn, and it would be much easier to find collisions by manipulating actual porn images that are not CSAM.
Also, this way you'd be more likely to trigger a false positive from Apple's reviewers, since they aren't supposed to see what the actual CSAM looks like.
Strongly disagree. (1) The primary feature of any decent hash function is that this should not happen. (2) Any preimage attack opens the way for further manipulations like you describe.
But hashing is used in many places that could be vulnerable to an attack, so I think the distinction is blurry. People used MD5 for lots of things but are moving away for this reason, even though they're not in cryptographic settings.
The reviewer, likely on minimum wage, will report images just in case. Nobody wants to be dragged through the mud because they failed to report something they thought was innocent.
Also, generating images that look the same as the original and yet produce a different hash.
* Is this a problem with Apple's CSAM discriminator engine or with the fact that it's happening on-device?
* Would this attack not be possible if scanning was instead happening in the cloud, using the same model?
* Are other services (Google Photos, Facebook, etc.) that store photos in the cloud not doing something similar to uploaded photos, with models that may be similarly vulnerable to this attack?
I know that an argument against on-device scanning is that people don't like to feel like the device that they own is acting against them - like it's snitching on them. I can understand and actually sympathise with that argument, it feels wrong.
But we have known for a long time that computer vision can be fooled with adversarial images. What is special about this particular example? Is it only because it's specifically tricking the Apple CSAM system, which is currently a hotly-debated topic, or is there something particularly bad here, something that is not true with other CSAM "detectors"?
I genuinely don't know enough about this subject to comment with anything other than questions.
A devastating scenario for such a system is if an attacker knows how to look at a hash and generate some image that matches the hash, allowing them to trigger false positives any time. That appears to be what we are witnessing.
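That attack is the standard adversarial-example recipe: because the hash is the output of a neural network, you can gradient-descend a perturbation until the bits flip to a chosen target. A toy sketch with a linear stand-in for the network (everything here is illustrative; NeuralHash itself is a deep model followed by a hyperplane-based quantizer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "perceptual hash": sign of a random linear projection of the pixels.
N_PIXELS, N_BITS = 64, 16
W = rng.normal(size=(N_BITS, N_PIXELS))

def toy_hash(x):
    return (W @ x > 0).astype(int)

def forge(x, target_bits, lr=0.01, steps=10_000, margin=0.1):
    """Perturb x by hinge-loss gradient descent until its hash equals target_bits."""
    x = x.copy()
    sign = np.where(target_bits == 1, 1.0, -1.0)
    for _ in range(steps):
        logits = W @ x
        violated = sign * logits < margin      # bits not yet safely matching
        if not violated.any():
            break
        # gradient of sum(max(0, margin - sign * logits)) w.r.t. x
        grad = -(W * sign[:, None])[violated].sum(axis=0)
        x -= lr * grad
    return x

x0 = rng.normal(size=N_PIXELS)                 # stands in for a benign image
target = rng.integers(0, 2, size=N_BITS)       # stands in for a target hash
forged = forge(x0, target)
assert (toy_hash(forged) == target).all()
```

Against the real model the gradient comes from autodiff through the extracted network rather than a fixed matrix, but the structure of the attack is identical, which is why extracting the model weights was the key step.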
This is my understanding too. But is this not also true for other (cloud-based) CSAM scanning systems? Why is Apple's special in this regard?
Apple could have saved themselves so much backlash and not have caused the outrage to be focused exclusively on them if they hadn't tried to be novel with their method of hashing, and had just announced that they were about to do exactly what all the other tech companies had already been doing for years - server side scanning.
Apple would still be accused of walking back on its claims of protecting users' privacy, but for a different reason - by trying to conform. Instead of wasting all the debate on how Apple and only Apple is violating everyone's privacy with its on-device scanning mechanism, which was without precedent, this could have been an educational experience for many people about how little privacy is valued in the cloud in general, no matter who you choose to give your data to, because there is precedent for such privacy violations that take place on the server.
Apple could have been just one of the companies in a long line of others whose data management policies would have received significant renewed attention as a result of this. Instead, everyone is focused on criticizing Apple.
There is a significant problem with people's perception of "privacy" in tech if merely moving the scan on-device causes this much backlash while those same people stayed silent during the times that Google and Facebook and the rest adopted the very same technique on the server in the past decade. Maybe if Apple had done the same, they would have been able to get away with it.
This is a smart thing to disable, even outside this recent discussion of CSAM.
Edit: Though, to be fair, the specific hash-collision scenario would be that someone could send you something that doesn't look like CSAM and so you wouldn't reflexively delete it.
Personally I don’t really see the issue.
All you have to be is accused of CP for your life to be destroyed. It doesn't matter if you did it or not.
If the government wants to get you they don’t need this Apple scanning tech, or anything at all really
I think this is blowing up because it's cathartic to see a technology you disagree with get undermined and basically broken by the community...