
How can you use it for targeted attacks?

This is what would need to happen:

1. Attacker generates images that collide with known CSAM material in the database (the NeuralHashes of which, unless I'm mistaken, are not available)

2. Attacker sends that to innocent person

3. Innocent person accepts and stores the picture

4. Actually, need to run step 1-3 at least 30 times

5. Innocent person has iCloud syncing enabled

6. Apple's CSAM detection then flags these, and they're manually reviewed

7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

Note that other cloud providers have been scanning uploaded photos for years. What has changed wrt targeted attacks against innocent people?




"How can you use it for targeted attacks?"

Just insert a known CSAM image on target's device. Done.

I presume this could be used against a rival political party to ruin their reputation - insert a bunch of CSAM images on their devices. "Party X is revealed as an abuse ring". This goes oh-so-very-nicely with QAnon conspiracy theories, which don't even require any evidence to propagate widely.

Wait for Apple to find the images. When police investigation is opened, make it very public. Start a social media campaign at the same time.

It's enough to fabricate evidence only for a while - the public perception of the individual or the group will be perpetually altered, even though it would surface later that the CSAM material was inserted by a hostile third party.

You have to think about what nation state entities that are now clients of Pegasus and so on could do with this. Not how safe the individual component is.


FL Rep Randy Fine filed a report with the Florida Department of Law Enforcement alleging that the sheriff was going to plant CSAM on his computer and arrest him for it.

They are even in the same political party.

https://www.reddit.com/r/321/comments/jt32rs/fdle_report_bet...


Indeed this is not new and has probably been happening for many years already. There are services advertising on dark net markets to “ruin someone’s life” which means you pay some Ukrainian guy $300 in Bitcoin and he plants CSAM on a target’s computer.


> Just insert a known CSAM image on target's device.

Or maybe thirty. You have to surpass the threshold.

Also, if Twitter, Google, Microsoft are already deploying CSAM scanning in their services .... why are we not hearing about all the "swatting"?


Their implementations are not on-device and thus significantly more difficult to reverse engineer. Apple's unique on-device implementation is much easier to reverse engineer and thus exploit.

Now Apple is in this crappy situation where they can't claim their software is secure because it's open source and auditable, but they also can't claim it's secure because it's closed source and they fixed the problems in some later version, because this entire debacle has likely destroyed all faith in their competence. If Apple is in the position of having to boast "Trust us bro, your iPhone won't be exploited to get you SWATTED over CSAM anymore, we patched it", the big question is why Apple is voluntarily adding something to their devices where the failure mode is violent imprisonment and severe loss of reputation, when they are not completely competent.

This entire debacle reminds me of this video: https://www.youtube.com/watch?v=tVq1wgIN62E


>Also, if Twitter, Google, Microsoft are already deploying CSAM scanning in their services .... why are we not hearing about all the "swatting"?

>their services

>T H E I R S E R V I C E S

Because it's on their SERVICES, not on their user's DEVICES, for one.

Also, regardless of swatting, that's why we have an issue with Apple.


It only happens on Apple devices right before the content is uploaded to the service.

How is that a meaningful difference for the stated end goals, one that can explain the lack of precedent?


I think this is where a disconnect is occurring.

In this specific case yes. That is what is supposed to happen.

But Apple also sets the standard that this is just the beginning, not the end. They say as much on page 3, in bold, differentiated color ink:

https://www.apple.com/child-safety/pdf/Expanded_Protections_...

And there’s nothing to stop them from scanning all images on a device. Or scanning all content for keywords or whatever. iCloud being used as a qualifier is a red herring to what this change is capable of.

Maybe photos of someone shooting guns become unacceptable; kids have been kicked out of schools for posting them on Facebook or having them visible in their rooms on Zoom. What if it’s kids shooting guns? There are so many ways this could be misused, abused, or end in an “oopsie, sorry I upended your life” to solve a problem that is so very rare.

Add to that, their messaging has been muddy at best, and it incited a flame war. A big part of that is that iCloud is not a single thing. It’s a service: it can sync and hold iMessages, it can sync backups, or, in my case, we have shared iCloud albums that we use to share images with family, where others are free to upload and share. In fact that’s our only use of iCloud other than Find My. They say “iCloud Photos” as if that’s just a single thing, but it’s easy to extrapolate that to images in iMessages, backups, etc.

And the non-profit that hosts this database is not publicly accountable. They have public employees on their payroll, but really they can put whatever they want in that database. They have no accountability or public disclosure requirements.

So even I, back when their main page was like 3 articles, was a bit perturbed and put off. I’m not going to ditch my iPhone, mainly because it’s work-assigned, but I have been keeping a keen eye on what’s happening and how it’s happening, and will keep an eye out for the changes they are promising. I’m also going to guess they won’t be nearly as high-profile in the future.


>And there’s nothing to stop them from scanning all images on a device.

All images on your device have been scanned for years by ML models to detect all sorts of things and make your photo library searchable, regardless of whether you use an Android or Apple device. That's how you can go and search "dog", "revolver", "wife", etc. and get relevant photos popping up.
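For anyone curious what that kind of on-device tagging looks like, here is a rough sketch using an off-the-shelf ImageNet classifier. torchvision and the "photos" folder are stand-ins of my choosing; Apple and Google use their own unpublished models and label sets.

```python
# Sketch: tag photos with a pretrained classifier and build a label -> files
# index, which is roughly how "search for 'revolver'" can work offline.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()          # resize/crop/normalize for this model
labels = weights.meta["categories"]        # the 1000 ImageNet class names

def tag_photo(path: Path, top_k: int = 3) -> list[str]:
    """Return the top-k predicted labels for one photo."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    probs = logits.softmax(dim=1)[0]
    return [labels[int(i)] for i in probs.topk(top_k).indices]

# Build a searchable index: label -> photos containing it (hypothetical folder).
index: dict[str, list[Path]] = {}
for photo in Path("photos").glob("*.jpg"):
    for label in tag_photo(photo):
        index.setdefault(label, []).append(photo)
```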


I don't think this is accurate. I don't use the Google Photos cloud service, and searching in the Photos app on my Android phone returns zero results for any search term.


My impression was Google had been doing it for ages.

Ex. This article from 2013 where they talk about searching for [my photos of flowers] https://search.googleblog.com/2013/05/finding-your-photos-mo...

I'm an iOS guy and don't have an Android device to confirm it. I've got a few photos visible on photos.google.com, and they're able to detect "beard", at least. Which, to be fair, is just a few selfies.

iOS does this pretty well. I searched my phone and it was able to recognize and classify a gun as a revolver from a meme I'd saved years ago. That's not this CSAM technology, just something they've been doing for years with ML.


Right, the Google Photos service does this, but it's a cloud service, it's not on-device.


I looked on our iPhones, and it's possible I am an edge case. With the restrictions I have on Siri, iCloud (the only use for iCloud is some shared albums with family, in lieu of Facebook, and Find My), etc., my phone doesn't categorize photos by person or do those montages others routinely get within the Photos app.

And the only reason I know about them is because my wife asked about them and why our iPhones don't do them.

But we don't put stuff on Facebook. Our photos are backed up to our NAS. Phones back up to a Mac mini only. Siri and search are basically disabled as much as possible (we have to somewhat enable it for CarPlay), but definitely no voice or anything.


> Because it's on their SERVICES, not on their user's DEVICES, for one.

Effectively the same for Apple. It’s only when uploading the photo. Doing it on device means the server side gets less information.


A server-side action is triggering a local action. The action is still local.


Who cares that Bad Company XYZ, already well known for not caring about customer privacy, does it? Wouldn't you want to push back against ever-increasing surveillance? Apple was beating the drum of privacy while it was convenient; wouldn't you want to hold their feet to the fire now that they seem to have done a U-turn?


Their point is that the attack vector being described isn’t new, as CSAM could already be weaponized against folks, and we never really hear of that happening. So the OP is simply saying that perhaps it’s not an issue we need to worry about. I happen to agree with them.


So in your mind, because so far we've seen no evidence that this has been abused, it's nothing to worry about going forward? And that making an existing situation even more widespread is also completely OK?


> So in your mind, because so far we've seen no evidence that this has been abused, it's nothing to worry about going forward?

Yeah, basically. It doesn't seem like people actually use CSAM to screw over innocent folks, so I don't think we need to worry about it. What Apple is doing doesn't really make that any easier, so it's either already a problem, or not a problem.

> And that making an existing situation even more widespread is also completely OK?

I don't know if I'd say any of this is "completely OK", as I don't think I've fully formed my opinion on this whole Apple CSAM debate, but I at least agree with OP that I don't think we need to suddenly start worrying about people weaponizing CSAM when it's been an option for years now with no real stories of anyone actually being victimized.


And if it is a problem, not doing this doesn't resolve the problem.

If it were to become a problem in the future, it could become a problem regardless of whether the scanning is done on-device at upload time or server-side at upload time.


The point is that the situation can’t be a “slippery slope” if there’s no evidence of one prior.


> I presume this could be used against a rival political party to ruin their reputation - insert bunch of CSAM images on their devices.

Okay, and then what? You think people will just look at this cute picture of a dog and be like "welp, the computer says it's a photo of child abuse, so we're taking you to jail anyway"?


You don't have to add a real picture, just add the hash of a benign (new) cat picture to the DB, then put the cat picture on the phone, then release a statement saying the person has popped up on your list. By the time the truth comes out, the damage is done.


> "How can you use it for targeted attacks?" > Just insert a known CSAM image on target's device. Done.

Yes, but then the hash collision (topic of this article) is irrelevant.


> Just insert a known CSAM image on target's device. Done.

You don't even need to go that far. You just need to generate 31 false positive images and send them to an innocent user.


But how will you do that without access to the non-blinded hash table? Then you need access to that first.

Also, “sending” them to a user isn’t enough; they need to be stored in the photo library, and iCloud Photo Library needs to be enabled.


How is this different from server-side scanning, the policy du jour that Apple was trying to move away from?


> Just insert a known CSAM image on target's device. Done.

What do you mean “just”? That’s not usually very simple. It needs to go into the actual photo library. Also, you need like 30 of them inserted.

> I presume this could be used against a rival political party

Yes, but it’s not much different from now, since most cloud photo providers scan for this cloud-side. So that’s more an argument against scanning altogether.


WhatsApp for example automatically stores images you receive in your Photos library, so that removes a step, and those will thus be automatically uploaded to iCloud.

The one failsafe would be Apple's manual reviewers, but we haven't heard much about that process yet.


> It needs to go into the actual photo library.

iMessage photos received are automatically synced, so no. Finding 30 photos takes zero time at all on Tor. Hell, finding a .onion site that doesn't have CP randomly spammed on it is harder...


iMessage does not automatically add received photos to your photo library. Yes, they’re synced between devices, but AFAIK Apple isn’t deploying this hashing technology on iMessages between devices, only on iCloud Photos in the photo library.


Cross-posting from another thread [1]:

1. Obtain known CSAM that is likely in the database and generate its NeuralHash.

2. Use an image-scaling attack [2] together with adversarial collisions to generate a perturbed image such that its NeuralHash is in the database and its image derivative looks like CSAM.

A difference compared to server-side CSAM detection could be that they verify the entire image, and not just the image derivative, before notifying the authorities.

[1] https://news.ycombinator.com/item?id=28218922

[2] https://bdtechtalks.com/2020/08/03/machine-learning-adversar...
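To make the scaling part of [2] concrete, here is a toy sketch against a naive nearest-neighbour downscaler defined in the sketch itself. Real attacks target the victim's actual resampling code (e.g. bilinear in PIL/OpenCV) and are considerably more involved; the sizes and stand-in arrays below are made up.

```python
# Toy image-scaling attack: plant payload pixels exactly where a naive
# nearest-neighbour downscaler samples, and fill everything else with a decoy.
import numpy as np

def naive_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour downscale: keep one source pixel per output pixel."""
    h, w = img.shape[:2]
    ys = (np.arange(out_h) * h) // out_h
    xs = (np.arange(out_w) * w) // out_w
    return img[np.ix_(ys, xs)]

def craft(decoy: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Image that looks like `decoy` at full size but downscales to `payload`."""
    attack = decoy.copy()
    h, w = decoy.shape[:2]
    ph, pw = payload.shape[:2]
    ys = (np.arange(ph) * h) // ph
    xs = (np.arange(pw) * w) // pw
    attack[np.ix_(ys, xs)] = payload   # only a tiny fraction of pixels change
    return attack

decoy = np.full((1024, 1024, 3), 255, dtype=np.uint8)   # stand-in "cat photo"
payload = np.zeros((64, 64, 3), dtype=np.uint8)         # stand-in target image
attack = craft(decoy, payload)
assert np.array_equal(naive_downscale(attack, 64, 64), payload)
```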


Right. So, sending actual CSAM would also work as an attack, but would be detected by the victim and could be corrected (delete images).

But a conceivable novel avenue of attack would be to find an image that:

1. Does not look like CSAM to the innocent victim in the original

2. Does match known CSAM by NeuralHash

3. Does look like CSAM in the "visual derivative" reviewed by Apple, as you highlight.


Reading the image-scaling attack article, it looks like it’s pretty easy to manufacture an image that:

1. Looks like an innocuous image, indeed even an image the victim is expecting to receive.

2. Downscales in such a way to produce a CSAM match.

3. Downscales for the derivative image to create actual CSAM for the review process.

Which is a pretty scary attack vector.


Where does it say anything that indicates #1 and #3 are both possible?


Depends very much on the process Apple uses to make the "visual derivative", though. Also, defence by producing the original innocuous image (and showing that it triggers both parts of Apple's process, NeuralHash and human review of the visual derivative) should be possible, though a lot of damage might've been done by then.


> Also, defence by producing the original innocuous image

At this point you’re already inside the guts of the justice system, and have been accused of distributing CSAM. Indeed depending on how diligent the prosecutor is, you might need to wait till trial before you can defend yourself.

At that point your life as you know it is already fucked. The only thing proving your innocence (and the need to do so is itself a complete miscarriage of justice) will save you from is a prison sentence.


And now you will be accused of trying to hide illegal material in innocuous images.


This isn’t true at all.

If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.

There are plenty of lawyers and money waiting to establish this.


> This isn’t true at all.

> If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.

Okay. https://github.com/anishathalye/neural-hash-collider


Uh? So his if statement is true?


Please read what is written right before that... You are taking something out of context.


Why do you keep posting links to this collider as though it means something?

As has been already pointed out the system is designed to handle attacks like this.

Here is the relevant paragraph from Apple’s documentation:

“as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.”

https://www.apple.com/child-safety/pdf/Security_Threat_Model...
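For readers unfamiliar with the term, here is a minimal "average hash" just to illustrate what a perceptual hash does: downscale, keep only coarse structure, so near-duplicate images land near each other. NeuralHash and Apple's second server-side hash are different, unpublished algorithms; this is only a conceptual stand-in.

```python
# Minimal average hash (aHash) in Python using Pillow.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale, grayscale, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distance means perceptually similar."""
    return bin(a ^ b).count("1")
```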


> So, sending actual CSAM would also work as an attack, but would be detected by the victim and could be corrected (delete images).

What if they are placed on the iDevice covertly? Say you want to remove politician X from office. If you have the money or influence, you could use a tool like Pegasus (or whatever else is out there that we don't know of) to place actual CSAM images on their iDevice, preferably with an older timestamp so that it doesn't appear as the newest image on their timeline. iCloud notices unsynced images and syncs them while performing the CSAM check, it comes back positive with human review (because it was actual CSAM), and voilà, X has the FBI knocking on their door. Even if X can somehow later prove innocence, by that time they'll likely have been removed from office over the allegations.

Thinking about it now it's probably even easier: Messaging apps like WhatsApp allow you to save received images directly to camera roll which then auto-syncs with iCloud (if enabled). So you can just blast 30+ (or whatever the requirement was) CSAM images to your victim while they are asleep and by the time they check their phone in the morning the images will already have been processed and an investigation started.


If you are placing images covertly, you can just use real CSAM or other kompromat.


> but would be detected by the victim and could be corrected (delete images).

I doubt deleting them (assuming the victim sees them) works once the image has been scanned. And, given that this probably comes with a sufficient smear campaign, deleting them will be portrayed as evidence of guilt.


Why would someone do that? Why not just send the original if both are flagged as the original?


The victim needs to store the image in their iCloud, so it needs to not look like CSAM to them.


Because having actual CSAM images is illegal.


Doesn’t that make step 1 more dangerous for the attacker than for the intended victim? And following this through to its logical conclusion, the intended victim would have images that, upon manual review by law enforcement, would be found not to be CSAM.


> 7. Apple reviewer....

This part IMO makes Apple itself the most likely "target", but for a different kind of attack.

Just wait until someone who wasn't supposed to, somewhere, somehow gets their hands on some of the actual hashes (IMO bound to happen eventually). Also remember that with Apple, we now have an oracle that can tell us whether an image matches. And with all the media attention around the issue, this might further incentivize people to try.

From that I can picture a chain of events something like this:

1. Somebody writes a script that generates pre-image collisions like in the post, but for actual hashes Apple uses.

2. The script ends up on the Internet. News reporting picks it up and it spreads around a little. This also means trolls get their hands on it.

3. Tons of colliding images are created by people all over the planet and sent around to even more people. Not for targeted attacks, but simply for the lulz.

4. Newer scripts show up eventually, e.g. for perturbing existing images or similar stunts. More news reporting follows, accelerating the effect and possibly also spreading perturbed images around themselves. Perturbed images (cat pictures, animated gifs, etc...) get uploaded to places like 9gag, reaching large audiences.

5. Repeat steps 1-4 until the Internet and the news grow bored with it.

During that entire process, potentially each of those images that ends up on an iDevice will have to be manually reviewed...


Do you think Apple might perhaps halt the system if the script gets wide publication?


I've only seen Apple admit defeat once, and that was regarding the trashcan MacPro. Otherwise, it's "you're holding it wrong" type of victim blaming as they quietly revise the issue on the next version.

Can anyone else think of times where Apple has admitted to something bad on their end and then reversed/walked away from whatever it was?


The Apple AirPower mat comes to mind, although there are rumors they haven't abandoned the effort completely. The butterfly keyboard seems to finally have been acknowledged as a bad idea, and it took several years to get there.

The next MacBook refresh will be interesting, as there are rumors they are bringing back several I/O ports that were removed when switching to all USB-C.

I agree with your overall point, just some things that came to mind when reading your question.


ah yes, the butterfly keyboard. i must have blocked that from my mind after the horror it was. although, they didn't admit anything on that one. that was just another "you're holding it wrong" silent revision that was then touted as a new feature (rather than oops we fucked up).

The trashcan MacPro is still the only mea culpa I am aware of them actually owning the mistake.

The Airpower whatever was never really released as a product though, so it is a strange category. New question: is the Airpower whatever the only product officially announced on the big stage to never be released?


Do you really care what they "admit"? I thought you were worried about innocent people being framed. Obviously if a way to frame people gets widespread, Apple will stop it. They don't want that publicity.


You clearly have me confused with someone else, as I never mentioned anything about innocent people being framed.

With Apple, nothing is "obvious".


The comment above that I responded to seemed to talk about that. But in any case, I for one don't care what Apple admits.

But I am certain they will not want all the bad publicity that would come if the system was widely abused, if you worry about that. That much is actually "obvious", they are not stupid.


> 7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

A better collision won't be a grey blob, it'll take some photoshopped and downscaled picture of a kid and massage the least significant bits until it is a collision.

https://openai.com/blog/adversarial-example-research/
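Conceptually, tools like the linked collider do something along these lines: treat the hash network's pre-binarisation outputs as differentiable and nudge the image until every bit lands on the target side, while a distance term keeps it visually unchanged. A rough sketch; `model` is a placeholder for the extracted network, and the loss weights and step counts are arbitrary.

```python
# Sketch of a gradient-based second-preimage search against a neural
# perceptual hash (generic technique, not the actual collider code).
import torch

def collide(model, image, target_bits, steps=2000, lr=3e-3, margin=0.1):
    """image: (1,3,H,W) tensor in [0,1]; target_bits: tensor of -1/+1 hash bits."""
    original = image.clone()
    x = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)                                 # real values before sign()
        # Hinge loss: push every output past `margin` on the target side of zero.
        hash_loss = torch.relu(margin - target_bits * logits).sum()
        stay_close = ((x - original) ** 2).mean()         # keep it visually unchanged
        (hash_loss + 10.0 * stay_close).backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                            # remain a valid image
            if (torch.sign(model(x)) == target_bits).all():
                break                                     # hash bits now match
    return x.detach()
```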


So the person would have to accept and save an image that, when reviewed, looks enough like CSAM to confuse a reviewer…


Yes, the diligent review performed by the lowest-bidding subcontractor is an excellent defense against career-ending criminal accusations. Nothing can go wrong, this is fine.


I would think there are way easier ways to frame someone with CSAM than this. Like dumping a thumbdrive of the stuff on them and reporting them to the police.


The police will not investigate every hint and a thumbdrive still has some plausible deniability. Evidence on your phone looks far worse and, thanks to this new process, law enforcement will receive actual evidence instead of just a hint.


My WhatsApp automatically saves all images to my camera roll. It has to be explicitly turned off. When the default is on, it's enough that the image is received and the victim has CP on their phone. After the initial shock they delete it, but the image has already been sent to Apple, where a reviewer marked it as CP. Since the user already gave them their full address data in order to be able to use the App Store, Apple can automatically send a report to the police.


> [...] Apple can automatically send a report to the police.

Just to clarify, Apple doesn't report anyone to the police. They report to NCMEC, who presumably contacts law enforcement.


FBI agents work for NCMEC. NCMEC was created by legislation. They ARE law enforcement, disguised as a non-profit.


> but the image has already been sent to Apple, where a reviewer marked it as CP

No, the images are only decryptable after a threshold (which appears to be about 30) is breached. If you've received 30 pieces of CSAM from WhatsApp contacts without blocking them and/or stopping WhatsApp from automatically saving to iCloud, I gotta say, it's on you at that point.


Just a side point, a single WhatsApp message can contain up to 30 images. 30 is the literal max of a single message. So ONE MESSAGE could theoretically contain enough images to trip this threshold.


> a single WhatsApp message can contain up to 30 images

A fair point, yes, and one that somewhat scuppers my argument.


You’re aware that people sleep at night, and phones for the most part don’t, right?


Victim blaming because of failure to meet some seemingly arbitrary limit, ok.


Not necessarily.

If you know the method used by Apple to scale down flagged images before they are sent for review, you can make it so the scaled down version of the image shows a different, potentially misleading one instead:

https://thume.ca/projects/2012/11/14/magic-png-files/

At the end of the day:

- You can trick the user into saving an innocent looking image

- You can trick Apple's NN hashing function with a purposely generated collision

- You can trick the reviewer with an explicit thumbnail

There is no limit to how devilish one can be.


The reviewer may not be looking at the original image, but rather at the visual derivative created during the hashing process and sent as part of the safety voucher.

In this scenario you could create an image that looks like anything, but where it’s visual derivative is CSAM material.

Currently iCloud isn’t encrypted, so Apple could just look at the original image. But if iCloud becomes encrypted in the future, then the reporting will be done entirely based on the visual derivative.

Although Apple could change this by including a unique crypto key for each uploaded image within its inner safety voucher, allowing them to decrypt matching images for the review process.


Depending on what algorithm Apple uses to generate the "sample" that's shown to the reviewer, it may be possible to generate a large image that looks innocent unless downscaled with that specific algorithm and to a specific resolution.


So here's something I find interesting about this whole discussion: Everyone seems to assume the reviewers are honest actors.

It occurs to me that compromising an already-hired reviewer (either through blackmail or bribery) or even just planting your own insider on the review team might not be that difficult.

In fact, if your threat model includes nation-state adversaries, it seems crazy not to consider compromised reviewers. How hard would it really be for the CIA or NSA to get a few of their (under cover) people on the review team?


I don't see how a perfectly legal and normal explicit photograph of someone's 20-year-old wife would be distinguishable to an Apple reviewer from CSAM, especially since some people look much younger or much older than their chronological age. So first, there would be the horrendous breach of privacy of an Apple goon looking at this picture in the first place, which the person in the photograph never consented to, and second, it could put the couple in legal hot water for absolutely no reason.


The personal photo is unlikely to match a photo in the CSAM database though, or at least that's what is claimed by Apple with no way to verify if it's true or not.


Not one image. Ten, or maybe 50, who knows what the threshold is.


[flagged]


I would suggest not clicking this link on a work device.


Remains to be shown whether that is possible, though.


Just yesterday, here on HN, there was an article [1] about adversarial attacks that could make road signs get misread by ML recognition systems.

I'd be astonished if it wasn't possible to do the same thing here.

[1] https://news.ycombinator.com/item?id=28204077


But the remarkable thing there (and with all other adversarial attacks I've seen) is that the ML classifier is fooled, while for us humans it is obvious that it is still the original image (if maybe slightly perturbed).

But in the case of Apple's CSAM detection, the collision would first have to fool the victim into seeing an innocent picture and storing it (presumably, they would not accept and store actual CSAM [^]), then fool the NeuralHash into thinking it was CSAM (ok, maybe possible, though classifiers <> perceptual hash), then fool the human reviewer into also seeing CSAM (unlike the innocent victim).

[^] If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.


Hmm, not quite:

step 1 - As others have pointed out, there are plenty of ways of getting an image onto someone's phone without their explicit permission. WhatsApp (and I believe Messenger) do this by default; if someone sends you an image, it goes onto your phone and gets uploaded to iCloud.

step 2 - TFA proves that hash collision works, and fooling perceptual algorithms is already a known thing. This whole automatic screening process is known to be vulnerable already.

step 3 - Humans are harder to fool, but tech giants are not great at scaling human intervention; their tendency is to only use humans for exceptions because humans are expensive and unreliable. This is going to be a lowest-cost-bidder developing-country thing where the screeners are targeted on screening X images per hour, for a value of X that allows very little diligence. And the consequences of a false positive are probably going to be minimal - the screeners will be monitored for individual positive/negative rates, but that's about it. We've seen how this plays out for YouTube copyright claims, Google account cancellations, App store delistings, etc.

People's lives are going to be ruined because of this tech. I understand that children's lives are already being ruined because of abuse, but I don't see that this tech is going to reduce that problem. If anything it will increase it (because new pictures of child abuse won't be on the hash database).


> then fool the human reviewer into also seeing CSAM (unlike the innocent victim).

Or just blackmail and/or bribe the reviewers. Presumably you could add some sort of 'watermark' that would be obvious to compromised reviewers. "There's $1000 in it for you if you click 'yes' any time you see this watermark. Be a shame if something happened to your mum."


Yes but the reviewers are not going to be viewing the original image, they are going to be viewing a 100x100 greyscale.

>If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.

This adds trojan horses embedded in .jpg files as an attack vector; while maybe not overly practical, I could certainly imagine some malicious troll uploading "CSAM" to some porn site.


NN classifiers work differently than perceptual hashes and the mechanism to do this sort of attack is entirely different, though they seem superficially similar.


Unfortunately, it is very likely to be possible. Adversarial ML is extremely effective. I won't be surprised if this is achieved within the day, if not sooner tbh.


It's been done for image classification.


the issue is step 6 - review and action

Every single tech company is getting rid of manual human review in favor of an AI-based approach. "Human-ops" they call it - they don't want their employees to be doing this harmful work, plus computers are cheaper and better at it.

We hear about failures of inhuman ops all the time on HN: people being banned, falsely accused, cancelled, accounts locked, credit denied. All because decisions which were once made by humans are now made by machine. This will eventually happen here too.

It's the very reason they have the NeuralHash model: to remove the human reviewer.


> 7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

Just because the PoC used a meaningless blob doesn't mean that collisions have to be those. Plenty of examples of adversarial attacks on image recognition perturb real images to get the network to misidentify them, but to a human eye the image is unchanged.


The whole point flew over your head. If it's unchanged to the human eye then surely the human reviewer will see that it's a false positive?


No, it's important to point that out lest people think collisions can only be generated with contrived examples. I haven't studied neural hashes in particular, but for CNNs it's extremely trivial to come up with adversarial examples for arbitrary images.

Anyway, as for human reviewers, it depends on what the image being perturbed is. Computer repair employees have called the police on people who've had pictures of their children in the bath. My understanding is that Apple does not have the source images, only NCMEC, so Apple's employees wouldn't necessarily see that such a case is a false positive. One would hope that when it gets sent to NCMEC, their employees would compare it to the source image and see that it is a false positive, though.


Which would still be a privacy violation, since an actual human is looking at a photo you haven't consented to share with them.


That will be clearly laid out on page 1174 of Apple's ToS that you had to click through to be able to use your $1200 phone as anything but a paperweight.


For #4, I know for a fact that my wife’s WhatsApp automatically stores pictures you send her to her iCloud. So the grey blob would definitely be there unless she actively deleted it.


I don't know why you'd even go through all this trouble. At least a few years ago, finding actual CP on Tor was trivial; not sure if the situation has changed or not. If you're going to blackmail someone, just send actual illegal data, not something that might trigger detection scanners.

>> What has changed wrt targeted attacks against innocent people?

Anecdote: every single iphone user I know has iCloud sync enabled by default. Every single Android user I know doesn't have google photos sync enabled by default.


> Anecdote: every single iphone user I know has iCloud sync enabled by default.

Yeah, but a lot of them have long ago maxed out their 5 GB iCloud account.


> the NeuralHashes of which, unless I'm mistaken, are not available

Given the scanning is client-side, wouldn't the client need a list of those hashes to check against? If so, it's just a matter of time before those are extracted and used in these attacks.


I think there's some crypto mumbo-jumbo to make it so you can't know if an image matched or not.


Don’t iMessage and WhatsApp automatically store all images received in the iPhone’s photo library?


iMessage no, WA by default yes, but can be disabled.


So no need for hash collisions then. One can simply send the child porn images directly to that person via WhatsApp and send her to jail.


But then you're searching and finding child porn images to send.


I'd be surprised if there won't be a darknet service that does exactly that (send CSAM via $POPULAR_MESSENGER) the moment Apple activates scanning.


> 7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

I find it hard to believe that anyone has faith in any purported manual review by a modern tech giant. Assume the worst and you'll still probably not go far enough.


How can we know that the CSAM database is not already poisoned with adversarial images that actually target other kinds of content for different purposes? They would look like CSAM to the naked eye, and nobody could tell the images have been doctored.

When reports come in, the images would not match, so they need to be intercepted before they are discarded by Apple, maybe by having a mole on the team. But it's so much easier than other ways of getting an iOS platform scanner for any purpose: just add the doctored images to the database, let them be found, and recruit a person on the Apple team.


I don't think this can be used to harm an innocent person. It can raise a red flag, but it would be quickly un-raised, and perhaps prompt an investigation into the source of the fake-out images, because THAT person had to have had the real images in their possession.

If anything, this gives people weapons against the scanner, as we can now bomb the system with false positives, rendering it impossible to use. I don't know enough about cryptography, but I wonder if there are any ramifications of the hash being broken.


Maybe they could install malware that applies a steganography-like technique to every camera image taken, causing false-positive matches for all the photos taken by the device. Maybe they could share one photo album where all the images are hash collisions.


> Actually, need to run step 1-3 at least 30 times

You can do steps 2-3 all in one step "Hey Bob, here's a zip file of those funny cat pictures I was telling you about. Some of the files got corrupted and are grayed out for some reason".


What makes the CSAM database private?

It's my understanding that many tech companies (Microsoft? Dropbox? Google? Apple? Other?) (and many people in those companies) have access to the CSAM database, which essentially makes it public.


Well, the actual hash table on the device is blinded so the device doesn’t know if an image is a match or not. The server doesn’t learn the actual hash either, unless the threshold of 30 images is reached.
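A toy illustration of that blinding idea: the server exponentiates each database hash with a secret it never ships, so the table on the device can't be used as a membership oracle. The real protocol uses elliptic curves and a full PSI construction; the modulus and hash-to-group mapping below are purely illustrative.

```python
# Toy "blinded table": without the server's secret exponent, the device cannot
# recompute the blinded value for its own photo hash, so a lookup tells it nothing.
import hashlib
import secrets

P = 2**255 - 19                       # toy prime modulus (not a vetted DH group)
SERVER_SECRET = secrets.randbelow(P - 2) + 2

def to_group(neural_hash: bytes) -> int:
    """Map a hash to a group element (toy hash-to-group)."""
    return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

def blind(neural_hash: bytes, secret: int = SERVER_SECRET) -> int:
    return pow(to_group(neural_hash), secret, P)

# Server side: build the blinded table from the CSAM hash list (placeholders).
database_hashes = [b"hash-of-known-image-1", b"hash-of-known-image-2"]
blinded_table = {blind(h) for h in database_hashes}

# Device side: it holds the photo's hash and the blinded table, but without
# SERVER_SECRET it cannot compute blind(h) itself, so a raw lookup fails.
photo_hash = b"hash-of-known-image-1"
assert to_group(photo_hash) not in blinded_table
```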


Are you being serious? #7 is literally "Apple reviewer confuses a featureless blob of gray with CSAM material, several times"

30 times.

30 times a human confused a blob with CSAM?


If you're in close physical contact with a person (like at a job) you just wait for them to put their phone down while unlocked, and do all this.


Then, with all due respect, the attacker could just download actual CSAM.

> If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.

https://www.usenix.org/system/files/1401_08-12_mickens.pdf


"Then, with all due respect, the attacker could just download actual CSAM."

If you didn't have Apple scanning your drive trying to find a new way for you to go to prison then it wouldn't be a problem.


It’s misleading to say that they scan your “drive”. They scan pictures as they are uploaded to iCloud Photo Library.

Also, most other major cloud photo providers scan images server side, leading to the same effect (but with them accessing more data).


Yes. But this whole discussion is about potential problems/exploits with hash collisions (see title).


Also, this XKCD:

https://xkcd.com/538/

People are getting nerd-sniped about hash collisions. It's completely irrelevant.

The real-world vector is that an attacker sends CSAM through one of the channels that will trigger a scan. Through iMessage, this should be possible in an unsolicited fashion (correct me if I'm wrong). Otherwise, it's possible through a hacked device. Of course there's plausible deniability here, but like with swatting, it's not a situation you want to be in.


Plausible deniability or not, it could have real impact if Apple decides to implement a policy of locking your account after tripping the threshold, which you then have to wait or fight to get unlocked. Or now you have police records against you for an investigation that led nowhere. It's not a zero-impact game if I can spam a bunch of grey blobs to people and potentially have a chain of human failures that leads to police knocking down your door.


> Through iMessage, this should be possible in an unsolicited fashion

Sure, but those don’t go into your photo library, so it won’t trigger any scanning. Presumably people wouldn’t actively save CSAM into their library.


FWIW, this sort of argument may not trigger a change in policy, but a technical failure in the hashing algorithm might.


Love the relevant xkcd! And to reply to your point, simply sending unsolicited CSAM via iMessage doesn’t trigger anything. That image has to be saved to your phone and then uploaded to iCloud. Someone else above said to repeat this process 20-30 times, so I presume it can’t be a single incident of CSAM. Seems really, really hard to trigger this thing by accident or maliciously.


People are saying that, by default, WhatsApp will save images directly to your camera roll without any interaction. That would be an easy way to trigger the CSAM detection remotely. There are many people who use WhatsApp so it's a reasonable concern.


Can you send images to people that are not on your friend list?


Have you not worked a minimum wage job in the US? It's incredibly easy to gain access to the phones of semi-trusting people.

If you don't like someone (which happens very often in this line of work) you could potentially screw someone over with this.


Great article!


One vector you can use to skip step 3 is to send the image on WhatsApp. I believe images sent via WhatsApp are auto-saved by default, last I recall.


> 4. Actually, need to run step 1-3 at least 30 times

Depending on how the secret sharing is used in Apple PSI, it may be possible that duplicating the same image 30 times would be enough.
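For context, the threshold is built on secret sharing: each matching photo is supposed to contribute one share of a per-account key, and the server can only reconstruct the key from enough shares. Whether 30 copies of one image suffice comes down to whether each copy yields a distinct share; in plain Shamir, as in the toy below, repeated shares at the same x-coordinate add nothing. This illustrates the mechanics only and is not Apple's actual protocol.

```python
# Toy Shamir threshold scheme: any `threshold` shares at distinct x recover the
# secret; fewer shares (or duplicated x values) do not.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares of a degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x=0. Gives garbage below threshold and raises
    (division by zero) if the same x-coordinate appears twice."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=42, threshold=30, count=40)
assert recover(shares[:30]) == 42   # 30 distinct shares: key recovered
assert recover(shares[:29]) != 42   # 29 shares: (almost surely) no information
```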


I'm sure the reviewers will definitely be able to give each reported image all the time and attention it needs, much like the people YouTube employs to review videos discussing and exposing animal abuse, Holocaust denial and other controversial topics. </sarcasm>


Difference in volume. Images that trip the CSAM hash are a lot rarer than the content you just described.


I personally am not aware of how the perceptual hash values are distributed in their keyspace. Perceptual hashes can have issues with uniform distribution, since they are somewhat semantic and so is most content out there. As such, I wouldn't make statements about how often collisions would occur.


We detached this subthread from https://news.ycombinator.com/item?id=28219296 (it had become massive and this one can stand on its own).


> 6. Apple's CSAM detection then flags these, and they're manually reviewed

Is the process actually documented anywhere? Afaik they are just saying that they are verifying a match. This could of course just be a person looking at the hash itself.


They look at the contents of the "safety voucher", which contains the neural hash and a "visual derivative" of the original image (but not the original image itself).

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


If it’s a visual derivative, whatever that means, then how does the reviewer know it matches the source image? Sounds like there’s a lot of non-determinism in there.



