Show HN: 59a34eabe31910abfb06f308 – NeuralHash Collision Demo (thishashcollisionisnotporn.com)
339 points by mono-bob on Aug 25, 2021 | 235 comments


This simple site is a far better demo and explanation of the extreme danger of Apple's proposal than any of the long articles written about it.

Thank you for caring enough to put this together and publish it.


Is Apple proposing anything? I think they are just going ahead with it


Probably gaslight their users for the next 4 years and then backflip with a keynote saying how they are revolutionising privacy by removing it


They are going right ahead with this and there is no stopping it. And, if history is any indication, we’re going to hear more of this when needed…

[1] https://www.theguardian.com/technology/2019/jul/26/apple-con...


Not in the EU; it will be blocked by privacy regulation.


Sigh… All these articles and demos are based on their old release, which is also emulated on onnxruntime. Who knows how they actually use it.

We should cool down a little and wait for the real system before expecting anything from them. Let's see then how bad it is; right now we are just victims of speculation.


You are basically saying “Let’s just wait and let them scan our phones.”


At this point, it is hard to argue anything that sides with Apple, as most have already made up their minds.

However, this is something that has been planned for years. Bad PR won't undo it, and stepping back is not going to happen overnight.

There is a lot of misinformation on this subject, because it is really hard to see the bigger picture. We are in this situation because of leaked information. Whether the leak was intentional, for political reasons, who knows.

Apple is definitely going to announce some E2EE features very soon, such as backups, at the next event. It is very hard to invent a better system than the one they have released now for enabling E2EE while still applying CSAM scanning.

When you step into the Apple system (which is quite closed, and nobody really can make an extensive review of it), you place all your trust in Apple. Whether this scanning happens on your phone or in the cloud does not really matter, as long as the end result is the same (the same files are scanned).

What about abuse of the system? Before on-device scanning, a government could ask Apple to add a feature that reports whether users have Winnie the Pooh images on their phone. (Could you spy for us?) After on-device scanning is added, they instead ask Apple to add a hash dataset that finds Winnie the Pooh images. (Could you spy for us?) The question is still the same. Does a technical change really alter the decision facing the managers and the board?

What about forging CSAM images to make the FBI show up at a neighbor's door? That is not going to happen easily. The only avenue is brute-forcing the human review process, and in that case the finding is further double-checked by NCMEC. But first you need a hash that exists in their database, and secondly the image has to be uploaded to iCloud. And not just once: 30 times, with different hashes.


> which is quite closed, and nobody really can make extensive review for it

That's what Apple wants you to think, and they're very successful at marketing, but it's not really true. The jailbreak scene is not what it once was, but people still dig into the innards of Apple devices and services. Apple engineers put their pants on one leg at a time, the same as everyone else.


This is true, but there has always been a long delay before the reverse-engineering process catches up. One year can mean a lot in the current world.


It seems that this is based off a GitHub project that takes the hash algorithm directly from binaries in macOS, so it is what Apple shipped.


They're the ones who need to cool it with the global surveillance.


Test in prod? Wait until one of these false positives gets to court?


You mean let Apple slide here?


Does it? This just gets things flagged for another test and then manual review; it doesn't flag you as a pedophile or something.

It seems to be getting a lot of attention because it's an easy target, but it doesn't seem to be a very meaningful one given the system they say they use.


It's easy to imagine innocuous photos with nude coloring being altered to match CSAM hashes. Having a few will flag your account for human review. The low res visual derivative[0][1] that the human moderators see will look credible enough to be CP[2] and alert the authorities.

[0] It's federally illegal for anyone to have CP on their system, or to view it.

[1] So the workaround is displaying a distorted low-res version instead https://www.apple.com/child-safety/pdf/Security_Threat_Model...

[2] You can replace CP with anything your oppressive regime is trying to snuff out.


>Having a few will flag your account for human review.

This is not correct. Having a few will put those photos through a second, blind hashing algorithm. It wouldn't be flagged for human review unless the images in question passed the second stage and, even then, you wouldn't have any of the information for those hashes to match.


My bad.

Having a few will have the flagged photos sent to their servers, now liable to be obtained by external forces. Then processed by a second visual hashing algorithm, liable to the same type of collision attacks, using obfuscation as its promise of security.


It doesn't send the photos to their servers. Read the white paper before you make snarky responses. Most of what you're saying is covered there.


The fact that they send a visual derivative of the image, such as a low-resolution version, makes the human review process tricky. Because there is, understandably, no actual CSAM image to compare the visual derivative with, images that look similar enough (contain nudity or a child in a swimsuit) may get past the human review process.


The visual derivative of the image only gets sent if the image matches both hashes (including a multi-stage blind hash). It's almost certain that an image that makes it to that point would be CSAM.


They send a low res visual derivative. Which is worse than sending the actual photo in regards to false positives.


It doesn't send the derivative until it passes both hash checks. The blind hash includes multiple checks to make sure that the attacks presented in the GitHub article don't get through. There are currently multiple methods of creating collisions and one method of hiding a different thumbnail through resizing, but no method of combining any of those attacks. Apple has already indicated that their verification uses multiple methods to generate the blind hash, so, thus far, there's no way for this to send a low-res visual derivative without it making it past both hash checks.


That's just incorrect. From the whitepaper:

> Once Apple’s iCloud Photos servers decrypt a set of positive match vouchers for an account that exceeded the match threshold, the visual derivatives of the positively matching images are referred for review by Apple. First, as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.

So to recap, this second blind hash you're talking about is run _after_ the server has already received and decrypted the visual derivatives (what else would it be hashing?).

Perceptual hashing is highly susceptible to collisions. I'm not sure what you're talking about with "hiding a different thumbnail through resizing". All you need is nude-looking colors in your photo to let the derivative look like credible CSAM. No resizing tricks necessary.


The matches get sent to LEO for that validation. LEO has no incentive to exonerate you, their incentives are purely to identify wrongdoing. Any wrongdoing.

Having an image sent to LEO is effectively being accused of a crime.

What prevents a judge from approving a search warrant on the basis of an image that matched a hash? The answer is, of course, “nothing”. What guarantee do we have that these matches aren’t used as a signal to some other automated extra-judicial search? Again, none.

None of this is fantasy. Take a look at the no-fly list and no-knock warrants and ask yourself if you really want an automated system making accusations to LEO.

e: clarified that images are sent to LEO, not just hashes.


The idea that a federal judge would approve a warrant based on a hash collision is fantasy. Assuming they even understand what a hash collision is (most will not), their first question to a prosecutor will be “have you confirmed this is illegal material?” Prosecutors are allowed to look at the original files and are expected to do so before seeking a warrant.

Law enforcement has no incentive to exonerate you, but they have tremendous incentive not to waste everyone’s time, including their own, on fake cases with little chance of conviction.


> “have you confirmed this is illegal material?”

Sadly, in the US prosecutors regularly mislead the court, because our adversarial system of law is seen by many to obligate them to zealously prosecute their case right up to the boundary of the law (that is, the prosecutor may conceal evidence, mislead, and exaggerate right up to the point where going any further would be an unambiguous crime).

As a result, there have been child porn prosecutions over totally legal material rescued only by the actress in question showing up in the court. ( https://www.crimeandfederalism.com/page/61/ )

At the end of the day, Apple's decision to reprogram their customers' private property to snoop on those users' private files and report on them creates a risk for those users. The risk may arguably be small, at least today. But it is a risk nonetheless.

We don't need trillion dollar corporations snooping on users' phones like masked vigilante crime fighters. Batman is fiction, and even in The Dark Knight, when he spied on everyone's phones it was clearly portrayed as an erosion of his moral character out of desperation.


I’d rather not be in a position that I have to trust a federal judge. Nor do I want federal prosecutors looking at my data.


No, it doesn't get sent to LEO. It gets run against another hash function that can't be guessed using this method, and an apple employee looks over it.


Thanks, I updated my post to clarify that images are sent to LEO, not hashes.


I can see it being used to forcibly start an investigation of someone, but the image itself is not CP and would be quickly proven as such assuming you have a halfway competent lawyer.

There's a fair point in there about public defenders not meeting that bar, at least in terms of the quality of their representation given the amount of work they have, though.


Being accused of a crime is not free. There is a cost in terms of time and reputation, even if public defenders or your own lawyers are competent and you are eventually exonerated.


> but the image itself is not CP and would be quickly proven as such assuming you have a halfway competent lawyer.

Lawyers aren't always giving their maximum effort. It doesn't matter that they get paid; you also have to be among their most important clients, just like with air-con repairmen.

Also, the picture is not CP, but by that time the phone has already been searched in its entirety.


"There is no smoke without a fire"

"Those targets will not meet themselves"

"What if that person has CSAM but tried to obfuscate it. Would you forgive yourself if you let it go and it turned out you were wrong?"


Any notion of privacy is eliminated, if you can reasonably expect arbitrary photos to be selected and sent to apple for manual review (where selection could make up really any % of the photos you upload, and presumably you won’t be told which ones were flagged for review).

It's getting a lot of attention because this trivially violates the last 5 years of Apple marketing/branding, and, as per usual, by depending on "won't someone think of the children!" logic (which basically justifies anything and everything)


This has always been the case when uploading something to the cloud unencrypted. You never had any privacy in the first place.


But this isn't uploaded to the cloud. It's your device.


It’s only done to pictures that you’re about to upload to the cloud.


The point is that it's done on our end, not Apple's end.

Many reasons this is bad.

Obviously the "Slippery slope" argument around it scanning more in future and being abused by the state.

There is the possibility of an exploit or bug that sends data when it shouldn't, or that falsely reports you (not a collision), e.g. a bug that reports all photos as bad instead of honoring the results of the scan.

Obviously the possibility of collisions sucks. But that's somewhat constant across implementation options.

It wastes your CPU resources and network resources to do something that only benefits apple.

It's a bad precedent because other companies will likely do this now, so repeat the above arguments with increasingly less trustworthy companies.


Bugs are kinda slippery slope arguments as well, since they are impossible to prevent or measure with 100% accuracy. These are stopped in human review. (Hopefully)

It seems that this system might be extremely resistant to collisions, preventing them from ending up in human review.

https://news.ycombinator.com/item?id=28305946

> It wastes your CPU resources and network resources to do something that only benefits apple

Okay, how practical is this problem? iCloud already scans your files to know whether they need to be synced. And this scan runs less often: only once per file, before upload.

Encryption does not change the file size, so we are talking about a few bytes of extra metadata per image here. A regular RAW-format image is tens of megabytes.


> Okay, how practical is this problem?

Mobile phones run off a battery and can be very important tools for safety and navigation. Every wasted minute of battery matters.

> Encryption does not mess with the filesize

No, but giant Bloom filters take storage.


Only benefits Apple?


A claim that can't be independently verified by the owner of the device.

Apple isn't exactly known for software transparency or allowing software that monitors iOS internals.


With that argument, how do you know that Apple hasn't been doing this already for years? You either fully trust Apple and use their devices, or not.


Now we know that the technology is built, so while we don't know what they use it for, they've told us they have it and it's used for something. It's not hard to assume they'd be less vocal about future changes.


False positives can make your life a living hell by bringing the authorities down on you and then requiring you to prove your innocence to them.

People are routinely arrested over the results of field drug tests despite their high sensitivity and high probability of false positives.


Why would it bring the authorities onto you? It flags the photos for manual review.


Apple recieves a visual derivative of the image, such as a low-resolution version, for manual review. This makes the human review process tricky. Because there is, understandably, no actual CSAM image to compare the visual derivative with, images that look similar enough (contain nudity or a child in a swimsuit) may get past the human review process.


So your concern is that someone is unlucky enough to a) suffer 30+ neural hash collisions, b) some subset of those are indistinguishable from child pornography due to low resolution (flesh colored bathing suits), c) those images match the multi stage hash, then d) authorities subpoena the full resolution images, which turn out to be innocuous but they decide to harass the user anyway.

I dunno man, I'm not too worried about this chain of events.


It could flag you as a supporter of Free Hong Kong in China, women's rights in Afghanistan, or maybe just someone who likes gay porn in Russia.

Building these types of systems is antithetical to a free society.

Can you predict the US' political climate in twenty years' time? No? Then don't build this.


> It could flag you as a supporter of Free Hong Kong in China, women's rights in Afghanistan, or maybe just someone who likes gay porn in Russia

And how do you propose that all of these evil regimes are going to get their images into the NCMEC database? The hash DB will only include photos that are in NCMEC and a second country's CSAM database.

And it will be trivial to verify that the hash DB is consistent across different countries.


> And how do you propose that all of these evil regimes are going to get their images into the NCMEC database? The hash DB will only include photos that are in NCMEC and a second country's CSAM database.

I am Russia.

I take actual child porn from my vast kompromat databases (perhaps sent to me helpfully by Facebook), or have my agents make some more themselves. They have many talents.

I use adversarial modification to make the child porn match many images of gay pornography popular with the people of my country. I add these modified images to my child porn databases and also ship them off to the relevant agencies in other countries. It's obviously child porn, so of course they add it.

Apple staff forward me matches; the images look pornographic and they hit the database. Failure to report child porn you have discovered is a felony, so they will err towards reporting.

If for some reason Apple doesn't forward on enough of the matches, I hack their servers, kidnap their staff, or simply order them to provide the data they are already collecting (on penalty of not being able to sell in Russia anymore). I can continue to use the pretext of searching for child porn to do all this with a smile.

I think this is all obvious enough, and I'm sure the people who work in this business are smarter than we are, have capabilities we can't imagine, and can come up with even better attacks.


> And how do you propose that all of these evil regimes are going to get their images into the NCMEC database?

All they have to do is say: in order to sell iPhones in this country, you must do what we want. Apple will capitulate.


I know it means nothing to people that make this argument but Apple have already said that they won't give in to those demands. If they do, I would expect far more backlash than from just one country.


Yeah, Apple will totally refuse the Chinese government, which controls the Chinese market that Apple cannot afford to lose.


Who says they can't afford to lose the Chinese market? They're literally sitting on a stockpile of cash.


There's so much wrong with all of this. First, the automation of surveillance. I know Apple is not the first, but having private companies slowly treat all of their customers as suspects is really wrong. We're not talking about someone suspected, with targeted surveillance. We're talking about everybody, USSR style. I don't care that it's automated. We can go through the safeguards that Apple put in place. Second hash? It seems that just makes it more costly to generate a collision; it's absolutely not a guarantee. Human verification step? Just look at the shitshow of App Store review and how incompetent Apple is at that stage; that makes it even worse and more arbitrary. There's nothing to defend at all in this whole thing. What Apple is putting in place is dangerous. Using child pornography as an excuse to set up a global surveillance system is horrible.


"Manual Review" of what? My private photos, including all my legal but unpopular sexual and political activity? Including my wrongly-illegal private herbal activity?


The photo specifically flagged by a hash that you purposefully saved that was going to sync to your iCloud photo library.


Not even that. Apple recieves a visual derivative of the image, such as a low-resolution version, for manual review.

The downside it that this makes the human review process tricky. Because there is, understandably, no actual CSAM image to compare the visual derivative with. Images that look similar enough (contain nudity or a child in a swimsuit) may get past the human review process.


Thank you for your kind words. This situation is something close to my heart, I think privacy should be a fundamental right for everyone, and it is essential to inform people.


To add some context...

When photos are uploaded to places like Google images they are scanned for this material.

Apple is currently only processing images that are being uploaded to iCloud. So these are photos people intended to upload to them, which means it's not an unwanted upload of photos to Apple's servers; that upload is happening anyway.

Note, I think scanning should happen on Apple's servers and not on the phones. This is an invasion of the device in my book.

I wonder if this is a way to scale the system and reduce their servers and power use.


[flagged]


The site shows you could easily receive an innocent image that would collide with child porn which could raise some questions.


No it doesn’t. There is a second hash that has to also be matched even before human review, and this doesn’t demonstrate an image even getting that far.

Here is the relevant paragraph from Apple’s documentation:

“as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.”

https://www.apple.com/child-safety/pdf/Security_Threat_Model...


> by a second, independent perceptual hash

Does this mean that they have another perceptual hashing algorithm, and that this happens purely on the server side?

Because that could be quite genius. It would be really hard to adversarially fool two different algorithms.


It does mean exactly that.


I'll assume you are explaining the concept, but at this point the sum of all the elements adds up to a feature that should be cancelled.

The arguments have been stated before, the principal one being that scanning the user's device goes against the wishes of users and what Apple until now has stood for.

I think Apple fails to see the seriousness of this.


The sad truth is that we won't see more E2EE features from Apple without this. It is a hard decision for a hard problem. And the problem gets bigger when you are the most valuable company in the world.


Yes, that could very well be the dilemma Apple is in.

But I don't consider it E2EE if you are scanning content on the device first.


> and what Apple until now has stood for.

Apple has a billboard right outside my office. Up until last week it said "privacy" in big letters. Now there's no mention of privacy at all.


> the sum of all the elements

The comment chain I was responding to is about the claim that neuralhash is broken so the system is dangerous.

This claim has been repeated multiple times in this forum. Indeed it is one of the loudest two arguments against the system and is simply false.

The sum of elements which are false is still false, no matter how many you have.

It’s certainly not universally true that scanning the device goes against the wish of the users. There are many users who are quite happy to have their devices scanned if it is part of making life harder for child abusers. I have spoken to such people.

Therein lies the problem. This system really doesn’t do anything harmful, and it really does just make the tool less useful for collecting child pornography.

There is no simple argument against it, hence all of the misrepresentations about its purported technical problems.

I strongly dislike the feature, and would rather they don’t do it.

My arguments are that I don’t want to be, even in principle, suspected of something I haven’t done. And that trust should be mutual. If Apple wants me to trust them, I want them to trust me.

I don’t see how we help make the world better by using false narratives about technical issues to get what we want.


Whenever you have a system that relies on probability (eg the extremely low probability that an image might match two independent perceptual hashes) you have to take two things in to account;

1. There is a possibility, however tiny, that an innocent image will still match both hashes. The probability is not zero. It can't be. They're hashes.

2. By telling people that there is no chance of an innocent image matching both hashes you are forcing the burden of proof on to the victim. If someone is unfortunate enough to have an image that matches both hashes they will be dragged into a law enforcement office and told to explain "why the foolproof, perfect, impossible-to-cheat system said there was child porn on their phone". It will be on them to explain why the system isn't correct. The presumption of innocence is lost when too much faith is placed on technology.

That is why this is dangerous. Arguably it's a well-designed system that safeguards children and catches despicable criminals, but unless people understand it isn't infallible, and stop arguing that it is, then it could cost an innocent person their freedom. That's a high price to pay.
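
A rough back-of-envelope sketch of point 1 in Python, with made-up per-image false-match rates (Apple has not published these numbers) and ignoring the 30-image threshold; it is only meant to show that the expected number of double false matches is small but never exactly zero:

    # All three inputs below are assumptions for illustration, not published figures.
    p_neuralhash = 1e-9        # assumed chance an innocent photo matches a NeuralHash entry
    p_second_hash = 1e-6       # assumed chance it also matches the second perceptual hash
    photos_per_year = 1.5e12   # assumed iCloud photo uploads per year, across all users

    # Under an independence assumption, the expected number of innocent photos
    # per year that pass BOTH hash checks:
    expected = photos_per_year * p_neuralhash * p_second_hash
    print(expected)            # ~0.0015: tiny, but not zero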


There's also a positive probability that ssh-keygen spits out your ssh key next time I run it.


It's not unlikely that there are a grand total of 0 images in the database that collide with both hashing algorithms. Of course it's possible, but so is randomly typing in someone's PGP key.


> reject the unlikely possibility

"Unlikely possibility" shows either blatant facetiousness or blatant ignorance. Who couldn't predict the explosion of collision-generating projects?


You either have to break their HSM (hardware security module) or their cryptography to access the known hashes, or get them from elsewhere. Owning the actual images is illegal. It is hard to see an explosion of these projects. You need 30 hashes matching their database to trigger anything.


>Owning the actual images is illegal

Yeah it's not like there are a lot of people on the darknet that do illegal things all the time. /s

You only need one of them to compute the neural hashes of the images and then upload them. For others, it is not illegal to have or distribute the neural hashes.


> Yeah it's not like there are a lot of people on the darknet that do illegal things all the time. /s

How many of them are developers with enough knowledge, compared to all the developers in the world with enough knowledge? And willing to touch CSAM material?

I think there is quite a difference. And they have better things to do than prank people, because you are not getting jailed over hash collisions in a country with a working judicial system. You have bigger problems if the judicial system is not working.


Criminals won't be able to create hash collisions because the images are illegal to possess! Illegal, I say. That is the surest way to stop criminals.


You don't need to have illegal images to create collisions with them. You just need someone with illegal images to publish hashes.


Something being very illegal certainly keeps it away from the masses. I am not saying that it prevents criminals, but it significantly reduces public projects. And on top of that, even most criminals hate pedophiles, so how many are really willing to access CSAM material?


It only takes one to do it and then distribute the hashes.


As pointed out elsewhere, it doesn't matter. NeuralHash collisions don't break the system.


Thanks. I was under the impression that the hashes were easier to get.


It's not just that they're hard to get - it's also that no one has demonstrated a preimage attack. So given just an illegal hash one can't construct an innocent image with that hash. These demonstrations show only that a second image can be transformed to have the same hash as a first, given image.

As other comments note, that's not a huge increase in difficulty for an adversary willing to deal in actual CSAM.


> no one has demonstrated a preimage attack. So given just an illegal hash one can't construct an innocent image with that hash. These demonstrations show only that a second image can be transformed to have the same hash as a first, given image.

Isn't this exactly like that? Or am I misunderstanding something?

https://twitter.com/ghidraninja/status/1428269674912002048?s...


No. The author describes their process for generating the image down that very thread[0]

Quoting the tweet for those who dislike twitter:

> The procedure for this was pretty simple:

> - Started with a placeholder image

> - Got the hash of it

> - Changed text of the image to match the hash

> - Then used

> @anishathalye great neural-hash-collider which took only 1-2 iterations

N.B. it starts with 'started with an image, and the hash of it'

[0]: https://twitter.com/ghidraninja/status/1428270675010199554


What is the practical difference? It is a forged innocent image with a matching hash. You basically started with only a hash, since the image was not related to it.


> Anyone who claims this is evidence of danger either doesn’t understand how the system works, or is deliberately misleading people.

Anyone who claims this is not evidence of danger either doesn’t understand how the system works, or is deliberately misleading people.

Fixed it for you :)

On-device scanning did not exist before Apple. Apple is doing it. Now devices don't have your best interest in mind. Bad precedent, easily abused in the future by others with even worse implementations or motives. Evidence of danger.

Their hashing algorithm by nature of being a hashing algorithm will have collisions. That is unavoidable. Unavoidable false positives are not great in the context of your device ratting you out to the authorities. Evidence of danger.

QED.


> On device scanning did not exist before apple

Virus scanners scan on devices. Practically all big brand commercial scanners nowadays scan and check findings on-the-fly online.

That said, Microsoft has shipped Defender since XP or so. Apple has had some basic malware scanning since Catalina, where it compares hashes of executables.

Facebook has checked hashes of pictures sent in messages against hashes of known abuse pictures for years.

> Their hashing algorithm by nature of being a hashing algorithm will have collisions.

Yes and no. You'll probably never find a collision in a SHA-3 hash. Quite likely you won't even encounter a SHA-1 collision. But this NeuralHash? It has only been public for a short time and already such a collision is being presented.

As a user I probably don't want viruses, ransomware or anything of comparable destructiveness on my computer. But at the same time the implementation should be rock-solid and user-controllable. That's a no-brainer for anything SHA-based, but for NeuralHash it isn't.


Virus scanners do not normally report to some authority. That is an important distinction. For personal (that is, not corporate owned or provisioned) devices this creates a blurring of lines that used to be clearer.

If these scans warned you that the content was questionable and quarantined it with an option for you to release it on your own review, it would be truly analogous to antivirus software. However, that's not what's happening. I think a lot of the arguments against are overstating certain elements and often ignoring others (like that there are two manual reviews before anything goes to law enforcement), but we can't go in the other direction to defend the system either. It is not like existing on-device scanning solutions that are meant to protect (by some definition) the user or the device.


We should be careful with this part:

> Unavoidable false positives are not great in the context of your device ratting you out to the authorities. [emphasis added]

If the detection were immediately sent to authorities, this would be an issue. And it's possible in the future (or in practice) that the manual review could be removed or will be a rubber stamp. However, they do have a manual review process slated for this. The reason I promote caution with regard to this element of your argument is that if you have an easily rebutted statement, people will disregard the rest. It's why hyperbole in most political arguments should be strongly avoided, it makes it easy for people to dismiss the rest of the statement(s) no matter how valid.


Not only does Apple have their own manual review step, if an alert gets through that, then NCMEC (in the U.S) does their own, additional review, and then law enforcement would also do a manual review before seeking a warrant. And unlike Apple, law enforcement can and will review the actual full-resolution file(s) in question.


> Their hashing algorithm by nature of being a hashing algorithm will have collisions. That is unavoidable.

Yes, exactly. They themselves state this, and have a mechanism for preventing collisions from turning into false positive matches.

I have to assume you simply didn’t know that.


There is only 'no danger' if you trust every level of bureaucracy to act intelligently and with good faith... which is an incredibly naive and historically improbable position to have.


This is nothing to do with bureaucracy. Hash collisions like this are simply insufficient to trigger a match.


Can you explain?

Apple has said that if you have X hash matches/collisions with a set of images you'll be flagged to the authorities?

This site has now proven that it's easy to create pictures whose hashes match (collide!), thus potentially flagging lots of people.



After 30 hash matches/collisions[0], Apple's system is cryptographically[1] able to open the matching vouchers, and only then are they sent to Apple's review team. If the manual review team confirms the CSAM, it's reported to NCMEC, and NCMEC's team will likely file a police report if they also confirm the images are CSAM.

0: https://pocketnow.com/neuralhash-code-found-in-ios-14-3-appl...

1: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
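
For readers wondering how "can only open after 30 matches" can be a cryptographic property rather than a policy one, here is a toy Shamir-style threshold secret sharing sketch in Python. It is only an illustration of the threshold idea; Apple's actual construction (threshold secret sharing combined with private set intersection, per the linked technical summary) is considerably more involved.

    import random

    PRIME = 2**127 - 1  # a large prime field, big enough for a 126-bit secret

    def make_shares(secret, threshold, n_shares):
        # Random polynomial of degree threshold-1 with the secret as the constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0; only recovers the secret with >= threshold shares.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj == xi:
                    continue
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # Stand-in for the per-account key that unlocks the match vouchers.
    account_key = random.randrange(PRIME)
    shares = make_shares(account_key, threshold=30, n_shares=100)

    print(reconstruct(random.sample(shares, 30)) == account_key)  # True: 30 matches suffice
    print(reconstruct(random.sample(shares, 29)) == account_key)  # (almost surely) False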


> This site has proven now that it's easy to create pictures whose hash matches (colides!)

But no-one, yet, I think, has cracked open the blob containing the neuralhashes of the NCMEC CSAM corpus which are the actual hashes you need to create pictures to collide with to flag people under Apple's new system, right?

(This may be because the blob isn't yet on the phones; I don't know, I just haven't seen anyone mention it wrt creating hashes.)


And there's no preimage attack! So even with the unblinded hashes (which per Apple's description are encrypted with a private key they control) one couldn't construct an innocent image to match a given hash - they would need the original image too!


You just need to get ahold of some widely spread CSAM. Certainly not everybody is ready to do that, but it is far from impossible for a motivated individual to obtain "hot" hashes.


> You just need to get ahold of some widely spread CSAM.

Risky. But ok, you still need to get hold of 30 images[1] that are in the NCMEC corpus before it'll even get flagged up to a human reviewer.

[1] Do we know if it's 30 distinct images or will 30 copies of the same one trigger things? The latter is far easier than the former for the gathering step here, of course.


This is false. There is a second independent hash that you don’t have access to that also needs to match.


Ever hear about Kerckhoffs' principle?


And once created, the list could be distributed ad infinitum. This is "security by obscurity" at its finest (in terms of the lack of availability of the hash database). If it were a truly robust system, why not make the hash database public?


> This site demonstrates no danger at all. Hash collisions like this are expected.

Collisions are expected. Being able to create a preimage on demand is not.

The entire scheme risks running afoul of Goodhart's Law (https://en.wikipedia.org/wiki/Goodhart%27s_law). By calculating the neuralhash of an image in a publicly-known way (so the calculation can run on the user's device), someone distributing illicit material can modify said material to deliberately collide with a known-good image.

This provides a reasonable guarantee that the illicit material is not in fact caught by the on-device scanning, and if the illicit material is later added to the "known-bad" database the collision with the known-good image will decrease the signal/noise ratio of the whole monitoring scheme.


Apple says they're using two hashing algorithms. Even if you could figure out collisions with the hidden one, finding a collision with both seems unlikely, although I am looking forward to more rigorous review.


If the second hash is just another version of neuralhash it's not that unlikely that the same image could match both.

It's PhotoDNA; people could make images that match both.

You really can't say anything about "unlikely" when the other algorithm is a secret. Perhaps it's just "return true". Apple failed to disclose[*] that NeuralHash could be adversarially matched until the public exploited it; what else are they failing to disclose?

[*] I say fail to disclose instead of didn't even know because I believe there is just no way they didn't know this was possible. I generated my first match with almost no experience with the machine learning tools in about an hour, most of which was spent trying to figure out how to use the Python neural network glue stuff. It should have been expected by design, and other than getting lucky with some parameters, basically the dumbest thing possible works (not for every image pair, but enough).


> not that unlikely that the same image could match both.

That seems like a statement that needs to be justified. I suppose one could apply a 50% dropout, construct adversarial pairs on that network, and then run them through the other half-network to get an idea of the frequency of matching both.

My intuition is that the probability of an adversarial pair matching on a second, independent network should be close to the probability of a non-adversarial collision. Robust adversarial perturbations are hard enough when only considering class labels, never mind hidden state.


> You really can't say anything about "unlikely" when the other algorithm is a secret. Perhaps it's just "return true".

You’re obviously not being honest. You don’t believe it could actually be just ‘return true’.

> [*] I say fail to disclose instead of didn't even know because I believe there is just no way they didn't know this was possible.

Of course they knew it was possible. That’s why the system doesn’t rely on it not being possible.


I believe it is likely that apple will improperly fail to secure the files they can decrypt from insider attack, and I am confident that governments can order them (third party doctrine) to hand over material, bypassing those protections.

If you're going to question my honesty, can I ask what financial interest you have in apple or this abuse system, if any?

> Of course they knew it was possible.

I'm glad we agree. But the fact that their hash is subject to adversarial images, causing totally innocent images to be decryptable by apple, is disclosed nowhere in their documentation or security analysis.

I think that is extremely concerning, particularly because it undermines a central plank of their claimed protections: They claim they will protect against state actors including non-child-porn material in their databases by intersecting multiple databases from multiple states. This claimed "protection" is completely obliterated by state actors being able to modify child porn to match the hashes of other images and then submitting this material to each other.


> If you're going to question my honesty…

The questioning is a response to your comment.

Do you seriously think that Apple’s second hash function could be ‘return true’?


Functionally, yes: if Apple receives an administrative subpoena to turn over the unfiltered initial matches, then they'll be divulged without further processing. Literally, presumably not; OTOH they've outright lied about this system already, though I wouldn't think it likely. And when your privacy is compromised, your situation will not be improved because the material that misled you happened to be "technically true".


This is complete and utter bullshit.

Firstly it’s a dishonest response since we were talking about how the code worked. But let’s put that aside and consider it anyway.

If the code literally returned true, the second hash would have no effect at preventing collisions for any user at any time. As you say, for this to be the case, Apple would need to be lying.

A subpoena wouldn’t be issued without cause, would only impact specific individuals, and would have no impact on the general protections provided by the secondary hash at preventing the adversarial attacks proposed.

Nothing about this is even close to functionally equivalent to ‘return true’.


Apple's system does seem to be robust to intentional or accidental hash collisions [1], though it is very hard to explain that to non-technical people. (Added later:) Because everyone seems to claim that they understand the system without looking at the linked post, I should at least mention that the device only has "blinded" hashes that can't be used for a collision attack, and the match actually occurs at the server, but only once the device reports a certain number of such hashes.

But the website correctly points out the possibility of "[m]isuse by authoritarian governments", and that is what we should definitely be concerned about. It is not a CSAM detection algorithm. It is a generic image detection algorithm.

[1] My own analysis: https://news.ycombinator.com/item?id=28223141
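
A toy Python sketch of the "blinded" part, just to illustrate why a database of blinded hashes gives an attacker on the device nothing to aim a collision at. This is not Apple's actual private set intersection construction (which is elliptic-curve based and lets the device produce vouchers without learning the match result); it only shows the one-wayness property, and every name in it is a placeholder.

    import hashlib, hmac, os

    server_secret = os.urandom(32)   # held server-side, never shipped to devices

    def blind(neural_hash: bytes) -> bytes:
        # Keyed one-way transform of a perceptual hash.
        return hmac.new(server_secret, neural_hash, hashlib.sha256).digest()

    # The database shipped to the device contains only blinded values like these.
    known_hashes = [bytes([1]) * 12, bytes([2]) * 12]   # placeholder 96-bit hashes
    blinded_db = {blind(h) for h in known_hashes}

    # Without server_secret, an attacker who dumps blinded_db from the device
    # cannot recover the underlying NeuralHash values, so there is no target
    # hash to craft an adversarial image against.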


I'm reminded of the outrage and misunderstandings surrounding Amazon Sidewalk. I used to expect much more from the HN crowd when it came to technology.


It feels like HN has really gone downhill… low-quality comments, many pontifications and armchair opinions stated as fact despite zero relevant experience, snarky comments, a ton of politicizing, less toothy technical information (articles and comments)…

I'm actually tempted to delete my account and stop participating, but I don't know where else to go instead. I liked old HN... anything like it now? Suggestions?


>arm chair opinions stated as fact despite zero relevant experience

There was a commenter here a little while ago making wild claims that he was being monitored because he used a ProtonMail account to sign up for a government service. He was being taken seriously by a large number of replies.

I'd love to see/find an alternative as well, but I think part of the issue is just volume. You'd need a platform that's restrictive by design to keep it from getting over populated and landing on the wrong side of the quality-quantity curve. I'm not sure that can exist in a free (as in beer) platform.


HN was that before it got popular. Most things in life are like that… great until the masses come. So where are all the cool kids these days?

Part of me is worried that there aren’t such places, in part because culture wars have made the online world so toxic.


The problem is in the discovery of these platforms. If they're easier to find, they tend to decay faster.


HN has always had a share of this kind of stuff, especially around Apple.

However, I agree that this time it seems different. I don't know if it is because this topic is genuinely more complex than most. I suspect that is part of it.


It's hard to reply to your post in the spirit of HN in assuming best intentions.

But please suppose most of the readers here do understand how the system works, and most of them don't approve how it works.


I understand how the system works so I know that hash collisions are not dangerous because they have to be manually reviewed by multiple humans. Since you understand how the system works, please explain the danger of hash collisions instead of meta-commenting.


I'm not them, but they have a point. This ends up getting re-explained in every single one of these threads, so I'll keep it brief:

- There are two main components here: automated flagging and supervised, human confirmation

- It has been proven that the automated flagging system can be fooled

- The last line of defense is now a human: they have to look at an obfuscated version of your colliding image to determine if it's objectionable

The current discussion is mostly around the second portion, moreover what kind of social-engineering attacks would be most effective to overcome a relatively flimsy single-human point of failure.


> - It has been proven that the automated flagging system can be fooled

This statement is absolutely false. No such proof has been presented anywhere, including in the linked article.

Adversarial NeuralHash collisions are expected, and the system is designed to avoid false positives if they are encountered.

Here is the relevant paragraph from Apple’s documentation:

“as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.”

https://www.apple.com/child-safety/pdf/Security_Threat_Model...


> It has been proven that the automated flagging system can be fooled

This is incorrect. You need the actual NeuralHash for the collision attack to begin with, but the device only has blinded hashes so you have no hash to begin with.


You'll probably be able to buy the hashes from someone who distributed a bunch of CSAM for the express purpose of getting it into the database.


This is a legitimate concern, but you can't be sure because the system doesn't allow you to determine if the attack was successful, at least in principle (I think there is a timing attack possibility).


There isn't a single-human point of failure because even if Apple thinks your 30+ hash collisions are real CSAM, they'll send it to be examined by NCMEC who may then forward it to the FBI who may then forward it to a prosecutor... all of those people would have to fail in their jobs for hash collisions to matter.


> who may then forward it to the FBI who may then forward it to a prosecutor...

To be fair, though, those two steps have failed -in sequence- enough times for it to be statistically significant on the "OH WTF" scale.


A collision can be generated using an innocent image of a child. When scaled down, and without the operator having the collided image for reference, it will very likely be reported. They have no way of knowing whether the innocent image is part of a set.


Apple has the keys to see the full size image as well.

You are also leaving out the step where they run the scaled down image against another separate app that isn’t susceptible to this hack since the output isn’t known.


Seems like this CSAM tech could be super useful in China for detecting Winnie the Pooh or other evidence of thoughtcrime against the regime. Even if Apple doesn't end up rolling it out, I'm sure Huawei is taking careful notes.


Why detect it, if you can plant it?


You plant it to catch problem people that you already know about, you detect it to catch problem people that you don't know about.


You also plant stuff to have a handle on people you’d like to use.


Why not both? Now you don't need to wait for your planted material to come to light, an automated system will surface it for you.


Systems like these are not hard to implement. Huawei could already have taken notes from McAfee's first AV engine. It's all about the decision.

Even Dropbox uses quite similar scanning to work out how to sync your files to the cloud without re-uploading whole files every time.


Apple clearly has millions invested in this system. The sophisticated cryptography for the private set intersection which is needed to keep the hash database concealed and unaccountable is far from trivial. Neuralhash, faulty as it is, is far from trivial.


What I mean is that a surveillance system is trivial to implement (which is what most people here are using as the argument).

This system is ingenious in terms of maintaining user privacy and locking Apple out of your files. That is something most refuse to see. They have invested a lot.


I disagree. E2E encryption of files is widely implemented. Apple is years late to the game. Apple's innovation here isn't privacy, it is to compromise privacy while simultaneously shielding themselves and their data sources from accountability at the expense of even more privacy and risk of abuse.

If the objective was maintaining the user's privacy they could use industry standard end to end encryption.

If the objective was to offer somewhat more privacy than nothing while still acting as a vigilante crime fighter, they could make the user's software use a cleartext hash database and report the user when there are matches, and otherwise use industry standard end to end cryptography.

Only when you add the requirement that the vigilante crime fighting must occur behind a mask, i.e. that it must be cryptographically impossible to hold Apple and its sources to account for the content of the database, do you actually arrive at the need for substantial new investment.

Yes, anyone can implement surveillance. Apple's innovation is kleptography. A surveillance infrastructure (somewhat) successfully disguised as privacy protecting cryptography.


The core problem is political, and they handle it as an engineering problem. Of course the perfect solution in terms of privacy, E2EE, is the best solution.

It is impossible to predict the future perfectly, but what if the most valuable company in the world applies full E2EE to its services? Governments control companies with regulations, and even before this there was a lot of talk about compromising E2EE with government access. In Germany, they passed a law in June to do exactly this.

> compromise privacy while simultaneously shielding themselves and their data sources from accountability at the expense of even more privacy and risk of abuse.

It is true that they shield themselves at the same time, but as the alternative is not to encrypt at all, this is not a compromise of privacy. It is a step towards more "E2EE-like" data and does not worsen the consumer's situation.

> If the objective was to offer somewhat more privacy than nothing while still acting as a vigilante crime fighter, they could make the user's software use a cleartext hash database and report the user when there are matches

It is impossible to verify the validity of the report in such a scenario, and they have no legal basis to act. This is the scenario that would bring the FBI to your door over adversarial hash collisions, because they can't validate anything.


> It is impossible to verify the validity of the report in such a scenario

Sorry, I was unclear. I mean that in that case it would send them a cleartext copy of the file, and otherwise (if it doesn't match) just do the standard end-to-end thing.

It would still violate the user's privacy, but it's a simple, cheap to implement, and relatively transparent system (e.g. we could test the database against the internet to discover if they're misreporting popular images) which would continue to allow Apple to cosplay as Batman.

I don't think it's good. I believe users devices have an unambiguous moral obligation to act in the user's best interest to the extent provided by the law unless directed otherwise by the user. I believe we should consider enshrining the moral obligation in the law. Unfortunately, unethical actors have moved the overton window so far over that rather than getting a device agency law we'll have to fight laws mandating backdoored encryption like Apple's.

I bring up the alternative to point out that the fancy crypto in this new system isn't there to protect the user, as the parent poster was suggesting. The fancy crypto is a mask that helps shield Apple from accountability for their vigilante actions. Even if you take for granted that Apple gets to play crime fighter, there are simpler and more transparent alternatives.


I think I read on HN a while ago that WeChat uses a similar hashing technique to scan photos being sent.


Seems like an authoritarian government wanting to quash Winnie the Pooh memes would be better suited to using ML to detect them rather than having to rely on a list of known memes.


Why? Labor is extremely cheap in China; they can easily hire tens of thousands of people to manually flag memes they don't like and add them to a list. They can then rely on ML-augmented proximity hashes to do the actual detection (WeChat does this).


Have you considered that maybe china isn't the 1984 bogeyman dystopia the corporate media told you it is, and that it's just hysteria to distract you from the real issues at home, with legitimized mass surveillance, corporate censorship, and militarized police?


This page directly links to the EFF: https://act.eff.org/action/tell-apple-don-t-scan-our-phones

Please spend a few bucks on supporting them.

A bit of a background on why apple did this (this was flagged, but I don't know why): https://news.ycombinator.com/item?id=28259622


Users flagged it. We can only guess why users flag things, but in this case it's not hard to guess - some readers are past the fatigue stage with this story, which has had dozens of threads at this point.

Others, needless to say, have not had enough yet.


Regarding that background, I agree that the scale does require more consideration.

For a similar example, Apple's Airtags have a number of protections against using them for nefarious purposes. Tile, Samsung SmartTag, and similar devices don't, but Tile users are about 2 orders of magnitude less common than Apple users. You could get pinpoint tracking of a person just about anywhere by dropping an Airtag in their bag, but you'd be lucky to pick up one or two pings if you dropped a Tile.

I think the NeuralHash client scanning approach is overly invasive, but at the same time I think it's a good thing that Apple has someone who's thinking about bad things that can happen while people are using their products.


Done.


I think Apple may have figured out that the best way to get people to accept backdoored encryption is simply to not call it backdoored, and claim that its a privacy feature...

...as if having a trillion dollar corporation playing batman and going on a vigilante crusade to scan your private files is a situation we should already be comfortable with.


And for technically minded people we will roll out a “look at this shiny new tech” trope, which will distract some from any overarching issues.

Heck they might even produce some ideas how to improve new surveillance tech for free!

Pure brilliance.


Cool! Now do one where the user uploads the image and it tweaks it to find a collision on the fly.


Or the opposite where it’s no longer a known image hash…


Since the target image is chosen, this is a (second) preimage, not merely a collision.


Even though it's not apparent from this demo website, the underlying code performs a preimage attack (it doesn't actually use a target image, just a target hash), so it's even stronger than a second preimage attack.


Imagine hiring a young-looking 18-year old model to duplicate the photos in the database and create a hash collision. Now you have a photo which is perfectly legal for you to possess but can rain down terror on anyone you can distribute this file to.


The argument against this tech is a slippery slope argument - that this technology will eventually be expanded to prevent copyright infringement, censor obscenity, limit political speech or other areas.

I know this is a controversial take (in HN circles), but I no longer believe this will happen. This kind of tech has existed for a while, and it simply hasn't happened that it's been mis-applied. I now think that this technology has proved to be an overall net good.


Even though I disagree, this post predates that argument: it shows how easy collisions are, against the "one in a trillion" argument.

Maybe someone sends you some unsuspicious pictures and you get a visit from the FBI, or some Apple (outsourced) employee reviewing them. Maybe it happens with some photo you take, who knows.

This is not a slippery slope, this is what can happen with the process as defined by Apple.


So 30+ images get flagged and they run it against the real CSAM database and it doesn't match? Or let's say someone is able to somehow make an image that gets flagged by both and someone looks at the image and it isn't CSAM. Nothing happens.


IMO this would only be relevant if there was no human verification or if Apple as a whole went rogue


Each image on the left has a blob vaguely similar to the highlights in the dog image on the right. Likely the "perceptual" algorithm isn't "perceiving" contrast the same way human eyes and brains do.


Here's a web demo[0] where you can try out any two images and see the resulting hashes, and whether there's a collision. You can also try your own transformations (rotation, adding a filter, etc) on the image. Demo was built using Gradio[1].

[0]: https://huggingface.co/spaces/akhaliq/AppleNeuralHash2ONNX [1]: https://gradio.dev
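For anyone curious how little glue code a demo like that needs, here is a rough sketch of the wiring, assuming Gradio's basic Interface API; the neural_hash function below is a made-up placeholder, whereas the linked Space actually runs the model exported by the AppleNeuralHash2ONNX project.

    import gradio as gr

    def neural_hash(image):
        # Placeholder: the real demo runs the extracted NeuralHash model here
        # and returns the 96-bit hash as a hex string.
        return "0" * 24

    def compare(image_a, image_b):
        h1, h2 = neural_hash(image_a), neural_hash(image_b)
        verdict = "Collision!" if h1 == h2 else "No collision."
        return f"{h1}\n{h2}\n{verdict}"

    # Two image inputs, one text output; Gradio renders the upload widgets.
    gr.Interface(fn=compare, inputs=["image", "image"], outputs="text").launch()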


> For example, it's possible to detect political campaign posters or similar images on users' devices by extending the database.

So who controls the database?


American nonprofits, which also supply Cloudflare AFAIK; it needs to match multiple databases

https://www.theverge.com/2021/8/13/22623859/apple-icloud-pho...


China is Apple's second biggest market and an integral part of their supply chain. It will be very difficult for Apple to say no when China demands this tech be used to find thought crime.


China could always (already) demand that, I don’t see how the introduction of this tool changes that…


PS. remember that simply scanning photos server-side is MUCH easier to implement.


Can somebody please explain to me how one can go about finding images that have collision hashes? Or how you can create an image to have a specific hash?


NeuralHash is a neural network. This means that it's locally differentiable.

To make an image match a specific hash, you pass the image you want to modify through NeuralHash and compute a difference with your target hash, then ask your neural network library to use reverse-mode automatic differentiation to give you the gradients for each of the outputs with respect to the input pixels.

Update the input pixels.

To make the collisions that look good, you need to either augment the objective to include some 'good looking' metric, or condition your gradient descent to prefer good looking results.

In my examples ( https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue... ) I used a Gaussian blur to high-pass the error between my starting image and the current work in progress, fed that back in, and adapted the feedback to be as great as possible while still keeping it colliding.

The bad looking examples that people initially posted just did nothing to try to make the result look good.

I'm thrilled to see that someone other than me has now also worked out getting fairly reasonable looking examples.
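To make the recipe above concrete, here is a minimal sketch of the optimization loop. The real attack runs against the NeuralHash model extracted from iOS; hash_net below is just a stand-in convolutional network (same 96-logit output shape) so the snippet is self-contained, and the perceptual term is a plain L2 penalty rather than the blur/high-pass feedback described above.

    import torch
    import torch.nn as nn

    # Stand-in for the extracted model: maps a 3x360x360 image to 96
    # pre-threshold logits; the hash bits are the signs of these logits.
    hash_net = nn.Sequential(
        nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
        nn.Conv2d(8, 16, 5, stride=4), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 96),
    )

    def collide(image, target_bits, steps=500, lr=0.01, reg=0.05):
        """Perturb `image` until sign(hash_net(image)) matches `target_bits`."""
        orig = image.clone()
        delta = torch.zeros_like(image, requires_grad=True)  # what we optimize
        opt = torch.optim.Adam([delta], lr=lr)
        signs = target_bits * 2.0 - 1.0  # map {0,1} bits to {-1,+1}

        for _ in range(steps):
            adv = (orig + delta).clamp(0.0, 1.0)
            logits = hash_net(adv)
            if ((logits > 0).float() == target_bits).all():
                break  # every bit of the hash now matches the target
            # Hinge loss: push each logit past zero on the side its bit demands.
            hash_loss = torch.relu(0.5 - signs * logits).sum()
            # Keep the perturbation small so the result resembles the input.
            loss = hash_loss + reg * delta.pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (orig + delta).clamp(0.0, 1.0).detach()

    # Start from any image and any 96-bit target hash.
    result = collide(torch.rand(1, 3, 360, 360), torch.randint(0, 2, (1, 96)).float())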


Awesome, thanks for taking the time to reply! Much appreciated.


Apple have stated that they will make the database of hashes that their system uses auditable by researchers. Does anyone know if that has happened yet? Is it possible to view the database and if so, in what form? Can the actual hashes be extracted? If so then that would obviously open up the kind of attack described in the article. Otherwise, it would be interesting to know how Apple expects the database to be auditable without revealing the hashes themselves.


My understanding is that Apple is going to publish a hash of the database, so a user would be able to compare the hash of the database on their device with the published hash to be “assured” that the same dataset is being used everywhere.
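If the database file (or its root hash) were actually exposed on the device, a check like that would be trivial to script. Everything below is hypothetical: the file name, the published digest, and even the choice of SHA-256, since Apple hasn't specified any of this.

    import hashlib

    def file_digest(path, chunk_size=1 << 20):
        """Stream the file through SHA-256 and return the hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical published digest and on-device database path.
    published = "<digest published by Apple>"
    print("database matches" if file_digest("csam_hashes.db") == published else "mismatch")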


> Is it possible to view the database and if so, in what form?

Not for the general public; it's also stored in encrypted form on the device.

I can tell you how they will make it auditable without revealing hashes to all of their users: by only showing them to experts, just like you don't publish an entire data breach's credentials in cleartext but hand them to someone experienced who acts as a middleman for public transparency.


Irrespective of whether or not NeuralHash is flawed, should Apple scan user data or should they not?

If not, what is going to convince them to stop at this point?

I believe that they should scan user data in some capacity, because this is about data that causes harm to children.

However, I believe that they should not run the scan on the device, because that carries significant drawbacks for personal privacy.


> I believe that they should scan user data in some capacity, because this is about data that causes harm to children.

I think this is actually a pretty weak argument. Children may be vulnerable, but so are many other segments of the population, including those with disabilities, the elderly, and so on. I guess my question is if we're trying to protect children, then why not go all the way? The answer is because going all the way is dangerous: where does it stop? Why here but not there?

I think a better argument would be: we should scan phones for X because X is illegal, and we've deemed, as a society, that X is wrong/bad/punishable/etc. But the problem with that argument is that we already have ways of checking phones/user data for X, and, thankfully, most have to go through a legal process of probable cause, obtaining warrants, and so on. Apple's proposal completely circumvents the legal system.

This is a complicated question that I'm sure will be litigated in court in the next few years. SCOTUS has ruled that drug sniffing dogs (e.g. in airports) are constitutional, whereas using infrared imaging (e.g. for finding drug grow houses) is not. Are hashing algorithms more like drug sniffing dogs or like infrared imaging?


I'm not understanding how it completely circumvents the legal system if the law enforcement agencies are still contacted after the flag is triggered.

But if the lack of a warrant is the problem then the main issue would be the third-party doctrine and having no reasonable expectation of privacy when entrusting your data with any company, not just Apple.


> I'm not understanding how it completely circumvents the legal system if the law enforcement agencies are still contacted after the flag is triggered.

Because, if I were to compare to what OP claims here, constant device scanning and then contacting law enforcement is like having a company constantly infrared scan your house for drug growing, and then contact the authorities about it. This is circumventing the authorities' legal limitations by moving the burden of responsibility to a company.


My friend, this was never about “data that causes harm to children”.

This is a carefully chosen stated reason for Apple to add this functionality to avoid user pushback. The actual uses will vary from country to country, but on the whole will not result in the protection of anyone other than the government or ruling party.

In countries with less civil freedom this will be exploited and used as a tool of oppression against minority political groups.


I think this has been established already, but nevertheless it's best to repeat it.

The on-device scanning capabilities will be expanded in the future, but it can be stopped if Apple cares about their users.

Right now it seems they don't care.

Forgot to mention that I'll not be recommending or purchasing any Apple devices until they halt this scanning idea.


> but it can be stopped if Apple cares about their users.

Apple cares about selling devices, maximizing revenue. Users are just something required to authorize transactions.

If Apple has to kowtow to the most oppressive regimes in the world to keep doing the above, you bet they will and probably already are. And since it is in the interest of both Apple + Gov't to do shady stuff in the dark, there is a good chance they can hide it from the public indefinitely.


No, they shouldn't scan data. Apple's role is not the police and they don't have the responsibility or moral imperative to scan for illegal imagery on their customers' phones. In fact it's the opposite. They should be in service to their customers and people's devices especially should be in service to its owner. On-device scanning violates this premise.

There are things you can use technology for to help combat CSAM, but this is certainly not an acceptable way to go about it. Creating tools to help report abuse and illegal imagery in their software, for example, could be useful. This is not.


>Apple's role is not the police and they don't have the responsibility or moral imperative to scan for illegal imagery on their customers' phones

What if the police come to Apple with a warrant? Should Apple have the responsibility to reveal the user's content to authorities?

Part of the motivation for this type of thing is that E2EE effectively makes the old system of warrants obsolete and hides this data permanently from authorities. Lots of people see that as a huge win. Lots of people see that as a huge loss.


> What if the police come to Apple with a warrant? Should Apple have the responsibility to reveal the user's content to authorities?

If they lack the capability to do so then they don't need to worry about it.

> Part of the motivation for this type of thing is that E2EE effectively makes the old system of warrants obsolete and hides this data permanently from authorities. Lots of people see that as a huge win. Lots of people see that as a huge loss.

The speculation that this enables E2EE is just that; speculation. It's also not a strong argument because Apple could implement E2EE without snooping people's photographs and they'd be in the same position of being unable to assist law enforcement.

If you have a system in place that intercepts all files before they're encrypted and uploaded to the cloud, then E2EE doesn't serve its purpose of ensuring privacy of people's data. It completely defeats the point.


>If they lack the capability to do so then they don't need to worry about it.

"They don't need to worry about it" may be enough for you, but it is not a satisfactory moral answer for a lot of people. It shouldn't be tough to understand that some people don't want to actively enable illegal activity.

>If you have a system in place that intercepts all files before they're encrypted and uploaded to the cloud, then E2EE doesn't serve its purpose of ensuring privacy of people's data. It completely defeats the point.

Which is the reason why Apple proposed running this on the device so that those files don't leave the device if they aren't flagged.


> "They don't need to worry about it" may be enough for you, but it is not a satisfactory moral answer for a lot of people. It shouldn't be tough to understand that some people don't want to actively enable illegal activity.

You're not "actively" enabling illegal activity. The consequence of protecting people's rights is that illegal and immoral activity can occur under those same protections. It shouldn't be tough to understand that people don't want their rights trampled on to enable surveillance.

> Which is the reason why Apple proposed running this on the device so that those files don't leave the device if they aren't flagged.

How does this solve the issue, exactly? The fact that it's on-device doesn't change the equation here.


>You're not "actively" enabling illegal activity.

I don't want to get into a semantic debate about the word "actively". Suffice it to say that some people don't want to work on a product or support a company that they know is enabling criminal behavior while also refusing to do anything to curtail that illegal behavior.

>It shouldn't be tough to understand that people don't want their rights trampled on to enable surveillance.

Apple recognizes this. Once again, this is why they are doing this on device and not in the cloud like other providers. They are trying to do the scanning in a way that invades privacy the least.

>How does this solve the issue, exactly? The fact that it's on-device doesn't change the equation here.

It changes the equation because it allows for everything leaving the device to be E2EE. One example is that this would stop anyone at Apple from snooping on your files. If the scanning was done on the server, Apple would need decrypted versions of the files which could be viewed by Apple employees.


> I don't want to get into a semantic debate about the word "actively". Suffice it to say that some people don't want to work on a product or support a company that they know is enabling criminal behavior while also refusing to do anything to curtail that illegal behavior.

I already gave one example of something they could do to help curtail illegal behavior without requiring an invasion of privacy. Some people also don't want to support a company that they know is enabling surveillance, which is what Apple is doing here.

>They are trying to do the scanning in a way that invades privacy the least.

A violation of your right to privacy is a violation no matter how small you might think it is. They don't need to invade our privacy at all.

> It changes the equation because it allows for everything leaving the device to be E2EE. One example is that this would stop anyone at Apple from snooping on your files. If the scanning was done on the server, Apple would need decrypted versions of the files which could be viewed by Apple employees.

Everything leaving the device isn't E2EE and this doesn't "allow" for everything leaving the device to be E2EE. Stop pushing this false dichotomy. Things can be E2EE without any invasion of privacy whatsoever and there are other options to help combat CSAM that aren't violations of a user's privacy. Stop pretending like this is the only option when it's clearly not.


If there is a significant dropoff in the number of CSA convictions because all the tech companies gave up the current method of CSAM scanning for another technique, the government will believe that those companies would be failing to protect children, because there would be some number of criminals they'd be letting go.

There's no practical way to test this because there's no incentive for any of those companies to change how they scan for CSAM. None of those companies are going to take the risk of implementing something that could enable future sexual abuse crimes to take place just to improve privacy. The system in place apparently works, but there appears to be little statistical data validating those techniques.

What I would want to know is what alternative method for reporting CSAM could be developed that would meet the current expectations of the law in the number of cases they successfully uncover while also satisfying the desire for absolute privacy under all circumstances. A system that has users reporting content themselves relies on the users acting to incriminate other people, and the criminals can simply improve their privacy by hiding the content out of public view. It also doesn't address the fact that the government doesn't want people storing CSAM on their phones at all, and if they happen to notice you're storing CSAM for any reason, they're going to arrest you for it.

I'm honestly not sure that it's possible for any company to implement true E2EE given the current state of the law. It sounds like a tradeoff had to be made between privacy and the ability for the law to be upheld.


> A system that has users reporting content themselves relies on the users acting to incriminate other people, and the criminals can simply improve their privacy by hiding the content out of public view.

Isn't this in itself a win for reducing distribution of CSAM? The content is already out of public view. You can't just go to some clear web website and find CSAM unless you're incredibly unlucky and all legal entities have systems in place to report it if it does end up on there. Companies like pornhub have fairly recently created systems to reduce the chance of CSAM ending up on their platform by requiring verification. There's certainly nothing stopping most CSAM from being distributed encrypted through official or illicit channels and the sad reality is there probably is significantly more that is than isn't.

Law enforcement has a big task on their hands here and I can empathize with how difficult it can be but you cannot sacrifice an entire population's rights to catch the relatively few bad actors. This erosion of rights was justified after 9/11 to implement the PATRIOT Act and people are trying to use CSAM as a justification to do so now. It's not good.

> I'm honestly not sure that it's possible for any company to implement true E2EE given the current state of the law. It sounds like a tradeoff had to be made between privacy and the ability for the law to be upheld.

There are already companies that offer E2EE, and there is no legal requirement to scan for CSAM in the US. I'll concede the point that Google, Microsoft, and Apple are going to be under much more scrutiny from law enforcement and the government than a small privacy-focused cloud storage provider, but the point stands that there is no legal reason they must implement this.


> Isn't this in itself a win for reducing distribution of CSAM? The content is already out of public view.

Making CSAM illegal isn't merely about preventing the material from being publicly visible. Having a ring of child abusers privately circulating CSAM outside of clearnet by using Tor or by exchanging thumb drives is just as damaging of an act to law enforcement agencies, because it means the market can still exist in a different environment. There will always be other ways to distribute CSAM, but what the law sees as within its reach in order to prevent the spread of CSAM to the greatest extent possible is to pressure companies that offer user data storage into increasing the amount of scanning for such material.

> you cannot sacrifice an entire population's rights to catch the relatively few bad actors.

Unfortunately this is a matter of public opinion, and there are many people out there that would argue instead that we need to give up some amount of privacy in order to better protect the human population.

> There are already companies that offer E2EE, and there is no legal requirement to scan for CSAM in the US.

What are the names of these companies? Are you sure there isn't a clause in their terms of service stating they will give up your data to law enforcement if requested, including the contents of encrypted files?

Even though there is no law compelling companies to scan for CSAM, the law is not going to let any of them store CSAM for an indefinite period of time. That would defeat the entire purpose of outlawing CSAM to begin with. Regardless of what's legally required of them, it's in the companies' best interests to ensure that the data stays off their servers. No matter what moral stance they take in deciding to implement CSAM scanning, they're not going to take the risk of being found criminally liable for the data they're storing.


> Making CSAM illegal isn't merely about preventing the material from being publicly visible. Having a ring of child abusers privately circulating CSAM outside of clearnet by using Tor or by exchanging thumb drives is just as damaging of an act to law enforcement agencies, because it means the market can still exist in a different environment. There will always be other ways to distribute CSAM, but what the law sees as within its reach in order to prevent the spread of CSAM to the greatest extent possible is to pressure companies that offer user data storage into increasing the amount of scanning for such material.

How do you know what "the law" sees as being within its reach? Law enforcement and rights are always going to be at odds with each other.

> Unfortunately this is a matter of public opinion, and there are many people out there that would argue instead that we need to give up some amount of privacy in order to better protect the human population.

I don't really care so much about this hypothetical public opinion. Refrain from using this unknowable nebula of consensus to argue your points for you. Even if there is massive consensus that would not change my argument that it is wrong.

If a company has no means of which to know what is stored on their servers, they have no reason to presume that it is illicit material, even if it in fact is. Removing those liabilities by restricting what Apple can know has been how Apple's been operating for quite some time. Even this implementation could be argued that this is what Apple is attempting to do. The issue is that ultimately, Apple is installing software on your device to work only as adversary to its user by violating their privacy.

> What are the names of these companies? Are you sure there isn't a clause in their terms of service stating they will give up your data to law enforcement if requested, including the contents of encrypted files?

By definition if the third party can decrypt your files it is not end to end encrypted, the third party in this case being Apple. Many files upload to iCloud already are encrypted but since Apple also has access to the key it can decrypt and examine the contents of the files on its servers. With E2EE the only person with access to the private key would be yourself. They could certainly hand over the encrypted files and if law enforcement gained access to your private key they could then decrypt those files, but (hypothetically) not until after they've obtained the private key.


>I already gave one example of something they could do to help curtail illegal behavior without requiring an invasion of privacy.

You said they should add a reporting mechanism, but that won't stop anything, because if a person already has knowledge that this content is being shared, they could just report it to the authorities. There is no reason to report it to Apple. Like you said, "Apple's role is not the police".

>Some people also don't want to support a company that they know is enabling surveillance, which is what Apple is doing here.

I acknowledged that in my first comment. Some people feel one way. Some people feel another. Apple is acknowledging both sides of this while you are only acknowledging one.

>A violation of your right to privacy is a violation no matter how small you might think it is. They don't need to invade our privacy at all.

This type of black and white ideology can't function in the real world. All rights have limits and the general rule is that your rights end when they start infringing on my rights. Your right to privacy does not supersede my right to not be a victim of a crime.

>Everything leaving the device isn't E2EE and this doesn't "allow" for everything leaving the device to be E2EE. Stop pushing this false dichotomy.

Everything I have seen from people in the know has suggested that this is a step on the path to Apple enabling E2EE encryption for iCloud.

>Things can be E2EE without any invasion of privacy whatsoever and there are other options to help combat CSAM that aren't violations of a user's privacy. Stop pretending like this is the only option when it's clearly not.

Do you have an actual suggestion that you think will cut down on CSAM beyond providing a redundant reporting system?


> You said they should add a reporting mechanism, but that won't stop anything, because if a person already has knowledge that this content is being shared, they could just report it to the authorities. There is no reason to report it to Apple. Like you said, "Apple's role is not the police".

Recognition isn't something people would always be able to immediately do with CSAM imagery so Apple, which has already created a tool to recognize it, can assist with that. It can also create tools to reduce the overhead of reporting. If you make it easier to report, more people will do it. You're assuming that anyone that ever encounters CSAM would go out of their way to report it, which simply isn't true.

> This type of black and white ideology can't function in the real world. All rights have limits and the general rule is that your rights end when they start infringing on my rights. Your right to privacy does not supersede my right to not be a victim of a crime.

My right to privacy is not an infringement of your rights so this argument has no bearing in reality. Law enforcement requires probable cause to get permission to surveil the population and there's been countless cases thrown out because of this violation, which, as it turns out, was even tested in the context of technology interfacing with NCMEC.

> Everything I have seen from people in the know has suggested that this is a step on the path to Apple enabling E2EE encryption for iCloud.

Those people are also pushing the same false dichotomy that you are. There is no technical reason that this is required to enable E2EE for iCloud, it's purely speculation as to why Apple would roll this surveillance tool out, as a compromise to having a fully encrypted system.

> Do you have an actual suggestion that you think will cut down on CSAM beyond providing a redundant reporting system?

The reporting system isn't redundant, but beyond that, the impetus isn't on the person whose rights are being violated to offer solutions to replace that violation.


> If you make it easier to report, more people will do it.

The law isn't going to rely on people that might or might not report CSAM if they see it. Average users have no obligation to report CSAM, unlike the law and the companies that store user data. If the law finds CSAM themselves, they will always report it, and in their eyes that only improves the chances of a successful conviction.

> My right to privacy is not an infringement of your rights so this argument has no bearing in reality.

The law would argue that a right to privacy that includes the ability to store CSAM privately is infringing on the rights of others, because allowing CSAM to be consumed and distributed creates a market that incentivizes the further abuse of children. CSAM cannot be produced without CSA taking place, which makes it a part of that cycle.

> There is no technical reason that this is required to enable E2EE for iCloud, it's purely speculation as to why Apple would roll this surveillance tool out, as a compromise to having a fully encrypted system.

That is Apple's own fault for failing to precisely explain their reasoning for designing the system the way they did, and the confusion was entirely preventable.


The legislative branch might try to do something but it’d inevitably be tested in our judicial system. We can only speculate how the courts might rule in such cases but in just a philosophical sense there is no argument here.


Here is an opinion from the district court of Northern California. Among other things, it mentions that Microsoft was not acting as a government agency to conduct CSAM scanning, and that the Fourth Amendment does not apply to private entities. It might be worth reading through for some of the other people discussing this issue, since it happened fairly recently (December of 2020).

https://casetext.com/case/united-states-v-bohannon-15


I don't doubt that a case involving scanning for CSAM on a company's own property was found not to be a violation of someone's 4th Amendment rights. One of the cases cited is United States v. Ackerman, which did involve NCMEC opening email attachments containing CSAM other than the one that AOL scanned, which was a violation of his 4th Amendment rights.

The reason this is murky territory is because it's on-device, and we have precedent of there being an expectation of privacy on our cell phones via Carpenter v. United States, which found that in a very narrow circumstance the third-party doctrine did not apply. However, other cases have not been tested and there are many factors regarding Apple's implementation that may or may not protect it in an actual case. We don't really have a clear picture of how and when the third-party doctrine applies in the case of our cell phones, which have been determined to be nearly indispensable in modern society.

But there's a difference between the legality of something and the morality of it, and while the former is up in the air, the latter really isn't in this case, at least for a lot of the people arguing for or against this system. In my opinion, it does not matter that Apple's implementation only ever activates if there is intent to upload to their servers. The fact that it is performed on device is in itself the most problematic aspect. There are no acceptable justifications to be had here. If Apple wants to scan their own servers, that is their prerogative. The same cannot be said about my phone.


>Recognition isn't something people would always be able to immediately do with CSAM imagery so Apple, which has already created a tool to recognize it, can assist with that.

You said you didn't want Apple to be the police, but you now want them to be the judge of what is and isn't CSAM?

>It can also create tools to reduce the overhead of reporting. If you make it easier to report, more people will do it. You're assuming that anyone that ever encounters CSAM would go out of their way to report it, which simply isn't true.

I don't know why you are assuming that potential reporters knowing about CSAM but not reporting it is anywhere close to as common as CSAM being unknown and therefore obviously unreported.

>My right to privacy is not an infringement of your rights so this argument has no bearing in reality. Law enforcement requires probable cause to get permission to surveil the population and there's been countless cases thrown out because of this violation, which, as it turns out, was even tested in the context of technology interfacing with NCMEC.

As the other reply stated, certain content does infringe on other people's rights. Also Apple isn't law enforcement. They don't need probable cause. One of the primary and unmentioned motivators here is that they don't want CSAM ending up on their servers and opening themselves up to legal action.

>Those people are also pushing the same false dichotomy that you are. There is no technical reason that this is required to enable E2EE for iCloud, it's purely speculation as to why Apple would roll this surveillance tool out, as a compromise to having a fully encrypted system.

It is a technical requirement once you accept the moral requirement that Apple doesn't want to enable the sharing of CSAM. Once again, you are ignoring that some people think enabling (or at least not curtailing) the sharing of CSAM is a moral failing that can't be accepted.


It's also illegal to operate a meth lab in your basement, but the government would still be required to have probable cause to grant a warrant to look in your house without your permission. You don't start with the premise that everyone is doing illegal things and must prove otherwise, which is what this system assumes.

Your argument of it being a moral requirement is dubious. If you really wanted to stop CSAM why not we all live in glass apartments like in Zamyatin's We? To what extent would invading your privacy be acceptable to you if it means we could stop some or even all CSAM distribution and production? Once you decide that it's a moral imperative then you accept that any erosion of privacy in the interest of saving the children isn't off the table.


They will absolutely have to worry about it when Apple is declared a safe haven for child abusers and the government declares they have criminal liability.

In my mind, what might have actually transpired for Apple to announce this change was a less severe instance of such governmental or legal pressure on Apple to scan as much data as the other major tech companies. Last year Apple only submitted a couple hundred reports to the NCMEC, while Facebook submitted 20 million. Looking at those numbers makes it sound like Apple was a nail sticking out that needed to be hammered down.


To be honest, no. The police should come to you with a warrant. I understand that in exceptional cases of advanced crime an elaborate case should be built, but for the average joe these collusive investigation tactics are overkill and, quite frankly, a form of power abuse.


>The police should come to you with a warrant.

How does that change the warrant's effectiveness? Should you be obligated to unlock your phone for authorities? Otherwise the warrant still does nothing, because the device's encryption can't be broken without potentially forcing a person to incriminate themselves. This is fundamentally the same problem: encryption breaks how warrants used to behave.


> I believe that they should scan user data in some capacity, because this is about data that causes harm to children.

Should GM use OnStar to record every conversation you have in your car and use AI to flag conversations that relate to child abuse? After all, it involves children.

And if down the road law enforcement decides to expand what gets flagged, you shouldn’t care if you aren’t breaking any laws, right?

This is a horrible idea and it's a matter of when, not if, it is abused.


Your devices should not betray you, period no exceptions.

I dont care if it will save billions of lives, your devices should not betray you.


I don't believe they should scan user data. User privacy is of the utmost importance (even more than the slim risk of doing whatever). Do you want companies to have free rein to scan anything you use for "unauthorized" activities?


> I believe that they should scan user data in some capacity, because this is about data that causes harm to children.

The legal system already provides a means for invading people's privacy given certain criteria, whether by scanning their phones, physically searching their residences and offices, or whatever. The criterion is "probable cause" and the search is launched by a judge signing a warrant that the probable cause exists. That is how the Framers wrote it into the US Constitution (see Amendment 4), because they had dealt with too many bogus searches done by the Colonial governments and wanted to put a stop to that. Things have not really changed since then.

"Think of the children!!!" is a traditional thought-stopper, but really, if there is substantive reason to search someone's phone, just convince a judge to sign a warrant and everything will be fine. Searching people's stuff when they are not under any concrete suspicion is part of why we had a revolution in this country in the first place. I.e., why does Apple hate America, as the memes like to put it?


> I believe that they should scan user data in some capacity, because this is about data that causes harm to children.

Google, Facebook, and others are already scanning the images when uploaded to them. We should not be surprised that the images we upload are being scanned. Sometimes for CSAM, sometimes to train ML, and sometimes for other things.

Apple scanning images that were uploaded to their servers would just be another company doing the same thing.

If I understand things correctly, these companies even have some legal obligations to do some of this.

> However, I believe that they should not run the scan on the device, because that carries significant drawbacks for personal privacy.

This is about where Apple is doing the scanning. Doing it in iCloud is on them. Doing it on our phones is an invasion of privacy. And it builds a system that can be used for other things. It opens a door, as many others have pointed out.

I wonder if Apple did all of this to scale the implementation. So it wouldn't have to run on their servers. They wouldn't have to maintain and power them. Poor implementation if that's the case.


The children are used as a manipulation tool - if you talk about CSAM, your videos get demonetised, your reach is reduced, and people don't want to be associated with it in any way or be attacked for seeming to oppose the idea of protecting children. If they had said from the outset that they were going to scan phones for any illegal content, the pushback would have been much bigger.

So now they have a wedge that they can gradually use to expand what is going to be scanned and when.

This is a classic power move straight from 48 laws of power.


Even searching for case law and public policy on this subject causes a chilling "WARNING: CHILD ABUSE IMAGERY IS ILLEGAL" box to display on Google.

Speaking out enough on this gets you called a pedo.

I will not go so far as to claim that the people in favor of this are intentionally using children to shield a universal surveillance mechanism from criticism... but if you wanted to do exactly that, hiding behind the spectre of child abuse is exactly how you would do it.

There are some hard questions we should be asking. Facebook claims to have reported some 20 million instances of child porn last year, but there were only a couple of prosecutions. Are the matches fake? Is the goal really just to build a huge database of kompromat? If these images aren't a concern, why is only an infinitesimal proportion of cases prosecuted?


> this is about data that causes harm to children.

The data is not what causes the harm. It is the method used to produce (some of) the data that causes the harm.

And the problem is that looking for the data in order to find the people causing the harm runs afoul of the converse fallacy. It's the same fallacy that causes people to think that because most terrorists (that they know of) are Muslims, that most Muslims must be terrorists. Most child abusers have photos of naked children on their phones, therefore most people who have photos of naked children on their phones must be child abusers. It's false in both cases. A friend of mine, for example, has photos of her naked child on her phone that she sent to the doctor to help diagnose a rash.
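A toy calculation makes that asymmetry concrete; every number below is invented purely for illustration.

    # Converse fallacy, numerically: P(abuser | has such photos) can be small
    # even when P(has such photos | abuser) is large, because abusers are rare.
    base_rate = 0.001               # fraction of users who are abusers (invented)
    p_photos_given_abuser = 0.9     # invented
    p_photos_given_innocent = 0.02  # rash photos for the doctor, bath pics, etc. (invented)

    p_photos = (p_photos_given_abuser * base_rate
                + p_photos_given_innocent * (1 - base_rate))
    p_abuser_given_photos = p_photos_given_abuser * base_rate / p_photos
    print(f"P(abuser | photos) = {p_abuser_given_photos:.3f}")  # about 0.04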


> should Apple scan user data or should they not?

I guess it depends on whether the system is 100% safe and cannot be exploited.

Only iCloud photos are scanned, and other cloud storage providers already do this and have been scanning their users' photos for a long time. I haven't heard anyone complain when Google started scanning their cloud photos, which is kind of… interesting.


> If not, what is going to convince them to stop at this point?

Maybe if masses of people go into Apple Stores and save photos on demo phones that collide with the files they're looking for. Do those phones get scanned? Will Apple ban their own App Store accounts?


I wonder when they'll add this feature to macOS.


[deleted]


How could a picture be in a CSAM database if you just took it with your camera?


[flagged]


I predict no reversal. No one cares about this besides privacy zealots who complain about literally everything.

"Real" people actually care about getting to see their iProstitute naked though.


Worse than that - there are excellent reasons to oppose this system, but the debate is centered around misunderstandings and false claims rather than why the system is wrong. That’s not going to go well.


always nice to see your attractive friends naked


If people are only talking to you because you are paying them money to talk to you, they aren't your friends.


A nice quip; I first heard it about fraternities and university Greek life.

A) Many OnlyFans accounts are free to subscribe to

B) Many OnlyFans accounts are free to direct message

C) You as a consumer don't have your OnlyFans account tied to your identity from the performer's perspective, so you can do both if you want. Be a friend and a consumer without tying the two. No different than seeing your friend on a billboard or in a magazine. But you can also tie it to your identity if you want; not all people treat their friends only as customers once they subscribe. If you have a toxic relationship then you need to address that on your own.

D) Much of OnlyFans commerce comes from its own economy of creators. Many subscribe to each other. Many also cross promote.


Now let's create one for the hash matching that Google, Microsoft, and other cloud providers use.

If your problem with Apple's proposal is the fact they do hash matching (rather than the system is run on your device), why is the criticism reserved for Apple instead of being directed at everyone who does hash matching to find CSAM? It seems like a lot of the backlash is because Apple is being open and honest about this process. I worry that this will teach companies that they need to hide this type of functionality in the future.


The backlash is because Apple differentiates on privacy. Users are understandably upset that Apple went back on their word and no longer puts user privacy first.


This _is_ a user privacy feature. It's clearly a precursor to E2EE iCloud services they weren't able to provide before.


I fail to see how scanning my photos on my device is a win for privacy. What’s the point of E2E encryption if there’s access to the device? E2EE is meaningless in the presence of this backdoor.


The CSAM scanning system Apple describes prevents[0] the sort of abuse that has historically been used as an argument against convenient strong encryption. By casting a very specific net for the most egregious abuse, they defuse the strongest political argument against E2EE while remaining compatible with a strong E2EE system. Read the docs upthread - they go to lengths to cryptographically enforce the topline privacy guarantees. Client-side scanning of iCloud-destined photos is a net-zero change, since they're already doing this server side. Scanning plus future E2EE is a strict improvement in privacy for iCloud users.

If you're concerned about ways Apple might access on-device contents in the future ... there are myriad ways for your OS vendor to circumvent the privacy protections offered by your OS ...

[0] or is at least as comprehensive a mitigation as possible on systems with access to cleartext
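Apple's published design describes private set intersection plus a threshold secret-sharing scheme, so that nothing can be decrypted until roughly 30 matches accumulate (the threshold mentioned upthread). As a rough illustration of the threshold idea only, and not of Apple's actual construction, here is toy Shamir secret sharing in which any 30 shares recover a key and 29 reveal nothing:

    import random

    P = 2**127 - 1  # a Mersenne prime; our toy finite field is GF(P)

    def make_shares(secret, threshold, n):
        """Split `secret` into n shares; any `threshold` of them recover it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 over GF(P)."""
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    # Pretend each flagged photo releases one share of the key protecting the
    # "safety vouchers"; below the threshold, the key stays unrecoverable.
    key = random.randrange(P)
    shares = make_shares(key, threshold=30, n=100)
    assert recover(random.sample(shares, 30)) == key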


Does that mean we shouldn't hold companies accountable for violating privacy as long as they don't market themselves as being pro-privacy?


Honestly, if we want a higher baseline of privacy (or environmental protection, or worker rights, etc.) we need laws not public pressure.


I think people are harsher about the intent than about the hashing algorithm being used (though it does exacerbate the problem)?


The intent is supposed to be the same as with the usage of PhotoDNA. If anything, this is proof that security by obscurity is an effective strategy not only for hiding activity in a black box, but also for preventing potential protesters from ever becoming aware that there was a problem to begin with.

I completely agree that the magnitude of the backlash against Apple is not fair in comparison to the backlash against every other tech company using PhotoDNA, and it feels hypocritical for everyone to have given them a pass for over a decade despite the principle and intent remaining the same.


Having a device you own scan your files and report you is fundamentally different from having a server which you submitted plaintext files to search you. I agree both are bad, but Apple's scheme erodes a critical boundary.

It would be like if the bank searched your safe deposit box stored in their vault in the course of their business, but then later announced that to reduce the need to search your vault they'd be stationing a spy in your home to inspect the box before you leave for the bank. The spy will secretly report back to the bank if he sees something he suspects, and then the bank will search.

With these collision examples, you now learn that the spy will also be tripping balls.


Yes, that is the reason behind my comment. The linked site is not criticizing the intent. It is criticizing the hashing algorithm. Therefore this post either shouldn't be part of the criticism of Apple's plan or it should be part of the overall criticism of the tech industry including Google and Microsoft.

I don't think the hashing exacerbates the problem because the alternative to Apple's proposal seems like it would be to run the same hash matching on the server like those other companies. That doesn't fix the hash collision problem.



