
How would they manually vet the matches, except by looking at the matched pictures?

And here's the real question: what's to stop them from using this on, say, political memes instead of CSAM?



They have access to a "visual derivative" (which I suppose is their way of saying "thumbnail"), but it probably wouldn't help if the adversarial example is ordinary porn. That said, once the authorities are contacted, they will have to work to obtain the full image, because if all they have is a thumbnail and a voucher, the evidence would probably be thrown out in court.

As for using it for things other than CSAM: for one, Apple would know, because the thumbnails would show political memes, so they'd have to be in on the conspiracy, and they probably don't want that liability. Furthermore, the hashes are supposed to be auditable: a third party could check that they are what they claim to be, a court could order such an audit, and it would be suspicious for Apple to refuse. They wouldn't want to include anything that could piss off any sufficiently powerful government or, say, the EU, because those are likely to figure it out. And if they give different hash lists to different citizens, that will also be obvious.



