
Hash collisions happen by design in a perceptual hash; it's supposed to give equal hashes for small changes, after all.

Something I find interesting is a necessary consequence of the property that small edits leave the hash unchanged. We can show that this property is impossible to achieve absolutely; in other words, there must exist an image such that changing a single pixel changes the hash.

Proof: Start with two images, A and B, of equal dimensions and with different perceptual hashes h(A) and h(B). Transform one pixel of A into the corresponding pixel of B and recompute h(A). Repeat; since A eventually becomes B, at some point h(A) = h(B), and this is guaranteed to happen at or before A = B. At the first step where the hash changes, A and the previous version of A are one pixel apart but have different hashes. QED
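The argument above can be sketched in a few lines, using a toy "average hash" (one bit per pixel, thresholded against the image mean) standing in for a real perceptual hash; the images and helper names are illustrative, not any actual system's scheme.

```python
def ahash(img):
    """Toy perceptual hash: one bit per pixel, set if above the image mean."""
    mean = sum(img) / len(img)
    return tuple(int(p > mean) for p in img)

def find_boundary_pair(A, B):
    """Walk A toward B one pixel at a time; return the first pair of
    images one pixel apart whose hashes differ."""
    assert ahash(A) != ahash(B)
    cur = list(A)
    for i in range(len(A)):
        prev = list(cur)
        cur[i] = B[i]                  # change exactly one pixel
        if ahash(cur) != ahash(prev):
            return prev, cur
    # Unreachable: the hashes differ, so some single-pixel step must flip it.

# Two 2x2 grayscale images (flattened) with different hashes.
A = [10, 10, 200, 200]
B = [200, 200, 10, 10]
prev, cur = find_boundary_pair(A, B)
```

The returned `prev` and `cur` differ in exactly one pixel yet hash differently, which is the pair the proof guarantees must exist.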

We can also ATTEMPT to create an image A whose hash stays equal to h(A_initial) but which is visually similar to a target image B. Again, start with A and B, different images with the same dimensions. Transform a random pixel of A towards the corresponding pixel of B, but discard the change if h(A) departs from h(A_initial). Since we have so many degrees of freedom for each edit (every channel of every pixel), and the perceptual-hash invariant works in our favor, it may be possible to maneuver A close enough to B to fool a person while keeping h(A) = h(A_initial).
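A minimal sketch of that greedy search, again with the toy average hash standing in for a real perceptual hash; the images, step size, and iteration count here are all illustrative assumptions, and a real hash would be far harder to hold fixed.

```python
import random

def ahash(img):
    """Toy perceptual hash: one bit per pixel, set if above the image mean."""
    mean = sum(img) / len(img)
    return tuple(int(p > mean) for p in img)

def nudge_towards(A, B, target_hash, steps=10000, rng=random.Random(0)):
    """Randomly move pixels of A toward B, discarding any change
    that would alter the hash away from target_hash."""
    A = list(A)
    for _ in range(steps):
        i = rng.randrange(len(A))
        if A[i] == B[i]:
            continue
        old = A[i]
        A[i] += 1 if B[i] > A[i] else -1   # small step toward B
        if ahash(A) != target_hash:
            A[i] = old                      # hash moved: discard the edit
    return A

A = [10, 10, 200, 200]      # image whose hash we want to preserve
B = [150, 40, 180, 20]      # visual target, with a different hash
result = nudge_towards(A, B, ahash(A))
```

On this toy example the search ends strictly closer to B while `ahash(result)` still equals `ahash(A)`; whether this scales to fooling a human on real images and real hashes is exactly the open question raised above.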

If this is possible, one could transform a given CSAM image into a harmless meme without changing the hash, spread the crafted image, and get tons of iCloud accounts flagged.


The best part of this is that we could all be flagged in a database for things like this and not even realize it. Who knows what kind of review process the governments are actually keeping around these systems.


