Hacker News
Inverting PhotoDNA (anishathalye.com)
131 points by anishathalye on Dec 21, 2021 | 25 comments



A bit of context: Microsoft developed PhotoDNA to identify illegal images like CSAM -- NCMEC maintains a database of PhotoDNA signatures, and many companies use this service to identify and remove these images.

Microsoft claims:

> A PhotoDNA hash is not reversible, and therefore cannot be used to recreate an image.

This project shows that this isn't quite true: machine learning can do a pretty good job of reproducing a thumbnail-quality image from a PhotoDNA signature.

There has been some discussion in the past on HN about PhotoDNA: https://news.ycombinator.com/item?id=28378254. It has been claimed that PhotoDNA is reversible, but there was no public demonstration as far as I know.


Interesting work. It seems like these lossy photo hashes are more obfuscation and resolution reduction than anything secure.

In the photo of the child, facial features aren't reconstructable. Is it possible that PhotoDNA hashes might be sufficiently potato to not infringe on privacy or meaningfully (if not legally) count as explicit/illicit content?

Are secure lossy photo hashes possible? I feel like it'd require indistinguishability obfuscation (iO), where distinguishing a hash from zillions of random noise preimages would be guaranteed intractable. But I'm not sure if you could guarantee all the preimages matching the hash aren't all still recognizable.

Maybe you could set up some kind of adversarial system using your work as the adversary? (e.g. Set of transformed input images --[hashing neural net]-->hashes, then hashes--[Ribosome]-->preimage, then penalize hashing neural net (and reward Ribosome) based on distance of input image and preimage, along with rewarding the hashing net for distance from its hashes of other images + closeness of its hashes of similar transformed images.)


The irreversibility claim is important because otherwise distributing PhotoDNA signatures of CSAM would itself be distributing CSAM.


That seems a little bit extreme unless you mean lossless reversibility, which, per information theory, a process like this can never offer. Obviously the best you get from this process is a smudge of colors that will never pass the "I know it when I see it" test.


Maybe, but is blurry CSAM still CSAM? If not, then they could just distribute the blurry thumbnails instead of this hash.


It's not, if it can't be distinguished from the same blur of something completely different.


Couldn't you then pipe the lossy thumbnail into something like nvidia's ML tool that creates "real" images from sketches?


Of course you can. The problem is that you almost certainly won't end up with the original image unless the ML tool was trained on that original image (and even then it won't necessarily appear as an output). There are so many different images it could create from a smudge that there's no reason the original one should be preferred.


On a side note, I find it kind of funny how, when using the model trained on Reddit, some of the outputs contain a quite readable "The image you are requesting does not exist or is no longer available" text, and a faint "imgur.com" watermark in the lower left corner.

For the former, I guess when training the original model, a bunch of the Reddit images weren't available at crawl time. Wouldn't it make sense to somehow weed those out from the data set before the training?


Yes, that would have made sense, but I was being lazy :)

I also didn't notice this issue until after I had trained the model and tried inverting hashes (didn't want to look through the NSFW dataset myself). And I thought it was amusing to leave it as-is (it demonstrates a point that issues from the dataset can be carried over to the results).

For the "image not available" ones, removing those in an automated way probably would be straightforward (could have used perceptual hashing for this task!).
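(A minimal sketch of that idea, with a toy average-hash standing in for a real perceptual hash; the pixel lists and thresholds here are made up for illustration. A real pipeline would first resize images to 8x8 grayscale with something like Pillow.)

```python
# Toy average-hash ("aHash") dedupe sketch: drop any image whose hash is
# within a small Hamming distance of a known placeholder's hash.
# Images are modeled as flat lists of grayscale values for simplicity.

def ahash(pixels):
    """One bit per pixel: is it brighter than the mean?"""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

placeholder = [200] * 32 + [50] * 32      # the known "not available" image
jpeg_recompress = [201] * 32 + [49] * 32  # same image, slightly re-encoded
real_photo = [10, 240] * 32               # some unrelated image

ph = ahash(placeholder)
dataset = [jpeg_recompress, real_photo]
kept = [img for img in dataset if hamming(ahash(img), ph) > 4]
print(len(kept))  # → 1: the recompressed placeholder is filtered out
```

The recompressed copy hashes identically to the placeholder despite the changed pixel values, which is exactly the robustness property that makes perceptual hashes useful for this kind of cleanup.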

For removing watermarks, there's some neat work from Dekel et al. in CVPR 2017: https://watermark-cvpr17.github.io/


I'd say that the project confirms that PhotoDNA is not reversible.

This project generates discolored, deformed thumbnails with maybe 12 pixels of resolution, and that's after synthesized/imaginary data has been added to them. Without priming by looking at the ground-truth image, any attempt to guess what was in the images is just a Rorschach test.


This is the first paper on the matter. I imagine the techniques will improve.


I'm not a mathematician, but isn't there a direct correlation between reversibility and the unlikelihood of collisions? That is, if you have few to no collisions in the entire dataset of human-created images, it must be technically possible to reverse the hash into a reasonable thumbnail?


Hashes aren't strictly reversible, so "reversing" a hash usually means finding data that hashes to a given value. E.g. in the case of PhotoDNA there are 256^144 possible hash values (the hash is 144 bytes long), so you can't avoid collisions in any set with more than 256^144 elements. Reversing PhotoDNA therefore means finding an element that hashes to a given value, but it doesn't mean that element is the real-world image the hash was originally computed from.

As a purely theoretical matter, you can reverse any hash by brute force; it just may not be computationally feasible. So let's think about which is easier: reversing a hash with few collisions, or one with many. Take an edge case: a hash function that maps everything to a single value (not a very good hash function, but bear with me). That's the easiest possible function to reverse, since any input is a right answer.

As you reduce the number of collisions, the task gets harder. You can divide images into disjoint sets, one per hash value, such that two images hash to the same value exactly when they're in the same set. With fewer collisions each set gets smaller, but there are more of them, so for a given hash value you have to check more candidates (one per set) to figure out which set matches. That's assuming you could even keep a list of those sets and their members, which you can't, but it illustrates the correlation: a hash is actually easier to reverse when it has more collisions. It's counter-intuitive because this is a different kind of reversing than what's usually meant by "reversing a function," which requires the function to be injective.

But: with fewer collisions, you have a higher likelihood that the result of reversing is what someone actually used as input to the hash function (each set is smaller, so under a uniform distribution the probability is 1/n, where n is the number of elements in the set hashing to that value).

Anyway, the sets I wrote about here are elements of a quotient set defined by the equivalence relation of hashing to the same value. You can read more on Wikipedia, if my description was a bit confusing: <https://en.wikipedia.org/wiki/Equivalence_class#quotient_set>
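(A toy illustration of the point above: the more collisions a hash has, the easier it is to "reverse" in the find-any-preimage sense, because more inputs count as a correct answer. These modulo "hashes" and the secret value are made up; nothing here resembles PhotoDNA's real construction.)

```python
# A coarse hash (many collisions) vs. a fine hash (few collisions),
# both "reversed" by the dumbest possible brute-force search.

def coarse_hash(n):
    return n % 16          # only 16 possible outputs: tons of collisions

def fine_hash(n):
    return n % (2 ** 20)   # ~1M possible outputs: far fewer collisions

def preimage_search(h, target):
    """Brute force: try 0, 1, 2, ... until something hashes to target."""
    n = 0
    while h(n) != target:
        n += 1
    return n  # some preimage, not necessarily the original input

secret = 123456789
print(preimage_search(coarse_hash, coarse_hash(secret)))  # → 5 (6 guesses)
print(preimage_search(fine_hash, fine_hash(secret)))      # → 773397 (~770k guesses)
```

Note that neither search recovers `secret` itself; each just finds some member of its equivalence class. The coarse hash simply gets there far sooner.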


The requirement that changing the image a little bit changes the hash a little bit makes the image space smooth and more suitable for machine learning.


Exactly. Unlike other hashes, this PhotoDNA "hash" has to be a somewhat continuous function of its inputs.
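(To make that concrete, here's a stdlib-only sketch contrasting a toy perceptual-style hash with a cryptographic one. The average-hash and the fake 8x8 "image" are stand-ins for illustration; PhotoDNA's actual algorithm is not public.)

```python
# A tiny perturbation barely moves a perceptual-style hash,
# but completely scrambles a cryptographic one.
import hashlib

def ahash_bits(pixels):
    """Toy average-hash: one bit per pixel, thresholded at the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

img = list(range(64))   # a fake 8x8 gradient "image"
img2 = img.copy()
img2[31] += 3           # nudge one pixel slightly

# Perceptual-style hash: Hamming distance stays tiny.
d_perceptual = sum(a != b for a, b in zip(ahash_bits(img), ahash_bits(img2)))

# Cryptographic hash: roughly half of all 256 bits flip (avalanche effect).
h1 = int(hashlib.sha256(bytes(img)).hexdigest(), 16)
h2 = int(hashlib.sha256(bytes(img2)).hexdigest(), 16)
d_crypto = bin(h1 ^ h2).count("1")

print(d_perceptual, d_crypto)  # perceptual distance is 1; crypto distance lands near 128
```

That smoothness is precisely what gives a learned inverter like Ribosome a gradient to follow.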


Seems like false advertising to even call it a "hash" at this point? If meaningful data can be regained, it ain't a hash.


Yes and no. If you are thinking about cryptographically-secure hashes, then you're correct. However, not all hashes are intended to be cryptographically secure; this particular kind of hash is called a perceptual hash. Perceptual hashes are specifically intended to be able to identify slightly-modified images; and for that purpose we can't have all of the cryptographic security of, say, SHA-256.

The third type of hashing is the dirt-simple hashing schemes you see used in hash tables; although even there denial-of-service resistance is still a desired property for public-facing web services.


Irreversibility is not what defines a general hash function. It’s a critical characteristic for cryptographic hashes, but I seriously doubt anyone knowledgeable ever described PhotoDNA as a cryptographic hash function.


Yea, a "hash" should be maximally non-continuous: Minimal input changes should change the output as much as big input changes.


A cryptographic hash definitely should, but there are many more use cases for hashes.

You want SHA to be maximally non-continuous, but you certainly want the opposite for ssdeep for instance.


I wonder if, with a couple million passwords and their salted hashes, we can reconstruct something similar to the original password and reduce the search space somewhat.

I know it should not be possible, but, still, I’d love to play with that kind of dataset.


No, because password hash functions are non-continuous, i.e. the output changes a lot even if the input changes minimally. This is different from PhotoDNA, which needs to be robust against small changes in input.
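(A minimal stdlib sketch of why that itch won't pay off; the passwords and salt here are made up. Even a one-character change to the password yields an unrelated digest, so there's no smooth structure for a model to learn, unlike with a perceptual hash.)

```python
import hashlib

salt = b"example-per-user-salt"  # made-up salt, purely for illustration

def salted_hash(pw: str) -> bytes:
    # Real systems use bcrypt/scrypt/argon2; plain SHA-256 keeps the demo short.
    return hashlib.sha256(salt + pw.encode()).digest()

a = int.from_bytes(salted_hash("hunter2"), "big")
b = int.from_bytes(salted_hash("hunter3"), "big")
diff_bits = bin(a ^ b).count("1")
print(diff_bits)  # roughly half of the 256 bits differ for a one-character change
```

With no correlation between nearby passwords and nearby hashes, a model trained on (hash, password) pairs can at best memorize its training set; it can't interpolate the way Ribosome does for PhotoDNA.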


I know. It should not be possible, but sometimes I get an itch to check if something that wasn't supposed to be really isn't.


Wow... People really have issues with the way I pass my time...



