Fawkes: Image “Cloaking” for Personal Privacy (uchicago.edu)
498 points by MindGods 3 months ago | 121 comments



I was immediately curious how it would protect against image compression and/or other de-noising of these protective pixel changes. Their paper does address this question, for those who are interested:

    Even image compression cannot defeat our cloak.
    We use progressive JPEG [57], reportedly used
    by Facebook and Twitter, to compress the images
    in our dataset. The image quality, as standard
    by Independent JPEG Group [1], ranges from 5 to
    95 (lower value = higher compression). As shown
    in Figure 15, image compression decreases the
    protection success rate, but more significantly
    degrades normal classification accuracy.
I still wonder how it works, but it works.

p.s. Mobile friendly copy-paste: "Even image compression cannot defeat our cloak. We use progressive JPEG [57], reportedly used by Facebook and Twitter, to compress the images in our dataset. The image quality, as standard by Independent JPEG Group [1], ranges from 5 to 95 (lower value = higher compression). As shown in Figure 15, image compression decreases the protection success rate, but more significantly degrades normal classification accuracy."
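Their compression experiment is easy to reproduce in miniature. A sketch with Pillow and NumPy (the gradient-plus-noise array is just a stand-in for a real cloaked photo, and `jpeg_roundtrip` is a helper name I made up, not anything from the paper):

```python
# Sketch of the robustness check described in the quoted passage: re-compress
# an image as progressive JPEG at decreasing quality and see how much the
# pixels change. The gradient image here stands in for a real photo.
import io
import numpy as np
from PIL import Image

# Stand-in "photo": a smooth gradient with a little noise (like a cloak).
rng = np.random.default_rng(0)
base = np.tile(np.linspace(0, 255, 128, dtype=np.uint8), (128, 1))
cloaked = np.clip(base.astype(int) + rng.integers(-3, 4, base.shape), 0, 255).astype(np.uint8)

def jpeg_roundtrip(arr, quality):
    """Encode as progressive JPEG at the given quality, then decode."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality, progressive=True)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())))

for q in (95, 50, 5):
    out = jpeg_roundtrip(cloaked, q)
    # Mean absolute pixel error grows as quality drops -- the same compression
    # that erodes a cloak also erodes the signal a recognizer depends on.
    err = np.abs(out.astype(int) - cloaked.astype(int)).mean()
    print(f"quality={q:3d}  mean abs error={err:.2f}")
```

The paper's point is visible even in this toy: at quality 5 the roundtrip error is large enough that it mangles far more than just a cloak.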


They create a picture designed to fool facial recognition systems, then test it against JPEG and other image compression techniques, run facial recognition on the compressed version, and if both versions pass, they don't change the image further.

If the procedure fails, the GAN knows this and changes the output until it passes both tests.


It doesn't use a GAN. I haven't read the paper closely, but it uses a feature-extracting network and then tries to modify the features to match a different person. It doesn't try to take advantage of degenerate states in existing networks.

I think it does things like subtly change the shape of your eyes and whatnot. That makes people consistently difficult to pick out of a large set of unmodified faces, but a human still finds the images recognizable because they're comparing against a smaller set.

They didn't create the model by training it on the facial recognizers they tested against, and they tested with several different feature extractors.
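If that reading is right, the core loop is a small constrained optimization: push the image's features toward another identity while keeping the pixel change inside a tiny budget. A toy sketch under that assumption, not the authors' code (a fixed random linear projection stands in for the real deep feature extractor):

```python
# Toy sketch of the feature-matching idea: nudge an image so its *features*
# move toward a different identity while the pixel change stays tiny.
# A random projection stands in for the feature extractor; real systems
# use a deep network, but the optimization has the same shape.
import numpy as np

rng = np.random.default_rng(1)
D, F = 64 * 64, 32                              # pixels, feature dims
W = rng.standard_normal((F, D)) / np.sqrt(D)    # stand-in feature extractor

def features(x):
    return W @ x

me = rng.random(D)        # "my" photo, flattened
target = rng.random(D)    # a different identity

# Projected gradient descent: step toward the target's features, then clip
# to an epsilon ball so the change stays visually negligible (the "cloak").
eps = 0.02
delta = np.zeros(D)
for _ in range(200):
    grad = W.T @ (features(me + delta) - features(target))
    delta = np.clip(delta - 0.5 * grad, -eps, eps)

before = np.linalg.norm(features(me) - features(target))
after = np.linalg.norm(features(me + delta) - features(target))
print(f"feature distance to target: {before:.3f} -> {after:.3f}")
print(f"max pixel change: {np.abs(delta).max():.3f}")  # stays within eps
```

The interesting property, as the parent notes, is that this transfers: perturbations computed against one extractor tend to move the features of other extractors too.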


They are lucky in this case because compression does defeat the cloak to some degree. However, compression at those levels also defeats Facebook's ability to accurately identify you.


How about a 1px Gaussian blur? Haven't read the paper but I had the same thought. Seems way too tenuous.


from the paper:

> However, we find that none of these transformations defeat our cloaks. The protection success rate remains 100% even when data augmentation is applied to cloaked images. Applying Gaussian blurring degrades normal accuracy by up to 18% (as kernel size increases) while cloak protection success rate remains >98% (see Figure 13). Adding Gaussian noise to images merely disrupts normal classification accuracy – the cloak protection success rate remains above 98% as the standard deviation of the noise distribution increases (see Figure 14). Even image compression cannot defeat our cloak. We use progressive JPEG [57], reportedly used by Facebook and Twitter, to compress the images in our dataset. The image quality, as standard by Independent JPEG Group [1], ranges from 5 to 95 (lower value = higher compression). As shown in Figure 15, image compression decreases the protection success rate, but more significantly degrades normal classification accuracy.
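For anyone who wants to poke at the blur claim themselves, a miniature version with Pillow (the noisy gradient is a stand-in for a cloaked photo, and "high-frequency energy" here is just my rough proxy for the detail a blur removes, not a metric from the paper):

```python
# Miniature version of the blur test from the quoted passage: apply Gaussian
# blurs of increasing radius and watch high-frequency detail disappear.
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng(2)
base = np.tile(np.linspace(0, 255, 128, dtype=np.uint8), (128, 1))
noisy = np.clip(base.astype(int) + rng.integers(-8, 9, base.shape), 0, 255).astype(np.uint8)
img = Image.fromarray(noisy)

def high_freq_energy(im):
    """Variance of the difference between the image and a heavily blurred copy."""
    smooth = np.asarray(im.filter(ImageFilter.GaussianBlur(radius=4)), dtype=float)
    return float(((np.asarray(im, dtype=float) - smooth) ** 2).mean())

for r in (0.5, 1.0, 2.0):
    blurred = img.filter(ImageFilter.GaussianBlur(radius=r))
    # Bigger radius -> less fine detail survives, for cloak and face alike.
    print(f"radius={r}  residual high-frequency energy={high_freq_energy(blurred):.1f}")
```

The paper's argument is that by the time the blur radius is large enough to scrub the cloak, this same loss of detail has already wrecked normal recognition accuracy.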


You don't even have to read the paper, it's in the FAQs: "Can't you just apply some filter, or compression, or blurring algorithm, or add some noise to the image to destroy image cloaks?"

Short answer: No, not really. Long answer: look at the FAQs :)


Color me extremely skeptical. A low-pass filter will make short work of any "tiny, pixel-level" changes designed to thwart ML. After all, one of the most tell-tale identifiers (space between eyes/nose/mouth) is still plainly observable and unaltered in the "cloaked" image.

If a human's neural network can correctly correlate the before/after examples, so can a computer's. They might have found an issue with some modern implementations of facial recognition, sure. But it's a false sense of security to claim "when someone tries to identify you using an unaltered image of you [...] they will fail."


> If a human's neural network can correctly correlate the before/after examples, so can a computer's.

color _me_ skeptical, but this is like saying we have functioning AGI; that artificial NNs are the same as the ones we have in our skulls. This, to me, is an effect of the over-anthropomorphization of machine learning. It's a bad intuition to have.

However, I do agree. This is just one step in an arms race, and one iteration from being worthless.


That bit was more of a forward-looking statement about the future capabilities of image recognition, but yes it is somewhat hyperbolic in the general case. I don't believe we'll ever achieve AGI, but I do believe we'll have super reliable application-specific classifiers that vastly outperform humans and won't be fooled by tricks like this.


Machine vision will likely be one of the last of these, though; the visual cortex is a pretty large chunk of very sophisticated hardware.


From the linked article:

Q: Can't you just apply some filter, or compression, or blurring algorithm, or add some noise to the image to destroy image cloaks?

A: As counterintuitive as this may be, the high level answer is that no simple tool works to destroy the perturbations that form image cloaks. To make sense of this, it helps to first understand that Fawkes does not use high-intensity pixels, or rely on bright patterns to distort the classification value of the image in the feature space. It is a precisely computed combination of a number of pixels that do not easily stand out, which produces the distortion in the feature space. If you're interested in seeing some details, we encourage you to take a look at the technical paper (also linked above). In it we present detailed experimental results showing how robust Fawkes is to things like image compression and distortion/noise injection. The quick takeaway is that as you increase the magnitude of these noisy disruptions to the image, protection of image cloaking does fall, but slower than normal image classification accuracy. Translated: Yes, it is possible to add noise and distortions at a high enough level to distort image cloaks. But such distortions will hurt normal classification far more and faster. By the time a distortion is large enough to break cloaking, it has already broken normal image classification and made the image useless for facial recognition.


It was pretty frustrating that they did not readily offer any example images for inspection, so against my better judgement I downloaded their binaries to run some experiments.

First, a source image at an approximate resolution that you might find on a social networking site: https://imgur.com/a/9szcC1m

Text output of the tool, which ran for about 3 minutes: https://imgur.com/a/fZtfrmm

The resulting cloaked image: https://imgur.com/a/OSHXdbO

I applied a difference filter between the two images in Photoshop, to show an example of the actual perturbations performed: https://imgur.com/a/q4zC7Ms

Since it's hard to see, I compressed the output to highlight what the program actually changed. It does seem like there is a good amount of disturbance to the image: https://imgur.com/a/1Sx68o3
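The same difference-and-stretch step can be done without Photoshop. A sketch in NumPy (the 4x4 arrays here are placeholders for a real before/after pair loaded from disk):

```python
# Difference filter, NumPy style: subtract the two images and stretch the
# residual to the full 0-255 range so faint perturbations become visible.
import numpy as np

def amplified_difference(original, cloaked):
    """Absolute per-pixel difference, stretched to the full 0-255 range."""
    diff = np.abs(original.astype(int) - cloaked.astype(int))
    peak = diff.max()
    if peak == 0:
        return diff.astype(np.uint8)   # identical images
    return (diff * 255 // peak).astype(np.uint8)

# Tiny demo: a 4x4 "image" and a copy with one faint 2-level perturbation.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1, 2] = 2
print(amplified_difference(a, b))   # the lone perturbed pixel stretches to 255
```

For real photos you'd load both files with Pillow, convert to arrays, and feed them through the same function.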

Now, the real test. First, a Google reverse image search for the original file - identification is pretty bang-on: https://imgur.com/a/5HJwjPx

A Google reverse image search for the cloaked file: https://imgur.com/a/QByXBfS

The only difference I'm seeing is one or two images swapped in the "visually similar images" category.

So, I figured that that's the "best case" for the cloaked image - giving the search algorithm the full, unfiltered data, and the program still failed to disguise it. For fun, I thought I would use a "low-pass filter" (Google Lens pointed at my computer screen) as well, just for thoroughness. And the result surprised me!

Here's Google Lens pointed at my screen with the original image open: https://imgur.com/a/1BVRFG0

And here's Google Lens pointed at my screen with the cloaked image open: https://imgur.com/a/uoppuit

So, it would seem that the algorithm's distortions more effectively come through in worse quality images! But, based on my full-resolution result, I wouldn't trust it to disguise something that is being directly uploaded to a social network.

Now, one important note is that reverse image search is probably not using a facial recognition model, but more like image chunk hashing - although I would also consider that something a privacy tool should defend against, which is why I included it.

All in all, very interesting and thanks for convincing me that I should actually test it out.


GIS is specifically designed to be good at finding similar images so it's going to work great for your test case. Facial recognition algorithms are solving a different problem.

If they were promising that cloaking would work well on GIS, that'd be a different matter. I can imagine wanting your images to not show up on GIS (because people would use them to try and find the source image on your profile, or something) but it's a different set of constraints at that point.

For cloaking a big use case would be "I took a selfie with a friend and want to share it on my instagram" and your goal is for that instagram selfie to not automatically connect with, for example, a surveillance photo of you at a protest. GIS is obviously not relevant to that scenario.


When I look at the cloaked image it seems that the tool is doing something legit. It's actually changing the shape of the face ever so slightly, rather than playing tricks with noise.

I echo the sentiment of other posters regarding reverse-image search. The original image should not be available to match against. That would be operator failure.


You're not testing against facial recognition models.

"similar image" search generally uses "perceptual hashing" which is not related to facial recognition, and not really a privacy risk.
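For context, here's what a minimal perceptual hash (dHash) looks like. The shrink step is why tiny pixel-level cloaks mostly wash out of similar-image search. This is a sketch of the general technique, not Google's actual pipeline:

```python
# dHash, the simplest perceptual hash: shrink the image, compare neighboring
# pixels, pack the comparisons into bits. Faint per-pixel perturbations
# average away during the shrink, so the hash barely moves.
import numpy as np
from PIL import Image

def dhash(img, size=8):
    """Difference hash: 1 bit per horizontally adjacent pixel pair."""
    small = np.asarray(img.convert("L").resize((size + 1, size)), dtype=int)
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return sum(1 << i for i, b in enumerate(bits) if b)

def hamming(h1, h2):
    return bin(h1 ^ h2).count("1")

# A gradient image and a faintly perturbed copy hash (almost) identically.
base = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (64, 1))
noise = np.random.default_rng(3).integers(-2, 3, base.shape)
perturbed = np.clip(base.astype(int) + noise, 0, 255).astype(np.uint8)
d = hamming(dhash(Image.fromarray(base)), dhash(Image.fromarray(perturbed)))
print(f"hamming distance between hashes: {d} of 64 bits")
```

A facial recognition embedding is a very different object: it's learned features, not downsampled pixels, which is exactly the surface Fawkes targets.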

You're not testing against what this tool was designed to defeat :-/


There's also a chance Google image search is looking at the filename of the image to get a bit more context. Does the reverse image search of the cloaked image still work if you rename it something unrelated?


Fair point; this time I cropped the image too, so that it wasn't looking at any of the boundary to identify it either.

Here's the original image: https://imgur.com/a/Td4rhoy

And the cloaked: https://imgur.com/a/cPCiCZo

These were both saved as JPG with compression level 8/12. I searched for the cloaked crop (96.jpg) first this time: https://imgur.com/a/FSehQWO

And the original crop (10.jpg) next: https://imgur.com/a/yx4jF0B

This time, Google reverse image search did better at identifying the name of the singer in the cloaked image, instead of just giving the band name for the uncloaked.

Not super scientific since we don't really know what's going on behind-the-scenes with Google reverse image search, but it's certainly one adversary that doesn't seem to be easily fooled if there are other images of "you" out there for it to find. I also tried these small crops in Google Lens with less success (I got unrelated portraits for both images, cloaked or not).


The goal, afaik, is to stop facial detection software from learning to recognize you and put a face to a name, not to frustrate visual similarity searches. The images are supposed to be visually similar -- so similar that they're indistinguishable to a human viewer.

Surveillance software that purports to accurately identify a person across multiple images is not just looking for the same content with some visually insignificant modifications. It's reading your facial structure, attaching your name to it, and searching for it in every image received. Fawkes is working to defeat that specific use case, not all fuzzy matchers in general.

P.S. If you have a human assailant running a reverse image search for photos of you, I think you're well past the point that something like this could be expected to help.


But this suggests a way to defeat the cloak -- run your input through image similarity search, then run your facial recognition software on the hits. This won't work in full generality, not every picture is on the internet like that, but it can certainly help, I imagine.


I think that technique probably works a lot better with high-resolution professional headshots than it does with candid photos at the family reunion, for example.

However, if someone is willing to go to that level of effort, the target probably needs to aim for something a little more forceful than tricking Facebook's autotagger.


Since visual similarity search uses feature vectors similar to facial recognition's, it calls their whole theory of operation into question.


Yandex seems to give even more precise matches than Google. Maybe an image of someone less popular would work better.


This was my first impression as well, except with less knowledge on the subject.

Someone, somewhere said "huh...", and placed another filter into the pipeline to handle these types of images.

While looking for something to sound smart wrt the tank-training myth, I found this interesting page: https://www.gwern.net/Tanks "The Neural Net Tank Urban Legend"

And interestingly, looking at the link for "superresolution needing learned downscalers" found this: https://arxiv.org/abs/1907.12904 "Learned Image Downscaling for Upscaling using Content Adaptive Resampler", code available at https://github.com/sunwj/CAR

So, IDK, seems like this Fawkes approach will be an interesting paper.


Applying some form of transformation is a weak countermeasure to adversarial examples. Especially simple ones such as low pass filters.

https://arxiv.org/pdf/1707.07397.pdf

Neural Networks do not learn what humans learn. They can learn completely different and sometimes much smaller features.


Haven't read the paper yet but I sure will. I wonder how it works that so many face recognition implementations are fooled, since I don't know how similarly they work. I would have guessed they know which features are extracted and modify the relevant regions.

I guess that face rec software will quickly adapt though. That said, we have invisible watermarks that are very resistant to compression or other filters.


Or more simply, what if you just take a real life photo of the image and scan it back in and use the same classification techniques?


Reproducing the image in this way is essentially a manual low-pass filter (although with little control over the parameters), so it's certainly one valid data point with which to test the hypothesis.


This can be thwarted fairly easily https://arxiv.org/pdf/1607.02533.pdf


This is tested on existing models/face recognition APIs, which means locked pre-trained models. So they might have learned a way to add pixels such that the model outputs a very different embedding. This is a known issue in deep learning [0][1][2].

I believe a model trained on cloaked images would defeat its purpose and make this technique useless.

[0] Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. "One pixel attack for fooling deep neural networks." IEEE Transactions on Evolutionary Computation 23.5 (2019): 828-841.

[1] Guo, Chuan, et al. "Countering adversarial images using input transformations." arXiv preprint arXiv:1711.00117 (2017).

[2] Liu, Yanpei, et al. "Delving into transferable adversarial examples and black-box attacks." arXiv preprint arXiv:1611.02770 (2016).


This should be somewhat mitigated if you keep using different cloaking images, no?


But the model will eventually be updated to detect and process the new cloaking images. So, to stay ahead, you decide to create a model that automatically generates different cloaking images, and... The whole system is now just a GAN : https://en.wikipedia.org/wiki/Generative_adversarial_network


I think there's a (hopefully strongly privacy preserving) combinatorial explosion here though. If current models can be trained to accurately-enough recognise me with, say, 100 training images - this tool might produce unique enough perturbations to require 100 images for each of the possible perturbations, potentially requiring you to train your new model using tens of thousands or millions of cloaked versions of the 100 images for each of the targets in your training set.

(If I were these researchers I'd totally be reaching out to AWS/Azure/GCE for additional research funding... <smirk>)


Not necessarily, because the changes are destructive. They can't restore what was there before, and they can't necessarily infer which image was cloaked and which was not.


There is a problem there that your pics are already out and you can no longer update them....


The FAQ there addresses that, suggesting you can "dilute down" the ratio of normal-to-cloaked images in the public data sets the model creators train on, and hence reduce their future accuracy.

(So now you just need to somehow get as many cloaked photos of yourself uploaded and tagged to FB as they've collected in the last decade or so...)


If you use a new cloaking image for each picture you upload to social then they will all be embedded in a different location for a given feature extractor and an adversary wouldn’t be able to reverse search for linked pictures—that’s at least my understanding of how the method would need to be used. But if you keep using the same cloaking image, your adversary could definitely learn that process and effectively undo it.


Cool. So it would prevent someone from learning my face from a collection of photos on my profile. That makes more sense...


While this seems to work against several current techniques, there's no guarantee it will work against all of them. It also offers no guarantees against future developments, and anything you put on the public internet is likely to be retained forever. Because of this I'd consider it an interesting proof of concept, but not something anyone should use as a privacy tool. You could consider it in cases where you're forced to provide a picture, for instance my public transport card requires one.


Yeah sure, I agree. At the same time: isn’t that exactly the kind of perpetual armament that’s going on in many fields?


It's an arms race, like a lot of things.


> You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.

So, if I adopt this and upload only cloaked images on social media, and the people I normally interact with also do the same, then facial recognition will be able to detect me based on someone showing the system that I’m present in the photo (even though it identifies me as the distorted version)?

If the above understanding is true, then even law enforcement could cloak all the photos they have and try to match captures with their raw photo set and the cloaked photo set to narrow it down for a human?

What am I missing?


My guess is that the changes it makes - presumably moving the standard face landmarks - are different for each picture you run through it, so multiple images of the same face will not be recognised as the same face.

(But I'm not sure, and have downloaded the paper and the apps to read and experiment with...)


Perhaps it isn't predictable? I would hope that the face recognizer can't just cloak the uncloaked images and then get the match.


Once this technique gets enough attention, a detector for it will be built. Even if the face cannot be recognized, a profile with such picture may be flagged for more scrutiny. This reminds me of using TOR that hides what you visit, and yet likely puts you into a watchlist for surveillance.

I think that a simpler and more robust strategy to achieve good privacy is avoid posting personal information online and social media altogether.


"Hey Bob? Check out this lykahb person. There's something _off_ about them. No Facebook, no Twitter, not even LinkedIn. Probably up to something, we should keep an eye on them. Add them to the list." -- some NSA/GRU/MSS/Mossad contractor


"Not standing out" isn't very scalable.


What you're saying points out that there are larger problems of government policy and capitalistic abuse that this software cannot solve.

However, simply not going on the Internet does not solve the problem people care about. People's desired solution is to use the Internet in a personal way and be safe - not just to be safe.


> The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model an highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), and tries to identify you, they will fail.

Do different photos of the same person produce unique results where even a comparison between two cloaked will result in a mismatch? The article mentions that only the comparison between unaltered and cloaked images will result in a mismatch. If that is the case, what's stopping someone from using this algorithm to generate a cloaked image from the unaltered one and then using both in order to identify you?


I wonder if this would be defeated by running an image I wanted to match through it first. Would current state of the art facial recognition match the two cloaked images, or did they already consider that as an attack surface?


"when someone tries to identify you using an unaltered image of you [...] they will fail."

I wonder how this holds up when someone takes a photo of that 'protected image'. I can imagine that if these minuscule pixel-scale changes aren't visible to the naked eye, my crappy 6 megapixel camera will overlook them as well. If I then proceed to feed that image into my image recognition algorithm, is it still protected?


They go over the effects of compression - which they say only degrades the protection - but at the same time also degrades the identification accuracy of the AI model.

So if your crappy 6 megapixel camera cannot take a clear shot of the cloaked pixels (effectively applying a blur filter), that would also affect the AI detection.


More importantly, assuming they have a database of such cloaked images, what if someone just applies the same cloaking technique to the image of you? Can they still identify you?


That's a pretty lazy assumption, and even a quick read of the original article makes me sure it's incorrect.

There are quite a lot of comments here that stink of Dunning-Kruger candidates, who read the headline and first paragraph, then just started typing their random "wisdom", assuming they're smarter and better informed than the team of PhD researchers who wrote the paper being discussed. (Am I just overly grumpy and judgemental today? Was HN always this bad?)


I've tried this with facenet and it still detects the correct faces I tried without much issue: https://github.com/davidsandberg/facenet/


How many faces were in the database you tested with? Face re-ID is much simpler for a small cohort.


I was using the pretrained LFW models, but an older version (128 instead of the 512). So roughly ~6000 faces


Also submitted 7 days ago [1][2][3] but no upvote love so far. Which is curious given the (possibly short-term, until these images join the training corpus?) privacy benefits.

[1]: https://news.ycombinator.com/item?id=23845760 [2]: https://news.ycombinator.com/item?id=23842016 [3]: https://news.ycombinator.com/item?id=23837565


It might be possible that these people tried more platforms (face recognition APIs) but only reported those where they got good accuracy in terms of defeating the system.

I personally would like to see tests done on facebook by uploading these images and checking if it can recognize it.


Looks like it is obfuscating asymmetry. In addition to the intended effect it could also make people more attractive.


Finally a comment that addresses how the cloaking works. All the other comments I've seen here wonder how pixel-level changes can prevent recognition. Well, shifting an eye a few millimeters changes the whole face!


The cloaked versions look like different people to me (except for the last). I'm more surprised that nobody is mentioning this! They look like relatives, but not the same person.


I'm not surprised that there are loads of attacks like this. On QI recently (a TV prog in the UK) a series of images were presented showing just how asymmetric our faces are.

Try taking a photo of your face or someone you know with as near symmetric lighting etc as you can manage. Now cut the image vertically and mirror each half and compare visually.

Frightening isn't it?
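The mirroring experiment takes only a few lines with Pillow, if anyone wants to try it on their own photo (the gradient image here is just a placeholder for a real portrait file):

```python
# The mirror experiment described above: take the left half of a picture,
# flip it, and glue it back on to make a fully symmetric image.
import numpy as np
from PIL import Image, ImageOps

def mirror_left_half(img):
    """Replace the right half of the image with a mirrored left half."""
    w, h = img.size
    left = img.crop((0, 0, w // 2, h))
    out = img.copy()
    out.paste(ImageOps.mirror(left), (w - w // 2, 0))
    return out

# Placeholder "portrait": a 64x64 horizontal gradient.
demo = Image.fromarray(np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (64, 1)))
sym = mirror_left_half(demo)
arr = np.asarray(sym)
print(arr[0, :4], arr[0, -4:])   # right edge now mirrors the left edge
```

Run it on both halves of a real portrait and you get the two "symmetric faces" the QI segment showed.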

Now add fancy patterns that can be hidden within an image that eyes miss but algos don't. AI does not see the way you and I do. It can't. AI can be twiddled constantly to get it into line with what we perceive and we could call that evolution. In 200M years it might be quite good.

I suspect that progress will be faster than that but those machines can't type on a keyboard balanced on its knee whilst drinking wine and admiring a landscape with a setting sun whilst worrying about how to shop tomorrow, now masks are compulsory. What's the SO up to? The TV is showing crap and a new Netflix series is available but I can't be arsed ...

The march of our robot overlords is unlikely soon.


Why cut and mirror? Can't you just mirror? Wouldn't this be the same as just looking at someone's face in the mirror?


No, he's not saying to flip, he's saying that the left and right halves of your face aren't symmetrical, and it's very uncanny valley if you view a portrait of a person with symmetrical left and right halves.

https://www.google.com/search?q=symmetrical+left+and+right+f...


Almost all of these (including the example on QI) are just poor editing. There are some examples in those results where it's been done properly, with symmetrical lighting, and the results look... almost exactly the same as the original. Unsurprisingly.


Oh, I see, thanks.


Here is a good video describing how this might work. Near the end he shows that even printing out an image that has been "cloaked" and viewing it from different angles can still fool a neural network classifier.

https://www.youtube.com/watch?v=4rFOkpI0Lcg


I had a similar idea for a system to add noise to videos to prevent them from being flagged by state censorship systems. Keeping videos of abuse from being deleted from public view in cases like the Great Firewall, for example. I don't have the expertise for implementation yet but I'm glad steps are being made in this direction.


Smart but not wise.

Certainly another tool in the privacy toolkit if you absolutely must surrender your likeness to someone else’s computer, but worth bearing in mind that this does not provide (and doesn’t purport to provide) the kind of privacy that strong encryption (or better yet, absent data) can provide.

Edited to add: it’s still damn cool.


Or better yet, burning Google and Facebook to the ground.

Technical solutions have never solved this sort of societal problem. Expecting a few individuals to fight against massive institutions with a little clever math is not going to work.


It'd be great to have an app on Android that applies this to every photo I take with my camera.


Agreed. I would use it, unless it was made by Facebook, Amazon, Google or Apple.


Stupid thought - technically couldn't any implementation include some sort of deliberate covert steganographic key? Granted, that would likely only narrow down the implementation, without being detectable via hashes of each instance's output.


I can't help but roll my eyes at the introduction's "unregulated facial recognition software". That is a meaningless term given the lack of regulation in the first place, and it says nothing in itself about effectiveness. The Clipper Chip's infamous Skipjack was regulated. It annoys me mostly because meaningless rhetoric makes it look like they have no defensible stance.

That rant aside, I am curious whether this technique will lead to more resilient facial recognition and image-parsing techniques to find the shape. Obviously the fact that humans can still recognize it is a hint that some other algorithm is possible.


Tried it out on a face in Google Photos. Google Photos IDed the "fawked" face without a problem.



Has anyone tried it with their own image? How did it go?


I can't speak for how effective Fawkes is but I can speak for the process. I just tried this out with 4 images.

One thing that I took notice of was how long the program ran on my computer. It took about 5 and a half minutes to obfuscate 4 images on an i9-9900K, with the CPU pegged at 100% the entire time. I can't imagine how long this would take on a low end laptop: especially if I needed to cloak a lot of images in bulk.

Another thing I noticed is that the discoloration that is applied to the final images can be easily mistaken for bruising. If I were to see someone's post on social media and they looked like my results, I'd be inclined to think that the poster is recovering from a bad fight or is a victim of abuse.

Other than those two little nitpicks the tool is pretty cool! However I don't think I will be using it myself due to the second point.


Just defeated it on a dataset of 10k face images. Accuracy drops 7 ± 2%. Had to tweak some params, but that's it.


Great concept, as long as the subject can avoid canonical image-to-name mappings such as airports (now scanning everyone), US’s REAL ID database, and the like.

That said, given that for most people the threat model is social or work rather than legal, something like this would be terrific to build into consumer insta-photo devices.


My biggest pet peeve with this work is that they incorrectly say:

> These adversarial examples have been recognized since 2014 (here's one of the first papers on the topic).

Adversarial machine learning has been around since 2006. It's only since 2014 that people decided to call them adversarial examples.


Stuff like this usually falls in the "too good to be true" category, and it seems like several people in this thread have already defeated Fawkes. I remember a few years ago, a Google paper claimed to have figured out a way to separate aesthetically pleasing photos from unaesthetic ones. My friend had an idea for an app that could sort -- even roughly -- "bad" pictures from "good" pictures, as she takes thousands of photos when on vacation. Just out of curiosity, I actually went through the trouble of setting up and running their TensorFlow implementation, with pretty mediocre results.


I just tried it on myself and it produces some weird colour distortions, particularly around the eyebrows, even with the default 'low' setting (50 iterations, threshold 0.003), so I'm not sure people will be happy using it. Kind of looks like I've been attacked by a bad eyebrow pencil. Also looks a bit like a picture printed on thin magazine paper (something trashy like OK! Magazine) held up to the light, so the image from the other side bleeds through.

If you tweak the values a bit lower it doesn't look so bad, but of course I haven't tested it with an array of DL algorithms.


Unfortunately, these days it is really difficult, borderline impossible, to control what images of you are uploaded to the internet. This is discussed in the "Real World Limitations" section of the paper. Even assuming you have no identifying photos online, non-public photos are still analyzed by big companies like Google, Facebook, and Apple, who have access to them through their cloud services (e.g. photos you, your friends, or family sync with Google Photos or Apple's iCloud). It takes just one image correlated with your identifying details and you lose anonymity.


This might work today, but it won’t work tomorrow.

This is just one side of a GAN, on the next iteration, it will be defeated.

Bottom line is that if a human can recognize a face, then it is possible for a machine as well.

Also, given that the big networks can just keep throwing more resources at it (e.g. GPT-3), it's just a matter of increasing the network size to improve feature redundancy.


More accurately, if a human can accurately label inputs and measure outputs, it's possible for a machine. The human eye isn't the peak, just our current standard.


It's not the eye that's doing the recognition. The human brain is still more impressive than any machine.


I agree that it won't work tomorrow. To have a system that keeps working, you would need access to an API that performs facial recognition, and then continuously query it to confirm that the recognition still fails.


Actually the system breaking tomorrow isn't likely to be the case due to the transferability property of adversarial examples.

Adversarial examples transfer between different models trained on different datasets with different architectures.

A new model from yesterday's data is essentially the same architecture, just with some fluctuations in decision boundaries.

Might it affect the success over time? Sure. But not tomorrow.
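A toy illustration of why transfer happens (not from the paper; all numbers and the linear "models" are invented stand-ins for real networks): a perturbation crafted with the gradient sign of one model often flips a second model trained separately, because their decision boundaries are similar.

```python
import math

def predict(w, x):
    # toy linear classifier: sign of the dot product
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else -1

# two "independently trained" models with similar decision boundaries,
# standing in for two face recognition systems trained on different data
w_a = [0.9, -1.1, 0.5]
w_b = [1.0, -0.9, 0.6]

x = [1.0, 0.2, 0.3]  # clean input, classified +1 by both models
eps = 0.6

# FGSM-style perturbation crafted against model A *only*:
# step each coordinate against the sign of A's weight
x_adv = [xi - eps * math.copysign(1, wi) for xi, wi in zip(x, w_a)]

# x_adv flips model A, and it flips model B as well, even though
# B's weights were never consulted when crafting the perturbation
```

Real networks are nonlinear, but empirically the same effect holds across architectures, which is what makes cloaks crafted today plausibly survive tomorrow's retrain.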


What we really need is Fawkes face paint. A little makeup/lotion that you apply to the face. It would apply random cloaking directly to your skin. This way the cloaking is automatic and applies to images that are not under your control.

I can be as careful as I want to be with my own media. That doesn't stop my wife from uploading the family photo to Facebook or a public camera capturing my image.


> What we really need is Fawkes face paint.

Check out https://cvdazzle.com (archive link, because honestly I'm surprised the site is still up: http://archive.is/v39xI)


I was thinking something more subtle.

This looks creepy but effective: http://www.urmesurveillance.com/urme-prosthetic


From a fundamental information theory perspective, if there is still enough information in the image for us to identify the person, but we cannot perceive any delta between the 'cloaked' and 'uncloaked' images, then we are just talking about some arbitrary amount of noise that can effectively be ignored.

If a fixed number of bytes can be interpreted visually by a human as a specific human on a reliable basis, there is zero reason a computer cannot accomplish the same. At worst, we are talking about some minor deficit in the current ML libraries that were tested, likely somewhere in pre-filtering of training data. As mentioned in other posts, a low-pass filter is exactly what you would use to side-step this sort of thing.
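For the curious, the low-pass filter idea is just neighbourhood averaging: high-frequency, pixel-level perturbations live exactly in the band it attenuates. A minimal pure-Python sketch (a naive box blur on a 2D grayscale grid; a real pipeline would use a Gaussian kernel via OpenCV or scipy, and the paper claims cloaks survive this class of defense):

```python
def box_blur(img, k=1):
    # Each output pixel is the mean of its (2k+1)x(2k+1) neighbourhood,
    # clipped at the image borders.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - k), min(h, y + k + 1))
                    for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# a lone high-frequency "spike" (like a cloak perturbation) is
# smeared out, while smooth regions pass through unchanged
spike = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
blurred = box_blur(spike)
```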

From a much more sinister perspective, this is potentially even more dangerous than not applying this cloaking process at all. Presumably, there is some way to detect that this process has been applied and that it has certain hallmarks. Assuming it is resilient enough to survive JPEG and other typical compression schemes, I would wonder if perhaps this is a tool to positively identify those who would otherwise want to hide from authorities.


Your analysis has the error of comparing something that can recognize a face from among a few thousand (a human) with something designed to recognize a face from among many millions (a facial recognition algorithm).

Also, they addressed low-pass filters and other image degradation techniques in the article; almost every adversarial example paper addresses them and works in spite of them. You're not the first person to think of that.


An easier option: don't post photos on Facebook and Twitter. And discourage your friends and family from doing so.


That's like saying to prevent STDs or pregnancy, don't have sex.


I'd agree if the phrase you used was "don't have sex in an exploitative relationship".


The second part of that conjunction--discouraging your family and friends from using FB or Twitter--is definitely not easy.

In fact, it's basically the entire problem.


I have some software that does keypoint detection for gaze tracking (which is usually roughly equivalent to what facial recognition does).

My software works absolutely fine on these images - it correctly identifies all keypoints.

I would question if this works beyond the specific pieces of software they tested against.


This is awesome and really promising. But the fundamental premise of machine learning is that it's supposed to approximate/model any scenario, so wouldn't the basic premise of ML defeat this? It's a matter of time before someone creates a face recognition model that defeats it.


In theory, yes, you can feed these new distorted images into a new model and defeat the cloaking.

In practice, building new training sets is much more expensive than slightly changing the cloaking algorithm.


That is awesome. I need to read the paper. Really promising. Are you the author btw?


Not the author; just a fellow ML practitioner.


...and someone will run their training data through this and retrain a model that defeats it.


I wonder how much photos would need to be altered for someone's face to look similar enough to a person on casual inspection but sufficiently that there's some N other people whos faces can't be differentiated from it.


I wonder if it works against clearview.ai

https://www.youtube.com/watch?v=q-1bR3P9RAw

It's very important that it works 'cos clearview is so creepy. It's not creepy because of its technology, it's creepy because the justifications of its existence that are given by its CEO are sooo weak. "we can do it because it's not worse than google" (ie. we entirely skip the moral argument), "we can do it because it's for law enforcement" (let's frame our stuff in a way that it's only positive) , "we can do it because we ensure that those who use our tool are strictly controlled" (yeah, we're above the states), "all images are public therefore I can do whatever I want", etc.


The more people start using this service, the better the AI will learn the differences between real and cloaked images. So eventually anybody can run an unaltered photo of yours through the cloak and it will match up.


So did anyone here not on mobile test out their software on FB auto tagger or similar social media face rec?

If this really works, why are they not publishing it in a prestigious computer vision conference?


Somehow porting it to iOS or Android would be really great, since people always take selfies from their phones and post them directly to Instagram.


Could this be used for driver's license and passport photos? These are both data sources for huge facial recognition databases.


As an ML outsider, I'm surprised classifiers are so hyper-sensitive to pixel level inputs that this would work.


Some are, some aren't. There is a vast array of different methods, and many are not publicly disclosed, so I highly doubt the effectiveness of most of these studies.

This may stop some internet marketers, but don't expect it to be effective against large corps and governments.


I wonder what consequences it would have if I would use a cloaked photo for my passport...


You would likely get flagged at most international terminals when returning to the US. This would mean that you will be pulled out of line and have your documents manually checked by customs. That image would then be added to the database as an image match for you, and the cloaking would be pretty useless until you replaced the passport (10 years, give or take).


This is fantastic, and here I was using .jpg compression in paint to do it artificially.


What if the AI takes that photo and trains on it again with your name as the label?


That is actually what we want to have happen. Fawkes relies on a POISON attack, in that it corrupts the model into learning the wrong thing. So taking our cloaked photos and learning on it is what corrupts the model and provides the protection.

If you're asking: what if the model trains on "uncloaked" images, we talk about that extensively in the paper and provide technical answers and experimental results. Take a look.
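(Not the authors' code; the poisoning mechanism can be sketched with a toy nearest-centroid matcher. All names, feature vectors, and the 2-D "feature space" below are invented for illustration.) If the tracker's only labelled photos of you are cloaked, i.e. feature-shifted toward a decoy, then fresh clean photos of you no longer match your own label:

```python
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# toy feature space: the tracker builds a nearest-centroid "model"
# from whatever labelled photos it scrapes
alice_cloaked = [[4.0, 4.1], [3.9, 4.0]]   # cloak shifted Alice's features
bob           = [[4.0, 4.0], [4.1, 3.9]]
carol         = [[0.8, 1.2], [1.0, 1.2]]

model = {"alice": centroid(alice_cloaked),
         "bob": centroid(bob),
         "carol": centroid(carol)}

def classify(model, x):
    return min(model, key=lambda name: dist2(model[name], x))

# a fresh, uncloaked photo of Alice (features near [1.0, 1.0]) no longer
# lands near the poisoned "alice" centroid, so the match fails
```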


Creators,

I love you, thank you.


You're welcome!

Ben (on behalf of the team)


Just don't forget the good old EXIF purge as well.
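In practice people reach for `exiftool -all=` or an image library, but for illustration: EXIF lives in the JPEG APP1 marker segment, which a pure-stdlib byte walk can drop. A sketch only: it handles the common case, drops all APP1 segments (EXIF and XMP), and leaves other metadata like IPTC in APP13 alone.

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    # Walk the JPEG marker segments, copying everything except APP1.
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data follows; copy the rest
            out += jpeg_bytes[i:]
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # drop APP1 (EXIF/XMP), keep everything else
            out += segment
        i += 2 + length
    return bytes(out)
```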

