Hacker News
GAN-generated facial images that are capable of impersonating multiple IDs (unite.ai)
169 points by Hard_Space on Aug 4, 2021 | 76 comments



This seems to indicate that the authentication system is using a binary comparison "is this face the same as that face" for each stored face. But why would it not do instead "which is the nearest face to this"? Surely you can't find one face which is the nearest to 40% of the DB? (unless this is yet another counterintuitive feature of high-dimensional geometry)
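The distinction can be sketched with toy descriptor vectors (the names and vectors here are illustrative, not any real system's API): verification runs an independent threshold test against each enrolled template, while identification returns only the single nearest one.

```python
import math

def dist(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Verification: an independent yes/no test against each enrolled face.
# A master face only needs to slip under the threshold for many templates.
def verify_against_all(probe, enrolled, threshold):
    return [dist(probe, e) < threshold for e in enrolled]

# Identification: a single winner, the index of the nearest enrolled face.
# Here one probe can match at most one identity.
def identify(probe, enrolled):
    return min(range(len(enrolled)), key=lambda i: dist(probe, enrolled[i]))
```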


> unless this is yet another counterintuitive feature of high-dimensional geometry

As it turns out, that _is_ another feature of high-dimensional geometry. The details depend on how many dimensions your images have, the similarity metric being used, how big your database is, and so on.

For a brief illustration, consider standard Euclidean space. If your images have D dimensions, then ignoring some pathological cases you can find a point equidistant from D+1 of them (and if you have more dimensions to work with, you have a lot of points to choose from). If D+1 >= (40% of database) then you've accomplished your goal.

Note: It is possible for such a point to be arbitrarily far from the rest of your database (in context, that it would look nothing like a face). Spare dimensions can give you enough freedom to place it "in the middle" of your database in some sense, not just some point in the middle of nowhere that's an equally bad match for everything.
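For a concrete D = 2 sketch of "equidistant from D+1 points": the circumcenter of a triangle is the one point equidistant from three generic points, found by solving a small linear system (function name and setup are illustrative).

```python
def equidistant_point(p0, p1, p2):
    # Setting |x - p1|^2 = |x - p0|^2 and |x - p2|^2 = |x - p0|^2 gives
    # two linear equations 2*(pi - p0) . x = |pi|^2 - |p0|^2, solved by Cramer's rule.
    a1, b1 = 2 * (p1[0] - p0[0]), 2 * (p1[1] - p0[1])
    c1 = p1[0] ** 2 + p1[1] ** 2 - (p0[0] ** 2 + p0[1] ** 2)
    a2, b2 = 2 * (p2[0] - p0[0]), 2 * (p2[1] - p0[1])
    c2 = p2[0] ** 2 + p2[1] ** 2 - (p0[0] ** 2 + p0[1] ** 2)
    det = a1 * b2 - a2 * b1  # nonzero unless the three points are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With more dimensions than points, the system is underdetermined, which is exactly the "lot of points to choose from" above.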


Amazing, thanks!


Edit: D+1 >= (40% of database)


DLib is a facial feature extractor that extracts 68 keypoints.

I've used DLib very extensively in the wild. It's fast, has decent python integration and is easy to use.

But it gets confused pretty easily. I've had even the CNN model confuse a blurry photo of a clock for a face.

It's useful enough to build facial recognizers that mostly work ok. But if you are using it for a facial authentication system... it's a pretty bad idea to say the least.
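DLib's face recognition model maps a face to a 128-d descriptor, and the project's examples suggest treating a Euclidean distance under about 0.6 as "same person". A stdlib-only sketch of that comparison step (the descriptors below are placeholders, not real DLib output):

```python
import math

MATCH_THRESHOLD = 0.6  # distance cutoff suggested in dlib's face recognition examples

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(desc_a, desc_b, threshold=MATCH_THRESHOLD):
    # The binary "is this face that face" comparison over 128-d descriptors.
    return euclidean(desc_a, desc_b) < threshold
```

A fixed global threshold like this is exactly the kind of decision rule a master-face attack targets: one descriptor only has to land within 0.6 of many enrolled ones.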


> That Can Bypass Over 40% Of Facial ID Authentication Systems

Turns out to actually mean "three CNN-based face descriptors: SphereFace, FaceNet and Dlib", which, as best I can tell, are two academic projects and an open-source library.

By far the largest deployed facial authentication system is of course Face ID, which this has zero/zilch/no chance at all of working against.

What a terrible, terrible headline.


So I understand the first part of your argument, and I agree it may be a very bad headline.

But to say it has no chance of working against Face ID is just saying YOU don't know how to make it work.

It is short-sighted, to put it delicately.

An intelligent enough person will understand that there are millions of even more intelligent and highly motivated people, and there is no way to be sure about what they can't do, short of breaking the laws of physics.


FaceID uses a 3d surface to identify your face. These GANs are making 2d images. Until you can make a dynamic controllable 3d surface and generate the faces in 3d you have no hope of defeating FaceID.


Is this true? I would have thought all you would need is to give it an input that maps to a 3d surface that's adversarial. There's an extra step in the pre-prep pipeline, but the basic technique is the same - gradient descent on inputs until you derive those that are sufficiently adversarial.

All neural nets are vulnerable to adversarial examples. It's a fundamental property they hold, because they're essentially stacked linear models. So (for example) they get more confident about their predictions when given a sufficiently out-of-domain input - adversarial training is essentially just finding paths that trigger an out-of-domain response.

I don't see how an additional transformation before input precludes that.
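The "gradient descent on inputs" idea can be sketched on a toy linear scorer (purely illustrative, not an attack on any real recognizer): for score(x) = w . x the input gradient is just w, so one signed step along it maximally bumps the score under an L-infinity budget eps, which is the FGSM-style move being described.

```python
def score(x, w):
    # Linear scorer: dot product of input and weights.
    return sum(xi * wi for xi, wi in zip(x, w))

def sign(v):
    return 1 if v > 0 else -1 if v < 0 else 0

def fgsm_step(x, w, eps):
    # Move each coordinate by eps in the direction that increases the score.
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]
```

Adding a fixed transformation in front of the scorer only changes which gradient you follow, not the existence of such steps, which is the commenter's point.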


Unless you can either produce the correct 3D surface itself or fool the sensors somehow I don’t see how the inner implementation matters.

Or do you mean attacking the inner network somehow from inside the system?


I mean you train your network to produce images that translate into adversarial 3d surfaces.

You don't need to produce the correct 3d surface if the surface recogniser is neural - you just need to produce a 3d surface that's adversarial. The adversarial surface could be completely unrealistic, like these adversarial images. (Although the adversarial generator could also be trained with "realism" as a constraint.)

Are they able to detect depth independent of the surface of a presented image? That would make it harder, but the point of failure then is just figuring out a way to dynamically fool them. I wouldn't be confident saying that's impossible.


Yes, FaceID uses actual depth/distance data by projecting IR dots during scanning. So you would either need to very precisely mock these somehow, or create an actual 3D surface.

https://support.apple.com/en-us/HT208108


Yes, Face ID uses infrared depth sensors, so it shouldn't be possible to fool it with just a printed image. You might be able to fool it by printing with some strange material that confuses the sensors, but I don't see the point of coming up with such an advanced technique. Then you might as well just print a 3D model.


You don't know that there won't soon be a GAN-controlled animatronic face coming.


We are cheating our eyes and brains with 3D goggles.

Don't you think you are too enthusiastic saying 3D facial authentication cannot be fooled?

It is basically an exercise in projecting the right image, something that a large number of people are already working on.


> Don't you think you are too enthusiastic saying 3D facial authentication cannot be fooled?

That is not what was said. The commenter stated that THIS 2D GAN method has no chance against FaceID, and if you understand the way FaceID works you would understand they are absolutely correct.

FaceID shines dots on the user’s face and measures the distortion of those dots across the facial topology. Using this method on a 2D surface will result in no distortion, and therefore fail.
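As an illustration only (not Apple's actual pipeline), the "no distortion from a flat surface" point can be sketched as a depth-relief gate: a printed photo held parallel to the sensor yields an almost constant depth map, so it fails a minimum-relief check. A real system would fit a plane to also reject tilted prints; the threshold here is a made-up number.

```python
def depth_relief(depth_map):
    # depth_map: per-pixel distances from the sensor, in meters.
    return max(depth_map) - min(depth_map)

def passes_liveness(depth_map, min_relief=0.01):
    # A real face has centimeters of relief (nose vs. cheeks);
    # a flat photo facing the sensor has essentially none.
    return depth_relief(depth_map) >= min_relief
```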


There is a huge advantage in getting better patterns when you get to use depth data (like Face ID does).


The question is not whether you get advantage, but whether it makes it impossible to break.

I am responding to this comment:

"By far the largest deployed facial authentication system is of course Face ID, which this has zero/zilch/no chance at all of working against."

"zero", "zilch", "no chance" -- these suggest overconfidence to me. That is not healthy when discussing any authentication system, especially one based on a trained model where we don't exactly understand the relation between input and output.


It doesn't have to be impossible; it has to be harder than a complex password.


The model doesn't even output the data in the correct format: RGB plus depth, infrared and time channels.

So his statement is entirely correct. This model has absolutely no chance of fooling the current most popular facial detection system.

It creates a key that doesn't even fit in the lock, much less have the correct pin heights.

If your point is that this approach and architecture might contribute to a model that can beat FaceID, that's entirely valid to say as well.


Read Apple’s public docs; it wouldn’t be so simple: https://support.apple.com/en-us/HT208108


So the mentioned 40% is a lie? Otherwise the headline seems accurate.


Also the description "facial ID authentication systems". An accurate headline would be "we were able to confuse some open-source face recognition systems"


They're beating facial recognition systems, not facial authentication systems.


It is practically the same, assuming the output from facial recognition is then fed as input to authentication.


That's a false assumption. 2D facial recognition isn't an input to 3D facial authentication.


We are cheating our eyes and brains with 3D goggles.

Don't you think you are too enthusiastic saying 3D facial authentication cannot be fooled?


> Don't you think you are too enthusiastic saying 3D facial authentication cannot be fooled?

Where did I say that?


Do you know how Face ID works? Or is this just a gut feeling?


As a teenager, I was the meatspace opposite of this. I had a face that could trigger about 50% of store security guards. I was regularly searched, questioned or asked to leave. I was not shoplifting or even considering it. I’m not sure what demographic I fitted, but I am glad I have left it.


>As a teenager

>I’m not sure what demographic I fitted, but I am glad I have left it.

I think you've already got it figured out.

The older I get the more every authority figure everywhere leaves me alone. I was probably more 'innocent' as a teenager, though.


> The older I get the more every authority figure everywhere leaves me alone

The less the world in general messes with me, and it's by far the best thing about getting older.


For anyone who is not truly exceptional, there will likely come a point where the world starts to ignore their competencies, too.


As a teenager I realized that I could get away with anything because of how I looked. I could effortlessly shoplift whatever I wanted and did so for many years. Later on I got away with selling drugs for a long time, including being pulled over but not searched with a quarter brick in the trunk of my car! I did get raided later but got very lucky as I had mostly cleared house and they did not find the several sheets of LSD in my freezer.


Pretty much every girl I ever dated has said the same thing about how they got away with shoplifting. And several of them worked at a department store at one time or another. Seems to be a thing.

I never stole anything, but I never worked at Nordstrom so who the hell am I to judge.


We get away with shoplifting, easier police encounters and with authority in general except when it comes to large amounts of money (business, still not seeing that rise in funding for women) or politics. Grass is greener..


Eh. You guys definitely get away with a lot of shit, but I wouldn't say that makes the grass greener from over here. I'm still glad I'm not a girl :p Still, maybe we just hear more about men running huge financial scams because men actually get caught more often. I wouldn't put it past a lot of ladies. There's a whole lotta young Martha Stewart wannabes out there on tiktok


Maybe not the smartest idea to post this on a public forum :)


it's clearly a throwaway account


Yes but if the statute of limitations did not run out, any investigators might still correlate with the other facts.


Are investigators in your country so short of crime to investigate they would attempt to pull out information from an anonymous statement with no guarantee of veracity on a website in a small corner of the internet about a potential minor crime from years ago?


They would have to actually seize drugs (in the USA) to make a case unless there was something else…


It's easily defensible as a story for entertainment.


Someone fake flexing on HN? No way!


When I was younger, I was a pretty obvious metal head. Long hair, black... everything. Every time I went to the airport I was searched and drug checked (have occasionally used drugs, but never been a recreational drug user of any kind).

Amazing how much cutting your hair short and not wearing band shirts changes things =/

(Would like to say I'm incredibly lucky in this regard as I'm a white male).


I love how Israel isn't afraid to publicly demolish the same security systems it pioneered and that its adversaries are probably investing heavily in at the moment. Last month it was the nanotech camouflage that bent light... I'm sure the Iranian fork of that project has since been tabled.

Now all we really need to do is print copies of these faces and drop em all over China.


What does this mean for the security of FaceID? Anyone with deeper knowledge? I am not very knowledgable in this field.


I doubt Face ID would be vulnerable to these. Face ID uses projection mapping and infrared photography [1] to establish depth, ensuring a face is "genuine" and not simply a photograph.

[1] https://support.apple.com/en-gb/HT208108


> Face ID uses projection mapping and infrared photography to establish depth

It seems to me that this "just" expands the parameter space as a way to make defeating the algorithm much harder. I don't see how, in principle, that makes Face ID invulnerable to this type of attack.

Given that Face ID is only accessible using Apple devices which lock-up after a number of failed attempts, training a sufficiently sophisticated GAN might be problematic. But a motivated attacker might, for example, use a device farm or a reverse-engineered implementation of Face ID.


Not to this one, but if you use 3D faces for input, you'll end up with something that will:

a) defeat Face ID

b) look like the result of a horrific teletransporter accident.


It would have to be device specific, as the dot projectors in each FaceID device have a randomised layout unique to that device [1]. This seems to be part of why the FaceID sensor is paired to a device.

[1] https://www.apple.com/business-docs/FaceID_Security_Guide.pd... (page 3)


Props for the source AND page number.

I’ve read it before but hadn’t recalled that detail about the randomized layout. TIL!


Still possible. You'd need to extract and operate multiple Face ID detectors to get the right signals and probably scan thousands of faces to learn signatures and what's needed to fake the inputs. Harder than photos/video, but still doable.


You won’t defeat Face ID that easily. It needs to detect an infrared signature belonging to a real face.

Face ID disables itself after 5 failed attempts, falling back to a password. In my experience, if you point it at something that’s definitely not a real face (but looks like one), it disables immediately.


All you need is to fake the right signals. The sensor itself has no concept of what a face should be - get enough inputs and the GAN will eventually figure out what it needs to get the right output.


Another important question is: is it really a magic face they found, or just a magic image of a random face? If they managed to build an adversarial GAN that works on the physical object rather than a digital image of one, that would be interesting enough on its own.


OTOH, there is a 40% chance this face is on the top 10 most wanted terrorist list... Or, at least, that's what the ED-209 in the corridor will think.


>including various brands of the Differential Evolution heuristic.

Can someone tell me more about this? What various "brands"? I know only the scipy package.


Differential Evolution is just a metaheuristic algorithm. There are several variations of it (not unlike the many variations of genetic algorithms or PSO).


I see! And why is there only one scipy version available for Python (as far as I know)? Maybe because it's simple enough that the variations can be had by tweaking a few settings/lines? Thanks for your answer! For context, my cofounder asked me to implement scipy's DE, and I have walked him through each line of it, but at the moment I don't know much about the underlying maths (we're building a proof of concept and I'm not a professional programmer).


The mathematics is pretty simple (I think Wikipedia should give you more than enough information). It's easy to make your own implementation in 100 lines of Python code (and make the corresponding variations).
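To make that concrete, here is a minimal DE/rand/1/bin sketch in plain Python (parameter names and defaults are illustrative; scipy's version layers strategies, tolerances and a polishing step on top of the same loop):

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then greedily select against the parent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct members, all different from the target i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = list(pop[i])
            for j in range(dim):
                if j == j_rand or rng.random() < CR:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(max(v, lo), hi)  # clip to the search box
            tf = f(trial)
            if tf <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

The named variations (DE/best/1, DE/rand/2, etc.) differ mainly in which vectors the mutation line combines.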


The equivalent of some people having ‘one of those faces’ that look familiar and recognisable.


Does this mean that AI facial recognition evidence is inadmissible in court?


Surely common sense (amongst people who understand AI, anyway) dictates that AI facial recognition evidence is inadmissible in court. At best, ignoring privacy issues, it should be used to identify a potential match when the workload is too great for a human, and then a human does the verification step on the matched faces.


We've had charges filed [0] without a human being part of the verification step. I don't think we've yet seen whether this information can be used as admissible evidence... but it wouldn't surprise me if it's only a matter of time.

[0] https://www.wired.com/story/flawed-facial-recognition-system...


Sadly the Venn diagram of “people who understand AI” and “people involved in the lawmaking process” seems to be almost just two separate circles.


Yeah I realised after that humans would (hopefully) vet the match!


Photos and video are admissible. It would seem fairly dumb to set up cameras for facial recognition and not keep the originals?


And use humans for the recognition? What if computers get better on average?


If that ever happens, the courts could be the last to know. Or, more probably, they will assume it's true several decades too early.



Why? The site is up.


You can’t archive it after it goes down.


Typically you post an archive link if the site crumbles under a hug of death. If it's performing well, there's no need to defer to an archive link.


The motivation is probably that this could be taken down by initiation.


s/initiation/intimidation/

I hate how phones have made posts on the web so incomprehensible, but I usually double check



