> The proposed method [...] is based on training a Generative Adversarial Network on a set of real fingerprint images. Stochastic search [...] is then used to search for latent input variables to the generator network that can maximize the number of impostor matches as assessed by a fingerprint recognizer.
So I hadn't heard about "MasterPrints" - the idea that there are fingerprints (synthetic or otherwise) that just happen to produce a lot of false matches. That's not intuitive, at least to people like me who know nothing about fingerprint matching algorithms.
Also an interesting application of GANs.
As noted, this research was done in software; I'm not sure whether something similar could be applied to physical sensor hardware, especially since you only get a couple of attempts on real hardware before most phones fall back to your PIN code. Attacking real hardware would require either A) some kind of physical fingerprint simulator to interface with the sensor, or B) opening the phone to get direct access to the I/O (which might then fall afoul of tamper detection, if such a thing exists on phones). But it's cool research regardless.
Now I'm curious if similar techniques can be applied to faces. Are there "MasterFaces"? Do some people have faces that generate more false positives than others?
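For the curious, the "stochastic search over latent variables" from the quoted abstract can be sketched as a simple hill climb. Everything below is a toy stand-in: the real work uses a trained GAN generator plus the VeriFinger matcher, while here the "matcher" just thresholds distances between vectors.

```python
import random

random.seed(0)
LATENT_DIM = 8

# Toy enrolled population; a real system would hold fingerprint templates.
population = [[random.gauss(0, 1) for _ in range(LATENT_DIM)]
              for _ in range(200)]

def impostor_matches(z, threshold=2.5):
    """Count enrolled templates that would falsely match g(z).
    Stand-in: Euclidean distance under a threshold, instead of
    rendering an image and running a fingerprint recognizer."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(1 for t in population if dist(z, t) < threshold)

def hill_climb(z, steps=500, sigma=0.3):
    """Stochastic search: perturb the latent vector and keep the move
    whenever the impostor-match count does not drop."""
    best = impostor_matches(z)
    for _ in range(steps):
        cand = [x + random.gauss(0, sigma) for x in z]
        score = impostor_matches(cand)
        if score >= best:
            z, best = cand, score
    return z, best

z0 = [random.gauss(0, 1) for _ in range(LATENT_DIM)]
start = impostor_matches(z0)
z, best = hill_climb(z0)
print(start, "->", best)
```

The real objective also has to keep the generated image looking like a plausible fingerprint, which is exactly what searching in the GAN's latent space (rather than in raw pixel space) buys you.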
The idea is that any calibrated heuristic system has a few classes of errors, and biometrics fall prey to the same ones. In Doddington's "biometric zoo", the individuals who trigger these errors are given animal names.
Given a large enough population, most of the population will be "sheep" - the system works as intended for these.
Some individuals will be prone to false positives. They will pass the system even though they should fail. We'll call these wolves.
Others will be prone to false negatives - they will fail even when they should pass. Goats.
Lambs are a group that is easy to mimic - once they're in the system, others get mistaken for them. Their distinguishing traits are so generic that they effectively enable wolves.
I think this zoo was extended to other unique cases and other animals, but I'm having trouble finding links to the most popular research papers in this series.
The research prompting the posted article seems to be about identifying "wolves" - a good demonstration of the validity of Doddington's theories.
* Wolves are technically just good mimics; I'm simplifying a bit above.
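A crude way to picture the zoo in code (the scores and thresholds below are invented for illustration; Doddington et al. actually define the classes via quantiles of per-user score distributions):

```python
# Hypothetical per-user score summaries: mean genuine score (user vs.
# their own enrolled template) and mean impostor score (user vs.
# everyone else's templates), on a 0..1 similarity scale.
users = {
    "alice": {"genuine": 0.92, "impostor": 0.08},  # sheep
    "bob":   {"genuine": 0.55, "impostor": 0.07},  # goat: weak genuine scores
    "carol": {"genuine": 0.90, "impostor": 0.41},  # others match her -> lamb
    "dave":  {"genuine": 0.91, "impostor": 0.44},  # he matches others -> wolf
}

def doddington_label(genuine, impostor, g_low=0.7, i_high=0.3):
    """Crude fixed cut-offs, only to show the idea. This toy also
    conflates lambs and wolves, since one symmetric impostor mean
    can't separate "matched by others" from "matches others"."""
    if genuine < g_low:
        return "goat"        # prone to false rejects
    if impostor > i_high:
        return "lamb/wolf"   # prone to false accepts
    return "sheep"

labels = {name: doddington_label(**s) for name, s in users.items()}
print(labels)
```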
A real "master face" would be a face that looks like many people, and it seems like you could try to obtain faces like that by playing a two-player game between a recognizer and a face generator (as is done when training GANs).
Facial matching usually reduces the search space by assuming something about the input, i.e. you don't match a face against the whole world, just the friends of the person uploading a photo. Otherwise you'd end up matching a similar face to some stranger on the other side of the world.
This seems like a massive assertion that's not qualified at all in the article. It was my understanding that biometrics in consumer hardware have always been easily circumvented and are largely about convenience.
> In recent years, however, security researchers have demonstrated that it is possible to fool many, if not most, forms of biometric identification.
Identification is more like a username, not a password, and should be treated as such.
I'm willing to neglect the shoulder-surfing attack vector, as I feel I keep my phone sufficiently secure from pickpocketing and I'm not afraid of an "inside attack". I might reconsider, though, if I had to carry a business phone with important secret information.
I always liked the password alternative of showing the user a bunch of pictures or photos in random positions and have them select a number of them in sequence, perhaps showing a whole new set of pictures in between each selection.
Humans tend to have much better visual memory than verbal memory, so they're able to remember this kind of sequence better than a password, especially if the pictures they select are somehow meaningful to them. This is also very difficult for someone to shoulder surf effectively, as they'll be seeing these pictures for the first time and the pictures won't have any meaning for them.
I heard about this idea decades ago, but have never seen it implemented.
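A minimal sketch of the scheme, assuming a fixed image pool, a 4-image ordered secret, and a freshly shuffled grid of decoys each round (all names and parameters are made up for illustration):

```python
import secrets

IMAGE_POOL = [f"img{i:02d}" for i in range(30)]
_rng = secrets.SystemRandom()

def enroll():
    """The user's secret is an ordered sequence of images,
    independent of where they appear on screen."""
    return _rng.sample(IMAGE_POOL, 4)

def present_round(secret_image, decoys=8):
    """Each round shows the next secret image among random decoys in a
    freshly shuffled layout, so observed tap positions reveal little."""
    others = [img for img in IMAGE_POOL if img != secret_image]
    grid = _rng.sample(others, decoys) + [secret_image]
    _rng.shuffle(grid)
    return grid

def authenticate(secret, taps):
    """Check the tapped sequence against the secret (a real
    implementation would compare in constant time)."""
    return len(taps) == len(secret) and all(
        a == b for a, b in zip(taps, secret))

secret = enroll()
round1 = present_round(secret[0])
```

With a 30-image pool and an ordered 4-image secret, the keyspace is 30×29×28×27 ≈ 657,720 - in the same ballpark as a 6-digit PIN, but arguably easier to remember and harder to shoulder surf.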
Android allows pattern unlocks without showing the pattern as you type it, only on error. That's a good trade off.
Not very imaginative. One of my brother's friends got into my brother's phone on the very first try by holding it up to a light and looking at the smudge pattern on the screen. A tapped-out 4-digit PIN would at least stop this method from working so easily.
There's no reason why biometrics can't do both identification and authentication at the same time, as long as both functions have a high degree of confidence.
Multi-factor auth is probably always going to have its place. Where confidence in the authentication is low, it must be raised by adding additional factors.
Research like this, while ostensibly threatening an increase in false positives (due to unauthorized use of fake prints), will in all likelihood cause vendors to tighten the match threshold, leading to greatly increased false negatives. If my print is recognized less than half the time, I'm just going to disable the feature.
It also gets confused if there's any dead skin on my thumb, something that seems to happen pretty often (and I don't even play Nintendo anymore).
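The tradeoff is easy to see with a toy score model (the Gaussians below are an assumption for illustration, not anything from the article): raising the match threshold drives the false accept rate down and the false reject rate up.

```python
import random

random.seed(1)

# Assumed score model: genuine attempts score high, impostor attempts
# score low, with overlapping noise.
genuine = [random.gauss(0.80, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.30, 0.10) for _ in range(10_000)]

def rates(threshold):
    """False accept rate and false reject rate at a given threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

for t in (0.5, 0.6, 0.7):
    far, frr = rates(t)
    print(f"threshold={t:.1f}  FAR={far:.4f}  FRR={frr:.4f}")
```

Under this model, moving the threshold from 0.5 to 0.7 makes fake-print acceptance vanishingly rare while rejecting a noticeable fraction of legitimate attempts - which is exactly the "recognized less than half the time" failure mode, just milder.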
Why things posted from Vice, Vox and the like are always showing up on HN I will never know.
Even if there are mitigations in the products you buy, the study is quite meaningful. It proves (e.g.) the need for such mitigations.
There's definitely a need for phones that are well hardened against physical attack (for journalists, canaries, whistleblowers, etc.). I'm just not that important, so I wish I could choose my security level.
Twin Peaks fan?
I remember there being a lot of skepticism about their claim because they didn’t go into that much technical detail, but rather seemed more interested in winning press and fame (as well as being incredibly boastful about it).
Hopefully someone knows about the current state of Face ID security better than I do, since I’ve been a little out of the loop.
Fingerprint is at MOST 'username'. Never a 'password'.
If you compromise their accounts from the other side of the world by tricking a fingerprint reader, they won’t know immediately. And you’ll have broken fewer laws, and possibly be located in a country without extradition.
On the other hand, the issue here seems to be less about how unique fingerprints are, and more about how unique the machine’s reading of a fingerprint is. That has more to do with the machine than the biology.
Hashes are not unique by definition, though we can treat them as such in many practical situations. The same is true for anything with a fixed number of variables with finite states, e.g. DNA.
Still, one can be (un)lucky, or the statistical models could be wrong - maybe the number of repeats differs in a subpopulation, or the STRs aren't statistically independent.
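The birthday bound makes this concrete. Assuming, purely for illustration, a random-match probability on the order of 1 in 10^13 for a full STR profile, a database of ten million profiles makes at least one coincidental match very likely:

```python
import math

def collision_probability(n_items, n_buckets):
    """Birthday approximation: P(at least two of n_items share a
    bucket) when each lands uniformly in one of n_buckets."""
    expected_pairs = n_items * (n_items - 1) / 2 / n_buckets
    return 1.0 - math.exp(-expected_pairs)

# ~10 million profiles, illustrative 1-in-10^13 random-match probability
p = collision_probability(10_000_000, 10**13)
print(f"{p:.3f}")  # -> 0.993
```

Note this is the probability that *some* pair collides, not that any particular pair does - which is why large databases turn "one in ten trillion" into near-certainty for at least one chance match.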
Disclaimer: my team at Neurotechnology develops the fingerprint recognition algorithm VeriFinger, which was used in this publication to look for vulnerabilities in small-area fingerprint sensors.