
The Face Scan Arrives - zweiterlinde
http://www.nytimes.com/2013/08/30/opinion/the-face-scan-arrives.html
======
wes-exp
As the article says,

"Before the advent of these new technologies, time and effort created
effective barriers to surveillance abuse. But those barriers are now being
removed. They must be rebuilt in the law."

Time and effort maintained privacy by default in the past. But as technology
improves, privacy is no longer the default, so we now have to ask for it
explicitly.

I hear again and again that "no one cares about privacy", but I'm starting to
think this is a relic of the past. Perhaps 10 years from now we will say "no
one _used to_ care about privacy, but that was before technology became
sufficiently advanced."

~~~
astrodust
Guess it's time for dazzle paint to become a trendy new fashion statement.
(Example: [http://cvdazzle.com/](http://cvdazzle.com/)).

Someone like Lady Gaga is basically untrackable if she's wearing any of her
usual unusual outfits.

~~~
btbuildem
Except for the part where you're a person wearing a highly unusual outfit.

~~~
amirmc
Which the machine won't recognise as a person (and will discard). It would
take a _human_ to track you but that's old school. Of course, that's only
until gait recognition takes over from face recognition.

~~~
astrodust
Exactly. It's easy to get a computer to track everyone automatically, but
really expensive to hire people to track everyone simultaneously.

You'll stand out more to a human observer, but to a computer you'll be
invisible.

Or you could just have really dark skin. That usually works just as well.
(Example: [http://www.gamespot.com/news/kinect-has-problems-recognizing...](http://www.gamespot.com/news/kinect-has-problems-recognizing-dark-skinned-users-6283514))

------
mistercow
Given the rate of false positives when "cold hits" are used with DNA
databases, facial recognition (which presumably carries less identifying
information than DNA) has the potential to send a lot of innocent people to
jail.
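A rough base-rate sketch (with entirely made-up numbers, not figures from the article) shows why database-wide "cold hit" searches mislead:

```python
# Illustrative numbers only: a matcher with a 0.01% false-positive rate
# searching a database of 10 million people, almost all of them innocent.
false_positive_rate = 0.0001
database_size = 10_000_000

# Expected number of innocent people who "match" in a single search.
expected_false_hits = false_positive_rate * database_size
print(expected_false_hits)  # 1000.0

# Even if the true culprit is in the database and always matches,
# the chance that any one hit is the right person is tiny.
p_hit_is_guilty = 1 / (1 + expected_false_hits)
print(round(p_hit_is_guilty, 4))  # 0.001
```

The point isn't these specific numbers; it's that when the prior probability of any one database member being the culprit is low enough, false matches dominate the hits.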

I also have some concerns of inadvertent "software racism". See for example,
this video:
[https://www.youtube.com/watch?v=t4DT3tQqgRM](https://www.youtube.com/watch?v=t4DT3tQqgRM).
Obviously facial recognition isn't _inherently_ incapable of recognizing
people of different races, but if it's designed and calibrated on subjects of
one race, it won't necessarily work well for subjects of different races. The
last thing we want is a facial recognition database that thinks e.g. all Arabs
look alike.

And of course, there are a ton of ways that it can be thwarted easily and
inconspicuously, even if it works very well. Are all surveillance cameras
going to be fitted with IR filters to ensure that a bright IR LED necklace
doesn't blind them? And how hard is it for a criminal to hire a makeup artist?

------
unono
If VR headsets like Oculus Rift take off, this problem can be mitigated.
People will walk around with headsets to cover their entire face, defeating
the cameras.

Surveillance is going to happen. If you ban government surveillance, the
public will get this tech and drones and do it anyway. So the robust way to
fight it is to plan your life knowing that you can always be monitored and
someone will always know where you are, even at home.

This is a great opportunity for startups: web apps that let you plan your
security, kind of mini-fiefdoms with your family, friends, neighbors, and
associations that plan against attacks from rivals; electronically activated
weapon systems that shoot when a threat is detected. The Mad Max era is
coming, and hackers will get many opportunities to earn a fortune.

~~~
learc83
>People will walk around with headsets to cover their entire face, defeating
the cameras.

You can already obscure all of your face except for your eyes with just a
strip of cloth, which will have the exact same effect.

~~~
unono
You'll look suspicious, like a ski mask wearing robber. But with headsets,
which many people will also be wearing, you won't.

------
varelse
Given the 15.8% success rate of the cat detector on large data sets
([http://arxiv.org/pdf/1112.6209.pdf](http://arxiv.org/pdf/1112.6209.pdf)),
my chief worry is overconfidence from hopelessly technologically ignorant
bureaucrats, combined with a slew of false negatives and false positives when
the technology is applied to human faces on general surveillance feeds.

In contrast, driver's license, ID, and arrest photos are at least severely
constrained input filters wherein a single individual is placed against
relatively uncomplicated background pixels and the face is in roughly the same
orientation.

~~~
nathan_long
I don't know about the cat detector, but Picasa seems able to identify people
in photos pretty well, despite varying positions and lighting conditions.

It creeps me out.

Granted, they are working with the much smaller set of people I know rather
than the whole populace. But technology only gets better.

~~~
jerf
"But technology only gets better."

Technology is bounded by mathematical constraints, though. The more you
constrain the set of faces you are scanning for, the easier the task becomes.
But trying to pick out any of perhaps thousands of persons of interest across
an input set in, say, the millions is a fundamentally harder problem.

If it helps, imagine that you've equipped a person with all the people they're
looking for, and then set them up in front of the cameras. Even if you imagine
a superhuman who somehow has time to examine all of the millions of inputs,
_even the human is going to have a huge number of false positives and
negatives_. The problem itself is fundamentally, mathematically hard. Any
surveillance technique based on the idea that computer vision is better than
human vision at this sort of thing is going to fail, hard.
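The scale argument above can be made concrete with illustrative numbers (all of them assumed, none from this thread): a per-comparison false-match rate that sounds excellent still buries operators in alerts once it is multiplied across a watchlist and a city-scale camera feed.

```python
# Assumed numbers: 5,000 watchlist faces, 2 million camera sightings per day,
# and a per-comparison false-match rate of one in a million.
watchlist_size = 5_000
sightings_per_day = 2_000_000
false_match_rate = 1e-6

# Every sighting is compared against every watchlist face.
comparisons_per_day = watchlist_size * sightings_per_day  # 10 billion
false_alerts_per_day = comparisons_per_day * false_match_rate
print(false_alerts_per_day)  # ~10,000 bogus alerts every day
```

Ten thousand false alerts a day would swamp any human review process, which is exactly the failure mode being described.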

And that's _before_ the population starts taking active countermeasures
against the face metrics. (As usual, despite the abundant evidence we live in
a dynamic, reactive universe, most humans persist in functioning in a static
model, where the Bad Guys will just passively sit and let their faces be
scanned and never react to that.)

~~~
varelse
Just like there was a relatively cheap countermeasure to every pipe-dream
revision of SDI, such countermeasures are already in place for facial
recognition:

[http://news.discovery.com/tech/gear-and-gadgets/glasses-foil...](http://news.discovery.com/tech/gear-and-gadgets/glasses-foil-face-recognition-software-130123.htm)

But at least we'll fall victim to a higher, more rarefied caliber of terrorist
rather than the awful riffraff we're dealing with now.

~~~
jerf
Those aren't even the interesting countermeasures. Good work with makeup can
significantly change the structure of your face, completely reversibly, far
more than is necessary to fool these systems running across the full set of
human beings. You don't have to get the face recognizer to not recognize a
face at all, you just have to make it a _wrong_ face.

------
frenger
> "While this sort of technology may have benefits for law enforcement (recall
> that the suspects in the Boston Marathon bombings were identified with help
> from camera footage), it also invites abuse."

1\. Yes, the suspects were identified /without/ this technology, and

2\. how would it have prevented the Boston tragedy if it had been in use?

What exactly does the NYT mean here?

~~~
dougmccune
When investigators obtained the first blurred images of the suspects' faces,
they released them hoping the public could help identify the people they were
interested in. I assume the NYT is saying that with better technology they'd
have been able to identify those faces instantly, instead of relying on
putting them up on TV (where the suspects can see them and know the police are
on to them). So it wouldn't have prevented the attack, but it theoretically
would have made the identification easier, and the arrests faster and less
deadly, I suppose.

~~~
frenger
Perhaps, but they were caught pretty fast already, to be honest. The
insinuation is that somehow this technology would have prevented the attacks
in the first place, which is highly improbable.

------
roc
> _" But as surveillance technology improves, the distinction between public
> spaces and private spaces becomes less meaningful."_

Or more to the point: as the NSA mandates access to all private data, the
distinction of recordings from public spaces and recordings from private
spaces is made moot.

------
wicknicks
Does anyone know how this can be technically achieved? Large scale face
recognition with 1 photo per person in the training set?

~~~
bsenftner
Yes, it exists, and I use it every day. In fact, I have a partnership with the
DOD/NSA subcontractors who created it, and I have applied the technology to
the automated creation of 3D avatars from a single photo of a person's face.
My version of the tech is available at www.3D-Avatar-Store.com.

The system works as follows: a person's head is laser scanned, and at the same
time dozens of photos are taken of that person from different angles, under
different lighting conditions, and with cameras of different quality. Then a
neural net is trained to associate each photo with the laser-scan data. After
a few thousand training subjects, anyone's single photo will generate a
reasonable likeness of them in 3D. Today, after over a decade of training and
tens of thousands of scans added to the training database, we get remarkably
good-quality 3D reconstructions from a single photo of anyone, any age, any
ethnicity.

There are limitations: the face needs to be visible, and it needs to be within
+/-30 degrees of facing the camera for the 3D reconstruction to be suitable
for facial recognition. And that is the point of the original application of
this technology: a facial recognition pre-processor. It corrects the face
angle and removes any facial expression, creating a likeness of the face in a
passport-style photo perfect for facial recognition. But, of course, I'm not
using it for that. I use it to auto-magically create 3D avatars for video game
studios.

Oh, and it's really, really fast. My current system is based on neural nets
two generations back, and they reconstruct from one photo in 0.9 seconds. The
latest generation does 144 reconstructions simultaneously from HD video feeds.
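The training setup described above (paired photos and laser scans, with a model learned to map one to the other) is at its core supervised regression. Here is a heavily simplified toy sketch with synthetic data and a linear least-squares fit standing in for the neural net; every dimension and variable name is invented for illustration and bears no relation to the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 64-dim "photo features" per image, 32-dim "3D scan
# coefficients" per subject. Real systems use far richer representations.
n_pairs, photo_dim, scan_dim = 500, 64, 32
true_map = rng.normal(size=(photo_dim, scan_dim))

# Synthetic training pairs: each photo is associated with a (noisy) scan.
photos = rng.normal(size=(n_pairs, photo_dim))
scans = photos @ true_map + 0.01 * rng.normal(size=(n_pairs, scan_dim))

# "Training": a least-squares fit from photo features to scan coefficients,
# playing the role of the neural net trained on photo/scan pairs.
learned_map, *_ = np.linalg.lstsq(photos, scans, rcond=None)

# "Reconstruction": a single new photo yields predicted 3D coefficients.
new_photo = rng.normal(size=photo_dim)
predicted_scan = new_photo @ learned_map
print(predicted_scan.shape)  # (32,)
```

The real pipeline obviously replaces the linear map with a trained neural net and the toy vectors with actual image features and laser-scan geometry, but the supervised photo-to-scan mapping is the same shape of problem.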

~~~
anonymousDan
What's the state of the art in terms of extending such systems to handle faces
that are partially obscured (e.g. wearing glasses)? References to further
resources in the area accepted and appreciated! I'm going to start work on a
face recognition system soon and I'm keen to learn more about their current
limitations.

~~~
bsenftner
It's just an issue of training. I'd expect a setup similar to the one I
describe could be used to associate a series of faces with and without
different things attached to them, and a neural net could be trained to
generate, for example, an image without eyeglasses from one with eyeglasses.
We already do that with facial expressions. I think the areas to improve are
extending recognition beyond the face (including ear characteristics to
extend recognition beyond +/-30 degrees), and although my lab tells me
analyzing gait is a dead end, I feel there might be something there that can
help. Lighting neutralization also needs work: illumination balancing without
removing important details, shadow removal, and white-balancing colored
illumination.

------
amirmc
I notice the article didn't spell it out as an acronym: BOSS, the _"Biometric
Optical Surveillance System"_.

------
outside1234
Did they really name a system such that its acronym is "BOSS"? :)

