
Facial recognition CEO: software is not ready for use by law enforcement - cpeterso
https://techcrunch.com/2018/06/25/facial-recognition-software-is-not-ready-for-use-by-law-enforcement/
======
jsty
As in many things, a blanket statement of 'not ready for use' isn't really the
most helpful way to look at the situation. There's a range of ways such
technology could be usefully deployed, even with current technological
problems, that might aid law enforcement officers in the field. It's a very
wide continuum from zero to "automatically locking people away for life based
on a facial match".

As the article says, there are huge concerns about how such tools might be
misused, whether legally or not. So the title might be more accurately stated
not as 'facial recognition software is not ready for law enforcement', but as
'law enforcement is not ready for facial recognition software'. I'm fairly
sure there are countries out there where citizens do not lie awake at night in
fear of their government, and where such technologies might be responsibly
deployed as a tool in law enforcement.

~~~
shanghaiaway
Regardless of how you try to rephrase it, the fact remains the same: the
technology doesn't work well enough.

~~~
mc32
Doesn’t some kind of facial recognition work well enough for Vegas casinos?

Instead of police being able to scan one face at a time as they walk their
beats looking for suspects, an automated system could scan everyone in view
and then alert that a suspect has been identified with xx% accuracy.

Every week on Nextdoor someone posts a video still of a vagrant or thief,
whether from a broken car window, a hit and run, a lurker, etc. Automation
would help ID the perps. We, as a society, might want to adjust the dials on
punishment, given the efficiency, but we shouldn’t give up the chance to
minimize these crimes when reasonable.

~~~
synctext
It is a bit more complex. There are few scientific studies on this topic. See
this one:
[http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf](http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf)

"We find that these datasets are overwhelmingly composed of lighter-skinned
subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial
analysis dataset which is balanced by gender and skin type. We evaluate 3
commercial gender classification systems using our dataset and show that
darker-skinned females are the most misclassified group (with error rates of
up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%."

~~~
mc32
That’s pretty interesting. Given those stats, why can’t it be selectively
deployed on light-skinned perps till the system is trained well enough on
other skin tones to achieve similar accuracies? And in the meantime catch
some baddies.

~~~
solotronics
Interesting, so given your implementation only targeting light skinned
"suspects" would using this be racist?

~~~
mc32
I don’t think so. It would target more skin tones as it became better at
identifying suspects accurately. The aim is to reduce crime regardless of who
commits it. We have a system which can help on one subset but not another. Why
should we ignore the potential only because we cannot deploy it against
everyone?

It’s like saying: since we can’t successfully prosecute big bankers, we’ll
also ignore smaller-fry bankers who are sloppier, because it’s not fair to
them.

In the end, everyone but the criminals benefits. Eventually, one hopes, the
system will be well enough trained that it can be deployed for all skin tones.

------
vinceguidry
I dunno. We already had the Orwellian scare when security cameras started
getting ubiquitous. We keep kicking that fear can down the road, but we don't
have to. We've already seen what security cameras are _really_ good for,
watching the watchers.

It doesn't make sense to surveil regular people, but it makes a heck of a lot
of sense to surveil public officials while they're doing their jobs. We went
from "the police are going to sit back and watch cameras of everybody and show
up at our doors to arrest us!" to "Damn cop department won't release his body
cam footage because they claim it was out of battery power!"

I think facial recognition is going to follow that same sort of acceptance
arc. Nobody except China is going to have the stones to build a bureaucracy
around deploying it and we're going to want it to solve a way more pernicious
problem once we find a killer app.

If you're really worried about government malfeasance, all you have to do is
to actually read some of the Wikileaks dumps and go looking for smoking guns.
Everybody else already did that and other than the NSA programs, nobody's
found anything really interesting.

~~~
snarf21
I agree with you for now. However, these things are going to advance and
improve. I think the biggest Orwellian issue is the targeting of dissidents.
The government can't watch everyone and process the data, but they can target
those who disagree with policy X or politician Y or company Z.

~~~
vinceguidry
There's nothing new about government targeting of dissidents. The worst case
scenario there played out in the fifties with McCarthy. Society survived.

~~~
jstanley
"Society survived" is a pretty low bar to set. Why not aim for "innocent
people don't face negative consequences for harmless acts"?

~~~
vinceguidry
The bar is getting ordinary people to be concerned about the possibility of
America turning into a dystopia. While yours is a laudable goal, it's one our
existing political-economic system is well able to ratchet closer to over
time.

------
MarkMMullin
We all know the systems are biased, because we feed them biased information;
Cathy O'Neil did a great job on this with "Weapons of Math Destruction". What
puzzles me is why he doesn't just provide the counterexample to scare the hell
out of everyone: if he inverted the racial distributions in the training
datasets, you'd have a system where the shoe was on the other foot, and both
an argument and fuel for removing bias from the training sets. The recognition
systems only know what they know; they don't really understand that they might
not know, they only know the confidence they have in knowing. Hence this is my
favorite happy-place example of the problem:
[https://youtu.be/UFVB5rnqjyY](https://youtu.be/UFVB5rnqjyY)

------
ptero
I agree in general with the spirit of the article, at least as I understood
it: facial recognition can empower oppression and lead to even more dubious
convictions; for example, it could erroneously supply additional rationale to
suspect or convict a person, reinforce biases, etc.

However, I do not get the "companies unite against selling X to government"
approach. It is, to me, both misguided (restricting gov't from buying
something private companies can buy will simply lead to relabeling or minor
redesigns of the same technology) and naive (to work, it needs a very broad
agreement which IMO will not get enough traction).

An approach that could work better is to inform, not restrict: make sure that
all imagery and videos used or considered for use by police are _public_
unless there is a short-term, limited exception. And make police wear cameras
whenever they wear uniforms, and make _those_ video streams public, too (maybe
with a few hours' lag in case there is a tactical need). My 2c.

~~~
hashkb
We can't restrict what they do with something once we sell it to them. Only
Congress can. So, we can refuse to sell it to them or make it for them.

~~~
TheSpiceIsLife
Won't _they_ just buy _it_ from _someone else_?

For whatever values of _they_ , _it_ , and _someone else_ suit the situation.

There is no coherent _we_ to do the _refusing_.

~~~
acct1771
The rest of "we" can choose not to do business with the part of "we" that
helps government subvert society.

------
extralego
> _We need movement from the top of every single company in this space to put
> a stop to these kinds of sales._

What are some historical examples in which coalitions of private-industry
executives, unprompted, refused government contracts in the name of the civil
rights of their fellow citizens?

~~~
MarkMMullin
Enhanced radiation weapons, aka neutron bombs: there was initial excitement
followed by a mass exodus. The govt held on for a while longer, but they move
slowly; the labs and contractors visibly got cold feet over this.

------
paulie_a
Would IR LEDs built into sunglasses be effective in combating this trend?
There was an article on Hackaday about a person who did that to his license
plate holder to deal with red light cameras.

~~~
workinthehead
Nah, you just need some cool makeup:
[https://cvdazzle.com/](https://cvdazzle.com/)

~~~
Tade0
This is the best explanation I've ever seen for why hairstyles in sci-fi
movies tend to be original to say the least.

------
monetus
It seems obvious that, regardless of the legality, it will be used in parallel
construction. Once these tools are made, they will be used.

[https://en.wikipedia.org/wiki/Parallel_construction](https://en.wikipedia.org/wiki/Parallel_construction)

------
sschueller
Yet Axon (formerly Taser International) is building systems to recognize
faces for law enforcement body cams. [1]

Especially disturbing is that these systems seem to have a much higher rate of
misidentifying minorities.

[1] [https://www.npr.org/2018/05/12/610632088/what-artificial-intelligence-can-do-for-local-cops?t=1530009904658](https://www.npr.org/2018/05/12/610632088/what-artificial-intelligence-can-do-for-local-cops?t=1530009904658)

------
kolbe
>Facial recognition technologies, used in the identification of suspects,
negatively affects people of color. To deny this fact would be a lie.

Since, instead of support for this statement, the CEO of this 'company'
decided that shaming his reader would be more effective, I'll give you his
support. It's an article that he wrote based on studies that show there are
significantly higher error rates in gender and race classification in some
algorithms, without ever showing why this even matters within the context of
how facial recognition algorithms are or would be used by law enforcement. Nor
did he show that the specific algorithms being proposed have these biases, and
not just some other ones. Nor did he show that race classification is even a
result that law enforcement runs.

[1] [https://www.kairos.com/blog/face-off-confronting-bias-in-face-recognition-ai](https://www.kairos.com/blog/face-off-confronting-bias-in-face-recognition-ai)

------
sriku
On a side note - I'm unable to square this fear of watchers with parents
happily singing to their kids - "he knows when you've been sleeping .. he
knows when you're awake .. he knows when you've been bad or good, so be good
for goodness sake"

------
bsenftner
Their FR is not ready for law enforcement because it is weak, poorly
designed, and overly expensive. They are schmucks trying to capitalize on
Amazon's name with generic, low-quality FR. Anyone who understands FR knows to
go to the NIST website and use the companies competing to be the best for
government contracts. Their algorithm stats are all tested and the results
published at NIST.

------
drpgq
Does Kairos actually produce any useful software or does it just exist so
Brackeen gets publicity?

------
wpdev_63
decidethefuture.org

------
rablo
Why are all of these articles written under the premise that this software
sends you automatically to jail? It’s just a first step in the vetting
process.

~~~
KenanSulayman
I think the even bigger problem here is that if face recognition software
malfunctions, those who have been identified by mistake are suddenly
subjected to an investigation that can pose a serious threat to their privacy.

~~~
rndgermandude
Not just "privacy". I would imagine some judges will issue search or even
arrest warrants based on the information ("it's science!"), which might mean
in the worst case the police roll up to your house, shoot your dog, scare
your children, take you into custody (even without an arrest warrant if
you're somehow deemed to be "resisting"), or even shoot you because they got
scared of how you look; you may lose your job because you cannot show up to
work while you're in jail; your neighbors will think you're a hardened
criminal; etc.

Even if the courts do not convict in the end, a lot of damage might already
have been done.

The pseudo-scientific hair analysis performed by the FBI showed the dangers
of "science" and "tech"[1]. People went to real prison because investigators,
judges, and juries overestimated the flaky results this sometimes outright
negligent pseudo-science produced. I imagine some people were shot during
arrests based on that evidence, died in prison, or committed suicide.

There was also the Phantom of Heilbronn here in Germany, where police looked
for a master criminal and serial killer for ages (2001 to 2009), but it turned
out the swabs they used to collect DNA at crime scenes had been contaminated
at the factory by a worker[2]. For years, none of the many people involved in
the investigations even considered questioning the DNA results.

So even if the science and tech is sound (which it really isn't in case of
facial recognition yet, if ever), wrong application, common mistakes, and
misunderstanding the results are real problems.

[1] [https://www.fbi.gov/news/pressrel/press-releases/fbi-testimony-on-microscopic-hair-analysis-contained-errors-in-at-least-90-percent-of-cases-in-ongoing-review](https://www.fbi.gov/news/pressrel/press-releases/fbi-testimony-on-microscopic-hair-analysis-contained-errors-in-at-least-90-percent-of-cases-in-ongoing-review)

[2]
[https://en.wikipedia.org/wiki/Phantom_of_Heilbronn](https://en.wikipedia.org/wiki/Phantom_of_Heilbronn)

~~~
stickfigure
There's a lot of assumption about future behavior here.

I think there are a couple of reasons to be hopeful about court systems taking
a more nuanced view of this technology: 1) it cannot be denied that it _is_
imprecise, and 2) it's pretty easy for laypeople to understand (at least in
principle) how it works. DNA evidence is effectively _magic_ by comparison.

I actually think the imprecision is an asset for this technology. I would much
rather this tool be 95% reliable than 99.99% reliable. The former inherently
requires law enforcement to work harder; the latter tempts "oh, the machine
said so, it must be right".

~~~
rndgermandude
People went to prison over expert testimony from FBI scientists about hair
analysis in the past (not just warrants, actual convictions).

Like this guy, who spent 23 years in prison because the FBI told the jury some
hair they found was his... while it was actually not even human, but dog hair.

>An FBI analyst testified that one of the hairs from the stocking mask linked
Tribble to the crime and “matched in all microscopic characteristics.”
>Tribble’s attorneys were successful in obtaining mitochondrial DNA testing on
the 13 hairs recovered from the stocking mask. None of the hairs—including the
alleged match—implicated Tribble or Wright. Further, the analysis revealed FBI
analysts’ errors, including mistakenly calling a dog hair human.
[https://www.innocenceproject.org/cases/santae-tribble/](https://www.innocenceproject.org/cases/santae-tribble/)

As for face recognition, I wonder if you're correct. I'd expect a lot of
people to think it's very precise. Well, at least the "my sibling can unlock
my iPhone with their face" debacle might have generated some press to combat
that misconception.

------
alexnewman
lol tell that to china

~~~
dang
Could you please stop posting unsubstantive comments to Hacker News?

------
Talyen42
"We lost the contract to Amazon"

------
dsfyu404ed
This is a data retention/sharing issue, not a technology issue. Nobody cares
if their face, license plate, or cell phone is seen and mapped to their
identity as long as the interaction and its metadata are soon forgotten. It's
the storage of this information that causes the problem.

When police can accurately determine who they're looking for based on
appearance, and can quickly determine whether the person they're bothering is
that person, they are more limited in how they can harass people in the
present. If no records are kept, big brother is more limited in his capacity
to harass people in the future.

Most people don't have any warrants out, haven't recently been recorded on a
security camera robbing a liquor store, etc. Being able to ID people reliably
without stopping them is the last thing law enforcement wants (let's ignore
the three-letter agencies for a minute here), because it makes it much harder
to stop people for being "suspicious" or "looking like a drug dealer".
License plates provide similar protection for cars: police can't just stop
every silver Camry because a silver Camry was once stolen.

License plates and cell phones map almost 1:1 to your identity. Nobody has a
problem with license plates or phones as long as they're not used to stalk
people at scale. It's the creation of a data set that can be used to stalk
people that is the problem. We need to stop creating these data sets in order
to prevent stalking at scale (which is the real issue here; even the article
says that).

Imagine a world where the speed limits were followed like normal laws (i.e.
they were reasonable enough that most people didn't have to go out of their
way to follow them). The police in that world would hate radar guns because
they're accurate (and can be audited for accuracy), and Big Brother would hate
them because they don't record metadata for each reading. Facial recognition
and ALPRs need to work like that. History has proven time and again that you
can't keep technology in the closet. Better it be used on our terms than
theirs.

We're going to have to confront the data retention and government/commercial
stalking/surveillance issue eventually. Getting angry over ALPRs, stingrays,
or facial recognition is just playing whack-a-mole. I'm still gonna play
whack-a-mole until we solve the big problem, though.

Edit: If people disagree I'd love to hear why. This isn't Reddit.

~~~
monetus
I imagine people are disagreeing with the way you describe it as a software
problem when the firmware in those devices will be unverifiable, batteries
will conveniently run out, and they surely won't be 100% tamper proof.

Just type in 'alexa google recording' and see the news of them being used.

Aside: Everyone I've described the securus scandal to has a problem with
phones.

------
bogomipz
I followed the linked article cited in this post, "Face Off: Confronting Bias
in Face Recognition AI", which states:

"Fortunately, the matter of algorithmic ethnic bias, or “the coded gaze” as
Buolamwini calls it, can be corrected with a lot of cooperative effort and
patience while the AI learns. If you think of machine learning in terms of
teaching a child, then consider that you cannot reasonably expect a child to
recognize something or someone it has never or seldom seen. Similarly, in the
case of algorithmic ethnic bias, the system can only be as diverse in its
recognition of ethnicities as the catalogue of photos on which it has been
trained."

This is extremely concerning, so I have some questions:

Are the training sets for companies selling facial recognition technology
considered proprietary, and therefore not verifiable as free from bias?

Is it not possible to develop a standard or criteria for verifying that a
company's training set has sufficient distribution to be free of racial bias?
Is this just not possible for some technical reason?
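In principle, a first-pass balance check over a labeled training set is straightforward to sketch. The following is only an illustration, not an actual auditing standard: the group labels, the toy counts, and the 10% tolerance are all invented, and real audits would need far more than raw proportions.

```python
from collections import Counter

def balance_report(labels, tolerance=0.10):
    """For each group in a labeled training set, report its share of the
    data and whether that share is within `tolerance` of a uniform share."""
    counts = Counter(labels)
    total = sum(counts.values())
    target = 1.0 / len(counts)  # perfectly uniform share per group
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, abs(share - target) <= tolerance)
    return report

# Toy demographic labels for a hypothetical training set of 100 photos
labels = (["darker_f"] * 10 + ["darker_m"] * 12 +
          ["lighter_f"] * 40 + ["lighter_m"] * 38)
report = balance_report(labels)
# Lighter-skinned groups are over-represented, darker-skinned groups
# under-represented, so every group fails the uniformity check here.
assert not report["darker_f"][1]
assert not report["lighter_f"][1]
```

A check like this only measures composition, of course; it says nothing about image quality, pose variety, or how the resulting model actually performs per group.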

------
bsenftner
There is not a single statement in that article that is not some type of
fear-laden claim that fails under observation. The "FR is biased against
people of color" issue was publicized a few years ago, and the industry
adjusted; most systems are no longer biased, and that bias was a mathematical
color space issue and nothing racial. It's an empty article cutting and
pasting fear statements from popular media, and simply Kairos getting their
name in the news.

~~~
hashkb
> that bias was a mathematical color space issue and nothing racial.

What is the difference? Bias towards a color? What are you saying?

~~~
bsenftner
The mathematical bias was the fact that dark skin has less illumination
range, and therefore less information with which to separate dark-skinned
faces. The bias was not such that a darker-skinned person would be confused
with a different-toned person; the system would have issues telling the
difference between people with the same darker skin tone and similar facial
shapes.
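A toy numeric sketch of the "less illumination range means less separating information" point; the patch values and range factors are made up and this is not any real pipeline, but it shows how quantized intensities in a narrow band collapse onto the same codes:

```python
import numpy as np

# Two hypothetical face patches that differ slightly in brightness.
face_a = np.array([0.50, 0.52, 0.54])
face_b = np.array([0.51, 0.53, 0.55])

def quantized_difference(a, b, dynamic_range):
    """Rescale the patches into the available illumination range,
    quantize to 8-bit codes, and sum the per-pixel differences."""
    qa = np.round(a * dynamic_range * 255)
    qb = np.round(b * dynamic_range * 255)
    return float(np.abs(qa - qb).sum())

wide = quantized_difference(face_a, face_b, 1.0)    # full range
narrow = quantized_difference(face_a, face_b, 0.1)  # compressed range
# With the compressed range, formerly distinct intensities land on the
# same 8-bit codes, so the two patches become harder to tell apart.
assert narrow < wide
```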

~~~
simion314
The bias will also have biased consequences: for one dark-skinned criminal
you search for, you will find more suspects with a much higher probability or
score than for a light-skinned person, which would mean more
arrests/interrogations.

I would also not blame the math but the implementation, i.e. the
coding/modeling of the problem.

~~~
bsenftner
which has been corrected.

------
symisc_devel
Technically speaking, face recognition has been largely a solved problem
since the early 2000s, despite a few remaining challenges such as
non-frontal, tiny, or partial faces in real-time detection. The most common
approach is to output the bounding box (i.e. rectangle) coordinates for each
face present, extract the facial landmark points (nose, pupils, mouth, etc.)
for the target face (the more landmarks you have, the more accurate the
result), perform some Euclidean distance calculation (a sort of hash), and
compare this to the existing face hashes (e.g. a police database). The best
score (smallest distance) is considered a potential match.
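To make that last comparison step concrete, here is a minimal sketch assuming landmark extraction has already produced fixed-length vectors. The function name, the `0.6` cutoff, and the toy vectors are illustrative, not from any particular library:

```python
import numpy as np

def best_match(probe, database, max_distance=0.6):
    """Return the closest enrolled identity for a probe face, or None.

    probe:        1-D array of flattened facial-landmark coordinates
    database:     dict mapping identity -> landmark vector of the same length
    max_distance: hypothetical cutoff; anything farther counts as "no match"
    """
    best_id, best_dist = None, float("inf")
    for identity, vec in database.items():
        # Euclidean distance between landmark vectors: smaller = more similar
        dist = float(np.linalg.norm(probe - np.asarray(vec)))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    if best_dist > max_distance:
        return None, best_dist  # nobody on file is close enough
    return best_id, best_dist
```

Modern systems compare learned embeddings rather than raw landmark coordinates, but the nearest-neighbor-with-threshold logic is the same.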

You can develop your own face recognition software, provided a good dataset
(e.g. CelebA, LFW, or the fed dataset if you have access to it), using our
embedded computer vision library[1] that we just released earlier this month,
or using the PixLab /facecompare HTTP endpoint
([https://pixlab.io/cmd?id=facecompare](https://pixlab.io/cmd?id=facecompare)).

[1]: [https://github.com/symisc/sod](https://github.com/symisc/sod)

