
Police accused of deploying facial recognition 'by stealth' in London - pmoriarty
https://www.independent.co.uk/news/uk/crime/facial-recognition-uk-police-london-trial-data-human-rights-legal-action-met-a8466876.html
======
gaius
Facial recognition tech can spot you in a crowd if you take part in a demo
against the government

Self-driving car tech can simply refuse to take you there

~~~
whatsstolat
Stick on an anonymous mask and walk?

~~~
homero
Masks are sometimes illegal precisely because of that.

~~~
hjek
They're not illegal in the UK.

~~~
M2Ys4U
Not by default.

But police can impose an order - for a defined area for a limited amount of
time - under Section 60AA of the Criminal Justice and Public Order Act 1994
allowing officers to demand that one removes "any item which the constable
reasonably believes that person is wearing wholly or mainly for the purpose of
concealing his identity".

~~~
hjek
Interesting. Never seen it in action yet. Sounds like there's some wiggle
room, e.g. for carnival masks worn for entertainment purposes.

------
laichzeit0
Combine this with the recent advances in deep-learning-based lip reading and
you'll really see how fucked we all are.

------
tribby
Ignoring issues related to scale, I wonder how the tech's accuracy compares to
London's team of human "super-recognizers."[1]

1\. [https://www.newyorker.com/magazine/2016/08/22/londons-super-recognizer-police-force](https://www.newyorker.com/magazine/2016/08/22/londons-super-recognizer-police-force)

~~~
drasticmeasures
>Ignoring scale, I wonder how the tech's accuracy compares to London's team of
human "super-recognizers."

You can't ignore scale when it is the game-changer of automated facial
recognition.

The tech compares very poorly to super-recognizers, with an absurdly high
number of false positives, putting innocent people under aggressive suspicion
that they won't be able to know about or dispute.

~~~
AlphaSite
False positives are fine. It's basically a Bloom filter. False negatives are
very bad, but false positives are fine.
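
The Bloom filter analogy is worth spelling out: a Bloom filter is a set sketch that can wrongly say "maybe present" (false positive) but can never wrongly say "definitely absent" (no false negatives). A toy Python sketch of that property, nothing to do with the Met's actual system:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: membership tests can return false
    positives, but never false negatives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # True  -> "maybe present" (could be a false positive)
        # False -> "definitely absent" (never a false negative)
        return all(self.bits & (1 << pos) for pos in self._positions(item))

watchlist = BloomFilter()
watchlist.add("suspect-123")
print(watchlist.might_contain("suspect-123"))  # always True: no false negatives
```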

~~~
nmstoker
As the other respondent says, it depends on your view of accidentally
involving the innocent in police activity.

In an ideal world, someone wrongly identified would just be questioned, the
error would be spotted, and there'd be no physical impact, just a brief
disruption. That is the approach which genuinely could be beneficial to
society.

However, I am somewhat concerned that many police officers won't keep the fact
that it's fairly likely to be wrong at the forefront of their minds. Combine
that with an eagerness among some police to start applying "justice"
immediately (rough handcuffing, pushing to the ground, etc.), at best treating
people like hospital trolleys and at worst inflicting injury, and it's a
recipe for escalation and injustice in scenarios involving completely innocent
people (you could argue it shouldn't happen to those who turn out to be guilty
either, but that's a distraction!)

~~~
simsla
It just needs to be framed as a search problem (many results), rather than
recognition (one result).

E.g. "Here are the 200 people the system thinks kinda look like that picture
you gave me. The one you're looking for might be in here. Then again, they
might not. You take it from here. Happy detectiving."
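
The framing above is essentially nearest-neighbour search over face embeddings: return a ranked shortlist with scores rather than a single verdict. The IDs and tiny 3-dimensional "embeddings" below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k_candidates(query, gallery, k=200):
    """Return the k gallery entries most similar to the query,
    as (person_id, score) pairs: a ranked shortlist for a human
    reviewer, not a single 'match'."""
    scored = [(pid, cosine_similarity(query, emb))
              for pid, emb in gallery.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]

# Made-up embeddings standing in for real face vectors.
gallery = {
    "person_a": [0.9, 0.1, 0.0],
    "person_b": [0.1, 0.9, 0.0],
    "person_c": [0.8, 0.2, 0.1],
}
shortlist = top_k_candidates([1.0, 0.0, 0.0], gallery, k=2)
print(shortlist)  # person_a and person_c, ranked by similarity
```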

~~~
AlphaSite
This is a more direct way of phrasing what I was getting at. Thanks.

------
alex_hitchins
How is this different to lots of coppers actively looking for suspects? We
have so much intrusive CCTV here that I can't quite get the complaint. Was it
just the introduction with lip service to community engagement?

~~~
faceplanted
The difference is automation, and the issues that come with that.

If you look like someone the system is looking for, there's nothing you can do
to whitelist yourself other than wear something on your face: you will be
flagged every single time until a human looks at it, and if they're stopping
people, that means being stopped every time.

If the system has the wrong face in the database, bad luck.

If someone finds a way of preventing detection, say make-up or big glasses,
the system doesn't work any more. As far as I know, even the best facial
recognition systems that rely only on ordinary cameras rather than depth
cameras lose significant accuracy with either of those.

There are a _lot_ of people in London; the false positive and false negative
rates aren't usually published, and anything above zero means those failures
happen to an undisclosed number of people.
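
To see why unpublished error rates matter at London scale, here is some back-of-the-envelope base-rate arithmetic. All four numbers are assumptions for illustration, not published Met figures:

```python
# All four numbers below are assumptions for illustration,
# not published Met Police figures.
faces_scanned = 1_000_000     # faces scanned in some period
watchlist_size = 100          # genuine targets among them
false_positive_rate = 0.001   # 0.1% of innocents wrongly flagged
true_positive_rate = 0.9      # 90% of genuine targets flagged

true_alerts = watchlist_size * true_positive_rate
false_alerts = (faces_scanned - watchlist_size) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"{false_alerts:.0f} false alerts vs {true_alerts:.0f} real ones")
print(f"share of alerts that are real: {precision:.1%}")  # roughly 8%
```

Even with a seemingly tiny 0.1% false-positive rate, roughly eleven out of every twelve alerts point at an innocent person, because innocents vastly outnumber watchlist members.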

Did you ever hear the story of the guy with a very common name ending up on
the no-fly list? For a long time (and I think still now) the no-fly list was
literally just a list of names, meaning that if someone with a common name
goes on the list, everyone with that name gets stopped at the airport while
the database is checked to see if they're the same person. Now think what
happens when someone with a generic-looking face ends up in the police
database.

~~~
drusepth
>If you look like someone the system is looking for, there's nothing you can
do to whitelist yourself other than wear something on your face, you will be
flagged, every single time until a human looks at it, and if they're stopping
people, that means being stopped, every time.

Wouldn't such a system (ideally) learn the differences between [who it
expected] and [who it found] and attempt to improve its accuracy to prevent
false positives in the future? I don't know if this is too basic, but I'd
assume training with a set of similar-but-not-an-exact-match data would be
vital for accuracy at scale.
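
Something like this does exist in face recognition training as "hard-negative mining" with a triplet loss (the FaceNet-style approach): the model is deliberately trained on look-alikes that are not the target. A toy sketch with made-up 2-D embeddings, not any deployed system's pipeline:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor embedding towards a photo of the same person
    (positive) and push it away from a look-alike (negative) until
    they differ by at least `margin`."""
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

def hardest_negative(anchor, lookalikes):
    """Pick the look-alike closest to the anchor: the 'hard negative'
    the model currently confuses most, which teaches it the most."""
    return min(lookalikes, key=lambda n: euclidean(anchor, n))

# Made-up 2-D embeddings: anchor/positive are the same person,
# lookalikes are similar-but-not-a-match faces.
anchor, positive = [0.0, 0.0], [0.1, 0.0]
lookalikes = [[1.0, 0.0], [0.2, 0.0]]
hard = hardest_negative(anchor, lookalikes)
print(triplet_loss(anchor, positive, hard))  # positive loss: still work to do
```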

~~~
ubernostrum
I read /r/legaladvice over on reddit.

At least a couple times a month there's someone who has had repeated issues
with law enforcement banging on their door, sometimes even with guns drawn,
because someone with a similar name had a warrant out for their arrest, or a
criminal background of some type, or someone the police wanted used to live at
their address years ago. And there seems to be very little an average person
can do to get "the system" to correct itself; even after multiple instances,
the police keep coming back again and again and again and again because "the
system said so". Massively scaled automated facial recognition will multiply
that many times over, especially when officers are trained to just trust "the
system" when "the system" says you're a criminal.

~~~
ddeck
Yep. One only needs to look at the recent story _The Machine Fired Me_ [1] to
see the potential impact of such system errors once things become connected
and automated.

That poor guy was effectively fired despite the fact that every human involved
agreed that he shouldn't have been and should still be employed.

[1]
[https://news.ycombinator.com/item?id=17350645](https://news.ycombinator.com/item?id=17350645)

------
secfirstmd
IMSI catchers are also widely used by police.

[https://news.vice.com/article/vice-news-investigation-finds-signs-of-secret-phone-surveillance-across-london](https://news.vice.com/article/vice-news-investigation-finds-signs-of-secret-phone-surveillance-across-london)

And none of this surveillance compares to the extent of surveillance in
Northern Ireland.

------
drasticmeasures
What are you going to do about it?

~~~
dosy
I think for people who live in a place / do a job where a state
harasses/meddles instead of protects them, knowing how it works lets them
protect themselves and even meddle back.

I think this is why activists / journalists / spies in some areas try to study
the operation and practice tradecraft / opsec / CS.

Info asymmetry is power, isn't it?

~~~
wpdev_63
Massachusetts is such a place.

------
justsomedude43
What exactly is the use of this facial recognition? We all know from
experience that British police couldn't catch a cold, let alone a criminal,
with all those tens of thousands of 240p cameras, of which maybe 20% actually
work. What is facial recognition going to solve if they can't even see the
face on the camera?

------
0xBA5ED
Perhaps the time has come to invest in Groucho glasses.

------
dosy
isn't using it by stealth sort of the point?

~~~
b_b
The article says that the police had pledged to notify people of its use in
the area with leaflets and such, but failed to do so even when deploying the
technology, which is why it's being described as deployment by stealth.

~~~
dosy
they had posters.

> Lots of people didn’t seem to be noticing the posters at all… from a basic
> data protection point of view they’re not informing people well about what
> data they’re collecting or what is going on.

i don't think data protection applies to state surveillance. if people knew
what data the state collected about them, it would defeat the point.

~~~
faceplanted
> if people knew what data the state collected about them, it would defeat the
> point.

No, it wouldn't. The point of keeping data is to be able to detect unusual
activity and cross reference information. Not to spy. The people are the
country and they should be able to make an informed decision on what they want
the government to be tracking about people.

~~~
drusepth
>No, it wouldn't. The point of keeping data is to be able to detect unusual
activity and cross reference information. Not to spy. The people are the
country and they should be able to make an informed decision on what they want
the government to be tracking about people.

This ideal mindset breaks down for everyone other than "the good guys". The
point of keeping what is tracked a secret is to prevent the-actual-people-
you're-looking-for from just occluding that particular information. They can't
really "opt in" to the kind of surveillance that is necessary to find
predictive patterns in their behavior, because they (obviously) won't, and
knowing what is tracked just tells them what to spend the extra effort hiding.

~~~
dosy
is it weird that i think there's no such thing as privacy between the state
and individuals, but that privacy between individuals probably remains?

i accept this model of reality and actually think it's sensible. personally i
have no issue with that.

