
Don’t Regulate Facial Recognition. Ban It - dredmorbius
https://www.buzzfeednews.com/article/evangreer/dont-regulate-facial-recognition-ban-it
======
tehjoker
I'm trying to decide what makes facial recognition more dangerous than the
tracking beacon we carry in our pockets that is famously being used by every
bad actor on the planet to perform population surveillance for fun and profit.

Just riffing, I think the core differences are probably:

1) Your phone has _some_ security, so only your telecom or manufacturer can
spy on you easily and consistently if you take some precautions.

2) You can leave your phone at home, switch it off, or put it in airplane
mode.

3) There are wiretapping laws already on the books that probably make it
difficult to use certain evidence in court.

Whereas with facial recognition:

1) The recognizer is controlled by random property managers, both public and
private. This makes it easy for bad actors to create systems of surveillance
in which they share data with each other.

2) You can change your phone, but you can't change your face except in bad
movies.

3) Fewer laws on the books, which makes abusing this data easier.

4) It's only semi-accurate, which is a plus, because a random detection gives
you probable cause to fuck with people so long as you claim high accuracy on a
carefully concocted test sample. The semi-random terror is a feature, not a
bug.

5) You can find people who are off the grid, whom the community is hiding from
the state or private security (undocumented immigrants, leftist activists, etc.).

So much potential here.
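
Point 4 is really just the base-rate fallacy at population scale. A quick
back-of-envelope (all numbers here are illustrative assumptions, not measured
figures for any real system):

```python
# Base-rate sketch: why a facial recognition system that looks "99% accurate"
# on a test sample still mostly flags innocent people when it scans everyone.
# All numbers are made up for illustration.

def false_match_share(population, targets, tpr, fpr):
    """Fraction of all matches that are false, given a small watchlist."""
    true_matches = targets * tpr                   # wanted people correctly flagged
    false_matches = (population - targets) * fpr   # innocents wrongly flagged
    return false_matches / (true_matches + false_matches)

# A city of 1,000,000 people, a watchlist of 100, a 99% true-positive rate,
# and a 1% false-positive rate:
share = false_match_share(1_000_000, 100, 0.99, 0.01)
print(f"{share:.1%} of matches are false")  # prints "99.0% of matches are false"
```

So even a system that aces its test sample generates matches that are
overwhelmingly innocent people, which is exactly the "probable cause" lever
described above.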

~~~
cf498
I think the comparison with smartphones misses the point.

>You can leave your phone at home, switch it off, or put it in airplane mode.

That makes the phone something entirely different. The core characteristics of
facial recognition are:

1) No need for a cooperative, consenting subject

2) Not alerting the subject being identified and no protection against being
identified without noticing

3) Unchangeable identification characteristic

We already have a reliable, easy-to-use system with characteristics 1 and 3:
fingerprints. And I can see how there might be a need for such systems at
specific locations, for example at a police station or a border checkpoint.

Then we have facial recognition, which is a lot less accurate and whose only
additional benefit is point 2. Point 2, however, has no positive use cases;
it's a purely totalitarian instrument, and its appeal lies in its scalability:
it can be used not only in specific places but everywhere. You can't even make
the argument that makes nuclear weapons a worthwhile technology to have,
namely possible use in a war against a foreign aggressor. Facial recognition
is only beneficial for keeping a population suppressed.

~~~
tehjoker
Good analysis. I hadn't thought of the lack of consent angle (though
information leakage / hacking from phones renders them able to do similar
things).

------
RidingPegasus
BuzzFeed News does OK sometimes, but I'd have to disagree with this. Even as a
staunch privacy advocate, I think facial recognition could do wonderful things
to benefit society. Isn't the real problem overzealous governments and
wide-ranging police powers lorded over innocent citizens?

Maybe there really needs to be a discussion about how intrusive we are going
to allow them to be and the dangers that poses. But it's a silly knee-jerk
response to say "we should only allow human beings to stare at CCTV cameras,
not an algorithm."

The cat's out of the bag. The technology isn't the problem; tech is amoral.
It's the people using it we need to solve.

~~~
smt88
> _isn't the real problem overzealous governments and wide-ranging police /
> intelligence agency powers lorded over innocent citizens?_

Yes, they're the real problem. That doesn't mean it's futile to take away a
powerful tool for them to use.

> _tech is amoral_

Tech is not amoral. It doesn't exist in a vacuum -- it has a context of who
created it and what human actions it facilitates.

For example, ransomware is amoral. Sure, the tools used to build it are
neutral, but that's true of anything if you zoom out far enough (e.g. steel is
amoral, but CIA drones aren't).

> _it's the people using it we need to solve_

You can't "solve" people. There will always be selfish, stupid, and
shortsighted people in the world. All you can do is make sure that they don't
have concentrated, unilateral power over others.

Technology gives people that kind of power, and we can at least make sure that
law enforcement in liberal countries doesn't misuse it.

~~~
dredmorbius
ITYM ransomware is _immoral_?

~~~
smt88
Yes, I did. Too late to edit now. Thanks for catching that typo.

------
jammygit
Two malls in the province where I live got in trouble recently for secretly
deploying facial recognition.

[https://globalnews.ca/news/4355444/chinook-mall-calgary-faci...](https://globalnews.ca/news/4355444/chinook-mall-calgary-facial-recognition-technology/)

Edit: consider supporting the Canadian civil liberties association! They speak
out about these sorts of things when others don’t

------
zPwoo
I do think that facial recognition has its place in society; after all, the
cat is already out of the bag. But I believe it should be reserved ONLY for
border integrity and very high-level national security incidents. For example,
I do not have any issue with facial recognition being implemented at all
points of entry, with its data being purged after a randomized interval with a
set, undisclosed minimum time frame. It is entirely within a country's rights
to know who is entering and leaving it, in my opinion. This would strike a
middle ground between security and privacy, and it's not like countries don't
already have this information.

Private & for-profit use in any capacity should be banned.

Basically, the technology isn't bad. Humans are bad. So we should close off
any and all avenues of abuse by a human agent. An AI system should be in place
to create, access, store, and retrieve the data with no human middleman. A
request to retrieve the data should follow the same procedures as a warrant,
but should require a high-level, non-partisan court order and should only
return whether the subject was recognized, plus the specific frames of
footage. The procedure should have sufficient checks and balances that no one
can arbitrarily access this data. Every. Single. Request should be audited by
multiple parties before being processed.

Governments that abuse this technology should be stopped, but expecting that
to happen is naive thinking; it has definitely already begun in China, for
example. All we can do is keep it out of the hands of private corporations.

------
vinay_ys
I doubt HN is the best place to have discussions about the design of
societal/civil norms, culture, and laws. Surely there are people more expert
than us who have well-researched, historically informed opinions on human
behavior and the intricacies of changing the balance between privacy and
centralized power.

One thing is for sure: all of these technological advances are creating
asymmetries in information, and hence in the balance of power. That cannot be
without serious ramifications for society.

~~~
dredmorbius
It's a less perfect forum than I'd hoped (original submitter here, BTW), and
it often fails to meet my expectations on important tech- and startup-adjacent
questions. But it remains better than most online fora, and the questions
_should_ at least be raised or presented (if not adequately discussed or
answered) here.

There's also the fact that this issue and its regulation or prohibition may be
a concern to numerous startups, existing tech companies, employees, investors,
etc., among HN's readership and community.

This particular submission is doing relatively well, so far. Toes crossed.

------
bubblewrap
Some more specific dangers other than "the rich will get richer" would have
been useful to mention.

As with most AI scares, people forget that AIs only replace the even more
faulty human brain. Bias in an AI can be debugged and worked around; bias in
human brains is much harder. You get no insight into why your bank denies you
credit. Is it because of your skin color? Because they didn't like your nose?

As for the scary scenario of an AI judging people's attitudes and ruling them
criminals just because they had a bad morning: the scenario of a human mob
deciding somebody is guilty is much scarier, and it happens on a regular
basis.

It would of course be bad to leave the judging to an AI alone, but the whole
justice system would still be in place; that is unrelated to AI. The
surveillance camera is not calling a kill squad down on an unsuspecting grumpy
person on public transport. It could merely draw a human's attention to the
scene, and the human could then judge what action is needed.

The example of the illegal immigrants also seems weird. Isn't it a good thing
to be able to expose illegal behavior? I sympathize with the plight of
immigrants, but rather than tolerating illegal behavior, it seems better to
improve the laws so that good, desirable behavior is legal.

------
thunderbong
I had added this comment just yesterday [0]:

>> All I'm wondering is how this would play out in a dystopian future where
>> I'm recognised by everyone wherever I go.
>>
>> And in some cases they might even have additional information, maybe in a
>> probabilistic fashion. Like what kind of clothes I prefer or the kinds of
>> food I like. And that would be the good version.
>>
>> The worse scenario would be that some malicious actor puts all this
>> together to play on others' fears.
>>
>> The worst case, of course, being big brother!

Thinking about this again, though, I'm wondering whether this kind of
transparency would actually start favouring common citizens. Imagine being
able to recognise every bad (and good) cop, every corrupt politician, every
criminal wherever they go. Would the social pressure against committing
immoral or unethical acts increase many-fold?

I wonder...

[0]:
[https://news.ycombinator.com/item?id=20470887](https://news.ycombinator.com/item?id=20470887)

~~~
pintxo
The ones in power will always find ways to not be subjected to the same rules
as everyone else. Proof: see human history for the last ~2000 years.

------
solveit
Does anyone believe that a ban would actually be effective? A talented
teenager can already train a facial recognition model. In a few years, you'll
be able to drop the "talented".
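
To make concrete how little the core of it is: once a pretrained network has
turned faces into embedding vectors, identification is just nearest-neighbor
search. A minimal sketch (the names, dimensions, and embedding values below
are made up for illustration; real embeddings come from a neural network and
have 128+ dimensions):

```python
# Minimal sketch of the matching step at the core of face recognition:
# nearest-neighbor search over embedding vectors. Real systems obtain the
# embeddings from a pretrained network; these vectors are invented.
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the gallery name whose embedding is closest to the probe,
    or None if nothing falls within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, emb in gallery.items():
        dist = np.linalg.norm(probe - emb)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Hypothetical enrolled faces:
gallery = {
    "alice": np.array([0.1, 0.9, 0.3, 0.5]),
    "bob":   np.array([0.8, 0.2, 0.7, 0.1]),
}
probe = np.array([0.12, 0.88, 0.31, 0.52])  # a new photo of "alice"
print(identify(probe, gallery))  # prints "alice"
```

The hard part (training the embedding network) is already done and freely
downloadable, which is the point: the ban has to target deployment and
databases, not the algorithm.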

~~~
dredmorbius
Effective limitations on application, storage, transfer, admissibility (in
criminal or civil courts), as well as substantial enforcement rights and
noncompliance penalties, could reduce the risks to noise levels.

The major utility of such technologies comes from centralised and
intercorrelated records, sold for business use. Disrupt aggregation, transfer,
correlation, and the business model, and you've effectively killed much of the
basis for establishing such tracking.

Ban FR-enabled monitoring cameras and surveillance systems (most deployments
are not DIY), again with bars on compiled databases (image collection does
little without matching), and the effective commercial basis is likewise
killed.

Specific targeted LEO or national-intelligence applications and deployments
may persist, but the former would be of little use if barred by courts'
evidentiary rules.

National intelligence use without commercial databases, and with bars on,
e.g., sharing of state- and local-level identity documents or photos, again
becomes much less useful for mass control and instead reduces to fairly
targeted investigation.

------
dijksterhuis
This is going to be cryptowars all over again isn’t it...

~~~
tehjoker
For what it's worth, the power dynamics might be different here. During the
cryptowars, we were trying to give people the power to hide from the state
while the state already had the power to hide from the people. With this technology,
while fun and gimmicky examples (like phone unlock, AR) come to mind for small
scale uses, only dark and dangerous uses immediately come to mind which
benefit the ruling class.

~~~
solveit
> With this technology, while fun and gimmicky examples (like phone unlock,
> AR) come to mind for small scale uses, only dark and dangerous uses
> immediately come to mind which benefit the ruling class.

That may say more about you than about the technology. You could use it to
find the lost kid in a crowded stadium. Or find lost people with dementia
anywhere.

~~~
zPwoo
And you are being incredibly optimistic. No one is arguing that facial
recognition can't be beneficial. We are discussing whether those pros outweigh
the cons, and honestly, given most governments' track records... I cannot say
with a shred of confidence that this technology will not be abused, both
privately and publicly, without heavy, thoughtful, and tech-savvy regulation.

~~~
solveit
I will say with full confidence that this technology most certainly will be
abused, regardless of whether it is regulated, and that the most flagrant
abuses will be committed by organizations that are practically above the law.
No regulation will stop the Chinese government, and there is not and never
will be the political will to push through regulations powerful enough to
constrain the NSA.

However, the regulations will be powerful enough to stop any beneficial uses
of facial recognition, because, say, the CDC will follow the regulations to
the letter rather than spend the GDP of a South American nation fighting them
tooth and claw.

------
qserasera
Facial recognition is probably one of the hardest metrics with which to track someone.

~~~
dredmorbius
Often it's the impression, the threat, or the procedurally presumed validity
of effectiveness that matters far more than actuality.

The list of debunked methods used in government regulation, court and
criminal evidence, or business practices is long. Simply posting "this area
under visual surveillance" changes behaviours, and animal studies show
dramatic differences in behaviour and in detectable stress indicators under
conditions of surveillance or predator presence. Even apex predators react
strongly negatively to the presence of human voices:

[https://www.theatlantic.com/science/archive/2019/07/humans-p...](https://www.theatlantic.com/science/archive/2019/07/humans-predators-mountain-lions-landscape-of-fear/594187/)

Vigilance stress is almost certainly deeply innate. Humans are just as much
affected:

[https://www.healthline.com/health/hypervigilance](https://www.healthline.com/health/hypervigilance)

[https://www.healthcentral.com/article/hypervigilance-in-anxi...](https://www.healthcentral.com/article/hypervigilance-in-anxiety)

------
rocky1138
Anyone ever play Orwell?

[http://fellowtraveller.games/games/orwell/](http://fellowtraveller.games/games/orwell/)

------
quotemstr
It is impossible to ban an idea or an algorithm.

~~~
vangelis
It's impossible to ban an idea, but it's possible to prevent people from
implementing and carrying out that idea. There's a difference here.

~~~
quotemstr
Is it? What are you going to do? Burn the machine learning books? Good luck.

~~~
forgottenpass
Murder is illegal, but that doesn't physically stop you from killing anyone
either.

Banning facial recognition wouldn't prevent you from doing it; it would
prevent anyone who doesn't run a crime syndicate from using or selling it.

