
AI is going to supercharge surveillance - devy
https://www.theverge.com/2018/1/23/16907238/artificial-intelligence-surveillance-cameras-security
======
3pt14159
People misunderstand the dangers of AI. The danger isn't (or isn't yet anyway)
that AI is going to develop will and agency and oppose us. The danger is that
AI is a force multiplier in whatever you do because it allows the intelligent
to automate formerly rote tasks.

For example, take self-driving cars. Formerly all you could do if you had a
remote execution exploit was smash running cars into ditches or oncoming
traffic. Bad, but not the end of the world. Now you can take 200k Teslas and
_very easily_ target every gas station or other soft target in the developed
world. (I'm actually concerned about this, see
[https://www.zachaysan.com/cars](https://www.zachaysan.com/cars) for an,
admittedly long, take on the matter.)

The same is true with AI everywhere. Don't read individual Twitter accounts,
scan them all, build a data model to target the influenceable influencers.
Iterate.

Don't try to get your honeypot to date the head of Goldman Sachs; use data
science to figure out which 1,000 25-year-olds have a good shot at heading it
up in 10 years.

The cyber game in international relations is about to get much more extreme.
Offence is so much easier than defence, and AI is turning every little trace
of exhaust data and every device into a tool of war or espionage.

~~~
1024core
Exactly. We're not going to get sentient, self-replicating, anti-human robots
with weapons anytime soon.

What we _will_ get is the equivalent of a rifle in the age of swords. Maybe a
machine gun, even.

I have an Android device. Google's face recognition is so good that it was
able to correctly identify people in photographs under very degraded
circumstances (occluded faces, sideways shots, etc.). It's only a matter of
time before this shit is used to track people in public places, allowing "law
enforcement" to track everyone. Were you in the Women's March last weekend?
Well, congratulations! You've won the "extra screening at the airport"
lottery.

~~~
ben_w
> We're not going to get sentient, self-replicating, anti-human robots with
> weapons anytime soon.

Anti-human is the easy bit, and that fact terrifies me when I let myself think
about it.

Self-replicating… well, eh. Do corporations that employ humans count?

Sentient. Hmm. We wouldn’t recognise sentient robots even if they built a neon
sign and marched en masse through Washington demanding their rights — if that
happened, we would tell ourselves it was a clever hack by pranksters or
$disliked_nation.

~~~
javajosh
I've always thought it would be trivial for a very strong AI to own a city,
its factories, etc. -- simply capture a nuke and hold all the people in a
pleasant hostage situation. _Pleasant_ because the AI would be a "benevolent
dictator" for some time, bending the economy to build better things for
itself, but without making conditions so bad that people revolt. In fact, an
AI of this sort would be amazingly good at administrative duties, and society
would probably be a lot better than it is now.

Perhaps even better would be if the AI rarely interceded in human politics,
and in fact worked to keep its involvement and control totally secret.

~~~
ravitation
It wouldn't need to capture a nuke, or "own" a city...

In our 21st century world, especially one that is ignorant of the existence of
strong AI, it's trivial to imagine a strong AI being able to "bend the economy
to build better things for itself" while remaining completely undetected...
This is essentially an AI-driven (instead of human-driven) spin on the
introduction to the book Life 3.0...

That economy would also be the global economy... Not just that of a city...

If you'd like, you could even imagine that occurring as we speak...

------
etiam
About time this gets more mainstream coverage. The risks with this use are
both far more imminent and potentially more severe than "AI" developing
godhood or putting everyone out of their jobs by doing it for them.

A few years back there was this report:
[http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom](http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom)
It mostly focused on Nick Bostrom's concerns about "Superintelligence", but to
me the far more interesting passage is this one:

 _The keynote speaker at the Royal Society was another Google employee:
Geoffrey Hinton, who for decades has been a central figure in developing deep
learning. As the conference wound down, I spotted him chatting with Bostrom in
the middle of a scrum of researchers. Hinton was saying that he did not expect
A.I. to be achieved for decades. “No sooner than 2070,” he said. “I am in the
camp that is hopeless.”

“In that you think it will not be a cause for good?” Bostrom asked.

“I think political systems will use it to terrorize people,” Hinton said.
Already, he believed, agencies like the N.S.A. were attempting to abuse
similar technology.

“Then why are you doing the research?” Bostrom asked.

“I could give you the usual arguments,” Hinton said. “But the truth is that
the prospect of discovery is too _sweet_.” He smiled awkwardly, the word
hanging in the air—an echo of Oppenheimer, who famously said of the bomb,
“When you see something that is technically sweet, you go ahead and do it, and
you argue about what to do about it only after you have had your technical
success.”

As the scientists retreated to tables set up for refreshments, I asked Hinton
if he believed an A.I. could be controlled. “That is like asking if a child
can control his parents,” he said. “It can happen with a baby and a
mother—there is biological hardwiring—but there is not a good track record of
less intelligent things controlling things of greater intelligence.” He looked
as if he might elaborate. Then a scientist called out, “Let’s all get
drinks!”_

~~~
hutzlibu
I think every technology is first and foremost a tool, and almost anything can
be abused.

And about nuclear bombs, well, you can argue that they prevented World War 3.
I also think of Project Orion (nuclear bombs as a way to accelerate a rocket
in deep space).

And the same, I believe, goes for AI.

It is just a problem of power balance. If you Americans don't trust your
agencies, then you have the power to control them. But as far as I know, most
ordinary people would rather have a dubious NSA that fights terrorists for
them than no NSA, or a weak one, and terrorists everywhere.

Here in Germany it's the same. So I would not focus on the technology but
rather on the people behind it, and whether you can trust them. And if not -
find new people / add transparency. To some extent this is happening in
Germany after a secret service apparently let Nazi terrorists murder
non-Germans (or was extremely incompetent in failing to notice) and hindered
the police investigation.

So now something is happening, but only because people are upset. But yes,
with the recent terror attacks, most still tend to view the agencies as a
necessary evil.

edit: and yes, terrorists are a problem, and to fight them efficiently,
police/secret services maybe can't play by the normal rules all the time.

But on the other hand, most terrorists justify their actions because they view
themselves as having been attacked and exploited first. To which there is some
truth. So it is a complex world. But I don't think it is doomed because of
AI ...

~~~
jacquesm
> And about nuclear bombs, well you can argue that they prevented World War 3.

Absent a control you can't even argue that.

And it might still happen.

~~~
hutzlibu
Sure I can. Reason: power balance.

And sure, WW3 and worse could still happen at any time. But the point is: the
more technology there is and the more powerful it is, the more disastrous the
result can be if things go wrong.

Even with simple, awesome things such as gasoline engines - a great benefit.
But now we have pollution and climate change (mainly) because of them.

And if we invent a quantum warp drive or whatever, the benefits/potential
disasters are even bigger. That's the nature of power. Technology is power. It
can be used for good or ill, and can be disastrous even with good intentions.
We as humanity "just" have to struggle to learn how to deal with that sudden
increase in power. And there are still humans living in caves, stone-age
style. If you suddenly give them the ability to blow things up big, it may
just be too much for them to handle ...

~~~
oldcynic
I'd argue that NATO was far more the reason we did not have WW3 than nuclear
capability and the power balance of MAD. Power balances tend to lead to arms
races.

After all, even with nuclear weapons and a power balance we still had Korea,
Vietnam, Cambodia, Iraq, Afghanistan, etc. Any one of those could have
escalated. A minor Venezuelan crisis nearly did, and a shooting in Sarajevo
escalated all the way up to WW1. Not to forget the role of the UK-Germany
naval arms race, of course.

~~~
jacquesm
And the EU. Historically the European mainland has been the theater for these
big wars, and the EU has in general helped to reduce tension between nations
that had been in a more or less continuous state of war for centuries in some
place or other.

With the new attacks on the EU and Russian funds flowing to EU skeptical
parties (to give them a nice name) there is a good chance we will see a
resurgence of war on the European mainland.

------
Puer
AI is further enabling oppression in China, which is already arguably the
world's largest surveillance state. For example, using AI and the government's
ID database, crosswalk cameras are now able to facially identify "chronic
jaywalkers." [1] The concerning part of this is the implication that these
profiles don't go away. In the status quo, watch lists already exist, but I'm
afraid that as AI technology becomes more accessible it will only further
increase abuse and discrimination against "potential offenders" identified by
the system.

[1] [https://www.wsj.com/articles/the-all-seeing-surveillance-state-feared-in-the-west-is-a-reality-in-china-1498493020](https://www.wsj.com/articles/the-all-seeing-surveillance-state-feared-in-the-west-is-a-reality-in-china-1498493020)

~~~
est
Chinese here.

AI makes social change in a really strange way.

The story everyone knows: China is an authoritarian state, and the gov't
controls every aspect of citizens' lives.

But something strange is starting to appear. You know, future CPC leaders, no
matter how powerful/rich their parents are, will grow up in a highly
supervised social environment. They are _forced_ to think and act more
transparently because everyone is in the _system_.

A random offensive act against the Party may put a citizen in jail. It also
means a minor speech flaw by a political figure is career suicide.

In past decades, the major problem was that the CPC had unlimited power: no
matter what bullshit they pulled, they always got away with it. Now, with
cameras and big-data shit, no matter what you want to cover up, you always
leave a digital trace behind. Someone, at some time, whether ordinary citizens
or your enemies in an opposing political faction, will find it and exploit it
against you.

In this "everyone's being watched" game, unless a dedicated, organized
backdoor exists, everyone will be forced to comply with some kind of "rule" or
equilibrium, which is far different from today's arbitrarily enforced laws in
China.

The gov't today uses keywords to monitor citizens' online activities, but I've
also seen a monitoring program that evaluates Party members' loyalty based on
cognitive science and machine learning.

Technology accelerates all of these.

I foresee China going through a radical, Soviet-style social paradigm shift,
but nothing like we've ever seen before.

~~~
jacquesm
The rich and the powerful somehow always find ways to be exempted from
oversight.

~~~
chongli
Until they don't. The term "rich and powerful" refers to a group with high
turnover, not an eternal ruling cabal.

------
reitanqild
Two scary things about this:

\- When you can't trust those who have access to it (including those who will
have access to it in the future). IMO this means everybody who has learned
anything from history should be scared.

\- When it becomes almost 100% reliable and some unlucky person gets convicted
because of a glitch, but no one cares because they trust the system.

~~~
beobab
Isn't this the plot of Minority Report? :)

~~~
tabletiptop
You can take the "precog" metaphor used in Minority Report and replace it with
what's actually happening: dragnet surveillance combined with predictive
modeling algorithms to assess each individual's threat level. This is
happening so openly that the DHS has run a public Kaggle competition to
improve its algos for body scanners at airports. Now extrapolate a very short
distance from there and think about what they're doing that's classified. Even
the most unimaginative of us can see what's happening here.

Over the past several years I've seen the automatic reflex response to this
concern go from "well, if you've got nothing to hide" to "well, I mean,
they're not watching me or you specifically, they don't have the manpower".
Well, you're right, but not for the reason you think. The fact is they don't
_need_ manpower any longer. The algos are watching you.

~~~
ataturk
It is amazing how, on the one hand, people can detest Nazis and Soviets,
saying things like "I can't believe they could have been so stupid as to go
along with that," and then, on the other hand, aid and abet the police state
in a Kaggle competition, of all things! I know it is naive of me to think that
my refusal to assist will prevent it, because someone will step up if for no
other reason than bragging rights.

A long time ago I made a decision about my career: I would not work for the
MIC. It was a tough choice at times, because so much of US industry is
involved with government one way or another, and even private companies have
governments as their major customers, so it is all intertwined. Not all
government is bad, though. But the likes of body scanners and all that -
that's your technocratic police state writ large.

I don't imagine the future is going to be very rosy. It is top down control
forever.

~~~
dsfyu404ed
Stalin, Hitler, Mao, etc. created secret police and the like to enforce their
continued rule. They are despised mostly for what their rule entailed, not for
having secret police.

We* are creating secret police, but in the absence of an oppressive government
for them to act on the whims of, nobody much cares.

It'll get worse before it gets better.

* We = most nations formerly on the "right" side of the iron curtain

------
ravitation
I find this title amusing...

It's like writing "computing is going to supercharge surveillance" in 1960 or
"the internet is going to supercharge surveillance" in 1990.

A technology that will "supercharge" everything will also supercharge this
specific thing...

I'm not saying it isn't valuable to point out the potential negatives of a
technology like this, I'm just saying it's a little amusing...

------
SubiculumCode
Automated mass surveillance has the potential to lead to even more selective,
discretionary enforcement of the law...with the obvious threat to life and
liberty.

~~~
LarryMade2
I was just thinking there should be a note that "Artificial Intelligence"
doesn't mean improved morality or reduced bias unless someone consciously
builds those into a system.

Likely it will be used to implement/enhance whatever goals the owner wishes;
likely uses will be waste reduction (or profit maximization) in areas like
manufacturing, transactions, and human labor.

~~~
SubiculumCode
I figure that noting something illegal would be the limit. Prosecuting would
be discretionary.

~~~
jackvalentine
The thought occurs to me: if you're a DA and you build a model of every
successful prosecution to decide which cases you should pursue and which you
should drop, you'd probably end up with a fairly racially biased model...
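
A toy illustration of how that happens (all data here is synthetic and the
function is a made-up stand-in, not any real system): a "model" fit to
historical prosecution outcomes simply learns, and then reproduces, whatever
disparity the record already contains.

```python
import random

def fit_pursue_rates(cases):
    """'Train' on historical outcomes: learn the per-group rate at which
    past cases were pursued. The model simply absorbs whatever bias the
    historical record contains."""
    rates = {}
    for group in {c["group"] for c in cases}:
        subset = [c for c in cases if c["group"] == group]
        rates[group] = sum(c["pursued"] for c in subset) / len(subset)
    return rates

random.seed(1)  # deterministic demo
# Hypothetical record in which group B was historically pursued twice as
# often as group A for comparable conduct.
history = (
    [{"group": "A", "pursued": random.random() < 0.3} for _ in range(500)]
    + [{"group": "B", "pursued": random.random() < 0.6} for _ in range(500)]
)
model = fit_pursue_rates(history)
# model["B"] comes out roughly double model["A"]: the bias is reproduced,
# now laundered through an "objective" model.
```

The same dynamic applies to any model trained on discretionary decisions
rather than ground truth.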

~~~
SubiculumCode
Mostly because some defendants could afford good lawyers, or the laws they
break are harder to prove.

------
InclinedPlane
No duh. You think things are bad now? Imagine what happens if you could
process and cross-reference every available photograph or piece of video.
Think face recognition is scary? Think about gait analysis: computers can
recognize you by biometrics such as how you walk. Think about a database that
catalogs your entire inventory of clothing. Imagine every piece of evidence
analyzed to a Sherlock level of detail.

Imagine a picture of a part of you that doesn't show your face, where you are
recognized by your smart watch, your arm hair, your moles, your clothing, who
else is in the picture (the friends you associate with), the location (your
friend's house where you hang out a lot, and a location near where you were
known to be recently based on other information: social media check-ins,
etc.), and so on.

The amount of information out there is already enormous, but most of it is
latent: not cataloged, not indexed, not cross-referenced. Once that process
starts it will accelerate rapidly. And who will be in control of such data?
Anyone with the resources to crunch it. If some corporation is doing the
analysis, then it might be up for sale to anyone as well.

------
hishnash
What I hope they do is run many instances of the system, each trained on a
slightly different set of training data; otherwise you just need to learn how
to trick one version (and that will be possible) and you can trick them all.

Human-managed systems under attack may be slow to respond, weak, and unable to
detect everything. But each is weak in very different ways and will respond in
different ways.
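
A minimal sketch of that instance-diversity idea, with made-up stand-in
"models" rather than any real detection system: each instance trains on a
different random slice of the same data, and predictions are combined by
majority vote, so an input tuned to fool one instance has to fool most of
them.

```python
import random

def train_stub(subset):
    """Stand-in for training one model instance: here each 'model' just
    learns a decision threshold (the mean) from its own data slice."""
    cut = sum(subset) / len(subset)
    return lambda x: x > cut

def ensemble_predict(models, x):
    """Majority vote across independently trained instances: an adversary
    must defeat most instances, not just one, to flip the decision."""
    votes = sum(m(x) for m in models)
    return votes > len(models) / 2

random.seed(0)                                   # deterministic demo
data = [random.random() for _ in range(1000)]    # synthetic scores in [0, 1]
models = [train_stub(random.sample(data, 100)) for _ in range(9)]
# Inputs far from every instance's threshold get a unanimous verdict;
# an attacker has to find a point that slips past a majority of cuts.
```

Real adversarial robustness is harder than this, of course, but the
vote-over-diverse-instances structure is the point.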

------
EGreg
Someone should do a harmless stunt to snap the world to attention about what
COULD be possible if an AI were out to do something malicious. That would set
the tone differently before it's too late.

~~~
tachyonbeam
I think it shouldn't be too hard to cross-reference anonymous accounts on HN
and Reddit based on writing style, for one thing. You could build a machine
that gathers all the data you can find about people online and e-mails said
people that data.
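
A toy sketch of the writing-style idea (the account text below is invented):
fingerprint each account with character trigram frequencies and compare
profiles with cosine similarity. Real stylometry uses much richer features,
but the mechanism is the same.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Character n-gram frequency profile of a text sample."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram profiles (0.0 .. 1.0)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical samples: two by the same (fictional) author, one by another.
hn_post = "I think it shouldn't be too hard to cross-reference these accounts."
reddit_post = "I think it shouldn't be hard at all to cross-reference accounts."
unrelated = "Quarterly revenue exceeded expectations across all segments."

same_author = cosine_similarity(ngram_profile(hn_post), ngram_profile(reddit_post))
diff_author = cosine_similarity(ngram_profile(hn_post), ngram_profile(unrelated))
# same_author comes out well above diff_author.
```

Scale that comparison over every account pair and the "anonymous" part of
anonymous accounts gets thin fast.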

~~~
sova
Allegedly the Dept. of State knows the identity(ies) of S. Nakamoto thanks to
a similar technique.

------
jhiska
It's a calming, reassuring article about something that will destroy people's
lives and societies in unpredictable ways and disrupt the balance of power for
the worse.

At least there's always the soothing "solution" of "regulating it". The people
who get to "regulate it" are exactly the ones who will wield its power.

After some millions of us have died the other people who survive will
eventually find a new equilibrium.

------
seanavery
I would be excited to see an unsupervised facial-recognition classifier with
an open API. To a certain extent Facebook Login, with a third of the world
already classified, accomplishes this... but a better (and more direct) API,
where you send video and it returns the top 5 probabilities from the softmax
output, would be cool. Each app or API user could set their own security
threshold (.9, .99, .999).
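
The proposed endpoint could look roughly like this sketch, where the function
name, labels, and logits are all made up for illustration (a real service
would run a recognition network to produce the logits):

```python
import math

def top5_matches(logits, labels, threshold=0.9):
    """Softmax over identity logits, then the top-5 (label, probability)
    pairs, plus whether the best match clears the caller's threshold."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]   # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    top5 = ranked[:5]
    return top5, top5[0][1] >= threshold

# Hypothetical gallery of six identities and one frame's raw scores.
labels = ["alice", "bob", "carol", "dave", "eve", "frank"]
logits = [9.0, 4.0, 3.5, 2.0, 1.0, 0.5]
top5, confident = top5_matches(logits, labels, threshold=0.9)
```

Letting each caller pick `threshold` is exactly the per-app security knob
described above: a .999 cutoff trades recall for far fewer false matches.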

------
rasengan0
Remember that meme, "Don't tase me, bro!" ?

Now you can't even see who is targeting you.

Golly Mabel, can AI be used to supercharge finding ...
[https://news.ycombinator.com/item?id=16255850](https://news.ycombinator.com/item?id=16255850)
?

All this neutral technology open to the highest bidder.

Augmenting the power of the few against the many.

------
tboyd47
No AI is more dangerous to society than mass ethnically motivated hatred. The
cruelty of the human population is racing ahead much faster than AI.

~~~
NoGravitas
How about AI used as a force multiplier for mass ethnically motivated hatred?
Say, an AR app with face recognition connected to social media and government
records to identify members of $HATED_ETHNIC_GROUP?

~~~
tboyd47
It doesn't require deep learning or AR to know what race a person is. What's
stopping militaries and police around the world from carrying out large-scale
ethnic cleansing already? We act like we (techies) are the ones who enable
states to carry out these atrocities by inventing gadgets, but they really
don't need our gadgets. If they don't have drones and AI, they can do the same
thing with machetes and walkie-talkies.

