
As Artificial Intelligence Evolves, So Does Its Criminal Potential - tysone
http://www.nytimes.com/2016/10/24/technology/artificial-intelligence-evolves-with-its-criminal-potential.html
======
hellofunk
You can replace "Artificial Intelligence" in this title with nearly anything, and
it will remain true.

~~~
skewart
True. It's not a very good title.

"As drones evolve, so does their criminal potential." Yup. "As electric cars
evolve, so does their criminal potential." Okay, uh, sure, I guess so. "As
turtles evolve, so does their criminal potential." Wait, what?

~~~
placebo
When turtles evolve they become ninja turtles, and if they go bad, well... do
I really need to paint you a picture?

Seriously though, the level of evil that can be perpetrated as technology
evolves is scary, and I sometimes ask myself whether humanity's wisdom will
manage to catch up to its intelligence before it destroys itself in very
creative ways... Though I must admit that criminal AI feels much less
threatening to me than criminal biotechnology.

~~~
TeMPOraL
Criminal AI wielding biotechnology?

Hack into some database to steal bioweapon designs (or invent one by itself),
get a few chemical companies to assemble components, bribe some poor schmuck
into mixing the vials together, boom.

TBH though, what I fear is individuals. Humans as a group often behave in
batshit insane ways, but most of the time they're a peaceful and kind
species. But by the random lottery of genes and environment, you always have
crazy people. What technology does is magnify the power a single person can
wield, and thus the damage they can do. Self-replicating stuff is a particularly
nasty power amplifier here.

------
geromek
AI is the biggest hype of the moment. It reminds me of the film "Eagle Eye"
(2008), where a kind of Zeroth-law-empowered AI wants to assassinate the
president of the US. Despite its incredible intelligence, what I found most
unrealistic was its control of all internet-connected systems in the US
(traffic, remote-control drones, phones, different OSes, etc.) "just because I am
an AI and I can do whatever I want."

For god's sake, it is 2016 and we are still unable to build a decent dependency
system for most programming languages. AI is still decades away from rising up
against us.

~~~
greggman
To play devil's advocate: let's say in 2030 we can fully simulate a human
brain and it works. Let's also assume it runs 10000x faster than wetware (a
highly conservative estimate?). That means in about a day it should be able to
assimilate as much info and experience as a 30-year-old human. After that it
could use its 10000x speed advantage to effectively be the equivalent of
10000 30-year-old hackers looking for exploits in all systems.

I'm not saying that will happen or is even probable, but when A.I. does happen
it's not inconceivable that it could easily take over everything. I doubt most
current state actors have 10k engineers looking for exploits. And with A.I.,
that number will only increase as the A.I. is duplicated or expanded.
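The speedup arithmetic here is easy to sanity-check; note the 10,000x figure is the comment's own assumption, not an established number, and at that speedup 30 subjective years compress to roughly a day of wall-clock time:

```python
# Back-of-the-envelope check of the hypothetical 10,000x brain-simulation
# speedup from the comment above. All figures are assumptions.
SPEEDUP = 10_000            # simulated time / wall-clock time
HOURS_PER_YEAR = 24 * 365   # 8,760

# Wall-clock time for the simulation to live through 30 subjective years:
wall_clock_hours = 30 * HOURS_PER_YEAR / SPEEDUP
print(f"{wall_clock_hours:.1f} wall-clock hours")  # ~26 hours, about a day

# Once "grown up", each wall-clock year of its work is worth this many
# person-years of effort -- the "10,000 hackers" figure in the comment:
person_years_per_year = SPEEDUP
print(person_years_per_year)
```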

~~~
Beltiras
General AI like that is not just years or even decades away; the problem
hasn't even been stated clearly yet. AGI is probably a century away or more.
It's not a resource problem, it's a problem problem. I attended an AGI
conference a couple of years back with the luminaries of AGI attending (held
at my alma mater, University of Reykjavík). The consensus was that we didn't
even know which direction to take.

~~~
Voloskaya
The same argument can be used the other way. If we don't even know which
direction to take, what makes you think that AGI is a century or more away?
Say in 10 years we better understand the problem we want to solve and the
direction to take; what makes you think it would then take 90 years to solve,
versus 20 or 30?

I think we simply have no idea when this could happen; it could be in 20
years, it could be in 200. But one thing is sure: when it does happen, it will
have drastic implications for our society. So why not start thinking about it
now, in case it's 20 and not 200?

~~~
argonaut
"If we don't even know much about our universe, what makes you think that an
alien invasion is a century or more away?"

Yet I don't see us losing our heads over the chance of an alien invasion.

~~~
Voloskaya
Based on the facts that there hasn't been any known alien encounter in written
human history, that we haven't found any artifact of such an event even in the
distant past, that a 100-light-year radius is really tiny at the scale of the
galaxy, that we haven't found any sign of life outside Earth, and that if an
alien civ is advanced enough to come here and invade us we couldn't really hope
to do anything against it anyway -- there is indeed no need to spend time
worrying about that.

Considering the evolution of computing and technology in general over the last
50 years, would you consider the two things remotely comparable?

I personally don't.

~~~
rdm42116
Neither have we experienced a true AI, and none of the gains of the last 50
years have brought us anywhere near it -- only more advanced computing ability
and "trick" AI.

We just assume technology will improve exponentially, based on an extremely
small sample size. Has it never occurred to us that the technology curve may
flatten toward a horizontal asymptote rather than continue exponentially?

The internal combustion engine (ICE) was an amazing piece of technology that
spread rapidly, from cars to military warplanes to our lawnmowers. Yet we
cannot make ICEs much more efficient or powerful without significantly
increasing resources and cost. If you had judged the potential of the ICE on
the growth it showed then, we'd be living in an efficiency utopia now.

------
nitwit005
I'm not sure they'd bother even if it becomes available.

It doesn't appear that people have had any real difficulty pulling off scams
where they pretend to be the IRS, or Microsoft tech support, or some other
entity to extract money. The AI would only eliminate their call center costs.

~~~
daveguy
Call-center and labor costs. Labor is a huge part of any endeavor. Of course,
this "AI to human speech interaction" tech would have to be good enough to
pass two levels beyond a Turing test: fooling a person into believing a
program is human _using voice_, and with enough nuance to impersonate _a
specific person_ or a tight range of verbal styles -- grandmas with regional
accents. If someone had this and used it for scams, they would quite possibly
be the worst business mind ever to live.

~~~
vinchuco
[https://www.quora.com/Is-there-a-text-that-covers-the-entire...](https://www.quora.com/Is-there-a-text-that-covers-the-entire-English-phonetic-range)

It may not require much sampling to impersonate someone's voice based on these
constructs.

------
Mao_Zedang
Security needs to catch up. The fact that IP addresses, caller IDs, and other
identifiers can be faked shows that we have a long way to go in improving
people's ability to validate the identity and security of communications.

~~~
daveguy
Agreed. This "criminal AI" tack is pure fear-mongering. One of their biggest
points was solving human-computer voice interaction so you could use it for
automated social engineering.

If you solved that problem, you would command Gates/Musk/Bezos levels of
wealth from the _legitimate_ applications alone.

~~~
Joof
Agreed. Social engineering is pretty difficult when you consider all the
nuances you need to understand to accomplish it well.

~~~
bananarepdev
True, but once it becomes inexpensive, it will pay off even with a low
success rate. You would be impressed by how simple some social engineering
schemes are.

------
nurettin
As the number of chandeliers grows, so does their criminal potential.

------
kordless
As AI evolves, so does our government's potential to abuse it to remain in
control.

~~~
Noseshine
I'm actually fine with the government(s) remaining in control. I would fear
the alternatives a lot more.

------
Animats
I'm more worried about corporations run by machine learning systems optimizing
for shareholder value. Somebody with access to YC's data should try training a
classifier to predict YC success. How far away is the first VC fund run by a
machine learning system?
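For flavor, a minimal sketch of what such a classifier might look like. The features and labels below are entirely fabricated stand-ins (not YC data), and scikit-learn is just one convenient tool choice:

```python
# Hypothetical sketch: train a classifier to predict startup "success"
# from application features. All data below is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Fabricated features: e.g. team size, founder experience, prior exits,
# market-size score (standardized).
X = rng.normal(size=(n, 4))
# Fabricated outcome, loosely correlated with the features plus noise.
y = (X @ np.array([0.5, 0.8, 1.0, 0.3]) + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

On real data, the hard parts would be defining "success" and correcting for survivorship bias in the applicant pool, not the model itself.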

~~~
feelix
YC do use machine learning on applicants to predict success already. They
don't listen solely to it, but they use it as one of their ranking signals to
use google terms.

~~~
dharma1
Interesting. What metrics? Has it worked so far?

~~~
lordnacho
If they tell you that, it might stop working so well.

There's a saying about good measures ceasing to be so when they become
targets.

~~~
stevetrewick
_Any observed statistical regularity will tend to collapse once pressure is
placed upon it for control purposes_

Goodhart's Law :
[https://en.wikipedia.org/wiki/Goodhart%27s_law](https://en.wikipedia.org/wiki/Goodhart%27s_law)

------
rms_returns
Indeed, we don't yet have a single positive or altruistic implementation of
anything remotely resembling AI.

And yet we already have bots, spyware, malware, etc. infecting IoT devices --
think what will happen if a more "evolved AI" attacks these devices (or, even
worse, us).

------
deftnerd
Time for extrapolation, brain storming, and irrational views of the future.

I remember reading some years ago that many of those social media quizzes like
"Which [random TV show] character are you?" or "If you were a color, what
color would you be?" are run by companies that are slowly aggregating consumer
behavior and background data on everybody.

With access to this database and a semi-intelligent bot that's been given
instructions, one could build a collection of people who meet certain
criteria.

You could filter people down to determine who is most easily influenced by
peers and have the bot befriend them and act as a peer. This power could be
used to simply influence them to have certain consumer behaviors, or it could
be used to cause online malcontents to move to the real world and take up arms
against governments.

You could filter for people who were "easy targets" to trick them and steal
their life savings, or better yet, convince them to send you their life
savings.

You could run a fake church and find the people easily swayed by your specific
brand of faith, or by the sense of family they crave.

You could find not just the next lone gunman -- you could find a thousand lone
gunmen or bombers and set them off all at the same time against a wide variety
of targets.

You could convince 10,000 people to invest a pittance in a penny stock to
make it soar, then cash out.

You could trigger viral boycotts or artificially construct "Grassroots"
organizations.

Similar to a recent Black Mirror episode, you could automate blackmail. Bots
could scour the internet for "deviant behavior" in safe pseudo-anonymous
communities, connect those profiles to real-world identities, and
automatically threaten people into some kind of action or payout.

On the other hand, many of these things could also be used for good depending
on your viewpoint. A true believer in a cause might like the ability to easily
find and reach out to people who believe in the same cause to form a
grassroots campaign.

A church with low attendance numbers might be able to find more members for
their flock.

An intelligent bot system with psychological and marketing profiles on
everyone in a country could be used by humanitarians to give certain
categories of people (brave, natural leaders with compassion) the prodding and
emotional support they need to stand up to militants or warlords.

An automated bot that connects pseudo-anonymous identities with real
identities could be used to privately and discreetly tell trolls to stop their
negative behavior.

If the tool kits arrive, I anticipate a new wild west of uses... negative,
positive, and purely exploitative.

------
blahi
I am at a point where I feel the need for an addon that blocks everything that
mentions "artificial intelligence".

STOP already! There's no such thing. There won't be such a thing. You have no
idea what you are talking about.

~~~
vinchuco
Would such an add-on also block these comments?

