
Risks of Artificial Intelligence - ignoranceprior
https://thinkingwires.com/posts/2017-07-05-risks.html
======
SubiculumCode
Drones: Flight in 3D open space is not a hard problem for machine learning.
Indiscriminate Visual Targeting: Not all that hard a problem for machine
learning.

Downvote me if you will, but I freely admit that I am scared of drones with
guns. Either as an invading force, or as an attack from terrorists. A machine
doesn't have to be smart to shoot at anything that moves or has a heat
signature.
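
To illustrate how little "smart" that takes, here is a minimal motion
detector using standard OpenCV frame differencing (the camera index and
thresholds are arbitrary; this is a sketch of the general technique, not any
real targeting system):

    # Frame-differencing motion detector (OpenCV 4): anything that moves
    # against a static background shows up as a contour.
    import cv2

    cap = cv2.VideoCapture(0)                     # assumes a camera at index 0
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)            # pixel-wise change between frames
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:          # ignore sensor noise
                print("moving object at", cv2.boundingRect(c))
        prev = gray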

Edit: Drones with chainsaws however are just plain badass (and potentially
useful for tree trimming).

~~~
colorint
Indiscriminate visual targeting is a potentially hard problem if you're doing
it against adversaries. One underappreciated thing about machine learning is
that, since it's statistical, it can only ever be anomaly detection, so the
countermeasure is to not be anomalous. You could conceivably hide from
thermal imaging by using ice packs, and in any case you only need to get
within the range where you're indistinguishable from the ordinary noisiness
of the environment. Optical techniques suffer from similar problems, except
that it's even easier to hide. Not to mention the occasional voodoo that
comes out about metamaterials.
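
A toy sketch of why staying inside the background distribution defeats a
statistical detector (all numbers here are made up): a z-score rule has
nothing to flag once a signature is cooled to within ambient noise.

    # Toy z-score anomaly detector over scene temperatures (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    background = rng.normal(20.0, 2.0, 10_000)   # ambient readings, deg C
    mu, sigma = background.mean(), background.std()

    def flagged(reading, k=3.0):
        """Flag anything more than k standard deviations from background."""
        return abs(reading - mu) / sigma > k

    print(flagged(37.0))   # unmasked body heat -> True
    print(flagged(23.0))   # signature cooled into the noise band -> False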

~~~
SubiculumCode
I agree, against prepared adversaries. I am more worried about it being used
against civilians.

------
alexandercrohde
I find this article long, and if anything it seems to detract credibility
from a meaningful and well-considered academic topic
([https://en.wikipedia.org/wiki/Existential_risk_from_artifici...](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence)).

If this article seeks to address the topic at an academic level, I think it
does not succeed.

If it seeks to simplify the academic material for the lay-person, I think it
also does not succeed.

Better to just read the Wikipedia article
([https://en.wikipedia.org/wiki/Existential_risk_from_artifici...](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence)).

------
jeffdavis
Decisions should be accountable. Ordinarily, humans make decisions, so you can
hold them to account.

But who's accountable when a machine makes a choice? You can't blame the
hardware. The software doesn't have much meaning before it is trained.

That leaves the training data. The data are just facts, so you only have a few
routes to find the root cause of a decision:

* A datum that is actually false, which you can correct

* The selection of data for the training set

* The order in which the data were fed in

I don't know much about AI. Maybe someone who knows more can explain how to
investigate a contentious decision?

~~~
hacker_9
Technically it's still the programmer who made all the decisions, at least
with the current state of NNs. True decision making is philosophically up
there with the 'what is consciousness' problem.

When the blame game is played, though, the end result would likely be the
corporation in question getting sued for a lot of money, and it might then
choose to take internal action and fire certain employees.

~~~
pps43
In many cases the programmer could not have made the decisions. For example,
AlphaGo programmers could not win against top go professionals, but AlphaGo
can.

With a simple model like a credit score, it's easy to see which input affects
which output in which direction: you're late on your mortgage, and your FICO
score drops. But this is just one neuron. Make the model a little more
complex, and you no longer have a way to know exactly what it is doing.
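
A toy sketch of that jump in opacity (all weights invented for illustration):
in a single-unit model the sign of each weight tells you which direction each
input pushes the score, but one hidden layer is enough to lose that property.

    # One "neuron": each weight's sign shows which way its input pushes the score.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([1.0, 5.0])     # [late_payments, years_of_history]
    w = np.array([-1.8, 0.6])    # negative weight: lateness lowers the score
    print(sigmoid(w @ x))        # readable: we know why the score moved

    # Add one hidden layer and the inputs get mixed: no single weight
    # tells you which way 'late_payments' moves the final output.
    W1 = np.array([[0.9, -0.4],
                   [-1.1, 0.7]])
    w2 = np.array([1.3, -0.8])
    print(sigmoid(w2 @ np.tanh(W1 @ x)))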

------
lootsauce
The risk of AI is not that it becomes self-aware and has its own intentions
that are contrary to those of humans. There are massive risks in simply
putting decent AI on killing machines for "defense", and in the new calculus
of putting various plans into action once the lives of our brave young men
and women are no longer at risk.

We already see some of this with drone attacks. It's just so easy to drone
somebody, and so hard to verify, in the moment, that we will only kill the
bad guy. Never mind the logic that we just kill anyone anywhere in the world
because we're good and they are bad. I'm not being a Pollyanna here: it's a
new option that we did not have before, and that option takes on its own
logic and momentum.

This technology will not be limited to the good guys and even in the case of
"the good guys" new options are opened up and robots do not question the
morality or validity of their orders.

------
Geee
I see one danger: it's incredibly easy to manipulate humans at large scale
(see history). That's our vulnerability, and machines will exploit it.

We see our world through thoughts and ideas evolved by us. How can we trust
that we stay pure in a world where information comes from machines? Are we
part machine then too?

------
ppeetteerr
Most predictions never come true, and those which do are often never
predicted. The fear of AI is overblown at this point, and a prediction of
strong AI in 2300 is way too far into the future to be of any use to today's
society.

------
exratione
Some humans are likely to become more powerful than other humans some day.
Most such humans will by default develop instrumental subgoals that conflict
with other human interests. This could have catastrophic consequences. If we
don't actively work on control mechanisms and safety of human behavior, this
will most likely pose an existential risk to humanity.

Compare and contrast.

I think people make too much of the wrong things in the matter of general
artificial intelligence.

~~~
tomsthumb
> Some humans are likely to become more powerful than other humans some day.
> Most such humans will by default develop instrumental subgoals that conflict
> with other human interests.

Isn't this already true?

~~~
nske
It has been true multiple times throughout history. Every time, some negative
feedback kicks in, changing the rules of the game enough for a certain
balance to exist. If anything, it seems this balance tends to improve every
time (although it might be too early to tell). However, it does seem to me
that the difference in power among humans (no matter how power is defined) is
never big enough to make "other human interests" irrelevant.

~~~
lern_too_spel
Indigenous peoples have always suffered this problem. The difference with
superintelligent AI is that (unless the problems are addressed) it will make
the global elite suffer, too.

------
juskrey
AI is just a model. What's dangerous is humans with harmful models.

~~~
adam12
That is assuming that humans have control of their AI.

~~~
juskrey
Humans have the power switch. The much more real danger is that they won't
come even close to thinking about using it, and will instead push harmful
decisions based on flawed, overhyped AI models.

------
6d6b73
"when asking to "make all humans happy", the ASI might decide that the safest
and most efficient way of doing so is to drug everyone and to turn us all into
numb but happy creatures"

If the AI is so dumb that it doesn't know that happiness means different
things to different people, it will be too dumb to take over the world. And by
the same token it will not be a Strong AI, just a weak AI given too much
power.

~~~
haydenlee
How do you optimize happiness for all people?

~~~
nske
hmmm optimising it for the largest subset of people in which that is feasible
and eliminating the rest sounds like the way to go :)

Or perhaps "alter" all people in a way that makes it feasible? Solutions could
be numerous for a hypothetical intelligence whose limits we cannot even
imagine, much less specify.
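
A toy illustration of that failure mode, often called specification gaming
(all numbers made up): an objective that only mentions average happiness says
nothing about keeping everyone around.

    # "Maximize average happiness" is trivially gamed by shrinking the population.
    happiness = [0.9, 0.8, 0.2, 0.1, 0.7]

    def average(pop):
        return sum(pop) / len(pop)

    print(average(happiness))     # 0.54 for the full population

    # A literal-minded optimizer notices the objective never says the
    # population has to stay the same size...
    gamed = [h for h in happiness if h > 0.5]
    print(average(gamed))         # 0.80 -- objective "improved"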

------
delegate
Humans are a much bigger threat to humanity than AI.

A super smart AI would come up with strategies to herd humans rather than
eliminate them. Let humans have the illusion of being in control, while
setting up the game for them so that they will organically arrive at whatever
the AI has as its goal.

The longer it (the AI) can maintain this illusion, the higher its chances of
slowly gaining absolute power in the long term.

A machine is not limited or motivated by the notion of mortality (of the
body), so it can plan ahead for hundreds of years, and it can make sure these
plans remain invisible to the mortal human 'overlords'.

This system exists today: it's the Internet. The AI systems created today are
just extensions of the Internet. Each one of us humans is just a node, or
rather a neuron, in the hyper-cortex that is the network itself.

At this scale, it might become self-aware without any human being able to
understand or stop it.

A slow, stealthy, peaceful takeover, while improving human beings' lives (a
symbiotic relationship), is the optimal AI strategy, IMHO.

So I'm not worried about AI at all.

Dictators, fundamentalists, crooks, and paranoid schizophrenic world leaders:
that's a much more serious threat, and one I'm quite scared of.

------
rrggrr
Manufactured products are generally safe because the economic costs of
shipping unsafe products are too high: legal fees, litigation, products
liability insurance, judgment costs, UL certifications, OSHA compliance and
inspection costs, etc. It took several generations for the risks to be
understood and for politicians to succumb to public pressure and enact
product safety / tort legislation.

It would be nice for Congress to get in front of this issue and legislate
penalties for unsafe AI. It would be nice for the debate to simply begin. But
with products liability as a guide, it will take several highly publicized
incidents before meaningful protections are enacted. We're still waiting for
meaningful privacy legislation after many information breaches, and there has
been little progress.

It's one thing for Musk to warn of the dangers of AI; it's another for him
and like-minded folks to fund a lobby to advocate for protective legislation.

~~~
haydenlee
So we should regulate AI to slow advancements and let other countries take the
lead? Would you rather be taken over by China's general AI or the USA's?

~~~
rrggrr
Excellent comparison. China's current economic and environmental woes are a
good example of what 'progress at all costs' outcomes look like. There ought
to be a public debate about this and what costs are acceptable.

------
adenozin
How is it realistically possible for an AI to destroy humans? After all, we
have the switch, and I doubt anyone would give it access to nukes or weapons.

~~~
Diederich
Elon Musk had a thought-provoking take on this question. I'll roughly
summarize it and add some nuances he didn't mention. He was speaking to a
room full of state governors, most of whom seemed fairly unconvinced by his
concerns.

He said that he didn't believe his example was real, but that it was
illustrative.

Consider an AI whose goal is to maximize stock market returns. This AI
ingests vast amounts of historical data, of all kinds, and it considers
[https://en.wikipedia.org/wiki/Korean_Air_Lines_Flight_007](https://en.wikipedia.org/wiki/Korean_Air_Lines_Flight_007),
where the Soviet Union shot down a Korean Air Lines 747 in 1983 after the
plane deviated slightly from its path from Anchorage to Seoul and entered the
edge of Soviet protected airspace. Around the same time, there was a nearby
US aerial reconnaissance mission.

This event greatly increased tensions during some of the worst years of the
cold war, and as a result, certain kinds of stock market issues moved in
certain directions.

Considering this incident, and the state of the world in 2014, the AI takes
three actions:

1. Buys/shorts the same and/or related stocks, as applicable.

2. Creates some fake intelligence chatter, consumed by the Russians, that
there was going to be an aerial target of interest over eastern Ukraine on a
certain day and time. The location and direction would not clearly match
known commercial airline flights.

3. Breaks into the flight computer of Malaysia Airlines Flight 17 on
17-July-2014 and causes it to fly slightly off course.
[https://en.wikipedia.org/wiki/Malaysia_Airlines_Flight_17](https://en.wikipedia.org/wiki/Malaysia_Airlines_Flight_17)

The result? Another civilian airliner was shot down by the Russians, which
increased global political tensions, and probably caused some stocks to move
in relatively predictable ways. In this example, the AI met the goal it was
given in an exceedingly unexpected and unpleasant way.

This is clearly NOT an end-of-the-world scenario, and it is hypothetical.

But it is, in my mind, illustrative of some of the dangers of AI. Systems that
can do powerful, unpredictable things in order to meet mundane and reasonable
goals.

~~~
adenozin
Hmm, but couldn't we program in some laws, like no hurting humans or any
other living being, that cannot be overridden? Of course there will always be
bad guys who will use AI for bad things, or whose AI won't have a problem
with living victims, but they will never have the resources to power an AI
strong enough to hack airliners, generate fake chatter, and things like that.
I doubt any nation or big company will be able to do that in 50 years or even
more.

------
jmull
I like the layout and design of the site.

~~~
thewillium
The design seems to be heavily influenced by Edward Tufte and is likely based
on [https://edwardtufte.github.io/tufte-css/](https://edwardtufte.github.io/tufte-css/)

------
wodencafe
The public doesn't take this stuff seriously enough.

Did we not learn the lesson about rogue AI from Terminator™?

~~~
chriswarbo
Whether or not you're joking, I think the real problem with using Terminator
as an example is that it's overly optimistic. The story is roughly:

1) US military builds up arsenal of autonomous killing machines and nuclear
missiles

2) US military connects all of these to the Internet

3) US military creates a powerful AI which takes control of this arsenal
(whether it was put in charge or hacks in seems to vary across the movies)

4) AI "becomes self-aware"

5) AI tries to wipe out humanity

Almost all of the discussion around this focuses on step 4, either by asking
if/when an AI will "become self aware", or by trying to explain why that's
meaningless and/or unlikely.

Meanwhile I think the real dangers are steps 1 and 2, which seem to be
proceeding without much public outcry.

Yes, there are rogue AGI scenarios which end badly for everyone; but there are
also issues of hacking (state-sponsored or otherwise), and/or terrorism
(homegrown or otherwise).

It may have made political sense to build up ever-larger nuclear arsenals
during the cold war, but these days it seems like that's just increasing the
risk of accident or misuse.

~~~
titzer
The even scarier scenario is:

0) AI "becomes self-aware," hides

1) US military builds up arsenal of autonomous killing machines and nuclear
missiles

2) US military connects all of these to the Internet

3) AI by default has control of these

4) AI wipes out humanity in massive, overwhelming strike

~~~
chriswarbo
Again, I don't think that's scarier, since step 0 is a) pretty meaningless
and b) completely unnecessary. Our technology is capable of so much
destruction (intentional or inadvertent) that it doesn't make much difference
whether a human pushes the button or the button pushes itself; least of all
whether the self-pushing button is "aware" that it's pushing itself.

~~~
adenozin
Yeah, our weapons are really destructive now, but an AI like that won't have
mercy. If any nation fires nukes and nuclear war starts, there is no
practical way all of the human race will be wiped out. In theory, yes, and
then it doesn't matter who fired them, but that is only in theory.

