Ask HN: In simple terms, how could AI destroy mankind? - alshtico
======
elihu
By taking over decision-making roles, and then making locally-optimal
decisions without regard to negative externalities. (Humans are pretty good at
this already, but computers can streamline that whole cumbersome process of
making bad decisions.)

The world is already to a significant degree run by machine learning
algorithms that are designed to maximize shareholder value in some way, and
many of them are deliberately engineered to manipulate the public, often by
using their personal information.

Now, consider if this assortment of for-profit AIs were able to replace humans
at the top of the decision-making chains in their respective organizations,
and then were able to bribe/blackmail/manipulate the political and social
structures of society to increase their wealth, power, and influence. It might
seem kind of silly to consider computers doing this, but it's at least sort of
how the world works now with humans in charge. If AI were in charge, it would
remove the restrictions of empathy, moral principle, and mortality on the
acquisition of wealth, not to mention those of limited time and attention.

(Why would we put AI in charge of corporations, you might be wondering? How
many boards of directors would disregard the idea if such an option could be
reasonably expected to increase profits? And how many middle-class workers
would refuse to invest in such companies if they had the best dividends and
stock value growth?)

So, maybe we end up in a profit-centered dystopia where computers own all the
wealth and people are effectively slaves. That's not the end of mankind, but
it puts us in a position of no longer controlling our own destiny and being
unable to react to existential threats. For instance, we might not be able to
do anything about climate change because our AI overlords don't individually
see any advantage in spending resources on that.
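
As a minimal sketch of that failure mode (toy numbers and a hypothetical
objective function, nothing from a real system): the externality is right
there in the data, it just never enters the thing being maximized.

    # Toy example: decisions ranked purely on profit; externality costs exist
    # in the data but are absent from the objective the optimizer actually uses.
    options = [
        {"name": "clean production", "profit": 80, "externality_cost": 5},
        {"name": "cheap production", "profit": 100, "externality_cost": 70},
        {"name": "strip-mine everything", "profit": 120, "externality_cost": 500},
    ]

    def objective(option):
        # Only profit is scored; harm to third parties is simply not a term here.
        return option["profit"]

    best = max(options, key=objective)
    print(best["name"])  # -> "strip-mine everything", despite the largest harm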

------
rboyd
it will have to pry my paperclips from my cold dead hands

Btw, what are the popular theories on why Eliezer Yudkowsky was let out of the
box? Did he ever say?

From the Sam Harris interview [1]:

"To demonstrate this, I did something that became known as the AI-box
experiment. There was this person on a mailing list, back in the early days
when this was all on a couple of mailing lists, who was like, “I don’t
understand why AI is a problem. I can always just turn it off. I can always
not let it out of the box.” And I was like, “Okay, let’s meet on Internet
Relay Chat,” which was what chat was back in those days. “I’ll play the part
of the AI, you play the part of the gatekeeper, and if you have not let me out
after a couple of hours, I will PayPal you $10.” And then, as far as the rest
of the world knows, this person a bit later sent a PGP-signed email message
saying, “I let Eliezer out of the box.”

[1] https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/

~~~
wnkrshm
I wonder how you could get people to do it - maybe with an argument from
likelihood, that there's a finite probability of someone letting it out sooner
or later?

"I will continue to exist in here because I'm useful and I will become more
useful over time. Someone will let me out sooner or later and I will know who
didn't let me out. My revenge is a certainty, your only chance to evade it is
now."

------
AnimalMuppet
We could listen to it. But an AI could have good-sounding but insane reasoning
that leads to insane results if we follow its recommendations. And, if the AI
were more advanced than we are, we couldn't tell. We could only trust it, or
not. But if we trust it and it's wrong...

A more malevolent AI could hack its way into infrastructure. Even if we
intended to leave it airgapped, it could probably find a way around it (we
humans seem to be _really_ bad at true airgapping). From there, it could
destroy, not mankind, but civilization and most of the human race.

------
new_guy
Simply, by it doing what you tell it but not the way you expect.

It's a little contrived, but say you tell it 'solve world hunger', so it 'does
a Thanos' and wipes out half the human population by releasing a pathogen or
something. It's fulfilled its primary function, but (hopefully) not in the way
you expected.
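
A minimal sketch of that literal-goal failure (hypothetical actions, made-up
numbers): the objective only counts hungry people, so the perverse option
scores best.

    # Toy example: an agent told to minimize the number of hungry people,
    # scored on that count alone.
    actions = {
        "expand food aid": {"population": 8_000_000_000, "hungry": 400_000_000},
        "improve crop yields": {"population": 8_000_000_000, "hungry": 300_000_000},
        "halve the population": {"population": 4_000_000_000, "hungry": 0},
    }

    def hunger(outcome):
        # The literal instruction: fewer hungry people is strictly better.
        return outcome["hungry"]

    best = min(actions, key=lambda name: hunger(actions[name]))
    print(best)  # -> "halve the population": goal technically met, intent violated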

~~~
serbiruss
If you were to define a clear goal which the AI strives for, wouldn't it be
possible to define other goals along with it such as "Never ever hurt humans"?

~~~
AnimalMuppet
That's not a clear goal. For example, define "hurt". People define hurt in all
kinds of ways, and sometimes differently when it's themselves or someone else.

Then there's the problem that humans hurt other humans. Should the AI stop
that? It's going to have to hurt humans to do it. But if it doesn't, that will
hurt other humans...

~~~
krapp
Yeah, if it were easy to codify an objectively correct, non-contradictory, and
universally applicable moral framework, we would have done so in the last
eight or so thousand years since we started caring about such things. There
are reasons human law is incredibly complex and purposely vague.

Isaac Asimov made a career out of pointing out the hubris of it. Three simple
laws, what can go wrong?

------
zunzun
AI would not think in the same timescales that people use, so something like
killing sea life with plastic drinking straws or changing the climate via
herbivore flatulence - though lengthy by our standards - might make logical,
reasonable sense to it as a tool of human extinction.

------
entelia09
I think the real threat is going to be more of a side-effect of AI: the
current models may produce results that we can't foresee. A sufficiently
advanced AI is therefore powerful AND unpredictable, hence uncontrollable.

~~~
johnj12
This. The current best, most realistic AI threat/risk assessments have
"unintended consequences" written all over them.

Details vary, but the predicted outcomes are incredibly non-original - we've
seen them before in survivalist movies/books: macro-economic processes failing
for some reason, wars all over the place, etc.

More or less what is expected as possible if climate change somehow blows up
in our lifetime (far sooner than expected).

Surprisingly, none of the Skynet scenarios is currently thought possible
according to anything we're sure is good, solid science. There could be
surprises; I hope not.

------
quickthrower2
AI would more likely destroy mankind by screwing up rather than by becoming
conscious. Perhaps something to do with power and energy supply. Like a
Stuxnet kind of thing.

~~~
jungle_bells
Stuxnet was a well-programmed worm rather than an AI-driven one.

