
Ask HN: How exactly will AI be Dangerous? - CM30
Okay, I admit I'm probably missing the point here. But assuming a super intelligent AI comes into existence, how exactly will it physically threaten humanity?

How will it get from a program in a computer system to an actual threat to the human race and Earth as a whole?

I assume it wouldn't work like Skynet in the Terminator series, since military tools wouldn't be connected to the internet.

So how does your paperclip maximiser physically turn stuff into paperclips? Or your AI cope with world politics, armies and the world population?

How does your AI go from irrelevant internet troll to antagonist of a dystopian movie franchise?
======
iamzaf
It's best to change our understanding of _dangerous_.

AI will not come in the form of a dystopian movie franchise.

AI will show itself in ugly biases that occur because of our human biases. The
prime example of AI being dangerous already exists in predictive policing. A
potential starting point to understanding that aspect of the problem is
[here](https://www.washingtonpost.com/opinions/big-data-may-be-reinforcing-racial-bias-in-the-criminal-justice-system/2017/02/10/d63de518-ee3a-11e6-9973-c5efb7ccfb0d_story.html?utm_term=.2b05ef242711).

In terms of the physical realm, you can think of self-driving cars. Currently,
these systems are not robust to [adversarial
examples](https://blog.openai.com/adversarial-example-research/).
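To make the idea concrete, here is a minimal sketch of a gradient-sign perturbation (in the spirit of FGSM) against a toy logistic-regression "classifier". The weights and input are made up for illustration; real attacks target deep vision models, not a three-weight model, but the mechanism is the same: a small, deliberately chosen nudge to the input flips the prediction.

```python
import numpy as np

# Hypothetical model: logistic regression with assumed weights.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, 0.1, 0.4])   # a "clean" input, confidently class 1
p_clean = predict(x)

# The gradient of the class-1 score w.r.t. the input is proportional
# to w, so stepping against sign(w) lowers that score the fastest
# under an L-infinity budget of eps.
eps = 0.25
x_adv = x - eps * np.sign(w)
p_adv = predict(x_adv)

print(f"clean: {p_clean:.3f}, adversarial: {p_adv:.3f}")
# The per-feature change is at most 0.25, yet the prediction flips.
```

The point of the toy example is that the perturbation is bounded and targeted, which is why such inputs can look innocuous (a slightly altered stop sign) while still fooling the model.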

So AI will not be dangerous in the traditional sense. It _might_ be dangerous
because of biases in the data or because it acts in ways we do not expect.

Research in interpretable models and adversarial examples will help alleviate
some of these issues.

------
WheelsAtLarge
My view is that AI's real danger is not that it will become our master but
that we'll use it as a weapon to better kill our "enemies." It's not something
that will happen in the future; it's something we can see now. We have smart
war drones that make it easy to kill people thousands of miles away. And
it's easy to see that the same AI that powers smart cars can be used to
power smart bombs. Or we could develop cheap smart drones that work in swarms
to overpower any warplane or ship. I could go on and on about the capabilities
of our current technology. I don't even have to look deep into the future to
find dangers. Our current capabilities give us signals of what we can do now
and what we will be able to do.

We decided a long time ago that chemical weapons are so terrible that they
cannot be used in war. We should start thinking about banning AI in war
machines too. If we don't, I can see a future of very destructive weapons that
are cheap and easy to use, not just by independent countries but by any
terrorist group that wants to use violence to make its point.

