
We Can Now Build Autonomous Killing Machines. And That's a Very, Very Bad Idea
http://www.wired.com/2015/02/can-now-build-autonomous-killing-machines-thats-bad-idea/
======
mullen
Maybe I'm missing something, but I don't see the problem with robots hunting
down and killing humans. As long as the robots are controlled by the side I am
on, I am good with this. Yes, there are some technical issues to overcome to
guarantee we keep them under our control, but those issues will be overcome
before deployment.

I wouldn't mind a future where someone attacks my country or needs to be
wiped out (ISIS, Al Qaeda in "some cesspool of a country") and we just
unleash the robots on them. Set some parameters, tell the robots where they
can and cannot operate and under which circumstances, and be done with it.
Some place like Syria would be a no-brainer. Drop 100,000 robots in there
and that's the end of ISIS, Al Qaeda, and Assad. Let the AI figure out who
the bad actors are, then help rebuild the country once it's done getting rid
of them. In the big picture, I don't see any downsides to this.

~~~
VikingCoder
> those issues will be overcome before deployment.

Really? We'll invent a 100% secure computer system? With no way to be
hijacked? How about we build that system first, and use it to protect our
social security numbers, credit cards, email, voting, and patient health
information!

No, because there is no way to make a perfectly secure system. In a worst-
case scenario, our enemy hijacks those 100,000 robots you wanted to send, all
at once, and turns them against the good guys.

> Set some parameters and tell the robots where they can and can not operate
> and under which circumstances and be done with it.

Some parameters... Like, holding a firearm? Mad at the United States'
policies? Defending their home against invasion? Bowing to Mecca? Skin
pigmentation?

Remember, war isn't even fought between nations anymore. Most of our recent
enemies have been merely citizens inside third-world countries.

If your GOAL was "kill everyone inside of these borders," then maybe you could
control that. That's never the goal, though. Not since Hiroshima and Nagasaki.

> Let the AI figure out who the bad actors are

Humans can't even do that.

> and then help rebuild the country

That's a neat idea... How about we win hearts and minds with agriculture /
health / construction / education robots FIRST, rather than sending an army of
death robots to murder every human in a death zone?

~~~
yellowapple
> Really? We'll invent a 100% secure computer system? With no way to be
> hijacked?

That's obviously not _actually_ a requirement of killing machines if humans -
being as phenomenally easy to deceive as they are - are acceptable options for
assuming that role.

The system wouldn't have to be 100% secure in order to be good enough, by the
standards we've already set as a species. It would only need to be more secure
than a human.

~~~
landryraccoon
It's not at all clear that a human being is easier to deceive under
battlefield conditions than a machine would be.

------
pj_mukh
Or in other words [https://xkcd.com/652/](https://xkcd.com/652/)

~~~
X-combinator
So true...
[http://imgs.xkcd.com/comics/more_accurate.png](http://imgs.xkcd.com/comics/more_accurate.png)

------
sukilot
Can't stop it. Any autonomous toy plus some poison, explosive, or sharp
warhead == weapon.

~~~
pj_mukh
I think this is referring to target-identification systems that will
autonomously pull the trigger without a human in the loop, not a thoughtless
landmine or blanket explosive (we know those are bad).

------
yellowapple
I honestly don't think this is a bad thing. Or, at least, it's no worse than
humans doing the killing.

The article complains that such machines will never be able to make correct
judgements about their targets with 100% reliability. However, neither can
humans. Furthermore, I'd reckon that an artificial intelligence - with the
hypothetical ability to tap into exabytes of raw data on which to base its
decisions - is _far_ better suited to those sorts of decisions.

The article incorrectly assumes that there is a distinction between the brain
and any other ordinary computer. Once one realizes the sheer lack of such a
distinction between organic and inorganic computing devices beyond rather-
minor implementation details and computing capacity, the whole premise of the
article is moot.

We already _are_ using autonomous killing machines, and have been for hundreds
of thousands of years. Those machines are called "humans".

------
VikingCoder
I recommend the techno-thriller fiction novel "Kill Decision" by Daniel
Suarez.

