
DOD officials say autonomous killing machines deserve a look - prostoalex
http://arstechnica.com/information-technology/2016/03/dod-officials-say-autonomous-killing-machines-deserve-a-look/
======
ianamartin
I find it interesting how this was framed at the end of the article: the idea
that this is a necessary development in case an enemy uses ECM to disrupt the
link between the robot and its human controller.

Because that will probably sound reasonable to most people.

"Hey, hey, hey now. We're not sending unmanned terminators out to battle to
search and destroy any living thing.

This is our cute little droney-drone drone. We call him Charlie the unicorn. Now
if the bad baddy bad terrorists do something bad to Charlie and make it so he
can't receive orders, don't you want Charlie to be able to protect himself?"

And the robot apocalypse begins. Not with an army of soul-crushing t-1000s,
but with an M-79 grenade launcher dressed up as a puppy.

My guess is that by 2035, we will be mostly okay with it though. Putting AI in
charge of driving cars is, by definition, letting AI make life/death choices.
If we're okay with that, we are inherently accepting that these ethical
situations can and will be accurately modeled.

If you can tell a car how to make a decision between hitting an old lady in
the street vs. crashing into a tree and killing the passenger and get that
right enough to please the public, I'm sure we can figure out how to tell a
robot in a battle situation which person to shoot . . . well enough to please
the public.

~~~
Ciantic
I would argue the US should start training its AI already; in combat
situations, human senses become more susceptible to finding threats where
there aren't any.

They could use the AI for real-time analysis of video feeds in helicopters,
etc. A highly publicized incident comes to mind where a poor cameraman was
mistakenly gunned down because human operators thought he was wielding an RPG,
while it was likely a camera he was holding. Could an AI have detected that?

~~~
XorNot
An AI doesn't need to. An AI can wait, take the hit, then send in a fleet of
taser-armed drones to subdue the guy, and another helicopter to extract him.

------
bad_alloc
Just playing the devil's advocate here:

* No soldiers of the side deploying armed drones need to be in danger.

* Robots do not act cruelly unless specifically instructed to. Humans have a pretty bad record of unnecessary cruelty due to racism, boredom, intoxication, etc.

* If you log all decision-making processes, you can justify each action taken. It's like bodycams for police officers. If you made all logs public after the war, it would be possible to identify war crimes and who gave the order (unlike now).

* Nothing requires a machine to use lethal force all the time. Systems whose primary purpose is capturing combatants become an option when there's no risk of losing one of your own soldiers.

* Your robot's priority doesn't have to be self-preservation (unlike a human soldier's). When in doubt, don't shoot; analyze the situation.

All in all, if the military _did_ use these autonomous killing machines with
the intent of minimizing civilian casualties, would this not be an improvement
over the current state of warfare? Of course it won't make war per se less
likely, but not using these machines doesn't either. So why not use them?

~~~
vlehto
Because any technological advance is going to be countered. Then at some
point, we have drones killing drones.

Currently, military numbers are limited by manpower. And military units (like,
say, tanks) are to some degree limited by money. You don't want to pour too
much money into a single tank, because you'd be putting all your eggs in a few
baskets. And you also don't want to put a significant part of your troops into
badly armored tanks with good firepower, because then you will run out of
tankers in no time. Currently the "good balance" is 10,000 M1A2 Abrams at $6.2
million apiece. Those need 40,000 tankers, which is a tolerable amount of
crew, though training them costs, and will keep costing, lots of money. The
combined price of the whole thing is $62 billion of taxpayer money, spread
over several decades. Increasing the price of a tank to $10 million would be
stupid: it would still be vulnerable to certain dangers, but you'd now lose
nearly $4 million more in a single blast.

Now let's imagine a fully autonomous tank fleet of M1A9 Abrams somewhere
around the year 2050. You are no longer limited by tankers; you are only
limited by price. And since you lack a crew, you need less armor. Let's say we
have a unit price of $4 million. Now 10,000 units is only $40 billion and zero
tankers.
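The cost arithmetic above can be sketched as a quick back-of-the-envelope calculation. All figures (unit prices, fleet sizes, crew per tank) are the commenter's hypotheticals, not real procurement data:

```python
# Back-of-the-envelope fleet costing, using the commenter's hypothetical numbers.
def fleet_cost(units, unit_price_musd, crew_per_unit=0):
    """Return (total cost in billions of USD, total crew required)."""
    return units * unit_price_musd / 1000, units * crew_per_unit

# Manned M1A2 fleet: 10,000 tanks at $6.2M each, 4 crew per tank.
manned_cost, manned_crew = fleet_cost(10_000, 6.2, crew_per_unit=4)
print(manned_cost, manned_crew)  # 62.0 ($B), 40000 tankers

# Hypothetical autonomous fleet: cheaper per unit (no crew compartment), zero crew.
auto_cost, auto_crew = fleet_cost(10_000, 4.0)
print(auto_cost, auto_crew)      # 40.0 ($B), 0 tankers
```

The point of the sketch is that once the crew term drops to zero, manpower stops being a constraint and fleet size is bounded by budget alone.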

Then Russia makes an autonomous tank, the "T17 Kursk", and produces 15,000 of
them to sell to China. (Or something else; I'm no fiction writer.) They are a
bit crap, but since trained tankmen are not required, Russia can easily
produce higher numbers to compete with the U.S.

But the U.S. is not happy with second place, so the U.S. gets another 10,000 Abrams.

But China is not giving up; they order another 15,000 Kursks.

And on and on, until the tax rates of both China and the U.S. reach 70% and
all of it is poured into "defense". China, India, Brazil, and Russia are all
likely to beat the U.S. in economic growth rate in the future, because growing
is easier when you start from the bottom. And they are all geopolitically
ambitious. We are headed towards a multipolar world, and that means arms races
again.

TL;DR: Autonomous killing robots remove the limitations on arms races. That is
"destabilizing" and draining.

~~~
walshemj
You do know that MBTs require a lot of day-to-day maintenance by the crew to
keep them going? That's one reason why they still have four-man crews.

~~~
nxzero
That's a short-term issue, and a widely known one; autonomous self-healing
support and supportable units are coming.

~~~
goldenkey
Hah, I suppose you think we have self-healing cars.

~~~
nxzero
It appears you don't understand how auto companies make money; hint: it's not
by selling cars.

~~~
goldenkey
Please enlighten me with more information regarding this scam that even Tesla
hasn't expunged.

~~~
nxzero
The auto industry makes most of its money from enabling auto ownership, not
from selling the autos themselves. For example:
[http://www.forbes.com/sites/jimhenry/2012/02/29/the-
surprisi...](http://www.forbes.com/sites/jimhenry/2012/02/29/the-surprising-
ways-car-dealers-make-the-most-money-off-of-you/)

Believe what you want, but comparing the auto industry to military systems is
meaningless, in my opinion.

~~~
goldenkey
Thank you, I was not aware of this.

------
valine
"These are hard questions, and a lot of people outside of us tech guys are
thinking about it." It seems to me there is an easy answer: building robots
that can kill people without supervision is currently a terrible idea. AI, or
whatever you would like to call it, should not be allowed to take a human
being's life. We make our law enforcement officers pass psych exams before
they are put in situations where they might need to kill someone. Until a
robot can pass the same exam, I'm not comfortable with autonomous killing
machines.

~~~
spatulan
We've had autonomous killing machines since WW2, and nobody seemed to care
until recently when everything got reframed in a scary Skynet/Terminator way.

Most people seem to get their knowledge on the subject more from movies than
reality.

~~~
harryf
...and we've already had an instance of an autonomous killing machine gone
wild: [http://www.wired.com/2007/10/robot-cannon-
ki/](http://www.wired.com/2007/10/robot-cannon-ki/)

~~~
goldenkey
Has anything like this happened to an actual superpower?

------
Nutmog
Is there any argument against autonomous guns, other than that they might go
wild and cause a lot of unexpected deaths?

We shouldn't mind them sometimes making mistakes and killing innocent people,
because human soldiers already do that often enough. Perhaps the robots will
be slightly more accurate than humans, and that's surely good for everybody
except their enemies.

Maybe the real question we should be asking is why humans are allowed to kill
people? When two countries fight each other, soldiers on both sides somehow
decide it's OK to kill the other. They can't both be right, so in any war,
thousands of flesh-and-bone killing machines (all the soldiers on the "wrong"
side) effectively go wild and try to kill lots of inappropriate people. That
shows that human decision-making is extremely poor; whole groups can even
unanimously agree on the same lethal wrong decision.

~~~
RobertoG
"Is there any argument against autonomous guns, other than that they might go
wild and cause a lot of unexpected deaths?"

Yes: that a robot always obeys orders. In my opinion, that is the most
dangerous thing about this issue.

Anyway, it's only a philosophical question, because this is going to happen.
"We" can talk about it, but "we" are not deciding anything.

"They can't both be right"

No, but they can both be wrong; that is what normally happens.

~~~
mtreis86
Humans are not much better than robots at disobeying orders, as demonstrated
in the Milgram experiments.

~~~
shpx
Milgram conducted 23 different kinds of experiments, each with a different
scenario, script and actors. This patchwork of experimental conditions, each
conducted with a sample of only 20 or 40 participants, yielded rates of
obedience that varied from 0% to 92.5%, with an average of 43%. Contrary to
received opinion, a majority of Milgram’s participants disobeyed. [0]

What's the obedience rate of a robot?

[0] [http://theconversation.com/revisiting-milgrams-shocking-
obed...](http://theconversation.com/revisiting-milgrams-shocking-obedience-
experiments-24787)

------
ryporter
Autonomous killing machines are a natural progression of military technology.
They will certainly be created by someone at some point in the future. Thus,
our military should definitely be researching this topic, and that's all that
they are doing now.

------
vonnik
Is it just me, or does this seem like an Onion headline? It's the
juxtaposition of the very serious "autonomous killing machines" with the
casual "deserve a look", I think. As if they might have autonomous killing
machines for lunch if that were offered as the special.

~~~
x5n1
Can't wait until a dictator uses autonomous killing machines to, you know,
just take over the world.

------
vlehto
Everything the DoD says or does is partially American propaganda. (I say
"propaganda" because "information war" is itself American propaganda.) And we
are dealing with a U.S. hegemon. It all started with the American
Revolutionary War and continued as "democratization". The U.S. was founded on
the idea of liberating people from monarchs, and as it set out to do that, the
DoD and CIA got really good at meddling in other countries' business. Now the
original idea has been abandoned, democratization might not do any good
anymore, even the Cold War is over, but the Yankees still keep "power
projecting". Institutions tend to keep doing what they are good at; if a
powerful institution loses its purpose, it invents a new one, because there is
a certain "survival of the fittest" even when dealing with taxpayer money. And
ultimately I'm grateful for this as a whole, because the U.S. set a model of
democratic free-market society that has been copied and improved around the
world.

Back to propaganda. The DoD is scared that other countries will increasingly
use autonomous weapons in area-denial missions. That would be really
detrimental to "power projection". For example, you don't want the Chinese to
use swarms of cheap autonomous boats to occupy the South China Sea; the U.S.
carrier fleet is way too expensive to deal with such saturation, and a missile
may cost more than the entire boat. Another nasty possibility is cheap
unmanned Cessnas flying around with short-range infrared-seeking missiles.
That would make SEAD missions a lot more demanding, as you'd need to kill the
Cessnas first and thereby notify everybody that "we are coming".

The U.S. is the technology leader. Whatever Lockheed Martin does gets copied
around the world; whatever they don't do gets significantly fewer eyeballs.
The DoD uses this as an advantage. The most recent example is infrared search
and track (IRST). The USAF was lacking in this respect until just recently,
because the USAF has radar stealth. Putting money into IRST in the late '80s
would have been a smart move for anybody else, but since the DoD didn't do it,
herd mentality went with radar. And the USAF has stealth; nobody else does.

Now suppose the U.S. employs autonomous weapons in assault missions in a
ground war. If other countries copy that tech, it does hardly anything to U.S.
assets in the air or at sea, but for a "short while" the U.S. gets diminished
casualties and even more frightening weapons. And if people find autonomous
weapons too inhumane in the attack role of a ground war, that might result in
an international ban on all autonomous weapons. Win-win.

We have precedent: the DoD teaching Cambodian guerrillas to set up mines ->
the Ottawa Treaty. I'm not saying the U.S. did that on purpose, but I'm pretty
sure DoD military analysts learned from that experience.

~~~
goldenkey
I think you brought up a better point. Invading countries are going to have a
hell of a time if sentries are set up. Booby traps and mines are one thing,
but autonomous sentries are, well, very hard to remove without risking harm.

------
bane
In "The Golden Oecumene" trilogy (one of the densest and greatest looks at a
possible real future I've ever read), the world is run by a collection of
benevolent super-intelligence AIs. One of the themes in the book is the notion
of hyper-extending current technology trends and exploring what society will
be like under such circumstances.

In the series, the take on the military is fascinating. Extending out the
notions of precision and limiting collateral damage, orbital weapons are able
to take out specific neural connections in targets and cause them to change
their thinking process entirely. While the _weapons_ in this future picture
are large automated robots and AIs of a type, the actual operator is human. In
fact, a single human is all that remains (and is needed) to provide all the
judgment and justification required to take action. The books posit that a
human-in-the-loop is still necessary.

Great and very challenging books:
[https://en.wikipedia.org/wiki/The_Golden_Oecumene](https://en.wikipedia.org/wiki/The_Golden_Oecumene)

------
ZoeZoeBee
If you believe using autonomous killing machines is inherently evil and no
doubt beset with unintended consequences, I invite you to donate to the
"Campaign to Stop Killer Robots", which hopes to make autonomous killing
machines illegal under international law.
[http://www.stopkillerrobots.org/](http://www.stopkillerrobots.org/)

------
atemerev
This is inevitable.

Remote-controlled drones are prone to jamming, hacking, and human error.

Whoever deploys autonomous warfare systems first will have an advantage in
battlespace.

------
nickpsecurity
The first use will be for border security as in Babylon A.D.:

[https://youtu.be/MQNUvaPi4Qk?t=42m15s](https://youtu.be/MQNUvaPi4Qk?t=42m15s)

Lots of border. Fast response needed. Whole families get damaged trying to
make it across. Operating with little human input suits a lot of it. A perfect
task for militarist governments to delegate to killing machines instead of
people, with the side benefit of dodging some responsibility for bad choices
by blaming them on machine logic.

------
nxzero
Wow, surprise (no sarcasm). To me, this means they already have one and
believe the public will support its release.

------
velox_io
I'm surprised there isn't already a 'personal air support drone' that can fit
into a soldier's backpack: little more than a flying grenade (with a video
stream) that can go round corners.

Such a device could be made from off-the-shelf components and help protect
against snipers.

~~~
FranOntanaya
There are already a few.

[http://www.businessinsider.com/heres-the-tiny-drone-the-
us-m...](http://www.businessinsider.com/heres-the-tiny-drone-the-us-military-
is-testing-2015-6)

[https://www.youtube.com/watch?v=1tetyswGyGA](https://www.youtube.com/watch?v=1tetyswGyGA)

------
faddat
No thank you. I like my earth earth-y and not scorched Armageddon style.

Hubris!

------
mgiannopoulos
This is how it begun, robot historians will write.

~~~
throweway
If there are any left

~~~
vectorjohn
And if they lost the knowledge of English grammar.

------
kneel
They're kind of like smart land mines: "we'll kill people when you're not
around, except this time we promise we'll try to only kill the bad guys, pinky
promise."

------
throweway
Daleks!

~~~
pmlnr
The Daleks are not machines; there is a living thing inside of the shell.

