
Pentagon: A Human Will Always Decide When a Robot Kills You - swohns
http://www.wired.com/dangerroom/2012/11/human-robot-kill/?utm_source=twitter&utm_medium=socialmedia&utm_campaign=twitterclickthru
======
csense
> Deputy Defense Secretary Ashton Carter signed

Shouldn't something this important be implemented at a higher level?

I'd think there ought to be a new Geneva Convention on the rules of robot
warfare: get all the countries capable of making deadly robots to agree on
which tactics are acceptable and on the chain-of-command/responsibility issues
addressed by this memo.

Of course there's no guarantee that all countries will sign it, or that those
that do will keep to their promises, or that it will regulate terrorists.

I do approve of the move; it's definitely a step in the right direction. It's
better than having no rules on the issue, or rules saying that software
_should_ have the power to make those decisions.

But the government should be taking a stronger stance on the issue.

~~~
freshhawk
Many are now calling for this level of international law:

[http://theglobeandmail.com/news/world/ban-urged-on-killer-ro...](http://theglobeandmail.com/news/world/ban-urged-on-killer-robots/article5456209/)

------
rck
One possibility that the article seems to ignore is that in the future, robots
may be explicitly programmed to follow laws of war and rules of engagement.
There's already been some research into this [1], and some people argue that
autonomous robots could eventually be better than humans at following
international law in combat. It does seem like a stretch given the present
state of AI, but in the future there might be treaties that specify the
algorithms that autonomous systems have to implement if they're going to be
deployed in combat situations.

[1] [http://www.cc.gatech.edu/ai/robot-lab/online-publications/fo...](http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf)

------
femto
Where does the battlefield stop when drones allow soldiers to be remote?

If a drone pilot on active duty works and lives among a civilian population,
are those civilians being used as human shields? How does that sit with the
Geneva Convention? Is a US-based pilot, who goes home at night, counted as
being on the front line and unwittingly using their family as human shields?

I'd think the use of drones extends the bounds of the battlefield, making
drone pilots fair game. Probably a moot point when fighting an adversary with
limited range, but it might be quite important in a more equal contest.

------
Zenst
A human may give the kill order, but that order is based on information the
robots/computers/machines give them.

How many people have died because of bad code? That's more worrying.

~~~
scarmig
Is there any evidence that the information provided by any technology for a
drone attack has been inaccurate because of a bug or bad code?

It's not like the government is using machine learning to categorize likely
terrorists and then blowing them up in their homes.

~~~
catshirt
1\. is there any evidence supporting the existence of a bug-free system?

2\. you're probably over-simplifying the actual process by which we
algorithmically identify and eliminate terror threats, which i'd bet is, for
all intents and purposes, just as scary.

my point here is that you, just as you noted of the parent's post, are
operating under quite a few assumptions.

------
InclinedPlane
Isn't that nice. This isn't the most concerning aspect about the use of drones
though. The concerning part is the extension of the definition of
"battlefield".

Currently the definition of a battlefield is little different from "wherever
we decide to put a drone". However, this leaves a lot to be desired
from a rules-of-war and civil liberties perspective. The thought that merely
visiting some city in the developing world and hanging around with the wrong
folks could result in you dying in a battlefield via remote drone attack is
more than a little disturbing, especially when compared against all of the
normal heuristics a sensible human with good-intentions would typically use to
avoid being involved in a war and especially being killed in battle.

Certainly there are risks and downsides to operating under rules-of-war which
are too limiting or too strict but the solution is not to throw everything out
and create effectively unlimited power with almost no safeguards, limitations,
or accountability. And that is the road we seem to be heading down. As history
has shown, power without accountability and limitations is never a good idea
and will inevitably lead to intentional abuse.

~~~
chimeracoder
> hanging around with the wrong folks could result in you dying in a
> battlefield via remote drone attack is more than a little disturbing

Being a minor (16 years old) and a US citizen (born in Denver) won't protect
you from "having the wrong father", as Abdulrahman al-Aulaqi can tell you[1].

It's not reassuring at all to me to know that a human, rather than a robot, is
making the decision, as long as I don't know (or trust) the criteria that that
human is using.

[1] (Except he can't, because he was assassinated last year, far from any
"battlefield" - or even any country that the US is at war with, for that
matter.)

~~~
mousa
"and a US citizen (born in Denver) "

I've never understood the emphasis on this point.

You're either for or against drone attacks on presumed terrorists; I can't
understand how US citizenship plays any role in it. If his father was involved
in terrorism and you think drone strikes are OK, why would you have extra
sympathy for him because he's a US citizen?

I don't think the US-citizen aspect is worth mentioning; in fact I think it
just makes people who are against drones seem a little unhinged. By the way, I
am against the use of drones (on Americans or otherwise).

~~~
chimeracoder
> If his father was involved in terrorism and you think drone strikes are OK,
> why would you have extra sympathy for him because he's a US citizen?

As of April 2010, the _open, stated_ objective of the CIA was to kill Aulaqi's
father (and others on President Obama's "hit list") and, if and _only if_ he
could not be killed, to arrest him, charge him formally, and bring him to
trial.

In other words, "if we can kill him, we don't have to prove he was actually
guilty."

I don't support the drone strikes against non-US citizens, but the idea that a
single executive can impose capital punishment on a US citizen[1] without a
trial is downright unnerving. Imagine if police officers wielded that same
level of power - it's no different.

[1] The US Constitution does not provide the same levels of protection to
non-citizens, which is why drone strikes on foreigners are still disgusting
but not as terrifying (from the perspective of a US citizen).

~~~
joeyo

> [1] The US Constitution does not provide the same levels of protection to
> non-citizens, which is why drone strikes on foreigners are still disgusting
> but not as terrifying (from the perspective of a US citizen).

I hear this asserted frequently but it is simply not the case. There are a
very few places where the citizenship requirement is explicitly stated (e.g.
voting or holding public office), but generically the US Constitution refers
to "persons" not citizens.

For a scholarly article on this topic, see:
[http://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?ar...](http://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=1302&context=facpub)

------
charonn0
Maybe we should just stop killing each other.

~~~
omarchowdhury
Preposterous.

------
trimbo
Humans make mistakes. We should really put that kind of decision making in the
hands of an artificial intelligence.

~~~
rjzzleep
it's gonna happen anyway whether the pentagon says something or not. not that
i understand why anyone cares what the pentagon says.

~~~
trimbo
I was saying that tongue in cheek, but yeah, it will happen.

At some point the brainiacs in the war department will realize that you can
never find enough soldiers to police everyone -- if they haven't already. The
only way to solve that is to make a machine more efficient than people.

Our allies on the inside, potentially, are soldiers like the general in
WarGames played by Barry Corbin, who don't trust the machine over a human
soldier.

------
ImprovedSilence
I'm not entirely sure I believe this. Yes, for Predators and similar drones it
makes sense. But the military already has weapons like the Sea Whiz that seek,
aim, and fire all on their own[1]. No human pulling the trigger or picking out
targets. Granted, the Sea Whiz is anti-missile. But it's only a matter of time
until it becomes self-aware and targets whatever it wants....

[1] <http://en.wikipedia.org/wiki/Sea_whiz>

edit: that gun looks fantastic:
[http://www.youtube.com/watch?v=iAw82h-IhdQ&feature=fvwre...](http://www.youtube.com/watch?v=iAw82h-IhdQ&feature=fvwrel)

------
ck2
What happens if a foreign power decides to unleash 1000 armed mini-drones in
NYC as revenge?

How exactly will the US stop them?

All that TSA money could have been used to research that problem.

Are we going to have drone dogfights? The next step is to remove the need for
human operators for tasks like evasion, so frequency jamming is out. I guess
an EMP is the only way, but drones can be shielded against that too.

~~~
chii
> What happens if a foreign power decides to unleash 1000 armed mini-drones in
> NYC as revenge?

at that point, it'd be all-out open war. The only reason the US can do these
operations in foreign lands is that no other country could really retaliate
(and survive).

------
twright0
Unless you happen to be standing too close to someone a human decides to kill
with a drone, of course.

------
re_todd
Wow, that will make it feel much better if they off you - how warm and fuzzy.
They should make a Hallmark card: "Season's Greetings - if you get taken out,
you can count on us to ensure that a human being will decide when a robot
kills you. Sincerely, The Pentagon."

------
BIair
Coincidentally, on the same day there is an HN front-page story about the
Google Spanner database.

TL;DR: humans are too slow.

...think this ends well for slow humans?

------
pinaceae
well, this hasn't been true for a long time now.

Mines.

Anti-personnel, anti-tank, whatever. Those are killing automata - some very
simple, some quite smart; it's not that easy to kill a tank with high
probability.

so, yeah, there will be next-gen mines: combine a rifle with some sensors and
a field of fire. voila, a stand-off weapon covering your flank - nothing comes
through this alley for a while.

------
superbaconman
And Iraq has weapons of mass destruction.

------
thisismyname
Until a robot decides not to listen...

~~~
jfoutz
You don't need AI to make the assertion literally false. The Pentagon isn't
giving up on the concept of "collateral damage".

------
shousper
Impose a law and someone will break it.

------
cmccabe
"And that's our promise to you, here at the Pentagon."

