
'Killer robots' ban must be part of Geneva talks, says campaign group - esalazar
http://www.theguardian.com/science/2013/nov/13/geneva-talks-killer-robots-ban-campaign
======
ChuckMcM
As killer robots are nominally no different from land mines, I could see
support for banning them. My understanding, however, is that land-mine use (or
area-denial weapons) is _still_ allowed if there is a way to definitively
disable them at the end of hostilities. If my understanding is correct, and
robot-weapon designers are able to successfully counter with the notion "but
we can turn them off after we're done," then this effort won't go very far.

At a conference over the weekend, in one of the couch discussions, there was a
suggestion of a 'nearness' limit: you can't use deadly force unless you are
within a 10-mile radius of that use. The goal being to outlaw more developed
countries flying drones over less developed countries and picking off their
citizens.

~~~
waps
And here's the new list of countries guarding the enforcement of these rules
(yeah, not technically, I know): [http://www.jpost.com/International/China-
Cuba-Russia-and-Sau...](http://www.jpost.com/International/China-Cuba-Russia-
and-Saudi-Arabia-win-seats-on-top-UN-rights-body-331515)

What do these rules matter?

Can we please not pretend to miss the obvious truth? Here are the choices
we've already made:

What is more important: "respecting" religion, or human rights [1]? Choice
made: "respecting" religion (read: letting Muslims execute ex-Muslims, gays,
and ...).

What is more important: peace with China, or human rights (see Tibet,
Xinjiang, and ...)? Choice made: peace/trade with China.

What is more important: communism(/socialism), or human rights? (Because in
the case of Cuba and Venezuela they're opposing forces.) Choice made:
communism.

What is more important: human rights, or trade with Indonesia (a Muslim state
that executes gays and has plans to execute ex-Muslims, ...)? Choice made:
trade with Indonesia.

Human rights are a dead letter. Why? Because we've chosen against them. The
same goes for this principle.

[1]
[http://en.wikipedia.org/wiki/Human_rights_in_Saudi_Arabia](http://en.wikipedia.org/wiki/Human_rights_in_Saudi_Arabia)

~~~
Volpe
How about we break the whole world into false dichotomies.

What's more important, saving children or commenting on HN? Choice made
(commenting on HN): you wrote that comment rather than saving a child.

What's more important, curing cancer, or eating something? Choice made (eating
something)... why didn't you cure cancer instead!?

~~~
ChuckMcM
I think I'm going to save this one. I think there is a T-shirt in there
somewhere: something like "Get Dressed or Cure Cancer?" on the front and
"Damn, guess I'll have to cure cancer tomorrow." on the back :-)

------
Houshalter
Requiring a human to be in the loop in all circumstances is impractical.
Communications can be disrupted. Autonomy is also a software issue: it's easy
for a country to say it keeps humans in the loop, and then, in a real war, it
would be trivial to change that.

And "robots" are not different than any other weapon. Bullets and missiles can
be aimed but they don't discriminate and they can end up (and often do)
hitting civilians and unintended targets. Land mines don't discriminate at
all. And what difference does it make if you cover an area with land mines or
put a autonomous turret to watch it instead? I'd argue the turret is better
since it can have at least some ability to distinguish enemies from civilians
and wildlife, and can be removed much easier after the conflict is over.

~~~
jol
> Requiring a human to be in the loop in all circumstances is impractical

At least in theory, someone made the decision to kill and could be held
accountable. In the case of a killer robot with AI that decided to kill
mistakenly, who is responsible?

~~~
Houshalter
The person who ordered the drone to do what it did is accountable for it,
just as you are accountable for where the bullets go when you fire a gun, or
for whomever a landmine you placed ends up killing.

------
scotty79
Couldn't they just ban killer humans? That would prevent most war deaths.
Shouldn't we part with this barbaric notion that killing someone is OK, as
long as the killer is in the army?

~~~
briandear
Totally. If such a ban were in place, it would certainly have prevented
al-Qaeda from bombing and killing. It would also have prevented Afghan tribal
conflicts and the Sunni-Shia battles. It certainly would have stopped Chechnya
and the Serbo-Croatian war, and probably Kosovo as well.

Let's do that. Let's ban bad people. Let's make it international law that all
disagreements must be settled via pillow fights between squadrons of
12-year-old girls. Of course the Muslim extremists would naturally lose, since
they'd never let a 12-year-old girl out of the house in the first place. So
they'd have to send boys. And since proper war-fighting pillows are in short
supply in places like Yemen, they'd have to use more readily available
materials like diesel fuel and fertilizer, you know, just to be fair, given
their logistical disadvantages in lacking both available 12-year-old girls and
properly spec'd combat pillows.

So now the law-abiding rest of the world sends in the 12-year-old-girl combat-
pillow division (polyfill only, since feathers have been banned as cruel to
birds), and they promptly get set on fire and raped by an enemy that obviously
does not give a flying fuck about a group of cognac-sipping first-world
diplomats who decided to ban killing when waging war.

~~~
scotty79
Banning land mines didn't help much either. People will use whatever they
please to kill each other. Biological and chemical weapons go unused not
because they are banned but because they suck as military tools anyway, and
nukes aren't really economical, all things considered, against anything but
other nukes. Countries that take their armies seriously (far fewer of those
than 50 or 100 years ago) are sufficiently stockpiled with all the banned
weaponry.

Personally, I'm all for killer robots. I don't trust the humanity of an
invading army any more than I trust autonomous war-machine software. At least
a terminator won't kill my kids or rape my wife because it's bored.

------
angersock
So, here's the nasty undercurrent to all this, right?

Drone warfare (whether by land, sea, or air) is about using disposable
machines to kill and injure human beings. Engineering dictates that we'll
eventually optimize away the part of the control system that is slowest and
most prone to failure: the people.

The discussion of "How can we keep people running the robots" is
uninteresting, because the entire deck is stacked to guarantee that it will be
rendered moot.

~

The real discussion--I posit--is somewhat darker and more chilling:

In order to field a drone army, you need capital. You need factories to build
the devices, you need command and control infrastructure to deploy them, and
you need bright minds to develop them. Drone warfare is difficult to conduct
in any meaningful fashion as a third-world nation, or more importantly _as a
populace in rebellion_.

To put it bluntly, the use of these engines of war is limited only to the rich
kids, and there is no chance for appeal or mercy when you are identified as a
target.

Think about that for a second.

The wealthy murderer who decides to unleash these does so without any skin in
the game, without any chance of dealing with repercussions back home for lost
sons and daughters, without any care whatsoever except for a line-item
expense. Stubborn rebel holdout? Spin up more terminators the same way we spin
up dynos to deal with spikes in load.

The teenage kid holding the rusted AK their parent just dropped, looking at
the robot which just made them an orphan? No chance in hell that they'll be
spared because they are obviously not a threat--they are a human wielding an
automatic rifle, p = .975, execute.
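
The brittle decision rule being caricatured in that last line can be sketched
in a few lines of Python. This is a hypothetical illustration only: the
threshold, function name, and action labels are all invented for the sketch,
not taken from any real system.

```python
# Hypothetical sketch of the rule being criticized above: a single
# classifier score decides lethal action, with no model of intent, age,
# or surrender, and no path for appeal.
THREAT_THRESHOLD = 0.95  # assumed cutoff, chosen for illustration


def engagement_decision(p_threat: float) -> str:
    """Map a classifier's threat probability straight to an action."""
    # Nothing here is a judgment call -- only a number crossing a bar.
    return "execute" if p_threat >= THREAT_THRESHOLD else "hold"


# The orphaned kid with the rusted AK scores p = .975, above the bar.
print(engagement_decision(0.975))
```

The point of the caricature is that once the decision is a pure threshold
comparison, every consideration that didn't make it into the score simply
does not exist for the machine.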

~

This whole thing needs to become verboten, forbidden, the same way we
nominally treat chemical and biological weapons.

If we support our .gov and .mil in the use of these weapons, we'll be doing
everyone a disservice; come the day we decide to rescind the support that
backs those bastards, we'll find that they no longer need it, for they already
have the drones and the capital to make their whims felt.

~~~
coffeemug
The flipside of this argument is that wars happen anyway. We might as well use
the machines to save lives. (I don't necessarily believe this; I don't know
what I believe, but this is the other side of the debate).

The prolonged wars in Iraq and Afghanistan happened despite the repercussions
for lost sons and daughters. We're losing people right now, and hundreds of
thousands of Iraqis lost their lives because we didn't have enough manpower
(or weren't willing to take the risk) to enforce order. If we could spin up
machines the way we spin up dynos, these problems would go away. No American
soldiers would die. We'd spin up production and deployment to enforce near
_total_ order. We could reduce Iraqi deaths by orders of magnitude. The
machines don't necessarily have to act like terminators. They could act like
military police too.

This might be a scary future, no doubt. But it might be a bright one too. If
we trust our values and trust that our elected representatives won't go on
mindless killing sprees, the machines might be a blessing in disguise. (Also
note that by the time we get machines with this capability, our government,
social structure, entire economic model, and appreciation for civil liberties
will have evolved quite far from where they are now.)

~~~
xerophtye
I personally think that it is a GOOD thing that we lose sons and brothers in
wars. It makes everyone think twice before engaging in war. It also gives you
an incentive to stop (and your citizens an incentive to pressure you to stop).

If these machines are allowed to be used, war would be a push of a button
away. And that is never a good thing.

Sure, ideally war should be fought ONLY between machines, so that no one on
either side dies; but as the parent pointed out, this tech is for the rich
kids. The poorer countries would suffer immensely in such bloodbaths.

~~~
simonh
>It makes everyone think twice before engaging in war.

I would like to cite the entirety of human history as evidence to the
contrary.

~~~
xerophtye
Yes: look at that mankind, and imagine making it EASIER for them to wage war.
See my point?

------
xacaxulu
Banning killer robots is great, but I'm fairly certain they are built by
killer humans. They are just a natural extension of a society that values
warfare and hegemony over social justice and peace. Note the differences in
robotics applications between countries, in proportion to the number of wars
they've recently engaged in.

~~~
angersock
The critical difference is this:

Eventually, you can expend your supply of Killer Humans if you get too
bloodthirsty. You can wage war to the point where the populace is no longer
willing to support you--even the elaborate decoupling of war from daily life
the US has accomplished was still not enough to prevent the slow tide of
opinion from shifting.

Killer Machines, however, can be replaced as they are expended, and the plans
setting them in motion proceed without issue. The populace never stops it--as
long as there is money to buy material to make the things, they can be used.

Thus, the use of drones in warfare is not self-limiting the same way that the
use of humans is.

~~~
scotty79
Money and materiel become sort of scarce in times of war.

Also, if machines become advanced autonomous weapons, there is no point in
targeting humans, since the only thing that can harm your machines is enemy
machines.

Everybody becomes a civilian, and killing anybody becomes a war crime. Yay.

~~~
rainmaking
Well, I suppose one side could publicly advertise that its machines are
programmed to first kill the enemy machines and then the enemy humans, just
to tactically raise the stakes a little bit when negotiating surrender
conditions.

~~~
scotty79
Just as, nowadays, one side could advertise that after killing the soldiers
it will slaughter all the civilians. It's possible, but strongly frowned upon.

~~~
rainmaking
Crazed dictators exist, so robots make them more dangerous, not less. And
robot builders are likely to get targeted by everybody. So no bloodless
machine fights.

------
melling
We probably should ban them at some point. However, personally I'd like to see
an arms race for the next couple of decades. Massive military spending could
fund the R&D needed to get us to commercial uses. Jet fighters and breaking
the sound barrier were driven by the military, for example.

Having said that, I'm actually quite uncomfortable with machines deciding when
to pull the trigger. RoboCop is etched in my mind. I hope the remake keeps
that scene.

------
itsuart
Ah, yes, ban everything that you don't like. That will get rid of it for
sure. Are chemical and biological weapons banned? Yes. Do we have them? Yes.
Will we use them to survive? FUCK YES.

So stop these meaningless "Geneva talks." "Killer robots" will be built and
will be used.

Don't ban weapons; ban wars. I don't see why first-world countries would need
them anyway.

~~~
netcan
This kind of cynicism is a strange thing. It sounds like a hardened realist
talking, but in reality it's fairly naive.

Chemical and biological weapons use _has_ been greatly curtailed by the rules
of war. So has "aggressive" war (war because we want your territory). So have
various civilian-targeting tactics. They haven't been eliminated, but there is
a lot less of them than there would otherwise be. It's not perfect:
enforcement is erratic and case-by-case, and powerful countries are more
exempt than weak ones.

Nuclear proliferation has been effectively slowed by the NPT. Again, not
perfectly, but better than nothing.

------
dljsjr
This is such a fascinating topic because the field of robotics outside of
industrial applications is still so nascent. I'm not here to weigh in, just to
drop off some useful tidbits for other people interested in this ethical gray
area as well.

This theme of "terminators" and "killer robots" has been really prevalent in
the field lately because of the DARPA Robotics Challenge[1], the latest of
DARPA's Grand Challenges, which I'm currently participating in on one of the
Track B teams. Many people see the break away from bomb-squad bots and
factory-floor robotic arms into humanoids as a really scary thing, and the DRC
seems to amplify that (if a robot can hold a Sawzall, it can hold a rifle).

Just last month in Atlanta, the IEEE Humanoids conference took place. Dr.
Ronald C. Arkin[2] gave a talk on exactly this topic from an ethics
perspective, titled "How to NOT Build a Terminator," and it was exceptional.
It was a plenary talk and not a paper, so I can't really find any record of it
to share. Unfortunate.

Tangentially, the lab that I work in (the robotics group at the Florida
Institute for Human and Machine Cognition[3][4]) has employed and continues to
employ a principle that we call "co-active design"[5] where we actively work
to keep a human in the loop at all times; we're definitely not looking to
build a "killer robot". It's an interesting design problem that overlaps a lot
with UI and UX, popular topics here on HN.

And lastly, a shameless plug for the field itself: a lot of people don't
realize just how software-oriented robotics research (especially humanoids,
where the fun problems are) is. A lot of people are stuck on the idea of it
being a hardware endeavor. While it's true that a chunk of robotics falls in
the mechanical-engineering domain, there's plenty of room for hackers from
tons of different disciplines to get involved. There are interesting people
solving interesting problems, from the Open Source Robotics Foundation (the
Willow Garage spin-off) to private groups like Boston Dynamics, yet so many
people still see the field as a black box that only opens up for the
hardware-inclined. I could see a talented group with the right hacker mindset
doing some really interesting stuff in robotics, given the right impetus and
execution.

1:[http://theroboticschallenge.org](http://theroboticschallenge.org)

2:[http://en.wikipedia.org/wiki/Ronald_C._Arkin](http://en.wikipedia.org/wiki/Ronald_C._Arkin)

3:[http://ihmc.us](http://ihmc.us)

4:[http://robots.ihmc.us](http://robots.ihmc.us)

5:[http://www.jeffreymbradshaw.net/publications/20101008_Coacti...](http://www.jeffreymbradshaw.net/publications/20101008_CoactiveDesign.pdf)

~~~
jonnathanson
Fascinating stuff, and a great comment overall.

But one thing needs to be clarified here: the fear over "killer robots" is
that the capacity to make war could be placed largely on autopilot. The danger
isn't so much a Terminator-style robotic uprising, which is so farfetched in
today's world as to be sensationalistic (if not laughable). Rather, the danger
-- a very real one, at that -- lies in an accidental war as the result of
malfunctions or data-interpretation errors.

We have more to fear from dumb AI than from smart AI.

As it turns out, the US and the Soviet Union came perilously close to nuclear
war on multiple occasions as a result of computer errors. (For an interesting
look into the subject, I recommend Eric Schlosser's "Command and Control," a
recently released history of nuclear weapons policy over the last 50 years.)

Banning autonomous or semi-autonomous tactical systems ("Terminators") seems
misguided and impractical. These things will get built, and some of them are
being built already. The jury is still out on the ethics of building them (Do
they save lives by taking humans off the front lines? Or do they take lives by
turning war into a sort of game?). But they're being built, and that genie's
not going back in the bottle.

But banning fully autonomous _strategic_ systems ("Skynet") seems more
fruitful and worthwhile. Again, this is not because we expect "Skynet" to
become self-aware and intentionally initiate a nuclear holocaust. It's because
"Skynet" might misinterpret something and accidentally initiate a nuclear
holocaust.

(I don't mean to dismiss the very real role of human error in this same
domain. But fail-safes, in the form of humans having the final decision-making
authority over warfare at a strategic level, make a lot of sense.)

I'm a layman here, and I'm way out of my depth in discussing the intricacies
of war-making AI. But the way I see it, we need to avoid a dangerous middle
ground. Either we keep AI dumb and intentionally non-autonomous, or we make AI
a heck of a lot _smarter_ (thereby reducing the risk of fatal errors).

In a weird way, building Skynet might be less risky than getting halfway to
Skynet. A relatively dumb AI with full autonomy is a frightening thing. An
extremely smart AI with the same degree of autonomy is a little unnerving,
perhaps, but it's significantly less dangerous in practical terms.

~~~
saulrh
Agreed. The really interesting point of the "How to NOT Build a Terminator"
talk was not just that we have to avoid building evil machines, but that we
have the potential to build machines with _better_ ethics than human soldiers.
Avoid atrocities committed by angry soldiers, avoid accidental wars started by
twitchy trigger fingers, avoid intentional wars started by people who weren't
authorized to start them, that kind of thing. The devil is always in the
details, but a ban can only ensure that the people building these weapons are
the ones who won't put in the proper effort to make them good.

------
morgante
I think this campaign suffers from something of a branding problem in that
"killer robots" theoretically includes human-controlled robots.

They need to be extremely up-front that what they're opposed to is
_autonomous_ battlefield robots. From the title/link, I assumed they opposed
all battlefield robots (like the ridiculous ban-all-drones crowd), which is an
absurd and extreme position.

Their actual position, of opposing autonomous robots, is a lot more sensible
and should be their sole focus.

~~~
nitrogen
_...which is an absurd and extreme position._

I don't see how it's absurd to be opposed to technology that makes violence
more convenient for wealthy aggressors. One could argue that reducing the
human costs of war would only bring more war.

~~~
briandear
That theory certainly played out after World War I. After seeing that
ridiculously high cost in lives, war was prevented for generations... It
wasn't until 200 years later that World War II happened.

Sorry, rounding error... I meant 20 years later, and World War II was far
worse than WWI.

As for "wealthy aggressors," whom do you mean? Certainly not the ultra-wealthy
Serbs and Croats; you couldn't possibly mean the wealthy al-Qaeda groups or
the fabulously wealthy Taliban groups. Or Hezbollah. The Ba'ath party was
wealthy, so maybe you mean them?

Certainly you must be referring to those aggressors, right?

~~~
nitrogen
Consider how much more information is now available about human costs, though.
If your brother/father/son/friend went to war in the early 20th century and
died, you probably got a photo and a posthumous medal. Now, Collateral Murder
goes viral and gets millions of views.

------
sciguy77
Drones aren't technically robots… Loophole?! ;)

~~~
rainmaking
I suppose an autonomous robot with good AI that requires an operator to log in
over SSH and punch Y before every shot would qualify as non-autonomous.

Of course, monitoring each shot is very tedious, hence:

    UNITS="wastelayer merciless thunderdeath"
    UNIT_SIZE=4096
    for UNIT in $UNITS; do
      for I in $(seq "$UNIT_SIZE"); do
        # Auto-confirm every prompt by piping an endless stream of "y"
        # into each unit's client.
        yes | ssh "operator@$UNIT$I" epclient
      done
    done

