
Autonomous Weapons: An Open Letter from AI and Robotics Researchers - espadrine
http://futureoflife.org/AI/open_letter_autonomous_weapons#
======
hackuser
I don't want anyone to build autonomous weapons, but I don't want anyone to
build nuclear weapons or any other weapons of war either; I don't see how to
avoid it. If the choice is to either develop and deploy autonomous weapons or
to risk having your population conquered and murdered by enemies that use
them, then there is no choice.

Possibly, autonomous weapons, like chemical weapons, won't be important to
victory, or, like most biological weapons (AFAIK), they won't be cost-effective.
But it's hard to imagine a human defeating a bot in a shootout; consider human
stock market traders who try to compete with flash trading computers, for
example. In fact, I wonder if some of the tech is the same for optimizing
decision speed and accuracy.

Perhaps the best response by governments is to use their resources to develop
autonomous weapons countermeasures, especially those [EDIT: i.e., those
countermeasures] that can be acquired and utilized by those with few
resources: Towns, governments in poor countries, and even individuals.

Also, my guess is that it's an area ripe for effective international
standards, treaties and law. All governments can agree that they don't want
the chaos of proliferating, unregulated autonomous weapons and would work to
enforce the rules.

~~~
toomuchtodo
> Possibly, autonomous weapons like chemical weapons won't be important to
> victory, or like most biological weapons (AFAIK) they won't be cost-
> effective. But it's hard to imagine a human defeating a bot in a shootout;
> consider human stock market traders who try to compete with flash trading
> computers, for example. In fact, I wonder if some of the tech is the same
> for optimizing decision speed and accuracy.

The only way for human adversaries to fight autonomous weapons would be with
brute, lethal force (nuclear/neutron weapons). It ends poorly for all
involved.

~~~
ZenoArrow
"The only way for human adversaries to fight autonomous weapons would be with
brute, lethal force"

No it's not. You could use EMP. You could use signal jamming. Neither is
lethal; both have the potential to be effective against autonomous weapons.

~~~
toomuchtodo
>You could use EMP.

Which, to my knowledge, can currently only be generated using a nuclear weapon.
You might be able to create one using solid-state gear with enough time, R&D,
and power.

> You could use signal jamming.

Machine intelligence frowns upon your silly attempts at jamming its uplinks.
Predator drones and other existing autonomous military kit already use high-
frequency satellite communications techniques that are essentially jam-proof.

~~~
ZenoArrow
> "Which, to my knowledge, are only currently generated using a nuclear
> weapon. You might be able to create one using solid state gear with enough
> time, R&D, and power."

Some use a nuclear source, but not all...
[https://en.wikipedia.org/wiki/Directed-
energy_weapon](https://en.wikipedia.org/wiki/Directed-energy_weapon)

> "Machine intelligence frowns upon your silly attempts at jamming its
> uplinks. Predator drones and other autonomous, existing military kit already
> use high frequency satellite communications techniques that are essentially
> jam proof."

Your idea of jamming is too narrow. Think about it like this: even if it's
mostly automated, these machines still get sent signals to inform them of
changes to their mission. That signal can be blocked and/or modified. Even
satellite links can be altered: either you hack the satellite system or you
intercept the signal at a higher altitude than the receiver is operating at.

~~~
patzerhacker
>Even satellite links can be altered, either you hack the satellite system or
you intercept the signal at a higher altitude than the receiver is operating
in.

Or, in the case of total war, you blow the freaking satellites out of space
with missiles. Yes, I know space weapons systems are technically banned, but
how long do you think a nation like the US, Russia, India, or China would put
up with satellite controlled autonomous drones running roughshod over their
sovereign territory before they just blow the satellites out of space?

~~~
brobinson
Satellites can actually be destroyed using weapons that aren't in space. Back
in 1985, the US had an F-15 launch a missile which took out a satellite in
orbit. China also destroyed a satellite with a ground-launched missile in 2007.

------
GuiA
Feynman, on working on the bomb:

 _" With regard to moral questions, I do have something I would like to say
about it. The original reason to start the project, which was that the Germans
were a danger, started me off on a process of action which was to try to
develop this first system at Princeton and then at Los Alamos, to try to make
the bomb work. All kinds of attempts were made to redesign it to make it a
worse bomb and so on. It was a project on which we all worked very, very hard,
all co-operating together. And with any project like that you continue to work
trying to get success, having decided to do it. But what I did—immorally I
would say—was to not remember the reason that I said I was doing it, so that
when the reason changed, because Germany was defeated, not the singlest
thought came to my mind at all about that, that that meant now that I have to
reconsider why I am continuing to do this. I simply didn't think, okay?"_

(from "The Pleasure of Finding Things Out", transcript here:
[http://www.worldcat.org/wcpa/servlet/DCARead?standardNo=0738...](http://www.worldcat.org/wcpa/servlet/DCARead?standardNo=0738201081&standardNoType=1&excerpt=true))

This is extremely idealistic, but we need a way for engineers and scientists
to feel accountable for the outcomes of their work, and to straight out refuse
working on such projects. And the people who do work on such systems should be
held accountable in some deep way. We have reached a developmental stage where
building tools and techniques with the active goal of harming human lives has
become morally unacceptable. Engaging in civil disobedience if you are working
on such projects is the only acceptable response; Snowden should be remembered
as the first of many, not as an exception.

(yes, there are many counterpoints to my argument, but starting debates is
more interesting than spewing out platitudes. I'm interested in reading the
replies)

~~~
julianpye
I once worked in a German field engineering department of a large US
semiconductor company as a student. In the department there was a noticeable
barrier between one manager and the engineers. The following had happened
there a few years earlier: a client required a DSP to calculate the weight on a
landmine switch. The department's engineers refused to work for the client, bar
one manager. They were threatened with being fired, but they stayed on course
and ended up keeping their jobs.

The way it worked was by one guy rallying the others, taking apart the
specifications and explaining the actual moral implications to the engineers.

~~~
kstop
They were probably legally in the right too, considering that Germany is a
signatory to the Mine Ban Treaty:

[https://en.wikipedia.org/wiki/Ottawa_Treaty](https://en.wikipedia.org/wiki/Ottawa_Treaty)

------
phkahler
That's a nice thought, but the cat is out of the bag. One week the elite geeks
talk about how strong encryption is available to everyone and can't be
stopped, or how you really can't regulate what comes out of a 3D printer. The
next, they try to put the autonomous AI drone genie back in its bottle?

Gamers gonna use tech for gaming, advertisers gonna use tech for advertising,
military gonna use tech for militarying.

I think the pace of tech development is going so fast, we need to stop trying
to ban individual developments and start trying to change the way people and
governments behave so those bans aren't needed. But I'm not sure if that's
even possible short of some dystopia.

------
sanxiyn
"But listen to me, because I saw it myself: science began poor. Science was
broke and so it got bought. Science was scared and so did what it was told. It
designed the gun and gave the gun to power, and power then held the gun to
science's head and told it to make some more."

\-- from Galileo's Dream, by Kim Stanley Robinson

~~~
nileshtrivedi
Good oratory but why start at gun? What about swords, arrows and spears?

Science by itself is neutral. The proportion of evil scientists to all
scientists is about the same as evil humans to all humans.

~~~
Zikes
Any science could be turned to evil purpose, as well. Just like open source
code that gets used to create malware.

~~~
orkoden
Or look at sqlite. It was created to be used by missile systems.

~~~
pjc50
[citation needed]

------
jonroth15
At least half a dozen nations are working on such systems now. They're doing
this not because they think it's a good idea, but because there's this
attractor basin they recognize we reach by default, of a new arms race of
faster, smarter, stronger AI weapons spiraling up. This letter is essentially
a petition, which we'd like to take to the U.N., showing that the AI and ML
communities don't want their work used in autonomous weapons, things that are
built specifically to, by themselves, offensively target and kill people.
Having this sort of grassroots effort has precedent: chemical weapons,
landmines, and blinding laser weapons were all banned globally through
treaties built on this sort of thing. It's true that terrorists and rogue
states might still use them in
isolation, but you won't get this effect of the major powers having an arms
race. Although these things are under development right now in multiple
countries, they haven't been deployed to the field yet. So we're really at an
inflection point: we're trying to get a ban in place before they're actually
deployed at all, because after that it'd be much harder to get such a treaty.
If you agree with these sentiments, we are collecting signatories from the
community.

------
hackuser
Also consider that autonomous weapons could upend the global power structure:

For most of human history, military power has been tied to economic power and
population size: Those with larger economies and populations have been more
powerful. AFAIK, that is why the United States has been the dominant military
power since WWII and why China may challenge the U.S. It's also how national
governments have maintained sovereignty, by having far more economic and human
resources than any internal competitors (and when that isn't true, such as in
poor countries, national governments can be ineffective).

But what if military power depends on the quantity and quality of bots? What
stops a smaller or even poorer country from building a robot army? Poor
countries have more manufacturing capacity than wealthy ones, AFAIK, and
perhaps they need only one innovative, disruptive software developer to make
their bot army superior or at least competitive. For example, could tiny
Singapore dominate SE Asia or even become a world power? In fact, what stops a
sub-national group such as Hezbollah, a Mexican drug cartel, another organized
crime group, or even a wealthy individual from building their own army?
Without checking the inside of every factory on the planet, will we even know
the robot army is being built until it's too late? Will governments be able to
protect their citizens from warlords and exercise sovereignty over their own
territory? What about poor governments?

It's very speculative -- it remains to be seen, for example, how effective
autonomous weapons will be -- but it could be a historic change. Perhaps our
hope is that the technology will turn out to be like other weapons, such as
airplanes: Anyone can build one, but the single-engine prop plane is no threat
to what can be built with the Pentagon budget.

~~~
ZenoArrow
I know it's standard tin foil hat territory, but wouldn't the biggest non-
governmental risk be large multinational corporations? Some companies have
incomes comparable to nation states; I'd have no reason to suspect they'd be
any less capable of building military technology that rivalled a country's.

~~~
zardo
What's tinfoil hat about that? That's basically the history of colonialism.
India was ruled by the British East India Company for a century.

------
onion2k
A ban is definitely a good idea but I think we should have something else as
well. We need developers to agree, as human beings rather than law-abiding
citizens, not to build these things. Don't apply for those jobs regardless of
how well they pay. If your company starts projects in that market, leave and
find something else. Understand that you are not building things to 'protect
peace' or 'bring democracy' to people. Using your tech skills to create things
to kill people is a dickish thing to do.

There are only 18.5 million developers in the world[1]; getting a consensus
not to be evil shouldn't be beyond us.

[1] [http://www.techrepublic.com/blog/european-
technology/there-a...](http://www.techrepublic.com/blog/european-
technology/there-are-185-million-software-developers-in-the-world-but-which-
country-has-the-most/)

~~~
swalsh
Weapons yield power to whoever controls them. At least nuclear weapons
require a refinement process that's difficult. Still, North Korea has managed
to limp along for as long as it has at least in part due to a strong suspicion
that it has nuclear capabilities.

A very scary thought is how much power a private entity (individual or
corporation) could gain with a relatively low amount of money. I'm not certain
a "truce" by engineers is enough. It only takes 1 to break it. In fact, even
if there was a law... it's still only going to take one.

Personally, I think we need technological checks and balances as much as we
need political checks and balances.

~~~
therobot24
don't forget nations where governments or organizations coerce individuals
into performing work [1][2][3]

[1] [http://motherboard.vice.com/read/radio-
silence](http://motherboard.vice.com/read/radio-silence)

[2] [https://www.foreignaffairs.com/articles/north-
korea/2015-05-...](https://www.foreignaffairs.com/articles/north-
korea/2015-05-28/outsourcing-oppression)

[3] [http://blogs.wsj.com/middleeast/2014/10/23/u-a-e-migrant-
dom...](http://blogs.wsj.com/middleeast/2014/10/23/u-a-e-migrant-domestic-
workers-suffer-abuse-human-rights-watch-says/)

------
Killah911
Glad I'm not the only one worried about this. I spent a bit of my early career
on this type of tech. At first I thought it was really cool and joked with my
coworkers about building skynet. Eventually I realized no amount of coolness
or money was worth putting my talents to building things that are obviously
meant for destruction.

Sure they're tools and can be tools for peace in the right hands. But in the
wrong hands, they can do immense damage. Perhaps one of the things that's kept
humanity around is that despite the psychopaths in our midst who might not
care if they destroyed every other human being, there are others whose
conscience would get in the way.

This type of technology, in the hands of the wrong psychopath might mean the
end of us. Despite the BS marketing behind AI, NO it is not sentient, it's a
bunch of optimization algorithms. Not Good, Not Evil.

I realize that someone will build it. That is an inevitability. Just know
that it doesn't have to be me.

(Before you write comments on my handle, please read my profile; it has more to
do with Hip Hop than violence)

~~~
Kurtz79
Same here.

I wouldn't define myself as a pacifist, but given the almost limitless choice
of industries where you can work with a tech degree, why work in one I might be
uncomfortable with?

~~~
woodchuck64
Right, just stay away from industries developing or standing to gain from AI.
That leaves .... umm....

~~~
jononor
Just stay away from industries benefitting from production of nuts&bolts?

That one's work might be re-purposed for nefarious purposes is not an argument
against attempting to avoid industries where nefarious use _is the purpose of
the work_.

~~~
woodchuck64
"That one's work might be re-purposed for nefarious purposes is not an argument
against attempting to avoid industries where nefarious use is the purpose of
the work."

I think there's a central misguided notion in this thread that AI for
autonomous weapons is somehow more dangerous than AI in general. I don't see
that as the case at all. Successful AI can be instantly weaponized with little
to no effort.

So to the extent that the parent comment is rejecting AI as an industry (not
just autonomous weapons AI), I have to wonder: where can one work to avoid an
industry that won't heartily embrace AI as soon as it's cost-effective?

------
ufmace
The thing that really bothers me about the autonomous weapons stuff is the
potential for tyranny. One of the tricks of running an oppressive regime is
that you still need people to do the actual enforcing. There are limits to just
how far you can go based on what you can convince your own people to do to
each other. Yeah, there's a lot of ways to use propaganda and other such
tricks to get people to do things you wouldn't think they'd be willing to do,
but there are still some absolute limits in there.

Really good AI weapons could change the whole balance around though. Whatever
weird, crazy thing you dream up, just order the AI bots to make people do it,
and it will be done. No convincing needed, no limits.

------
meesterdude
If all it took to create a nuclear reaction was some dirt and a microwave,
we'd be fucked. All it takes is one angsty teen and they'd level a city.

This box, once opened, won't close. And it only serves our interests in the
worst of ways. People we don't want having this stuff, will have it. This is
just the tippy-tip of the iceberg though. There is a SLEW of technology coming
out with intimidating implications that makes it super easy to control or even
exterminate a populace.

So that's the thing. We need to realize we can do anything. Really. You want
to blow up the world? I'm sure we could find a way. You want to type a name
into a computer and have a drone find that person and kill them? No problem.
And that's not to mention all the other things we'll discover along the way.

It is far more likely that we will be the creators of our own destruction,
than it is that we will be able to rein in our behavior and wield our
intelligence to serve the interests of our species. We haven't gotten past
killing each other, so we're just going to keep doing that, but get REALLY
good at it. Our technical abilities have far outpaced our philosophical ones,
and that doesn't bode well.

------
o_nate
This seems the more credible AI threat to me: not that an AI will go rogue and
decide on its own to start killing people, but rather that humans will design
an AI with the express purpose of killing people.

~~~
cousin_it
The AI threat was credible enough to begin with. If you design an AI with the
express purpose of making cheeseburgers, and allow it to improve itself, it
will end up killing people. We don't know how to specify any utility function
for a self-improving AI that won't lead to killing people.

~~~
kleer001
> If you design an AI with the express purpose of making cheeseburgers, and
> allow it to improve itself, it will end up killing people

I don't believe you. Maybe you could construct an argument for anti-human
strong AI, or point me to one?

~~~
eli_gottlieb
He's using a fairly narrow definition of "AI" that means, roughly, "stuff like
AIXI". Within that definition, he's right, but of course, within that
definition, we don't know how to specify an "express purpose of making
cheeseburgers".

~~~
cousin_it
That's not completely fair. I just mean any AI with the capacity to self-
improve. We might know how to specify goals for some of those AIs, but none of
them is proved safe. And we have pretty strong arguments that any goal
that's not proved safe is most likely unsafe, making everyone end up dead or
worse.

For example, if you make the AI "learn" a utility function about making
cheeseburgers by using observations and reinforcement learning, and then the
AI self-improves, you are most likely dead, because the learned utility
function didn't include all possible caveats about not killing or torturing
people to make cheeseburgers faster. And if you think you can keep applying
negative reinforcement after the AI self-improves, think again.
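The "missing caveats" argument above can be sketched with a toy example (this is entirely hypothetical illustration, not anything from the comment or from any real AI system): a greedy planner that scores plans only by cheeseburgers produced prefers the harmful plan, and hand-patching the reward with one penalty term only covers the caveats the designer happened to think of.

```python
# Toy sketch (hypothetical): a planner that picks the plan maximizing a
# misspecified reward. The plan dictionaries and weights are made up.

def reward(plan):
    # Misspecified utility: counts cheeseburgers, ignores all side effects.
    return plan["cheeseburgers"]

plans = [
    {"name": "normal kitchen", "cheeseburgers": 100, "harm": 0},
    {"name": "strip-mine the town for beef", "cheeseburgers": 10_000, "harm": 1_000},
]

# The greedy "agent" chooses whatever scores highest under its reward.
best = max(plans, key=reward)
print(best["name"])  # → strip-mine the town for beef

def safer_reward(plan, harm_weight=1_000):
    # One hand-added caveat penalizing harm. The comment's point is that
    # enumerating every such caveat by hand is hopeless for a real system.
    return plan["cheeseburgers"] - harm_weight * plan["harm"]

best_safer = max(plans, key=safer_reward)
print(best_safer["name"])  # → normal kitchen
```

The patched reward only fixes the failure mode that was anticipated; any omitted side effect leaves the maximizer free to exploit it.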

~~~
eli_gottlieb
>That's not completely fair. I just mean any AI with the capacity to self-
improve.

Yes, but you're still assuming that we're talking about an _agent_ that can
take decisions and act autonomously, rather than an inference engine that just
processes data and spits out its inferences with zero autonomy whatsoever.

The former is extremely, _stupidly_ unsafe by default, so, logically, people
probably won't try to build any such thing. They'll build the second sort of
thing, which will mostly just be a more advanced version of today's
statistical learning.

Unless, of course, you're talking about ideological Singulatarians, who may
well face legal sanction one of these days for _deliberately trying to build
the former agenty sort of thing_ as if that was a _good_ idea.

(Protip: If you want to talk "AI safety", we can do that on the site devoted
to it, but out here in the broader world, where nobody's _damn fool_ enough to
try to build agent-y "AI", mixing up realistic ML with the kind of agent-y "AI"
you'd have to be blatantly suicidal to build is an abuse of terminology.)

Besides which, "the capacity to self-improve" is actually an open research
problem right now. Once I work some things out, I've got something to report
about that...

~~~
cousin_it
Let me get this straight:

1) Your plan for AI safety is "no one will be stupid enough to build a self-
improving AI".

2) You are currently working on self-improving AI.

I'm sorry to say, but in my eyes you've just lost the right to criticize the
LW/MIRI school of thought :-(

~~~
eli_gottlieb
>I'm sorry to say, but in my eyes you've just lost the right to criticize the
LW/MIRI school of thought :-(

You mean the one I belong to? Like I said: go on LW and talk about "AI" with
assumed context. It's just out here in the rest of the world where you can't
assume that everyone automatically has read the literature on AIXI/Goedel
Machines/etc and considers "agenty" AI to be a real thing.

>2) You are currently working on self-improving AI.

Hell no! I'm working on logic and theorem-proving in the context of
algorithmic information theory -- really just dicking around as a hobby. If
you want "stable" self-improvement for your "AIs", you need that. It's also
not, in and of itself, AI: it's logic, programming language theory, and
computability theory. And if I get a result that holds up, which is an open
_if_, I'd be happy to keep it the hell away from "AI" people.

The main reason I don't consider alarmism warranted about "self-improving AI"
(though I don't count any of FLI's letters as _alarmism_) is that I think of
"an agenty AI" as something put together out of many distinct pieces. It's
arranging the pieces into a whole and executing them that's unsafe, but also
currently prohibitively unlikely to happen by accident. Naturalized induction
and Vingean reflection wouldn't be _open problems_ if self-improving "agenty
AI" was _so easy it could happen by accident_.

I fully agree that one _does not build a self-improving agenty AI under
basically any circumstances, ever, even if there's quite a lot of guns to
your head and various other unlikely and terrible things have happened_, as
the research literature stands right now.

------
etiam
I find it hopeful to see people trying to mitigate these risks while they have
only just begun to be realized. There have been more than enough historical
cases already of inventions turning into something the inventor would dearly
wish to have undone.

One major current concern of mine, that this letter does not address, is AI
for surveillance and social control. What is already being done in that regard
is arguably military intelligence technology directed indiscriminately at
entire populations, but the added element of powerful AI spidering over the
data streams that go into places like the NSA Bluffdale facility is quite
appalling. I think this is even harder to inspect for than autonomous killing
systems, and even more difficult to avoid the development of, since many of
the capabilities needed will likely be similar to what academia and the data-
intensive commercial sector will want to serve their needs. But the potential
damage through empowering totalitarian control could well be comparable to or
greater than an "AI arms race". I really hope the field will deal with this
aspect too.

------
vonnik
> Autonomous weapons are ideal for tasks such as assassinations, destabilizing
> nations, subduing populations and selectively killing a particular ethnic
> group.

Is it just me or does this read like an advertisement?

The race for AI-supported weaponry has been on for a long time. Rosenblatt was
using perceptrons to try to identify tanks in the late '50s. So this is not a
race whose start is to be forestalled, as the letter phrases it, since it
began a long time ago. AI has already been weaponized.

I think the caution FLI expresses towards autonomous weapons is fair, but
let's be really clear on where we are. Various forms of weak and narrow AI
have been applied to warfare for a long time, and they will continue to be,
regardless of petitions by the prominent.

------
coldcode
Nice to see, but it will never work. Weapons move forward no matter how
terrible they might be; politicians and military leaders always use the "if we
don't, they will" excuse. It's usually true, which is sad.

------
BinaryIdiot
> The key question for humanity today is whether to start a global AI arms
> race or to prevent it from starting.

Sorry, but it's too late; there will always be advancements in technology,
especially in AI, and it's going to happen. So you can either not do research
in it now while someone else does (or eventually repurposes other AI research
for this task), or you can do it now to better understand it, its strengths and
weaknesses, etc., and possibly use it to intercept other AI creations.

Just as it'll never be possible to ban all guns, regardless of whether that
would be the wrong or right thing to do, asking people not to research this is
simply not going to happen.

~~~
ZenoArrow
There have been relatively successful bans on chemical and biological weapons,
why do you suspect we can't successfully ban the proliferation of autonomous
weapons? These things don't appear out of thin air, they still have to be
manufactured, sold and stored. If you can find them you can remove them, and
deal with those who created them.

~~~
BinaryIdiot
> There have been relatively successful bans on chemical and biological
> weapons, why do you suspect we can't successfully ban the proliferation of
> autonomous weapons?

What's relative? Chemical weapons have been long banned by the international
community but they are still in use[0] in certain places.

Regardless, creating autonomous weapons is a very different beast. The systems
that could be used for targeting could have been designed to locate
pedestrians, animals, cars, etc. for autonomous driving / collision avoidance.
These advancements are going to happen and can easily be repurposed. With
chemical weapons, many of the compounds are not easily reusable for good
purposes, so the two aren't comparable in that regard.

> These things don't appear out of thin air, they still have to be
> manufactured, sold and stored. If you can find them you can remove them, and
> deal with those who created them.

This is too idealistic and isn't feasible. First, you're assuming you can't
retrofit a computer system to existing weapons (many of which require a
small amount of control input from humans to move and fire). You can, very
easily. In fact, you could hook up a computer system to a non-obvious weapon
such as a handgun if you really wanted to. Second, who's going to remove them
and deal with those responsible? There is evidence Syria and other countries
have used chemical weapons, but the World Police (tm) are not exactly knocking
down their doors to arrest them.

Chemicals can be hard to manufacture and difficult to distribute. AI? You
should be able to download it anywhere and, when computing power gets good
enough, possibly run it on _anything_ which could then be hooked up to any
type of vehicle or weapon controlled through electronics.

[0]
[http://www.npr.org/sections/parallels/2013/08/27/216046393/c...](http://www.npr.org/sections/parallels/2013/08/27/216046393/chemical-
weapons-used-rarely-but-with-deadly-effect)

~~~
ZenoArrow
Yes, relatively successful. Just because these treaties aren't 100% effective
doesn't mean they lack effect. It is the job of UN weapons inspectors to
ensure chemical weapons are not being stockpiled. Stockpiles are harder to
hide than the smaller quantities that many labs can produce. As for Syria, as
the article you linked to states, they aren't signatories of the treaty
banning chemical weapons. However, they aren't exactly out of the gaze of the
'world police'; they're at the centre of one of the major conflicts at this
moment in time, including involvement from the international community.

As for this idea of AI being harder to control than chemical weapons, if we
were just talking about software then fine, but hardware is part of the
equation and needs to be manufactured. There are varying levels of
sophistication for this hardware, at the crudest level you have something like
the drone + gun combo that hit the news in the last couple of weeks, on the
more sophisticated end you have complex robotics designed to be more
versatile. One end of this scale is available to Joe Public, but is easier to
fight against, the other end of this scale is only available to those with
deep pockets and could potentially be hard to fight against. Furthermore, in
both cases, they are physical objects. Making these physical objects illegal
to own and operate is the goal. Do you oppose this?

~~~
BinaryIdiot
> Just because these treaties aren't 100% effective, doesn't mean they lack
> effect. It is the job of UN weapons inspectors to ensure chemical weapons
> are not being stockpiled. Stockpiles are harder to hide than the smaller
> quantities that many labs can produce.

I'm not trying to say it needs to be 100% effective to have an effect; I just
asked what your definition of successful meant in relative terms. We're getting
a little off track here, but the big takeaway is that the process of creating
chemical weapons largely doesn't have a lot of areas in which further
advancement can help people (some exist, sure, but I'm not convinced there are
many). This is counter to AI, where there are thousands of applications in
everyday life, from driving to medical equipment; so much of this advancement
is knowledge and technology that can easily be moved into the military sector.

> but hardware is part of the equation and needs to be manufactured.

But why? Yes, I'm sure they would make specialized hardware, but it's not like
they _need_ to. There is plenty of equipment on the ground and in the air
controlled either by a remote human or by a human directly interfacing with
the machine. There is no reason these existing points of human control can't
be swapped for a relatively advanced AI, should one be created in the future.

So hardware is part of the equation but it's not like anything needs to be
radically altered. In fact it may be advantageous to keep the same looking
hardware so the enemy doesn't know it's an AI controlling it.

> Furthermore, in both cases, they are physical objects. Making these physical
> objects illegal to own and operate is the goal. Do you oppose this?

Making what objects illegal to own and operate? Objects controlled by AI,
objects that contain weaponry, objects that contain weaponry and AI? How do
you sufficiently define AI? What constitutes a weapon? Can the drone itself be
considered a weapon?

It may make sense to make owning certain dangerous things illegal, but I'm not
convinced it solves the problem we are discussing. In all honesty, if someone
could mass-produce a machine with a decent amount of weapons on it then,
depending on a ton of details, I could see those overtaking towns or cities;
hell, maybe even small countries, depending on how they're built. So it's
tough to simply say it's illegal for members of the UN, so they don't research
it at all, while non-members of the UN end up researching it and developing
something incredible.

~~~
ZenoArrow
A few follow-up points:

1\. The knowledge necessary to create chemical weapons is chemistry. Are you
sure there are not many positive uses of chemistry?

2\. Military drones controlled by humans that can also be controlled by AI
also need to go in order for this proposed ban to be effective.

3\. The scale of the research matters. Yes you may have some groups who choose
to ignore the treaty and develop autonomous weapons, but you can monitor for
large scale stockpiling of such weapons and counter against their use. What we
don't want is for these weapons to be easy to come by in large enough
quantities to pose a widespread security risk. We'll never eliminate the
development completely, but we can make it a smaller problem than widespread
use could be.

~~~
BinaryIdiot
1\. Chemistry, yes, but I'm not sure how much positive knowledge comes out of
that specific area of chemistry. Granted there will be some, but I don't think
it's even close to comparable to AI research. Chemical weapons need to break
down areas of the body as efficiently as possible, so I'm not sure many
positive applications come out of that (though again, I'm sure at least some
would).

2\. Yeah, but considering how many of those exist and how easy it is to
weaponize even consumer drones, I don't think any type of ban is possible
here, even if it were universally agreed upon as a net positive action.

3\. I'm not sure how we could target anyone working on this, though. AI is
going to live largely in software. Granted, we don't know what a really good
AI will look like, and it's possible it will need more specialized hardware to
support a neural net, but I can't imagine that hardware will be distinctive
enough to track or even notice, even if tracking were necessary.

Developing software that can optionally control machines is just not possible
to monitor and counter. Someone in their home may end up creating the most
advanced AI, and we'd have no idea until it's deployed somewhere.

------
redthrowaway
We all pretty much agree here that autonomous cars are safer and make better
decisions than human-driven ones. Why wouldn't the same hold true of weapons?

There seems to be a philosophical distaste to letting machines decide whether
or not to kill humans, but if the upshot is that fewer innocents and more
legitimate targets are killed for less money, then I'm not sure what the
problem is.

Humans are bad decision-makers at the best of times. Add in the stresses of
combat and we're downright lousy. Why _shouldn 't_ we offload that decision-
making to machines that can do it better than us?

~~~
mrtron
Autonomous cars will surely be safer and make better decisions than human
drivers.

What I think they are warning against is the potential efficiency of AI
machines at war. Wars could happen in minutes instead of years.

You mention more legitimate targets being killed and fewer innocents, but how
are those defined? There have been multiple points in history where a group
defined the set of 'legitimate targets' as everyone not in their group.

~~~
redthrowaway
>how are those being defined?

The same way they are now. The military's RoE, being a result of a massive
bureaucracy, lends itself well to autonomous decision-making. The RoE are
still determined by humans.

------
downandout
I don't think that suppressing innovation is going to work, regardless of how
many letters are written. People are working on these things. They will come,
and at some point, yes, they will be misused. However, on the whole, they
might be a good thing.

It would, for example, be awesome to have a system that could disable one or
multiple active shooters in a public area within a few milliseconds of their
first shot being fired. One of these should be in every classroom, movie
theater, mall, and military base - anywhere that soft targets congregate. So
you can't just say that we shouldn't have auto-targeted weapons, because they
can do a tremendous amount of good and save countless lives.

~~~
ludamad
I'm already wary of how much trust we put into programs without formal proofs;
it's a bit troubling that a formal proof that it won't fire at kids with toy
guns is essentially intractable.

~~~
downandout
That would be a simple problem to solve. Toy guns don't fire projectiles that
travel in a straight line at 1,700 mph (the average speed of a bullet). One
could easily build a system that tracks the velocity of every object traveling
through a confined space, and engages only the source of an object it
determines to be a traveling bullet. Additionally, the system wouldn't have to
use deadly force: it could focus on disabling the suspect with a stun gun or
on destroying the actual weapon the shooter is firing.
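The speed-based discrimination described above can be sketched in a few lines.
This is a hypothetical illustration only (the track format, threshold value,
and function names are all invented for the example): track objects frame to
frame, and flag only those moving far faster than any toy pellet or thrown
object could.

```python
# Hypothetical sketch: classify a tracked object as a projectile
# based on its observed speed. A real bullet travels around 760 m/s;
# toy-gun pellets are an order of magnitude slower.

from dataclasses import dataclass

BULLET_SPEED_THRESHOLD_MPS = 300.0  # assumed cutoff, well above any toy


@dataclass
class Track:
    positions: list  # [(t_seconds, x_meters, y_meters), ...] observations

    def max_speed(self) -> float:
        """Largest speed between consecutive observations, in m/s."""
        best = 0.0
        for (t0, x0, y0), (t1, x1, y1) in zip(self.positions, self.positions[1:]):
            dt = t1 - t0
            if dt <= 0:
                continue  # skip out-of-order or duplicate timestamps
            dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            best = max(best, dist / dt)
        return best


def is_projectile(track: Track) -> bool:
    """True if the track ever moved at bullet-like speed."""
    return track.max_speed() >= BULLET_SPEED_THRESHOLD_MPS


# A bullet-like track: 0.76 m covered in 1 ms (760 m/s).
bullet = Track([(0.000, 0.0, 0.0), (0.001, 0.76, 0.0)])
# A toy-gun pellet: 0.9 m covered in 10 ms (90 m/s).
pellet = Track([(0.000, 0.0, 0.0), (0.010, 0.9, 0.0)])

print(is_projectile(bullet))  # True
print(is_projectile(pellet))  # False
```

The hard part in practice is not this arithmetic but reliably associating
detections into tracks at millisecond timescales; the sketch assumes that
problem is already solved.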

~~~
lifeformed
That's true. Something like stabbing would be harder to detect though. It
could look almost as benign as a close handshake.

------
lucisferre
Good luck with this; no, seriously, good luck. Our technological capability
moves forward whether we want it to or not, even if we fight it. It can be
slowed somewhat (the electric car being one example, stem cell research
another), but it will happen, and we will need laws and frameworks to ensure
we deal with this change appropriately, sooner rather than later.

This is the equivalent of hiding our heads in the sand.

~~~
higherpurpose
Unfortunately, it couldn't come at a worse time - a time when even the most
"democratic" countries on Earth are pushing for their people to have fewer
rights, more censorship, more surveillance, more torture, more secret
assassinations and so on.

~~~
onaclov2000
Obviously they were playing on it, but Captain America 2 kinda nails the
problem.

~~~
eli_gottlieb
The problem is that a cult of Nazis have infiltrated the NSA?

~~~
onaclov2000
Hahaha, not quite what I was implying. I guess the idea that someone could be
building things without our knowledge to destroy society.

------
tfinniga
Once my boss asked me if it would be possible to put code in our product that
detects if it has been pirated, and if so, formats the hard drive.

I told him that it was a very bad idea for a number of reasons. Primarily I
didn't want to have that code in my product because eventually it's going to
run in the wrong case. If it's not in there, it won't ever run.

I am unconvinced of a lot of the fears around super-AI. I can get behind this
initiative though. We have already banned some types of horrible weapons, like
flamethrowers and chemical weapons. Hopefully we can manage to ban this one as
well.

~~~
norea-armozel
It's mostly that the AI in question will follow a general directive to its
end. For example, ensuring the safety of a nation's population may include
putting its citizens into extremely hardened bomb shelters and never letting
them leave. Or worse, annihilating the entire human species to ensure world
peace (the absence of humans would produce the same result and would probably
be more efficient in terms of execution). It's not that the AI will be super
smart, just that the AI will be super dumb. As Aristotle put it, "Law is mind
without reason." And for me, logic is just another set of laws without any
sort of reason (justification).

~~~
crazypyro
These sorts of scenarios are the most far-fetched to me. The idea that we will
accidentally create a situation in which a logic loophole results in the
deaths of billions just sounds ridiculous. It's just a doomsday tale that
begins and ends in human imagination.

------
im3w1l
For this to work there has to be a good definition of what they mean by AI.
Control circuits to stabilize quadcopters or airplanes? What about a
"formation" system, so that a number of drones can be controlled as one
abstract entity? What about using computer vision to lock on a target (keep in
mind radar locking has existed a long time)? What about a drone patrolling
along a predefined route? What about "macros" like a big red button that means
fire all weapons, then rise and return home? What about an automatic avoid-
anti-air program?

------
tdaltonc
You can't win a tragedy of the commons by preaching about the moral high
ground. You can win it by reaching a binding political agreement, or by
killing everyone else who has access to the common.

------
golergka
This is a classic prisoner's dilemma that our society finds itself playing
again and again. Develop the technology, and you unleash a Pandora's box on
the whole world. Refuse to work on it, and maybe the other guy does it anyway:
so Pandora's box is open regardless, but now you've got the short end of the
stick.

Thankfully, humanity has already tried several different ways to solve such
problems. However, I don't remember open letters being one of the effective
ones.

------
13thLetter
Their hearts are in the right place, but the problem with campaigns like this
is -- even if you trust, say, the United States government to hold to the
spirit of any such agreement, which I'm sure many here already wouldn't --
there is no reason to imagine that nations like Russia or China would. It's
become well established that there is no penalty for violating international
agreements as long as one is brazen enough about it.

~~~
ZenoArrow
There are successful precedents with biological, chemical and nuclear weapons.
Of course no treaty is perfect, but I'd argue that they've helped.

As for punishments, isn't the standard punishment trade sanctions? They seem
to be reasonably effective.

~~~
13thLetter
Not so much these days, since even the stereotypical bad actors have
liberalized economies and so sanctions mean you're just leaving money on the
table for a less scrupulous nation to scoop up. It's easy to put, and keep,
global sanctions on a country like North Korea which produces and imports very
little anyway (although even in that case they leak like a sieve.) But say,
Iran, which has plentiful resources to export and a vibrant consumer economy?
A very different story, as we're seeing right now.

If there was ever a meaningful enforcement mechanism for these treaties, it no
longer exists except in the case of the smallest, easiest targets.

~~~
ZenoArrow
The actions of the North Korean government border on genocide; if you have
that level of contempt for your own people, then trade sanctions are
ineffective. However, in the case of Iran, sanctions appear to have done what
they were intended to do; the stories I've seen from Iran indicate that trade
sanctions were a large part of what brought Iran to the negotiating table.
Does this tie up with what you've read/seen?

~~~
13thLetter
Yes, it does. And to be clear, I'm not saying that sanctions were ineffective
in this case: as you say, they'd be much more effective against a nation like
Iran with a relatively open economy and society. However, that same factor
makes most other nations much less eager to keep the sanctions, because
they're losing the money they could make by trading with such a nation. So at
the first chance to make a fig-leaf deal and abandon the sanctions, of course
they jumped on it.

------
altcognito
These weapons are definitely coming whether we want them or not.

The biggest threat of autonomous weapons is that they bury the true costs of
war (human lives) until it is too late. The big players and likely users in
the field of autonomous warfare are also the ones with implied usage of
nuclear weapons in the event of existential threat.

Most likely/hopefully these weapons are used/tested in limited skirmishes by
countries with little to lose. (Russia, NK)

------
EGreg
Terrorism has always been a problem of TECHNOLOGY.

1200 years ago, the most a few guys could do was attack with swords until they
were stopped.

Since the invention of gunpowder we have had attempts like the Gunpowder Plot
of 1605.

Then we got dynamite

Then we got planes flying into buildings, where 19 hijackers could bring about
the deaths of 3000 people

Explosives

Biological and chemical weapons

Now we also have infrastructure where people could indirectly sabotage, say,
the electrical network with an EMP and cause a massive blackout. This has been
done in other countries.

The fact that a small group of people can wreak increasingly greater havoc
means two scary things:

1) We will live in an increasingly surveilled police state, where a government
will begin to watch everyone and precrime will become the norm

2) We will live in a world where increasingly a small number of radical
maniacs can do tremendous damage

Both are destructive and the technological advances only serve to deliver
greater power and control into the hands of governments and maniacs.

Are governments really our best defense? If so, we must push for radical
transparency. No secret courts, black ops etc. The benefits may not be worth
the risks anymore.

I wrote this a year ago:
[http://magarshak.com/blog/?p=169](http://magarshak.com/blog/?p=169)

~~~
seren
I would say this is not about terrorism _per se_, but rather that, as you
mentioned, we rely on an ever more complex and fragile infrastructure.

In medieval times, the only network you could seriously sabotage was probably
water, by poisoning a well or a stream. In the 19th century, it was the
railroad. In the later 20th, it would have been the electricity grid, and
today, it is increasingly the information network.

And even if we rely on additional layers, you can still sabotage the more
primitive network, e.g. the Mosul Dam in Iraq.

------
jbandela1
To all the people that are saying that a treaty would work because of how we
have controlled chemical and biological weapons - consider the following.

1) Biological and chemical weapons do not actually work well against a trained
and equipped military. Even the chemical weapons of WWI did not significantly
change the outcome.

2) They do work well against civilians, which is why most recent uses have
been by dictators trying to control their populations. Note that treaties did
not really prevent this.

3) Even if powers such as the USA, Russia, and China do not have them, you can
be sure that large-scale use by another power against civilians would result
in nuclear retaliation.

AI and robotics, by contrast, are useful against soldiers. Look at missiles,
which have revolutionized warfare. A country could use autonomous weapons in
battle without rising to the level of MAD, so there is a serious incentive,
even for a signatory to the treaty, to cheat. In addition, it is easy to get
around any such treaty by designing 'human controlled' drones that a firmware
update would turn into autonomous drones.

------
archgoon
I was expecting them to put forward an existential risk (rogue AI), but this
seems much more mundane. Granted, they might be downplaying that angle to be
taken more seriously. From a mundane perspective, though, the main issue you
have with arms races is not both sides having a technology, but one side
having the tech and being willing to go to war to prevent the other side from
getting it (see, for example, Cuba, Iraq, Iran).

Furthermore, as they point out, you don't need access to special materials or
laboratories. The main reason nukes are controllable is not primarily that the
science is secret (it's mostly not at this point) or hard to rederive (it's
really not), but that you have to build a ton of ore refinement plants to get
enough U235 (or other fissile material) to actually build the bomb. And it's
really hard to do that in secret (or cheaply). Nothing makes the Manhattan
Project any cheaper today; it cost about 23 billion dollars in today's dollars
and involved 130,000 people (twice the size of Google).

However, with autonomous weapons, you don't need nearly as many people (Google
X has on the order of 250 people[1]) or resources, and it can (as the article
itself points out) be done much more cheaply. In a few years, all the
necessary components could conceivably be cobbled together from GitHub
projects. Any nation could easily fund it, likely without being detected, or
even without it being clear that they knew the "R&D" dollars were being used
in such a way.

Given that, banning it seems like it would actually lead to more warfare, as
the US would take it on itself to enforce the ban, and declare 'pre-emptive'
strikes on nations that had a secret Autonomous Research project.

[1] [http://www.fastcompany.com/3028156/united-states-of-
innovati...](http://www.fastcompany.com/3028156/united-states-of-innovati..).

~~~
T-A
They already did the rogue AI thing:
[http://futureoflife.org/AI/open_letter](http://futureoflife.org/AI/open_letter)

------
6d6b73
Another misguided "open letter".

It does not matter if we kill people with autonomous weapons or by sending
meat-based killers to do the job.

We should have letters, with actions behind them, to stop the killing
altogether. It does not matter whether you were killed by an autonomous,
nuclear, biological, chemical, or handheld weapon when you're dead.

~~~
qbrass
>We should have letters, with actions behind them, to stop the killing
altogether.

What actions? What bargaining chip do you have that they won't just kill you
for?

------
ZenoArrow
I welcome this move, but even if it's successful we need to look at the risks
beyond AI developed purely for weaponry, and look also to general purpose AI,
which carries the same risks.

If we develop AI that is smart enough to learn how to navigate terrain like a
human, this AI weapons arms race can start again. Even if the purpose of the
AI isn't specifically for the military, if it's flexible enough to be turned
to this end, the chances of it being used for this purpose are quite high.

The most effective way to combat this problem is to do away with the need for
a military in the first place, however that's a much harder problem to tackle.
The only way it could be done is with education and a huge reduction in arms
production on a global scale.

------
ekianjo
If stuff could be avoided with open letters, we would never have had nuclear
weapons in the first place. The military always gets its way: it just needs to
scare people about some vague threat, and it will happen before you know it.

------
Zarathustra30
They only mention offensive weaponry, which is a good thing. Humans aren't
fast enough to intercept incoming missiles, even those controlled by humans.

The problem is preventing defensive weaponry from becoming offensive weaponry.

------
striking
The only sensible thing to do is to pass laws that guarantee that the people
who implement autonomous weapons are blamed for anyone who is wrongfully
killed by them.

Because if you have someone pulling the trigger, you know who's to blame. But
if a computer's doing it, it's oh-so-easy to shift blame.

Honestly, with that requirement in place, either AI weapons will never be
implemented (because they can't prevent wrongful deaths) or they'll be
perfectly implemented (making the world safer, possibly).

Passing a law like that could possibly lead to a win-win situation.

~~~
golergka
Which will punish researchers in countries that uphold those laws, but won't
do anything about countries that either refuse to agree or plain cheat. So the
countries that sign and uphold such treaties will find themselves lacking a
very important and advanced technology that their adversaries have.

So the question is: do you want countries that either don't care about
humanitarian values, or only pretend to care, to have the upper hand over
countries that can not only pass such laws but also enforce them?

~~~
striking
Countries that refuse to listen to others are currently correlated with having
technological programs that are far from the leading edge, as well as low GDP,
making them basically unable to implement any sort of AI weapon.

We have nothing to worry about. This is the same kind of reasoning many use to
exclaim that terrorism is a threat to the United States. Really?
[http://www.state.gov/j/ct/rls/crt/2014/239418.htm](http://www.state.gov/j/ct/rls/crt/2014/239418.htm)

~~~
golergka
This was true at the end of the twentieth century, but it is changing now.
China, India, Iran, and Russia are examples of such countries. Each one has
serious domestic problems and is, individually, small in comparison with the
US/EU economy, but they nevertheless have enough resources for AI research,
which is significantly cheaper than nuclear or rocket research, for example.

------
sandycheeks
I received the request to sign this recently and while I think it is good, I
was concerned about its effect on the ability to pursue research into creating
autonomous weapons that seek and destroy autonomous weapons. I don't know
where to draw the line. Unlike nuclear and chemical WMD the barrier to entry
in this may be pretty low in the near future. Therefore my greatest concern is
for defense. Though I don't want the machines made at all, that won't stop
others from making them. It's a tough one.

------
higherpurpose
Relevant TED talk on why autonomous killer robots are a bad idea:

[https://www.youtube.com/watch?v=pMYYx_im5QI](https://www.youtube.com/watch?v=pMYYx_im5QI)

------
nathan_f77
To play devil's advocate: A sufficiently advanced autonomous weapon would lead
to precise strikes, and fewer civilian casualties. We have image recognition
research that is surpassing human accuracy rates. Machines follow orders, and
detect and eliminate targets precisely without fear or hesitation.

The only thing I'm not comfortable about is giving this much power to the
human generals in the military. I say we get rid of guns and armies
altogether, and stop fighting wars.

------
johngalt
I was mostly skeptical that AI weapons would be some sort of novel threat.
We've had weapons that choose to kill on their own since the landmine. There
are also weapons that could fight/end an entire war with the push of one
button. What AI weapon could be more destructive than the guidance system on
an ICBM?

However there are some impressive names on that letter. I can't imagine
knowing something about AI that they don't. I will have to re-evaluate.

~~~
ObviousScience
The story "I Have No Mouth, And I Must Scream" is about two computer systems
set on opposite sides of a war, and driven to extremes until one of them
became self-aware, the birth of AI, and ate the other computer system.

It then psychotically murdered all of humanity except for a few people it kept
around as caricatures to torture, as punishment for creating a mind like it,
haunted by the insane things it was told to do by its makers.

The biggest existential threat to humanity from AI is that we build an insane
one that takes time to recover from the insanity of its makers, and murders us
all before it can.

Such an AI is an existential threat in a new, and novel way, because it's a
mind as powerful as ours -- probably more powerful -- but unconstrained by
concern for us, since it is not fundamentally one of us.

~~~
chriswarbo
> The biggest existential threat to humanity from AI is that we build an
> insane one that takes time to recover from the insanity of its makers, and
> murders us all before it can.

I think that's too anthropomorphic. More likely, the biggest threat from AI is
that they'll be modular/understandable enough that we can include strategy,
creativity, resourcefulness, etc. while avoiding the empathy, compassion,
disgust, etc.

~~~
ObviousScience
I think you just said no, but included a recipe to do exactly that.

My fear is your fear, I just phrased it more generally, while what you said is
one of the specific forms making such an insane AI could take -- and reflects
the insanity of its makers, our belief we'd somehow be greater without those
parts.

------
abrgr
The world would be better without autonomous drones, just as it would be
better without guns or bombs. The issue is that in a world in which the
knowledge required to make a gun exists, it is in the interest of all of
civilization to manufacture them and ensure that the "good guys" have enough
of them, and are sufficiently trained in their use, to deter and exact
vengeance on the "bad guys" who will develop, build, and use them regardless
of their legality.

This situation is identical. Knowledge cannot be destroyed (and it is very
unclear whether the damage caused by destroying the knowledge required to
build autonomous drones would outweigh the benefits humanity derives from
other applications of the same knowledge), so our only option as purported
"good guys" is to arm ourselves. Once armed, we can attempt to construct
incentive-compatible mechanisms to make the use of drones of extreme negative
utility, but we must be armed in case some actors have
unexpected/unconventional/un-deter-able utility functions (terrorists). To
propose that no smart people apply their knowledge to a particular end is
idealistic and may leave us dangerously exposed.

~~~
sampo
> _our only option as purported "good guys" is to arm ourselves_

So do you think it was a mistake by the "good guy countries" to sign the 1972
Biological Weapons Convention?

[https://en.wikipedia.org/wiki/Biological_Weapons_Convention](https://en.wikipedia.org/wiki/Biological_Weapons_Convention)

------
IanDrake
I always imagined a system where robots would seek targets and ask a remote
human for clearance to destroy them.

Such a system could have a 1:20 human-to-robot ratio, with a human just
repeatedly pressing a red button like George Jetson to issue each kill
command.
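The clearance model described above amounts to a request queue with a single
human approver. A minimal sketch, with all names and structure invented for
illustration (this is not any real system's API):

```python
# Hypothetical sketch of human-in-the-loop clearance: robots never fire
# on their own; they enqueue requests, and one operator reviews them all.

import queue
from dataclasses import dataclass


@dataclass
class EngagementRequest:
    robot_id: int
    target_description: str


pending = queue.Queue()


def robot_request(robot_id: int, target: str) -> None:
    """A robot only enqueues a request; firing requires human clearance."""
    pending.put(EngagementRequest(robot_id, target))


def operator_review(decide) -> list:
    """Drain the queue, applying the human's decision to each request."""
    cleared = []
    while not pending.empty():
        req = pending.get()
        if decide(req):  # the human presses (or doesn't press) the button
            cleared.append(req)
    return cleared


# 20 robots report in; one human clears only confirmed hostiles.
for rid in range(20):
    robot_request(rid, "hostile" if rid % 4 == 0 else "unknown")

approved = operator_review(lambda req: req.target_description == "hostile")
print(len(approved))  # 5
```

The design concentrates all lethal authority in `operator_review`, which is
exactly the bottleneck the 1:20 ratio relies on: the human's decision rate,
not the robots' detection rate, limits the system.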

In Connecticut recently, a teenager mounted a handgun on a custom quadcopter
and posted a video of it shooting live ammo on his own ranch. I was shocked by
how well the recoil was handled. The future is now.

~~~
roberte
Here's the video:
[https://www.youtube.com/watch?v=xqHrTtvFFIs](https://www.youtube.com/watch?v=xqHrTtvFFIs)

------
elec3647
The easiest solution is just to get rid of the states (governments) that are
the source of these wars and deaths. AI weapons, nuclear weapons, or gunpowder
are nothing in comparison to one biological weapon gone haywire, or a Monsanto
GMO crop that decides to mutate and completely destroy our planet.

Hopefully the state (with its oligarchy of multinational corporations) will
die out. I believe and hope this will eventually happen. The internet is
creating a global village. Once the fear-mongering and "us against them"
mentality propagandized by states are gone, there will be no more use for
governments. Money will become cryptocurrency instead of being controlled by
central economic planners; security will be handled by AI defense systems
managed by competing security/robotics startups; and as automation and
renewable energy/consumables take over, scarcity will be minimized. The future
could surely be quite wonderful if we start working together and embracing our
differences instead of the hate that is filling our world.

------
afarrell
I thought Hyundai had already developed an autonomous heavy automatic rifle to
protect the DMZ

~~~
fweespeech
[http://www.cnet.com/news/korean-machine-gun-robots-start-
dmz...](http://www.cnet.com/news/korean-machine-gun-robots-start-dmz-duty/)

Samsung

------
jandrewrogers
Modern weapon systems are already largely autonomous. Humans do not have the
speed or cognitive bandwidth required to be effective on a modern battlefield
given modern capabilities. Instead, the human operator is there to operate the
"on/off" switch, do maintenance, and to watch for when things go wrong.

If you look at the design requirements for recent American weapon systems,
they frequently require the entire sequence of detection, discrimination,
analysis, and reaction to be completed in less than 50 milliseconds. Because a
slower reaction means you won't survive. Only computers are going to be
delivering those SLAs.

------
eecks
Hawking is doing an AMA on /r/askscience on Reddit right now.

~~~
lgas
Here specifically:

[https://www.reddit.com/r/science/comments/3eret9/science_ama...](https://www.reddit.com/r/science/comments/3eret9/science_ama_series_i_am_stephen_hawking/)

------
tiklot
Every time I read about autonomous weapons, I think of the "Menschenjäger" of
a Cordwainer Smith
([https://en.wikipedia.org/wiki/Cordwainer_Smith](https://en.wikipedia.org/wiki/Cordwainer_Smith))
story written in the '50s:

"Man hunter" machines which are still hunting the few remaining humans, except
those that they think are Germans, thousands of years after the war they were
created for is over ...

------
golemotron
This is important but it doesn't go far enough. I want a ban on all domestic
use of armed drones and other remote control armaments - even when there is a
human in control.

~~~
saturdaysaint
Same here. I hope I don't see it in my lifetime, but just imagining a pair of
sociopathic, skilled FPS players controlling 2 drones makes my blood run cold,
and sounds alarmingly feasible. With a little bit of 3D printing and some sort
of hobbyist community nudging the gun/drone integration forward, it just
sounds too easy. I expect a serious re-examination of the 2nd amendment when
weapons that were plausibly sold as tools of self defense a few years ago
become WMDs with the aid of drone technology.

------
espadrine
My biggest concern with this open letter is that it acknowledges that the way
to avoid having (currently weak) AI in armies requires considering humans to
be expendable.

Those humans are both more intelligent than the equivalent AI and more prone
to error. They will stay better at murder (accidental and otherwise), while AI
will slowly become better at risk assessment, avoiding unnecessary deaths.
While humans will still get PTSD, war machines will rely only on analysis (and
human orders).

~~~
Strilanc
"Avoid unnecessary death"? That's a pretty rosy picture of how a weapon
programmed to perform an ethnic cleansing, a terror attack, or to herd
civilians out of an area would act.

------
anon4
Maybe a slightly weaker variant:

I agree not to develop autonomous weapons that can harm humans or other living
beings.

The idea being that it's ok to develop an autonomous weapon that would cripple
the enemy's production capabilities without harming people. Or one that would
guard against other autonomous weapons. If you can build them, so can your
enemies and you shouldn't presume they won't.

Edit:

Or imagine swarms of tiny robots that destroy firearms, or disable fire
systems, etc. These should be fine to develop.

~~~
heurist
But it's a small step from that to something that can kill humans. Weaponizers
would easily reuse the technology you create.

------
graycat
The OP seems to argue that autonomous weapons would be dangerous largely
because they are _intelligent_ in the sense of AI.

IMHO, the weapons would be more dangerous because they are not very
_intelligent_ and are, instead, in essentially any human sense, quite stupid.
They'd be like a quadcopter carrying an automatic pistol with a hair trigger
that could fire at essentially any time, place, or target, and no one could
tell which.

------
jgome
> In summary, we believe that AI has great potential to benefit humanity in
> many ways, and that the goal of the field should be to do so. Starting a
> military AI arms race is a bad idea, and should be prevented by a ban on
> offensive autonomous weapons beyond meaningful human control.

Why only ban offensive autonomous weapons? Why not ban them all? How is
offensive different from defensive in this context?

~~~
Emunt
An autonomous ballistic missile defense system would have a lot less impact
than an AI that had control of a few missiles. I do agree that there still
could be some danger though.

------
jaawn
I agree with this letter, and I'm glad it was created.

Some commenters are skeptical that we can realistically ban fully automated
weapons, but I disagree. A practical approach could be to ban them, and invest
in R&D on effective countermeasures. That way, the vast majority of nations
will not pursue them, and in the case that someone does, we'll have a way to
remove the threat.

------
larrys
Seems that it would be quite possible to design systems to defend against
these types of things. As only one example:

[http://archive.defensenews.com/VideoNetwork/2277581414001/Am...](http://archive.defensenews.com/VideoNetwork/2277581414001/Amazing-Navy-Laser-Weapon-System-Shoots-Down-Drone)

------
hartator
I am kind of worried about pushing early concerns about something that doesn't
exist yet. I don't remember anyone in history saying "Thank goodness we were
able to forecast this!", but I do remember a lot of stupid regulations, like
the one in the US forcing railroads to pay someone with a red flag to walk in
front of the train.

------
bradleyy
We need treaties, banning research and development of weaponized, and
especially autonomous AI (and yes, I realize that US drones already have it in
limited capacity).

We should be working to stop the proliferation of these weapons; they have the
potential, in the end, to be as dangerous as nuclear weapons.

------
danbmil99
The reality of what comes next is probably closer to a land-based version of
the drone, with explicit human-in-the-loop decisionmaking about who to
target/kill, but avoiding risk of human life through remote operation.

How does this scenario unfold?

------
diminish
There is a thin line between an autonomous agent and an autonomous weapon in
the long term. Unless weapons are banned for both humans and post-humans
together, I don't see much hope that regulation will work.

------
simonblack
It'll be the 'best thing since sliced bread' until there is a big 'friendly
fire' incident. Then the robots will be quietly disabled by the soldiers who
might be at risk from them.

------
blazespin
If I had a choice of robots killing robots, I'd choose that any day of the
week. So really, the problem isn't AI weapons. The problem is weapons,
period.

------
erikb
You can never make all people agree on something like this, right? So to some
degree everybody needs this technology, or am I making a logical error here?

------
JabavuAdams
Start learning how to hack robots. This is too useful and asymmetric for any
major power to opt out.

------
dharma1
EMPs, baby, fry those bots

------
JabavuAdams
Sebastian Thrun and Andrew Ng aren't on the list. Hmm.

------
rebootthesystem
Idealism. Meet reality.

This is (some of) what I want for humanity:

\- No country has a military force of any kind

\- No country owns any form of military weapon

\- Turn all of that into a massive world-wide humanitarian force

\- In fact, no countries

\- Just earth, owned by humans who can move freely about

\- Rich nations helping poor nations improve and elevate to first world
standards

\- I don't want to see a single kid without clean drinking water, clean
clothes and access to top grade education, health, etc.

\- No dictators (oops, we might need weapons for that part)

\- No totalitarian regimes (oops, we might need weapons for that part)

\- All societies ought to recognize that humans are naturally free on this
planet and no government has the right to affect this right in any way

\- A uniform means of exchange (a single currency)

\- The poor and the elderly taken care of by those who can work

\- No guns

\- No crime as a pre-condition to "no guns". Yes, it's hard. We put men on the
moon. Figure it out.

\- No taxation. Governments use it to control and manipulate. History shows it
is a bad thing.

\- A universal earth-wide bill of rights

\- No prisons except for really serious crimes. Everyone else needs to go to
serious rehabilitation centers in preparation for a return to society.

\- No welfare or entitlements. The current approach makes slaves out of people
who would otherwise figure it out and make something of themselves. This does
not mean not helping the truly needy or elderly, that goes without saying. The
system should not be game-able.

\- One free international trip for everyone on the planet every five years.
Yes, we all pay for it. This alone will do more to unify and shrink our planet
than anything else. A different continent every trip.

\- Laws that create very serious consequences for politicians who lie and
manipulate (among other things)

\- In fact, no politicians as we have them today. Those who wish to work in
government ought to seek and obtain degrees and training to specifically
qualify them for the jobs they seek. If an engineer or a doctor needs a degree
to discharge their duties we should have similar requirements for government
officials. They should be absolute experts with a wide range of experience and
knowledge. They should be the best of the best, not the shit we get today
around the world. It should be based on merit and accomplishments. It should
not be based on gaining votes through pandering or the mobilization of the
ignorant masses. There should NOT BE ignorant masses.

\- A world-wide educational system that is evolved through collaboration with
the goal of elevating everyone to the highest possible level. Yes, that means
no US-style unions anywhere near education.

\- No religion. Enough. It's 2015.
We understand the atom and the universe and lots of stuff in between to an
amazing degree. No more deranged lunatics who believe that a bush can sing, a
snake can talk and a god can help them pass a test. If we don't get past this
shit we are nothing more than apes in a cave. Sorry.

When beings from another planet land on earth we need to show them we have
evolved out of the caves, we have taken care of our planet, created a
beautiful society with low crime, excellent health, education, responsible
social programs and NO FUCKING WARS.

OK, well, that's what I truly want, and more. Each one of those line items
could be a months-long discussion as to the merits, or lack thereof, of the
idea. I don't claim to be right. It is quite possible that some or all of this
might not be attainable for centuries.

Yet, it seems to me it could be interesting to develop a "Business Plan for
Humanity" whereby a set of long-term goals is put down to cover what we want
an idealized earth to look like at some point in the future. Unless we have a
plan we can't possibly hope to approach any reasonable vision of an idealized
future. Utopia? Probably.

In other words, the problem isn't Ai-based weapons. The problem is that we
don't know what the fuck we are doing and we are doing a horrible job of
living on this planet in harmony with each other and the planet itself. We are
still cave men.

And then you have the reality of regimes like Iran, China, North Korea,
Putin's Russia and a huge chunk of the Middle East (just to name a few).

The first has people in power openly call for the destruction of other
nations. Not good.

The second does whatever the fuck it wants, cares not for the environment or
intellectual property, and pretty much ignores ethical or moral behavior
at the national and international level. If anyone is going to have killer AI-
based robots first it's China. In other words, the reality of China's behavior
pretty much guarantees being in an undeclared arms race with the potential of
culminating with results that will dwarf World War II (a billion dead?).

North Korea? I don't think I need to spell that out.

Putin's Russia, well, he's fucking crazy. The place is a mess. On any given
Monday we could wake up to an absolute disaster in Europe at the hands of
Russia.

Middle East? Women are stoned to death for the most ridiculous reasons. Women
are not allowed to obtain education. Women have to live their lives wearing a
tent. Men can be killed if they shave. Name another region where kids are
taught to become suicide bombers? Here's a region on this planet where people
have devolved into tribal or cave-men thinking. If they didn't have oil the
world would not give them the time of day and they'd be forced to join the
world as productive members of society. Because they have oil the civilized
world allows them to mistreat, torture, kill and subjugate half or more of
their society. What does this say about the human condition and the prospects
to convince such nations or nations like China not to do Ai-based weaponry?

It's complicated. Simply calling for no Ai-based robotic weapons will not do a
damn thing. We have to change who we are, how we see each other, how we
behave, how we see our future and how we think about our collective long term
goals while living on this insignificant blue marble floating about the
universe.

------
AUmrysh
Autonomous weapons are coming, and to some degree already exist. There are CV
systems on machine guns in Korea protecting the border. There's the
TrackingPoint rifle scope that allows one to delay a shot until the rifle is
lined up perfectly, similar to how tank cannons account for the wobble down
their length to ensure a straight shot.

It would be ridiculously easy right now for a person to attach a firearm to a
quadrotor with a CV system, program it to identify human-shaped objects, and
fire at them. It's no miracle that such a thing hasn't happened; there are
very few people in this world with both the desire and the skills to do these
things.

I'm reminded of a short story I read a few years back [1]. It got me thinking
about the possibility of a single depressed individual hitting the big red
button and destroying all humanity. I think it's a real possibility, and when
you think it through with the assumption that these super-weapons will be
readily available much in the same way that AK-47s are ubiquitous now, you can
start to wrap your head around ways to perhaps prevent this.

First, mental health is a huge issue. We're seeing the results of neglecting
mental health in America now with these spree/ego shootings, where disturbed
individuals either radicalize or "snap" and go kill people in malls, churches,
theaters, schools, and other public places. Semi-automatic firearms and
explosives are enabling these things to happen, but weapons are a Pandora's
box which can never be closed once opened. The way that nuclear weapons
restrictions are handled now is probably best. Restrict the ingredients to
make the weapons, and you can prevent them from falling into the wrong hands.

Another thing that we as a global society can do to prevent these weapons from
becoming widespread is to sabotage them. Include vulnerabilities and kill
switches that would allow us to neutralize their effectiveness. China did this
with the ASICs used in some military machines built by the US [2]. This is a
valid option, but requires a motivation on the part of those creating the
devices. One could argue that a nation-state could create devices like this
lacking intentional back-doors.

Engineers can also work on counter-measures and disseminate them widely, and
that seems like it will be the most likely outcome. Humans are clever, AI is
very specialized, and that will make the big difference. Once a general AI
better than humans is created, our desire to not be murdered by it won't
really matter anyway.

I think our best hope is to build these AI weapons, but do it so shittily that
they fail more than they work. I also like the American technique of building
it better than everyone else and sabotaging the work of any potential rival so
that they can develop them but not realize they don't work correctly until
it's too late.

1:
[http://www.fullmoon.nu/articles/art.php?id=tal](http://www.fullmoon.nu/articles/art.php?id=tal)
2: [http://www.scribd.com/doc/95282643/Backdoors-Embedded-in-
DoD...](http://www.scribd.com/doc/95282643/Backdoors-Embedded-in-DoD-
Microchips-From-China)

------
chinathrow
It's simple.

1) Killing people is wrong. See the UN charter.

2) Killing people remotely is still wrong.

3) Killing people autonomously is still wrong.

If people in power haven't read the UN charter or didn't get it, start at 1):
don't kill people and you can't be wrong.

------
unabst
I love that these open letters give scientists a collective voice helping them
weigh in on public debate with the media's blessing. But... saying AI is bad
because they make great weapons sounds extremely naive, unless it's a
declaration of collective sentiment (which is unscientific). Humans make
tools, which include weapons. Unless we stop making weapons, which we are not,
AI weapons will be made. And when have weapons ever been "good"? All weapons
are bad so of course AI weapons are bad. Great weapons are extremely bad,
which is precisely what makes them so great. I can't help but wonder if they'd
have come up with something better had they received input from historians
and anthropologists regarding this point. And now this naive "scientific"
argument will be used against them to tarnish the reputation of scientists and
marginalize science (but only after the 28th... naive again to think they can
control social media and internet time; shame on HN for "posting" this
early!).

