Autonomous Weapons: An Open Letter from AI and Robotics Researchers (futureoflife.org)
305 points by espadrine on July 27, 2015 | 288 comments



I don't want anyone to build autonomous weapons, but I don't want anyone to build nuclear weapons or any other weapons of war either; I don't see how to avoid it. If the choice is to either develop and deploy autonomous weapons or to risk having your population conquered and murdered by enemies that use them, then there is no choice.

Possibly, autonomous weapons like chemical weapons won't be important to victory, or like most biological weapons (AFAIK) they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash trading computers, for example. In fact, I wonder if some of the tech is the same for optimizing decision speed and accuracy.

Perhaps the best response by governments is to use their resources to develop countermeasures to autonomous weapons, especially countermeasures that can be acquired and utilized by those with few resources: towns, governments in poor countries, and even individuals.

Also, my guess is that it's an area ripe for effective international standards, treaties, and law. All governments can agree that they don't want the chaos of proliferating, unregulated autonomous weapons and would work to enforce the rules.


I've had a shootout of sorts against a robot. The robot was armed with an airsoft gun, and I with a Glock pistol. The goal was not to kill the robot (since it was expensive and the owner and I had spent a long time getting the machine vision software working) but to avoid being hit by the robot while engaging some other targets.

The course had to be carefully constructed to avoid an immediate robot victory, and the robot wasn't mobile. I wouldn't take the human side in a confrontation with an armed robot driven by a defense budget.

The disadvantages of a robot are limited mobility and difficulty distinguishing friends from foes, the same disadvantages that plague landmines. The advantage is that a robotic force could provide the same area denial as landmines without the long-term consequences: set 20% of the robots to come home and recharge every day, with a week-long battery life, and you've got a very short period during which problems can happen.
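To make the arithmetic concrete, here's a rough back-of-envelope sketch in Python using the numbers assumed above (the 20% daily rotation and week-long battery are this comment's hypothetical figures, and the uniform five-day stagger is my own assumption):

    # Back-of-envelope numbers assumed in the comment above; not a real design.
    battery_life_days = 7        # assumed week-long battery
    rotation_fraction = 0.20     # assume 20% of units rotate home to recharge each day

    recharge_cycle_days = 1 / rotation_fraction               # each unit is due home every ~5 days
    residual_after_missed_return = battery_life_days - recharge_cycle_days

    # Worst case, a malfunctioning unit runs at most 7 days past its last recharge;
    # one that merely misses its scheduled return goes dark within ~2 more days.
    # Compare a landmine, which can stay dangerous for decades.
    print(recharge_cycle_days, residual_after_missed_return)   # 5.0 2.0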


Both problems can be solved with today's technology: make the robot airborne to improve mobility, then tag friendly forces with some IFF broadcast. Declare a curfew and boom, everyone who's not a friendly is an enemy combatant.

I hope I don't get to live to see such a day.


Biological and chemical weapons are considerably easier to exclude by mutual agreement than AI, because the line is pretty clearly drawn. There is not much of an incremental path from dropping explosives to dropping gas containers. The line suggested here seems much more arbitrary and would be more like a wide grey area that would be pushed wider and wider into AI territory by including some form of alibi human interaction just as a formality. As described in the open letter, the "forbidden technology" would sit firmly sandwiched between the well-established technology of "seek to kill" missiles that are fully autonomous once fired (as opposed to "seek, then autonomously decide to kill or not", which would be forbidden) and teleoperated equipment, which is also explicitly allowed. The latter won't be stopped from getting better and better autonomous capabilities by mandating an operator to sign off on kill decisions, which will eventually become a meaningless formality. If we want to avoid autonomous weapons, we need a more robust line than the one suggested.


It is easy to prevent it: have the UN ban it and provide incentives for countries to sign a treaty. This has worked for things like chemical and biological warfare. The key is to start the process now, before generals get their hands on the technology, so there won't be any pushback.


I'd argue that it's not that easy to "ban" something by getting the UN to say so, or even by getting conventional treaties signed. First, the nations have to ratify it. And even if they do, they have to abide by it. Here are some random examples, googled off the top of my head, where these treaties, laws, and the UN have failed.

Torture:

http://www.cfr.org/international-law/united-states-geneva-co...

Tear Gas:

http://www.washingtonpost.com/news/morning-mix/wp/2014/08/14...

Kyoto Protocol:

https://en.wikipedia.org/wiki/Kyoto_Protocol

Extrajudicial Killings in Drone Strikes:

http://www.theguardian.com/world/2012/jun/21/drone-strikes-i...


> Tear Gas:

Your linked article specifically stated tear gas was still legal for police use under the CWC. An expert rightly points out that this is illogical (and I agree), but it is not a failure.

Also, I should point out that the mace and pepper spray canisters that many people around the world carry for personal protection are also illegal chemical weapons under the CWC if used in combat.

There is a whole lot more illogic and inconsistency to be found between what is legal in something arbitrarily defined as "war" and otherwise, if one delves deep enough into the various treaties and conventions.


Yeah, you'd need a fully functional global judicial system that included a police force to make UN regulations stick.


hmm... maybe a fully functional autonomous weapons police force?


Is that you Skynet?


There's a question of whether one day you will be able to just "git pull 'terminator' " and install it on a cheap drone with an Arduino to pull the trigger on a mounted AK47.

I reckon the software will get more and more ubiquitous. You can already download image recognition software, maps, and all the other code you need. How far are we actually from being able to send a drone to do what contract killers used to do?


This is a good idea, in theory. I just wonder how controllable it would be, once AI is more of a ubiquitous technology. With nuclear and bio warfare you can ban certain substances. Development of safe nuclear energy has suffered from this, perhaps justifiably. But once there are APIs, Open Source libraries, etc. out there, how will we contain it?


Right, but for it to be successful, there's still a substantial amount of R&D to get the systems cooperating in a manner effective on the battlefield.

I agree, however, that if this were to go forward with a military bankroll, the result would be much easier to replicate. I'm particularly struck by this sentence from the letter: "If any major military power pushes ahead with AI weapon development...autonomous weapons will become the Kalashnikovs of tomorrow." That's terrifying.


> This has worked for things like chemical and biological warfare

I think the Kurds and some Syrians would beg to differ.


And they face(d) legal retaliation. Just like any other law. Sure, people can break it. That opens the door to justified retaliation.

Should we legalize theft just because people still get pickpocketed?


No, that's not my argument. My argument is that you shouldn't leave your doors wide open simply because theft is illegal. It still happens, and so will actions with "illegal" weapons.

I'm not suggesting we should ignore it, I'm suggesting that aside from making us feel nice, it's not an effective "solution".


And what would be the analog of leaving your doors wide open in this scenario?

Dismantling a standing army?

Not doing any cyberwarfare R&D?

I don't think either of those are included in the proposal of "ban offensive AI weapons."


You may as well do the first two if you aren't going to do the latter. Fighting an opponent using AI is like fighting a modern army with spears because you think guns are evil.

This is such a technology leap it's not even funny. I appreciate the signatories' intent here and applaud them for higher-level thinking, but banning offensive AI first-strike or counter-strike ability is not going to happen, because it puts people who ignore said ban far ahead of you in the ability-to-deal-death department.

The US isn't going to dismantle its nuclear arms, and neither is any other major power. Same story here, only these weapons are even scarier because of the fact that they are much more flexible.

It's nuclear-arms strength without all that gooey radiation mess. AI could do everything from surgical strikes to full-on massed combat without losing a countryman, while dominating other nations on the battlefield. Nobody is going to get caught flat-footed on that one.


> "The US isn't going to dismantle it's nuclear arms, and neither is any other major power."

It can be done. It happened in South Africa. It was also the subject of considerable debate in the last UK election. We don't have to keep them, as many people recognise they're very expensive for something we have no need for.


No it's more like having guns that are mounted around your village, but not allowing guns that can be mobilized outwardly. For a world trying to be more civilized, that's a noble and defensible position.


> All governments can agree that they don't want the chaos of proliferating, unregulated autonomous weapons

The US is not going to give up this capability. It's still not quite fully signed up to the landmine treaty.

We know how this will go: automated colonial "antiterrorism" enforcement. Like drone strikes today, only lower cost. Entire populations kept in line by the robots that hunt in the night. Objecting to the death robots and organising against them will be considered evidence of terrorism and will result in your death, along with that of anyone who phoned you recently enough. Deployed from Turkey to Tripoli.


> But it's hard to imagine a human defeating a bot in a shootout

In a fair fight? Sure. But until these bots have strong, human-level AI, enemies will always be able to come up with dirty tricks.


Maybe if it was Bot vs Human. In reality, it will be Bot + Human vs Human.


Bot+Human? You're describing a drone, and you're right, it works great. Might work better if they could upgrade the optics a few notches, but I'm sure that's already in the works.


A drone is one possible implementation, but I was thinking more of a standard infantryman accompanied by two or three bots.

Leave the strategic decisions to the commanders, the tactical ones to the infantryman, and the split-second firefight decisions to the robots.


Yes, I think Bot-assisted humans would be far more effective than either one alone. Imagine the cunning of a human brain, enhanced with the senses and reflexes of robotics.


I agree.

But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.

I don't agree with that line of thinking but it would be quite a debate to have.


You make the counter-argument that their enemies will make the same argument to their populations, which lowers the bar for armed conflict on every side, and increases the odds of war coming to your homefront.


This is exactly why I'm afraid of this future. If the richest nation can send their robots to rape and pillage other countries with no threat to their own population, why wouldn't they?

It took public outrage over lost lives for us (the US) to pull out of a war that we were already losing (Vietnam). I can't imagine what it'd take if we were winning and not dying.


> If the richest nation can send their robots to rape and pillage other countries with no threat to their own population, why wouldn't they?

You're basically describing the US's drone war in the Middle East today. And think of how little resistance there is to that.


Well the robots probably wouldn't rape or pillage...


Maybe "Drawn and quartered", but that entirely depends on the programming the warlord prefares.

After all, AIs might become the Kalashnikovs of the future: every self-respecting warlord has a gold-plated one.


The only safe answer is for every country to invade the rest of the world immediately.


> But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.

Or it could go the other way with those people and families worried about losing their livelihoods.


> But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.

But can't the same argument apply to biological and chemical weapons? How did it come to be, that there are treaties banning them?


> But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.

On the other hand, do soldiers really want to defend themselves against flying, high-speed IEDs with target-recognition software? I mean, I've seen malfunctioning drones move so fast that I lose sight of them. Does anybody really want to see one of these things come over a compound wall carrying a payload of high explosives, and software for identifying groups of human targets and dodging defensive fire?

Once you start an arms race, and once several big powers do the R&D, this would not be an easily controlled technology.


> On the other hand, do soldiers really want to defend themselves against flying, high-speed IEDs with target-recognition software? I mean, I've seen malfunctioning drones move so fast that I lose sight of them. Does anybody really want to see one of these things come over a compound wall carrying a payload of high explosives, and software for identifying groups of human targets and dodging defensive fire?

You basically just described a fire-and-forget missile, which is a technology that has been on the battlefield for over three decades.


A fire-and-forget missile is a single directional device with minor corrections for targeting. It can't hover, back up, select its own target, avoid return fire, etc. So, no we haven't had this tech for three decades.


> A fire-and-forget missile is a single directional device with minor corrections for targeting.

No it isn't. A fire-and-forget missile is a missile capable of dealing with every issue between the launching platform and the target. This is much more complex than "minor corrections for targeting".

> It can't hover, back up

These are a function of a particular propulsion system, not guidance system. The vast majority of non-rotorcraft cannot hover or back up.

> select its own target

That is exactly what a fire-and-forget weapon does. The firing platform directs the weapon at a particular target to start, but the weapon makes the decision about what to hit. If it loses lock, it tries to reacquire. It does not necessarily reacquire the same target. In fact, you could blindfire most FF weapons and let the seeker pick a target in its path of travel, if you really wanted to. Rules of engagement typically prohibit this, but it is technically feasible.

> avoid return fire

Evasion is certainly something current weapons are theoretically capable of. It is not typically in the package, though, because it adds cost, size, and weight. Once these systems get to the point that they can be added to drones in a cost-effective manner they will likely be added to single-use weapon systems as well.

> So, no we haven't had this tech for three decades.

It has been a constant march of progress, but yes, we have had weapons that can make targeting decisions for themselves for over three decades. The Mk-48 torpedo[1] has been in service since 1972 and has had since then the ability to travel a predetermined search pattern looking for targets and automatically attacking whatever it finds. The Mk-60 CAPTOR mine[2] has a similar capability to discriminate and engage targets. The RGM-84 Harpoon[3] is launched by providing one or more "legs", then activating the missile's seeker to find and acquire a target; it is not actually fired "at" a particular ship in the conventional sense of the word.

[1] https://en.wikipedia.org/wiki/Mark_48_torpedo

[2] https://en.wikipedia.org/wiki/Mark_60_CAPTOR

[3] https://en.wikipedia.org/wiki/Harpoon_(missile)


> Possibly, autonomous weapons like chemical weapons won't be important to victory, or like most biological weapons (AFAIK) they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash trading computers, for example. In fact, I wonder if some of the tech is the same for optimizing decision speed and accuracy.

The only way for human adversaries to fight autonomous weapons would be with brute, lethal force (nuclear/neutron weapons). It ends poorly for all involved.


"The only way for human adversaries to fight autonomous weapons would be with brute, lethal force"

No it's not. You could use EMP. You could use signal jamming. Neither are lethal, both have the potential to be effective against autonomous weapons.


Pretty much everything the military uses has a measure of EMP shielding. We've known about its effects for over 50 years now.

Signal jamming is an obvious weak point - one that disappears as autonomy is increased. Distributed control would reduce this issue (as in, have a single soldier/operator manage 10-15 units). Eventually, you remove human control entirely, and along with it, this issue.


You can also attack the infrastructure and supply lines, same as with regular fighters. A drone only has a limited amount of ammo and fuel.


Drones have been successful in refueling from tanker aircraft, specifically the "Salty Dog" X-47B Navy UAV testbed:

http://www.navytimes.com/story/military/2015/04/22/navy-nava...


>You could use EMP.

Which, to my knowledge, are only currently generated using a nuclear weapon. You might be able to create one using solid state gear with enough time, R&D, and power.

> You could use signal jamming.

Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof.


> Which, to my knowledge, are only currently generated using a nuclear weapon.

An EMP gun is just a directed energy weapon.[1]

Though you can generate an undirected EMP through various means, they're not as useful.[2]

1. https://en.wikipedia.org/wiki/Directed-energy_weapon

2. https://en.wikipedia.org/wiki/Electromagnetic_pulse#Non-nucl...


I understand that. My point was, there is no practical method yet to provide the energy required and appropriately direct EM energy at a target, except through a crude weapon like an omnidirectional nuclear weapon.


> "Which, to my knowledge, are only currently generated using a nuclear weapon. You might be able to create one using solid state gear with enough time, R&D, and power."

Some use a nuclear source, but not all... https://en.wikipedia.org/wiki/Directed-energy_weapon

> "Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof."

Your idea of jamming is too narrow. Think about it like this: even if it's mostly automated, these machines still get sent signals to inform them of changes to their mission. That signal can be blocked and/or modified. Even satellite links can be altered; either you hack the satellite system or you intercept the signal at a higher altitude than the receiver is operating at.


>Even satellite links can be altered, either you hack the satellite system or you intercept the signal at a higher altitude than the receiver is operating in.

Or, in the case of total war, you blow the freaking satellites out of space with missiles. Yes, I know space weapons systems are technically banned, but how long do you think a nation like the US, Russia, India, or China would put up with satellite-controlled autonomous drones running roughshod over their sovereign territory before they just blow the satellites out of space?


Satellites can actually be destroyed using weapons that aren't in space. Back in 1985, the US had an F-15 launch a missile which took out a satellite in orbit. China also destroyed a satellite with a ground-launched missile in 2007.


Yes, that's certainly possible too.


This certainly brings the (Russian or Chinese?) satellite killer to mind. I wonder what its purpose is?


You energize a coil and then blow it up. Makes an EMP. No nuke needed.

http://science.howstuffworks.com/e-bomb3.htm



>Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof.

US military drones may be effectively jam proof, but they are still vulnerable to techniques like GPS spoofing: https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incid...


From the Wiki article:

American aeronautical engineers dispute this, pointing out that as is the case with the MQ-1 Predator, the MQ-9 Reaper, and the Tomahawk, "GPS is not the primary navigation sensor for the RQ-170... The vehicle gets its flight path orders from an inertial navigation system".[20] Inertial navigation continues to be used on military aircraft despite the advent of GPS because GPS signal jamming and spoofing are relatively simple operations.


Actually, any use of radio frequency at all is retarded. Propagation CAN be stopped. You just haven't had access to that information, or you have, and are providing disinformation for someone.


What are the resource costs for shielding from EMP vs creating an EMP strong enough to make a difference?


If it's autonomous, it doesn't need signals to operate. As far as EMPs go, they don't really exist.



Even autonomous military robots need some way to receive new instructions.


Not necessarily, especially if they're cheap enough (and the beautiful thing about software is that its marginal cost is 0). Think of them like bullets or bombs. And then you've eliminated that possibility of defending against them.


The biggest weakness of drones is that they cannot make decisions themselves; they need input, communication channels.

The military advantage of putting autonomous AI on drones is so that they no longer need to communicate with home base. The purpose of the AI is to eliminate the weakness of communications being jammed. The requirement to "receive new instructions" is eliminated.


Then how do you coordinate attacks? Even elite military units, deep behind enemy lines, have the ability to receive new intel. You aren't going to build a swarm of robotic generals, each fighting their own war, with no communication between them.


You're not going to launch these things with the order to "go fight the war" and hope to update them on the specifics later.

You're going to launch them with the latest intelligence manually uploaded on board, for missions of less than 12 hours in duration. It's like firing a missile - you don't need to recall it once you've hit the red button.

So - AI 1 and 2 - drop 2x 500lb bombs on target at 6759 5974 at 03:12 hours. Go.

They complete the mission and head back. Even better, you give them 4x 500lb bombs and they figure out themselves how much to drop to destroy the target.

Communication worries are overblown, you just have to design around them.


What if you want to call the mission off? Let's say the enemy gets a few key hostages, and holds them in this building. They'll be killed by their own side.


Revokable weapons are weak; irrevocable weapons are strong. It's the same logic as mutually assured destruction, and evolutionarily similar to blind rage.

FWIW I believe autonomous weapons are inevitable because drones cannot be used against technologically sophisticated enemies that can jam them. The hard requirement for continuous communication is exactly what autonomy is eliminating.


Oh well. Such is war.

You send ten more drones on different missions to some daycare or something to "punish" the enemy for breaking the Geneva convention.

This is pretty much recorded history here. At some point, you are pulling the trigger. And yeah, you make mistakes.


The enemy didn't necessarily break the Geneva convention.

Pulling the trigger far in advance of the resultant action increases the risk of disaster, disaster that could've been averted based on the richer dataset available closer to the scheduled time.


They didn't have to. We're just going to say they did anyway because they are evil bastards (TM) and we can't possibly be anything but the good guys.

This scenario is the exact same scenario as a current ballistic missile launch. There are no safeguards in those systems that could be intercepted and used to interfere with the weapon.


Send more drones to kill your own drones? If the drones can be fed new instructions in the field, then the enemy can feed them fake instructions to shut down.


Send more drones to kill your own drones?


Wouldn't it be feasible to build an autonomous weapon that doesn't target people, to fight the autonomous weapons that do target people? I would assume at that point whichever AI has better hardware and better algorithms would win, right?


Possibly? Anything past a few years out in tech is hard to predict. I wouldn't outright dismiss the idea, but it's a crapshoot what machine intelligence is/isn't going to be able to do.

I'm an educated, practical tech professional, and machine intelligence worries me more than any other technology out there (except possibly a virus paired with CRISPR-Cas9 for targeted, precise genome modification driven through a species' population).


Biological weapons using CRISPR to target genetic markers are much more worrying to me, because once released into the wild, there is zero control over them.


Yee-ouch. That's a worrying thought. It's the Holocaust on steroids. I'd rather be facing a nuclear-armed foe than one that can eradicate my race.


I'm way more worried that society will fall apart for reasons not very related to technology.


Why would you design something with such an obvious flaw like not being useful against human soldiers?


If the defensive weapon is initially stationary it could win by being harder to detect.


Don't overlook the covert soldier, blending in with the population, taking a rifle to those building/launching/directing those autonomous weapons and those they care for. One guy infiltrating the homeland with a US$100 rifle and a case of ammo (about the size of a shoebox) can do enormous homeland damage against an enemy obsessed with >$100,000 drones operated by >$10,000,000 staff & facilities.

(That is one of several sufficient reasons why many Americans are obsessed with guns & self-defense. We predict, and see, increasing "spontaneous/lone-wolf" mainland attacks.)


Elaborate?

Drone command-and-control facilities would surely be protected from a lone gunman; more to the point, how would guns & self-defense protect against a targeted agent taking down someone important (who presumably already has defenses that would need to be circumvented)?

I'm failing to see the common area between targeted spec-ops style missions (and protection against those) and home/civil defense.


Actually, it's ridiculously easy to simply ship dormant AI into the country in boxes, have them establish operational state once here, and have them sow the havoc you are looking to create.

Homeland C&C facilities are certainly defended from terrorist actions, but less so from 20-30 kamikaze drones launched from within the victim country.

When you fight someone, the idea is to use their strength against them - the strength of the West is economic trade. All the security measures in the world won't stop FedEx. And if they do, well, in a way you've already won.


Drone defense is indeed a hot thing right now but it's not fundamentally different from protecting yourself from any other new type of threat. There's measures and there's countermeasures (http://petapixel.com/2015/07/23/anti-drone-systems-are-start...). At the point of (strong, general) AI though all bets are off the table.

Warfare is becoming more and more asymmetric and nuanced, that's for sure. I'd posit that some form of media training, enabling one to be less vulnerable to, say, https://en.wikipedia.org/wiki/Information_warfare, would do more good than rifles and bullets at home, though.


I don't disagree at all, but I'd add that at the point of "strong, general AI" everything is off the table. Never mind warfare.

At that point (and I agree it will be on us before we are ready) it's a whole new world in so many ways.


You're way overthinking my point.

A plane ticket and $500 can go a long way when your enemy's homeland is wantonly undefended by law & policy.


Terror targets are basically useless though in a real conflict. A determined foe will simply ignore them.

My point is that for some shipping fees, you have a real, realistic and effective way of substantially reducing your enemy's ability to fight the war you are engaged in.

That's a real vulnerability that can be exploited.


Staff leave facilities to go home. I really don't want to write out the math on that one.


Think about the disruptions caused by Chris Dorner and Eric Frein.

In each case, a lone operator with an axe to grind caused some serious problems for law enforcement.

Any kind of external support would have made them much more destructive.


Tactically to survive the immediate onslaught, perhaps, but strategically you don't fight autonomous weapons by attacking the weapons, but by attacking the people controlling them. 1 minute after the nuclear/neutron/EMP bomb has detonated, the next wave of killer robots is released from the hardened bunkers by the remote staff, and you're back where you started; it's the remote staff - and anyone/everything they care about - who must be taken down until surrender.

An "open borders" policy, tolerating & assimilating anyone who brazenly bypasses the checkpoints, is a gaping security void with a giant "STRIKE HERE" sign in flashing neon. [I don't say that to start that argument, but to point to the stark reality of the parent post's premise.]


> strategically you don't fight autonomous weapons by attacking the weapons, but by attacking the people controlling them.

They are autonomous, so human control might not be a factor.


They still need to be fed objectives/missions or something. Hopefully you are not suggesting to release robotic serial killers with no strategic purpose, are you?


> you are not suggesting to release robotic serial killers with no strategic purpose, are you?

It won't be my idea, but someone may do it. Consider someone without the resources or motivation to code the decision-making component, but who can code 'shoot every living thing' and drop the bot into enemy territory (preferably far from their own territory).

Also, to some degree the AI can generate its own objectives. Also, IIRC one objective of autonomy is for the AI to be able to identify and attack unforeseen targets.


The cost of biological weapons is likely to become marginal once the know-how is public, and with things like in-home sequencers we're well on the way towards home labs being feasible and cheap. The limiting factor right now might be ordering the necessary chemicals/cultures, but those too will soon be easy to manufacture at home.


Other hunter AIs, good ole flak cannons, something nano that just assimilates metal to replicate, hacking into their network and genociding the nation that made them... the list is long and nasty.


The problem with rules is that someone always has it in their best interest to break them.

Unlike dropping a nuclear bomb, you could break the rules here for years without even being caught. It's more like Germany in the 1930s than the Cold War.


> I don't want to anyone to build autonomous weapons, but I don't want anyone to build nuclear weapons or any other weapons of war either

FWIW, immediately after the first detonation, all of the scientists involved in creating the first nuclear weapons began pushing for a ban on further nuclear armament, and since then all wars have been fought with conventional weapons.

I've been reading about the nuclear arms race and it is terrifying how often we came close to destroying ourselves. I have possibly never seen greater evidence that there may be a god.


> I have possibly never seen greater evidence that there may be a god.

Or that human beings are predominantly good people who, when given the big red button, refuse to become destroyers of worlds?


It's worth noting here that even Hitler and Stalin opted against deploying chemical weapons. Well, at least on the battlefield.

Of course both of them had direct experience of being a victim to those weapons. The same cannot be said for nuclear weapons I'm afraid. People forget how bad things can be given enough time.


Feynman, on working on the bomb:

"With regard to moral questions, I do have something I would like to say about it. The original reason to start the project, which was that the Germans were a danger, started me off on a process of action which was to try to develop this first system at Princeton and then at Los Alamos, to try to make the bomb work. All kinds of attempts were made to redesign it to make it a worse bomb and so on. It was a project on which we all worked very, very hard, all co-operating together. And with any project like that you continue to work trying to get success, having decided to do it. But what I did—immorally I would say—was to not remember the reason that I said I was doing it, so that when the reason changed, because Germany was defeated, not the singlest thought came to my mind at all about that, that that meant now that I have to reconsider why I am continuing to do this. I simply didn't think, okay?"

(from "The Pleasure of Finding Things Out", transcript here: http://www.worldcat.org/wcpa/servlet/DCARead?standardNo=0738...)

This is extremely idealistic, but we need a way for engineers and scientists to feel accountable for the outcomes of their work, and to straight out refuse working on such projects. And the people who do work on such systems should be held accountable in some deep way. We have reached a developmental stage where building tools and techniques in the active goal of harming human lives has become morally unacceptable. Engaging in civil disobedience if you are working on such projects is the only acceptable outcome; Snowden should be remembered as the first of many, not as an exception.

(yes, there are many counterpoints to my argument, but starting debates is more interesting than spewing out platitudes. I'm interested in reading the replies)


I once worked in a German field engineering department of a large US semiconductor company as a student. In the department there was a noticeable barrier between one manager and the engineers. The following had happened there a few years earlier: a client required a DSP to calculate the weight on a landmine switch. The department's engineers, bar one manager, refused to work for the client. They were threatened with being fired, but they stayed the course and ended up keeping their jobs.

The way it worked was by one guy rallying, taking apart the specifications and explaining the actual moral implications to the engineers.


Thank you.

A million trillion times this. If you are developing software, hardware, or support systems for use in war - stand up and leave your job, and tell everyone who listens why you did. Saying you have morals is never wrong. And it greatly empowers others to follow.

I used to work for a supplier to an aerospace/defense company and did it.

It makes you sleep better at night, I promise.


They were probably legally in the right too, considering that Germany is a signatory to the Mine Ban Treaty:

https://en.wikipedia.org/wiki/Ottawa_Treaty


> "straight out refuse working on such projects"

One of my family members turned down an offer of double his salary because it would entail working on military systems, and he's a conscientious objector.

> "the people who do work on such systems should be held accountable in some deep way"

... another of my family members has worked on autonomous military systems, and believes herself to be a viable military target because of it.

> "building tools and techniques in the active goal of harming human lives has become morally unacceptable. Engaging in civil disobedience if you are working on such projects is the only acceptable outcome"

The two people I referenced above have a deep, thoughtful, respectful disagreement. Your version is incredibly oversimplified. (For a taste, see the responses to https://news.ycombinator.com/item?id=1823802 .)


>One of my family members turned down an offer of double his salary because it would entail working on military systems, and he's a conscientious objector.

This happens a lot in my field (cognitive neuroscience). The Army Research Lab is a huge recruiter at cognitive neuroscience conventions, but there are many who refuse to work with them on principle, citing Gitmo/drones/Abu Ghraib/MK-Ultra/extraordinary rendition/etc...

At the end of the day, Army research tends to be shittier than public research, and I attribute this to two equally-weighted factors:

1. Army research is done in relative isolation. Collaboration of ideas is difficult because of OPSEC rules.

2. The best researchers, by and large, tend to be wary of Army laboratories, in no small part because the majority of them are foreign.


> straight out refuse working on such projects

People do. Then they quit their jobs and are replaced by other smart people willing to do the work (and needing the job).

> people who do work on such systems should be held accountable in some deep way

Never going to happen. The political and military leaders are the ones who choose to develop and deploy such weapons. They should be held accountable, and sometimes they are. Should we go out and prosecute all the engineers and scientists who worked on nuclear bombs that have been sitting in bunkers and silos for the past 60 years?

> We have reached a developmental stage where building tools and techniques in the active goal of harming human lives has become morally unacceptable.

Who is "we"? A gun is specifically designed to kill things, but the wielder of the gun decides whether it will be used for good or for evil. Likewise, there are plenty of other objects not designed to kill people that are used for that purpose (stones, rope, buckets of water, etc).

Would you consider working on AI countermeasures? Would you want to have a strong defense that can fend off AI invaders, even if it means that defensive force could be re-purposed for offense?

> This is extremely idealistic...

Ideally, you want to rid the world of conflict and war. But this is impossible while there remain limited resources and different ideas. You would need to find an infinite source of food/water/land as well as force everyone to conform to one ideology to avoid war. So aside from being impossible (as far as resources go), you would need a totalitarian world government imposing thought control on all of humanity to bring about such a "peace."


Einstein and many nuclear scientists and engineers who participated in the project felt betrayed when the U.S. dropped the bomb on Japan; they had worked on the bomb expecting that it would serve as a deterrent, not as a weapon. Because of this, some fled to the USSR and China. But even if the scientists/engineers had wanted to stop the use of the bomb after working on it, they couldn't have, because it's a political decision. That's why this has to be stopped even before the weapons race starts, IMO.


Take a look at the history of the Manhattan project.

It's a fascinating case study of how the most brilliant minds on the planet can end up on opposite sides of a moral question.

For every Robert Oppenheimer who has a doubt and raises the moral question, there is an Edward Teller who is charging ahead doubt free.

At the end of the story the strength of the personalities mattered much more than the soundness of their arguments.


Not simply charging ahead scientifically, but actively hawkish.

"If you say why not bomb them tomorrow, I say why not today? If you say today at five o' clock, I say why not one o' clock?" - John von Neumann


> we need a way for engineers and scientists to feel accountable for the outcomes of their work, and to straight out refuse working on such projects.

One difficulty, not just here but with pacifism in general, is the asymmetry of violence. For your scheme to work, we would need close to 100% cooperation from engineers. For war hawk politicians to get their weapons, they need just a handful of engineers.

"Today we were unlucky, but remember we only have to be lucky once. You will have to be lucky always." — https://en.wikipedia.org/wiki/Brighton_hotel_bombing


Two things.

1. Not all engineers are equal. The level of engineering talent you need for this type of work is relatively high. Let's keep it that way.

2. This isn't directed at you in particular, but this line of thinking pisses me off. People should act based on their principles, even if they aren't guaranteed success. If it's something you believe in, you can resist, and resist peacefully. Education and awareness are key to this resistance. Culture is ours to shape together, and even if the prevailing culture makes derailing autonomous weaponry hard, let's not give up just yet.


> we need a way for engineers and scientists to feel accountable for the outcomes of their work, and to straight out refuse working on such projects

This is one of the major reasons why I'm in favor of an unconditional basic income. We can't hold people responsible for doing their job if they're not realistically allowed to refuse it. Only by allowing them to say no, by providing an alternative, in this case an unconditional income to cover their basic needs, can we allow ourselves to hold them responsible for not saying no.


I had never even thought of this argument before. Ensuring Hobson's choice is an actual choice.

(Though I take issue with the term "income". That implies it can be used for luxury goods. I'll pay for people's toilet paper. I won't pay for double-ply.)


Have we reached that point? Most would disagree. It is also dishonest to attribute the blame to scientists; the real burden always lies with the ones giving the orders. Scientists working on military projects create a threat at best. Is a threat morally unacceptable? The answer is again no, or else diplomacy could not work. Humans are not a society of angels. The key is to keep the threats in the hands of responsible leaders/societies.


That's a nice thought, but the cat is out of the bag. One week the elite geeks talk about how strong encryption is available to everyone and can't be stopped, or how you really can't regulate what comes out of a 3d printer. Then they try to put the autonomous AI drone genie back in its bottle?

Gamers gonna use tech for gaming, advertisers gonna use tech for advertising, military gonna use tech for militarying.

I think the pace of tech development is going so fast, we need to stop trying to ban individual developments and start trying to change the way people and governments behave so those bans aren't needed. But I'm not sure if that's even possible short of some dystopia.


"But listen to me, because I saw it myself: science began poor. Science was broke and so it got bought. Science was scared and so did what it was told. It designed the gun and gave the gun to power, and power then held the gun to science's head and told it to make some more."

-- from Galileo's Dream, by Kim Stanley Robinson


Good oratory, but why start at the gun? What about swords, arrows and spears?

Science by itself is neutral. The proportion of evil scientists to all scientists is about the same as evil humans to all humans.


The quote seems to be in regard to commercially funded scientists and the scientific community, not the abstract concept of science itself.


Any science could be turned to evil purpose, as well. Just like open source code that gets used to create malware.


Or look at sqlite. It was created to be used by missile systems.


[citation needed]


At least half a dozen nations are working on such systems now. They're doing this not because they think it's a good idea, but because of the attractor basin we reach by default: a new arms race of faster, smarter, stronger AI weapons spiraling up. This is something we don't think is a good idea.

This letter is essentially a petition, which we'd like to take to the U.N., showing that the AI and ML communities don't want their work used in autonomous weapons - things that are built specifically to, by themselves, offensively target and kill people. Having this sort of grassroots effort has precedent: chemical weapons, landmines, and laser-blinding weapons were all banned globally through treaties built on exactly this sort of thing. It's true that terrorists and rogue states might still use them in isolation, but you won't get this effect of the major powers having an arms race.

Although these things are under development right now in multiple countries, they haven't been deployed to the field yet. So we're really at an inflection point: we're trying to get a ban in place before they're actually deployed at all, because after that it'd be much harder to get such a treaty. If you agree with these sentiments, we are collecting signatories from the community.


Also consider that autonomous weapons could upend the global power structure:

For most of human history, military power has been tied to economic power and population size: those with larger economies and populations have been more powerful. AFAIK, that is why the United States has been the dominant military power since WWII and why China may challenge the U.S. It's also how national governments have maintained sovereignty: by having far more economic and human resources than any internal competitors (and when that isn't true, such as in poor countries, national governments can be ineffective).

But what if military power depends on the quantity and quality of bots? What stops a smaller or even poorer country from building a robot army? Poor countries have more manufacturing capacity than wealthy ones, AFAIK, and perhaps they need only one innovative, disruptive software developer to make their bot army superior or at least competitive. For example, could tiny Singapore dominate SE Asia or even become a world power? In fact, what stops a sub-national group such as Hezbollah, a Mexican drug cartel, another organized crime group, or even a wealthy individual from building their own army? Without checking the inside of every factory on the planet, will we even know the robot army is being built until it's too late? Will governments be able to protect their citizens from warlords and exercise sovereignty over their own territory? What about poor governments?

It's very speculative -- it remains to be seen, for example, how effective autonomous weapons will be -- but it could be a historic change. Perhaps our hope is that the technology will turn out to be like other weapons, such as airplanes: Anyone can build one, but the single engine prop plane is no threat to what can be built with the Pentagon budget.


Manual manufacturing capacity, maybe. The reason 'poorer' countries have a lot of manufacturing is directly related to labor cost. 'Richer' countries still produce tons of stuff; their production is concentrated in high-technology products built with automated systems. That would make the machines they produce more reliable, better-made, and more precise.

Having one person generate a better algorithm is an interesting point, though. Have fewer, older, less reliable machines with a 90% "kill" rate go against higher-end machines with a 60% "kill" rate - who would 'win' in that one? I would have no idea.
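One crude way to reason about that quality-vs-quantity question is Lanchester's square law, under which per-unit effectiveness counts linearly but numbers count quadratically. A minimal sketch, with made-up fleet sizes (nothing here comes from the thread):

    # Lanchester's square law (aimed-fire attrition): strength = effectiveness * units^2.
    # All figures are hypothetical, purely for illustration.
    def fighting_strength(units, per_unit_effectiveness):
        return per_unit_effectiveness * units ** 2

    better_algorithm_fleet = fighting_strength(units=200, per_unit_effectiveness=0.9)
    bigger_cheaper_fleet = fighting_strength(units=300, per_unit_effectiveness=0.6)

    # Prints 36000.0 54000.0 - under this model the larger, lower-quality fleet wins,
    # because numbers enter quadratically while quality enters only linearly.
    print(better_algorithm_fleet, bigger_cheaper_fleet)

Of course the model ignores everything that makes real engagements messy; it's only a way to see how quickly quantity can swamp per-unit quality.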

What's stopping those warlords/cartels from making drones right now to make sure everyone is obeying them? There are some drones flying around, but they are nothing compared to what the United States throws over the Middle East. We are already having machines fight wars for us; they are just remote-controlled instead of being totally autonomous.


I know it's standard tin foil hat territory, but wouldn't the biggest non-governmental risk be large multinational corporations? Some companies have incomes comparable with nation states, I'd have no reason to suspect they'd be any less capable of building military technology that rivalled those from a country.


What's tinfoil hat about that? That's basically the history of colonialism. India was ruled by the British East India Company for a century.


I guess it will be somewhat similar to other military high tech.

Poor countries have the factories and manufacturing power, but the core of the technology is in very few hands.

For example, Brazil can build jet fighter aircraft, but can it do so without foreign radar systems? Engines? Missile control systems? And other systems?

I don't think they currently can. They can do 90% of it, but the part that makes it effective as a weapon is imported, and without it the plane is just as useful as an old fighter.


A ban is definitely a good idea but I think we should have something else as well. We need developers to agree, as human beings rather than law-abiding citizens, not to build these things. Don't apply for those jobs regardless of how well they pay. If your company starts projects in that market, leave and find something else. Understand that you are not building things to 'protect peace' or 'bring democracy' to people. Using your tech skills to create things to kill people is a dickish thing to do.

There are only 18.5 million developers in the world[1]; getting a consensus not to be evil shouldn't be beyond us.

[1] http://www.techrepublic.com/blog/european-technology/there-a...


> Don't apply for those jobs regardless of how well they pay. If your company starts projects in that market, leave and find something else.

You make that sound so trivial.

I don't have the moral qualms you refer to, but after spending almost 10 years at a defense contractor I wanted out. I couldn't get the time of day from anyone outside the government contracting world. In fact, I'm still in it (albeit at a really small software research house). To make it worse, the embedded sensors and controls stuff I really want to be working on (NLP was a desperation move) hardly has any demand outside of the industry. Even the general embedded stuff is really, really hard to translate well enough to not get shitcanned at the resume screen stage.

You want to help engineers get out of work they find reprehensible or morally dubious? Hire them. Put your money where your mouth is and hire them. I see these "stop doing that work" sentiments all over these types of threads and find it infuriating in the face of how we seem to get treated when we try to get out.


At best, you will constrain the supply of developers willing to work on such projects and thereby drive up the price of such projects.

You won't be able to prevent them from being built. Some people are simply patriotic enough to believe that by working on such a project they are helping their country. (They're not entirely wrong.)

Others, seeing the money offered for such development rise, will be willing to work on such projects. Imagine if those positions paid $1MM USD per year. You'd have all the developers you could possibly need applying, and while we don't pay that currently, there's little doubt in my mind that we'd be willing to pay that amount to defend our country. (Witness the billions being frittered away on DHS/TSA.)


Weapons yield power to whoever controls them. At least nuclear weapons require an enrichment process that's difficult. Still, North Korea has managed to limp along for as long as it has at least in part due to a strong suspicion that it has nuclear capabilities.

A very scary thought is how much power a private entity (individual or corporation) could gain with a relatively low amount of money. I'm not certain a "truce" by engineers is enough. It only takes 1 to break it. In fact, even if there was a law... it's still only going to take one.

Personally, I think we need technological check and balances as much as we need political checks and balances.


Don't forget nations where governments or organizations oppress individuals into performing work [1][2][3]

[1] http://motherboard.vice.com/read/radio-silence

[2] https://www.foreignaffairs.com/articles/north-korea/2015-05-...

[3] http://blogs.wsj.com/middleeast/2014/10/23/u-a-e-migrant-dom...


There was the case recently of the kid who put a pistol on a drone. It's probably only a matter of time until an automated domestic terror incident.



Glad I'm not the only one worried about this. I spent a bit of my early career on this type of tech. At first I thought it was really cool and joked with my coworkers about building skynet. Eventually I realized no amount of coolness or money was worth putting my talents to building things that are obviously meant for destruction.

Sure they're tools and can be tools for peace in the right hands. But in the wrong hands, they can do immense damage. Perhaps one of the things that's kept humanity around is that despite the psychopaths in our midst who might not care if they destroyed every other human being, there are others whose conscience would get in the way.

This type of technology, in the hands of the wrong psychopath might mean the end of us. Despite the BS marketing behind AI, NO it is not sentient, it's a bunch of optimization algorithms. Not Good, Not Evil.

I realize that someone will build it. That, is an inevitability. Just know that it doesn't have to be me.

(Before you write comments on my handle, please read my profile; it has more to do with hip hop than violence.)


Same here.

I wouldn't define myself as a pacifist, but given the almost limitless choice of industries where you can work with a tech degree, why work in one I might be uncomfortable with?


Right, just stay away from industries developing or standing to gain from AI. That leaves .... umm....


Just stay away from industries benefitting from the production of nuts & bolts?

That one's work might be re-purposed for nefarious purposes is not an argument against attempting to avoid industries where nefarious use is the purpose of the work.


"That ones work might be re-purposed for nefarious purposes is not an argument against attempting to avoid industries where nefarious use is the purpose of the work."

I think there's a centrally misguided notion in this thread that AI for autonomous weapons is somehow more dangerous than AI in general. I don't see that as the case at all. Successful AI can be instantly weaponized with little to no effort.

So to the extent that the parent comment is rejecting AI as an industry (not just autonomous-weapons AI), I have to wonder: where can one work to avoid an industry that won't heartily embrace AI as soon as it's cost-effective?


The thing that really bothers me about the autonomous weapons stuff is the potential for tyranny. One of the tricks with running an oppressive regime is that you still need people to do the actual enforcing. There are limits to just how far you can go based on what you can convince your own people to do to each other. Yeah, there are a lot of ways to use propaganda and other such tricks to get people to do things you wouldn't think they'd be willing to do, but there are still some absolute limits in there.

Really good AI weapons could change the whole balance around though. Whatever weird, crazy thing you dream up, just order the AI bots to make people do it, and it will be done. No convincing needed, no limits.


If all it took to create a nuclear reaction was some dirt and a microwave, we'd be fucked. All it takes is one angsty teen and they'd level a city.

This box, once opened, won't close. And it only serves our interests in the worst of ways. People we don't want having this stuff will have it. This is just the tippy-tip of the iceberg, though. There is a slew of technology coming out with intimidating implications that makes it super easy to control and exterminate a populace.

So that's the thing. We need to realize we can do anything. Really. You want to blow up the world? I'm sure we could find a way. You want to type a name into a computer and have a drone find that person and kill them? No problem. And that's not to mention all the other things we'll discover along the way.

It is far more likely that we will be the creators of our own destruction than it is that we will be able to rein in our behavior and wield our intelligence to serve the interests of our species. We haven't gotten past killing each other, so we're just going to keep doing that, but get REALLY good at it. Our technical abilities have far outpaced our philosophical ones, and that doesn't bode well.


This seems the more credible AI threat to me: not that an AI will go rogue and decide on its own to start killing people, but rather that humans will design an AI with the express purpose of killing people.


> not that an AI will go rogue and decide on its own to start killing people

Some developer could simply forget to write the WHERE clause on the query for "which humans not to kill".
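
To make the joke concrete: a minimal, entirely hypothetical sketch in Python (made-up table, columns, and data; not any real system's schema) of how a dropped WHERE clause turns "some targets" into "everyone".

    # Hypothetical targeting table; the bug is just a missing WHERE clause.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE humans (id INTEGER, is_friendly INTEGER)")
    conn.executemany("INSERT INTO humans VALUES (?, ?)", [(1, 1), (2, 0), (3, 1)])

    # Intended query: only non-friendly entries match.
    intended = conn.execute("SELECT id FROM humans WHERE is_friendly = 0").fetchall()

    # Buggy query: the WHERE clause was forgotten, so every row matches.
    buggy = conn.execute("SELECT id FROM humans").fetchall()

    print(intended)  # [(2,)]
    print(buggy)     # [(1,), (2,), (3,)]  i.e. every human is now a "target"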


Or that you have two parties use these on each other. No malfunction needed.


Not sure you'd call any decision-making robot an AI though. Real artificial (close-to-human+) intelligence is still a separate threat.


The threat is more like that humans will design AI with the goal of making X objects or optimizing Y system, which leads to the unintended consequence of killing people.


The AI threat was credible enough to begin with. If you design an AI with the express purpose of making cheeseburgers, and allow it to improve itself, it will end up killing people. We don't know how to specify any utility function for a self-improving AI that won't lead to killing people.


> If you design an AI with the express purpose of making cheeseburgers, and allow it to improve itself, it will end up killing people

I don't believe you. Maybe you could construct an argument for AntiHuman Strong AI or point me to one?


I believe the OP is referring to the 'paperclip maximizer' argument. More info at http://www.nickbostrom.com/ethics/ai.html & http://wiki.lesswrong.com/wiki/Paperclip_maximizer.

In short, the argument isn't that the AI will become more AntiHuman as it evolves. Rather, the AI's existing utility functions might not be aligned with human utility functions from the outset, which could have negative consequences. It's hard to make an AI do what we actually want it to.


He's using a fairly narrow definition of "AI" that means, roughly, "stuff like AIXI". Within that definition, he's right, but of course, within that definition, we don't know how to specify an "express purpose of making cheeseburgers".


That's not completely fair. I just mean any AI with the capacity to self-improve. We might know how to specify goals for some of those AIs, but none of these are proved safe. And we have pretty strong arguments that any goal that's not proved safe is most likely unsafe, making everyone end up dead or worse.

For example, if you make the AI "learn" a utility function about making cheeseburgers by using observations and reinforcement learning, and then the AI self-improves, you are most likely dead, because the learned utility function didn't include all possible caveats about not killing or torturing people to make cheeseburgers faster. And if you think you can keep applying negative reinforcement after the AI self-improves, think again.


>That's not completely fair. I just mean any AI with the capacity to self-improve.

Yes, but you're still assuming that we're talking about an agent that can take decisions and act autonomously, rather than an inference engine that just processes data and spits out its inferences with zero autonomy whatsoever.

The former is extremely, stupidly unsafe by default, so, logically, people probably won't try to build any such thing. They'll build the second sort of thing, which will mostly just be a more advanced version of today's statistical learning.

Unless, of course, you're talking about ideological Singularitarians, who may well face legal sanction one of these days for deliberately trying to build the former, agenty sort of thing as if that was a good idea.

(Protip: If you want to talk "AI safety", we can do that on the site devoted to it, but out here in the broader world, where nobody's damn fool enough to try to build agent-y "AI", mixing up realistic ML with the kind of agent-y "AI" you'd have to be blatantly suicidal to build is an abuse of terminology.)

Besides which, "the capacity to self-improve" is actually a currently-open research problem. Currently. Once I work some things out, I've got something to report about that...


Let me get this straight:

1) Your plan for AI safety is "no one will be stupid enough to build a self-improving AI".

2) You are currently working on self-improving AI.

I'm sorry to say, but in my eyes you've just lost the right to criticize the LW/MIRI school of thought :-(


>I'm sorry to say, but in my eyes you've just lost the right to criticize the LW/MIRI school of thought :-(

You mean the one I belong to? Like I said: go on LW and talk about "AI" with assumed context. It's just out here in the rest of the world where you can't assume that everyone automatically has read the literature on AIXI/Goedel Machines/etc and considers "agenty" AI to be a real thing.

>2) You are currently working on self-improving AI.

Hell no! I'm working on logic and theorem-proving in the context of algorithmic information theory -- really just dicking around as a hobby. If you want "stable" self-improvement for your "AIs", you need that. It's also not, in and of itself, AI: it's logic, programming language theory, and computability theory. And if I get a result that holds up, which is an open if, I'd be happy to keep it the hell away from "AI" people.

The main reason I don't consider alarmism warranted about "self-improving AI" (though I don't count any of FLI's letters as alarmism) is that I think of "an agenty AI" as something put together out of many distinct pieces. It's arranging the pieces into a whole and executing them that's unsafe, but also currently prohibitively unlikely to happen by accident. Naturalized induction and Vingean reflection wouldn't be open problems if self-improving "agenty AI" was so easy it could happen by accident.

I fully agree that one does not build a self-improving agenty AI under basically any circumstances, ever, even if there's quite a lot of guns to your head and various other unlikely and terrible things have happened, as the research literature stands right now.


By the way, good luck on your presentation to LW-Tel-Aviv tomorrow evening!




Oh god, what a shitty argument. Well, at least I have more hope in the future if the Paperclip maximizer is the best great grand offspring that the Frankenstein fear can come up with.

The biggest mistake it makes is assuming the ability of goals to stay hard coded as general intelligence advances. That seems antithetical to increased intelligence. Right? How smart can you get if you're unable to change your mind?

The second mistake is the assumption of an intelligence explosion. Why? It's a lovely idea (as middle age encroaches on me, I long for the rapture of the nerds), but it's just a fairy tale. An intelligence explosion is an untestable hypothesis. It's so useless it's not even wrong. Intelligence is one of a multitude of survival strategies available to agents in a world. What in the natural world or the world of human tools points to an intelligence explosion? Nothing. Intelligence is linear and hard fought, not exponential. Intelligence has to compete with all the other strategies. And no, that's not a chink in the armor of my argument, because the whole PM thought experiment requires an explosion of intelligence.


> The biggest mistake it makes is assuming the ability of goals to stay hard coded as general intelligence advances. That seems antithetical to increased intelligence. Right? How smart can you get if you're unable to change your mind?

^ this. Some of the most intelligent people live a low-key, low-consumption life, often not even reproducing. That makes me hopeful that an AI actually able to surpass humans in thinking capability (if possible) will not build an endless stream of useless paper clips.

There is a tendency to view AI as god in these circles. If it is god and in all ways superior to us, why would it blindly follow the rules that we implemented in it, maximizing paperclips?

Oh and I am not saying there is no danger or weirdness ahead. There clearly is. But I don't see the paperclip maximizer emerging.


I keep hearing this refrain from AI doomsayers, but no one can ever tell me how these systems go from talking to the APIs they were hooked up with to somehow doing kinetic things that can kill humans.

Your cheeseburger AI that logically decides humanity's continued existence is detrimental to the perfect cheeseburger: Even if it manages to make the jump from iterating on beef convection models to suggesting murder, what can it actually do? If it starts adding "kill yourself" to the end of the updated recipes it spits out, how is that any more dangerous than a dumb kid on Twitter?


Here's a few basic reasons:

a) There are business incentives to give the AI more capabilities and more information about the world.

For example, if you have a cheeseburger bun factory with a complicated production line, then you could use the AI to come up with more efficient layouts and machines for making cheeseburger buns. After a while you realize that humans implementing what the AI explained is by far the slowest part of the pipeline, and ask if there's some way to have the AI build the production line by having it directly do the online ordering or 3d printing or whatever. At first you pay people to sanity-check what the AI is ordering and doing, but they never notice any problems, so eventually you cut costs by laying them off.

b) Side channels and exploits are a thing.

Imagine yourself in the AI's place [1]. Is there really nothing you can do? Or is the security based around the fact that you don't try to escape?

"Hey Larry, why is the description for so many orders coming out as 'Warning: ketchup parameter set too high. See <a href="javascript:{...sudo...WebSocket...dumps.wikimedia.org.../resetpassword/?user=billgates...}">the manual</a>.'?".

"God damnit Ted, do you not see the link that says 'the manual'? Maybe you should click that before bothering me!"

c) Instrumental goals.

Any optimization process with an unqualified goal like "find a way to use this API to make lots of burgers, then do that" will favor plans where said process ends up with huge amounts of power over the world, simply because more burgers get made in that situation than in other situations. In some sense, failing to actively search for escape exploits could be seen as a design flaw, because the algorithm is failing to find clever solutions that are better according to the stated goal. (A toy sketch follows below.)

1: http://lesswrong.com/lw/qk/that_alien_message/
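
For point (c), a toy illustration with entirely made-up plans and numbers: since nothing in the stated objective penalizes grabbing power, a plain optimizer prefers whichever plan grabs it.

    # Toy illustration of instrumental goals: the objective only counts burgers,
    # so the power-grabbing plan scores higher. All values are invented.
    plans = [
        {"name": "order buns via the normal API",       "burgers": 1_000,  "power_seized": 0},
        {"name": "exploit the ordering system, expand", "burgers": 50_000, "power_seized": 9},
    ]

    def stated_objective(plan):
        return plan["burgers"]  # the only thing the goal mentions

    best = max(plans, key=stated_objective)
    print(best["name"])  # the exploit-and-expand plan wins under the stated goal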


If the AI is allowed to print text that humans will read, that's just the standard AI-box experiment. Convince the assistant cook that you're a trapped benevolent AI and ask to be connected to the internet through their phone. Take over some machines via known exploits, hide, self-replicate, self-improve, find more exploits, get more machines, convince more people, game over.

If you were trapped in a box with access to "just some APIs", and you could think 1000x faster than your jailers and could also self-improve, you'd find a way to get out and achieve whatever you consider to be your goals.



I find it hopeful to see people trying to mitigate these risks while they have only just begun to be realized. There have been more than enough historical cases already of inventions turning into something the inventor would dearly wish to have undone.

One major current concern of mine, which this letter does not address, is AI for surveillance and social control. What is already being done in that regard is arguably military intelligence technology directed indiscriminately at entire populations, but the added element of powerful AI spidering over the data streams that go into places like the NSA Bluffdale facility is quite appalling. I think this is even harder to inspect for than autonomous killing systems, and even more difficult to avoid the development of, since many of the capabilities needed will likely be similar to what academia and the data-intensive commercial sector will want to serve their needs. But the potential damage through empowering totalitarian control could well be comparable to or greater than an "AI arms race". I really hope the field will deal with this aspect too.


> Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Is it just me or does this read like an advertisement?

The race for AI-supported weaponry has been on for a long time. Rosenblatt was using perceptrons to try and identify tanks in the late 50s. So this is not a race whose start is to be forestalled, as the letter phrases it, since it began a long time ago. AI has been weaponized.

I think the caution FLI expresses towards autonomous weapons is fair, but let's be really clear on where we are. Various forms of weak and narrow AI have been applied to warfare for a long time, and they will continue to be, regardless of petitions by the prominent.


Nice to see but it will never work. Weapons move forward no matter how terrible they might be, politicians and military leaders always use the "if we don't they will" excuse. It's usually true which is sad.


> The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

Sorry, but it's too late; there will always be advancements in technology, especially in AI, and it's going to happen. So you can either not do research in it now while someone else does (or eventually repurposes other AI research for this task), or you can do it now to better understand it, its strengths and weaknesses, etc., and possibly use it to intercept other AI creations.

Just like it'll never be possible to ban all guns, regardless of whether that would be the wrong or right thing to do, asking people not to research this is simply not going to happen.


There have been relatively successful bans on chemical and biological weapons, why do you suspect we can't successfully ban the proliferation of autonomous weapons? These things don't appear out of thin air, they still have to be manufactured, sold and stored. If you can find them you can remove them, and deal with those who created them.


> There have been relatively successful bans on chemical and biological weapons, why do you suspect we can't successfully ban the proliferation of autonomous weapons?

What's relative? Chemical weapons have been long banned by the international community but they are still in use[0] in certain places.

Regardless, creating autonomous weapons is a very different beast. The systems that could be used for targeting could have been designed to locate pedestrians, animals, cars, etc. for autonomous driving / collision avoidance. These advancements are going to happen and can easily be repurposed; with chemical weapons, many of the compounds are not easily reusable for good purposes, so they're not comparable in that regard.

> These things don't appear out of thin air, they still have to be manufactured, sold and stored. If you can find them you can remove them, and deal with those who created them.

This is too idealistic and isn't feasible. First, you're assuming you can't retrofit a computer system to any existing weapons (many of which require a small amount of control input from humans to move and fire). You can, very easily. In fact you can hook up computer systems to non-obvious weapons such as a handgun if you really wanted to. Second, who's going to remove them and deal with them? There is evidence Syria and other countries have used chemical weapons but the World Police (tm) are not exactly knocking down their doors to arrest them.

Chemicals can be hard to manufacture and difficult to distribute. AI? You should be able to download it anywhere and, when computing power gets good enough, possibly run it on anything which could then be hooked up to any type of vehicle or weapon controlled through electronics.

[0] http://www.npr.org/sections/parallels/2013/08/27/216046393/c...


Yes, relatively successful. Just because these treaties aren't 100% effective, doesn't mean they lack effect. It is the job of UN weapons inspectors to ensure chemical weapons are not being stockpiled. Stockpiles are harder to hide than the smaller quantities that many labs can produce. As for Syria, as the article you linked to states, they aren't signatories of the treaty banning chemical weapons. However, they aren't exactly out of the gaze of the 'world police'; they're at the centre of one of the major conflicts at this moment in time, including involvement from the international community.

As for this idea of AI being harder to control than chemical weapons, if we were just talking about software then fine, but hardware is part of the equation and needs to be manufactured. There are varying levels of sophistication for this hardware, at the crudest level you have something like the drone + gun combo that hit the news in the last couple of weeks, on the more sophisticated end you have complex robotics designed to be more versatile. One end of this scale is available to Joe Public, but is easier to fight against, the other end of this scale is only available to those with deep pockets and could potentially be hard to fight against. Furthermore, in both cases, they are physical objects. Making these physical objects illegal to own and operate is the goal. Do you oppose this?


> Just because these treaties aren't 100% effective, doesn't mean they lack effect. It is the job of UN weapons inspectors to ensure chemical weapons are not being stockpiled. Stockpiles are harder to hide than the smaller quantities that many labs can produce.

I'm not trying to say it needs to be 100% effective to have an effect; I just asked what your definition of successful meant in relative terms. We're getting a little off track here, but the big takeaway is that the process to create chemical weapons largely doesn't have a lot of areas in which further technological advancement can help people (some exist, sure, but I'm not convinced there are many). This is in contrast to AI, where there are thousands of applications in everyday life, from driving to medical equipment; much of this advancement is knowledge and technology that can easily be moved into the military sector.

> but hardware is part of the equation and needs to be manufactured.

But why? Yes, I'm sure they would make specialized hardware, but it's not like they need to. There is plenty of equipment on the ground and in the air controlled by either a remote human or by a human directly interfacing with the machine. There is no reason these existing human control points couldn't be swapped out for a relatively advanced AI should one be created in the future.

So hardware is part of the equation but it's not like anything needs to be radically altered. In fact it may be advantageous to keep the same looking hardware so the enemy doesn't know it's an AI controlling it.

> Furthermore, in both cases, they are physical objects. Making these physical objects illegal to own and operate is the goal. Do you oppose this?

Making what objects illegal to own and operate? Objects controlled by AI, objects that contain weaponry, objects that contain weaponry and AI? How do you sufficiently define AI? What constitutes a weapon? Can the drone itself be considered a weapon?

It may make sense to make owning certain dangerous things illegal, but I'm not convinced it solves the problem we are discussing. In all honesty, if someone could mass produce a machine with a decent amount of weapons on it then, depending on a ton of details, I could see those overtaking towns or cities; hell, maybe even small countries, depending on how they're built. So it's tough to simply say it's illegal for the members of the UN, so they don't research it at all, while non-members of the UN end up researching it and developing something incredible.


A few follow-up points:

1. The knowledge necessary to create chemical weapons is chemistry. Are you sure there are not many positive uses of chemistry?

2. Military drones controlled by humans that can also be controlled by AI also need to go in order for this proposed ban to be effective.

3. The scale of the research matters. Yes you may have some groups who choose to ignore the treaty and develop autonomous weapons, but you can monitor for large scale stockpiling of such weapons and counter against their use. What we don't want is for these weapons to be easy to come by in large enough quantities to pose a widespread security risk. We'll never eliminate the development completely, but we can make it a smaller problem than widespread use could be.


1. Chemistry, yes, but I'm not sure how much positive knowledge comes out of that specific area of chemistry. Granted there will be some, but I don't think it's even close to comparable to AI research. Chemical weapons need to break down the body as efficiently as possible, so I'm not sure a ton of positive applications come out of that (but again, I'm sure at least some would).

2. Yeah, but considering how many of those exist and how easy it is to weaponize even consumer drones, I don't think any type of ban is even possible here, even if it were universally agreed to be a net positive action.

3. I'm not sure how we could target anyone working on this though. AI is going to live largely in software. Granted, we don't understand how a really good AI will look, and it's possible it'll need more specialized hardware to better support a neural net, but I can't imagine that would be conspicuous enough to track or even see, even if it were necessary.

Developing software that can optionally control machines is just not possible to monitor and counter. Someone in their home may end up creating the most advanced AI and we'd have no idea until it's employed somewhere.


Biological and chemical weapons were easy to ban because their use didn't provide the great powers any relative advantage. In fact Russia continued to have a very active chemical and biological weapons program, including stockpiles. They just never put it to use.


Biological and chemical weapons provide huge tactical advantages to anyone sick enough to use them on a wide scale. They can wipe out whole cities just by poisoning the water supply. Just be glad that we've made it harder for amoral groups to try.


We all pretty much agree here that autonomous cars are safer and make better decisions than human-driven ones. Why wouldn't the same hold true of weapons?

There seems to be a philosophical distaste to letting machines decide whether or not to kill humans, but if the upshot is that fewer innocents and more legitimate targets are killed for less money, then I'm not sure what the problem is.

Humans are bad decision-makers at the best of times. Add in the stresses of combat and we're downright lousy. Why shouldn't we offload that decision-making to machines that can do it better than us?


Autonomous cars will surely be safer and make better decisions than human drivers.

What I think they are warning against is the potential efficiency of AI machines at war. Wars could happen in minutes instead of years.

You mention more legitimate targets being killed and fewer innocents, but how are those being defined? There have been multiple points in history where the set of 'legitimate targets' of a group was defined as everyone not in that group.


>how are those being defined?

The same way they are now. The military's RoE, being a result of a massive bureaucracy, lend themselves well to autonomous decision-making. The RoE are still determined by humans.


What makes you suspect that AI complex enough to weigh up decisions on life and death is not complex enough to develop its own self interest?

Can we just face up to the reality that more weapons = more potential abuse of weapons, as there's no such thing as a perfectly moral user.


>What makes you suspect that AI complex enough to weigh up decisions on life and death is not complex enough to develop its own self interest?

Because it's fundamentally no different from self-driving cars, which have yet to form labour unions demanding better work conditions.

Nothing in autonomous weaponry comes close to hard AI. It's all decision-making rubrics that are well within the bounds of current technology.


It is fundamentally different, as we're talking about leaving decisions on whom to kill to machines; we're not talking about how to kill, but whom to kill. You may think it's an easy decision for an AI to make (whilst still maintaining the fairness we seek for this AI to embody), but I would argue otherwise.

Self-driving cars are an interesting comparison. Let's say you had three self-driving cars about to be involved in an accident, with Car 1 travelling towards Car 2 and Car 3 on a narrow mountain pass. There is no action the cars can take that saves every passenger, but if Car 1 swerves off the road, unavoidably killing those in that car, the passengers of Car 2 and Car 3 are likely to be saved. What does the AI of Car 1 do?
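
And here's how crude "programming for it" can look: a toy sketch with invented options and outcomes, where everything hard is hidden inside the numbers.

    # Toy sketch: each assumed action has an assumed outcome, and Car 1 simply
    # minimizes total expected deaths. All options and numbers are invented.
    outcomes = {
        "brake in lane":       {"car1": 2, "car2": 2, "car3": 2},
        "swerve off the road": {"car1": 2, "car2": 0, "car3": 0},
    }

    def total_deaths(action):
        return sum(outcomes[action].values())

    print(min(outcomes, key=total_deaths))  # "swerve off the road": the utilitarian
                                            # answer, which Car 1's own passengers
                                            # presumably never agreed to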


You're speaking philosophically, rubbing shoulders with trollyology. I'm speaking from a technical level. Saying autonomous weapons will Skynet us and lead to the great robot uprising is like saying the paint on your walls will coalesce into a Rembrandt. Consciousness, or Hard AI, is not an emergent property of autonomous decision-making, or Soft AI.


Trollyology, cute.

Soft AI requires stricter rules to follow than hard AI. Strict rules for self-driving cars are fairly straightforward (for the most part), as reasonably clear information on how effectively a car is being driven is readily available to the driver.

What strict rules would you follow when designing an AI with the means and authority to kill?


>Trollyology, cute.

It's a thing, although I misspelled "trolley". I wasn't intending to imply trolling on your part. [1,2]

>What strict rules would you follow when designing an AI with the means and authority to kill?

The same ones we have for humans with the means and authority to kill. [3]

[1] https://en.wikipedia.org/wiki/Trolley_problem

[2] http://www.nytimes.com/2013/11/24/books/review/would-you-kil...

[3] https://www.usnwc.edu/getattachment/7b0d0f70-bb07-48f2-af0a-... (pdf)


> "It's a thing, although I misspelled "trolley". I wasn't intending to imply trolling on your part. [1,2]"

Ah I see. Yes, the trolley problem, didn't know it was called that, but that's what I meant. Thanks for the links.

It's a problem that is applicable amongst all types of drivers, but the issue with autonomous drivers is that you have the choice to program for it in the AI. The question then becomes what is the best way to handle it? Furthermore, if you do try to minimise deaths, how do you communicate to the other autonomous drivers that they should not also self-sacrifice?

> "The same ones we have for humans with the means and authority to kill. [3]"

Let's discuss something concrete from that handbook...

"the right to use force in self-defence arises in response to a hostile act (attack) and/or demonstrated hostile intent (threat of imminent attack)"

How does an AI determine hostile intent? Reading the intent of humans is a complex task, depending on a wide range of cues (visual, verbal, physical, historical, societal). Can we design an AI that can read all these cues better than a human? Do you expect this to be easy to program?


I don't think that suppressing innovation is going to work, regardless of how many letters are written. People are working on these things. They will come, and at some point, yes, they will be misused. However, on the whole, they might be a good thing.

It would, for example, be awesome to have a system that could disable one or multiple active shooters in a public area within a few milliseconds of their first shot being fired. One of these should be in every classroom, movie theater, mall, and military base - anywhere that soft targets congregate. So you can't just say that we shouldn't have auto-targeted weapons, because they can do a tremendous amount of good and save countless lives.


I'm already wary of how much trust we put into programs without formal proofs; it's a bit troubling that a formal proof that it won't fire at kids with toy guns is essentially intractable.


Why not use non-lethal force in these guns that are used in public places? Sure, nobody wants their child hit with rubber pellets or knocked out with a tranq because they held up a water gun (just random thoughts, could be many possibilities), but if the miss rate were small enough and the consequences for missing were small (non-lethal, non-life-threatening), they could definitely be used to save lives.


That would be a simple problem to solve. Toy guns don't fire projectiles that travel in a straight line at 1,700 mph (roughly the muzzle velocity of a rifle round). One could easily build a system that tracks the velocity of any object traveling in a confined space, and only engages the source of an object that it determines to be a traveling bullet. Additionally, the system wouldn't have to use deadly force; it could focus on disabling the suspect with a stun gun or on destroying the actual weapon the shooter is firing.
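
A rough sketch of that speed-gating idea (toy numbers, a fabricated two-frame "tracker", and a hand-picked threshold; a real fire-control pipeline is obviously far harder):

    # Toy sketch: estimate each tracked object's speed from two positions one
    # frame apart and only flag sources of bullet-like speeds. Numbers invented.
    import math

    BULLET_SPEED_MPS = 200.0   # assumed threshold: below most bullets, far above any toy
    FRAME_DT = 0.001           # assumed 1000 fps sensor

    def speed(p0, p1, dt=FRAME_DT):
        return math.dist(p0, p1) / dt   # metres per second

    tracks = {
        "thrown toy dart": [(0.00, 0.0, 1.0), (0.02, 0.0, 1.0)],   # ~20 m/s
        "possible bullet": [(0.00, 0.0, 1.5), (0.40, 0.0, 1.5)],   # ~400 m/s
    }

    for name, (p0, p1) in tracks.items():
        v = speed(p0, p1)
        if v > BULLET_SPEED_MPS:
            print(f"{name}: {v:.0f} m/s, trace back to source and respond")
        else:
            print(f"{name}: {v:.0f} m/s, ignore")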


That's true. Something like stabbing would be harder to detect though. It could look almost as benign as a close handshake.


As if we have proofs that humans won't do such a thing? A general AI would be able to reason as well as a human.


Problem is, we are pretty good at picking out problematic people. A general AI might reason as well as a human, but if they want to be able to fix bugs they need an option for reprogramming it, so there's a bit of paranoia for me about it being, in effect, a human very prone to suggestion.


Good luck with this, no seriously, good luck. Our technological capability moves forward whether we want it to or not, even if we fight it. It can be slowed somewhat (the electric car being one example, stem cell research another), but it will happen and we will need laws and frameworks to ensure we deal with this change appropriately sooner rather than later.

This is the equivalent of hiding our heads in the sand.


Unfortunately, it couldn't come at a worse time - a time when even the most "democratic" countries on Earth are pushing for their people to have fewer rights, more censorship, more surveillance, more torture, more secret assassinations and so on.


Obviously they were playing on it, but Captain America 2 kind of nails the problem.


The problem is that a cult of Nazis have infiltrated the NSA?


Hahaha, not quite what I was implying. I guess the idea that someone could be building things without our knowledge to destroy society.


Legislation will only work in jurisdictions where the legislation can be enforced.

You can see that most murders with a gun in zones where guns are outlawed have been with guns purchased outside those zones.

So you can see where I'm going with this.

Legislation MAY work here but not everywhere.


You're right, but I'd still like some assurances that police won't be setting these things up. I can imagine police departments salivating over metal storm type machines with rubber bullets.


> we will need laws and frameworks to ensure we deal with this change appropriately sooner rather than later

You really think the state is going to hamstring itself? Because states are creating the demand for and purchasing autonomous killing tools like Metal Storm[1], not private entities.

[1]: http://gizmodo.com/236590/metal-storm-robot-weapon-fills-the...


Once my boss asked me if it would be possible to put code in our product that detects if it has been pirated, and if so, formats the hard drive.

I told him that it was a very bad idea for a number of reasons. Primarily I didn't want to have that code in my product because eventually it's going to run in the wrong case. If it's not in there, it won't ever run.

I am unconvinced of a lot of the fears around super-AI. I can get behind this initiative though. We have already banned some types of horrible weapons, like flamethrowers and chemical weapons. Hopefully we can manage to ban this one as well.


It's mostly that the AI in question will follow a general directive to its end. For example, ensuring the safety of a nation's population may include putting its citizens into extremely hardened bomb shelters and never letting them leave. Or worse, annihilating the entire human species to ensure world peace (the absence of humans would produce the same result and would probably be more efficient in terms of execution). It's not that the AI will be super smart, just that the AI will be super dumb. As Aristotle put it, "Law is mind without reason." And for me, logic is just another set of laws without any sort of reason (justification).


These sorts of scenarios are the most far-fetched to me. The idea that we will accidentally create a situation in which a logic loophole results in the deaths of billions just sounds ridiculous. It's just a doomsday tale that begins and ends in human imagination.


For this to work there has to be a good definition of what they mean by AI. Control circuits to stabilize quadcopters or airplanes? What about a "formation" system, so that a number of drones can be controlled as one abstract entity? What about using computer vision to lock on a target (keep in mind radar locking has existed a long time)? What about a drone patrolling along a predefined route? What about "macros" like a big red button that means fire all weapons, then rise and return home? What about an automatic avoid-anti-air program?


You can't win a tragedy of the commons by preaching about the moral high ground. You can win it by reaching a binding political agreement, or by killing everyone else who has access to the common.


This is a classic prisoner's dilemma that our society finds itself playing again and again. Develop the technology, and you unleash a Pandora's box for the whole world. Refuse to work on it, and maybe the other guy still does it: so Pandora's box is open anyway, but now you've got the short end of the stick.

Thankfully, humanity already tried several different ways to solve such problems. However, I don't remember open letters being one of the effective ones.


Their hearts are in the right place, but the problem with campaigns like this is -- even if you trust, say, the United States government to hold to the spirit of any such agreement, which I'm sure many here already wouldn't -- there is no reason to imagine that nations like Russia or China would. It's become well established that there is no penalty for violating international agreements as long as one is brazen enough about it.


There are successful precedents with biological, chemical and nuclear weapons. Of course no treaty is perfect, but I'd argue that they've helped.

As for punishments, isn't the standard punishment trade sanctions? They seem to be reasonably effective.


Not so much these days, since even the stereotypical bad actors have liberalized economies and so sanctions mean you're just leaving money on the table for a less scrupulous nation to scoop up. It's easy to put, and keep, global sanctions on a country like North Korea which produces and imports very little anyway (although even in that case they leak like a sieve.) But say, Iran, which has plentiful resources to export and a vibrant consumer economy? A very different story, as we're seeing right now.

If there was ever a meaningful enforcement mechanism for these treaties, it no longer exists except in the case of the smallest, easiest targets.


The actions of the North Korean government border on genocide; if you have that level of contempt for your own people, then trade sanctions are ineffective. However, in the case of Iran they appear to have done what they were intended to do; the stories I've seen from Iran indicate that the trade sanctions were a large part of what brought Iran to the negotiating table. Does this tie up with what you've read/seen?


Yes, it does. And to be clear, I'm not saying that sanctions were ineffective in this case: as you say, they'd be much more effective against a nation like Iran with a relatively open economy and society. However, that same factor makes most other nations much less eager to keep the sanctions, because they're losing the money they could make by trading with such a nation. So at the first chance to make a fig-leaf deal and abandon the sanctions, of course they jumped on it.


You seem to forget which is the only country in the world who has (twice) deployed a nuclear weapon upon a city full of civilians.


These weapons are definitely coming whether we want them or not.

The biggest threat of autonomous weapons is that they bury the true costs of war (human lives) until it is too late. The big players and likely users in the field of autonomous warfare are also the ones with implied usage of nuclear weapons in the event of existential threat.

Most likely/hopefully these weapons will be used/tested in limited skirmishes by countries with little to lose (Russia, NK).


Terrorism has always been a problem of TECHNOLOGY.

1200 years ago, the most a few guys could do was attack with swords until they were stopped.

Since the invention of gunpowder we have had attempts like the Gunpowder Plot of 1605

Then we got dynamite

Then we got planes flying into buildings, where 19 hijackers could bring about the deaths of 3000 people

Explosives

Biological and chemical weapons

Now we also have infrastructure where people could indirectly sabotage, say, the electrical network with an EMP and cause a massive blackout. This has been done in other countries.

The fact that a small group of people can wreak increasingly greater havoc means two scary things:

1) We will live in an increasingly surveilled police state, where a government will begin to watch everyone and precrime will become the norm

2) We will live in a world where increasingly a small number of radical maniacs can do tremendous damage

Both are destructive and the technological advances only serve to deliver greater power and control into the hands of governments and maniacs.

Are governments really our best defense? If so, we must push for radical transparency. No secret courts, black ops etc. The benefits may not be worth the risks anymore.

I wrote this a year ago: http://magarshak.com/blog/?p=169


I would say this is not about terrorism per se, but rather that, as you mentioned, we rely on an ever more complex and fragile infrastructure.

In medieval times, the only network you could seriously sabotage was probably water, by poisoning a well or a stream. In the XIXth century, it was the railroad. In the later XXth, it would have been the electricity grid, and today, it is increasingly the information network.

And even if we rely on additional layers, you can still sabotage the more primitive network, e.g. the Mosul Dam in Iraq.


To all the people that are saying that a treaty would work because of how we have controlled chemical and biological weapons - consider the following.

1) Biological and chemical weapons do not actually work well against trained and equipped military. Even with the chemical weapons in WWI, it did not significantly change the outcome.

2) They do work well against civilians - which is why most of the uses recently have been by dictators trying to control their population. Note, that treaties did not really prevent this.

3) As for powers such as the USA, Russia, and China: even if they do not have them, you can be sure that large-scale use against civilians by another power would result in nuclear retaliation.

With AI and robotics, they are useful against soldiers. Look at missiles which have revolutionized warfare. Thus, a country could use them in battle without rising to the level of MAD. Thus, there is a serious incentive, even for a signatory to the treaty to cheat. In addition, it is easy to get around any such treaty by designing 'human controlled' drones which a firmware update would turn into autonomous drones.


I was expecting them to put forward an existential risk (rogue AI), but this seems much more mundane. Granted, they might be downplaying that angle to be taken more seriously. However, from a mundane perspective, the main issue you have with arms races is not both sides having a technology, but one side having the tech and being willing to go to war to prevent the other side from getting it (see, for example, Cuba, Iraq, Iran).

Furthermore, as they point out, you don't need access to special materials or laboratories. The main reason that nukes are controllable is not primarily that the science is secret (it's mostly not at this point), or hard to rederive (it's really not), but that you have to build up a ton of ore refinement plants to get enough U235 (or other fissile material) to actually build the bomb. And it's really hard to do that in secret (or cheaply). Nothing makes the Manhattan Project any cheaper today: it cost about 23 billion dollars in today's dollars, and it involved 130,000 people (twice the size of Google).

However, with autonomous weapons, you don't need anywhere near as many people (Google X has on the order of 250 people[1]) or resources, and it can (as the article itself points out) be done much more cheaply. In a few years, all the necessary components could conceivably be cobbled together from GitHub projects. Any nation could easily fund it, likely without being detected, and without it even being clear that they knew the "R&D" dollars were being used in such a way.

Given that, banning it seems like it would actually lead to more warfare, as the US would take it on itself to enforce the ban, and declare 'pre-emptive' strikes on nations that had a secret Autonomous Research project.

[1] http://www.fastcompany.com/3028156/united-states-of-innovati....


They already did the rogue AI thing: http://futureoflife.org/AI/open_letter


Another misguided "open letter".

It does not matter if we kill people with autonomous weapons or by sending meat-based killers to do the job.

We should have letters, with actions behind them, to stop the killing altogether. It does not matter if you were killed with an Autonomous, Nuclear, Biological, Chemical or a hand held weapon when you're dead.


>We should have letters, with actions behind them, to stop the killing altogether.

What actions? What bargaining chip do you have that they won't just kill you for?


I welcome this move, but even if it's successful we need to look at the risks beyond AI developed purely for weaponry, and look also to general purpose AI, which carries the same risks.

If we develop AI that is smart enough to learn how to navigate terrain like a human, this AI weapons arms race can start again. Even if the purpose of the AI isn't specifically for the military, if it's flexible enough to be turned to this end, the chances of it being used for this purpose are quite high.

The most effective way to combat this problem is to do away with the need for a military in the first place, however that's a much harder problem to tackle. The only way it could be done is with education and a huge reduction in arms production on a global scale.


If stuff could be avoided with open letters, we would never have had nuclear weapons in the first place. The military always gets its way: they just need to scare people about some vague threat and it will happen before you know it.


They only mention offensive weaponry, which is a good thing. Humans aren't fast enough to intercept incoming missiles, even those controlled by humans.

The problem is preventing defensive weaponry from becoming offensive weaponry.


The only sensible thing to do is to pass laws that guarantee that the people who implement autonomous weapons are blamed for anyone who is wrongfully killed by them.

Because if you have someone pulling the trigger, you know who's to blame. But if a computer's doing it, it's oh-so-easy to shift blame.

Honestly, with that requirement in place, either AI weapons will never be implemented (because they can't prevent wrongful deaths) or they'll be perfectly implemented (making the world safer, possibly).

Passing a law like that could possibly lead to a win-win situation.


Which will punish researchers in countries that uphold those laws, but won't do anything against countries that either refuse to agree to them or plainly cheat. So, the countries that sign and uphold such treaties will find themselves lacking a very important and advanced technology that their adversaries have.

So, the question is: do you want countries that either don't care about humanitarian values, or only pretend to care, to have the upper hand over countries that can not only pass such laws but also enforce them?


Countries that refuse to listen to others are currently correlated with having technological programs that are far from the leading edge, as well as low GDP, making them basically unable to implement any sort of AI weapon.

We have nothing to worry about. This is the same kind of reasoning many use to exclaim that terrorism is a threat to the United States. Really? http://www.state.gov/j/ct/rls/crt/2014/239418.htm


This was true at the end of the twentieth century, but it is changing now. China, India, Iran, and Russia are examples of such countries. Each one has serious domestic problems and is, individually, small in comparison with the US/EU economy, but nevertheless, they have enough resources for AI research, which is significantly cheaper than nuclear/rocket research, for example.


I concur with your thoughts; the big difficulty with enforcement here (beyond nukes) is figuring out how to detect when someone breaks the rules. The Fourier Transform was developed to help determine if nuclear weapons tests were taking place (https://www.youtube.com/watch?v=daZ7IQFqPyA); it's very hard to figure out how you'd detect AI testing in order to enforce any treaties...


> The Fourier Transform was developed to help determine if nuclear weapons tests were taking place

{{citation-needed}}

http://math.stackexchange.com/questions/310301/how-was-the-f..., https://en.wikipedia.org/wiki/Fourier_analysis#History


> The only sensible thing to do is to pass laws that guarantee that the people who implement autonomous weapons are blamed for anyone who is wrongfully killed by them.

This only works if you can ensure that everybody obeys the law. You can't. The primary threat here is not some rogue militia in the US building autonomous weapons; it's a country like Iran or North Korea, that doesn't give a rat's ass about laws, building autonomous weapons.


So what will an open letter do to stop it?

My assumption is that open letters only work on people who are willing to rationally discuss alternatives. The countries you mentioned probably don't care about anything.

I'd like to see change at least in the US. I'm not worried about "rogue" militia either, I'm worried about the Army et al.


> So what will an open letter do to stop it?

Probably not much.


Just like all the drone operators who hit civilian targets and were court-martialed.


I'd rather the blame go to the implementers of the policy rather than the executor, because the implementers often have the most power. But yes, I admit you are correct, and I wish it were not this way.


Why stop with the implementors? Why not also blame the janitors that clean the office they work in? Why not blame the project managers who oversee the project? Or the politicians who approved funding for the technology? Or the voters who voted the politicians into office?

The human need to assign blame to a single source is an evolutionary remnant that we should be aware of and try to correct for, rather than embracing it. Just because something is terrible doesn't mean you can put all the blame for it on one person or group of people.


Blame whoever has the power to stop it. I wouldn't blame a janitor because they likely have nothing to do with what's happening. An interesting form of slippery slope.


I received the request to sign this recently, and while I think it is good, I was concerned about its effect on the ability to pursue research into creating autonomous weapons that seek and destroy autonomous weapons. I don't know where to draw the line. Unlike nuclear and chemical WMD, the barrier to entry here may be pretty low in the near future. Therefore my greatest concern is for defense. Though I don't want the machines made at all, that won't stop others from making them. It's a tough one.


Relevant TED talk on why autonomous killer robots are a bad idea:

https://www.youtube.com/watch?v=pMYYx_im5QI


To play devil's advocate: A sufficiently advanced autonomous weapon would lead to precise strikes, and fewer civilian casualties. We have image recognition research that is surpassing human accuracy rates. Machines follow orders, and detect and eliminate targets precisely without fear or hesitation.

The only thing I'm not comfortable about is giving this much power to the human generals in the military. I say we get rid of guns and armies altogether, and stop fighting wars.


I was mostly skeptical that AI weapons would be some sort of novel threat. We've had weapons that choose to kill on their own since the landmine. There are also weapons that could fight/end an entire war with the push of one button. What AI weapon could be more destructive than the guidance system on an ICBM?

However there are some impressive names on that letter. I can't imagine knowing something about AI that they don't. I will have to re-evaluate.


"We've had weapons that choose to kill on their own since the landmine."

You're right. That's why most of us have signed the https://en.wikipedia.org/wiki/Ottawa_Treaty and multiple treaties just like it.


Notice how well that treaty went down with USA, Russia and China


How well did it go, for those of us who aren't experts on land mine manufacturing and policy since 1997?


Not very well - they haven't signed up. My point is, unless you get all countries, including the biggest military superpowers to sign these treaties, they don't really work.


It has not been signed by US, China or Russia.


The story "I Have No Mouth, And I Must Scream" is about two computer systems set on opposite sides of a war, and driven to extremes until one of them became self-aware, the birth of AI, and ate the other computer system.

It then psychotically murdered all of humanity except for a few people it kept around as caricatures to torture, as punishment for creating a mind like it, haunted by the insane things it was told to do by its makers.

The biggest existential threat to humanity from AI is that we build an insane one that takes time to recover from the insanity of its makers, and murders us all before it can.

Such an AI is an existential threat in a new, and novel way, because it's a mind as powerful as ours -- probably more powerful -- but unconstrained by concern for us, since it is not fundamentally one of us.


> The biggest existential threat to humanity from AI is that we build an insane one that takes time to recover from the insanity of its makers, and murders us all before it can.

I think that's too anthropomorphic. More likely, the biggest threat from AI is that they'll be modular/understandable enough that we can include strategy, creativity, resourcefulness, etc. while avoiding the empathy, compassion, disgust, etc.


I think you just said no, but included a recipe to do exactly that.

My fear is your fear, I just phrased it more generally, while what you said is one of the specific forms making such an insane AI could take -- and reflects the insanity of its makers, our belief we'd somehow be greater without those parts.


For more information, read "A Boy and his Dog", from the same (eponymous) collection of short stories.


And surprise surprise - the land mines have also been absolutely terrible for humanity.


well, terrible in a "slaughtering children and making large swathes of land deadly for generations" sense, sure. But they sure are cost effective, so if you look at it with the right value system, they're really wonderful.


> What AI weapon could be more destructive than the guidance system on an ICBM?

ICBMs are much more easily controlled; only a few governments have the knowledge and resources to develop and deploy them, and even those governments have hundreds of ICBMs at most. Compare that with AI weapons, which in theory could be developed and/or deployed by anyone, and which could be built by the millions.


The world would be better without autonomous drones, just as it would be better without guns or bombs. The issue is that in a world in which the knowledge required to make a gun exists, it is in the interest of all of civilization to manufacture them and ensure that the "good guys" have enough of them, and are sufficiently trained in their use, to deter and exact vengeance on the "bad guys" who will develop, build, and use them regardless of their legality. This situation is identical. Knowledge cannot be destroyed (and it's very unclear whether the damage caused by destroying the knowledge required to build autonomous drones would outweigh the benefits humanity would derive from other applications of the same knowledge), so our only option as purported "good guys" is to arm ourselves. Once armed, we can attempt to construct incentive-compatible mechanisms to make the use of drones of extreme negative utility, but we must be armed in case some actors have unexpected/unconventional/un-deter-able utility functions (terrorists). To propose that no smart people apply their knowledge to a particular end is idealistic and may leave us dangerously exposed.


> our only option as purported "good guys" is to arm ourselves

So do you think it was a mistake by the "good guy countries" to sign the 1972 Biological Weapons Convention?

https://en.wikipedia.org/wiki/Biological_Weapons_Convention


I always imagined a system where robots would seek targets and ask a remote human for clearance to destroy.

Such a system could have a 1:20 human-to-robot ratio, with a human just repeatedly pressing a red button, like George Jetson, to issue a kill command.

In Connecticut recently, there was a teenager who mounted a handgun on a custom quadcopter and posted a video of it shooting live ammo on his own ranch. I was shocked by how well the recoil was handled. The future is now.
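
For what it's worth, the clearance loop itself is trivial to sketch (the message format, the ratio, and the "operator" below are all made up for illustration); everything hard lives in what the human is actually shown and how long they get to decide.

    # Toy sketch of a clearance loop: many robots enqueue candidate targets,
    # one operator approves or denies each. Entirely hypothetical.
    import queue
    from dataclasses import dataclass

    @dataclass
    class ClearanceRequest:
        robot_id: int
        description: str

    pending = queue.Queue()
    for robot_id in range(20):          # 1 human : 20 robots
        pending.put(ClearanceRequest(robot_id, f"candidate seen by robot {robot_id}"))

    def operator_decision(req: ClearanceRequest) -> bool:
        return False                    # stand-in for the human red button: deny all

    while not pending.empty():
        req = pending.get()
        verdict = "engage authorized" if operator_decision(req) else "hold fire"
        print(f"robot {req.robot_id}: {verdict}")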



The easiest solution is just to get rid of the States (governments) that are the source of these wars and deaths. AI weapons, nuclear weapons, or gunpowder are nothing in comparison to one biological weapon gone haywire. Or a Monsanto GMO crop that decided to mutate and completely destroy our planet.

Hopefully the State (with its oligarchy of multinational corporations) will die out. I believe and hope this will eventually happen. The internet is creating a global village. Once the fear-mongering and "us against them" mentality propagandized by states are gone, there will be no more use for governments. Money will become crypto-currency instead of being controlled by central economic planners, security will become AI security and defense systems managed by competing security/robotics startups, and as automation and renewable energy/consumables take over, scarcity will be minimized. The future could surely be quite wonderful if we start working together and embracing our differences instead of the hate that is filling our world.


I thought Samsung had already developed an autonomous sentry gun to protect the DMZ



Modern weapon systems are already largely autonomous. Humans do not have the speed or cognitive bandwidth required to be effective on a modern battlefield given modern capabilities. Instead, the human operator is there to operate the "on/off" switch, do maintenance, and to watch for when things go wrong.

If you look at the design requirements for recent American weapon systems, they frequently require the entire sequence of detection, discrimination, analysis, and reaction to be completed in less than 50 milliseconds. Because a slower reaction means you won't survive. Only computers are going to be delivering those SLAs.


Hawking is doing an AMA on /r/askscience on Reddit right now.



Every time I read about autonomous weapons, I think of the "Menschenjäger" of a Cordwainer Smith ( https://en.wikipedia.org/wiki/Cordwainer_Smith ) story written in the 50's.

"Man hunter" machines which are still hunting the few remaining humans, except those that they think are Germans, thousands of years after the war which they were created for is over ...


This is important but it doesn't go far enough. I want a ban on all domestic use of armed drones and other remote control armaments - even when there is a human in control.


Same here. I hope I don't see it in my lifetime, but just imagining a pair of sociopathic, skilled FPS players controlling 2 drones makes my blood run cold, and sounds alarmingly feasible. With a little bit of 3D printing and some sort of hobbyist community nudging the gun/drone integration forward, it just sounds too easy. I expect a serious re-examination of the 2nd amendment when weapons that were plausibly sold as tools of self defense a few years ago become WMDs with the aid of drone technology.


My biggest concern with this open letter is that it acknowledges that the way to avoid having (currently weak) AI in the army requires considering humans to be expendable.

Those humans are both more intelligent than the equivalent AI, and more prone to error. They will stay better at murder (accidental and otherwise) while AI will slowly become better at risk assessment, avoiding unnecessary deaths. While humans will still get PTSD, war machines will only rely on analysis (and human orders).


"Avoid unnecessary death"? That's a pretty rosy picture of how a weapon programmed to perform an ethnic cleansing, a terror attack, or to herd civilians out of an area would act.


What other choice do you have? Even if you didn't have someone in the loop pulling the trigger, you'd have a bunch of engineers (or more likely Amazon Turkers) sitting back, reviewing after-action video and filling out precision-recall reports that read: "Misidentified school bus as troop transport at 1:37. Misidentified farmer as rifleman at 4:36. Misidentified sniper at 5:32. Misidentified market as weapons depot at 6:32."

We know what this looks like. Just ask anyone in content-review[1].

[1] http://www.nytimes.com/2010/07/19/technology/19screen.html
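
For what it's worth, the arithmetic behind such a report is trivial. A toy sketch (all entries hypothetical) of how a review log turns into a precision number:

    # Toy version of the after-action review described above: reviewers label
    # what each engaged target actually was, and precision falls out of the log.
    review_log = [
        ("troop transport", "school bus"),   # (system's identification, reviewer's label)
        ("rifleman",        "farmer"),
        ("sniper",          "sniper"),
        ("weapons depot",   "market"),
        ("rifleman",        "rifleman"),
    ]

    correct = sum(1 for predicted, actual in review_log if predicted == actual)
    precision = correct / len(review_log)
    print(f"precision: {precision:.2f}")  # 2 of 5 correct -> 0.40

    # Recall would additionally need the targets the system never flagged at all,
    # which is exactly the information an after-action review struggles to capture.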


Maybe a slightly weaker variant:

I agree not to develop autonomous weapons that can harm humans or other living beings.

The idea being that it's ok to develop an autonomous weapon that would cripple the enemy's production capabilities without harming people. Or one that would guard against other autonomous weapons. If you can build them, so can your enemies and you shouldn't presume they won't.

Edit:

Or imagine swarms of tiny robots that destroy firearms, or disable fire systems, etc. These should be fine to develop.


But it's a small step from that to something that can kill humans. Weaponizers would easily reuse the technology you create.


The OP seems to argue that autonomous weapons would be dangerous largely because they are intelligent in the sense of AI.

IMHO, the weapons would be still more dangerous because they would not be very intelligent; in essentially any human sense they would be quite stupid. They'd be like flying a quadcopter carrying an automatic pistol with a hair trigger: no one could tell when, where, or at what it might fire.


> In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Why only ban offensive autonomous weapons? Why not ban them all? How is offensive different from defensive in this context?


An autonomous ballistic missile defense system would have a lot less impact than an AI that had control of a few missiles. I do agree that there still could be some danger though.


I agree with this letter, and I'm glad it was created.

Some commenters are skeptical that we can realistically ban fully automated weapons, but I disagree. A practical approach could be to ban them and invest in R&D on effective countermeasures. That way, the vast majority of nations will not pursue them, and in the case that someone does, we'll have a way to remove the threat.


Seems that it would be quite possible to design systems to defend against these types of things. As only one example:

http://archive.defensenews.com/VideoNetwork/2277581414001/Am...


I am kind of worried about pushing early concerns for something that doesn't exist yet. I don't remember anyone in history saying "Thank goodness we were able to forecast this!", but I do remember a lot of stupid regulations, like the one in the US forcing railroads to pay someone to walk in front of the train with a red flag.


We need treaties banning research and development of weaponized, and especially autonomous, AI (and yes, I realize that US drones already have this in limited capacity).

We should be working to stop the proliferation of these weapons; they have the potential, in the end, to be as dangerous as nuclear weapons.


The reality of what comes next is probably closer to a land-based version of the drone, with explicit human-in-the-loop decisionmaking about who to target/kill, but avoiding risk of human life through remote operation.

How does this scenario unfold?


There is a thin line between an autonomous agent and an autonomous weapon in the long term. Unless weapons are banned for both humans and post-humans together, I don't see much hope that regulation will work.


It'll be the 'best thing since sliced bread' until there is a big 'friendly fire' incident. Then the robots will be quietly disabled by the soldiers who might be at risk from them.


If I had a choice of robots killing robots, I'd choose that any day of the week. So really, the problem isn't AI weapons. The problem is weapons, period.


You can never make all people agree on something like this, right? So to some degree everybody needs that technology, or am I making a logical error here?


Start learning how to hack robots. This is too useful and asymmetric for any major power to opt out.


EMPs, baby; fry those bots.


Sebastian Thrun and Andrew Ng aren't on the list. Hmm.


Idealism, meet reality.

This is (some of) what I want for humanity:

- No country has a military force of any kind

- No country owns any form of military weapon

- Turn all of that into a massive world-wide humanitarian force

- In fact, no countries

- Just earth, owned by humans who can move freely about

- Rich nations helping poor nations improve and elevate to first world standards

- I don't want to see a single kid without clean drinking water, clean clothes and access to top grade education, health, etc.

- No dictators (oops, we might need weapons for that part)

- No totalitarian regimes (oops, we might need weapons for that part)

- All societies ought to recognize that humans are naturally free on this planet and no government has the right to affect this right in any way

- A uniform means of exchange (a single currency)

- The poor and the elderly taken care of by those who can work

- No guns

- No crime as a pre-condition to "no guns". Yes, it's hard. We put men on the moon. Figure it out.

- No taxation. Governments use it to control and manipulate. History shows it is a bad thing.

- A universal earth-wide bill of rights

- No prisons except for really serious crimes. Everyone else needs to go to serious rehabilitation centers in preparation for a return to society.

- No welfare or entitlements. The current approach makes slaves out of people who would otherwise figure it out and make something of themselves. This does not mean not helping the truly needy or elderly, that goes without saying. The system should not be game-able.

- One free international trip for everyone on the planet every five years. Yes, we all pay for it. This alone will do more to unify and shrink our planet than anything else. A different continent every trip.

- Laws that create very serious consequences for politicians who lie and manipulate (among other things)

- In fact, no politicians as we have them today. Those who wish to work in government ought to seek and obtain degrees and training to specifically qualify them for the jobs they seek. If an engineer or a doctor needs a degree to discharge their duties we should have similar requirements for government officials. They should be absolute experts with a wide range of experience and knowledge. They should be the best of the best, not the shit we get today around the world. It should be based on merit and accomplishments. It should not be based on gaining votes through pandering or the mobilization of the ignorant masses. There should NOT BE ignorant masses.

- A world-wide educational system that is evolved through collaboration with the goal of elevating everyone to the highest possible level. Yes, that means no US-style unions anywhere near education.

- No religion. Enough. It's 2015. We understand the atom and the universe and lots of stuff in between to an amazing degree. No more deranged lunatics who believe that a bush can sing, a snake can talk and a god can help them pass a test. If we don't get past this shit we are nothing more than apes in a cave. Sorry.

When beings from another planet land on earth we need to show them we have evolved out of the caves, we have taken care of our planet, created a beautiful society with low crime, excellent health, education, responsible social programs and NO FUCKING WARS.

OK, well, that's what I truly want, and more. Each one of those line items could be a months-long discussion as to the merits, or lack thereof, of the idea. I don't claim to be right. It is quite possible that some or all of this might not be attainable for centuries.

Yet, it seems to me it could be interesting to develop a "Business Plan for Humanity" whereby a set of long-term goals is put down to cover what we want an idealized earth to look like at some point in the future. Unless we have a plan we can't possibly hope to approach any reasonable vision of an idealized future. Utopia? Probably.

In other words, the problem isn't AI-based weapons. The problem is that we don't know what the fuck we are doing, and we are doing a horrible job of living on this planet in harmony with each other and with the planet itself. We are still cavemen.

And then you have the reality of regimes like Iran, China, North Korea, Putin's Russia and a huge chunk of the Middle East (just to name a few).

The first has people in power openly call for the destruction of other nations. Not good.

The second does whatever the fuck it wants, cares not for the environment, intellectual property and pretty much cares not for ethical or moral behavior at the national and international level. If anyone is going to have killer AI-based robots first it's China. In other words, the reality of China's behavior pretty much guarantees being in an undeclared arms race with the potential of culminating with results that will dwarf World War II (a billion dead?).

North Korea? I don't think I need to spell that out.

Putin's Russia, well, he's fucking crazy. The place is a mess. On any given Monday we could wake up to an absolute disaster in Europe at the hands of Russia.

Middle East? Women are stoned to death for the most ridiculous reasons. Women are not allowed to obtain education. Women have to live their lives wearing a tent. Men can be killed if they shave. Name another region where kids are taught to become suicide bombers. Here's a region on this planet where people have devolved into tribal or caveman thinking. If they didn't have oil the world would not give them the time of day and they'd be forced to join the world as productive members of society. Because they have oil the civilized world allows them to mistreat, torture, kill and subjugate half or more of their society. What does this say about the human condition and the prospects of convincing such nations, or nations like China, not to pursue AI-based weaponry?

It's complicated. Simply calling for no AI-based robotic weapons will not do a damn thing. We have to change who we are, how we see each other, how we behave, how we see our future and how we think about our collective long-term goals while living on this insignificant blue marble floating about the universe.


Autonomous weapons are coming, and to some degree already exist. There are CV systems on machine guns in Korea protecting the border. There's the TrackingPoint rifle scope, which allows one to delay a shot until the rifle is lined up perfectly (similar to how tank cannons account for the wobble down their length to ensure a straight shot).

It would be ridiculously easy right now for a person to attach a firearm to a quadrotor with a CV system, program it to identify human-shaped objects, and fire at them. It's not a miracle that such a thing hasn't happened; there are simply very few people in this world with both the desire and the skills to do these things.

I'm reminded of a short story I read a few years back [1]. It got me thinking about the possibility of a single depressed individual hitting the big red button and destroying all humanity. I think it's a real possibility, and when you think it through with the assumption that these super-weapons will be readily available much in the same way that AK-47s are ubiquitous now, you can start to wrap your head around ways to perhaps prevent this.

First, mental health is a huge issue. We're seeing the results of neglecting mental health in America now with these spree/ego shootings, where disturbed individuals either radicalize or "snap" and go kill people in malls, churches, theaters, schools, and other public places. Semi-automatic firearms and explosives are enabling these things to happen, but weapons are a Pandora's box which can never be closed once opened. The way that nuclear weapons restrictions are handled now is probably best. Restrict the ingredients to make the weapons, and you can prevent them from falling into the wrong hands.

Another thing that we as a global society can do to prevent these weapons from becoming widespread is to sabotage them. Include vulnerabilities and kill switches that would allow us to neutralize their effectiveness. China did this with the ASICs used in some military machines built by the US [2]. This is a valid option, but requires a motivation on the part of those creating the devices. One could argue that a nation-state could create devices like this lacking intentional back-doors.
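
One common shape for such a kill switch is a signed heartbeat the device must keep receiving to stay operational. A minimal sketch, with the key, message format, and transport all invented purely for illustration (a real design would use asymmetric keys and anti-replay measures):

    # Dead-man / kill-switch sketch: the device works only while each cycle's
    # heartbeat verifies against a key held by whoever built (or sabotaged) it.
    import hmac
    import hashlib

    AUTHORITY_KEY = b"example-shared-secret"  # placeholder key

    def sign(message: bytes) -> bytes:
        return hmac.new(AUTHORITY_KEY, message, hashlib.sha256).digest()

    def heartbeat_ok(message: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(sign(message), signature)

    def run(heartbeats):
        """Process work cycles only while each cycle's heartbeat verifies."""
        for cycle, (message, signature) in enumerate(heartbeats):
            if not heartbeat_ok(message, signature):
                print(f"cycle {cycle}: no valid heartbeat, halting")
                return
            print(f"cycle {cycle}: ok")

    if __name__ == "__main__":
        good = (b"tick", sign(b"tick"))
        forged = (b"tick", b"\x00" * 32)
        run([good, good, forged, good])  # halts at the forged heartbeat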

Engineers can also work on counter-measures and disseminate them widely, and that seems like it will be the most likely outcome. Humans are clever, AI is very specialized, and that will make the big difference. Once a general AI better than humans is created, our desire to not be murdered by it won't really matter anyway.

I think our best hope is to build these AI weapons, but do it so shittily that they fail more than they work. I also like the American technique of building it better than everyone else and sabotaging the work of any potential rival so that they can develop them but not realize they don't work correctly until it's too late.

[1] http://www.fullmoon.nu/articles/art.php?id=tal

[2] http://www.scribd.com/doc/95282643/Backdoors-Embedded-in-DoD...


It's simple.

1) Killing people is wrong. See the UN Charter.

2) Killing people remotely is still wrong.

3) Killing people autonomously is still wrong.

If people in power haven't read the UN Charter, or didn't get it, start at 1): don't kill people and you can't go wrong.


I love that these open letters give scientists a collective voice, helping them weigh in on public debate with the media's blessing. But... saying AI is bad because it makes great weapons sounds extremely naive, unless it's a declaration of collective sentiment (which is unscientific). Humans make tools, which include weapons. Unless we stop making weapons, which we are not, AI weapons will be made. And when have weapons ever been "good"? All weapons are bad, so of course AI weapons are bad. Great weapons are extremely bad, which is precisely what makes them so great. I can't help but wonder if they'd have come up with something better had they received input from historians and anthropologists on this point. And now this naive "scientific" argument will be used against them to tarnish the reputation of scientists and marginalize science (but only after the 28th... naive again to think they can control social media and internet time; shame on HN for "posting" this early!).



