Don't Let Robots Pull the Trigger (scientificamerican.com)
47 points by ohjeez 28 days ago | 63 comments



I think the authors are 100% right on everything, but they failed to bring up one of the most important factors that I believe is driving US policy in this area: if Russian or Chinese weapons are automatically firing at the speed of a neural network while our weapons are waiting for meatware actuation then it's going to be a short battle.


That’s why it needs to be a treaty. There needs to be an understanding that it’s as unacceptable as biological warfare and that the gloves are off once they are used.

No one will use autonomous fire in some backwater proxy war skirmish (which is where superpowers meet) if the response would be the same as it would be to a nuclear first strike.


> There needs to be an understanding that it’s as unacceptable as biological warfare

Are you under the impression that the major powers don't have biological warfare capabilities?

> and that the gloves are off once they are used.

So, China uses autonomous killbots in Tibet to control an uprising. The US "takes its gloves off", and...sends a strongly worded letter? Sends the world economy into a freefall?

Further, China and Russia could do some limited testing, and then stockpile millions of killbots/killdrones. Once the gloves have already "come off", they can then rule the battlefield, since their opponent(s) won't have a similar massive force.

I think the combination of cheap and effective will be impossible for the less scrupulous countries to resist...and possibly the more scrupulous countries as well.

It is also a slippery slope - the US is already working on autonomous anti-ship cruise missiles that search an area and look for the correct ship (think "Chinese aircraft carrier"). That implicitly means the missile will autonomously "decide" which ship containing hundreds or thousands of sailors gets hit. Full autonomy will also be a requirement for any effective UCAS capability.

I expect that regardless of what treaty is signed, the various powers will make sure there are loopholes permitting what they wish to do - or they will simply ignore the treaty. A concrete example of such behavior is Russia's long-term violations of the INF Treaty.


> Are you under the impression that the major powers don't have biological warfare capabilities?

No, but that they are extremely careful with using them since it’s considered an extreme escalation to do so even on a small scale.

> China uses autonomous killbots in Tibet...

Replace “autonomous killbot” with “small tactical nuke” and you see what I’m getting at. They could. But they won’t - because they can accomplish any military goal with conventional means. So why risk the backlash? That’s the kind of threshold I’d like for the deployment of autonomous weapons of this type. And it’s difficult given how easily deniable they are, compared to e.g. bioweapons.

> ...autonomous cruise missiles...

This is an interesting gray area. No one thinks a fuse that ensures a weapon doesn’t blow up a friendly is a bad thing. But we still think of these weapons as being “launched” by a human, even though they can make terminal decisions on their own. I’d draw the line for what’s ethically acceptable at the point where you no longer have a human launch weapons “at” a target but instead launch a weapon to roam. The cruise missiles are still just clever fuses; they don’t linger waiting to autonomously attack ships. The only reason this remains a gray area and a slippery slope is the short flight time of the missiles. If they could fly for a year (like the experimental Russian nuclear-powered missile), they would no longer be launched at predesignated targets - they’d effectively be roaming killbots. With current cruise missiles, a human still makes a decision to attack, even if it’s only a rough area thought to contain the target. They know what they are trying to attack, and there is a human to take the blame if the weapon subsequently makes a mistake and hits a tanker.

> ...stockpile millions...

Yes. Comparing again to nukes that’s very similar. So would I prefer a world without nukes to one with? Sure. But the best we could do was superpowers stockpiling to ensure mutual destruction. I think that’s the best we can hope for here. I don’t think these weapons won’t be built by the millions, I just hope they will be used reluctantly.


(Sorry for the late reply...)

> The cruise missiles are still just clever fuses, they don’t linger waiting to autonomously attack ships.

That is not correct. The cruise missiles will be given an area to search. They will then traverse that area looking for targets. Anything that matches their programming will be hit. I'm sure they're flexible enough to take orders like "attack any destroyer, cruiser, or aircraft carrier". It will be up to the sensors and software to make sure the target isn't a civilian ship, for instance.
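
In other words, the weapon's own classifier becomes the last line of defence against hitting the wrong ship. Conceptually it is something like the toy sketch below (class names and thresholds are entirely invented, not from any real system):

    # Illustrative only; names and numbers are made up.
    VALID_TARGET_CLASSES = {"destroyer", "cruiser", "aircraft_carrier"}
    MIN_CONFIDENCE = 0.95

    def is_valid_target(classification, confidence):
        # The whole burden of "don't hit the tanker" falls on this check.
        return classification in VALID_TARGET_CLASSES and confidence >= MIN_CONFIDENCE

    is_valid_target("tanker", 0.99)            # False
    is_valid_target("destroyer", 0.97)         # True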

> I don’t think these weapons won’t be built by the millions, I just hope they will be used reluctantly.

They are already being pitched on the open market by the Chinese:

>> The Blowfish A2 “autonomously performs complex combat missions, including fixed-point timing detection and fixed-range reconnaissance, and targeted precision strikes.”

>> Depending on customer preferences, Chinese military drone manufacturer Ziyan offers to equip Blowfish A2 with either missiles or machine guns.

>> Mr Allen wrote: “Though many current generation drones are primarily remotely operated, Chinese officials generally expect drones and military robotics to feature ever more extensive AI and autonomous capabilities in the future.

>> “Chinese weapons manufacturers already are selling armed drones with significant amounts of combat autonomy.”

>> China is also interested in using AI for military command decision-making.

https://www.technocracy.news/china-releases-fully-autonomous...


Even in a controlled testing environment, it would be difficult to distinguish the signature of an autonomous weapon system from a remote-controlled unmanned system.

On the battlefield, it will be impossible.

The only difference between a system with a human firebreak and a fully autonomous weapon is the flip of a bit.
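
To make that concrete, here is a toy sketch of the point (every name here is invented and has nothing to do with any real system): the human firebreak can reduce to one boolean checked in front of an otherwise identical targeting pipeline.

    # Purely illustrative; all names are made up.
    AUTONOMOUS_MODE = False  # the "bit" in question

    def matches_profile(track):
        # Stand-in for whatever sensor/classifier logic decides a track is hostile.
        return track.get("class") == "hostile_vehicle"

    def engage(track, operator_approved=False):
        if not matches_profile(track):
            return False
        # The human firebreak is this single condition; flip AUTONOMOUS_MODE
        # and the identical pipeline fires without anyone approving.
        if AUTONOMOUS_MODE or operator_approved:
            print("firing on", track["id"])  # stand-in for the actual fire command
            return True
        return False

    engage({"id": "T-17", "class": "hostile_vehicle"})                          # held
    engage({"id": "T-17", "class": "hostile_vehicle"}, operator_approved=True)  # fires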

Treaties will not be effective at preventing the development of these weapons. Any illusions will be shattered by the first bad actor.

An arms race in this space is inevitable; realistically it has already begun.


That's true: you will have deniability. To begin with, all autonomous systems will also be able to operate as remote controlled, or semi-autonomous with human triggering. If you are the kind of state that will, say, repaint military vehicles before sending them across the border to a country where you "have no military engagement" - you would also no doubt operate autonomous weapons and call them remote controlled.

Still - using what amounts to a nuclear first strike is a dangerous game to play, so it might at least make superpowers think twice if it's made clear that this is how it's seen. Russia wouldn't try to use a bioweapon or tiny nuke and claim it was conventional, because it's a risky gamble.


Yes, the fact that these systems will be built by someone regardless needs to be addressed unless we want to become the equivalent of redcoats lining up in neat rows to fight people who are not playing by the rules.


The line isn't between Russian, Chinese, and American citizens; it's between the power centers within those countries and the citizens, both their own and everyone else's. Those power centers will be more loyal to each other than to the populations they "represent" - populations they know well enough when it's time to consent and pay taxes for something, but not at all when it comes to dissidence or poverty. Robots firing at robots or soldiers isn't going to be the end result; robots firing at civilians is.

IMO the closing window of opportunity, that is, the fleeting opportunity that still exists, would be a culture of intellectual honesty and getting rid of moral double standards rather than goosestepping, so maybe people in Russia, China and elsewhere might think hey, maybe we'd actually like to be free, too. But instead, let's just play another round of this game. Remember the cold war, how giant that was, yet when it ended, the military budget just stayed as big? It's a racket.


this just proves we haven't learned anything from the nuclear arms race.


>if Russian or Chinese weapons are automatically firing at the speed of a neural network while our weapons are waiting for meatware actuation then it's going to be a short battle.

While I see your point on the ever-present escalation risk in these things, are you sure you're not taking an overly restrictive view of "pull the trigger"? Take long-existing weapons like just plain old guided missiles. In one respect it could be argued a computer system is making the kill choice there: after firing there will be a long period where the computer is seeking the target, and the computer is what actually detonates the warhead that makes the kill. But in practice it's hard to argue that a human isn't still "pulling the trigger", making the decision that "this target should be killed", and the level of intelligence the weapon employs to accomplish that order afterwards isn't so relevant. The real comfort factor seems to come from there being inherent limits in terms of time/space (a missile only has so much run time and range) and specificity (a missile is going after a specific designated target).

If we consider the most problematic weapons of modern history, I think the real uniting factor is when they get away from all those limits at once. Landmines can linger a long time and activate indiscriminately without further human intervention. Chemical and bioweapons are also not well controlled in terms of time, space, or specificity, despite involving no AI. Project Pluto, the early Cold War R&D by the US military to create a supersonic nuclear ramjet cruise "missile" (more of an autonomous bomber) that could be launched and then linger flying over the ocean for months (thanks to its nuclear reactor), was abandoned precisely because it was considered far too destabilizing even for then. Despite still being a "missile", the longer time/space/area effect made a difference. Heck, even cluster munitions on conventional missiles, while not considered in the same category as landmines exactly, are still pretty hotly debated.

By the same token, in a future conflict with more advanced NN/ML weapons, isn't there still room for operator decisions? I.e., "here are cryptographically authenticated orders to kill targets matching X profile within Y geographic area for Z time period". It would seem like setting international restrictions on X, Y and Z there (requirements for safety interlocks and for any given kill authorization to be valid for no more than 20 minutes over no more than 400 km^2, to toss something out) could be a viable approach. So in the specific minutes and seconds and milliseconds of machine/machine conflict there'd be no disadvantage, but humans would still need to be constantly "involved" at a higher level. Having killbots just roaming around or lying in wait (and potentially lasting after the conflict is totally over, a la landmines) would be banned, but sending robot tanks to attack a specific fortified position on one single day with active strategic direction would not be.
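
A rough sketch of what such a bounded, authenticated order might look like (the names, fields, and limits are all invented for illustration; the 20-minute/400 km^2 numbers are just the ones tossed out above):

    # Illustrative only; not any real system's format.
    import hmac, hashlib, json
    from dataclasses import dataclass

    MAX_DURATION_S = 20 * 60   # example treaty limit: 20 minutes
    MAX_AREA_KM2 = 400         # example treaty limit: 400 km^2

    @dataclass
    class KillAuthorization:
        target_profile: str    # "X": what may be engaged
        area_km2: float        # "Y": size of the authorized region
        issued_at: float       # epoch seconds
        duration_s: float      # "Z": validity window
        signature: bytes       # HMAC from the command authority

        def payload(self):
            return json.dumps([self.target_profile, self.area_km2,
                               self.issued_at, self.duration_s]).encode()

    def order_is_valid(auth, key, now):
        expected = hmac.new(key, auth.payload(), hashlib.sha256).digest()
        return (hmac.compare_digest(expected, auth.signature)   # authenticated
                and auth.duration_s <= MAX_DURATION_S           # bounded in time
                and auth.area_km2 <= MAX_AREA_KM2               # bounded in space
                and auth.issued_at <= now <= auth.issued_at + auth.duration_s)

    key = b"shared-command-key"
    order = KillAuthorization("armored_vehicle", 350.0, 1_700_000_000.0, 900.0, b"")
    order.signature = hmac.new(key, order.payload(), hashlib.sha256).digest()
    order_is_valid(order, key, now=1_700_000_500.0)   # True: signed and within bounds

The point is only that the weapon enforces the X/Y/Z bounds locally, so a roaming or open-ended order is simply never valid.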

Basically moving humanity "up the chain" a bit seems like something countries could practically agree to without feeling massively behind and could be monitored/verified.

Edit: updated to add some comparison to non-AI weapon systems that have been agreed to be restricted with some success.


Similar to chemical warfare. There needs to be a UN resolution to ban these types of weapons. Make it so use of them constitutes war crimes.


A large issue with this is that, as the DARPA ALIAS project[1] shows - along with another project I do not have a reference for, but saw demonstrated last year, which made a traditional helicopter autonomous with just a bolt-on avionics box - it is entirely possible to have "manned" systems that are made autonomous with a bolt-on. This means that any country could develop autonomous warfare systems entirely in secret and then convert everything to unmanned overnight, just before an attack.

You can't see software.

[1]: http://www.aurora.aero/wp-content/uploads/2016/10/ALIAS-Broc...

Edit, found a reference for the autonomous helicopter:

https://www.navy.mil/submit/display.asp?story_id=103769


I don't understand your argument. Laws are created regardless of the potential for illegal circumvention. Nobody ever decided "we shouldn't bother to outlaw theft because shoplifting is too easy". Creating the law provides the teeth to prosecute someone found to be breaking it.


Agreed, the threat I am concerned about is a country developing these systems and preparing to deploy them in secret. If they are outlawed and other countries obey the law, a bad actor that develops them in secret and waits for the tech to become sufficiently advanced could launch a __very__ effective blitzkrieg.

Unlike chemical/nuclear warfare, no factories, material transportation, etc. out of the ordinary observable by other countries. Sensors needed for the systems would be entirely justified for other reasons and software to change behavior can be loaded invisibly.

There would be no way to police it.


Here it is in flight and a shot of the interior as well as the sensor array bolted onto the nose:

https://www.youtube.com/watch?v=WxfNChMNQyM


Chemical weapons aren't used because they're not worth the risk. There was virtually zero battlefield use of chemical weapons in the Second World War, which was otherwise as "no holds barred" as you can imagine.


It’s not really risk that’s an issue, it’s a combination of expensive production, dangerous and expensive storage, and expensive deployment contrasted with a total lack of efficacy in killing the intended target. Unless your goal is to murder untrained and unequipped civilians, chemical weapons are just not very good weapons. Soldiers are trained, have vehicles with overpressure seals, masks and MOPP suits, detectors and a command structure.

The real use for chemical weapons is to terrify civilians and poorly organized/equipped militias. Most of all it’s a way to murder a large number of civilians without destroying infrastructure. Needless to say that’s not something you’d typically do in a war, where destruction of infrastructure is typically step one! As a result most countries recognize the economic and social burden of chemical weapons and their use, and strongly oppose them.


> The real use for chemical weapons is to terrify civilians and poorly organized/equipped militias.

See Vietnam War, Napalm, Agent Orange etc.


By that definition explosives are chemical weapons, and you know that isn’t what’s meant by the term. Napalm is an incendiary and equally effective against military and civilians. Agent Orange is a defoliant, and while its long-term effects are horrendous, it’s not effective as a chemical weapon; you don’t want to sit around and wait for people to develop cancers.


Hell, by that standard, bullets are chemical weapons because they can cause lead poisoning.


Aren't 'killer robots' ethically similar to landmines? Only with better fire control systems.

Not saying that landmines are paragons of ethical weapon systems, but they are making a kill decision without a human being in the loop, based on nothing more than a pressure plate or trip wire.

If we had landmines that could differentiate between a tank and a school bus wouldn't that be a step forward?


https://www.un.org/disarmament/convarms/landmines/

"The ensuing Mine Ban Convention has been joined by three-quarters of the world’s countries. Since its inception more than a decade ago, it has led to a virtual halt in global production of anti-personnel mines, and a drastic reduction in their deployment. More than 40 million stockpiled mines have been destroyed, and assistance has been provided to survivors and populations living in the affected areas."

I think the idea is to nip this one in the bud before it gets there.


Yes. Autonomous weapons systems don't make much sense on patrol for a long time, but as area-denial systems, they're already past viable and it's surprising they're not deployed.

I think there is a big argument to be made for their ethicality over landmines: unlike landmines, they are easy to deactivate and can be made highly visible while remaining effective. I don't think this makes them ethical, but I do believe that we have to consider how that changes their potential uses. There's an argument that it's far safer if we have hard rules around certain areas - "Enter a 500m radius around this military base moving more than 5mph and you will be shot many times" - than if we have fuzzy ones around guards and schedules. Certainties are often better than including a fuzzy human in the loop, which is why electrical installers all put their own lock on the switch rather than assigning a coordinator.
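
For what it's worth, the "hard rule" above is small enough to write down directly, which is part of why it can be published, audited, and switched off. A minimal sketch, with the thresholds taken from the example and everything else invented:

    # Illustrative only; coordinates are local metres around the protected point.
    import math

    RADIUS_M = 500.0
    SPEED_LIMIT_MPH = 5.0

    def violates_exclusion_rule(x_m, y_m, speed_mph):
        inside = math.hypot(x_m, y_m) <= RADIUS_M
        return inside and speed_mph > SPEED_LIMIT_MPH

    violates_exclusion_rule(120.0, 80.0, 3.0)    # inside but slow -> False
    violates_exclusion_rule(120.0, 80.0, 25.0)   # inside and fast -> True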


>they're already past viable and it's surprising they're not deployed.

Not deployed as far as the public knows.

It wouldn't exactly be hard to add a "shoot everything that comes near us" flag to the software for any existing naval CIWS.


Arms control treaties are always vaguely pointless in the sense that countries will either leave or violate them the moment they are no longer within their interest. The biggest example of this is probably the Washington and London naval treaties of the 1920s and 1930s. The only real exception so far seems to be nuclear arms control treaties, which (a) mostly entrench the existing nuclear powers in a position of military hegemony over the rest of the world and (b) don't really stop anyone from having the weapons.

Landmine bans in particular are kind of ridiculous because of how easy it is to build and plant IEDs.


Interesting that you brought up this treaty, because I believe none of China, Russia, or the US has signed it...

[0]: http://disarmament.un.org/treaties/t/mine_ban


Sophisticated landmines do have decent target discriminators to reduce false positives with respect to the targets they are intended for.


I.e. they are more effective against humans.


Or if they are anti-vehicle, they are not set off by dismounted infantry.

They are more effective against the intended target.


My concern about autonomous robot warfare is not that it will shoot the wrong people or go out of control; that's easily answered by "but this one won't", and there's no counter-argument which will not look like pure assertion.

My concern is that they will work just fine: this would bring the domestic costs of warfare to near-zero (the cost of the robot weapons). For those of us that are sceptical of military adventurism, just consider how much more willing our politicians would be to invade some random, relatively harmless country if there were no "boots on the ground" and no possibility of body bags coming home.


> My concern is that they will work just fine: this would bring the domestic costs of warfare to near-zero

As an adjunct viewpoint to this - what happens when such weapons are available to regular people?

In theory, that's possible today; witness the number of instructables and other similar projects detailing how to create a nerf or paintball automated tracking gun system. It wouldn't take much to upgrade that to mount it on a decent mobile platform with a real weapon.

Right now, the objective or use-case for such a device isn't clear, so we haven't seen criminals build or use them, beyond perhaps drug trafficking via consumer drones and similar.

I'm sure, though, that it is going to be a problem in the future...


I'm pretty sure that some criminals will use such devices in the future.

I'm also pretty sure that criminal use is going to be rare.

There doesn't seem to be a very large overlap between willingness to commit violent crime and the technical competency to turn off-the-shelf consumer products into weapons. Why do terrorists stab people with knives when they could make explosives from household materials instead? They're clearly violent and unafraid of death. My interpretation: they lack the necessary knowledge to make explosives, aren't capable of self-learning either, and settle for a vastly inferior but readily available knife. Even organized crime seems to use explosives rather rarely, and even then diverted/stolen commercial ones more often than DIY.


> There doesn't seem to be a very large overlap between willingness to commit violent crime and the technical competency to turn off-the-shelf consumer products into weapons.

Good insight, which I will tack my own thought onto: the kind of person that would be successful at this would probably have the right mindset to work in an engineering job, and people who do that are both earning enough that they have no need to turn to crime, and too busy to be a political or religious extremist (something about idle hands and the devil's work...)

Still. Stabbing and other unsophisticated attacks seem to be relatively recent and primarily a feature of Islamic terrorism. If I am correct, then...

> My interpretation: they lack the necessary knowledge to make explosives, aren't capable of self-learning either, and settle for a vastly inferior but readily available knife.

...I have an alternative, perhaps entirely wrong interpretation: Islamic terrorism typically requires that its most skilled and most dedicated operatives kill themselves in the course of their first and only attack. Which is to say, they're running out of good operatives.

Meh, perhaps your original interpretation is better :)


I think you're exactly right.

A key check on military abuse in a democracy is that military actions kill some fraction of voters. We have already seen that as that fraction goes down through better soldier protection technology, our willingness to go into battle increases. If we drive that down to near-zero, the implications are frightening.

A further consequence of this is that the fewer human soldiers you have in the field, the fewer embedded journalists you have. And, without those, it is even easier to conduct military actions without any oversight from voters.


The long, relative peace on the Korean peninsula has persisted despite a massive manpower advantage by the North. The presence of hard defensive positions augmented by landmines and autonomous sentry guns allows South Korea to deter Northern aggression without having to keep up with North Korea's punishing and impoverishing level of military spending.

Like any weapon, robotic weapons are only a problem if you use them for evil.


>The presence of hard defensive positions augmented by landmines and autonomous sentry guns allows South Korea to deter Northern aggression without having to keep up with North Korea's punishing and impoverishing level of military spending.

And the presence of plenty of artillery pointed at Seoul has prevented anyone who doesn't want to see Seoul leveled as a result of their actions from taking aggressive action against North Korea. Knowing that if you do anything bad you will get whacked hard enough that it's not worth it is a powerful deterrent.

>Like any weapon, robotic weapons are only a problem if you use them for evil.

Because of the inherent defensive bias of the technology (stationary is easier than mobile, defending a known area is easier than entering a new area and figuring out what to shoot) robotic weapons are a bigger problem for those seeking to do or enable evil.


> Because of the inherent defensive bias of the technology (stationary is easier than mobile, defending a known area is easier than entering a new area and figuring out what to shoot) robotic weapons are a bigger problem for those seeking to do or enable evil.

This is a crucial point.

Another point is that most potential adversaries don't value the lives of their fighters as much as we value the lives of our military personnel. If half a dozen American troops die, it's national news. If a hundred die, it's a political scandal. People wonder why the US spends so much more on defense than other countries. That's partially because we're willing to spend lots and lots of dollars to save just a few lives.

And, for that reason, I think the US in particular will eventually be willing to replace even its offensive military capacity with robots. And if it works, that means the US or any other technologically advanced state can, at great expense, continue a counterinsurgency campaign indefinitely.

But, as you point out, any defender who can scrap a robot together will have an advantage in countering that ability. Hell, they might even be able to use open-source software to run them. And that's the good news. Orwell wrote, "...tanks, battleships and bombing planes are inherently tyrannical weapons, while rifles, muskets, long-bows and hand-grenades are inherently democratic weapons. A complex weapon makes the strong stronger, while a simple weapon--so long as there is no answer to it--gives claws to the weak."


Autonomous robots are going to be really effective in all the places where it is difficult for a human to survive and where decision making is mediated by technology, not human senses. So, they are going to be really effective deep underwater, in space, and in high-g aerial combat at radar range.

Setting aside the moral question, there is also the institutional issue that officers may not like to see pilots replaced with drones, sailors with UUVs, etc. That creates a dangerous place where the US military can have its manned systems disrupted by cheaper unmanned ones. I think that is a real danger.

As far as the moral issue goes: what is the difference between a machine firing a missile at a plane it believes is attacking, and an officer on a ship pressing a button to fire that missile based on the risk assessment of a machine that tells the officer an airliner is an attacker? Ref: https://en.wikipedia.org/wiki/Iran_Air_Flight_655

In that case the crew may have made a mistake in judgement. But it is easy to see a potential future case where no human can interpret the incoming data better than the computer does, and humans have to act on the computer's judgement alone. We can insert humans in the loop, but that may not really mean anything.


What worries me most is how robotic warfare further concentrates power. Currently you have to convince the people in the military to follow your orders, meaning you need at least some legitimacy. What happens when even that human check on power is gone?


So how is a cruise missile not an autonomous weapon?


The cruise missile does not decide on its own when to start. Humans do.


It also doesn't decide the target.


Neither does the person launching it though.

Software in the case of AI weapons is basically just enlisted personnel that follow orders perfectly.


I thought this was about weapons making decisions in the field autonomously. Like spotting a target and then also deciding to kill it.


Well right now Pvt. Crayon is making the decision on whether or not to engage.

In the edge cases Pvt. Crayon is probably going to be better than AI at making the call for a hell of a lot longer than the people pushing the technology claim at any point in time.

The question really becomes how much you care about edge cases.

When there's a car bomb barreling toward your checkpoint being able to just hit the big red "kill everything in this predefined space" button and dive for cover would be a godsend.


Push this to its logical conclusion: put the robots in an arena and decide policy based on who wins. No one has to get hurt and we don't even have to blow up real buildings and stuff.

Play war games to game war.


Relevant: https://www.stopkillerrobots.org/

> Formed in October 2012, the Campaign to Stop Killer Robots is a coalition of non-governmental organizations (NGOs) that is working to ban fully autonomous weapons and thereby retain meaningful human control over the use of force.


Robots have been killing people by accident for years, soon we will find a way to turn this bug into a feature!


The USA already has this. Russia and China have similar systems.

If you fly a kamikaze attack against a large US warship, Raytheon's Phalanx CIWS will automatically fire. I believe it handles speedboats as well. The Patriot system is similar, able to shoot down incoming fighter jets.


I think in principle this means we need to develop robots which are capable of disabling robots using means which are generally non-lethal (not through autonomous decision making, but matter-of-fact mechanisms like EMP, sensor blocking/jamming and so on).


Yes! Defense is easier than offense in general, especially with time to prepare.

One area that should be investigated is sticky, hairy countermeasures. Just think about what kills your vacuum cleaner.

Or what about rip-stop protective clothing that can kill a chainsaw in a split second? (I still have my leg because of protective chaps with Kevlar fibers. Grubbing out a nasty old stump, one moment of inattention and the chain bit my leg, but the fibers jammed the saw in an instant. I looked down and it actually took me a few moments to realize that the rip in my chaps, with the streamers of Kevlar hanging down, could have been my own precious meat! Man did I shiver, and send up a prayer of gratitude for the folks who made the chaps!)

Anyhow, once you start thinking defensively the ideas flow readily.

One idea I want to try is a thing that shoots those plastic balls (for the plastic ball pits you play in) but with a non-Newtonian goo (like cornstarch goo) that goes stiff at greater energies but is fluid at lesser. Result: anti-riot "foam". People can't run around and flail in it but you can still move around (to e.g. provide first aid). There could be an enzyme or something that can be sprayed to dissolve the goo... etc.


This absolutely needs to be an agreed upon standard similar to chemical weapons and mines.

Anyone remotely familiar with software development knows that automating either when to launch or what to target will not end well.

Which bug will end a civilization?


Forbid robots and send this guy instead. Two problems solved.


Mines and other kinds of death traps are primitive robots too.


A robot is just a more complicated fuse.


A fuse can't make a decision to light itself.


Some weapon systems arm themselves according to certain conditions: altitude, time from launch, etc. I think one could argue that some missile systems are already robots that "pull the trigger".


https://en.wikipedia.org/wiki/Russian_submarine_Kursk_(K-141...

Best decision: don't build it; if it's already built, don't launch it; destroy it as soon as possible.


But a human dictates the launch and what the target will be.


How is this different from programming the killer robot and turning it on?


Do they? Aren't some just sent out to find a heat source and blow it up?


Oh god no. At least not as the primary guidance for long-range. It can be partially used to track a target, but it's not being used to decide where to target in a broader sense.

Short range devices sure, but they still require a human to point and launch. There's not a margin of error there that will divert a missile from an armored vehicle to an elementary school.



