DOD officials say autonomous killing machines deserve a look (arstechnica.com)
57 points by prostoalex on Mar 6, 2016 | 83 comments



I find it interesting how this was framed at the end of the article: the idea that this is a necessary development in case an enemy uses ECM to disrupt the link between the robot and its human controller.

Because that will probably sound reasonable to most people.

"Hey, hey, hey now. We're not sending unmanned terminators out to battle to search and destroy any living thing.

This is our cute little droney-drone drone. We call him Charlie the unicorn. Now if the bad baddy bad terrorists do something bad to Charlie and make it so he can't receive orders, don't you want Charlie to be able to protect himself?"

And the robot apocalypse begins. Not with an army of soul-crushing t-1000s, but with an M-79 grenade launcher dressed up as a puppy.

My guess is that by 2035, we will be mostly okay with it though. Putting AI in charge of driving cars is, by definition, letting AI make life/death choices. If we're okay with that, we are inherently accepting that these ethical situations can and will be accurately modeled.

If you can tell a car how to make a decision between hitting an old lady in the street vs. crashing into a tree and killing the passenger and get that right enough to please the public, I'm sure we can figure out how to tell a robot in a battle situation which person to shoot . . . well enough to please the public.


I would argue the US should start training its AI already; in a combat situation, human senses become more susceptible to finding threats where there aren't any.

They could use the AI for real-time analysis of video feeds in helicopters etc. A highly publicized incident comes to mind where a cameraman was mistakenly gunned down because the human operators thought he was wielding an RPG, while it was likely a camera he was holding. Could an AI have detected that?


An AI doesn't need to. An AI can wait, take the hit, then send in a fleet of taser-armed drones to subdue the guy, and another helicopter to extract him.


Sounds like Wing Attack Plan R. How could that go wrong?

https://www.youtube.com/watch?v=n8qCkVklFWE&feature=youtu.be...

It's Dr Strangelove all over again. (Dr Strangelove was about a crazy general who launched a nuclear attack, using a provision that allowed him to do so upon losing contact with Washington.)


This happened to a Soviet nuclear submarine that was being pursued by US Navy ships near Cuba during the Cuban Missile Crisis. Two of the three officers voted to authorize a nuclear attack.

https://en.wikipedia.org/wiki/Vasili_Arkhipov


A car is not the most effective weapon, so I think it matters what the robots get equipped with.

It's an interesting word choice: "accurate ethical modeling". Given certain "ethical priors", what we do depends on what we have encountered in life. Soldiers are humans with siblings, parents, wives, pets, homes, etc., which lets them learn what they ought to do in a wide range of situations. If we start to train robots only in the art of killing people, or even only in de-escalating dangerous situations, that is a very impoverished dataset.

I would never want an AI that is able to kill people, but that I don't trust to hold my baby.


Well put. That's a great smell test.


> If you can tell a car how to make a decision between hitting an old lady in the street vs. crashing into a tree and killing the passenger and get that right enough to please the public, I'm sure we can figure out how to tell a robot in a battle situation which person to shoot . . . well enough to please the public.

I have not seen any indication that this is a solved problem, and it's a major reason why I'm against autonomous cars as they stand today. I'm extremely concerned that in the rush to get to self-driving cars, issues like this are being swept aside in the typical Valley attitude of asking for forgiveness later.

To be clear, I would love an autonomous commute vehicle that lets me work on the way to work, but only if it's safe. At the same time, I love driving for pleasure and do not want a future where that's out of the question. It's a big dilemma.


I may not have been clear enough. Perhaps I should point out that I think the "If" in my hypothetical is a really big if.

Except for the part about it being good enough for the public to go with it.


By 2035, anyone who is not OK with the AI running things will, I am sure, be sent to a reeducation camp or killed... so it is all good.


Just playing the devil's advocate here:

* No soldiers of the side deploying armed drones need to be in danger.

* Robots do not act cruelly unless specifically instructed. Humans do have a pretty bad record of unnecessary cruelty due to racism, boredom, intoxication etc.

* If you log all decision-making processes, you can justify each action taken. It's like bodycams for police officers. If you made all the logs public after the war, it would be possible to identify war crimes and who gave the order (unlike now).

* Nothing requires a machine to use lethal force all the time. Systems whose primary purpose is capturing combatants can be considered when there's no risk of losing one of your own soldiers.

* Your robot's priority doesn't have to be self-preservation (unlike a human soldier's). When in doubt, it can hold fire and analyze the situation.

All in all, if the military used these autonomous killing machines with the intent of minimizing civilian casualties, would this not be an improvement over the current state of warfare? Of course it won't make war per se less likely, but not using these machines doesn't either. So why not use them?


> * Robots do not act cruelly unless specifically instructed.

Well, sure, but have you used computers recently? It's often a fucking awful experience, and while the computer clearly isn't trying to make your life frustrating, that's often the result.

    PC LOAD LETTER
https://www.youtube.com/watch?v=5QQdNbvSGok

https://www.youtube.com/watch?v=N9wsjroVlu8

Do we really want the people who make printers to make autonomous killing machines?


Then again, the printer is just not working. It's not randomly printing things you don't want. In the same sense, a broken autonomous tank would just stop dead in its tracks instead of suddenly killing everything that moves. Those failure modes have always been very rare.


http://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_...?

> In this article I present in which way scanners / copiers of the Xerox WorkCentre Line randomly alter written numbers in pages that are scanned. This is not an OCR problem (as we switched off OCR on purpose), it is a lot worse – patches of the pixel data are randomly replaced in a very subtle and dangerous way: The scanned images look correct at first glance, even though numbers may actually be incorrect. Without a fuss, this may cause scenarios like:

... wedding parties being hit by drone strike?


But they shot AKs in the air as celebration! Only a mistake that kills an entire family tree.


Because any technological advance is going to be countered. Then at some point, we have drones killing drones.

Currently, military numbers are limited by manpower. And military units (tanks, say) are to some degree limited by money. You don't want to pour too much money into a single tank, because you then put all your eggs in a few baskets. And you also don't want to put a significant part of your troops in badly armored tanks with good firepower, because then you will run out of tankers in no time. Currently the "good balance" is 10,000 M1A2 Abrams at 6.2 million dollars apiece. Those would need 40,000 tankers, which is a tolerable amount of crew. Training them costs, and will continue to cost, a lot of money. The combined price of the thing is 62 billion dollars of taxpayer money, spread over several decades. Increasing the price of a tank to 10 million would be stupid: it's still vulnerable to certain dangers, but now you lose 4 million more in a single blast.

Now let's imagine a fully autonomous tank fleet of M1A9 Abrams somewhere around the year 2050. You are no longer limited by tankers; you are only limited by price. And since you lack a crew, you need less armor. Let's say the unit price is 4 million dollars. Now 10,000 units is only 40 billion, and zero tankers.
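
A quick back-of-the-envelope sketch in Python of that comparison, using the same hypothetical figures (the fleet sizes, unit prices and crew counts above are illustrative numbers for this scenario, not real procurement data):

    # Rough fleet-cost comparison; all figures are the hypothetical
    # numbers from the scenario above, not real procurement data.
    def fleet_cost(units, unit_price_musd, crew_per_unit):
        """Return (total cost in $B, total crew) for a tank fleet."""
        return units * unit_price_musd / 1000.0, units * crew_per_unit

    crewed_cost, crewed_crew = fleet_cost(10_000, 6.2, 4)  # M1A2-style crewed fleet
    auto_cost, auto_crew = fleet_cost(10_000, 4.0, 0)      # hypothetical autonomous fleet

    print(f"Crewed:     ${crewed_cost:.0f}B, {crewed_crew:,} tankers")  # $62B, 40,000 tankers
    print(f"Autonomous: ${auto_cost:.0f}B, {auto_crew:,} tankers")      # $40B, 0 tankers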

Then Russia makes an autonomous tank, the "T17 Kursk", and produces 15,000 of them to sell to China. (Or something else; I'm not a fiction writer.) They are a bit crap, but as trained tankmen are not required, they can easily produce higher numbers to compete with the U.S.

But the U.S. is not happy with second place, so it gets another 10,000 Abrams.

But China is not giving up; they order another 15,000 Kursks.

And on and on, until the tax rates of both China and the U.S. reach 70% and all of it is poured into "defense". China, India, Brazil and Russia are all likely to beat the U.S. in economic growth rate in the future, because it's easier when you start from the bottom. And they are all geopolitically ambitious. We are headed towards a multipolar world, and that means arms races again.

TL;DR: Autonomous killing robots remove the limits on the arms race. This is "destabilizing" and draining.


I'd argue that China, Russia and the US have already passed this point w.r.t. nuclear weapons: every side is sufficiently armed that a war between any of them cannot be won. As soon as you start winning in a conventional scenario, the other side might as well use nuclear weapons, just to ensure you'll lose too. In short, a war between nuclear powers makes no sense, since the "victory condition" has been removed.

I'd assume that this is the same in asymmetric conflicts: if you are fighting a guerrilla war, you "win" when you kill some of your enemy's troops and demoralize them. However, if the enemy sends only machines, then you cannot win as long as the enemy is capable of producing more of these autonomous weapons than you can destroy. As you've said, the cost savings do enable that. Hence the guerilla side also loses its victory condition. This is a scenario I'd actually like to see, since it might put an effective end to the kind of fighting we've seen in the middle east.

Your argument is of course correct when we consider two countries which are both capable of producing such weapons and neither side has a nuclear weapon. Saudi Arabia and Iran are candidates which could plausibly fall into this kind of arms race, and that would indeed make the situation there even worse.


"Hence the guerilla side also loses its victory condition. This is a scenario I'd actually like to see, since it might put an effective end to the kind of fighting we've seen in the middle east."

As a Finn, this is exactly the kind of stuff I don't want to see. Russia can win a conventional war against Finland any day; if Finland were absolutely certain to also lose the ensuing guerrilla war, we would lose our independence without fighting. Maybe not de jure independence, but practically, yes.

Nukes are unusable, as you said. That did not prevent the conventional arms race and its use in proxy wars during the Cold War, and it's unlikely it would do so in the future. It might even make the whole situation worse, as now you can supply the "mujahideen" with much more potent weapons. And you need less popular support to maintain similar levels of guerrilla activity.

Damn, I hope I'm wrong on that. A possible counter could be autonomous area denial weapons. If they get better than autonomous assault weapons, we might actually see a more peaceful world in the fashion of Kant(1). It might mean that the Taliban rule Afghanistan again. But it would also mean that Russia can't beat Georgia, Egypt can't beat Israel and Iran can't beat Iraq. A Westphalian utopia. :D

[1] https://www.mtholyoke.edu/acad/intrel/kant/kant1.htm It's a really awkwardly worded essay, but the point is that if everybody defends and nobody attacks, we can get a resilient world peace. With the downside of occasional minor conflicts, but also with no "world government". If you start to think about electing the president of a world government, the whole idea seems pretty obviously oppressive and potentially very totalitarian.


Just a worthwhile note: guerrilla warfare is far from the only option. There are all sorts of techniques available to overthrow dictators and other occupiers.

A good example of this in real life is what Denmark ended up doing when occupied by the Nazis - non-compliance, plain refusal to cooperate with the Final Solution, and eventually even convincing the Nazis who were based in Denmark to sabotage their own country's plans.

Read more here: https://en.m.wikipedia.org/wiki/Rescue_of_the_Danish_Jews

And I also recommend the book "From Dictatorship to Democracy" by Gene Sharp which outlines many principles and techniques for non-violent resistance.


> we might actually see a more peaceful world in the fashion of Kant(1).

I think we would just have nutters bombing us from the skies.


I should point out that if you are a perfectly immortal government that takes a strictly instrumental view of your population, then the value of a typical first-world citizen's life is going to be about $5 million. The government can't extract all that value from its soldiers, but if it's being smart it should still count the long-term productivity forgone when they die as much of a cost as the cost of training them.

Of course that only enhances your point.


Arguably most conflicts are caused by people with much lower-level concerns. Two opposing governments launching a drone war between them because some civilians are disgruntled at some other civilians, while those angry people watch from the sidelines, sounds very theatrical compared with governments lacking the political will or interest to stop some rebels from grabbing pointy sticks and fighting each other. Sort of how we have lots of countries with air forces, but current wars don't involve air-to-air combat. People smart enough to use the technology don't seem to have much taste for fighting other smart people.


Two opposing countries launching an ICBM war never happened either, but that didn't stop people from pouring incredible amounts of money into it, until a certain saturation point was reached. Military spending is not determined so much by war-fighting as by capabilities and threats.


You do know that MBTs require a lot of day-to-day maintenance by the crew to keep them going; that's one reason why they still have 4-man crews.


Yes, I do. It's an oversimplified scenario to drive home the principle.

Money and perceived threat are big factors in the equation. But anyhow, there is currently a limit on how much money you can throw at a field army before you hit deeply diminishing returns, and the number of enlisted limits how many field armies you can field. Robotics can push this limitation much further.


This is a short-term issue, which is widely known; autonomous self-healing support and supportable units are coming.


Hah, I suppose you think we have self-healing cars.


It appears that you don't understand how auto companies make money. Hint: it's not by selling cars.


Please enlighten me with more information regarding this scam that even Tesla hasn't expunged.


The auto industry makes most of its money from enabling auto ownership, not from selling the autos themselves. For example: http://www.forbes.com/sites/jimhenry/2012/02/29/the-surprisi...

Believe what you want, but comparing the auto industry to military systems is meaningless in my opinion.


Thank you I was not aware of this.


Right... so you're going to have full human-level AI robots that can replace humans exactly when?


Hardware is limited by resources, which are neither limitless nor instantaneous. That said, I agree the cost of war will rapidly increase, as will the desire to control resources; China already sees this, though they're not good at fighting economic wars.


People are not the limiting factor on the number of tanks. See: the rest of the military. This is an implausible scenario.


It was a simplification. But yes, the real shit hits the fan when infantry can be replaced with robotics. See "combined arms approach".


> Robots do not act cruelly unless specifically instructed.

I agree that in theory, killbots fighting killbots could lead to very clean wars. The problem is that the least powerful party will quickly resort to 'kill and torture as many civilians as possible' as a means of revenge, psychological warfare, terrorism, etc.

The ethics of robots will be the ethics of humans. More powerful and effective killing machines will amplify the behavior of immoral humans.


Do you expect the Pentagon to be the least powerful party? Do you expect other parties to stay away from autonomous military robots if the US does?


Just to point out the elephant in the room and put the real problems up for discussion:

* Autonomous weapons subvert the basis of Democracy, making autocracies sustainable.

* Bugs in autonomous weapons can make them much more dangerous, to both sides, even in peacetime.


Starting to think HN is full of sock puppets. Here's a downside: robots don't care about killing the population that paid to create them. Basically, every single claim you're making is false or is easily spun to be a negative.


The belief that people who don't share your opinions must be under someone's control is a gross dehumanization, and not a terribly helpful way to begin a contribution.


Understood, fair enough.


It's remarkable that you had to preface your argument with "Just playing the devil's advocate here". It appears that the fear of AI taking life-and-death decisions is so great that merely making an argument in favor of it is seen as immoral. Doubly fascinating is the fact that this attitude has taken root on HN of all places.


The more deterrents to war we have, the better. Lowering its perceived risks seems dangerous and might in fact make waging war more likely. You need skin in the game to make you think twice about waging it. I worry the actual human costs could be much higher if humans are removed entirely from one side of the fight.


Your logic implies that the other war participants play by the same standards. I doubt that there will be a standardization or approval process for the operating software across all armies to ensure that the robots behave with some "moral limitations".


I think the better question is how do we stop war?

I think too often the casualties of war get dismissed as collateral damage. I'm for requiring politicians who vote for war to send their offspring into battle. Put some skin in the game, so to speak...


"These are hard questions, and a lot of people outside of us tech guys are thinking about it." It seems to me there is an easy answer. Building Robots that can kill people without supervision is currently a terrible idea. AI, or whatever you would like to call it, should not be allowed to take a humans being's life. We make our law enforcement officers pass psych exams before they are put in situations where they need to kill someone. Until a robot can pass the same exam, I'm not comfortable with autonomous killing machines.


We've had autonomous killing machines since WW2, and nobody seemed to care until recently when everything got reframed in a scary Skynet/Terminator way.

Most people seem to get their knowledge on the subject more from movies than reality.


...and we've already had an instance of an autonomous killing machine gone wild - http://www.wired.com/2007/10/robot-cannon-ki/


Has anything like this happened to an actual superpower?


>We've had autonomous killing machines since WW2

What are you referring to?

The only examples I know of involve using animals to do the targeting. But all of those failed, as animals can't tell friend from foe. https://en.wikipedia.org/wiki/Military_animal#As_living_bomb...

I'd like to note that a human being deciding when and where an attack should happen never counts as "autonomous". Otherwise we would have to count simple traps as "autonomous" weapons and the whole term becomes redundant. You could claim that any bullet shot while aiming through night vision goggles was "autonomous" because the shooter just set the time and direction, but didn't really see who he was shooting at.

Second thought: you could claim that the Japanese balloon firebombs and British balloon troops were using "autonomous" weapons. Then you would need to claim that changes in the wind were the "AI", and strictly speaking that might be the correct term. But then it's pretty easy to see why the current hype about computerized "AI" has taken off in a different way.


https://en.wikipedia.org/wiki/Sentry_gun

Such as:

https://en.wikipedia.org/wiki/Samsung_SGR-A1

Quote:

"identify and shoot a target automatically from over two miles (3.2 km) away."

There are some scary videos of it on the internet.


Notice the US does not use such a thing along its border with Mexico, and the SGR-A1 is less than a decade old; WW2 was over 70 years ago...


Acoustic homing torpedoes.


Currently anything "homing" is launched at a target by a human. The fact that it can correct its course does not make it "autonomous" in the sense the word is currently used. It's about differentiated kill decisions: "could have killed that, but didn't".

If we go to historical precedents, probably the Russian swarming anti-ship missiles are the cut-off point. They can discriminate between the carrier and its escorts, so an individual missile will make an "I will not kill that" decision about an individual ship. https://en.wikipedia.org/wiki/P-500_Bazalt


My understanding of aiming mechanisms, especially w.r.t. airborne missiles and drones, is that the pilot "firing" is merely giving the go-ahead for launch and the targeting mechanism takes over from there, e.g. a heat-seeking missile.


Sadly, I am sure we could program one to easily pass that test.

Machines killing without oversight is a terrible idea regardless of the # of tests they can pass.


You could say the same about people. I'm not sure why we treat machines differently. If we can build robots that discriminate between people we're okay with killing ("enemies") and innocents about as well as human soldiers do, where is the problem? I don't think we can currently do that, but that might change in the future.


Anti-personnel mines are autonomous weapons. Their target selection heuristic is very simple: "anyone walking over me must be an enemy combatant", but they are still an automated weapon.


So, you're against autonomous driving vehicles too?


Is there any argument against autonomous guns, other than that they might go wild and cause a lot of unexpected deaths?

We shouldn't mind them sometimes making mistakes and killing innocent people because human soldiers already do that often enough. Perhaps the robots will be slightly more accurate than humans, and that's surely good for everybody except their enemies.

Maybe the real question we should be asking is why humans are allowed to kill people. When two countries fight each other, soldiers on both sides somehow decide it's OK to kill the other. They can't both be right, so in any war, thousands of flesh-and-bone killing machines (all the soldiers on the "wrong" side) effectively go wild and try to kill lots of inappropriate people. That shows that human decision-making is extremely poor, and that humans can even unanimously agree on the same lethal wrong decision.


"Is there any argument against autonomous guns, other than that they might go wild and cause a lot of unexpected deaths?"

Yes: that a robot always obeys orders. In my opinion that is the most dangerous thing about this issue.

Anyway, it's only a philosophical question, because this is going to happen. "We" can talk about it, but "we" are not deciding anything.

"They can't both be right"

No, but they can both be wrong; that is what normally happens.


Humans are not much better than robots at disobeying orders, as demonstrated in the Milgram experiments.


Milgram conducted 23 different kinds of experiments, each with a different scenario, script and actors. This patchwork of experimental conditions, each conducted with a sample of only 20 or 40 participants, yielded rates of obedience that varied from 0% to 92.5%, with an average of 43%. Contrary to received opinion, a majority of Milgram’s participants disobeyed. [0]

What's the obedience rate of a robot?

[0] http://theconversation.com/revisiting-milgrams-shocking-obed...


They don't need to go wild to cause more civilian deaths than soldiers would.

There is much less risk sending a robot to kill than there is sending a human. Humans fear death. Robots do not.

Contractors claim that drones are accurate, but that can never guarantee that the human controllers are accurate. Humans controlling robots may err on the side of killing too many rather than too few, to a greater degree than soldiers would.

So yeah I think there are some reasonable arguments against them. Jet fighters and bombers are somewhat limited by personnel and higher equipment costs.

Drones do not require as much training to operate and will not have as much of a personnel constraint in the future. They could be as dangerous as or worse than nuclear weapons for a country being attacked by them. Rather than wiping out a population, drones could create slaves.


Autonomous killing machines are a natural progression of military technology. They will certainly be created by someone at some point in the future. Thus, our military should definitely be researching this topic, and that's all that they are doing now.


Is it just me, or does this seem like an Onion headline? It's the juxtaposition of the very serious "autonomous killing machines" with the casual "deserve a look", I think. Like they might have autonomous killing machines for lunch if that was offered as the special.


Can't wait until a dictator uses autonomous killing machines to, you know, just take over the world.


Everything the DoD says or does is partially American propaganda. (I say "propaganda" because "information war" is American propaganda.) And we are dealing with a U.S. hegemon. It all started with the American Revolutionary War and continued as "democratization". The U.S. was based on the idea of liberating people from monarchs, and as the U.S. set out to do that, the DoD/CIA got really good at meddling in other countries' business. Now the original idea has been abandoned, democratization might not do any good anymore, even the Cold War is over, but the Yankees still keep "power projecting". Institutions tend to keep on doing what they are good at; if a powerful institution loses its purpose, it invents a new one, because there is a certain "survival of the fittest" even when dealing with taxpayer money. And ultimately I'm grateful for this as a whole, because the U.S. set a model of a democratic free-market society that has been copied and improved around the world.

Back to propaganda. The DoD is scared that other countries will increasingly use autonomous weapons in area denial missions. This would be really detrimental to "power projection". For example, you don't want the Chinese to use swaths of cheap autonomous boats to occupy the South China Sea; the U.S. carrier fleet is way too expensive to deal with such saturation, and a missile may cost more than an entire boat. Another nasty possibility is cheap unmanned Cessnas flying around with short-range infrared-seeking missiles. That would make a SEAD mission a lot more demanding, as you need to kill the Cessnas first and thereby notify everybody that "we are coming".

The U.S. is the technology leader. Whatever Lockheed Martin does is copied around the world; whatever they don't do gets significantly fewer eyeballs. The DoD uses this as an advantage. The most recent example is infrared search and track (IRST). The USAF was lacking in this respect until just recently, because the USAF has radar stealth. Putting money into IRST in the late '80s would have been a smart move for anybody else, but as the DoD didn't do it, herd mentality went with radar. And the USAF has stealth; nobody else does.

Now the U.S. employs autonomous weapons in assault missions in ground war. If other countries copy that tech, it does hardly anything to U.S. assets in the air or at sea. But for a "short while" the U.S. gets diminished casualties and even more frightening weapons. If people then find autonomous weapons too inhumane in the attack role of ground war, that might result in an international ban on all autonomous weapons. Win-win.

We have precedent: the DoD teaching Cambodian guerrillas to set up mines -> the Ottawa Treaty. I'm not saying the U.S. did that on purpose, but I'm pretty sure DoD military analysts learned from that experience.


I think you brought up a better point. Invading countries are going to have a hell of a time if sentries are set up. Booby traps and mines are one thing, but autonomous sentries are, well, very hard to remove without risking harm.


In "The Golden Oecumene" trilogy (one of the densest and greatest looks at a possible real future I've ever read), the world is run by a collection of benevolent super-intelligence AIs. One of the themes in the book is the notion of hyper-extending current technology trends and exploring what society will be like under such circumstances.

In the series, the take on the military is fascinating. Extending out the notions of precision and limiting collateral damage, orbital weapons are able to take out specific neural connections in targets and cause them to change their thinking process entirely. While the weapons in this future picture are large automated robots and AIs of a sort, the actual operator is human. In fact, a single human is all that remains (and is needed) to provide all the judgment and justification required to take action. The books still posit that a human in the loop is necessary.

Great and very challenging books https://en.wikipedia.org/wiki/The_Golden_Oecumene


If anyone believes using autonomous killing machines is inherently evil and no doubt beset with Unintended Consequences, I invite you to donate to the "Campaign to Stop Killer Robots" and hopefully make autonomous killing machines illegal under international law. http://www.stopkillerrobots.org/


This is inevitable.

Remote-controlled drones are prone to jamming, hacking and the human factor.

Whoever deploys autonomous warfare systems first will have an advantage in the battlespace.


The first use will be for border security as in Babylon A.D.:

https://youtu.be/MQNUvaPi4Qk?t=42m15s

A lot of border. Fast response needed. Whole families trying to make it across get harmed. Much of it can operate with little human input. A perfect thing for militarist governments to delegate to killing machines instead of people, with the side benefit of dodging some responsibility for bad choices by blaming them on machine logic.


Wow, surprise. No sarcasm: to me, this means they already have one and believe the public will support its release.


I'm surprised there isn't currently a "personal air support drone" that can fit into a troop's backpack and is little more than a flying grenade (with a video stream that can go around corners).

Such a device could be made from off-the-shelf components and would help protect against snipers.



No thank you. I like my earth earth-y and not scorched Armageddon style.

Hubris!


This is how it begun, robot historians will write.


If there are any left


And if they lost the knowledge of English grammar.


They're kind of like smart land mines: "we'll kill people when you're not around, except this time we promise we'll try to only kill bad guys, pinky promise."


Daleks!


The Daleks are not machines; there is a living thing inside of the shell.



