Because that will probably sound reasonable to most people.
"Hey, hey, hey now. We're not sending unmanned terminators out to battle to search and destroy any living thing.
This is our cute little droney-drone drone. We call him Charlie the Unicorn. Now if the bad baddy bad terrorists do something bad to Charlie and make it so he can't receive orders, don't you want Charlie to be able to protect himself?"
And the robot apocalypse begins. Not with an army of soul-crushing T-1000s, but with an M-79 grenade launcher dressed up as a puppy.
My guess is that by 2035, we will be mostly okay with it though. Putting AI in charge of driving cars is, by definition, letting AI make life/death choices. If we're okay with that, we are inherently accepting that these ethical situations can and will be accurately modeled.
If you can tell a car how to make a decision between hitting an old lady in the street vs. crashing into a tree and killing the passenger and get that right enough to please the public, I'm sure we can figure out how to tell a robot in a battle situation which person to shoot . . . well enough to please the public.
They could use the AI for real-time analysis of video feeds from helicopters etc. A highly publicized incident comes to mind where a cameraman was mistakenly gunned down when the human operators thought he was wielding an RPG, while it was likely a camera he was holding. Could AI have detected that?
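To that point: the pipeline shape is easy to sketch with today's off-the-shelf tooling, though a stock COCO detector knows nothing about weapons, and the frame filename and the 0.9 threshold below are made-up placeholders. Reliably telling an RPG from a shoulder-mounted camera is exactly the unsolved part.

```python
# Toy sketch: run a frame from the video feed through an off-the-shelf
# detector and flag low-confidence identifications for human review
# instead of acting on them. COCO has no weapon classes; a real system
# would need purpose-built training data.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("frame.jpg")  # hypothetical still from the feed
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    # Anything below the (made-up) 0.9 threshold goes to a human.
    verdict = "report" if score > 0.9 else "FLAG: ambiguous, hold fire"
    print(f"{categories[label]}: {score:.2f} -> {verdict}")
```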
It's Dr. Strangelove all over again. (Dr. Strangelove was about a crazy general who launched a nuclear attack, using a provision that allowed him to do so upon losing contact with Washington.)
It's an interesting word choice: "accurate ethical modeling". Given certain "ethical priors", what we do depends on what we have encountered in life. Soldiers are humans with siblings, parents, wives, pets, homes, etc., which lets them learn what they ought to do across a wide range of situations. If we start to train robots only in the art of killing people, or even only in de-escalating dangerous situations, that is a very impoverished dataset.
I would never want an AI that is able to kill people, but that I don't trust to hold my baby.
I have not seen any indication that this is a solved problem and a major reason why I'm against autonomous cars as they stand today. I'm extremely concerned that in the rush to get to self-driving cars that issues like this are swept aside in the typical Valley attitude of asking for forgiveness later.
To be clear, I would love an autonomous commute vehicle that lets me work on the way to work, but only if it's safe. At the same time, I love driving for pleasure and do not want a future where that's out of the question. It's a big dilemma.
Except for the part about it being good enough for the public to go with it.
* No soldiers of the side deploying armed drones need to be in danger.
* Robots do not act cruelly unless specifically instructed. Humans do have a pretty bad record of unnecessary cruelty due to racism, boredom, intoxication etc.
* If you log all decision-making processes, you can justify each action taken. It's like bodycams for police officers. If you made all logs public after the war, it would be possible to identify war crimes and who gave the order (unlike now). A rough sketch of such a log is below.
* Nothing requires a machine to use lethal force all the time. Systems whose primary purpose is capturing combatants become an option when there's no risk of losing one of your own soldiers.
* Your robot's priority doesn't have to be self-preservation (unlike a human soldier's). When in doubt, don't shoot; analyze the situation.
All in all, if the military used these autonomous killing machines with the intent of minimizing civilian casualties, would this not be an improvement over the current state of warfare? Of course it won't make war per se less likely, but not using these machines doesn't either. So why not use them?
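On the logging point, here is a minimal sketch of what a tamper-evident decision log could look like (the field names and the hash-chaining scheme are illustrative assumptions on my part, not any real military standard):

```python
# Minimal sketch of a tamper-evident decision log. Each entry records
# who/what authorized an action and is chained to the previous entry's
# hash, so deleting or editing a record breaks the chain.
import hashlib, json, time

log = []

def record_decision(actor, order_id, action, rationale):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "time": time.time(),
        "actor": actor,          # which unit made the call
        "order_id": order_id,    # which human-issued order it traces back to
        "action": action,
        "rationale": rationale,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute the chain; returns False if any entry was altered or removed."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

record_decision("drone-17", "order-042", "hold_fire",
                "target confidence 0.31 below threshold")
assert verify(log)
```

Because each entry commits to the hash of the previous one, quietly deleting or editing a record afterwards breaks verification, which is what would make public post-war audits credible.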
Well, sure, but have you used computers recently? It's often a fucking awful experience and while the computer clearly isn't trying to make your life frustrating that's often the result.
PC LOAD LETTER
Do we really want the people who make printers to make autonomous killing machines?
> In this article I present in which way scanners / copiers of the Xerox WorkCentre Line randomly alter written numbers in pages that are scanned. This is not an OCR problem (as we switched off OCR on purpose), it is a lot worse – patches of the pixel data are randomly replaced in a very subtle and dangerous way: The scanned images look correct at first glance, even though numbers may actually be incorrect. Without a fuss, this may cause scenarios like:
... wedding parties being hit by drone strike?
Currently military numbers are limited by manpower. And military units (like, say, tanks) are to some degree limited by money. You don't want to pour too much money into a single tank, because you then put all your eggs in a few baskets. And you also don't want to put a significant part of your troops in badly armored tanks with good firepower, because then you will run out of tankers in no time. Currently the "good balance" is 10,000 M1A2 Abrams that cost $6.2 million apiece. Those would need 40,000 tankers, which is a tolerable amount of crew. Training them costs, and will keep costing, lots of money. The combined price of the thing is $62 billion of taxpayer money, distributed over several decades. Increasing the price of a tank to $10 million would be stupid: it's still vulnerable to certain dangers, but you now lose $4 million more in a single blast.
Now let's imagine a fully autonomous tank fleet of M1A9 Abrams somewhere around the year 2050. You are no longer limited by tankers; you are only limited by price. And as you lack a crew, you need less armor. Let's say we have a unit price of $4 million. Now 10,000 units is only $40 billion and zero tankers.
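To make the arithmetic explicit (the unit prices and crew sizes are my rough assumptions from above, not real procurement data):

```python
# Back-of-the-envelope fleet cost comparison; all figures are guesses.
def fleet_cost(units, unit_price_musd, crew_per_unit):
    """Return (total cost in $B, total crew required)."""
    return units * unit_price_musd / 1000, units * crew_per_unit

manned_cost, manned_crew = fleet_cost(10_000, 6.2, 4)  # M1A2, 4 tankers each
auto_cost, auto_crew = fleet_cost(10_000, 4.0, 0)      # hypothetical M1A9

print(f"Manned fleet:     ${manned_cost:.0f}B, {manned_crew:,} tankers")
print(f"Autonomous fleet: ${auto_cost:.0f}B, {auto_crew:,} tankers")
# Manned fleet:     $62B, 40,000 tankers
# Autonomous fleet: $40B, 0 tankers
```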
Then Russia makes an autonomous tank, the "T17 Kursk", and produces 15,000 of them to sell to China. (Or something else; I'm not a fiction writer.) They are a bit crap, but as trained tankers are not required, Russia can easily produce higher numbers to compete with the U.S.
But the U.S. is not happy with second place, so the U.S. buys another 10,000 Abrams.
But China is not giving up; they order another 15,000 Kursks.
And on and on, until the tax rates of both China and the U.S. reach 70% and all of it is poured into "defense". China, India, Brazil and Russia are all likely to beat the U.S. in economic growth rate in the future, because growing is easier if you start from the bottom. And they are all geopolitically ambitious. We are headed towards a multipolar world, and that means arms races again.
TL;DR: Autonomous killing robots remove the limitations of an arms race. This is "destabilizing" and draining.
I'd assume that this is the same in asymmetric conflicts: if you are fighting a guerrilla war, you "win" when you kill some of your enemy's troops and demoralize them. However, if the enemy sends only machines, then you cannot win as long as the enemy is capable of producing these autonomous weapons faster than you can destroy them. As you've said, the cost savings do enable that. Hence the guerrilla side also loses its victory condition. This is a scenario I'd actually like to see, since it might put an effective end to the kind of fighting we've seen in the Middle East.
Your argument is of course correct when we consider two countries which are both capable of producing such weapons and neither side has a nuclear weapon. Saudi Arabia and Iran are candidates which could probably fall into this kind of arms race, and that would indeed make the situation there even worse.
As a Finn, this is exactly the kind of stuff I don't want to see. Russia can win a conventional war against Finland any day; if Finland were also certain to lose the ensuing guerrilla war, we would lose our independence without fighting. Maybe not de jure independence, but practically yes.
Nukes are unusable, as you said. That did not prevent the conventional arms race, nor its use in proxy wars during the Cold War, and it's unlikely to do so in the future. It might even make the whole situation worse, as now you can supply the "mujahideen" with so much more potent weapons, and you need less popular support to maintain similar levels of guerrilla activity.
Damn, I hope I'm wrong on that. The possible counter could be autonomous area-denial weapons. If they get better than autonomous assault weapons, we might actually see a more peaceful world in the fashion of Kant (1). It might mean that the Taliban might rule Afghanistan again. But it would also mean that Russia can't beat Georgia, Egypt can't beat Israel and Iran can't beat Iraq. A Westphalian utopia. :D
It's a really awkwardly worded essay, but the point is that if everybody defends and nobody attacks, we can get a resilient world peace, with the downside of occasional minor conflicts but also with no "world government". If you start to think about electing the president of a world government, the whole idea seems pretty obviously oppressive and potentially very totalitarian.
A good example of this in real life is what Denmark ended up doing when occupied by the Nazis - non-compliance, plain refusal to cooperate with the Final Solution, and eventually even convincing the Nazis who were based in Denmark to sabotage their own country's plans.
Read more here: https://en.m.wikipedia.org/wiki/Rescue_of_the_Danish_Jews
And I also recommend the book "From Dictatorship to Democracy" by Gene Sharp which outlines many principles and techniques for non-violent resistance.
I think we would just have nutters bombing us from the skies.
Of course that only enhances your point.
Money and perceived threat are big factors in the equation. But anyhow, there is currently a limit on how much money you can throw at a field army before you hit deeply diminishing returns, and the number of enlisted limits how many field armies you can field. Robotics can push this limitation way further.
Believe what you want, but comparing the auto industry to military systems is meaningless in my opinion.
I agree that in theory, killbots fighting killbots could lead to very clean wars. The problem is that the least powerful party will quickly resort to 'kill and torture as many civilians as possible' as a means of revenge, psychological warfare, terrorism, etc.
The ethics of robots will be the ethics of humans. More powerful and effective killing machines will amplify the behavior of immoral humans.
* Autonomous weapons subvert the basis of democracy, making autocracies sustainable.
* Bugs in autonomous weapons can make them much more dangerous, to both sides, even in peacetime.
I think too often the casualties of war get dismissed as collateral damage. I'm for requiring politicians who vote for war to send their offspring into battle. Put some skin in the game, so to speak...
Most people seem to get their knowledge on the subject more from movies than reality.
What are you referring to?
The only examples I know involve using animals to do the targeting. But all such attempts failed, as animals can't tell friend from foe. https://en.wikipedia.org/wiki/Military_animal#As_living_bomb...
I'd like to note that a human being deciding when and where an attack should happen never counts as "autonomous". Otherwise we would have to count simple traps as "autonomous" weapons, and the whole term would become redundant. You could claim that any bullet shot while aiming through night-vision goggles was "autonomous", because the shooter just set the time and direction but didn't really see who he was shooting at.
Second thought: you could claim that Japanese balloon firebombs and British balloon troops were using "autonomous" weapons. Then you would need to claim that changes in the wind were the "AI". And strictly speaking, that might be the correct term. But then it's pretty easy to see why the current hype about computerized "AI" has taken off in a different way.
"identify and shoot a target automatically from over two miles (3.2 km) away."
There are some scary videos of it on the internet.
If we go looking for historical precedents, Russian swarming anti-ship missiles are probably the cut-off point. They can discriminate between a carrier and its escorts, so an individual missile will make an "I will not kill that" decision about an individual ship. https://en.wikipedia.org/wiki/P-500_Bazalt
Machines killing without oversight is a terrible idea regardless of the # of tests they can pass.
We shouldn't mind them sometimes making mistakes and killing innocent people because human soldiers already do that often enough. Perhaps the robots will be slightly more accurate than humans, and that's surely good for everybody except their enemies.
Maybe the real question we should be asking is why humans are allowed to kill people. When two countries fight each other, soldiers on both sides somehow decide it's OK to kill the other. They can't both be right, so in any war, thousands of flesh-and-bones killing machines (all the soldiers on the "wrong" side) effectively go wild and try to kill lots of inappropriate people. That shows that human decision making is extremely poor, and that humans can even unanimously agree on the same lethal wrong decision.
Yes, that a robot always obeys orders. In my opinion that is the most dangerous thing about this issue.
Anyway, it's only a philosophical question, because this is going to happen. "We" can talk about it, but "we" are not deciding anything.
"They can't both be right"
No, but they can both be wrong; that is what normally happens.
What's the obedience rate of a robot?
There is much less risk sending a robot to kill than there is sending a human. Humans fear death. Robots do not.
Contractors claim that drones are accurate, but that can never guarantee that the human controllers are accurate. Humans controlling robots may err on the side of killing too many rather than too few, to a greater degree than soldiers on the ground would.
So yeah I think there are some reasonable arguments against them. Jet fighters and bombers are somewhat limited by personnel and higher equipment costs.
Drones do not require as much training to operate and will not have as much of a personnel constraint in the future. They could be as dangerous as, or worse than, nuclear weapons for a country being attacked by them. Rather than wiping out a population, drones could create slaves.
Back to propaganda. The DoD is scared that other countries will increasingly use autonomous weapons in area-denial missions. This would be really detrimental to "power projection". For example, you don't want the Chinese to use swarms of cheap autonomous boats to occupy the South China Sea; the U.S. carrier fleet is way too expensive to deal with such saturation, and a missile may cost more than an entire boat. Another nasty possibility is cheap unmanned Cessnas flying around with short-range infrared-seeking missiles. That would make SEAD missions a lot more demanding, as you need to kill the Cessnas first and thereby notify everybody that "we are coming".
The U.S. is the technology leader. Whatever Lockheed Martin does gets copied around the world; whatever they don't do gets significantly fewer eyeballs. The DoD uses this as an advantage. The most recent example is infrared search and track (IRST). The USAF was lacking in this respect until just recently, because the USAF has radar stealth. Putting money into IRST in the late '80s would have been a smart move for anybody else, but as the DoD didn't do it, herd mentality went with radar. And the USAF has stealth; nobody else does.
Now the U.S. employs autonomous weapons in assault missions in ground war. If other countries copy that tech, it does hardly anything to U.S. assets in the air or at sea, but for a "short while" the U.S. gets diminished casualties and even more frightening weapons. If people then find autonomous weapons too inhumane in the attack role of a ground war, that might result in an international ban on all autonomous weapons. Win-win.
We have precedent: the DoD teaching Cambodian guerrillas to set up mines -> the Ottawa Treaty. I'm not saying the U.S. did that on purpose, but I'm pretty sure DoD military analysts learned from that experience.
In the series, the take on the military is fascinating. Extending out the notions of precision and limiting collateral damage, orbital weapons are able to take out specific neural connections in targets and cause them to change their thinking process entirely. While the weapons in this picture of the future are large automated robots and AIs of a type, the actual operator is human. In fact, a single human is all that remains (and is needed) to provide all the judgment and justification required to take action. But the books still posit that a human-in-the-loop is necessary.
Great and very challenging books https://en.wikipedia.org/wiki/The_Golden_Oecumene
Remote-controlled drones are prone to jamming, hacking and human error.
Whoever deploys autonomous warfare systems first will have an advantage in battlespace.
A lot of border, fast response needed, whole families getting hurt trying to make it across, and much of it operating with little human input. A perfect thing for militarist governments to delegate to killing machines instead of people, with the side benefit of dodging some responsibility for bad choices by blaming them on machine logic.
Such a device could be made from off-the-shelf components and help protect against snipers.