No one will use autonomous fire in some backwater proxy war skirmish (which is where superpowers meet) if the response would be the same as it would be to a nuclear first strike.
Are you under the impression that the major powers don't have biological warfare capabilities?
> and that the gloves are off once they are used.
So, China uses autonomous killbots in Tibet to control an uprising. The US "takes its gloves off", and...sends a strongly worded letter? Sends the world economy into a freefall?
Further, China and Russia could do some limited testing, and then stockpile millions of killbots/killdrones. Once the gloves have already "come off", they can then rule the battlefield, since their opponent(s) won't have a similar massive force.
I think the combination of cheap and effective will be impossible for the less scrupulous countries to resist...and possibly the more scrupulous countries as well.
It is also a slippery slope - the US is already working on autonomous anti-ship cruise missiles that search an area and look for the correct ship (think "Chinese aircraft carrier"). That implicitly means the missile will autonomously "decide" which ship containing hundreds or thousands of sailors gets hit. Full autonomy will also be a requirement for any effective UCAS capability.
I expect that regardless of what treaty is signed, the various powers will make sure there are loopholes permitting what they wish to do - or they will simply ignore the treaty. A concrete example of such behavior is Russia's long-term violations of the INF Treaty.
No, but that they are extremely careful about using them, since it's considered an extreme escalation to do so even on a small scale.
> China uses autonomous killbots in Tibet...
Replace “autonomous killbot” with “small tactical nuke” and you see what I’m getting at. They could. But they won’t - because they can accomplish any military goal with conventional means. So why risk the backlash? That’s the kind of threshold I’d like for the deployment of autonomous weapons of this type. And it’s difficult given how easily deniable they are, compared to e.g. bioweapons.
> ...autonomous cruise missiles...
This is an interesting gray area. No one thinks a fuse that ensures a weapon doesn’t blow up a friendly is a bad thing. But we still think of these weapons as being “launched” by a human, even though they can make terminal decisions on their own. I’d draw the line for what’s ethically acceptable at the point where you no longer have a human launch weapons “at” a target but instead launch a weapon to roam.

The cruise missiles are still just clever fuses, they don’t linger waiting to autonomously attack ships. The reason this is a gray area and a slippery slope is of course only down to the short flight time of the missiles. If they could fly for a year (like the experimental Russian nuclear missile) then they’d be useless against predesignated targets - so effectively a roaming killbot. With current cruise missiles, a human still makes a decision to attack, even if it’s only a rough area thought to contain the target. They know what they are trying to attack, and there is a human to take the blame if the weapon subsequently makes a mistake and hits a tanker.
> ...stockpile millions...
Yes. Comparing again to nukes that’s very similar. So would I prefer a world without nukes to one with? Sure. But the best we could do was superpowers stockpiling to ensure mutual destruction. I think that’s the best we can hope for here. I don’t think these weapons won’t be built by the millions, I just hope they will be used reluctantly.
> The cruise missiles are still just clever fuses, they don’t linger waiting to autonomously attack ships.
That is not correct. The cruise missiles will be given an area to search. They will then traverse that area looking for targets. Anything that matches their programming will be hit. I'm sure they're flexible enough to take orders like "attack any destroyer, cruiser, or aircraft carrier". It will be up to the sensors and software to make sure the target isn't a civilian ship, for instance.
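To see how little stands between an order like that and a strike, here is a rough sketch of the kind of target-selection filter being described - the class names, confidence threshold, and structure are all invented for illustration, not anything from a real missile's software:

```python
# Illustrative sketch of the target-selection logic described above; class names
# and the confidence threshold are invented for the sketch, not any real system.
PERMITTED_CLASSES = {"destroyer", "cruiser", "aircraft carrier"}

def select_target(contacts):
    """Pick the first contact matching a permitted warship class, skipping civilians."""
    for ship_class, confidence, looks_civilian in contacts:
        if looks_civilian:
            continue  # the safeguard that falls entirely to sensors and software
        if ship_class in PERMITTED_CLASSES and confidence > 0.9:
            return ship_class
    return None  # nothing matched; keep searching the assigned box

# A tanker misclassified as a destroyer is exactly the failure mode being debated.
print(select_target([("tanker", 0.95, True), ("destroyer", 0.97, False)]))
```

The entire civilian-exclusion safeguard is one conditional on classifier output.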
> I don’t think these weapons won’t be built by the millions, I just hope they will be used reluctantly.
They are already being pitched on the open market by the Chinese:
>> The Blowfish A2 “autonomously performs complex combat missions, including fixed-point timing detection and fixed-range reconnaissance, and targeted precision strikes.”
>> Depending on customer preferences, Chinese military drone manufacturer Ziyan offers to equip Blowfish A2 with either missiles or machine guns.
>> Mr Allen wrote: “Though many current generation drones are primarily remotely operated, Chinese officials generally expect drones and military robotics to feature ever more extensive AI and autonomous capabilities in the future.
>> “Chinese weapons manufacturers already are selling armed drones with significant amounts of combat autonomy.”
>> China is also interested in using AI for military command decision-making.
On the battlefield, keeping a human in the loop will be impossible.
The only difference between a system with a human firebreak and a fully autonomous weapon is the flip of a bit.
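To make that concrete, here is a minimal sketch - hypothetical names, not any real fire-control software - where the entire "firebreak" is one boolean:

```python
from dataclasses import dataclass

# Hypothetical fire-control logic; every name here is illustrative, not any real system.
REQUIRE_HUMAN_CONFIRMATION = True  # the entire "firebreak" is this one bit

@dataclass
class Track:
    classified_hostile: bool  # output of the same targeting software either way

def cleared_to_fire(track: Track, operator_approved: bool) -> bool:
    """Both architectures share this code path; only the flag differs."""
    if REQUIRE_HUMAN_CONFIRMATION and not operator_approved:
        return False                 # human in the loop: hold fire
    return track.classified_hostile  # flag flipped: software alone decides

# With the flag True, nothing fires without an operator; set it False and the
# identical sensor/classifier stack becomes a fully autonomous weapon.
print(cleared_to_fire(Track(classified_hostile=True), operator_approved=False))
```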
Treaties will not be effective at preventing the development of these weapons. Any illusions will be shattered by the first bad actor.
An arms race in this space is inevitable; realistically it has already begun.
Still - using what amounts to a nuclear first strike is a dangerous game to play, so it might at least make superpowers think twice if it's made clear that this is how it's seen. Russia wouldn't try to use a bioweapon or tiny nuke and claim it was conventional, because it's a risky gamble.
IMO the closing window of opportunity - that is, the fleeting opportunity that still exists - would be a culture of intellectual honesty and getting rid of moral double standards rather than goosestepping, so that maybe people in Russia, China and elsewhere might think: hey, maybe we'd actually like to be free, too. But instead, let's just play another round of this game. Remember the Cold War, how giant that was, yet when it ended the military budget just stayed as big? It's a racket.
While I see your point on the ever-present escalation risk in these things, are you sure you're not taking an overly restrictive view of "pull the trigger"? Take long-existing weapons like plain old guided missiles. In one respect it could be argued a computer system is making the kill choice there: after firing, there is a long period where the computer is seeking the target, and the computer is what actually detonates the warhead that makes the kill. But in practice it's hard to argue that a human isn't still "pulling the trigger", making the decision that "this target should be killed"; the level of intelligence the weapon employs to accomplish that order afterwards isn't so relevant. The real comfort factor seems to come from there being inherent limits in terms of time/space (a missile only has so much run time and range) and specificity (a missile is going after a specific designated target).
If we consider the most problematic weapons of modern history, I think the real uniting factor is when they get away from all those limits at once. Landmines can linger a long time and activate indiscriminately without further human intervention. Chemical and bioweapons are also not well controlled in terms of time, space or specificity, despite no AI. Project Pluto, the early Cold War R&D effort by the US military to create a hypersonic nuclear ramjet cruise "missile" (more of an autonomous bomber) that could be launched and then linger flying over the ocean for months (thanks to nuclear reactor power), was abandoned precisely because it was considered far too destabilizing even for then. Despite still being a "missile", the longer time/space/area effect made a difference. Heck, even cluster munitions on conventional missiles, while not considered in the same category as landmines exactly, are still pretty hotly debated.
By the same token, in a future conflict with more advanced NN/ML weapons, isn't there still room for operator decisions? I.e., "here are cryptographically authenticated orders to kill targets matching X profile within Y geographic area for Z time period". It would seem like setting international restrictions on X, Y and Z there (requirements for safety interlocks, and for any given kill authorization to be valid for no more than 20 minutes over no more than 400 km^2, to toss out an example) could be a viable approach. So in the specific minutes and seconds and milliseconds of machine/machine conflict there'd be no disadvantage, but humans would still need to be constantly "involved" at a higher level. Having killbots just roaming around or lying in wait (and potentially lasting after the conflict is totally over, ala landmines) would be banned, but sending robot tanks to attack a specific fortified position on one single day with active strategic direction would not be.
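As a sketch of how such X/Y/Z limits could be enforced in the weapon itself - the field names, the shared-key HMAC signature, and the 20-minute/400 km^2 caps are just the tossed-out numbers from above, not a real command protocol:

```python
import hmac, hashlib, json, time
from dataclasses import dataclass

# Illustrative only: the caps below are the comment's example limits.
MAX_DURATION_S = 20 * 60   # "no more than 20 minutes"
MAX_AREA_KM2 = 400.0       # "no more than 400 km^2"
COMMAND_KEY = b"demo-shared-secret"

@dataclass
class Authorization:
    target_profile: str  # X: what may be engaged
    area_km2: float      # Y: geographic extent of validity
    issued_at: float     # Z: start of validity window (epoch seconds)
    duration_s: float
    signature: str       # HMAC over the other fields

    def payload(self) -> bytes:
        return json.dumps([self.target_profile, self.area_km2,
                           self.issued_at, self.duration_s]).encode()

def is_valid(auth: Authorization, now: float) -> bool:
    """Refuse forged, over-broad, or expired orders by construction."""
    expected = hmac.new(COMMAND_KEY, auth.payload(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, auth.signature):
        return False  # not cryptographically authenticated
    if auth.duration_s > MAX_DURATION_S or auth.area_km2 > MAX_AREA_KM2:
        return False  # exceeds the internationally agreed caps
    return auth.issued_at <= now <= auth.issued_at + auth.duration_s

# Issue a demo order and check it: signed, in bounds, inside its window.
auth = Authorization("hostile armor", 350.0, time.time(), 15 * 60, "")
auth.signature = hmac.new(COMMAND_KEY, auth.payload(), hashlib.sha256).hexdigest()
print(is_valid(auth, time.time()))  # True
```

A weapon built this way can't "roam": once the window lapses it needs a fresh, human-issued order, which is exactly the move-up-the-chain idea.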
Basically moving humanity "up the chain" a bit seems like something countries could practically agree to without feeling massively behind and could be monitored/verified.
Edit: updated to add some comparison to non-AI weapon systems that have been agreed to be restricted with some success.
You can't see software.
Unlike chemical/nuclear warfare, no factories, material transportation, etc. out of the ordinary observable by other countries. Sensors needed for the systems would be entirely justified for other reasons and software to change behavior can be loaded invisibly.
There would be no way to police it.
The real use for chemical weapons is to terrify civilians and poorly organized/equipped militias. Most of all, it's a way to murder a large number of civilians without destroying infrastructure. Needless to say, that's not something you'd normally do in a war, where destroying infrastructure is typically step one! As a result, most countries recognize the economic and social burden of chemical weapons and their use, and strongly oppose them.
See the Vietnam War, napalm, Agent Orange, etc.
Not saying that landmines are paragons of ethical weapon systems, but they are making a kill decision without a human being in the loop, based on nothing more than a pressure plate or trip wire.
If we had landmines that could differentiate between a tank and a school bus wouldn't that be a step forward?
"The ensuing Mine Ban Convention has been joined by three-quarters of the world’s countries. Since its inception more than a decade ago, it has led to a virtual halt in global production of anti-personnel mines, and a drastic reduction in their deployment. More than 40 million stockpiled mines have been destroyed, and assistance has been provided to survivors and populations living in the affected areas."
I think the idea is to nip this one in the bud before it gets there.
I think there is a big argument to be made for their ethicality over landmines: unlike landmines, they are easy to deactivate and can be made highly visible while remaining effective. I don't think this makes them ethical, but I do believe that we have to consider how that changes their potential uses. There's an argument that it's far safer if we have hard rules around certain areas - "Enter a 500m radius around this military base moving more than 5mph and you will be shot many times" - than if we have fuzzy ones around guards and schedules. Certainties are often better than including a fuzzy human in the loop, which is why electrical installers all put their own lock on the switch rather than assigning a coordinator.
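A rough sketch of why the hard rule is attractive: the entire policy reduces to a deterministic, testable predicate (the 500m/5mph numbers are just the example above, not doctrine):

```python
import math

# Deterministic engagement rule from the example above. The 500 m radius and
# 5 mph threshold are the comment's illustrative numbers, not doctrine.
RADIUS_M = 500.0
SPEED_LIMIT_MPS = 5 * 0.44704  # 5 mph converted to metres per second

def rule_triggered(pos_xy, base_xy, speed_mps):
    """True exactly when the posted rule is violated; nothing fuzzy to argue about."""
    distance = math.hypot(pos_xy[0] - base_xy[0], pos_xy[1] - base_xy[1])
    return distance < RADIUS_M and speed_mps > SPEED_LIMIT_MPS

# Walking pace well outside the perimeter: no trigger.
print(rule_triggered((600.0, 0.0), (0.0, 0.0), 1.5))
```

Unlike a guard's judgment, that predicate can be published, tested, and shown to behave identically every time - which is the point of the lock-on-the-switch analogy.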
Not deployed as far as the public knows.
It wouldn't exactly be hard to add a "shoot everything that comes near us" flag to the software of any existing naval CIWS.
Landmine bans in particular are kind of ridiculous because of how easy it is to build and plant IEDs.
They are more effective against the intended target.
My concern is that they will work just fine: this would bring the domestic costs of warfare to near-zero (the cost of the robot weapons). For those of us that are sceptical of military adventurism, just consider how much more willing our politicians would be to invade some random, relatively harmless country if there were no "boots on the ground" and no possibility of body bags coming home.
As an adjunct viewpoint to this - what happens when such weapons are available to regular people?
In theory, that's possible today; witness the number of instructables and other similar projects detailing how to create a nerf or paintball automated tracking gun system. It wouldn't take much to upgrade that to mount it on a decent mobile platform with a real weapon.
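For a sense of how little is involved, here's a toy-level sketch of the tracking loop those hobby turret projects typically use - assuming OpenCV 4 and a webcam, with the pan/tilt servo control those projects use stubbed out as a print:

```python
# Toy sketch of the tracking loop hobby nerf-turret projects use.
# Assumes OpenCV 4 and a webcam; servo control is stubbed out with print().
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()  # motion vs. static background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        target = max(contours, key=cv2.contourArea)  # largest moving blob
        x, y, w, h = cv2.boundingRect(target)
        cx, cy = x + w // 2, y + h // 2
        # Offset from frame centre is what would drive the pan/tilt servos.
        dx = cx - frame.shape[1] // 2
        dy = cy - frame.shape[0] // 2
        print(f"aim offset: pan={dx:+d}px tilt={dy:+d}px")
```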
Right now, the objective or use-case for such a device isn't clear, so we haven't seen criminals build or use them, beyond perhaps drug trafficking via consumer drones and similar.
I'm sure, though, that it is going to be a problem in the future...
I'm also pretty sure that criminal use is going to be rare.
There doesn't seem to be a very large overlap between willingness to commit violent crime and the technical competency to turn off-the-shelf consumer products into weapons. Why do terrorists stab people with knives when they could make explosives from household materials instead? They're clearly violent and unafraid of death. My interpretation: they lack the necessary knowledge to make explosives, aren't capable of self-learning either, and settle for a vastly inferior but readily available knife. Even organized crime seems to use explosives rather rarely, and even then diverted/stolen commercial ones more often than DIY.
Good insight, which I will tack my own thought onto: the kind of person that would be successful at this would probably have the right mindset to work in an engineering job, and people who do that are both earning enough that they have no need to turn to crime, and too busy to be a political or religious extremist (something about idle hands and the devil's work...)
Still, stabbing and other unsophisticated attacks seem to be relatively recent and primarily a feature of Islamic terrorism. If I am correct, then...
> My interpretation: they lack the necessary knowledge to make explosives, aren't capable of self-learning either, and settle for a vastly inferior but readily available knife.
...I have an alternative, perhaps entirely wrong interpretation: Islamic terrorism typically requires that its most skilled and most dedicated operatives kill themselves in the course of their first and only attack. Which is to say, they're running out of good operatives.
Meh, perhaps your original interpretation is better :)
A key check on military abuse in a democracy is that military actions kill some fraction of voters. We have already seen that as that fraction goes down through better soldier protection technology, our willingness to go into battle increases. If we drive that down to near-zero, the implications are frightening.
A further consequence of this is that the fewer human soldiers you have in the field, the fewer embedded journalists you have. And, without those, it is even easier to conduct military actions without any oversight from voters.
Like any weapon, robotic weapons are only a problem if you use them for evil.
And the presence of plenty of artillery pointed at Seoul has prevented anyone who doesn't want to see Seoul leveled as a result of their actions from taking aggressive action against North Korea. Knowing that if you do anything bad you will get whacked hard enough that it's not worth it is a powerful deterrent.
>Like any weapon, robotic weapons are only a problem if you use them for evil.
Because of the inherent defensive bias of the technology (stationary is easier than mobile; defending a known area is easier than entering a new area and figuring out what to shoot), robotic weapons are a bigger problem for those seeking to do or enable evil.
This is a crucial point.
Another point is that most potential adversaries don't value the lives of their fighters as much as we value the lives of our military personnel. If half a dozen American troops die, it's national news. If a hundred die, it's a political scandal. People wonder why the US spends so much more on defense than other countries. That's partially because we're willing to spend lots and lots of dollars to save just a few lives.
And, for that reason, I think the US in particular will eventually be willing to replace even its offensive military capacity with robots. And if it works, that means the US or any other technologically advanced state can, at great expense, continue a counterinsurgency campaign indefinitely.
But, as you point out, any defender who can scrap a robot together will have an advantage in countering that ability. Hell, they might even be able to use open-source software to run them. And that's the good news. Orwell wrote, "...tanks, battleships and bombing planes are inherently tyrannical weapons, while rifles, muskets, long-bows and hand-grenades are inherently democratic weapons. A complex weapon makes the strong stronger, while a simple weapon--so long as there is no answer to it--gives claws to the weak."
Setting aside the moral question, there is also the institutional issue that officers may not like to see pilots replaced with drones, sailors with UUVs, etc. That creates a dangerous situation where the US military can have its manned systems disrupted by cheaper unmanned ones, which I think is dangerous.
As far as the moral issue: what is the difference between a machine firing a missile at a plane it believes is attacking, and an officer on a ship pressing a button to fire that missile based on the risk assessment of a machine that tells the officer an airliner is an attacker? Ref: https://en.wikipedia.org/wiki/Iran_Air_Flight_655
In that case the crew may have made a mistake in judgement. But it is easy to see a potential future case where no human interpretation is better than how a computer interprets the incoming data, and humans have to act on the computer's judgement alone. We can insert humans in the loop but that may not really mean anything.
Software in the case of AI weapons is basically just enlisted personnel that follow orders perfectly.
In the edge cases Pvt. Crayon is probably going to be better than AI at making the call for a hell of a lot longer than the people pushing the technology claim at any point in time.
The question really becomes how much you care about edge cases.
When there's a car bomb barreling toward your checkpoint being able to just hit the big red "kill everything in this predefined space" button and dive for cover would be a godsend.
Play war games to game war.
> Formed in October 2012, the Campaign to Stop Killer Robots is a coalition of non-governmental organizations (NGOs) that is working to ban fully autonomous weapons and thereby retain meaningful human control over the use of force.
If you fly a kamikaze attack against a large US warship, Raytheon's Phalanx CIWS will automatically fire. I believe it handles speedboats as well. The Patriot system is similar, able to shoot down incoming fighter jets.
One area that should be investigated is sticky, hairy countermeasures. Just think: what kills your vacuum cleaner?
Or what about rip-stop protective clothing that can kill a chainsaw in a split second? (I still have my leg because of protective chaps with Kevlar fibers. Grubbing out a nasty old stump, one moment of inattention and the chain bit my leg, but the fibers jammed the saw in an instant. I looked down, and it actually took me a few moments to realize that the rip in my chaps with the streamers of Kevlar hanging down could have been my own precious meat! Man, did I shiver, and send up a prayer of gratitude for the folks who made the chaps!)
Anyhow, once you start thinking defensively the ideas flow readily.
One idea I want to try is a thing that shoots those plastic balls (the kind from the ball pits you play in) but filled with a non-Newtonian goo (like cornstarch goo) that goes stiff at higher energies but is fluid at lower ones. Result: anti-riot "foam". People can't run around and flail in it, but you can still move around slowly (to e.g. provide first aid). There could be an enzyme or something that can be sprayed to dissolve the goo... etc.
Anyone remotely familiar with software development knows that automating either when to launch or what to target will not end well.
Which bug will end a civilization?
Best decision: don't build it; if it's already built, don't launch it; destroy it as soon as possible.
Short-range devices, sure, but they still require a human to point and launch. There's not a margin of error there that will divert a missile from an armored vehicle to an elementary school.