
Don't Let Robots Pull the Trigger - ohjeez
https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/
======
goldenshale
I think the authors are 100% right on everything, but they failed to bring up
one of the most important factors that I believe is driving US policy in this
area: if Russian or Chinese weapons are automatically firing at the speed of a
neural network while our weapons are waiting for meatware actuation then it's
going to be a short battle.

~~~
alkonaut
That’s why it needs to be a treaty. There needs to be an understanding that
it’s as unacceptable as biological warfare and that the gloves are off once
they are used.

No one will use autonomous fire in some backwater proxy war skirmish (which is
where superpowers meet) if the response would be the same as it would be to a
nuclear first strike.

~~~
Recurecur
> There needs to be an understanding that it’s as unacceptable as biological
> warfare

Are you under the impression that the major powers don't have biological
warfare capabilities?

> and that the gloves are off once they are used.

So, China uses autonomous killbots in Tibet to control an uprising. The US
"takes its gloves off", and...sends a strongly worded letter? Sends the world
economy into a freefall?

Further, China and Russia could do some limited testing, and then stockpile
millions of killbots/killdrones. Once the gloves have already "come off", they
can then rule the battlefield, since their opponent(s) won't have a similar
massive force.

I think the combination of cheap and effective will be impossible for the less
scrupulous countries to resist...and possibly the more scrupulous countries as
well.

It is also a slippery slope - the US is already working on autonomous anti-
ship cruise missiles that search an area and look for the correct ship (think
"Chinese aircraft carrier"). That implicitly means the missile will
autonomously "decide" which ship containing hundreds or thousands of sailors
gets hit. Full autonomy will also be a requirement for any effective UCAS
capability.

I expect that regardless of what treaty is signed, the various powers will
make sure there are loopholes permitting what they wish to do - or they will
simply ignore the treaty. A concrete example of such behavior is Russia's
long-term violations of the INF Treaty.

~~~
alkonaut
> Are you under the impression that the major powers don't have biological
> warfare capabilities?

No, but that they are extremely careful with using them since it’s considered
an extreme escalation to do so even on a small scale.

> China uses autonomous killbots in Tibet...

Replace “autonomous killbot” with “small tactical nuke” and you see what I’m
getting at. They could. But they won’t - because they can accomplish any
military goal with conventional means. So why risk the backlash? That’s the
kind of threshold I’d like for the deployment of autonomous weapons of this
type. And it’s difficult given how easily deniable they are, compared to e.g.
bioweapons.

> ...autonomous cruise missiles...

This is an interesting gray area. No one thinks a fuse that ensures a weapon
doesn’t blow up a friendly is a bad thing. But we still think of these weapons
as being “launched” by a human, even though they can make terminal decisions
on their own. I’d draw the line for what’s ethically acceptable at the point
where you no longer have a human launch weapons “at” a target but instead
launch a weapon to roam. The cruise missiles are still just clever fuses, they
don’t linger waiting to autonomously attack ships. The reason this is a gray
area and a slippery slope is of course only down to the short flight time of
the missiles. If they could fly for a year (like Russia's experimental
nuclear-powered cruise missile) then predesignated targets become meaningless -
it's effectively a roaming killbot. With current cruise missiles, a human still
makes a decision to attack even if it’s only a rough area thought to contain
the target. They know what they are trying to attack and there is a human to
take the blame if the weapon subsequently makes a mistake and hits a tanker.

> ...stockpile millions...

Yes. Comparing again to nukes that’s very similar. So would I prefer a world
without nukes to one with? Sure. But the best we could do was superpowers
stockpiling to ensure mutual destruction. I think that’s the best we can hope
for here. I don’t think these weapons won’t be built by the millions, I just
hope they will be used reluctantly.

~~~
Recurecur
(Sorry for the late reply...)

> The cruise missiles are still just clever fuses, they don’t linger waiting
> to autonomously attack ships.

That is not correct. The cruise missiles will be given an area to search. They
will then traverse that area looking for targets. Anything that matches their
programming will be hit. I'm sure they're flexible enough to take orders like
"attack any destroyer, cruiser, or aircraft carrier". It will be up to the
sensors and software to make sure the target isn't a civilian ship, for
instance.

> I don’t think these weapons won’t be built by the millions, I just hope they
> will be used reluctantly.

They are already being pitched on the open market by the Chinese:

>> The Blowfish A2 “autonomously performs complex combat missions, including
fixed-point timing detection and fixed-range reconnaissance, and targeted
precision strikes.”

>> Depending on customer preferences, Chinese military drone manufacturer
Ziyan offers to equip Blowfish A2 with either missiles or machine guns.

>> Mr Allen wrote: “Though many current generation drones are primarily
remotely operated, Chinese officials generally expect drones and military
robotics to feature ever more extensive AI and autonomous capabilities in the
future.

>> “Chinese weapons manufacturers already are selling armed drones with
significant amounts of combat autonomy.”

>> China is also interested in using AI for military command decision-making.

[https://www.technocracy.news/china-releases-fully-autonomous-killer-robots-and-drones-for-combat/](https://www.technocracy.news/china-releases-fully-autonomous-killer-robots-and-drones-for-combat/)

------
johngalt
Aren't 'killer robots' ethically similar to landmines? Only with better fire
control systems.

Not saying that landmines are paragons of ethical weapon systems, but they are
making a kill decision without a human being in the loop - based on nothing
more than a pressure plate or trip wire.

If we had landmines that could differentiate between a tank and a school bus
wouldn't that be a step forward?

~~~
nkingsy
[https://www.un.org/disarmament/convarms/landmines/](https://www.un.org/disarmament/convarms/landmines/)

"The ensuing Mine Ban Convention has been joined by three-quarters of the
world’s countries. Since its inception more than a decade ago, it has led to a
virtual halt in global production of anti-personnel mines, and a drastic
reduction in their deployment. More than 40 million stockpiled mines have been
destroyed, and assistance has been provided to survivors and populations
living in the affected areas."

I think the idea is to nip this one in the bud before it gets there.

~~~
GauntletWizard
Yes. Autonomous weapons systems won't make much sense on patrol for a long
time yet, but as area-denial systems they're already past viable, and it's
surprising they're not deployed.

I think there is a big argument to be made for their ethicality over
landmines: unlike landmines, they are easy to deactivate and can be made
highly visible while remaining effective. I don't think this makes them
ethical, but I do believe that we have to consider how that changes their
potential uses. There's an argument that it's far safer if we have hard rules
around certain areas - "Enter a 500m radius around this military base moving
more than 5mph and you will be shot many times" - than if we have fuzzy ones
around guards and schedules. Certainties are often better than including a
fuzzy human in the loop, which is why electrical installers all put their own
lock on the switch rather than assigning a coordinator.

~~~
dsfyu404ed
>they're already past viable and it's surprising they're not deployed.

Not deployed as far as the public knows.

It wouldn't exactly be hard to add a "shoot everything that comes near us"
flag to the software of any existing naval CIWS.

------
lewiscollard
My concern about autonomous robot warfare is not that it will shoot the wrong
people or go out of control; that's easily answered by "but this one won't",
and there's no counter-argument which will not look like pure assertion.

My concern is that they will work just fine: this would bring the domestic
costs of warfare to near-zero (the cost of the robot weapons). For those of us
that are sceptical of military adventurism, just consider how much more
willing our politicians would be to invade some random, relatively harmless
country if there were no "boots on the ground" and no possibility of body bags
coming home.

~~~
cr0sh
> My concern is that they will work just fine: this would bring the domestic
> costs of warfare to near-zero

As an adjunct viewpoint to this - what happens when such weapons are available
to regular people?

In theory, that's possible today; witness the number of instructables and
other similar projects detailing how to create a nerf or paintball automated
tracking gun system. It wouldn't take much to upgrade that to mount it on a
decent mobile platform with a real weapon.

Right now, the objective or use-case for such a device isn't clear, so we
haven't seen criminals build or use them, beyond perhaps drug trafficking via
consumer drones and similar.

I'm sure, though, that it is going to be a problem in the future...

~~~
philipkglass
I'm pretty sure that some criminals will use such devices in the future.

I'm also pretty sure that criminal use is going to be rare.

There doesn't seem to be a very large overlap between willingness to commit
violent crime and the technical competency to turn off-the-shelf consumer
products into weapons. Why do terrorists stab people with knives when they
could make explosives from household materials instead? They're clearly
violent and unafraid of death. My interpretation: they lack the necessary
knowledge to make explosives, aren't capable of self-learning either, and
settle for a vastly inferior but readily available knife. Even organized crime
seems to use explosives rather rarely, and even then diverted/stolen
commercial ones more often than DIY.

~~~
lewiscollard
> There doesn't seem to be a very large overlap between willingness to commit
> violent crime and the technical competency to turn off-the-shelf consumer
> products into weapons.

Good insight, which I will tack my own thought onto: the kind of person that
would be successful at this would probably have the right mindset to work in
an engineering job, and people who do that are both earning enough that they
have no need to turn to crime, and too busy to be a political or religious
extremist (something about idle hands and the devil's work...)

Still. Stabbing and other unsophisticated attacks seem to be relatively
recent and primarily a feature of Islamic terrorism. If I am correct, then...

> My interpretation: they lack the necessary knowledge to make explosives,
> aren't capable of self-learning either, and settle for a vastly inferior but
> readily available knife.

...I have an alternative, perhaps entirely wrong interpretation: Islamic
terrorism typically requires that its most skilled and most dedicated
operatives kill themselves in the course of their first and only attack. Which
is to say, they're _running out of good operatives_.

Meh, perhaps your original interpretation is better :)

------
georgeecollins
Autonomous robots are going to be really effective in all the places where it
is difficult for a human to survive and where decision-making is mediated by
technology, not human senses. So, they are going to be really effective deep
underwater, in space, and in high-g aerial combat at radar range.

Setting aside the moral question, there is also the institutional issue that
officers may not like to see pilots replaced with drones, sailors with UUVs,
etc. That creates a dangerous situation where the US military can have its
manned systems disrupted by cheaper unmanned ones, which I think is dangerous.

As far as the moral issue: what is the difference between a machine firing a
missile at a plane it believes is attacking, and an officer on a ship pressing
a button to fire that missile based on the risk assessment of a machine that
tells the officer an airliner is an attacker? Ref:
[https://en.wikipedia.org/wiki/Iran_Air_Flight_655](https://en.wikipedia.org/wiki/Iran_Air_Flight_655)

In that case the crew may have made a mistake in judgement. But it is easy to
see a potential future case where no human can interpret the incoming data
better than the computer can, and humans have to act on the computer's
judgement alone. We can insert humans in the loop, but that may not really
mean anything.

------
deogeo
What worries me most is how robotic warfare further concentrates power.
Currently you have to convince the people in the military to follow your
orders, meaning you need at least _some_ legitimacy. What happens when even
that human check on power is gone?

------
krisoft
So how is a cruise missile not an autonomous weapon?

~~~
hutzlibu
The cruise missile does not decide on its own when to launch. Humans do.

~~~
maxxxxx
It also doesn't decide the target.

~~~
dsfyu404ed
Neither does the person launching it though.

Software in the case of AI weapons is basically just enlisted personnel that
follow orders perfectly.

~~~
maxxxxx
I thought this is about weapons making decisions in the field autonomously.
Like spotting a target and then also deciding to kill it.

~~~
dsfyu404ed
Well right now Pvt. Crayon is making the decision on whether or not to engage.

In the edge cases Pvt. Crayon is probably going to be better than AI at making
the call for a hell of a lot longer than the people pushing the technology
claim at any point in time.

The question really becomes how much you care about edge cases.

When there's a car bomb barreling toward your checkpoint being able to just
hit the big red "kill everything in this predefined space" button and dive for
cover would be a godsend.

------
carapace
Push this to its logical conclusion: put the robots in an arena and decide
policy based on who wins. No one has to get hurt and we don't even have to
blow up real buildings and stuff.

Play war games to game war.

------
yboris
Relevant:
[https://www.stopkillerrobots.org/](https://www.stopkillerrobots.org/)

> Formed in October 2012, the Campaign to Stop Killer Robots is a coalition of
> non-governmental organizations (NGOs) that is working to ban fully
> autonomous weapons and thereby retain meaningful human control over the use
> of force.

------
opwieurposiu
Robots have been killing people by accident for years, soon we will find a way
to turn this bug into a feature!

------
burfog
The USA already has this. Russia and China have similar systems.

If you fly a kamikaze attack against a large US warship, Raytheon's Phalanx
CIWS will automatically fire. I believe it handles speedboats as well. The
Patriot system is similar, able to shoot down incoming fighter jets.

------
captainbland
I think in principle this means we need to develop robots which are capable of
disabling robots using means which are generally non-lethal (not through
autonomous decision making, but matter-of-fact mechanisms like EMP, sensor
blocking/jamming and so on).

~~~
carapace
Yes! Defense is easier than offense in general, especially with time to
prepare.

One area that should be investigated is sticky, hairy countermeasures. Just
think about what kills your vacuum cleaner.

Or what about rip-stop protective clothing that can kill a chainsaw in a split
second? (I still have my leg because of protective chaps with kevlar fibers.
Grubbing out a nasty old stump, one moment of inattention and the chain bit my
leg, but the fibers jammed the saw in an instant. I looked down and it
actually took me a few moments to realize that the rip in my chaps with the
streamers of kevlar hanging down could have been my own precious meat! Man did
I shiver, and send up a prayer of gratitude for the folks who made the chaps!)

Anyhow, once you start thinking defensively the ideas flow readily.

One idea I want to try is a thing that shoots those plastic balls (the kind
from ball pits you play in), but filled with a non-Newtonian goo (like
cornstarch goo) that goes stiff under high-energy impact but stays fluid at
low speeds. Result: anti-riot "foam". People can't run around and flail in it,
but you can still move through it slowly (to e.g. provide first aid). There
could be an enzyme or something that can be sprayed to dissolve the goo... etc.

------
thisisweirdok
This absolutely needs to be an agreed-upon standard, similar to chemical
weapons and mines.

Anyone remotely familiar with software development knows that automating
either when to launch or what to target will not end well.

Which bug will end a civilization?

~~~
v_lisivka
Forbid robots and send this guy instead. Two problems solved.

------
v_lisivka
Mines and other kinds of death traps are primitive robots too.

------
empath75
A robot is just a more complicated fuse.

~~~
function_seven
A fuse can't make a decision to light itself.

~~~
Varcht
Some weapon systems arm themselves based on certain conditions - altitude,
time from launch, etc. I think one could argue that some missile systems are
already robots that "pull the trigger".

~~~
thisisweirdok
But a human dictates the launch and what the target will be.

~~~
Varcht
Do they? Aren't some just sent out to find a heat source and blow it up?

~~~
thisisweirdok
Oh god no. At least not as the primary guidance for long-range. It can be
partially used to track a target, but it's not being used to decide where to
target in a broader sense.

Short range devices sure, but they still require a human to point and launch.
There's not a margin of error there that will divert a missile from an armored
vehicle to an elementary school.

