
Death by algorithm: the age of killer robots is closer than you think - saravana85
https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war
======
ummwhat
All this hoopla about state-level actors and UN bans and scavenging the tech,
but realistically speaking, if any of us really, really needed to, we could
probably bolt a gun to a drone and use PyTorch/TensorFlow/whatever to make a
reasonably effective LAWS. Sure, a country could do it better than me, but you
don't need Manhattan Project resources to do this. The cat's out of the bag.
Pandora's box is open. The horse has already bolted.

------
Barrin92
>Experts in machine learning and military technology say it would be
technologically straightforward to build robots that make decisions about whom
to target and kill without a “human in the loop”

I'd like to know who those experts are, because, as almost always in these
ethical debates about autonomous systems, the first thing I notice is the
grandiose claims being made about the technology itself.

It is highly non-trivial for humans to make calls about civilians, targets,
journalists and so on with high fidelity in war zones.

And even if the classification were possible, what stops bad actors from the
trivial countermeasures of handing civilians weapons, using disguises, or
even employing adversarial methods to fool autonomous systems? That's not
something any existing technology can cope with; it requires extraordinary
human judgement.

Just like with autonomous driving technology I'm almost more afraid of the
notorious overselling of dumb machines than I am of ethical debates.

~~~
dogma1138
Bad actors already use adversarial patterns by mixing in with the civilian
population.

This has been the core concept of guerrilla warfare since it was conceived,
and likely well before it was named.

In fact, even outside of war zones, bad actors constantly hide in plain sight;
we just call them criminals rather than terrorists when they aren't a foreign
entity with a political goal (although organized crime can easily fall under
the definition of terrorism under certain conditions).

As far as dumb machines go, we already have plenty of systems that automate
much of the killing, including target identification and tracking.

We've already been overselling dumb machines that make killing easier. A human
in a conflict zone, overloaded with sensor data that makes it easy to hit a
target, isn't more likely to make the right decision.

Given the limited window of opportunity and the level of cognitive dissonance
anyone engaged in combat has to develop as a coping mechanism, the idea that
humans are inherently capable of making a moral decision under these
conditions is pretty laughable to me.

------
xyzal
Reminds me of the somewhat famous Slaughterbots video from the Stop Autonomous
Weapons initiative:
[https://youtu.be/9CO6M2HsoIA](https://youtu.be/9CO6M2HsoIA)

~~~
imtringued
It's a pretty crappy video that is doing nothing but fearmongering.

Did you know that there is a knife in every kitchen? So why isn't there a
knife-murder epidemic? Because murder has to be "profitable" in some way. As
long as there are better ways to obtain that "profit", violence will generally
not happen on a large scale.

Nowadays the bigger problem is that countries like the USA are willing to kill
journalists and other civilians in foreign countries. Drone or rifle does not
matter; they were innocent either way.

