>There is no need to solve the long-term AI problem of general intelligence in order to build high-value applications that exploit limited-scope autonomous capabilities dedicated to specific purposes.
I'd argue that statement is inclusive of robotic infantry. We already have self-driving cars, so the prospect is not that far-fetched.
[0] http://www.acq.osd.mil/dsb/reports/DSBSS15.pdf
If you are fighting an army in uniforms, your targets practically have bullseyes painted on them. But if you look at all the modern conflicts since the Vietnam War, how do you tell the difference between a Vietnamese peasant and a Viet Cong fighter, between an Afghan villager and a Taliban fighter, between a Kurdish fighter and an ISIS fighter?
I doubt we are that close. Recognizing road signs and making judgement calls on the above are not the same order of magnitude of complexity.
Remotely controlled infantry, that would be more realistic.
If one of these peasants is taking shots at your robots, and only your robots, is it ethical to kill them when no actual human is immediately threatened?
Armed conflict is the ultimate rejection of rules societies normally impose on themselves. When two otherwise evenly matched armies fight each other, one following some arbitrary, self-imposed rules and the other one ignoring them, the latter will have a tactical advantage.
Or to change your question - is it ethical to use civilians to maneuver around the killer robots who do not shoot at non-combatants? No, it's not, but it doesn't matter - because if your robots have such limitation, your opponent will surely exploit it to defeat you.
My point is that ROE are applied only by the side that feels very sure of its power and ability to win the conflict. When you fight for your very survival, all rules go down the drain.
>Recognizing road signs and making judgement calls on the above are not the same order of magnitude of complexity.
On the contrary, self-driving vehicles go far beyond merely "recognizing road signs", and must make life and death judgement calls themselves.
Recognizing a person holding a weapon is already in the realm of current machine vision capability. In an environment where robotic infantry is deployed, it's not a stretch to imagine friendly forces possessing equipment to facilitate friend/foe identification.
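To make that concrete, here's a minimal toy sketch (every label, threshold, and IFF code below is hypothetical, not any fielded system's logic) of how a vision detector's output might be fused with an IFF reply:

    # Toy sketch only: fusing a hypothetical detector label with a hypothetical IFF reply.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        label: str               # e.g. "person_with_rifle", from an object detector
        confidence: float        # detector confidence in [0, 1]
        iff_code: Optional[str]  # transponder reply associated with the detection, if any

    def classify(det, iff_whitelist):
        """Return 'friendly', 'possible hostile', or 'unknown' for one detection."""
        if det.iff_code in iff_whitelist:
            return "friendly"          # a valid IFF reply overrides the vision result
        if det.label == "person_with_rifle" and det.confidence > 0.9:
            return "possible hostile"  # armed, and no friendly beacon present
        return "unknown"               # everything else defaults to taking no action

    whitelist = {"ALPHA-7"}
    print(classify(Detection("person_with_rifle", 0.95, None), whitelist))       # possible hostile
    print(classify(Detection("person_with_rifle", 0.97, "ALPHA-7"), whitelist))  # friendly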
Moreover, robotic infantry doesn't necessarily have to be lethal in the first place.
It's already being done, minus the IFF beacon. Just being in the general vicinity of some other person is enough to get murdered, no need for 'friendlies' (what a term) to go there at all.
Interestingly enough, Nikola Tesla discussed the concept of "telautomatons" for use in war in his 1900 article: "The Problem of Increasing Human Energy."
Fully autonomous weapons are going to come, and the squeamishness around them is going to disappear when their necessity becomes apparent in a shooting war.
We already have autonomous weapons; they're called mines. Even these can be somewhat targetable, in particular anti-vehicle mines.
The most immediately plausible development in autonomous weapons I've read about is in anti-shipping and anti-radiation missiles. Israel Aerospace Industries' new anti-radiation weapon is a loitering drone that can autonomously target fire control radars.[0] Lockheed's new LRASM[1] is designed to loiter in an area and automatically identify and target enemy ships within it. However, right now it asks for permission to conduct the attack. So we're just a firmware upgrade away from fully autonomous with that weapon.
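To illustrate why it's "just a firmware upgrade": the human-in-the-loop is essentially a single decision gate in the engagement logic. A toy sketch, with all names being hypothetical placeholders rather than the actual weapon's software:

    def request_operator_approval(target_id):
        # Stand-in for the datalink call back to a human operator.
        print("requesting approval to engage " + target_id)
        return False  # in this toy example the operator never approves

    def engage(target_id):
        print("engaging " + target_id)

    def handle_contact(target_id, matches_hostile_signature, human_in_the_loop=True):
        if not matches_hostile_signature:
            return  # not identified as a valid military target; do nothing
        if human_in_the_loop and not request_operator_approval(target_id):
            return  # current behaviour: no operator approval, no attack
        engage(target_id)  # flipping human_in_the_loop to False removes the only check

    handle_contact("contact-01", True)                           # asks a human first
    handle_contact("contact-01", True, human_in_the_loop=False)  # engages on its own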
In both these cases, autonomous weapons aren't really troublesome. Fire control radars send out a very different signal than civilian radars, and so are explicitly a military target. Furthermore, even civilian radar could be considered a legitimate target if you don't want to be observed at all, as opposed to being seen but impossible to shoot down.
Anti-shipping warfare has historically allowed explicit targeting of both military and civilian ships. So as long as you can identify the vessel as not being yours using onboard sensors, it's permissible to sink it.
The real problem with autonomous weapons tends to arise when they're deployed on land, especially when civilians are around. At sea, and even in the air, there's hardly anyone around.
> We already have autonomous weapons, their called mines.
And those autonomous weapons keep on killing and maiming people for a long time, whether the original conflict continues or not. The squeamishness does not seem to have quite disappeared either, judging from the fact that much of the world has condemned them and opted to forgo their use. [1]
"Fully autonomous weapons are going to come, and the squeamishness around it is going to disappear when its necessity becomes apparent in a shooting war."
The squeamishness around chemical, biological, nuclear weapons and expanding bullets didn't disappear. How do we know that fully autonomous weapons won't be categorised alongside them?
I honestly don't know what to make of this comment. There hasn't been large-scale naval combat, or contested transoceanic supply lines, since World War II. So... what's your point? Also, you seem to be implying that targeting supply lines is somehow illegitimate. It is legitimate. It's been a tactic of war since war evolved beyond cattle raids on neighboring villages. The fact that you're comparing this to a war crime shows you don't know what you're talking about.
Next you'll be saying false flags[0] and arming civilian ships[1] are war crimes. Again, they're not.