"Not even the evidence of the 1904 Russo-Japanese War, with its long sieges and trench warfare—an eerie predictor of World War I’s horrors to come—could persuade military observers of the Maxim gun’s lethality on the modern battlefield."
Kind of like how the queen in chess may not actually capture many pieces. She is easily the most powerful piece, and therefore a very credible threat — which means she can be used to project threats rather than to capture.
For example, there is a long and well documented history of human rights violations against civilian populations by human armies, and those violations stem as often from the high command's orders as from the volition of individual units.
Even worse than that, there is also the well documented history of the Cold War, where nuclear war and MAD were only averted because individual low-level officers exercised humane judgment and knowingly disobeyed protocol. If robots had been in charge of nuclear defense systems during those years, we probably would not be here discussing this.
On the other hand, robots would also complicate things for dishonorable superiors -- for one, they would lose the option of giving verbal orders and then claiming ignorance and blaming subordinates when things come out -- they would now need to alter electronic records.
But my point is just that the effect of higher robot involvement is far from clear.
> and those violations stem as often from the high command's orders as from the volition of individual units.
Perhaps through cryptographically one-way neurolinguistic re-programming. Even Japan's conditioning by the American occupation (one of the most thorough cultural reorganizations absent an enforced diaspora) seems to be waning after 70 years.
You're thinking like a nice person, which is the wrong approach here.
What you should be doing is, think of it as if you were Iosif Vissarionovich Stalin. Okay? You're Stalin. And now you're thinking "hmmm, how could I use these shiny new toys, I wonder?..."
It's not the best case scenario you should worry about, it's the other end of the stick. Security 101.
(Unless, of course, your whole post was sarcastic, in which case nevermind.)
Drones will be deployed in situations where human based intervention would otherwise be a political non-option, thus creating collateral damage where there otherwise would be none.
The cold truth is that an algorithm chooses the where (based on automatically collected and analyzed metadata, no less!), a machine relays the commands, an interface displays the target coordinates, and then the human obeys.
In any case, this article is not about the merits of drones per se, just about the self-targeting ones. So when you say that "an algorithm chooses the where" -- do you have any reference for that?
Surely you don't think they have gymnasiums full of junior analysts trying to manually correlate phone call metadata? At some level an algorithm decides someone specific crosses the probability threshold for our definition of "terrorist" and it's all rubber stamps from there until the hellfire blows up a wedding, funeral or hospital.
I suspect this is also the case for , though they are unlikely to own up to that publicly.
We often mark the fall of a regime at the point where its army and secret police take off their uniforms, climb down from the guard towers, and join the protest crowd (or go home). Robots can remain loyal to the central command structure to the very end.
Human police forces are liable to defect if not paid, likely to refuse suicidal engagements, and may even refuse immoral engagements. With good enough engineering, a regime's drones will remain ruthlessly efficient even when it runs out of money, is obviously on the wrong side of history, and is obviously going to lose.
I'm not sure whether the final outcomes of revolutions would change, but their endings will certainly be longer and more painful. A Berlin Wall capable of defending itself indefinitely without human intervention would not have fallen so easily.
Hmm, they can just shield the control modules (CPU and memory), and use optical storage or similar for large amounts of data and basic programming.
Also, EMPs usually last only a couple of seconds and are only effective over a small area. Maybe the robots can just reboot and wait for a valid command, delivered a few seconds later by some flying drone.
I thought it was one of Stanislaw Lem's novels, but can't find the name now; I might be wrong about that.
For another perspective on autonomous drones read Robert Sheckley's Watchbird.
Edit: The Defenders, which became The Penultimate Truth.
The plot of the book centers on autonomous drones. In my opinion it seemed well researched and not too far-fetched.
Just imagine how this might be applied by powers like the US, Russia, Israel, and China. IMO it's a recipe for increased use of force and extended violent occupations. And that's not even getting into the issue of autonomous kill-decision capabilities.
Imagine instead summary execution by robot because an algorithm decides that you look enough like a known terrorist or your call metadata seems terroristish and you're in the wrong neighborhood. Imagine the violent collective punishment that can be brought to bear on a population in retaliation for something like a suicide bombing.
There's this weird blind spot in the robot weapons debate where we shudder in fear of robots committing acts that are already being committed in job lots by humans every day. We don't need machines to be inhumane! The problem is not weapons, the problem is human beings choosing to do bad things, and scapegoating robots will do absolutely nothing to fix that.
I'm saying I think they can have the effect of making it easier for a person to kill more people with less risk of consequence and that the lack of consequence makes violence easier to turn to as a solution. I am not afraid of robots, I am afraid of humans granted the ability to shield themselves from consequence for killing.
Back to the original point of the article: it is important that international standards limiting such devastating weapons be enacted and enforced. Even Assad, after being caught using chemical weapons, was forced to step back, and many robotic weapons, especially autonomous ones, should probably be in the same category for the same reason. These things are going to be developed, so we need to act now to limit their use, not wait until a calamity has struck.
One of the reasons I'm skeptical about the importance of this is, I guess I still don't understand what the nightmare scenario is. You call these weapons "devastating" and say that they will lead to even more death and suffering than currently existing weapons. How? An autonomous robot with a gun can't kill you any more dead than a human with a gun, and as Assad and Putin have demonstrated the lack of robots didn't keep them from just sending humans instead.
It's simple logic really. When it's easy and cheap, they'll just do a lot more of it, and with much more relaxed rules, because every single strike is not a huge deal (to them) anymore, so they can afford to kill "less important" targets, or even "false positives", because there are a lot more strikes where that one came from.
False positives only matter in so far as they can generate outrage that can threaten power. So if you find a way to make absolutely sure no powerful person gets killed, which doesn't seem like a hard problem to solve, you can pretty much start shooting fish in a barrel without any adverse consequences. E.g. when Mao had his leaders fulfill quotas of persecuting so-called dissidents, whether those people were really dissidents was rather secondary. What matters is that there is a certain percentage that is unemployed, starving, and/or hunted by killer robots; that in itself does wonders to keep people in line, and the people who aren't directly under attack have a huge capacity to rationalize and ignore things, if that's what it takes to not be attacked. If we can accept people needlessly starving we can accept people being killed by a random number generator no problem.
I wish I was being snarky, and I hope for nothing more than to be proven wrong, but absent radical changes, that's what I see in our future. "A boot stamping on a human face, forever" is not a still frame, it's a process, and unless that process is stopped for good, that human face will become infinitely thin and infinitely helpless.
Are there other ways to be reachable semi-anonymously? I don't care much about encryption, since the most "dangerous" things I say in public anyway, but I also don't want to put an email or website address because I've, uhh, learned to behave better here by getting hellbanned a lot, and as such wouldn't want to be tied to an identifier like that should that happen again.
Sorry for indulging, but I wish HN had a simple private messaging system, maybe requiring a "message permission request" to be accepted before messaging someone; even restricting it to a few short messages that get deleted after a while would be great.
The network has a client-server architecture, but as befits a replacement for e-mail, one can run one's own server. Everything is open source, written in Go, and the work of Adam Langley, HTTPS security engineer at Google. The CLI also sets a standard for aesthetically appealing design.
You need to share a symmetric key with every contact to bootstrap trust; feel free to e-mail me (in my profile) if you would like a first contact.
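To make the bootstrap concrete: sharing a symmetric key out of band (e.g. over e-mail, as suggested above) lets the very first message be authenticated before any long-term trust exists. This is only an illustrative sketch using a plain HMAC tag, not the actual protocol of the system being described; the function names are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared key, agreed upon out of band with the contact.
psk = secrets.token_bytes(32)

def seal(key: bytes, message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the recipient can verify the sender."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the tag; raise if the message was forged or corrupted."""
    message, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return message

blob = seal(psk, b"hello, first contact")
assert open_sealed(psk, blob) == b"hello, first contact"
```

The point is that only someone holding the same key can produce a valid tag, so the first contact cannot be spoofed by a third party -- the hard part is, as always, exchanging that key securely in the first place.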
(Cynical, perhaps, but if you watch the UN human rights council for even five minutes cynicism seems quite appropriate.)