Robot wars are so much better than human wars. Against humans, robots will eventually use more non-lethal options (it's easier to be humane if losing a fight means replacing a drone than when it's either your life or your enemy's). Then, robots will be mostly used against other robots, reducing death tolls. Then, perhaps, wars will be decided in friendly Starcraft matches, saving billions. :)
"Maxim was later knighted by Queen Victoria for his services to humanity -- it was thought that the machine gun would make wars shorter, hence more humane." Spencer C. Tucker _A Global Chronology of Conflict: From the Ancient World to the Modern Middle East vol 6 1st ed_ pg 1198
"Not even the evidence of the 1904 Russo-Japanese War, with its long sieges and trench warfare—an eerie predictor of World War I’s horrors to come—could persuade military observers of the Maxim gun’s lethality on the modern battlefield."
Artillery caused ~60% of combat deaths and 75% of combat injuries in World War I. The machine gun is certainly a terrifying weapon, with the added psychological impact of being aimed directly at individuals or groups, but it was simply much less deadly. I would also add that indirect-fire artillery is much more likely to cause post-traumatic stress disorder ("shell shock"), as the randomness and lack of control is traumatic.
I'm not a wargames expert, but you could argue the threat of the machine gun kept people in the trenches, which of course shaped the war and made artillery a cornerstone.
Kind of like how the queen in chess may not actually take many pieces. She is easily the most powerful piece, and as a result a very credible threat, so she can be used to project threats rather than to capture.
While there are possibilities for more humane warfare using drone armies, the downsides are way, way worse.
For example, there is a long and well-documented history of human rights violations against civilian populations by human armies, and those violations come as often from the high command's orders as from the volition of individual units.
Even worse than that, there is also the well-documented history of the Cold War, where nuclear war under MAD was averted only because individual low-level officers exercised humane judgment and knowingly disobeyed protocol. If robots had been in charge of nuclear defense systems during those years, we probably would not be here discussing this.
"By example, there is a long and well documented history of human rights violations against civilian population from human armies, and those violations come as often as not from the high command's orders as for the volition of individual units." -- doesn't this suggest that robots (who would not disobey their orders) are likely to be much better than humans at reducing human rights violations?
It seems to suggest so, but you also have to take into account all the violations that never occur because some honorable human soldier chose to disobey direct orders from their not-so-honorable superiors. Or the much more extensive and systematic violations that ruthless but pragmatic commanders never order in the first place, because they are aware of the effect that would have on the morale of the troops.
You are right, and of course in most instances we won't know of such cases, so there is no way to judge how many there might be.
On the other hand, robots would also complicate things for the not-so-honorable superiors: for one, they would lose the option of giving verbal orders and then claiming no knowledge, blaming subordinates when things come out; they would now need to alter electronic records.
But my point is just that the effect of higher robot involvement is far from clear.
Well, reducing them by half wouldn't be a bad thing either :) although of course it's more complicated than that; I just wanted to point out that the direction of the effect is rather unclear at this point.
Between multiple opposed parties attempting to resolve conflict, war is a meta-mechanism that solves the decision-making problem vacuously, by removing all but one participant. Until a permanently disabling threshold other than death is found [0], there will exist a price at which it is profitable to undermine any abstraction layer of polite competition. Automated militaries lower the price point at which death, whose limit is genocide, is an economic alternative.
[0] Perhaps through cryptographically one-way neurolinguistic re-programming. Even Japan's conditioning by the American occupation (one of the most brutal cultural reorganizations absent enforced diaspora) seems to be waning after 70 years.
> Then, robots will be mostly used against other robots, reducing death tolls.
You're thinking like a nice person, which is the wrong approach here.
What you should be doing is, think of it as if you were Iosif Vissarionovich Stalin. Okay? You're Stalin. And now you're thinking "hmmm, how could I use these shiny new toys, I wonder?..."
It's not the best case scenario you should worry about, it's the other end of the stick. Security 101.
(Unless, of course, your whole post was sarcastic, in which case nevermind.)
That's very altruistic. This hypothesis has been tested and rejected: the collateral damage due to drone warfare has increased over time. There is some hope it's slowing down, but that might just be lack of data or a slowdown in drone strikes.
Of course it has increased; there are many more drones out there. The hypothesis is whether drones do more collateral damage than humans tasked with the same job. Also, to my knowledge, no current drones are closed systems -- there is always a human pressing a button -- so they aren't even the subject of this article?
Right, a human always has a close eye on the drone's operations, so if it were to target something silly like an MSF hospital the human operator can stop the strike!
Drones will be deployed in situations where human based intervention would otherwise be a political non-option, thus creating collateral damage where there otherwise would be none.
The cold truth is that an algorithm chooses the where (based on automatically collected and analyzed metadata, no less!), a machine relays the commands, an interface displays the target coordinates, and then the human obeys.
Or, phrased differently, achieving political objectives that would otherwise not have been feasible. It is not obvious to me that the side relatively more tolerant of its own casualties should enjoy the political advantage. Also, do we discount casualties that the intended targets would likely have inflicted, had they not been eliminated?
In any case, this article is not about merits of drones per se, just about the self-targeting ones. So when you say that "an algorithm chooses the where" -- any reference for that?
Surely you don't think they have gymnasiums full of junior analysts trying to manually correlate phone-call metadata? At some level an algorithm decides that someone specific crosses the probability threshold for our definition of "terrorist", and it's all rubber stamps from there until the Hellfire blows up a wedding [1], funeral [2], or hospital [3].
Yes, but your examples [1,2,3] are neither the desired nor the likely outcome. In most cases, the correct (not necessarily the best or morally right) target is eliminated. In fact, I suspect (and would be interested in data that could prove or disprove this) that the erroneous outcome types [1,2,3] you list are _less_ likely with drones and automated systems than with humans on the front line, given the stress and confusion implicit in that situation...
Well, IMO there is a big difference between humans looking at algorithm-provided evidence (even if it's often rubber-stamping), and drones acting on that evidence directly.
UAVs have few realistic less-than-lethal options without people on the ground. Ground-based drones could actually capture someone, which allows for intelligence gathering.
EMPs make robot war between states of similar capability pretty unlikely. Robots are only particularly interesting against enemies who can't wield meaningful electronic countermeasures, like protesters and other domestic uprisings.
We often mark the fall of a regime at the point where its army and secret police take off their uniforms, climb down from the guard towers, and join the protest crowd (or go home). Robots can remain loyal to the central command structure to the very end.
Human police forces are liable to defect if not paid, likely to refuse suicidal engagements, and may even refuse immoral engagements. With good enough engineering, a regime's drones will remain ruthlessly efficient even when it runs out of money, is obviously on the wrong side of history, and is obviously going to lose.
I'm not sure whether the final outcomes of revolutions would change, but their endings will certainly be longer and more painful. A Berlin Wall capable of defending itself indefinitely without human intervention would not have fallen so easily.
> EMPs make robot war between states of similar capability pretty unlikely
Hmm, they can just shield the control modules (cpu and memory). And use optical storage or similar for large amounts of data and basic programming.
Also, EMPs usually last only a couple of seconds and can only be effective in a small area. Maybe the robots can just reboot and wait for a valid command a few seconds later by some flying drone.
EMPs burn out circuits, so equipment can't just be turned back on. Also, I'm not an electrical engineer, but my understanding is that shielding involves grounding, which is relatively difficult for things that fly.
Also worth noting that "shielding" electronics is not binary: the damage depends on both the strength of the field produced and the specific electronics in question. A strong enough EMP, perhaps one from a coronal mass ejection on the sun, might require a mountain of shielding to adequately protect sensitive electronics. On the flip side, a vacuum-tube calculator would be much more resilient than an iPhone.
This was the premise of some sci-fi novel I read long ago: two countries at war switched to using robots, and as the robo-armies became more interconnected and thus more intellectually powerful, they understood the pointlessness of destruction and refused to fight. IIRC they went to pick flowers on the battlefield and sing instead, though.
I thought it was one of Stanislaw Lem's novels, but can't find the name now; I might be wrong about that.
Pretty certain PKD did this too: robots keep reporting to the humans that the surface is still radioactive, etc.; a human emerges from the bunker, all is green, Soviet and American robots living in peace together.
Edit: The Defenders, which became The Penultimate Truth.
As noted above, I filed it under "not too far-fetched". As I grow older and spend more time in the IT industry, less and less stuff gets filed under "impossible".
Robot wars, or drone strikes, have been going on for at least 20 years, perhaps more. And perhaps robots have the ability to even the battlefield: imagine China sending billions of drones to counteract American drones. Perhaps we should be looking forward to drone war, because anyone can destroy your machines; it's not a big deal for anyone to enter such a war. There will be no diplomatic incident over destroyed robots serving as a provocation to all-out war between countries with real power. The battlefield is pretty much open to anyone with the hardware.
It lowers the barrier to entry for lethal violence, and that is not a good thing. A more realistic scenario is that rich countries will have less reason to seek diplomatic solutions once the only people doing the dying are in the poorer nations they wage war against. Just imagine how many years an all-robot military could occupy a nation with no public blowback over casualties. At that point it's a cost calculation.
Just imagine how this might be applied by powers like the US, Russia, Israel, and China. IMO it's a recipe for increased use of force and extended violent occupations. And that's not even getting into the issue of autonomous kill-decision capabilities.
These aren't infallible, defense-only hug-bots we're talking about. They're not going to just grab your arm; they're more likely to shoot you and anyone else who happens to be too close to you.
Imagine instead summary execution by robot because an algorithm decides that you look enough like a known terrorist or your call metadata seems terroristish and you're in the wrong neighborhood. Imagine the violent collective punishment that can be brought to bear on a population in retaliation for something like a suicide bombing.
Are you saying that human intelligence agencies would never go after innocent people and human military forces would never carry out violent collective punishment against a civilian population? Think carefully...
There's this weird blind spot in the robot weapons debate where we shudder in fear of robots committing acts that are already being committed in job lots by humans every day. We don't need machines to be inhumane! The problem is not weapons, the problem is human beings choosing to do bad things, and scapegoating robots will do absolutely nothing to fix that.
I'm saying I think they can have the effect of making it easier for a person to kill more people with less risk of consequence and that the lack of consequence makes violence easier to turn to as a solution. I am not afraid of robots, I am afraid of humans granted the ability to shield themselves from consequence for killing.
And what I'm saying is that people -- real people, not some unnamed "person" from some unspecified country or culture who may exist at some point in the future -- are already turning to violence as a solution, without the need for robots. Bashir Assad doesn't have killer robots and neither does Vladimir Putin and neither do the mullahs of Iran, but somehow they're managing.
I don't see the relevance of that point. The existence of violence without a particular weapon does not refute the argument that a new weapon may lead to even more death and suffering. By your logic since violence already exists apparently we shouldn't worry about new weapons making it even worse.
Back to the original point of the article: it is important that international standards limiting such devastating weapons be enacted and enforced. Even Assad, after being caught using chemical weapons, was forced to step back, and many robotic weapons, especially autonomous ones, should probably be in the same category for the same reason. These things are going to be developed, so we need to act now to limit their use, not wait until a calamity has struck.
> Back to the original point of the article it is important that international standards limiting such devastating weapons be enacted and enforced.
One of the reasons I'm skeptical about the importance of this is, I guess I still don't understand what the nightmare scenario is. You call these weapons "devastating" and say that they will lead to even more death and suffering than currently existing weapons. How? An autonomous robot with a gun can't kill you any more dead than a human with a gun, and as Assad and Putin have demonstrated the lack of robots didn't keep them from just sending humans instead.
Why are they more likely to shoot you? They will do what we program them to do and we could send them into situations too dangerous for a human team. It wouldn't need to have a gun.
The biggest danger is the government believing they "solved" autonomous killing machines that kill people based on the NSA's mass spying apparatus, when in fact such robots could have many false positives. In fact, they already do that; it's just a human who pushes the button. When the decision becomes automated, such strikes will grow 10- or 100-fold, just as drones increased air strikes 100- to 1000-fold in the Middle East.
It's simple logic, really. When it's easy and cheap, they'll just do a lot more of it, and with much more relaxed rules, because every single strike is no longer a huge deal (to them), so they can afford to kill "less important" targets, or even "false positives", because there are a lot more strikes where that one came from.
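To make "many false positives" concrete, here is a quick Bayes'-rule sketch in Python, with entirely made-up numbers: when actual targets are rare in the surveilled population, even a classifier that is 99% accurate in both directions produces mostly false flags.

```python
# All numbers are made up, purely to illustrate the base-rate effect.
base_rate = 1 / 10_000          # P(combatant) in the surveilled population
sensitivity = 0.99              # P(flagged | combatant)
false_positive_rate = 0.01      # P(flagged | innocent)

p_flagged = (sensitivity * base_rate
             + false_positive_rate * (1 - base_rate))
p_combatant_given_flag = sensitivity * base_rate / p_flagged

print(f"{p_combatant_given_flag:.1%} of flagged people are actual targets")
# -> about 1.0%: roughly 99 of every 100 flags would be false positives
```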
> The biggest danger is the government believing they "solved" autonomous killing machines that kill people based on the NSA's mass spying apparatus, when in fact such robots could have many false positives.
False positives only matter in so far as they can generate outrage that can threaten power. So if you find a way to make absolutely sure no powerful person gets killed, which doesn't seem like a hard problem to solve, you can pretty much start shooting fish in a barrel without any adverse consequences. E.g. when Mao had his leaders fulfill quotas of persecuting so-called dissidents, whether those people were really dissidents was rather secondary. What matters is that there is a certain percentage that is unemployed, starving, and/or hunted by killer robots; that in itself does wonders to keep people in line, and the people who aren't directly under attack have a huge capacity to rationalize and ignore things, if that's what it takes to not be attacked. If we can accept people needlessly starving we can accept people being killed by a random number generator no problem.
I wish I was being snarky, and I hope for nothing more than to be proven wrong, but absent radical changes, that's what I see in our future. "A boot stamping on a human face, forever" is not a still frame, it's a process, and unless that process is stopped for good, that human face will become infinitely thin and infinitely helpless.
Off Topic: I see in your profile, PavlovsCat, a link to the RetroShare Sourceforge page. Are you a contributor to the project? I see that they now have an active GitHub repository [0], when I had thought progress had slowed or ceased.
Oh, I kinda forgot about that... to be honest, I have used that thing once, when it was discussed here, "added" a few people, but it never even came to really talking to anyone over it. So, thanks for reminding me, I guess I'll try it again! [ https://retroshareteam.wordpress.com/2015/06/08/version-0-6-... ] (needless to say, I'm not a contributor, and I doubt I could provide much of substance to something like that)
Are there other ways to be reachable semi-anonymously? I don't care much about encryption, since the most "dangerous" things I say in public anyway, but I also don't want to put up an email or website address because I've, uhh, learned to behave better here by getting hellbanned a lot, and as such wouldn't want to be tied to an identifier like that should it happen again.
Sorry for indulging. I wish HN had a simple private messaging system, maybe requiring a "message permission request" to be accepted before messaging someone; even restricting it to a few short messages that get deleted after a while would be great.
For asynchronous anonymous communication, I have been experimenting with Pond recently [0]. Messages have perfect forward secrecy from each previous one, are anonymized through the Tor network, and are padded to a standard packet length and sent at randomized times (along a power-scale distribution) to prevent size- and timing-correlation attacks from revealing identity.
The network has a client-server architecture, but, as befits a replacement for e-mail, one can run one's own server. Everything is open source, written in Go, and the work of Adam Langley, HTTPS security engineer at Google. The CLI also sets a standard for aesthetically appealing design.
You need to share a symmetric key with every contact to bootstrap trust; feel free to e-mail me (in my profile) if you would like a first contact.
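For the curious, here is a minimal Python sketch of the two traffic-analysis defenses described above: fixed-size padding and randomized send times. The packet size, the exponential delay distribution, and all names are illustrative assumptions on my part, not Pond's actual constants or API.

```python
import os
import random

PACKET_SIZE = 16 * 1024  # illustrative fixed wire size, not Pond's real constant

def pad_message(plaintext: bytes) -> bytes:
    """Pad every message to one fixed length, so ciphertext size reveals
    nothing about the content. (A real protocol pads inside the encryption
    layer; this only shows the idea.)"""
    if len(plaintext) > PACKET_SIZE - 2:
        raise ValueError("message too long for a single packet")
    header = len(plaintext).to_bytes(2, "big")  # true length, read back on receipt
    filler = os.urandom(PACKET_SIZE - 2 - len(plaintext))
    return header + plaintext + filler

def next_send_delay(mean_seconds: float = 300.0) -> float:
    """Draw the delay before the next transmission from an exponential
    distribution, so sends form a memoryless process uncorrelated with
    when messages were actually written."""
    return random.expovariate(1.0 / mean_seconds)
```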
IMO it's not obvious that autonomous robots would be worse than humans at things like discriminating between civilian and military targets -- particularly when those humans are in a life-or-death situation that requires quick decisions. The way the article describes it, this looks more like an attempt by the parties currently lacking the technology to level the playing field through diplomatic means, rather than a mutually beneficial arrangement.
The biggest threat to autonomous weapons is the military organizations that want to keep manpower central. That's why drones were pioneered by the CIA and the Navy, not the US Air Force: they needed solutions, not pilots.
It's hard not to think that a lot of the theatrical upset about "killer robots" is because these are weapons mostly deployed by the United States and a few other Western nations against opponents who don't have a real military defense against them. When you can't respond with warfare, there's always lawfare.
(Cynical, perhaps, but if you watch the UN human rights council for even five minutes cynicism seems quite appropriate.)