The first AI War is going to be a really brutal wakeup call. The part in sci-fi where anyone actually has to aim to kill an enemy was a big romantic lie. The robots don't miss and they draw and shoot before the humans even know they're there. William Gibson, to his credit, did anticipate this with the Slamhound assassin robots in Count Zero.
Strange. The video and everyone here keeps talking about this as a scary future dystopia, while Pakistan and many other countries have been living in it for years.
Drone strikes are exactly what the video describes: surgical precision attacks from machines with automatic weapons that the targets couldn't defend themselves from, used by both governments and terrorists, and routinely relying on mass surveillance for targeting.
The weakly conveyed message of the video was that there needs to be a human pressing the trigger so that they can appreciate whether the act conflicts with their values, but we have millennia of evidence that humans can act mindlessly too.
Now, we're not yet at the point where it's cost-effective to give every infantryman a thing like this - the scope alone costs more than most high-end sniper rifles. But costs will go down...
That's not to say this sort of technology couldn't be useful, but putting active broadcasting equipment on your gun is going to backfire in next generation warfare.
That thing is called the AN/PEQ-2, and it is a near-infrared (NIR) laser/illuminator. The reason it's infrared is that the enemy can't see it unless they have night vision (which works in near infrared). The illuminator is then used essentially as a flashlight, and the laser is used to designate targets.
Now, recently, groups like ISIS have been acquiring night vision devices in more substantial numbers. So now there's an arms race of sorts, where new devices push the frequencies further down, such that most existing NIR devices cannot see them, but newer and more expensive ones can. Presumably the same would apply to TrackingPoint-like technology.
Give me a cheap-ass webcam, a Raspberry Pi 3, and a day or two to cobble together an OpenCV beam-detecting program in Python. A day or two more (perhaps with better hardware) and I'll give you GIS-aided positioning. That seems like it would be enough to defend a position.
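The detection part really is that simple: threshold for saturated pixels, take the centroid. A minimal sketch, with a NumPy array standing in for the grayscale frame OpenCV would capture from the webcam (the 250 intensity threshold is an assumed calibration value, not from any real system):

```python
import numpy as np

def find_beam(frame, threshold=250):
    """Locate a saturated bright spot (e.g. a laser dot) in a grayscale frame.

    frame: 2-D uint8 array, as cv2.VideoCapture + cv2.cvtColor would return.
    Returns the (row, col) centroid of above-threshold pixels, or None.
    """
    ys, xs = np.nonzero(frame >= threshold)
    if len(xs) == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# Synthetic 480x640 frame with a bright 3x3 "laser dot" centered at (100, 200):
frame = np.zeros((480, 640), dtype=np.uint8)
frame[99:102, 199:202] = 255
print(find_beam(frame))  # → (100.0, 200.0)
```

In a real loop you would feed this one frame per capture and smooth the centroid over time; the GIS part is then just mapping pixel coordinates through the camera's known pose.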
Even at other wavelengths, the sensor equipment is out there. I'm not sure what the SNR would be like on a varied focus 0.4μm (~bluray) sensor though.
CMOS sensors pick up all sorts.
For example, there's a popular firearm (and other things, technically, but it's mostly used for firearms) coating called Cerakote. Comes in a bunch of colors, including those commonly used for camouflage - indeed, you can mix and match to create some very impressive patterns. Example:
This is available to civilians, and pretty popular - mostly for aesthetics, but it does work well as a camouflage. The catch is that, being an epoxy polymer coating, it reflects NIR really well - so for someone observing through a night vision device, these things basically glow, especially if an infrared illuminator is also used.
But Cerakote also has a version that was specifically designed to reduce the near-infrared signature, to the point where it works just as well as camouflage in NIR as it does in the visual spectrum. Except it's only available to the military and police - and I believe this is, in fact, because of some law that restricts them from offering it on the civilian market.
We are very unlikely to get a wakeup call after any AI War, because we will all be dead.
I know. I thought I was decent at strategy games, but AI War shook that confidence to the bone. I don't think I've ever won a game. http://store.steampowered.com/app/40400/AI_War_Fleet_Command...
On a serious note, you're absolutely right. Sci-Fi has been lying to us for a very long time and I don't think people are ready to face what the reality will be.
> The robots don't miss and they draw and shoot before the humans even know they're there.
Maybe, though if those robots are based on neural networks maybe we'll luck out and get a situation like this: https://xkcd.com/1897/
To download your file: Please click all the squares containing traitorous human scum.
That's totally unfair. I think you have been reading the wrong science fiction.
Same issue: a world where the technology to build an incredibly cheap drone that size which can hunt humans with facial recognition based on specific parameters exists...is also a world in which an anti-drone can use the exact same technology and just ram into unidentified drone vehicles to disable them. If the swarms are cheap enough for terrorists, then the counter-swarms are monumentally cheaper.
EDIT: It's worth noting you could argue other scenarios - like only governments building these, but that's what I mean by the message being confused - it's not clear what the threat is supposed to be.
For an eight minute video, it does a decent job of exploring the consequences of a weapon that can kill remotely, with minimal collateral damage, and with no supervision. They touch on potential use against terrorists, political opponents, and activists. The main point being that once you release thing into the wild, you have little control over how it will be used.
I don't think that's a reasonable assumption to make. I have no evidence either way, but I can easily theorize that lots of medical advancements might have come, in later stages of their careers, from the same people who got their feet wet in chemical-weapon technology.
Again, I don't really have an opinion, and haven't really thought it through - I'm just saying your blanket statement needs a citation.
Can we afford to and do we wish to have a number of anti-drones flying everywhere in public space and our own homes?
Perhaps someone has. The reason those are ineffective in the scenarios depicted is the same as with present day terrorism: In order to prevent these attacks you have to succeed every time whereas the terrorists only have to succeed once to wreak havoc.
Well, in a world where developing assault rifles is actually rather cheap, nobody has really developed any convincing countermeasures. Infantry still gets killed if you shoot them with an AK-47, etc.
So there can be weapons without countermeasures. The point of the video is that the possibility exists to develop the weapon to begin with - and that is already concerning enough.
This is in comparison to the significant difficulty of making a barrel or any of the mechanical parts for an AK-47.
The only currently existing self-replicating nanotech is called “life”, and plenty of that has been getting in our way for as long as there has been an us to get in the way of.
Unless you count bootstrapping on existing life, I have it on good authority (someone with a PhD in materials science from Cambridge) that nobody knows how to make fully self-replicating devices yet, not even clanking replicators.
As for drones… well, I can think of terrible ways to weaponise them, how to defend drones from simple counter-attacks, and then how to use that counter-attack mechanism as city-wide anti-drone tech, but in the medium-likelihood event that my ideas are not fundamentally flawed, I don't want to give people ideas.
The trick is to not take the first step down that rabbit hole otherwise we will forever be chasing our tail and we'll never feel secure again.
A ban would be impossible to implement. What exactly would you ban? AI? Drones? facial recognition? Once you have a certain level of capability in these technologies, weaponising these things is the easy part.
Compare with nuclear weapons, which are way worse and have a lot of support for a ban, but there's no serious hope of that ever happening, because they are really good at killing people and destroying stuff.
Seems to me that a ban is only realistic if the stuff doesn't work well.
Against a modern military equipped and prepared to withstand them, perhaps. Against a civilian population or a less well-funded force, not so much. This is doubly the case when the attacker is a non-state agent (like a terrorist). Remember, for instance, the sarin gas attack on the Tokyo metro by Aum Shinrikyo. An air force bombing a major city with chemical weapons would also cause vast casualties.
I think what you're trying to say is that chemical weapons are primarily designed to be terrifying, with lethality considerations coming second. However, you could argue the same about nuclear weapons. After all, the attacks on Hiroshima and Nagasaki were not actually meant to wipe the Japanese out, only to scare them into submission. And nuclear deterrence works on the principle that an enemy won't dare attack a nuclear power.
In general, you could think of all weapons ever as aimed primarily at the morale of the enemy rather than their life, and that thought may have some merit. After all, an army that breaks and runs is beaten faster and with fewer losses to the other side than one that fights to the last.
However, the important point to keep in mind is that weapons, even when they're primarily designed to be terrifying, are designed to be terrifying in the manner they kill. So I think you'll find that most people would agree that chemical weapons should be banned because they kill in a horrible manner, not relative to the number of casualties they may or may not inflict.
I do think you underestimate their potency btw.
 Cough cough. In war games, certainly.
I guess you could think that if we can't just get rid of war (which is always terrible), there's no reason to have international laws and agreements marking certain kinds of killing and other harm as off limits and unacceptable.
Apparently our current leaders agree with you.
The position of the Canadian government is that the development of autonomous weapons is already illegal under Article 36 of the 1977 Extended Protocol 1 to the Geneva Convention of 1949.
Could also forbid self-replication of weaponized robots.
Pacifism is an unstable configuration; non-aggression is.
Laws as written have all kinds of obvious loopholes, but they are intentionally vague enough that a judge can look at your "obvious loophole" and decide that you are in violation anyway.
That Google, Apple, Uber et al pay so very little tax suggests that that is an optimistic appraisal.
Killer AI is not the same problem as nuclear weapons, because an AI weapon is far easier to make than a nuclear one. Technically, you could do it right now with your DJI drone; all it takes is programming it to do some automatic control.
So I'm not very optimistic about any attempt to limit those kinds of weapons, given the conditions we're currently in.
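For what it's worth, the "just program some automatic control" point is easy to illustrate. A deliberately toy example, nothing drone-specific: naive pursuit guidance in plain Python, i.e. always steering straight at the target's current position, is already enough to close on a moving point:

```python
def pursue(chaser, target_path, speed=2.0):
    """Naive pursuit: each step, move `chaser` straight at the target's
    current position. Returns the step at which the chaser closes to
    within one step-length of the target, or None if it never does."""
    cx, cy = chaser
    for step, (tx, ty) in enumerate(target_path):
        dx, dy = tx - cx, ty - cy
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:
            return step
        cx += speed * dx / dist  # unit vector toward target, scaled by speed
        cy += speed * dy / dist
    return None

# Target walks right at 1 unit/step; chaser starts 50 units behind but is 2x faster.
path = [(float(x), 0.0) for x in range(200)]
print(pursue((-50.0, 0.0), path))  # → 48
```

Real autopilots add PID loops and sensor fusion on top, but the guidance logic itself is a few lines; the hard parts already ship inside the consumer drone.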
Yet people stopped mentioning this kind of warning, even though it has been there for well over half a century.
The reason we successfully limited the use of nuclear weapons is that the US tested them on the Japanese, saw the devastating result, and thought, "What if the Japanese (or the Russians, the Chinese, etc.) one day use this on us?"
I would guess only a similar event could get everyone on earth to stand in line and effectively ban these AI weapons altogether. That is, AFTER we have all actually been seriously hurt by them.
It's just like the warnings about global warming: the math is there, proved by observation, and somebody out there still believes we will be fine because the whole thing is just a conspiracy made up by China.
We're only safe, because crazy or truly evil people are rare.
War should stay approximately 'human-magnitude' in scale and proportion, or it will simply wipe us out. That's why we have built, but not yet routinely used, nuclear weapons. The same strategy needs to be followed for any tech that is far more efficient and autonomous than a human would be.
Which is morally very different. Actually morality is an interesting aspect of the video/story. We had the bad guy white male presenter on the stage playing the good guy and claiming that his weapons could distinguish good guys from bad guys (and then of course the targeting of said weapons with a pitifully inadequate and unrealistic set of criteria).
Civilians manufacture and own drones, 3d printers are consumer products now, there are incredibly affordable electronic kits (e.g: Raspberry Pi)... then, for computer vision and AI you've got a lot of free toolkits to pick from... it will be challenging to effectively prevent it even if legislation forbids it.
Additionally, even with that ban, you could have the exact same video with the minor difference that each drone sends its video stream to a human somewhere else who presses a button to authorize each suggested-by-the-drone kill. And if we got that far, the pressure to get rid of the ban would be immense, because agent AIs outperform tool AIs.
And if we can't even handle regular old microsystems technology, how are we going to handle nanotech? https://www.amazon.com/Military-Nanotechnology-Applications-... (and briefly covered in this talk by the same author http://www.youtube.com/watch?v=MANPyybo-dA) discusses it but in the 11 years since it was published I see no reason to think the world is any better prepared.
If a computer scientist teamed up with an engineer and a chemist they could probably jerry rig something that would work. Sort of. But if Northrop Grumman tried to build these things with a billion dollar budget behind them, they might actually succeed in replicating the drones in the video.
What about the neutron bomb?
But overall, yes, I agree, a rifle hanging on the wall will eventually go off.
I'm not saying that we can rest assured that we won't all end up as mushroom cloud dust someday in the maybe not distant future. But attempts to curtail the use of really dangerous weapons have paid off in the past. There's no reason why they wouldn't in this case also.
Agreed history does not bode well, but it is certainly possible. Communication, treaties, non-violent methods of solving disputes can together prevent this nightmare scenario. Just because it isn't easy doesn't mean we should all throw up our hands and say it's impossible.
The list covers a number of professors in top graduate programs and several angles should be tackled simultaneously to maximize our chances.
Some people I know kept arguing we could simply unplug Skynet and be mostly safe (despite some damage). These AI weapons make it clear that the damage will not be confined largely to the virtual world.
Nah, not really. In the face of a proper hard-takeoff superintelligence, Skynet will look positively incompetent by comparison.
There's enough manufacturing equipment connected to the internet today that a superintelligence could physically manifest whatever it wishes. That's not to mention all the human labor it could coerce, trick or pay.
Put simply: as soon as the thing copied itself into the internet, that's game over. Better hope it likes us.
In the future, miniature exploding drones could indeed autonomously distinguish between cranium and body, plot a flight route, and divebomb their victim.
But I don't think these skull-crackers will be fully autonomous, even with significant advances and development. Given the chance of collateral damage and the "fuzziness" of the algorithms, the kill command or sequence will still be left to a human operator, like how America does it in the skies of Pakistan or with the machine-gun robots of the Middle East.
In my mind, the computer would display potential targets via live capture, and the human in the chair would cycle through them and, after filtering out the mistakes (a human dummy, a picture on the wall), type "y".
Militaries would have this check less out of "consider the ethics and morality!" and more out of rational avoid-friendly-fire and ensure-maximum-payload-accuracy concerns.
One exception to this approach could be indiscriminate killing, where the designers intentionally program the drones to lobotomize any target within a geographic range or time window, as long as the match exceeds a certain confidence threshold. This approach is guaranteed to produce a lot of false positives, but would still be remarkably efficient for the purposes of terror or localized warfare.
On a side note, YouTube recommended a follow up video, depicting a real-life swarm, howling and circling around a target. https://www.youtube.com/watch?v=PYLP0pAGbE0
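That "type 'y'" operator loop is almost trivially thin as a safeguard; a toy sketch in Python (all names here are hypothetical, with a callback standing in for the operator's console):

```python
def review_targets(candidates, decide):
    """Present each machine-flagged candidate to a human `decide` callback
    (e.g. one that displays the frame and waits for 'y'); return only the
    candidates the human authorized."""
    return [c for c in candidates if decide(c) == "y"]

# A stand-in operator who rejects anything the classifier itself flagged as dubious:
flagged = [
    {"id": 1, "label": "person"},
    {"id": 2, "label": "mannequin?"},
    {"id": 3, "label": "person"},
]
operator = lambda c: "n" if "?" in c["label"] else "y"
print([c["id"] for c in review_targets(flagged, operator)])  # → [1, 3]
```

The whole "human in the loop" reduces to one predicate call per target, which is exactly why the pressure to automate that last step away would be so strong.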
Even remote-controlled drone strikes are pretty iffy about who they kill, so it's not unlikely that less savory actors won't mind blanketing swaths of countries with skull-crackers.
And that's just the targeting of the drones themselves. But how accurate is the intelligence targeting? Post-Snowden we found out that they were killing people mainly based on what SIM card the targets carried with them.
Plus, they consider any male that is killed and above 16 years old as a terrorist. It's harder not to be "accurate" when that's your definition of a target...
Also, you're forgetting one aspect of this: the cheaper the tech becomes, the more easily it will be used. Just like we now kill 100x more people with drone strikes than we did with airstrikes, in the future we may kill 100-1,000x more people with automated drones, because it's so much easier to "get rid of the baddies".
Do you actually think they'll hire 100x more people to operate those machines and "have final say"? Yeah, no way that's how they'll think about it - unless we make them think differently about it, and never allow the automated kill machines to be built or sent in the first place.
I mean, given the type of politicians the U.S. government tends to have these days, which option do you think they'll choose? A 10-year dragged out war against a group like ISIS, or "just sending 10,000 automated drones" into multiple "hot areas" and "finish the job within a month"?
Can you not see how their logic would go, and that we'll actually need a lot of people to oppose that type of thinking to ensure it will not happen? But will there be a lot of people to ensure that doesn't happen, if say a terrorist group takes out the White House? Or will everyone think "those 100,000 people all killed automatically by the drones deserved it for destroying our White House!!".
Heck, Trump already asked the military "for the biggest bomb they can send" - a bomb that was built but never used before - and it wasn't even a situation of the US being under huge immediate threat at the time. It was mainly done for PR purposes for Trump to show how macho he is and how he "gets things done." The worst part about it is that the media seemed gleeful about it, rather than condemning him.
I think it would be best to ensure the machines are not built in the first place. There are too many trigger-happy people that would send such drones out.
Hated In The Nation hypothesises that a lone nutter repurposes existing non-lethal technology to kill people. No explosives, just drill your way into the brain through the ear because the robots are replacements for flying insects.
A law saying "Don't make killer robots" is useless in the Black Mirror scenario, nobody made killer robots except the lone nutjob they're already hunting for murder. So for this campaign it's an unhelpful message, it suggests their campaign would be futile.
On the other hand, I think it will be a long time before visual face recognition gets this good, especially out in the real world with bad lighting, hats, eyeglasses, etc. But let's say the world of the video's authors comes true. What do we do?
1. Microwaves can fry electronics. A system wouldn't need fancy targeting; just blast powerful microwaves in all directions.
A. Microwaves this strong are bad for people too.
B. All it takes is 1 anti-microwave drone to slip through and disable the system.
C. Asymmetrical cost of offense vs. defense
2. Lasers can fry the optical sensors used by the drones
A. Probably would need a fancy targeting system?
B. Same problems as microwave defense 1B and 1C.
3. Something along the lines of WW2 barrage balloons. A network of suspended lines designed to entangle the drones.
The lines wouldn't have to be very big; a piece of strong thread could get caught in the rotor of a drone and disable it.
A. Wind gusts
B. How do you suspend the thread?
C. Do you carry one with you when you go out to Starbucks?
D. The thread would have to be small enough to not be detected by the drone's camera but big enough to cripple it.
4. Ski mask. If the drone is programmed to attack specific individuals using facial recognition, take away its targeting requirement.
A. Maybe drone deployers don't care about a specific target -- just body count.
5. Play dead. It's assumed the drones wouldn't waste resources on targets that have already been attacked, so just emulate a victim.
A. Big assumption
6. Wax museum defense. Have a mannequin with target's facial features. Drone attacks wax dummy.
A. See 3C, 4A
7. Obscure the target. Throw a blanket over the target.
A. Time to deploy.
B. Darn. Left my blanket in the back in the ....POP!
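Point 3D can be put to rough numbers. A camera's per-pixel angular resolution gives the range beyond which a thread of a given thickness subtends less than one pixel; a back-of-the-envelope sketch (the 60-degree field of view and 1920-pixel horizontal resolution are assumed figures, not from the video):

```python
import math

def max_invisible_range(thread_mm, fov_deg=60.0, h_pixels=1920):
    """Range (metres) beyond which a thread thinner than one pixel's
    angular width can no longer be resolved by the camera."""
    pixel_angle = math.radians(fov_deg) / h_pixels  # radians per pixel
    return (thread_mm / 1000.0) / pixel_angle

# A 0.2 mm thread seen by a 60-degree, 1920-pixel camera:
print(round(max_invisible_range(0.2), 2))  # → 0.37
```

So past roughly 0.4 m, such a thread is sub-pixel for this camera. A sub-pixel thread can still slightly darken a pixel against bright sky, so this is only a first-order bound, but it suggests the thread largely wins the "too small to see, big enough to cripple" trade-off.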
From Wikipedia: "Quigley concludes that the characteristics of weapons are the main predictor of democracy. Democracy tends to emerge only when the best weapons available are easy for individuals to buy and use."
I'm making more of a long-term guess here. Unlike uranium enrichment or jet construction, I suspect that state-of-the-art drone manufacturing may eventually reach a level of affordability to where drone-facilitated hits, feuds, and drive-bys become a thing on domestic soil.
edit: And key part of the wiki quote is "the best weapons available". If the best weapon available is a six-shooter, and it has a price that almost everyone can afford, you've got an "equalizer". Democracy! If the best weapon available is a nuke, and it takes billions in supply chains and hundreds of millions in materials and know-how to make one, that is a lot of extra leverage for the guys at the top.
is it because people think it is inevitable and there is nothing they can do to stop it?
or is it because people think it is fantasy and could never happen?
Phalanx doesn't work tho' :-) In 1991 Iraqi forces launched a Silkworm missile against USS Missouri. The ship launched decoys... and its Phalanx guns promptly targeted those decoys. Fortunately there was a Royal Navy ship, HMS Gloucester, in the vicinity which dealt with the problem with a good British weapon, the Sea Dart missile. This was the first real "drone on drone" combat, if you will.
Will this capability exist? Yes, probably.
Am I concerned that this capability will be deployed against me? No, not really. Theoretically it could, but then again, so could a dude with a gun. I live in a democratic first-world country, the capabilities of my own government are likely to exceed the capabilities of those who want to kill me. And theoretically maybe my government might want to kill me... but it could already do that anyway, using a dude with a gun.
For now. Things change.
It also fits the progress we've been making - we removed civilians from the battlefield or at least we made it a war crime, and now we need to remove soldiers from it too (especially considering that they will become inferior - slower reaction, no 360 vision, slower speed, can't fly, can't really network and exchange full tactical info, etc. - to drones anyway). Let drones fight each other.
For non-military setting it was already addressed in South Park for example - autonomous police drones will control the airspace. I think in the future any UFO will be just taken down like a car without license plates.
In the Middle East, the US likes to blow up buildings that they think might have terrorists in them with a drone-launched missile, then have the drone circle back and hit the rubble with another missile a few minutes later to kill any neighbors or EMTs coming to help the victims.
> Let drones fight each other.
Are you imagining a sci-fi future where nations politely agree to decide their disagreements with bloodless robot duels? Wars are won by destroying the enemy's will to fight, not by gentlemen's agreements.
You seem to think that war can be made safe and civilized somehow. It reminds me of the 1897 New York Times writer who thought that machine guns would lead to peace on earth because leaders would hesitate to send their soldiers against such deadly weapons.
There is no "politely agree" nor "gentlemen's agreements" in international relations, only sheer power. MAD is an example of power forcing both sides not to fight a war at all. The Falklands War was fought without bombing Buenos Aires or any other action outside the immediate war zone. The Ukraine/Russia war in Donbass was limited to that area because of the possible repercussions to either side of letting it spill out (and for the same reason both sides, in general and with some exceptions, didn't indiscriminately target civilians). The US and other developed countries limit sending their soldiers into harm's way and hitting enemy civilians because of internal and external political repercussions.
So it is always a power calculation, and as a result, in this more and more interconnected and densely populated world we have two tendencies: restricting conflicts to specific localities, and minimizing human involvement. Those two tendencies ultimately lead to drone wars confined to specific areas (preferably in space or cyberspace - notice how much adversarial activity happens in cyberspace, yet nobody goes for real-life action over it). Attempts to bomb each other's cities will be (and pretty much already are today) unthinkable, as in the Falklands War. Of course, when there is no containing power in sight, we immediately get what the Saudis are doing in Yemen right now: it isn't even a war, it is just pure genocide.
Also, in a really grim scenario, it could threaten or kill anyone who hides their face, be it with a ski mask, burka or whatever. Or even require positive identification, i.e. that your face is found in the database.