Slaughterbots: Stop Autonomous Weapons [video] (youtube.com)
215 points by georgecmu 26 days ago | 126 comments



It's like WWI all over again. In WWI, people went happily off to war thinking it was going to be nice and heroic and adventurous like the good old days, and they all got slaughtered in a pathetic, pointless way in the blink of an eye by newly developed automated war machinery.

The first AI War is going to be a really brutal wakeup call. The part in sci-fi where anyone actually has to aim to kill an enemy was a big romantic lie. The robots don't miss and they draw and shoot before the humans even know they're there. William Gibson, to his credit, did anticipate this with the Slamhound assassin robots in Count Zero.


> The first AI War is going to be a really brutal wakeup call

Strange. The video and everyone here keeps talking about this as a scary future dystopia, while Pakistan and many other countries have been living in it for years.

Drone strikes are exactly what the video describes: surgical precision attacks from machines with automatic weapons that the targets couldn't defend themselves from, used by both governments and terrorists, and routinely relying on mass surveillance for targeting.

The video's weakly conveyed message was that there needs to be a human pressing the trigger so that they can appreciate whether the act conflicts with their values, but we have millennia of evidence that humans can act mindlessly too.


You don't even need robots for that. You just need an advanced scope on a rifle that can still be transported by a human, and the human is also the one that identifies targets. Check this out:

https://www.youtube.com/watch?v=q0oGZ4TZr5k

Now, we're not yet at the point where it's cost-effective to give every infantryman a thing like this - the scope alone costs more than most high-end sniper rifles. But costs will go down...


I have to wonder how hard it would really be to detect and target somebody whose scope is laser-ranging and has wifi. These seem like tactical giveaways.

That's not to say this sort of technology couldn't be useful, but putting active broadcasting equipment on your gun is going to backfire in next generation warfare.


It's trivial to figure out where a ranging laser comes from; it's just that asymmetric warfare lets you use them with impunity.


And we already use lasers heavily because of this. Ever noticed those square things mounted to the forends of M16 and M4 rifles in Iraq and Afghanistan? E.g. this photo from 2001 - notice how literally every guy in it has one:

https://en.wikipedia.org/wiki/File:U.S._Marines_humping_in_A...

That thing is called the AN/PEQ-2, and it is a near-infrared (NIR) laser/illuminator. The reason it's infrared is that the enemy can't see it unless they have night vision (which works in near infrared). The illuminator is used as a flashlight, essentially, and the laser is used to designate targets.

Now, recently, groups like ISIS have been acquiring night vision devices in more substantial numbers. So now there's an arms race of sorts, with new devices that push the frequencies further down, such that most existing NIR devices cannot see them, but newer and more expensive ones can. Presumably the same would apply to TrackingPoint-like technology.


I won't discount the other utilities of NIR night vision but for the purposes of detecting incoming infantry, it's overkill.

Give me a cheap-ass webcam, a Raspberry Pi 3 and a day or two to cobble together an OpenCV beam-detecting program in Python. A day or two more - perhaps with better hardware - and I'll give you GIS-aided positioning. That seems like it would be enough to defend a position.

Even at other wavelengths, the sensor equipment is out there. I'm not sure what the SNR would be like on a varied-focus 0.4μm (~Blu-ray) sensor, though.


The point is that the laser will likely be infrared. And it might well be of a kind that you can't easily source sensors for.


Sorry, I'd switched to the side of the visible spectrum because a $2 webcam off ebay with its IR filter removed will pick up 830nm IR. That's what the AN/PEQ-2A uses.

CMOS sensors pick up all sorts.


Interesting, didn't know about this 'race to thermal IR'. But what I do know is that optical components suitable for thermal imaging systems are under export control in the EU. So at least as a corporate entity you cannot easily export them.


In the US, as well. In fact, all kinds of things that have to do with both active IR and protection against it are regulated.

For example, there's a popular firearm (and other things, technically, but it's mostly used for firearms) coating called Cerakote. Comes in a bunch of colors, including those commonly used for camouflage - indeed, you can mix and match to create some very impressive patterns. Example:

http://i64.tinypic.com/21bw5lc.jpg

This is available to civilians, and pretty popular - mostly for aesthetics, but it does work well as a camouflage. The catch is that, being an epoxy polymer coating, it reflects NIR really well - so for someone observing through a night vision device, these things basically glow, especially if an infrared illuminator is also used.

But Cerakote also has a version that was specifically designed to reduce the near-infrared signature, to the point where it works just as well as camouflage in NIR as it does in the visible spectrum. Except it's only available to military and police - and I believe this is, in fact, because of some law that restricts them from offering it on the civilian market.


As for the "arms race" in IR, here's one specific example:

http://www.thefirearmblog.com/blog/2017/10/12/swir-mawl-clad...


You can buy NODs retail now that are halfway as good as the military state of the art at a tiny fraction of the price. People think the Taliban, ISIS et al. are just desert rednecks, but the truth is that they are well trained (originally by US Special Forces to fight the Soviets), highly experienced - much more so than any Western soldier, most of whom do only a couple of tours - well funded, and technologically sophisticated enough to run advanced psyops via social media.


The most amazing thing about the Great War is not that people started out like this, but that they continued once they knew the truth.

We are very unlikely to get a wakeup call after any AI War, because we will all be dead.


Maybe if something develops intelligence and tool use after us, they can do a better job with it.


> The first AI War is going to be a really brutal wakeup call.

I know. I thought I was decent at strategy games, but AI War shook that confidence to the bone. I don't think I've ever won a game. http://store.steampowered.com/app/40400/AI_War_Fleet_Command...

On a serious note, you're absolutely right. Sci-Fi has been lying to us for a very long time and I don't think people are ready to face what the reality will be.

> The robots don't miss and they draw and shoot before the humans even know they're there.

Maybe, though if those robots are based on neural networks maybe we'll luck out and get a situation like this: https://xkcd.com/1897/

To download your file: Please click all the squares containing traitorous human scum.


"Sci-Fi has been lying to us for a very long time [..]"

That's totally unfair. I think you have been reading the wrong science fiction.


It also seems that most people are too optimistic about the involvement of robots and AI in war. First of all, an asymmetric distribution of kill power will lead to asymmetric war anyway - that's what we call terrorism. If an opponent has no chance of winning a 1-on-1 battle, they will resort to planting bombs in malls. And the idea that robots will fight in place of humans is also very naive, because once the robots on one side are destroyed, the humans will be back fighting.


Grunts facing mini-drones. Helmet-penetrating explosives. Military sci-fi horror.


"Heroic" and "adventurous" OK, but- "nice"? I can't believe anyone ever thought that going to war -to kill and risk getting killed- could be "nice", except perhaps for murderous psychopaths.


It's a figure of speech meant to convey irony.


The message of this film feels extremely confused - probably because it's trying to single-mindedly push an agenda rather than explore the problem space. In a world where drone swarms are apparently this cheap... what, no one ever developed counter-drones? It's the reason grey goo isn't a realistic scenario for nanotechnology: the goo has to replicate, whereas all the counter-goo has to do is kill it.

Same issue: a world where the technology to build an incredibly cheap drone that size which can hunt humans with facial recognition based on specific parameters exists...is also a world in which an anti-drone can use the exact same technology and just ram into unidentified drone vehicles to disable them. If the swarms are cheap enough for terrorists, then the counter-swarms are monumentally cheaper.

EDIT: It's worth noting you could argue other scenarios - like only governments building these, but that's what I mean by the message being confused - it's not clear what the threat is supposed to be.


I don't think "people will invent countermeasures" is a particularly good argument. We have countermeasures against chemical weapons - but not everyone has them all the time. We would still be better off if we hadn't opened that pandora's box.

For an eight minute video, it does a decent job of exploring the consequences of a weapon that can kill remotely, with minimal collateral damage, and with no supervision. They touch on potential use against terrorists, political opponents, and activists. The main point being that once you release this thing into the wild, you have little control over how it will be used.


> We would still be better off if we hadn't opened that pandora's box.

I don't think that's a reasonable assumption to make. I have no evidence either way, but I can easily theorize that lots of medical advancements might have come from the same people who got their feet wet in chemical weapons technology at later stages of their careers/lives.

Again, I don't really have an opinion, and haven't really thought it through - I'm just saying your blanket statement needs a citation.


Complete defense is more expensive than targeted offense.

Can we afford to and do we wish to have a number of anti-drones flying everywhere in public space and our own homes?


> In a world where apparently drone-swarms are this cheap...what, no one ever developed counter-drones?

Perhaps someone has. The reason those are ineffective in the scenarios depicted is the same as with present day terrorism: In order to prevent these attacks you have to succeed every time whereas the terrorists only have to succeed once to wreak havoc.


>> In a world where apparently drone-swarms are this cheap...what, no one ever developed counter-drones?

Well, in a world where developing assault rifles is actually rather cheap, nobody has really developed any convincing countermeasures. Infantry still gets killed if you shoot them with an AK-47, etc.

So there can be weapons without countermeasures. The point of the video is that the possibility exists to develop the weapon to begin with - and that is already concerning enough.


The difference here is that you can buy everything shown in that video (except the explosive) on Amazon. The drone can even be bought pre-made, all you need to do is implement a more capable system for autonomous control, which is also doable for simpler scenarios.

This is in comparison to the significant difficulty of making a barrel or any of the mechanical parts for an AK-47.


Counter-goo also has to self-replicate, or otherwise be available in huge quantities.

The only currently existing self-replicating nanotech is called “life”, and plenty of that has been getting in our way for as long as there has been an us to get in the way of.

Unless you count bootstrapping on existing life, I have it on good authority (someone with a PhD in materials science from Cambridge) that nobody knows how to make fully self-replicating devices yet, not even clanking replicators.

As for drones… well, I can think of terrible ways to weaponise them, how to defend drones from simple counter-attacks, and then how to use that counter-attack mechanism as city-wide anti-drone tech, but in the medium-likelihood event that my ideas are not fundamentally flawed, I don't want to give people ideas.


That's a good point- even if countermeasures can be developed, there's nothing stopping development of anti-counter-measures, and so on.

The trick is to not take the first step down that rabbit hole otherwise we will forever be chasing our tail and we'll never feel secure again.


Anyone can take the first step. I’d prefer some sneaky billionaire builds consumer products with surprising features, like Tesla cars being safe from chemical weapons or hybrid VR-AR goggles that just coincidentally make it impossible to be permanently blinded by laser weapons.


Developing countermeasures is always much harder than developing offensive technology. Firing a rocket is easy, but catching it mid-flight is very difficult.


That's funny, I already have anti-drone technology in my kitchen. Take the magnetron from the microwave and use a wok as a dish to create an approximately directional 1 kW microwave energy weapon. Aim it at the swarm of drones and sweep back and forth. They'll all fall out of the air up to some range.


So you're going to walk around with a massive metal plate and a huge battery strapped to your back? Nevermind the fact that you probably couldn't react quickly enough to use it even if you carried it everywhere.


I would just have an EMP wrist watch or something. Wouldn't take much of a pulse to knock out one of these bad boys.


If you had an "EMP wristwatch" that could disable even a pocket calculator (without blowing your hand off), I'd be impressed.


Unfortunately I don't see a way this can be avoided. There has never been a weapon humanity imagined, designed, and built that it has not used. AI will be no different.

A ban would be impossible to implement. What exactly would you ban? AI? Drones? Facial recognition? Once you have a certain level of capability in these technologies, weaponising them is the easy part.


We can make it considered unacceptable and against international law. Like chemical weapons. Which still happen, but not as much as they would if the U.S were, say, planning on using them all over the place, doing heavy R&D into them, and selling them to other people. Like, you know, robot kill bots.


Most of the reason chemical weapons don't get used is because they're not very effective. It's hard to disperse them on the battlefield, changing weather has a nasty habit of blowing the stuff back onto your own troops, and protective gear is cheap enough to equip everyone in an army. I'm pretty sure that everyone was willing to ban them internationally not just because they're horrible but also because they're not very useful.

Compare with nuclear weapons, which are way worse and have a lot of support for a ban, but there's no serious hope of that ever happening, because they are really good at killing people and destroying stuff.

Seems to me that a ban is only realistic if the stuff doesn't work well.


>> Most of the reason chemical weapons don't get used is because they're not very effective.

Against a modern military equipped and prepared to withstand them, perhaps. Against a civilian population or a less well-funded force, not so much. This is doubly the case when the attacker is a non-state agent (like a terrorist). Remember, for instance, the sarin gas attack on the Tokyo metro by Aum Shinrikyo [1]. An air force bombing a major city with chemical weapons would also cause vast casualties.

I think what you're trying to say is that chemical weapons are primarily designed to be terrifying, with lethality considerations coming next. However, you could argue the same about nuclear weapons. After all, the attacks on Hiroshima and Nagasaki were not actually meant to wipe the Japanese out, only to scare them into submission. And nuclear deterrence works on the principle that an enemy won't dare attack a nuclear power.

In general, you could think of all weapons ever as aimed primarily at the morale of the enemy rather than their life and that thought may have some merit. After all, an army that breaks and runs is beaten faster and with fewer losses to the other side than one that fights to the last [2].

However, the important point to keep in mind is that weapons, even when they're primarily designed to be terrifying, are designed to be terrifying in the manner they kill. So I think you'll find that most people would agree that chemical weapons should be banned because they kill in a horrible manner, not relative to the number of casualties they may or may not inflict.

I do think you underestimate their potency btw.

__________________________

[1] https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

[2] Cough cough. In war games, certainly.


It’s already against the law to murder people.


It's not in fact against international law or the laws of war to kill people, cause, you know, war.

I guess you could think that if we can't just get rid of war (which is always terrible), there's no reason to have international laws and agreements marking certain kinds of killing and other harm as off limits and unacceptable.

Apparently our current leaders agree with you.


I deliberately used the word 'murder', not 'kill'.

The position of the Canadian government is that the development of autonomous weapons is already illegal under Article 36 of the 1977 Additional Protocol I to the Geneva Conventions of 1949.


How would you write such a treaty? Limit the CPU speed of weapons systems or something? Totally unenforceable.


Could forbid weapons to take kill actions themselves, each instance having to be activated by a human operator.

Could also forbid self-replication of weaponized robots.


Why not go one step further and say that nobody is allowed to take a kill action? Killing is murder.

Not sarcastic.


The first force to break the rule will prevail.

Pacifism is an unstable configuration; non-aggression is a stable one.


If you deploy automated drone factories that send out millions of these things, and then have an army of people all pressing "kill" buttons as if they were doing a quick-fire arcade game, without even seeing what they were killing, it would comply with the rules you describe.


Those rules would be enforced by humans, and humans don't tend to be very impressed by attempts to follow the letter of the law while violating its spirit.

Laws as written have all kinds of obvious loopholes, but they are intentionally vague enough that a judge can look at your "obvious loophole" and decide that you are in violation anyway.


> Those rules would be enforced by humans, and humans don't tend to be very impressed by attempts to follow the letter of the law while violating its spirit.

That Google, Apple, Uber et al pay so very little tax suggests that that is an optimistic appraisal.


Sure. At least it cannot be done without the active participation of humans, who at least may object - though there are many ways to attempt to prevent that from happening, one being deluding them into thinking it is just a game.


Do you still remember the Three Laws of Robotics [0]? Not many people mention them nowadays.

Killer AI is not the same problem as nuclear weapons, because an AI weapon is far easier to make than a nuclear one. Technically, you could do it right now with your DJI drone; all it takes is programming it to do some automatic control.

So I'm not very optimistic about any attempt to limit these kinds of weapons given the conditions we're currently in.

[0] https://www.auburn.edu/~vestmon/robotics.html


The three laws are a literary device, not a serious engineering proposal. The stories they were introduced in described exactly why they have such serious problems.


> The stories they were introduced in described exactly why they have such serious problems.

Yet people stopped mentioning this kind of warning, even though it has been there for well over half a century.

The reason we successfully limited the use of nuclear weapons is, well, that the US used them on Japan, saw the devastating results, and thought "What if the Japanese (or Russians, Chinese, etc.) one day use them on us?"

I would guess only a similar event could get everyone on Earth to stand in line and effectively ban these AI weapons altogether. That is, AFTER we have all actually been seriously hurt by them.

It's just like the warnings about global warming: the math is there, proven by observation, yet somebody out there still believes we'll be fine because the whole thing is just a conspiracy made up by China.


Such weapons have existed for a long time already and there's no practical way to write a legally enforceable treaty banning them. Do you really think that Russia or China would ever allow the USA to audit the source code in their weapons systems? It's ludicrous.


So you didn't bother trying to bypass your own rules without breaking them?


The difference is that I can make almost all the components of a killer bot in my living room, and the rest with off-the-shelf components and access to a maker space with tools.


You can make a chemical weapon too, your local hardware store has enough hazardous chemicals.

We're only safe, because crazy or truly evil people are rare.


Yes and no. The traditional barrier-to-entry for homemade chemical weapons or explosives is that the wannabe terrorist usually ends up killing themselves. That doesn't necessarily hold with drones, depending on what the payload is. You might not even need a payload if your plan is just to ram an airliner. So the threat of drones is much higher than "conventional" terrorism IMO.


It doesn't have to be implemented 100%. Nothing is. But that doesn't stop us from banning bio & chemical weapons.

War should be approximately 'human-magnitude' in scale and proportion, or it will simply wipe us out. That's why we have built, but not yet routinely used, nuclear weapons. The same strategy needs to be followed for any tech that is far more efficient and autonomous than a human would be.


I would guess: we ban civilian drones from carrying explosives, nerve-toxins and other weapons; we invest the bulk of research effort in developing anti-drone drones. The anti-drone drones don't carry weapons that can be used against humans. Advanced drone tech would be less like nuclear weapons, against which there is no defence, and more like vaccines or antidotes.

Which is morally very different. Actually, morality is an interesting aspect of the video/story. We had the bad-guy white male presenter on the stage playing the good guy and claiming that his weapons could distinguish good guys from bad guys (and then of course the weapons were targeted with a pitifully inadequate and unrealistic set of criteria).


You can make any laws. But how exactly do you enforce them?

Civilians manufacture and own drones, 3D printers are consumer products now, there are incredibly affordable electronics kits (e.g. Raspberry Pi)... and for computer vision and AI you've got a lot of free toolkits to pick from. It will be challenging to effectively prevent this even if legislation forbids it.


The bulk of the effort goes into R&D for anti-drone tech (this includes monitoring what the bad guys are up to).


I think the proposed ban is: don't develop weapons with software that can make the killing decisions without humans in the loop. That itself isn't so impossible a ban to make something like the CWC out of, but nations couldn't even stop 3D gun printing, despite the same sort of worries.

Additionally, even with that ban, you could have the exact same video with the minor difference that each drone sends its video stream to a human somewhere else who presses a button to authorize each suggested-by-the-drone kill. And if we got that far, the pressure to get rid of the ban would be immense, because tool AIs are better as agent AIs.

And if we can't even handle regular old microsystems technology, how are we going to handle nanotech? https://www.amazon.com/Military-Nanotechnology-Applications-... (and briefly covered in this talk by the same author http://www.youtube.com/watch?v=MANPyybo-dA) discusses it but in the 11 years since it was published I see no reason to think the world is any better prepared.


Weapons that make killing decisions without humans in the loop have existed for a long time. For example, CAPTOR naval mines that sit tethered in place and then launch a torpedo when they detect a vessel nearby. Those aren't going away.


You can't really ban software. Sooner or later someone will write it, opensource it, and it will be on GitHub/piratebay.


It's probably impossible to stop someone weaponising a drone and piloting it with AI. But you can control how well the result is executed.

If a computer scientist teamed up with an engineer and a chemist they could probably jerry rig something that would work. Sort of. But if Northrop Grumman tried to build these things with a billion dollar budget behind them, they might actually succeed in replicating the drones in the video.


> There has never been a weapon humanity imagined, designed, and built that it has not used.

What about the neutron bomb?

But overall, yes, I agree, a rifle hanging on the wall will eventually go off.


A neutron bomb isn't really a separate thing, it's just a nuclear weapon designed for more radiation and less blast. And of course nuclear bombs have been used in war.


Matter of time.


This is a temporal version of the No True Scotsman fallacy.


The "No Temporal Scotsman" fallacy if you will.


The "Non-Timely Scotsman" fallacy?


Outlander fans will be very sad.


There have only ever been two nuclear strikes, however. Nations do have nuclear arsenals but have, so far, not been stupid enough to use them. Chemical and biological weapons are also largely banned from use.

I'm not saying that we can rest assured that we won't all end up as mushroom cloud dust someday in the maybe not distant future. But attempts to curtail the use of really dangerous weapons have paid off in the past. There's no reason why they wouldn't in this case also.


> Unfortunately I don't see a way this can be avoided.There has never been a weapon humanity imagined, designed, and built that it has not used. AI will be no different.

Agreed history does not bode well, but it is certainly possible. Communication, treaties, non-violent methods of solving disputes can together prevent this nightmare scenario. Just because it isn't easy doesn't mean we should all throw up our hands and say it's impossible.


A hydrogen bomb has never been used.


I stand by my right to a robot army as self-defense in the mean time while I work on achieving the singularity in order to defeat skynet.


I for real find it terrifying. If we were sane, we'd be pushing for international treaties against this same as chemical weapons, instead we're making it happen and selling the tech to people.


International treaties around weapons are hard to enact lately - it seems that most of the useful stuff that we have is mostly created in WW1-WW2 era. For more recent treaties, the usual pattern is that one is enacted, but US, Russia and China (and sometimes a few others who happen to be major users) simply refuse to be a party. Most other countries sign up, but it mostly doesn't matter, because they either don't do it on a large scale anyway for other reasons, or they ignore it when it's suddenly convenient. Examples:

https://en.wikipedia.org/wiki/Ottawa_Treaty

https://en.wikipedia.org/wiki/Convention_on_Cluster_Munition...


Software eating society. Suddenly our entire criminal justice paradigm breaks down without corporeal lawbreakers. Makes the 'corporation as a person' question seem sophomoric.


Just the availability of this kind of tech would scare anybody sane shitless. From that day, you'd better hope you don't ever piss off anybody within a 1000 km radius enough to find your photo and address online (easy in most countries) and drive to within drone distance of your house. Want to obliterate anyone's family completely? Done. Kill anyone in session at the government of a country - any country? Done.


That technology already exists and is being tested in a military setting: https://www.youtube.com/watch?v=CGAk5gRD-t0. Not sure if those ones have explosives on them, but I'm sure it wouldn't be hard to add.


Try flying a drone next to a major governmental building. In the best case, you'll be politely ordered to stop and leave. In worse cases, e.g. if you tried to conceal yourself, the drone might be shot down, and you face charges.


People who can should apply to work on one of the AI Safety Research projects:

https://futureoflife.org/ai-safety-research/

The list covers a number of professors in top graduate programs, and several angles should be tackled simultaneously to maximize our chances.


This seems to be mostly about skynet scenarios, which is an entirely different problem.


Yes, I am aware. But weaponized drones and robots will give much more power to a future skynet, or a lesser version that acts autonomously with minimal guidance and may unintentionally diverge from human goals. This greatly increases the priority of AI safety research.

Some people I know kept arguing we could simply unplug the skynet and be mostly safe (despite some damage). These AI weapons make it clear that the damage will not be largely confined to the virtual world.


>But weaponized drones and robots will give much more power to a future skynet ...

Nah, not really. In the face of a proper hard-takeoff superintelligence, Skynet will look positively incompetent by comparison.

There's enough manufacturing equipment connected to the internet today that a superintelligence could physically manifest whatever it wishes. That's not to mention all the human labor it could coerce, trick or pay.

Put simply: as soon as the thing copied itself into the internet, that's game over. Better hope it likes us.


Best and most telling part of the video: "trust me, these were all bad guys"


Daniel Suarez, the author of Daemon, has another book called Kill Decision, which explores a lot of these ideas.


He's also had a great video on this topic for a few years:

https://www.youtube.com/watch?v=pMYYx_im5QI


Terrifying and fascinating, as all good wake-up calls should be.

In the future, miniature exploding drones could indeed autonomously distinguish between cranium and body, plot a flight route, and divebomb their victim.

But I don't think these skull-crackers will be fully autonomous, even with significant advances and development. Given the chance of collateral damage and the "fuzziness" of algorithms, the kill command or sequence will still be left to a human operator, like how America does it in the skies of Pakistan or with the machine-gun robots of the Middle East [1].

In my mind, the computer would display potential targets via live capture, the human in the chair would cycle through them, and after filtering out the mistakes (a human dummy, a picture on the wall), type "y".

Militaries would have this check less out of "consider the ethics and morality!" and more out of rationally avoiding friendly fire and ensuring maximum accuracy of the payload.

One exception to this approach could be indiscriminate killing, where the designers intentionally program the drones to lobotomize any target within a geographic range or time window, as long as that target exceeds a certain threshold. This approach is guaranteed to result in a lot of false positives, but would still be remarkably efficient for the purposes of terror or localized warfare. [2]

On a side note, YouTube recommended a follow up video, depicting a real-life swarm, howling and circling around a target. https://www.youtube.com/watch?v=PYLP0pAGbE0

[1]: https://en.wikipedia.org/wiki/Foster-Miller_TALON

[2]: Even remote-controlled drone strikes are pretty iffy on who they kill, so it's not unlikely that less savory actors won't mind blanketing swaths of countries with skull-crackers. https://www.nytimes.com/2015/04/24/world/asia/drone-strikes-...


I think it's naive to think their primary focus will be collateral damage. Is that really the focus in any war at all?

https://www.vox.com/world/2017/11/16/16666628/iraq-nyt-casua...

And that's just the targeting of the drones themselves. But how accurate is the intelligence targeting? Post-Snowden we found out that they were killing people mainly based on what SIM card the targets carried with them.

Plus, they consider any male killed who is above 16 years old a terrorist. It's hard not to be "accurate" when that's your definition of a target...

Also, you're forgetting one aspect of this. The cheaper the tech becomes, the more easily it will be used. Just like we now kill 100x more people with drone strikes than we did with airstrikes, in the future we may kill 100-1,000x more people with automated drones, because it's so much easier to "get rid of the baddies".

Do you actually think they'll hire 100x more people to operate those machines and "have final say"? Yeah, no way that's how they'll think about it - unless we make them think differently about it and not to allow the automated kill machines to ever be built/sent.

I mean, given the type of politicians the U.S. government tends to have these days, which option do you think they'll choose? A 10-year dragged out war against a group like ISIS, or "just sending 10,000 automated drones" into multiple "hot areas" and "finish the job within a month"?

Can you not see how their logic would go, and that we'll actually need a lot of people to oppose that type of thinking to ensure it will not happen? But will there be a lot of people to ensure that doesn't happen, if say a terrorist group takes out the White House? Or will everyone think "those 100,000 people all killed automatically by the drones deserved it for destroying our White House!!".

Heck, Trump already asked the military "for the biggest bomb they can send" - a bomb that was built but never used before - and it wasn't even a situation of the US being under huge immediate threat at the time. It was mainly done for PR purposes for Trump to show how macho he is and how he "gets things done." The worst part about it is that the media seemed gleeful about it, rather than condemning him.

I think it would be best to ensure the machines are not built in the first place. There are too many trigger-happy people that would send such drones out.


Theoretically we can make building and using them illegal, but there's no way to prevent building and using these machines.


I think that with modern advances in facial recognition, a human participant won't be necessary. Have you seen how accurate these models are nowadays? Many big tech companies have already released tools that make it possible for just about anyone who can write code to implement sophisticated CNN models for facial recognition. These technologies are more accessible now than they've ever been.


What makes a positive "false" is not necessarily objective. It may be fluid and entirely dependent on the "mission parameters". Consider two states at war where the bots are by design targeting a specific genetic phenotype.


I, too, look forward to season 4 of Black Mirror.


Interestingly, this was almost the exact plot of the latest season finale.


I've observed elsewhere that the reason this is different is also the reason they couldn't use footage from "Hated In The Nation" (the Black Mirror episode you're thinking of).

Hated In The Nation hypothesises that a lone nutter repurposes existing non-lethal technology to kill people. No explosives, just drill your way into the brain through the ear because the robots are replacements for flying insects.

A law saying "Don't make killer robots" is useless in the Black Mirror scenario, nobody made killer robots except the lone nutjob they're already hunting for murder. So for this campaign it's an unhelpful message, it suggests their campaign would be futile.


Thanks for the requirements and use case. How about a Gofundme to start on development? I also see a group providing drone attack services along the lines of booter/stresser DDoS providers (distributed denial of services).

On the other hand, I think it will be a long time before visual face recognition will ever get this good, especially out in the real world with bad lighting, hats, eyeglasses etc. But let's say the world of the video authors comes true. What do we do?

Defenses:

  1. Microwaves can fry electronics.  A system wouldn't need fancy targeting; just blast powerful microwaves in all directions.
    Downside: 
      A. Microwaves this strong are bad for people too.
      B. All it takes is 1 anti-microwave drone to slip through and disable the system.
      C. Asymmetrical cost of offense vs. defense

  2. Lasers can fry the optical sensors used by the drones
    Downside: 
      A. Probably would need a fancy targeting system?
      B. Same problems as microwave defense 1B and 1C.

  3. Something along the lines of WW2 barrage balloons.  A network of suspended lines designed to entangle the drones.
  The lines wouldn't have to be very big; a piece of strong thread could get caught in the rotor of a drone and disable it.
    Downside:
      A. Wind gusts
      B. How do you suspend the thread?
      C. Do you carry one with you when you go out to Starbucks?
      D. The thread would have to be small enough to not be detected by the drone's camera but big enough to cripple it.

  4. Ski mask.  If the drone is programmed to attack specific individuals using facial recognition, take away its targeting requirement.
    Downside:
      A. Maybe drone deployers don't care about a specific target -- just body count.

  5. Play dead.  It's assumed the drones wouldn't waste resources on targets that have already been attacked, so just emulate a victim.
    Downside:
      A. Big assumption

  6. Wax museum defense.  Have a mannequin with target's facial features.  Drone attacks wax dummy.
    Downside:
      A. See 3C, 4A

  7. Obscure the target.  Throw a blanket over the target.
    Downside:
      A. Time to deploy.
      B. Darn.  Left my blanket in the back in the ....POP!


In a world of fighter-jets, satellites, and nukes, under the right circumstances this is a development which could cut both ways.

From Wikipedia: "Quigley concludes that the characteristics of weapons are the main predictor of democracy.[0] Democracy tends to emerge only when the best weapons available are easy for individuals to buy and use."

[0] http://www.carrollquigley.net/pdf/Weapons%20Systems%20and%20...


Do Somalia, Afghanistan, and Iraq qualify as natural experiments that test this theory?


I doubt it. Jets, drones, and satellites vs peasants with automatics is a fairly stark disparity.

I'm making more of a long-term guess here. Unlike uranium enrichment or jet construction, I suspect that state-of-the-art drone manufacturing may eventually reach a level of affordability to where drone-facilitated hits, feuds, and drive-bys become a thing on domestic soil.

edit: And key part of the wiki quote is "the best weapons available". If the best weapon available is a six-shooter, and it has a price that almost everyone can afford, you've got an "equalizer". Democracy! If the best weapon available is a nuke, and it takes billions in supply chains and hundreds of millions in materials and know-how to make one, that is a lot of extra leverage for the guys at the top.


I find it strange that hardly anybody comments on this, including on all the previous postings.

Is it because people think it is inevitable and there is nothing they can do to stop it?

Or is it because people think it is fantasy and could never happen?


Another option is that some of us really look forward to real-life sentry gun turrets; we actually want this future.


Systems such as the Phalanx CIWS with a fully automatic mode have been available for ages.


Yes, but to clarify, I meant access by civilians, given the availability of cheap FLIR systems, stepper motor rigs, and OpenCV processing.


> Systems such as the Phalanx CIWS with a fully automatic mode have been available for ages.

Phalanx doesn't work tho' :-) In 1991 Iraqi forces launched a Silkworm missile against USS Missouri. The ship launched decoys... and its Phalanx guns promptly targeted those decoys. Fortunately there was a Royal Navy ship, HMS Gloucester, in the vicinity which dealt with the problem with a good British weapon, the Sea Dart missile. This was the first real "drone on drone" combat, if you will.


I expect the software has been upgraded several times since. Also, I expect drone countermeasures to coevolve with drones.


Let's sit back and take a chill pill.

Will this capability exist? Yes, probably.

Am I concerned that this capability will be deployed against me? No, not really. Theoretically it could, but then again, so could a dude with a gun. I live in a democratic first-world country, the capabilities of my own government are likely to exceed the capabilities of those who want to kill me. And theoretically maybe my government might want to kill me... but it could already do that anyway, using a dude with a gun.


It's nice to live in the country at the top of the global political order; it's just _our_ "enemies" that have to worry about this sort of thing.

For now. Things change.


Why stop it? Even at this early stage it seems to work really well - in a hall full of people it correctly identified its target (until, of course, the presenter cheated). Also, compared to a rifle bullet, this thing is much easier to shoot down.

It also fits the progress we've been making - we removed civilians from the battlefield, or at least made targeting them a war crime, and now we need to remove soldiers from it too (especially considering that they will become inferior to drones anyway - slower reactions, no 360° vision, slower speed, can't fly, can't really network and exchange full tactical info, etc.). Let drones fight each other.

For a non-military setting, this was already addressed in South Park, for example - autonomous police drones will control the airspace. I think in the future any UFO will just be taken down, like a car without license plates.


> we removed civilians from the battlefield or at least we made it a war crime

In the Middle East, the US likes to blow up buildings that they think might have terrorists in them with a drone-launched missile, then have the drone circle back and hit the rubble with another missile a few minutes later to kill any neighbors or EMTs coming to help the victims.

> Let drones fight each other.

Are you imagining a sci-fi future where nations politely agree to decide their disagreements with bloodless robot duels? Wars are won by destroying the enemy's will to fight, not by gentlemen's agreements.

You seem to think that war can be made safe and civilized somehow. It reminds me of the 1897 New York Times writer who thought that machine guns would lead to peace on earth because leaders would hesitate to send their soldiers against such deadly weapons.


>Are you imagining a sci-fi future where nations politely agree to decide their disagreements with bloodless robot duels? Wars are won by destroying the enemy's will to fight, not by gentlemen's agreements.

Where is no "politely agree" nor "gentlemen's agreements" in international relationships, only sheer power. MAD is an example of power forcing to not fight a war at all. Falkland war was fought without bombing Buenos Aires or any other action outside of immediate war zone. Ukraine/Russia war in Donbass was limited to that area because of possible repercussions to either side for letting it spill out (and for the same reason both sides in general, with some exceptions, didn't indiscriminantly target civilians). US and other developed countries limit sending its soldiers in harms ways and hitting enemy civilians because of internal and external political repercussions. So it is always a power calculation, and as a result in this more and more interconnected and densely populated world we have 2 tendencies - restricting conflicts to specific localities and minimizing human involvement. Those 2 tendencies ultimately lead to drone wars confined to specific areas (preferably in space or in cyberspace - notice how much of adversarial activity happens more and more in cyberspace, yet nobody is going for real life action over it). Attempt to bomb each other's cities will be (and pretty much is already today) unthinkable like in case of Falkland war. Of course, when there is no containing power in sight, we immediately have what Saudis do in Yemen right now - it isn't even a war, it is just pure genocide.


This cannot be stopped. Whatever the human mind imagines can be made. The only defense is counter-drones. Everyone will buy one with their next cell phone.


Technically there is nothing new in this clip. People have been singled out and murdered for as long as we have existed, and sometimes nobody is singled out and everybody is killed with a bomb. Nonetheless, there is something undeniably creepy and unsettling about being killed by a soulless machine.


This is terrifying!! AHH!!!!


I have been warning about this for years:

https://www.schneier.com/blog/archives/2015/08/shooting_down...


personal EMP??


I thought it was real for a few seconds... Wow, scary!


Is this a leak of Black Mirror Season 4?


Maybe we can petition.


Or start building anti drone capabilities.


crazy


Magnificent piece of technology! Except it gets defeated by a $2 ski mask. This kind of fearmongering isn't really useful as a way of creating political change. If you want people to participate in a reasonable and constructive dialog, then convince them to do so with reasonable and constructive arguments.


Do you know that person identification based on gait detection is actively being developed and getting decent results? Same with voice.

Also, in a really grim scenario, it could threaten/kill anyone who hides their face, be it with a ski mask, a burka, or whatever. Or even require positive identification - that your face is found in the database.



