Scientists call for ban on lethal, autonomous robots (theglobeandmail.com)
171 points by pseudolus 38 days ago | 197 comments



Lethal robots are not a technology that can really be regulated by ban.

It's not like nuclear weapons, where you need a huge industrial operation with tens of thousands of high-tech centrifuges running for years before you have enough fissile material for a bomb. Even the rulebreakers, like Israel, couldn't hide the fact that they were doing it and had to rely on America's UNSC veto to save them from the consequences.

But lethal robots are a trivial step away from the current state of robotics. Hobbyists have already mounted guns on quadcopters. Tesla and Waymo have made robots that kill people accidentally. Any robot can be made a killer robot with minimal change to its software and hardware.


Sure, but we do have all sorts of laws and restrictions on chemical weapons (which you could plausibly make by pouring some of the cleaning products in your house together).

The ban isn't a flip-switch that says: "hey, no more of these", it's the start of a formal dialogue over our relationship to these new technologies and how we would like to see them used on a worldwide basis.

If a country were to opt to drop thousands of flying drones with guns onto a military base, is that different from chemical weapons? A fuel-air bomb? Conventional explosives? Sending special forces in?

Yeah, they're all different with different repercussions, long term impacts, potential civilian casualties, etc. etc.


Chemical, and especially biological, weapons are an order of magnitude more difficult to manufacture correctly (i.e., without killing yourself) than autonomous drones. The thought that autonomous groups like ISIS would not use autonomous drone swarms because such a terrorist attack would "break international law" really is so far beyond "fanciful" that it ventures into "humorous" territory.

I think we can pretty much count on every group with a gripe reaching for autonomous drone swarms first in the future.


Sure, but if they can only be developed by parties who don't care for international law, they will be weaker and less effective than they might otherwise be, and we can focus legal development efforts on countermeasures.


Quite a lot of "banned" or restricted weapons are simple to produce: anti-personnel landmines, poison gas, cluster bombs...

They do run into the problem you described. Most militaries have them. Paramilitaries and sub-state forces (even criminals) have them. They sometimes get used, even by relatively law abiding forces.

Despite all that, those treaties do significantly impact their use in various ways. Diplomatic concerns do affect choices in conflict. These treaties put a diplomatic price on the use of certain weapons. They limit trade in these weapons.

Weapons treaties are not perfectly policeable, internally consistent sets of rules. No rules of war are. That doesn't mean they are entirely worthless.


One way the treaties help is that they reduce R&D expenditure: because there is no legal market for the products, progress slows.

Not much point engineering something you can't sell.

It's not a perfect solution, but it's a positive outcome.

It also puts diplomatic and PR pressure on countries that use generally prohibited weapons, which is also good.

The problem generally is that technology continues to improve and the cost of doing things continues to drop.

I've no idea what it would cost to build a cruise missile using off-the-shelf components and props instead of jets (and I'm not going to Google it), but on YouTube there are people operating remote-control aircraft at long ranges with live video feedback; that's a payload away from a smart bomb already.

In reality, targets are soft enough that you don't need to do those kinds of things to attack a civilian area, but that doesn't mean they couldn't.

The recent drone at Gatwick shows the problems.


That's a good point. Banning weapons lessens the arms industry's ability to produce them. They then tend to be made locally, within militaries themselves, and that often means fewer/worse weapons... e.g. Syria/Assad.


This is a silly pedantic point...

I don't think Israel (or India, Pakistan & North Korea) broke the NPT rules. The treaty doesn't regulate non-signatories and allows countries to withdraw from the agreement. North Korea is the recent example: they gave their three months' notice, withdrew, and tested nukes without violating the treaty.

Ultimately, I don't really think the treaty was the biggest direct factor limiting proliferation.

When it was signed, peaceful nuclear tech was expected to be the most important technology of the century. The treaty was supposed to exclude non-members from accessing peaceful nuclear technology. It did that quite effectively (Israel never had nuclear power for this reason).

But unexpectedly, it's 2019 and nuclear energy isn't that important.

The NPT did help form a consensus though, and police that consensus outside of the treaty. Helping anyone nuclearize is taboo. Five more countries (possibly six) did develop nuclear weapons after the NPT, but the barriers were pretty huge, diplomatically and technically... because they got very little help.


>Helping anyone nuclearize is taboo

Only because most of the nations that haven't got them but are seeking them are poor nations that don't have a lot of close friends among the nuclear nations. It's kind of a "got mine, screw you" deal. Iran and the like have just as much right to develop and hold nuclear weapons as everyone else.

The thought of "regional troublemaker" countries having nuclear weapons bothers nations like the US, Russia, France, etc. for the same reason some people at the top of the social hierarchy don't want poor people to have easy access to firearms. Those already at the top generally don't like seeing the playing field leveled, and that's true all the way from the individual level to the national level.

In a democratic society, if the party you want to push around is well enough armed to hurt you back, that limits your ability to push them around to cases where you are actually considered to be "in the right" by the court of public opinion; otherwise the people will promptly decide that it's not worth it.


It's not about rich or poor. It's about whether or not you had them at the time the treaty was written.

China, UK & France are in. Japan & Germany are out, because they were occupied and demilitarised at the time. Both large, rich countries. China was large, but poor at that time.


I agree that rich and poor is probably too broad a brush, but there's definitely an in-group and an out-group, with which rich/poor generally, but not perfectly, aligns. I'm talking more about the current political landscape anyway.

Everyone turns a blind eye to Israel (they're in the in-group), but if any Middle Eastern, African or South American nation were to seriously pursue nuclear weapons the hand-wringing would be immense (and is, in the case of Iran).


IANAL but does it matter whether it can be enforced or not? There are plenty of types of crimes that cannot be easily detected or prevented and are indeed committed at high rates. But they are still forbidden and should remain so.


There is an older theory of law that only things which are enforceable (for which detectability is a requirement) should be illegal; otherwise it only breeds contempt for the law. This was way back when the concern was more "could a king enforce it" than any notion of natural rights. Personally I think that philosophy is not wrong, at the very least, even if other valid frameworks could exist.


So what exactly is the point?


The point is that we were able to mostly get rid of chemical weapons and landmines, which is also not easy to enforce.

The effectiveness of enforcement is no argument against a ban.


It is indeed an argument, and the idea that the laws around landmines and chemical weapons are what caused them to occur less is very selective reasoning. Those two cases are weapons that have a blowback towards those who wield them (Korea, Vietnam, and Afghanistan are still cleaning up landmines), and it's just as likely that their diminishing was due to there being less costly ways to kill people.

Inability to enforce is exactly why prohibition has been a colossal failure, so how is the ability to enforce irrelevant to a law's success?


Things like these are hard to judge, but a ban is a sign of intent and not a perfect solution to prevent a certain situation.

We collectively agree to ban certain things (e.g. murder), because there are multiple reasons to do so. Some of these reasons are humanitarian, some rational, some economic etc. Yet we see proof each day that neither laws nor punishment will stop murder from happening.

The idea of a ban is not to stop things — but to increase the (social, economical, ...) cost of certain behaviour.

Any actor who fears that cost will abstain from chemical weapons, for example. But just like with murder, you will always have actors that either decide it is a price worth paying, or never thought about it at all.

Hard to say how effective those bans were, but they certainly helped to nudge some actors into adopting higher standards.

And once there is a standard the majority agrees on, it is hard to go back to a lower standard.


"and it's just as likely that their diminishing was due to there being less costly ways to kill people."

Politically costly, as more and more people pressured for more humane ways of war. Because mines are a very effective and cheap military strategy (where there are mines, you don't need as many troops). I'd say it was the tabloid pictures of children without legs that did it. And maybe they will again, after the first killbots go rogue...

Because I doubt any big military would pass up the opportunity to at least be able to flip the switch and let them operate and shoot autonomously, when you have more cheap, mass-produced robots than operators and need them, and/or the enemy is jamming you and the situation is critical.


>and it's just as likely that their diminishing was due to there being less costly ways to kill people

Land mines are by far the cheapest modern method for area denial unless we're considering dirty bombs.


I don't think he has said it was irrelevant. He said it was no argument for not banning them.


Consequences. You risk suffering the consequences if you commit a crime. Society deems something Bad (calling it a crime) and punishes those who commit it.

We’re hoping society agrees that making autonomous lethal robots should be Bad and deserve punitive consequences. And hopefully, those who commit this crime will be caught and suffer the consequences.


IMO having any law that is rarely enforced is a bad idea. That gives authorities the choice of whom to pursue and whom to let go, which is often used to hound parties (people, groups or countries) for arbitrary reasons, citing the law in question as a pretext for persecution.

Laws should be clear and non-burdensome for the majority, and violations should be caught at a high rate. My 2c.


A typical example of a crime that is very rarely punished is rape. I hope you'll agree that it should nevertheless remain banned.


There is a difference between rarely enforced and rarely punished. In antiquity, before forensics, if someone murdered a man on a country road, cleaned their clothes, and cleaned or disposed of their weapon, nobody would ever know who did it without a confession. But if people saw someone murdering a man in the street, they would mob the assailant and capture or kill him.

In the first case they might bury the bodies and set an armed patrol of some sort against roving bandits, but the murderer would never be punished per se.

However, if nearly everyone drinks and alcohol is illegal but bars operate openly, then yeah, that law is bad; it is only being used as a pretext.


It's rarely punished because it's hard to prosecute. When an actionable case is dropped in a prosecutor's lap they're happy to take it to court.


No one said don't enforce it, they said it's difficult to enforce. Those are very different.


eh... I'm glad there is a speed limit on the road in front of my house even though a lot of people drive 10 mph faster.


OK, but in a similar situation (a speed limit that most people exceed by 5-10 mph), I would prefer that the speed limit were raised by 5 mph and then enforced all the time (e.g., via a well-calibrated speed camera), not randomly whenever the town needs extra cash and places a traffic cop nearby to extract a tax.


Society only punishes you if society knows about it and has enough power to do something about it. If the US signs a treaty to ban killer robots but builds them anyway then nobody can force them to stop.


This is too binary IMO. I think any ban is about increasing the cost of a certain behaviour and by these means decreasing the likelihood of an actor adopting it.

No human agreement will ever be able to stop 100% of anything. But that doesn't imply it isn't worth increasing the cost (financial, social and otherwise) of certain types of behaviour.

People seem to agree quite uniformly with banning and punishing murder, for example, although these laws and punishments don't seem to stop murder, as has been proven over centuries. Few would murder their neighbour over a small dispute, because the cost of doing so is extremely high. Many more people would, e.g., kill the rapist of their child, because the notion of cost might be irrelevant under such circumstances and it is much more socially acceptable to do so.

Of course there is more than just cost. It can also be a matter of culture. If you live in a culture where getting shot for minuscule reasons is not something that happens, you are also far less likely to see shooting at somebody for minuscule reasons as an option.


But then the rest of the world can agree it was evil.

Just like the world can mostly agree it was evil to arrange a civil war in Eastern Ukraine or invade Iraq, etc.

There will always be people who more or less genuinely argue that the last invasion of Iraq was necessary or that a certain superpower next to Ukraine has nothing to do with the well supplied rebels in their back yard.

For the rest of us we can all agree to despise both of these.


I don't think the world agrees that it was evil to invade Iraq. A lot of the world participated in that invasion, actually... At the end of the day Saddam was not exactly a peaceful dictator.

I think hacker news should be above inflammatory posts like these.


Have my upvote. I did not intend it to be inflammatory, in fact I rewrote it once before posting to make sure it wasn't.

I posted something I was sure most HN-ers could agree on, but as someone who frequently disagrees with many of you I probably should have known better.


In principle yes, but what happens when a pariah state like Venezuela or Iran decides to do this? No amount of international ostracizing and sanctions will deter them. What leverage are you going to use to enforce this?


The point is that when people get caught they can be sent to court and sentenced.

Some people are scared of that and don't do the forbidden thing.

Take the act of stealing, for example. It is ridiculously easy to do and indeed it is committed a lot. Yet most societies prefer to ban it, hoping that fear of a sentence will prevent some people from doing it.


>Hobbyists have already mounted guns on quadcopters.

ISIS has actually already used (remote controlled) quadcopters to conduct grenade strikes.

https://www.c4isrnet.com/unmanned/uas/2018/01/05/how-650-dro...

You can make a distinction between human-in-the-loop weapons versus human-on-the-loop supervision versus robots killing with no supervision. But of course landmines have been completely autonomous weapons since the US civil war. It's a place where it's really hard to make bright line distinctions.
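
To make those three modes concrete, here's a toy Python sketch (invented names, not any real system's interface):

    class ControlMode:
        HUMAN_IN_THE_LOOP = "in"   # a human must approve every engagement
        HUMAN_ON_THE_LOOP = "on"   # system engages; a human supervises and may veto
        AUTONOMOUS = "none"        # no human involved once deployed (the landmine case)

    def may_engage(mode, approved=False, vetoed=False):
        if mode == ControlMode.HUMAN_IN_THE_LOOP:
            return approved        # nothing fires without explicit consent
        if mode == ControlMode.HUMAN_ON_THE_LOOP:
            return not vetoed      # fires unless a supervisor intervenes in time
        return True                # always "yes": the decision was made at deployment

The blurriness is visible even here: the last two branches differ only by whether anyone happens to be watching.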


> But of course landmines have been completely autonomous weapons since the US civil war.

Excellent point.


I was going to bring up landmines but I googled it first:

https://en.wikipedia.org/wiki/International_Campaign_to_Ban_...

It is not a bad comparison. I can even imagine a robot/drone sent in with orders to kill everyone in a certain area. It gets lost or whatever and runs out of power. Years later some kid finds it, charges it up and turns it on...


Spiked pits, poisoned wells, bear traps and caltrops have been around a whole lot longer than landmines and have most of the properties in question. This is not a new issue.


Landmines are easy to make too, still it's worthwhile to ban their use.


Are they banned? Does the ban do anything?

I think USA has huge pile of banned weapons like bio and chemical weapons.


The US has long since destroyed its biological weapons and mostly its chemical ones too. As Nixon said "If someone uses germs on us, we'll nuke 'em."

The USSR put anthrax warheads on a lot of their ICBMs, mostly because it was cheaper and they weren't as enamored with the idea that it might be possible to win a nuclear war by destroying one's opponent's weapons in a first strike, something bio weapons are useless for.


> I think USA has huge pile of banned weapons like bio and chemical weapons.

This is not necessarily true. We are disposing of them, and last time I checked there are only two sites left that have weapons, and their numbers are decreasing daily.

Sites that I know of that have since disposed of all weapons:

1. Johnston Island
2. Umatilla, Oregon
3. Little Rock, Arkansas

source: my father has been doing this for 29 years


The US has been destroying chemical weapons for decades, due to treaty obligations. Thinking is not knowing.


Yes, and we've been consulting with many other nations on destroying them: Ukraine, Russia, etc.


U.S. landmine policy is mostly aligned with the Ottawa Convention, with an exception for the Korean peninsula https://www.state.gov/t/pm/wra/c11735.htm


The ban is on use, not on stockpiling. It's all legal distinctions to avoid incurring penalties in the global economy. Wars aren't even fought in the open anymore; you fight them via proxy "liberations" in failed nation states, e.g. Russia vs. Europe/USA vs. Iran vs. Saudi Arabia in Syria, or China vs. India via Pakistan.


Also, annoyingly, biological weaponry requires specimens to create countermeasures like vaccines and antibodies. Even smallpox we have kept on ice, and we are debating whether it should be incinerated to reduce the relatively minuscule escape chance or kept in case we need to make more vaccines.

Hypothetically, being able to digitize and compile it would allow incineration of every bioweapon, but that comes with its own dilemma. The one advantage is that if it comes out of bits and into pathogens, it was definitely deliberate. The disadvantage, of course, is that it can be printed with the right equipment.


>Lethal robots are not a technology that can really be regulated by ban.

Bans aren't merely about regulating technology, they're about regulating human behavior by introducing consequences. They're about a society making a choice to introduce legal sanctions against persons who do something unambiguously reckless and destructive, such as handing an autonomous device the power to kill.

Responses that boil down to a shrug of the shoulders underscore an important observation: enthusiasts for technology are precisely the wrong people to ask about the importance of recklessness and risk. It's on par with asking someone in software sales for a warts-and-all perspective on their product.


One thing I've noticed is that the type of hacker-person one often finds on HN - the kind of person who looks at the world in terms of what is technically possible - has a gaping blind spot when it comes to the distinction between "anyone could do this" and "everyone can do this". The infamous Dropbox announcement thread is instructive - your average hacker type poo poos it because it's trivial, but its existence still changes the world in meaningful ways.

There is a huge qualitative difference between "anyone could put a gun on a quadcopter" and "everyone can buy a quadcopter with a gun on it".


>gaping blind spot when it comes to the distinction between "anyone could do this" and "everyone can do this"

Agree. These are exactly the enormously important subtleties that are lost when education omits training in the humanities.


There's a big difference between a jerry-rigged bomb strapped to a quadcopter and a purpose-built killbot produced by a nation state. Strong international sanctions against warbots are a good idea, even if it won't prevent every terrorist from making their own low-rent version.


"There's a big difference between a jerry-rigged bomb strapped to a quadcopter and a purpose-built killbot produced by a nation state. Strong international sanctions against warbots are a good idea, even if it won't prevent every terrorist from making their own low-rent version."

Killbots built by nation-states (essentially) are already available for purchase. How does an autonomous helicopter drone armed with an AK-47 sound? ;-)

>The Blowfish A2 “autonomously performs complex combat missions, including fixed-point timing detection and fixed-range reconnaissance, and targeted precision strikes.”

>Depending on customer preferences, Chinese military drone manufacturer Ziyan offers to equip Blowfish A2 with either missiles or machine guns.

>Mr Allen wrote: “Though many current generation drones are primarily remotely operated, Chinese officials generally expect drones and military robotics to feature ever more extensive AI and autonomous capabilities in the future.

>“Chinese weapons manufacturers already are selling armed drones with significant amounts of combat autonomy.”

>China is also interested in using AI for military command decision-making.

https://www.technocracy.news/china-releases-fully-autonomous...

China and Russia will ignore any treaties on this, just as Russia has already ignored many including the INF Treaty. As in many other cases, the West will essentially tie its own hands while allowing others to literally get away with murder.


So I see this was downvoted without any effort to rebut the story...

Anyone care to explain?

It should also be mentioned that the drone I linked to above was being shown at an arms show, and is essentially being offered for sale to all comers. I'm sure there'll be healthy sales to some of the more autocratic/aggressive nations out there...


I totally agree with your point and have raised this issue before (but not here). Most people prefer to bury their heads in the sand rather than confront these uncomfortable and complicated dilemmas.

I believe the scientists in question would not be calling for this kind of ban if their sons and daughters were sent into battle against nations equipped with such weapons - in fact, they would be cooperating to develop this very technology if it meant protecting their loved ones.

It's easy to be an idealist, until you've truly got something to lose. I think a lot of people need to be reminded of this.


So, I had to reread your middle para because I honestly thought you were heading an entirely different direction.

Do you think it should be legal for citizens to carry RPGs, machine guns, grenades, ..., so they never have to go into a room in which someone else might have more firepower?


Pretty sure iliketosleep means "if their sons and daughters were in the military during a war, and were sent into battle" - after all, it'd be unusual to be sent into battle in any other circumstances.

And yes, I think most people agree it should be legal for soldiers to carry machine guns and grenades when in warzones during wars.


You misunderstood me. I was asking the question I wished the parent to answer.

The point the GP made was seemingly that we shouldn't let potential enemies have better weapons (autonomous drones) while denying them to ourselves.

Yes, I assumed they were considering the context of state sanctioned aggression.

I wondered if they only found that reasoning appropriate for states, or in general.

One could ask "should we always seek for the most terrible weapons to be legal in any context in order to ensure no enemy can outmatch us due to restraints of law?".


To address the point that I think you're getting at: the difference between a military and a civilian context is that the former is a potentially "no rules, everything at stake" situation. There is no international enforcement agency that can force all nations to abide by an agreed set of rules in any meaningful way. Thus, in a practical sense, there's no restriction on the type of weapons the enemy may develop, and a nation must be on the cutting edge of military tech at all times "just in case".

In the civilian context, enforcement agencies in most first-world countries are able to restrict weapons and violence reasonably well; most people won't see the need to arm themselves and will prefer to be law abiding. But in countries rife with violence, where law enforcement is weak or corrupt, I imagine people would try to arm themselves with what they can regardless of legal constraints.


I think that reasoning is valid for state-sanctioned aggression, or war to be precise. Personally I think that as long as collateral damage is minimal, we should strive to develop the most deadly, most efficient weapons we can, specifically for the purposes of deterring unchecked aggression.

I don't think GP was making an argument that I should carry an AK-47 on me so that any muggers i come across are less likely to out-arm me.


Thanks. Per your second para: why not? What's different about legally allowing the availability of the most deadly means in the two contexts?


Nation-states cannot, in my mind, be compared to individuals when it comes to propensity for aggression. The vast majority of individuals in, say, the U.S. are not armed, whereas the majority of nation-states have a military. Furthermore, the countries that are in proximity to "aggressive" states either have a strong military, e.g. Israel, or host a military presence from other countries with strong militaries, e.g. South Korea.

Also, the distance between countries versus the spread of information is vastly different. It is harder to commit an international act of violence with no witnesses than an interpersonal one, and the threat of retaliation keeps violence at bay, for the most part at least.


> produced by a nation state

Why do people always say "nation state" in these situations? It's not a fancy word for "major country"; it means something specific. For example, the UK is definitely not a nation state. The US is arguably not a nation state either, due to its cultural and linguistic heterogeneity and tribal sovereignty.


Fine, substitute "state actor" or whatever pedantic term you prefer.

Jargon nitpicking is my second-least-favourite kind of HN nitpicking, coming in close ahead of "Boy there was an advert on that article!/Why U no adblock?" nitpicking.


Especially in American English, the term "nation state" is often used to disambiguate from a "state" at the sub-national level, e.g. one of the 50 US ones.


The UK and US are definitely referred to as nation states in cyber-warfare lingo. I think the meaning in that context is close to: a group with lots of resources and strategic, goal-directed behaviour.


That's the same mistaken meaning that the original commenter is using.

A nation state is a nation that is also a state. The UK clearly is a union of four nations into one kingdom. I mean - the clue's in the name.


As many have noted in American English this distinguishes from a state (as in the United States).

Additionally "nation" can refer to the collective people rather than a centralized government in action.


It's 3 kingdoms, they are "united".


They're nations. Why do you think Wales has a 'national' assembly if it isn't a nation? Why do you think they have their own 'national' bodies for things such as sports? Why do you think they compete separately in competitions like the 'six nations'?


Wales is a curious one. I've tried to establish when it became a "nation"; the best I can do is some time in the last 100 years or so.

Unless you're going to separate out the historic county kingdoms (Wessex, Sussex, Kent, Gwent, etc.), England annexed the counties that comprise the region we now call Wales as lands of the English Crown.

The UK was initially formed by the union of two kingdoms, England and Scotland, at which time all the parts of Great Britain were under the same monarch.

Your contention doesn't necessarily disagree with this historical fact.


It is 1 kingdom, which is united.

It's "the United Kingdom of Great Britain and Northern Ireland", not the "United Kingdoms of Great Britain, Northern Ireland, and whatever 3rd one you have in mind"


Two, then: Great Britain and Ulster. Though Lord Steel appeared to imply it should be three, on calling her Queen of Scots when the Scottish Parliament opened.

I was under the seemingly false impression that the Queen was separately Queen of England and Scotland still/again. But it does appear she is Queen "of Great Britain, Ireland and the British Dominions beyond the Seas" making her Queen of 2 kingdoms (Great Britain, Ulster) within the UK?


"nation state"

Dictionary result for "nation state" (noun): a sovereign state of which most of the citizens or subjects are united also by factors which define a nation, such as language or common descent.

So you can call anything a nation state as long as that thing issues a passport, I guess?


> so you can call anything a nation state as long as that thing issues a passport i guess?

What? No - you just gave the argument against that!

> are united also by factors which define a nation, such as language or common descent

This means the UK isn't a nation state - we have four nations in the UK, with different cultures, descents, and in some areas even languages.

It also means that the US is not a nation state - the people there have very distinct descents. Today Americans literally still say they're "Italian" rather than "American". Many Americans don't speak English. The US literally describes tribal lands as "dependent nations", so it isn't one nation even by its own federal definition.

Examples of nation states are places like Iceland and Portugal. These are the major military players that the original comment was trying to refer to.


It's possible for a person to be a member of multiple nations. In the eyes of many Americans (USian), American most certainly is a nation.

MAGA-style nationalism is quite popular and powerful at the moment...


Yes, establishing your own nation as a state is a goal for many nationalists. And you can see some nationalists pointing at Japan (a nation-state) and saying that they'd prefer their state to be more homogeneous, like Japan.


Korea is one nation divided into two different states. And the Soviet Union was one state comprised of several nations which later split apart.


It sounds fancier than "state"? Which is what was meant, I think.


> There's a big difference between a jerry-rigged bomb strapped to a quadcopter and a purpose-built killbot produced by a nation state

Aren't we already there, though? Doesn't the US already have semi-autonomous flying kill-droids? The ones that occasionally crash Pakistani weddings unannounced.


If you count ICBMs, we’ve had them at least since the 1960s.

For more targeted strikes with conventional explosives, we started using Tomahawk Cruise Missiles in 1991 during the Gulf War.


Those aren't autonomous weapons in the sense meant here.

Both of what you mention attack a predetermined target - a target selected by humans.

This technology is about "smart weapons" going out and searching for targets. For instance, the Chinese helicopter drone I mentioned in another post could easily be fitted with an IR sensor, and could probably hit targets out to ~200 meters with the AK.

One challenge is IFF (Identification Friend or Foe), but if you just want the drone(s) to kill all "enemies" within a certain boundary that'd be easy. AK ammo is even cheap, unlike the $500K missiles we often use to blow up individual jihadis...

And of course, at any time a human could potentially take control or at least monitor activity.


Those are lethal but non-autonomous weapons afaik. A human pulls the trigger.


> Strong international sanctions against warbots are a good idea, even if it won't prevent every terrorist from making their own low-rent version.

Assuming that all nations follow. Otherwise one rogue player can have a huge advantage, and that's the general problem with trying to ban military technologies.


The same is true for chemical or biological weapons. Banning them worked reasonably well even during WW2.


However scary they are, they are not particularly effective against enemy combatants. That's the main reason the ban sort of works.


That's mostly because nukes are a better value once their manufacturing infrastructure is up. They're essentially a bioweapon with a really big boom included (purchase now and get an EMP for free!).


>Strong international sanctions against warbots are a good idea, even if it won't prevent every terrorist from making their own low-rent version.

Why? I read the article, but I'm not buying the "would increase conflict" angle. Some countries are in constant conflict anyway. It's not like the reason they're not fighting more is because they have a lack of men. A lack of equipment is much more likely to be the limiting factor.


Certainly, but when it's ten or twenty jerry-rigged drone bombs I'm not so sure it makes a ton of difference.

Swarm warfare is going to be cheap and interesting.

That's why the US can spend a quarter million on IED-proof Humvees, and the insurgents can just put pointed caps on them to pierce the armor.

It's a lot cheaper to fuck shit up than it is to defend against it.


I think blinding lasers are even easier to make and they are still banned.


> Tesla and Waymo

Tesla and Uber, you mean. Unless there's been an incident with Waymo that I'm not aware of?


You're right, I was thinking of the Phoenix accident. I'd forgotten that was Uber, not Waymo.


> Even the rulebreakers, like Israel, couldn't hide the fact that they were doing it and had to rely on America's UNSC veto to save them from the consequences.

A bit unrelated, but I couldn't ignore it. Could you expand on this one? In what sense is Israel a rulebreaker? What are the consequences?

I don't read the news very often, but I do remember there are some actors that are seemingly working towards obtaining nuclear weapons, and their leaders publicly claim they want to harm other countries. To be sure I'm not totally detached from reality I googled a bit:

http://www.timesofisrael.com/iran-supreme-leader-touts-9-poi...

https://www.telegraph.co.uk/news/2018/04/22/iran-pledges-des...


"The tale serves as a historical counterpoint to today's drawn-out struggle over Iran's nuclear ambitions. The parallels are not exact – Israel, unlike Iran, never signed up to the 1968 NPT so could not violate it. But it almost certainly broke a treaty banning nuclear tests, as well as countless national and international laws restricting the traffic in nuclear materials and technology."

https://www.theguardian.com/world/2014/jan/15/truth-israels-...


I suppose he is talking about the Treaty on the Non-Proliferation of Nuclear Weapons [0]. Israel never signed it though (Iran did), so you cannot really call them a "rule breaker" (they didn't sign the "rule"). Although I think that not signing the treaty doesn't protect you from U.N. sanctions. But the U.S. veto did protect them, I guess.

[0] https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferatio...


I believe GP is talking about Israel's oft-denied nuclear missile program, which everyone knows they have but which they refuse to confirm.

They're probably breaking some nuclear non-proliferation treaty put into place by a bunch of nuclear powers. (Whether there's anything wrong with the people with sticks saying "no sticks" is a different matter.)

Israel definitely has nukes already, they're not working towards it.


Waymo has not killed anyone; you're thinking of Uber. Waymo has a MUCH safer/better product than Uber or these other SDC startups. Arguably Tesla's AP has only killed one of the many thousands of drivers who use Autopilot; one of their two deaths was a guy watching Harry Potter on his laptop and not watching the road.


Kalashnikov has already created autonomous death robots.

https://newatlas.com/kalashnikov-ai-weapon-terminator-conund...


There are many examples of low-tech weapons that have been effectively banned. By effectively I mean they're not used by regular militaries.


Israel is not and hasn't ever been a NPT signatory so they're not breaking the rules by having a nuclear weapons program.


You don’t even need to make a change to hardware. Fly a big enough drone into a building and it’s lethal.


You could do that with a regular privately owned airplane too, and arguably do more damage.


But without sacrificing any of your people, or even needing people to be involved. If US drones were autonomous, then somebody could press the "destroy Islamabad" button and they'd all take off, fire all their missiles, and crash into buildings on their own, with no humans involved. That's the kind of capability we want to avoid. Robots are cheap, people are not.


> Robots are cheap, people are not.

Depends on the country


Yes, but you can build or buy drones in much larger quantities for cheap. With enough planning you could target weak points in the structure and bring down a large building more reliably than with an airplane, and with less loss of life (on your side).


It's going to be interesting to see how the circle is squared as regards defending against autonomous weaponry. There's probably a large design space of offensive autonomous weapons that could only effectively be countered by a system with no human in the control loop. If the design/use of autonomous defensive systems is also prohibited, that would give an overwhelming advantage to any aggressor willing to defy such a ban on offensive systems. If they're not banned, then the difference between a weapon designed to auto-target other weapons and one designed to auto-target humans is so blurry as to be virtually indistinguishable in terms of verifiable arms control.


> If they're not banned, then the difference between a weapon designed to auto-target other weapons and one designed to auto-target humans is so blurry as to be virtually indistinguishable in terms of verifiable arms control.

There's no such thing as purely defensive weapon. I don't remember who said it, but I recall a quote that went similar to: "if your weapon can shoot enemy planes over your cities, it can just as easily shoot enemy planes over their cities".


Well, what if your weapon is a fixed turret built on top of your city hall?


Put that turret on a trailer truck and drive to enemy town.

Or put your city hall on a trailer truck, if that's what it takes.


That's much easier to do if you develop a mobile turret from the beginning. The idea is that there are technologies which are fairly harder to use offensively than others.


Then you put it on a train, like Big Bertha


See: electric fences


I’m pretty sure banning autonomous defensive weapons won’t happen because we already have them deployed. The Phalanx CIWS mounted on US Warships has a fully autonomous mode where it will fire on targets automatically.


It is there already, but limited. Hospital ships are antsy about missiles because a CIWS is still considered a weapon and they can't have one, and at least one missile strike occurred in harbor because turning it on there would have led to unacceptable collateral damage to its backstop.


What about current weapons, such as the UK Brimstone missile that apparently:

"includes essentially the ability to find targets within a certain area (such as those near friendly forces), and to self-destruct if it is unable to find a target within the designated area."

https://en.wikipedia.org/wiki/Brimstone_(missile)

When does something become fully autonomous rather than temporarily autonomous?


Exactly. Once a guided weapon is launched, many of them are autonomous, e.g. ATGMs that do top attacks on MBTs.


IMO wide, blanket bans on dual-use technologies (and many things in our high tech lives can be effectively weaponized with some effort) do more harm than good. Especially where, as in this case, violators would be hard to detect and classify into "OK" or "bad" bins.

This would deter some researchers working on general topics (easy to label / threaten as law breakers) while having no impact on actual bad actors. Organized crime has bigger laws it has already broken, and already has enough COTS parts to build nasty things (and is often OK with 80% reliability, thus needs no cutting-edge research). Many or most militaries would ignore such bans and fund development quietly, justifying this by national security.

IMO the only way to make this approach tractable is to choose a narrow list of technologies we do not want developed and see if there are technologies that exist or can be developed that would effectively detect violations. And choosing even a narrow list of technologies to ban would make for lots of debates and unhappy citizens. My 2c.


I'm not really clear on what the moral difference is between an artillery shell, a cruise missile, a drone strike, and an autonomous robot that targets the wrong individual. As far as human agency is concerned, all four are weapons of war that kill indiscriminately. How is the autonomous system worse? Is it the illusion that we might be able to delegate moral agency to the machine?


If you're going for logical consistency, when it comes to rules of war, you end up with no rules.

Why is a nuclear bomb different from a full-scale carpet bombing campaign? Why is mustard gas different from a fragmentation grenade? Why is killing OK, but torture is not? Why is a barely discriminate bombing campaign legal but a subway bombing illegal?

Rules of war always encountered these arguments. But, do we want to eliminate them, just so things are philosophically consistent?


Makes me think of https://en.wikipedia.org/wiki/Vasili_Arkhipov

Imagine an AI in such a scenario. Count me as one of those people who thinks we are headed into a kind of hell.


I think this comes down to my last sentence -- delegating the decision to kill to a robot is not really different from deciding to kill. But we may do it anyway, and wash our hands. I suppose this is the source of the unease about autonomous drones. Not that they're worse than other killing machines, but that we might deceive ourselves into thinking that they're better.

Maybe a better analogy than a cruise missile would be a land mine. A device that kills, possibly much later than when it's deployed, and often an unintended target.


You could argue that an autonomous robot may be morally better in that it could be more discriminating as to whom to attack than say artillery shells.


Is this a proposed ban on AAMs, SAMs and all torpedoes? Because they are all basically exploding robots with advanced tactical systems to bypass defenses and autonomous systems that track their targets and control their rotation.


Don't forget CIWS.


That is a defensive weapon, but perhaps I forgot to mention ASMs, which are normally released from stealth bombers and drones.


Currently reading a great book on this topic:

https://www.amazon.com/Army-None-Autonomous-Weapons-Future-e...


We often forget that by the time the tech world is playing with something, it has probably already done the rounds in the military.

Automated sentry guns on the Korean DMZ have existed since around 2003, developed by a company that was associated with Samsung (not sure if it still is).

Therefore this isn't, quote, "at best, a few years away". Unless that quote was from around the year 2000.

https://en.m.wikipedia.org/wiki/SGR-A1


And of course land mines have existed for decades longer, and “covered pit of spikes” centuries before that.

No situation is totally new, though sometimes the equilibrium shifts so much that the top three driving factors in an environment are qualitatively different.


So this is why weapons have manual aim in sci-fi movies! You'd think that war would be fully computerized in the future, but all sides must have decided that was a bad idea.


I'm reminded of the prologue of Iain M. Banks's Excession, which describes how a ship Mind (a vast, incredibly powerful AI) is overwhelmed and subverted, its ship taken over by an excessive and overwhelming force, and only a solitary drone is able to make it off the ship after running a gauntlet as the force tries to stop it. Most of the attack, the attempt to defend against it, and the final hail mary by the drone take place within thousandths of a second.

I'm also reminded of Battlestar Galactica and how deeply distrustful they are of computers and tech because of everything that has happened to them.


Not to mention the Butlerian Jihad in the backstory of _Dune_, and the enduring ban on thinking machines. "Thou shalt not make a machine in the likeness of a human mind."

https://en.wikipedia.org/wiki/Butlerian_Jihad


There's also the attack of the evil AI known as "The Blight" in Vernor Vinge's "A Fire Upon The Deep."


Burnside’s Zeroth Law of space combat:

Science fiction fans relate more to human beings than to silicon chips.

http://www.projectrho.com/public_html/rocket/crew.php#id--Yo...


With the exception of Iain M. Banks's novels.


Even there, the Minds are written to be very… human. They are written to think faster, but I think it's probably impossible to write a realistic superhuman mind. Certainly impossible where said Minds have the capacity to internally replicate the full consciousness of billions of humans so much faster than real time that those simulated humans could between themselves play a game of John Searle's Chinese Room that itself was both genuinely conscious and running faster than real time.


Never realized this, but very true. Except in the novelization of Halo, where all the space battles have realistic scale and distances and the firing solutions are done by the AI.


They'll have to pry my autotracking Nerf ADS off its hot, charred turret mount. Better tell Aperture Science as well.

On a more serious note, where do unmanned turrets and gun platforms fall? They're not making the kill/no kill decision (they're always in kill mode) and don't have a human directly at the trigger.


I'd imagine they'd be fine. They're not deciding, as you say.


“What’s the point of robotics then?”


A lot of people on here are talking about radio control drones, and there's a huge difference between unmanned and autonomous, both in terms of difficulty of execution (exception: landmines) and moral responsibility.

A radio control drone bomb is equivalent to a gun with a really long range. An autonomous drone bomb is equivalent to a gun with a motion detector. It's not about the power of the weapon, it's about the relinquishing of human control and accountability. This is why landmines are so evil, and why they are also trying to ban lethal autonomous robots. It's not about capability of killing at a distance, it's about who pulls the trigger.


A strict reading would include self driving cars.


It doesn't even have to be strict. Driving vehicles is the most dangerous and lethal thing most people do regularly. Software is the most unreliable system most people interact with regularly. Autonomous cars will be lethal and will cause deaths. It's crazy to me that they are being allowed to drive on public roads in such a haphazard fashion. There's no way these cars would pass a driving test.

Additionally, in a quarter of the USA for a quarter of the year driving conditions are such (permanent snow cover) that there are no visual indicators of where to drive or park. Cars would need all roads pre-mapped with ground penetrating radar at the least (a giant data set). And even with that it would be difficult. I know this because it's difficult for me as a human and I often have to rely on my contextual knowledge of how human society works and how people behave to know where to drive on a surface, avoid getting in crashes, and figure out how and where to park. None of this is possible for current or near term computers.


Someone will inevitably use self-driving vehicles to carry out vehicle attacks like human-driven vehicles were used in Nice, Stockholm, Berlin...


Yes, makes me think of the trolley problem [1].

[1] - https://en.wikipedia.org/wiki/Trolley_problem


I think it is too late. The genie is out of the bottle and there is no way of putting it back now. The tech is not some obscure, super complex thing where the expertise can be gated, and it doesn't require controlled materials or special production technology.

I mean, the sort of tech they are warning against can be put together by lone operatives using commonly accessible electronics and conventional arms.

Why not call for banning mines or cluster munitions? Those remain functional for decades after deployment and kill/maim indiscriminately.

I'm not trying to appeal to whataboutism here, I'm trying to point out the futility.

Perhaps I'm too jaded, but I can't see any scenario where military robotics can be prevented.


It doesn't matter if the genie is out of the bottle. Bioweapons were well out of the bottle when countries largely agreed to put them away.

I think the problem is that major countries won't see the massive deaths as a problem for them but as an asset, and the race is on to develop the best.

To stop this, essentially we need to get the US, China and Russia to all agree to ban these, and the world will mostly follow... but in today's political environment I can't see that happening.


Bioweapons aren't effective weapons of war though.


Yeah, it's really just a matter of time before Taiwan wakes up with a drone/robot army swooping in from the skies (just for illustrative purposes, not because I think China will do this). There's going to be a blitzkrieg-like situation somewhere where a short-term unbeatable technological advantage is attained and a large amount of territory is conquered overnight.


They could also just launch a nuke. The reason why we don't kill each other isn't because we can't, it's because we don't want to.


No. The reason we don't kill each other or engage in other aggression is because we risk getting seriously hurt or suffering other serious consequences in the process. This is true from the individual up to the national level.

There's a reason nuclear nations (definition to include non-nuclear nations that are sufficiently good friends with nuclear nations) don't get in fights with each other.


I wonder if there was similar opposition to the first autonomous weapon before it was deployed. I'm talking about the humble landmine.


I would suggest that the first autonomous weapon was likely animal in nature, like Caesar's "dogs of war". Train dogs to be as vicious as you can, and let them loose on the battlefield.

I don't think "opposition" in the sense of an organized political movement was as much of a feature of life back then. You did whatever you could to win, and there was no "international community" to tut at you for it.


I think a hole in the ground is an even older "autonomous weapon".


Depending on the target of the weapon a wildlife trap is also an 'autonomous weapon'.

Such bans will always end up revolving around a definition game of sorts.


This brings to mind a microwave beam developed by Raytheon to down drones [0], and an article I read well over a decade ago about EMPs. I wonder how these are doing on the consumer front?

[0] https://m.youtube.com/watch?v=hlmf032NmHU


How is an ICBM not already a lethal, autonomous robot? I think they may mean affordable lethal autonomous robots.


I think they don't want algorithms to make high level ethical decisions like whether to fire the ICBM. It's a thin line though between firing and correctly navigating to a target.


We've had lethal robots for decades. They were called rockets.

The only thing that changed is that now everybody can make their own lethal robot.

I don't see why guns/explosives on robots should be treated differently than explosives/guns without robots.


The difference is that with rockets, a human still has to decide on the target and what to destroy. A human makes the decision to kill.

With modern robots you can deploy a killbot into some area and they will autonomously decide who to kill and who not to kill with utmost precision beyond what is possible with rockets (which are a general attack on a sizable area).


> The difference is that with rockets, a human still has to decide on the target and what to destroy. A human makes the decision to kill.

That wasn't true even before rockets. Mines are one example of a weapon where the decision to kill isn't made by a human, but by the device (even if the decision tree was as simple as "if weight>threshold: kill").
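
To spell that decision tree out (a toy Python sketch with a made-up threshold, not any real device's spec):

    # The entire "decision loop" of a pressure mine; no human appears anywhere in it.
    TRIGGER_THRESHOLD_KG = 5.0  # hypothetical trip weight

    def should_detonate(measured_weight_kg):
        return measured_weight_kg > TRIGGER_THRESHOLD_KG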


Landmines were banned in a 1997 convention, specifically landmines that kill people (disabling vehicles is allowed).


Not really. A person fired that at a particular target. Those sci-fi bullets with fins that improve aim are not lethal autonomous robots. It's still manual, just with aim assist.


Modern rockets can automatically change target after a human fires them.

There are even anti-tank rockets you fire when you don't even see whether there is a target, you just suspect one. Then, when it flies over a hill and sees a target, it locks on and destroys it.

There are also bombs which spread small explosives over a large area; each of these small explosives has passive control surfaces and tries to target the closest armored vehicle under it.

So you only decide to deploy the bomb over some area, and the bomblets decide which target to choose.


Yeah, but say a person fires at target X, and during the flight the rocket locks on to target Y.


Gun control. Simple. Don't worry about the robots, worry about the guns.


I wouldn't want to cross paths with the StabBot 9000, either.


Killbots have superhuman strength.

You'd need to classify their ability to pull your arms off as a 'gun'.


There's nothing in the Second Amendment that says the right to keep and bear arms applies only to humans, and not also to the arms themselves.


That's a ridiculous interpretation of both letter and spirit. Killer robots are not people.


It's a straightforward interpretation.

At the very least, they count as "arms," just autonomous arms, which makes them necessary for a free state.

Let's not twist the Founders' intent here, or underestimate their foresight. They thought very carefully about these issues, and understood that a government's monopoly on violence going unchecked by a disarmed populace inevitably leads to tyranny.

Therefore, we must deploy an autonomous militia of killer robots programmed to enforce an Originalist interpretation of the Constitution with ruthless efficiency as soon as possible. The cause of Liberty demands no less.


I'm curious. Can you elaborate in which ways the Constitution permits its enforcement through "an autonomous militia of killer robots programmed to enforce an Originalist interpretation of the Constitution"? If people agree the Constitution should be enforced in this way, what are the necessary legislative, executive or judicial procedures to enable it?


The Second Amendment claims that the right to keep and bear arms is necessary for a free state, and that the Federal government cannot abridge this right. Autonomous, weaponized robots can be considered arms - certainly, their autonomy doesn't make them any less of an armament than a rifle or a handgun. Therefore, the right to keep and bear armed autonomous robots is necessary for a free state.

>If people agree the Constitution should be enforced in this way, what are the necessary legislative, executive or judicial procedures to enable it?

There are none. The Second Amendment already allows it, and provides the means to enable it - we build an army of killer robots and use them against the government should it try to abridge our right to deploy an army of killer robots against them.

If Ben Franklin were alive, I think he would fully support it.


There are probably weight limits to directly bearing arms; I doubt most people could bear more than 60 lbs for an extended period of time. Fortunately, we can work around that limit by making the armed autonomous robot also an exoskeleton that lets people bear the weight of the arm.


Okay, you got me.

I actually thought you were serious.


Depends what you class as people. Is a sentient robot a person? Many philosophers have made the case that it could be, and the definition of 'person' in that field is often not 'human' so much as 'anything with sentience/sapience/whatever'.

So maybe it'd depend on how smart the killer robot is and how capable it is of holding a conversation as well as attacking things.


How about a ban on all weapons? If people want to go to war, they can fight with their bare hands.


The incentive to break that rule would be overwhelming.


Reminds me of Scott Aaronson's Malthusianisms: https://www.scottaaronson.com/blog/?p=418


Which brings us back to the problem of banning AI weapons.


Enforcement would be tricky.


The problem with autonomous robots is that there are never cold, dead hands to pry them out of. A robot's hands are always cold and dead to begin with, and since they're autonomous, there may be no hands at all.


Would that make robots the true zombie apocalypse?


Chemical weapons are banned. The United States and others still make them. There is no way to enforce such a rule, as nation states do not abide by it.


That's needlessly reductive. Chemical weapons are banned, and Saddam didn't use them against US troops in the 2003 invasion of Iraq.

Bans are probabilistic, not binary. Anyone violating a ban must weigh the advantage from doing so against the consequences and probability of getting caught. A high degree of revulsion and a consistent track record of enforcement greatly increase consequences and probabilities, and make a ban more effective. Just shrugging your shoulders and saying it's not always possible is a sure way to increase the probability of unwanted action.
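You can sketch that calculus as a toy expected-value model (all numbers here are made up, purely to illustrate the trade-off):

    # Toy deterrence model: a rational violator weighs the gain from
    # breaking the ban against the expected consequences of getting caught.
    def expected_payoff(advantage, p_caught, penalty):
        return advantage - p_caught * penalty

    # Weak enforcement: positive payoff, so the ban gets broken.
    print(expected_payoff(advantage=10, p_caught=0.2, penalty=20))  # 6.0

    # Revulsion plus consistent enforcement raise both the probability
    # and the consequences, pushing the payoff negative.
    print(expected_payoff(advantage=10, p_caught=0.8, penalty=40))  # -22.0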


Drones are already potentially lethal and autonomous; I'm not sure how on earth such a ban could be implemented.


The Chinese and Russians will do it whether you ban it or not.


Also ban lethal, autonomous humans, colloquially called soldiers, psychopaths, or just the neighbor. Is this the way we treat our artificial children?


Well, we sort of do, with prisons and the death penalty.


The Harvard Classics [0] (or Dr. Eliot's Five Foot Shelf [1]) is a 51-volume anthology of 'the classics' compiled by Harvard president Charles W. Eliot and published in 1909. It is the WASP collection of literature and offers a very interesting window into the first Gilded Age and into the minds of US leaders during the first half of the 20th century.

If it were to be updated nearly 110 years later, many works would be left out (like nearly all of the Middle French plays) and MUCH would be added. Without a doubt, one of the books to be included would be The Making of the Atomic Bomb by Richard Rhodes, winner of the 1987 Pulitzer [2]. This book goes from start to finish on how the A-bomb was made, covering the major and minor players and their thoughts about the bomb all along the way. Rhodes takes us into the heads of the physicists, clears out all the calculus, and gets to the human parts of the endeavor.

All the players in the race for the nuke knew the bomb was unavoidable for mankind. Leo Szilard was the first person to really conceive of the bomb, the idea striking him while waiting for a stoplight to change where Southampton Row passes Russell Square, across from the British Museum in Bloomsbury [3].

There were groups and individuals who entertained the idea that they could form a cadre of 'priest-scientists' to keep the uranium safe from politicians and warlords. Keep the power flowing, but stop the chain reaction from going critical and taking out billions of people. But as the race progressed, all the real players knew that such a utopia was impossible; this particular genie was out of the bottle the second that Leo, or any other physicist, made it across Southampton Row.

These robots are a microcosm of the same issues that the atomic physicists faced nearly 100 years ago. They know the damage that the ideas will have upon us all, they know that the semi-bucolic world we now live in will fall away to violence, and they know that they can't stop it. But, bless them, they are trying to sound the alarms and maybe, just maybe, change the minds of some of the politicians and warlords.

Looking back at the first Gilded Age: for all their knowledge, they had no template for the power of The Bomb, or for finally facing the mortality of all humans. Fortunately, we do have the templates, at least in terms of the use of lethal robots. We know how this plays out, we know how to make an atomic bomb, and we programmers need to learn from the physicists of the '30s and '40s.

[0] https://www.myharvardclassics.com/categories/20120212

[1] https://en.wikipedia.org/wiki/Harvard_Classics

[2] https://en.wikipedia.org/wiki/The_Making_of_the_Atomic_Bomb

[3] https://openlibrary.org/works/OL2617750W/The_making_of_the_a...


IMHO delaying the inevitable is not a plan. We need to find a way to survive the inevitable use and deployment of this technology by others. This stuff is no longer rocket science. Any idiot with a Raspberry Pi and some basic tooling can build some quite effective weaponry. Most of the AI you need for this is OSS and getting easier to use by the month. This is not at all like nuclear warfare, where you need a lot of skills, knowledge, infrastructure, and capital expense to put a weapon together. All this stuff will take is a bit of ingenuity and access to commodity hardware and software.

IMHO hoping that others won't go there is not a plan either, because ultimately somebody will. We need a credible defensive capability against this stuff. Given that the adversary is going to be autonomous AIs with lightning-fast responses, keeping humans in the loop on defense effectively means being defenseless.

Most current wars are asymmetric guerrilla wars where a powerful but reluctant-to-engage party faces highly motivated individuals fighting an unwinnable fight. E.g. the conflicts in the Middle East are basically premised on men with primitive weaponry moving around the country trying to stay hidden from air surveillance and satellites, or simply hiding among civilians. Countries like the US are reluctant to go in and fight on the ground because things get ugly in terms of casualties and 'collateral' damage. You can bet drones will be popular on both sides in such conflicts.

Drone-based warfare would make that a lot more one-sided than it already is, would probably make this type of warfare a lot less attractive, and could potentially put an end to a lot of long-running conflicts. This does not have to turn into the clichéd dystopian mess; for that, look no further than the current state of affairs in e.g. Congo, Yemen, Afghanistan, or Syria.

War is not about chivalry but about winning at any cost. When one side is fighting with its hands behind its back, preventing it from winning, and the other side can't win either, you have a stalemate. It used to be the case that the winner would execute or enslave surviving enemies. Brutal but effective: when the war was over, there was little chance of a comeback by the other side. These days we do a lot of damage to each other, but it is rarely decisive. WWII was one of the last conflicts where the other side literally had no choice but to surrender unconditionally. I live in Berlin. The effects of that war are still very visible. It's also a very peaceful city these days. That war was really over when it was over.

Wars between nation states are subject to all sorts of international rules. The problem is that most wars these days do not involve nation states directly; nation states instead engage each other via proxy wars. E.g. the US, Iran, and Russia are not formally at war but are actively engaging each other in a plausibly deniable way nevertheless. These wars are dirty, brutal, and cause a lot of misery precisely because they are rarely fought to a conclusion and are fought using any means possible.


What happened to Asimov's Three Laws of Robotics?


Asimov's Three Laws were a plot device, not an actual attempt to codify ethics for artificial intelligence. They were never intended to be taken seriously outside of their fictional universe.

Their purpose was to set up conflict through an apparent paradox (robots can't harm humans but then robots harm humans) by showing the Laws failing spectacularly and exposing humanity's hubris in the face of unintended consequences.

Here's a big, long HN thread about it[0].

[0] https://news.ycombinator.com/item?id=19044387


Why is "scientists" the name of the group?

That seems to be an appeal to authority. This group is making an ethical argument, not a scientific one.

Which is probably why acting without “supervision or meaningful human control” is their criterion.

Is a security guard with 20 camera-controlled turrets using facial recognition to choose targets "supervised"?


Scientists, as a group, are a moral authority. They're not the only authority, but they represent a generally respected tribe of intellectuals.

The title doesn't say "science calls for."

I know I'm interested in hearing what a broad group of scientists thinks about non-scientific questions.


Because the story would get zero consideration if it were "ethicists".

If the guard has to pull the trigger, then it's supervised. If the ADS has a list of approved faces, or is told not to shoot people wearing an IR tag, then you get into a fun grey zone.
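A rough sketch of that spectrum as I see it (hypothetical policy logic, deliberately abstract, not any real system):

    # Classifies the supervision model; doesn't implement any targeting.
    def engagement_authority(human_pulls_trigger, machine_applies_human_list):
        if human_pulls_trigger:
            return "supervised"   # a human makes each individual decision
        if machine_applies_human_list:
            return "grey zone"    # humans wrote the rules, the machine applies them
        return "autonomous"       # the machine decides on its own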


We need robots for four reasons:

First, special forces are having a hard time recruiting, and in reality it's not just them [0].

Secondly, why not use them purely for defensive purposes? Want to eliminate the effectiveness of nuclear weapons? If we can have self-landing rockets, why not a missile that breaks into multiple rockets in space and destroys all inbound nukes?

Thirdly, why should billionaires and those who can afford them have their own private security? Imagine having your own inside the home: 24-hour protection, only calling the police to clean up the situation afterwards. There could be a new startup right there.

Finally, we should be encouraging DARPA and the military to spend billions on robots. That technology will eventually filter down to the consumer. That's better for all of us.

[0]: https://www.youtube.com/watch?v=4OFevGcLHTU


> why should billionaires and those who can afford them have their own private security?

Has it ever occurred to you that banning killer robots for everyone, not just for the poor and the public sector, might be on the table?



