Hacker News new | past | comments | ask | show | jobs | submit login
Why The Pentagon’s ‘killer robots’ are spurring major concerns (thehill.com)
30 points by c420 12 months ago | hide | past | favorite | 35 comments



I think an underappreciated danger of "killer robots" is that they allow converting capital directly into precise violent force without the need for some significant portion of the population tacitly approving by participating.

E.g. a hundred years ago you could only convert cash into violence by recruiting and paying ordinary people. You couldn't wage a war that the people didn't want because people wouldn't sign up. Fifty years ago you could convert cash into indiscriminate violence with bombing campaigns, but you still needed the people if you wanted to invade instead of just destroying.

Killer robots shake that up, and I fear brings us one step closer to the apocalyptic "corporation-nations" of scifi. I don't think it's likely at this juncture, but corporations sitting on cash could wage war on each other or other nations. Chiquita doesn't need the CIA to do their dirty work, they can just invade Nicaragua with an army of autonomous robots.

Wealth is now directly translatable into force, which is somewhat terrifying.


> into precise violent force without the need for some significant portion of the population tacitly approving by participating

This has existed for years in the form of "foreign legions". Arguably French foreign policy in Africa would have been quite different without the FFL.

> Fifty years ago you could convert cash into indiscriminate violence with bombing campaigns, but you still needed the people if you wanted to invade instead of just destroying

None of these developments in "intelligent" weapons has moved beyond the "destroy" category. You still need boots on the ground to have any hope of building personal relationships in counterinsurgency and "nationbuilding".

> corporations sitting on cash could wage war on each other or other nations

We have already seen non-state entities wage "war" on other countries. It hasn't changed the fundamentals of international relations. Either (a) the "host" country condones the actions, at which point the actions are an act of war against the victim, or (b) the "host" cooperates in taking action against the non-state actor, or (c) the "host" is so weak that the non-state entity is the de facto "state". I don't see a realistic case where this behavior spreads beyond the very weak "states" in which it has already happened - functional states will not surrender their sovereignty to companies.


And the liberal elite will have to pry my murderbot from my cold dead hands! Seriously, defending these under the Second Amendment will happen as soon as industry realizes it can bring the price down the more units it makes, and charge more in the private sector. Think about the mass shootings that will happen when the shooter isn't even there -- or the shooter is backed up by a team of murderbots...


I suspect they won't be protected. The phrasing is "the right to keep and bear arms", and there's a salient argument that because the robot is autonomous, it is the one bearing arms, not the human. Robots have no right to bear arms. A similar argument to the copyright office saying that AI generated stuff can't be copyrighted because the AI can't hold copyright.

Another interesting corollary is whether EMP/jamming devices will be considered arms under the 2nd amendment if autonomous robots become common on the battlefield.


When you say that robots have no rights, you imply that robots aren't people. And that's common sense today. But even today there's a non-trivial number of scientists who believe that we are just complex mechanisms, and that our consciousness is no more mysterious than the dynamics of a sufficiently complex machine. If this idea of mechanical consciousness takes hold and spreads, in a few decades the Supreme Court will be considering a case, backed by solid scientific foundation, that "natural persons are fundamentally machines, and so machines created on American soil must be granted the same rights as natural persons."


Most people in US politics don’t hold that view though. As demonstrated by the frequent opposition to abortion.

So long as religion plays a fundamental part of political rallying, I can’t see enough people following the logic that AI deserves rights too.

To be honest, as someone who isn't religious myself and who does believe our consciousness is mechanical, I have also yet to be persuaded that AI is even remotely sophisticated enough for rights. And I don't think we will see AGI in my lifetime either.


I suspect with a disinterested Congress and an apolitical Supreme Court, wringing hands over interpretation of the intended meaning of words might be predictive of legal outcomes.


EMP weapons are science fiction. There is no plausible means of using a portable device to produce an EMP powerful enough to damage electronics.

Portable jammers exist today, but are very short ranged. And obviously those aren't effective against autonomous systems.


By this logic if my killer robot murdered someone then I am not responsible.


I would say the biggest problem with trying to regulate or prohibit "killer robots" is that almost no one involved can precisely define what a "killer robot" actually is. Aside from the terminators and assassin droids of scifi, there is a huge spectrum of "intelligence" in weapons systems, much of which has been deployed for years.

Consider the following cases, and whether or not they are a "killer robot":

1. An automated shipboard air defence system, such as that provided by the Phalanx or SeaRAM, which can automatically detect and engage targets it determines pose a threat to the vessel (potentially faster than its human operators could respond).

2. An anti-ship missile, such as NSM, fitted with imaging infrared guidance so that it can autonomously identify hostile ships and prioritize which (and where) to strike.

3. An anti-radiation missile, such as HARM, which can be launched without a specific target and instead automatically identifies threat emitters to engage.

All of these have some aspect of "killer robot" behavior, and they are all currently in service (several for decades). These existing weapons systems are too critical for anyone to remotely consider regulation or prohibition that would reduce their effectiveness, so any realistic hypothetical "killer robot" definition needs to coexist with them.


https://en.wikipedia.org/wiki/Ship_Self-Defense_System

I agree with your premise. A lot of our weapons already are killer robots; they just don't walk on two or four legs.


The navy put the trigger finger into full computer hands all the way back in the fifties with https://en.wikipedia.org/wiki/Naval_Tactical_Data_System , which could automatically assign and direct ordnance towards incoming threats, and could even give orders to piloted aircraft without human intervention, though this functionality was not always in effect.


There's zero chance this won't happen.

We see that $1000 drones with explosives can be incredibly effective at both surveillance and taking out squads. The fact that these $1000 drones are doing work often reserved for missiles or artillery with a much higher price tag shows just how cost-efficient they are.

It's like kamikaze attacks, except the human doing the flying isn't inside.

Ukraine has shown what the future of warfare is. Now everyone is trying to be more efficient at it.


Not only will it happen, but human interference in the process will be lessened over time, as human reaction time will be no match for that of machines.

It's a race to give ever more autonomy and power to machines, because humans are too slow, too stupid, and too limited.

There are a million dystopian scifi novels about this, and it seems they're about to start coming true.


The thing about machines is that they don't have to be the best. If it takes 100 shitty drones to take down an F-35, sure, the F-35 is amazing tech with an amazing human, but those drones can be replaced in a couple of days; the F-35 plus pilot costs 100x as much and takes years of training and building.

That's the point. We don't need amazing robot warriors. The hero human who is that much better is the dystopian scifi trope, and yeah, that's probably going to be the case, but it doesn't matter: I can mass-manufacture my losses faster than you can breed and train human replacements.

The biggest win with AI is that no matter how shitty it is, you can teach another one the same knowledge as the previous one instantly.
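The cost asymmetry described above can be sketched with back-of-the-envelope arithmetic. All figures here are illustrative assumptions; only the ~$1000 drone price comes from this thread, and the F-35 and training costs are rough ballparks, not sourced claims:

```python
# Rough cost-exchange sketch. Every constant is an assumed, illustrative figure.
DRONE_COST = 1_000                 # cheap FPV-style drone (figure from the thread)
F35_COST = 80_000_000              # rough flyaway-cost ballpark, assumed
PILOT_TRAINING_COST = 10_000_000   # assumed training ballpark
DRONES_TO_KILL = 100               # the hypothetical "100 shitty drones"

attacker_spend = DRONES_TO_KILL * DRONE_COST        # what the drone side risks
defender_loss = F35_COST + PILOT_TRAINING_COST      # what one loss costs the defender
exchange_ratio = defender_loss / attacker_spend

print(f"Attacker spends ${attacker_spend:,}; defender loses ${defender_loss:,}")
print(f"Cost-exchange ratio: {exchange_ratio:,.0f}:1 in the attacker's favor")
```

Under these assumed numbers the attacker risks $100,000 to destroy $90,000,000 of capability, a 900:1 exchange, which is why the exact quality of each individual drone matters so little.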


I struggle to imagine a future where something like this is avoidable.


It's true, though we do have international arms conventions that have been partly successful in preventing widespread use of, say, chemical and biological weapons (but not completely, e.g. Syria). And nuclear proliferation has maybe been slowed, a little. Hard to tell if those are 'guarded successes' or 'the best we could hope for' or 'just kidding ourselves' - and likely the same goes for slaughterbots.


The difference between those past successes and this is that with none of the previous doomsday weapons has it been possible to cobble something together from COTS parts and open software. Nor have they had a large degree of crossover with your common garage tinkerer hobbies like drones and robotics. Nuclear material and chemicals are likely easier to put controls around or get names put on a list if/when acquiring them.

Applying controls to software and embedded hardware is a bit more difficult. Then you have things like precision GPS systems that might just be too useful for civilian purposes to restrict. It feels like Pandora's box has already been opened on this one, barring some backpedalling in certain areas.


Can't we just all form one world government so we don't have to do this? Like ordering a new world or something.


I don't know if you are serious but I suspect without a peaceful geopolitical resolution we may see WWIII arrive within a few years. And it will be dominated by superintelligent weapons control systems directing swarms of autonomous weapons.

And there is pressure to make them more and more autonomous because humans in the loop are too slow.

It doesn't seem like a safe direction.

I know that governments can be generally terrible and a world government is a lot of people's worst nightmare. But the alternative might be SkyNet. Or WWIII and then SkyNet.


A couple quick ideas for political activism against autonomous weapons:

The "Stop Killer Robots" campaign: https://www.stopkillerrobots.org/

Asking your nation to support the upcoming UN General Assembly resolution against autonomous weapons: https://www.stopkillerrobots.org/news/unga-resolution-on-aut...

And having personal conversations with your friends in technology and defense industries who may contribute to these technologies.


No, we should encourage the development of these kinds of weapons because they make combat less dangerous for US servicepeople.

Also, it's incredibly naive to think that you'd achieve anything but handicapping Western nations in their development of autonomous weapons. China certainly isn't going to forego the technology out of moral squeamishness.


>stopkillerrobots.org

This kind of self-sabotaging suicidal stupidity should not exist after the 2022 Russian invasion of Ukraine.

Russia, China, and Iran already have "killer robots".

The only countries that will follow these suicidal regulations are stupid naive Western countries.


Hard times create strong men, strong men create good times, good times create weak men, and weak men create hard times.


Bad idea. We should be encouraging development of autonomous weapons in order to reduce risks to US and allied military personnel in future conflicts.


The problem with that approach is that it also makes conflicts more likely if there are no drawbacks to deployment of such weapons.

There's a lot of push back to deploying human soldiers in a conflict, but much less so for remotely controlled drones. It's likely that AI bots would have even less domestic pressure to avoid conflicts.


Then let's all go back to using swords and spears. Should make conflict even less likely, right?

Autonomous and remotely controlled weapons systems have been in widespread use since at least the 1970s. There is zero chance that powerful countries would ever agree to ban them, or that such a ban could ever be enforceable. It's a total fantasy.

https://en.wikipedia.org/wiki/Mark_60_CAPTOR?wprov=sfla1


So we should be endangering young Americans' lives in a twisted blood sacrifice to the maintenance of peace?

Ridiculous. If the Cold War taught us anything, it's that peace is bought with the power to annihilate anyone who chooses to break it. And to extend the Cold War analogy, sovereignty is only real to the countries with the military prowess to uphold it. A peaceful world is one where war is too risky for the decision makers, not the soldiers.


One of the things the Cold War taught us is that we almost destroyed the world with nuclear weapons several times because of accidents and misinterpretations, by both humans and computers. Having a human in the loop who was willing to say "no" or "wait"[0] may be why we are still here having this discussion today.

[0] https://en.wikipedia.org/wiki/Stanislav_Petrov


> A peaceful world is one where war is too risky for the decision makers, not the soldiers.

My point is that drones and autonomous weapons make it less risky for decision makers to engage in war, hence making it less peaceful.


Is anyone seriously threatening the US? I'd say lives are lost when soldiers are sent abroad, where they (maybe) shouldn't be. Consider this: https://en.wikipedia.org/wiki/Security_dilemma


War and technology, name a better duo? What is going to get really weird is how available a lot of this stuff is going to become.

We will have to depend on the morality of the masses and hope that most won't build their own killer drones in their garages.


People already have. Purchase a drone and a "wedding ring dropper" online and you can too!


Quick, start working on CRISPR bioweapons that ensnare and dissolve those robots!


This seems like the sort of thing Philip K. Dick should have warned us about.



