
I would say the biggest problem with trying to regulate or prohibit "killer robots" is that almost no one involved can precisely define what a "killer robot" actually is. Aside from the terminators and assassin droids of sci-fi, there is a huge spectrum of "intelligence" in weapons systems, much of which has been deployed for years.

Consider the following cases, and whether or not they are a "killer robot":

1. An automated shipboard air defence system, such as that provided by the Phalanx or SeaRAM, which can automatically detect and engage targets it determines pose a threat to the vessel (potentially faster than its human operators could respond).

2. An anti-ship missile, such as NSM, fitted with imaging infrared guidance so that it can autonomously identify hostile ships and prioritize which (and where) to strike.

3. An anti-radiation missile, such as HARM, which can be launched without a specific target and instead automatically identifies threat emitters to engage.

All of these have some aspect of "killer robot" behavior, and they are all currently in service (several for decades). These existing weapons systems are too critical for anyone to remotely consider regulation or prohibition that would reduce their effectiveness, so any realistic hypothetical "killer robot" definition needs to coexist with them.
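
What all three cases share is the same sense-decide-engage loop: humans fix the threat criteria and the arming state ahead of time, and the machine selects and times the actual engagement. A rough sketch of that loop in Python, where every name and threshold (Track, is_threat, weapons_free, and so on) is invented for illustration and does not describe any real system's software:

    # Hypothetical sketch of an autonomous engagement loop.
    # Nothing here is taken from an actual weapons system.
    from dataclasses import dataclass

    @dataclass
    class Track:
        """A sensor contact: range, closing speed, and a signature label."""
        range_m: float
        closing_speed_mps: float
        signature: str  # e.g. "radar_emitter", "ir_ship_profile", "high_speed_inbound"

    def is_threat(track: Track) -> bool:
        # Threat criteria are set by humans in advance; the machine only applies them.
        return track.signature == "high_speed_inbound" and track.closing_speed_mps > 300

    def prioritize(threats: list[Track]) -> list[Track]:
        # Engage the closest, fastest-closing threats first (shortest time to impact).
        return sorted(threats, key=lambda t: t.range_m / max(t.closing_speed_mps, 1))

    def engagement_loop(sensor_tracks: list[Track], weapons_free: bool) -> list[Track]:
        """Return the tracks the system would engage this cycle.

        The human decision is reduced to setting weapons_free and the
        criteria above; target selection and timing are automatic.
        """
        if not weapons_free:
            return []
        threats = [t for t in sensor_tracks if is_threat(t)]
        return prioritize(threats)

The contested part is is_threat() and weapons_free, i.e. how much of the loop the human stays inside of, which is exactly why a clean "killer robot" line is so hard to draw around systems that already work this way.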




https://en.wikipedia.org/wiki/Ship_Self-Defense_System

I agree with your premise. A lot of our weapons are already killer robots; they just don't walk on two or four legs.


The Navy put the trigger finger in the computer's hands as far back as the fifties with https://en.wikipedia.org/wiki/Naval_Tactical_Data_System , which could automatically assign and direct ordnance against incoming threats, and could even issue orders to piloted aircraft without human intervention, though that functionality was not always enabled.



