You're speaking philosophically, rubbing shoulders with trollyology. I'm speaking from a technical level. Saying autonomous weapons will Skynet us and lead to the great robot uprising is like saying the paint on your walls will coalesce into a Rembrandt. Consciousness, or Hard AI, is not an emergent property of autonomous decision-making, or Soft AI.
Soft AI requires stricter rules to follow than Hard AI. For self-driving cars those strict rules are fairly straightforward (for the most part), because reasonably clear information on how well a car is being driven is readily available to the driver.
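To make that concrete, a "strict rule" in the driving case only needs to be an explicit, checkable constraint sitting on top of whatever the planner proposes. This is a toy sketch; the function, parameter names, and numbers are all invented for illustration:

```
# Hypothetical sketch: hard constraints layered over a driving planner's output.
# All names and numbers here are made up for illustration.

def enforce_rules(proposed_speed_mps, speed_limit_mps, gap_to_lead_m):
    """Clamp whatever speed the planner proposes to a few explicit, checkable rules."""
    MIN_HEADWAY_S = 2.0  # keep at least a 2-second gap to the car ahead

    # Rule 1: never exceed the posted limit.
    speed = min(proposed_speed_mps, speed_limit_mps)

    # Rule 2: never go faster than the 2-second rule allows for the current gap.
    speed = min(speed, gap_to_lead_m / MIN_HEADWAY_S)

    return max(speed, 0.0)
```

The rules are easy to state because the success criteria (stay under the limit, keep your distance) are already well defined.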
What strict rules would you follow when designing an AI with the means and authority to kill?
> "It's a thing, although I misspelled "trolley". I wasn't intending to imply trolling on your part. [1,2]"
Ah, I see. Yes, the trolley problem; I didn't know it was called that, but that's what I meant. Thanks for the links.
It's a problem that applies to all types of drivers, but the issue with autonomous drivers is that you have the choice to program for it in the AI. The question then becomes: what is the best way to handle it? Furthermore, if you do try to minimise deaths, how do you communicate to the other autonomous drivers that they should not also self-sacrifice?
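To illustrate that last question: if every car independently runs the same "minimise deaths" logic, they might all swerve into the barrier. You'd need some kind of tie-break so that exactly one vehicle commits to the manoeuvre. Here is a purely hypothetical sketch, assuming cars broadcast (timestamp, vehicle_id) claims for the same hazard; no such V2V message actually exists:

```
# Purely hypothetical sketch of the coordination problem, not a real V2V protocol.
# Assume every car that sees the hazard broadcasts a (timestamp, vehicle_id) claim.

def should_i_swerve(my_claim, other_claims):
    """Self-sacrifice only if mine is the earliest claim (ties broken by id),
    so exactly one vehicle takes the evasive manoeuvre instead of all of them."""
    return all(my_claim < other for other in other_claims)

# Example: three cars see the same hazard.
other_claims = [(1000.02, "car_b"), (1000.05, "car_c")]
print(should_i_swerve((1000.01, "car_a"), other_claims))  # True: car_a claimed first
```

Even this toy version quietly assumes perfect, instant communication between every car involved.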
> "The same ones we have for humans with the means and authority to kill. [3]"
Let's discuss something concrete from that handbook...
"the right to use force in self-defence arises in response to a hostile act (attack) and/or demonstrated hostile
intent (threat of imminent attack)"
How does an AI determine hostile intent? Reading human intent is a complex task that depends on a wide range of cues (visual, verbal, physical, historical, societal). Can we design an AI that reads all of those cues better than a human? Do you expect that to be easy to program?
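To put the difficulty in concrete terms, imagine the most naive possible approach: a weighted sum over detected cues. Everything below is invented purely for illustration; the point is how much judgement ends up buried in a handful of numbers.

```
# Hypothetical illustration only: a naive weighted-cue "hostile intent" score.
# Every cue name and weight is invented; nobody should ship anything like this.

CUE_WEIGHTS = {
    "weapon_visible": 0.6,       # visual cue
    "verbal_threat": 0.5,        # verbal cue
    "approaching_fast": 0.3,     # physical cue
    "prior_incidents": 0.2,      # historical cue
    "checkpoint_context": -0.3,  # societal/contextual cue
}

def hostile_intent_score(cues):
    """Sum the weights of whichever cues were detected (if detection even works)."""
    return sum(w for cue, w in CUE_WEIGHTS.items() if cues.get(cue, False))

# e.g. hostile_intent_score({"approaching_fast": True, "checkpoint_context": True}) -> 0.0
```

Who chooses the weights, who sets the threshold, and who answers for the false positives? Those are exactly the questions that make this hard to program.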