It is fundamentally different: we're talking about leaving the decision of who to kill to machines. Not how to kill, but who to kill. You may think that's an easy decision for an AI to make (whilst still maintaining the fairness we want this AI to embody), but I would argue otherwise.
Self-driving cars are an interesting comparison. Say you have three self-driving cars about to be involved in an accident, with Car 1 travelling towards Car 2 and Car 3 on a narrow mountain pass. There is no action the cars can take that saves everyone, but if Car 1 swerves off the road, unavoidably killing its own passengers, the passengers of Car 2 and Car 3 are likely to be saved. What does the AI of Car 1 do?
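To make it concrete, here's a minimal sketch of the often-suggested "minimise expected total deaths" rule. The action names and numbers are invented for illustration; real systems would never have estimates this clean, which is part of the problem.

```python
# Illustrative only: a crude "minimise expected total deaths" rule.
# The action names and probability estimates are invented for this example.

def choose_action(actions):
    """Pick the action with the lowest expected total deaths."""
    return min(actions, key=lambda a: a["expected_deaths"])

car1_options = [
    # Continuing on the narrow pass likely kills passengers in all three cars.
    {"name": "continue", "expected_deaths": 7},
    # Swerving off the road kills Car 1's passengers but spares Cars 2 and 3.
    {"name": "swerve_off_road", "expected_deaths": 2},
]

print(choose_action(car1_options)["name"])  # -> "swerve_off_road"
```

Someone has to write that function, pick those numbers, and defend them after the crash.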
You're speaking philosophically, rubbing shoulders with trollyology. I'm speaking from a technical level. Saying autonomous weapons will Skynet us and lead to the great robot uprising is like saying the paint on your walls will coalesce into a Rembrandt. Consciousness, or Hard AI, is not an emergent property of autonomous decision-making, or Soft AI.
Soft AI requires stricter rules to follow than Hard AI. Strict rules for self-driving cars are fairly straightforward (for the most part), because reasonably clear information on how well the car is being driven is readily available to the driver, human or otherwise.
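For example (purely illustrative, with an invented threshold), a "strict rule" for driving can be checked against feedback that is clear and continuously measurable:

```python
# Illustrative: a driving rule is checkable against clear, continuous feedback.
# The two-second headway threshold here is a convention, not a standard I'm citing.

def following_distance_ok(speed_mps, gap_m, min_headway_s=2.0):
    """The gap to the car ahead should cover at least min_headway_s seconds
    of travel at the current speed."""
    return gap_m >= speed_mps * min_headway_s

print(following_distance_ok(speed_mps=27.0, gap_m=60.0))  # True
print(following_distance_ok(speed_mps=27.0, gap_m=40.0))  # False
```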
What strict rules would you follow when designing an AI with the means and authority to kill?
> "It's a thing, although I misspelled "trolley". I wasn't intending to imply trolling on your part. [1,2]"
Ah, I see. Yes, the trolley problem; I didn't know it was called that, but that's what I meant. Thanks for the links.
It's a problem that applies to drivers of all kinds, but the difference with autonomous drivers is that you have the choice to program for it in advance. The question then becomes: what is the best way to handle it? Furthermore, if you do try to minimise deaths, how do you communicate to the other autonomous drivers that they should not also self-sacrifice?
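One conceivable answer is some kind of vehicle-to-vehicle claim protocol, but it has to be invented and agreed on first. The sketch below is purely hypothetical; the message format and tie-break rule are made up, and no existing V2V standard defines anything like a "sacrifice claim".

```python
# Illustrative only: one way cars *might* coordinate so that only one of them
# self-sacrifices. Message format and tie-break rule are invented for this sketch.

def elect_sacrifice(claims):
    """Given broadcast claims of the form (car_id, cost_of_self_sacrifice),
    deterministically pick one car: lowest cost, ties broken by lowest id.
    Every car runs the same function on the same claims, so they all agree."""
    return min(claims, key=lambda c: (c[1], c[0]))[0]

claims = [("car1", 2), ("car2", 3), ("car3", 3)]
print(elect_sacrifice(claims))  # -> "car1"
```

And that still assumes every car receives every claim in time, which a real crash scenario gives you milliseconds to guarantee.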
> "The same ones we have for humans with the means and authority to kill. [3]"
Let's discuss something concrete from that handbook...
"the right to use force in self-defence arises in response to a hostile act (attack) and/or demonstrated hostile
intent (threat of imminent attack)"
How does an AI determine hostile intent? Reading human intent is a complex task that depends on a wide range of cues (visual, verbal, physical, historical, societal). Can we design an AI that reads all of these cues better than a human? Do you expect this to be easy to program?
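To see how awkward this gets, here is a deliberately naive sketch of what a "demonstrated hostile intent" check would have to look like in code. Every cue, weight, and threshold is invented, and that's exactly the point: someone has to pick them, and lives hang on the choice.

```python
# Illustrative only: a naive "hostile intent" estimator. All cues, weights,
# and the decision threshold are invented for this example.

CUE_WEIGHTS = {
    "weapon_visible": 0.5,          # visual
    "verbal_threat": 0.3,           # verbal
    "approaching_fast": 0.2,        # physical
    "prior_hostile_contact": 0.2,   # historical
    "local_escalation_level": 0.1,  # societal/contextual
}

def hostile_intent_score(cues):
    """Naive weighted sum of cue estimates (each in [0, 1])."""
    return sum(CUE_WEIGHTS[name] * value for name, value in cues.items())

def may_use_force(cues, threshold=0.6):
    """Who decides that 0.6 is the right threshold, and on what basis?"""
    return hostile_intent_score(cues) >= threshold

cues = {"weapon_visible": 0.4, "verbal_threat": 0.0, "approaching_fast": 0.9,
        "prior_hostile_contact": 0.0, "local_escalation_level": 0.5}
print(hostile_intent_score(cues), may_use_force(cues))  # 0.43 False
```

Each of those inputs is itself an unsolved perception problem before you even get to the weighting.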
Can we just face up to the reality that more weapons = more potential abuse of weapons, since there's no such thing as a perfectly moral user?