I thought this might be something like Eliezer's arguments against developing a GAI until it could be made provably Friendly; instead I just got an argument exactly like the ones from 1903 claiming heavier-than-air flight by men was impossible - go back and read some of them; some of the arguments were almost identical. Some of the arguments here are currently true, but others amount to "I can't do it, and no one else has done it, therefore there must be some fundamental reason it can't be done".
> There is no way for any AI system to discriminate between a combatant and an innocent.
Eh? Why not? You're saying you can't use some sort of NN algorithm trained on past videos of combat situations to learn this? I realize that we'd need to be VERY sure before we deploy anything, but it's definitely possible.
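To make the suggestion concrete, here's a minimal sketch of the shape of that approach: a binary classifier trained on labeled examples, standing in for combatant/non-combatant discrimination. Everything here is illustrative and assumed - the four-number feature vectors stand in for whatever features you'd extract from combat video frames, and a real system would use a far larger network and dataset.

```python
import math
import random

random.seed(0)

def make_example(combatant):
    # Hypothetical features (posture, carried-object shape, etc.), drawn
    # from overlapping distributions so the task is non-trivial.
    base = 1.0 if combatant else 0.0
    features = [base + random.gauss(0, 0.3) for _ in range(4)]
    return features, 1 if combatant else 0

train = [make_example(i % 2 == 0) for i in range(200)]

# Plain logistic regression trained by gradient descent (stdlib only),
# standing in for the NN trained on labeled combat footage.
w = [0.0] * 4
b = 0.0
lr = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

for _ in range(200):
    for x, y in train:
        grad = predict(x) - y
        for i in range(4):
            w[i] -= lr * grad * x[i]
        b -= lr * grad

# Held-out evaluation - the "be VERY sure before we deploy" step,
# which in reality would be vastly more demanding than one accuracy number.
test = [make_example(i % 2 == 0) for i in range(100)]
correct = sum((predict(x) > 0.5) == (y == 1) for x, y in test)
accuracy = correct / len(test)
print(accuracy)
```

The point isn't that this toy solves the problem - it's that "learn a discrimination function from labeled past examples" is an ordinary, well-understood ML task, not something fundamentally impossible.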