It already deploys manned lethal force with far less than 100% identification accuracy, so what's the fundamental difference? The arguments in court would be the same - "an honest mistake", "an incident due to unfortunate circumstances", etc. - wrapped up with the claim that if such broad latitude to make "mistakes" is not given, society will descend into lawless anarchy. The only change is that those arguments would be applied to the people who deploy the robots instead of the people pulling the trigger.
Alternatively, and more likely, measures would be taken to make the system non-autonomous on paper, e.g. requiring a human operator to approve any action the robot intends to take. In practice, this would likely be one of those "moral crumple zones" - a human in the loop whose approval is a formality but who absorbs the blame - with little practical meaning.