The article argues that Amazon should try to prevent the accidental misuse of facial recognition (and, more broadly, of other AI systems). I wonder to what extent this is possible as long as models remain black boxes: isn't it always possible for some unexpected behavior to emerge? And if so, how do we handle those cases?
Furthermore, there is the question of bias in the training data. How do we, as consumers of Amazon's AI, police this?