I'd like to set aside the privacy aspect and discuss the false positive problem.
Is the technology reliable enough to have a low false positive rate? Face recognition is probably good enough to tag photographs of your friends on Facebook, and of friends of friends: it only has to pick one person out of a set of ~100.
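The scale matters because of base rates: a matcher that looks excellent on a set of ~100 friends can still produce mostly false alarms when scanning a whole city for a small watchlist. A back-of-envelope sketch (all numbers hypothetical, chosen only to illustrate the effect):

```python
# Base-rate illustration with hypothetical accuracy numbers:
# even a 99%-accurate matcher yields mostly false alarms when
# the watchlist is tiny relative to the scanned population.

def precision(tpr, fpr, base_rate):
    """P(person is really on the watchlist | the system flags them)."""
    true_alarms = tpr * base_rate            # correctly flagged targets
    false_alarms = fpr * (1 - base_rate)     # innocents wrongly flagged
    return true_alarms / (true_alarms + false_alarms)

# Hypothetical city-scale task: 1,000 wanted people among 1,000,000
# passers-by, with a 99% hit rate and a 1% false positive rate.
p = precision(tpr=0.99, fpr=0.01, base_rate=1_000 / 1_000_000)
print(f"{p:.1%}")  # prints 9.0% -- most people flagged are innocent
```

With these (assumed) numbers, roughly nine out of ten people stopped by the system would be false positives, which is why the "what happens after a false match?" question below matters so much.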
What happens when someone is falsely identified? Is the suspect released after an ID check, or taken to the police station for a full check?
Could this become a proxy for racial profiling? There was a horror story about face recognition software that failed to detect a person with dark skin; this system could have the reverse problem. http://www.youtube.com/watch?v=t4DT3tQqgRM
In Dubai, I'd be more concerned about TRUE positives. The more reliably the system identifies someone who committed a supposed "crime" as deemed by Dubai's ludicrous standards, the worse it is.
Look at it another way... black people (in the US) are already profiled. This sort of technology could decrease that abuse - let all the races share the false positive rate.