For example, with facial recognition at a high-priced building, suddenly non-white people can't get in because the scanner decides they didn't show a "real" face. Sorry, it's a bug, not a feature.
The same problem would, of course, apply to anyone with facial deformities, etc.
The problem takes essentially two forms: automatic exclusion from a group by facial features, or automatic inclusion in a group by facial features.
Think of automatic inclusion as advanced phrenology or some similar sort of woo. In this scenario, company X sells the idea that it can recognize "criminal types" by face, based on whatever research it can pull up claiming that criminals tend to share certain appearances. It then fills its corpus with criminals, which yields a neat self-selecting system: find the Black/Hispanic person and deny them the job, loan, etc., because their predicted likelihood of crime exceeds the threshold that was set.
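To make the mechanism concrete, here is a minimal sketch of how a fixed threshold over a score learned from a skewed corpus produces disparate exclusion. Everything here is an illustrative assumption: the groups, the score distribution, the bias term, and the threshold are invented for the example, not taken from any real system.

```python
# Hypothetical sketch: a biased training corpus shifts one group's scores,
# so a fixed decision threshold excludes that group at a much higher rate.
import random

random.seed(0)

def risk_score(group: str) -> float:
    """Toy 'criminality' score. Because the training corpus over-represents
    group 'B', the model has learned to score that group higher on average."""
    base = random.gauss(0.35, 0.1)
    bias = 0.25 if group == "B" else 0.0  # artifact of the skewed corpus
    return base + bias

THRESHOLD = 0.5  # applicants scoring above this are denied

def denial_rate(group: str, n: int = 10_000) -> float:
    """Fraction of simulated applicants from `group` who are denied."""
    denied = sum(risk_score(group) > THRESHOLD for _ in range(n))
    return denied / n

print(f"group A denied: {denial_rate('A'):.1%}")
print(f"group B denied: {denial_rate('B'):.1%}")
```

The point of the sketch is that no one ever writes "deny group B" anywhere in the code; the disparity falls out of the corpus plus a single neutral-looking threshold.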
The reason this would happen is the same reason banks used to deny loans to people in certain zip codes: it let them racially profile and exclude while pretending not to.
I’m not sure this is a good argument against the technology, because there is a solution to this objection: they’ll improve it and ensure every face type is recognized comparably well. It should be rejected on more basic principles.
With a pervasive system, unless people are hermits, they’ll get scanned periodically, and with other bits of information a changed face can be correlated back to the same person. (I go into a salon looking one way and come out different; I go into a boxing match and come out different; but I’m following my routine, going to the same subway stop and convenience store and using the same payment method, etc.)
Generally, the argument against a technology is not that it fails at its purpose, but that the way humans will use it is problematic.
I'm thinking of something analogous to ASME standards for mechanical systems. Or is the technology still very much in the wild-west phase?