If face detection defines them as not a person, then anything that relies on there being a person in the field of vision won't work for them (like the racist soap dispenser that went viral a few years ago).
If face recognition makes the old racist "they all look the same to me" declaration, then peaceful protestors get arrested for looking like criminals.
"Peaceful protestors" became a bit of a joke this year, as newsreaders stood in front of burning cars & buildings telling us a peaceful protest was going on.
That's a straw man, though; no classification system assigns a "criminal" score. Here's another quote that points out exactly what I was talking about:
“There are two ways that this technology can hurt people,” says Raji, who worked with Buolamwini and Gebru on Gender Shades. “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work—where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them. It’s a separate and connected conversation.” [0]
I don't mean physiognomy, though such things periodically arise and are roundly and rightly ridiculed.
I mean "Our computer says we have footage of you robbing a supermarket" (higher error rates)
Another way for issues to arise (your easily weaponised point, above) is if you need a separate system for recognising people from certain groups. If it's all one system, it's harder to claim you're acting in good faith when you only follow up on matches against members of marginalised groups.
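To put rough numbers on the higher-error-rates side (every figure here is invented purely for illustration): even a small gap in false-match rates adds up once a system screens thousands of faces a day, so the group with the worse rate absorbs most of the wrongful "we have footage of you" follow-ups. A quick Python sketch:

    # Back-of-envelope sketch; all numbers below are made up for illustration.
    searches_per_day = 10_000   # faces checked against a watchlist each day
    fpr_group_a = 0.001         # false-match rate for group A
    fpr_group_b = 0.005         # false-match rate for group B (worse)

    false_hits_a = searches_per_day * fpr_group_a
    false_hits_b = searches_per_day * fpr_group_b
    print(f"group A: ~{false_hits_a:.0f} innocent people flagged per day")
    print(f"group B: ~{false_hits_b:.0f} innocent people flagged per day")

Same software, same deployment, yet the group with the higher error rate quietly ends up with five times as many wrongful flags.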