Actually, even if it is accidental or subconscious, I'd say this is a perfect example of systemic racism: racist behavior (here I'm referring specifically to an algorithm/device, which you can safely call objectively racist) that normally falls within society's acceptance level but, when magnified to a societal scale, is no longer acceptable. Another symptom of this illness is that, as a member of the system, you might not even notice it's broken if you're white, but you would if you're black.
Personally, I think it's understandable that individuals didn't consider race when developing facial recognition technology, especially since mainstream awareness of the problem is less than a decade old and many people live in racially homogeneous (or racially dominated) cultures. It's not acceptable for organizations, though, and the time when you could safely claim you didn't understand biased training data is nearly over. There are considerations you have to make when scaling your tech from a personal project to something the public will consume.
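To make the biased-training-data point concrete, here's a minimal sketch (plain Python, entirely hypothetical data and group names) of the kind of audit a team could run before shipping: break accuracy out per demographic group instead of reporting one aggregate number, since an overall score can look fine while one group fares much worse.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, predicted, actual).
# The groups and numbers here are made up for illustration.
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += (predicted == actual)

# Aggregate accuracy hides the disparity; per-group accuracy exposes it.
for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
```

On this toy data the aggregate accuracy is 75%, but group_a scores 1.00 while group_b scores 0.50; that gap, not the aggregate, is what you'd have to explain before release.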
Also, the day will come when computers can point out racist content better than the average human can today, albeit with a high false positive rate. I say this because it's a comparatively easy target: even if you count only a subset of the tweets discussing racism as non-trolling, that's still a shockingly large number of meaningful observations about the world that many people aren't seeing.