But this isn't a hand-coded algorithm. It's a trained neural network, which is practically a black box. The best they can do is retrain it on different data sets, and that's impractical.
That's exactly the problem I was trying to point to. The algorithms and data models are black boxes - we don't know what they learned or why they learned it. That setup can't be fixed intentionally, and more importantly, we wouldn't know if it had been fixed, because we can only validate input/output pairs.
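To make that concrete, here's a minimal sketch of what input/output validation looks like. Every name in it (classify_image, the per-image forbidden-label sets) is hypothetical, not anyone's real API. A test like this can catch a known-bad pairing, but it tells you nothing about why the model produced it or whether the underlying cause is actually gone:

```python
from typing import Callable, List, Set, Tuple

def validate_io_pairs(
    classify_image: Callable[[bytes], List[str]],
    test_cases: List[Tuple[bytes, Set[str]]],
) -> List[int]:
    """Return indices of test cases whose predicted labels include
    a label flagged as unacceptable for that image."""
    failures = []
    for i, (image, forbidden) in enumerate(test_cases):
        # We can only observe the output for a given input;
        # the model's internal reasoning stays opaque.
        predicted = set(classify_image(image))
        if predicted & forbidden:
            failures.append(i)
    return failures
```

Passing this suite only means those specific pairs behave; the next unseen input can still fail.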
You do understand that this has nothing to do with humans in general, right? This isn't AI recognizing some evolutionary pattern and drawing comparisons between humans and primates -- it's racist content that specifically targets black people being present in the training data.
I don't know nearly enough about the inner workings of their algorithm to make that assumption.
The internet is surely full of racist photos that could teach the algorithm. The algorithm could also have bugs that miscategorize the data.
The real problem is that the people building and managing the algorithm don't fully know how it works or, more importantly, what it has learned. If they did, the algorithm itself would be fixed rather than patched with a term blocklist.
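For anyone unfamiliar with what a blocklist patch looks like: it's a filter bolted onto the model's output, not a change to the model. A minimal sketch, assuming a simple label-string pipeline (the term list and function name are my own illustration, not Google's actual code):

```python
# Hypothetical post-hoc label blocklist. The model itself is
# untouched and still makes the same internal mistake; the
# offending label is just suppressed before anyone sees it.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}  # assumed term list

def filter_labels(predicted_labels):
    """Drop any blocked label from the model's output before
    it reaches the user."""
    return [lbl for lbl in predicted_labels
            if lbl.lower() not in BLOCKED_LABELS]
```

Which is exactly why it's a workaround and not a fix: the misclassification still happens, it's just hidden.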
Humans, unfortunately, are offended if you imply they look like gorillas.
What would a good fix even look like? Human sensitivity is arbitrary, so any fix will tend to be arbitrary too.