That's the great part about AI: since computers don't make mistakes, you can launder whatever biases you want through an objective and rational computer.
We already have this today when anti-theft alarms go off as you exit a store with a paid purchase. You're presumed guilty because someone forgot to deactivate the tag, and the robot can't be wrong.
This is such an excellent way of concisely expressing a major fallacy I see whenever I talk about AI/modeling/etc. (i.e. “it isn’t racist, it’s just numbers” and other nonsensical takes). I’m borrowing this language for the future.
That's, hopefully, a fixable organizational problem. I think the closest we should let AI get to judicial decisions is suggesting ranges. There still needs to be a person the decision traces back to, someone who's not just passing along what the computer says.
Ha. Human problems never get fixed, just moved around. There's nothing new here to help get the racism out of the justice system; a core piece of the problem is that there are only disincentives for mercy and no checks whatsoever on individual biases. We still have a racist system in the US sixty years after the civil rights movement, and I don't see anything changing that anytime soon.
Tech problems sometimes get fixed, and the fixes occasionally either eliminate a class of human problems or create new ones. It's the only thing that ever changes... Rejecting tech wholesale means embracing the status quo.
There should absolutely be a human where the buck stops. But... "fixing an organisation" involves calling people racist. Then screwing their careers. That should carry a heavy emotional burden. It should involve anger and pushback.
"Some weights in an algorithm are wrong" seems a much less fraught problem.