
A Health Care Algorithm Offered Less Care to Black Patients - adelHBN
https://www.wired.com/story/how-algorithm-favored-whites-over-blacks-health-care/
======
charles_f
Overall this reflects a bias in the dataset that the "algorithm" was trained
on, i.e. in the decisions humans made (be it by the doctors, the insurers, or
the general context, since the model is based on predicting future cost of
care, etc.). This reminds me of another example, a recruitment "algorithm" at
Amazon that was shut off for bias against women[0].

That this bias was found in the "algorithm" means a) that it was checked for
biases, which is already somewhat good news, and b) that it can probably be
fixed, or at the very least tested.

This is just my opinion, but generally speaking I think this is good. Even
though detecting and fixing those biases might not be straightforward, I like
to think it's probably orders of magnitude simpler than fixing the same biases
in humans.
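To illustrate what "tested" can mean here, below is a minimal sketch of one common kind of audit: comparing selection rates across groups at a fixed score threshold (a "demographic parity" style check). All numbers and group labels are synthetic and purely illustrative; real audits use the actual model's scores and, as in the article, a better ground-truth measure of need than historical cost.

```python
def selection_rate(scores, threshold):
    """Fraction of patients flagged for extra care at a given score cutoff."""
    flagged = [s >= threshold for s in scores]
    return sum(flagged) / len(flagged)

# Synthetic predicted-cost scores for two groups with equal underlying need.
# Group B's scores are lower only because less spending was recorded for it,
# mirroring the cost-as-proxy-for-need problem described in the article.
group_a = [0.9, 0.8, 0.75, 0.6, 0.4]
group_b = [0.7, 0.6, 0.5, 0.35, 0.3]

rate_a = selection_rate(group_a, threshold=0.55)  # 4 of 5 flagged -> 0.8
rate_b = selection_rate(group_b, threshold=0.55)  # 2 of 5 flagged -> 0.4

# A large gap in selection rates despite equal need is the red flag
# an audit looks for.
disparity = rate_a - rate_b
print(f"selection rates: A={rate_a:.2f} B={rate_b:.2f} gap={disparity:.2f}")
```

The same loop over groups works for other metrics (false negative rate, calibration error), which is why such checks are comparatively cheap to run once you decide to run them.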

[0] [https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G)

