
Bias in a medical algorithm favors white patients over sicker black patients - pseudolus
https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
======
pmdulaney
I have a problem with this whole algorithmic bias thing. If there's an
algorithm that explicitly deals with race (e.g., if race == "black", put at
end of line), then yes, that's a problem. But if an otherwise well-thought-out
algorithm happens to treat blacks and whites differently, why is that a
problem?

Suppose an algorithm designed to help people at risk of committing suicide
ends up disproportionately helping middle-aged white males (who happen to
commit suicide more than any other group). Is that a problem?

Or if an algorithm designed to deal with hypertension ends up giving more
benefits to blacks (who statistically are more likely to suffer from
hypertension)?

What you choose to optimize for is definitely an important matter, as Norbert
Wiener's famous story about optimal defense against a nuclear attack
illustrates.
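The distinction being drawn here can be sketched in code. Below is a toy simulation (entirely synthetic data; the 0.6 "access" factor and the 20% enrollment cutoff are invented for illustration) of the pattern the article describes: an algorithm that never sees race, but ranks patients by a proxy such as past healthcare cost, can still under-enroll an equally sick group if that group's recorded costs are systematically lower.

```python
# Hypothetical sketch, synthetic data: a race-blind algorithm that
# optimizes a proxy (recorded cost) can still rank groups differently.
# Groups A and B have identical true illness distributions, but group B's
# recorded costs are assumed lower (e.g., due to unequal access to care).
import random

random.seed(0)

def make_patient(group):
    illness = random.uniform(0, 1)           # true health need, same for both groups
    access = 1.0 if group == "A" else 0.6    # assumed gap in access to care
    cost = illness * access                  # the proxy the algorithm sees
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient("A") for _ in range(500)] + \
           [make_patient("B") for _ in range(500)]

# "Algorithm": enroll the top 20% by cost. Race/group is never an input.
patients.sort(key=lambda p: p["cost"], reverse=True)
enrolled = patients[:200]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
avg_illness_a = sum(p["illness"] for p in patients if p["group"] == "A") / 500
avg_illness_b = sum(p["illness"] for p in patients if p["group"] == "B") / 500

print(f"group B share of enrollment: {share_b:.2f}")              # far below 0.5
print(f"avg true illness, A vs B: {avg_illness_a:.2f} vs {avg_illness_b:.2f}")
```

Both groups are equally sick on average, yet group B is nearly shut out of enrollment. The bias lives in the choice of optimization target, not in any explicit race variable.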

~~~
DanBC
> disproportionately

We need to pay attention to high risk groups. We also need to pay attention to
lower risk groups, especially if we do not understand why they're a lower
risk.

If they're a lower risk because they use different methods it's very dangerous
to not pay enough attention to them, because method substitution can happen
rapidly. And if this previously low risk group starts using a more lethal
method their rates of death will suddenly increase, and we don't have any
information about them because we didn't bother paying attention.

> who happen to commit suicide more than any other group

This is only true in some countries. In other places young women are the
higher-risk group. Imposing Western-trained algorithms on a global audience
will cause (and already has caused) harm.

------
Konnstann
This is the kind of thing you need subject matter experts for, not data
scientists. Working in biomedical sciences and seeing ML applied to medicine
is scary, considering how poorly the biases in the data used to train these
algorithms are understood.

