
New York Regulator Probes UnitedHealth Algorithm for Racial Bias - bryanrasmussen
https://www.wsj.com/articles/new-york-regulator-probes-unitedhealth-algorithm-for-racial-bias-11572087601?mod=rsswn
======
missosoup
This 'algorithm has bias' kind of headline needs to die.

Every system has _bias_ in the colloquial sense so long as the inputs to
that system are not uniformly distributed. The only thing the designer of the
system can do is make a conscious choice about which way to _bias_ the
system and be able to justify that choice.

In practical terms it means that systems that deal with diverse populations
are always going to be 'unfair' towards some group of people. The only control
we have is who that group of people will be. This is inherent and unavoidable.
We need to accept this fact and keep it in consideration when designing such
systems (or choosing not to design them in the first place).

{procedural fairness, group fairness, utility}. Pick two.

[https://www.youtube.com/watch?v=Zn7oWIhFffs](https://www.youtube.com/watch?v=Zn7oWIhFffs)
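A minimal numeric sketch of that "pick two" trade-off (all numbers invented for illustration): a score can be perfectly calibrated within each group and still produce different false positive rates once the groups' score distributions differ, so at least one common fairness criterion has to give.

```python
# Toy demonstration: within each score bucket below, the fraction of
# positive outcomes equals the score, i.e. the score is calibrated
# for both groups -- yet the false positive rate still differs.

def fpr(records, threshold=0.5):
    """False positive rate: flagged negatives / all negatives."""
    negatives = [(s, y) for s, y in records if y == 0]
    flagged = [1 for s, y in negatives if s >= threshold]
    return len(flagged) / len(negatives)

# (score, outcome) pairs; numbers are made up for illustration.
group_a = [(0.8, 1)] * 4 + [(0.8, 0)] * 1 + [(0.4, 1)] * 2 + [(0.4, 0)] * 3
group_b = [(0.8, 1)] * 4 + [(0.8, 0)] * 1 + [(0.2, 1)] * 1 + [(0.2, 0)] * 4

print(fpr(group_a))  # 0.25 -- 1 of 4 negatives flagged
print(fpr(group_b))  # 0.2  -- 1 of 5 negatives flagged
```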

~~~
solotronics
It comes down to the world frame some folks have. To them the world should
have total fairness between all different people regardless of
predisposition, skill, situational luck, or effort put in. If it is not fair
by their system, they can fix it by pressing down on the scale. If any system
or person presents a fact that disrupts their worldview, it is labeled things
like "biased/racist/patriarchal/etc". Their desire for absolute equity
requires rejecting facts that show people are different.

To people with this worldview an algorithm will always be "biased" because it
simply reflects statistics instead of what they see as "right", which
is with their benevolent finger on the scale.

I edited this slightly to try to remove any us-vs-them rhetoric. I am not
passing any judgement, just stating what I have observed.

~~~
cycrutchfield
That’s gonna be a yikes from me, dog.

Edited this (since you did as well) to say that simply pointing to statistics
and saying that “facts don’t care about your feelings” is sophomoric and
probably intentionally disingenuous. For instance, black people in the US are
statistically more likely to commit crimes than white people. So should we
implement systems that target suspects based on the color of their skin?
Obviously not. There is a very important confounding factor, socioeconomic
status, which happens to be strongly correlated with skin color in the US due
to a long history of racial discrimination. So saying that “these systems
should not be racially biased” is an important criterion.

~~~
8f2ab37a-ed6c
We already discriminate as of today based on gender statistics in areas such
as car insurance, where men pay more on average, because they're statistically
more likely to cause accidents and drive recklessly. Why can't statistics be
extended to other areas?

~~~
mantap
In the EU exactly that kind of gender discrimination is illegal. Insurers are
not allowed to charge men more than women, just for being men.

The question is, do you want to live in a society where prices are set based
on one's genes? I sure don't.

~~~
ahbyb
>The question is, do you want to live in a society where prices are set based
on one's genes? I sure don't.

Those genes do clearly make us behave in different ways, so why not?

I'm not a feminist, but it would be fairer for women to pay less for the same
insurance since they are less likely to get in trouble with their cars. Why
should they pay more?

~~~
solotronics
I think the argument reduces to this: since we can get hyper-accurate data at
the individual level, how can we treat people fairly? If an insurance
algorithm charges one person who is higher risk 100x more than someone else,
is it really a fair form of insurance?

------
throwawayuuuu5
It is not possible to avoid racial bias. If you make a model predict equally
for two classes, you’ve likely only made your predictions worse for both.

Prediction outputs come with a degree of confidence. It is much better to
change how you evaluate the predicted probabilities for each protected class
than to try to force the probabilities themselves to come out even.
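A minimal sketch of that per-class evaluation (thresholds and scores here are invented for illustration): keep the model's probabilities as-is and pick the decision threshold separately for each group.

```python
# Instead of forcing predicted probabilities to match across groups,
# keep the model's risk scores and choose a per-group threshold.

def decide(prob, group, thresholds):
    """Flag a patient for extra care if their predicted risk
    exceeds the threshold chosen for their group."""
    return prob >= thresholds[group]

# Hypothetical per-group thresholds, tuned (by whatever evaluation
# you settle on) so flagged populations reflect comparable need.
thresholds = {"a": 0.6, "b": 0.45}

print(decide(0.5, "a", thresholds))  # False
print(decide(0.5, "b", thresholds))  # True
```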

The algorithm is rarely going to be the problem. It is the input data that is
allowed into the model. If you have a problem with how healthy whites are
getting recommended for treatment more than sick blacks, you need to remove
the inputs that otherwise differentiate them.

~~~
LanceH
Outreach to healthy people is a different kind of care than outreach to
someone who is sick. The comparison should be between two groups with the
same need and types of care.

~~~
throwawayuuuu5
That’s backwards. If the problem is that the algorithm says a healthy white
patient gets treatment more than a sick black patient, that means the
algorithm is considering something else besides healthy and sick. If that’s
the problem you’re solving for, you need to remove the input that creates the
difference. Once it’s gone, the algorithm will be unable to produce a
different result for two people with the same sickness level.
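A small sketch of that point (field names and weights invented for illustration): if a proxy input such as past billing is withheld from the model, it cannot score two equally sick patients differently on that basis.

```python
# If the model never sees the input that differs between groups,
# it cannot produce different outputs for equal sickness levels.

def risk_score(patient, allowed_inputs):
    # Only fields listed in allowed_inputs influence the score.
    weights = {"chronic_conditions": 2.0, "past_billing": 0.001}
    return sum(weights[f] * patient[f] for f in allowed_inputs)

a = {"chronic_conditions": 3, "past_billing": 9000}
b = {"chronic_conditions": 3, "past_billing": 2000}

# With past billing included, equally sick patients score differently:
print(risk_score(a, ["chronic_conditions", "past_billing"]))  # 15.0
print(risk_score(b, ["chronic_conditions", "past_billing"]))  # 8.0

# With it removed, the score depends only on sickness:
print(risk_score(a, ["chronic_conditions"]))  # 6.0
print(risk_score(b, ["chronic_conditions"]))  # 6.0
```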

------
neonate
[http://archive.is/AOYcV](http://archive.is/AOYcV)

------
bbanyc
Garbage in, garbage out. An algorithm trained on data from a bigoted society
like ours will return bigoted results.

You can try to fix the algorithm but the only thing that can really work is to
fix society...good luck with that.

------
gruez
>The probe follows a study, published in the journal Science on Thursday, that
found that an algorithm sold by UnitedHealth’s Optum unit ranked white
patients with fewer chronic diseases and healthier vital signs the same as
sicker black patients.

>[...]

>“This compounds the already unacceptable racial biases that black patients
experience

The article makes it sound like blacks are being discriminated against, but it
looks like they're given preferential treatment? Healthy whites are ranked the
same as sick blacks. That seems like a good thing, because less sick = lower
premiums. Am I reading this correctly?

~~~
danharaj
The very first sentence:

> New York’s insurance regulator said it is launching an investigation into a
> UnitedHealth Group Inc. algorithm that a study found prioritized care for
> healthier white patients over sicker black patients.

 _prioritized care for healthier white patients_

~~~
perl4ever
How does an insurer prioritize care for members relative to each other?

~~~
alphabettsy
From the article, “Dozens of hos­pi­tals and in­sur­ers in the U.S. use the
Op­tum tool to iden­tify di­a­betes and other chronic-dis­ease pa­tients who
should re­ceive ex­tra as­sistance such as home-care vis­its, help man­ag­ing
med­i­cines and sup­port co­or­di­nat­ing doc­tor ap­point­ments.”

~~~
perl4ever
My insurer sent me letters about something like that. I assumed it was of no
value, just something they do to look like they care. Kind of like the card
they sent for discounts on non-prescription "health related" pharmacy
products. Like the homeopathic cold remedies.

I also assumed that they weren't deciding who could receive the services, just
who would get the letters.

...and I also assumed they didn't have my race recorded anywhere now that I
think of it.

