
Opinion: A.I. Could Worsen Health Disparities - pseudolus
https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html
======
kevinalexbrown
Dr. Khullar suggests that AI will exacerbate biases in medical practice. His
fundamental concern is that machine learning will codify biases and become a
self-fulfilling prophecy. But there is scant evidence that AI will _worsen_
these disparities.

If anything, a machine-learning point of view better addresses his concerns
than a traditional one, because models can be updated much more quickly to
correct for identified biases. Doctors spend years and years of hard work
becoming efficient and effective human algorithms themselves, and updating
those human algorithms in the face of newer evidence is difficult. In standard
practice, biases are often invisible and uncodified to begin with. "Moral
intuition" is something all doctors use, but it's also something of a black
box in nearly every real-world use case.

~~~
TuringNYC
I'm speaking as the former CTO/co-founder of a medical-image ML firm (for 3 years):

1\. there is already a major bias in medical diagnosis - a bias favoring those
who can actually pay

2\. automating even parts of the diagnostic process reduces cost, and that is
a huge benefit to _everyone_

3\. not everything gets done immediately. Let's figure out the basics first
(getting classifiers working on whatever dataset we have) and then focus on
getting it to work on everything. It isn't like medicine was right from day
one...heck, I seem to recall leeches and bloodletting being the norm for a
long time.

4\. Almost every doctor I spoke to was afraid of ML/AI because it pierced
their forced scarcity and threatened their wages. I might argue that health
disparities are worsened currently because medical boards throttle residency
programs and fellowships to create an artificially constrained supply and
hence high prices. (Before I get the rote response of "of course doctors will
never go away": yes, they won't go away, but they will focus less on rote
tasks, increasing throughput, thus increasing supply, thus decreasing wages.)

5\. We got all our training data from minorities. Incidentally, foreign
countries are a lot more generous with training data. For our ML diagnostic
firm, we had envisioned giving the product away for free in poorer countries
where we could just get training data.

~~~
nradov
How do medical boards throttle residency programs? The biggest limiting factor
today is the Medicare funding cap.

[https://news.aamc.org/for-the-media/article/gme-funding-doctor-shortage/](https://news.aamc.org/for-the-media/article/gme-funding-doctor-shortage/)

~~~
TuringNYC
In the US, the average resident makes ~57k USD these days. If you're familiar
with medical billing rates in the US, a week of billings covers the entire
annual salary. For specialists (e.g., derm, radiology, etc.), a _day_ of
billing can cover the entire annual salary for the resident. Even if you
assume not all bills are collected, or that many are negotiated down by
insurers, the profit margin on residents is off the charts.

Given billing rates, "we don't have money" is a very convenient answer for why
there aren't more residents (and hence more future supply of doctors). Heck,
given the wild profit margin on a resident, I'd personally fund their annual
salary for a share of the annual billings.

The real answer is... current doctors, specifically specialty boards, must
actually be willing to train a resident, however they are funded (by Medicare,
by hospitals, by me, etc.) -- and specialty boards are not. Training more
residents would increase supply and decrease their future wages. Openings are
very carefully throttled to create artificial scarcity.

Medical specialty boards are essentially cartels.

This is hard to imagine as a technologist because we largely operate in a free
market. Anyone can enter the market and opt to work for less money than you. A
foreign worker can try to do your job for less. The job can be off-shored.

~~~
WhompingWindows
Do technologists truly operate in a free market? There are rampant anti-
competitive practices across tech. I think it's an SV libertarian fantasy that
they are in a free market, a fantasy they tell themselves to paper over their
squashing of rivals.

~~~
AnthonyMouse
The _job market_ is very competitive. You don't need anyone's permission to
enter it, all you have to do is do good work. Salaries are high due to a
combination of massive demand and the fact that it takes a long time to get
good at it. Even their stupid collusion attempts are basically fruitless,
because the tech market isn't just four colluding companies, there are
thousands. You don't have to go from Google to Apple, you can go to Amazon or
Red Hat or numerous others, or create your own startup. That number of
companies could never secretly collude -- they couldn't even get away with
four. Which is why salaries are still high.

The true threat is companies crushing smaller rivals, because that's how in
the long term you end up in a situation where there _aren't_ thousands of
tech companies: no one can compete without the assent of one of the
major ones, and they prefer to destroy you, compete with you, or buy you out
rather than let you grow independently. And that's how salaries could fall in
the long term. But you tell people that supporting walled gardens and closed
proprietary services could lower their long-term salary and they don't hear
you, because they're after the quick buck today.

------
perfmode
James Mickens on this topic:

[https://youtu.be/ajGX7odA87k](https://youtu.be/ajGX7odA87k)

> Some people enter the technology industry to build newer, more exciting
> kinds of technology as quickly as possible. My keynote will savage these
> people and will burn important professional bridges, likely forcing me to
> join a monastery or another penance-focused organization. In my keynote, I
> will explain why the proliferation of ubiquitous technology is good in the
> same sense that ubiquitous Venus weather would be good, i.e., not good at
> all.

> Using case studies involving machine learning and other hastily-executed
> figments of Silicon Valley’s imagination, I will explain why computer
> security (and larger notions of ethical computing) are difficult to achieve
> if developers insist on literally not questioning anything that they do
> since even brief introspection would reduce the frequency of git commits. At
> some point, my microphone will be cut off, possibly by hotel management, but
> possibly by myself, because microphones are technology and we need to
> reclaim the stark purity that emerges from amplifying our voices using rams’
> horns and sheets of papyrus rolled into cone shapes. I will explain why
> papyrus cones are not vulnerable to buffer overflow attacks, and then I will
> conclude by observing that my new start-up papyr.us is looking for talented
> full-stack developers who are comfortable executing computational tasks on
> an abacus or several nearby sticks.

~~~
paganel
Thanks for the link. At some point he says "the gadgets are the true people of
the Earth", which more or less resembles what Jacques Ellul first wrote about
60 years ago [1]:

> Hard determinists would view technology as developing independent from
> social concerns. They would say that technology creates a set of powerful
> forces acting to regulate our social activity and its meaning.

and

> According to this view of determinism we organize ourselves to meet the
> needs of technology and the outcome of this organization is beyond our
> control or we do not have the freedom to make a choice regarding the outcome
> (autonomous technology) (...) In his 1954 work The Technological Society,
> Ellul essentially posits that technology, by virtue of its power through
> efficiency, determines which social aspects are best suited for its own
> development through a process of natural selection.

I used to be a pretty big believer in things like "technology will make
everything better", but after reading some of Ellul's books I've started to
have my doubts about that.

[1]
[https://en.wikipedia.org/wiki/Technological_determinism#Hard...](https://en.wikipedia.org/wiki/Technological_determinism#Hard_and_soft_determinism)

~~~
graphitezepp
Great, now I can quote somebody about why I think technology is evil who isn't
the Unabomber. Thanks.

~~~
Verdex
Sarcasm?

I only ask because you're saying this on a website. A website focused on
funding technology startups. Hosted on the internet. Built with DARPA grants.
Like, this doesn't seem like your sort of place if you're serious about
thinking technology is evil.

~~~
perfmode
Sometimes the people in the best position to judge are the ones who know the
most.

------
monksy
What I'm getting from the article: people seem to think that AI is magic, or
that it's just like any other technology ("THESE DEVELOPERS PUT IN BAD
STUFF"). That's not how AI works. You have to be aware of bias, introduce
random error, accept false positives/negatives, and avoid overfitting.

That's not something that someone who took a boot camp on TensorFlow is going
to understand a lot about.

EDIT: Also, if you're using the results of the AI process, you should
understand the metadata about the results and where a good balance lies.
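To make the "accept false positives/negatives" point concrete, here is a
minimal sketch of the kind of bias check this implies, in plain Python with
made-up data and group labels (everything here is hypothetical, for
illustration only): compare error rates per subgroup, not just overall.

```python
# Per-group error-rate audit: a model can look fine on aggregate
# accuracy while failing badly on one subgroup. Data is hypothetical.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / max(negatives, 1), fn / max(positives, 1)

def audit_by_group(records):
    """records: list of (group, y_true, y_pred). Returns {group: (fpr, fnr)}."""
    groups = {}
    for g, t, p in records:
        groups.setdefault(g, ([], []))
        groups[g][0].append(t)
        groups[g][1].append(p)
    return {g: error_rates(ts, ps) for g, (ts, ps) in groups.items()}

# Toy example: the classifier is perfect on group A but wrong on most
# of group B, which an aggregate accuracy number would partly hide.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 0), ("B", 0, 0),
]
rates = audit_by_group(records)
```

Running the audit here yields zero error rates for group A and large ones for
group B, which is exactly the failure mode the facial-recognition study cited
elsewhere in this thread describes.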

------
sametmax
Any technological progress in health tech worsens disparities, because when
it's new, it's expensive. The more groundbreaking the tech, the greater the
disparity, because the effect it buys is so drastic.

That doesn't mean we should not improve health tech.

~~~
Synaesthesia
It’s also sometimes needlessly expensive, when companies make excess profits
on medical gear. It’s up to governments (it really is) to help everyone get
access to improved care.

~~~
Symmetry
Medical device companies have very high gross profit margins but net profit
margins around 7%, which is pretty typical for high tech manufacturing. That
suggests that there isn't much in the way of excess profit.

~~~
linguistbreaker
Agreed. I think the (financial) excesses of the health care system are mostly
related to the litigiousness of the USA and the administrative burden.

~~~
nradov
"Overall annual medical liability system costs, including defensive medicine,
are estimated to be $55.6 billion in 2008 dollars, or 2.4 percent of total
health care spending."

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3048809/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3048809/)

Litigiousness is an issue but it's not the biggest one.

~~~
Symmetry
My understanding is that the main proposed mechanism of action is that fear of
lawsuits results in excess treatments.

~~~
nradov
That's included in the 2.4% estimate.

------
SolaceQuantum
"Dhruv Khullar (@DhruvKhullar) is a doctor at NewYork-Presbyterian Hospital,
an assistant professor in the departments of medicine and health care policy
at Weill Cornell Medicine, and director of policy dissemination at the
Physicians Foundation Center for the Study of Physician Practice and
Leadership."

OK. Also noted that this is an opinion piece and not journalism.

From the opinion piece: "A recent study found that some facial recognition
programs incorrectly classify less than 1 percent of light-skinned men but
more than one-third of dark-skinned women. "

Study link: [http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212](http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212)

Exact stats:

"In the researchers’ experiments, the three programs’ error rates in
determining the gender of light-skinned men were never worse than 0.8 percent.
For darker-skinned women, however, the error rates ballooned — to more than 20
percent in one case and more than 34 percent in the other two."

From the NYT opinion piece: "A.I. programs used to help judges predict which
criminals are most likely to reoffend have shown troubling racial biases, as
have those designed to help child protective services decide which calls
require further investigation."

Associated links:
[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)
[https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html](https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html)

Relevant quotes from each:

"The formula was particularly likely to falsely flag black defendants as
future criminals, wrongly labeling them this way at almost twice the rate as
white defendants."

"48 percent of the lowest-risk families were being screened in, while 27
percent of the highest-risk families were being screened out. Of the 18 calls
to C.Y.F. between 2010 and 2014 in which a child was later killed or gravely
injured as a result of parental maltreatment, eight cases, or 44 percent, had
been screened out as not worth investigation."

~~~
gbrown
It blows my mind that anyone thinks AI for recidivism is a good idea, given
the well documented biases inherent in the existing system.

~~~
Symmetry
Well, the well documented biases in the existing system are exactly why you
might think that AIs would be a good idea.

~~~
gbrown
Not if by AI you mean anything in the same neighborhood as supervised
learning.

------
dontreact
I would respond that A.I. is the -only- realistic hope we have for reducing
the biases in our medical system. The systemic and individual-level bias of
the medical system is not going to go away due to some sudden enlightenment.
It's true that, to some extent, the first wave of AI applications will
inevitably carry with them some of the biases that exist in the current
medical system.

These biases are going to lead to measurably disparate outcomes. Fortunately
measurably disparate outcomes are exactly the type of thing that can be used
to train or otherwise guide the improvement of a machine learning model.

As long as we remain mindful that there will be similar biases in the first
wave of applications, AI, and the data slicing and dicing typically done
during model development, will be the best tools for detecting and then
mitigating these biases.
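The "slicing and dicing" mentioned above can be sketched very simply: group
records by an attribute and flag any slice whose outcome rate deviates sharply
from the overall rate. The attribute name, records, and threshold below are
all hypothetical, chosen only to illustrate the mechanic.

```python
# Slice a dataset along an attribute and surface slices with
# disparate outcome rates. Data and threshold are hypothetical.

def outcome_rate(rows):
    return sum(r["outcome"] for r in rows) / len(rows)

def flag_disparate_slices(rows, attribute, threshold=0.15):
    """Return {slice_value: rate} for slices whose outcome rate
    differs from the overall rate by more than `threshold`."""
    overall = outcome_rate(rows)
    slices = {}
    for r in rows:
        slices.setdefault(r[attribute], []).append(r)
    return {
        value: outcome_rate(group)
        for value, group in slices.items()
        if abs(outcome_rate(group) - overall) > threshold
    }

rows = [
    {"region": "urban", "outcome": 1}, {"region": "urban", "outcome": 1},
    {"region": "urban", "outcome": 0}, {"region": "urban", "outcome": 1},
    {"region": "rural", "outcome": 0}, {"region": "rural", "outcome": 0},
    {"region": "rural", "outcome": 1}, {"region": "rural", "outcome": 0},
]
flagged = flag_disparate_slices(rows, "region")
```

In this toy data both slices get flagged (urban well above the overall rate,
rural well below), which is the measurable signal you would then investigate
during model development.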

~~~
AlexandrB
> I would respond that A.I is the -only- realistic hope we have for reducing
> the biases in our medical system.

I would counter that the only realistic hope is social change. A.I. or not,
biases will persist in medicine as long as they persist in society at large.
The idea that an unbiased A.I. will arise from a process designed and run by
biased individuals sounds like utopianism. Technology is not magic and it
won't solve our "hard" social problems for us.

~~~
jdp23
Yeah really!

There's plenty of evidence that medicine today has deeply embedded biases.
Disparities in Black maternal health outcomes have been thoroughly covered in
the literature and even mainstream media.

On the other hand there's plenty of evidence that ML and AI _magnify_
inequality in other areas.

So relying on technology as a magic bullet to solve these societal problems
seems ... naive.

------
MAXPOOL
Not my exact field, but I keep track of the research for potential
applications.

The general vibe is that there are potentially large welfare gains to be
achieved if algorithms, ML, or statistical methods are integrated into human
decision making in a principled way. People have an innate tendency to treat
noise as signal. That does not mean that the dangers mentioned in the article
are not real [2]. We should be very aware of them in their current forms and
avoid repeating them.

A few of my favorite papers:

1\. _Human Decisions and Machine Predictions_ The Quarterly Journal of
Economics, Volume 133, Issue 1, 1 February 2018, Pages 237–293,
[https://doi.org/10.1093/qje/qjx032](https://doi.org/10.1093/qje/qjx032)
[https://www.nber.org/papers/w23180](https://www.nber.org/papers/w23180)

2\. _Dissecting Racial Bias in an Algorithm that Guides Health Decisions for
70 Million People_ (2019)
[https://dl.acm.org/citation.cfm?doid=3287560.3287593](https://dl.acm.org/citation.cfm?doid=3287560.3287593)

3\. _Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and
Interpretability_
[https://arxiv.org/abs/1809.04578](https://arxiv.org/abs/1809.04578)

4\. _Direct Uncertainty Prediction for Medical Second Opinions_
[https://arxiv.org/abs/1807.01771](https://arxiv.org/abs/1807.01771)

------
binalpatel
I work in healthcare, and one thing that's always top of mind for me is that
the data we work with is (generally) only indicative of the
physicians'/billers' best guess.

So - say we get a bunch of diagnosis codes from a hospital. Codes are
generally added by medical billers after a patient is discharged, based on
the physician's input and other data on the patient record. So at this point
the data you generally work with has gone through two different humans, who
each applied their (best-attempt) subjective viewpoint to this patient.

This generally works fine for a lot of common, evident conditions - things
like heart attacks, fractures, and so on. But for things that are complex,
like sepsis leading to more evident conditions, the data may not necessarily
capture that sepsis even occurred.

Not to say this is a unique problem to healthcare, but something that's not
talked about often. A lot of the data we train and model on is based on a
human's best guess, which may in some ways be limiting given really complex,
dynamic processes.

------
WhompingWindows
As in most discussions of AI, the discussion in the comments seems unmoored
from actual predictive analytics in healthcare. We have this fantasy future of
omnipotent AI machines controlling our destiny, when in reality the here and
now is all we have solid evidence about. The reality is that all of this
"AI" in healthcare is just fancy mathematical equations crunched on
increasingly large data sets, and until humans themselves are much less
biased, the math isn't going to solve these social problems.

My job involves creating predictive models for the VA hospital system. In the
past month I have worked on models predicting the probability of death in the
next year, the probability of receiving social work services, the probability
of receiving a screener that indicates food insecurity, and more. The idea
behind all of these is to take our clinical intuitions and the intuitions
based on prior research, then gather variables that allow us to use that
theoretical intuition to predict future health outcomes. These predictive
models may turn into dashboards, which are basically daily or weekly tables
that show clinicians which of their patients are predicted to get certain
outcomes.

Now, how does this all circle back to biases and disparities? All of our
models include race, ethnicity, gender, rurality, age, and many other
categories of sociodemographic information. However, at every step of this
process, there is a human (whether PI, analyst, or clinician in the final
step) looking at numbers/variables/values and making decisions.

Thus, I don't think we can truly separate the AI from the human in current
healthcare analytics. We do our best to control for disparities and get down
to brass tacks, the actual medical information, but there is simply too much
human decision-making in the current workflow to truly divorce the "disparity
differential" from whatever humans would do on their own sans mathematical
modeling.

Overall: our models are mere collaborators, and until we minimize our personal
and systemic biases and disparities, we can't hope to use our fancy
mathematical models to minimize them for us.
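For readers unfamiliar with what these dashboards boil down to, here is a
stripped-down sketch of the pipeline described above: score each patient with
a fitted model, then surface the highest-risk patients for clinician review.
The features, coefficients, and patient records are entirely hypothetical,
not the VA's actual models.

```python
# Toy risk-score dashboard: a logistic model scores patients, and the
# top-k riskiest are flagged for review. All numbers are hypothetical.
import math

COEFFS = {"age": 0.04, "prior_admissions": 0.5, "a1c": 0.3}
INTERCEPT = -6.0

def risk(patient):
    """Logistic model: predicted probability of the outcome."""
    z = INTERCEPT + sum(COEFFS[k] * patient[k] for k in COEFFS)
    return 1 / (1 + math.exp(-z))

def dashboard(patients, top_k=2):
    """Return the top_k highest-risk patients as (id, probability) rows."""
    scored = [(p["id"], risk(p)) for p in patients]
    return sorted(scored, key=lambda row: row[1], reverse=True)[:top_k]

patients = [
    {"id": "pt1", "age": 80, "prior_admissions": 3, "a1c": 9.0},
    {"id": "pt2", "age": 45, "prior_admissions": 0, "a1c": 5.5},
    {"id": "pt3", "age": 70, "prior_admissions": 1, "a1c": 7.0},
]
rows = dashboard(patients)
```

The point of the sketch is the comment's thesis in miniature: a human chose
the features, a human fit the coefficients, and a human acts on the flagged
rows, so the model never operates free of human judgment.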

~~~
jdp23
Great comment! One thing I'd add:

> until we minimize our personal and systemic biases and disparities, we can't
> hope to use our fancy mathematical models to minimize them for us.

And as long as technologists are in denial about the extent to which personal
and systemic biases influence reality (as we're seeing in this thread), tech
will continue to reinforce and magnify these biases.

------
Symmetry
Another potential problem is that minorities are often more reluctant than
average to let their genetic information be used in health research. It's
understandable given historical issues but will probably lead to bad outcomes
in the future.

------
ruipds
This may be true, but it happens today as well, without AI. Big pharma
corporations choose to research treatments for diseases that affect the
Western world, often setting aside the needs of poor nations.

~~~
SolaceQuantum
This is addressed in the article:

"The risk with A.I. is that these biases become automated and invisible — that
we begin to accept the wisdom of machines over the wisdom of our own clinical
and moral intuition. Many A.I. programs are black boxes: We don’t know exactly
what’s going on inside and why they produce the output they do. But we may
increasingly be expected to honor their recommendations."

~~~
Datenstrom
I do understand that there is a serious possibility of deploying a bad
black-box AI, but a number of questions come to mind.

Isn't it true, though, that human intelligence (and especially
corporate/government/etc. intelligence) is also susceptible to biases which
are invisible? Also, do we know why humans or groups of humans produce the
output they do? Is there some kind of black-box testing procedure we could use
to increase trust in AI to a point at least equal to humans?

~~~
SolaceQuantum
The presumed argument here is that you can challenge individual people,
corporations, etc. more easily on discriminatory behavior than you can
challenge an algorithm. If an algorithm happens to refuse to issue loans to
black people, who's the class action lawsuit going to sue?

~~~
CamTin
Presumably you could sue the bank that is using the AI to make the loan
decisions, or is there something I'm missing?

~~~
AlexandrB
You'd have to prove that the A.I. was discriminating based on a "protected
class" and not on some other basis. But you have no insight into the A.I. or
its training data. Nor do you have a comparable A.I. of your own to run A/B
experiments that can prove discrimination. Now what?
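One partial answer, without access to the training data or a comparable A.I.
of your own, is a correspondence-style audit: with only query access to the
deployed model, submit paired applications identical except for the protected
attribute and count decision flips. A toy sketch, where `decide()` stands in
for the opaque model under audit and is deliberately (and hypothetically)
biased so the audit has something to find:

```python
# Black-box paired audit: identical applications, differing only in the
# protected attribute. Any decision flip is evidence of discrimination.
# decide() is a hypothetical stand-in for the model being audited.

def decide(application):
    score = application["income"] / 10000 + application["credit_years"]
    if application["group"] == "B":   # the hidden bias being audited for
        score -= 3
    return score >= 8

def paired_audit(base_applications, protected_key="group", groups=("A", "B")):
    """Count applications whose decision flips when only the
    protected attribute changes."""
    flips = 0
    for app in base_applications:
        outcomes = set()
        for g in groups:
            variant = dict(app, **{protected_key: g})
            outcomes.add(decide(variant))
        if len(outcomes) > 1:
            flips += 1
    return flips

apps = [
    {"income": 50000, "credit_years": 4},
    {"income": 90000, "credit_years": 10},
    {"income": 30000, "credit_years": 6},
]
flips = paired_audit(apps)
```

This only shows the audit mechanic; whether such evidence clears the legal
bar for a discrimination suit is exactly the open question raised above.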

~~~
CamTin
Don't these problems already arise when trying to prove bias in a legacy meat-
based intelligence?

------
sologoub
What this article is also missing is that a lot of the existing data problems
have to do with the cost of obtaining the data. When we move to more
universal collection of data in a structured format that can then be used to
further train the model, you actually end up with better representation.

However, this all falls apart with the current access to healthcare. If the
access is not universal, then you can’t expect the results to be anywhere near
equal or at least similar. We really need to solve the healthcare access
problem.

The other item I find questionable is the example with home-based rehab vs a
facility. Sure, for better-off patients with a good home environment,
transportation, good food, etc., being in that good/positive environment will
likely lead to better outcomes. However, if the person doesn’t have that, is
that still better than a facility? Would be great if we saw data adjusted for
this disparity.

~~~
AlexandrB
> When we move to a more universal collection of the data in a structured
> format that can then be used to further train the model, you actually end up
> with a better representation.

I'm not sure how that's true. Access to products and services that do this
kind of collection is very much a class issue. Poor people, especially in the
US, can't afford regular physicals or personal health trackers like a Fitbit.
Additionally, the personal health technologies with the best, most accurate
data are the most expensive ones - e.g., Apple Watch vs. generic fitness band.

If we lived in a world where class was separate from race or gender, you might
be correct. But that's not the case.

------
chriselles
I would have thought that a scaled medical-diagnosis AI/ML system with zero
marginal cost for each additional user would provide "better, faster, cheaper"
diagnosis through increased primary health care capacity and reduced costs
per patient, and would thereby reduce disparity.

What am I missing?

------
Synaesthesia
AI could be used to improve the health outcomes of everyone. It all depends on
how we use it.

------
mikelyons
As A.I. does more of the diagnostic work for doctors, the skills doctors grow
through accumulated experience will atrophy. It's similar to how few people
now farm, and traditional farming methods end up being generationally
forgotten.

------
w323898
>failing NYTimes casts FUD on vaporware
>Frontpage of HN

sounds about right

