Opinion: A.I. Could Worsen Health Disparities (nytimes.com)
80 points by pseudolus 16 days ago | 78 comments



Dr. Khullar suggests that AI will exacerbate biases in medical practice. His fundamental concern is that machine learning will codify biases and become a self-fulfilling prophecy. But there is scant evidence that AI will worsen these disparities.

If anything, a machine-learning approach addresses his concerns better than a traditional one, because models can be updated much more quickly to correct for identified biases. Doctors spend years and years of hard work becoming efficient and effective human algorithms themselves, and updating those human algorithms in the face of newer evidence is difficult. In standard practice, biases are often invisible and uncodified to begin with. "Moral intuition" is something all doctors use, but it's also something of a black box in nearly every real-world use case.


I'm speaking as the former CTO/co-founder of a medical imaging ML firm (for 3 years):

1. There is already a major bias in medical diagnosis: a bias favoring those who can actually pay.

2. Automating even parts of the diagnostic process reduces cost, which is a huge benefit to everyone.

3. Not everything gets done immediately. Let's figure out the basics first (getting classifiers working on whatever dataset we have) and then focus on getting it to work on everything. It isn't like medicine was right from day one... heck, I seem to recall leeches and bloodletting being the norm for a long time.

4. Almost every doctor I spoke to was afraid of ML/AI because it pierced their forced scarcity and threatened their wages. I might argue that health disparities are worsened currently because medical boards throttle residency programs and fellowships to create an artificially constrained supply and hence high prices. (Before I get the rote response of "of course doctors will never go away": yes, they won't go away, but they will focus less on rote things and increase throughput, thus increasing supply and decreasing wages.)

5. We got all our training data from minorities. Incidentally, foreign countries are a lot more generous with training data. For our ML diagnostic firm, we had envisioned giving the product away for free in poorer countries where we could just get training data.


>4. Almost every doctor I spoke to was afraid of ML/AI because it pierced their forced scarcity and threatened their wages. I might argue that health disparities are worsened currently because medical boards throttle residency programs and fellowships to create an artificially constrained supply and hence high prices.

Thank god. The medical cartel needs to go.


Medical boards don't control residency programs, hospitals and the federal government do.


>> Medical boards don't control residency programs, hospitals and the federal government do.

Medical boards are the only ones who can train and grant residencies. The US government cannot train someone, nor can it grant someone a residency. Nor can the US government grant someone a fellowship. As an example, to become a "Fellow of the American College of Surgeons" you would apply here (https://www.facs.org/member-services/join/international), and the cartel then decides who gets to join the select group. There are three dozen such cartels: one for dermatology, one for radiology, etc. Nowhere in the process noted at that link does it specify that the US government gets to decide membership.

The government can fund it, but anyone can fund it. The funding is minuscule -- the 2017 resident salary is $57k, a single-digit percentage of billings by residents.


Medical boards can train and grant residencies, but they don't control how many actual working-in-a-hospital residency slots are available downstream of them.

(There are never enough.)

Anyone can fund it, just like anyone can fund filling in the potholes in front of my apartment. For some reason, though, until the government gets around to it, they aren't going to get filled.


No one fixes random potholes because they don't get paid to do it. If fixing potholes had a 500% profit margin the way medicine does, people would start fixing them immediately.

Now, if "The American Fellowship of Pothole Fixers" claimed there was no money to fix potholes and only their coveted group of 300 pothole fixers are allowed to fix potholes, i'd call BS.


The AAMC is a cartel and artificially limits the supply of doctors each year.


How do medical boards throttle residency programs? The biggest limiting factor today is the Medicare funding cap.

https://news.aamc.org/for-the-media/article/gme-funding-doct...


In the US, the average resident makes ~$57k these days. If you're familiar with medical billing rates in the US, a week of billings covers the entire annual salary. For specialists (e.g., derm, radiology), a day of billing can cover the resident's entire annual salary. Even if you assume not all bills are collected, or that many are negotiated down by insurers, the profit margin on residents is off the charts.

Given billing rates, "we don't have money" is a very convenient answer for why there aren't more residents (and hence more future supply of doctors). Heck, given the wild profit of a resident, I'd personally fund their annual salary for a share of the annual billings.

The real answer is that current doctors, specifically specialty boards, must actually be willing to train residents, however they are funded (by Medicare, by hospitals, by me, etc.) -- and specialty boards are not willing. It would increase supply and decrease their future wages. Openings are very carefully throttled to create artificial scarcity.

Medical specialty boards are essentially cartels.

This is hard to imagine as a technologist because we largely operate in a free market. Anyone can enter the market and opt to work for less money than you. A foreign worker can try to do your job for less. The job can be off-shored.


You don't understand how it works. Specialty boards don't control funding for residency programs.


Then explain how these things work instead of just giving a glib "you're wrong. Why? Because I said so."


Have you read the link I posted in my grandparent comment above? It's all right there.


@nradov - I'd love to understand your point more, but all you've shown is that there is some funding gap for residency programs. Funding gaps exist for unprofitable things where you don't get back immediate money for each dollar you put in. There is a funding gap for the arts, for urban preservation, etc.

Medical resident positions are wildly profitable entities, so "funding gap" sounds like a boogeyman excuse.

Medical resident positions are so wildly profitable that if any medical board were willing to train residents/fellows, I'm certain I could get VC/PE/HF funding to fund those spots and no one would have to worry about funding gaps. Who wouldn't want to fund a position that produces 10x revenues?!

I'd be willing to bet that for many residencies (cosmetic derm, spinal surgery, ortho, radiology) residents would be willing to work for absolutely free, given the massive windfall they expect in five years' time.

"Medicare funding gaps" are boogieman excuses provided by the AMA and medical specialty boards to not train doctors, especially specialists and sub-specialists and create artificial scarcity and increase their own wages.


Medical boards don't train residents at all. And board certification isn't even required to practice medicine; it's entirely optional. You're complaining about the wrong problem.

Residents are trained in teaching hospitals, most of which are non-profit. So VC/PE/HF funding isn't applicable. The federal government provides the majority of funding for residency slots and there is a hard cap.

Teaching hospitals certainly could fund more resident slots themselves but they generally choose to spend their money on other priorities like new MRI machines or free care for indigent patients or shiny new buildings named for major donors. Hospital budgeting decisions are made by business executives and BoD members just like any enterprise; they aren't controlled by the AMA or medical boards.

Training residents isn't as profitable as you think; there are huge overhead expenses for supervision, insurance, equipment, and support staff. But if you don't believe me then feel free to get VC funding, found a new for-profit teaching hospital, obtain ACGME accreditation, and hire a thousand residents. I expect you'll find the economics don't work, but maybe you'll disrupt the industry and make a fortune?


Do technologists truly operate in a free market? There are rampant anti-competitive practices across tech. I think it's an SV libertarian fantasy that they are in a free market, a fantasy they tell themselves to paper over their squashing of rivals.


The job market is very competitive. You don't need anyone's permission to enter it; all you have to do is do good work. Salaries are high due to a combination of massive demand and the fact that it takes a long time to get good at it. Even their stupid collusion attempts are basically fruitless, because the tech market isn't just four colluding companies; there are thousands. You don't have to go from Google to Apple, you can go to Amazon or Red Hat or numerous others, or create your own startup. That number of companies could never secretly collude -- they couldn't even get away with it at four. Which is why salaries are still high.

The true threat is companies crushing smaller rivals, because that's how, in the long term, you end up in a situation where there aren't thousands of tech companies: no one can compete without the assent of one of the major ones, and they would rather destroy you, compete with you, or buy you out than let you grow independently. And that's how salaries could fall in the long term. But you tell people that supporting walled gardens and closed proprietary services could lower their long-term salary and they don't hear you, because they're after the quick buck today.


We’re on this forum because we, as little guys, can often beat the big guys at things. It’s more of a free market than most things. Though nothing is a completely unregulated market.


Damn, funny to see how these things are perceived years after the event. The residency cap didn't come about by coincidence. The AMA pushed for it, and they pushed for similar things in universities: https://usatoday30.usatoday.com/news/health/2005-03-02-docto...

The AMA is actually evil. They've probably killed people in America through their protectionism. A truly banal evil: the pursuit of increased physician salaries.


Fairness in AI/ML has been a huge talking point over the last 2 years in the community. I know of around 2 panels/conferences with major industry/academic participation in the US that are scheduled in the next few months.

Contrary to the image of mathematicians as rather consequence-averse, metric-driven people, I have found that university labs place a large emphasis on trying to make sure their models do not have such biases.

It is a serious issue worth attention, but the response from the community has been prompt.


James Mickens on this topic:

https://youtu.be/ajGX7odA87k

> Some people enter the technology industry to build newer, more exciting kinds of technology as quickly as possible. My keynote will savage these people and will burn important professional bridges, likely forcing me to join a monastery or another penance-focused organization. In my keynote, I will explain why the proliferation of ubiquitous technology is good in the same sense that ubiquitous Venus weather would be good, i.e., not good at all.

> Using case studies involving machine learning and other hastily-executed figments of Silicon Valley’s imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits. At some point, my microphone will be cut off, possibly by hotel management, but possibly by myself, because microphones are technology and we need to reclaim the stark purity that emerges from amplifying our voices using rams’ horns and sheets of papyrus rolled into cone shapes. I will explain why papyrus cones are not vulnerable to buffer overflow attacks, and then I will conclude by observing that my new start-up papyr.us is looking for talented full-stack developers who are comfortable executing computational tasks on an abacus or several nearby sticks.


Thanks for the link. At some point he says "the gadgets are the true people of the Earth", which more or less resembles what Jacques Ellul first wrote about 60 years ago [1]:

> Hard determinists would view technology as developing independent from social concerns. They would say that technology creates a set of powerful forces acting to regulate our social activity and its meaning.

and

> According to this view of determinism we organize ourselves to meet the needs of technology and the outcome of this organization is beyond our control or we do not have the freedom to make a choice regarding the outcome (autonomous technology) (...) In his 1954 work The Technological Society, Ellul essentially posits that technology, by virtue of its power through efficiency, determines which social aspects are best suited for its own development through a process of natural selection.

I used to be a pretty big believer in things like "technology will make everything better", but after reading some of Ellul's books I've started to have my doubts about that.

[1] https://en.wikipedia.org/wiki/Technological_determinism#Hard...


I’m going to have to read this book. Thanks for mentioning it.


> "the gadgets are the true people of the Earth",

and corporate businesses are fast becoming the true citizens of nations.


Great, now I can quote somebody about why I think technology is evil who isn't the Unabomber. Thanks.


Sarcasm?

I only ask because you're saying this on a website. A website focused on funding technology startups. Hosted on the internet. Built by DARPA grants. Like, this doesn't seem like your sort of place if you're serious about thinking technology is evil.


Sometimes the people in the best position to judge are the ones who know the most.


For what it's worth, the name of the conference this talk was given at takes on an interesting connotation when read as a combination of English and Dutch: "Use" in English being use, and "Nix" in Dutch being readable as a shorthand/homophone for "niks", meaning nothing. It sounds kind of forced when explained like this, but for me (and I'm sure for other Dutch people frequenting this forum) it's quite natural to interpret it that way. I.e., the conference name can be read as "Use Nothing", which seems to relate to/reinforce the topic of this talk.


Personally, I find his writing tiresome. It's literally the same setup and joke, with little content.


Personally, I find his talks fantastic. However, to be fair, I also normally have trouble telling his actual points from the jokes, and I rarely see what his main point is (sometimes I'm pretty sure it's just a bunch of stuff randomly thrown together).

That being said, his talks are a great example of how to make an otherwise dry topic very interesting and consumable by laymen. And he typically takes a skeptical approach to technology just working. Combine those two things and you get something that we as an industry desperately need. A skeptical and conservative view towards emerging technology that arbitrary people can consume (especially technology inept decision makers).

Currently, the best presentations to decision makers about new technologies are all made by evangelists, wide-eyed early adopters, and snake-oil salesmen. These presentations encourage those with power to make poor decisions (they don't want to be left behind for the next big thing; they already thought the internet wasn't going to work).

James Mickens provides a much needed splash of cold water. And it's in a form that is easy to listen to when you don't understand technology. The typical approach for why a new technology is a bad idea is a boring technical digression where a bunch of people say a bunch of words that nobody understands. James Mickens makes it interesting and compelling without getting bogged down. Somebody does need to get into the details at some point, but if we don't have a way to signal that some new things are in fact a bad idea then nobody is going to get the chance and we'll be stuck implementing the next bad idea yet again.


More and more people I respect are becoming skeptical of Silicon Valley, or at least the attitudes attributed to it, to the point where I don't think convincing people is necessarily the problem any more. What is lacking is a solid plan of what to do instead.


My takeaway from the talk is basically "go slower". He points out that history can be a good guide. In academic fields you need to get IRB approval for human subjects. A similar system might make sense for models applied to people; for example, the system used for sentencing prisoners probably should have some kind of third-party oversight.


> In academic fields you need to get IRB approval for human subjects.

On the other hand, that has its own problems:

https://slatestarcodex.com/2017/08/29/my-irb-nightmare/

It seems like the real issue is the information asymmetry. You can build hot garbage in five days but it takes the customer five months to figure it out, by which point they've lost all their data to malware. Meanwhile on day zero the carefully-designed application is $50 and the hot garbage is "FREE*", so which does the user choose without any other way to tell the difference?


I get what they are usually saying and I generally agree. For example, DHH has a ton of material on how to do things differently, or more sanely if you will. It's just: now that I am convinced, then what? I can try to incorporate some things, but it doesn't change much overall. So while I can appreciate the "gospel", there needs to be a path for people who are already on board. Maybe an organization, methodology, role, or even a damn certificate. Because there are thousands of people learning "growth hacking", "agile", or whatever every day.


> What is lacking is a solid plan of what to do instead.

One of the issues is that surveillance capitalism is a collective action problem. If companies have more data about people like you then they can capture more of the consumer surplus when they sell you things. But they don't need data about you specifically for that, only aggregate data about people like you. So if you don't sell your privacy but someone else does, you don't get the free services but you still pay the higher prices. So everybody sells out.

Europe tried to address this with the GDPR, but the amount of friction that creates is problematic. What might work better is that instead of regulating collection, regulate third party distribution. Put Equifax out of business because they can't have a giant data breach if no one can give them any data. And if there are no more credit scores and it's harder for people to get a loan, housing prices would come down to what people could afford without having to pay interest on half a million dollars to the bank for thirty years. Probably a good thing.

Combine that with a big honking tax on advertising revenue to reduce the profits from collecting the data for that purpose and you reduce the incentive to collect data on everyone, without affecting smaller companies that don't sell advertising or user data to anyone.

But that would be a huge political feat. You'd be going after multiple hundred billion plus dollar companies in addition to the banks.

The other alternative would be for enough individuals to recognize the collective action problem and selflessly help their neighbors by not patronizing these companies, but that's not a trivial feat either.


I don’t disagree. I am talking about an even smaller scale, though. A lot of people would risk being deemed a poor performer and getting fired if they did things correctly, because what they would be delivering wouldn’t be valued. There needs to be a path where you can join an organization, take a course, go to a conference, get a certificate, or whatever, so people can differentiate. I essentially think those influential in this area are overestimating “will” over “way” in “if there’s a will, there’s a way”. Today, with information proliferation, if there’s a way people will come to you. Maybe it could be as simple as a six-hour workday. That isn’t something most companies would do without thinking about it.


There are individual-level consequences, but it's a macro-level problem. Doing the right thing costs more in the short term but less in the long term. But then someone quotes the Keynesian dodge ("in the long run we are all dead") as if humans will be extinct before we have to pay the piper, as if we're talking about billion-year timescales rather than a few years or months.

And maybe we're back to the information asymmetry. People don't connect the fact that using Facebook's VPN could make them have to pay more for groceries than the cost of just paying for a different VPN, so they use it, and it costs them more than they expect it to, and after being multiplied by a thousand things like that, they don't understand why they have so much more debt than their parents did. The fact that the two are related hasn't really entered the public consciousness.

But it's not at the level of company-to-software-developer, it's at the level of customer-to-company. Companies can already tell what kind of developers they're employing. Companies know when they're selling out. But customers generally don't know that about companies.

It's like the whole religious war between Apple and Google. Is Android or iOS the best phone for user privacy? Trick question. It's PureOS. But most people aren't even aware of the possibility of that.


What I"m getting from the article: People seem to think that AI is magic, and similar to any other technology. ("THESE DEVELOPERS PUT IN BAD STUFF") That's not how AI works. You have to be aware of bias, introduce random error, accept false positives/negatives, and avoid overfitting.

That's not something that someone that took a boot camp on Tensorflow is going to understand a lot about.

EDIT: Also, if you're using the results of the AI process, you should understand the metadata about the results and where a good balance is.
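
To make that concrete, here is a minimal sketch (plain Python, made-up labels and group names, not any particular library's API) of the kind of per-group error-rate audit I mean: a classifier can look fine on aggregate accuracy while its false positive/negative rates differ wildly between groups.

    # Hypothetical sketch: auditing a classifier's error rates per demographic group.
    # All names and data are illustrative.
    from collections import defaultdict

    def per_group_error_rates(y_true, y_pred, groups):
        """Return false-positive and false-negative rates for each group."""
        counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        for truth, pred, g in zip(y_true, y_pred, groups):
            c = counts[g]
            if truth == 0:
                c["neg"] += 1
                c["fp"] += int(pred == 1)
            else:
                c["pos"] += 1
                c["fn"] += int(pred == 0)
        return {
            g: {
                "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else float("nan"),
                "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else float("nan"),
            }
            for g, c in counts.items()
        }

    # Toy example: 50% overall accuracy hides perfect performance on group "a"
    # and total failure on group "b".
    y_true = [1, 0, 1, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(per_group_error_rates(y_true, y_pred, groups))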


Any technological progress in health tech worsens disparities, because when it's new, it's expensive. The more groundbreaking the tech, the greater the disparity, because the effect is so drastic for the money.

That doesn't mean we should not improve health tech.


It’s also sometimes needlessly expensive when companies make excess profits on medical gear. It’s up to governments (it really is) to help everyone get access to improved care.


Medical device companies have very high gross profit margins but net profit margins around 7%, which is pretty typical for high tech manufacturing. That suggests that there isn't much in the way of excess profit.


Agreed. I think the (financial) excesses of the health care system are mostly related to the litigiousness of the USA and the administrative burden.


"Overall annual medical liability system costs, including defensive medicine, are estimated to be $55.6 billion in 2008 dollars, or 2.4 percent of total health care spending."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3048809/

Litigiousness is an issue but it's not the biggest one.


My understanding is that the main proposed mechanism of action is that fear of lawsuits results in excess treatments.


That's included in the 2.4% estimate.


This thread went from factual to political very quickly.


Some medical devices also have other net cost benefits though. For example, shorter, less intensive hospital stays from less invasive testing.


>companies make excess profits on medical gear.

Define excess profits.


"Dhruv Khullar (@DhruvKhullar) is a doctor at NewYork-Presbyterian Hospital, an assistant professor in the departments of medicine and health care policy at Weill Cornell Medicine, and director of policy dissemination at the Physicians Foundation Center for the Study of Physician Practice and Leadership."

OK. Also noted that this is an opinion piece and not journalism.

From the opinion piece: "A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. "

Study link: http://news.mit.edu/2018/study-finds-gender-skin-type-bias-a...

Exact stats:

"In the researchers’ experiments, the three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned — to more than 20 percent in one case and more than 34 percent in the other two."

From the NYT opinion piece: "A.I. programs used to help judges predict which criminals are most likely to reoffend have shown troubling racial biases, as have those designed to help child protective services decide which calls require further investigation."

Associated links: https://www.propublica.org/article/machine-bias-risk-assessm... https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm...

Relevant quotes from each:

"The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants."

"48 percent of the lowest-risk families were being screened in, while 27 percent of the highest-risk families were being screened out. Of the 18 calls to C.Y.F. between 2010 and 2014 in which a child was later killed or gravely injured as a result of parental maltreatment, eight cases, or 44 percent, had been screened out as not worth investigation."


It blows my mind that anyone thinks AI for recidivism is a good idea, given the well documented biases inherent in the existing system.


Well, the well documented biases in the existing system are exactly why you might think that AIs would be a good idea.


Not if by AI you mean anything in the same neighborhood as supervised learning.

I would respond that A.I. is the -only- realistic hope we have for reducing the biases in our medical system. The systemic and individual-level bias of the medical system is not going to go away due to some sudden enlightenment. It's true that, to some extent, the first wave of AI applications will inevitably carry with them some of the biases that exist in the current medical system.

These biases are going to lead to measurably disparate outcomes. Fortunately, measurably disparate outcomes are exactly the type of thing that can be used to train or otherwise guide the improvement of a machine learning model.

As long as we remain mindful that similar biases will be present in the first wave of applications, AI and the data slicing and dicing typically done during model development will be the best tools for detecting and then mitigating these biases.
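
As a rough illustration of the kind of slicing and dicing I mean, here is a hedged sketch (toy data, illustrative group labels, scikit-learn's LogisticRegression used only as a stand-in for whatever model is actually in play): one of the simplest mitigations, reweighting training examples by group frequency, tends to narrow the per-group accuracy gap, usually at some cost to majority-group accuracy.

    # Illustrative sketch only: reweighting examples so an under-represented group
    # contributes proportionally to the training loss.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy data: 900 patients from group 0 and 100 from group 1. Group 1's outcome
    # depends on a different feature, so an unweighted model fit mostly to group 0
    # serves group 1 poorly.
    X = rng.normal(size=(1000, 5))
    group = np.array([0] * 900 + [1] * 100)
    y = np.where(group == 0, X[:, 0] > 0, X[:, 1] > 0).astype(int)

    def fit_and_score(sample_weight=None):
        model = LogisticRegression().fit(X, y, sample_weight=sample_weight)
        return {g: round(model.score(X[group == g], y[group == g]), 3) for g in (0, 1)}

    # Weight each example inversely to its group's frequency.
    weights = 1.0 / (np.bincount(group) / len(group))[group]

    print("unweighted per-group accuracy:", fit_and_score())
    print("reweighted per-group accuracy:", fit_and_score(weights))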


> I would respond that A.I is the -only- realistic hope we have for reducing the biases in our medical system.

I would counter that the only realistic hope is social change. A.I. or not, biases will persist in medicine as long as they persist in society at large. The idea that an unbiased A.I. will arise from a process designed and run by biased individuals sounds like utopianism. Technology is not magic and it won't solve our "hard" social problems for us.


Yeah really!

There's plenty of evidence that medicine today has deeply embedded biases. Disparities in Black maternal health outcomes have been thoroughly covered in the literature and even mainstream media.

On the other hand there's plenty of evidence that ML and AI magnify inequality in other areas.

So relying on technology as a magic bullet to solve these societal problems seems ... naive.


We should work hard to minimize biases in society, but given that they tend to arise spontaneously (see the Robbers Cave experiment), I don't hold out any hope of ever actually eliminating them. And some biases, such as lookism, might be too deeply wired for us to have any hope of eliminating.


Evidence based medicine with standardized treatment protocols and checklists provides a realistic hope for reducing the biases in our medical system.


How long has the medical system been trying this sort of thing? And how close are we, therefore, to the limits of how much bias we can eliminate?

It seems like there are few places where we are not using evidence-based medicine. Standardized treatment protocols/checklists are already being pushed and applied as much as possible. The people who don’t want to do it are not likely to do it. In some sense, AI-based decision assistants are doing evidence-based medicine and standardizing treatment protocols, just in places where simpler flowcharts or checklists would be inadequate.


The medical system has been trying this sort of thing for decades, but evidence-based medicine has only become a major focus relatively recently. And it is working, but you can't expect fast changes in a complex system.

I don't think it's possible to quantify the limits of bias we can eliminate. But the system is moving in the right direction. And AI isn't really required for most of this work; it can provide improvements in some areas but simpler statistical techniques are usually good enough.


Not my exact field, but I keep track of the research for potential applications.

The general vibe is that there are potentially large welfare gains to be achieved if algorithms, ML, or statistical methods are integrated into human decision making in a principled way. People have an innate tendency to treat noise as signal. That does not mean that the dangers mentioned in the article are not real [2]. We should be very aware of them in their current forms and avoid repeating them.

A few of my favorite papers:

1. Human Decisions and Machine Predictions. The Quarterly Journal of Economics, Volume 133, Issue 1, February 2018, Pages 237–293. https://doi.org/10.1093/qje/qjx032 https://www.nber.org/papers/w23180

2. Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People (2019) https://dl.acm.org/citation.cfm?doid=3287560.3287593

3. Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability https://arxiv.org/abs/1809.04578

4. Direct Uncertainty Prediction for Medical Second Opinions https://arxiv.org/abs/1807.01771


I work in healthcare, and one thing that's always top of mind for me is that the data we work with is (generally) only indicative of the physicians'/billers' best guess.

So, say we get a bunch of diagnosis codes from a hospital. Codes are generally added by medical billers after a patient is discharged, based on the physician's input and other data on the patient record. So at this point, the data you generally work with has gone through two different humans, who applied their (best-attempt) subjective viewpoint to this patient.

This generally works fine for a lot of common, evident conditions -- things like heart attacks, fractures, and so on. But for things that are complex, like sepsis leading to more evident conditions, the data may not necessarily capture that sepsis even occurred.

Not to say this is a problem unique to healthcare, but it's something that's not talked about often. A lot of the data we train and model on is based on a human's best guess, which may in some ways be limiting given really complex, dynamic processes.


As with all discussions of AI, I feel the discussion in the comments is unmoored from actual predictive analytics in healthcare. We have this fantasy future of omnipotent AI machines controlling our destiny, when in reality the here and now is all we have solid evidence about. The reality is that all of this "AI" in healthcare is just fancy mathematical equations crunched on increasingly large datasets, and until humans themselves are much less biased, the math isn't going to solve these social problems.

My job involves creating predictive models for the VA hospital system. In the past month, I have worked on models predicting the probability of death in the next year, the probability of receiving social work services, the probability of receiving a screener which indicates food insecurity, and more. The idea behind all of these is to take our clinical intuitions and the intuitions based on prior research, then gather variables that allow us to use our theoretical intuition to predict future health outcomes. These predictive models may turn into dashboards, which are basically daily or weekly tables that show clinicians which of their patients are predicted to get certain outcomes.

Now, how does this all circle back to biases and disparities? All of our models include racial, ethnic, gender, rurality, age, and many other categories of sociodemographic information. However, at every step of this process, there is a human (whether PI, analyst, or clinician in the final step) looking at numbers/variables/values and making decisions.

Thus, I don't think we can truly separate the AI from the human in current healthcare analytics. We do our best to control for disparities and get down to the brass tacks, the actual medical information, but there is simply too much human decision-making in the current workflow to truly divorce the "disparity differential" from whatever humans would do on their own sans mathematical modeling.

Overall: our models are mere collaborators, and until we minimize our personal and systemic biases and disparities, we can't hope to use our fancy mathematical models to minimize them for us.
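
For what it's worth, one concrete audit that can be run on any of these risk models is within-group calibration: do patients predicted at roughly 30% risk actually experience the outcome about 30% of the time in every demographic group? Below is a generic sketch with made-up data (not the VA's actual pipeline or variable names), just to show the shape of the check.

    # Generic within-group calibration check; data and group labels are made up.
    import numpy as np

    def calibration_by_group(y_true, y_prob, groups, n_bins=5):
        """Per group: (mean predicted risk, observed outcome rate, count) per probability bin."""
        y_true, y_prob, groups = map(np.asarray, (y_true, y_prob, groups))
        bin_idx = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
        report = {}
        for g in np.unique(groups):
            rows = []
            for b in range(n_bins):
                sel = (groups == g) & (bin_idx == b)
                if sel.any():
                    rows.append((float(y_prob[sel].mean()),  # mean predicted risk
                                 float(y_true[sel].mean()),  # observed outcome rate
                                 int(sel.sum())))            # patients in this bin
            report[g] = rows
        return report

    # Toy usage: outcomes are drawn to match the predicted risks, so the first two
    # numbers in each row should roughly agree within every group.
    rng = np.random.default_rng(1)
    probs = rng.uniform(size=2000)
    outcomes = (rng.uniform(size=2000) < probs).astype(int)
    grp = rng.integers(0, 2, size=2000)
    print(calibration_by_group(outcomes, probs, grp))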


Great comment! One thing I'd add:

> until we minimize our personal and systemic biases and disparities, we can't hope to use our fancy mathematical models to minimize them for us.

And as long as technologists are in denial about the extent to which personal and systemic biases influence reality (as we're seeing in this thread), tech will continue to reinforce and magnify these biases.


Another potential problem is that minorities are often more reluctant than average to let their genetic information be used in health research. It's understandable given historical issues but will probably lead to bad outcomes in the future.


This may be true, but it happens today as well, without AI. Big pharma corporations choose to do research on treatments for diseases that affect the occidental world, often setting aside the needs of poor nations.


This is addressed in the article:

"The risk with A.I. is that these biases become automated and invisible — that we begin to accept the wisdom of machines over the wisdom of our own clinical and moral intuition. Many A.I. programs are black boxes: We don’t know exactly what’s going on inside and why they produce the output they do. But we may increasingly be expected to honor their recommendations."


I do understand that there is a serious possibility of deploying a bad black-box AI; a number of questions come to mind, though.

Isn't it true though, that human intelligence (and especially corporate/government/etc. intelligence) is also susceptible to biases which are invisible? Also, do we know why humans or groups of humans produce the output they do? Is there some kind of black box testing procedure we could use to increase trust in AI to a point at least equal to humans?


The presumed argument here is that you can challenge individual people, corporations, etc. more easily on discriminatory behavior than you can challenge an algorithm. If an algorithm happens to refuse to issue loans to black people, who's the class action lawsuit going to sue?


Presumably you could sue the bank that is using the AI to make the loan decisions, or is there something I'm missing?


You'd have to prove that the A.I. was discriminating based on a "protected class" and not on some other basis. But you have no insight into the A.I. or its training data. Nor do you have a comparable A.I. of your own to run A/B experiments that can prove discrimination. Now what?


Don't these problems already arise when trying to prove bias in a legacy meat-based intelligence?


There's an additional danger. Many times society moves forward when the standard-bearers for what was "acceptable" or "correct" before retire or die out. A.I. doesn't die. An A.I. constructed with today's biases may, in some form, outlive its creators and carry these biases well into the future.

"Science progresses one funeral at a time." -- Max Planck


What this article is also missing is the fact that a lot of the existing data problems have to do with the cost of obtaining this data. When we move to a more universal collection of the data in a structured format that can then be used to further train the model, you actually end up with a better representation.

However, this all falls apart with the current access to healthcare. If the access is not universal, then you can’t expect the results to be anywhere near equal or at least similar. We really need to solve the healthcare access problem.

The other item I find questionable is the example with home-based rehab vs a facility. Sure, for better-off patients with a good home environment, transportation, good food, etc., being in that good/positive environment will likely lead to better outcomes. However, if the person doesn’t have that, is that still better than a facility? Would be great if we saw data adjusted for this disparity.


> When we move to a more universal collection of the data in a structured format that can then be used to further train the model, you actually end up with a better representation.

I'm not sure how that's true. Access to products and services that do this kind of collection is very much a class issue. Poor people, especially in the US, can't afford regular physicals or personal health trackers like a Fitbit. Additionally, the personal health technologies with the best, most accurate data are the most expensive ones - e.g. Apple Watch vs. a generic fitness band.

If we lived in a world where class was separate from race or gender, you might be correct. But that's not the case.


I would have thought that a scaled medical diagnosis AI/ML system with zero marginal cost for each additional user would provide “better, faster, cheaper” diagnosis through increased primary health care capacity and reduced costs per patient, and would thereby reduce disparity.

What am I missing?


AI could be used to improve the health outcomes of everyone. It all depends on how we use it.


As A.I. does more of the diagnostic work for doctors, the skills doctors have grown through accumulated experience will atrophy. Similar to how few people now farm, and traditional farming methods end up being generationally forgotten.


>failing NYTimes casts FUD on vaporware
>Frontpage of HN

sounds about right



