'AI' Is Supercharging Our Broken Healthcare System's Worst Tendencies (techdirt.com)
205 points by rntn 6 months ago | 161 comments



I'm not sure I understand why AI is supposed to make anything better, at least under the regulatory regime we have right now.

1. AI doesn't do anything new. It's just human thinking done better in the best-case scenario. So the plus side is that we may come up with ideas that a human might have come up with 10-15 years later. However, a lack of ideas is not the problem we face at all. We have solutions to nearly every problem we face. We have enough wealth to implement nearly every solution we have. The real issue we face is that the wealth is concentrated in certain hands that do not want it to reach others' hands, and those who are capable of implementing the solutions we have prioritize maximizing the wealth they capture with those solutions. AI, as long as it remains in private, wealth-focused hands, will only serve to turbocharge these issues.

2. Under the current regulatory regime, responsibility for what AI does wrong is not clear. Much like how Uber and the other gig economy companies leveraged the opportunities created by the spread of the smartphone to become "regulatory innovators" - whose only real purpose was to dismantle regulations, impose costs on third parties, and capture the profits for themselves - AI is already being used as little more than a legal innovation. Companies are exploiting the gray area around legal responsibility to do things they would never have a human employee do, or to use other people's works without paying for them, without getting sued for the theft.

Expect AI to continue spreading into more legal gray areas as companies take advantage of the lack of clarity on legal responsibility to raid and pillage our economy and societies.


Right now, the only products I've seen leveraging LLMs are the tools from Nuance[1] that fill in physician notes. Arguably this is not an improvement; physician notes are already an incomprehensible garble of cut-and-paste content (often from other systems).[2] But at least it is unlikely to negatively impact patient care.

So far, AFAICT, the drive is to cut costs by saving the time the physician spends with the EHR. This has been a goal of the EHR systems for as long as they've existed, as it saves money. There are a lot of press releases about products that do more, but it's not at all clear to me that these products are being widely used.

The UnitedHealthcare example in this article is interesting in that it was being used in production. OTOH, it sounds much more like something evaluating business rules or some heuristics rather than an LLM or even an ML-based solution.

[1]: https://blogs.microsoft.com/blog/2023/08/22/microsoft-and-ep...

[2]: https://ehrintelligence.com/news/ehr-clinical-note-length-co...


"So far, AFAICT, the drive is to cut costs by saving the time the physician spends..."

In mental health, a lot of providers don't even file on the customer's insurance. The patient pays cash/charge and is stuck with filing claims. Providers have washed their hands of that dirty work.

During my last job (large firm, large metropolitan city) the employer subcontracted covid testing, erecting a large tent to handle 8 covid exams at once. Here's the procedure:

- you badge in; subcontractor used our hw/network.

- you then give them your name and DOB, which the PC already told them

- they write your name on specimen collection tube, and send you to bay X. The nurse at bay X sees you coming

- at the bay you are asked for your name and DOB again. And you badge in again.

That level of duplication can only arise as a function of billing my then-employer for extra non-productive labor, or as CYA protection against fraud. Either way it's damn annoying.

At my last doc appointment I spent the first 20 mins answering THEIR BILLING questions: was my name xyz? Did my insurance change? Etc. No services have been rendered yet.

And despite all the IT for billing and the PC displays showing common-sense health tips, there was no display of how long one would wait for the doc. It was a blind date: maybe they'd show; maybe you had better things to do.

In sum, whatever crappy IT health care has, it's there to help them, not customers. God damn: they made the paperwork hell they live in and, as I say, mental health patients are stuck with the dirty work because the providers can't stand it anymore.

My wife spends 2 hrs a week yelling at Aetna. She's pretty sure they decline claims out of willful stupidity or nefarious greed. I would not be surprised to hear crappy AI is involved.

I absolutely hate that business.

I was solicited multiple times for interviews by health insurance corps. I told them NFW. Their pay sucks. Their concept of quality improvement sucks. They suck.


> The UnitedHealthcare example in this article is interesting in that it was being used in production. OTOH, it sounds much more like something evaluating business rules or some heuristics rather than an LLM or even an ML-based solution.

I'm fairly certain that at least one insurance company VP out there is using the LLM/ML hype to trumpet what is actually a statistical model evaluation. OTOH if it provides cost savings does the business truly care what it is? Especially if they can spin it as 'The computer makes more informed decisions thanks to AI'.


It helps keep doctors treating patients instead of bogged down in documentation and bureaucracy. More time for patients and less stressed-out doctors means better outcomes.


My employer is exploring LLMs in our existing product for transcription purposes in a clinical setting, especially telehealth.

It's pretty damn impressive tbh.


Can we be sure that the model producing those intermediated notes isn't just hallucinating what it thinks the doctor wants to write down?

How do you verify that what gets written down really is what the doctor wanted? How do you automate that to protect yourself against LLM hallucination?


> Expect AI to continue spreading into more legal gray areas as companies take advantage of the lack of clarity on legal responsibility to raid and pillage our economy and societies.

This is exactly what's happening. It doesn't even need AI, just an algorithm that gives companies an excuse for denying care. If there is no human in the loop, nobody is responsible. You can't send an AI to jail, so they can just abuse the system.

Judges need to force companies to stop using algorithms, and to approve all expenses, whenever an algorithm is found to be illegal or potentially illegal; that will end the practice quickly. The burden of proof should be on the algorithm's writer to show it's legal.


There are no solutions, only tradeoffs. The over-reliance on one-size-fits-all "solutions" is what got us into this mess, and AI will prove just another bandaid with its own problems, until we realize this.


Yes and no - sometimes you need tailored complex care from a professional. Sometimes you need routine treatment, but it's going to take 3 months to get an appointment with the given specialist, or maybe in the case of rural areas there isn't even one within 1000km of you.

A lot of good can be done by extending "one size fits all" solutions to simple things there's just no capacity for.


A naïve readthrough would lead one to that conclusion perhaps but the elephant in the room here is capitalism. Pragmatically, any "good enough" solution will be used to replace skilled care to improve patient throughput and thus the bottom line.


The problem is that "skilled healthcare" is essentially a cartel which uses artificial scarcity to increase the price. You can see the counter-example in Cuba, which has an incredibly high number of MDs per capita compared to other countries, and achieves very good healthcare results, especially in the context of its other metrics of development and economy.

There are so pitifully few healthcare practitioners compared to what we need in most developed countries that I don't think too many jobs are going to be at risk any time soon if AI picks up some of the slack.


Cuba is hardly the example if their indentured servant doctors defect to Brazil.

Things must be really, really bad for someone to defect to Brazil.


Exactly - it just shows how terrible the rest of the world is doing if Cuba of all places puts their medical system to shame, simply by training more doctors


Artificial scarcity? Maybe. There aren't but so many households with children that fit inside the Venn diagram of can afford med school, has the intellectual capacity and bloody-mindedness to actually finish med school, and is interested in going to med school.


Yes and no. No doubt the medical profession requires talent and rigor, but I think to some extent the hurdles which are placed in front of students who want to become doctors are artificially high to help justify the small number of graduates medical schools put out, which protects the artificially high salaries.

For instance, if you read up on the frankly cruel residency process which doctors are forced to clear in the US, the insane hours are in large part because one of the doctors who was influential in shaping the current practice was an abuser of stimulants.

So I think it's a bit of a "just so" story to assume that the version of the medical profession we have is the best possible version.

And indeed Cuba is a good counter-example to your argument. Unless Cubans are just a lot more clever and resilient on average than Americans for example.


Yeah I'm not convinced. I've worked adjacent to the healthcare and pharma industries most of my adult life and I can tell you the complexity of the human body beggars belief. You aren't going to run the equivalent of a coding bootcamp for healthcare and get good results.


Not all of medicine is dealing with the full complexity of the human body. A lot of it is applying rote knowledge: essentially following a flow-chart to map a set of symptoms and details about the patient to a diagnostic/treatment plan. Doctors are trained not to deviate from medical orthodoxy, and, for instance, it's been studied that if doctors are told another doctor has already come to a given conclusion about a diagnosis, there is a strong effect that they will confirm the diagnosis even in the face of contravening evidence.

I.e. I don't need "Dr House" to tell me to take 2 and call him in the morning. There is a whole large swath of routine medicine which AI could probably do.

And I am happy if you and your family have been lucky to be blessed with good health and good care, but many people have experienced having a mysterious medical condition, and going to several doctors or waiting months before it can be properly diagnosed. Human medicine is far from perfect.


Yeah. Would you like to review a graph over time of medical outcomes in the US based on this approach to healthcare (we see you, Minute Clinic)? Hint: there's a reason we consistently rank dead last when compared to other industrialized nations.


Sending more students to medical school wouldn't help. Every year there are already students who graduate with an MD but are unable to practice because they don't get matched to a residency program. The first thing we need to do is get Congress to increase Medicare funding for graduate medical education programs (or find another way of funding those).

https://savegme.org/


> can afford med school

I mean, arguably subsidizing med school would be a pretty sensible thing for a government to do if it wanted to increase the number of doctors. The other elements of the Venn diagram can't be modified, I agree, but affordability of med school doesn't have to be there.


> There aren't but so many households with children that fit inside the venn diagram of can afford med school, has the intellectual capacity and bloody-mindedness to actually finish med school, and is interested in going to med school.

That is the artificial scarcity he was referring to.

There is no reason to lump all medicine together in an MD degree/certification.

Why not have one certification for each common problem?


Why not? Symptom overlap springs to mind. Both anxiety and reflux can present 90% of the common symptoms of a heart attack. Get a reflux specialist when what you needed was a cardiologist and you're fucked. Labor costs also spring to mind. I can only imagine the look on some hospital administrator's face when the proposal to triple headcount comes across their desk.


Presumably you would structure such a medical system such that lower level practitioners would know enough to know when to escalate to an MD.

90% of medicine is stuff like people going to the doctor with the flu to get a sick note to stay home from work and instructions to rest and drink fluids. I don't think you need an MD for that.


You've literally described the current healthcare system in the US. Minute Clinic/NPs/PAs providing front line care. It doesn't appear to be working well based on how our country's healthcare system ranks against other industrialized nations.


I don't live in the US anymore, so it's hard for me to speak to the US healthcare system, but as an outsider it seems to me that the problems with the system stem from the incentive structure, and won't be fixed by any technology.

It seems to me the US healthcare system sits in this weird gap where it's not funded by private individuals, but it's also not publicly funded (excluding Medicare/Medicaid/VA/Tricare, etc.).

So it doesn't benefit from efficiencies of a single-payer system like the UK has, and it doesn't have to obey proper market forces either, since you have a bunch of for-profit entities in the middle, but the costs are obfuscated from the consumer unless you're unlucky enough not to have employer-funded health insurance.

So until you fix those core issues, I don't see how technological advancements are going to have much of an effect either positive or negative.

It's an orthogonal issue.


I don't disagree on any particular point except the notion that a medical AI represents an advancement. That claim would have to be studied extensively over decades.


We already have that structure to an extent. Much primary care for minor problems is delivered by a PA or NP. They are trained to escalate to an MD when necessary, although due to lack of training sometimes they miss things that an MD would have caught.


> due to lack of training sometimes they miss things that an MD would have caught.

It goes the other way as well. My aunt is an RN and she loves to tell the story about the time a doctor was in such a rush that she had to call his attention to the fact that his patient was dead.


Here's a fun question: would that be due to gross inattention on the doctor's part or gross overwork due to patient/provider ratios being skewed into bizarro world by cost-cutting measures on the part of the hospital?


Yes, both of which could be helped by augmenting the healthcare system with AI to relieve practitioners of repetitive, menial tasks.


You mean like how healthcare providers were relieved by advancing legions of PAs and NPs into general medicine? If that approach worked we would have seen improvements to healthcare metrics by now. We haven't.


We've already got NPs and PAs "picking up the slack" in the US. Prices are still outrageous for folks without health insurance and healthcare outcomes continue to decline. Adding another half-assed tool to the chain isn't how this gets fixed.


The US is not the only country with healthcare needs. There are plenty of developed nations with public health care systems which primarily suffer from scarcity rather than cost to patient, and AI could help with that.


What part of healthcare provided by rote by untrained individuals at the behest of an LLM sounds like a solution to anything?


So for example, a treatment plan might look like this:

1. Go to a GP and explain symptoms
2. GP orders some tests based on symptoms
3. GP refers you to a specialist based on test results
4. Specialist orders more tests
5. Specialist analyzes results and orders more tests, or prescribes treatment plan
6. Check in with specialist in weeks or months to evaluate results and update treatment

It seems to me that for many common issues, 1-4 could plausibly be replaced by AI guided care with minor downsides.

If you are normally spending a lot of time waiting between steps for a practitioner to become available, then it's going to be a net win for many people.


"with minor downsides" ...


What about nursing? That seems to be the flip side where there isn't the protection of the cartel around doctors, so wages get driven to the floor and we see scarcity due to cost of living and mismanagement.


Nurses might not get paid what they are worth in all cases, but they have amazing job security.

Ironically it's probably easier to replace a lot of what doctors do with AI than what nurses do.


2) is pretty much the definition of a modern startup if you ask me.

Precious few actually make a positive difference.

Obviously that's not the story they'll tell you, or even themselves, but once you start scratching the surface you'll find that they're all about squeezing profits and pushing costs just like everyone else.


> I'm not sure I understand why AI is supposed to make anything better

It's probably because you don't understand how the world works. Most things aren't revolutions; they just let us do good things more often and more cheaply, and so create widespread "wealth". In this case, imagine a doctor who isn't magically better than all doctors combined, but is instead the best of all doctors and specialists, performs very consistently, and is available 24/7. It wouldn't revolutionize everything, but I imagine the collective impact on health and lifespans would be tremendous.


You don't understand because you think we have a society built around making life better for everyone. It's not. Our society currently is optimized for one thing: Making sure the rich stay rich, and the poor stay poor. In that respect, AI is amazing. AI requires a small upfront cost, then can do the job of thousands of people for the pay of less than one. It works 24 hours, and will never ask for more money. And it works exactly to the specification you set, and will never, ever do anything to help your customers.


I disagree with this. There's a lot of evidence that modern society has made things much better for everyone.

That doesn't mean that there aren't negative influences in modern society. The rich wanting to get richer is definitely one of the major ones that is particularly powerful right now.

But that's why society is a process. You gotta fight back and moderate the negative influences, and boost the more positive ones.

I think the question whether AI will fall on either the positive or negative side of the ledger is wrong in itself. AI will provide a lever to magnify whichever direction society and people were going in anyways independent of AI. It's a tool, and hopefully a really powerful tool, but it's not a God. It can be used equally for good and bad.

And if you believe, like I do (and by all appearances you do too), that our current society is balanced to benefit bad actors over good actors (where bad/good are being defined in terms of benefiting society overall), the magnifying effect of AI is likely to magnify the net negative.

The one advantage of something new like AI coming around is that, I believe, the primary reason open democratic societies allow themselves to become tilted in favor of bad actors is complacency... it's usually a slow shift rather than a single big change. The arrival of a major change like AI forces society and everyone in it to once again take stock of where society stands.

This gives us an opportunity to change the tilt of our society to promote good actors again, before AI truly starts having an impact, which gives us an opportunity to ensure AI magnifies a net benefit, rather than a net negative, for society.

And frankly I think we are closer to having that reckoning than we've ever been in my few decades on this planet, so overall I'm optimistic about AI. But again, whether we benefit or suffer with AI will be independent of AI itself.


> Making sure the rich stay rich, and the poor stay poor.

It doesn't explain how Sears managed itself into bankruptcy, handing everything to the likes of Amazon despite having all the money in the world, comparatively.

It also doesn't explain how EVs are taking off. I've seen multiple conspiratorial documentaries about why the oil companies will never allow EVs to ever take hold.


>you think we have a society built around making life better for everyone.

You know, I don't think they do think that?

I think they think that's what society should be, though.


> AI requires a small upfront cost, then can do the job of thousands of people

That's the same with every efficiency improvement. Efficiency improvements (like AI) aren't zero sum. When production costs go down, competition forces prices to also go down, which then leads to more wealth for everyone not just the capital owners (even if it disproportionately benefits capital owners).

People are made redundant and then find jobs in other parts of the economy where they're actually needed. This is a painful but necessary corrective mechanism for the economy to remain dynamic and productive and innovative.

That said, I still want a global wealth tax.


That's the problem with healthcare. Due to inelastic demand, it doesn't succumb to market forces the way other markets do. Also, in many places you have issues with provider collusion, monopoly, and regulatory capture keeping prices high.

The US is particularly bad in this respect. Market sources only seem to prevail in rare cases - like elective procedures which are not covered by insurance (i.e. laser eye surgery).

It's hard to make decisions based on price when you don't even know the price in many cases before receiving care, and you may not have any choice of provider.


You are assuming that over the coming years most parties so displaced simply shift to other parts of the economy that are as lucrative, and are able to take advantage of those benefits. You are also assuming that collusion, explicit or implicit, and the wealth of the remaining buyers don't just keep prices high.

E.g. you have 100 people who can't afford the steam off a hotdog and 100 customers with n units of disposable income per day. You end up with 150 customers who can't afford shit and 50 with 2n units of disposable income. The price was never bounded by the actual cost in the first place, so the smaller, wealthier clientele leads to higher prices.
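The toy numbers above can be sketched in a few lines. This is a hypothetical illustration, not a model of any real market: the one assumption, taken from the comment, is that sellers price to what the still-paying segment can bear rather than to production cost, so shrinking the paying pool while enriching it pushes the price up.

```python
# Toy illustration of the hotdog example: price follows the richest
# remaining buyers, not the cost of production. All numbers are the
# hypothetical ones from the comment above.

def market_price(incomes, cost=1.0):
    """Assume sellers charge what the paying segment can bear,
    never less than production cost."""
    payers = [i for i in incomes if i > cost]
    if not payers:
        return cost
    # Price tracks the mean disposable income of those who can still pay.
    return max(cost, sum(payers) / len(payers))

n = 10.0  # units of disposable income per day (arbitrary)

before = [0.0] * 100 + [n] * 100      # 100 broke, 100 customers with n
after  = [0.0] * 150 + [2 * n] * 50   # 150 broke, 50 customers with 2n

print(market_price(before))  # 10.0
print(market_price(after))   # 20.0
```

Under this pricing assumption, total demand shrank, yet the price doubled, which is the point being made: efficiency gains don't automatically flow to those priced out.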

One could easily imagine many tens of millions of new poor making much less, while prices, driven by the wealth of other segments of society, continue to rise - leaving them with less coming in to pay higher prices.

Meanwhile you have tens of millions of people who presently create little value NOW, whose jobs end up politically difficult to deprecate. You need to have SOME people on your side, after all.

I don't see our society being a good one now or in 20 years. There is no political will to do anything hard, and a thread of greed, cowardice, anti-intellectualism, and weakness woven through our society.


> You are assuming over the coming years that most parties so displaced simply shift to other parts of the economy as lucrative and are able to take advantage of those benefits.

That's how it's always been historically, but you are right that I can't predict it'll be the same in the future, especially when it comes to AGI.

> You are also assuming collusion explicit or implicit and the wealth of remaining buyers doesn't just keep prices high.

Collusion to fix prices can't happen in highly competitive markets with declining production costs, because of game theory reasons. You always get this graph[1] unless you have regulatory capture such as with insulin supply in the US. Or unless you have a labor-heavy cost base such as with the education industry.

[1] https://pbs.twimg.com/media/E314pxwXMAI2ZIn?format=png&name=...

The risk with AI is that it won't be highly competitive. As a general rule, only in a monopoly-like situations can you have abnormal profits. That's where the global wealth tax comes into play, among other policies targeted at monopolistic practices.


Hasn't the poverty rate consistently decreased?


The gap between rich and poor is growing at increasing pace, just like the gap between rich and mega-rich is.

If you went completely hungry yesterday, nearly starved to death, but today you get enough calories that you feel less pain so you can at least stand up and walk around a bit, while you watch your brother stuff his face with so much cake he has to puke 3 times to make room for more, to then play frisbee with the rest, what would you think? How it was for you yesterday isn't the standard. How it is for others, today, is the standard. It's about fairness and human dignity, nothing more and nothing less. If everybody was starving, we wouldn't even have a name for it, or a problem with it. It wouldn't be a slight on anyone.

Some people shriek in anguish or sob in darkness, kill themselves, over issues that would be a 2 minute phone call and a check that is 0.0001% of their monthly income for someone else. That's a problem, and that kings used to have less than the average homeless do today doesn't change that, and isn't something capitalism gets to take credit for either. Humans have been inventing and improving things since before history, even animals do it, that's just normal.


> The gap between rich and poor is growing at increasing pace, just like the gap between rich and mega-rich is.

Why the hell should that matter? If the system wanted to keep poor people poor, wouldn't the poverty rate stay the same? That's the assertion I was countering.

Why keep moving the goalpost? But I will play along:

> If you went completely hungry yesterday, nearly starved to death, but today you get enough calories that you feel less pain so you can at least stand up and walk around a bit, while you watch your brother stuff his face with so much cake he has to puke 3 times to make room for more, to then play frisbee with the rest, what would you think? How it was for you yesterday isn't the standard. How it is for others, today, is the standard. It's about fairness and human dignity, nothing more and nothing less. If everybody was starving, we wouldn't even have a name for it, or a problem with it. It wouldn't be a slight on anyone.

1. You should first ask if your brother got his cake fair and square. If so, then you should be happy you are not puking and be happy for him. Don't be selfish. Let's say your brother worked 10 times more than you (80 hrs) and smarter than you while you stayed in bed and watched TV all day on his dime. Isn't inequality the fairer, but still painful, outcome in this case?

2. Should you and your brother have the same quality of life despite the different efforts you put in? Why should you be entitled to the fruits of his labor beyond sustenance?

3. If you agree there is diversity in level of effort and thought, then inequality is the fairer outcome.

4. Also, one should stop comparing only outcomes; you should also compare inputs.


Maybe your brother can work more because he was born healthy and he was treated better, while you were born with a disability and neglected.

One needs to look at circumstances that allow different inputs in the first place.

Your capacity for input is not some inherent core part of your being, unless you believe in souls, it is a result of what surrounds you and what has come before you.


> Let's say your brother worked 10 times more than you (80 hrs) and smarter than you while you stayed in bed and watched TV all day on his dime.

And you accuse me of moving goalposts? You're not worth my time at any hourly rate.


Do you think rich people should be responsible for others, or should they have the freedom to get rich and do nothing?


For some people to be extremely rich, others have to be poor. If everybody had a billion it would be the same as if everyone had a dollar.

I think sociopaths should be called out as the dumb fucking sociopaths they are.


1. AI is broadly accessible. While it definitely has its downsides, I think there's a valid argument to be made that access to AI medical advice is better than access to no medical advice.

2. AI can hold vastly more knowledge in its "head" than a doctor. There are many rare conditions that a doctor rarely sees. That doctor doesn't have enough time to spend researching these kinds of things for their patients, but an AI might be able to analyze their medical records/test results/etc. and come up with possible diagnoses that a regular primary care physician couldn't. If these diagnoses go to the PCP for evaluation (as opposed to going straight to the patient), what is the downside or harm?


> valid argument to be made that access to AI medical advice is better than access to no medical advice.

This is not true. And it's where I believe your whole response takes a wrong turn.

Why is it wrong? Because of risk of harm.

Giving out wrong information can easily kill people. Or permanently injure them. There's a tremendous amount of harm that can occur if you do things wrong in medicine.

When you attach "an AI said X" to something, it adds the appearance of authority. This makes it incredibly dangerous. Folks that don't know enough will believe that the "advice" is true because it "comes from an AI" (and how could an AI be wrong? It's advanced! It's smart! It _can't be wrong_ because the company is using it instead of a real Doctor, and _they wouldn't want to harm me with something that can be wrong_...right?)

For example, let's take COVID. If you had GPT say "go take hydroxychloroquine for COVID", you'd end up with a lot of people getting harmed by (1) taking a drug that's not going to help them when they _think_ it would (so they're not going to do something else for treatment, because they incorrectly think they're already being treated) and (2) there's always side effects to taking _any drug_, and these side effects may actually cause _more harm_ than doing nothing.


I agree with what you're saying here - absolutely, bad medical advice from AI (or anywhere) can cause harm.

What I disagree with is the way that you're drawing the conclusion that there isn't even a valid argument to be made for using AI. To be clear, I didn't say we absolutely should do this - the part of my post you quoted is where I said there's an argument for it (i.e. we should have the discussion and evaluate).

If an AI says take hydroxychloroquine for COVID then absolutely that will cause harm. But that theoretical possibility isn't enough to determine whether an AI should be a source of medical advice - you have to actually evaluate whether/how often it will do that. If .0001% of the time it says to do that and the rest of the time it gives good advice on how to treat COVID at home and what symptoms to look out for that would cause you to need to call an ambulance, then the harms are probably greatly outweighed by the benefits.

This is especially true in the case that I argued - where the alternative is no medical advice, because people don't have access to doctors. Lots of people died at home of COVID when they could have been saved at a hospital - we have to consider what the benefits to them might be in addition to what the detriments of AI giving bad advice could be.

The last thing I'll say is that your position seems to be that if a source of medical advice sometimes gives bad advice, we should not use it. The problem is that based on that position, people should not go to doctors. Doctors are humans who make mistakes - plenty of evidence of that to go around. All of these points that you've made apply to human doctors:

> Giving out wrong information can easily kill people. Or permanently injure them. There's a tremendous amount of harm that can occur if you do things wrong in medicine.

> When you attach "[a doctor] said X" to something, it adds the appearance of authority.

> If you had [a doctor] say "go take hydroxychloroquine for COVID", you'd end up with a lot of people getting harmed

I want to point out that there absolutely were doctors who told patients to take hydroxychloroquine.

This is all to say that I think you have to weigh relative harms and benefits. Should someone with access to top tier medical care get advice from an AI? Almost certainly not. Somebody who has access to no medical care or to the sort of doctor who prescribes hydroxychloroquine? As I said initially, it's not necessarily a hard yes but there is certainly an argument to be made that it's a yes.


Would the AI recommend taking unsafe amounts of hydroxychloroquine? Or would it give safe dosing instructions?

I am guessing the latter though I don’t know for sure.


Earlier discussion on this topic, with the original source link from ArsTechnica:

https://news.ycombinator.com/item?id=38299921 (Nov 17, 2023)

As I said on that thread: the dangers of AI will always come down to human decisions made on how it should be applied. These 'solutions' will come from the McKinseys and IBMs of the world to serve hospital CEO needs, not those of patients or care providers.

For those thinking that AI will reduce healthcare costs by improving efficiency, I point you to the 'electronic medical records' hype of 15 years ago. Are per capita healthcare costs any lower now?


" For those thinking that AI will reduce healthcare costs by improving efficiency, I point you to the 'electronic medical records' hype of 15 years ago. Are per capita healthcare costs any lower now? " Can you elaborate? In countries like Denmark or Estonia electronic medical records are well used among doctors and patients.


I think the point is that one of the big hype phrases is that it will decrease costs of healthcare and lower people's bills, but even if you lower the cost, there is no actual incentive to lower patient bills, so the extra value simply goes to owners and investors rather than the end consumer.


That's assuming the extremely broken and frankly incomprehensible American model.

Over in the EU, where most of the healthcare is single-payer (sometimes with optional add-ons/extra payers/private care), the cost of care has been growing too. There are simply many more treatments available now than before, more tests possible to discover issues, etc., as well as, of course, higher life expectancy and higher average age across the board. That doesn't mean improvements shouldn't be undertaken. Such as electronic medical files (which are the norm in multiple EU countries but sadly not everywhere, and not on an EU scale — as someone born in one EU country and now living in another, it would have been extremely useful to be able to automatically share my childhood records), or maybe ML to do predictions and such.


I agree 100%. In America the capitalist dream of competition and technological innovation actually driving costs lower for the consumer has mostly just died in almost every category. Any savings are now passed onto a cabal of private equity firms and legislatively-entrenched insurance corporations.

Goodhart's Law is in full force, and the measure of business health, profit, has become the target, rather than consumer welfare and rights. Boy have we optimized for profit at the expense of anything else.


Even that is very American-specific. In Germany there is a certain limited competition between health insurers.


Would you rather be treated in a modern hospital or one from 50 years ago? Would you choose the historical hospital if it charged you the same fraction of median income as it did back then?


It's not that black and white. There's more at play than just charges. Things can be better and still shockingly corrupt, with extreme amounts of unjustified overhead. Things are minimally better for any individual patient, while being extremely more expensive, and the insurance overhead has skyrocketed.

The question is a little bit like responding to a complaint about industrialized agriculture's effects on the environment by saying "Would you rather eat sausage from 100 years ago or now?" It's not actually that relevant to the initial complaint, and it reads like it's trying to just terminate the conversation on a loophole. Things can be better and still also be filled with many problems, particularly modern problems that weren't relevant in the time period you're comparing to.

To actually answer the question honestly, it depends on my condition. For the kind of thing I would typically see my GP for, including run-of-the-mill checkups, definitely one from 50 years ago. For any serious medical issues, modern (as long as I can convince the doctors that it's actually serious so they spend more than 30 seconds on investigation).

As far as charges go, per-capita healthcare expenditure is now over 5 times as high as it was in 1980. Are we getting a 5-fold improvement in medical care? When I go to the doctor for anything more than a checkup, 90% of my time is spent waiting, and I spend twice as much time with RNs who repeatedly ask me the same questions, before a frazzled doctor comes in, asks a couple questions, then rushes out, maybe handing me a prescription for antibiotics regardless of what my concern is.


> It's not that black and white

The reason I wrote anything was that your original comment sounded very black and white to me.

I think we tend to overlook how much things improve in the long term. Knowing that they do hints at potential to do better still and should encourage us to take action and improve the situation.

Pointing out the current problems is necessary for that to happen. However, your comment came across as denying that healthcare can be improved, or at least that improvements can benefit patients. This is obviously false and I'm sure you didn't mean to claim it, so it's great we're able to add a bit more nuance.


Fair enough, and I appreciate the extra perspective.

I wouldn't deny that many of these things have improved. Very little actually gets worse in a literal sense (environmental issues and other negative externalities aside, though they are very relevant as well). The concern is that they tend to get marginally better when they should by all means be getting significantly better, because the lion's share of the benefits feed the ever-widening wealth gap.


Ceteris paribus is doing a lot of heavy lifting here.

If the 50 year old hospital has an available operating room and the modern hospital does not, I'm going there. I won't have a choice if the treatment needed is urgent.


The idea was that with all this healthcare data, patients would realize better outcomes. Like, your super-doctor is going to realize that 6 months ago you went to urgent care for XYZ condition, and that's going to shine some kind of light on your situation now, so he's going to be better prepared to treat you.

The reality is, the vast majority of that data is useless noise, nobody has the time to make any kind of analysis on it, and it's just another healthcare cost center providing zero value to the patient.


A major contributing factor to the lack of realized benefits with electronic health records in the United States is that the interoperability specification, CCDA, is only loosely complied with under the "meaningful use" rule from Health and Human Services. Certification is done on site at the EHR manufacturer, in an environment controlled by the manufacturer. The demonstration of compliance does not have to be done in a default "out of the box" environment, but merely that the system is _capable_ of being configured in a way that allows for emitting standards-compliant CCDA.

In practice, this means that most manufacturers do not build systems that are standards compliant by default, and definitely do not install systems that are standards compliant in their initial configuration. Then, individual healthcare practices add customization on top of that, which further complicates data integration.


Ultimately, I believe electronic records are a huge factor contributing to the assembly-line-like experience of modern healthcare. I pray that I don't die in a hospital.


In the US, the cost of health care has not significantly decreased since the introduction of rules, regulations and government money pushing healthcare organizations to implement and use EHR systems.

I would say all of the intervention by the US federal government has made everything much worse. The EHR vendors (I'm looking at you, Epic) managed to consume the money thrown at the problem and yet have only moderately improved. In my opinion it also pushed smaller hospitals into expensive contracts with EHR vendors, eroding their already tenuous financial health and leading to even more consolidation.


Precisely, and for this reason, AI will be the snake oil panacea, just as computerization of records were. Both will be important, but the size of the contract will be based on the "transformative" messaging around the pitch. The real value as you've said will accrue to the consultancies and vendors as it always has, so even without the enormous compute bills for AI tools, prices will keep going up.


Epic is an extremely weird case.

I knew someone who worked at a company that was contracted to be a support provider; they had at least one nervous breakdown.

Also interesting to note that the Wikipedia article for the company mentions that in some European countries, moving to Epic was a huge pain that caused doctors stress.

OTOH apparently their actual corporate campus is some whimsical Alice-in-wonderland world with no expense spared.


>I would say all of the intervention by the US federal government has made everything much worse

This assumes that without intervention things would have gotten better. It's entirely possible things would be just as bad as they are, or worse, without said intervention.


Lack of timely access to accurate information is and was a huge problem, that can be solved by electronic health records.

Whether or not the current implementation is worth the cost/benefit ratio is debatable.

I envision a scenario though, where all EHR vendors integrate with Apple/Google, and someone’s phone can give access to their healthcare data, maybe even involuntarily in the event of an emergency.


Yep. The actual risk of “AI” in the medium term is that these things are gonna mostly be used for optimizing the shit out of extracting money from normal people (i.e. pushing down the standard of living) while all normal folks will get out of it is fancy autocomplete.

Great. Shifting the power and information imbalance farther toward large corporations is exactly what our flavor of capitalism needed. /s

(Mostly, in the legitimate, if you will, economy, that is—they’re also gonna be hugely useful to scammers and astroturfers)


> The actual risk of “AI” in the medium term is that these things are gonna mostly be used for optimizing ...

I like this. I'll further add that since in the long term we are all going to die, it will be fine for everyone :-)


" These 'solutions' will come from the McKinseys and IBMs of the world to serve hospital CEO needs, not those of patients or care providers."

That's the pattern for most internal enterprise software that's being purchased.


It seems the Big Data hype never really died down - it just changed the name


The Economist thinks things are better[1], in that costs are no longer rising faster than average inflation.

So not lower than before, but lower than was expected for 2023 a few years ago.

For those blocked by the paywall, the article lists causes for the slowing growth as: productivity improvements in paperwork, cheaper technology for dialysis, among other things, the Affordable Care Act, non-US countries insisting on generic drugs more often, and slower-than-inflation growth in median income (the rich hoovering up all the money).

[1] https://www.economist.com/finance-and-economics/2023/10/26/h...


It's supercharging the worst in everything it's applied to. Try getting a job right now. All of the automated screening systems will throw you out if you aren't 100% compatible with whatever they're looking for. Sure, it's cheaper than human screening, but now you've got a position unfilled for even longer, which is probably costing you in other ways.


> screening will throw you out if you aren't 100% compatible with whatever they're looking for

How does one know if this is the reason they're being thrown out?


You can never precisely be sure, but here are some of the behaviors I’ve been observing that give me the feeling I was auto-rejected:

- rejection coming in nearly instantly

- rejection coming at "odd" times: weekend evenings, 3 AM in my time zone for a company in my time zone

The latter one doesn’t hold up too well for remote-first companies but I would hope few tech firms have recruiting staff going through applications on weekends.
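If you keep a log of your applications, those heuristics could be scripted roughly like this (the thresholds and signals here are my own guesses, tuned to nothing in particular):

```python
from datetime import datetime

# Rough heuristic for flagging likely automated rejections, based on the
# signals above: near-instant replies and replies at "odd" local hours.
# All thresholds are invented; adjust to your own experience.

def looks_automated(applied_at: datetime, rejected_at: datetime,
                    min_human_minutes: int = 30) -> bool:
    elapsed = (rejected_at - applied_at).total_seconds() / 60
    if elapsed < min_human_minutes:                 # rejected almost instantly
        return True
    hour = rejected_at.hour
    if hour < 7 or hour >= 22:                      # 3 AM / late-night replies
        return True
    if rejected_at.weekday() >= 5 and hour >= 18:   # weekend evenings
        return True
    return False

# Applied Monday 9:00, rejected 10 minutes later — flagged as automated.
print(looks_automated(datetime(2024, 1, 8, 9, 0),
                      datetime(2024, 1, 8, 9, 10)))  # prints True
```

As noted, the odd-hours signal is weak evidence for remote-first companies with recruiters in other time zones.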


Yup. I applied for a PM position within 20 minutes of it going up. Quite a specific one: "expert-level health insurance system knowledge and experience integrating between multiple EHR vendors and partners". I have all of this. I've built claims benefits management systems being used by some household names in healthcare. I'm an expert in the Epic, Cerner, and ESOsuite EHR systems.

10 minutes later: "we are looking for someone whose skillset and experience better aligns with our requirements and the position".

I guess my resume needed more buzzwords.


You make up a CV that exactly matches the job advert and see whether there's a response.


This doesn't rule out that there were enough good or "better" applicants who crowded yours out.


Of course it does, you send a few of these perfect CVs and see what happens.


Why would a human also not choose a perfect CV over other options? What does this prove about AI?


You want a perfect candidate, not a perfect CV. A perfect CV is highly likely to contain lies, having been made up from the job ad to address every single requirement bullet.


If I sent out completely truthful CVs, I'd still be working a $15/hr data entry job or whatever.

Obviously you can't fake it forever if you don't have the skills to back it up, but the reason people lie and embellish on CVs is because they know they are reasonably fast learners, so any skills mismatch can be quickly adjusted while on the job. Easier to ask for forgiveness than permission, etc. etc.

This pretty much became a necessity around 2005, when IT demand skyrocketed and went mainstream, and automatic candidate rejection and filtering became pervasive.


Even without the lies, you can meet every single bullet point and still be a worse fit than someone who doesn't match every single bullet point.

Maybe the gal or guy who fits every bullet point has a long career and quite broad knowledge. Maybe the gal or guy who doesn't fit every single bullet point is much smarter, a much harder worker with far more grit, and a quick learner.

You end up hiring the first person who may be mediocre, and missing out on the second who may be a super high performer.


> A perfect CV is highly likely to have lies and was made up based on the job ad, addressing every single requirement bullet.

Wouldn't "AI" also know this though. I would imagine AI is doing more than keyword match type stuff.


A human can look at a CV that doesn't match but say, "I like what this person has done; learning our stack should be no big deal."


As a hiring manager, based on the average candidate that gets forwarded to me after presumably several rounds of filtering (some mechanical, some human), the idea that there is some kind of smart matching going on is slightly ridiculous: they can't tell a security-focused software developer resume from an assurance-focused security specialist, for example.


I guess it hurts less if you can blame AI?


AI isn't scanning job applications lol ... lazy recruiters can't review them fast enough.


Alternate headline: "Our Healthcare System continues to deny claims, now with AI". Makes it clearer who the culprit is: it's not AI. They have been denying claims for decades without any help from AI, too.


Most of these systems just present an RN (and I'm not sure why this hasn't been challenged as practicing medicine without a license, other than the payers' argument probably being "you can still get this intervention/ treatment/ drug, we're just not paying for it") with a claim, and a list of reasons the system deemed that it can be denied. The RN's job is ostensibly to see if there's any one of those bullet points that needs to be vetoed.


Yes, but if “AI” denies you, humans are suddenly no longer responsible.


The company using the tool is responsible regardless of how the decision is made, same as it is today, for whatever good that does us.


Never in a million years could I have imagined there would be something worse than navigating the health care system. Hours of phone calls with deliberately confusing systems, impotent customer service representatives and no choices. It's like the movie Brazil already. Why wouldn't they add AI? Great idea.


Hah. There's literally an AI startup designed to automatically navigate confusing phone trees to get information from health systems:

https://outbound.ai


Truth is stranger than fiction


> Even when users successfully appealed these AI-generated determinations and win, they’re greeted with follow up AI-dictated rejections just days later, starting the process all over again.

Wait, what? Does that mean they can just take your money and reject your request to cover treatment until you die even if you had every right to do so?


Unless you have expertise in fighting them, yes.

https://www.propublica.org/article/blue-cross-proton-therapy...


Yes, this is what Americans mean when they parrot bullshit about freedom: the freedom for corporations to do whatever they like in pursuit of profit.


We don't have a free market in healthcare in the U.S. though. You can be for a free market and against the abomination we currently have. If we just went back to 80%/20%, high-premium-or-high-deductible proper insurance, no HMOs, no PPOs, no co-pays, none of that nonsense, with published pricing, then we'd have a free market.


I'm all for reforms to move towards a more market based system. But one has to recognize that at this point we're nearing a century of anti-market meddling - a quick search says 1943 is when healthcare started getting bundled with employment!

And talking about a "free" market is a red herring. There's no reason to believe that the current industry actually wishes to converge on uniform prices with upfront transparency - they could just start publishing such prices at any time currently, but rather continue to benefit from heavy price discrimination. So getting to a functioning healthcare market would actually take a lot of regulation that actively undoes the cancerous behemoth that's been slowly grown. Try asking any doctor how much something might cost and most of the time they will scoff - the rejection of any market dynamics has become pervasive by the entire system.

Also to have an actual free market we'd need to eliminate things like mandatory pharmaceutical prescriptions and patents, which there is unfortunately very little support for doing.


> And talking about a "free" market is a red herring. There's no reason to believe that the current industry actually wishes to converge on uniform prices with upfront transparency [...]

Adam Smith himself noted that capitalists seek rent. That doesn't mean that capitalism is all bad. The healthcare industry almost certainly does not want a free market, but so what, it's about whether the state makes it so there's a free market anyways or else makes deals with the industry it ought to regulate.

Regulatory capture is a thing, a very real thing, and a very big deal.

> Also to have an actual free market we'd need to eliminate things like mandatory pharmaceutical prescriptions and patents, which there is unfortunately very little support for doing.

Patents are not the problem. Though I'd be happy to see patent terms shortened, especially for software patents.


I didn't say "capitalism is bad". I pointed out that the state of the current market players are so far from what you'd expect in a "free" market, that pushing to make the market incrementally more free is likely to do the exact opposite by increasing the coercive powers of the already-entrenched interests.

Once the health "insurance" cartels are put out of business, providers are legally required to publish uniform price schedules and ahead of time quotes, pharmaceuticals can be freely bought across borders, mandatory prescriptions are eliminated, residencies are funded privately, etc, then maybe there is a chance at a freed market fostering a functioning market. But not one step before then.


Is it regulations that are forcing co-pays, networks, and obscured pricing? From the outside it sounds like the market doing its best to extract as much money as possible, with the occasional regulation designed to look like someone is trying to fix the mess with a pretty bandaid (like the recent one on detailed upfront pricing).


ERISA started the ball by making it employers' responsibility to provide healthcare for employees, and then more "reforms". These things led to the current sorry state of healthcare in the U.S.


Thanks for proving my point.


Isn't the medical field one of the most regulated?


The answer is not less regulation, it's to have publicly owned healthcare. Those regulations were written in blood.

Also go listen to one of the latest episodes of Odd Lots where they interview Lina Khan about antitrust. Private equity has been rolling up medical companies for decades, essentially doing things like buying up all the anesthesiology clinics in an area so they can jack up prices. Some of the bloat is due to regulations, but a lot of it is due to over-financialized bullshit like private equity buying up providers.


Yep, and ever since a certain insurance provider has switched to an AI system, I've had nearly all my Type 1 Diabetic prescriptions rejected (that I've had no problem getting for over a decade).

Then it takes hours on the phone for every prescription to finally get one approved. Rinse and repeat every 3 months.


We have a flex spending account (an HSA wasn't an option) that we only put as much into as we were sure we'd spend. This year it has been rejecting a very high rate of payments, even from doctors' offices (WTF do you think it's for?!), and demanding documentation, seemingly with no rhyme or reason. It seems like they're actively trying to keep us above the roll-over limit so they can steal our money, which isn't something we've experienced with health flex accounts in the past.

Wonder if we’re “beneficiaries” of the AI revolution. Or just an “if” statement triggering off a random number.


I agree, sounds like their systems are optimizing based on the roll-over limit.

Basically I'm just going back to cash pay and negotiation for everything either before the visit, or at the facility. Then using alternatives to health insurance for the original purpose of insurance (lol), to cover unexpected events.


This is inhumane.


Technically no, but you need to be able to afford lawyers to fight them, or to spend an enormous amount of time and energy.

For some reason, health insurers and hospitals can get away with practices that no other business could. They can commit fraud, and when they get caught, all they have to do is say "oops" and fix the mistake. No other consequences. It's pretty crazy.



Also continuing the trend of unaccountability at least on a personal level.

https://en.m.wikipedia.org/wiki/Skin_in_the_game_(phrase)



It will almost certainly be used to make customer service worse as well. In "Ways of Being", James Bridle provides other examples of how corporations are going to use "AI" to exacerbate the problems they already cause in pursuit of profit.


From the article linked by Techdirt: “executives have sought to almost entirely subordinate clinical case managers’ judgment to the computer’s calculations”

Sounds like the issue is the executives. What does this have to do with “AI”? Also, the company that built this tech (naviHealth) was started in 2012. Their product existed long before large language models were created.


AI is the new crypto. If you can turn your head sideways and squint hard enough to make it seem like it might be in the same room as AI, it's going to get labeled as such.

I wager that 3/4 of the products on the market purporting to be "AI" in the style of LLMs like GPT are just what was referred to as ML a year or two prior, or, even worse, just a standard computer program in the style of the past 20 years.


Do any of you remember how healthcare was in the 60s before corporate America got involved? It just worked. The uninsured got taken care of. A 300 bed hospital had 3 administrators. Now we have a whole army of people in the hospital who have to justify their jobs by the number of emails they send.


The ‘60s were 55+ years ago, so it’s unlikely that many here remember that.


In the UK it's already bad as it is: seeing a GP or doctor in person is essentially impossible. AI use will only worsen the issue. You can already use "virtual GPs" where, if you tell them you broke a nail or farted, they'll tell you you either have COVID or cancer. Buyer beware.


I got a new job recently and I have been saying this is what I will be doing, writing AI to deny claims, AS A JOKE.

We pay taxes so the government can pay to subsidize these healthcare corps that just serve to funnel money to its already rich shareholders.

Healthcare is a pathetic broken mess and it will never get better.


https://www.marketwatch.com/investing/index/sp500.35?country... is up by less than the S&P over the last few years.

I'm not saying the healthcare system is great or anything (I generally agree with "pathetic broken mess"), but it's not making its investors particularly rich. Most of the excess cost is getting spread out within the system itself and never makes it through the other end. If you want to blame doctors, nurses, admins and suppliers that's arguably a better place to start.


> it will never get better

Don't lose hope, other countries manage to be a lot better. It's possible if you only try.


You would have to somehow scare our elected reps into doing what is right

Or start our own “people’s hospitals” or even insurance (there should not be insurance schemes IMO)

Or maybe not pay existing hospitals/ insurance companies

I do not know how else to “try”


We were one congressional vote away from a public option for healthcare. Let's not pretend it isn't explicitly one party obstinately refusing to do anything on healthcare and dozens of other issues. The answer isn't to scare our elected reps into doing what is right. It's to stop voting for Republicans.


> We pay taxes so the government can pay to subsidize these healthcare corps that just serve to funnel money to its already rich shareholders.

Americans are about 4% of the world's people but get 25% of world GDP. There are 333 MegaAmericans, with 8 GigaPeople in the world. Most Americans are doing just fine financially in comparison to the rest of the world.

To make things fair, the wealth of the richest US citizens (about 55% of equity ownership by value[1]) should be spread to the rest of the world, not spread to other Americans by US taxation changes.

I'm from New Zealand. Our ownership of US shares is likely reasonably fair. NZ has three billionaires[3] (not one inherited their wealth). We are still a relatively wealthy country, but our median income is about 60% of the US median income. I'm not sure what % of NZers would be classified as in poverty by US standards.

40% of US equities are owned by foreigners. If we assume rich foreigners use Cayman Islands and Luxembourg for their ownership, then from [2] a third of that 40% is rich foreigners. Probably better to assume wealth distribution is similar to US so 55% of that 40% is wealthy foreigners.
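The back-of-envelope above, worked through (the shares come from the comment's cited sources; the two estimates are just the comment's own alternatives):

```python
# Worked version of the estimate above. All shares are approximate,
# taken from the comment's cited sources; nothing here is new data.

foreign_share = 0.40   # of US equities owned by foreigners
rich_share_us = 0.55   # of US-owned equity held by the wealthiest

# Estimate 1: assume a third of foreign holdings (the portion routed
# through the Cayman Islands and Luxembourg) belongs to rich foreigners.
rich_foreign_low = foreign_share * (1 / 3)

# Estimate 2: assume foreign wealth distribution mirrors the US one.
rich_foreign_high = foreign_share * rich_share_us

print(round(rich_foreign_low, 3), round(rich_foreign_high, 3))  # 0.133 0.22
```

So roughly 13–22% of all US equities would be attributed to wealthy foreigners, depending on which assumption you prefer.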

[1] https://www.taxpolicycenter.org/taxvox/who-owns-us-stock-for...

[2] https://home.treasury.gov/news/press-releases/jy0613

[3] https://en.wikipedia.org/wiki/List_of_New_Zealanders_by_net_...


Nice citations. Finish your thought. What's your point?

Do you honestly think the average American lives better than the average Kiwi? That is an adorable thought lol.

Check out the latest Channel 5 documentary to see how many Americans live. We have some mega-wealthy people but also have extreme poverty and even our middle class often does not have health insurance.


> How bad is the AI? A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI in question was reversed by human review roughly 90 percent of the time:

Isn't that about the same for non-AI interactions with US health insurance carriers? Their job is to make money and they do that by denying claims whenever they can. It's truly an awful experience.


> “AI” (or more accurately language learning models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology’s deployment largely see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.

Having read the first paragraph, with its profound and unbiased insight, I can only expect that the remainder of the article will offer a well-rounded and impartial exploration of the issue.


Are you saying that statement is untrue?


> Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims.

The PHBs in charge probably view this as a win. A 10% savings!


What worries me with AI is *garbage in, garbage out*, as seen with current chatbot responses skewed by programmers' politics.

If only Aaron had succeeded in liberating all medical knowledge already in the public domain; then we could ensure AI's honesty.


It's probably working exactly as intended. They can hide behind the appeal process, but I would bet they're thinking of the savings on those who don't appeal.


Doctors should start using LLMs to deal with the appeals.


This is not healthcare per se but insurance policy. It doesn't say anything about the use of AI in actual medicine.


Given how tightly integrated insurance is with what treatments Doctors are allowed to give you, it's absolutely a problem for the entire healthcare system.

"Before you go on this medication that's specifically targeted at your symptoms and diagnosis, you must first try and fail these 5 other treatment methods, including a round of anti-anxiety meds and counseling."


I've actually had anti-anxiety medications relieve a whole raft of symptoms that obviously turned out to be anxiety. There are many problems with healthcare systems, but I have to say that lorazepam is really the best drug I ever took; I can see why people become dependent, so I take it quite sparingly.


There's a lot that could be said about how an "anxiety" diagnosis is abused by doctors and insurance companies (especially against women), but in the end it really boils down to one thing:

The medications you're taking should be a determination between you and your Doctor, and not an AI working for your insurance company.


I could probably expound quite a bit on the problems of interjecting an insurance company into healthcare, but I'd say that, as a principle, I completely agree with you.


Insurance companies in America determine which patients are worth treating and which ones are unprofitable. In theory AI can enable better and cheaper treatments for everyone but that would mean insurance companies would have to start caring about something other than profits. Their main metric is profit so AI will be used to maximize profit and minimize costs just like in every other industry.


Medical Insurance has all of the characteristics of a bad insurance business: small claims, frequent claims, horrendous adverse selection risk, and moral hazard everywhere.

Undoubtedly, AI will help it reach its logical conclusion even faster.


Sadly, there is no state of malfunction that will make the people with the ability to effect change feel compelled to effect change. Incentives and "that's just how the world works" fatalism effectively conspire against improvement.


I'm sorry, did the title say "medicine"? What's your problem? If you think these kinds of low-effort comments contribute to the discussion, please reconsider.


Insurance policy is unfortunately a large part of healthcare in America.


To all the sibling comments saying "insurance is part of the 'Healthcare System,'" well, the headline could have said "Health Insurance" but instead chose a broader category. The parent comment is helpful because it adds context.

Further, the anecdatum of one provided in the article supports neither the use of "AI" in the headline nor the thesis that "AI Is Supercharging" anything. It refers to an algorithm that the company never divulges. It could just as well be a random number generator or a magic eight ball as "AI."

The article is a clickbait moment of hate.


Agreed, this is a low-quality article, using a single example to define an entire industry.

They put no effort into finding places where AI is making things better in healthcare. One example I know of, https://ferrumhealth.com/


Well, they don't say medicine, they say healthcare system. That seems accurate, considering your actual access to healthcare and its level of quality depend on insurance.


The title says "Healthcare system". Insurance is part of that.

