GPT-4 will hunt for trends in medical records thanks to Microsoft and Epic (arstechnica.com)
47 points by samizdis on April 18, 2023 | 33 comments



GPT-4, or its successors, will kill medical records as such.

And it will be like shooting fish in a barrel.

It is said that doctors spend a lot of their time on bullshit computer procedures, involving garbage software, when they should be seeing patients or learning something.


I doubt that anything can kill BS medical records. Two reasons that I see:

1) privacy and other laws make heavy demands on "structured" and "secure" record keeping ("the law says I must enter this information into this BS table with its 100 BS fields");

2) governments love to spend money on "standalone" software with 1000 BS requirements... and contract companies love to make that software as BS as they can, because the worse the system is, the more follow-up contracts they get to fix the problems they created.

Basically, I am sceptical that anything could help medical staff. (I am pessimistic because of what I see in my country, Lithuania: barely working proprietary systems ("e-sveikata" and "e-receptai") that cost taxpayers hundreds of millions of euros.)


> will kill medical records as such

Why do you think so?

I see a huge thorny ball of legal issues that needs to be resolved before GPT-like services can really become useful...

I personally feel very held back by legal (and infosec) uncertainty. I would use ChatGPT a lot more if it didn't feel so legally sticky.


From Microsoft's POV there are basically no infosec issues. They're running their own copies of GPT inside their airlock. It's no different from any other analysis they do on behalf of their customers.

To everyone else it's turning data over to a third party, but for them it's like running scapy.

They can't train on one customer's data and use it on another but that's less of an issue than you'd think.


I doubt that. A lot of the paperwork is there on purpose, like pre-auths that are meant to make it difficult to prescribe things.


I have a very queasy feeling about this. I know they'll sell it as a productivity boost and other fluff (the article is not super clear on exactly what they are going to achieve), but it seems they are trying to see how far they can push HIPAA and mine for actionable info. This is scary because you are all hallucinating about discovering issues by having a 'super House MD' look through everything and find patterns, but realistically they will use it to figure out innovative ways to minimize the cost of care and leave you with the biggest bill. Folks, there are a lot of bad-faith actors in healthcare, and it feels like we are handing them a loaded gun.


From very far away, I trust tech companies (even Microsoft) more than the medical ones topping the charts of US corporate lobbying.


wow, so cynical!


/s ?

If not /s, then my response is 'wow, so naive!'

I dunno WTF people are smoking here, but go read the responses to the same story on other sites like Slashdot and Reddit for some balance.


Gee, I’m so glad that I “opted in” to have my private medical data used for corporate research!

How did I “opt in”? My healthcare provider migrated to Epic for all medical records. I had the “choice” of… not receiving healthcare, I guess? But even then, all my past records were imported to Epic anyway.

And before you “market choice” folks start squawking: your provider changing their records system is not a “qualifying event,” so you don’t get to shop for another provider until the next enrollment period. Also, ALL of the providers available to me use Epic.


That's some Epic clusterfuck of a healthcare service.


I've been very much against regulation of AI at all.

But now that we're talking about drawing inferences from medical data... it's making me pause and think.


At some point there needs to be a comprehensive way to allow individuals to make decisions about how their data should be handled. Allow me to decide how my medical records should be handled in the same way that I can manage whether my location data should be shared with the app I'm currently using. Allow me to revoke consent and set up general rules. Don't make me repeat myself unless there's a good reason. Make a system I can trust.

I think it could be done in the EU, at least, if we up the IT competence of our decision makers a couple of notches.


I hope not.

It's time we start finding cures for illnesses like mine, which has no medication.

It's time to use big data and ML on as much data as possible to start helping many more people, faster.

Every CRT, every diagnosis helps.

Feel free to advocate for proper anonymisation of the data, but not against the use of it.


I mean, I agree, and I think most people do. The background for my argument is a feeling that a lot of important research is held back because of privacy concerns. At least in Sweden it feels like decision makers err on the side of caution instead of allowing important research programs like you describe. But I've only seen one such case, and maybe that one is very non-representative. Who knows.

My thought was that if we could just deal with privacy concerns in a comprehensive way, we could open the floodgates for research.

But then again it might just as well add another layer of technocratic stagnation.


This honestly doesn't sound bad to me, although it depends on how it's implemented. If patients are informed ahead of time and can easily opt out by a variety of methods, then I think it's ethical. I would donate my data for statistical research.

I think the problem is not with using a GPT for this purpose, but with the risk of privacy violations. But there is a way to do this ethically.


I agree, it really depends on how it's implemented... but clear lines will need to be drawn. If it helps automate time-consuming, repetitive tasks and helps us advance modern medicine, I'm all for it.


Poor IBM Watson, killed by not being hip


If Watson had been good at anything but Jeopardy, IBM would have spent billions making sure we all knew.


The biggest problem with Watson was that it was never really a specific piece of software. It was just kind of IBM's branding for machine-learning consulting services.


Having GPT-4 play Jeopardy today would be no fun at all


You might be able to make a drinking game out of when it forgets to phrase the response as a question.


Is anybody currently feeding a whole country's worth of medical records (e.g. Israel's) into an AI and asking it to come up with novel off-label uses of medicine?

E.g. "TDAP Tetanus vaccine = 42% lower risk of dementia in older adults"... more findings like this would be great.

https://pubmed.ncbi.nlm.nih.gov/33856020/


In my dark little heart of hearts, I know there are some US insurance company executives sitting around thinking, "How can I use machine learning to automate the process of denying coverage based on 'trends' or whatever, which I won't have to justify because I'm not obligated to make the LLM's decision process transparent to anyone?"


>Several vaccine types are linked to decreased dementia risk, suggesting that these associations are due to nonspecific effects on inflammation rather than vaccine-induced pathogen-specific protective effects.

Doesn't this make it sound like there's a confounding factor? Such as: vaccines being more common among people who are wealthier, or people who choose to align their lifestyles with recommendations from doctors?


Hmm, how does a language model "search for trends"?


By inferring connections in data pasted into its context window. Its context can hold as much as 50 pages at a time, and semantic search means that relevant context for any query, buried in mountains of data, can be pasted in automatically.

Pages and pages of text that can't be read by any one person can be parsed in seconds by a machine with enough knowledge and understanding to make something meaningful of it.

The same idea is already being applied for law - https://twitter.com/ai__pub/status/1644735555752853504
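
For a rough sketch of what that retrieval step looks like in Python (the embedding model, the toy records, and the query here are all illustrative, not anything Epic or Microsoft has announced):

    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        # Embed a batch of texts as vectors (model name is illustrative)
        resp = client.embeddings.create(
            model="text-embedding-ada-002", input=texts
        )
        return np.array([d.embedding for d in resp.data])

    # Toy corpus standing in for de-identified notes
    records = [
        "Follow-up: wound healing well, no sign of infection.",
        "Pt reports fever 3 days post-op; started antibiotics.",
        "Routine checkup, blood pressure stable on current dose.",
    ]

    def top_k(query, docs, k=2):
        # Rank documents by cosine similarity to the query
        d = embed(docs)
        q = embed([query])[0]
        sims = (d @ q) / (np.linalg.norm(d, axis=1) * np.linalg.norm(q))
        return [docs[i] for i in np.argsort(-sims)[:k]]

    # Only the relevant records get pasted into the context window,
    # and the model is asked to reason across them.
    relevant = top_k("post-operative infections", records)
    prompt = "What patterns do you see?\n\n" + "\n---\n".join(relevant)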


I can’t see anything about trend mining in the summary. Could you show where they’re doing the same thing?

What you described sounds like a way to search documents, not mine for trends.

How is ChatGPT going to infer connections here without supervision? Are you saying the process is to paste every medical record into the context window and then prompt GPT to “find common patterns in the above?”

That doesn’t sound likely to come up with useful results on its own… Maybe I’m wrong though.

It seems more likely that GPT is being used to interpret the text into a standardised form that can then be mined using other techniques.
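
If it is the standardisation route, I'd imagine something like this: have GPT emit a fixed schema per note, then mine the structured rows with ordinary tools (the schema and prompt are made up, just to illustrate the idea):

    import json
    from openai import OpenAI

    client = OpenAI()

    PROMPT = """Extract these fields from the clinical note as JSON:
    {"diagnosis": str, "medications": [str], "follow_up_needed": bool}
    Return only the JSON object.

    Note:
    """

    def to_row(note):
        # One free-text note in, one structured row out
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT + note}],
        )
        return json.loads(resp.choices[0].message.content)

    # Once every note is a row, "trend mining" is ordinary aggregation:
    #   df = pandas.DataFrame(to_row(n) for n in notes)
    #   df.groupby("diagnosis")["follow_up_needed"].mean()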


>How is ChatGPT going to infer connections here without supervision? Are you saying the process is to paste every medical record into the context window and then prompt GPT to “find common patterns in the above?”

It'll be more implicit than that. For a toy example: paste in some fake but detailed stats about certain treatments, then ask GPT to recommend a particular procedure for a certain patient. Those stats will undoubtedly influence GPT's recommendations... and in ways that make sense.

Searching documents is powerful here because the retrieved documents influence GPT's response in much the same way they would influence an equivalent human.
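
As a concrete toy version of that (numbers entirely made up):

    context = """Outcomes at this clinic (fictional numbers):
    - Procedure A: 78% success, 3% complication rate
    - Procedure B: 71% success, 1% complication rate
    """

    question = ("Patient: 67yo, mild symptoms, on anticoagulants. "
                "Which procedure would you recommend, and why?")

    # Nothing here says "find trends", but the stats in context will
    # shape the model's answer, much as they would a human reader's.
    prompt = context + "\n" + question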


Doesn't understanding language require that a model understands trends in the features of the text embeddings fed into it? And haven't we found that the more text and the bigger the transformer, the more complex the trends it can understand in that text? If you have enough features present in medical records, why wouldn't it be able to find trends?


Because (1) it has a limited context window, so it has no permanent memory; (2) you can't fine-tune ChatGPT/GPT-4 to get around this; and, of course, (3) language models don't do anything by themselves unless you input text into them.


The article is inaccurate; GPT will be used as an interface to their data visualization tool. It's kind of like Tableau.


Good.

Can't wait for more people to get help.

Just a system that knows all the patients with my illness (RLS) could potentially show something when clustering 100k diagnoses.



