At a conference I was at recently, I'm nigh positive "Decision Support Tool" was a dirty word.
That being said, doctors do not want tools that are clunky, tools that force them to change their existing workflows, tools that run slow, can't pull in relevant data in real-time, rely on fragile HL7 interfaces, etc. Doctors want tools that work for them. Doctors do not want to be fighting or struggling with tools that are supposed to help them. While that may seem obvious, it's a very difficult bar for most software makers to meet -- particularly in the inpatient and ED space.
I can understand the author's perspective -- she's a new resident at Yale New Haven Hospital and previously ran a medical software start-up that closed up shop relatively quickly. It would be convenient to rationalize the failure of her start-up as stemming from fears of new tech. While it may have been a fear of new tech that prevented her start-up from succeeding, it was not a fear of being made obsolete -- it was a fear that her product wouldn't do the job well.
And yet every EMR system I've ever seen is comically bad. Why the medical providers haven't revolted over being forced to use those things is beyond me.
People outside healthcare really don't understand the sheer amount of time your physician spends trawling through the medical record in the regular course of doing his or her job. You can think of the patient chart as a shared "My Documents" folder on your computer. In modern multi-disciplinary care, you have many different parties taking care of the same patient and working in shifts. In order to get up to speed with what's happening for a particular patient, you need to open up each of the recent files in "My Documents" for that patient and review it. Lab results, progress notes, schedules, etc. Then repeat a couple dozen times for all the other patients you're caring for that day. Endless clicking. Constantly flipping between windows. Burn out.
That's not too much of an exaggeration of what the real EMR experience is like today.
Everything I've ever optimized really well came after a week or so of using the old, bad solution.
Programs made for me, to automate some part of my job? Invariably break some part of my workflow that promptly gets labeled an edge case.
Well, exactly. That's half of the reason why enterprise, medical, restaurant and other similar software generally sucks.
> Every developer should do customer service and a sales call on a regular basis.
In B2C, sure. In B2B for the examples I mentioned above, a "sales call" is where the problem starts. On such a call, you'll get the opinions of people paying for software, not of those using it.
In the UK, doctors are treated much worse than they are in the USA.
The worst was a thermostat.
Heaven help you if you ever lose the manual.
First: you're dealing with humans, so you have to make a lot of things possible. Which is how you easily end up with a shitty UI.
Second: everything is a legal horror. Certification, certification, certification.
PS: and third. Most medical software is not chosen by its users but by management.
There are many reasons for it, just as making software to manage restaurants has proved nearly impossible. The use cases are operationally extremely different in all cases.
Unfortunately, and inevitably, a little tweak in medical software may as well be a tweak on a satellite in space.
Like I said, changes need to be prevented from making the software bad, but the difficulty of making changes also makes the software bad. I don't know what the solution is, but I would like to see this aspect addressed more explicitly.
Until correctness can be done easily in accessible programming languages with a decent developer pool, iteration in this space, along with other life-critical spaces such as trains, planes (and wouldn't automobiles be nice too?) is sadly a long, manual process compared to web development.
Edit: to expand a bit on correctness, think Coq but with the ease of Java or Typescript. Even Rust's safety isn't yet formally proven; that's not a knock on rust, but a demonstration of just how hard it is to do.
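To make the "Coq but easy" idea concrete, here's a toy sketch in Lean 4 (my own illustrative example, not from the comment): a dosage clamp with a machine-checked guarantee that it never exceeds its ceiling. The point is that even this trivial property requires a proof step that mainstream languages have no way to express.

```lean
-- Hypothetical example: a dose limiter whose safety property is
-- checked by the compiler, not by tests.
def clampDose (d ceiling : Nat) : Nat := min d ceiling

-- The theorem: the clamped dose never exceeds the ceiling.
theorem clampDose_le (d ceiling : Nat) : clampDose d ceiling ≤ ceiling := by
  unfold clampDose
  exact Nat.min_le_right d ceiling
```

Scaling this from a one-line clamp to "won't make the wrong decision for a patient with a pulmonary embolism and three comorbidities" is exactly the gap the comment is pointing at.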
Even if such a thing existed, it would require a very rich set of constraints on the program to be able to encode something like "won't make the wrong decision when a patient has a pulmonary embolism and these three other conditions" and I think that's what doctors would want.
As for rich constraints, that does bring up another point -- how do you quickly iterate on a system full of bad data or systems of incompatible data? To get Watson to a degree of certainty, so much data manipulation and review happened by hand that it was a wasted effort.
This isn't to say that I'm in love with what I've seen of the software my doctors use (or the puckering feeling seeing how outdated the OS tends to be). I'd be a whole lot less happy, though, if their software had the reliability of say, imgur or reddit.
And here you are, cutting-edge hospitals running 2012 versions of modern software, and paying millions of dollars for the privilege.
The direction has to be exactly the opposite, by giving the patient increasingly more control over his own health.
> Coq but with the ease of Java or Typescript
Note also that Coq/Isabelle/Agda and their successors will still suffer from a variant of the oracle problem: where should the specifications come from?
Finally, note that virtually no aviation software was formally verified with interactive theorem proving as of 2018. The extreme levels of reliability in this space have always been achieved with other software engineering methods, including rigorous testing.
(This is what you get when you design for those who pay for your software, and not for those who'll use it.)
This doesn't mean that the doctors bury their heads in the sand, but it is a part of the reason providers are resistant to adopting these tools. Relying on their own knowledge and intuition may not be better but it won't hurt them as evidence in a hypothetical malpractice suit. The decision support tool has to be sufficiently better to outweigh this.
I hold no opinion on the validity of this line of thinking, but I can say that it is part of the calculus, at least for some.
Even "better" when there's no concept of "the EMR". My foray into healthcare IT was at a hospital/clinic district that used no less than 4 EMR systems (one new and one legacy for the main two hospitals, plus one new and one legacy for all the satellite clinics). That's not including the specialized systems for ER, Oncology, Obstetrics/Gynecology...
All for the pursuit of Meaningless Use.
Obviously it's not smart for a doctor to reject the idea of knowing internal body temperatures at least to have as a data point, but those doctors weren't COMPLETELY off base.
My girlfriend can be very ... concerned. We spent $1000 in vet bills before a real avian expert was like "Guys, he's fine. Look how happy he is. He just loves the attention."
Apparently one of his tests for vet interns is to tell them "This bird is sick. Figure it out". Then he gives them a perfectly healthy bird. Their bias always makes them find something. Caused by their tests half the time because they're stressing the bird.
Seems like the "bias" in that case is likely due to an authority figure intentionally misleading them in the context of a "test", where one naturally assumes they're not being deceived by the very premise of said test? If an actual bird owner came in as a client and said the same (or if the test explicitly told them to assume this is the situation), the interns might very well still realize the client is wrong.
I’m no vet but if you have to dig hard for signs of illness, the patient likely isn’t sick. Our bird had increased uric acid. Could be anything. It was a little high.
If he was actually sick we wouldn't be guessing whether the number is high. It'd be 10x. Waaay out of bounds. A clear signal.
It's like in product design. Your conversion rate improved 0.05% after you ran an experiment for 2 days. Is that a signal or noise? Eh, probably just noise. Observe longer.
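The product-design analogy above can be put in numbers. Here's a rough two-proportion z-test sketch with made-up figures (a 2.00% vs 2.05% conversion rate over 10,000 users per arm -- both assumptions of mine, not from the comment):

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    return (p_b - p_a) / se

# Hypothetical 0.05% lift: 200/10,000 vs 205/10,000 conversions.
z = two_proportion_z(200, 10_000, 205, 10_000)
print(f"z = {z:.2f}")  # far below the ~1.96 threshold for significance
```

With these numbers the z-statistic comes out around 0.25 -- statistically indistinguishable from noise, which is exactly the "observe longer" conclusion.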
The problem with comfortably saying the client is wrong is that they're the authority figure. They have years of data, you have 30 minutes with the patient.
And yes I have had vets casually chat with me for 2 hours when they did spot something suspicious in the bird’s behavior and wanted to observe longer to see if it’s a pattern.
While I do agree with a single point not being enough information, nobody is looking at body temperature alone when diagnosing a patient -- unless they're reaching internal temperatures of below 95° or above ~100.9° for hypothermia or hyperthermia respectively. With each diagnosis there are /x/ number of signs and symptoms that go along with it, so it is crucial that we gather as much information on all the VS we can, no matter how minuscule the data may seem.
Procedures aren't the same between clinics, though.
All emails to the company were unanswered. All phone calls went to voicemail. Nothing could be done. She was 5 years into her PhD on this and had to restart the whole project (she wasn't much of a coder, more a clinical trials person).
In medicine, the SV mindset of 'move fast, break things' means that granny is breaking her hip and isn't going to last much longer. You CANNOT 'break things' in medicine. However, if you do get through clinical trials, hoooo baby! You essentially have a monopoly and will be rolling in it. MedTech investing is a lot like SV's VC investing, just a lot more careful.
Does the Apple watch not work?
> But if they had been under clinical trial already, there was a real possibility that real people out there would have been harmed severely (a 'bad' geriatric fall can become lethal).
How would that be different if they didn't have a device? They would have died anyway, right?
Maybe. If they think that the device will help them in their hour of need, then they may be taking chances they normally would not have. To have something remotely bricked and then not tell the people it was bricked is a gigantic ethical violation.
> Does the Apple watch not work?
Not to a medically relevant level:
"Apple Watch cannot detect all falls. The more physically active you are, the more likely you are to trigger Fall Detection due to high impact activity that can appear to be a fall.”
If we pick apart disclaimers, I think we will find that nothing works. /gentle_sarcasm
We on the other hand have the Therac-25.
We claim to be on a level of engineering fields. We should also act that way.
Mars Climate Orbiter, 1999: http://articles.latimes.com/1999/oct/01/news/mn-17288. Undetected metric conversion error (though unclear if this was a manual process or software).
Toyota unintended acceleration, 2010: https://www.edmunds.com/car-safety/for-toyota-owners-uninten.... Toyota maintains that it was not caused by software error.
Now let's list some cases where the software did have ultimate control:
Schiaparelli EDM lander, 2016: https://newatlas.com/esa-schiaparelli-mars-crash-inquiry/496.... Faulty decision-making in the automated descent system due to input saturation.
Tesla auto-pilot crash, 2016: https://www.theguardian.com/technology/2016/jun/30/tesla-aut.... Faulty decision-making due to incorrect image analysis. In this case, the additional safety controls (the driver himself) failed too.
Uber car crash, 2018: https://money.cnn.com/2018/03/20/news/companies/self-driving.... Faulty decision-making due to incorrect image analysis, though the investigation is still ongoing, so no definite cause determined (AFAIK).
If you expect your doctor's software to be perfect, be prepared to pay for two software engineers' salaries per doctor.
"AECL had never tested the Therac-25 with the combination of software and hardware until it was assembled at the hospital."
Is that true, or are doctors incredibly risk averse since they have seen what can happen when something goes wrong -- and that is death?
> … argue that physicians in general underappreciate the likelihood that their diagnoses are wrong and that this tendency to overconfidence…
More on the combination of algorithms and humans:
> Meta-analyses comparing clinical and mechanical prediction efficiency have supported Meehl's (1954) conclusion that mechanical data combination and prediction outperforms clinical combination and prediction.
1) Having CT scans, and paying for them, does not really objectively lead to the conclusion that followed. Let's feel that tumor inside your lungs.
2) CT scans can also work without contrast agents. In addition, they typically do not register everybody for a CT scan, nor pump them full of contrast agents. There is a process. In the US, some hospitals are trigger-happy as they get paid per case; blame the system, not the technology. If anything, an algorithm will fix that nasty human behaviour.
3) Having biased humans enforce decisions is not always a guarantee for success either. Every human sees only a fraction of the total amount of cases an algorithm processes within seconds. There are several fields where AI already outperforms elaborate test panels of MDs. Though it is hard to introduce these algorithms for the same reasons Tesla is having issues. Who is responsible when a mistake is made?
3.1) You would be amazed how often MDs do not agree when the same problem is put in front of them. 50/50 and 60/40 splits are very common. AI is typically more in the 80/20 or 90/10 range, which is a huge improvement.
Now, all of this does not mean we do not need MDs anymore. An important aspect often neglected due to time bounds is the interaction of a patient with the doctor. With algorithms saving time more could go to the patient. That's a win.
Also, mammography and even colonoscopies have been shown, for most of the population, to do more harm than good. Cochrane is full of meta-studies about it.
The medical industry is very shady.
There is no evidence for that statement. More specifically, there is no evidence that a single radiation dose below 100mSv is harmful at all, but plenty of evidence (Taiwanese radioactive apartment buildings, nuclear navy worker study) that it isn't. Muller made it up for political reasons.
> Title: Cancer risk in 680 000 people exposed to computed tomography scans in childhood or adolescence: data linkage study of 11 million Australians
> Conclusions: The increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. Because the cancer excess was still continuing at the end of follow-up, the eventual lifetime risk from CT scans cannot yet be determined. Radiation doses from contemporary CT scans are likely to be lower than those in 1985-2005, but some increase in cancer risk is still likely from current scans. Future CT scans should be limited to situations where there is a definite clinical indication, with every scan optimised to provide a diagnostic CT image at the lowest possible radiation dose.
And about "a single radiation dose": as soon as you get a CT, the chances that you will have only a single one in your life are greatly reduced, because you just had one. So it is still better if the count remains at zero; otherwise your precondition is easily invalidated.
Now compare this to
> only a single one in your life
"A single dose" as in "a discrete event". Another single dose the next month is (probably) harmless again. Cells react to radiation with repair mechanisms, and once that activity subsides, the event is over.
Radiation exposure isn't linearly cumulative. The argument that it is was made before we even knew the structure of DNA! Today, we know better.
I also don't see what the problem with the selection of people is supposed to be. Those selected are more likely to not be able to repair DNA damage? I think this particular selection makes no difference for the purpose.
Overall, OP said "there is no evidence" and it seems that yes, there is. What you think of that evidence is not the question, OP had said there isn't any. When I look at the actual recommendations it seems that most medical people don't think so, after all, the recommendation still is to limit the radiation exposure, not just for the frequently exposed (radiation workers) but also for those one-time patients.
Even on a per-event basis reducing the amounts of radiation was and is a major design goal for the devices. Does not look like those who are involved in all of this think there is no problem.
This is evidence for a correlation between the number of CT scans and cancer incidence. To jump to the conclusion that the cancer is caused by the radiation from the CT requires a leap of faith.
The funny thing is, if an epidemiological study shows that low dose ionizing radiation is beneficial (radioactive apartment buildings, nuclear navy workers), it's dismissed by a completely ad-hoc "healthy worker effect" or "healthy student effect". But in a study of people who received a CT scan, where you should expect a "sick people effect" (healthy people don't get CT scans), you "don't see a problem".
OP responded to a specific comment, and I responded to OP's comment. I don't understand your point in that context. I'd think showing one study -- I didn't bother to look any further -- that shows a risk was sufficient.
Since even adults have plenty of still dividing cells left I see it as reasonable to assume that adults are at risk too, even if that will likely be lower.
I also recommend at least the "Conclusion" section of this document, selected as an example, not as the one definite document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3611719/
It is a good read overall too.
If the doctor makes money from your CT scan, you are absolutely right to question the need. Conflict of interest and all. Sure, you increase the chances of cancer, but that has to be weighed against what can happen if you don't do the CT scan.
Over-testing leads to over-diagnosis, and that can be harmful.
Consequently, having an extremely skeptical viewpoint on tools is perfectly rational.
I can also tell you from discussions with doctors that one of the problems is that the intersection of GUI programmer, competent engineer, and medical domain knowledge is either a null set or a single person. (For example: EEG analysis seems to be a natural fit for ML/AI--enormous amounts of data with events only sporadically scattered in it--yet there is nobody capable of handling the intersection of talents required.)
I would also add that my perception from working at a major east coast hospital has actually been that hospital IT clamps down on new tools more than anyone because of HIPAA requirements, etc, that the doctors ignore/don't care about as much as they should. It's a complicated, layered system.
My neighbor's son is a doctor who needed better software. He trained himself to become a GUI programmer. The resulting company is very succesful. (As reported by his mother, so I'm allowing for some parental bias.)
The fear they have isn't that they'll be replaced on the job. The fear they have is that they will be required by some policy to enter yet another copy of the same data into yet another "time saving" system.
Any theory of the current state of medicine that involves a cardio-thoracic surgeon feeling like they are not completely irreplaceable, one-in-a-trillion geniuses/minor deities, put here on this planet to spare us lesser mortals (as scheduling allows) seems... improbable.
Hacker News, come on. You're better than this.
Taking it from the top: The obvious take is that the new tools this is referring to is EMR and things like Watson. Will return to this in a moment.
Subjective and objective data both play a role in medicine. The eye of an experienced person can often see in a blink what would be missed by someone looking only at numbers in a chart. Gestalt, or the fast system of Kahneman, is invaluable when time is a serious concern. But no one starts out that way. The slower, methodical plod of consciously using Bayesian thinking is how the art is learned. Hear hoofbeats, think horses, not zebras... trying to weigh all available data and attempting to chart a course that gives patients the best outcomes at the most reasonable costs. Nowadays additional hoops must be jumped through: laws constrain, institutions have policies that must be followed, and most of all care is dictated by what is allowed by the insurance company. Rather than an invisible hand, this is an invisible supervisor robbing much autonomy and initiative from a sense of worthwhile work. Furthermore the ever-present fear of litigation pushes towards a course with more testing than might be suggested by treatment and diagnosis alone: how would this course be defended if things go wrong, as they will for a certain number? All of these things individually stood to reason, but we as a society must keep in mind the cumulative weight of it all. Emergent phenomena aren't just a thing of programs and physics, they're a thing of human systems like healthcare.
Back to the article. A happy picture is painted of modern CT scans, yet it neglects the downsides. In 1980 the average per capita dose of radiation was 3.0 mSv, with 0.5 coming from medical imaging. It is now 5.5 mSv and rising, with medical imaging alone exceeding 3.0 mSv. Medical imaging is now a larger source of ionizing radiation than all other sources combined, with particularly high risks for those in utero or pediatrics. Like any other test or treatment, there is a risk/reward ratio. As technology improves, it is more likely to be adopted, not because earlier physicians were anti-technology Luddites, but because the improved technology changed that risk/reward ratio. We are more likely to use imaging with less exposure, or better yet use a modality without that risk.
Back to the Bayesian part of thinking... testing isn't perfect. I'd love to see a test that is 100% sensitive and 100% specific. But there are inevitably false positives and false negatives. Tools and tests need to be used in an appropriate situation. For example: I have a test that is 99% sensitive. Great! It'll catch someone with the disease, 99% of the time. So I can thoughtlessly order tests and thoughtlessly obey the results, right? Wrong. What happens if you use it to test for a rare disease that only 0.1% of the population will have? It depends on how specific the test is. How many false positives does it let in? If I test it on 1,000 folks indiscriminately, I'll end up with a basket of folks, only one of which actually has the disease. How many false positives got treated (and possibly harmed by that treatment)? Mammograms work this way (which have fallen a little out of favor in younger demographics without risk factors like the BRCAs), necessitating imaging and invasive biopsies that, upon further collection of data and review, seem not worthwhile for those under 40 and of questionable value under 50.
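The base-rate arithmetic in the paragraph above is easy to work through explicitly. A minimal sketch, using the comment's own numbers (99% sensitivity, 0.1% prevalence) plus an assumed 95% specificity, since the comment doesn't give one:

```python
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence              # sick and caught
    false_pos = (1 - specificity) * (1 - prevalence) # healthy but flagged
    return true_pos / (true_pos + false_pos)

# 99% sensitive, 95% specific (assumed), 0.1% prevalence:
ppv = positive_predictive_value(0.99, 0.95, 0.001)
print(f"P(disease | positive) = {ppv:.1%}")  # roughly 2%
```

Even with a 99%-sensitive test, a positive result in this low-prevalence population means the patient has only about a 2% chance of actually having the disease -- the "basket of folks, only one of which actually has the disease" problem.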
Tools are great! They need to be used appropriately though. Things have a cost, not just financial but physical and temporal. Indiscriminate use of tests and tools is the last thing anyone should want.
Nothing to add in a world of advancing technology? Bah. Most would love for its promises to come to fruition. EMR for example. We were promised time savings, with cross-talk between systems for better availability of data and improved patient safety. Mostly what has happened is administrators now have data used to push docs to see more and more patients (and spend less and less time with any one of them), all the while the paperwork stacks up. Somehow the paperwork never quite seemed to go away.
Maybe doctors don't reject tools that make their jobs easier. The article is full of tools that were eventually adopted, after all. I can point to many in development that have their ardent advocates, like point-of-care ultrasound among many others. Maybe they don't like tools that were sold as making their jobs easier but mostly don't, and instead benefit insurance companies and conglomerate administrators.
Under capitalism, old companies (like hospitals) don't really tend to adapt in response to market forces by actually changing anything as drastic as the shape/relative scale of their internal bureaucracy.
It looks like that happens from a 10,000ft view, but what's really happening is that old companies are just dying, having been outcompeted by new small companies that "grew up in" the market environment where the changes were "the new normal." And then, eventually, the new, small companies acquire the big old dying companies for their brand value—so the resulting merged company has the appearance of the big old company having managed to turn over a new leaf.
When a company is only slightly relatively unfit (due to e.g. serving a market with inelastic demand, like medical care), it can take decades for their relative unfitness to deplete their resources to the point that they'd seek to be acquired. The current heavily-bureaucratic hospitals might be actively dying right now—it'll just take them another 50 years to become all-the-way dead.
Thoughtlessly obeying is undoubtedly a sign of incompetence; a doctor must always use judgement based on the patient's condition and not rely on a bunch of numbers, a doctor told me in my teens. However, ordering tech-based tests is not thoughtless all the time, at least not with competent doctors. I mean, in this age of self-diagnosing based on googling, a doctor not ordering such tests would be seen as incompetent, and even ignorant. Would a doctor risk his reputation, and possibly livelihood, just to prove a point, even when his spidey senses tell him there is nothing severe about the patient's condition when the patient insists on it, directly or indirectly? Only a House would do so. The writer is too eager to generalize for some reason.
Doctors are not fools. If a tool truly made their job easier, it would not be rejected out of hand.
Job security for physicians is rarely the issue, there are plenty of sick people.
You'd imagine something as basic as checklists would be implemented as standard procedure in modern medicine -- yet this study was in 2013, not 1963. Most likely the majority of surgeons are still dragging their feet on this. It's a very conservative field with lots of big egos.
It's understandable, because you don't want to mess around with people's health, but I also see doctors largely acknowledge that their field is more conservative than it should be.
It's apparently also different based on specialization. Ophthalmologists, for example, have a reputation for being early adopters.
The original thermometers were a foot long, available only in academic hospitals, and took twenty minutes to get a reading.
And it's probably the same with CT scans -- I have a stomach ache, I don't want to come back tomorrow to be scheduled for a $2000 CT scan just in case it's appendicitis. Now if I take the medicine and it's not better in a day or two, maybe then I will want that CT scan.
That's what most doctors in my country would do. Are you saying that US doctors will send you to CT right away, just to be sure?
Had they sent me home, medicine or no, things could have gone a lot worse.
I'm glad they do the CT scan here and not just send people away until they get worse.
I had a dog who started having a runny, bloody nose all the time. 4 x-rays, 2 nasal endoscope procedures, several thousand dollars, and still couldn't figure out what was wrong. In desperation took him to a really good emergency/surgery vet clinic; they did a CT scan and it revealed a large tumor growing on the nasal cavity side of the soft palate. Confirmed with an endoscope going into the mouth and back up into the sinuses; couldn't see it from the front. Super frustrating because it was too late to do anything at this point.
I think using CT scans in the context of a checkup, to look for problems, is bad, because as discussed elsewhere on this thread, if you go looking for something, you'll find something. But if you already know there is a problem, they're the best tool we have for looking inside the body without cutting.
It's not like you came in with gas pains and the doctor had to send you for a CT just in case it turned out to be something more serious.
Modern CT scans, for example, perform better than even the best surgeons’ palpation of a painful abdomen in detecting appendicitis. As CT scans become cheaper, faster, and dose less radiation, they will become even more accurate.
Though defensive medicine definitely happens in the USA, and probably other countries as well:
Yea, you would think so... Unfortunately, in medicine, just because a computer program is better than a doctor at diagnosing a patient is no guarantee it will be used. The classic example here is the MYCIN expert system developed in the 1970s. MYCIN was shown to outperform infectious disease experts by 1979 in blind testing:
>... Eight independent evaluators with special expertise in the management of meningitis compared MYCIN's choice of antimicrobials with the choices of nine human prescribers for ten test cases of meningitis. MYCIN received an acceptability rating of 65% by the evaluators; the corresponding ratings for acceptability of the regimen prescribed by the five faculty specialists ranged from 42.5% to 62.5%. The system never failed to cover a treatable pathogen while demonstrating efficiency in minimizing the number of antimicrobials prescribed.
I don't think that was what was being measured.
The evaluation was a comparison of:
>...MYCIN's choice of antimicrobials with the choices of nine human prescribers for ten test cases of meningitis.
In that evaluation:
>...MYCIN received an acceptability rating of 65% by the evaluators; the corresponding ratings for acceptability of the regimen prescribed by the five faculty specialists ranged from 42.5% to 62.5%.
The fact that the acceptability ratings of the five faculty specialists ranged from 42.5% to 62.5% implies that this isn't a trivial problem.