>Wittman (1941) constructed an SPR that predicted the success of electroshock therapy for patients more reliably than the medical or psychological staff.
>Carroll et al. (1988) found an SPR that predicts criminal recidivism better than expert criminologists.
>An SPR constructed by Goldberg (1968) did a better job of diagnosing patients as neurotic or psychotic than did trained clinical psychologists.
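For context, the SPRs in these studies are usually nothing fancier than a weighted checklist. Here's a minimal hypothetical sketch in that spirit (the risk factors and cutoff are invented for illustration, not taken from any of the cited papers):

```python
# Minimal sketch of a statistical prediction rule (SPR), in the spirit of
# Dawes's "improper linear models": a unit-weighted sum of binary risk
# factors compared against a cutoff. Factor names and cutoff are
# hypothetical illustrations only.

RISK_FACTORS = ["prior_offenses", "young_at_first_arrest", "unemployed"]

def spr_predict(case, cutoff=2):
    """Return True (high risk) if the unit-weighted score meets the cutoff."""
    score = sum(1 for factor in RISK_FACTORS if case.get(factor, False))
    return score >= cutoff

case = {"prior_offenses": True, "young_at_first_arrest": True}
print(spr_predict(case))  # True: two of three factors present
```

The point of the research isn't that these rules are sophisticated; it's that even crude rules like this consistently match or beat expert judgment.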
It's completely amazing we allow human doctors to make diagnoses at all. At the very least, algorithms should always be part of the process (but note that humans given the results of an algorithm still do worse than the algorithm on its own).
The problem is that there has been enormous resistance to the use of algorithms in medicine. People irrationally distrust algorithms and strongly prefer human judgement, even when they know the algorithms are superior. Psychologists have actually studied this phenomenon and named it "algorithm aversion": http://opim.wharton.upenn.edu/risk/library/WPAF201410-Algort... This isn't even getting into the institutional resistance to change, or to having people lose their jobs to robots.
Anyway, all I'm claiming is that statistical methods are much more accurate than humans. Nothing there disputes that claim. If you want the most accurate predictions possible, you should use an algorithm.
That article implies that humans are somehow fair or unbiased. That is a completely ridiculous claim that has been proven false many times. Human judges give ugly people twice the sentences of attractive people. Judges have been shown to give significantly harsher sentences just before lunch, when they are hungry. Not to mention all the classic biases around gender/race/political affiliation/etc. Studies have shown interviews are worse than useless at assessing how good someone will be as an employee. Instead, employers are biased by how much they like the candidate. We should hardly expect traditional parole interviews to be any different.
But almost no one cares about these results. Yet when an algorithm is shown to have a bias (one that isn't even statistically significant), people freak out. This, if anything, proves my point that algorithm aversion is a serious problem.
While there are respectable ML folks making those criticisms, the commentary I've read seems more click-bait than science.
ML can be used just as traditional statistics to make causal inferences and predict the effect of intervention. There's nothing about ML that reinforces status quo more than traditional statistics, let alone case studies (aka anecdotes) or "common sense."
Google had an interesting take on how you could control for some dimensions of bias here https://research.google.com/bigpicture/attacking-discriminat...
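For what it's worth, the core idea in that Google piece (which builds on the "equality of opportunity" line of work) can be sketched in a few lines: pick a separate score threshold per group so that true positive rates match across groups. The scores and groups below are made-up toy data, not from the demo itself:

```python
# Hedged sketch of the "equality of opportunity" idea: choose a per-group
# score threshold so that the true positive rate (e.g. loans granted to
# applicants who would actually repay) is equal across groups.
# All scores, labels, and groups here are illustrative toy data.

def tpr(scored, threshold):
    """True positive rate: fraction of actual positives scoring >= threshold."""
    positives = [score for score, label in scored if label]
    if not positives:
        return 0.0
    return sum(score >= threshold for score in positives) / len(positives)

def threshold_for_tpr(scored, target):
    """Largest threshold in 0..100 whose TPR still meets the target rate."""
    return max(t for t in range(101) if tpr(scored, t) >= target)

# (score, would_repay) pairs for two hypothetical groups:
group_a = [(80, True), (65, True), (55, False), (40, False)]
group_b = [(60, True), (45, True), (35, False), (20, False)]

t_a = threshold_for_tpr(group_a, 1.0)  # 65
t_b = threshold_for_tpr(group_b, 1.0)  # 45
# Different cutoffs per group, but equal true positive rates.
```

The interesting design choice is that "fairness" here is an explicit, tunable constraint on the decision rule, which is exactly the kind of control you can't impose on an unaided human judge.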
Yes, but that is not how ML is marketed. It's marketed as being way better than traditional statistics, and soon even better than humans.
But the reality is that a trained system is only as good as the data used to train it.
But ML can improve upon other methods of interpreting that (biased) data. Thus, in some ways better than traditional statistics and non-mathematical human intuition.
It would be nice if you could provide sources for these claims.
(To be fair, the study size was small).
There's a study on attractiveness and juror bias here (it's more complicated than just "ugly people get worse sentences" but some bias does show up for certain juror personality types):
Reference: Flores, A., Bechtel, K., & Lowenkamp, C. (2016). "False Positives, False Negatives, and False Analyses: A Rejoinder to 'Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks.'" Federal Probation, September 2016. You can find the article here: http://www.uscourts.gov/statistics-reports/publications/fede...
In fact, the ProPublica analysis (written by journalists, not scientists) was so wrong that the authors of the study wrote:
"It is noteworthy that the ProPublica code of ethics advises investigative journalists that "when in doubt, ask" numerous times. We feel that Larson et al.'s (2016) omissions and mistakes could have been avoided had they just asked. Perhaps they might have even asked...a criminologist? We certainly respect the mission of ProPublica, which is to "practice and promote investigative journalism in the public interest." However, we also feel that the journalists at ProPublica strayed from their own code of ethics in that they did not present the facts accurately, their presentation of the existing literature was incomplete, and they failed to "ask." While we aren’t inferring that they had an agenda in writing their story, we believe that they are better equipped to report the research news, rather than attempt to make the research news."
The basic algorithms like RCRI and CHA2DS2-VASc are almost universally applied in the appropriate context (at my quaternary care teaching hospital, granted).
If there are numerous algorithms that we should be using but aren't, I'd argue that the bigger barrier is making them accessible and easy enough to use that they can be applied during a clinical encounter (among all the other things that need to be done).
Ordering an A1C, remembering to actually check the results, and telling the patient to stop eating sugar - well before they are symptomatic for diabetes.
Yeah, it's that last part ("telling patient to stop eating sugar" [and getting them to actually change their behavior]) that's the hard part.
This is the holy grail of medicine and has been for decades. We've done billions of dollars of research over the last century, and the one clear thing we've found is that we do not know of any cost-effective way to get patients to change their behavior at large.
Seriously, if you really have the answer to that question, you could be a multi-billionaire.
That is, we can change the behavior of some patients, but they're generally the patients who would have changed their behavior anyway. And we can change the behavior of some of the rest, but not cost-effectively.
Like, what might be the huge negative cost of tobacco taxes?
(There's an argument to be made for freedom, but you are making an economic argument about costs, not a moral one)
Sometimes we just have to let other people live with the consequences of their actions; which is hard, because often we also have to live with those consequences.
Whatever we say about "other people", we should also remember that we are "other people" to somebody.
But does such a test actually provide benefit to the general public? Would an otherwise healthy 30-year-old with a bad diet fail this test - or would it only show that, despite his bad diet, his blood glucose levels have been pretty stable over the last 3 months? I'm thinking that in an ordinary person with a working pancreas, it wouldn't show the spikes of diabetes.
That said, I do think these sorts of things work, especially when paired with labor protection policies that allow folks to miss work to go to the doctor (or be sick). For example, once I was in the Norwegian health system, I got a letter encouraging me to see the doctor for my cancer screening (I'm female). Turns out, they want folks to get one every 3 years and keep a database to keep track of who has gotten it and who hasn't. I don't know how well this sort of thing would work in a fragmented system like the US has, though.
Anyhow, if it weren't for this, I would never have had such a test. I'm in my late 30s. My father was diabetic at my age, actually, and it runs in the family. I'm a prime candidate for preventive measures, but still nothing. Now, I understand that I go to the doctor rarely, but at any one of those visits I could have been asked about getting a physical and that sort of thing.
There's also some research, in general, about the resistance of doctors to such systems. And yes, while some of the reasons are UI, some are motivated by doctors' preferences.
And even if the UI is a bit slow, we should at least have seen such systems become common in more critical settings, or with vulnerable or complex patients.
I'm a doctor. I'd be delighted to have decision support for every problem. As it stands I have decision support for medication dosing and for appropriate use of imaging, both of which are very helpful.
Don't blame the customer for not buying if the product is not good.
Could be just marketing, but maybe.
There's a free test. If you do take the test, please share your opinion of it; I'd be interested.
That's not 100% true for all disciplines. E.g., in radiology, there are some assisting technologies around for identifying tumors etc. While many radiologists use them daily, the evidence isn't always that clear. Often they just lead to a higher false positive rate.
Given all that, brain radiology is a lot easier than pathological radiology (e.g. tracing the extent of a lung tumor). There is a lot of research into automating this, but Dice overlaps are still poor.
The article and some of the people quoted in it, like Hinton, seem to think MDs need so much help for diagnosis. Real life isn't House MD. The answer is very often obvious from the history/physical exam and the most basic labs.
Anyway, I don't like the fact that Hinton and others seem perfectly OK with not knowing how the machine is diagnosing. It's machine clinical gestalt.
I also think this diagnosis by machine would be very frustrating. Imagine, as a patient, asking the diagnostic robot: "why do you think this happened to me?" or "Why do you think the diagnosis is x?"
And then not having an answer other than "the imaging and tests are consistent with imaging and tests of previous patients with x disease" - this sounds like a bad answer. That wouldn't be good enough for me, but maybe it would be for plenty of other non-curious parties. Maybe there is this huge group of people who want healthcare from robots with robotic bedside manners. But I doubt it. Hinton is wrong, doctors are going to be augmented by helpful diagnostic applications. We will still have to learn to diagnose on our own but we will have help too. Maybe a robot to help triage cases into "serious/less serious" categories (and with working initial diagnosis) with good accuracy.
I still feel that this is a tool that should be used by Doctors (rather than to replace them) as the reports will likely contain more detail than a layman would understand.
1.) Medical data is often quite sparse and quite poor quality
You may only get a few years here and there for a patient, and a lot of the things mentioned in the article ("cough is raspy", "I have a feeling it might be pneumonia") aren't always in the medical record, and even if they are they aren't in a form that's easily accessible to a computer.
2.) Interaction matters
Seeking medical care is an extremely vulnerable state to be in. A good doctor isn't doing just diagnosis, but teasing out the right bits of information. It's unclear how a computer will handle, "I feel I'm not getting the full story from this patient" situations. A good doctor (not all of them are) will have the interpersonal skills to get the full story.
Finally, even if you solve diagnosis, then what? You have to take action. For a lot of expensive chronic conditions, it's not like the answer isn't obvious or even particularly needs diagnosis. If you're overweight, you probably already know you should eat less and maybe be more active. Many times even people with certifiable diabetes diagnoses do not change their lifestyle appropriately. How you handle putting the diagnosis into action is a tough problem, and it's not entirely clear how things like AI will fix it. Convincing people to change habits is damn hard.
Disclosure: I work as a data scientist at a medical AI startup (www.lumiata.com)
really, just following your lead here, but i guess we should then ask:
do patients respond better to a doctor they have a good relationship with?
i mean, there are some patients who might say, "Well, I made a deal with my doctor that I'd reduce my sugar intake and try to keep my blood glucose levels down below X, on average" (or whatever) and maybe they're motivated by a desire to please their doctor or not disappoint their doctor.
but, could that work as well for a robot doctor?
what would happen if there were a robot doctor that patients liked more than any of their human doctors?
I am sorry to say that anyone thinking that we could "ML all the things" in a hospital or office practice on this here day clearly has no idea what a mess our hospitals really are.
Some things are amenable to ML though, and most of us welcome any help we can get. Even from machines.
I'd much rather have a machine do all the pattern matching without any human bias.
Doctor-patient trust is a big issue, especially when the former doesn't take the latter seriously.
What do you call an affordable price in relation to mean, or median, purchasing power?
A quick web search finds a lot of one off dining tables in the USD700 to USD3k range in the US. Made to measure dining tables in the UK can be had for less than GBP800.
Median household income in the UK 2014 was about GBP24k gross, roughly GBP20k net (https://www.google.no/url?sa=t&rct=j&q=&esrc=s&source=web&cd...).
So a GBP800 table would be about two weeks of after-tax income. Was it really so much less 200 years ago?
Perhaps someone can find the data for 200 years ago and do a more sophisticated analysis.
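For concreteness, the back-of-envelope arithmetic above can be checked in a couple of lines (the figures are the rough approximations already quoted, not official statistics):

```python
# Rough check of the "two weeks of income" claim using the figures above.
net_annual_income = 20_000   # GBP, approx. median UK household income after tax
table_price = 800            # GBP, made-to-measure dining table

weekly_income = net_annual_income / 52
weeks_of_income = table_price / weekly_income
print(round(weeks_of_income, 2))  # 2.08 weeks
```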
I'm going to call that claim into question. Keep in mind that 200 years ago was 1817, well before the rise of the middle class. I bet that far fewer people could afford the luxury of a table even approaching the quality of an Ikea table, let alone what a few hundred dollars extra will get you for a custom table today.
What's nice is it seems to offer a falsifiable prediction. Namely, that there are "super-experts" out there who consistently beat machine learning algorithms. Even if they aren't numerous enough to bring up the mean. Do the current studies show that?
I chose the financial markets because you could say they are the perfect competitive space for this kind of evaluation, though I wouldn't be surprised if centaurs or humans existed in other areas as well who are better than machine learning algorithms. Still, the point stands that for people in the 91st to 98th percentile, a lot of value will probably be lost when humans start moving away from these practices in droves. Another acceleration for the 1%.
I believe that in cancer research, ML will be crucial. Finding similarities among mutations of cancerous cells. I am really hoping cancer treatment can benefit from this.
"AI will become just another tool in a physician's toolkit."
AI will not replace the role of a doctor. However, the role of a doctor might shift to overseeing the AI in very specific scenarios. Furthermore, I'll add that there's a lot more to medicine than making diagnoses.
As it stands right now, we have no shortage of promising applications of AI in healthcare. The bigger issue is getting these applications into the clinic. Very few of them are actually implemented in a real-life setting where they can affect patient outcomes. I can't tell you how many studies are published proclaiming, "HEY, AI CAN DO X BETTER THAN DOCS." Cool study, bro. Now can you actually get that into the clinic and start saving lives?
Suchi Saria at JHU is one of the few people I know that has bridged that gap. Other resources for those that are interested:
- Baxt, William G. "Application of artificial neural networks to clinical medicine." The Lancet 346.8983 (1995): 1135-1138.
*Programmer and MD student about to start my PhD in comp sci, specifically machine learning + healthcare.
The "deep learning" (backpropagation) boom reminds me of the euphoria around expert systems in the nineties, but with palpable results: despite requiring crazy amounts of computing power, you can see some genuinely difficult problems being solved. What scares me is that instead of the clean, understandable modeling of rule-based expert systems, deep learning models are hardly understandable, even by experts. Learning how to train a model is one thing; knowing why it works is another. You can get correct results on N training cases, assume you've captured the underlying model, and then discover on case N+1 that you had a correlation/causality problem all along. For example, you might find the network learned to recognize blue things instead of square things, just because all the square things in the training set happened to be blue, so the first red square goes unidentified.
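That blue-versus-square failure mode is easy to reproduce with a toy model. Here's a deliberately simple, hypothetical sketch: a one-feature decision stump trained on data where every square happens to be blue latches onto color and misclassifies the first red square it sees:

```python
# Toy illustration of a spurious-correlation failure: in the training set
# every square is blue, so a single-feature decision stump that greedily
# picks the first zero-error feature latches onto color, then misfires on
# a red square. Purely illustrative; not a real deep learning model.

# Each example: (features, label). Label 1 means "square".
train = [
    ({"blue": 1, "square": 1}, 1),  # blue square -> positive
    ({"blue": 1, "square": 1}, 1),
    ({"blue": 0, "square": 0}, 0),  # red circle  -> negative
    ({"blue": 0, "square": 0}, 0),
]

def fit_stump(data, feature_order=("blue", "square")):
    """Pick the first feature (in order) with the fewest training errors."""
    def errors(feature):
        return sum(x[feature] != y for x, y in data)
    return min(feature_order, key=errors)  # ties go to the earlier feature

chosen = fit_stump(train)          # "blue" -- zero training error, chosen first
red_square = {"blue": 0, "square": 1}
print(chosen, red_square[chosen])  # blue 0 -> the red square is called negative
```

Both features are perfect on the training data, so nothing in training distinguishes the confound from the real concept; only new data (the red square) exposes the mistake.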
Software on its own can't even reliably tackle a 'simple' task like 6-lead EKG (electrocardiogram) analysis yet. A chest x-ray has a lot more variables. Plus, clinical variables such as patient history can make a big difference in the diagnosis of a similar image.
With all the current interest in "AI" it's easy to forget that this is an old problem area, and current techniques don't fundamentally change anything. In most applications the biggest issues remain access to and quality of the data. For the right application though, you can do useful things.
While machine learning performance frequently rivals or exceeds humans at many individual tasks once sufficiently constructed and trained, only humans excel at dynamically choosing which tasks to pursue, switching levels of analysis, and knowing when to break the rules for the win (losing at Go? Unplug the computer).
To speculate heavily, animal based cognition may be composed of just such a multitude of specialized trained modules, akin to machine learning algos of today: object recognition, emotion recognition, language recognition, typical script/structure of a given scenario, etc. But above that will be classifiers that interpret internal and external environmental signals to choose which of those specialized modules to engage and suppress. In lower animals lacking a heavily recurrent prefrontal cortex, the higher order modules are probably directed by mid brain structures to engage basic fight flight fuck behaviors needed for survival (e.g. pattern recognition module sees snake, freeze or run modules are engaged). In animals with prefrontal cortex, goal and context driven suppression of prepotent responses becomes possible.
Anyway, it seems to me that for machine learning to become a general intelligence, there will need to be hierarchies of specialized machine learning classifiers: some specialized in sensory classification, but others that are meta, classifying those classifiers into scripts, scenarios, etc.
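To make that speculation concrete, here's a hypothetical toy sketch of the architecture being described: trivial stub "modules" plus a gating function that decides which one to engage (all names, rules, and thresholds are invented for illustration):

```python
# Speculative sketch of a hierarchy of specialized modules with a gating
# "meta-classifier" on top that chooses which module to engage. The
# modules here are trivial stubs; everything is illustrative only.

def vision_module(signal):
    return "object: " + signal.get("image", "unknown")

def language_module(signal):
    return "parsed: " + signal.get("text", "")

def threat_module(signal):
    return "freeze"  # prepotent survival response, e.g. on seeing a snake

MODULES = {"vision": vision_module,
           "language": language_module,
           "threat": threat_module}

def gate(signal):
    """Meta-classifier: decide which specialized module handles the input."""
    if signal.get("threat_level", 0) > 0.8:
        return "threat"  # midbrain-style override of slower modules
    return "language" if "text" in signal else "vision"

def respond(signal):
    return MODULES[gate(signal)](signal)

print(respond({"image": "snake", "threat_level": 0.9}))  # freeze
```

The "prefrontal" part of the analogy would be a gate that can learn to suppress the threat override given goals and context, rather than applying a fixed rule.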
There's some pretty good, interesting work in this area.
Probably a doctor will still have to check the results and sign off on them.
Why not? An image is an image.
At the very least it would seem that a machine-based classifier provides human physicians and researchers with more examples to base their inquiries on (possibly even illuminating some features they may have previously missed as important portions of theoretical models).
Past AI apps, like 1980s expert-systems, generally relied on brittle binary criteria that were hard to match with certainty. Too often they produced results that were either obvious or implausible, but at least they could explain themselves. They were also poor at matching against fuzzy clues from patients (and doctors) who are notoriously inconsistent and nonquantitative at describing symptoms. No doubt a greater emphasis on quantitation lies at the heart of today's AI systems. But if the classifications and recommendations of tomorrow's AIs lack explicability, there's no way in hell they'll be trusted or given authority by risk-averse practitioners.
A middle ground is needed, where the 'advice' from the AI is grounded in clear, statistically significant bases and adds value to the process rather than competing with humans. In some spaces, like suggesting cancer therapies that are more likely to succeed based on quantitative data, I think AI will be adopted and appreciated first. Primary care medicine will probably see it last, though it is probably already employed invisibly behind the scenes by insurers for validation and quality control (like prescription drug contraindication checks).
I mean, sure, you have doctors with 20 years of experience who still get the diagnosis wrong, even if it's close; but machines fed large amounts of data still come up short too. I think saying machines will replace doctors is the wrong approach. In the article, one of the doctors interviewed said, "If it helps me make decisions with greater accuracy, I'd welcome it." That's it: we need more tools that enable doctors to make more accurate decisions rather than going on an experienced hunch.
I think it's great this subject is being explored; it will help more people, and help doctors do their jobs even better.
There's also a phenomenon where patients can actually become more knowledgeable than their physicians (especially a GP who is a generalist by nature) regarding their specific condition(s). It might sound counter-intuitive, but think about it - a GP / Family Doctor has to know something about pretty much every condition under the sun. He / she doesn't have time to spend obsessively focusing on just, say, diabetes. Me, on the other hand, the only condition I care about is diabetes, so I can spend all my free time on Pubmed reading all the latest papers on the subject, etc.
And as it happens, recently my diabetes took a turn for the worse. I was originally diagnosed as a type 2, and was being treated with metformin and my blood sugar had been stable for 5 years or so, but it took a big jump sometime in the past few months. I had read up on "type 1.5" diabetes / LADA, and had an inkling that might be my situation. So I read more on that before going to see my doctor, and when I got there, I was actually the one telling him which tests we should run to confirm/deny that scenario. (Note: of course he looked the stuff up to confirm it instead of just taking my word, but he was nodding and going "yep, you're right" as he was doing so).
No AI involved, but I do believe the widespread availability of medical research / information at the patient level is a valuable thing. Yeah, some people probably annoy the shit out of their doctors with uninformed self-diagnosis, but I don't think that offsets the benefit of this information being available.
When I go to the doctor, I have Googled the symptoms I'm experiencing before I go. Once I've been given a diagnosis by the doctor, I Google the diagnosis to verify that I have the symptoms one would expect from such a diagnosis.
Having had, and heard of, too many experiences where doctors got it wrong with hugely detrimental effects, I want to double-check what I've been told and not just blindly accept what one doctor has judged in 30 seconds based on an initial perception of me, without really knowing anything about me.
I think if I've correlated what I thought was a possibility along with what the Doctor has diagnosed and the expected symptoms of that diagnosis, at least I can be confident to a degree that I can trust the diagnosis, prognosis and the course of action provided.
If I am experiencing symptoms vastly different from what I would expect given the diagnosis, I want to be asking questions as to how and why the doctor feels they are correct and I am incorrect. I realize I'm not a doctor, but in this day and age, with the world's information at our fingertips, blindly believing anyone whose advice could have catastrophic consequences for our health and lifespan is shortsighted at best and plain idiocy at worst.
That doesn't make someone a hypochondriac, that makes someone cautious about misdiagnosis. Unfortunately, there are many hypochondriacs out there.
While most people may start off as hypochondriacs (who hasn't been spooked by the prospect of cancer after reading about it?), the more research you do, the more accurate you become, and in recent years I've finally become effective at discerning good doctors from bad ones.
Bring in the machines I say. (The astronomical savings to the taxpayer from earlier diagnoses is the cherry on top.)
This is one area where I consider the debate about employment vs automation settled. Health comes first. Bring on the machines indeed.
I am very happy that my doctor is the opposite of this. If I had a doctor like that, I'd ditch him/her and find somebody new.
I had Metabolic syndrome (prediabetic) and don't now. Still have slight elevated BP and weigh too much but I lost 50 lbs and my other blood work is amazingly good. I was expecting a battle with both of them.
My cardiologist took me off of several meds as well.
Machines can't replace doctors. You can't sue a machine when a chance occurrence leads to a poor medical outcome.