AI vs. MD: what happens when the diagnosis is automated (newyorker.com)
185 points by cft on Mar 28, 2017 | 91 comments



It's worth noting that humans completely suck at almost any task to which even simple statistics can be applied. Even world-class experts will typically be beaten by simple linear regression. See here for many examples: http://lesswrong.com/lw/3gv/statistical_prediction_rules_out... And this is very old research, with methods that are primitive compared to modern machine learning (most of it was done with pencil and paper!):

>Wittman (1941) constructed an SPR that predicted the success of electroshock therapy for patients more reliably than the medical or psychological staff.

>Carroll et al. (1988) found an SPR that predicts criminal recidivism better than expert criminologists.

>An SPR constructed by Goldberg (1968) did a better job of diagnosing patients as neurotic or psychotic than did trained clinical psychologists.

It's completely amazing we allow human doctors to make diagnoses at all. At the very least, algorithms should always be part of the process (but note that humans given the results of an algorithm still do worse than the algorithm on its own).
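To make "simple linear regression" concrete: an SPR is just a small fitted model. A minimal sketch in modern terms (the features and data below are invented, scikit-learn assumed):

    # An SPR boils down to a handful of fitted coefficients.
    # Invented predictors (e.g. age, a lab value, a symptom score).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([0.8, -0.5, 1.2]) + rng.normal(size=500)) > 0  # toy outcome

    spr = LogisticRegression().fit(X, y)
    print(spr.predict_proba([[1.0, 0.2, -0.3]]))  # P(no), P(yes) for a new case

The fitted weights are the whole "expert": a few coefficients, applied consistently to every case.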

The problem is that there has been enormous resistance to the use of algorithms in medicine. People irrationally distrust algorithms and strongly prefer human judgement, even when they know the algorithms are superior. Psychologists have actually studied this phenomenon and have named it "Algorithm Aversion": http://opim.wharton.upenn.edu/risk/library/WPAF201410-Algort... This isn't even getting into the institutional resistance to change, or to having people lose their jobs to robots.


It's interesting that you mention the criminology scenario. Courts are starting to use software algorithms to predict recidivism rates for purposes of setting bond amounts. The reports are even given to judges to inform sentencing. However, there is an enormous backlash against them, because e.g. they predict higher rates for racial minorities: https://www.propublica.org/article/machine-bias-risk-assessm....


The ProPublica study has been shown to be a lie; see https://www.chrisstucchio.com/blog/2016/propublica_is_lying....

Anyway, all I'm claiming is that statistical methods are much more accurate than humans. Nothing there disputes that claim. If you want the most accurate predictions possible, you should use an algorithm.

That article implies that humans are somehow fair or unbiased. That is a completely ridiculous claim that has been proven false many times. Human judges give ugly people twice the sentences of attractive people. Judges have been shown to give significantly harsher sentences just before lunch, when they are hungry. Not to mention all the classic biases against gender/race/political affiliation/etc. Studies have shown interviews are worse than useless at assessing how good someone will be as an employee. Instead employers are biased by how much they like the candidate. We should hardly expect traditional parole interviews to be any different.

But almost no one cares about these results. Yet when an algorithm is shown to have a (not statistically significant) bias, people freak out. This, if anything, proves my point that algorithm aversion is a serious problem.


Chris Stucchio's point is different from that of the article. The problem is that the word "bias" is overloaded, and used in two different ways. When ProPublica uses the word bias, they mean that one race is affected in a systematically different way than another race. When Chris Stucchio uses the word bias, he means a statistical method which has a built in bias to achieve a particular result. It is quite possible for both to be true: the statistical methods have no built in bias, but their prediction ends up making different predictions for different races. I find the most reasonable explanation to be that society itself has a racial bias, and society itself is the data generator.
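A toy simulation makes the "both can be true" point concrete: a score that is perfectly calibrated by construction still yields different false-positive rates for groups with different base rates (all numbers invented):

    import numpy as np

    rng = np.random.default_rng(1)

    def false_positive_rate(a, b, n=200_000, cutoff=0.5):
        risk = rng.beta(a, b, n)            # each person's true reoffense risk
        reoffend = rng.random(n) < risk     # outcomes follow the true risk
        flagged = risk > cutoff             # score == true risk: perfectly calibrated
        return flagged[~reoffend].mean()    # non-reoffenders labeled "high risk"

    # Group B's base rate (mean risk) is higher than group A's.
    print("group A FPR:", false_positive_rate(2, 4))  # mean risk ~0.33
    print("group B FPR:", false_positive_rate(4, 4))  # mean risk ~0.50

No built-in bias anywhere, yet the group with the higher base rate ends up with more false positives among its non-reoffenders.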


Yes, this is a broad criticism of ML in public policy: that it's an amplifier for the status quo.


> amplifier for the status quo

While there are respectable ML folks making those criticisms, the commentary I've read seems more click-bait than science.

ML can be used, just like traditional statistics, to make causal inferences and predict the effect of an intervention. There's nothing about ML that reinforces the status quo more than traditional statistics does, let alone case studies (aka anecdotes) or "common sense."


While this is definitely a clickbaity topic, and algorithms are usually better than people, there are still real issues that come from seeking accuracy, particularly since real-world data is so often biased to begin with.

Google had an interesting take on how you could control for some dimensions of bias here: https://research.google.com/bigpicture/attacking-discriminat...
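The core trick in that Google demo (the "equality of opportunity" idea) is choosing a separate score cutoff per group so that true-positive rates match. A rough sketch with invented data:

    import numpy as np

    rng = np.random.default_rng(2)

    def cutoff_for_tpr(scores, qualified, target_tpr):
        # Lowest cutoff that still approves ~target_tpr of the qualified group.
        qualified_scores = np.sort(scores[qualified])
        return qualified_scores[int((1 - target_tpr) * len(qualified_scores))]

    # Invented data: group B's scores run lower overall.
    scores_a = rng.normal(0.6, 0.15, 10_000)
    scores_b = rng.normal(0.5, 0.15, 10_000)
    qualified_a = rng.random(10_000) < 0.5
    qualified_b = rng.random(10_000) < 0.5

    for name, s, q in [("A", scores_a, qualified_a), ("B", scores_b, qualified_b)]:
        print("group", name, "cutoff for 80% TPR:", round(cutoff_for_tpr(s, q, 0.8), 3))

The price is that the groups face different cutoffs, which is itself a policy choice; some trade-off of this kind is unavoidable.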


> There's nothing about ML that reinforces status quo more than traditional statistics

Yes, but that is not how ML is marketed. It's marketed as way better than traditional statistics, and soon to be even better than humans.

But the reality is that a trained system is only as good as the data used to train it.


> only as good

But ML can improve upon other methods of interpreting that (biased) data. Thus, in some ways better than traditional statistics and non-mathematical human intuition.


> Human judges give ugly people twice the sentences of attractive people. Judges have been shown to give significantly harsher sentences just before lunch, when they are hungry.

It would be nice if you could provide sources for these claims.



The lunch statistic probably came from this study: http://www.economist.com/node/18557594 http://www.pnas.org/content/108/17/6889

(To be fair, the study size was small).

There's a study on attractiveness and juror bias here (it's more complicated than just "ugly people get worse sentences" but some bias does show up for certain juror personality types): http://onlinelibrary.wiley.com/doi/10.1002/bsl.939/abstract


If there is ever a dress-for-success time, it is when you are in court.


Here's the paper for the just before lunch claim: http://www.pnas.org/content/108/17/6889


The ProPublica article you cite as an example of bias in criminal justice is plainly wrong. A follow-up study found that the system they examined was not biased at all.

Reference: Flores, Bechtel, and Lowenkamp, "False Positives, False Negatives, and False Analyses: A Rejoinder to 'Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks.'", Federal Probation Journal, September 2016. You can find the article here: http://www.uscourts.gov/statistics-reports/publications/fede...

In fact, the ProPublica analysis (written by journalists, not scientists) was so wrong that the authors of the study wrote:

"It is noteworthy that the ProPublica code of ethics advises investigative journalists that "when in doubt, ask" numerous times. We feel that Larson et al.'s (2016) omissions and mistakes could have been avoided had they just asked. Perhaps they might have even asked...a criminologist? We certainly respect the mission of ProPublica, which is to "practice and promote investigative journalism in the public interest." However, we also feel that the journalists at ProPublica strayed from their own code of ethics in that they did not present the facts accurately, their presentation of the existing literature was incomplete, and they failed to "ask." While we aren’t inferring that they had an agenda in writing their story, we believe that they are better equipped to report the research news, rather than attempt to make the research news."

Ouch...


Is there evidence that there are high-performing algorithms that are available to doctors that they aren't using?

The basic algorithms like RCRI and CHA2DS2-VASc are almost universally applied in the appropriate context (at my quaternary care teaching hospital, granted).

If there are numerous algorithms that we should be using but aren't, I'd argue that the bigger barrier is making them accessible and easy enough to use that they can be applied during a clinical encounter (among all the other things that need to be done).


Here's a simple algorithm that would help millions, yet it is not accessible to most patients:

Ordering an A1C, remembering to actually check the results, and telling the patient to stop eating sugar - well before they are symptomatic for diabetes.
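As a sketch of how simple that protocol is to encode (the record fields here are invented; the 5.7%/6.5% thresholds are the standard ADA cutoffs):

    from datetime import date, timedelta

    def a1c_follow_up(last_a1c_pct, last_test_date, today=None):
        today = today or date.today()
        if last_test_date is None or today - last_test_date > timedelta(days=365):
            return "order A1C"
        if last_a1c_pct >= 6.5:
            return "diabetic range: confirm and treat"
        if last_a1c_pct >= 5.7:
            return "prediabetic: counsel on diet, retest in 6 months"
        return "normal: routine screening"

    print(a1c_follow_up(6.0, date(2016, 9, 1), today=date(2017, 3, 28)))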


> Ordering an A1C, remembering to actually check the results, and telling the patient to stop eating sugar - well before they are symptomatic for diabetes.

Yeah, it's that last part ("telling patient to stop eating sugar" [and getting them to actually change their behavior]) that's the hard part.

This is the holy grail of medicine and has been for decades. We've done billions of dollars of research over the last century, and the one clear thing we've found is that we do not know of any cost-effective way to get patients to change their behavior at large[0].

Seriously, if you really have the answer to that question, you could be a multi-billionaire.

[0] That is, we can change behavior of some patients, but they're generally the patients who would have changed their behavior anyway. And we can change the behavior of some of the rest, but they're not cost-effective.


Taxes?

Like, what might be the huge negative cost of tobacco taxes?

(There's an argument to be made for freedom, but you are making an economic argument about costs, not a moral one)


Unfortunately, taxes don't always make significant reductions of usage for addictive substances and may shift commerce towards illegal markets.


If you knew how to do that, you could also solve the climate change / global warming problem, as well as a host of other big social issues at the same time. As it is, people simply follow the incentive gradient, and most of the time there's damn little you can do about it.


There is, of course, the (unintended?) side effect that if there were a cost-effective way of controlling people's behavior, then we would effectively be under the tyranny of whoever possessed those means.

Sometimes we just have to let other people live with the consequences of their actions; which is hard, because often we also have to live with those consequences.

Whatever we say about "other people", we should also remember that we are "other people" to somebody.


And so it turns out once again, in every field, people really are the problem ...


Right. Patient compliance is a hard problem. But doctor compliance with basic protocols is also very bad, and seems fixable with computers.


This would be great for diabetes patients. It might be a good early intervention measure for folks that are prone to be diabetic.

But does such a test actually provide benefit to the general public? Would an otherwise healthy 30-year-old with a bad diet fail this test - or would it only show that, despite his bad diet, his blood glucose levels have been pretty stable over the last 3 months? I'm thinking that in an ordinary person with a working pancreas, it wouldn't show the spikes of diabetes.

That said, I do think these sorts of things work, especially when paired with labor protection policies that allow folks to miss work to go to the doctor (or be sick). For example, once I was in the Norwegian health system, I got a letter encouraging me to see the doctor for my cancer screening (I'm female). Turns out, they want folks to get one every 3 years and keep a database to keep track of who has gotten it and who hasn't. I don't know how well this sort of thing would work in a fragmented system like the US has, though.


That is already done at a population level, and when I was in primary care I would get notifications about patients whose A1c was overdue.


I've had exactly one of these tests, and that was only because the insurance I had required all employees to get a battery of blood tests drawn (or pay more each month for insurance). They planned on requiring this yearly and eventually forcing folks to meet the metrics or pay more for insurance. I'm not sure how this panned out for them.

Anyhow, if it weren't for this, I would never have had such a test. I'm in my late 30s. My father was diabetic at my age, actually, and it runs in the family. I'm a prime candidate for preventive measures, but still nothing. Now, I understand that I go to the doctor rarely, but at any one of those visits I could have been asked about a physical and the like.


Some hospitals do it, some don't. My wild guess - at best 50% of patients get it. And maybe 25% of those get it interpreted as well as you would for your family.


I've seen reviews of clinical decision support systems, which advise doctors on diagnosis and treatment, and the machines did better.

There's also some research, in general, about the resistance of doctors to such systems. And yes, while some of the reasons are UI, some are motivated by doctors' preferences.

And even if the UI is a bit slow, we should at least have seen such systems become common in the more critical settings, or at least with vulnerable or complex patients.


Yes, I've tried to use clinical decision tools for complex cases. They aren't integrated into our EHR (which is the most common EHR), so inputting the data for a case takes 10-15 minutes. That's as much time as I get to see a patient at an outpatient visit.

I'm a doctor. I'd be delighted to have decision support for every problem. As it stands I have decision support for medication dosing and for appropriate use of imaging, both of which are very helpful.

Don't blame the customer for not buying if the product is not good.


There's Isabel Healthcare's CDSS; the interface seems rapid, and one of the reviewers ("sprecher") says that he can use it in a patient visit:

http://www.isabelhealthcare.com/customer-satisfaction/testim...

Could be just marketing, but maybe not.

There's a free test. If you do try it, please share your opinion of it; I'm curious.


> The problem is there has been enormous resistance to the use of algorithms in medicine

That's not 100% true for all disciplines. E.g., in radiology, there are some assisting technologies around for identifying tumors, etc. While many radiologists use them daily, the evidence isn't always that clear. Often they just lead to a higher false-positive rate.


I've spent a lot of time doing manual tracings of brain structures, and I tell you now that while automated methods are improving rapidly, and are just now approaching the average ability of a tracer, they do not rival experts yet. The most time-efficient method now is semi-automatic: begin with the automated methods, then make manual corrections.

Given all that, brain radiology is a lot easier than pathological radiology (e.g. tracing the extent of a lung tumor). There is a lot of research into automating this, but Dice overlaps are still poor.
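For anyone unfamiliar, the Dice overlap mentioned here is just a set-overlap score between two segmentations, e.g. the automated mask and the expert tracing; a quick sketch:

    import numpy as np

    def dice(seg_a, seg_b):
        # Dice overlap: 2|A n B| / (|A| + |B|); 1.0 means perfect agreement.
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((64, 64), bool);   auto[20:40, 20:40] = True    # automated mask
    manual = np.zeros((64, 64), bool); manual[25:45, 22:42] = True  # expert tracing
    print(round(dice(auto, manual), 3))  # ~0.68 for these toy masks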


Instead of reading the usual hacker news echo chamber, probably good to read what doctors thought about the article too.

https://www.reddit.com/r/medicine/comments/61sgfw/ai_versus_...


Thank you for posting this.

The article and some of the people quoted in it, like Hinton, seem to think MDs need so much help for diagnosis. Real life isn't House MD. The answer is very often obvious from the history/physical exam and the most basic labs.

Anyways, I don't like the fact that Hinton and others seem perfectly OK with not knowing how the machine is diagnosing. It's machine clinical gestalt.

I also think this diagnosis by machine would be very frustrating. Imagine, as a patient, asking the diagnostic robot: "why do you think this happened to me?" or "Why do you think the diagnosis is x?"

And then not having an answer other than "the imaging and tests are consistent with imaging and tests of previous patients with x disease" - this sounds like a bad answer. That wouldn't be good enough for me, but maybe it would be for plenty of other non-curious parties. Maybe there is this huge group of people who want healthcare from robots with robotic bedside manners. But I doubt it. Hinton is wrong, doctors are going to be augmented by helpful diagnostic applications. We will still have to learn to diagnose on our own but we will have help too. Maybe a robot to help triage cases into "serious/less serious" categories (and with working initial diagnosis) with good accuracy.


To be fair, when I ask my doctors why I'm having XYZ issues, they usually give me an unsatisfactory answer like "maybe your diet", "maybe you're stressed", "hard to say exactly", or some slightly more tactful variant of "it's all in your head". At least with a machine, I'd feel less looked-down-upon.


I suspect the reasoning from the algorithm would be a lot more specific than from a doctor given the same information. People don't have deep understanding of their reasoning process and (necessarily) make a lot of decisions based on a hunch. An algorithm would be able to tell you confidence ratings for multiple conditions and reasoning behind which treatment was prescribed or what additional tests were required.
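For illustration, that kind of output is just the probability vector any multi-class classifier already produces; the conditions and features below are invented:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    conditions = ["pneumonia", "bronchitis", "asthma"]
    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 4))      # e.g. temperature, resp. rate, WBC, O2 sat
    y = rng.integers(0, 3, size=300)   # toy labels for the three conditions

    model = LogisticRegression(max_iter=1000).fit(X, y)
    probs = model.predict_proba([[1.2, 0.4, -0.1, 0.9]])[0]
    for name, p in sorted(zip(conditions, probs), key=lambda t: -t[1]):
        print(f"{name}: {p:.0%}")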

I still feel that this is a tool that should be used by doctors (rather than replacing them), as the reports will likely contain more detail than a layman would understand.


There's a lot of interesting things going on at the intersection of medicine and AI. That said, it's important to understand some of the limitations:

1.) Medical data is often quite sparse and quite poor quality

You may only get a few years here and there for a patient, and a lot of the things mentioned in the article ("cough is raspy", "I have a feeling it might be pneumonia") aren't always in the medical record, and even if they are they aren't in a form that's easily accessible to a computer.

2.) Interaction matters

Seeking medical care is an extremely vulnerable state to be in. A good doctor isn't doing just diagnosis, but teasing out the right bits of information. It's unclear how a computer will handle "I feel I'm not getting the full story from this patient" situations. A good doctor (not all of them are good) will have the interpersonal skills to get the full story.

Finally, even if you solve diagnosis, then what? You have to take action. For a lot of expensive chronic conditions, the answer is already obvious and hardly even needs a diagnosis. If you're overweight, you probably already know you should eat less and maybe be more active. Many times even people with certifiable diabetes diagnoses do not change their lifestyle appropriately. How you handle putting the diagnosis into action is a tough problem, and it's not entirely clear how things like AI will fix it. Convincing people to change habits is damn hard.

Disclosure: I work as a data scientist at a medical AI startup (www.lumiata.com)


> Interaction matters

Really just following your lead here, but I guess we should then ask: do patients respond better to a doctor they have a good relationship with?

I mean, there are some patients who might say, "Well, I made a deal with my doctor that I'd reduce my sugar intake and try to keep my blood glucose levels down below X, on average" (or whatever), and maybe they're motivated by a desire to please their doctor or not disappoint their doctor.

But could that work as well for a robot doctor?

What would happen if there were a robot doctor that patients liked more than any of their human doctors?


Tech communities hold a common opinion regarding ML and medicine: "bring machines to the clinics! Research shows them to be superior". Everyone always takes great care in forgetting that the ML research for medicine and diagnosis assesses those systems in a well-defined setting. I.e., you have to feed the system the appropriate info for it to work. From personal experience, I can say that obtaining this info can be exceptionally difficult for a multitude of reasons, and trying to fully automate that is currently not economically doable. The medical system currently does not have the information-retrieval capability to make ML really useful.

I am sorry to say that anyone thinking that we could "ML all the things" in a hospital or office practice on this here day clearly has no idea what a mess our hospitals really are. Some things are amenable to ML though, and most of us welcome any help we can get. Even from machines.


Doctors are terrible. The sooner they're out of the diagnostic loop the better. Every time I go to see one I get a different diagnosis or something generic. It's basically a crapshoot based on the experiences of the doctor.

I'd much rather have a machine do all the pattern matching without any human bias.


However, I do want that human there to compare what the machine thinks is happening with reality. Human recognition rates in the blind might be a shot in the dark, but a human's rate of vetoing a bad decision seems to be better. That's also most of what our evolution optimized for (fatal mistakes are fatal; missed opportunities are often not as bad).


The problem with human pattern recognition is not the accuracy itself. A big problem is that (1) human experts are not consistent with one another and, worse, (2) are not even consistent when they have to evaluate the same case repeatedly. One would expect an AI's errors to be systematic and not to change between predictions. In the short term, an AI (as in Augmented Intelligence) approach would be a good intermediate. Another advantage of AI diagnostics could be its availability in smartphones; especially with skin cancer, that could be useful to quickly check marks and spots. Doctors already advise people to self-check for unusual marks, etc., and usually a very big problem is that people wait too long before seeing a doctor.


The problem with that last bit is that there's also the issue of convincing doctors that you suspect something's wrong. Sometimes, doctors don't take patients seriously when they're concerned about something, or don't make more than a cursory glance at the situation before figuring that the patient is overreacting or something. I've been dealing with some breath/lung related problems for a few months now, and I still have to tell my doctors that yes, I have issues breathing sometimes, and yes there's something going on.

Doctor-patient trust is a big issue, especially when the former doesn't take the latter seriously.



I care a lot about outcomes at the individual level. I worry that with these systems, health will improve in general because there will be a level of standardization, but specific cases will get worse.

An analogy for me is furniture, but you could also substitute financial advice. In furniture, we can get a 'quality' table from Ikea for a couple hundred bucks, and we can make tens of thousands of them. But it's much, much harder to get a reasonably priced custom table. 200 years ago when a lot more people spent time building wooden ships, it was much easier to get a carpenter to build a table/furniture at a relatively affordable price point. Now that comparatively orders of magnitude fewer people are in the practice, the price for that furniture, or custom cabinets, etc., at the same quality level has skyrocketed.

In financial advice, the rise of algorithms doing things like asset allocation has caused an entire generation of financial planners to never be hired, and therefore never trained. The old financial planners who do much more than asset allocation are becoming rarer and rarer, and no one is replacing them. In 20 years, I firmly believe the specifics of my tax situation will be more difficult to solve than they are today, because I will have a harder time finding this kind of specialized expertise.

I see the same in medicine and every other field with low-end machines doing the low-end work. We're not training the future high-level practitioners, because they need those 10,000 hours or whatever to become super experts. How long it will take before the machine catches up with the best humans is really the question here.


> it was much easier to get a carpenter to build a table/furniture at a relatively affordable price

What do you call an affordable price in relation to mean, or median, purchasing power?

A quick web search finds a lot of one-off dining tables in the USD700 to USD3k range in the US. Made-to-measure dining tables in the UK can be had for less than GBP800.

Median household income in the UK 2014 was about GBP24k gross, roughly GBP20k net (https://www.google.no/url?sa=t&rct=j&q=&esrc=s&source=web&cd...).

So a GBP800 table would be about two weeks of after-tax income (GBP20k / 52 is roughly GBP385 per week, and 800 / 385 is about 2.1 weeks). Was it really so much less 200 years ago?

Perhaps someone can find the data for 200 years ago and do a more sophisticated analysis.


> 200 years ago when a lot more people spent time building wooden ships, it was much easier to get a carpenter to build a table/furniture at a relatively affordable price point.

I'm going to challenge that claim. Keep in mind that 200 years ago was 1817, well before the rise of the middle class. I bet that far fewer people could afford the luxury of getting a table even approaching the quality of an Ikea table, let alone what a few hundred dollars extra will get you for a custom table today.


The Industrial Revolution was sometime from 1760 to 1820, I thought?


Yes. I realized the same thing a few minutes after posting, and edited that part out. The middle class part is what matters most: I think more people now have significantly more purchasing power than 200 years ago.


That's an interesting argument and it certainly seems right, but I can't say if that's because there's truth to it or because I am as "algorithm averse" as the next guy.

What's nice is it seems to offer a falsifiable prediction. Namely, that there are "super-experts" out there who consistently beat machine learning algorithms. Even if they aren't numerous enough to bring up the mean. Do the current studies show that?


Answering your question is complicated because of the existence of Centaurs [1]. I will say that the fact that entities like Renaissance [2] exist is proof that there are still some humans who are better than machine learning algorithms. Now, whether Renaissance employs Centaurs is hard to tell. My claim, though, is that machines reduce the number of experts because they reduce the population looking at a problem, and my suspicion is that experts are uniformly distributed: in a population of 100 there might be 1 expert; in a population of 10,000, 100 experts and 1 super expert; etc.

I chose the financial markets because you could say they are the perfect competitive space for this kind of evaluation, though I wouldn't be surprised if Centaurs or humans who are better than machine learning algorithms existed in other areas as well. Still, the point stands that for people in the 91st to 98th percentile, a lot of value will probably be lost when humans start leaving these practices in droves. Another acceleration for the 1%.

1. https://en.wikipedia.org/wiki/Advanced_Chess 2. https://en.wikipedia.org/wiki/Renaissance_Technologies


There was an era when human/computer combinations were better at chess than either alone, but that era has passed, and nowadays pure computer chess programs do better than human/computer teams. We should probably expect to see the same thing in other fields: as computers get better, there's a period where centaurs work best, before an era where pure AI dominates.


Do you have a source/ranking I can take a look at here? I hadn't heard of that, and it totally changes the game (no pun intended).


Here's a discussion. The amount humans add does depend on how fast the chess game goes; it's in 90-minute games that humans have become superfluous. For 1-day-per-move games, humans can still provide useful input, last I heard.

http://marginalrevolution.com/marginalrevolution/2013/11/wha...


My uncle worked in this field. He told me that when given suggestions from AI, residents would make more incorrect diagnoses. I believe the theory was that most of medicine is probability-based, but sometimes you can see someone and get a feeling that they may have a lower-probability illness. However, when receiving suggestions from the AI, residents were second-guessing themselves and performing worse.

I believe that in cancer research, ML will be crucial for finding similarities among mutations of cancerous cells. I am really hoping cancer treatment can benefit from this.


My not-yet-expert* response whenever my classmates joke about me building some AI robot-doc to replace them:

"AI will become just another tool in a physician's toolkit."

AI will not replace the role of a doctor. However, the role of a doctor might shift to overseeing the AI in very specific scenarios. Furthermore, I'll add that there's a lot more to medicine than making diagnoses.

As it stands right now, we have no shortage of promising applications of AI in healthcare. The bigger issue is getting these applications into the clinic. Very few of these boundless applications are actually implemented in a real-life setting where they can affect patient outcomes. I can't tell you how many studies are published proclaiming, "HEY AI CAN DO X BETTER THAN DOCS." Cool study, bro. Now can you actually get that into the clinic and start saving lives?

Suchi Saria at JHU is one of the few people I know who has bridged that gap. Other resources for those who are interested:

- http://mucmd.org/
- Baxt, William G. "Application of artificial neural networks to clinical medicine." The Lancet 346.8983 (1995): 1135-1138.

*Programmer and MD student about to start my PhD in computer science, specifically machine learning + healthcare.


20 years ago, as a student, I took courses in Expert Systems (rule-based) and "Bioinformatics" (neural networks and backpropagation, now called "deep learning"), and back then it seemed like rule-based Expert Systems - with or without fuzzy logic - would be the future, while neural networks sat in the same "box" as other techniques like genetic algorithms (partially random guesses driven by an evaluation function, with greedy/non-greedy "evolution", etc.). It seemed imminent that we'd be able to encode the knowledge of an expert, e.g. a medical doctor, so the artificial expert system could make decisions as good as the human's. Years later, it turns out human experts in some areas were not so easy to replace - not because the rules of medical diagnosis couldn't be mimicked, but because of the inability to filter "bullshit input" (creatively hypochondriac people, multiple patterns recognized at once, etc.).
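For those who never saw one, a rule-based expert system of that era boiled down to something like this (rules invented for illustration), which is exactly why it was "clean and understandable" but also brittle against messy input:

    def diagnose(symptoms):
        # Each rule: if all required symptoms are present, suggest the diagnosis.
        rules = [
            ({"fever", "cough", "chest pain"}, "possible pneumonia"),
            ({"fever", "stiff neck"}, "possible meningitis"),
            ({"thirst", "frequent urination"}, "possible diabetes"),
        ]
        return [dx for required, dx in rules if required <= symptoms] or ["unknown"]

    print(diagnose({"fever", "cough", "chest pain", "fatigue"}))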

The "deep learning" (backpropagation stuff) reminds me the euphoria of the expert systems in the nineties, but with palpable results, i.e. despite requiring crazy amounts of computer power, you can see some difficult problems solved. What scares me is that instead of the "clean and understandable" modeling of the rule-based expert systems, the "deep learning" (neural networks backpropagation stuff) is hardly understandable, even by experts (learn how to train a model is one thing, and knowing why it works, is another -e.g. you can correlate success for N training cases, guess that you're covering the model, and then, discovering for the N+1 that there was a correlation/causality issue -e.g. discovering you trained for learning blue things instead of square things, just because the square things were blue, and when a red square appears, it does not get identified-).


These articles are always way overhyped. A lot more development is needed before this tech can even be safely used as a tool alongside radiologists. The dermatology example is an excellent one. It would be great to have a simple image classifier that can help with diagnosis before the clinical appointment.

Software on its own can't even tackle a 'simple' task like reliable 6-lead EKG (electrocardiogram) analysis yet. A chest x-ray has a lot more variables. Plus, clinical variables such as patient history can make a big difference in the diagnosis for a similar image.


But there have been successful commercial applications of ML/AI working alongside radiologists safely as a tool for at least a couple decades.

With all the current interest in "AI" it's easy to forget that this is an old problem area, and current techniques don't fundamentally change anything. In most applications the biggest issues remain access to and quality of the data. For the right application though, you can do useful things.


Additionally, anatomy can vary greatly between people. It's pretty hard for computers to determine what is reasonable and what isn't.


As a layman, it seems to me that the biggest advantage that humans have over machine learning is flexibility.

While machine learning performance frequently rivals or exceeds humans at many individual tasks once sufficiently constructed and trained, only humans excel at dynamically choosing which tasks to pursue, switching levels of analysis, and deciding when to break the rules for the win (losing at Go? Unplug the computer).

To speculate heavily, animal cognition may be composed of just such a multitude of specialized trained modules, akin to today's machine learning algos: object recognition, emotion recognition, language recognition, the typical script/structure of a given scenario, etc. But above that will be classifiers that interpret internal and external environmental signals to choose which of those specialized modules to engage and suppress. In lower animals lacking a heavily recurrent prefrontal cortex, the higher-order modules are probably directed by midbrain structures to engage the basic fight/flight/fuck behaviors needed for survival (e.g. the pattern recognition module sees a snake, so the freeze or run modules are engaged). In animals with a prefrontal cortex, goal- and context-driven suppression of prepotent responses becomes possible.

Anyway, it seems to me that for machine learning to become a general intelligence, there will need to be hierarchies of specialized machine learning classifiers: some specialized in sensory classification, but others that are meta, classifying those classifiers into scripts, scenarios, etc.
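A toy sketch of that hierarchy: a top-level "gate" classifier routes each input to one of several specialized modules (everything below is invented for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    X = rng.normal(size=(1000, 5))
    context = (X[:, 0] > 0).astype(int)  # which "scenario" each input belongs to
    # Which feature matters depends on the scenario:
    y = np.where(context == 1, X[:, 1] > 0, X[:, 2] > 0).astype(int)

    gate = LogisticRegression().fit(X, context)      # meta-classifier: picks a module
    modules = [LogisticRegression().fit(X[context == c], y[context == c])
               for c in (0, 1)]                      # specialized classifiers

    x_new = rng.normal(size=(1, 5))
    chosen = int(gate.predict(x_new)[0])
    print("module:", chosen, "prediction:", modules[chosen].predict(x_new)[0])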


It's funny to see how the HN crowd reacts to this. If AI were a magic bullet, don't you think doctors would already have been replaced by machines? Also, most people don't think rationally when it comes to diseases... You might be able to self-diagnose using Google, but that's not something most people can do without ending up thinking they have cancer.


There's a huge gap between demonstrating an effective algorithm and bringing an appliance to market which uses that algorithm. The vast majority of development work on any product goes, not into the basic science, but into making it simple, user-friendly, reliable and safe. This goes tenfold when you're talking about anything related to health or safety.


I wonder how long it will take machines to get good at the 2nd part mentioned in the article, the 20-questions part where they try to figure out causes using only the conversation. For neurological disorders, you also need an agent that can actually do a manipulative exam to figure out what is wrong. Machines are still far from having the full spectrum of agency needed to perform exams that find the underlying source of a problem. Of course, it is extremely rare to find a human being who can do those as well...


https://www.buoyhealth.com/visit/

There's some pretty good, interesting work in this area.


Good luck to AI with procedures that require a doctor to search for, find, and diagnose a small (0.4 cm) breast tumor in an ultrasound exam that could actually save the patient's life. AI can be applied (it actually is) in computed tomography scans, but in exams like X-ray, ultrasound, etc. you cannot make it work (at least for now). There is also the legal issue: who is responsible for the diagnosis? The computer? The institution? A doctor? AI in medicine will help us a lot, but it's too early!


While I guess it is probably not fully ready to be made into a product yet, there is actually a lot of research currently going on using deep learning in, e.g., mammography that shows a lot of promise.

Probably a doctor will still have to check the results and sign off on them.


I think you will first see AI assisting doctors, so the doctors are obviously still responsible. It's not like AI will take over all of medicine immediately, but AI is generally much better than doctors at interpreting a lot of statistical data.


> but in exams like X-ray, ultrasound, etc. you cannot make it work (at least for now)

Why not? An image is an image.


Believe me, it's not the same, especially in ultrasound, where everything is unclear due to external factors and body types. I do this for a living (radiologist), plus I love programming and would like to see those two come together, but the reality is different. AI could assist doctors and suggest possible findings.


Ultrasound is a bit different because it's "active looking" - if after a cursory scan the technician isn't actively directing the device at (and around!) the suspicious spot, you won't get the images required to be sure about what's there.


I find it odd that the article seems to downplay the potential benefits machine learning provides by emphasizing a lack of explanatory power from classification algorithms.

At the very least it would seem that a machine-based classifier provides human physicians and researchers with more examples to base their inquiries on (possibly even illuminating some features they may have previously missed as important portions of theoretical models).


But more opinions (from AIs) that differ from the advice of the 'responsible health professional' will surely add confusion to the doctor's life, and in the US medical space, legal liability. In general, doctors are understandably reluctant to invite more oversight, especially if it doesn't clearly add value and if it's not independently and certifiably trustworthy.

Past AI apps, like 1980s expert-systems, generally relied on brittle binary criteria that were hard to match with certainty. Too often they produced results that were either obvious or implausible, but at least they could explain themselves. They were also poor at matching against fuzzy clues from patients (and doctors) who are notoriously inconsistent and nonquantitative at describing symptoms. No doubt a greater emphasis on quantitation lies at the heart of today's AI systems. But if the classifications and recommendations of tomorrow's AIs lack explicability, there's no way in hell they'll be trusted or given authority by risk-averse practitioners.

A middle ground is needed, where the 'advice' from the AI is grounded in clear, statistically significant bases and adds value to the process rather than competing with humans. In spaces that are more amenable to quantitative reasoning, like suggesting cancer therapies, I think AI will be adopted and appreciated first. Primary care medicine will probably see it last, though it is probably already employed invisibly behind the scenes by insurers for validation and quality control (like prescription drug contraindication checks).


I'm pretty confident the role of a doctor will be relegated to that of a calculator at some point. Nobody will do it anymore; rather, the focus will be on medical research and providing data to the AI, with the human-care portion done by less skilled/cheaper workers. IMO it is abuse to have someone work into their 30s to eventually get paid after all the sacrifice and long hours.


The way many countries treat their doctors is indeed pretty abusive, but that's orthogonal to the fact that effective medical care right now requires a shit ton of training. This is not artificial complexity. I hope one day soon machines will surpass MDs and we'll all have access to an autodoc, but until then - and since you brought abuse up - I think it's worth considering what conditions we create for medical care people to work in.


The start-up I work for, GYANT[0], is actually working on this problem. The author of the article, Siddhartha Mukherjee, also wrote The Emperor of All Maladies: A Biography of Cancer, which was awarded the 2011 Pulitzer Prize for General Non-Fiction. The book is very good, and so is this article.

0: http://gyant.com/


Reading this article made me think about an episode of Scrubs I was watching the other day, where Dr. Cox made a decision that led to the deaths of 3 of his patients, and afterwards he was afraid to make decisions for the rest of the episode. I thought to myself that it must be difficult to make decisions that could affect people's lives - would having a machine help make those decisions easier and more accurate?

I mean, sure, you have doctors with 20 years of experience who still get the diagnosis wrong, even if it's close; but machines fed large amounts of data still come up short too. I think saying machines will replace doctors is the wrong approach. In the article, one of the doctors interviewed said, "If it helps me make decisions with greater accuracy, I'd welcome it." That's why we need more tools that enable doctors to make more accurate decisions than going on an experienced hunch.

I think it's great this subject is being explored; it will help more people, and help doctors do their jobs even better.


Our family doctor jokes that lately people come in with: "Oh, I googled what's wrong with me, I'm here just for a second opinion." :)


My GP encourages me to go online and research things for myself. When he first diagnosed me as diabetic, one of the first things he said was "go online and read up on diabetes, and then we'll talk it over in more detail when you come back in a month".

There's also a phenomenon where patients can actually become more knowledgeable than their physicians (especially a GP who is a generalist by nature) regarding their specific condition(s). It might sound counter-intuitive, but think about it - a GP / Family Doctor has to know something about pretty much every condition under the sun. He / she doesn't have time to spend obsessively focusing on just, say, diabetes. Me, on the other hand, the only condition I care about is diabetes, so I can spend all my free time on Pubmed reading all the latest papers on the subject, etc.

And as it happens, recently my diabetes took a turn for the worse. I was originally diagnosed as a type 2, and was being treated with metformin and my blood sugar had been stable for 5 years or so, but it took a big jump sometime in the past few months. I had read up on "type 1.5" diabetes / LADA, and had an inkling that might be my situation. So I read more on that before going to see my doctor, and when I got there, I was actually the one telling him which tests we should run to confirm/deny that scenario. (Note: of course he looked the stuff up to confirm it instead of just taking my word, but he was nodding and going "yep, you're right" as he was doing so).

No AI involved, but I do believe the widespread availability of medical research / information at the patient level is a valuable thing. Yeah, some people probably annoy the shit out of their doctors with uninformed self-diagnosis, but I don't think that offsets the benefit of this information being available.


I find it's a good sign that he jokes about it. I've encountered many doctors who were bitter and insecure about patients who researched their symptoms online, and reacted with a knee jerk, passive aggressive "you're a hypochondriac" prejudice that pervaded the diagnostic process.


It's definitely a good sign that he jokes about it. On the other hand, I would prefer a doctor who acts as a consultant, there to give proper advice.

When I go to the doctor, I have Googled the symptoms I'm experiencing before I go. Once I've been given a diagnosis by the doctor, I Google the diagnosis to verify that I have the symptoms one would expect from such a diagnosis.

Having had and heard too many experiences where doctors got it wrong, to hugely detrimental effect, I want to double-check what I've been told and not just blindly accept what one doctor has judged in 30 seconds based on an initial perception of me, without really knowing anything about me.

I think if what I thought was a possibility correlates with what the doctor has diagnosed and with the expected symptoms of that diagnosis, at least I can be reasonably confident that I can trust the diagnosis, prognosis, and course of action provided.

If I am experiencing symptoms vastly different from what I would expect from the diagnosis, I want to be asking questions as to how and why the doctor feels they are correct and I am incorrect. I realize I'm not a doctor, but in this day and age, when we all have the world's information at our fingertips, blindly believing anyone whose advice could have catastrophic consequences for our health and lifespan is shortsighted at best and plain idiocy at worst.

That doesn't make someone a hypochondriac, that makes someone cautious about misdiagnosis. Unfortunately, there are many hypochondriacs out there.


I have the exact same views as you on this. I've also experienced many, many years of consistent medical malpractice, to deleterious, life-ruining effects, so I had no choice but to take my health into my own hands and make rigorous determinations as to whether the doctors I would see were right or wrong.

While most people may start off as hypochondriacs (who hasn't been spooked by the prospect of cancer after reading about it?), the more research you do, the more accurate you become, and in recent years I've finally managed to become effective at discerning good doctors from bad ones.


I never tended towards hypochondria. I do however have trust issues given the malpractice that I've witnessed over the years. I thank my lucky stars that I've never been directly on the receiving end of it. I'm firmly of the belief that we should treat our medical professionals as consultants and advisors, but we should be fully cognizant of our own health requirements.


I encountered two of those. Turns out it WAS cancer (stage IIIc by the time of diagnosis).

Bring in the machines I say. (The astronomical savings to the taxpayer from earlier diagnoses is the cherry on top.)


Woah, I'm sorry for what happened to you. I got screwed over pretty badly because of the hypochondria confirmation bias as well.

This is one area where I consider the debate about employment vs automation settled. Health comes first. Bring on the machines indeed.


The machines are already doing everything they are safely capable of. I can understand you being emotional about it, but you are overestimating the current value of machines in medicine by a huge margin. The medical system will have to change a lot before the clinical environment is machine-friendly enough. It will happen. But it is not doable as of today.


> I've encountered many doctors who were bitter and insecure about patients who researched their symptoms online, and reacted with a knee jerk, passive aggressive "you're a hypochondriac" prejudice that pervaded the diagnostic process.

I am very happy that my doctor is the opposite of this. If I had a doctor like that, I'd ditch him/her and find somebody new.


I took my desire to go carb-free to my internal med doctor and cardiologist, and backed it up with some limited research data. I was surprised they both said to go for it and come back for blood work. My cardiologist is probably 65 and my internal med doctor is probably 40 to 45.

I had metabolic syndrome (prediabetes) and don't now. I still have slightly elevated BP and weigh too much, but I lost 50 lbs and my other blood work is amazingly good. I was expecting a battle with both of them.

My cardiologist took me off of several meds as well.


Don't be ridiculous.

Machines can't replace doctors. You can't sue a machine for a poor medical outcome that occurs by chance.


Amusing trivia from the article: Geoffrey Hinton is the great-great-grandson of George Boole.



