Now, for people eagerly awaiting the MD's downfall, I think you are getting ahead of things a bit. We all tend to believe in what we do, and I agree that expert systems will replace doctors' judgement in well-defined, selected applications in the decade to come. But thinking that the whole profession will be hit as hard as factory workers, with lower wages and supervision-only roles, is not realistic. What will be lacking is the automation of data collection: you seem to badly underestimate the technical, legal, and ethical difficulties of getting the feedback needed to make ML systems effective. I firmly believe in reinforcement learning, but as long as the feedback loop is insufficient, doctors will prevail, highly-paid jerks or not.
I am an anesthesiologist myself, a profession most people (myself included) think of as a perfect use case for these technologies, and people wonder why we haven't been replaced already. The reality is that the job is currently far beyond what an isolated system could do. We already have trouble making cars stay in the right lane in non-standard settings. I hope people realize that in each and every medical field, the number and complexity of factors to control is far greater than staying in the right lane.
The people who run the medical system have no sense of technology. They cannot even envision the requirements for machines to become effective in medicine. That is why we are seeing quite a lot of effective isolated systems pop up, but we won't be seeing fully integrated, doctor-replacement systems for a long time. That will require a new generation of clinical practitioners who understand how to make the field truly accessible to machines.
Recently, my dad was sick with a pretty bad cough. Like, so bad that he couldn't speak without coughing. He fainted twice from minute-long coughing fits, one of those times hitting his head on the stove on the way down, leaving a deep cut and blood everywhere.
He went to at least three different doctors. He got a scan of his chest. Everything looked clear, and all of the doctors were stumped. Things were pretty bad.
I mentioned this to a UCSF resident friend, and her immediate response was "Oh, is he on <some blood pressure medication I forget the name of>?" I was like, uh, let me see. Called my mom, she checked, and, lo and behold, he was on it. So his doctors took him off it and within a week he was better.
This coughing wasn't some obscure side effect of the medication she knew through sheer brilliance: it's a side effect that's been widely known since the 1970's. Hell, it was on the drug's Wikipedia page.
So there are a couple of morals you could take from this. One would be: wow, doctors are smart to be able to diagnose an issue based on a single symptom and some reasonable assumptions about a patient's background! The other is that the median doctor is pretty worthless, that spending tens of thousands of dollars gives you no guarantee you'll see someone competent, and that a medical system that relies on you grabbing drinks with a UCSF resident to get good results is fundamentally broken.
Machine learning and expert systems don't have to be as awesome as the best doctors to be valuable. They don't even need to be better than competent doctors. They just need to provide a bare level of competence to deliver a huge amount of value.
ALWAYS READ EVERYTHING YOU CAN ABOUT DRUGS YOU ARE PRESCRIBED!
Sorry for the all caps, but it is super important. Not that your dad is in the wrong; lots of people have justified (to a degree) trust in their doctors. However, doctors are people, and by that alone they aren't perfect.
A few years ago my doctor prescribed me an antibiotic for an ongoing illness I had. I read the entire pamphlet for it and did some reading online about it, all before taking it. Turns out it can cause seizures if it interacts with propylene glycol, one of the main ingredients in e-cig juice, which I use daily. I had told my doctor I use an e-cig.
Really I cannot stress how important it is to be knowledgeable about the drugs you are taking.
I'm not trying to undermine your point entirely, but there is a flip side.
I can't tell you how many times I have seen a patient start a medication, then come back to the office within 48 hours because they coincidentally have every side effect that is listed in the pharmacy's information sheet or that they looked up online. The vast majority of these side effects are benign, present with next to no pertinent physical exam findings, and can't be definitively tied to the new med (like upset stomach, fatigue, headache, etc.).
Then they will start listing that medication as one of their "allergies", and if the nurse/doctor documenting doesn't dutifully probe what type of "allergic reaction" they had, they may end up not being prescribed that med in the future when it really is the drug of choice. A little nausea is a small price to pay for a drug that clears a potentially life-threatening infection.
Also, I'm skeptical about the seizure risk. The thing about side effects is that they are supposed to be stratified according to risk. Doctors are typically aware of these risks, but patients aren't. So if your drug is listed as causing "headache, nausea, and seizures", there may have only been one patient out of millions who had a seizure while 50% experienced headache, yet the handout probably won't tell you that.
But even if it is a notable risk, I would be surprised if the propylene glycol you inhale from an e-cig could accumulate to a high enough level in the bloodstream to cause drug interactions, although I admit adequate research on the subject is lacking.
My advice would be trust your doctor first. If you don't trust your doctor, start seeing a doctor that you do trust. Then if you have a significant adverse reaction to a medication, talk to your doctor about it. Quite often they know something that you are not going to find by spending a few minutes on the internet.
As a side note, a good history includes asking about many habits. A lot of healthcare providers are guilty of simply asking "Do you smoke, drink, or use drugs?", but ideally the smoking aspect should be phrased as "Do you use any tobacco or nicotine products?". Patients usually won't read your mind and volunteer that kind of information. They will tend to give yes/no answers, so direct and specific questions are important.
Odd; here in Norway that's exactly the kind of information I expect to see on the leaflet inside the packet, not only for prescription drugs but also for over-the-counter painkillers like paracetamol. Roughly translated from the Norwegian, it says:
Rare side effects (more than one in ten thousand but fewer than one in one thousand patients) include: hypersensitivity, allergic skin reaction/rash, reduced white blood cell count, anaemia, disturbed liver function. Very rare side effects include serious skin reactions. Liver function can be affected by combining paracetamol and alcohol abuse.
That seems like a major problem. Is there any reason that more detailed information can't be included? Mathematical literacy may be a problem, but that doesn't mean that there aren't millions upon millions of mathematically and scientifically literate consumers who could use this information effectively.
I'm not trying to justify any of this, but that's how it is.
Not sure if it will be helpful in the future, but I can tell you that descriptors used with side effects follow a standard convention:
very common: > 10%
common: 1% - 10%
uncommon: 0.1% - 1%
rare: 0.01% - 0.1%
very rare: < 0.01%
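As a minimal sketch, the convention above maps to a simple lookup. The thresholds follow the standard CIOMS bands (which also define a "common" band at 1%-10%); the function name is just illustrative:

```python
def frequency_descriptor(rate):
    """Map an incidence rate (fraction of patients affected) to the
    standard frequency descriptor used on drug leaflets."""
    if rate > 0.10:
        return "very common"
    elif rate > 0.01:
        return "common"
    elif rate > 0.001:
        return "uncommon"
    elif rate > 0.0001:
        return "rare"
    else:
        return "very rare"

# One seizure in 2,000,000 patients vs. a headache in half of them:
print(frequency_descriptor(1 / 2_000_000))  # very rare
print(frequency_descriptor(0.5))            # very common
```

Both side effects might appear in the same flat list on a handout, even though they sit four bands apart, which is exactly the point made upthread.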
Once you view your GP as a mechanic, you can make much better decisions. Sure, their "cars" are more complex, but the role they play is similar. If you want your body to function well, you are responsible for it, not the doctors. They just help you out.
The system is built on rent-seeking behavior: if I don't treat it immediately, the patient will come back again and I can charge him/her a subscription.
Health care in America is very inefficient, mostly because of insurance lobbying and a government that can't make firm long-term decisions.
I would love to see the most common problems become self-diagnosable with kits and AI.
ACE-inhibitors are well-known to cause a cough but not like the one you're describing.
If I were to guess, I'd say he had a respiratory tract infection that magically disappeared (as they do) not long after he stopped the medication. This is a common type of scenario where laypeople confuse correlation and causation, and it is one of the reasons you need doctors to help you.
Even if it was the medication, like I said, this sounds like a somewhat unusual case.
You'll probably spend the rest of your life thinking the doctors didn't know what they were talking about, but in my experience with doctors (I have a lot of that since I'm a doctor myself), even the "median" ones are far better at diagnosing things than lay people.
For any given complaint, most will just tell you that your test results are in the normal range (don't get me started on this "normal range") and tell you there's nothing they can do.
I live in the bay area and have had a few GPs over the years, some from well known institutions.
Doctors seem to put in the minimal amount of effort needed to get you out of their office in the allotted 20 minutes so that they can move on to the next patient.
I have multiple anecdotes similar to OP's - times where if it weren't for my insisting or my own inkling to visit a specialist I just would never have been helped.
Are there good GPs? I'm sure there are. But casually dismissing OP's statement is a bit ironic, since casual dismissal is one of the biggest problems in my own experience with doctors.
I responded to OP's statement. Dismissing is quite different.
The doctor, after number 3 on my list and less than 15 minutes into the appointment, said "Are you done?" After item 4, a couple of minutes later, he asked "Is that it?" in a condescending manner.
Needless to say, I was pissed.
But the thing is, it's not the doctor's fault. He's working in a system that values throughput over all else... and this is Kaiser, a vertically integrated provider where you would think that would be less of an issue. So he was late to my appointment and saw rushing my long appointment as a way to get back on schedule and ensure he saw the 40 patients he had to see that day.
The whole system is broken.
This is not meant to sound harsh; all I am saying is that you can't buy from a business a product they aren't in the business of selling.
Comparing this business model to McDonalds is a bit ridiculous. Maybe compare to a lawyer booking double time, or thousands of other professionals. But don't make the unnecessary jump to a fast food chain.
Are you in the US? Is a double consult something insurance/government pays for twice for you? Or are you talking about a different billing code? Or self-pay?
Doctors still often go over time.
So given that, a patient is likely getting the most expensive product a given doctor will deliver already. To get a better product simple options are specialists (much higher reimbursements, selection bias for more complex cases) and specialists at major teaching hospitals (even more of the above).
There are effective ways for a patient to hack the system by doing a lot of the work themselves (e.g., by maintaining their own clinical summary), but they are beyond what most patients will do.
It's a lot like YC office hours actually. Though doctors have less time than YC partners. And doctors get paid per appointment.
In general, it's not useful to think of doctors as engineers. The work line doctors do is much more like tech support: lots of similar, highly repeatable cases with a well-understood script. The ones who are more like engineers are doing research in research hospitals and writing that script. They still see some patients as part of their work, but appointment reimbursements are not a profit center for them; they are part of their research.
Then again, my previous GP said he had 7 minutes per consultation.
(FWIW, I’m in the Netherlands).
We aren't comparing doctors to "lay people" - we're comparing them to machine learning algorithms. I'd prefer the machine to many of the arrogant, but mediocre, doctors I've seen in my life.
Edit: It seems "UCSF resident friend" is American for doctor. I suppose my statement was actually out of context then.
Separate from that, I've had similar experiences. The median doctor I've interacted with isn't able to put in effort to really understand an individual case, and it's very easy to get overlooked or ignored. I've run into a few exceptions but that is the general rule. Source: wife had pregnancy with complications, young children.
I've had the dubious honor of spending quality time in a few hospitals with sick relatives in the last year. The nurses rock, and PAs and NPs seem to do what residents used to do.
The residents in particular seem to be clueless. With electronic records, they can't look at the chart as easily and aren't situationally aware.
It's a common fallacy to want an older, experienced physician vs. a resident or fellow. There are exceptions like specialized surgical procedures. Generally speaking though, when physicians leave the academic environment they rarely get better. Residents and fellows (and physicians who stay in academia) spend a huge amount of time on education - it's built into every day they're at work. Things like case conferences, morning reports, and Morbidity and Mortality reviews.
When you enter private practice it is completely up to you to stay educated and current. Sure, you may attend SGIM once a year and read monthly journals, but that is nothing compared to the structured learning environment they left behind.
I will take a good resident at a county hospital over a private hospitalist any day. Even at exceptional tier-one hospitals, the level of dumb shit that happens with lazy private physicians can be jaw-dropping.
The private hospitalist is going to order some labs and consults, round on you once a day (maybe) and that's about it.
A good resident will discuss your case at length with their interns, co-residents, fellows, and attendings.
At this point, in a lot of schools, PAs and MDs get about the same amount of in-class training (especially when you consider that PA prerequisites are often stricter than MD prerequisites in terms of prior coursework and experience). A PA with 2 years of experience has about the same experience as an MD just out of school. The discrepancy with specialists (e.g., optometrists, dentists, psychologists, and so forth) is even greater.
I don't think MDs are incompetent, but I think our healthcare model is basically wrong, in that people just aren't able to juggle everything in their heads, regardless of how smart they are, and someone focused on their problem, with access to the internet, just might be able to get more traction than the person trying to remember 1 out of 10000 things in their head. I think this is also why there's been a push toward specialists: it's just easier that way, even if the problem isn't that specialized, because the specialists only have to remember 1 out of 20 things.
The basic healthcare model, with MDs at the top, followed immediately by PAs and nurses, needs to change, to be more decentralized. There's a role for those training backgrounds, to be sure, but many of the same tasks could be accomplished in different ways, and many things are maybe done more efficiently and less expensively through different routes (by relying more on pharmacists, for example, or expanding training and practice for optometrists, psychologists, and so forth).
Maybe if a pharmacist had been consulted, for example--maybe if there was less of an expectation that you start with an MD, or rely on an MD to provide all the answers, or expect them to know everything--the side effect of that med would have been confirmed earlier.
This is the one silver lining that I might hope for in a renewed healthcare debate, which is a restructuring of care systems to increase competition and decrease costs. So far everything has focused on how to pay. That's not to say I agree with moving away from a single-payer system, or that the current government will lead anywhere productive, or that I've seen anything innovative from them in that area, but it's something that hasn't been done yet.
Fwiw, "putting in effort" is somewhat relative. Do you mean they were lazy, or spread too thin?
This seems fairly disrespectful. I suppose you mean that the median doctor isn't perfect, but do you have any idea how many hours residents put in? You might think carefully before calling the median one worthless, and ask whether they're really, truly wasting their lives in the service of others.
Less emotionally, I find it hard to believe that half of doctors provide essentially no benefit.
> it's a side effect that's been widely known since the 1970's
Guess how many side effects have been widely known since the 1970's?
For clarity: I am 100% in favor of thinking critically about a doctor's advice, having received contradictory diagnoses before.
Or, if you want to search the transcript for it, "So, given a choice between two doctors, let’s say — one fresh out of medical school and the other with fifteen years’ experience, which one do you go for?"
It's a good listen or read.
Tl;dl: Anupam Jena: "And we can basically see what happens if a patient happens to be treated by a doctor who is 20 years out of residency versus 5 years out of residency. And what we find is that if you happen to be treated by a doctor who is 10 years or 15 years out of residency, your mortality within thirty days of being hospitalized is higher." [Note: surgeons were one of the specializations that did not follow this correlation]
Side note: the medical profession is really terrible at keeping statistics on outcomes, outside of actual trials.
Does this make it fine not to expect a doctor to inspect the list of known side effects for drugs his patients are taking?
I've heard this very story a few times already; perhaps the sources of data widely accepted by doctors are not up to par with what non-doctors consider accurate, in some cases anyway.
Absolutely the accuracy of doctors should be improved, but the bar is not "are you as good as a UCSF resident?" (one of the best programs nationally) but "are you better than nothing?"
It just seems crazy to me to suggest that the median doctor, i.e. at least half, is worthless.
I have had similar crappy experiences, like most people here, and I tended to hate all medical personnel uniformly. Then I started dating a young doctor; I still can't wrap my head around what an overworked, over-stressed, and underpaid piece-of-shit job that is.
Bear in mind that we're talking about the biggest university hospital in Switzerland. Ridiculous hours (a normal shift can be 10-13 hours, overtime unpaid), night shifts that will just mess you up mentally, crazy and dangerous patients, drug addicts, a very real danger of contracting something like HIV or hepatitis C from a single tiny mistake, or of actually killing somebody by overlooking some tiny fact about the N-th patient that night. You can end your career for life and land in jail and/or lifelong debt with one mistake. ONE. Even after directly saving the lives of 500 other people.
Compare that to my comfy corporate job, where I earn much more and have made a mistake or two in production over the last 5 years (nothing critical, no real losses, but still)... We should be thankful that anybody clever is still doing that job. Most of them could earn more and actually have a life someplace else. Blame how the system is set up much more than the individuals forced to exist in it.
This is why, if I were an MD, I would be latching on to ML as quickly as I possibly could. There is a TON of money to be made here; sadly, I do not have the domain expertise to make it, though I am picking it up pretty quickly.
I'll bet the farm it was lisinopril, or another ACE inhibitor. The ACEI cough is notorious enough that any 1st year medical student probably should have been able to piece this together.
The problem is not that those doctors were ignorant. It's more likely that they did not ask the right questions to get a proper history. You should ask every patient to list their medical problems and medications unless it is a routine follow-up when nothing has changed.
Something else that can happen is patient ignorance (and I am not insinuating your dad was guilty of this, or making a value judgment on people who are).
Often times you can't get a proper past medical history or medication list unless the patient brings all of their prescription bottles to the office. E.g.:
>"What medications are you on?"
>"Well, I'm on a sugar pill, a water pill, a cholesterol pill, a stomach pill, and some allergy medicines."
>"Ok, do you know the name of that sugar pill?"
>"It's a little white pill. I think it starts with an 'F'. No wait, I'm thinking of my water pill."
>"Do you have any medical problems?"
>[glances at meds list] "Ok, so why do you take metformin, hydrochlorothiazide, atorvastatin, omeprazole, cetirizine..."
While I think you might be overreacting a bit here, I have a similar story. I had an irritated, gunky eye for the better part of a year. Was bounced around multiple doctors, including one specialist, and had all sorts of really annoying treatments I was supposed to do. Finally got bounced to a specialist at a major teaching hospital. Took him five minutes to correctly diagnose me, and the major part of the fix involved instantly stopping all of the things recommended by the other doctors. (The rest of the fix was simply to make sure my eye was protected while sleeping.)
So yeah, I think there is a lot of value that might be added by an expert system that is familiar with the slightly more exotic things that can go wrong with your health.
Your dad's case is like complaining about constant nosebleeds while taking Aspirin every day. It's a bit obvious, to be honest.
For example, I am a radiologist. Probably 90% of my cases are mundane and are either normal or have 'easy' pathology that I can readily detect and quickly report. Another 9% is a mixed bag of things that take a lot more time: something I need to reference, think a bit more about, or possibly show a colleague. And then there's the 1% that is truly a 'make or break' case. My sub-specialty training and experience can really shine in these situations, and I can confidently dispense a diagnosis or make a 'tough call' where another might equivocate or defer to more imaging or a follow-up. I like to think I'm not being paid for that 90%... I'm being paid for the other 9+1%. That's where I truly add value in the system. But how would I measure my 'success' rate when even the best eventually make mistakes given a long enough time, and there is often no gold standard for diagnosis or even long-term follow-up/resolution for many of the tough cases I've seen?
Studies have been done where experts review a sampling of cases and that's essentially what we do now for quality control--we randomly review a few prior exams for a case we are reading and then submit feedback based on our opinion on the same case. In these studies, trained radiologists do very well and make clinically significant mistakes only rarely.
But here's the problem. Let's say we use machine learning to interpret a scan, something like a CT of the chest to look for a pulmonary embolus. On that scan, the machine may see an incidental pulmonary nodule. That's fine; we have good data on how to follow up on those and what to recommend. But what about an anterior mediastinal lesion? Now it's not as clear. The differential is wide and depends on age, sex, symptoms, history, etc.

Let's say we build that logic tree and let the machine learn from 1000 anterior mediastinal mass cases with tissue diagnosis. Well, guess what: we do those studies all the time, and it turns out that NOTHING is 100% sensitive AND 100% specific. You have to sacrifice one for the other... it's a balance. So you would need to build ROC curves for every possible finding on every scan and decide that 1% of the time you will miss a cancer to save having to biopsy an extra 20 people... or maybe you want to miss cancer only 0.1% of the time, but that means you'll have to biopsy an extra 200 people, and one will have to be hospitalized from complications.

You think they will be happy knowing you made that decision? Guess what, we are already doing that with mammography. The screening guidelines that the different societies and agencies argue about follow this very rationale: how many women should be allowed to die to prevent those extra biopsies, false-positive workups, and all the things that come with that. I welcome machine learning into medicine. It's destined to be mired in the same ethical dilemmas we face every day.
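The sensitivity/specificity tradeoff above can be sketched on synthetic data. This is purely illustrative (the score distributions and thresholds are made up, not from any real scan data): sliding a decision threshold over overlapping score distributions traces the ROC curve, and every gain in sensitivity costs specificity.

```python
import random

random.seed(0)

# Hypothetical classifier scores: diseased cases tend to score higher,
# but the distributions overlap, so no threshold is perfect.
healthy  = [random.gauss(0.3, 0.15) for _ in range(1000)]
diseased = [random.gauss(0.6, 0.15) for _ in range(1000)]

def sens_spec(threshold):
    """Sensitivity = diseased correctly flagged; specificity = healthy
    correctly cleared, at a given decision threshold."""
    sensitivity = sum(s >= threshold for s in diseased) / len(diseased)
    specificity = sum(s < threshold for s in healthy) / len(healthy)
    return sensitivity, specificity

# Lowering the threshold misses fewer cancers but "biopsies" more
# healthy people; raising it does the opposite.
for t in (0.35, 0.45, 0.55):
    se, sp = sens_spec(t)
    print(f"threshold {t:.2f}: sensitivity {se:.2f}, specificity {sp:.2f}")
```

The point of the comment stands: a model doesn't remove the choice of operating point; somebody still has to decide how many missed cancers are acceptable per avoided biopsy.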
I have a feeling I'll make it to retirement.
For example, let's say you could--you can't--but let's say you could without a shadow of a doubt predict that the pulmonary nodule in your lung is not a cancer with 98% certainty. Well if you are 40, that's not that good actually.... that means 2 out of 100 people in a very productive time of their lives may have a cancer go completely ignored! So should I tell every patient in my report, "There is a 2% chance that this is malignant, but I won't recommend biopsy because there are chances of complications from that and we can save a lot of money by letting a few slip through the cracks--it will cost the healthcare system too much. Thanks for your understanding."
Remember, statistics predict population outcomes... not individual outcomes. I can tell someone that something very rare might happen.. but guess what, when it happens... the idea that it was a rare possibility doesn't assuage any negative feelings about it.
There is no right answer. Some people are illiterate! Even educated people don't understand statistics... how am I going to quantify that kind of risk/benefit analysis in a way that ensures a patient truly understands the implications? What if that risk were 1%, or what if the patient were 70 years old? Should either of those affect my recommendation? Who am I to decide who should be recommended one thing vs. another... it's a value judgement! But if I leave it solely to the patient, a lot of times they will ask me, "What would you do?" That is probably the most common question after a long discussion like that. The answer is, "I don't know."
Bottom line: We can do 'strong' recommendations for things that are well studied like breast cancer and pulmonary nodules, but we don't have data to support recommendations in many other areas of everyday practice. A machine learning system would need data that just isn't available yet to make recommendations.
Even the concept of electronic medical records has been resisted, and is currently poorly implemented. The gov't had to literally bribe doctors to convert to electronic systems and most seemed to go out and get the least poorly built products on the market.
I'm not trying to paint them as evil. Heck, even NASA scientists were suspicious of computers taking their jobs. I see this in other fields all the time.
Seeing the potential and seeing the current state of affairs kind of makes me sad. I want to live in the world where a balance has been reached. Where my medical record is a salient form of AI that checks if I lost those 10 pounds I promised and loops in the doctor when my flu symptoms linger too long. I want it to eliminate the hassle for doctors, hospitals, insurance companies as well as patients, and I think in the process it can drive costs down and raise quality of life.
I've never seen a doctor shun technology to keep themselves relevant (I've no doubt it will become a thing more in the future).
In my experience, the best doctors will do whatever it takes to improve their efficiency. Their biggest problem is the sheer volume of patients. Everyone wants to go to the best doc, and word spreads pretty fast. If you start doing well, very soon you are swarmed with patients 24/7. So any tool that can speed up patient handling time is welcome. This is not a scheduling problem: people have built a ton of scheduling tools, and those tools have done next to nothing to move the efficiency needle.
What's needed is, as you said, a diagnostic tool with automated feedback from the patient's activities and dramatic physiological changes. The best doctors that I know would take it up in a heartbeat.
I'd argue that the answer to this is, "Sorry, but we're full up on appointments that day/week/etc". And if you start needing to book your normal clients too far out, you stop taking new patients. The fact that people want to see a doctor doesn't mean they get to see that doctor. The answer is not to take more patients but give each one less time, resulting in sub-par treatment.
> And if you start needing to book your normal clients too far out, you stop taking new patients
What's a normal client? Do you mean repeat clients?
People sue doctors, too, but I'd put money on the proposition they'll be more willing to sue a company than the individual they met who seemed like he had their best interests at heart.
Most doctors don't work in hospitals; they work in family practices and most definitely make the decisions about how their practices operate. Most of us don't go to hospitals to see our doctors; we go to our doctor's private office.
Specialists still seem to work from private offices, but my experience is that most doctors work for large medical groups where they do not make the technology decisions.
They have more data than most other networks, and more resources to invest in R&D.
Whether this is a good thing or not is open to interpretation. Doctors make mistakes, but people often catastrophize their test results.
Back in medical school we were taught that 80% of the diagnosis comes from the history, 10% from the physical examination and 10% from the investigations. Or something like that, it's the idea, not the numbers that matter.
As a young computer nerd, this didn't sit well with me, but the more I practice medicine the more I understand it.
My teachers didn't realise at the time, but I think they were actually talking about Bayesian probability.
I think we are talking about this more and more in medicine, but unfortunately not all specialities have embraced it.
There are lots of likelihood ratios published and good statistics available to help with diagnosis, but we don't use them effectively. There are some online tools and apps available to help navigate the literature, but I'm not really sure why we don't use them more.
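The Bayesian reasoning above boils down to the odds form of Bayes' theorem: post-test odds = pre-test odds × likelihood ratio. A minimal sketch, with illustrative numbers (the 10% pre-test probability and LR+ of 8 are made up, not taken from any published table):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR.
    Converts probability -> odds, applies the LR, converts back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A complaint with a 10% pre-test probability, followed by a positive
# finding whose published LR+ is 8:
p = post_test_probability(0.10, 8)
print(f"{p:.2f}")  # 0.47
```

This is why the history matters so much: it sets the pre-test probability, and the same finding moves a 10% prior and a 1% prior to very different places.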
In this particular area, one might argue that Georgia is more enlightened than California.
Both of those aspects are being relaxed today. Doctors are becoming more tech-oriented, and more diagnostic companies (run by first-rate medical/engineering teams) are starting up that may upend more established medical companies. We will see rapid change in the diagnostics landscape in the next decade. This may also add some risk to the diagnostic field, but the benefits far outweigh the risks.
No, she was very unwell. However, the test results would either be normal regardless of the severity of the asthma, or would be expected to be 'abnormal' in someone with mild asthma and 'normal' in someone with severe asthma. The ability to interpret those test results, and to take a history and examine the patient, is important.
Similarly, we used the latest evidence-based guidelines to assess the patient's asthma severity, and based on several objective criteria (breathing rate, oxygenation of blood, peak flow, etc.) the guidelines determined she had moderate-severe asthma.
However, we called the ICU doctors to see her. The ICU consultant, with many decades of experience managing acutely unwell asthmatics, simply looked at the patient for two minutes, observing how her chest moved during breathing and the sounds and respiratory effort she was making, and decided to take her to ICU. This was a good decision, as she ended up deteriorating and requiring very aggressive treatment. Whilst guidelines can make a suggestion based on the interpretation of some objective data points, the ability to assess a patient as a whole, based on history and examination, is still an important skill, and one which is hard to automate.
Tests are usually done for a clinical purpose, not just to find out what your result is. What do the numbers mean to you?
Many straightforward tests people can do and interpret themselves, like people with diabetes on insulin (measuring BSL). They're not all so straightforward.
We're already seeing a significant rise in the role of nurse practitioners at the front line of medicine. Today, they gather the data and hand it to an MD, so handing it off to an ML system would be straightforward.
No, I am quite confident that ML will really take off once the data fed to those systems is gathered automatically.
If your comment was a book, I'd lovingly put it on the shelf next to Reinhart's masterpiece ("revised and expanded ... with three times as many statistical errors and examples!"). Unlike Shakespeare, it appears nonfiction is well within the capabilities of the Interwebs*.
* For many years, it was believed that millions of monkeys hitting millions of keyboards would eventually recreate the works of Shakespeare. Now, thanks to the Internet, we know this to be false.
News flash, dermatologists don't just look at moles and oncologists don't just do differentials... let's see how these isolated systems deal with cleaning up TKI- and IST-related gastric bleeds, C. diff code browns, and timing chemo around liver resections for sepsis. All in a day at County...
My bet is that clinicians who use AI to amplify their own abilities will come to run the system. Errybody else gets to be a glorified NP, at best, or (worse) administrative ;-)
I hope that, as you said, it will allow doctors to focus on more difficult problems that we did not even think would be possible to tackle.
Certainly, the medical industry does not face the same competitive pressure from outside, but that is the nature of industry in general.
Rather than calling medicine a monopoly, it is more accurate to accuse health care providers and practitioners of colluding to ensure prices are high. Monopolies by definition cannot collude since there would be nobody to collude with.
In the grand scheme of things, I believe medical costs are accurately priced and it is people's wages that are not adjusting correctly.
Not so, as the number of medical residents is de facto set by Congress (due to how the funding system works), thereby limiting the number of doctors in each specialty.
The critical difference is that there's no limit on the number of people who can get standard driver's licenses, but there is on medical licenses.
It's not just the AMA directly, though. Residencies are funded federally, I believe. I'm pretty sure the numbers per specialty are centrally controlled, but I'm not certain how the process works. And I don't know who limits the medical school slots, but I'm under the impression medical schools can't just increase the number of students at will.
All that being said, keeping the supply artificially low probably has some benefits.
...if drivers got to set the test standard and pass rate.
But I'm not sure opening up the gates is exactly what we want either. There are countless examples of charlatans trying to push the line here, and as a consumer, it would be too difficult, especially while fighting a serious disease, to tell who can really help me and who just wants my money.
There are better ways to make healthcare more affordable and more effective than to throw away the idea of proper certification.
For instance, some states (like the one in which I live) have granted more powers to nurse practitioners. I support this move.
There's even a big sign at the doctors office that says "only one health issue per appointment". Why? So they can scam the system for more cash.
Do you have any experience with the US medical system? I lived in Alberta for 7 years and in Quebec for 3 (important to note because it is hard to take you seriously since you did not even mention that healthcare is provincially administered in Canada).
> I have to book one appt to get referred for a blood test, another to take the blood, and then another appt to get the results!
Here is an alternative explanation to "doctors are incompetent scammers": you need a referral for a specialist so that specialists are not overwhelmed with requests from hypochondriacs who read something on the Internet. Once the test results come back the doctor wants to see you to explain the results and recommend further treatment. That saves everyone time and money and is the opposite of "scam[ming] the system for more cash."
The "one health issue per appointment" is not a government policy, it is a policy that some physicians have so that they can see more patients each day.
Great use of logic.
I still do not understand why you are blaming the general practitioner clinic for referring you to a specialist. If you want same-day STD testing there are a lot of private test centers in Canada. For example, top Google search result: https://www.stdcheck.com/canada/
That's been available here/US and it has come in handy.
If you look at the average earnings, Canadian physicians make ~320K gross salary, but they need to pay expenses out of that (reception staff, office, etc.). They do well, but they are not in the strata of hedge fund managers.
We should probably just pay them a flat rate, but on the balance the Canadian system is much more cost effective than the US free market system.
I guess doctors don't believe in the free market.
That said, I do agree that we likely won't see the full potential of replacing doctors with machines in the near term simply due to the political and regulatory hurdles, as well as the need to overcome the fear of the public and address the very real and significant privacy issues.
As to anesthesiology, I've actually done some work for a startup in that space working on a system that could very easily move anesthesiologists to a more supervisory role where they would watch (say) 50 surgeries at once (surgeries being handled by software tied into all monitors, as well as EMR systems for history and physical data, and able to dispense chemicals as needed) and step in if and when there was a problem. Technically, there is not much really stopping it, but politics and regulations present significant impediments (and probably rightly so, for now).
Medical diagnosis in the general case, especially with mis- or disinformation from patients, is quite complex. The data sets available aren't that good given the built-in biases and missing data. What we're seeing is that deep learning can help when it focuses on one little thing with a ton of good data available while ignoring everything else. That's what MYCIN did back when this concept started. That's not enough to replace MDs any time soon.
What you'll find is we can at best supplement the decision-making practices of MDs by running their data through a bunch of ML systems in parallel to try to suggest things they might miss. This data will over time feed into the ML systems to improve them. Augmented Intelligence, not Artificial Intelligence, will remain the best way to do things due to all the stuff in doctors' brains from their professional experience that's not in machine learning datasets.
It seems that the barriers to automation may be more the result of strong professional associations rather than specific complexities of the job.
Let me try and give one intuitive explanation; if others would like to chime in with something better, by all means.
Let's suppose that you are classifying objects in images - say bananas and oranges, but it could be tumors or anything that you like.
So we train a classifier to predict this, and we find that of 100 classifications on a hold-out set, we get 73 of them correct. You might, quite reasonably, interpret this to mean that if we randomly select a new image of either an orange or a banana, we will have a 0.73 probability of classifying it correctly. (There are actually some subtleties in this interpretation which I'm ignoring, but they aren't so important for the point I want to make.)
Suppose, however, that we draw out an image that we want to feed into our classifier, and we look at it for a moment. Suppose further that this image contains an object that is long, thin, curved and yellow. We'd expect our classifier to classify it as a banana, and sure enough, it does. Now we draw out another image, except this one has an object that is long, but bent almost completely in a circle, and is more orange than yellow. Now, we might still expect our classifier to classify this as a banana, but should the classifier really be as certain about this prediction as it was about the previous one? Intuitively, I would say not. However, the overall classification accuracy remains unchanged, and so we can't say anything in particular about the certainty of this prediction.
So uncertainty isn't just the proportion of your results that you classify correctly.
Furthermore, it also isn't exactly equivalent to the class probability produced by your classifier, though I don't think this is the best forum for me to get into the details on that.
But you want the uncertainty conditioned on the particular observations you have made.
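A toy sketch of the banana/orange intuition above. The features and centroids here are invented purely for illustration (a real classifier would learn them from data), but the point carries: hold-out accuracy is one number for the whole test set, while a per-example margin varies from prediction to prediction and is a better, if still imperfect, proxy for certainty.

```python
import math

# Hypothetical feature space: (elongation, curvature, orangeness),
# with one hand-picked centroid per class. Not a real trained model.
CENTROIDS = {
    "banana": (0.9, 0.2, 0.1),
    "orange": (0.1, 0.0, 0.9),
}

def classify(x):
    """Nearest-centroid label plus a per-example confidence margin."""
    dists = {label: math.dist(x, c) for label, c in CENTROIDS.items()}
    label = min(dists, key=dists.get)
    # Margin: how much closer the winner is than the runner-up.
    margin = abs(dists["banana"] - dists["orange"])
    return label, margin

# A typical banana: long, thin, curved, yellow -> large margin.
print(classify((0.95, 0.3, 0.05)))
# Long but bent into a circle and orange-ish -> same predicted label,
# much smaller margin, even though overall accuracy never changed.
print(classify((0.8, 0.9, 0.6)))
```

Both inputs come out labelled "banana", but the second one's margin is far smaller, which is exactly the per-observation uncertainty that a single accuracy figure cannot express.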
A ton more here:
I mostly agree with your post, ML has great potential to augment and assist doctors in making the right calls. Paraphrasing Jeremy Howard, "Doctors will be skeptical of these tools coming to take over their jobs at first, but will learn to trust them more and more when they see the kind of predictions they make, thereby slowly but steadily leading to their adoption (as an enhancement or tool) in many areas."
Now, I'm absolutely not saying that we can't engineer solutions in the medical field. You know more than I do about what diagnostic evaluation takes and how procedures are performed, so I don't pretend to understand that aspect perfectly. What I do know for sure is that a machine cannot and will never replicate human thought and decision making, and cannot ever replace a doctor.
Also, you bring up a good point about salaries. Doctors are important and do great work but deserve to make no more than double or triple what the lowest paid person in their society makes. It's time to bring lawyers, doctors, accountants, developers, engineers—the professional class—back down to reality with the rest of us.
I see your point and agree - if you look at ML and computing in general as a tool useful within the current context of medicine.
But I think the whole field of medicine (or a big part of it) may become obsolete - because the change will be driven by technology, not medicine.
By monitoring people's vitals in real time (think wristbands that can do blood tests, plus urine, sweat, sperm, etc.), feeding that real-time data to ML algorithms, and training on billions of people, you can accurately tell ahead of time when a person is getting sick. The next day the drone brings the medicine.
Doctors will still be involved of course, but their role will be mostly to confirm whatever the algorithm has decided.
So I think we'll be seeing a move to preventive medicine / real-time monitoring during the next decade or so and within our lifetime people might not be involved in medicine at all.
That is, if we have peace and things go according to plan... Which is a long bet of course.
Preventable conditions are often a combination of genes plus environment plus luck. Trouble is, the "luck" part dominates more often than we'd like. This is why preventive medicine is often accused of being tremendously arrogant.
We've got a long way to go, and it would help to focus on the highest-marginal-benefit items. (We were supposed to have flying cars, and all I got was the sum total of human knowledge on my phone... not a bad trade, on balance.)
If your profession only relies on reading a bunch of discrete values from monitors, it's a prime candidate for automation regardless of the complex process that derives the values.
The claim that this device is a replacement for an anaesthetist is similar to claiming that an automated wheel balancing machine will one day replace car mechanics. Certainly it can perform one specific component of one kind of anaesthesia, but it is a far cry from the full skill set of an anaesthetist.
Once you get to general, for example, the control problem is a lot more difficult because of the required precision.
Physicians will adapt by carving out even narrower niches ("I only specialize in the LEFT eye..." ;). But I really don't see a they-took-our-jobs moment until people are allowed to file lawsuits against AI. Companies will never put out an AI M.D. in the wild unless there's a flesh and blood counterpart to shoulder the liability (+1 job right there).
Don't get me wrong though, I eagerly anticipate the day when going to the doctor/machine is not such a miserable soul-sucking experience... (I'm a physician.)
MDs are not factory workers. First you have the AMA protecting you. Second, there appears to be a very strong preference by humans for medical care to involve another human being there providing the care - in person.
There are other professions where machines are already much better than humans and the need for humans to even be there in person is debatable. Airline pilots for example. Of course, while a computer can fly a plane better than a human and planes can be flown remotely, it will take much longer for passengers to accept the idea and get on a plane without a human pilot in the cockpit.
The effect of machines for pilots already highlights risks to come. Humans still need to be there in person, but they almost never need to do anything but sit there. The job is to be a warm body and tell people to look out the window while they fly over the Rockies (seriously, take off to landing no human is really needed to fly anymore - pilots do nothing 95% of the time).
Look at how machines taking over the cockpit affected pilots: pay, benefits, competency and job satisfaction all took a nosedive. Did you see the movie "Sully"? When a human did step in over the machine, he was almost crucified; even though it was the right move, he only just barely convinced others of it.
Is it possible machines won't replace doctors very soon but could soon start to lead to lower pay, less trust, less prestige (here come the nurse practitioners at least), less need to use your brains at work and less meaningful interaction with patients...? And when a machine can't do something in those rare cases, will the doctor be ready for that? Think the best and brightest still will want to do this? Doctors already are not fans of that thing where patients come in having looked up their symptoms online, well they are about to do that 10x and be more right than you.
Total guess here: how likely is it that we see the best and brightest increasingly go into medical research instead of practicing medicine? As the corporate pharma kickbacks you guys get can't make up for the low pay and shitty job conditions machines have created for doctors, while nurses and B-team quality doctors take over the actual working-with-patients stuff?
It's not a matter of if, it's a matter of when. Greed will be the impetus, and eventually quality catching up will be the nail in the coffin for traditional medicine.
I propose that the doctors who use ai to make themselves damned near infallible (by catching the corner cases where the ai is poorly trained) will make out like bandits, just like top lawyers. The ABA is a monopoly, too, and legal cases are perfect fodder for RNNs. Yet good corporate lawyers still make money faster than they can count it.
Meanwhile, shitty lawyers are out of a job.
Lather, rinse, repeat, but with a million times more regulation and inertia.
Person checks funny looking mole on smartphone app, app with intentionally conservative algorithm says there's some risk it might be a melanoma, dermatologist gets to have a proper look at it in three dimensions (and a second opinion from the medical grade version of the ML app). Result: more people visit doctors over issues that never bothered them that much and the doctor gets assistance with their diagnosis.
And of course some of those people that wouldn't have bothered visiting the doctor under the old system actually do turn out to have a melanoma and the early diagnosis significantly improves their survival chances.
If ML unleashes a global increase in diagnostic capability for all MDs, then it's even better.
Not something I would have expected a machine to tell me after a procedure completely unrelated to my heart.
In fact, complicated algorithms drawing on many streams of data, which specialists probably don't share directly until it's needed, may uncover new things even before traditional doctors spot them.
It seems easier to scale compute power than to scale doctors considering how long doctors take to train.
Of course, being able to get a machine to a doctor's level is not easy!
What I'm excited for with AI + medicine is how many poor people will be able to get _some_ level of healthcare where they were previously getting none. These programs aren't free, but once they exist they are cheap to run, so people in third-world countries can get their films read, or have an AI dermatology program look at their skin.
Cynically, though, I worry that once these AI programs get perfected they will only exist behind the world's toughest paywall and insurance will just charge you the same fee as seeing a real doctor. They'll not let it run films on poor African villagers because they have to "protect their IP" or some BS.
I'm assuming the 'data' for your condition was apparently in the EKG, thus there is no reason in the world an ML system shouldn't have picked that up. Whereas with humans, maybe you got a tired, new, or bad anesthesiologist.
If you said they made the determination by something other than the EKG, I would be more convinced the 'machine' would have missed it.
If you've developed expertise in deep learning and want to apply your skills to healthcare in a startup... please email me: email@example.com. My co-founder and I are ex-Google machine learning engineers, and we've published work at a NIPS workshop showing you can detect abnormal heart rhythms, high blood pressure, and even diabetes from wearable data alone. We're working on medical journal publications now based on an N=10,000 study with UCSF Cardiology.
Your skills can really make a difference in people's lives. The time is now.
IIRC they were outperforming the average radiologist on some tasks 10 years ago.
* Following behind the expert to give a second opinion
* Going ahead of experts to screen cases that should be read by an expert
Humans are imperfect and have their sources of unwanted variability. Algorithms may be less variable and reach near-human performance in controlled settings, but are often not flexible enough, not good at incorporating multi-modal patient history, and sometimes fail in spectacularly bad ways.
As soon as AI manages to show those capabilities, it's the end: algorithms are much more accurate at combining all the available biomedical information needed to decide on diagnosis and therapy.
Another is the existence of edge cases. While not quite as bad as self-driving cars, they still require human review.
One radiologist I spoke with was under the impression that his field was going to disappear in the next n years. I don't think that will be the case entirely, but it will change.
One reason why image detection is ripe for this kind of "disruption" is that some of the highest paid medical fields (dermatology, radiology) basically employ expert image recognizers.
On the other hand, I'm not sure what percentage of health care costs goes toward employing doctors directly, but I don't think it's huge compared with the rest of the facility and equipment and administration cost.
The real benefit will come when people don't have to go in to the doctor at all. Which makes the smartphone "take a picture" aspect pretty sweet.
If this interests you and you have developed the expertise, there is another startup opportunity to explore -- please email me at firstname.lastname@example.org. We are bunch of machine learning PhDs, developing/publishing and commercializing deep learning algorithms for disease/risk identification from retinal photography. We play with millions of retinal images, and it's a lot of fun!
What if we haven't but we do?
We (Cardiogram) in particular are hiring for: mobile engineers (building an Apple Watch app used by 100,000 active users), storage and data infrastructure engineering (how are you going to store 10 billion sensor measurements those users are producing?), and machine learning engineers (now that we have the data, what can we do with it?).
In practice, many people fit multiple categories. There's a tight relationship between data infrastructure and machine learning, for instance, since new infrastructure often enables new algorithms. Likewise, building a new feature in Cardiogram for Apple Watch may give us 10x more labeled data, and therefore make the deep learning algorithm perform better. Interdisciplinary is good.
So, how true or false is it?
Mind you, I'm not talking about researchers, who will always have a job. I'm talking about practitioners. I've had a medical condition from birth and I've had to deal with my share of doctors. Outside of the insurance system, they are easily the most unpleasant part of the whole ordeal to deal with. There are some gems, but most you will encounter are pompous, arrogant, and "commanding" -- when they enter a room, they are flanked by "residents" and "assistants" and generally give off an air of superiority which is really just down to their rote experience. The whole thing comes off more as a performance than anything else. Worse, they often get mad when you question them or ask them to explain themselves, or how they arrived at a conclusion.
Good luck finding work when an algorithm can do your job better than you. It's only a matter of time.
I have also encountered doctors I did not like but fortunately for me I had a choice where to go. Maybe machine learning should focus on weeding out unpopular practitioners instead.
However, since there are so many applicants, schools have no choice but to cut all below a certain GPA. This has an effect on the pool of applicants and the schools are trying to mitigate that.
In my opinion, technology will help reduce the cognitive load sustained by doctors, driving error rates down and simplifying medical education, allowing the professional to focus more on the relationship with the patient.
Many specialized guidelines contain actual algorithms: objective and subjective signals to look for, their interpretation and evidence-based recommendations based on them. In one instance in my education, this manifested in the form of giant lookup tables. I was expected to memorize them and keep doing it again and again when they inevitably change in a few years. To me it was like trying to memorize a highly dynamic version of the periodic table of elements.
One look at it and I knew a computer could fully implement its logic, yet my classmates thought I was crazy for even suggesting it. I say let the computer remember that stuff. It won't replace all of medicine, and certainly won't substitute for a proper understanding of the disease... But the act of mapping objective data to evidence-based conduct should really have been abstracted away long ago. The electronic medical records system should bring up these guidelines the moment the physician types the data in. If the doctor types in anthropometric data like BMI and other relevant measurements and they indicate obesity, the system should be able to suggest a proper conduct.
Unless they're used in emergency situations where time is essential, doctors shouldn't have to memorize magic numbers, much less entire tables of them. With one less thing to cram, maybe school wouldn't be so stressful, doctors would make fewer mistakes, and they would also have more time for actually caring about the patient.
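The lookup-table idea can be sketched in a few lines: encode the guideline once, and let software, not memory, map measurements to a suggested conduct. The BMI thresholds below are the standard WHO cut-offs, but the suggested actions are illustrative placeholders, not real clinical guidance.

```python
# Each row: (upper BMI bound, category, placeholder suggestion).
# The table is data, so when the guideline changes, only this changes.
BMI_GUIDELINE = [
    (18.5, "underweight", "assess nutrition; consider further workup"),
    (25.0, "normal", "routine counselling"),
    (30.0, "overweight", "lifestyle counselling; recheck at next visit"),
    (float("inf"), "obese", "structured weight-management referral"),
]

def suggest(weight_kg: float, height_m: float):
    """Map raw measurements to the guideline row they fall in."""
    bmi = weight_kg / height_m ** 2
    for upper, category, action in BMI_GUIDELINE:
        if bmi < upper:
            return round(bmi, 1), category, action

print(suggest(95, 1.75))  # -> (31.0, 'obese', 'structured weight-management referral')
```

An EMR system could run exactly this kind of lookup the moment the measurements are typed in, surfacing the current guideline instead of relying on what the physician memorized years ago.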
Right now, a really rude doctor will be tolerated within the system because of their technical knowledge, diagnostic skills, surgical skills, etc. But if all that other stuff is performed by machines, and we just have nurses, etc., there to tend to patients, explain things to them, and generally provide a human touch, I'd expect that very rude human employees would no longer be tolerated.
Would it have mattered to you if a machine interpreted the results and a nurse, fully versed in the specialty, had a consultation with you and guided you through the treatment process?
It seems like the bigger issue here is having some warmth of some sort in the situation - which I think is where medicine is really going, at least in the way we experience it. Computers and machines do the hard diagnosis, with educated humans guiding other humans through. I believe we'll have fewer unpopular practitioners because we'll be able to spend more time teaching bedside manner and communication skills.
The obvious rebuttal which is common is: "why not have a compassionate human read the machine results/analysis/interpretation/prescription?"
To which I respond, how much are you willing to pay to have someone read to you?
Instead, I think we'll see more powerful diagnostic tools at the disposal for physicians to use. Doctors will still play an important role in treating their patients and will be more effective because they'll have powerful tools assisting them.
But to your point, will technology help patients feel more empowered in their medical encounters? Or to get more value out of their interactions with their doctors? https://www.remedymedical.com/ seems to think their platform will do just that for primary care / telemedicine visits.
I get into this argument with people a lot.
I say something like, "Self driving cars will replace all the transport jobs" and the pedant says, "Well they won't replace ALL the transport jobs, so argument null and void." But no shit they won't replace ALL the transport jobs, if they replace even 30% of them, that's huge. Highest unemployment during "The Great Depression" was ~25%. I think in the case of transportation it will be more like 80% fewer transportation jobs in 10 years, something like that.
With doctors and hospitals, if there were better incentives in place promoting any level of efficiency, we would have less need for doctors (and hospital staff in general). I agree that we won't see 100% automation. But a hospital finding it can do more and better work with 50% less staff? I could see that being a possibility in our near future, with the caveat that it won't happen in the US due to the regulatory environment and incentives in healthcare.
Countries outside of the US might be more open to the idea of efficiency in healthcare.
There are fewer accountants today because of tax software and Excel spreadsheets. They are still needed for more complex and unique situations, but there are a lot fewer of them. The local H&R Block uses lower-paid "Tax Preparers" instead.
There is a lot of research on Walmart showing that when a store moved into town there were not fewer jobs, but somehow the area got poorer. What they found was that the local community leaders (small business owners, lawyers, accountants) were automated away and/or moved to Walmart headquarters.
Is one I could find, but that also references other papers in the field.
This leads me to understand that, as with Walmart, the simple human physical-labor jobs will remain long after the more skilled positions. The main reason, I believe, is that software is simply cheaper than hardware (robots), so highly skilled positions, which require only software and not hardware to remove, will be the first to go. This is exacerbated by the fact that the software-replaceable jobs are the highest paid and therefore the most profitable to remove.
Very much this. This is how we'll be able to replace general practitioners with, at first, nurses with a bit of extra schooling - and then nurses with a 4 year degree... and then with a 2 year degree. Yet we'll still see surgeons for some time because it takes robotics or some other scientific breakthrough to replace them.
Programming, machine learning: it's constantly being automated, which leads to paradigm shifts. Those new paradigms are then studied, understood, and automated. Automation is about "not doing the same thing over and over again so you can solve new problems."
Programming, games: I'm giving a talk later this month about automating game production with machine learning. You can't really use machine learning to be "creative" yet, but you can use it to generate novel game assets, which is actually where most of the time is spent in game dev. There's also a lot of pipeline improvements that could be made by existing tools companies, most notably Adobe and Autodesk. So the upshot is it will be possible for small teams to create much more expansive game worlds. Make GTA V for $1M instead of $260M kind of thing. But even then, the history of game design has been automating different pieces -- it's basically the reason for the existence of game engines.
Starting businesses: This is controversial, but I think entrepreneurship is the highest intellectual endeavor. It requires you to apply everything you've learned about life, the universe, and everything in ways that most people haven't thought of. I think you'd need a very real AGI to do this; although I suppose I could see an algo running P&G or J&J and making decisions about what brands to invest in and divest from, along with how to market based on what people are saying on social media, stuff like that.
Eventually urgent cares and ER will install them for people directly. No reason the machine can't prescribe antibiotics for the latest form of strep that is going around. That leaves the doctor for the hard cases where the machine doesn't know.
I would assume these challenges aren't unique to Canada and from an outsider's perspective the medical system in the US seems worse (maybe not if you're rich)
I think most people in AI are more surprised that it took this long. The tech has been there for decades for a pretty large percentage of routine diagnostics, especially carefully defined clinical-sample type diagnostics. People were pretty sure in the '80s that it'd all be automated soon, but it never managed to make it into actual hospitals so funding dried up. Mixture of bureaucracy, legal issues, patients not liking the idea of computer diagnosis, doctors not liking the idea of computer diagnosis, incentives, etc.
I think changes in all those "environmental" factors are likely to be the biggest boost to something like this getting deployed in practice. Tech advances are good as well, and will expand the range of diagnostics that can be automated, but there is already enough low-hanging fruit in medicine that is easy enough for any of a half-dozen AI techniques to do it, that I don't think tech is actually the bottleneck.
Overall, AI will ease the demand for human doctors, maybe not overnight, but gradually yet quickly. Cheaper healthcare for all, thinner salaries for doctors: I'd say not a bad deal overall.
What I find people miss here is that computers will not help you heal your boo-boos. Say you get stabbed in the face with a knife. You need a doctor to help you. Or to give birth.
And I don't know why you bring up the spokesperson stuff, it doesn't need to.
I feel bad for the rest of the people who visit my clinic and have to deal with any of the other garbage practitioners who usually fall into one of two buckets. Foreign (mainly Indian) hacks with zero medical knowledge and bedside manners, and greedy yuppie strivers with a knack for memorization but terrible analytical ability.
Most doctors don't deserve their inflated salaries or social status, and I hope they are soon brought back down to Earth by technology, they have been able to skid by for way too long.
 - https://en.wikipedia.org/wiki/Mycin
I think now really is different. Part of that is algorithmic advances like deep learning, as shown in this Nature paper.
An even larger part of it is that the financial incentives are flipping due to value-based care. In 1979, a hospital that implemented an expert system for accurate diagnosis might, paradoxically, have seen its revenue fall. Nowadays, with ACOs, risk-based contracting, and bundled payments, the financial incentives create tailwinds rather than headwinds for large-scale adoption of AI in medicine.
Contrary to popular belief, the medical system can absorb new techniques very quickly--when incentives are aligned. And they are now becoming aligned.
Many of the barriers standing in the way of widespread use of diagnostic software are not technological in nature.
Ignore those barriers at your own risk.
The deep neural net visual diagnostics are different. They are trained on raw pixels, much as we learn from photons striking the retina and signals traveling all the way back to the visual cortex. There is no assembling of thousands of rules here, so the system is less brittle.
These new systems get more powerful as you give them more data. Expert systems required humans to craft rules from the data, and therefore needed constant maintenance and were vulnerable to human error.
So yes, some of the barriers are definitely technological in nature.
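To make the "learned on raw pixels" point concrete, here is a toy sketch (not from the paper; the synthetic data and hyperparameters are made up): a logistic classifier trained by gradient descent directly on pixel values of fake 4x4 "images", with no hand-written rules anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 4x4 pixel grids flattened to 16 values.
# Class 1 images are brighter on average than class 0 images.
def make_images(n, mean):
    return np.clip(rng.normal(mean, 0.15, size=(n, 16)), 0.0, 1.0)

X = np.vstack([make_images(200, 0.3), make_images(200, 0.7)])
y = np.array([0] * 200 + [1] * 200)

# Logistic regression on raw pixels: the model learns its own evidence
# weights from data instead of applying thousands of coded rules.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= 1.0 * grad_w
    b -= 1.0 * grad_b

preds = (X @ w + b) > 0
print("training accuracy:", np.mean(preds == y))
```

A deep net extends the same idea with many nonlinear layers, but the principle is identical: feed pixels in, let the weights absorb the regularities.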
IIRC, MYCIN had several hundred rules. The researchers in this article had to pre-process 130,000 labeled examples. If you see a misclassification in an expert system you can at least backtrack and identify the individual rules that contributed to the failure. AFAIK, systemic errors in training data are much more difficult to detect and fix.
I think people tend to overstate the practical issues with expert systems and understate the issues with deep learning, partly because we have decades of experience with real-life deployments of the former and relatively little experience with the latter.
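The backtracking property is easy to illustrate: in a rule-based system each conclusion carries the list of rules that fired, so a bad output points directly at a bad rule. A minimal sketch (the rule names, thresholds, and weights here are invented for illustration, not MYCIN's actual rules):

```python
# Each rule is (name, predicate, evidence weight). On a misclassification
# you can inspect exactly which rules fired and contributed -- unlike a
# systemic labeling error buried in 130,000 training examples.
RULES = [
    ("asymmetric",       lambda m: m["asymmetry"] > 0.5,   2.0),
    ("irregular_border", lambda m: m["border"] > 0.6,      1.5),
    ("large_diameter",   lambda m: m["diameter_mm"] > 6,   1.0),
]

def classify(mole):
    fired = [(name, weight) for name, pred, weight in RULES if pred(mole)]
    score = sum(weight for _, weight in fired)
    return ("suspicious" if score >= 2.5 else "benign"), fired

label, trace = classify({"asymmetry": 0.7, "border": 0.8, "diameter_mm": 4})
print(label)
print(trace)  # the full audit trail of which rules drove the decision
```

If `label` is wrong, `trace` tells you which rule to fix; there is no equivalent one-step diagnosis for a mislabeled slice of a large training set.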
Yes, explainability is the only thing expert systems have going for them.
Yes, there are issues with both, but we are really debating different solutions for different problems. For visual recognition, there is no doubt in my mind that deep learning is king.
It would be interesting to compare that with the current state of the art in the field, and see if ML can contribute new scientific/medical theory as well.
This is most simply because, whatever the algorithm is trained to do, it's certainly trained better to do that thing than to introspect. Introspection is a separate skill!
But there's also a more insidious element: introspection (in humans, at least) tends to result in the creation of a lot of "personal concepts" that don't map to well-known common concepts. An introspection on one mind must necessarily result in a taxonomy that contains terms for the tiny, unique features that only that mind has—which makes it very, very hard to communicate one's personal introspections to others. (You might call this a kind of overfitting: the introspection capability becomes optimized for that one mind, but ceases to translate well to features in other minds—like human minds.)
I'd place a much stronger bet on our ability to train one AI to "stare at the brain" of other AIs as they make decisions [tons of them, as its training data], with the expected output being a general theory on common AI features responsible for the given calculation step. A computer psychologist, of sorts. :)
Of course, you could include such a pre-trained model as a "module" alongside the AI itself, and call the combined system "one AI" if you like.
Indeed it is. It's something that a second algorithm (perhaps ML, perhaps not) would do.
And this is beginning to remind me of Society of Mind.
It's a neat approach that I've used with some random forest classifiers.
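One concrete version of that "second algorithm" idea, sketched here with scikit-learn (the dataset and depths are arbitrary choices, not the commenter's actual setup): fit a small, readable decision tree to mimic a random forest's predictions, i.e. a surrogate model that plays "computer psychologist".

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The opaque model whose behavior we want to explain.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The "psychologist": a shallow tree trained on the forest's *outputs*,
# so its splits describe what the forest actually does, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, forest.predict(X))

fidelity = (surrogate.predict(X) == forest.predict(X)).mean()
print(f"surrogate agrees with forest on {fidelity:.0%} of samples")
print(export_text(surrogate, feature_names=load_iris().feature_names))
```

The surrogate's fidelity tells you how much of the forest's behavior the simple explanation actually captures; a low number means the forest is doing something the two-level tree can't express.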
The algorithms found most of the tumors that humans had missed, with similar false positive rates. BUT humans refused to work with the software!
The problem was that the software was very, very good at catching tumors in the easy to read areas of the breast, and had lots of false positives in more complicated areas. Humans spent most of their effort on the more complicated areas. Every tumor that the software found that the human didn't simply felt like the human hadn't paid attention - it was obvious once you looked at it. The mistakes felt like stupid typos do to a programmer. But the software constantly screwed up where you needed skill. The result is that humans learned quickly to not trust the software.
One of the things we do is perform a Turing test of sorts where we test if the performance of our detector is statistically indistinguishable from a human. (In fact, we actually have a contest running right now where we give you 10 EEG records, some marked by humans, some marked by our software, and if you can figure out which were marked by which we'll donate $1000 to the American Epilepsy Society.)
Rural and underdeveloped areas are going to be the largest market IMO. Everyone can access a smartphone but not everyone has the luxury of seeing a Doctor in person, and if they do the time/travel costs can be significant.
Disclosure, I work for an EHR startup with a Telemedicine product.
The paper ends with "deep learning is agnostic to the type of image data used and could be adapted to other specialties, including ophthalmology, otolaryngology, radiology and pathology."
I am lucky enough to have Sloan Memorial as my hospital, with none other than Dr. Marghoob, one of the leading experts, and I actually have a scan of my body made with 50 or so high-definition cameras (I am literally a 3D model in blue speedos with a white net on my head).
They have a new system that can look at the cell level without doing a biopsy, and it actually found my melanoma before the biopsy (i.e., they knew it was melanoma before they did the biopsy). But it's a really cumbersome process: I had six experts studying and working to position that laser properly.
So the real challenge today is how do we get the data into the system.
I say when it comes to medicine, err on the side of caution. Obviously a diagnosis app isn't too dangerous. Worst-case scenario: the app gives you a positive diagnosis, so you go into the doctor's, they take a sample, and find the growth isn't cancerous. No harm, no foul. But other ideas could be more dangerous.
I wonder when we will get to a point where machine learning can help there?
That said, pulling this off is one of the best ML applications to date. Recognizing cats or scenery doesn't seem nearly as useful.
One such task is lung cancer nodule detection from CT scans. A paper I recently co-authored applied many different architectures to this detection and achieved very good results. (https://arxiv.org/pdf/1612.08012.pdf)
The best combination of systems detected cancer nodules which were not even found by four experienced thoracic radiologists.
The difficulty is twofold. First, liability: a dermatologist aims not to miss a single case of melanoma among the tens of thousands of patients seen over a career. If this algorithm is used widely in millions of patients, then either the sensitivity will have to be higher and more biopsies performed, or there will have to be an acceptable rate of missed melanoma diagnoses.
Second, edge cases, such as moles that are slightly atypical. In these scenarios there is no way that I would be comfortable making an assessment from a photograph. Now of course, a machine could also gather further information via methods such as in vivo confocal microscopy, but in that case the cost savings are likely to be negligible.
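That sensitivity/biopsy tradeoff is just a choice of operating point on the classifier's score: push the decision threshold down until you hit the target sensitivity and accept whatever biopsy (positive-call) rate falls out. A toy sketch with entirely made-up scores:

```python
# Made-up classifier scores: higher means "more melanoma-like".
malignant_scores = [0.95, 0.90, 0.80, 0.70, 0.40]                  # true melanomas
benign_scores    = [0.60, 0.50, 0.45, 0.30, 0.20, 0.10, 0.05]      # benign lesions

def operating_point(threshold):
    # Sensitivity: fraction of true melanomas flagged at this threshold.
    sensitivity = sum(s >= threshold for s in malignant_scores) / len(malignant_scores)
    # Biopsy rate: fraction of ALL lesions flagged (and hence biopsied).
    flagged = sum(s >= threshold for s in malignant_scores + benign_scores)
    biopsy_rate = flagged / (len(malignant_scores) + len(benign_scores))
    return sensitivity, biopsy_rate

# Lowering the threshold until no melanoma is missed...
for t in [0.9, 0.7, 0.5, 0.3]:
    sens, rate = operating_point(t)
    print(f"threshold {t:.1f}: sensitivity {sens:.0%}, biopsy rate {rate:.0%}")
# ...catching the 0.40 case forces biopsies on most benign lesions too.
```

In this toy data, reaching 100% sensitivity means biopsying 75% of everything seen, which is exactly the "more biopsies vs. accepted misses" dilemma above.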
These close-to-even ratios make for a more powerful test of classification. I would assume that the fact that these test samples have biopsy data means that some dermatologist thought they might be malignant (unnecessary medical procedures are unethical). This might bias the test set toward samples that are difficult for humans to diagnose.
Separating these into binary classifications of specific tumor types makes it easier to classify than out of every possible tumor type (as a dermatologist does).
Still the claims this paper makes are very promising. A lot of the training data was classified by dermatologists, not biopsy. Using more biopsy data could lead to even better classification, as well as improvements to the model.
As an example, I recently trained a neural network to perform a useful task for our lab using 3 (!) hand-labeled brains.
I am learning machine learning right now and I find working with datasets with fewer than 100 examples to be quite difficult.
It seems counterintuitive when you first think about it, but having way more data actually makes fitting the model much easier, as there is granularity that can be used to get feedback on adjustments to the model's structure.
There was a really cool medical imaging paper recently that literally just labeled several 2D slices in a 3D dataset consisting of 3 images and performed a reasonable segmentation: