Deep learning algorithm diagnoses skin cancer as well as seasoned dermatologists (stanford.edu)
596 points by capocannoniere on Jan 25, 2017 | 279 comments



Speaking as an MD, I think it is very clear that all of us who understand even a little about computers and tech see that machine learning is the way to go. Medicine is ideally suited to ML, and in time it will absolutely shine in that domain.

Now for the people eagerly awaiting the MDs' downfall, I think you are getting ahead of yourselves a bit. We all tend to believe in what we do, and I agree that expert systems will replace doctor judgement in well-defined, selected applications in the decade to come. But thinking that the whole profession will be hit as hard as factory workers, with lower wages and supervision-only roles, is not realistic. What will be lacking is the automation of data collection, because you seem to vastly underestimate the technical, legal, and ethical difficulties in getting the appropriate feedback to make ML systems effective. I firmly believe in reinforcement learning, and as long as the feedback loop remains insufficient, doctors will prevail, highly-paid jerks or not.

I myself am an anesthesiologist, a profession most people (myself included) think of as a perfect use case for these technologies, and I wonder why we haven't been replaced already. The reality is that the job is currently far beyond what an isolated system could do. We already have trouble making cars stay in their lane in non-standard settings. I hope people realize that in each and every medical field, the number and complexity of factors to control is far greater than staying in the right lane.

People who drive the medical system have no sense of technology. They cannot even envision the requirements for machines to become efficient in medicine. That is why we are seeing quite a lot of efficient isolated systems pop up, but we won't be seeing fully integrated, doctor-replacement systems for a long time. This will require a new generation of clinical practitioners, who will understand how to make the field truly available to machine efficiency.


My issue with doctors:

Recently, my dad was sick with a pretty bad cough. Like, so bad that he couldn't speak without coughing. He fainted twice from minute-long coughing fits, one of those times hitting his head on the stove on the way down, leaving a deep cut and blood everywhere.

He went to at least three different doctors. He got a scan of his chest. Everything looked clear, and all of the doctors were stumped. Things were pretty bad.

I mentioned this to a UCSF resident friend, and her immediate response was "Oh, is he on <some blood pressure medication I forget the name of>?" I was like, uh, let me see. Called my mom, she checked, and, lo and behold, he was on it. So his doctors took him off it and within a week he was better.

This coughing wasn't some obscure side effect of the medication she knew through sheer brilliance: it's a side effect that's been widely known since the 1970s. Hell, it was on the drug's Wikipedia page.

So there are a couple of morals you could take from this. One would be: wow, doctors are smart to be able to diagnose an issue based on a single symptom and some reasonable assumptions about a patient's background! The other is that the median doctor is pretty worthless; that spending tens of thousands of dollars gives you no guarantee you'll see someone competent; and that a medical system that relies on you grabbing drinks with a UCSF resident to get good results is fundamentally broken.

Machine learning and expert systems don't have to be as awesome as the best doctors to be valuable. They don't need to be better than competent doctors, even. They just need to provide a bare level of competence to provide a huge amount of value.


I just want to add something really important onto this.

ALWAYS READ EVERYTHING YOU CAN ABOUT DRUGS YOU ARE PRESCRIBED!

Sorry for the all caps, but it is super important. Not that your dad was in the wrong; lots of people have justified (to a degree) trust in their doctors. However, doctors are people, and by that alone they aren't perfect.

A few years ago my doctor prescribed me an antibiotic for an ongoing illness I had. I read the entire pamphlet for it and did some reading online about it, all before taking it. It turns out it can cause seizures if it interacts with propylene glycol, one of the main ingredients in e-cig juice, which I use daily. I had told my doctor I use an e-cig.

Really I cannot stress how important it is to be knowledgeable about the drugs you are taking.


>ALWAYS READ EVERYTHING YOU CAN ABOUT DRUGS YOU ARE PRESCRIBED!

I'm not trying to undermine your point entirely, but there is a flip side.

I can't tell you how many times I have seen a patient start a medication, then come back to the office within 48 hours because they coincidentally have every side effect that is listed in the pharmacy's information sheet or that they looked up online. The vast majority of these side effects are benign, present with next to no pertinent physical exam findings, and can't be definitively tied to the new med (like upset stomach, fatigue, headache, etc.).

Then they will start listing that medication as one of their "allergies", and if the nurse/doctor documenting doesn't dutifully probe what type of "allergic reaction" they had, they may end up not being prescribed that med in the future when it really is the drug of choice. A little nausea is a small price to pay if it kills a potentially life-threatening infection.

Also, I'm skeptical about the seizure risk. The thing about side effects is that they are supposed to be stratified according to risk. Doctors are typically aware of these risks, but patients aren't. So if your drug is listed as causing "headache, nausea, and seizures", there may have only been one patient out of millions who had a seizure while 50% experienced headache, yet the handout probably won't tell you that.

But even if it is a notable risk, I would be surprised if the propylene glycol you inhale from an e-cig could accumulate to a high enough level in the bloodstream to cause drug interactions, although I admit adequate research on the subject is lacking.

My advice would be trust your doctor first. If you don't trust your doctor, start seeing a doctor that you do trust. Then if you have a significant adverse reaction to a medication, talk to your doctor about it. Quite often they know something that you are not going to find by spending a few minutes on the internet.

As a side note, a good history includes asking about many habits. A lot of healthcare providers are guilty of simply asking "Do you smoke, drink, or use drugs?", but ideally the smoking aspect should be phrased as "Do you use any tobacco or nicotine products?". Patients usually won't read your mind and volunteer that kind of information. They will tend to give yes/no answers, so direct and specific questions are important.


> there may have only been one patient out of millions who had a seizure while 50% experienced headache, yet the handout probably won't tell you that.

Odd; here in Norway that's exactly the kind of information I expect to see on a leaflet inside the packet, not only for prescription drugs but also for over-the-counter painkillers like paracetamol. Roughly translated from the Norwegian, it says:

Rare side effects (more than one in ten thousand but fewer than one in one thousand patients) include: hypersensitivity, allergic skin reaction/rash, reduced white blood cell count, anaemia, disturbed liver function. Very rare side effects include serious skin reactions. Liver function can be affected by paracetamol and alcohol abuse.


> yet the handout probably won't tell you that.

That seems like a major problem. Is there any reason that more detailed information can't be included? Mathematical literacy may be a problem, but that doesn't mean that there aren't millions upon millions of mathematically and scientifically literate consumers who could use this information effectively.


You would probably have to ask a pharmacist, but from the clinical side I can tell you that most prepared health information we can give to patients has to be very bare-bones and comprehensible to essentially everyone at or above an 8th-grade reading level. I assume this is because publishers consider it too resource-intensive to produce multiple versions of the same information, and because people with higher education typically have the initiative and means to ask their doctor the right questions or research the information themselves.

I'm not trying to justify any of this, but that's how it is.

Not sure if it will be helpful in the future, but I can tell you that descriptors used with side effects follow a standard convention:

    very common: > 10%
    common: 1% - 10%
    uncommon: 0.1% - 1%
    rare: 0.01% - 0.1%
    very rare: < 0.01%
But you will probably never know the exact origin of these figures (like how many patients were studied, what populations were included, how tightly the study was controlled, whether adverse effects were self-reported, etc.) without doing some intense searching. And even if you did, I doubt it would have a significant impact on your healthcare. I don't want to go on a tangent about the nuances of pharmacology in clinical medicine, so I'll just circle back to my point that you should trust your doctor, or else find a new one that you do trust.
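For the curious, that naming convention is mechanical enough to express as a tiny lookup. A minimal sketch (the function name and the thresholds-as-fractions are my own; the bands are the ones listed above, with boundary cases assigned to the more common band):

```python
def frequency_descriptor(incidence):
    """Map an observed side-effect incidence (fraction of patients)
    to its conventional descriptor, per the bands quoted above."""
    if incidence > 0.10:        # more than 10% of patients
        return "very common"
    elif incidence >= 0.01:     # 1% - 10%
        return "common"
    elif incidence >= 0.001:    # 0.1% - 1%
        return "uncommon"
    elif incidence >= 0.0001:   # 0.01% - 0.1%
        return "rare"
    else:                       # fewer than 1 in 10,000
        return "very rare"

# The earlier example: one seizure per million patients vs. headache in half.
print(frequency_descriptor(1 / 1_000_000))  # very rare
print(frequency_descriptor(0.50))           # very common
```

Both end up on the same handout list, which is exactly the problem: the descriptor collapses a five-orders-of-magnitude difference into two adjacent words.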


This! I view doctors more and more like any other service provider. Their goals (removing symptoms / being on time for next appointment / staying within budget / maybe even prescribing drugs I get commission on) are not necessarily aligned with mine: identifying and removing the root cause for disease. So the decisions I make are my own, but I take the input from doctors and try to do as much research as possible. Needless to say, I don't take prescription drugs unless strictly necessary.

Once you view your GP as a mechanic, you can make much better decisions. Sure, their "cars" are more complex, but the role they play is similar. If you want your body to function well, you are responsible for it, not the doctors. They just help you out.


I 100% agree with this. Doctors in America are hopelessly bad for what they charge.

The system is built on rent-seeking behavior: if I don't treat it immediately, the patient will come back again and I can charge him/her a subscription.

Health care in America is very inefficient, mostly because of insurance lobbying and a government that can't make firm long-term decisions.

I would love to see the most common problems become self-diagnosable with kits and AI.


I agree with your advice. But in Japan, for example, drugs are given out of their packaging with no info whatsoever about side effects, just like candies. What do you do with that?


Interesting... So you just have to trust your doctor, no matter what? Does the doctor explain the reasoning behind giving you the pills?


Hardly. A consultation, including the prescription, is five minutes tops.


Especially if you're taking multiple types of medicine, know their interactions. There may be a specific sequence of consumption you should follow, otherwise side effects get amplified, e.g., with NSAIDs.


That's quite a lot you've taken away from one anecdote.

ACE-inhibitors are well-known to cause a cough but not like the one you're describing.

If I were to guess, I'd say he had a respiratory tract infection that magically disappeared (as they do) not long after he stopped the medication. This is a common type of scenario where laypeople confuse correlation and causation, and it is one of the reasons you need doctors to help you.

Even if it was the medication, like I said, this sounds like a somewhat unusual case.

You'll probably spend the rest of your life thinking the doctors didn't know what they were talking about, but in my experience with doctors (I have a lot of that since I'm a doctor myself), even the "median" ones are far better at diagnosing things than lay people.


I don't know - if you're at all in a profession that requires problem solving, it can be quite jarring to work with a doctor.

For any given complaint, most will just tell you that your test results are in the normal range (don't get me started on this "normal range") and tell you there's nothing they can do.

I live in the bay area and have had a few GPs over the years, some from well known institutions.

Doctors seem to put in the minimum amount of effort to get you out of their office in the allotted 20 minutes so they can move on to the next patient.

I have multiple anecdotes similar to OP's - times where if it weren't for my insisting or my own inkling to visit a specialist I just would never have been helped.

Are there good GPs? I'm sure there are. But to casually dismiss OP's statement is a bit ironic, since casual dismissal is one of the biggest problems in my own experience with doctors.


Your problem with doctors sounds like it's their time constraint. Why don't you pay a bit more to get a longer consult? You want them to work magic in 20 minutes?

I responded to OP's statement. Dismissing is quite different.


At my last doctor's visit, I wanted to go through a list of 6 issues. I scheduled the longest available time slot, 45 minutes. I wasn't sure how long it would take, but assumed I'd be able to take as much time as I needed within those 45 minutes.

The doctor, after number 3 on my list and less than 15 minutes into the appointment, said "Are you done?" After item 4, a couple minutes later, he said "Is that it?" in a condescending manner.

Needless to say, I was pissed.

But the thing is, it's not the doctor's fault. He's working in a system that values throughput over all else... and this is Kaiser, a vertically integrated provider where you would think that would be less of an issue. So he was late to my appointment and saw my long appointment as a way to get back on schedule and ensure he saw the 40 patients he had to see that day.

The whole system is broken.


That's an unfortunate experience you had. Patients with lists can be a bit of a trial but at least you had the foresight to book a longer appointment. It's a shame the doctor saw it as a way to get back on schedule.


My father used to work for Kaiser - if he saw < 20 patients a day there would be some clearing of throats from management.


It's not possible to pay for a longer consult, in the same way it's not possible to pay for a 3-Michelin-star meal at a McDonald's.

This is not meant to sound harsh; all I am saying is that you can't buy from a business a product they aren't in the business of selling.


I guess GPs where you are practice differently to GPs where I am. I can't imagine why they wouldn't book a double consult. How do they do more complex procedures and sort out patients with complex health problems?

Comparing this business model to McDonald's is a bit ridiculous. Maybe compare it to a lawyer booking double time, or thousands of other professionals. But don't make the unnecessary jump to a fast-food chain.


Spend more time than allocated on complex cases, then run late and short on others. More typical of specialists, but GPs do it too. Beyond that, incrementally, 1 appointment/billing code and a new test/intervention at a time. Until the patient gets better, worse or gives up.

Are you in the US? Is a double consult something insurance/government pays twice for? Or are you talking about a different billing code? Or self-pay?


Australia. We have different billing codes for different length of time and different complexity of concern/procedure. We book double/triple/etc. appointments when necessary.

Doctors still often go over time.


Interesting. In the US there are also different billing codes like that, but it's not something a patient has input on. However, it's safe to expect a provider to choose the maximum code that can be reimbursed at any given time by default.

So given that, a patient is likely getting the most expensive product a given doctor will deliver already. To get a better product simple options are specialists (much higher reimbursements, selection bias for more complex cases) and specialists at major teaching hospitals (even more of the above).

There are effective ways for a patient to hack the system by doing a lot of the work themselves (e.g. by maintaining their own clinical summary), but they are beyond what most patients will do.


Patients don't really have anything to do with choosing item numbers. They just may request a longer appointment or a specific procedure or test.


Why don't the doctors just tell him that they need more time to address his concerns properly, rather than dismissing his concerns?


Won't happen. Unlike engineers, doctors are trained to come up with a diagnosis given whatever subset of information can be presented to them and understood by them in the time allowed. The main source of their information is not even the patient, but prevalence (i.e. prior probabilities). It's an 80/20 sort of system, really more like 95/5.

It's a lot like YC office hours actually. Though doctors have less time than YC partners. And doctors get paid per appointment.

In general, it's not useful to think of doctors as engineers. The line doctors work is much more tech support - lots of similar highly repeatable cases, with a well understood script. The ones that are more like engineers are doing research in research hospitals and writing that script. They still see some patients as part of their work, but appointment reimbursements are not a profit center for them, but part of their research.
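The "prevalence as prior" point can be made concrete with Bayes' rule: the same clinical finding implies very different disease probabilities depending on how common the disease is. A toy sketch with invented numbers (the `posterior` helper and all figures here are illustrative, not from the thread):

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive finding), by Bayes' rule.

    prior:       prevalence of the disease in the population seen
    sensitivity: P(positive finding | disease)
    specificity: P(negative finding | no disease)
    """
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Same finding (90% sensitive, 90% specific), two prevalences:
common_disease = posterior(prior=0.10, sensitivity=0.9, specificity=0.9)
rare_disease = posterior(prior=0.001, sensitivity=0.9, specificity=0.9)
print(round(common_disease, 2))  # 0.5
print(round(rare_disease, 3))    # 0.009
```

With a 1-in-1,000 condition, even a decent finding leaves under a 1% chance of disease, which is why a time-pressed doctor reasonably reaches for the common explanation first, and why the rarer answer (like a drug side effect) can take several visits to surface.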


When making an appointment, my GP's assistant explicitly asks whether you've got multiple complaints, and if so, to list them; before explaining that in that case, they can reserve more time.

Then again, my previous GP said he had 7 minutes per consultation.

(FWIW, I’m in the Netherlands).


I don't know. Maybe they do? Maybe OP needs a different doctor who is more aligned with what he/she wants.


In Japan, if you get more than 5 mins with a doctor, it is your lucky day.


> even the "median" ones are far better at diagnosing things than lay people

We aren't comparing doctors to "lay people" - we're comparing them to machine learning algorithms. I'd prefer the machine to many of the arrogant, but mediocre, doctors I've seen in my life.


Actually, OP was comparing doctors to his lay friend, who diagnosed the cough.

Edit: It seems "UCSF resident friend" is American for doctor. I suppose my statement was actually out of context then.


I think that a median doctor + a statistical classification scheme > a median doctor alone, though.


Sure. And most of us would be super keen to use that. In fact, ED doctors in particular use heaps of things like that. Maybe instead of rule systems we will eventually use deep learning algorithms.


This response makes me want to coin a new term: "doctor-splaining."


You're looking for "docsplaining"


Yeah, that's better.


A recent MD friend of mine mentioned that the standards in medical school are much higher today than 20 or 30 years ago. While this is pretty anecdotal, you may have better luck with doctors who graduated more recently. Perhaps someone with experience from medical school can comment.

Separate from that, I've had similar experiences. The median doctor I've interacted with isn't able to put in effort to really understand an individual case, and it's very easy to get overlooked or ignored. I've run into a few exceptions but that is the general rule. Source: wife had pregnancy with complications, young children.


The downside is that at least in my experience the newer doctors seem to be docbots. Lousy bedside manner.

I've had the dubious honor of spending quality time in a few hospitals with sick relatives in the last year. The nurses rock, and PAs and NPs seem to do what residents used to do.

The residents in particular seem to be clueless. With electronic records, they can't look at the chart as easily and aren't situationally aware.


It seems that at any given program, either the residents are amazing and the fellows are mediocre, or vice versa.

It's a common fallacy to want an older, experienced physician vs. a resident or fellow. There are exceptions like specialized surgical procedures. Generally speaking though, when physicians leave the academic environment they rarely get better. Residents and fellows (and physicians who stay in academia) spend a huge amount of time on education - it's built into every day they're at work. Things like case conferences, morning reports, and Morbidity and Mortality reviews.

When you enter private practice it is completely up to you to stay educated and current. Sure, you may attend SGIM once a year and read monthly journals, but that is nothing compared to the structured learning environment they left behind.

I will take a good resident at a county hospital over a private hospitalist any day. Even at exceptional tier-one hospitals, the level of dumb shit that happens with lazy private physicians can be jaw-dropping.

The private hospitalist is going to order some labs and consults, round on you once a day (maybe) and that's about it.

A good resident will discuss your case at length with their interns, co-residents, fellows, and attendings.


The counter-issue to standards being higher (which might or might not be true) is that programs are ramming students through at a faster rate. They're reducing and compressing coursework and theory and pushing students out into clinical settings earlier. You could argue that this exposes them to real-world learning faster, but my impression is that it's driven mostly by getting extra free help in the clinic. And once you exit the classroom, I think you're at as much risk from the unscientific idiosyncrasies of supervisors as you are likely to benefit from real-world learning.

At this point, in a lot of schools, PAs and MDs get about the same amount of in-class training, for example (especially when you consider that PA prerequisites are often stricter than MD prerequisites in terms of prior coursework and experience). A PA with 2 years of experience has about the same experience as an MD just out of school. The discrepancy with specialists (e.g., optometrists, dentists, psychologists, and so forth) is even greater.

I don't think MDs are incompetent, but I think our healthcare model is basically wrong, in that people just aren't able to juggle everything in their heads, regardless of how smart they are, and someone focused on their problem, with access to the internet, just might be able to get more traction than the person trying to remember 1 out of 10000 things in their head. I think this is also why there's been a push toward specialists: it's just easier that way, even if the problem isn't that specialized, because the specialists only have to remember 1 out of 20 things.

The basic healthcare model, with MDs at the top, followed immediately by PAs and nurses, needs to change, to be more decentralized. There's a role for those training backgrounds, to be sure, but many of the same tasks could be accomplished in different ways, and many things are maybe done more efficiently and less expensively through different routes (by relying more on pharmacists, for example, or expanding training and practice for optometrists, psychologists, and so forth).

Maybe if a pharmacist had been consulted, for example--maybe if there was less of an expectation that you start with an MD, or rely on an MD to provide all the answers, or expect them to know everything--the side effect of that med would have been confirmed earlier.

This is the one silver lining that I might hope for in a renewed healthcare debate, which is a restructuring of care systems to increase competition and decrease costs. So far everything has focused on how to pay. That's not to say I agree with moving away from a single-payer system, or that the current government will lead anywhere productive, or that I've seen anything innovative from them in that area, but it's something that hasn't been done yet.


Were you seen in a hospital or a private practice? If it was a hospital you almost certainly primarily interacted with graduates of the last 5 years since you're mostly seeing residents.

Fwiw, "putting in effort" is somewhat relative. Do you mean they were lazy, or spread too thin?


> the median doctor is pretty worthless

This seems fairly disrespectful. I suppose you mean that the median doctor isn't perfect, but do you have any idea how many hours residents put in? You might think carefully before calling the median one worthless, and ask whether they're really, truly wasting their lives in the service of others.

Less emotionally, I find it hard to believe that half of doctors provide essentially no benefit.

> it's a side effect that's been widely known since the 1970s

Guess how many side effects have been widely known since the 1970s?

For clarity: I am 100% in favor of thinking critically about a doctor's advice, having received contradictory diagnoses before.


Incidentally, the question of "Do younger or older physicians produce better outcomes?" recently came up on the Freakonomics podcast (they were doing a 3-episode focus on the medical system). [1; 12/14/16]

Or, if you want to search the transcript for it, "So, given a choice between two doctors, let’s say — one fresh out of medical school and the other with fifteen years’ experience, which one do you go for?"

It's a good listen or read.

Tl;dl: Anupam Jena [2], "And we can basically see what happens if a patient happens to be treated by a doctor who is 20 years out of residency versus 5 years out of residency. And what we find is that if you happen to be treated by a doctor who is 10 years or 15 years out of residency, your mortality within thirty days of being hospitalized is higher." [Note: surgeons were one of the specializations that did not follow this correlation]

Side note: the medical profession is really terrible at keeping statistics on outcomes, outside of actual trials.

[1] http://freakonomics.com/podcast/bad-medicine-part-3-death-di...

[2] https://www.ncbi.nlm.nih.gov/pubmed/?term=Jena+AB%5BAuthor%5...


Well, I don't think this is surprising, really. Those doctors are closer temporally to their training than the old docs. This is also why "Are you smarter than a fifth grader?" is actually harder than people think, because all those kids just learned last week what the capital of West Virginia is. Surgery seems to be a logical exception, since at the end of the day, I suppose it's more like being a car mechanic than a trivia competitor, and having a body of experience and "tricks of the trade" is more valuable than trying to compete with knowing more than WebMD and Wikipedia. I can't wait to pass the torch of diagnostics over to computers. People are generally lazy and they get comfortable too quickly for my taste, including doctors. I bet if you found some kind of situation in which doctors were kept on their toes, and constantly retraining their knowledge base, you'd find that the mortality rate doesn't go up.


> Guess how many side effects have been widely known since the 1970's?

Does this make it fine not to expect a doctor to inspect the list of known side effects for drugs his patients are taking?

I've heard this very story a few times already; perhaps the sources of data widely accepted by doctors are not up to par with what non-doctors consider accurate, in some cases anyway.


Without knowing this specific case, I can't say for sure. But my broader point is that the knowledge base required of doctors is so huge that even if it's not excusable, it is understandable. Like online community moderation, it's really easy to point to cases where doctors got it wrong and assume it's all for naught.

Absolutely the accuracy of doctors should be improved, but the bar is not "are you as good as a UCSF resident?" (one of the best programs nationally) but "are you better than nothing?"

It just seems crazy to me to suggest that the median doctor, i.e. at least half, is worthless.


Yeah, that was an arrogant comment. I presume the guy always does his work at 100% or more: flawless, checking every detail and aspect, thinking 10 minutes before saying anything... We all have our faults, tiny and bigger, yet somehow when it comes to health we expect the impossible.

I have crappy experiences similar to most people here, and I tended to hate all medical personnel uniformly. Then I started dating a young doctor, and I still can't wrap my head around what an overworked, over-stressed, and underpaid piece-of-shit job that is.

Bear in mind that we're talking about the biggest university hospital in Switzerland. A ridiculous number of hours (a normal shift can be 10-13 hours, overtime unpaid), night shifts that will just mess you up mentally, crazy and dangerous patients, drug addicts, the very real danger of contracting something like HIV or hepatitis C through a single tiny mistake, or actually killing somebody by overlooking some tiny fact about the N-th patient that night. You can end your career for life and end up in jail and/or with lifelong debt over one mistake. ONE. Even after directly saving the lives of 500 other people.

Compared to my comfy corporate job, where I earn much more and have made a mistake or two in production over the last 5 years (nothing critical like causing a loss, but still)... We should be thankful that anybody clever is actually still doing that job. Most of them could earn more and actually have a life someplace else. Blame how the system is set up much more than the individuals forced to exist in it.


It's not the doctors. I am working on software for the OR right now, and I had never taken a biology course. The amount of stuff you need to know is IMPOSSIBLE. No, it is literally impossible. The human body is beyond ridiculously complicated, and this is why most doctors pattern-match against everything they've seen before.

This is why, if I were an MD, I would be latching on to ML as quickly as I possibly could. There is a TON of money to be made here, sadly, I do not have the domain expertise to make it, though I am picking it up pretty quickly.


You didn't say whether the medication was declared at the initial admission. Nonetheless the thing I took away most from your remark is a reminder to be aware of side-effects. Or more broadly, a reminder that patients are participants in their own treatment, and that the existence of doctors doesn't excuse us from responsibility toward ourselves & those we care for.


>"Oh, is he on <some blood pressure medication I forget the name of>?"

I'll bet the farm it was lisinopril, or another ACE inhibitor. The ACEI cough is notorious enough that any 1st year medical student probably should have been able to piece this together.

The problem is not that those doctors were ignorant. It's more likely that they did not ask the right questions to get a proper history. You should ask every patient to list their medical problems and medications unless it is a routine follow-up when nothing has changed.

Something else that can happen is patient ignorance (and I am not insinuating your dad was guilty of this, or making a value judgment on people who are).

Oftentimes you can't get a proper past medical history or medication list unless the patient brings all of their prescription bottles to the office. E.g.:

>"What medications are you on?"

>"Well, I'm on a sugar pill, a water pill, a cholesterol pill, a stomach pill, and some allergy medicines."

>"Ok, do you know the name of that sugar pill?"

>"It's a little white pill. I think it starts with an 'F'. No wait, I'm thinking of my water pill."

or

>"Do you have any medical problems?"

>"No."

>[glances at meds list] "Ok, so why do you take metformin, hydrochlorothiazide, atorvastatin, omeprazole, cetirizine..."


Was it Lisinopril? A cough is a recognized side effect of it, but I think something as severe as you are describing would be fairly rare?

While I think you might be overreacting a bit here, I have a similar story. I had an irritated, gunky eye for the better part of a year. Was bounced around multiple doctors, including one specialist, and had all sorts of really annoying treatments I was supposed to do. Finally got bounced to a specialist at a major teaching hospital. Took him five minutes to correctly diagnose me, and the major part of the fix involved instantly stopping all of the things recommended by the other doctors. (The rest of the fix was simply to make sure my eye was protected while sleeping.)

So yeah, I think there is a lot of value that might be added by an expert system that is familiar with the slightly more exotic things that can go wrong with your health.


Angiotensin receptor blockers can give you a cough. Almost all drugs have side effects. You've got to read the drug information yourself and watch for known side effects. Start one new medicine at a time so you know which one is causing any new side effects. By the way, reducing salt intake is amazing for lowering blood pressure.


In the doctors' defense, most people know that blood pressure medication may cause coughing. At least most people with hypertension.

Your dad's case is like complaining about constant nosebleeds while taking Aspirin every day. It's a bit obvious, to be honest.


Given the symptoms, discontinuing an ace inhibitor is one of the very first things you would try.


Atenolol? I'm not a doctor but I'm reasonably sure it's this.


Not this. ACE inhibitors tend to cause a cough. You tend to get this side effect pretty quickly. It doesn't just occur randomly months or years after starting the drugs unless of course something else changes in your health.


Really happy you wrote this anecdote because it illustrates a point that physicians are arguing about a lot right now--how to measure quality care. Is it readmission rates, is it 30-day mortality, is it getting all the required screening exams for each patient... we don't know. In the US, the CMS is trying to mandate many of these and it's really hard to measure quality of care.

For example, I am a radiologist. Probably 90% of my cases are mundane and are either normal or have 'easy' pathology that I can readily detect and quickly report. Another 9% is a mixed bag of things that take a lot more time--something I need to reference or think a bit more about or possibly show a colleague. And then there's the 1% that is truly a 'make' or 'break' case. My sub-specialty training and experience can really shine in these situations, and I can easily dispense a diagnosis or make a 'tough call' where another might equivocate or defer to more imaging or a follow-up. I like to think I'm not being paid for that 90%... I'm being paid for the other 9+1%. That's where I truly add value in the system. But how would I measure my 'success' rate when even the best eventually make mistakes given a long enough time, and there is often no gold standard for diagnosis, or even long-term follow-up/resolution, for many of the tough cases I've seen?

Studies have been done where experts review a sampling of cases and that's essentially what we do now for quality control--we randomly review a few prior exams for a case we are reading and then submit feedback based on our opinion on the same case. In these studies, trained radiologists do very well and make clinically significant mistakes only rarely.

But here's the problem. Let's say we use machine learning to interpret a scan, something like a CT of the chest to look for a pulmonary embolus. On that scan, the machine may see an incidental pulmonary nodule. Well, that's fine; we have good data on how to follow up on those and what to recommend. But what about an anterior mediastinal lesion? Now it's not as clear. The differential is wide and depends on age, sex, symptoms, history, etc.

Let's say we build that logic tree and let the machine learn from 1000 anterior mediastinal mass cases with tissue diagnosis. Well guess what: we do those studies all the time, and it turns out that NOTHING is 100% sensitive AND 100% specific. You have to sacrifice one for the other... it's a balance. So you would need to build ROC curves for every possible finding on every scan, and decide that 1% of the time you will miss a cancer to save having to biopsy an extra 20 people... or maybe you want to miss cancer only 0.1% of the time, but that means you'll have to biopsy an extra 200 people and one will be hospitalized from complications. You think they will be happy knowing you made that decision?

Guess what, we are already doing that with mammography. The screening guidelines that the different societies and agencies argue about are this very rationale: how many women should be allowed to die to prevent those extra biopsies, false-positive workups and all the things that come with that. I welcome machine learning into medicine. It's destined to be mired in the same ethical dilemmas we face every day.
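To make the tradeoff concrete, here's a toy sketch with entirely synthetic scores (nothing clinical; the numbers, prevalence and score distributions are all invented for illustration). Sliding a single "biopsy if score above X" threshold trades missed cancers against unnecessary biopsies, which is exactly what an ROC curve summarizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic malignancy scores: 50 cancers score higher, on average, than 950 benign cases.
benign = rng.normal(0.3, 0.15, 950)
cancer = rng.normal(0.7, 0.15, 50)
scores = np.concatenate([benign, cancer])
labels = np.concatenate([np.zeros(950), np.ones(50)])

def operating_point(threshold):
    """Sensitivity and specificity if we biopsy everyone scoring at or above `threshold`."""
    called = scores >= threshold
    sensitivity = called[labels == 1].mean()      # fraction of cancers caught
    specificity = (~called)[labels == 0].mean()   # fraction of benign cases spared a biopsy
    return sensitivity, specificity

for t in (0.4, 0.5, 0.6):
    sens, spec = operating_point(t)
    extra_biopsies = int(((scores >= t) & (labels == 0)).sum())
    print(f"threshold {t}: sensitivity {sens:.2f}, specificity {spec:.2f}, "
          f"{extra_biopsies} benign patients biopsied")
```

No threshold wins on both axes at once: lowering it catches more cancers but sends more benign patients to biopsy, and somebody has to choose where on that curve to operate.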

I have a feeling I'll make it to retirement.


Yes, much better to make ad-hoc decisions of additional tests versus missing disease, without the benefit of a double-checked system giving you and the patient the likelihoods of complications.


I appreciate the sarcastic comment, but I don't think you understand the implication of a system like that. Even if you have more data, it doesn't better inform a patient's decision process.

For example, let's say you could--you can't--but let's say you could predict with 98% certainty that the pulmonary nodule in your lung is not a cancer. Well, if you are 40, that's actually not that good... it means 2 out of 100 people in a very productive time of their lives may have a cancer go completely ignored! So should I tell every patient in my report, "There is a 2% chance that this is malignant, but I won't recommend biopsy because there are chances of complications from that and we can save a lot of money by letting a few slip through the cracks--it will cost the healthcare system too much. Thanks for your understanding."

Remember, statistics predict population outcomes... not individual outcomes. I can tell someone that something very rare might happen... but guess what, when it happens, the idea that it was a rare possibility doesn't assuage any negative feelings about it.

There is no right answer. Some people are illiterate! Even educated people don't understand statistics... how am I going to quantify that kind of risk/benefit analysis in a way that ensures a patient truly understands the implications. What if that risk were 1%, or what if the patient was 70 years old? Should either of those affect my recommendation? Who am I to decide who should be recommended one thing vs another... it's a value judgement! But if I leave it solely to the patient, a lot of times they will ask me, "What would you do?" Probably the most common thing asked after a long discussion like that. The answer is, "I don't know."

Bottom line: We can do 'strong' recommendations for things that are well studied like breast cancer and pulmonary nodules, but we don't have data to support recommendations in many other areas of everyday practice. A machine learning system would need data that just isn't available yet to make recommendations.


(In my opinion) Doctors have done a good job of protecting themselves from technological disruption by only allowing it to be used where it is profitable to them (billing) and not where it would threaten them (diagnosis).

Even the concept of electronic medical records has been resisted, and is currently poorly implemented. The gov't had to literally bribe doctors to convert to electronic systems and most seemed to go out and get the least poorly built products on the market.

I'm not trying to paint them as evil. Heck, even NASA scientists were suspicious of computers taking their jobs. I see this in other fields all the time.

Seeing the potential and seeing the current state of affairs kind of makes me sad. I want to live in the world where a balance has been reached. Where my medical record is a salient form of AI that checks if I lost those 10 pounds I promised and loops in the doctor when my flu symptoms linger too long. I want it to eliminate the hassle for doctors, hospitals, insurance companies as well as patients, and I think in the process it can drive costs down and raise quality of life.


I think this is way off. Doctors generally actually just have no idea about technology. I'm a programmer and a doctor so all I ever see is ways we can improve things with technology.

I've never seen a doctor shun technology to keep themselves relevant (I've no doubt it will become a thing more in the future).


I don't think that is fair. I think Doctors would love to use a technology to help them with diagnosis, particularly if it can corroborate an uncertain diagnosis. But I think there are a lot of liability problems with systems that try to provide a definitive diagnosis, as well as issues of patient trust.


I had come to similar conclusions about doctors but after interacting with a few doctors closely, my perspectives changed. This is in India though where the regulatory environment is a lot different.

Per my interaction, the best doctors will do whatever it takes to improve their efficiency. Their biggest problem is the sheer volume of patients. Everyone wants to go to the best doc, and word spreads pretty fast. If you start doing good, very soon you are swarmed with patients 24/7. So any tool that can speed up their patient handling time is welcome. This is not a scheduling problem. People built a ton of scheduling tools and those tools have done next to nothing to move the efficiency needle.

What's needed is, as you said, a diagnostic tool with automated feedback from the patient's activities and dramatic physiological changes. The best doctors I know would take it up in a heartbeat.


> If you start doing good, very soon you are swarmed with patients 24/7.

I'd argue that the answer to this is, "Sorry, but we're full up on appointments that day/week/etc". And if you start needing to book your normal clients too far out, you stop taking new patients. The fact that people want to see a doctor doesn't mean they get to see that doctor. The answer is not to take more patients but give each one less time, resulting in sub-par treatment.


You have to visit one of the hospitals in Bangalore, like St. John's, to see the volume of patients. It's insane.

> And if you start needing to book your normal clients too far out, you stop taking new patients

What's a normal client ? Do you mean repeat clients ?


Doctors aren't the ones making these decisions - hospital administrators generally are choosing what to invest in. Most MDs I know are excited about research and not necessarily afraid entertaining some amount of change.


The adoption of new technology for diagnosis butts up against the same legal problem as driverless cars. Even if this system is better at detecting skin cancer than the average dermatologist (or even the expert), people are going to sue the manufacturer when it doesn't detect their cancer.

People sue doctors, too, but I'd put money on the proposition they'll be more willing to sue a company than the individual they met who seemed like he had their best interests at heart.


> Doctors aren't the ones making these decisions - hospital administrators generally are choosing what to invest in.

Most doctors don't work in hospitals, they work in family practices and most definitely are making the decisions about how their practices operate. Most of us don't go to hospitals to see our doctors, we go our doctors private office.


Here in Silicon Valley, there are three very large medical groups: Kaiser Permanente, Sutter Health, and Stanford Health Care. They represent over 22,000 doctors.

Specialists still seem to work from private offices, but my experience is that most doctors work for large medical groups where they do not make the technology decisions.


You're right - I was thinking about HMOs where care is very systemically administered. I think we'll see players like Kaiser introduce these tools before smaller private offices.

They have more data than most other networks, and more resources to invest in R&D.


I think most doctors would go without any billing tech if possible, but most also hire billing employees to deal with billing to insurance companies - and to bill an insurance company, you have to conform to their standards...


Citation needed for this one


Doctors protect themselves from technology changes by forcing themselves to be included in any technology change. Get blood tests? You can't get the results until a doctor looks at it. MRI? Same. ML will provide another batch of tests for them to order, and if they work better, that's great for the patient, but you'll still need to make an appointment with the doctor for ordering test, and another for diagnosis.

Whether this is a good thing or not is open to interpretation. Doctors make mistakes, but people often catastrophize their test results.


Unfortunately, most of our tests aren't all that good, and in and of themselves the results are pretty meaningless.

Back in medical school we were taught that 80% of the diagnosis comes from the history, 10% from the physical examination and 10% from the investigations. Or something like that; it's the idea, not the numbers, that matters.

As a young computer nerd, this didn't sit well with me, but the more I practice medicine the more I understand it.

My teachers didn't realise at the time, but I think they were actually talking about Bayesian probability.

I think we are talking about this more and more in medicine, but unfortunately not all specialities have embraced it.

There are lots of likelihood ratios published and good statistics available to help with diagnosis, but we don't use them effectively. There are some online tools and apps available to help navigate the literature, but I'm not really sure why we don't use them more.
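The Bayesian arithmetic behind likelihood ratios is small enough to sketch. The numbers below are hypothetical (an assumed 30% pretest probability and an assumed LR+ of 6, not taken from any real study); the mechanics, converting probability to odds, multiplying by the LR, and converting back, are the standard ones:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form:
    post-test odds = pretest odds * likelihood ratio."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical example: history puts pretest probability at 30%,
# then a positive finding with a published LR+ of 6 arrives.
p = post_test_probability(0.30, 6.0)
print(f"post-test probability: {p:.2f}")  # 0.72
```

The same three lines work for a negative finding (an LR- below 1 pushes the probability down), which is why a detailed history, by setting the pretest probability, does most of the diagnostic work.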


You can get most blood tests yourself. No doctor is needed.

http://www.healthtestingcenters.com/lab-locations/georgia/bl...


I think the law in California requires that a physician or health care practitioner give the order to the lab. I don't think the average patient in California is allowed to order their own medical lab tests.

https://www.cdph.ca.gov/programs/lfs/Documents/LFS-OrderTest...

In this particular area, one might argue that Georgia is more enlightened than California.


The primary reason there wasn't much automation in the past was the combination of doctors being conservative (which is a good thing) and a lack of competition among established medical companies (the ones conservative doctors depend on). You can see how the two combine to stifle innovation, not through any conspiracy but simply through aligned preferences.

Both of those aspects are being relaxed today. Doctors are becoming tech-oriented, and more diagnostic companies (run by first-rate medical/engineering teams) are starting up that may upend more established medical companies. We will see rapid change in the diagnostics landscape in the next decade. This may also add some risk to the diagnostic field, but the benefits far outweigh the risks.


Test results mean nothing without expert interpretation. This isn't some doctor conspiracy.


The test results list what the normal ranges are.


I saw an asthmatic woman in A+E today, all her blood tests were in the 'normal' range, does that mean she's well?

No, she was very unwell. However, these particular tests would be expected to be normal regardless of the severity of her asthma, or, counterintuitively, 'abnormal' in someone with mild asthma and 'normal' in someone with severe asthma. The ability to interpret those test results, and to take a history and examine the patient, is important.

Similarly, we used the latest evidence-based guidelines to assess the patient's asthma severity, and based on several objective criteria (breathing rate, oxygenation of blood, peak flow, etc.) the guidelines determined she had moderate-severe asthma.

However we called the ICU doctors to see her. The ICU consultant, with many decades of experience managing acutely unwell asthmatics, simply looked at the patient for two minutes, observing how her chest moved during breathing and the sounds and respiratory effort she was making, and decided to take her to ICU. This was a good decision, as she ended up deteriorating and requiring very aggressive treatment. Whilst guidelines can make a suggestion based on the interpretation of some objective data points, the ability to assess a patient as a whole, based on history and examination, is still an important skill, and one which is hard to automate.


The test results list the 95% reference interval for the population that the test's calibration sample is assumed to represent. That's not the same thing as normal.
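A quick illustration of why that matters, with a made-up analyte and a made-up calibration sample (the parametric mean +/- 1.96 SD interval is one common way such ranges are derived, assuming a roughly Gaussian analyte): by construction, about 5% of perfectly healthy people land outside the printed "normal range".

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration sample: an analyte measured in 500 healthy volunteers.
healthy = rng.normal(5.0, 0.8, 500)

# Parametric 95% reference interval: mean +/- 1.96 standard deviations.
lo = healthy.mean() - 1.96 * healthy.std()
hi = healthy.mean() + 1.96 * healthy.std()
print(f"reference interval: {lo:.2f} to {hi:.2f}")

# Roughly 5% of the healthy sample is flagged 'abnormal' by construction.
flagged = ((healthy < lo) | (healthy > hi)).mean()
print(f"healthy volunteers outside the 'normal' range: {flagged:.1%}")
```

So a value just outside the range is a statement about where you sit in a reference distribution, not a diagnosis, and a value inside it doesn't rule anything out.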


I know...

Tests are usually done for a clinical purpose, not just to find out what your result is. What do the numbers mean to you?

Many straightforward tests people can do and interpret themselves, like people with diabetes on insulin (measuring BSL). They're not all so straightforward.


> What will be lacking is the automation of data collection, because you seem to underestimate by far the technical, legal, and ethical difficulties in getting the appropriate feedback to make ML appliances efficient. I firmly believe in reinforcement learning, and as long as the feedback system will be insufficient, doctors will prevail, highly-paid jerks or not.

We're already seeing a significant rise in the role of nurse practitioners at the front line of medicine. Today, they gather the data and hand it to an MD, so handing it off to an ML system would be straightforward.


You overestimate the data-gathering capabilities of the medical system by a huge margin. Even for seemingly objective parameters, you will see missing data, reporting details that depend on the gatherer, a substantial amount of inaccuracy, etc. etc. Plus, you are describing people who barely know how to use a web browser. And you should also take into account that the people driving the system don't even see the value of expert, scientifically educated consultancy in those matters. For example, I was just asked to perform "data mining" on a dataset of 150 observations, 400 variables, and ~20% missing data by a renowned professor. He told me I was bound to find interesting things in view of his ~400 back-to-back t-tests.

No, I am quite confident that ML will really take off only when the data fed to those systems is gathered automatically.
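For anyone wondering why 400 back-to-back t-tests are guaranteed to "find interesting things," the multiple-comparisons arithmetic takes three lines (assuming, generously, independent tests and no real effects at all):

```python
# 400 independent t-tests on pure noise, each at the usual alpha = 0.05.
alpha, m = 0.05, 400

# Expected number of 'significant' results from noise alone.
expected_false_positives = alpha * m

# Probability of at least one false positive across the family of tests.
p_at_least_one = 1 - (1 - alpha) ** m

# A Bonferroni correction would shrink the per-test threshold accordingly.
bonferroni_alpha = alpha / m

print(f"expected false positives: {expected_false_positives:.0f}")
print(f"P(at least one false positive): {p_at_least_one:.10f}")
print(f"Bonferroni per-test alpha: {bonferroni_alpha:.6f}")
```

Twenty spuriously "significant" findings, in expectation, from a dataset containing nothing, which is exactly why the professor was "bound to find interesting things."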


Oh god, I missed the passage about the 400 t-tests. Your professor is the guy on the cover of this book:

https://www.statisticsdonewrong.com/

If your comment was a book, I'd lovingly put it on the shelf next to Reinhart's masterpiece ("revised and expanded ... with three times as many statistical errors and examples!"). Unlike Shakespeare, it appears nonfiction is well within the capabilities of the Interwebs.*

* For many years, it was believed that millions of monkeys hitting millions of keyboards would eventually recreate the works of Shakespeare. Now, thanks to the Internet, we know this to be false.


Even in ICUs and NICUs, where it already is being automatically gathered and fed, it turns out that making sensible design and training decisions to get usable performance is harder than anticipated.

News flash, dermatologists don't just look at moles and oncologists don't just do differentials... let's see how these isolated systems deal with cleaning up TKI- and IST-related gastric bleeds, C. diff code browns, and timing chemo around liver resections for sepsis. All in a day at County...

My bet is that clinicians who use AI to amplify their own abilities will come to run the system. Errybody else gets to be a glorified NP, at best, or (worse) administrative ;-)


Ohhh, I thought you were saying that we'll always need doctors to gather the data, but you meant that the ML systems will suck until they get enough data, and they can only get enough data if the process can be automated, so we'll need doctors to do diagnoses until the ML systems stop sucking.


Not to mention how hard it is to standardize. The data doesn't do any good if it is not guaranteed to have been measured under somewhat similar conditions. Take my cholesterol: I've seen so much fluctuation in my own blood results between labs. They blame it on the "bad chemical reagents". And that is a somewhat standardized numerical value; what about a subjective symptom like "I have a sand-in-the-eye sensation"? For ML this must be translated to a numerical value. Does it feel like a 1 or a 1.5 out of 10, and how can we make sure we have the same understanding?


Doctors can then focus on new diseases, higher-level thinking about health, rare cases and such. Under-researched domains can get attention.


Yes. People are acting like the medicine problem would be solved if ML could replicate the performance of doctors today.

I hope that, as you said, it will allow doctors to focus on more difficult problems that we did not even think would be possible to tackle.


Yes, absolutely.


MDs also have the benefit of an artificial monopoly (vs factory workers) that I imagine will fight hard to keep its members employed.

https://www.cato.org/publications/policy-analysis/medical-mo...


Calling medical education and licensing a monopoly is pretty intellectually dishonest. To borrow the OP's analogy, that is like saying drivers with licenses have a monopoly on driving. Seems like a pretty strange argument.

Certainly, the medical industry does not face the same competitive pressure from outside, but that is the nature of industry in general.

Rather than calling medicine a monopoly, it is more accurate to accuse health care providers and practitioners of colluding to ensure prices stay high. Monopolies by definition cannot collude, since there would be nobody to collude with.

In the grand scheme of things, I believe medical costs are accurately priced and it is people's wages that are not adjusting correctly.


The AMA controls the requirements for licensing, the schools that can educate for the purpose of licensing, and the number of licenses that can be issued. Exactly what is intellectually dishonest about calling that a monopoly?

http://www.economist.com/blogs/freeexchange/2007/09/a_spoonf...


> To borrow the OP's analogy, that is like saying drivers with licenses have a monopoly on driving.

Not so, as the number of medical residents is de facto set by Congress (due to how the funding system works), thereby limiting the number of doctors in each specialty.


If local production capacity is too low, why not import them?


> that is like saying drivers with licenses have a monopoly on driving.

The critical difference is that there's no limit on the number of people who can get standard driver's licenses, but there is on medical licenses.

It's not just the AMA directly, though. Residencies are funded federally, I believe. I'm pretty sure the numbers per specialty are centrally controlled, but I'm not certain how the process works. And I don't know who limits the medical school slots, but I'm under the impression medical schools can't just increase the number of students at will.

All that being said, keeping the supply artificially low probably has some benefits.


like saying drivers with licenses have a monopoly on driving

...if drivers got to set the test standard and pass rate.


Any system where they control entry not just by a passing threshold but by raw numbers or on a curved exam scale is a monopolistic regime. They aren't just looking for people who can pass the test and meet qualifications but to keep a throttle on numbers.


I agree with both sides here. AMA protectionist? Yes. Absolutely. That is their #1 goal even if they don't write that one down.

But I'm not sure opening up the gates is exactly what we want either. There are countless examples of charlatans trying to push the line here, and as a consumer, it would be too difficult, especially while fighting a serious disease, to tell who can really help me and who just wants my money.

There are better ways to make healthcare more affordable and more effective than to throw away the idea of proper certification.

For instance, some states (like the one in which I live) have granted more powers to nurse practitioners. I support this move.


How about I can go pick up my asthma medication without a 6 month visit to verify for the 1000th time that I do, still indeed, have asthma? The only purpose for that is so the physician can bill my insurance company. It wastes both my time and money.


That is the fault of the United States health insurance industry blocking attempts at universal single-payer healthcare and turning Medicare into a privatization scam. Blaming the AMA for that is completely misguided.


We have single payer in Canada and it's even worse. I have to book one appt to get referred for a blood test, another to take the blood, and then another appt to get the results!

There's even a big sign at the doctors office that says "only one health issue per appointment". Why? So they can scam the system for more cash.


> We have single payer in Canada and it's even worse.

Do you have any experience with the US medical system? I lived in Alberta for 7 years and in Quebec for 3 (important to note because it is hard to take you seriously since you did not even mention that healthcare is provincially administered in Canada).

> I have to book one appt to get referred for a blood test, another to take the blood, and then another appt to get the results!

Here is an alternative explanation to "doctors are incompetent scammers": you need a referral for a specialist so that specialists are not overwhelmed with requests from hypochondriacs who read something on the Internet. Once the test results come back the doctor wants to see you to explain the results and recommend further treatment. That saves everyone time and money and is the opposite of "scam[ming] the system for more cash."

The "one health issue per appointment" is not a government policy, it is a policy that some physicians have so that they can see more patients each day.


It was a standard STD test at the beginning of a new relationship and the results were negative so it was clearly a scam. This was in Ontario.


> the results were negative so it was clearly a scam

Great use of logic.

I still do not understand why you are blaming the general practitioner clinic for referring you to a specialist. If you want same-day STD testing there are a lot of private test centers in Canada. For example, top Google search result: https://www.stdcheck.com/canada/


It's obviously a scam to call me in and bill the government $50-100 to read me my negative result in person. I don't understand how you can defend this practice.


Is there an option to get results online?

That's been available here/US and it has come in handy.


A single payer system significantly throttles what physicians can charge. They are paid per visit, so you do get this kind of behavior.

If you look at the average earnings, Canadian physicians make ~320K gross salary [1], but they need to pay expenses out of that (reception staff, office, etc.). They do well, but they are not in strata of hedge fund managers.

We should probably just pay them a flat rate, but on the balance the Canadian system is much more cost effective than the US free market system.

[1] http://globalnews.ca/news/2898641/how-much-is-your-doctor-ma...


Why would insurance companies want to pay for more visits than medically required?


Insurance companies trying to avoid paying doctors is exactly the problem. Today, in order to even break even, doctors in the US are forced into overcharging:

https://www.youtube.com/user/davidbelk46/videos


So because doctors feel that they are not compensated fairly that justifies them using the legal system to stand between patients and lifesaving medications of their own financial benefit?

I guess doctors don't believe in the free market.


Replace "doctors" with "US health insurance companies" and you have a true statement.


In my opinion, the AMA and the FDA are the main blockers of machines replacing doctors (outside of surgery). Nurses can do, and to a large extent already are doing the majority of the data collection. Beyond that, the doctor's main differentiable skill is pattern-matching and decision making based on experience. Machines with mountains of data generally perform this function much better than humans.

That said, I do agree that we likely won't see the full potential of replacing doctors with machines in the near term simply due to the political and regulatory hurdles, as well as the need to overcome the fear of the public and address the very real and significant privacy issues.

As to anesthesiology, I've actually done some work for a startup in that space working on a system that could very easily move anesthesiologists to a more supervisory role where they would watch (say) 50 surgeries at once (surgeries being handled by software tied into all monitors, as well as EMR systems for history and physical data, and able to dispense chemicals as needed) and step in if and when there was a problem. Technically, there is not much really stopping it, but politics and regulations present significant impediments (and probably rightly so, for now).


For a counterpoint, see the answers on this Quora:

https://www.quora.com/Why-is-machine-learning-not-more-widel...

Medical diagnosis in the general case, especially with mis- or disinformation from patients, is quite complex. The data sets available aren't that good, given their built-in biases and missing data. What we're seeing is that deep learning can help when it focuses on one little thing with a ton of good data available while ignoring everything else. That's what MYCIN did back when this concept started. That's not enough to replace MDs any time soon.

What you'll find is that we can at best supplement the decision-making practices of MDs by running their data through a bunch of ML systems in parallel to try to suggest things they might miss. This data will over time feed back into the ML systems to improve them. Augmented intelligence, not artificial intelligence, will remain the best way to do things, due to all the knowledge in doctors' brains from professional experience that's not in machine learning datasets.


One of the things the old expert systems like MYCIN could do was answer the question "Why", as in, why did you come to that conclusion? And it would show its reasoning. I may be mistaken but I don't think that the new, neural net based systems can do that. Interesting regression if true.
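The explainability trick in those old systems was simply that every conclusion carried a record of which rules produced it. A toy forward-chaining sketch (the rules and fact names below are invented for illustration, nothing like real MYCIN content or real clinical logic) shows how cheap answering "why?" is in that paradigm:

```python
# Each rule: (name, set of premises, conclusion). All invented for illustration.
RULES = [
    ("R1", {"chronic_cough", "takes_ace_inhibitor"}, "possible_acei_cough"),
    ("R2", {"possible_acei_cough", "no_fever"}, "recommend_trial_off_drug"),
]

def infer(initial_facts):
    """Forward-chain over RULES, recording each firing so we can answer 'why?'."""
    facts = set(initial_facts)
    trace = []  # (rule name, premises used, conclusion drawn)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((name, sorted(premises), conclusion))
                changed = True
    return facts, trace

facts, trace = infer({"chronic_cough", "takes_ace_inhibitor", "no_fever"})
for name, premises, conclusion in trace:
    print(f"{conclusion} because {name}: {' and '.join(premises)}")
```

The `trace` is the explanation: each derived fact points back to the rule and premises that produced it. A neural net has no analogous intermediate structure to read off, which is the regression the parent is pointing at.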


That was a problem in the early days. Expert systems could support root-cause analysis or justify themselves, whereas neural nets couldn't. The new systems are close to neural nets in operation, so I doubt they can do it by default. It will be interesting to see if anything is developed along those lines, given work such as the visualization of the various layers of deep learning pattern recognizers.


I was about to comment about this topic, but you said everything I wanted to! Have an up vote!


The majority of anesthesia is routine sedation of healthy people for routine things like colon screenings. Those types of procedures are ripe for automation, with systems like J&J's Sedasys cutting costs from $2000 to $200 a procedure. It was pulled off the market last year due to anesthesiologist opposition rather than problems with the system.

It seems that the barriers to automation may be more the result of strong professional associations rather than specific complexities of the job.

Thoughts?


The problem with deep learning right now is that there aren't many methods that output a good estimate of the uncertainty in their predictions. But this will become less of an issue as they eventually converge with Bayesian methods, or at least once probability is introduced in a more principled way.


This might be a dumb question, but why not use the classification accuracy score as a measure of uncertainty?


It's not a dumb question at all. The classification accuracy and "uncertainty" are different, but the explanation depends on what you mean by uncertainty.

Let me try and give one intuitive explanation; if others would like to chime in with something better, by all means.

Let's suppose that you are classifying objects in images - say bananas and oranges, but it could be tumors or anything that you like.

So we train a classifier to predict this, and we find that of 100 classifications on a hold-out set, we get 73 of them correct. You might, quite reasonably, interpret this to mean that if we randomly select a new image of either an orange or a banana, we will have a 0.73 probability of classifying it correctly. (There are actually some subtleties in this interpretation which I'm ignoring, but they aren't so important for the point I want to make.)

Suppose, however, that we draw out an image that we want to feed into our classifier, and we look at it for a moment. Suppose further that this image contains an object that is long, thin, curved and yellow. We'd expect our classifier to classify it as a banana, and sure enough, it does. Now we draw out another image, except this one has an object that is long, but bent almost completely in a circle, and is more orange than yellow. Now, we might still expect our classifier to classify this as a banana, but should the classifier really be as certain about this prediction as it was about the previous one? Intuitively, I would say not. However, the overall classification accuracy remains unchanged, and so we can't say anything in particular about the certainty of this prediction.

So uncertainty isn't just the proportion of your results that you classify correctly.

Furthermore, it also isn't exactly equivalent to the class probability produced by your classifier, though I don't think this is the best forum for me to get into the details on that.
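A toy numerical sketch of this distinction, with a made-up one-feature "classifier" (the logistic curve and feature are invented purely for illustration):

```python
import math

def p_banana(score):
    """Hypothetical model: probability of "banana" from a single
    yellowness/elongation feature in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-10 * (score - 0.5)))

# A clear-cut banana vs. an ambiguous orange-ish curved object.
clear = p_banana(0.9)       # ~0.98: high confidence
ambiguous = p_banana(0.55)  # ~0.62: barely over the 0.5 cutoff

# Both are classified "banana" (p > 0.5), so both contribute identically
# to a global accuracy figure like 73/100 -- yet the per-prediction
# confidence is very different.
```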


That's a rough measure of the a priori uncertainty.

But you want the uncertainty conditioned to the particular observations you have made.

A ton more here:

http://mlg.eng.cam.ac.uk/yarin/blog_2248.html
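One approach discussed at that link is Monte Carlo dropout: keep dropout active at prediction time and read the spread of many stochastic forward passes as a per-input uncertainty. A heavily simplified pure-Python sketch (the tiny one-layer "network" and its weights are invented, and the usual 1/(1-p) rescaling is omitted):

```python
import math
import random

random.seed(0)

WEIGHTS = [1.5, -2.0, 0.7, 1.1]   # invented "trained" weights

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout applied to the inputs."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x) if random.random() > p_drop)
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid output

def predict_with_uncertainty(x, n_samples=200):
    """Mean of many dropout passes = prediction; spread = uncertainty."""
    samples = [forward(x) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, math.sqrt(var)

mean, std = predict_with_uncertainty([0.9, 0.1, 0.4, 0.8])
```

An ambiguous input tends to produce a wider spread across passes than a clear-cut one, which is the per-prediction signal a single accuracy number can't give you.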


That's how accurate a given classifier is as a whole vs. a measure of confidence for a particular prediction.


I quite rarely see people get flamed for providing a well-formulated opinion on HN, so I don't think you need to preface your piece with that.

I mostly agree with your post, ML has great potential to augment and assist doctors in making the right calls. Paraphrasing Jeremy Howard, "Doctors will be skeptical of these tools coming to take over their jobs at first, but will learn to trust them more and more when they see the kind of predictions they make, thereby slowly but steadily leading to their adoption (as an enhancement or tool) in many areas."


Actually, machines will never replace human judgment. It's absurd to even think machines are capable of such a thing. Machines are tools for humans to use. If your job is adding numbers or stacking boxes or something, sure. But anything beyond the basics is science fiction or delusion. A machine will never even approach an understanding of psychic continuity, for example.

Now, I'm absolutely not saying that we can't engineer solutions in the medical field. You know more than I do about what diagnostic evaluation takes and how procedures are performed, so I don't pretend to understand that aspect perfectly. What I do know for sure is that a machine cannot and will never replicate human thought and decision making and cannot replace a doctor ever until the end of time.

Also, you bring up a good point about salaries. Doctors are important and do great work but deserve to make no more than double or triple what the lowest paid person in their society makes. It's time to bring lawyers, doctors, accountants, developers, engineers—the professional class—back down to reality with the rest of us.


i honestly can't tell if this comment is a parody of something/copypasta or not tbh


Agreed. Reads like an AI trying a bit too hard to appear human. Charged with emotion, arrogance, lacking reasoning.


> but we won't be seeing fully integrated, doctor-replacement systems for a long time.

I see your point and agree - if you look at ML and computing in general as a tool useful within the current context of medicine.

But I think the whole field of medicine (or a big part of it) may become obsolete - because the change will be driven by technology, not medicine.

By monitoring people's vitals in real time - think wrist bands which can do blood tests (+ urine, sweat, sperm, etc), feed that real time data to ML algorithms, train it on billions of people and you can accurately tell ahead of time when a person is getting sick. Next day the drone brings the medicine.

Doctors will still be involved of course, but their role will be mostly to confirm whatever the algorithm has decided..

So I think we'll be seeing a move to preventive medicine / real-time monitoring during the next decade or so and within our lifetime people might not be involved in medicine at all.

That is, if we have peace and things go according to plan... Which is a long bet of course.


Nice theory but acute medical emergencies are often stochastic. Let me know if you get to the point where you can predict a pulmonary embolism in members of the general public with enough accuracy to dispatch an ambulance.

Preventable conditions are often a combination of genes plus environment plus luck. Trouble is, the "luck" part dominates more often than we'd like. This is why preventive medicine is often accused of being tremendously arrogant.

We've got a long ways to go, and it would help to focus on the highest marginal benefit items. (We were supposed to have flying cars, and all I got was the sum total of human knowledge on my phone... not a bad trade, on balance)


The difficulty is not "following the right lane", it's the huge vision problem. Analog computers can follow a lane perfectly fine.

If your profession only relies on reading a bunch of discrete values from monitors, it's a prime candidate for automation regardless of the complex process that derives the values.


I think I do understand machine vision, since I just published a paper using it. If my figure of speech does not convince you, I encourage you to set foot in an operating room if you get the occasion. I guarantee 100% that you will be unable to implement a system as efficient as the attending doc, because numbers are just part of the job. You already see systems advertised as more efficient than anesthesiologists. Until we see those systems tested in a real, usual OR setting (meaning, hopelessly chaotic), you should take those efficiency statements with a grain of salt.


Anesthesiology is one of those things that you'd think even classical control theory would be able to do given enough effort. You'd think... but not so! Apparently jobs that people go to med school to learn how to do require tremendous amounts of effort to automate. Who would've thought? ;-)


Anesthesiology is one of those things that have already been automated, even if you're unaware. The problem with anesthesia via robot is not a technical one at this point, rather it is an example of the current practitioners doing all that they can to preserve their way of life.

https://www.technologyreview.com/s/601141/automated-anesthes...

https://www.washingtonpost.com/business/economy/new-machine-...


That robot performs sedation, not general anaesthesia. Sedation provides a slight depression in awareness and pain sensation, and is much easier than full general anaesthesia, in which the patient is completely paralysed, unconscious, and insensate. General Anaesthesia generally requires intubation, a complex practical procedure which can go wrong very quickly, and kill people very quickly. It requires close monitoring of numerous parameters, some of which are digital measures (blood pressure, heart rate, etc) but others are quite subjective, such as the patient's appearance and the current stage of the surgical procedure.

The claim that this device is a replacement for an anaesthetist is similar to claiming that an automated wheel-balancing machine will one day replace car mechanics. Certainly it can perform one specific component of one kind of anaesthesia, but it is a far cry from the full skill set of an anaesthetist.


Exactly, thanks.

Once you get to general, for example, the control problem is a lot more difficult because of the required precision.


Pretty soon, what happened to anesthesiology (one physician managing multiple extenders managing lots of technology in multiple places) is going to happen to the rest of medicine.

Physicians will adapt by carving out even narrower niches ("I only specialize in the LEFT eye..." ;). But I really don't see a they-took-our-jobs moment until people are allowed to file lawsuits against AI. Companies will never put out an AI M.D. in the wild unless there's a flesh and blood counterpart to shoulder the liability (+1 job right there).

Don't get me wrong though, I eagerly anticipate the day when going to the doctor/machine is not such a miserable soul-sucking experience... (I'm a physician.)


I hate all physicians, except the one who saved my life.


Don't worry Doc, your job is safe. Machines replacing high-touch specialized professions like doctors is not the risk here. The bigger immediate risk may be from machines making these jobs much much shittier, and then there's fallout. You won't get fired, but you may want to quit.

MDs are not factory workers. First you have the AMA protecting you. Second, there appears to be a very strong preference by humans for medical care to involve another human being there providing the care - in person.

There are other professions where machines are already much better than humans and the need for humans to even be there in person is debatable. Airline pilots for example. Of course, while a computer can fly a plane better than a human and planes can be flown remotely, it will take much longer for passengers to accept the idea and get on a plane without a human pilot in the cockpit.

The effect of machines for pilots already highlights risks to come. Humans still need to be there in person, but they almost never need to do anything but sit there. The job is to be a warm body and tell people to look out the window while they fly over the Rockies (seriously, take off to landing no human is really needed to fly anymore - pilots do nothing 95% of the time).

Look at how the pay, benefits, competency, and job satisfaction of pilots took a nosedive after machines took over the cockpit. Did you see the movie "Sully"? When a human did step in over the machine, he almost got crucified; even though it was the right move, he was only barely able to convince others of it.

Is it possible machines won't replace doctors very soon but could soon start to lead to lower pay, less trust, less prestige (here come the nurse practitioners at least), less need to use your brain at work, and less meaningful interaction with patients? And in those rare cases when a machine can't do something, will the doctor be ready for it? Think the best and brightest will still want to do this? Doctors already aren't fans of patients who come in having looked up their symptoms online; well, they're about to do that 10x more and be more right than you.

Total guess here: how likely is it that we see the best and brightest increasingly go into medical research instead of practicing medicine, as the corporate pharma kickbacks you guys get can't make up for the low pay and shitty job conditions machines have created for doctors, while nurses and B-team doctors take over the actual work with patients?


I don't think anyone is eagerly awaiting it (unless they were horribly misdiagnosed which is sad reality), but there is an inevitability to it.

It's not a matter of if, it's a matter of when. Greed will be the impetus, and eventually quality catching up will be the nail in the coffin for traditional medicine.


You present this as binary: doctors OR ai.

I propose that the doctors who use ai to make themselves damned near infallible (by catching the corner cases where the ai is poorly trained) will make out like bandits, just like top lawyers. The ABA is a monopoly, too, and legal cases are perfect fodder for RNNs. Yet good corporate lawyers still make money faster than they can count it.

Meanwhile, shitty lawyers are out of a job.

Lather, rinse, repeat, but with a million times more regulation and inertia.


I'm pretty sure the kind of AI actually discussed in the article is only going to be a good thing for doctors - even the pretty mediocre ones - by sending them more work rather than less.

Person checks funny looking mole on smartphone app, app with intentionally conservative algorithm says there's some risk it might be a melanoma, dermatologist gets to have a proper look at it in three dimensions (and a second opinion from the medical grade version of the ML app). Result: more people visit doctors over issues that never bothered them that much and the doctor gets assistance with their diagnosis.

And of course some of those people that wouldn't have bothered visiting the doctor under the old system actually do turn out to have a melanoma and the early diagnosis significantly improves their survival chances.
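The "intentionally conservative algorithm" above amounts to lowering the decision threshold so sensitivity stays near 100% at the cost of extra referrals. A toy sketch (scores and labels are fabricated for illustration, not from any real model):

```python
# Hypothetical melanoma scores from an app's model, with toy labels.
scores = [0.05, 0.10, 0.22, 0.35, 0.60, 0.81, 0.92]
labels = [0,    0,    0,    1,    0,    1,    1]     # 1 = melanoma

def flagged(threshold):
    """Which cases get sent to a dermatologist at this cutoff."""
    return [s >= threshold for s in scores]

def sensitivity(threshold):
    """Fraction of true melanomas that are flagged."""
    hits = sum(f for f, y in zip(flagged(threshold), labels) if y == 1)
    return hits / sum(labels)

# At the usual 0.5 cutoff the 0.35 melanoma is missed; a conservative
# 0.2 cutoff catches all three, at the price of two extra visits.
assert sensitivity(0.5) < 1.0
assert sensitivity(0.2) == 1.0
```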


I'm not for death of jobs, but I'd love to have a tad more information, a tad sooner, rather than relying on luck with which doctor is experienced or not, or wait until the best specialists are available.

If ML unleash a global increase in diagnostics for all MDs than it's even better.


I hope computational thought somehow makes it into an MD's curriculum. It always depresses me when people see automation as a zero-sum game against humans, instead of seeing automation as something to empower human practitioners.


If anything it could put more MDs to work as problems are identified at home and escalated to an MD when there is an issue.


Yes, you are safe because of licensure and regulatory capture. Hardly seems fair to the factory workers.


After a procedure (unrelated to my heart), the anesthesiologist overseeing me suggested I go see a cardiologist. They noticed some tell-tale signs of a specific heart disease while they were monitoring the EKG readouts. I'm happy to say that anesthesiologist made my life a hell of a lot better because of that suggestion.

Not something I would have expected a machine to tell me after a procedure completely unrelated to my heart.


Why do you think so? If you have the right data, with many different algorithms running on it (even ones you aren't directly interested in), I am sure it will uncover those things. I am not saying the hunch your doctor had thanks to his expertise was useless, but with continuous data ingestion and analysis, and features as granular as they can get, this can only improve.

In fact, complicated algorithms fed many streams of data that are probably not directly shared between specialists until needed may uncover new things even before traditional doctors spot them.


Because the programmers building the anesthesiologist machine learning model probably didn't think to include that check in the model, as they don't have the appropriate training.


And for some reason you don't think some programmers are building ML models to classify all diseases known to mankind?


But with sufficient compute power, couldn't you just have the machine equivalents of a cardiologist, anesthesiologist, neurologist, etc all looking at your information at the same time?

It seems easier to scale compute power than to scale doctors considering how long doctors take to train.

Of course, being able to get a machine to a doctor's level is not easy!
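A sketch of what running several "machine specialists" in parallel over one record might look like. The models here are trivial stand-in rules and the threshold values are illustrative only, not clinical guidance:

```python
# Each "specialist" is a stand-in rule that flags a record for review.
SPECIALISTS = {
    "cardiology": lambda rec: rec["heart_rate"] > 100,   # illustrative cutoffs
    "nephrology": lambda rec: rec["creatinine"] > 1.3,
    "hematology": lambda rec: rec["hemoglobin"] < 12.0,
}

def screen(record):
    """Run every specialist model over the same record, collect flags."""
    return sorted(name for name, model in SPECIALISTS.items()
                  if model(record))

record = {"heart_rate": 110, "creatinine": 1.0, "hemoglobin": 11.0}
flags = screen(record)   # cardiology and hematology both flag this record
```

Adding a new specialty is just another entry in the table, which is the scaling argument: compute fans out where clinicians can't.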


but it should (unless the medical / insurance industry pulls their shenanigans - which is a real risk) be a lot cheaper to have a machine examine you. just like we have blood pressure machines at the drugstore. we could have checkup-supplement kiosks that could check your vitals cheap/free and tell you the same thing.

what I'm excited for with AI + medicine is how many poor people will be able to get _some_ level of healthcare where they were previously getting none. These programs aren't free, but once they exist they are cheap to run. so people in 3rd world countries can get their films read, or have an AI dermatology program look at their skin.

cynically though, I worry that once these AI programs get perfected they will only exist behind the worlds toughest paywall and insurance will just charge you the same fee to see a real doctor. they'll not let it run films on poor african villagers because they have to "protect their IP" or some bs.


a benefit to ML is that it can check for everything. equipment isn't there for that yet either, but I imagine it will be.

I'm assuming the 'data' for your condition was apparently in the EKG, thus there is no reason in the world an ML system shouldn't have picked that up. Whereas with humans, maybe you got a tired, new, or bad anesthesiologist.

if you said they made the determination by something other than the EKG, I would be more convinced the 'machine' would have missed it


ECG machines already print likely diagnoses.


A paramedic could see the same things on the EKG. It doesn't take much training to see bad rhythms on the monitor.


This is the second major study applying deep learning to medicine, after Google Brain's paper in JAMA in December, and there are several more in the pipeline.

If you've developed expertise in deep learning and want to apply your skills to healthcare in a startup... please email me: brandon@cardiogr.am. My co-founder and I are ex-Google machine learning engineers, and we've published work at a NIPS workshop showing you can detect abnormal heart rhythms, high blood pressure, and even diabetes from wearable data alone. We're working on medical journal publications now based on an N=10,000 study with UCSF Cardiology.

Your skills can really make a difference in people's lives. The time is now.


Before neural networks got deep, there was a lot of very impressive work applying neural networks to medicine. Example:

http://suzukilab.uchicago.edu/research.htm

IIRC they were outperforming the average radiologist on some tasks 10 years ago.


Why haven't they replaced the average radiologist yet then?


Having worked in the field, the model that works well is decision support = augmenting an expert with a computer-aided diagnosis. This can work in a couple of ways, but the two classic ones are:

* Following behind the expert to give a second opinion

* Going ahead of experts to screen cases that should be read by an expert

Humans are imperfect and have their sources of unwanted variability. Algorithms may be less variable and reach near-human performance in controlled settings, but are often not flexible enough, not good at incorporating multi-modal patient history, and sometimes fail in spectacularly bad ways.


Social skills. You still need them, patients still demand them. I am a physician myself and can assure you I think my profession will go the way of the dodo in the next 50 years, starting with surgeons and moving on to conservative diagnostics/therapy. The only reason it is not happening immediately is that patients still (and for the foreseeable future will) demand and expect the physician to be a human as well: showing empathy and forming the so-called "therapeutic alliance" with them.

As soon as AI manages to show those capabilities, it's the end. Algorithms are much more accurate in combining all the available biomedical information needed to decide on diagnosis and therapy.


but doesn't radiology specifically have the reputation for being the "unsung heroes" or at least the unsung among physicians ? with the exception of interventional radiology I thought they hardly ever interacted with pts ?


Indeed. My comment was more general in nature. Specialisations of medicine that have less direct interaction will be the first to be automated. Laboratory ones are pretty much already.


I don't think radiologists are going anywhere, but these tools should at least be used by radiologists. I think one reason they haven't been adopted is for reasons of safety and liability. It would be negligent to run these kind of tools without confirming the results, and if they have to inspect the results anyway, why bother with the computational approach?


I suspect one reason is the schleppiness of implementation. Healthcare is hard enough to navigate as a doctor or patient.

Another is the existence of edge cases. While not quite as bad as self-driving cars, they still require human review.

One radiologist I spoke with was under the impression that his field was going to disappear in the next n years. I don't think that will be the case entirely, but it will change.

One reason why image detection is ripe for this kind of "disruption" is that some of the highest paid medical fields (dermatology, radiology) basically employ expert image recognizers.

On the other hand, I'm not sure what percentage of health care costs goes toward employing doctors directly, but I don't think it's huge compared with the rest of the facility and equipment and administration cost.

The real benefit will come when people don't have to go in to the doctor at all. Which makes the smartphone "take a picture" aspect pretty sweet.


Healthcare plans do not reimburse machines yet.


I'm not sure if you're joking or not, but as someone who works for a medical technology company this is very true. Whether or not the use of a particular technology is billable to first order determines whether the technology is adopted.


There may be incentive soon enough, via ACOs and bundled payment programs. When dollars saved go to the bottom line, folks start trying to save dollars.


If we could have a preliminary diagnosis quickly from a machine and only then and if necessary talk to a specialist doctor, the savings in costs could be outstanding. Talk about Affordable Care. Maybe an act of Congress could move this along.


It all goes well until the machines terminate Ms. Butler's pregnancy.


I'm an average radiologist, and would be happy to answer any questions you might have.


Indeed, I second that the time is now. There are several imaging modalities/organs that provide diagnostic information about a variety of human functions or diseases. For example, retina is a unique organ that is amenable to imaging of central nervous system, cardio-vascular system, and microvasculature without any incision. This allows us to detect, screen, monitor, and predict risk for diseases such as diabetic retinopathy, macular degeneration, glaucoma, and even cardio-vascular risk, Alzheimer's, and stroke.

If this interests you and you have developed the expertise, there is another startup opportunity to explore -- please email me at solanki@eyenuk.com. We are bunch of machine learning PhDs, developing/publishing and commercializing deep learning algorithms for disease/risk identification from retinal photography. We play with millions of retinal images, and it's a lot of fun!


> If you've developed expertise in deep learning and want to apply your skills to healthcare in a startup

What if we haven't but we do?


If you're a strong software engineer but without any particular expertise in deep learning, feel free to email me as well. brandon@cardiogr.am :)

We (Cardiogram) in particular are hiring for: mobile engineers (building an Apple Watch app used by 100,000 active users), storage and data infrastructure engineering (how are you going to store 10 billion sensor measurements those users are producing?), and machine learning engineers (now that we have the data, what can we do with it?).

In practice, many people fit multiple categories. There's a tight relationship between data infrastructure and machine learning, for instance, since new infrastructure often enables new algorithms. Likewise, building a new feature in Cardiogram for Apple Watch may give us 10x more labeled data, and therefore make the deep learning algorithm perform better. Interdisciplinary is good.


Is remote working an option?


Not for now unfortunately--but maybe once we get a little larger.


I heard before that a problem with heart monitoring for ML is that most of the available samples are of abnormal hearts rather than normal hearts and false alarms. As in, there's not enough published data to establish a baseline for high accuracy across the general population. Most such claims cite rules protecting medical records/data. I never got to ask a specialist in the field for confirmation or rejection of that claim.

So, how true or false is it?


True -- the available data sets (like MIT-BIH arrhythmia database) are quite small. In our case, we launched a study with UCSF Cardiology which has recruited about 10,000 people so far, and that's how we get a data set large enough for deep learning.


Very interesting! What is Cardiogram's business model?


The app is free to you, but if we can detect (say) diabetes early and route you to the right medical care, we save the healthcare system money. We've built a platform called Cardiogram Connect which optionally lets you share aggregated data in exchange (usually) for financial rewards. You'd choose to connect with whoever pays for your healthcare--depending on your situation, that could be a health system, accountable care organization, insurer, or employer--and that entity is the customer that pays Cardiogram.


Is it working? It seems a bit convoluted. Why not just let users agree to sell their aggregated, deidentified data to whoever wants it and then you take a cut kinda like the app stores.


I think the parent was asking 'who pays you money?' not, 'for whom do you provide value?'


Honestly, I can't wait for deep learning and computational methods to dethrone doctors and upend the medical profession. In the next five years, expect a computer to be able to predict most diseases a lot better than doctors can -- and with none of the attitude, high cost, or inconvenience.

Mind you I'm not talking about researchers, who will always have a job. I'm talking about practitioners. I've had a medical condition from birth and I've had to deal with my share of doctors. Outside of the insurance system, they are easily the most unpleasant part of the whole ordeal to deal with. There are some gems, but most you will encounter are pompous, arrogant, and "commanding" -- when they enter a room, they are flanked by "residents" and "assistants" and generally give off this air of superiority which is really just because of their rote experience. The whole thing comes off more as a performance than anything else. Worse, they often get mad when you question them or ask them to explain themselves, or how they arrived at a conclusion.

Good luck finding work when an algorithm can do your job better than you. It's only a matter of time.


I feel the exact opposite - in my treatment for prostate cancer the human interaction with doctors were a hugely positive experience for me. Interpretation of biopsies and inspection of cancer images were part of that process and I'm sure machine vision algorithms could help in this area. However, even if the classification of the cancer cells improves, the role of the doctor guiding the patient through the right treatment process still remains something I would not want to turn over to an algorithm.

I have also encountered doctors I did not like but fortunately for me I had a choice where to go. Maybe machine learning should focus on weeding out unpopular practitioners instead.


I think medicine will over time morph from a field generally perceived as an intellectual one, to a largely humanistic one, like nursing. Most people, especially the HN crowd, vastly underestimate the importance of the human touch. A doctor who possesses this can make a huge difference, but sadly they are outnumbered by those who do not.


Medicine isn't really that intellectual. It's mostly based on vast amounts of knowledge acquired via rote memorization and repetitive experience. It's not like physics or math that requires actual creativity and intellectual rigor. (I'm a 3rd yr med student)


Especially as we become evidence based vs. eminence based.


The trend over the last 10 years here in Canada has been exactly that. There are now multiple tests one has to take before med school admissions, all of which are designed to weed out "bad" candidates. The desired candidate seems to be a good human who has the required stamina and intelligence to complete the curriculum.

However, since there are so many applicants, schools have no choice but to cut all below a certain GPA. This has an effect on the pool of applicants and the schools are trying to mitigate that.


I think it's fair to assume many if not most students enter medical school with a sincere desire to help. Will that sentiment survive 6+ years of highly stressful education, though?

In my opinion, technology will help reduce the cognitive load sustained by doctors, driving error rates down and simplifying medical education, allowing the professional to focus more on the relationship with the patient.

Many specialized guidelines contain actual algorithms: objective and subjective signals to look for, their interpretation and evidence-based recommendations based on them. In one instance in my education, this manifested in the form of giant lookup tables. I was expected to memorize them and keep doing it again and again when they inevitably change in a few years. To me it was like trying to memorize a highly dynamic version of the periodic table of elements.

One look at it and I knew a computer could fully implement its logic yet my classmates think I'm crazy for even suggesting it. I say let the computer remember that stuff. It won't replace all of medicine, certainly won't substitute for a proper understanding of the disease... But the act of mapping objective data to evidence-based conduct should really have been abstracted away long ago. The electronic medical records system should bring up these guidelines the moment the physician types the data in. If the doctor types in anthropometric data like BMI and other relevant measurements and they indicate obesity, the system should be able to suggest a proper conduct.

Unless they're used in emergency situations where time is essential, doctors shouldn't have to memorize magic numbers, much less entire tables of them. With one less thing to cram, maybe school wouldn't be so stressful, doctors would make fewer mistakes, and they would have more time for actually caring about the patient.
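As a sketch of how one such lookup table can be encoded, here are the standard adult BMI cutoffs mapped to placeholder suggestions (the "conduct" strings are made up for illustration, not real clinical guidance):

```python
# Standard adult BMI category cutoffs; conduct strings are placeholders.
BMI_TABLE = [
    (18.5,          "underweight", "assess for underlying causes"),
    (25.0,          "normal",      "routine follow-up"),
    (30.0,          "overweight",  "lifestyle counselling"),
    (float("inf"),  "obese",       "weight-management referral"),
]

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def classify(weight_kg, height_m):
    """Look up the category and suggested conduct for a patient."""
    value = bmi(weight_kg, height_m)
    for upper, category, conduct in BMI_TABLE:
        if value < upper:
            return category, conduct

category, conduct = classify(95, 1.75)   # BMI ~31 falls in the top band
```

When the guideline changes, you edit one table instead of re-cramming it into thousands of heads, which is exactly the abstraction the commenter is asking for.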


Actually I think moving diagnostics and other highly technical decisions into ML systems only opens up room for making sure that the humans still involved in the process are maximally empathetic towards their patients.

Right now, a really rude doctor will be tolerated within the system because of their technical knowledge, diagnostic skills, surgical skills, etc. But if all that other stuff is performed by machines, and we just have nurses, etc., there to tend to patients, explain things to them, and generally provide a human touch, I'd expect that very rude human employees would no longer be tolerated.


OK, let's look at this stuff.

Would it have mattered to you if a machine interpreted the results and a nurse, fully versed in the specialty, had a consultation with you and guided you through the treatment process?

It seems like the bigger issue here is having some warmth of some sort in the situation - which I think is where medicine is really going, at least in the way we experience it. Computers and machines do the hard diagnosis, with educated humans guiding other humans through. I believe we'll have fewer unpopular practitioners because we'll be able to spend more time teaching bedside manner and communication skills.


I get where you're going with this ("fully versed" nurse + AI = doctor) but there were also surgeons and anesthesiologists involved. Some specialties will remain but general practitioners and family doctors may be replaced with some combination of software and less expensive staff.


Setting aside current capabilities for a moment, which would you prefer: human interaction with more fault, or no human interaction and less fault?

The obvious rebuttal which is common is: "why not have a compassionate human read the machine results/analysis/interpretation/prescription?"

To which I respond, how much are you willing to pay to have someone read to you?


I'm not so sure we are going to see automation completely replace doctors. Just look at all the problems, both technical and social, in the self-driving car arena. Far more people are qualified to be drivers than to practice medicine.

Instead, I think we'll see more powerful diagnostic tools at physicians' disposal. Doctors will still play an important role in treating their patients and will be more effective because they'll have powerful tools assisting them.

But to your point, will technology help patients feel more empowered in their medical encounters? Or to get more value out of their interactions with their doctors? https://www.remedymedical.com/ seems to think their platform will do just that for primary care / telemedicine visits.


> Doctors will still play an important role in treating their patients and will be more effective because they'll have powerful tools assisting them.

I get into this argument with people a lot.

I say something like, "Self driving cars will replace all the transport jobs" and the pedant says, "Well they won't replace ALL the transport jobs, so argument null and void." But no shit they won't replace ALL the transport jobs, if they replace even 30% of them, that's huge. Highest unemployment during "The Great Depression" was ~25%. I think in the case of transportation it will be more like 80% fewer transportation jobs in 10 years, something like that.

With doctors and hospitals, if there were better incentives in place promoting any level of efficiency, we would have less need for doctors (and hospital staff in general). I agree that we won't see 100% automation. But a hospital finding it can do more and better work with 50% fewer staff? I could see that being a possibility in our near future, with the caveat that it won't happen in the US due to the regulatory environment and incentives in healthcare.

Countries outside of the US might be more open to the idea of efficiency in healthcare.


It is no different than going to an urgent care clinic for a strep test. Doctors do not perform those either. Most diagnoses will be done via software administered by lower-paid and more easily replaceable low-skill labor, just as urgent care clinics do with lab testing.

There are fewer accountants today because of tax software and Excel spreadsheets. They are still needed for more complex and unique situations, but there will be a lot fewer of them. The local H&R Block uses lower paid "Tax Preparers" instead.

There is a lot of research done on Walmart showing that when the store moved into a town there were not fewer jobs, but somehow the area got poorer. What researchers found was that the local community leaders (small business owners, lawyers, accountants) were automated away and/or moved to Walmart headquarters.

http://aese.psu.edu/nercrd/economic-development/for-research...

That's one I could find, but it also references other papers in the field.

This leads me to understand that, as with Walmart, the simple human physical-labor jobs will remain long after the more skilled positions. The main reason, I believe, is that software is simply cheaper than hardware (robots), so highly skilled positions which require only software, not hardware, to replace will be the first to go. This is exacerbated by the fact that the software-replaceable jobs are the highest paid and therefore the most profitable to remove.


" The main reason, I believe, is that software is simply cheaper than hardware(robots) so highly skilled positions which require software not hardware to remove will be the first to go."

Very much this. This is how we'll be able to replace general practitioners with, at first, nurses with a bit of extra schooling - and then nurses with a 4 year degree... and then with a 2 year degree. Yet we'll still see surgeons for some time because it takes robotics or some other scientific breakthrough to replace them.


Thought experiment: do you think some part of your job will be automated before or after hospitals automate away 50% of their medical staff?


Depends on which job:

Programming, machine learning: it's constantly being automated, which leads to paradigm shifts. Those new paradigms are then studied, understood, and automated. Automation is about "not doing the same thing over and over again so you can solve new problems."

Programming, games: I'm giving a talk later this month about automating game production with machine learning. You can't really use machine learning to be "creative" yet, but you can use it to generate novel game assets, which is actually where most of the time is spent in game dev. There's also a lot of pipeline improvements that could be made by existing tools companies, most notably Adobe and Autodesk. So the upshot is it will be possible for small teams to create much more expansive game worlds. Make GTA V for $1M instead of $260M kind of thing. But even then, the history of game design has been automating different pieces -- it's basically the reason for the existence of game engines.

Starting businesses: This is controversial, but I think entrepreneurship is the highest intellectual endeavor. It requires you to apply everything you've learned about life, the universe, and everything in ways that most people haven't thought of. I think you'd need a very real AGI to do this; although I suppose I could see an algo running P&G or J&J and making decisions about what brands to invest in and divest from, along with how to market based on what people are saying on social media, stuff like that.


I think it will be both. They will start with software for doctors - who will use it because the computer never forgets to ask some critical question that could distinguish a 1-in-a-million disease from the common cold (with similar symptoms) that everyone else in the office has today.

Eventually urgent care clinics and ERs will install them for people to use directly. There's no reason the machine can't prescribe antibiotics for the latest strain of strep that is going around. That leaves the doctor for the hard cases the machine doesn't know.


I've been harbouring thoughts similar to yours for quite some time. I live in Canada, where healthcare is supposedly "free" (it's not - you get taxed like crazy here!). The treatment you receive here is subpar, in my opinion. The declining quality of treatment is, I'm guessing, most likely attributable to an increasing population and the limited number of quality doctors available to treat these people.

I would assume these challenges aren't unique to Canada, and from an outsider's perspective the medical system in the US seems worse (maybe not if you're rich).


I never thought doctors would be hit so soon by the automation/AI crisis, but this article challenges that thought. However, given the state of robotics at the moment, surgeons, for example, aren't going anywhere for a good decade, I'd estimate, and it's not like people can, in the mole example, self-remove a chunk of it for biopsy with 99% accuracy. Then there's treatment that has to be done at a hospital under the monitoring of professionals, etc.


> I never thought doctors would be hit so soon by the automation/AI crisis

I think most people in AI are more surprised that it took this long. The tech has been there for decades for a pretty large percentage of routine diagnostics, especially carefully defined clinical-sample type diagnostics. People were pretty sure in the '80s that it'd all be automated soon, but it never managed to make it into actual hospitals so funding dried up. Mixture of bureaucracy, legal issues, patients not liking the idea of computer diagnosis, doctors not liking the idea of computer diagnosis, incentives, etc.

I think changes in all those "environmental" factors are likely to be the biggest boost to something like this getting deployed in practice. Tech advances are good as well, and will expand the range of diagnostics that can be automated, but there is already enough low-hanging fruit in medicine that is easy enough for any of a half-dozen AI techniques to do it, that I don't think tech is actually the bottleneck.


Well, with a DL-powered algorithm/device, people can pretty much practice self-diagnosis, and only in circumstances where the model predicts a higher risk do they need to go to a clinic for human treatment.

Overall, AI will ease the demand for human doctors - maybe not overnight, but gradually yet quickly. Cheaper healthcare for all, thinner salaries for doctors; I'd say not a bad deal overall.


> an algorithm can do your job better than you

What I find people miss here is that computers will not help you heal your boo-boos. Say you get stabbed in the face with a knife. You need a doctor to help you. Or to give birth.


Honest question: if you have had a bad experience with a healthcare system that doesn't deliver the sort of care that you want, what makes you think that some machine learning based implementation with a human spokesperson is going to be better? How will you question the results of the algorithm?


Cheaper, more cost effective for sure.


The OP's comment was not asking for a cheaper or more cost-effective alternative.


It is related. When the diagnosis is much cheaper and doesn't require that much of a long wait, people will be much more tolerant towards it.

And I don't know why you bring up the spokesperson stuff; there doesn't need to be one.


Another perspective: The Heroism of Incremental Care – Atul Gawande https://news.ycombinator.com/item?id=13485354


There are not that many doctors in the pipeline to meet the demands of the future, especially with longer lifespans and demographic spurts in places like Africa. These are good complementary tools for medical care, with the doctor "quarterbacking" while all the "blocking and tackling" is automated.


I don't have much experience with specialists, but my experience with GPs has been awful, for every good one you meet, you have to deal with 5 other terrible ones. I will never give up my GP until he retires or an AI replaces him because he is probably the first doctor since my childhood with an actual air of competency around him.

I feel bad for the rest of the people who visit my clinic and have to deal with any of the other garbage practitioners, who usually fall into one of two buckets: foreign (mainly Indian) hacks with zero medical knowledge or bedside manner, and greedy yuppie strivers with a knack for memorization but terrible analytical ability.

Most doctors don't deserve their inflated salaries or social status, and I hope they are soon brought back down to Earth by technology, they have been able to skid by for way too long.


This is exactly my experience. In some medical domains it's even worse - I don't think I have ever seen a good dentist. Can you even build up passion for staring at random human mouths?


You should always ask your doctor for a treatment plan, i.e., a structured approach to curing your condition. Make them plan a few steps ahead. And question that plan.


Systems that outperform doctors in some specific area of diagnostics aren't new. One of the earliest examples is Mycin [1], which was also developed at Stanford, but around forty-something years ago. It never went to production because of practical issues that had nothing to do with its accuracy. It's interesting that all of those "practical issues" are no longer relevant, and yet we still don't see widespread use of similar software.

[1] - https://en.wikipedia.org/wiki/Mycin


I wrote a bit about the systematic issues preventing promising AI research from moving to production in medicine, including a very brief history of MYCIN: https://blog.cardiogr.am/three-challenges-for-artificial-int...

I think now really is different. Part of that is algorithmic advances like deep learning, as shown in this Nature paper.

An even larger part of it is that the financial incentives are flipping due to value-based care. In 1979, a hospital that implemented an expert system for accurate diagnosis may, paradoxically, see its revenue fall. Nowadays, with ACOs, risk-based contracting, and bundled payments, the financial incentives create tailwinds rather than headwinds for large-scale adoption of AI in medicine.

Contrary to popular belief, the medical system can absorb new techniques very quickly--when incentives are aligned. And they are now becoming aligned.


You're comparing apples and oranges here. Mycin is an expert system dealing with changing rulesets and A LOT of manual teaching. The current paper deals with computers discerning visual patterns by themselves.


I disagree. Data isn't magic fairy dust, and romaniv is correct -- plenty of expert systems-based approaches did perform well in practice.

Many of the barriers standing in the way of widespread use of diagnostic software are not technological in nature.

Ignore those barriers at your own risk.


I am assuming you are referring to the expert systems used for infectious diseases. They performed well, but could not adapt to constantly changing guidelines and medications. Data input was inefficient back in the days of terminals. It took about five thousand rules to be on par with a junior infectious-disease doctor, if I remember my studies on this subject.

The deep-neural-net visual diagnostics are different. They learn from pure pixels, much as we would from photons striking our retina, with signals traveling all the way back to our visual cortex for learning. There is no assembling of thousands of rules here, and therefore the system is less brittle.

These new systems get more powerful with more data given to them. Expert systems required humans to craft rules from the data, and therefore required constant maintenance and were fallible to human error.

So yes, some of the barriers are definitely technological in nature.


>It took about five thousand rules to be on par with a junior infectious-disease doctor, if I remember my studies on this subject.

IIRC, MYCIN had several hundred rules. The researchers in this article had to pre-process 130,000 labeled examples. If you see a misclassification in an expert system you can at least backtrack and identify the individual rules that contributed to the failure. AFAIK, systemic errors in training data are much more difficult to detect and fix.

I think people tend to overstate the practical issues with expert systems and understate the issues with deep learning, partly because we have decades of experience with real-life deployments of the former and relatively little experience with the latter.


Oh yea, I'm confusing it with INTERNIST. https://en.wikipedia.org/wiki/Internist-I

Yes, the explainability of expert systems is the only thing going for them.

Yes, there are issues with both, but we are really debating different solutions for different problems. For visual recognition, there is no doubt in my mind that deep learning is king.


I hope someday soon we'll develop systems that allow us to "ask" a ML algorithm what factors led to a decision (diagnosis in this case).

It would be interesting to compare that with the current state of the art in the field, and see if ML can contribute new scientific/medical theory as well.


I'm not so sure we could ask the algorithm itself, in any literal sense. An algorithm trained to introspect might actually be wrong about its own "memories" or "motives"—just like a human might! (Though, likely, without the penchant toward political rationalizing.)

This is most simply because, whatever the algorithm is trained to do, it's certainly trained better to do that thing than to introspect. Introspection is a separate skill!

But there's also a more insidious element: introspection (in humans, at least) tends to result in the creation of a lot of "personal concepts" that don't map to well-known common concepts. An introspection on one mind must necessarily result in a taxonomy that contains terms for the tiny, unique features that only that mind has—which makes it very, very hard to communicate one's personal introspections to others. (You might call this a kind of overfitting: the introspection capability becomes optimized for that one mind, but ceases to translate well to features in other minds—like human minds.)

I'd place a much stronger bet on our ability to train one AI to "stare at the brain" of other AIs as they make decisions [tons of them, as its training data], with the expected output being a general theory on common AI features responsible for the given calculation step. A computer psychologist, of sorts. :)

Of course, you could include such a pre-trained model as a "module" alongside the AI itself, and call the combined system "one AI" if you like.


> Introspection is a separate skill!

Indeed it is. It's something that a second algorithm (perhaps ML, perhaps not) would do.

And this is beginning to remind me of Society of Mind.

https://en.wikipedia.org/wiki/Society_of_Mind


Depending on the algorithm, there is this: https://homes.cs.washington.edu/~marcotcr/blog/lime/

It's a neat approach that I've used with some random forest classifiers.
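For anyone curious how that works under the hood, here's a minimal, self-contained sketch of the core idea behind LIME (not the library itself): perturb the input around one point, query the black box, and fit a proximity-weighted linear model whose coefficients serve as the local explanation. The `black_box` function is a made-up stand-in for any trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model score; here only feature 0 really matters.
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1])))

def explain_locally(predict, x, n_samples=500, scale=0.5):
    """Fit a proximity-weighted linear surrogate around the point x."""
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))  # perturb x
    y = predict(X)
    # Weight samples by closeness to x (Gaussian kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
    return coef[:-1]  # per-feature local weights (intercept dropped)

weights = explain_locally(black_box, np.array([0.2, -0.1]))
# feature 0 should dominate the local explanation
```

The surrogate is only valid near the queried point, which is exactly the trade-off LIME makes: local faithfulness in exchange for interpretability.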


Well, there are ML algorithms that can be interrogated about their reasoning. One type is symbolic systems that use data to generate or "tune" their rules. Another one is stuff like this: https://www.youtube.com/watch?v=UqPcq0n59rQ (it's a kind of ensemble system that uses a linear combination of human-interpretable classifiers). Both kinds are pretty interesting, because you can manually inspect and tune them after training.


This reminds me of a talk that I saw about wavelet based algorithms in the 1990s for detecting tumors in mammograms.

The algorithms found most of the tumors that humans had missed, with similar false positive rates. BUT humans refused to work with the software!

The problem was that the software was very, very good at catching tumors in the easy to read areas of the breast, and had lots of false positives in more complicated areas. Humans spent most of their effort on the more complicated areas. Every tumor that the software found that the human didn't simply felt like the human hadn't paid attention - it was obvious once you looked at it. The mistakes felt like stupid typos do to a programmer. But the software constantly screwed up where you needed skill. The result is that humans learned quickly to not trust the software.


This is very true and directly related to my research. (I work at a company developing software to interpret EEG data.) There's a huge difference between an algorithm with a low error rate that makes mistakes seemingly at random vs. an algorithm with a somewhat higher error rate whose mistakes are at least comprehensible. A doctor is much more likely to trust the latter than the former. Almost as important as developing a detector with a low false positive rate is developing a detector that can figure out when the problem is too hard so it knows not to even try. (And it seems that this problem is just as hard.)

One of the things we do is perform a Turing test of sorts where we test if the performance of our detector is statistically indistinguishable from a human. (In fact, we actually have a contest running right now where we give you 10 EEG records, some marked by humans, some marked by our software, and if you can figure out which were marked by which we'll donate $1000 to the American Epilepsy Society.)


Unfortunately the paper is paywalled in Nature instead of on arXiv, and the data/code/model/weights are inaccessible. While publishing in Science/Nature/NEJM/JAMA is definitely the right approach for deep learning to gain validity in the medical community, faster progress could be made with a more open platform, with constant, real-time validation across more data, medical centers, and clinics. The reason progress in DL has been so breathtaking is in no small part due to the culture of openness and sharing.


sci-hub.cc is your friend.


That's true - but the principle still matters, and sci-hub may not be around forever.


This is interesting and impressive work; however, I noticed that they compared the algorithm's performance to dermatologists looking at a photo of a skin lesion. This seems like a straw-man comparison because any dermatologist would normally be looking directly at a patient and would benefit from a 3D view, touch, pain reception, etc. I realize this was the only feasible way to conduct the study, but it leaves open the possibility that an algorithm looking at a photo cannot match the performance of a dermatologist examining a patient in person.


Respectfully disagree. Telemedicine is going to be an important aspect of medicine, Dermatology in particular.

Rural and underdeveloped areas are going to be the largest market, IMO. Everyone can access a smartphone, but not everyone has the luxury of seeing a doctor in person, and if they do, the time/travel costs can be significant.

Disclosure, I work for an EHR startup with a Telemedicine product.


Eric Topol puts this up there as the most impressive AI/medicine publication to date. https://twitter.com/EricTopol/status/824318469873111040

The paper ends with "deep learning is agnostic to the type of image data used and could be adapted to other specialties, including ophthalmology, otolaryngology, radiology and pathology."


As someone with two melanomas under my belt (and more than 1,000 moles), what I really want is the ability to do a mass scan of my body, also further down at the cellular level, not just looking at the moles on the surface.

I am lucky enough to have Memorial Sloan Kettering as my hospital, and none other than Dr. Marghoob, one of the leading experts. I actually have a scan of my body made with 50 or so high-definition cameras (I am literally a 3D model in blue speedos with a white net on my head).

They have a new system where they can look at the cellular level without doing a biopsy, and they actually found my melanoma before the biopsy (i.e., they knew it was melanoma before they did the biopsy). But it's a really cumbersome process - I had six experts studying and working to position that laser properly.

So the real challenge today is how do we get the data into the system.


This is why we need a platform for these models asap. I would totally download this app today and use it regardless of what the FDA thinks.


Are you sure about that? Playing devil's advocate here: we have plenty of examples of scientists jumping the gun without being peer-reviewed or going through a rigorous follow-up testing process, especially when it comes to medicine. The Alzheimer's 40 Hz blinking-light example is pretty good - some scientists got it working in mice, but we don't know any potential side effects it could have on humans. Maybe none, and that'd be great! Maybe it causes schizophrenia - who knows? We have no way of knowing yet! Just very, very educated guesses.

I say when it comes to medicine, err on the side of caution. Obviously a diagnosis app isn't too dangerous - worst case scenario, the app gives you a positive diagnosis, so you go to the doctor's, they take a sample, and find the growth not to be cancerous. No harm, no foul. But other ideas could be more dangerous.


Worst case scenario would be giving a false negative, wouldn't it? And that's the danger, not a false positive (though false positives may lead to higher health costs due to increased doctor visits).


Removing moles isn't risk free. It's minor surgery but it's still surgery, and any surgery carries risk of infection. With increasing prevalence of antibiotic resistance this is a serious concern.


As you say, it's a diagnosis app. It shouldn't be judged on the same slippery slope as a treatment technique.


I'd love that. I'm still irritated that the FDA forced 23andme to remove the Alzheimer's/Parkinson's report (though I completely understand why). Now I have to run all my 23andme data through promethease, which is significantly more complicated to understand.


This is not hard to imagine at all. I know that there must be some absolutely excellent doctors out there, but I don't trust the bottom 80% of doctors much at all, and honestly would rather have an algorithm most of the time, especially starting off. The lack of robust consumer level 'medical doctor apps' is one of the biggest mysteries to me.


There's an app used by over a million doctors called "Figure 1" that allows them to share medical images for crowdsourced diagnosis and treatment of rare cases.

I wonder when we will get to a point where machine learning can help there?

[1]https://figure1.com/medical-cases


I read the headline and wondered how ML could be trained on the difference between a new dermatologist and a seasoned one. Cancer I get - it looks totally different than non-cancerous skin :)

That said, pulling this off is one of the best ML applications to date. Recognizing cats or scenery doesn't seem nearly as useful.


Great results! Deep learning has been gaining traction in other areas of medicine as well.

One such task is lung cancer nodule detection from CT scans. A paper I recently co-authored applied many different architectures to this detection and achieved very good results. (https://arxiv.org/pdf/1612.08012.pdf)

The best combination of systems detected cancer nodules which were not even found by four experienced thoracic radiologists.


Do you participate in the current Data Science Bowl regarding CT lung cancer detection [0]? The prize pool of $1,000,000 seems quite attractive, especially if you recently developed new state-of-the-art CT lung cancer detection ML models. The only somewhat strange aspect of this competition (at least to me) is to not include locality annotations. They only provide labels of cancer/no cancer per patient...

[0] https://www.kaggle.com/c/data-science-bowl-2017


Dermatologist here. Most skin-cancer diagnosis is relatively straightforward, and if the lesion is suspicious it will require a biopsy to establish the subtype of the cancer and plan further treatment. There is no reason why this initial visual diagnosis cannot be performed at the same level as a dermatologist by a machine, or indeed by a non-doctor trained intensively for a relatively short period to interpret photographs.

The difficulty is two-fold. Firstly, liability: a dermatologist aims not to miss a single case of melanoma among the tens of thousands of patients seen over their career. If this algorithm is used widely in millions of patients, then either the sensitivity will have to be higher and more biopsies performed, or there will have to be an acceptable rate of missed melanoma diagnoses.

Secondly, edge cases such as moles that are slightly atypical. In these scenarios there is no way that I would be comfortable making an assessment from a photograph. Now of course, a machine could also gather further information via methods such as in vivo confocal microscopy, but in that case the cost savings are likely to be negligible.
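The sensitivity-vs-biopsies trade-off is just threshold-setting on the model's score. A toy illustration with entirely synthetic score distributions (made-up numbers, not real data or the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical classifier scores: 1,000 true melanomas among 100,000 lesions.
malignant = rng.normal(0.7, 0.15, 1000)
benign = rng.normal(0.3, 0.15, 99000)

def operating_point(threshold):
    """Sensitivity and total biopsy count if everything above threshold is cut."""
    sensitivity = np.mean(malignant >= threshold)
    biopsies = int(np.sum(malignant >= threshold) + np.sum(benign >= threshold))
    return sensitivity, biopsies

sens_hi, biopsies_hi = operating_point(0.5)  # stricter threshold
sens_lo, biopsies_lo = operating_point(0.2)  # "miss almost nothing" threshold
# Lowering the threshold pushes sensitivity toward 100%, but the biopsy
# volume explodes, because nearly all positives come from the benign pool.
```

That is the liability bind in one picture: at population scale, "never miss a melanoma" is purchased with an enormous number of extra biopsies.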


Can someone clarify for me how the training and testing sets were constructed? One problem is that cancerous and benign skin are unbalanced in a representative population. How was this imbalance handled in testing? How was the testing set constructed? And so on.


For each of the three tests, the training sets were classified with a biopsy; images were randomly selected, then blurry images were filtered out by a separate dermatologist. The benign:malignant ratios were 70:65, 97:33, and 40:71, respectively.

These close-to-even ratios make for a more powerful test of classification. The fact that these test samples have biopsy data means that some dermatologist thought they might be malignant (unnecessary medical operations are unethical). This might lead to some bias towards samples that are difficult for humans to diagnose.

Separating these into binary classifications of specific tumor types makes it easier to classify than out of every possible tumor type (as a dermatologist does).

Still the claims this paper makes are very promising. A lot of the training data was classified by dermatologists, not biopsy. Using more biopsy data could lead to even better classification, as well as improvements to the model.
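To make the ratio point concrete, here is a back-of-the-envelope sketch (the sensitivity/specificity numbers are hypothetical, not from the paper) of how the same classifier looks on a near-even test set versus a representative screening population:

```python
# Hypothetical classifier performance, for illustration only.
sens, spec = 0.90, 0.80

def accuracy(prevalence):
    # Overall accuracy at a given disease prevalence.
    return prevalence * sens + (1 - prevalence) * spec

def ppv(prevalence):
    # Positive predictive value: P(disease | positive result).
    tp = prevalence * sens
    fp = (1 - prevalence) * (1 - spec)
    return tp / (tp + fp)

balanced_acc = accuracy(0.5)    # ~50/50 split, like the paper's test sets
screening_acc = accuracy(0.01)  # melanoma is rare in a screening population
screening_ppv = ppv(0.01)       # at 1% prevalence, most positives are false
```

The balanced split is the right way to measure discrimination, but deploying the same model on an unselected population is a different problem: at low prevalence, the positive predictive value collapses even when accuracy looks fine.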


One major, major advantage that medical imaging has for deep learning is the similarity of each data point, especially the 'background data.' For instance, human brains typically look very similar across individuals (up to scanning parameter differences), except in the abnormalities - which are often precisely what you want to highlight.

As an example, I recently trained a neural network to perform a useful task for our lab using 3 (!) hand-labeled brains.


It's insane that you were able to get reasonable results with such a tiny dataset.

I am learning machine learning right now and I find working with datasets with fewer than 100 examples to be quite difficult.

It seems counterintuitive when you first think about it, but having way more data actually makes the task of fitting the model much easier, as there is granularity that can be used to get feedback on adjustments to the structure of the model.


It was an image segmentation task, and the features were similar across data sets. The other thing that made it work well was heavy use of data augmentation that captured ways in which different data points could reasonably differ.
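A minimal sketch of what such augmentation can look like for 2D images (my own toy example, not the lab's actual pipeline): generate variants of each labeled image that differ only in ways real acquisitions could plausibly differ, such as orientation and intensity.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, n=8):
    """Produce n label-preserving variants of a single 2D image."""
    variants = []
    for _ in range(n):
        img = image
        if rng.random() < 0.5:
            img = np.fliplr(img)               # random horizontal flip
        img = np.rot90(img, k=rng.integers(0, 4))  # random 90-degree rotation
        img = img * rng.uniform(0.9, 1.1)      # mild intensity jitter
        variants.append(img)
    return variants

base = rng.random((64, 64))   # stand-in for one labeled slice
augmented = augment(base)     # 8 training examples from 1
```

The key constraint is that every transform must leave the label valid, which is why the set of useful augmentations is domain-specific.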

There was a really cool medical imaging paper recently that literally just labeled several 2D slices in a 3D dataset consisting of 3 images and performed a reasonable segmentation:

https://arxiv.org/abs/1606.06650


Diagnosis based on image recognition is something machines are already very good at, even without recent deep learning techniques (although I am sure they will help).

For instance, in college I worked with a radiologist to write an image-recognition program to identify osteoporosis from 3D MRI data. We used some super-basic image-segmentation algorithms to identify the bounds of the bone layer we cared about. From there, a model was able to determine mechanical properties of the bone and therefore make an assessment with much more granularity than the human eye.
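Not the actual program described, but a toy version of that kind of "super-basic" pipeline: threshold a synthetic volume to segment the bright region, then compute a simple quantitative property of it that a downstream model could map to an assessment.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 3D "scan": dim soft-tissue background with a brighter bone block.
volume = rng.normal(0.2, 0.05, (32, 32, 32))
volume[8:24, 8:24, 8:24] += 0.6

# Crude intensity-threshold segmentation of the bright structure.
mask = volume > 0.5

# A simple quantitative readout, e.g. the segmented volume fraction.
bone_fraction = mask.mean()
```

Real pipelines replace the fixed threshold with adaptive methods and the readout with biomechanical models, but the structure - segment, then quantify - is the same.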

This was a first-year grad-student class, I was coming at it totally naive with some Matlab scripts, and we managed to get usable results in weeks.

Here's a sample of that professor's research: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2926228/

While I am not in the camp of "machines will replace doctors", I think radiology and other similar fields are in for a sea-change in technique and a large reduction in the use of human judgement.


I come from a family of people in the medical professions, and they've all seen reports of how _everything_ is going to change in their fields because some new computer program can do X...

To which my father usually mutters something like: "Why the fuck are they wasting their time with that? Can't they fix the fucking medical billing system instead?"

Most of the medical professionals I know echo similar sentiments.


Meh, it seems the majority of older folks say the same BS.


Telemedicine has a lot of regulatory hurdles to get to market, but initiatives like this are extremely exciting, since they can likely be taken to market in a way that explicitly clarifies that it's not a diagnostic: it's simply a low-barrier way to actually get that mole you've got looked at. If you don't have health insurance, you could actually get an idea of how critical it is to get in to see a doctor. That said, the obvious concern would be the extreme cost of a false negative. The evidence suggests the algorithm is no more likely to produce one than a doctor, but the public reaction to single accidents caused by self-driving cars, even when the overall rates are far lower, makes it pretty clear that the bar for success for non-humans is substantially higher than it is for humans.


> That said, the obvious concern would be the extreme cost of a false negative

Probably not. People won't go to a doctor unless they sense something is wrong with their body, so this is actually filling a void.

On the other hand, false positives will cause a bigger problem, because a swarm of people will be triggered by the fear of cancer, and hospitals might not be able to handle the sudden surge of traffic for treatment.
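The false-positive worry is a base-rate effect: for a rare condition, even a fairly specific test floods the system with healthy people who screen positive. A quick sketch with purely illustrative numbers (none of these figures come from the paper):

```python
def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Expected counts when screening a population for a rare condition."""
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * sensitivity          # sick people correctly flagged
    false_neg = sick - true_pos            # sick people missed
    false_pos = healthy * (1 - specificity)  # healthy people flagged anyway
    return true_pos, false_neg, false_pos

# Assumed numbers: 1M users, 1% prevalence, 95% sensitivity, 90% specificity.
tp, fn, fp = screening_outcomes(1_000_000, 0.01, 0.95, 0.90)
# Roughly 99,000 false positives vs 9,500 true positives: the surge of
# worried-but-healthy people dwarfs the real cases by about 10 to 1.
```

Whether hospitals can absorb that surge depends entirely on the specificity achieved at scale, which is why the operating threshold matters as much as headline accuracy.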


I think I agree with you from a policy perspective. However, the cost of a false negative has major PR implications (just wait for the first "I tried this algorithm and it misdiagnosed what turned out to be cancer" story). While I totally agree that those stories would be entirely unfair when looking at rates, that's not how it would play out in the media response (and unfortunately, the regulatory response).


Are people allowed to practice medicine without a license if they "explicitly clarify that it's not a diagnostic, it's simply a low barrier way to actually get that mole you've got looked at"?

Whatever the law is on that, I think it would apply equally to a person or company providing this service via deep learning.


WebMD essentially does this, except you diagnose your own symptoms.


In my opinion the way to stage these technologies is not to blitz toward a fully cyborg doctor replacement, but to bolster the capabilities of the doctor with new technology - similar to how calculators did not replace mathematicians (despite historical headlines suggesting this would happen).

Giving a doctor the ability to get a "second opinion" fast and cheaply for a patient is a large boon to medicine, and shouldn't be underestimated. It allows the doctor to deal with all the nuance that limited automated tools cannot, and gives the MD the ability to check themselves against the computer. If the MD finds themselves disagreeing with it on something like a skin condition, the feedback can both improve the doctor's practice and provide bug reports for the code and databases used to train the AI.


I wouldn't be surprised at tasks that involve image recognition being automated first, and that includes dermatology (visual inspection) and pathology. In fact, I wouldn't be surprised if CNNs were better at pathology: every time I looked at microscope slides, there was so much "visual clutter" in a typical tissue specimen that I'm sure I was missing a ton of information on the slide.


This is going to be part of a greater trend of automation starting to affect fields considered to be white collar and paths to prosperity. I think the same is going to happen with financial analysts, entry-level lawyers etc. It'll be interesting to see the political response, especially given how charged the atmosphere has become around "preserving" jobs.


A significant finding, to be sure. But like the paper itself says:

Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs.

What they achieved was an algorithm to classify skin lesions - not a "diagnosis" of the overarching pathology, i.e. skin cancer.
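"Trained end-to-end from images directly, using only pixels and disease labels" means the network learns its own features via stacked convolutions rather than hand-engineered ones. The paper's actual model is a large pretrained network, so this is only a toy illustration of the single building block a CNN stacks and learns the weights for:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and sum elementwise products at each position. A CNN layer applies
    many such kernels, with the kernel weights learned from the labels."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out
```

Stacking dozens of these layers (with nonlinearities and pooling between them) is what lets the network go from raw pixels to a lesion-class prediction with no hand-coded features in between.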


Skin conditions are one of the few modalities where ML makes deep sense as a diagnostic.

I think pharmacovigilance is the other area, based on my interactions with folks who work in ML at pharma and healthcare provider companies.

Disclaimer: i run mlweekly.com and help at semantic.md


What about 3D aspects? The word "bump" is used in most descriptions I've seen online, although I don't know if that is something the doctors consider or just something that's enough to suggest a visit to the doctor.


These new methods appear best suited to the pet world first, as the ethical and legal issues will be a bit less stringent than in a human context. Maybe that is where things will start to change.


My (old) dermatologist could spot skin cancer from across the room. I asked him how he could do that, he said he's seen a million of them. It's the same idea as "deep learning".


I truly believe that with smart algorithms and big data we can change the way we live. With smart medicine, proper diagnoses, and early detection of disease, we can improve our lives.


On a related note, does anyone know how IBM's Watson health is doing? They've been developing it for years but I haven't heard much about their results.


Even though diagnosis is only one piece of the puzzle, what I would hope is that this becomes part of the answer to the high cost of healthcare.


so, basically, while I'm taking a shower the HAL ... err ... Google Home cameras in the shower would check for mole development, blood O2 from skin color, vascular health from the reaction to water temperature, pulse from visible pulsations, mental and other conditions from eye movements, etc...


Only as well? Not faster and cheaper?


Basic income can't come soon enough.


The problem with basic income is that it's kind of like inflation: if everyone has money and supply stays the same, prices go up and we are back at square one.

What we need is to dramatically increase supply and make unit costs much lower.

Phones, cars, and most commodities depreciate because the next version is usually cheaper and better. Land doesn't grow, so its price goes up and up.

The biggest living costs are housing, transport, education, and healthcare.


Basic income is a pipe dream. We need a new economy.


You need an economy resilient to rapid economic change.


Quick, someone tell me why doctors won't be obsolete in 20 years!

Geoffrey Hinton believes that we should stop training radiologists now:

https://twitter.com/withfries2/status/791720748624797697?lan...


That is the kind of prediction that people will look back on and say "I can't believe the hubris." MYCIN had better diagnostic performance than infectious disease experts by 1979 (https://jamanetwork.com/journals/jama/article-abstract/36660...), and in the 1980s the question was "How soon will expert systems replace human doctors?" Putting a number like "5 years!" on it is asking for disappointment.



