
Deep learning algorithm diagnoses skin cancer as well as seasoned dermatologists - capocannoniere
http://news.stanford.edu/2017/01/25/artificial-intelligence-used-identify-skin-cancer/
======
rscho
As MDs, I think it is very clear that all of us who understand even the
slightest bit about computers and tech can see that machine learning is the way
to go. Medicine is ideally suited to ML, and in time it will absolutely shine
in that domain.

Now, for people eagerly awaiting the MDs' downfall, I think you are rushing
things a bit. We all tend to believe in what we do, and I concur that expert
systems will replace doctors' judgment in well-defined, selected applications
in the decade to come. But thinking that the whole profession will be hit as
hard as factory workers, with lower wages and supervision-only roles, is not
realistic. What will be lacking is the automation of data collection: you
vastly underestimate the technical, legal, and ethical difficulties in getting
the appropriate feedback to make ML appliances effective. I firmly believe in
reinforcement learning, but as long as the feedback loop remains insufficient,
doctors will prevail, highly-paid jerks or not.

I myself am an anesthesiologist, a profession most people (myself included)
think of as a perfect use case for these technologies, and wonder why we
haven't been replaced already. The reality is that the job is currently far
beyond what an isolated system could do. We already have trouble making cars
stay in the right lane in non-standard settings. I hope people realize that in
each and every medical field, the number and complexity of factors to control
is far greater than staying in the right lane.

People who drive the medical system have no sense of technology. They cannot
even envision the requirements for machines to become efficient in medicine.
That is why we are seeing quite a lot of efficient isolated systems pop up,
but we won't be seeing fully integrated, doctor-replacement systems for a long
time. This will require a new generation of clinical practitioners, who will
understand how to make the field truly available to machine efficiency.

~~~
scarmig
My issue with doctors:

Recently, my dad was sick with a pretty bad cough. Like, so bad that he
couldn't speak without coughing. He fainted twice from minute-long coughing
fits, one of those times hitting his head on the stove on the way down,
leaving a deep cut and blood everywhere.

He went to at least three different doctors. He got a scan of his chest.
Everything looked clear, and all of the doctors were stumped. Things were
pretty bad.

I mentioned this to a UCSF resident friend, and her immediate response was
"Oh, is he on <some blood pressure medication I forget the name of>?" I was
like, uh, let me see. Called my mom, she checked, and, lo and behold, he was
on it. So his doctors took him off it and within a week he was better.

This coughing wasn't some obscure side effect of the medication that she knew
through sheer brilliance: it's a side effect that's been widely known since
the 1970s. Hell, it was on the drug's Wikipedia page.

So there are a couple of morals you could take from this. One would be: wow,
doctors are smart to be able to diagnose an issue based on a single symptom
and some reasonable assumptions about a patient's background! The other is
that the median doctor is pretty worthless; that spending tens of thousands of
dollars gives you no guarantee you'll see someone competent; and that a
medical system that relies on you grabbing drinks with a UCSF resident to get
good results is fundamentally broken.

Machine learning and expert systems don't have to be as good as the best
doctors to be valuable. They don't even need to be better than competent
doctors. They just need to deliver a baseline of competence to provide a huge
amount of value.

~~~
Itsdijital
I just want to add something really important to this.

ALWAYS READ EVERYTHING YOU CAN ABOUT DRUGS YOU ARE PRESCRIBED!

Sorry for the all caps, but it is super important. Not that your dad is in the
wrong; lots of people have justified (to a degree) trust in their doctors.
However, doctors are people, and by that alone they aren't perfect.

A few years ago my doctor prescribed me an antibiotic for an ongoing illness I
had. I read the entire pamphlet for it and did some reading online about it,
all before taking it. It turns out it can cause seizures if it interacts with
propylene glycol, one of the main ingredients in e-cig juice, which I use
daily. I had told my doctor I use an e-cig.

Really I cannot stress how important it is to be knowledgeable about the drugs
you are taking.

~~~
dbbolton
>ALWAYS READ EVERYTHING YOU CAN ABOUT DRUGS YOU ARE PRESCRIBED!

I'm not trying to undermine your point entirely, but there is a flip side.

I can't tell you how many times I have seen a patient start a medication, then
come back to the office within 48 hours because they coincidentally have every
side effect that is listed in the pharmacy's information sheet or that they
looked up online. The vast majority of these side effects are benign, present
with next to no pertinent physical exam findings, and can't be definitively
tied to the new med (like upset stomach, fatigue, headache, etc.).

 _Then_ they will start listing that medication as one of their "allergies",
and if the nurse/doctor documenting doesn't dutifully probe what type of
"allergic reaction" they had, they may end up not being prescribed that med in
the future when it really is the drug of choice. A little nausea is a small
price to pay if it kills a potentially life-threatening infection.

Also, I'm skeptical about the seizure risk. The thing about side effects is
that they are supposed to be stratified according to risk. Doctors are
typically aware of these risks, but patients aren't. So if your drug is listed
as causing "headache, nausea, and seizures", there may have only been one
patient out of millions who had a seizure while 50% experienced headache, yet
the handout probably won't tell you that.

But even if it is a notable risk, I would be surprised if the propylene glycol
you inhale from an e-cig could accumulate to a high enough level in the
bloodstream to cause drug interactions, although I admit adequate research on
the subject is lacking.

My advice would be trust your doctor first. If you don't trust your doctor,
start seeing a doctor that you do trust. Then if you have a significant
adverse reaction to a medication, _talk to your doctor about it_. Quite often
they know something that you are not going to find by spending a few minutes
on the internet.

As a side note, a good history includes asking about many habits. A lot of
healthcare providers are guilty of simply asking "Do you smoke, drink, or use
drugs?", but ideally the smoking aspect should be phrased as "Do you use any
tobacco or nicotine products?". Patients usually won't read your mind and
volunteer that kind of information. They will tend to give yes/no answers, so
direct and specific questions are important.

~~~
throwaway729
_> yet the handout probably won't tell you that._

That seems like a major problem. Is there any reason that more detailed
information can't be included? Mathematical literacy may be a problem, but
that doesn't mean that there aren't millions upon millions of mathematically
and scientifically literate consumers who could use this information
effectively.

~~~
dbbolton
You would probably have to ask a pharmacist, but from the clinical side I can
tell you that most _prepared_ health information we can give to patients has
to be very bare-bones and comprehensible to essentially everyone at or above
an 8th-grade education level. I assume this is because publishers consider it
too resource-intensive to produce multiple versions of the same information,
and because people with higher education typically have the initiative and
means to ask their doctor the right questions or research the information
themselves.

I'm not trying to justify any of this, but that's how it is.

Not sure if it will be helpful in the future, but I can tell you that
descriptors used with side effects follow a standard convention:

    
    
    very common: > 10%
    common:      1% - 10%
    uncommon:    0.1% - 1%
    rare:        0.01% - 0.1%
    very rare:   < 0.01%
    
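The convention above can be sketched as a small lookup function. This is my own illustration of the cut-offs listed, not anything from a real pharmacology library; the function name and the boundary handling at exactly 1%, 0.1%, etc. are my assumptions.

```python
def frequency_descriptor(incidence):
    """Map a side-effect incidence rate (a fraction between 0 and 1)
    to its conventional frequency descriptor, per the cut-offs above."""
    if incidence > 0.10:
        return "very common"
    elif incidence >= 0.01:
        return "common"
    elif incidence >= 0.001:
        return "uncommon"
    elif incidence >= 0.0001:
        return "rare"
    else:
        return "very rare"

# A 50% headache rate and a one-in-a-million seizure can both appear on the
# same handout, but they carry very different labels:
print(frequency_descriptor(0.5))   # "very common"
print(frequency_descriptor(1e-6))  # "very rare"
```

The point is that the label alone compresses away several orders of magnitude of difference in risk.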

But you will probably never know the exact origin of these figures (like how
many patients were studied, what populations were included, how tightly the
study was controlled, whether adverse effects were self-reported, etc.)
without doing some intense searching. And even if you did, I doubt it would
have a significant impact on your healthcare. I don't want to go on a tangent
about the nuances of pharmacology in clinical medicine, so I'll just circle
back to my point that you should trust your doctor, or else find a new one
that you do trust.

------
brandonb
This is the second major study applying deep learning to medicine, after
Google Brain's paper in JAMA in December, and there are several more in the
pipeline.

If you've developed expertise in deep learning and want to apply your skills
to healthcare in a startup... please email me: brandon@cardiogr.am. My co-
founder and I are ex-Google machine learning engineers, and we've published
work at a NIPS workshop showing you can detect abnormal heart rhythms, high
blood pressure, and even diabetes from wearable data alone. We're working on
medical journal publications now based on an N=10,000 study with UCSF
Cardiology.

Your skills can really make a difference in people's lives. The time is now.

~~~
kevinalexbrown
Before neural networks got deep, there was a lot of very impressive work
applying neural networks to medicine. Example:

[http://suzukilab.uchicago.edu/research.htm](http://suzukilab.uchicago.edu/research.htm)

IIRC they were outperforming the average radiologist on some tasks 10 years
ago.

~~~
selestify
Why haven't they replaced the average radiologist yet then?

~~~
anurag
Healthcare plans do not reimburse machines yet.

~~~
jgautsch
There may be incentive soon enough, via ACOs and bundled payment programs.
When dollars saved go to the bottom line, folks start trying to save dollars.

~~~
cestith
If we could quickly get a preliminary diagnosis from a machine and only then,
if necessary, talk to a specialist doctor, the cost savings could be
outstanding. Talk about Affordable Care. Maybe an act of Congress could move
this along.

~~~
coredog64
It all goes well until the machines terminate Ms. Butler's pregnancy.

------
iamleppert
Honestly, I can't wait for deep learning and computational methods to dethrone
doctors and upend the medical profession. In the next five years, expect a
computer to be able to predict most diseases a lot better than doctors can --
and with none of the attitude, high cost, or inconvenience.

Mind you I'm not talking about researchers, who will always have a job. I'm
talking about practitioners. I've had a medical condition from birth and I've
had to deal with my share of doctors. Outside of the insurance system, they
are easily the most unpleasant part of the whole ordeal. There are some gems,
but most you will encounter are pompous, arrogant, and "commanding" -- when
they enter a room, they are flanked by "residents" and "assistants" and
generally give off an air of superiority, which really comes down to their
rote experience. The whole thing comes off more as a performance than anything
else. Worse, they often get mad when you question them, ask them to explain
themselves, or ask how they arrived at a conclusion.

Good luck finding work when an algorithm can do your job better than you. It's
only a matter of time.

~~~
zwieback
I feel the exact opposite: in my treatment for prostate cancer, the human
interaction with doctors was a hugely positive experience for me.
Interpretation of biopsies and inspection of cancer images were part of that
process, and I'm sure machine vision algorithms could help in this area.
However, even if the classification of the cancer cells improves, the role of
the doctor guiding the patient through the right treatment process still
remains something I would not want to turn over to an algorithm.

I have also encountered doctors I did not like but fortunately for me I had a
choice where to go. Maybe machine learning should focus on weeding out
unpopular practitioners instead.

~~~
transcranial
I think medicine will over time morph from a field generally perceived as an
intellectual one to a largely humanistic one, like nursing. Most people,
especially the HN crowd, vastly underestimate the importance of the human
touch. A doctor who possesses it can make a huge difference, but sadly they
are outnumbered by those who do not.

~~~
outlace
Medicine isn't really that intellectual. It's mostly based on vast amounts of
knowledge acquired via rote memorization and repetitive experience. It's not
like physics or math, which require actual creativity and intellectual rigor.
(I'm a 3rd-year med student.)

~~~
monkmartinez
Especially as we become evidence based vs. eminence based.

------
romaniv
Systems that outperform doctors in some specific area of diagnostics aren't
new. One of the earliest examples is Mycin [1], which was also developed at
Stanford, but around forty-something years ago. It never went into production
because of practical issues that had nothing to do with its accuracy. It's
interesting that all of those "practical issues" are no longer relevant, and
yet we still don't see widespread use of similar software.

[1] -
[https://en.wikipedia.org/wiki/Mycin](https://en.wikipedia.org/wiki/Mycin)

~~~
leereeves
I hope someday soon we'll develop systems that allow us to "ask" a ML
algorithm what factors led to a decision (diagnosis in this case).

It would be interesting to compare that with the current state of the art in
the field, and see if ML can contribute new scientific/medical theory as well.
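Crude versions of this "asking" already exist for tabular inputs. One model-agnostic approach is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below is a toy illustration with an invented stand-in model and synthetic data, not anything from the paper under discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 fully determines the label, feature 1 is noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    """A stand-in 'trained' classifier: it thresholds feature 0."""
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10):
    """For each feature, return the mean accuracy drop when that
    feature's column is shuffled, breaking its link to the labels."""
    base = np.mean(model(X) == y)
    drops = []
    for j in range(X.shape[1]):
        d = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            d.append(base - np.mean(model(Xp) == y))
        drops.append(float(np.mean(d)))
    return drops

print(permutation_importance(model, X, y))
```

Shuffling feature 0 destroys the model's accuracy while shuffling feature 1 changes nothing, so the importances reveal which input actually drove the decisions. For image models like the skin-cancer CNN, the analogues are saliency and occlusion maps rather than per-feature shuffles.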

~~~
derefr
I'm not so sure we could ask the algorithm itself, in any literal sense. An
algorithm trained to introspect might actually be _wrong_ about its own
"memories" or "motives"—just like a human might! (Though, likely, without the
penchant toward political rationalizing.)

This is most simply because, whatever the algorithm is trained to do, it's
certainly trained better to _do that thing_ than to introspect. Introspection
is a separate skill!

But there's also a more insidious element: introspection (in humans, at least)
tends to result in the creation of a lot of "personal concepts" that don't map
to well-known common concepts. An introspection on _one mind_ must necessarily
result in a taxonomy that contains terms for the tiny, unique features that
only that mind has—which makes it very, very hard to communicate one's
personal introspections to others. (You might call this a kind of
_overfitting_ : the introspection capability becomes optimized for that one
mind, but ceases to translate well to features in other minds—like human
minds.)

I'd place a much stronger bet on our ability to train one AI to "stare at the
brain" of other AIs as they make decisions [tons of them, as its training
data], with the expected output being a general theory on common AI features
responsible for the given calculation step. A computer psychologist, of sorts.
:)

Of course, you could _include_ such a pre-trained model as a "module"
alongside the AI itself, and call the combined system "one AI" if you like.

~~~
leereeves
> Introspection is a separate skill!

Indeed it is. It's something that a second algorithm (perhaps ML, perhaps not)
would do.

And this is beginning to remind me of Society of Mind.

[https://en.wikipedia.org/wiki/Society_of_Mind](https://en.wikipedia.org/wiki/Society_of_Mind)

------
btilly
This reminds me of a talk that I saw about wavelet based algorithms in the
1990s for detecting tumors in mammograms.

The algorithms found most of the tumors that humans had missed, with similar
false positive rates. BUT humans refused to work with the software!

The problem was that the software was very, very good at catching tumors in
the easy to read areas of the breast, and had lots of false positives in more
complicated areas. Humans spent most of their effort on the more complicated
areas. Every tumor that the software found that the human didn't simply felt
like the human hadn't paid attention - it was obvious once you looked at it.
The mistakes felt like stupid typos do to a programmer. But the software
constantly screwed up where you needed skill. The result was that humans
quickly learned not to trust the software.

~~~
antognini
This is very true and directly related to my research. (I work at a company
developing software to interpret EEG data.) There's a huge difference between
an algorithm with a low error rate that makes mistakes seemingly at random vs.
an algorithm with a somewhat higher error rate whose mistakes are at least
comprehensible. A doctor is much more likely to trust the latter than the
former. Almost as important as developing a detector with a low false positive
rate is developing a detector that can figure out when the problem is too hard
so it knows not to even try. (And it seems that this problem is just as
hard.)

One of the things we do is perform a Turing test of sorts where we test if the
performance of our detector is statistically indistinguishable from a human.
(In fact, we actually have a contest running right now where we give you 10
EEG records, some marked by humans, some marked by our software, and if you
can figure out which were marked by which we'll donate $1000 to the American
Epilepsy Society.)

------
transcranial
Unfortunately the paper is in Nature, paywalled, rather than on arXiv, and the
data/code/model/weights are inaccessible. While publishing in
Science/Nature/NEJM/JAMA is definitely the right approach for deep learning to
gain validity in the medical community, faster progress could be made with a
more open platform, with constant, real-time validation across more data,
medical centers, and clinics. The reason progress in DL has been so
breathtaking is in no small part due to the culture of openness and sharing.

~~~
rikelmens
sci-hub.cc is your friend.

~~~
alexmlamb2
That's true - but the principle still matters and scihub may not be around
forever.

------
nbmh
This is interesting and impressive work; however, I noticed that they compared
the algorithm's performance to dermatologists _looking at a photo_ of a skin
lesion. This seems like a weak comparison, because in practice a dermatologist
would be looking directly at the patient and would benefit from a 3D view,
touch, pain reception, etc. I realize that this was the only feasible way to
conduct the study, but it still doesn't show that an algorithm looking at a
photo can match the performance of a dermatologist examining a patient
directly.

~~~
dsmithn
Respectfully disagree. Telemedicine is going to be an important aspect of
medicine, Dermatology in particular.

Rural and underdeveloped areas are going to be the largest market, IMO.
Everyone can access a smartphone, but not everyone has the luxury of seeing a
doctor in person, and for those who do, the time and travel costs can be
significant.

Disclosure, I work for an EHR startup with a Telemedicine product.

------
doesnotexist
Eric Topol puts this up there as the most impressive AI/medicine publication
to date.
[https://twitter.com/EricTopol/status/824318469873111040](https://twitter.com/EricTopol/status/824318469873111040)

The paper ends with "deep learning is agnostic to the type of image data used
and could be adapted to other specialties, including ophthalmology,
otolaryngology, radiology and pathology."

------
ThomPete
As someone with two melanomas under my belt (and more than 1,000 moles), what
I really want is the ability to do a mass scan of my body, going further down
to the cellular level rather than just looking at the moles on the surface.

I am lucky enough to have Memorial Sloan Kettering as my hospital, with none
other than Dr. Marghoob, one of the leading experts, and I actually have a
scan of my body made with 50 or so high-definition cameras (I am literally a
3D model in blue speedos with a white net on my head).

They have a new system where they can look at the cellular level without doing
a biopsy, and they actually found my melanoma before they did the biopsy (i.e.
they knew it was melanoma before the biopsy), but it's a really cumbersome
process, and it took six experts studying and working to position that laser
properly.

So the real challenge today is how do we get the data into the system.

------
lucidrains
This is why we need a platform for these models asap. I would totally download
this app today and use it regardless of what the FDA thinks.

~~~
komali2
Are you sure about that? Playing devil's advocate here: we have plenty of
examples of scientists jumping the gun without peer review or a rigorous
follow-up testing process, _especially_ when it comes to medicine. The
Alzheimer's 40Hz flickering-light example is a pretty good one: some
scientists got it working in mice, but we don't know what side effects it
could have on humans. Maybe none, and that'd be great! Maybe it causes
schizophrenia, who knows? We have no way of knowing yet! Just very _very_
educated guesses.

I say when it comes to medicine, err on the side of caution. Obviously a
diagnosis app isn't too dangerous: worst-case scenario, the app gives you a
positive diagnosis, so you go to the doctor, they take a sample, and find the
growth to not be cancerous. No harm, no foul. But other ideas could be more
dangerous.

~~~
fpgaminer
Worst case scenario would be giving a false negative, wouldn't it? And that's
the danger, not a false positive (though false positives may lead to higher
health costs due to increased doctor visits).

~~~
mrob
Removing moles isn't risk free. It's minor surgery but it's still surgery, and
any surgery carries risk of infection. With increasing prevalence of
antibiotic resistance this is a serious concern.

------
calebgilbert
This is not hard to imagine at all. I know that there must be some absolutely
excellent doctors out there, but I don't trust the bottom 80% of doctors much
at all, and honestly would rather have an algorithm most of the time,
especially starting off. The lack of robust consumer level 'medical doctor
apps' is one of the biggest mysteries to me.

------
rawnlq
There's an app used by over a million doctors called "Figure 1" that allows
them to share medical images for crowdsourced diagnosis and treatment of rare
cases.

I wonder when we will get to a point where machine learning can help there?

[1][https://figure1.com/medical-cases](https://figure1.com/medical-cases)

------
ChuckMcM
I read the headline and wondered how ML could learn the difference between a
new dermatologist and a seasoned one. Cancer I get; it looks totally different
than non-cancerous skin :)

That said, pulling this off is one of the best ML applications to date.
Recognizing cats or scenery doesn't seem nearly as useful.

------
lscholten
Great results! Deep learning has been gaining traction in other areas of
medicine as well.

One such task is lung-cancer nodule detection in CT scans. A paper I recently
co-authored applied many different architectures to this task and achieved
very good results.
([https://arxiv.org/pdf/1612.08012.pdf](https://arxiv.org/pdf/1612.08012.pdf))

The best combination of systems detected cancer nodules which were not even
found by four experienced thoracic radiologists.

~~~
michaf
Do you participate in the current Data Science Bowl regarding CT lung cancer
detection [0]? The prize pool of $1,000,000 seems quite attractive, especially
if you recently developed new state-of-the-art CT lung cancer detection ML
models. The only somewhat strange aspect of this competition (at least to me)
is that it does not include locality annotations: they only provide labels of
cancer/no cancer per patient...

[0] [https://www.kaggle.com/c/data-science-bowl-2017](https://www.kaggle.com/c/data-science-bowl-2017)

------
sungam
Dermatologist here. Most skin cancer diagnosis is relatively straightforward,
and if a lesion is suspicious it will require a biopsy to establish the
subtype of the cancer and plan further treatment. There is no reason why this
initial visual diagnosis cannot be performed at the same level as a
dermatologist by a machine, or indeed by a non-doctor trained intensively for
a relatively short period to interpret photographs.

The difficulty is two-fold. Firstly, liability: a dermatologist aims not to
miss a single case of melanoma among the tens of thousands of patients seen
over their career. If this algorithm is used widely in millions of patients,
then either the sensitivity will have to be higher and more biopsies
performed, or there will have to be an acceptable rate of missed melanoma
diagnoses.

Secondly, edge cases such as moles that are slightly atypical. In these
scenarios there is no way that I would be comfortable making an assessment
from a photograph. Now, of course, a machine could also gather further
information via methods such as in vivo confocal microscopy, but in that case
the cost savings are likely to be negligible.
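The scale of that liability trade-off is easy to sketch. The numbers below are made-up but plausible assumptions (1,000,000 patients screened, 0.5% melanoma prevalence, two hypothetical sensitivity/specificity operating points); nothing here comes from the paper.

```python
# Illustrative assumptions only: screening volume and prevalence are invented.
patients = 1_000_000
prevalence = 0.005
melanomas = int(patients * prevalence)   # 5,000 true melanomas
benign = patients - melanomas            # 995,000 benign lesions

# Two hypothetical operating points: (sensitivity, specificity).
for sens, spec in [(0.95, 0.90), (0.99, 0.80)]:
    missed = melanomas * (1 - sens)       # false negatives (missed cancers)
    extra_biopsies = benign * (1 - spec)  # false positives sent to biopsy
    print(f"sens={sens:.2f}, spec={spec:.2f}: "
          f"{missed:.0f} missed melanomas, {extra_biopsies:.0f} extra biopsies")
```

Under these made-up numbers, pushing sensitivity from 95% to 99% cuts missed cancers fivefold, but at a lower specificity it roughly doubles the unnecessary biopsies, which is exactly the trade-off described above.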

------
hughdbrown
Can someone clarify for me how the training and testing sets were constructed?
One problem is that cancerous and benign skin are unbalanced in a
representative population. How was this imbalance handled in testing? How was
the testing set constructed? And so on.

~~~
fantispug
For each of the three tests, the training sets were classified by biopsy;
images were randomly selected, and blurry images were filtered out by a
separate dermatologist. The benign:malignant ratios were 70:65, 97:33, and
40:71 respectively.

These close-to-even ratios make for a more powerful test of classification.
The fact that these test samples have biopsy data means, I would assume, that
some dermatologist thought they might be malignant (unnecessary medical
procedures are unethical). This might bias the samples towards cases that are
difficult for humans to diagnose.

Separating these into binary classifications of specific tumor types makes
the task easier than classifying among every possible tumor type (as a
dermatologist does).

Still, the claims this paper makes are very promising. A lot of the training
data was classified by dermatologists, not biopsy. Using more biopsy data
could lead to even better classification, as could improvements to the
model.
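To see why the near-even ratios matter, compare them against the accuracy a trivial "always guess the majority class" baseline would score on each test set. A quick sketch using the ratios quoted above (the "test 1/2/3" labels are mine):

```python
# Benign:malignant counts for the three test sets quoted above.
ratios = {"test 1": (70, 65), "test 2": (97, 33), "test 3": (40, 71)}

for name, (benign, malignant) in ratios.items():
    # Accuracy of always predicting whichever class is more frequent.
    baseline = max(benign, malignant) / (benign + malignant)
    print(f"{name}: majority-class baseline = {baseline:.1%}")
```

The baselines land around 52%, 75%, and 64%. On a representative population, where malignant lesions are rare, the same trivial classifier would score far higher and raw accuracy would be nearly meaningless; the balanced sets keep the baseline low enough that the classifier's actual skill is visible.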

------
kevinalexbrown
One major, major advantage that medical imaging has for deep learning is the
similarity of each data point, especially the 'background data.' For instance,
human brains typically look very similar across individuals (up to scanning
parameter differences), except in the abnormalities - which are often
precisely what you want to highlight.

As an example, I recently trained a neural network to perform a useful
task for our lab using 3 (!) hand-labeled brains.

~~~
jpgvm
It's insane that you were able to get reasonable results with such a tiny
dataset.

I am learning machine learning right now and I find working with datasets with
fewer than 100 examples to be quite difficult.

It seems counterintuitive when you first think about it, but having far more
data actually makes fitting the model much easier, because there is
granularity that can be used to get feedback on adjustments to the structure
of the model.

~~~
kevinalexbrown
It was an image segmentation task, and the features were similar across data
sets. The other thing that made it work well was heavy use of data
augmentation that captured ways in which different data points could
reasonably differ.

There was a really cool medical imaging paper recently that literally just
labeled several 2D _slices_ in a 3D dataset consisting of 3 images and
performed a reasonable segmentation:

[https://arxiv.org/abs/1606.06650](https://arxiv.org/abs/1606.06650)
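A minimal sketch of that kind of augmentation: apply the same random geometric transform to an image and its label mask so the two stay aligned, multiplying a handful of labeled examples into many. The data here is synthetic, and real pipelines add elastic deformations, intensity jitter, etc.; this only shows the image/mask synchronization.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, mask):
    """Return a randomly rotated/flipped copy of (image, mask),
    applying the identical transform to both so labels stay aligned."""
    k = int(rng.integers(0, 4))          # 0-3 quarter-turn rotations
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:               # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

image = rng.normal(size=(64, 64))        # stand-in 2D slice
mask = (image > 0).astype(np.uint8)      # stand-in segmentation labels

aug_img, aug_mask = augment(image, mask)
# The geometry changed identically for both, so the labels still align:
assert np.array_equal(aug_mask, (aug_img > 0).astype(np.uint8))
```

Each call produces a differently transformed but correctly labeled example, which is how a few hand-labeled volumes can be stretched into a usable training set.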

------
habosa
Diagnosis based on image recognition is something machines are already very
good at, even without recent deep learning techniques (although I am sure they
will help).

For instance in college I worked with a radiologist to write an image-
recognition program to identify osteoporosis from 3D MRI data. We used some
super-basic image segmentation algorithms to identify the bounds of the bone
layer that we cared about. From there a model was able to determine mechanical
properties of the bone and therefore make an assessment with much more
granularity than the human eye.

This was a first-year grad student class and I was coming at this totally
naive with some Matlab scripts, and we managed to get usable results in weeks.

Here's a sample of that professor's research:
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2926228/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2926228/)

While I am not in the camp of "machines will replace doctors", I think
radiology and other similar fields are in for a sea-change in technique and a
large reduction in the use of human judgement.

------
drfritznunkie
Coming from a family of people in the medical professions: they've all seen
reports of how _everything_ is going to change in their fields because some
new computer program can do X...

To which my father usually mutters something like: "Why the fuck are they
wasting their time with that? Can't they fix the fucking medical billing
system instead?"

Most of the medical professionals I know echo similar sentiments.

~~~
majkinetor
Meh, the majority of older folks seem to say the same BS.

------
the_watcher
Telemedicine has a lot of regulatory hurdles to get to market, but initiatives
like this are extremely exciting, since they can likely be taken to market in
a way that explicitly clarifies that it's not a diagnostic, just a low-barrier
way to actually get that mole you've got looked at. If you don't have health
insurance, you could actually get an idea of how critical it is to get in to
see a doctor. That said, the obvious concern would be the extreme cost of a
false negative. Even though the evidence suggests the algorithm is no more
likely to produce one than a doctor, the concern over single accidents caused
by self-driving cars, even when the overall rates are far lower, makes it
pretty clear that the public's bar for success for non-humans is substantially
higher than it is for humans.

~~~
eva1984
> That said, the obvious concern would be the extreme cost of a false negative

Probably not. People won't go to the doctor unless they sense something wrong
with their body, so this is actually filling a void.

On the other hand, false positives would cause a bigger problem, because
swarms of people would be triggered by the fear of cancer, and hospitals might
not handle the sudden surge of traffic for treatment.

~~~
the_watcher
I think I agree with you from a policy perspective. However, the cost of a
false negative has major PR implications (just wait for the first "I tried
this algorithm and it misdiagnosed what turned out to be cancer" story). While
I totally agree that those stories would be entirely unfair when looking at
rates, that's not how it would play out in the media response (and,
unfortunately, the regulatory response).

------
jwtadvice
In my opinion the way to stage these technologies is not to blitz toward a
fully cyborg doctor replacement, but to bolster the capabilities of the doctor
with new technology - similar to how calculators did not replace
mathematicians (despite historical headlines suggesting this would happen).

Giving a doctor the ability to offer a patient a "second opinion" quickly and
cheaply is a large boon to medicine, and shouldn't be underestimated. It
allows the doctor to deal with all the nuance that limited automated tools
cannot, and gives the MD the ability to check themselves against the
computer. If the MD finds themselves disagreeing on something like a skin
condition, that feedback can both improve the doctor's service and provide
bug reports for the code and databases used to train the AI.
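The feedback loop described above could be sketched roughly like this. All
names and fields here are illustrative assumptions, not any real system's
API: the idea is just that doctor/model disagreements get queued as candidate
training or bug-report data.

```python
# Hypothetical sketch of the "second opinion" feedback loop: compare the
# MD's call with the model's, and collect disagreements for review.
import json
from dataclasses import dataclass, asdict

@dataclass
class Case:
    case_id: str
    md_label: str      # the doctor's diagnosis
    model_label: str   # the algorithm's "second opinion"
    model_prob: float  # the model's confidence in its label

def review_queue(cases):
    """Return the cases where the doctor and the model disagree."""
    return [c for c in cases if c.md_label != c.model_label]

cases = [
    Case("a1", "benign", "benign", 0.93),
    Case("a2", "malignant", "benign", 0.61),  # disagreement -> review
]

disagreements = review_queue(cases)
for c in disagreements:
    # In a real system this would feed a labeling/retraining pipeline.
    print(json.dumps(asdict(c)))
```

The point of logging both labels plus the model's confidence is that a human
reviewer can later decide whether the doctor or the model was wrong, and the
case becomes a training example either way.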

------
caycep
I wouldn't be surprised to see this in tasks that involve image recognition -
these include dermatology (visual inspection) and pathology. In fact, I
wouldn't be surprised if CNNs were better at pathology: every time I looked
at microscope slides, there was so much "visual clutter" in a typical tissue
specimen that I'm sure I was missing a ton of information on the slide.

------
EternalData
This is going to be part of a greater trend of automation starting to affect
fields considered to be white-collar paths to prosperity. I think the same
is going to happen with financial analysts, entry-level lawyers, etc. It'll be
interesting to see the political response, especially given how charged the
atmosphere has become around "preserving" jobs.

------
kafkaesq
A significant finding, to be sure. But like the paper itself says:

 _Here we demonstrate classification of skin lesions using a single CNN,
trained end-to-end from images directly, using only pixels and disease labels
as inputs._

What they achieved was an algorithm to _classify skin lesions_ \- not to
perform a "diagnosis" of the overarching pathology, i.e. skin cancer.
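The "pixels and disease labels as inputs" phrasing from the paper can be
illustrated with a toy forward pass. This is a minimal sketch, not the
paper's actual model (they used a large pretrained Inception-style network);
the layer sizes, filter counts, and two-class labels below are all
illustrative assumptions.

```python
# Toy CNN forward pass: raw pixels in, probabilities over lesion labels out.
# "Trained end-to-end" means the filters and weights below would be learned
# jointly from (image, label) pairs; here they are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2-D convolution of an (H, W) image with (n, kh, kw) filters."""
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((n, H - kh + 1, W - kw + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = x[i:i + kh, j:j + kw]
            out[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernels, weights):
    """conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.maximum(conv2d(image, kernels), 0.0)  # (n_filters, h, w)
    pooled = feats.mean(axis=(1, 2))                 # (n_filters,)
    return softmax(weights @ pooled)                 # (n_classes,)

image = rng.random((28, 28))           # toy grayscale lesion image
kernels = rng.normal(size=(4, 3, 3))   # 4 "learned" 3x3 filters
weights = rng.normal(size=(2, 4))      # 2 labels: e.g. benign vs. malignant

probs = classify(image, kernels, weights)
print(probs)  # probability distribution over the two lesion labels
```

Note that the output is a distribution over lesion *labels*, which is exactly
the classification-vs-diagnosis distinction the comment above is drawing.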

------
kumarski
Skin conditions are one of the few modalities where ML makes deep sense as a
diagnostic.

I think pharmacovigilance is the other area, based on my interactions with
folks at pharma and healthcare provider companies who work in ML.

Disclaimer: I run mlweekly.com and help at semantic.md

------
bluenose69
What about 3D aspects? The word "bump" is used in most descriptions I've seen
online, although I don't know if that is something the doctors consider or
just something that's enough to suggest a visit to the doctor.

------
yalogin
These new methods appear to be best suited to the pet world first, as the
ethical and legal issues will be a bit less stringent than in a human
context. Maybe that is where things will start to change.

------
WalterBright
My (old) dermatologist could spot skin cancer from across the room. I asked
him how he could do that, and he said he'd seen a million of them. It's the
same idea as "deep learning".

------
james_niro
I truly believe that with smart algorithms and big data we can change the way
we live. With smart medicine, proper diagnoses, and early detection of
disease, we can improve our lives.

------
arikrak
On a related note, does anyone know how IBM's Watson health is doing? They've
been developing it for years but I haven't heard much about their results.

------
sekou
Even though diagnosis is only one piece of the puzzle, what I would hope is
that this becomes part of the answer to the high cost of healthcare.

------
trhway
so, basically while i'm taking shower the HAL ... err ... Google Home cameras
in the shower would check for moles development, blood O2 from the color of
skin, vascular health from the reaction to the water temperature, pulse from
visible pulsations, mental and other conditions from the eyes movements,
etc...

------
kazinator
Only as well? Not faster and cheaper?

------
adamnemecek
Basic income can't come soon enough.

~~~
treehau5_
Basic income is a pipe dream. We need a new economy.

~~~
adamnemecek
You need an economy resilient to rapid economic changes.

------
zxcvvcxz
Quick, someone tell me why doctors won't be obsolete in 20 years!

Geoffrey Hinton believes that we should stop training radiologists _now:_

[https://twitter.com/withfries2/status/791720748624797697?lan...](https://twitter.com/withfries2/status/791720748624797697?lang=en)

~~~
sedachv
That is the kind of prediction that people will look back on and say "I can't
believe the hubris." MYCIN had better diagnostic performance than infectious
disease experts by 1979 ([https://jamanetwork.com/journals/jama/article-
abstract/36660...](https://jamanetwork.com/journals/jama/article-
abstract/366606)) and in the 1980s the question was "How soon will expert
systems replace human doctors?" Putting a number on it like "5 years!" is
asking for disappointment.

