Breast cancer detection in mammography using deep learning approach (arxiv.org)
190 points by rusht 23 days ago | 116 comments



There are several key points that get left out in AI radiology conversations such as this one:

1) Mammograms are not interpreted in a vacuum. Mammograms are usually the first in a long line of tests before a breast cancer or other diagnosis is ultimately made. In fact, it's probably more accurate to refer to mammography as a screening exam that flags patients for biopsy rather than as a diagnostic test for cancer (there are rare exceptions, but overall this point holds).

2) Speaking frankly as a radiologist myself, tests like mammograms aren't even that good in terms of overall diagnosis. That's why ultrasound, tomosynthesis and MRI are often used as supporting evidence and/or alternative exams.

3) There is controversy over the overall utility of mammograms, particularly in the screening context. Radiologists more than anyone would like the sensitivity and specificity of these studies to be higher.

It strikes me that the people who push these "radiology is ripe for disruption" or "AI outperforms radiologists" hyperbolic arguments are clearly people who have never seen the inside of a clinic. I'm sure they love this rhetoric, though, when pitching to VCs or sitting around the conference table coming up with 'breakthrough ideas' to turn into PowerPoints for the other administrators.


We are working in the breast cancer space now, looking at breast cancer and ultrasound (not just from a screening/diagnostic perspective - also treatment planning for medical oncologists and treatment response planning).

We don't use deep learning - we use biophysical models. We hate using the term "AI". This is a very challenging discipline to explain to VCs.

Also, speaking to point 2 here - the "value" of building tools for ultrasound is often dismissed by VCs because "ultrasound isn't used for screening or diagnosis". From our position - practically based within hospitals, collaborating closely with radiologists and medical oncologists who work with ultrasound on a daily basis - this is an insane perspective.

We are very embedded within the hospital and look to understand the clinicians' workflows and decision-making processes first, as well as what's possible given the hurdles involved in data access (which can still be tricky even when you are through IRBs and ethics).

We have found that telling VCs the reality about working with hospitals and doctors can often limit their excitement about your company's prospects. Our success to date has largely been a result of doctors and hospitals who believe in us and see the value in what we are doing. They have put time and effort into collaborating because they are impressed with what we have been able to do, results-wise, by bootstrapping as a small team rather than as a VC-funded shiny startup.

In a weird way I would say that at its best, medtech can be one of the "purest" industries to work in. By this I mean that ultimately your technology works or it doesn't (at least from the medical community's perspective - again, VCs are a different story). There are obviously exceptions to this (Theranos, anyone?) and there are issues around the 510(k) process, but on the whole there is a big price to pay for making unsubstantiated claims (compared to, say, aspirational lifestyle marketing).


> 1) Mammograms are not interpreted in a vacuum. Mammograms are usually the first in a long line of tests before a breast cancer or other diagnosis is ultimately made.

The paper specifically talks about mammography, it does not claim to replace a complete diagnosis.

> 2) Speaking frankly as a radiologist myself, tests like mammograms aren't even that good in terms of overall diagnosis. That's why ultrasound, tomosynthesis and MRI are often used as supporting evidence and/or alternative exams.

From the abstract: "2) successfully extends to digital breast tomosynthesis"

> 3) There is controversy over the overall utility of mammograms, particularly in the screening context.

> It strikes me that the people that push these "radiology is ripe for disruption" [...]

The paper, which I just skimmed, does not read as hyperbolic; for that we'll have to wait for the pop-sci journalists.

OTOH, if one leaves the first-world context, any type of successful diagnostic automation in medicine is a blessing for areas where you simply don't have enough trained medical staff.


I wasn't just commenting on the abstract presented. I was commenting on the comments I see here, as well as comments I see related to similar papers all the time.

I also interact with AI/ML researchers all the time. Most of them are typically some combination of: 1. Poorly informed about the appropriate context and utility of medical imaging. 2. Trying as hard as they can to push AI/ML as the most important technology in medicine today. 3. Pursuing a very task-specific project which they claim is massively generalizable in some (incorrect) way.


There's one point that often comes up when I chat with my MD friends: all of them agree that more information is not strictly better when diagnosing. In fact, most hold that unneeded information is actually worse because it confounds the issue.

My engineering mind just can't come to terms with this. Why wouldn't you collect all information you possibly can? You can always ignore irrelevant data you have, but you cannot consider data that you don't have!

The closest I've come to rationalizing this: diagnosing is a stochastic process so complex (and with a search space so large) that the random noise in extra data is likely to point you in wrong directions. Plus you can always collect more data afterwards if your initial diagnosis turns out to be wrong. This is of course very simplified, but it makes sense.
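To put toy numbers on that rationalization (a back-of-the-envelope sketch of my own; the 5% false-positive rate is made up for illustration): if each of k independent tests on a healthy patient has even a small false-positive rate, the chance of at least one spurious finding grows quickly.

    # P(at least one false positive) across k independent tests,
    # each with an illustrative 5% false-positive rate
    fpr = 0.05
    for k in (1, 5, 10, 20):
        p_any = 1 - (1 - fpr) ** k
        print(f"{k:2d} tests -> {p_any:.0%} chance of at least one false positive")
    # 1 test -> 5%; 20 tests -> ~64%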

However, I just can't turn off my inner voice from screaming "more data is always better". I guess that's why I'm not an MD :)


> You can always ignore irrelevant data

Everything we know about human psychology says you can't.


> Why wouldn't you collect all information you possibly can?

From a decision-theoretic viewpoint, you would certainly want all the information you could get. For humans running a business in today's medicolegal environment, it's a very different set of issues:

1) Collecting information costs time and money.

2) Making good decisions requires the most precious resource of all, which is doctor brain-time. There isn't enough of it to spend on information with little probability of benefit.

3) If you get sued for malpractice, the unneeded data you collected probably would not have helped the patient, but it could help the attorneys arguing that you missed something. Juries struggle to understand the cost of false positives.

Even though there are valid issues here, doctors don't always make the right tradeoff in this regard. Oftentimes, I think it is more an issue of lack of training or experience that leads a doctor to consider a test to be unneeded. In the case of mammography, if doctors spend too much time doing screening and not enough time doing diagnosis, their screening performance degrades, which I think is due to a lack of feedback on their decision making[1].

[1] https://www.ncbi.nlm.nih.gov/pubmed/21343539


It's pretty hard to ignore that extra information though.

You run a screening company. You take people at high risk of lung cancer -- people who smoke a lot and have smoked a lot for many years -- and you provide low dose CT scans of their lungs.

Bob comes in. You scan his lungs and you find spots.

What do you do now?

You're probably going to start providing treatment to Bob. Will this help Bob live longer? Will it improve his quality of life? It might not.

https://blogs.bmj.com/bmjebmspotlight/2019/02/15/understandi...


> What do you do now?

Hopefully get other tests done, to confirm diagnosis.

I'm with GP here. I can't understand this attitude either. Having more information should never make you more wrong. This holds even for uncertain information, because uncertainty can be quantified and tracked (if you're not doing this, then you're doing voodoo, not science).

I can see two reasons why you wouldn't want to gather more information in a medical context. One, many tests carry risk to the patient's health and well-being, so there's no point in doing them if that risk outweighs the expected value of the evidence gathered. Two, I suspect that gathering information also gathers legal obligations and risks for doctors.


> Hopefully get other tests done, to confirm diagnosis

Those other tests involve things like "needle biopsy" -- they shove a needle through your chest into your lung into the suspect tissue to get a sample. This carries risk. We can justify that risk if it saves lives. But this is the problem with screening -- often it doesn't save lives (of course, it depends on the type of screening).

https://www.radiologyinfo.org/en/info.cfm?pg=nlungbiop

> Having more information should never make you more wrong

But you can see how having lots of low-quality information could make someone more wrong -- these are not clear signals, because if they were it wouldn't be a problem. These are almost noise. We're taking data from a large population ("4 in 100 people with this result have this disease") and trying to apply it to the individual, and when we try to get more information we subject this person to more radiation from scans, or invasive procedures, or both. We increase the risk, but don't necessarily save a life.

> there's no point in doing them if that risk outweighs the expected value of the evidence gathered

Yes, this is exactly the balance that doctors are making. They're looking at all cause mortality and seeing if life is saved.


MD here.

In the short term, more information might cause harm because doctors are risk-averse and scared of lawsuits and err toward overbiopsy/overtreatment, many of our treatments aren't as good as we think they are, and all of this makes patients anxious.

In the long term, turning the information firehose on full blast means we can work out which incidental findings are best ignored or pursued and overall more data will help us.

The problem is that it is unethical to do the second in the short term, even if it is the ethical thing to do in the long term.


I also interact with medical doctors all the time. Most of them are typically some combination of: 1. Poorly informed about the applicability (or claims thereof) of CS methods. 2. Trying as hard as they can to push the image of "the human doctor always knows best". 3. Pursuing a university degree and then working in a very narrowly defined field without much relevant further education, believing their now 50-year-old knowledge is set in stone.

I completely get your attitude, I think I agree with you overall and if I was not this lazy I could comb through my bookmarks and find the studies supporting what you said.

But I was just responding to your comment in the context of the paper linked. Which, at least when skimming over it, does not read like what you (IMHO, rightfully) criticize in the broader debate.

And yes, read the first paragraph as a tongue-in-cheek response, we both know that overgeneralizations don't help any debate ;)


Your comments do not seem to be addressing this particular study, but rather seem to be directed at the plethora of poorly-designed/over-hyped ML papers that are published on a regular basis. While that is an issue, this particular paper made no claims about disruption and it had nothing to do with diagnostic performance. It was a screening study that appears to make sound comparisons to the performance of five reasonably-qualified radiologists on screening images.


I think their comment is a very needed bucket of ice water on the multitude of other comments in this thread that are making claims about disruption and diagnostic performance.


Can you point to a particular comment in this thread making a claim about disruption or diagnostic performance?


The next main root comment: 'Of the major specialties, it seems that radiology is the most in danger of significant disruption.'

I wonder how long people think about these sorts of claims before posting them.

Do they really think an AI is more likely to appropriately interpret an MRI scan (and all the anatomic, physics and pathophysiological data contained therein) in the context of a specific clinical work-up than to triage patients the way a family practitioner or ER doctor does?


I'm as skeptical about today's "AI" as you can get, but FWIW, the medical community seems concerned about it. Going by what my acquaintance fresh out of medical school keeps telling me (and what the doctors' chambers' publications seem to be saying), doctors are worried about the impact of AI on their jobs, and they believe the (IMO vastly exaggerated) claims of effectiveness of upcoming solutions. And radiology does seem to be at the forefront of this - whether it's even worth it to start specializing in radiology today is a question seriously considered by graduates.


AI is not "interpreting". It is merely applying a fitted distribution. Just because the fit has many degrees of freedom does not make it magic.


There was no claim in that comment. "It seems" implies a personal opinion of the poster.


I wish there were a way to repost this to almost every medical-breakthrough ML story.


It could be useful as a tool to help a radiologist do their job better, though. I think many of the techniques described in ML papers will be used to enable people to be better at their jobs rather than to replace them. At least until there is AGI.


I wouldn't dispute this (if they finally put something together that isn't horrendously cumbersome, time-consuming and hard to use), but this doesn't justify the 'AI is about to replace radiology' crap I seem to see every time some academic group publishes an AI/ML paper.


I don't think this paper makes that claim; you might be thinking of media interpretations of papers, which are usually the ones making bogus claims. It says that it outperforms 5 out of 5 people, but that is in this specific context. The claims aren't necessarily meant to be generalized that far.


I see that type of hyperbolic claim several times in the comments.



Usually they do not understand that your workflow does not revolve around pattern recognition on pictures. The best use of ML in pattern recognition for radiology is ordering images: you would get the images sorted by their likelihood of containing something unusual.
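A rough sketch of what that ordering could look like (Python; `score_abnormality` is a hypothetical stand-in for a trained model, not a real API):

    import random
    from dataclasses import dataclass

    @dataclass
    class Study:
        accession: str  # study identifier, e.g. from the PACS

    def score_abnormality(study: Study) -> float:
        """Stand-in for a trained classifier's P(abnormal)."""
        return random.random()

    def prioritized_worklist(studies):
        # Highest model suspicion first; every study still gets read.
        return sorted(studies, key=score_abnormality, reverse=True)

    worklist = prioritized_worklist([Study(f"ACC{i}") for i in range(5)])
    print([s.accession for s in worklist])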


How far away do you think AGI is?


a long long time


I guess over 100 years at the very least.


The only thing I see as a potential area of improvement (I do not like the word disruption) in radiology is workflow optimization to reduce the time you spend on administrative tasks. I would be interested to hear your opinion on this.


Absolutely, that's an area where various software innovations could be extremely helpful.

In fact, the area of medicine most in need of 'disruption' imho is healthcare enterprise software. Doctors are literally killing themselves because the interfaces they have to deal with on a daily basis are so appallingly poor.

Of course, the solution is less technological than political: it wouldn't take much to come up with better software than the legacy alternatives, but you'd have a very hard time getting past the entrenched relationship interests of the crony bureaucrats who run hospital administrations.


If I remember correctly, for some women a mammogram can be completely unreadable, while much clearer for others. That seems to make it a very poor method for diagnosis, despite it being recommended almost everywhere.


Don't you think, though, that it would help you? I see this ML treatment of the images as a resolution improvement. It would help doctors see better, maybe not miss the detail that could otherwise be missed.


Of the major specialties, it seems that radiology is the most in danger of significant disruption. First of all, it can be done remotely, so there is a risk, if regulation is lightened, that foreign radiologists will be allowed to read studies at much lower cost. The other issue is that this is something deep learning can rapidly progress in, given there are already a plethora of labeled data sets. For example, every mammogram that is taken has already been labeled normal or abnormal.


I don't know anywhere that has a surfeit of radiologists just sitting around chewing gum. The types of tools described in the paper will enter the market as radiologist-assisting devices, if for no other reason than that the regulatory burden for this is dramatically lower than for a total radiologist replacement. Their introduction will make radiologists' jobs easier, improve scan throughput and reduce waiting times for results. This will free up radiologists to deal with more complex cases. But I doubt it will materially reduce the number of radiology jobs, at least in the next generation.


> The types of tools described in the paper will enter the market as radiologist-assisting devices, if for no other reason than that the regulatory burden for this is dramatically lower than for a total radiologist replacement.

100% agreed.

A significant dimension of the regulatory dynamics is accountability: Who will sign off on and ultimately be responsible for findings from radiologic studies?


Yes, taking responsibility is primarily the function of a doctor. This is actually a unique and rare thing. Everywhere else we see people minimising their responsibility to others, rather than actively taking it on board.

Ceding this responsibility to corporations is a terrible idea. After all, one of the things about a corporate entity is that there isn't really anyone responsible. The GFC and Boeing are recent perfect examples of this. Automating medicine will result in making healthcare more like trying to get tech support. Yes doctors are imperfect, they make mistakes, some definitely shouldn't be working, they are territorial and monopolistic etc etc, but when the system is working you walk into a room with another person who wants to listen to you and help you, and we shouldn't ever try to take that away.


If we could reduce costs by increasing the productivity of radiologists, we'd be doing lots more imaging and getting better outcomes. This should be a win-win.


I think there's a huge potential for patient benefit if this is done correctly, though I worry that we'll just create mostly adequate classifiers based on existing data.

I also think we need to watch out for the human-attention issues illustrated by almost-self-driving cars. If a radiologist gets used to the computer being right 9 times out of 10, they could miss the 10th case, which would usually have been caught.

Overall we need more CV/ML/AI (choose your acronym) in this space, but it definitely requires some care.


There are a ton of deep-learning startups working in this space already. The hardest part for most of them is getting access to training data. Because of that, we'll probably see most of the commercial innovation coming from China and other countries that don't care about privacy.

Doctors make mistakes all the time, and people in the medical profession work odd hours, so there's already a ton of room for error. Having a machine check their work or provide a second opinion will help them a lot.

Having a system that can do the primary screening and prioritize patients before a radiologist is available will save a ton of lives.


Part of the problem is that inadequate training data may not be a barrier to successfully marketing and implementing these systems - the tech is sexy and people are rightly excited.

> Doctors make mistakes all the time and people in the medical profession work odd hours so there's already a ton of room for errors.

That's my point - this is the training data. Unless we're careful, we're just going to approximate what we're already doing.


These systems can be trained on more data than any doctor will see in their lifetime. This means they can pick up on discrepancies between doctors and have room to outperform most doctors.

I'm working on a similar system, but for dietary feedback based on images, and it's amazing to see the model outperform all of the dietitians because it's able to see how all of the coaches respond to similar items.


Is this different from any other illness that became easily testable? Wouldn't this eventually move to something a technician performs, like a blood test?


Technicians don’t interpret blood tests, and radiologists don’t perform most imaging procedures, so I don’t understand the comparison.


Well my thinking is more that the results would be read by your GP and not a specialist.


Technicians perform blood tests, but doctors are the ones who interpret the results.


Right, but GPs can interpret the results and raise red flags as needed. My point is not every test needs a specialist.


The problem is that unlike a blood test, neural networks are not transparent at all. So when an imaging test comes back with "possible malignancy" and a red circle around the highlighted area, a GP doesn't have the experience to agree or disagree with the result. So every time that happens, the next step will be to send the image to a specialist.

And in the case where the machine learning algorithms don't find anything suspicious, the GP again won't have the training or experience to confirm those results. Now if the person was otherwise healthy and this was just a screening that might be enough, but if the GP was suspicious enough to order the test in the first place, it won't be.

What will probably happen is that this kind of technology increases productivity for radiologists, and maybe increases the number of screenings done on healthy people. But it's not going to reduce the demand for radiologists.

Basically the problem is that to be able to interpret the output of a neural network you need to be an expert. What we need is AI that can present a fully formed argument that is easy for a non expert to follow and validate, but we are nowhere near that in most cases.


It's also an area where a scan from a year ago can be re-processed based on new research and find something that wasn't detected on the first processing run -- and still be found years earlier than with current methods. Lots of promise here. As for disrupting the field, practicing radiology will become a rare job for humans, but the demand for basic research in radiology will likely go through the roof so that the AI can be properly improved and expanded.


Most of the people who have been saying this for years have no idea how hospitals or health care in general function. This includes the internal politics of a hospital, the politics of the health care system as a whole, and patterns of medical care/practice. As a surgeon, I can assure you that radiology is not going anywhere. If anything, ML/DL will augment and improve radiology outcomes by assisting radiologists, but there is as yet no type of AI that can perform the role of a radiologist - that is, take a radiologic study, correlate it with a patient's often complicated past medical/surgical history, and provide valuable insight into what's going on with the patient. I'm not a pessimist about this at all. ML/DL will absolutely improve outcomes if implemented properly; however, radiologists are going to be augmented, not replaced. They are, after all, physicians.


That's how all sensible machine learning systems are deployed, as assistive tools with a human in the loop so that they can improve with usage.

This stuff will be used to do initial screening to prioritize cases and provide an initial analysis for the radiologist to confirm. Once they're deployed it won't be too long before they're as good as the top practitioners.

It will be a long time before they completely replace radiologists in America but we'll probably see them on autopilot in third world countries where there's a shortage of doctors and data privacy laws are not as stringent. I've met a Chinese guy doing medicine in the states who claimed to have access to all medical data for a bunch of hospitals back in China.


Sure, the politics of health systems will slow down implementation. However, I have seen many times in my career where technology has made people's job titles more "scarce", as the efficiency gains of new medical software have decreased the time needed for people to do that job.

Radiologists are likely to see lower demand as a result of these technologies and will either A) spend more time on complicated cases or B) be let go. Nobody is saying ALL radiologists are going to be out of a job. Look at dosimetry as a recent example: as software improved, the time to contour per patient decreased, causing many health systems to shrink their dosimetrist staff or offload the responsibilities to the physician's office.

This isn't the first time technology has been applied to healthcare, change will come slowly and eventually people will have to find new fields to work within healthcare.


This is like suggesting that intelligent autocomplete will put office workers out of work.


It's not politically easy to push legislation that increases competition for MDs.

They've been successfully pushing back against scope of practice changes for RAs, PAs, ANPs, PTs and other midlevels, let alone allowing foreign doctors in without an expensive residency medallion. And on the data side, hospitals are doing all they can to keep data as locked down in their silos as possible, because all these AI papers pose a fundamental threat to their bottom line (Medicare and employer reimbursement for physician and related services).

Basically every single decision-making regulatory body on state and Federal levels is full of MDs, with inherent conflict of interest.

Now the new FDA chief is an old-school MD again, and his first order of business was to call a conference studying the potential dangers AI can pose to patient safety.


> They've been successfully pushing back against scope of practice changes for RAs, PAs, ANPs, PTs and other midlevels,

As such changes continue to be signed into law, I assume you mean, overall, unsuccessfully doing so.


> given there are already a plethora of labeled data sets

I had some coworkers who went to RSNA this year. The AI companies are still desperate for data. There was direct discussion of the disappointment in AI in more than one talk.

It'll happen, but like anything else it'll take a lot longer than people were predicting.


Talk to some lawyers. Not happening anytime soon.


Care to elaborate?


False positives are a major source of morbidity in cancer treatment. Biopsies and unnecessary major surgeries are a big problem.


Are you aware of any studies in this regard? I can't find anything conclusive, just handwavy assertions like this[0]:

> If the NNS for a screening test is 5,000, those who advocate screening must make the ethical argument that the large benefits to 1 individual justify the sum of the harms to which 4,999 people are exposed. Whether this holds up to moral scrutiny depends on the nature of the harms.

The problem that I think the medical community misses is that the unwashed masses are generally completely unable to access procedures that aren't recommended by a doctor. So the above assertion - that someone 'advocating for screening' must take on the ethical burden of harms coming from that screening - is, in my estimation, complete and utter bullshit. What I would prefer to see is that screening comes with patient education so they understand the potential risks and inaccuracies and then let 'er rip. If something goes south, it's on them.

[0] https://www.ncbi.nlm.nih.gov/books/NBK223933/


Here's an easy-to-read introduction to over-testing and over-diagnosis. This isn't talking about false positives; it's talking about harm caused by actual positives. https://ebm.bmj.com/content/23/1/1

Gerd Gigerenzer has done a lot of work on this, so his books are useful. Reckoning With Risk or Risk Savvy are good.

> What I would prefer to see is that screening comes with patient education so they understand the potential risks and inaccuracies and then let 'er rip. If something goes south, it's on them.

Informed choice should already be built into all healthcare systems because it's a feature of international human rights laws and patients are usually allowed to decline to have a test done. And there's usually a doctor somewhere who'll perform an unnecessary test if you're prepared to pay.

Communicating risk is difficult.

We know that merely giving people information and letting them take full responsibility won't work. We know it won't work because we already study whether people understand the risks of testing and treatment, and we find a disturbingly large number of people don't, and that includes the HCPs recommending the testing and treatments.

Most people struggle with this question: "A machine has been invented to scan a population for a disease. The machine is good but not perfect. If you have the disease there is a 90% chance it will return positive. If you do not have the disease there is a 1% chance it will return positive. About 1% of the population have the disease. Mr Smith is tested, and the test comes back positive. What's the chance Mr Smith actually has the disease?"
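For anyone who wants to check their answer, it is just Bayes' rule applied to the numbers in the question:

    # P(disease | positive) = P(pos | disease) * P(disease) / P(pos)
    sensitivity = 0.90   # P(positive | disease)
    fpr         = 0.01   # P(positive | no disease)
    prevalence  = 0.01   # P(disease)

    p_pos = sensitivity * prevalence + fpr * (1 - prevalence)
    ppv = sensitivity * prevalence / p_pos
    print(f"P(disease | positive) = {ppv:.1%}")  # ~47.6%, not 90%

So even a positive result from a fairly good test leaves it at roughly a coin flip.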

But the problem of lack of numeracy is more severe: only 20% - 25% of people understand that 0.1% is 1 in 1,000. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3310025/

People do not understand the difference between "absolute" and "relative" risk increases, so we see newspapers reporting a "100% increase in risk from eating X" when the numbers are an increase from 1 in 100,000 to 2 in 100,000 deaths.

https://bestpractice.bmj.com/info/toolkit/practise-ebm/under...

https://www.eufic.org/en/understanding-science/article/absol...


What if human doctors double-check results for false positives? That would help prevent unnecessary procedures. The algorithms would still allow scans to be read more efficiently, since negative results for low-risk patients can be mostly automated away.
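As a rough sketch of that split (the threshold and scores here are invented; in practice the threshold would be tuned for a target sensitivity on a validation set):

    AUTO_CLEAR_THRESHOLD = 0.02  # P(abnormal) below this -> routine follow-up

    def triage(scored_studies):
        """scored_studies: list of (study_id, p_abnormal) pairs."""
        auto_cleared, needs_human_read = [], []
        for study_id, p_abnormal in scored_studies:
            if p_abnormal < AUTO_CLEAR_THRESHOLD:
                auto_cleared.append(study_id)      # mostly automated away
            else:
                needs_human_read.append(study_id)  # radiologist double-checks
        return auto_cleared, needs_human_read

    cleared, queue = triage([("A", 0.01), ("B", 0.40), ("C", 0.005)])
    print(cleared, queue)  # ['A', 'C'] ['B']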


If a human radiologist is double-checking all of the results (for both false positives and false negatives) then it's not so much a disruptive technology, it's another tool in the radiologist's tool belt at that point.


Radiologists get second opinions all the time. I've seen radiology projects where the radiologists gave a different reading almost 40% of the time. (example of this: https://pubs.rsna.org/doi/full/10.1148/radiol.2018171557).

In some cases they also spend a lot of time annotating and measuring small areas of the images and having a model generate suggestions for them would save a ton of time.


You say that like it's a bad thing!

I don't think taking the human all the way out of the loop is a great idea given the state of models we've seen to date (in medicine, anyway).

I think a better direction would be looking at how to make these systems more complementary with human operators, surfacing interesting features or distant connections humans tend to miss and guarding against the big errors anyone might make staring at grayscale images on the night shift.


As in my last sentence, even just mostly automating the cases with negative results for low-risk patients would improve throughput and efficiency quite a bit.

Yes, I agree it should be another, yet increasingly important, tool for a while. Perhaps not until something near AGI is invented will machines be able to completely automate away doctors' jobs.


How about fewer radiologists required to double-check? I would consider that disruptive.


I imagine this is just scratching the surface and before long we'll be doing full-body, 3D scans every few years, and everything from cancers (all of them) to heart disease, to gastrointestinal issues, to things even as mundane as acne and dandruff will be diagnosed by algorithms pulling on a cumulative database of images of healthy and diseased body parts. The real hope is to be able to see into the brain and pick up things like CTE and Alzheimer's years before symptoms manifest.


This is the wrong idea. Detecting pathology early and then treating it is going to be complicated and expensive, and available to only a few people for a long time. We don't actually want some kind of constant AI body surveillance (for CTE?? So an NFL player can take just enough knocks before they retire?).

What we want is prevention, not early detection. Bear in mind that prevention is a proven outcome, because some people never get Alzheimer's, and some people never get cancer. Detecting cancer or Alzheimer's early and then intervening to cure the condition is actually not something that is definitely possible.


While I think I know what you’re getting at, there are of course examples of “detecting pathology early and then treating it” preventing morbidity, mortality, and saving money. This is the basis of all (good) public health cancer screening practices. Think of a screening colonoscopy identifying a local pre-invasive adenocarcinoma and cutting it out (cure).


There are specific examples for high prevalence conditions as you cite, but the original comment was not talking about that, it was talking about a broader approach.

Even taking your example, getting to the point of removing a pre-invasive adenocarcinoma is actually very complicated. You need a motivated patient, you need a healthcare team of specialist doctors or nurse practitioners, pathologists, radiologists, anaesthesiologists, nurses, an endoscopy suite, a recovery area, you need a system to follow up and track these cases etc. The full embedded cost of this undertaking is huge, I don't think it is scalable, and at the same time, will never capture everyone. Randomised studies usually just barely show that these screening approaches are better than not screening, once false positives are accounted for. Furthermore, you have chosen an example where there is a useful pre-screening test (fecal blood tests), a relatively non-invasive diagnostic test (colonoscopy) and a relatively painless intervention (removing a polyp with the scope). This is not broadly applicable. A pancreatic lesion for example is very hard to diagnose for sure, and the intervention is a massive and life changing operation.


We can want what we want but we have to work with what we've got.

Cancer prevention is best, I agree, but it's not always clear that it can be perfectly prevented. So then you have to move to detection and therapeutic intervention to either eliminate the cancer or prevent it from materially impacting the health of the individual. And with cancer specifically, early detection is one of the best predictors of a good outcome. This is not always possible of course, some cancers are very difficult to detect, but some aren't. It just seems like we are making perfect the enemy of good.


What you are saying sounds reasonable, but I still feel very strongly that prevention is the key. Consider that prostate and breast cancer are extremely common, occurring in 1 in 6 men and 1 in 7 women respectively in Western countries. As someone that treats cancer, I can tell you that management of an established cancer, even if detected early, is very complex and highly unpleasant for the patient, if not only psychologically. This is unlikely to change, except it will get more expensive.

The strongest argument is an economic one. If we can develop preventative treatments that we can just give to everyone, it is a much more justifiable expenditure. The risks and benefits are much clearer. This is in contrast to what we do now, which is apply extremely expensive and complex care to a group of people that are diagnosed with cancer. There are only a small number of people in the world that can even get this care. Consider that a preventative therapy is the only type of therapy some people in the world will ever get for their cancer. Papua New Guinea for example has zero oncology specialists in a country of 8 million people.


Hi Gatsky, mind contacting me? My email is in my profile. (I could not see any contact information on yours.) Admittedly, the reason I want to contact you is partially ideological (the rest being technical/medical).


> I still feel very strongly that prevention is the key

In case it didn't come through, I'm in 100% agreement with you on this. I'm learning about TIL and CAR-T therapies now, but in general I feel that cancer vaccines would rival any pharmaceutical on the market in terms of profitability and net decrease in global misery.

Just trying to optimize the time between now and that (IMHO) inevitable day.


There are reasons why this is probably not a good idea.

1. Even the best tests have a chance that the scan will produce a false positive. If we indiscriminately test everyone for every disease under the sun, most of the findings will be false positives.

2. The benefit must outweigh the cost of the test. Both in terms of expense, and intrinsic factors such as radiation exposure.

3. Certain things that show up in a scan would never cause a problem if they were just left alone. In these cases detecting it early has little benefit, and may lead to unnecessary interventions.

4. If a cure is not available, testing for the disease may not always be appropriate, from an ethics point of view.


False positives could just be us not being able to tell that they are false positives; more advanced methods and technologies might be able to tell the difference.


That's true. The problem is that most diseases occur in a very small percentage of the population (less than 1 in 100, or 1 in 1,000), which means that unless the false-positive rate is exceedingly small, testing the population as a whole will still produce a bunch of false positives. This is one of the reasons why doctors only order a test if there is a reasonable suspicion that the underlying disease is present.


I think I’ll pass on being a beta tester. Sounds like there’s a lot of risk from unnecessary intervention.


My wife had multiple softball-sized tumors growing in her abdomen for months/years that would have easily been detected by something like this. Instead they burst and metastasized, and now we're a million dollars deep into medical treatments and still have nothing remotely resembling a guarantee of resolution.

You can always choose not to act on what you learn. You can't act on what you don't know.


Yeah, I'm on your side here. This fear of AI medicine creating false positives that overwhelm hospitals and doctors and cause unnecessary and risky surgery is one of those ideas I see floating around comments here and there, and it doesn't seem to be backed by anything tangible. I mean, maybe it's a concern, but it also seems that if it ever happened, we'd adjust for it quickly.


> You can always choose not to act on what you learn. You can't act on what you don't know.

You can also act on what you think you know but in fact don't. False negatives kill people but so can false positives.


What are the numbers though? Having unnecessary procedures done because of false positives from a screening test can absolutely kill people, but so can wearing a seatbelt. People drown and burn up all the time because they couldn't get out of a wrecked vehicle.

There is presently some set of screening tests with varying levels of sensitivity and specificity, and they aren't all appropriate for mass screening. However, if millions of people started regularly getting non-ionizing imaging done through MRI or ultrasound or infrared or whatever, we would learn a shitload about predicting maladies and likely save quite a few more people than we kill in the process.


> What are the numbers though?

I couldn't say, but I wager the medical science community / industry is cognizant of the phenomenon and, not being an expert in the field myself, I trust them to handle the matter reasonably. In the specific case of breast cancer, I've heard that the cost of false positives is high enough to be a major consideration in recommended breast cancer screening schedules.


Only if you act on it. Just because you know or suspect something doesn’t mean you have to do something about it.


Instead of trying to squeeze blood from the mammography stone, which has failed to improve longevity in breast cancer patients despite enormous investment over many years, AI/ML needs to take a broader perspective and look at modalities like circulating tumor DNA and 3D ultrasound.


Why those two in particular?

Especially 3D US - so far it's mostly been used for the "cosmetic" purpose of getting a 3D picture of your baby. There hasn't been much clinical evidence of its usefulness from what I can tell.


As the authors rightly call out in the abstract, obtaining large amounts of annotated data poses a challenge for training deep learning models for this purpose. But it doesn't appear from my read of the paper that the data they've collected and annotated is made available. I get that this is from a company (DeepHealth), but it seems like an opportunity for the NIH to push for more broadly available data sets.

Anyone have a good reference point for the reader selection of 5 specialists with 5.6 years average experience? That population seems small. Another opportunity for licensing bodies or national institutions to grow a publicly available dataset -- including annotations from a wider selection of imaging specialists.


The radiologists in this study had read 6,969 mammograms on average over the preceding year. That's about 15X the certification requirement and 4X the average for U.S. doctors. Reading volume is one of the main predictors of performance[1], which suggests that these doctors were probably above-average readers. It would have been nice to see more readers involved, but reader studies are a major effort. Even with the small sample size of readers, these results were statistically significant as well as clinically significant.

[1] https://www.ncbi.nlm.nih.gov/pubmed/21343539


Mammogram images are far from presenting any privacy challenge, especially if identities are not disclosed.


I see a lot of machine learning work on medical imagery, which is great, but it seems like this is solving a problem that the human brain is already pretty good at (image recognition). I wish I saw more work being done on finding patterns in medical data in numeric formats, which the human brain is terrible at. Is there much of that going on?


You're absolutely correct but from what I can see there's going to need to be a lot more work done in collecting and standardizing that data before it's available in sufficient quantity and quality to do anything with. I think a more aggressive approach to normalizing externalities (primarily regulated diet/nutrition) would help as well.

There's another completely (to me) unintuitive angle as well. Andrew Lo and some folks from MIT Sloan have published a paper about using clinical trial data to predict which medicines will be approved by the FDA in order to help reduce investment risk and unlock dollars. He does a pretty good talk about it here - https://www.youtube.com/watch?v=AzELyaVf0v8

He's on a recent episode of Linear Digressions discussing this as well. http://lineardigressions.com/episodes/2019/12/8/using-data-s...


> Is there much of that going on?

Most of that is done with statistical methods from statistics proper, not ML. We're talking about survival analysis, longitudinal analysis, clinical trials, nonparametric statistics, etc.

From my experience, ML is too dependent on large datasets, while medical data are often high-dimensional and small. My thesis papers leveraged two statisticians' work to make decision trees and ensembles use more statistics to handle (high-dimensional) medical data. As the statistician Dr. Harrell has noted, ML is much better suited to less noisy data, such as medical images. Also, inference is much more important in the medical field than mere prediction.
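For a concrete taste of those methods, here is a minimal survival-analysis sketch using the lifelines library (the toy data and column names are invented for illustration):

    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy cohort: follow-up time in months, whether the event was
    # observed (1) or censored (0), plus two illustrative covariates.
    df = pd.DataFrame({
        "time_to_event": [5, 12, 9, 24, 30, 7, 18, 14],
        "observed":      [1, 0, 1, 1, 0, 1, 0, 1],
        "age":           [61, 55, 70, 48, 66, 59, 52, 63],
        "tumor_grade":   [2, 1, 3, 1, 2, 3, 1, 2],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_to_event", event_col="observed")
    cph.print_summary()  # hazard ratios with confidence intervals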


The human brain could always use a bit of help. Think of these sorts of tools as medical IDEs for practitioners. Anything that reduces costs, improves outcomes, or both, is welcome.


Got me thinking... how long till AWS DeepPhysician? They don't seem to have a clear limit on the scope of their services. Joking apart, what would be the implications and responsibilities of big tech entering the medical field?


Excellent results with good generalization. The study appears to be well designed and executed. This was a significant effort. Clearly, there are commercial intentions.


Computer-assisted screening for mammo has been around for years - iCAD, for example. I am sure some of the vendors in this space are using some form of deep learning already.


Everyone swooning with optimism over this result should read the machine learning Reddit comments on it first.

https://old.reddit.com/r/MachineLearning/comments/ehpllt/dee...

And also the linked blog [edit]:

https://lukeoakdenrayner.wordpress.com/2017/12/06/do-machine...

TL;DR: There are sooo many subtleties to stuff like this that these things really shouldn't be taken at face value. This is far from replacing doctors at anything.


1) I see no credible criticism on reddit.

2) I see nothing in the Luke Oakden-Rayner blog that calls this study into question. This study actually avoids the pitfalls that he mentions.

3) This paper said nothing about replacing doctors.


Those criticisms aside, if this system went up against the current status quo in a prospective study, there is a reasonable chance it would be at least on par with humans. Whether that makes it a worthwhile endeavour comes down to questions of cost, technical complexity and health outcomes. This last part is actually a significant barrier to 'AI' in healthcare. For that reason, I suspect most companies will prefer to sell their products integrated into assistant-style software, where the value proposition is tied to reimbursement, e.g. reporting more scans.

The sensitivity is also still not as high as you would ideally like... this is a limitation of mammography.


This is simply amazing. Great job to anyone involved in that project!


Dr Sausage has had this technology for decades


If you find this kind of work interesting, our AI group at Siemens Healthineers is hiring interns to carry out projects like this. We typically target machine learning or medical imaging PhD students, but are open to a variety of backgrounds. Please feel free to reach out via email.


Physics PhDs?


The Clinical Center for Data Science at Massachusetts General Hospital (one of the top hospitals in the world) is hiring for a variety of positions. We have access to tons of medical data (imaging, NLP, time series), clinical domain expertise, and one of the largest GPU computing clusters. https://www.ccds.io/careers/


I'm curious, how is the compensation, e.g. compared to the lowest levels at https://www.levels.fyi/ ?


It will be a great day for individuals everywhere when we automate away 75% of MD/DO jobs. It won't be a great day for the AAMC, and I also look forward to seeing how they respond.


The same way they already respond to the fact that nurses/NPs are capable of doing a large number of jobs reserved for MDs: lobby and regulate.


I also can't wait for the day that the regulations you refer to - which prevent NPs from doing jobs they are fully qualified for, and thereby from improving medical care for the nation - are struck down.

I'm not sure that day will ever come.


The cotton gin increased demand for labor to use the tool. You have to think very carefully and subtly to even have a chance at predicting second order effects from more complex forms of automation.


I worked for a surgical robotics company as well as other medical device companies, and procedure lengths and surgical wait times both decreased in facilities with the device. Based on my experience, the danger is in areas like "will we over-prescribe imaging tests even more than now?", but as to whether we'd need fewer doctors to treat the same number of patients, the answer is yes.


If imaging tests become higher quality and cheaper would the concept of over-prescription of them still make sense? (Asking sincerely, I don’t work in medicine)


It is still dangerous, as X-rays are carcinogenic, and the chance of false positives can also adversely affect patient health.


From what I understand, the increase in demand for labor was not to use the tool itself (which I believe would generally be water-powered, not hand-cranked), but rather the tool induced a greater demand for raw cotton and provided expansion opportunities to industries that consumed cotton. So more people were picking cotton and working in textile factories.


Same is true of regulation.


I'm interested to know why you have this negative view of medicine. I'm also interested in why you think the alternate future where medicine is automated (assuming that is possible and/or desirable to most people) is likely to be better for society as a whole.


And as my MD will tell you, voodoo! No one can ever replace me. No "robot." :-)



