Machine Learning for MRI Image Reconstruction (rhotter.github.io)
79 points by raffihotter on Jan 2, 2022 | 46 comments



> This makes it hard to predict when and how deep learning methods will fail (there are no theoretical guarantees that deep learning will work).

I actually think we know fairly well how deep learning methods work (and what their shortcomings are); we just have no way to interpret the models they produce. Wouldn't ML techniques to reduce scan times fail at the most critical moments, i.e. when patients have unusual or unexpected ailments? Using ML on downsampled MRI images feels akin to having an artist with a lot of familiarity with human anatomy touch up a scan.


I speculate (and hope) that ML diagnostics will help give access to medical care to really poor people in really poor countries. There aren't enough cheap doctors to help all of them, and if ML can speed things up and reduce marginal costs to zero, even if it degrades quality of care, a lot of lives could be saved. 80% ML + 20% human is better than no medical care at all.


Those countries are lacking basic healthcare standards and infrastructure. I doubt that lack of diagnosis due to understaffing is the bottleneck.


It is often still a good idea to have an approach ready for a possible second step once the first step or steps are established. A lot of ifs, but sometimes countries skip whole kinds of technology, the way parts of Africa skipped landline phones and went directly to mobile, since that was easier to establish. Maybe not-so-wealthy countries will see an entirely different kind of health care in 10 years than the one we know now.


Countries with a GDP per capita of $5,000-$10,000 typically do have good private medical care, but most of the population is excluded by cost. If we give the doctors ML tools to increase bandwidth, that should help the situation by increasing supply. Suppose we could 10x the bandwidth for routine scans: private costs should go down, public health capacity will go up, and overall more people should be able to access it.


The issue is training people to use the machines and keeping them running, not the doctors themselves.


That's true, but I don't see that as necessarily decisive. If 1 doctor and 1 engineer can achieve the bandwidth of 10 (more?) doctors on some specific scan, we're still talking about less intellectual capital and fewer training requirements overall.

Also, I don't think we're talking about special hardware here. Couldn't we just have a software package produced by someone (a university, or a company in a wealthier country) that is used by docs everywhere? Could it be done without the need for a dedicated local engineer? Perhaps the WHO could approve certain software packages for universal use in very specific cases.


>80% ML + 20% human is better than no medical care at all.

That implies that doing something is always better than doing nothing. Unless we're talking about things like antibiotics, I'm not sure I'd agree. Medical error is a nontrivial cause of death; increasing it significantly could well be worse than what you're trying to treat.


I'd argue that "knowing fairly well" and having theoretical guarantees are significantly different things.

As an example, you can run a million simulations of a satellite with different initial conditions to test your new control algorithm. However, there are infinitely many possible initial conditions, and you can't simulate all of them. If, however, you show that the closed-loop system is stable in some sense, that's a far more rigorous guarantee.
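
To make that concrete, here is a toy sketch (a made-up 2x2 linear system x' = Ax, nothing satellite-specific): the Monte Carlo loop only ever samples initial conditions, while the eigenvalue check covers every initial condition at once.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy closed-loop dynamics x' = Ax

    # Monte Carlo: test finitely many random initial conditions
    rng = np.random.default_rng(0)
    Phi = expm(10.0 * A)                        # state-transition matrix at t = 10
    x0 = rng.standard_normal((2, 1000))         # 1000 sampled initial states
    decayed = np.linalg.norm(Phi @ x0, axis=0) < np.linalg.norm(x0, axis=0)
    print("all sampled trajectories decayed:", decayed.all())

    # Guarantee: all eigenvalues in the open left half-plane => stable for EVERY x0
    print("provably stable:", np.all(np.linalg.eigvals(A).real < 0))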


Agreed. I work in medical imaging, and people in the industry are very wary even of existing technology that does have theoretical guarantees: upscaling via bicubic/Lanczos, or lossy compression (even with a guaranteed 0.99 SSIM or NCC). Then they go ahead and do reads on 512x512-pixel CT scans. Even with a theoretical guarantee of bounded error, you still have the cultural perception problem to deal with. Only if the improvements from a feature perspective are an order of magnitude better will it see any adoption, imo.


Whether that is acceptable or not depends on the purpose of the scan. If it's for a routine and defined purpose, like measuring the size of something, a bit of artistic licence by the computer is not too bad, I think, as long as it gets the size right (or whatever specific aspect is relevant).


Things like MRIs are the last thing you want to be using ML to invent detail in.

This proposal basically says that using ML we can quarter the number of frequencies we sample and still get good-looking scans. But the full resolution is produced by inventing details based on statistics from a biased input (most MRIs are taken because something is wrong).

Again, as with super-resolution, ML cannot add detail that isn't there; anything it creates is simply based on the statistical model it formed from the training set.
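
For illustration, a minimal NumPy sketch (toy image, hypothetical keep-every-4th-line Cartesian mask) of what that subsampling does: the zero-filled inverse FFT aliases, and any reconstruction, ML or otherwise, has to fill that gap from a prior.

    import numpy as np

    image = np.zeros((128, 128))
    image[40:88, 40:88] = 1.0                     # toy "anatomy": a bright square

    kspace = np.fft.fftshift(np.fft.fft2(image))  # fully sampled k-space

    mask = np.zeros(kspace.shape, dtype=bool)
    mask[:, ::4] = True                           # keep every 4th phase-encode line
    mask[:, 60:68] = True                         # plus a fully sampled centre band

    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    print("mean aliasing error:", np.abs(np.abs(zero_filled) - image).mean())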


Playing devil's advocate here, but can't machine learning be used to remove noise rather than add detail? Removing noise would reveal detail hidden in the data, kind of like the result you get after applying a spectral filter to a Fourier-transformed image. For example: https://www.youtube.com/watch?v=s2K1JfNR7Sc
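
For what it's worth, the classical (non-ML) version of that spectral-filter idea is a one-liner in the Fourier domain; the catch, as the reply below points out, is that the same low-pass step that removes noise also removes genuine high-frequency detail. A toy NumPy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128))
    clean[60:68, 60:68] = 1.0                     # small, genuine feature
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)

    k = np.fft.fftshift(np.fft.fft2(noisy))
    yy, xx = np.mgrid[-64:64, -64:64]
    lowpass = (xx**2 + yy**2) < 20**2             # keep only low spatial frequencies
    denoised = np.fft.ifft2(np.fft.ifftshift(k * lowpass)).real

    # noise goes down, but the real feature gets blurred too
    print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())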


"Removing noise" is equivalent to adding detail.


Personally, I am not even sure that "ML vs non-ML" is a useful dichotomy. If method A can be rigorously demonstrated[*] to produce superior results to method B, does it even matter whether mathematically it is constructed out of fast Fourier transforms, Metz filters, layers of convolutions or whatever else?

[*] For example, by measuring the quality of reconstruction of a known image (e.g. a real or digital phantom) or, in the ideal world, by evaluating clinical outcomes.
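
To sketch what such a phantom test could look like in practice (shepp_logan_phantom and the metrics are real scikit-image APIs; the Gaussian blur is just a stand-in for whatever method A or B outputs):

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.data import shepp_logan_phantom
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    phantom = shepp_logan_phantom()               # known ground-truth image

    def score(recon, truth):
        return {
            "ssim": structural_similarity(truth, recon, data_range=1.0),
            "psnr": peak_signal_noise_ratio(truth, recon, data_range=1.0),
        }

    # any reconstruction method would be scored the same way
    print(score(gaussian_filter(phantom, sigma=1.0), phantom))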


The problem would be blurring or denoising away meaningful information, but I don't know enough to say whether they do any denoising. I can imagine the data being noisy, but perhaps it isn't? shrug :D


Yes, accelerating MRI acquisition increases noise in the images as well as introducing aliasing artifacts. I think the issue is that some modern reconstruction methods (e.g. the compressed sensing mentioned in the article) produce predictable biases, e.g. a risk of smoothing out details, whereas for ML we don't always know in what way it will bias the reconstructed image (adding details, removing important information...), and I think that is what people often worry about.


> Though compressed sensing can improve the image quality relative to a vanilla inverse Fourier transform, it still suffers from artifacts.

Odd remark. FDA approves compressed sensing products (e.g., [1], [2], [3], [4]) precisely because it is possible (and provably so) to quantify and/or characterize such “artifacts” up to substantial equivalence.

[1] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...

[2] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...

[3] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...

[4] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...


This is actually quite interesting and currently relevant to me. Thanks for sharing!


The following query might also interest you then:

    "510(k)" "deep learning" site:accessdata.fda.gov/cdrh_docs
Or alternatively, replace "deep learning" with another distinctive theory/methodology employed, or with a trade/device name.

I do remind you to cross-check what material product is being reviewed.


Can’t wait to do an MRI and hear the doc say “You’re all set, good to go!”, only to discover that I actually had a tumor but that really clever ML algorithm thought that it was noise and should’ve been smoothed out…

I don’t want to be part of it, thanks


I am surprised you are so certain in this statement. There is always a tradeoff between scan time and image quality. Clinical scans often have thick slices to keep scan times reasonable. Using advanced reconstruction methods, e.g. ML, you can get thinner slices in the same scan time. How would you weigh the benefit of getting higher resolution in the same scan time as a standard lower-resolution scan, if the higher-resolution scan was regularized with a neural network? The doctor might miss small tumors due to low resolution too. I understand your concern, but I wouldn't dismiss the approach outright.

Note, I am biased because I research MRI acquisition and reconstruction methods and I am rolling out trials of fast MRI methods (that use some ML in the reconstruction) to find out how robust the methods actually are in practice.


Well, I really hope you succeed in your experiments, but right now it seems to me that ML introduces an error factor we still cannot precisely account for.

It looks like you are proposing some kind of mixed approach, not a simple "less data, faster scan, ML to the rescue". I understand how MRI works, but you are surely much more knowledgeable than me, so I simply wish you luck!

My problem with the article comes from reading about someone using generative models on health data; I don't think it is time for that yet.


Thanks! I agree that generative models on their own are definitely riskier than methods that combine ML for regularization with data consistency terms that force the reconstructions to be consistent with the acquired data.
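
For anyone curious what a data consistency term looks like, here is a minimal sketch (the `denoiser` in the commented loop is a hypothetical stand-in for the learned part): after each regularization step, the estimate's k-space is overwritten at the acquired locations with the measured samples, so the output can never contradict the data that was actually collected.

    import numpy as np

    def data_consistency(image_estimate, measured_kspace, mask):
        k = np.fft.fft2(image_estimate)
        k = np.where(mask, measured_kspace, k)  # trust the measured samples
        return np.fft.ifft2(k)

    # typical unrolled reconstruction loop (sketch):
    #   x = zero_filled_recon
    #   for _ in range(n_iters):
    #       x = denoiser(x)                     # learned regularization step
    #       x = data_consistency(x, measured_kspace, mask)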


My opinion is pretty much worthless but I think this is a much more sensible approach, using the strengths of ML but putting constraints on the outputs.

Just a simple question: to guarantee that the results are at least as good as the ones we get now from our battle-tested setups, shouldn't we use the same sampling we use in a "default" MRI? I mean, use those reconstruction algorithms to try to achieve a better result, without downsampling, so that a standard reconstruction can still be performed to check against?


"Default MRI" (i.e. fully sampled) should definitely be acquired when possible when testing this out to compare to gold standard. But the benefit of using ML methods in the fully sampled case would be minimal (maybe some denoising), whereas they have a much larger effect when acquiring highly undersampled data that traditional reconstruction methods fail at. It's also not always possible to get fully sampled reference data. For example in functional MRI you might not be able to get matched fully sampled data because the benefit of undersampling is in improving the temporal resolution. These cases are definitely more researchy and less clinical though, and in my work we add a 2 minute highly undersampled scan to current standard protocols and compare what we can reconstruct from our 2 minute scan compared with the fully sampled (but often lower resolution) standard scans.


> "Default MRI" (i.e. fully sampled) should definitely be acquired when possible when testing

It seems unlikely you wouldn't appreciate this already, but clinical MRI has not been fully sampled in a long time. Between the old and the new: reduced phase resolution (in-plane and through-slice), parallel imaging, compressed sense (or sensing), reduced frequency resolution with partial-echo techniques, a high reconstruction matrix with a low acquisition matrix; the list is quite long.

The change in the resulting artefacts as acceleration techniques change (e.g. high compressed-sense factors) is a bit of an adjustment to how people work. Very digital-looking artefacts are just gross.

Thanks for your work! We need more speed.


Thanks for the comment! You're right, I was oversimplifying by saying that a default MRI would be fully sampled. My main point still stands: you can't just chuck ML onto current protocols, directly compare ML with more conventional reconstruction methods to give clinicians access to both, and expect an improvement (beyond potentially denoising), because the conventional scans are already very good at what they do. Where ML can help is in cases where we can't produce conventional scans (e.g. with very short scan times or high temporal resolution).


Those striving for shorter scan times (e.g. functional imaging) go to such massive lengths that it blew my mind when I encountered it on a research magnet. Every millisecond counted.

Watching the mental gymnastics used to deal with multiband/simultaneous multislice (or whatever vendors call it) in functional MRI was impressive to me.

Using ML to work out voxel results in such low spatial resolution scans has got to be scary.


You'd be surprised how many well-known doctors miss a ton of non-obvious anomalies in scan results. There's no way a doctor could have seen all prior records of confirmed diagnoses and their corresponding scans.

In an ideal world, a deep learning algorithm would provide an independent report of potential features of interest, letting the doctor know about anything they might have missed. However, I hope it stays in that intended role and doesn't make doctors less careful.


I totally agree with you on this one, but it is not the topic at hand. The article is not about using ML for feature detection but about enhancing subsampled data.

I actually find what you are describing to be a much better use of ML in this sector.


Why would the ML algorithm necessarily change the scan? The radiologist could still look at the unadulterated MRI.


There is no unadulterated result: you are doing less sampling and relying on ML to fill the gaps. So you either have the ML-reconstructed result or a subsampled MRI.

Healthcare is an area where we need good and clean data as much as possible, let’s use ML reconstruction somewhere else.


Interesting. Would love to see an example of a tumor so small that a radiologist could see it but an ML algorithm would smooth it out.


A lot of the problem comes from the use of generative neural networks. If the prior is that the reconstructed images should "look" a certain way, then the algorithm will favor that. Some of our colleagues did early work with DL and got scared off of generative models due to finding issues with nonphysical results (read: broken layers of cortex in the brain, completely non-physical anatomy) that these models can generate from the undersampled raw data.

That said, there are other great ways to incorporate DL into MRI other than recon. I'm more interested in the use of DL for image segmentation, feature detection, potentially denoising, or other techniques on the image processing side. Those make a lot more sense as "top down" tasks that are well suited for neural networks.


It is not about big or small; I don't think you understand how ML works.

And by the way, tumors can be really small.


It's not obvious that the human-chosen reconstruction function is necessarily better than the ML one. The human function doesn't preserve all the data either.


Out of interest, do you understand how MRI reconstruction works?


If your question is literally "do you understand how it works?", the answer is yes, I do.

If your question is more nuanced, meaning "do you really, really know how it works, such that you could work on it tomorrow?", the answer is no, it is not my field.


There are definitely ways to work well with subsampled data, see Lester Mackey's recent work.


Could you please share a link to the work you are referring to? I would really appreciate it (not ironically; it would be truly appreciated).

I know we can work around subsampling; actually, we have always been very good at it, since the sampling data we had back in the day was way smaller than what we refer to as subsampled today.



Thanks!


MRI is completely adulterated at every stage. Algorithms and filters make the final result palatable. The raw data is a k-space data file. It’s not really human readable (though you can spot noise spikes etc).


Could this be taken one step further? Use ML in the loop during an MRI scan: look at the data collected so far, then decide which frequency should be measured next to most improve the quality of the result.

This could also all be simulated offline, with no MRI machine to test on, given just access to a few full scans... So it could be a good weekend project for someone here on HN, and your technique might even be in use by the time you need an MRI scan, which would mean your doctor gets results slightly quicker and you get better healthcare, together with hundreds of millions of other people!
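
As a starting point, the offline simulation could be a greedy oracle: with a fully sampled scan as ground truth, try each unmeasured k-space line and keep whichever most improves the zero-filled reconstruction. A learned policy would then try to make that choice without seeing the truth. A rough sketch:

    import numpy as np

    def greedy_next_line(full_kspace, mask, truth):
        """Oracle: pick the unmeasured k-space column that helps most."""
        best_col, best_err = None, np.inf
        for col in np.flatnonzero(~mask.any(axis=0)):  # unmeasured columns
            trial = mask.copy()
            trial[:, col] = True
            recon = np.abs(np.fft.ifft2(full_kspace * trial))
            err = np.abs(recon - truth).mean()
            if err < best_err:
                best_col, best_err = col, err
        return best_col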


Need training data? Offer scans at half price in America if patients agree to hand over their images!



