
FastMRI leverages adversarial training to remove image artifacts - olibaw
https://ai.facebook.com/blog/fastmri-leverages-adversarial-learning-to-remove-image-artifacts/
======
DataDrivenMD
A physician's $0.02 - The clinical relevance of FB's work is clearly stated in
the blog post: "While state-of-the-art facilities today use 3 Tesla MRI
machines, scanners with lower-strength magnets (1.5 Tesla, for example) are
still commonly used around the world." Considering that a 1.5T MRI machine
costs about $1M less than a comparable 3T model (+/- the cost of warranty,
support, and installation), FB's work in this area has the potential to make a
BIG positive impact on the lives of millions of patients. Which is why I will
be cheering them on.

If they reproduce their results in other clinical settings, the immediate
impact on patient care includes: 1) accelerating diagnosis (and treatment) for
patients with traumatic brain injuries (by effectively up-scaling
lower-resolution scans); 2) healthcare providers in developing countries will
effectively get a low-cost "upgrade" to their existing equipment; 3) cancer
patients in rural America could be monitored for treatment response in a
setting that is closer to home (because rural communities tend to be
resource-poor in terms of medical technology).

If we consider that a logical extension of their work could be to develop a
compression algorithm for MRI data, then it's easy to see an even broader
impact that includes: 1) connecting rural patients with high-quality
radiologist services (i.e. remote MRI interpretations), and 2) decreasing the
cost of long-term storage, access, and retrieval of MRI data.

On the topic of FB's issues with privacy: I agree that FB has a long way to
go to earn my trust as a doctor and a patient. That being said, it's important to
give credit where credit is due. It seems that FB gained access to the imaging
data by working collaboratively with NYU on this specific project. By
comparison, it's an open secret among those of us in the biomedical
informatics community that over the course of many years Google Cloud has
quietly gained access to the personal health information of millions of
Americans. So, when it comes to privacy concerns, it's important to avoid
being myopic - the concern is valid, but the primary threat may not be as
obvious as it first seems.

~~~
orr721
> 2) healthcare providers in developing countries will effectively get a low-
> cost "upgrade" to their existing equipment

I am VERY pessimistic about this. I don't know how well you know medical
equipment providers, but this will never be sold as a low-cost "upgrade" to
existing machines. It will be sold with new equipment only, and with a hefty
surcharge, as an option enabling higher patient throughput.

There is no real money in upgrades. Most equipment lasts only 8-10 years
anyway.

~~~
DataDrivenMD
Your point is well-taken. I agree that such an upgrade is unlikely to be sold
as a standalone product. What is more likely to happen is that it will be
included for a nominal fee as an add-on to a new purchase or service
agreement.

To understand how this would work, we need to 1) understand the lifecycle of
big-ticket medical equipment (ME) and 2) recognize that ME products are at the
core of multiple revenue streams. The first point has to do with the
renewed/refurbished market for used/last-generation ME. The second point has
to do with the service agreements/warranties/support contracts that are needed
in order to keep the ME operational. These factors combine to yield a sales
process with multiple negotiating dimensions.

How these negotiations actually play out depends on whether you're a deep-
pocketed healthcare system or not (it sucks, but it's true). If you can afford
it, you'll have lots of ways to sport the latest and greatest ME without
breaking the bank on any single purchase. Some of your old stuff will end up
in the renewed/refurbished ME market, thereby offsetting your total cost of
ownership (either directly or indirectly). Once used ME hits secondary
markets, the customer profile changes: these customers are not looking to keep
up with the Cleveland Clinics and Stanfords of the world. They're looking for
long-term value, so reliability and longevity are the top priorities - and this is
where I see software "upgrades" coming into play. Some of these customers may
already have one or two MRIs, while others may not. In either case, the
software "upgrade" becomes a differentiator that speaks directly to the
priorities of these customers.

TL;DR - Today, healthcare providers with limited financial resources (e.g.
those in developing countries, rural areas) are incentivized to purchase
capital equipment through "discounts" on service/support. In the future, we're
likely to see software "upgrades" (such as those made possible by FB's work)
bundled/leveraged as an incentive. The net effect is the same: extending the
clinically useful lifespan of medical equipment (MRIs in this case) and
expanding access to medical technology around the world.

------
ebg13
A lot of people here are rightly concerned about the dangers of falsely
marking something as an artifact, but let me present additional data that will
hopefully sway you a little bit...

If you need an MRI or a CT of an area adjacent to orthopedic implants, you are
currently 100% SOL because distortion or reflection artifacts from the metal
completely destroy the imagery across a medically significant distance. There
are computational filtering techniques for reducing these artifacts, but,
respectfully, they are still really terrible, and close to the implants you
can't see shit. All advancements in this area short of inventing new imaging
physics will most likely be purely computational corrections. Consider that.

~~~
vardump
I think I'd prefer radiologists use both the computationally filtered images and this.
Computational filtering has also advanced over the years.

~~~
ebg13
This _is_ computational filtering. It's not philosophically any different.
Every filtering method algorithmically guesses what's important or what's real
and what's not.

~~~
creato
> It's not philosophically any different.

I disagree. I think using techniques that work by attempting to model physical
processes that we understand are philosophically different from ML approaches
that are learning arbitrary functions.

~~~
shoo
I agree there is a gulf of difference between modelling based on physics and
mere empirical fitting of functions to data.

------
est31
I'm no fan of this. What if it treats a tumor as an artifact? This reminds me
of the Xerox scandal, where broken JBIG2 compression erroneously deduplicated
parts of scanned images that had different contents.

This module might work well, but the modules by cheap competitors might have
such behaviour, and it's extremely hard to test that an implementation is bug
free.

~~~
yboris
What if doctors get both, the untouched originals _and_ the images with the
artifacts removed? Seems like it solves the problem you're concerned with?

~~~
est31
The Xerox scanners had a setting to disable the compression as well. People
are lazy and don't change the defaults. Although they are highly skilled,
radiologists don't have time to inspect each image, so why bother looking at
the raw originals?

The question is rather: does this feature improve diagnoses? Sure, the images
look nicer now. But that's not why they are being created. MRI images are made
for inspection by trained radiologists who are already filtering out
artifacts. So is this tool better at this job, or does it actually _worsen_
the ability of the radiologists to read the images like those xerox scans?

Maybe I'm a bit paranoid, idk. After all, diffusion MRI is already being used
for surgical planning even though it has several shortcomings. But in that
instance there are probably no good alternatives, while here the alternative
is the trained eye of a radiologist.

~~~
ska
It gets even worse than that sometimes. For example, I remember a study from
back when digital X-ray was getting going, where radiologists were asked to say
which processing they liked better (since none of them looked quite like the
very non-linear film versions) and were scored on performance.

They didn't perform best on the types they liked best. This wasn't a great
study in terms of power, but it was interesting.

I've met plenty of rad-oncs and radiologists who are convinced they can "read
through the noise" just fine, and want consistent imaging more than artifact
reduction. I'm not sure this has ever been tested empirically.

~~~
lostlogin
Digital and computed radiography are quite poor examples of progress though,
as the resolution was worse and the radiation dose higher than with film
radiography. This may have changed in the last few years but was strikingly
true at the outset.

The advantages they gave were in every other way (physical storage,
availability, duplication, speed at which they could be accessed etc).

~~~
ska
The point I was trying to make has nothing to do with image quality.

The issue was, radiologist had to deal with a choice of different post-
processing of this data. The processing they said they liked best (somewhat
consistently) was not the processing that they performed best on, empirically
(somewhat consistently).

This is related to the issue of evaluating the value of ML post-processing;
we could see a similar effect there. After all, one school of thought was that
preference was in some sense driven by familiarity rather than by what they
were actually able to discriminate.

FWIW, image quality (IQ) evaluation in MRI is a somewhat problematic thing
anyway, but acceleration certainly tends to make it worse in some ways. It's
not obvious how effective various mitigation approaches are.

~~~
lostlogin
Thanks - I missed your point. Image quality in MR is very much a moving target
too, as it varies between patients and there is a fair bit of variation in
practice. Scans are sped up or slowed down for a variety of reasons. Making a
scan faster to fit in another patient, or for any number of other reasons, is
something that happens regularly.

------
mikeortman
Please don't make sweeping, generalizing claims about the implications of the
work. It's a subjective problem to solve, so if you are not a radiologist who
has first-hand experience with this issue, stop.

Here are the results from the paper:

The radiologists ranked our adversarial approach as better than the standard
and dithering approaches with an average rank of 2.83 out of a possible 3.
This result is statistically significantly better than either alternative
with p-values 1.09 × 10⁻¹¹ and 2.18 × 10⁻¹¹ respectively, and the adversarial
approach was ranked as the best or tied for best in 85.8% of 120 total
evaluations (95% CI: 0.78-0.91). The dithering approach is also statistically
significantly better than the standard approach. We also asked radiologists if
banding was present (in any form) in the reconstructions in each case. This
evaluation is highly subjective, as "banding" is hard to define in a precise
enough way to ensure consistency between evaluators. Considering each
radiologist's evaluation independently, on average banding is still reported
to be present in 72.5% (95% CI: 0.62-0.82) of cases even with the adversarial
learning penalty. The radiologists were not consistent in their rankings; the
overall percentages reported by the six radiologists were 20%, 75%, 75%, 80%,
85%, and 100% for the adversarial reconstructions. In contrast, for the
baseline and dithered reconstructions, only one radiologist reported less than
100% presence of banding for each method (80% and 85% presence respectively,
from different radiologists). We believe these numbers could be improved if
more tuning went into the model; however, it's also possible that features of
the sub-sampled reconstructions generally may be confused with banding, and so
any method using sub-sampling might be considered by radiologists as having
banding. Sub-sampled reconstructions generally have cleaner regional
boundaries and lower noise levels than the corresponding ground-truth.

~~~
p1necone
Intuitively I don't see that there's much value in asking radiologists to
subjectively "rank" the images. Surely the thing that needs to be tested here
is patient outcomes?

~~~
vanderZwan
That needs to be tested _eventually_ - there's a reason we go from petri-dish
testing to animal testing to human testing with medicine; it stands to reason
that medical tools should follow similar stages.

------
mustachionut
Even without anything fancy, are there speed-vs-clarity parameters when doing
an MRI? It seems an easy improvement would be to spend more time getting a
clear picture of the specific area of interest, versus now, where the whole
scan seems to be done at full clarity.

~~~
throwaway4220
Yes, definitely true for many artifacts! Although due to Nyquist, ghosting
artifacts sometimes require you to increase the field of view.

What bothers me here is when the artifacts hide underlying pathology, and
these algorithms "learn" what a normal knee mri looks like and just show you
that. IMO it is a medical liability that must be addressed.

~~~
viraptor
Yeah, I'm worried how any automatic correction which is not completely
specified can be used in medical imaging. We sometimes fail to even compress
images correctly (remember the scanners changing numbers due to compression?),
so trying to automatically remove artefacts sounds dangerous. We already teach
doctors about the artefacts and how to handle them. The image doesn't need to
be pretty - just functional.

~~~
lostlogin
This is mostly handled by MR techs and it is their job to sort this out. Many
of the automated tasks are pretty good, and those that aren’t get rejected
fast. We don’t tend to get a new sequence/tool/parameter and just run with it,
it’s used with the old one until a degree of trust and understanding is
established. I'm an MR tech slacking off.

~~~
throwaway4220
Yeah, I would trust an MR tech's tried and tested parameters way before
trusting any fancy algorithm or even a new sequence.

------
lvs
No thanks. If it can remove artifacts, it can also introduce them. Nobody
should be using this on patients. This is a straightforward misapplication of
AI.

------
throwlaplace
Isn't this basically SRGAN?

Edit: sorry I guess since there's an explicit rotation module it's closer to
SRGAN+deformable convolutions.

~~~
zn473
The adversary in this work never sees non-reconstructed images, so it looks
like it's completely unrelated to SRGAN.

------
voicedYoda
Facebook has absolutely no reason to be doing work with healthcare. Sure they
have great computing power and top engineering talent to figure out how to
sell more ads, but the trade-off for any educational facility to freely hand
over medical data (de-identified or not) is reckless.

~~~
nradov
What exactly is the concern with de-identified medical data? This is common
practice in medical research and explicitly allowed under federal law.

~~~
pmiller2
Let's start with how it's very difficult to properly de-identify medical data:
https://www.careersinfosecurity.com/patient-data-be-truly-de-identified-for-research-a-12708

------
heyitsguay
I just gave a talk on similar work being done in microscopy:
[https://leapmanlab.github.io/nihai/jan20/](https://leapmanlab.github.io/nihai/jan20/).

The tl;dr (in microscopy but apparently also in mri) is AI imaging can
evidently enable new concrete solutions to intractable imaging problems, but
the failure modes are really treacherous. The example on slide 39, taken from
another excellent review paper, does a great job illustrating the problem. I
think these methods will get more trustworthy, but i wouldn't stake my life
(or my paper's prestigious research results) on them at the moment.

------
deepnotderp
This is a bad idea; neural nets upscale by "hallucinating" the details.
That's fine for entertainment video, not for medical imaging.

And this is distinctly different from compressed sensing, which uses a fixed,
high-frequency mathematical basis.

~~~
zn473
I went to a medical imaging workshop recently, and the consensus was that deep
learning approaches will completely replace classical compressed sensing. They
are using the same principles of acquiring randomized samples, so it's still
compressed sensing, they just produce _dramatically_ better results than
classical CS techniques.

~~~
deepnotderp
> They are using the same principles of acquiring randomized samples, so it's
> still compressed sensing,

See the second part of my comment. This is only true in principle. In
practice, compressed sensing uses a higher-frequency basis and, more
importantly, this basis is generally not _learned_, preventing common-case
bias. I.e., a rare condition won't be ignored because it isn't statistically
common enough for the NN model to learn.
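For anyone curious what classical compressed sensing with a fixed (non-learned) basis looks like in its simplest form: below is a toy sketch recovering a 2-sparse vector from 12 random measurements via ISTA (iterative soft-thresholding), i.e. L1-regularized least squares. The dimensions, penalty, and iteration count are arbitrary illustration choices; real MRI CS works on k-space with wavelet/TV sparsity, but the point stands - nothing here is learned from a training set, so nothing can be biased toward "common" anatomy:

```python
import math
import random

def ista(A, y, lam=0.05, iters=300):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    m, n = len(A), len(A[0])
    # Step size 1/L, where L ~ largest eigenvalue of A^T A (power iteration).
    v = [1.0] * n
    L = 1.0
    for _ in range(50):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        L = math.sqrt(sum(c * c for c in w))
        v = [c / L for c in w]
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step, then soft-thresholding (the proximal map of lam*||.||_1)
        x = [math.copysign(max(abs(c) - lam / L, 0.0), c)
             for c in (x[j] - g[j] / L for j in range(n))]
    return x

random.seed(0)
n, m = 20, 12
true_x = [0.0] * n
true_x[3], true_x[11] = 1.0, -0.7      # 2-sparse ground truth
A = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * true_x[j] for j in range(n)) for i in range(m)]
x_hat = ista(A, y)
support = sorted(j for j in range(n) if abs(x_hat[j]) > 0.1)
print(support)  # recovered support; ideally [3, 11]
```

The fixed random measurement matrix plays the role of the undersampled acquisition; the L1 penalty plays the role of the sparsity prior. Swap that prior for a trained network and you get the common-case bias being discussed.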

------
2ndwind
Does anyone know how this differs from what Subtle Medical is doing?
[https://subtlemedical.com/](https://subtlemedical.com/)

~~~
rontoes
Similar approach, although Subtle Medical is not using adversarial training,
just plain old conv-nets with a non-adversarial loss.

~~~
pg_sha18
Mainly driven by FDA feedback, I guess. I've been hearing the FDA is spooked
by GANs in general, which is a good thing.

------
Gatsky
Is anyone else concerned that facebook has an interest in MRI?

------
lokimedes
Could be useful for SAR and SAS imagery as well perhaps.

