
Flaws uncovered in the software researchers use to analyze fM.R.I. data - the_duck
http://www.nytimes.com/2016/08/28/opinion/sunday/do-you-believe-in-god-or-is-that-a-software-glitch.html
======
caminante
Whoops...

    
    
      "there are abundant opportunities for error, particularly when you 
      are relying on software to do much of the work. This was made glaringly 
      apparent back in 2009, when a graduate student conducted an fM.R.I. 
      scan of a dead salmon and found neural activity in its brain when it 
      was shown photographs of humans in social situations. Again, it was 
      a salmon. And it was dead."

~~~
ChuckMcM
I read a great (in the sense that it was laugh-out-loud preposterous) paper
that used this experiment to assert that reincarnation is real: the salmon,
post mortem, had been reincarnated as a person not yet born, and its portal to
the soul was now interpreting pictures as if the fish were both human and
alive. It was a tour de force in circular reasoning. It may have been in the
Journal of Irreproducible Results.

One way to read scientific papers is to deconstruct the apparatus in the
experiment to see if it could actually produce the data in the paper, and if
so, what would have to be true for it to do so. Good papers go through that in
the experimental design section; bad papers gloss over it or toss it out as a
stipulation. "We used the same instrument that everyone else uses to read
brain activity, the fMRI scanner." (bad) "We used a technique called fMRI
scanning, which measures the magnetic resonance of hemoglobin in the blood;
specifically, it measures the difference between oxygen-rich and oxygen-poor
hemoglobin." (much better) Then as a reader you can see that any change in
oxygenation, whether from brain activity or bacterial decomposition, would
show up in such a resonance scan.
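To make the salmon result concrete, here's a toy sketch (mine, not from any of
the papers; the voxel and scan counts are made up): if you test thousands of
voxels of pure noise without correcting for multiple comparisons, a
predictable fraction will look "active" by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_scans = 10_000, 20  # hypothetical scan dimensions

# Pure noise: no real signal anywhere (the salmon is dead).
data = rng.standard_normal((n_voxels, n_scans))

# Each voxel's mean over scans, rescaled to a z-score.
z = data.mean(axis=1) * np.sqrt(n_scans)

# Uncorrected two-sided test at |z| > 1.96 (p < 0.05 per voxel).
false_positives = int((np.abs(z) > 1.96).sum())

print(false_positives)  # roughly 5% of 10,000 voxels, i.e. around 500
```

Cluster-level correction exists precisely to knock those ~500 spurious voxels
back down, which is why bugs in that correction step matter so much.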

~~~
kobayashi
Would love to see that paper!

~~~
wutangson1
Downvote me all you like, but it has to be said that Kobayashi is responding
to McManis :0

~~~
kobayashi
lol, nice catch!

------
lbenes
The authors of the paper feel it is being misinterpreted and have tried to
submit errata to PNAS: [1]

They tried to change the following sentence:

“These results question the validity of some 40,000 fMRI studies and may have
a large impact on the interpretation of neuroimaging results.”

To

“These results question the validity of a number of fMRI studies and may have
a large impact on the interpretation of weakly significant neuroimaging
results.”

Link to discussion in /r/NeuroScience:
[https://www.reddit.com/r/neuroscience/comments/4ri72b/the_so...](https://www.reddit.com/r/neuroscience/comments/4ri72b/the_software_for_fmri_analysis_results_in/)

[1]
[http://blogs.warwick.ac.uk/nichols/entry/errata_for_cluster/](http://blogs.warwick.ac.uk/nichols/entry/errata_for_cluster/)

------
sctb
Recent discussion:
[https://news.ycombinator.com/item?id=12032269](https://news.ycombinator.com/item?id=12032269)

------
irremediable
Much better article and discussion in the link that sctb posted. This article
has a number of issues... it conflates several different problems with fMRI
without really explaining them, then argues that fMRI is worthless -- as
opposed to just needing more groups to follow best practices.

~~~
ramblenode
Yes, this article is really just a laundry list of problems. It barely even
touches on the headline problem, which is the software.

------
coleca
This story reminds me of the Therac-25 (although no deaths were attributed to
this bug). The book "Fatal Defect" should be required reading for any software
engineer.

[https://www.amazon.com/Fatal-Defect-Chasing-Killer-Computer/...](https://www.amazon.com/Fatal-Defect-Chasing-Killer-Computer/dp/0679740279)

------
mwest
Whoops. Certainly an interesting contrasting article to the recent:

Why bad scientific code beats code following “best practices” (2014)
[https://news.ycombinator.com/item?id=12377385](https://news.ycombinator.com/item?id=12377385)

------
brainsturgeon
How does this affect (or does this apply to) the recent "new brain map" paper
from the Human Connectome Project?

[http://www.nature.com/nature/journal/v536/n7615/full/nature1...](http://www.nature.com/nature/journal/v536/n7615/full/nature18933.html)

~~~
irremediable
Without looking into this a lot, probably not much. This parcellation is
described as multi-modal, so it's not using only structural or only diffusion,
but I doubt it's using fMRI much if at all. Even if so, I'm sure they
validated the parcellation against pre-existing parcellations. I'll
double-check this tomorrow when I can access the full paper more easily.

~~~
brainsturgeon
The parcellations in the paper were defined in part on task-dependent
activity. I'm asking a simpler question: did the HCP paper use the
false-positive-prone analysis software?

------
libeclipse
Am I mistaken or has this been on the front page multiple times already?

