
fMRI software bugs could upend years of research - taylorbuley
http://www.theregister.co.uk/2016/07/03/mri_software_bugs_could_upend_years_of_research/
======
maweki
"and along the way they swipe the fMRI community for their “lamentable
archiving and data-sharing practices” that prevent most of the discipline's
body of work being re-analysed."

That's quite funny. My girlfriend recently finished her master's thesis on
data sharing for neuroscience data and created a model for universal access to
research data across institutions, but came to the conclusion that making
researchers share their data is a bigger hurdle than actually implementing the
system.

The main reason for the lack of sharing, she postulated, is that studies (which
generate funding for the researcher who publishes them) can be done using just
the raw data, so researchers who create data want to publish all the
studies/papers themselves (because "they" paid for the data acquisition). They
are also afraid to publish the underlying data, since keeping it private makes
it harder for others to falsify their results, which would, in their opinion,
lead to funding going away.

Edit: of course there are privacy issues for the test subjects as well.

~~~
vanderZwan
Meanwhile, the research group I work for publishes all of its data. In fact,
the main reason its professor hired me as a programmer/interaction designer
(through HN, no less) is that the next project will produce a mountain of data
and he does not want it to collect dust after the project is done. I am
supposed to make the data even more accessible to other researchers through a
web interface.

Aside from the ethical motivations, the other benefits are pretty obvious: at
least one postdoc I know of in the research department was recruited because he
found a completely different use for data shared in the past and published a
paper based on his own analysis.

(The research group works with mouse brains, so privacy is not an issue.)

~~~
a_bonobo
I guess it depends on the research field and journal - in biology, most
journals require you to deposit your raw genomic reads in the Sequence Read
Archive, though this is still sometimes missed by editors and reviewers.

A random example of the data-deposit requirement, from the Journal of Plant
Physiology: "If new sequence data are reported, insert the following:
'Sequence data from this article have been deposited at XXX under accession
number(s) YY000000.'"
[http://cdn.elsevier.com/promis_misc/jpp_Instruction.pdf](http://cdn.elsevier.com/promis_misc/jpp_Instruction.pdf)

Another random example from the Nature Group
[http://www.nature.com/authors/policies/availability.html](http://www.nature.com/authors/policies/availability.html)

AFAIK only a few funding bodies require the deposit of raw data...

------
randcraw
As someone who works in the biomedical imaging business and is also a fan of
philosophy, I think this news will matter more to folks in the latter camp.
For a couple of years now, philosophers have insisted that fMRI images prove
there is no such thing as free will. Today's revelation should put an end to
that whole line of reasoning (and to the absurd amount of fatalism it has
engendered).

(The back story: Apparently fMRI showed motor signals arising _before_ the
cognitive / conscious signals that should have created them, assuming we
humans have free will. This has led to the widely adopted belief among
philosophers that we humans act before we think, thus we don't and can't act
willfully and freely. To wit, science has proven there is no such thing as
free will; we're all just automatons.)

Just this week there was an article in The Atlantic on how we all must accept
that we're mere robots and we don't really choose our actions (nor can we
choose to believe in a god).

Ah well. It seems philosophers STILL haven't learned the importance of
applying the scientific method before leaping to a conclusion -- sometimes if
only to check that someone else didn't abuse the scientific method.

~~~
hammock
What are the actionable insights that have come out of fMRI studies? Even when
properly conducted (no false positives), the conclusions that get drawn have
always felt dubious to me. Basically, you are looking for regions of the brain
that light up with various stimuli. Except that's as far as it goes; we don't
yet understand much beyond that.

It's as if you figure out that your car is making a funny sound: you can
pinpoint where it is coming from, you can even reproduce the sound on demand -
but you have no idea WHY it sounds the way it does.

~~~
Tomte
I was a participant in a linguistics study [1] that compared native Polish and
native German speakers who were put into an fMRI scanner and played speech
sounds, both Polish and German ones.

It clearly showed that speech sounds from your native language are processed
in a different part of the brain than non-native speech sounds.

Yes, that does not explain much. But it leads to all kinds of questions. And I
found that fascinating.

[1] Silvia Lipski, Neurosci Lett. 2007 Mar 19, "A magnetoencephalographic
study on auditory processing of native and nonnative fricative contrasts in
Polish and German listeners."

~~~
shmel
A small correction: the cited study used MEG, not fMRI, as its modality.

The conclusion itself doesn't look very surprising to me. We already know that
sound processing in general is the same in both hemispheres, while speech
processing is very lateralized. By continuity, there should be a boundary
where speech-like sounds begin to sound like speech and are therefore
processed differently between the hemispheres. This study seems to estimate
that boundary.

~~~
Tomte
Thanks for the correction!

I misremembered because my professor's group did a lot of fMRI work as well,
and in the seminars we mostly talked about that.

Speech/language and the brain is fascinating. There are resident linguists at
major hospitals who are consulted before neurosurgery. Speech sounds are
processed faster than other sounds in our brain. Rearranging sentences from
active voice to passive voice, silently in your head, leads to easily seen
activity in fMRI, distinct from non-linguistic mental actions. And so on.

------
jballanc
The real takeaway lesson from this research should be the _vital_ importance
of Open Data to the modern scientific enterprise:

> "lamentable archiving and data-sharing practices" that prevent most of the
> discipline's body of work being re-analysed.

Keeping data private before publication is (at this point in time)
understandable. Once results are published, however, there is no excuse for
not depositing the raw data in an open repository for later re-evaluation.

~~~
bpchaps
Yeah, it's pretty frustrating. I tried getting my own data from an fMRI study,
but was told that the signed paperwork specifically disallowed this sort of
thing. Not even my doctor could request it. The only option I have is to
completely withdraw myself from the study, but that would be a pretty dick
move. The other option is to wait until next year, when the study is wrapped
up, and then request it. Though I'm not even sure that'll get me the actual
data...

~~~
ihnorton
FWIW, typical raw fMRI data is mostly useless unless you know the design
parameters of the study (stimulus timing, imaging onsets, etc.), though there
are some interesting data-driven analysis techniques, especially for resting-
state data.

Many research studies in the U.S. are required to do "structural" scans (high-
resolution T1 or T2) and send them for a safety read by a radiologist, for
liability reasons. At the very least, this scan should be available directly
from the hospital imaging department. If you are lucky, all the data will have
been sent through the hospital PACS and the imaging department will
indiscriminately dump everything. At a research-only center it might be more
complicated, because such images are likely sent off-site for the safety read.

~~~
bpchaps
Right, they did actually give me the surface scan. It was really, really cool,
though with really low resolution.

What got me interested in getting the data was this: [1]. It might be
difficult to work with, but there was a lot of motivation to learn it.

[1]
[http://nbviewer.jupyter.org/github/GaelVaroquaux/nilearn_cou...](http://nbviewer.jupyter.org/github/GaelVaroquaux/nilearn_course/blob/master/rendered_notebooks/1_Introduction.ipynb)

------
nonbel
I would send it back and ask for a detailed description of the null hypothesis
they are testing, because they are not clear on this point at all:

>"All of the analyses to this point have been based on resting-state fMRI
data, where the null hypothesis should be true."

They are not careful to explicitly define this null hypothesis anywhere, but
earlier in the paper they describe some issues with the model used:

>"Resting-state data should not contain systematic changes in brain activity,
but our previous work (14) showed that the assumed activity paradigm can have
a large impact on the degree of false positives. Several different activity
paradigms were therefore used, two block based (B1 and B2) and two event
related (E1 and E2); see Table 1 for details."

This means that they actually _know the null model to be false_ and have even
written papers about some of the major contributors to this:

>"The main reason for the high familywise error rates seems to be that the
global AR(1) auto correlation correction in SPM fails to model the spectra of
the residuals"
[http://www.sciencedirect.com/science/article/pii/S1053811912...](http://www.sciencedirect.com/science/article/pii/S1053811912003825)

If the null hypothesis is false, it is no wonder they detect this. In fact, if
the sample size were larger (they used only n=20/40 here), they would get
near-100% false positive rates. The test seems to be telling them the truth;
it is a trivial truth, but according to their description it is correct
nonetheless.

Edit: I was quoting from the actual paper.

[http://www.pnas.org/content/early/2016/06/27/1602413113.full](http://www.pnas.org/content/early/2016/06/27/1602413113.full)

~~~
nickledave
See mattkrause's comment below. I think you might not be understanding what
they're testing. They're taking _only_ resting-state data, randomly sorting
some of it into "pretend active-state data", and then asking whether they get
any statistically significant difference between these two groups when they
_shouldn't_. But they _do_. That means the tests they used, the same tests
many authors use, are giving false positives. The null hypothesis is "there
will be no difference between resting-state data and whatever data we get when
we ask the subject to do some activity". They can "reject" that hypothesis
using only randomly shuffled resting-state data, so there's something wrong
with the stats packages themselves.
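
(A minimal toy sketch of that logic in Python, with made-up per-subject
numbers instead of real scans; the group sizes, the test, and alpha are
illustrative and not the paper's actual pipeline. With well-behaved data the
rejection rate lands near the nominal 5%; the paper's point is that with real
resting-state scans and cluster-level inference it did not.)

    # Toy sketch (not the authors' pipeline): repeatedly split "subjects" who
    # did nothing into two fake groups, test for a group difference, and count
    # how often the test comes out significant. A valid procedure should
    # reject in roughly 5% of repetitions at alpha = 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_per_group, n_repeats, alpha = 20, 1000, 0.05

    false_positives = 0
    for _ in range(n_repeats):
        # stand-in for one summary value per subject (e.g. mean signal in a region)
        resting = rng.normal(size=2 * n_per_group)
        group1, group2 = resting[:n_per_group], resting[n_per_group:]
        _, p = stats.ttest_ind(group1, group2)
        false_positives += p < alpha

    print(false_positives / n_repeats)  # close to 0.05 for well-behaved data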

~~~
nonbel
I think that, like mattkrause (and the authors of the current work), you have
forgotten that the null hypothesis is something larger than condition 1 ==
condition 2. There are various other components, usually (somewhat
misleadingly) referred to as assumptions, that can also cause the predictions
derived from the null model to deviate from the data.

For something like a t-test, one parameter value (i.e., the mean of the
distribution) gets all the attention. But this is wrong; it is only one part
of the model being tested.
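
(A hedged toy example of that point, in Python: the data below have a true
mean of exactly zero, so the "interesting" part of the null holds, but the
observations are AR(1)-correlated while the t-test assumes independence. The
AR(1) coefficient and lengths are made up for illustration, not taken from the
paper.)

    # The t-test's null model is more than "the mean is zero": it also assumes
    # independent observations. Violate only that assumption and it rejects
    # far more often than the nominal 5%, even though the mean really is zero.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_repeats, n_points, rho, alpha = 2000, 200, 0.9, 0.05

    rejections = 0
    for _ in range(n_repeats):
        noise = rng.normal(size=n_points)
        x = np.zeros(n_points)
        for t in range(1, n_points):
            x[t] = rho * x[t - 1] + noise[t]   # AR(1) series, true mean 0
        _, p = stats.ttest_1samp(x, 0.0)       # test pretends points are independent
        rejections += p < alpha

    print(rejections / n_repeats)  # well above 0.05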

~~~
nickledave
I'm aware that there are assumptions implicit in the null hypothesis. You are
the one who keeps saying the authors don't even realize what those assumptions
are, but you haven't pointed out anything beyond what the authors themselves
said. What are the other faulty assumptions you've identified that the authors
are missing? I guess you're referring to some sort of issue with power that
you mentioned in your previous comment?

~~~
nonbel
>"You are the one who keeps saying the authors don't even realize what those
assumptions are, but you haven't pointed out anything besides what the authors
said."

I am saying they are confused because they say "the null hypothesis should be
true" under their conditions, when they know for certain that it is false!
Therefore these "false positives" are not false at all. They are totally legit
"true positives".

These authors are blaming the statistical test when the problem lies with
their crappy choice of null hypothesis.

There may very well be other issues, but I have not inspected the code or done
anything other than read the description in the paper.

~~~
mattkrause
Okay, why do you think the null is crappy?

In the absence of any information whatsoever, the idea that an analysis
pipeline produces false positives at or below its nominal rate seems pretty
reasonable.

But, they have some prior information.

Let's look at the E1 paradigm (2 sec on, 6 sec off). In the NeuroImage Paper
(Figure 1A, 2A), the FWER on voxel tests is statistically indistinguishable
from 5%. In other words, it's appropriately sized. They replicate this result
in the rightmost panel of the _PNAS_ paper, where it's also within the 95% CI
around 5%.

Now, for the cluster inference, look at the left-most panel of _PNAS_ Figure
1A. The E1 paradigm is the green bar. Using the defaults for FSL (left panel)
and SPM (middle panel), the FWER is about 30% and 25%, respectively. That is
_not_ good.

I agree that the block designs look awful in the _NeuroImage_ paper, which
makes it hard to say whether this phenomenon makes things worse in the _PNAS_
data. It's unfortunate that those numbers are going to be in the press release
(70% is much sexier than 30%), but 6x the nominal rate is still bad.
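
(For reference, "within the 95% CI around 5%" is just the binomial uncertainty
from running a finite number of simulated group analyses. A rough back-of-the-
envelope, assuming on the order of 1,000 analyses per configuration - treat
that count as my assumption, not a quote from the papers:)

    # Normal-approximation 95% interval around the nominal 5% FWER,
    # given n simulated analyses (n is an assumed, illustrative number).
    import math

    nominal, n_analyses = 0.05, 1000
    half_width = 1.96 * math.sqrt(nominal * (1 - nominal) / n_analyses)
    print(f"{nominal - half_width:.3f} .. {nominal + half_width:.3f}")  # ~0.036 .. 0.064
    # Observed cluster-level rates around 25-30% fall far outside this band.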

~~~
nonbel
>"Okay, why do you think the null is crappy?"

Because their goal is to determine if some sort of treatment has an effect. If
the null is false for other reasons, then statistical significance can't be
used to support the existence of a treatment effect. So these would be
pointless, pedantic calculations.

~~~
e12e
@nonbel: Are you saying that when an fMRI is taken of a subject at rest,
twice, what the data here show is that the two scans are likely to be
interpreted as a subject in two different states? And that the researchers are
ignoring that fact, instead insisting that we _should_ be able to tell that
these two states are the same (and then, perhaps, tell them apart from other
states)?

I see a few ways this could come about: perhaps the way we record and model
activity doesn't conform to the distribution we assume (I'm not sure if they
assume a normal distribution here, or if that even makes sense given the
nature of the data) -- or perhaps the issue is with taking 3D/4D data and
"turning it into" a statistical model that is easy to work with (like the
normal distribution)?

At any rate, it does seem that they're saying we can't tell that one
individual at rest, measured twice, is in the same (rest) state both times?
Hence, their null hypothesis is bunk?

~~~
nonbel
>"Are you saying that an fMRI, when taken of a subject at rest, twice - then
what the data here show - is that this is likely to be interpreted as a
subject in two different states?"

Yes, that is what they seem to be saying. I didn't read the code, or even the
paper very closely. However, from what I quoted, they seem to be saying there
is some assumption about autocorrelation that introduces what they call "false
positives".

I am saying they have mischaracterized the problem. These are true positives.

------
honkhonkpants
Doesn't sound like a straight-up bug, but rather like unsound statistical
methods, which can happen with or without software. You get the same problem
with finite element analysis software: the operator has to be aware of all the
assumptions baked in and has to ensure that the input conforms to them.

~~~
greenyoda
I'd say it's a bug, since the unsound statistical methods are incorporated
into the three most common software packages that are specifically intended to
be used for fMRI analysis. If these methods don't work well for fMRI data,
fMRI software shouldn't be using them.

From the paper: _" Using this null data with different experimental designs,
we estimate the incidence of significant results. In theory, we should find 5%
false positives (for a significance threshold of 5%), but instead we found
that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can
result in false-positive rates of up to 70%."_
([http://www.pnas.org/content/early/2016/06/27/1602413113.full](http://www.pnas.org/content/early/2016/06/27/1602413113.full))

Choosing a bad algorithm for your software is as much of a bug as a null-
pointer crash, and is something that needs to be tested for when building
software.

~~~
sverige
And it's a little disheartening that it took this long to find such a
significant bug. Lots of work has gone to waste. I wonder if the data can be
rerun? It sounds like much of the raw data wasn't archived correctly, or at
all.

~~~
nonbel
The real "bug" here isn't even really statistical. It is the usual logical
issue that has been well known for years (yet the problem has only grown and
spread):

"In most psychological research, improved power of a statistical design leads
to a prior probability approaching 1/2 of finding a significant difference in
the theoretically predicted direction. Hence the corroboration yielded by
“success” is very weak, and becomes weaker with increased precision.
“Statistical significance” plays a logical role in psychology precisely the
reverse of its role in physics." [http://cerco.ups-
tlse.fr/pdf0609/Meehl_1967.pdf](http://cerco.ups-
tlse.fr/pdf0609/Meehl_1967.pdf)

In other words, the fMRI researchers have been testing a null hypothesis known
to be false beforehand, so statistical significance tells us nothing of value,
and definitely nothing about the theory of interest.

This paper is really interesting, because the authors don't seem to realize
that this is what they are discovering. Getting a "statistically significant"
result is the correct answer for the tests to return; these are not false
positives. The problem is a poorly chosen null model.

~~~
mattkrause
Are you sure about that? As I understand it, they are testing a really-really-
null hypothesis.

Specifically, they started with some resting state data (a condition where the
subjects just lie there, attempting to remain alive). They then took this data
and imposed fake task structures over it, as if the subject were doing two
different things (e.g., condition #1 starts at t=0, t=1 min, t=3 min, t=6 min;
condition #2 starts at t=2, t=4, t=5 min). Once this is done, the task+data is
submitted to their analysis pipeline.

The null hypothesis (fake condition #1 == fake condition #2) _has_ to be true,
by construction. This assumes that the original data doesn't have any
structure, which may or may not be true (they discuss it a bit towards the end
of the paper).
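
(Roughly, "imposing a fake task structure" means building regressors for made-
up conditions and handing them, with the untouched resting scan, to the usual
GLM pipeline. A minimal sketch using the onset times from the comment above;
the TR, run length, and block duration are my assumptions, not the paper's
values.)

    import numpy as np

    tr = 2.0                                   # assumed repetition time, seconds
    n_vols = 300                               # assumed run length (10 minutes)
    times = np.arange(n_vols) * tr

    def boxcar(onsets_min, duration_s=10.0):   # assumed block duration
        reg = np.zeros(n_vols)
        for onset in np.asarray(onsets_min, dtype=float) * 60.0:
            reg[(times >= onset) & (times < onset + duration_s)] = 1.0
        return reg

    cond1 = boxcar([0, 1, 3, 6])               # fake condition #1
    cond2 = boxcar([2, 4, 5])                  # fake condition #2
    design = np.column_stack([cond1, cond2, np.ones(n_vols)])  # + intercept
    # The subject never saw either "task", so any detected difference between
    # the cond1 and cond2 effects is a false positive by construction.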

~~~
nonbel
>"The null hypothesis (fake condition #1 == fake condition #2) has to be true,
by construction."

The null hypothesis is not only that condition 1 == condition 2, it involves
other assumptions being made. From their own description, they _knew_ it was
false before even doing this analysis and used this knowledge to design the
study. Apparently there is some incorrect assumption about autocorrelation
being made.

It seems the authors did not really grasp what hypothesis they were testing,
leading to their confused (but still productive) description of the problem. I
go into this more in this post:
[https://news.ycombinator.com/item?id=12032772](https://news.ycombinator.com/item?id=12032772)

------
UVDMAS
The paper has been rebutted by other researchers who argue that the original
results hold:

"This technical report revisits the analysis of family-wise error rates in
statistical parametric mapping - using random field theory - reported in
(Eklund et al., 2015). Contrary to the understandable spin that these sorts of
analyses attract, a review of their results suggests that they endorse the use
of parametric assumptions - and random field theory - in the analysis of
functional neuroimaging data. We briefly rehearse the advantages parametric
analyses offer over nonparametric alternatives and then unpack the
implications of (Eklund et al., 2015) for parametric procedures."

[http://arxiv.org/abs/1606.08199](http://arxiv.org/abs/1606.08199)

~~~
dewarrn1
Sort of. The rebuttal (by Flandin and Friston) suggests that _properly-
applied_ parametric statistics of the kind they favor are valid. Eklund et al.
wouldn't disagree with that because their own findings support it, but they
would point out that not all researchers necessarily adhered to the
conservative statistical approach that F&F discuss. More specifically, both
sets of authors describe the importance of using a conservative "cluster
defining threshold" to identify spatially contiguous 3D blobs of brain
activation. Eklund et al. use their findings to raise the question of whether
the bulk of fMRI reports were conservative in this regard.

------
nerdponx
"Further: “Our results suggest that the principal cause of the invalid cluster
inferences is spatial autocorrelation functions that do not follow the assumed
Gaussian shape”."

This has nothing to do with bugs and everything to do with bad statistical
analysis. It's Google Flu all over again.
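
(For intuition about the quoted cause: a Gaussian, i.e. squared-exponential,
spatial autocorrelation function dies off much faster at long distances than a
heavier-tailed one, so noise-only clusters of correlated voxels end up larger
than the parametric theory expects. The length scale below is made up purely
for illustration.)

    import numpy as np

    dist_mm = np.linspace(0, 30, 7)
    gaussian_acf = np.exp(-(dist_mm / 8.0) ** 2)   # shape assumed by the theory
    heavier_acf = np.exp(-dist_mm / 8.0)           # e.g. an exponential tail

    for d, g, h in zip(dist_mm, gaussian_acf, heavier_acf):
        print(f"{d:5.1f} mm   gaussian {g:.3f}   heavier-tailed {h:.3f}")
    # At 20-30 mm the Gaussian form is essentially zero while the heavier tail
    # is not, which is the mismatch the quoted sentence points to.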

~~~
AlexCoventry
It's all relative. At the right level of abstraction, bad statistical analysis
_is_ a bug.

------
williamscales
"Our results suggest that the principal cause of the invalid cluster
inferences is spatial autocorrelation functions that do not follow the assumed
Gaussian shape."

In other words, researchers cut corners. You should never assume that
something is a certain way without rigorously proving it. How did these papers
make it past peer review?

~~~
trentmb
n=30 ought to be enough for anybody

~~~
psycr
Depends on how big the effect size is. For putting a loaded revolver next to
your head and pulling the trigger, n=30 is plenty.
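
(The quip in numbers: a quick simulated power check in Python, with
illustrative Cohen's d values. At n=30 per group a huge effect is detected
essentially every time, while a small one usually isn't.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def power(d, n=30, reps=2000, alpha=0.05):
        hits = 0
        for _ in range(reps):
            a = rng.normal(0.0, 1.0, n)
            b = rng.normal(d, 1.0, n)          # shift group b by d standard deviations
            hits += stats.ttest_ind(a, b).pvalue < alpha
        return hits / reps

    print(power(2.0))   # "revolver-sized" effect: power near 1.0
    print(power(0.3))   # small effect: power well under 0.5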

------
greenyoda
Link to original paper:
[http://www.pnas.org/content/early/2016/06/27/1602413113.full](http://www.pnas.org/content/early/2016/06/27/1602413113.full)

------
pfooti
The dead salmon study seems relevant here in a discussion of how fMRI is used,
especially the theory-ladenness of observations.

[http://blogs.scientificamerican.com/scicurious-
brain/ignobel...](http://blogs.scientificamerican.com/scicurious-
brain/ignobel-prize-in-neuroscience-the-dead-salmon-study/)

~~~
nkurz
Maybe I missed the link, but the full text of the readable, relevant, and
enjoyable article that blog post discusses is here:

 _The principled control of false positives in neuroimaging_

 _Bennett, Wolford, Miller 2009_

[http://scan.oxfordjournals.org/content/4/4/417.full](http://scan.oxfordjournals.org/content/4/4/417.full)

~~~
pfooti
Thanks. I'm on a mobile in a foreign land, so had issues tracking down a
useful link.

------
Toenex
And from a couple of days before.

[https://news.ycombinator.com/item?id=12019205](https://news.ycombinator.com/item?id=12019205)

------
iamleppert
Just goes to show that when you're doing science you need to test and validate
your experimental methodology, including the tools you use. In computer
vision, it's common to need to do some kind of calibration for many
algorithms, which can usually reveal statistical errors or problems. I wonder
why none of the researchers thought to do some very simple validation of the
data?

And I wonder if the software was at one point correct and this bug was
introduced later? Many times it feels like once a company does a formal
scientific validation, it never does it again, despite having engineers
constantly working on the code...

------
Trombone12
Well, I think the problems with interpreting fMRI scans have been at least
vaguely known since that time a dead salmon activated its neurons when asked
to judge the emotional state of a person from a photo; this was in 2009.

[http://www.wired.com/2009/09/fmrisalmon/](http://www.wired.com/2009/09/fmrisalmon/)

~~~
dewarrn1
The dead salmon article is a bit of a red herring here. It's a clear
demonstration that a shoddy statistical approach can undermine fMRI findings.
Critically, the implications of the current paper extend even to research that
has been rigorously analyzed using field-standard software. Statistical issues
are at the heart of both papers, but the newer paper identifies problems that
are subtle and ubiquitous.

------
chrramirez
If this turns out to be true, it could be one of the most expensive bugs in
computing history.

~~~
FrojoS
Only if you assume that those studies were of value without the error.
Obligatory xkcd: [https://xkcd.com/1453/](https://xkcd.com/1453/)

------
bjourne
Does this mean studies like these are likely bunk?

    
    
      http://kangleelab.com/articles/Paper0002_0009.pdf
      https://med.stanford.edu/news/all-news/2016/05/moms-voice-activates-different-regions-in-children-brains.html
      https://www.theguardian.com/science/2015/apr/21/babies-feel-pain-like-adults-mri-scan-study-suggests
      https://news.brown.edu/articles/2013/06/breastfeeding
    

And by bunk I mean they don't show what they claim.

------
iLoch
" _How_ X looks like" \-- what's with this gramatical mistake? I see it
everywhere. Is it a regional thing?

~~~
strictfp
It's a common error for native Swedes; it's a direct translation from Swedish.
It could be true for other languages as well - I believe most Germanic
languages use "how".

~~~
waqf
English uses "how" too: it's correct to say "how X looks", with no "like".

"what … like" is synonymous with "how".

------
Alex3917
It's already been known for several years that almost all MRI brain scan
research is wrong. What exactly is new here?

~~~
JumpCrisscross
> _almost all MRI brain scan research is wrong_

Source for a layman?

~~~
goalieca
I wasn't in fMRI research, but some fellow students in my lab were. I do know
there was a study with a dead salmon that showed up as having "active" brain
regions. I'd have to find it, but I'm on my phone.

It's easy to imagine that the main difficulty here is mapping signal to actual
thoughts and regions. It's really complex biology and physics to reduce.

~~~
mattkrause
The salmon thing was a statistical problem, not a technical one.

fMRI data generate an incredible number of data points: imagine a movie, but
in three dimensions, so you get a sequence of x/y/z-volumes. A typical scan
has ~128 to 256 voxels in each spatial dimension, for ~1 million voxels per
volume.

This means that if your analysis contains voxel-by-voxel tests, you're going
to be running a huge number of them. Even if each test has a fairly low false-
positive rate (say 0.1%), there are still a huge number of tests, and thus, a
huge number of false positives.

There are principled ways of correcting for this; there are also hacky "folk
methods" like setting a more stringent false positive threshold. The fish
poster argues that the latter doesn't work, using a deliberately silly
example.

[http://prefrontal.org/files/posters/Bennett-
Salmon-2009.pdf](http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf)
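
(A toy version of that arithmetic in Python, with far fewer voxels than a real
scan; the thresholds and counts are illustrative only.)

    # Pure-noise "voxels", tested one by one: lots of them clear a per-test
    # threshold, a stricter folk threshold still lets some through, and a
    # principled familywise correction (Bonferroni here) removes nearly all.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_voxels, n_scans = 100_000, 50

    noise = rng.normal(size=(n_voxels, n_scans))
    pvals = stats.ttest_1samp(noise, 0.0, axis=1).pvalue

    print(int((pvals < 0.05).sum()))             # ~5,000 "active" voxels, all false
    print(int((pvals < 0.001).sum()))            # stricter threshold: still ~100
    print(int((pvals < 0.05 / n_voxels).sum()))  # Bonferroni: usually 0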

~~~
Alex3917
> The salmon thing was a statistical problem, not a technical one.

How is that relevant to the question of whether or not most fMRI research is
wrong?

~~~
mattkrause
1. It's trivially checkable for any existing individual paper. You skim the
methods or search for "multiple comparisons" or "false discovery rate" or
something like that.

2. For papers where this wasn't done, the already-collected data can be
reanalyzed. In fact, you can often correct it without access to the raw data
(at least approximately).

3. It means that future papers (where "future" is somewhere after 2008-9 here)
can be done correctly from the get-go; it's not a limitation of the technique
or the signal itself.

~~~
Alex3917
If you believe that the majority of papers in the field are not incorrect,
what are the assumptions behind your estimation of the numbers?

------
jamesrom
I thought that fMRI hasn't been taken seriously since they scanned a dead fish
and found it was thinking.

~~~
omginternets
>fMRI hasn't been taken seriously

I don't know where you got that (very strange) impression.

The dead fish paper showed that multiple comparisons yield false positives,
which is a problem much broader than fMRI methodology. With proper methods,
fMRI is a very reliable and insightful tool. It's just very difficult to do
properly.

------
seesomesense
fMRI is great for generating headlines and pretty pictures for the popular
media.

However, most neurologists view the vast majority of fMRI research as junk
science.

------
jey
Title should really read "fMRI" instead of "MRI". The referenced journal
article is titled "Cluster failure: Why fMRI inferences for spatial extent
have inflated false-positive rates".

~~~
_delirium
Fwiw there's a response to an earlier (preprint) version of that paper from
some of the developers of the packages in question.

Preprint version of the "Cluster failure" paper, from last year:
[http://arxiv.org/abs/1511.01863](http://arxiv.org/abs/1511.01863)

Response: [http://arxiv.org/abs/1606.08199](http://arxiv.org/abs/1606.08199)

~~~
dewarrn1
Flaundin & Friston's response is interesting because it essentially endorses
the findings of Eklund et al. (except for one element of the Eklund analysis
that they suggest is a modest error). F&F believe that by setting one
parameter correctly (i.e., using a conservative cluster forming threshold) the
validity of their preferred parametric statistical approach is upheld. Eklund
et al. might quibble because their take-home message is that non-parametric
methods should be used instead, but their findings are not misrepresented by
F&F.

Regardless, an open and important question is how often other authors used a
sufficiently conservative cluster forming threshold for their fMRI analyses.
If nothing else, Eklund et al. will cause future reports to be more cautious
in this regard.

------
SubiculumCode
What a crap article.

------
jcoffland
This is why I always eat my science with a large helping of humble pie and
extra skepticism.

------
multinglets
So people don't perform complex, goal-focused motor tasks without having a
goal ahead of time after all.

Wow, philosophy people.

EDIT: Cry about it all you want. It won't change the fact that in 100 years
people will look back and wonder if an entire academic discipline was
afflicted with some form of literal mental retardation.

