
Parasite test shows where validation studies can go wrong - HandleTheJandal
http://www.nature.com/news/parasite-test-shows-where-validation-studies-can-go-wrong-1.16527
======
tjradcliffe
There are two things humans are really bad at:

1) Thinking

2) Communicating

It's difficult to tell which one we're worse at because they depend on each
other. Muddy thoughts can be clearly communicated, or clear thoughts can be
poorly communicated, and it has the same result. In the usual case we have
muddy thoughts that we communicate badly. Hilarity ensues.

In this case, the replication team didn't initially use precisely the same
molecule as the original work because the original team just assumed the
replication team would process the raw molecules the same way they had.

This is a very common failure mode in both thinking and communicating: we
implicitly assume something, then proceed as if it were generally true (it
isn't) or that everyone knows it (they don't). That's not the only failure
mode, but definitely a very popular one.

Good scientific communication involves over-communicating nit-picky details,
and even then it can be stymied by the use of conventions that are less
generally known than the authors assume.

I got a call once from someone working on a similar experiment to one I'd
published, asking where a particular factor of two had come from in an
equation: I had left implicit the limits on an integral that some people took
over a full sphere and some people took over a half-sphere (with a factor of
two due to symmetry). He basically wanted to make sure I hadn't screwed up,
which would have given an extra factor of two in my result and explained a
difference with his. If I had been explicit about the limits of integration
I'd used it would have saved a phone call.
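To make the ambiguity concrete (a hypothetical illustration of the kind described, not the actual equation from the paper): for an integrand symmetric about the equatorial plane, the polar integral over a full sphere equals twice the integral over a hemisphere, so an unstated choice of limits hides exactly a factor of two.

```latex
% Full sphere: polar angle runs 0 to pi.
\int_{0}^{\pi} \sin\theta \, d\theta = 2
% Half sphere plus symmetry: polar angle runs 0 to pi/2,
% with an explicit factor of 2 out front.
2 \int_{0}^{\pi/2} \sin\theta \, d\theta = 2
```

Both conventions give the same answer here, but if an author quotes only the unadorned integral, a reader cannot tell whether the symmetry factor has already been absorbed into it.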

That was an easy and obvious case. When attempting to express complex ideas
that you yourself are frequently unsure of (that's the nature of research)
things can get far, far worse, to the point where it's fairly amazing we can
communicate our imperfect thoughts at all.

~~~
has2k1
I do not see it as humans being "really bad" at those things. Compared with all other mammals and animal species, we are remarkably good at them: we engage in thought and communication well beyond what is necessary for immediate survival. So we have deliberate communication (necessitated by purpose and aided by thought) across time and space.

I see it instead as: the purposes to which thought and communication are put are complicated.

That is a nice supplementary anecdote.

~~~
Houshalter
Compared to animals sure. But we are the first thing to evolve intelligence,
it's unlikely we are anywhere near optimal at it. It'd be like the first
amphibian believing it was good at running.

------
Retric
It saddens me that people look at this and don't see a new experiment
demonstrating that the initial paper was wrong. Instead they treat it as if the old
paper was right and the new experiment just added pointless clarification.

Sorry, if someone following your procedure as written would fail to replicate
your results then your paper is flawed and it should be withdrawn. Otherwise
labs have a huge incentive to fudge things slightly so they get a little
longer to explore the new information without competition.

~~~
Fomite
Or, alternately, since it's a minor fix, you could just issue an erratum, which
is a thing journals do all the time.

~~~
Retric
Sure, but the root problem IMO that still leads to a wide range of perverse
incentives. Science has a huge push to be the first to publish and little
emphasis on accuracy. Which leads to a lot of crap being published making some
disciplines nearly worthless.

~~~
Fomite
And suggesting that a minor _non-error_, a simple lack of detail in a paper
that could be instantly fixed with an erratum, should instead get the paper
retracted is a major overreaction in the other direction.

An emphasis on reproducibility is good. An emphasis on going back and making
sure papers say what we think they said is good. Automatic retraction is like
zero-tolerance policies in schools - you'll get absurd and entirely
foreseeable outcomes from wanting to appear to be tough, and not actually
solve the problem of a screwy incentive system.

That a paper that _can_ be reproduced and fixed with a minor erratum, from
authors who were fully cooperative with the replicating study, is the one
being pointed at for that kind of treatment is illustrative of exactly that
problem.

------
texuf
This sounds like what would happen if I built a new javascript library, but
instead of publishing the source code, I just wrote a blog post describing how to
build it again in natural language. I've never been in a lab or witnessed one
of these experiments, but wouldn't it be great if you could write up a set of
instructions and feed them into a machine anywhere in the world?

