
A researcher's response to people pointing out 150 errors in four of his papers - frgtpsswrdlame
http://andrewgelman.com/2017/02/03/pizzagate-curious-incident-researcher-response-people-pointing-150-errors-four-papers-2/
======
jtraffic
Wansink applied to the PhD program I'm in for graduate school and didn't get
in, and then he applied for a professor position at my school and didn't get
that either. One of the faculty members told me she thought it was "a missed
opportunity."

One thing I have trouble figuring out: is this indicative of a long-term
trend? Or did his success create an expectation he couldn't keep up with, so
that he felt a need to rush (or cheat)?

Incidentally, there is another professor at my school who has built a career
out of exposing similar problems in research (especially p-hacking). It takes
a brave person to inhabit that niche. It is a service to the research
community, but I'll bet he sits alone at conferences.

------
sverige
I went to high school with Brian Wansink. He is brilliant, funny, and affable.
I could hear his voice speaking his responses to the objections raised, and
snickered a little bit.

I'm sure that Wansink cares about his research and its quality. He's that sort
of person. I don't know what happened with these papers, but it seems likely
that answering bloggers publicly would not serve the greater good.

It seems to me that all the hand-wringing about these four papers misses the
bigger picture: What can we do to improve the practice of science across the
board? Why is there almost no field of study which isn't tainted by the
politics of left vs. right?

My suggestion is that it started when people began to believe the idea that
all political and social questions are best served by some sort of applied
scientific analysis. I counter that, in fact, scientists are some of the worst
people to ask for solutions to political and social problems, since most of
them have almost no understanding of the relevant fields of study, which
mainly include history and philosophy, and perhaps psychology. Even fields
like theology, linguistics, and literature and the arts have more to say about
most political problems than, say, physics or chemistry or meteorology,
because they speak to human behavior rather than the behavior of inanimate
substances.

Please understand that I say all this not to provoke an argument, necessarily.
But as someone whose formal studies were mostly in political history,
listening to HN talk about politics is a lot like what it must be for most on
HN to listen to people on TV news shows talk about tech in general and
software in particular: they sort of get it, but not really, and they are
certainly not as expert as they imagine themselves to be.

~~~
apathy
What does any of this have to do with politics?

Wansink doesn't appear to be troubled by his prodigious contributions of
unreliable results to the literature, nor does he appear inclined to correct
them. All of science and discovery is a joke to him.

Left or right, the troubled extremes of politics start where respect for
evidence and truth ends. Scientists are, nominally, people who search for
truths about our universe with some degree of rigor, some respect for
evidence, some appetite for truth (and usually an insanely competitive nature,
but I digress). Absent that, what's the point?

This isn't about society, in my opinion. It's about having some respect for
evidence and truth. Without these, science isn't science. And what Wansink is
doing isn't science. Cold fusion had more rigor. Yet he will probably be
rewarded with grants of taxpayer money, taken from fellow citizens under
threat of force. Meanwhile, pediatric bone marrow transplant labs will shut
down for "lack of funding". Or, more accurately, for lack of standards in
science.

Yes, it affects people when dishonesty is accepted.

------
tyingq
I imagine Wansink does get it. Now that he's been outed, he's either trolling
for fun or hoping the blasé approach is the one that quiets all the fuss the
fastest.

~~~
nonbel
I wouldn't be certain. A lot of researchers really are clueless about anything
to do with statistics.

It takes much more work/time to do real research, so the funding system has
actually been selecting for people who can remain ignorant (so it is not
fraud) and just produce p-hacked "results". In many areas, this has been going
on for multiple generations now and you are trained to do it as a grad
student.
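
A quick toy simulation (my own sketch, not anyone's actual study) shows how
this works: measure enough outcomes with no real effect and some will come out
"significant" by chance, and reporting only those is the whole trick.

    # Toy p-hacking demo: two identical groups, many measured outcomes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_outcomes = 30, 40

    # No true difference between groups on any outcome.
    group_a = rng.normal(size=(n_subjects, n_outcomes))
    group_b = rng.normal(size=(n_subjects, n_outcomes))

    p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
                for i in range(n_outcomes)]
    hits = sum(p < 0.05 for p in p_values)
    print(f"{hits} of {n_outcomes} outcomes 'significant' by chance alone")
    # Report only those outcomes and you have a publishable "result".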

~~~
froindt
> I wouldn't be certain. A lot of researchers really are clueless about
> anything to do with statistics.

This is definitely true. I once saw a book with a title similar to "Statistics
for Dummies" in a professor's office. He had plenty of access to statisticians
at the university too. Unfortunately, if a given field is full of people who
are ignorant of statistics, these problems may not be called out during the
peer review process.

~~~
pbhjpbhj
>I once saw a book with a title similar to "Statistics for Dummies" in a
professor's office. //

Could easily have been a handy reference text for lending or showing to erring
students/faculty.

~~~
froindt
I really hope so. It was a statistically strong department (but not a
statistics department).

~~~
closed
I have all kinds of books like that, for teaching / perusing, but now I'm
going to sandwich them between hardcore math books out of paranoia.

------
MR4D
It's weird. When conservatives don't believe scientists, they get labelled
"morons". Could it be that conservatives just think there's a lot more of this
crappy "science" out there, and that a situation like this is the canary in
the coal mine?

When presented with examples like this, it's actually _irrational_ to think
that there aren't other egregious papers out there. Possibly just a few, but
possibly a large number of them.

Thoughts?

~~~
throwaway729
My thought: ridiculous false equivalency.

Conservatives don't distrust science carte blanche. Many of them will chuck
money at asinine bullshit as long as it's billed as a way to kill the other
team. Check out some of the more ridiculous stuff to come out of the DoD.

They distrust climate science. And they don't distrust the actual science,
just the conclusions. And increasingly not even the most basic conclusion,
just the most obvious causal theory behind that conclusion.

Let's not throw the baby out with the bath water.

~~~
theWatcher37
Meanwhile, anti-vax is _huge_ in liberal soccer-mom only-shops-at-whole-foods
cities.

The left has its fair share of science-denying morons as well.

~~~
viggity
The difference between vaccines and climate change is that we have hard
evidence on vaccines. You can create a hypothesis and do a randomized double
blind experiment. The results are conclusive.

Climate science is a bunch of computer models that are not testable. When
they're run and compared to what actually happened, they're always off. E.g.,
name a single climate model that predicted "the pause" starting in '98.

Climate Scientists don't release their model's code. They don't release the
exact data that went into their model. Shit, isn't that one of the complaints
about Wansink? That he isn't sharing his data?

I'm not saying climate change isn't real, I'm saying that the level of
certainty that climate scientists proclaim is problematic. I'm saying that
labeling dissenters as deniers (reminiscent of Holocaust deniers) is beyond
the pale. I'm saying that fossil fuels have done more to advance the human
condition than anything else in the history of mankind.

~~~
nonbel
>"The difference between vaccines and climate change is that we have hard
evidence on vaccines. You can create a hypothesis and do a randomized double
blind experiment. The results are conclusive."

I tried to find this a few years ago in the case of measles, and it turned
out that no such studies seemed to exist. Have you seen one?

~~~
viggity
It took a little searching, but I found one, for an alternative manufacturing
process of the measles vaccine.

[https://clinicaltrials.gov/ct2/show/NCT01536405?term=AMP&ran...](https://clinicaltrials.gov/ct2/show/NCT01536405?term=AMP&rank=12)

~~~
nonbel
Here is another one: apparently case counts can drop by up to 99.5% just by
switching from clinical diagnosis to lab tests: "Indeed, an average of only
100 cases of measles are confirmed annually [32], despite the fact that
>20,000 tests are conducted [28], directly suggesting the low predictive value
of clinical suspicion alone."
[http://jid.oxfordjournals.org/content/189/Supplement_1/S185....](http://jid.oxfordjournals.org/content/189/Supplement_1/S185.full)
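
To put a number on "low predictive value" (my own back-of-envelope, under the
assumption that every test corresponds to a clinically suspected case):

    # Rough positive predictive value of clinical suspicion, using the
    # figures quoted above (assumed: one test per suspected case).
    confirmed = 100      # lab-confirmed measles cases per year (average)
    tested = 20_000      # tests conducted per year (lower bound)

    ppv = confirmed / tested
    print(f"PPV of clinical suspicion alone: {ppv:.1%}")  # about 0.5%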

I just don't think these alternative explanations have been investigated very
well, and the blinded RCT (vs. placebo) is missing. Another thing: the various
proxies for "vaccine success" they use do not correlate with each other...

"Our data demonstrate that regression analysis shows only limited correlation
between NT results and the ELISA values. This is in agreement with other
reports [4]. Similar limitations in the correlation were also reported for
other viruses like Cytomegalovirus (CMV) [10]. In case of the gamma globulin
samples, the low correlation might reflect the wider spectrum and
heterogeneity of the involved or measured measles antibodies."
[http://www.ncbi.nlm.nih.gov/pubmed/17308917](http://www.ncbi.nlm.nih.gov/pubmed/17308917)

~~~
mattkrause
That's not a sensible interpretation of that measles data.

Since measles is very contagious, there's a lot of effort put into
identifying and isolating cases. This necessarily involves screening many
cases where the disease is only weakly suspected. On top of that, the current
protocols cast a really wide dragnet. Australia, for example, tests every
susceptible patient who shared a waiting room with a measles patient, as well
as those who entered less than two hours after the index case left. In a busy
hospital or practice, this could be a _lot_ of children.

Thus, the high ratio of tested to confirmed cases doesn't mean doctors are
_bad_ at diagnosing measles; they may just be cautious.

~~~
nonbel
So how large do you think such a diagnostic effect might be? The other paper
reported that only 7% of clinical diagnoses were lab-positive for measles in
Singapore.

And yet another effect to deal with is that people used to _spread measles on
purpose_:

“Before the introduction of measles vaccines, measles virus infected 95%–98%
of children by age 18 years [1–4], and measles was considered an inevitable
rite of passage. Exposure was often actively sought for children in early
school years because of the greater severity of measles in adults.”
[http://jid.oxfordjournals.org/content/189/Supplement_1/S4.fu...](http://jid.oxfordjournals.org/content/189/Supplement_1/S4.full)

I think for these reasons it is clear that the effectiveness of measles
vaccines has been overstated; it is only a matter of how much.

------
Andys
I still don't understand: Is there any good reason that raw data should not be
published along with each paper?

~~~
wfunction
> I still don't understand: Is there any good reason that raw data should not
> be published along with each paper?

Confidentiality? What if correlations give away subjects' identities?

~~~
nonbel
Does a substantial portion of the subjects actually care about this?

~~~
wfunction
> Does a substantial portion of the subjects actually care about this?

I don't think that's how laws or ethics rules work, for one. And I wasn't
talking about a particular experiment, for two.

~~~
nonbel
I've found I'm exceptional in being at all concerned about privacy. I think
that if you asked, most people would only want minimal effort put towards
confidentiality, especially since the trade-off is that other researchers
won't be able to double check the analysis.

~~~
danso
I've worked as a journalist, where my (and many other data journalists')
modus operandi was to publish the data, often because the data was public
record anyway. I now work in academia and the mentality is significantly
different. Some of it is logistics -- I would say most traditional news
organizations do not have the internal incentive or habit to figure out a way
to publish data. Whereas with newer organizations, such as 538 [0] and
Buzzfeed News [1], the data teams have editors for whom open-source and
digital publishing is more the natural way of things.

But in academia, there are also set rules and precautions governing every
study. I haven't proposed any research yet, but my understanding is that if
your study requires collecting data from participants, the Institutional
Review Board requires you to be very clear with participants about privacy
and confidentiality, and to follow the guidelines to the letter.

Additionally, there are datasets only available to academics that aren't
available to non-academics (i.e. journalists), which speaks to the expectation
that academics be very mindful about confidentiality promises.

[0]
[https://github.com/fivethirtyeight/data](https://github.com/fivethirtyeight/data)

[1]
[https://github.com/BuzzFeedNews/everything](https://github.com/BuzzFeedNews/everything)

~~~
froindt
To add a bit more for people who haven't been involved in an IRB:

The strictness varies by university. Ultimately, the IRB is there to ensure
the safety of the participants and minimize the chance of negative
consequences. A couple of examples I've heard from around my university
include...

A study was taking place outside, and there was a chance of a participant
being stung by a bee. The IRB required that the researcher have an EpiPen
ready just in case, along with any required training.

A paper used in a study had the wrong stamp on it (not the most recent IRB
stamp). The document had not changed since the last version, but the rule was
simple: every document presented to the participant had to have the most
recent stamp.

For a study in which half of the participants were expected to feel
nauseated, the researchers had to provide a place to sit, water, and small
snacks.

And perhaps most importantly, you can quit any study you're participating in
at any time, for any reason. Compensation is figured out ahead of time, and
you're going to get lots of questions from the IRB if you want to do anything
to reduce the compensation for leaving partway through the experiment.

------
franciscop
See this Explain XKCD entry; it is really informative:
[https://explainxkcd.com/882/](https://explainxkcd.com/882/)

------
cscurmudgeon
A sidenote:

Using a snappy XKCD is not a valid logical argument even though the poster
might think it makes one look hip and cool. I wish researchers were not this
lazy.

There are at least a handful of valid arguments that one could use rather than
an XKCD cartoon.

~~~
darkkindness
The argument is that Wansink is responding to criticism (like the XKCD) with
an air of indifference. Wansink's response to the XKCD is simply one of the
examples; the XKCD itself is not being used to support the author's argument
that the act of publishing non-rigorous work is too widespread in quantitative
science today.

~~~
cscurmudgeon
My point was not about Wansink or the original article. My point is that the
XKCD cartoon doesn't convey anything other than snark.

------
yeukhon
Are we sure this wasn't some early April Fools' work?

------
it
Unintentional irony: "The only new thing about it is the 150 errors—when does
that every happen?"

------
valuearb
"except that this sort of ridiculous hype is standard operating procedure
among celebrated and leading researchers in psychology and economics."

Thank god it's not SOP among actual sciences.

~~~
KKKKkkkk1
I hope you're being sarcastic. I'm a bit hesitant to provide examples, but a
lot of top results in computer science are sold as being much more
far-reaching than they really are.

~~~
kem
Speaking from my own areas of experience, I've been struck by how much some
celebrated methods and results in machine learning, broadly speaking,
basically amount to tinkering without any hard, generalizable proofs. In at
least one case I'm aware of, a later statistical proof made it clear that the
original researchers were close, but that with a different dataset they would
have come up with a different result. In fact, as I write this, I think I
recall that different researchers did come up with different results, largely
because they were tinkering on different datasets.

So yes, even in computer science you run into similar things. Maybe not the
same, but similar processes leading to similar problems.

~~~
throwaway729
The problem is that universities have a strong incentive to optimize for juicy
press releases. A theorem -- unless it's 100 years old or otherwise somehow
easy to relate to the general public -- does not have this effect. Winning a
competition or "outperforming humans" does.

Aside from bumping up the quality of K-12 science education by at least an
order of magnitude or two, I'm not sure there's a good solution to this
problem. It's the human condition to be impressed by some things and not
others, regardless of their relative actual importance.

