Great to see this continuing to make headlines. Given that food science seems like a relatively small field, and Dr. Wansink one of its few giants, it seemed like this was going to be a niche scandal that would die down over time out of apathy, despite the massive pattern of glaring errors, which are apparent even without access to the original data. I wonder how noticeable this scandal would still be had it not involved government funding.
One of the more amazing things about it was how it started with Dr. Wansink writing a personal blog post [1] that was meant to advise and inspire struggling students on how to get their PhDs. His message was basically "never say no" to doing extra work, and he described a student who worked hard to produce 5 peer-reviewed papers using the data from a previous "self-funded, failed" study. Wansink would still be venerated today if he hadn't decided to blog that day.
>His message was basically "never say no" to doing extra work, and he described a student who worked hard to produce 5 peer-reviewed papers using the data from a previous "self-funded, failed" study. Wansink would still be venerated today if he hadn't decided to blog that day
What do you mean? The article reports him doing a lot of unethical stuff after that, I don't see why "not making that blog post" would have saved him from derision.
The blog post basically encouraged students to engage in p-hacking (take data from failed studies and find something interesting in it!), which prompted other researchers to critically review the methodology and statistics in those papers, and then in all of his other papers.
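To make the problem concrete, here's a minimal sketch of why that advice is dangerous. Everything in it is made up (the sample size, the number of outcomes, the outcomes themselves); the point is only that if an intervention truly does nothing and you test enough outcomes, some will come out "significant" anyway.

```python
# Hypothetical illustration of p-hacking, not anything from the actual studies.
# A "treatment" with zero real effect, measured on many outcomes: some tests
# will cross p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_people = 200
n_outcomes = 20  # invented measures (calories taken, items chosen, ratings, ...)
group = rng.integers(0, 2, n_people)                # random "treatment"/"control" labels
outcomes = rng.normal(size=(n_people, n_outcomes))  # pure noise: the true effect is zero

false_positives = 0
for i in range(n_outcomes):
    res = stats.ttest_ind(outcomes[group == 0, i], outcomes[group == 1, i])
    if res.pvalue < 0.05:
        false_positives += 1

print(f"'Significant' findings out of {n_outcomes} tests on pure noise: {false_positives}")
```

Roughly one test in twenty crosses p < 0.05 on noise, and slicing the data into subgroups multiplies the number of tests, which is why exploratory findings from a "failed" study need pre-registration or replication before anyone treats them as results.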
I remember hearing about this guy when he got into his initial Twitter war after other researchers called out his shoddy methods. Who knew there was so much drama around food science! I guess when it's such a niche field there isn't much scrutiny of the subject-matter experts.
Sadly indications are strong that this is not just a problem in a niche field. As I understand it, the problem is not even just limited to the social sciences (although it might be a bigger problem there).
The studies that are currently under scrutiny were conducted well before he wrote the blog post. All of the errors and methodological problems that have come to light had always been visible.
Wow. This is everything that's wrong with science and the science-policy interface today.
>Brian Wansink of Cornell University publishes headline-friendly studies about food psychology and oversees a $22 million federally funded program that uses his research to promote “smarter lunchrooms” in nearly 30,000 schools.
Yep, the same problem as with the food pyramid: using a small amount of evidence to justify a massive intervention into a complex system. The way it should work is, the bigger the change you want to make to a system, the better your model needs to be.
Deploying it to 30,000 schools? The results need to be solid enough to be in textbooks, not something the researcher is still actively working on.
>His experiments have found, for example, that women who put cereal on their kitchen counters weigh more than those who don’t, and that people will pour more wine if they’re holding the glass than if it's sitting on a table. Over the past two decades he’s written two popular books and more than 100 research papers, and enjoyed widespread media coverage (including on BuzzFeed[1]).
The overhyped, small-effect-size, never-replicated studies about subtle environmental influences on behavior, the kind that even Kahneman[2] has admitted to placing too much faith in.
>Yet over the past year, Wansink and his “Food and Brand Lab” have come under fire from scientists and statisticians who’ve spotted all sorts of red flags — including data inconsistencies, mathematical impossibilities, errors, duplications, exaggerations, eyebrow-raising interpretations, and instances of self-plagiarism — in 50 of his studies.
Another thing you should check for before rolling out to 30k schools.
>Both studies claimed that children are more likely to choose fruits and vegetables when they’re jazzed up, such as when carrots are called “X-Ray Vision Carrots” and when apples have Sesame Street stickers.
Cool, so more misconceptions we have to correct when kids grow up. ("No, carrots won't give you x-ray vision.")
>Almost 30,000 schools have adopted those techniques, and the government pays each one up to $2,000 for doing so.
Another problem: public schools being such cheap dates.
[1] Props to clickbait-hungry BuzzFeed for acknowledging that they themselves got caught up in the frenzy.
The belief that eating carrots improves night vision is a myth stemming from Royal Air Force propaganda during the Second World War: the story explained why their pilots had improved success in night air battles, while actually disguising advances in radar technology and the use of red lights on instrument panels.
I think it only points out a problem with the science-policy interface.
Science has correctly identified that dodgy results were being published, and they're being retracted. Ideally peer review would have picked up those problems before publication, but this is the next best thing.
My guess is that, rather than being malicious, he simply does not have a strong grounding in the scientific process despite years of professional experience. I hope he is learning his lesson now. His degrees in business administration, journalism, and marketing likely did not involve the statistical and scientific rigor we would expect of someone conducting many high-level statistical studies. Of course, he could have learned those skills on his own; I just point this out as an interesting observation.
I also don't think that only people with science degrees can conduct science. However, I think there might be a correlation here: it sounds like Dr. Wansink was better at marketing his studies than at conducting them with statistical rigor.
I studied psychology, and I found it shocking how terrible everyone was at statistics and mathematics. I've worked with PhDs (in both psychology and political science) whose work relied on statistical analysis of a dataset, and I can say with absolute certainty that any 'true' results were purely accidental, given their atrocious datasets and non-existent understanding of statistics. It was one of the main reasons why, sadly, I decided not to pursue academia any further.
It might be useful to note who really benefits from this kerfuffle: purveyors of unhealthy foods. So while I see the scientific situation, I also see the political situation.
You believe the researchers who have found errors in Dr. Wansink's work are in league with the food industry? You don't think the mathematical errors stand as errors in their own right?
I am one of these researchers (see https://twitter.com/sTeamTraen/status/913546338842939393 for a crude attempt to establish my authenticity, which doubtless would have zero validity for actual spies). Let me assure you that none of us is in league with the food industry. In fact since Dr. Wansink's work seems to go down pretty well with the food industry (cf. his various consulting gigs), I'm not sure what they would have to gain from us.
This really just reads like a smear campaign. I'm not claiming that nothing is wrong, just that this article does an incredibly poor job of showing evidence that there's a real problem, or even what that problem is.
They claim that he's "come under fire from scientists and statisticians who’ve spotted all sorts of red flags ... in 50 of his studies", and provide three links to back it up. Two of those three links focus on the same 4 articles, and the third claims 45 papers contain "minor to very serious issues"[1]. The majority of those issues are minor, and the vast majority of the minor ones are "data duplication" and "self-plagiarism". I can see that being an issue as far as professional conduct goes, but it wouldn't reverse or invalidate the results of the study.
That leaves 15 papers with "critical data" issues, three of which have had corrections issued. And from their titles, these papers don't sound like groundbreaking science that's likely to lead to policy changes anywhere. My guess would be that he knew they were useless and didn't put much time into them.
So, maybe the guy really is a piece of #(&, but this article did a pretty poor job of making a convincing argument to show it. The focus on minor issues, on issues that have no bearing on the outcome of the paper, and on papers with pretty useless outcomes makes this smell more like a smear campaign than an attempt to protect the public from evil scientists.
If this is what it looks like, that ought to be a career-ending problem. It is very hard to imagine an accidental explanation, particularly since the lead author stated, in a post that he later deleted but which is archived here http://web.archive.org/web/20170316133823/http://foodpsychol..., that "a master’s thesis was intentionally expanded upon through a second study which offered more data that affirmed its findings with the same language, more participants and the same results". The same results. To two significant figures. In 17 out of 18 cases. Sure.
And if you still want to separate out data duplication as being in some way a less serious problem: as a minimum, it means one of those two studies is very likely completely wrong.
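For a sense of how implausible that coincidence is, here's a rough back-of-the-envelope simulation. The sample sizes, means, and standard deviations below are invented (they are not the actual study parameters); the point is only the order of magnitude: two genuinely independent samples essentially never report the same statistic to two significant figures 17 times out of 18.

```python
# Hypothetical illustration with invented parameters, not the real study data.
# How often do two independent samples from the same population agree, to two
# significant figures, on at least 17 of 18 reported statistics?
import numpy as np
from math import floor, log10

rng = np.random.default_rng(0)

def round_sig(x, sig=2):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

n_per_study = 100   # assumed sample size per study
n_stats = 18        # number of reported statistics, as in the comment above
n_sims = 10_000

hits = 0
for _ in range(n_sims):
    matches = sum(
        round_sig(rng.normal(3.5, 1.2, n_per_study).mean())
        == round_sig(rng.normal(3.5, 1.2, n_per_study).mean())
        for _ in range(n_stats)
    )
    if matches >= 17:
        hits += 1

print(f"Simulations with >=17/18 matching statistics: {hits} of {n_sims}")
```

Under these assumptions any individual statistic matches by chance maybe a quarter of the time, but 17 or more matches out of 18 essentially never happens; copied numbers are the far more plausible explanation.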
Concerning "they don't sound like groundbreaking science that's likely to lead to policy changes anywhere": I agree that this work mostly sounds like fluffy BS that you might expect to see in a science fair project. But on the back of his reputation acquired precisely through these studies, the principal author became the leading authority on food policy (especially for school-age children) in the United States.
I'm not claiming the duplication issues aren't issues with his professional conduct. But from a purely scientific point of view, they do not change the results. This article is trying to make the claim that 1) this guy has issues in some of his publications, 2) some of this guy's publications were used for the Smarter Lunchrooms program, so 3) the Smarter Lunchrooms program is flawed. The only attempt they make at connecting these three points is that the same guy is involved. They made no attempt to show that the flawed publications were used to design the lunch program, and no attempt to show that the lunch program is flawed as a result.
At this point, we should stop considering psychology science. Instead of a real search for how the universe really works, it is about getting a pet hypothesis, acquiring some data, and then torturing the data with various statistical instruments until it confesses what you wanted it to.
If science is done right, it should not matter what the underlying biases and beliefs of the investigator are. However, especially in psychology and the social sciences, you can predict the conclusions of an article just by knowing who the authors are and what their pet theory is.
> At this point, we should stop considering psychology science. Instead of a real search for how the universe really works, it is about getting a pet hypothesis, acquiring some data, and then torturing the data with various statistical instruments until it confesses what you wanted it to.
That's not how it works in practice. I would encourage you to read up on psych methods and engage with the field.
'Psychology' is not one thing. For example, incredible strides have been made in curing phobias, and the entire field of marketing depends on many insights that came from the field of psychology. CBT is a measurably effective tool to deal with, among other things, anxiety.
I do agree that as a whole we are realizing that psychology is nowhere near as 'scientific' as it was thought to be, and that's a good thing. But that's a process that has been going on for a while, and many things have improved.
I find it interesting that his results inspired changes in 30,000 schools but the article mentions no follow-up studies.
Did nobody recognize the research opportunity inherent in launching these programs across so many schools? I assume the original studies used small samples. I would have been trying to collect new data from this large sample to re-test the original hypotheses.
Based on my limited experience in the field, you would have been pressured not to, because re-testing things isn't sexy.
In fact, you'd probably be too busy writing yet another chapter of yet another low-quality book that your PhD advisor (who somehow always happens to be good at self-marketing) or department head or whoever has to publish yet again to stay relevant, or busy finding funding for some 'sexy' thing to study, which is probably built on one of these shoddy studies. If you're good at self-marketing and/or going to as many conferences as possible, you might get lucky and land funding for a 'sexy' subject you actually care about, but quite possibly you won't, and really you're just wondering whether your particular skill set could be applied outside of academia. If not, well, get writing on that chapter of the book your 'boss' wants you to write.
In my opinion, even though I realize I'm missing a lot of nuance here, academia is often worse than the business world, because at least in the business world there are some concrete measures of success (selling widget x or service y). That maybe applies less to BigCorps, but still.
Sorry for being ranty. I know it's not like that in every case, but it's the sordid story I've experienced myself (at a top research group, no less!) and have been told by many PhDs in the humanities.
EDIT: Let me add that I truly, greatly admire those who choose to stay in academia, diligently working at that one thing they care about. I bailed out and became a web developer, and I'm not necessarily proud of that.
Thanks to the original commenters on the blog for setting this ball rolling. They did not hesitate to call the prof out. I love this kind of freedom and knowledge. Big salute. Also, good investigative journalism.
[1] http://archive.is/cPxmm