Registered Replication Report – Facial Feedback Hypothesis [pdf] (psychologicalscience.org)
124 points by Jerry2 on Aug 21, 2016 | 43 comments



If you don't have time to read the whole paper, jump to page 11 and check out the graph at the bottom of the page. The top confidence interval is from the original study; the rest are confidence intervals from the studies that tried to replicate it.


The 95% CI for the original study contains zero? How does a study with a non-significant result which can't be replicated become "seminal"?
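
For anyone rusty on why "contains zero" implies non-significant: a two-sided 95% CI that straddles zero is equivalent to p > .05. A minimal sketch with made-up numbers (not the paper's data):

    # Hypothetical numbers, purely to illustrate the CI/p-value duality.
    from scipy import stats

    mean_diff = 0.8   # assumed effect estimate (e.g., a rating difference)
    se = 0.5          # assumed standard error

    ci = (mean_diff - 1.96 * se, mean_diff + 1.96 * se)   # (-0.18, 1.78)
    p = 2 * (1 - stats.norm.cdf(abs(mean_diff / se)))     # ~0.11

    print(ci, p)  # CI straddles zero <=> p > .05 (two-sided)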


> How does a study with a non-significant result which can't be replicated become "seminal"?

How does anything become "seminal"? People we trust told us so. Whenever something like this happens, you should be asking yourself who you're trusting that you probably shouldn't be.


The same way people write "citation needed" and then put their brains to sleep.

It's not as if they will then go replicate and evaluate those citations (if they did, they could just as well evaluate what the other person told them directly, or at least try to find citations for or against it themselves).

They just need some comforting assurances -- which they will then repeat freely, with few (or none) bothering to examine those "citations" at any step of the chain.

For me, unless the science is old enough to be included in University 101 courses, I don't much care about it. Unless you work in the field (and will actually use and evaluate the results mentioned in it), a peer-reviewed paper is worth almost nothing to the casual HN reader.


By the way, "holding a pen in your mouth makes you feel smiley" is definitely in psych 101 courses, and social 201, and also showed up in cognition 301 if memory serves.


Also see previous discussion at https://news.ycombinator.com/item?id=12030791


I'm pretty sure the first picture on page 4 shows a misunderstanding of how the pen is supposed to be used. You're supposed to put it in your teeth cross-wise, so your mouth is forced into a kind of smiling position.


Actually, the replication seems to be correct. See Figure 1 in the original here: http://datacolada.org/wp-content/uploads/2014/03/Strack-et-a...

I applaud both the original and replication authors for including clear illustrations. And at a glance, Strack et al seems to be a well written paper.


LOL anecdotally putting it crosswise was far more amusing just now. Although, that graph on page 11 is pretty damning.


I thought I knew the basic premise of "Thinking Fast, Thinking Slow" even though I haven't read it... but I don't understand why this would be a key study for the book. Can anyone provide a quick explanation?


Biting a pen with your teeth or your lips should induce one of two different "Type I Thinking" facial expressions, associated with Kahneman's "fast thinking" as I understand it.

This study appears to examine what happens when you force a Type I physical response while attempting to prompt a Type II response (laughter/humor). As I read it, the original study claimed that fuzzing the "fast thinking" system affected the "slow thinking" system, and this new paper calls those results into question.


I agree - this study doesn't seem key to the premise of the book; the title seems misleading.


Here's a link to the 1988 paper they're disputing http://datacolada.org/wp-content/uploads/2014/03/Strack-et-a...

I wonder if it'd make a difference if the pen was horizontal in the participant's mouth, touching both corners of the lips. That's the mental image I walked away with when I read "Thinking, Fast and Slow".


There's a fun p-hacking game embedded in this article from FiveThirtyEight:

http://fivethirtyeight.com/features/science-isnt-broken/

It really drives home how tough it is to thread the needle and get results that are both interesting and useful.
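
The mechanism the game demonstrates fits in a few lines. Here's a crude sketch (my own, not the article's code) where each simulated researcher tries ten "analysis choices" on pure-noise data, modeled here as ten independent tests, and keeps the best p-value:

    # Crude p-hacking model: pure noise, ten analysis choices per study,
    # report the smallest p-value. All parameters are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_choices, n = 10_000, 10, 30

    hits = 0
    for _ in range(n_sims):
        best_p = min(
            stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
            for _ in range(n_choices)
        )
        hits += best_p < 0.05

    print(hits / n_sims)  # ~0.40 "significant" studies, despite zero real effect

(Real researcher degrees of freedom are correlated rather than independent, so the inflation is usually smaller in practice, but the direction is the same.)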


The more I see these, the more I think these psychological conclusions are just placebos. You pick the one that appeals to you most and ignore the rest. They work only because you want them to be true.


It's funny, because your statement is a topic in the book.


From the PDF: the data and registered protocols are on the Open Science Framework site: https://osf.io/pkd65/


Psychology (and social science in general) has a serious problem.

Standards for publication are far too low, incentives for replication are almost nonexistent, and negative results are rarely reported.

How many of the published results in social science are trustworthy?

https://www.xkcd.com/882/


>Psychology (and social science in general) has a serious problem.

Weren't the same (and worse) problems found in hard sciences?

For biology, e.g.: http://journals.plos.org/plosmedicine/article?id=10.1371/jou...
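
If that link is the well-known "Why Most Published Research Findings Are False" paper (an assumption on my part; the URL is truncated), its core argument boils down to arithmetic about positive predictive value:

    # Back-of-envelope PPV, in the spirit of that paper. All numbers assumed:
    alpha, power = 0.05, 0.5   # conventional alpha; optimistic power
    prior = 0.1                # assume 1 in 10 tested hypotheses is true

    true_pos = prior * power          # 0.050
    false_pos = (1 - prior) * alpha   # 0.045
    ppv = true_pos / (true_pos + false_pos)

    print(ppv)  # ~0.53: nearly half of "significant" findings would be false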


Check out http://retractionwatch.com for a convenient compendium of scandals with published papers.

I think that in the technical community we have a habit of treating scientific publications as absolute truth, ignoring the many susceptibilities in the publication process. I guess people do that because it's the most reliable way they know to converge on the truth, which is fine as far as it goes.

I've found that these same people are quick to nitpick studies that don't reach conclusions they like and quick to weaponize studies that do, beating people over the head as "science deniers" for holding a non-compliant opinion (even if it's not necessarily a minority opinion).

Political, personal, and commercial agendas all seriously influence our output, even the output that gets published in peer-reviewed journals. Let's all agree that as humans, all of our work, developments, and opinions are subject to bias and error. Considering this, we shouldn't be too hostile to anyone who may have a different perspective.


> Considering this, we shouldn't be too hostile to anyone who may have a different perspective.

I can conclude some things with greater confidence than others. When an expected benefit can be established with greater confidence, and with sufficient magnitude relative to the expected costs, a decision can - and should - be made.


Sure, I agree that decisions should be made and that the decision process can rightfully incorporate scientific data and consensus. What I'm saying is that if someone refuses to accept our position despite what we consider an abundance of authoritative data, we should sympathize and be kind despite our disagreement, instead of labeling them as ignorant science-deniers. Assuming the person holding the opposing position is well-informed, we should accept that they simply don't recognize the same publications as authoritative, and that there is legitimate room for doubt not only of specific papers but of "scientific consensus" as a whole, especially when that "consensus" is weaponized for political use.

Academia draws people with specific backgrounds and biases. Groupthink is a real and substantial risk, not only because people quite frequently simply copy each others' output, but also because large-scale ostracization is a real risk if one publishes something that goes against the grain. Organizations and institutions pull funding if a finding is too controversial. Studies are often backed by large donors who, whether the pressure is obvious or not, are trying to get a specific result. Graduate students are under a great deal of personal pressure to perform and justify their loans. There are many non-scientific social factors that affect scientific rigor, even in peer-reviewed journals.

Like I said, the convention is that studies that support the speaker's preferred social or political views are usually considered credible, whereas studies that don't are nitpicked and labeled "questionable", for any of a myriad of reasons: the author(s) come from an institution the speaker dislikes, the sample wasn't representative, and so on.

The only common thread is that people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone. So I'm suggesting that instead of mistreating someone because we think the "science" bears out our point of view, we recognize that "science" itself is a fallible process susceptible to all kinds of externalities, and that it's often reasonable to mistrust a purported "consensus". Therefore, we should politely accept the difference in opinion instead of getting into a zero-sum rhetorical exercise of "Study X proves my POV" / "Study X was done by clowns! Look at Study Y, which proves my POV", one that ends with both sides detesting each other all the more.


> people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone.

Speak for yourself. There are plenty of us who are willing to consider our positions falsifiable and actively seek objective answers in good faith.

Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.


The investment required to truly consider the evidence is so high that none of us can manage that investment for more than a few questions.

We all have to rely on expert opinion in most cases. Even leading experts on one question have to rely on other experts for other questions.


>Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.

And yet, if history has taught us anything, it's exactly that.


No field is immune from this, but social sciences are just much harder to do experiments in.

Primarily because, as opposed to atoms and antibodies, your test subjects have intelligent minds of their own.


Do we have to rehash this conversation every time a psychology article is posted?


Another article about a study failing replication?

I too wish we didn't have to have this conversation so often, but I think the problem is that poor studies are published, not that we "rehash this conversation" in response.


What do you think can be done here? These people obviously know that their studies can't be replicated; they are not ignorant, so they are obviously malicious.


> These people obviously know that their studies can't be replicated

I don't think that's fair to say at all. I can come up with several ways a study can non-maliciously arrive at a false conclusion:

1. The researchers may accidentally leak information to the participants regarding the hypothesis being studied. When using human subjects, it's very difficult to avoid biasing them toward results they may think you're looking for.

2. Researchers may not sufficiently blind themselves during the experiment, causing them to have undue influence on the outcome, even if they're not consciously trying to exert it.

3. Some unknown confounding effect may be at play during the experiment that wasn't properly accounted for and controlled.

Those are just three I could think of. When dealing with behavioral sciences, I can't imagine how difficult it must be to design a test protocol that eliminates all the messiness of the meatbags being studied.


I helped run a clinical trial for an antidepressant one time. It was a double-blind randomised within-subjects-replicated crossover design. All that control was complicated, time consuming and very expensive. Only the hospital pharmacy knew whether participants were in treatment or placebo for a given session, and we never met the pharmacist, just received an orange vial. But we're pretty sure information still leaked, because drugs have side-effects. There's nothing to be done about this. Use an active placebo, you might say, but now you've added "get ethics approval to administer a nauseating placebo" to your 9-month-long ethics todo list. And then your placebo is worse than your active and the whole study is screwed because everyone spewed up in the MRI. Meatbags are tricky indeed.


4. Bad luck

A proportion of studies will produce false positives just by chance even if they're conducted perfectly.
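
To put a number on "a proportion": that's what the significance threshold means. With the conventional alpha = .05, and publication favoring positive results (the point upthread about unreported negatives), the luck-driven hits are the ones you actually see:

    # By construction, a fraction alpha of perfectly-run null studies
    # reach significance. Hypothetical literature size:
    alpha = 0.05
    n_null_studies = 1000

    print(alpha * n_null_studies)  # 50 spuriously "significant" papers --
                                   # and those are the ones that get published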


How exactly do you jump to the first conclusion:

"These people obviously know that their studies can't be replicated"

and what's so "obvious" about it?

Or, for that matter, to the second one: "they are not ignorant, so they are obviously malicious"?

Perhaps they instead fell prey to the same kind of thinking, treating everything as "obvious" and taking for granted a lot of things they should have checked?


I don't mean to sound rash, but have you ever been trained in an experimental science?

If I had a dime for every time a physics undergraduate made a subtle, unintentional error in an experiment that, were it not a mistake, would overturn a century of science that we know Just Works...


Just treat everything you read in social science with a grain of salt. The studies will never be able to alleviate problems of reproducibility with the sample sizes we have.
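
The sample-size point is easy to make concrete. A rough power calculation (normal approximation, made-up but typical numbers) for a two-sample comparison:

    # Approximate power of a two-sided, two-sample t-test via the normal
    # approximation (ignores the negligible lower-tail rejection region).
    from scipy.stats import norm

    d, n, alpha = 0.3, 30, 0.05   # assumed small effect, 30 per group
    z_crit = norm.ppf(1 - alpha / 2)                 # 1.96
    power = 1 - norm.cdf(z_crit - d * (n / 2) ** 0.5)

    print(round(power, 2))  # ~0.21: ~4 in 5 such studies miss a real effect

A proper power library will give exact numbers, but the approximation is close enough to make the point: at these sample sizes, most real effects are missed, and much of what does reach significance is noise.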


It's "Thinking, Fast and Slow", and the study in question is hardly "key" to the book, it gets all of one paragraph in chapter 4.


Both valid points, but this failure-to-replicate does erode some confidence in the general notion of embodied cognition (and I believe deservedly so because these phenomena have always seemed rather incredible). Much of Kahneman's other work has a more solid empirical foundation.


Is the post title referring to "Thinking, Fast and Slow" by Daniel Kahneman, or something else?


That study wasn't really key to the book.


It's not a key study, as the title suggests.


Also, smelling farts prevents cancer. Look it up.


Sigh. I wish I could explain this better to my business managers, who read pop science like this and try to remold the business to follow it.


The rise of social "science" has gone hand in hand with the demise of intellectual rigour and common sense. The sad part is that social science could be done right, but few do it right, because most people who don't suck at statistics go into fields that are more worthwhile.



