How does anything become "seminal"? People we trust told us so. Whenever something like this happens, you should be asking yourself who you're trusting that you probably shouldn't be.
It's not as if they will then go on to replicate and evaluate those citations (if they would, they could just as well evaluate what the other person told them directly, or at least try to find citations for or against it themselves).
They just need some comforting assurances -- which they will then repeat freely, with few (or none) bothering to examine those "citations" at any step of the chain.
For me, unless the science is old enough to be included in University 101 courses, I don't much care about it. Unless you work in the field (and will actually use and evaluate the results mentioned in it), a peer-reviewed paper is worth almost nothing to the casual HN reader.
I applaud both the original and replication authors for including clear illustrations. And at a glance, Strack et al seems to be a well written paper.
This study appears to examine the link between forcing a System 1 physical response and the System 2 response it's meant to prompt (laughter/humor). As I read it, the original study claimed that fuzzing the "fast thinking" system affected the "slow thinking" system, and this new paper calls those results into question.
I wonder if it'd make a difference if the pen was horizontal in the participant's mouth, touching both corners of the lips. That's the mental image I walked away with when I read "Thinking, Fast and Slow".
Really drives home how tough it is to thread the needle of both interesting and useful results.
Standards for publication are far too low, incentives for replication almost nonexistent, negative results rarely reported.
How many of the published results in social science are trustworthy?
Weren't the same (and worse) problems found in hard sciences?
In biology, for example:
I think that in the technical community we have a habit of treating scientific publications as absolute truth, ignoring the many vulnerabilities in the publication process. I guess people do that because it's the firmest way they know of to converge on the truth, which is fine as far as it goes.
I've found that these same people are quick to nitpick studies that don't reach conclusions they like and quick to weaponize studies that do, berating people as "science deniers" for holding a non-compliant opinion (even if it's not necessarily a minority opinion).
Political, personal, and commercial agendas all seriously influence our output, even the output that gets published in peer-reviewed journals. Let's all agree that as humans, all of our work, developments, and opinions are subject to bias and error. Considering this, we shouldn't be too hostile to anyone who may have a different perspective.
I can conclude some things with greater confidence than others. When an expected benefit can be established with greater confidence, and with a magnitude that sufficiently exceeds the expected costs, a decision can - and should - be made.
Academia draws people with specific backgrounds and biases. Groupthink is a real and substantial risk, not only because people quite frequently simply copy each others' output, but also because large-scale ostracization is a real risk if one publishes something that goes against the grain. Organizations and institutions pull funding if a finding is too controversial. Studies are often backed by large donors who, whether the pressure is obvious or not, are trying to get a specific result. Graduate students are under a great deal of personal pressure to perform and justify their loans. There are many non-scientific social factors that affect scientific rigor, even in peer-reviewed journals.
Like I said, the convention is that studies that support the speaker's preferred social or political views are usually considered credible, whereas studies that don't are nitpicked and labeled "questionable", for any of a myriad of reasons: the author(s) come from an institution the speaker dislikes, the sample wasn't representative, and so on.
The only common thread is that people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone. So I'm suggesting that instead of mistreating someone because we think the "science" bears out our point of view, we recognize that "science" itself is a fallible process susceptible to all kinds of externalities, and that it's often reasonable to mistrust a purported "consensus". Therefore, we should politely accept the difference in opinion instead of getting into a zero-sum rhetorical exercise of "Study X proves my POV" / "Study X was done by clowns! Look at Study Y, which proves my POV", which ends with both sides detesting each other all the more.
Speak for yourself. There are plenty of us who are willing to consider our positions falsifiable and actively seek objective answers in good faith.
Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.
We all have to rely on expert opinion in most cases. Even leading experts on one question have to rely on other experts for other questions.
And yet, if history has taught us anything, it's exactly that.
Primarily because, as opposed to atoms and antibodies, your test subjects have intelligent minds of their own.
I too wish we didn't have to have this conversation so often, but I think the problem is that poor studies are published, not that we "rehash this conversation" in response.
I don't think that's fair to say at all. I can come up with several ways a study can non-maliciously arrive at a false conclusion:
1. The researchers may accidentally leak information to the participants regarding the hypothesis being studied. When using human subjects, it's very difficult to avoid biasing them toward results they may think you're looking for.
2. Researchers may not sufficiently blind themselves during the experiment, causing them to have undue influence on the outcome, even if they're not consciously trying to exert it.
3. Some unknown confounding effect may be at play during the experiment that wasn't properly accounted and controlled for.
Those are just three I could think of. When dealing with behavioral sciences, I can't imagine how difficult it must be to design a test protocol that eliminates all the messiness of the meatbags being studied.
A proportion of studies will produce false positives just by chance even if they're conducted perfectly.
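A rough back-of-the-envelope sketch of that point (the alpha = 0.05 threshold, the group sizes, and the numpy/scipy t-test setup here are illustrative assumptions, not anything taken from the studies being discussed): simulate many perfectly run studies of an effect that is actually zero and count how many come out "significant".

    # Simulate "perfect" studies of a null effect and count how many
    # reach significance purely by chance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n_studies, n_per_group = 0.05, 10_000, 30

    false_positives = 0
    for _ in range(n_studies):
        # Both groups are drawn from the same distribution: no real effect exists.
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        false_positives += p < alpha

    print(false_positives / n_studies)  # comes out around 0.05

Roughly 5% of those flawlessly executed null studies still come out "significant" by chance, and those are exactly the ones most likely to get written up and published.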
"These people obviously know that their studies can't be replicated"
and what's so "obvious" about it?
Or, for that matter, about the second jump to a conclusion: "they are not ignorant, so they are obviously malicious."
Perhaps they fell prey to the same thinking that everything is "obvious" instead, taking for granted a lot of things that they should have checked?
If I had a dime for every time a physics undergraduate introduced a subtle, unintentional flaw into an experiment that, were it not a mistake, would overturn a century of science that we know Just Works.