We tried to publish a replication of a Science paper in Science (slate.com)
290 points by hhs 25 days ago | 156 comments



The Associate Editor of Science who rejected this paper when it was submitted happens to be my wife's friend. They have been having long discussions about this issue. My understanding is that although it's a valid study, it just doesn't bring anything new to the table that is worthy of publication. This paper refutes a 2008 paper that has already been refuted. So the Associate Editor decided there is no reason why this new refutation deserves to be published in Science. That's all.

I don't think there is any case of scientific malpractice here. Keep in mind a high-profile journal like Science is extremely competitive and receives many paper submissions. They have to filter what they choose to publish based on some reason. And it seems to me their reason is perfectly valid.

Now this is turning into a bigger dispute because the researchers started arguing with the Associate Editor on Twitter (I heard the verb "harass".) So he blocked them on Twitter. And this is making them more upset. Etc.


> Keep in mind a high-profile journal like Science is extremely competitive and receives many paper submissions. They have to filter what they choose to publish based on some reason.

This is quite true, but I'd be wary of what you justify based on this fact alone. Discounting all of the interpersonal politics that goes into paper acceptance, the overall editorial policy of "high-profile" journals like this has a considerable effect on the scientific climate.

It would be unfairly cynical (IMHO) to say that the editorial policies will be geared toward whatever the publisher expects to be the most profitable (even in the long term), but on the other hand it would be naïve to expect that they're even trying to do their best for the expansion of human knowledge and the betterment of society.

What I'm getting at is that choosing what to include in Science et al. is a subtle but far-reaching tool, and maybe one better put in the hands of someone at least somewhat impartial, even if it's just some simple popularity contest counting votes on arXiv or similar.


> This paper refutes a 2008 paper that has already been refuted.

Could you cite?


I don't know, sorry. I'm just repeating what I heard from my wife. I will ask her.


I think this would be strong evidence for why this should be published, even if some no-name journal happened to publish a refutation at some unknown time in the past. I just searched for the paper 'Political Attitudes Vary with Physiological Traits' on Google Scholar and found a large number of papers citing it, and searches on regular search engines turn up an even larger number of mass media references to it. Yet I'm unable to find any reference to a replication, even with keyword tuning.

Whatever replication-cum-refutation there may have been seems to have eluded not only society, but also scientists at large. I do believe both her and Science that the study was refuted at some point, but if nobody knows this and people keep assuming the study is meaningful, then this refutation is completely meaningless. Science as a journal brings the eyeballs and thus has, in my opinion, an obligation to publish data indicating that extremely influential studies they have previously published are likely flawed.


Please do. This study of "46 adults with strong political beliefs" from 2008 has since been cited by 576 other publications, including books (according to Google Scholar).


I would be most curious for the cite myself!



I gave up on Science decades ago. I started reading more Nature instead thinking it might be better, but I've given up on it now, too. Some folks who have read Nature for far longer than I did will tell you that it has always been problematic, even more so than Science. I fail to understand why either of these publications is held in particularly high regard.


> We believe that it is bad policy for journals like Science to publish big, bold ideas and then leave it to subfield journals to publish replications showing that those ideas aren’t so accurate after all.

The authors of the repudiation article are, without much doubt, trying to pick a fight.

However, the larger issue is whether they should be. The article has some good points in it, the comments here on HN also pull out some good points (and some bad ones), and the 'replication crisis' has been raging for almost a decade now without any sign of slackening. The authors are being jerks, I feel, but dammit, they make a good set of points.

Look, I've been there too. The deadlines just keep piling up, the students stay just as needy, and the grant funding is just getting harder and harder to find (esp for new PIs). A bunch of older white guys trying to be asshats on purpose over in the corner and then pointing things out for political points (my opinion) is just like: Dude, really? Screw right the hell off.

It's just another thing on the giant pile of crap that you can't deal with at all.

But, we all know that US based 'Big-S/R1' Science is not a healthy place. It's never really been one, but we all know it now, and, to me at least, it feels like it's not getting any healthier anytime soon. The scathing blog posts keep coming, the #MeToo issues keep getting swept aside, the flagrant abuse of grad students is getting worse.

Writing these guys off is not healthy and is not going to help. Yeah, they're kinda jerks. But we have to listen to them and take them just as seriously as anyone else. Their points are valid. They are right to call out Science magazine for not publishing recalcitrant papers and relegating them to journals that often have insane paywalls.

Yeah, I know, Science gets enough papers by 9am January 1st to run for the whole year and never see a drop in quality or impact. I've heard that exact same phrase from every journal editor, it's like 'live, laugh, love', it's 'basic' now. And it's a crap excuse.

Trying to pigeonhole these jerks as 'alt right' or 'political' is not helping. Honestly, breaking Science up so that it would be able to publish these smaller repudiation articles would be better, and we all know the older PIs wouldn't let that happen, as their egos are tied up in the metrics too.

Brushing these jerks off isn't the right thing to do. Science, as an R1 type thingy, is heading for a meltdown in the US. We need to take these jerk-bags seriously, along with all the abused grad students, the new PIs, the people of color, the women, etc.


There is something to the complaint that Science shouldn't publish "Rabbits are telekinetic psychopaths" but leave the non-replication "Nope!" article to the Journal for the Society for the Study of Coprophagous Mammals, circulation 10.

They could probably run a page or two of capsule summaries of such non-replication papers, with full content on their website.


Interesting. The fundamental question is: do journals have an ethical obligation to publish credible results which refute previously published results in the journal?

The authors argue yes. I'm not sure myself -- what is a refutation vs. a different perspective/interpretation or other addition and where do you draw a line?

But it's intriguing to consider whether this would be a meaningful force for better results -- and it could be an incentive to replicate findings you think might not hold up, just from being in a prestigious journal.


I would argue yes. If the answer to a question is sufficient to warrant publication, any significant revision/contradiction to a published result obviously warrants publication.

Further follow-ups need not be published in the same journal, but the first correct/convincing refutation of a published article definitely should be.


The authors are arguing that it should at least be sent out for review—not necessarily published.


I found that weird. Sending something out to review that would definitely not get published regardless of the review is a complete waste of time for the journal and the reviewers. I would find it insulting as a reviewer to be asked to do that.


Why would it definitely not get published? I found their request totally reasonable. They have followed the scientific process well. Even the journal's editorial team acknowledged that. Then they have an obligation to at least facilitate peer-review. Especially since it would further the science on a subject they have already published on.

It is important that we call on such publications and demand an explanation. If you or your organization is a paying subscriber, then demand to know their policy on this matter and let them know what you think. Otherwise they have no accountability.


> Why would it definitely not get published

Because the journal already made that decision, and that is their right.

> Then they have an obligation to at least facilitate peer-review

Reviewers work for free and are an extremely scarce commodity. Review of a work like this is effectively expending tens of thousands of dollars of people's time. It isn't something we can just indulge in for the fun of it.

Besides that, you might not appreciate exactly how much chaos would break out and how much it would undermine the academic publishing system if any paper rejected had to be reviewed just because the submitter demanded it. The system simply wouldn't work at all.

There are a lot of journals in the world and if the author's work is good, they should be able to get a different journal to publish it.


> The fundamental question is: do journals have an ethical obligation to publish credible results which refute previously published results in the journal?

The answer is: yes, because that's how science works! And it's especially true if your journal is named "Science".


Not just "Yes" but "There is no case not to."

Science works best as a debate and not as a statement of truth from on high.

If you only publish "This is true" while excluding "No it's not because..." you're doing missionary work and career enhancement, not science.


No. This belongs in a journal dedicated to the field of study, not a general issue journal like Science.

The authors are silly to think otherwise. It is also not surprising that Science published a flashy article without a lot of oomph behind it. It's what they do.


Doesn't taking that approach then become something like:

"Drinking goats milk gives X-ray vision𝆯, according to new study!"

...

𝆯 - Accuracy to be followed up in paragraph 42, subsection 3, in a subsequent journal from a third party.

So, the flashy headline gets used by "whoever" wants clicks (etc), but the follow up details are unlikely to be seen.

Sounds like a literal plan for knowingly spreading misinformation.

Side note - Anyone know how to escape an asterisk here on HN, so it displays instead of starting an italic block? Only workaround I've found is to use 𝆯 instead, which isn't quite right. :/


I mean, I guess it's a mild problem? It's no different than pulling a quote from a pay-to-publish article though, the only difference is the amount of cachet these journals have.

Science/Nature/etc are broadcast journals. They're for publishing results that people outside your field would find interesting. Typically these are new and flashy results and sometimes they're slightly suspect. Mostly they're not. A refutation or null result typically isn't interesting to the audience of these journals- nor should it be.


> Typically these are new and flashy results and sometimes they're slightly suspect. Mostly they're not

How do we know that? I mean, we know that science gets things wrong sometimes, and that's fine because it is usually correcting itself. But this knowledge relies on the fact that if group X does bad research, there are always groups Y and Z that will try to replicate and report if it failed. However, if you get your knowledge from publications like Science, then you'd only read about group X but never about Y and Z. So what would your knowledge that "most flashy results are not suspect" be based on, given current replication numbers, a strong incentive to publish flashy results ASAP, and a policy of not publishing refutations?

> A refutation or null result typically isn't interesting to the audience of these journals- nor should it be.

Why? As a layman with an interest in science, to me "scientists from group X discovered that drinking camel milk will make you live forever" is substantially different from "scientists from group X claimed that drinking camel milk will make you live forever but 5 other groups of scientists say it's bullshit and that research is garbage". The former presents the result as an unchallenged pinnacle of scientific knowledge, and the latter as highly suspect and very much possible baloney. As a layman who has no way to verify any of that myself or to read all the specialist press, I would get a lot of value out of being able to distinguish the two situations.


> ... nor should it be.

This bit I don't really understand.

If someone is interested in X, why would they be ok with carrying around what turns out to (likely) be incorrect knowledge about it?

That approach is more associated with (say) Wired, who are famously inaccurate in many details "for a good story". E.g. Wired is entertainment.

That's not what Science (the journal) is supposed to be about though. Or is it?


Yes, believing false knowledge is unsettling. But it's unlikely the results in question were so momentous that they stayed with people outside the field. If the results were that impactful, the refuting article would belong in Science.


Articles from BBC, CNN blogs, “featured on an episode of NPR’s Hidden Brain podcast,” a video on YouTube from National Geographic (11m subscribers), all since 2012.

Lots and lots of people outside the field continue to read about this and remember it. Many millions of people. I remember when I first heard about it. I’m not in the field.

A notice in Science is at least noteworthy to that same public, even if it still won’t get a fraction of the same coverage. At least people who tried to track down the original research would find the follow up study as well.


The public and scientific discourses are different and Science/Nature/etc have a target audience firmly in the scientific discourse.

Most of the arguments here (and in the slate article) have been talking about Science shirking their scientific duty. They are not.

Perhaps you can argue that they owe it to the quality of public discourse to print this article, but they certainly don't owe it to the scientific discourse.


Yes it absolutely should be if the original is highly cited on an ongoing basis.

For you to dismiss the fake news that this Science journal article generated, combined with the justification it gave for ignoring or dismissing arguments by all conservatives as the product of an essentially scientifically proven mental disability, is astounding.


You're supposing that this article won't get published at all or won't be noticed by people citing the original simply because it's in a different journal. That's not what will happen.

I'm not dismissing anything here except the argument that a journal is required to print all counter-research to all their published articles.


Actually, you claimed much more than that.

> A refutation or null result typically isn't interesting to the audience of these journals- nor should it be.

In the context I don’t see how to read this as anything other than denying that it matters that millions of people believe this to be true. You literally say they shouldn’t be interested.


My comment had an implied context that I didn't communicate.

That context is one of a professional scientist who understands the caveats of new (i.e. not improved/refined) research being published in a journal like Science. That is the audience that Science's editors select their articles for.

The fact that (some of) these articles get gobbled up by uncritical media is a difficult problem to address. I doubt printing this refuting article in Science as opposed to FieldJournalA would yield much improvement to the situation.


> I doubt printing this refuting article in Science as opposed to FieldJournalA would yield much improvement to the situation.

I think you're wrong. The uncritical media does read Science but it won't ever read specialist journals. Thus, the wider public will be exposed to the original bad research, but will never be exposed to the correction. The fact that the people in the field who are specifically interested in this specific topic of research (all 25 or so of them) would know the truth is nice, but nobody would even think to seek out these people and ask for their opinion, when everybody already knows the "truth" (which happens to be false).


Thanks. Helpful addition.


So a journal can get a free pass on an obvious violation of scientific ethics just by saying that they're a general issue journal?

The field of study has nothing to do with it. If a journal publishes a result, they are ethically obligated to publish further work that refutes that result.


A middle ground could exist here...

Rather than publish the entire paper, they could publish the single sentence:

> In Issue 55, page 8, we published a paper; a replication with more data has since been carried out, and its results do not support the original conclusions. See the paper entitled "Foo" published in "other journal" for details.


“If a journal publishes a result, they are ethically obligated to publish further work that refutes that result.”

Too general. Few in science would accept this. The claim here is the weaker one that journals are ethically obliged to accept failed replication studies - and even that is non-obvious to me. (Should Science also be obliged to accept _successful_ replications? No? So it should accept papers on the basis solely of the results? Hello, publication bias.)


Nice attempt at reframing there, but if this new paper is correct then it's not a 'failed replication', it's a refutation. Big difference.


Aren't they obligated to investigate? If the methodology applied is sound and conclusions are different, isn't that scientific progress in and of itself? The ask here is to send it for peer-review so that they get some different points of view which may guide their own further study. They are ok with rejection but not without peer-review.


Refutations and competing null results get published in different journals all the time. Seminal results get shown to be kind of not seminal 10 years later more often than you might think.

> If a journal publishes a result, they are ethically obligated to publish further work that refutes that result.

This isn't a violation of scientific ethics and I think it's a silly argument to make. Journals aren't the parents of the research results they publish, somehow responsible for all counter-arguments or competing data points.


> It's what they do.

Indeed, legally and ethically Science is free to do what they want, and all external parties should treat them as such.

However, we, its consumers, are equally free to no longer pay attention to it, since its quality as a scientific journal is somewhat suspect. Anyone who truly believes in science (with a small s) ought to appropriately lower the amount of prestige with which they view Science after such accusations.


Well, it's more like new and flashy results are likely because new techniques were developed or a technique was applied in a novel way. Science/Nature/etc are publishing the first results with these new techniques so of course they're going to be a bit buggy.

Subsequent refinements and discovery of systematics and how to account for them improve the research, and often change the results, but those results don't make it to Science/Nature/etc because they don't belong there (usually).

So, really you should be taking the mindset that articles in these journals should be taken with a grain of salt. Whether that should have an impact on their prestige? I suppose that's a personal decision.


"Science is facing a 'reproducibility crisis' where more than two-thirds of researchers have tried and failed to reproduce another scientist's experiments, research suggests."

https://www.bbc.com/news/science-environment-39054778

As such, this would seem to be a common occurrence and Science would have to decide whether to devote equal attention to every failed reproduction.


> this would seem to be a common occurrence and Science would have to decide whether to devote equal attention to every failed reproduction

I think the answer to this is a rather blatant “of course!”

If Science has the time to support false studies that set back the scientific cause, it certainly has the time to publish their corrections and refutations. The scientific process is not about how much stuff you can publish, it's about how much true information you can convey. A journal half the size of Science but twice as trustworthy is at least as valuable, if not vastly more.


> I think the answer to this is a rather blatant “of course!”

It's not as blatant as you think. It takes experienced scientists to review submitted papers, and their time is an expensive resource. Is it interesting that a research team with a sloppy adherence to a methodology failed to replicate a study? Not really. Should Science give resources and a platform to people who are just careless researchers? Obviously not. It's easy to fail to replicate an experiment; just do a bad job.


There's an implication in your argument that the science posted in the journal is of a good enough quality to be published - otherwise why wouldn't you apply the same standard you are applying here? Doing a bad job isn't limited to just reproduction.

Considering the reproducibility crisis, is it your position that people are worse at reproducing research than they are at conducting it? An alternative interpretation would be that people have more incentive to publish hastily-made "failed to reproduce" papers than they have "findings" papers, but I don't think that's the case either, especially considering the reply from Science to this case.


I think the rationale for Science to say they already fulfilled their obligations the first time around is that they thoroughly scrutinized the methodology used the first time around. This is asking them to now scrutinize two methodologies used and for less payoff than the first time.


> Science would have to decide whether to devote equal attention to every failed reproduction.

Not to every failed reproduction. Only to failures to reproduce results that Science had previously published. And if that means they end up publishing a lot of failures to reproduce, then perhaps they need to re-think their criteria for publishing the original results. After all, why do you think this "reproducibility crisis" exists in the first place?


The thing is, this study is featured everywhere.


Maybe so, but Science featured it only once. I think it's reasonable to demand that the journal regularly note corrections to stuff that it's published. It's absolutely insane to demand (via a giant article on Slate, no less) that they publish every refutation of anything they ever ran.

The only reason this is generating clicks is the politics angle. This is boring science. The original study was probably bad (as, let's face it, most psychology studies are), but they ran it anyway (because clicks). Now the other side gets its clicks on the "censorship" angle.

The science? Still boring.


> I think it's reasonable to demand that the journal regularly note corrections to stuff that it's published. It's absolutely insane to demand (via a giant article on Slate, no less) that they publish every refutation of anything they ever ran.

I don’t understand the distinction you are making here. Why would it be insane to publish refutations? In this case, they refused to even review the refutation.


> Why would it be insane to publish refutations?

It's not at all insane to publish refutations. A really good refutation of a long-accepted result is great science, and this kind of thing makes news and runs in major journals all the time.

It's insane to demand that Science publish a refutation of anything they happened to run in the past just because your results (or, let's be honest here, your personal politics) disagree. Most refutations, too, are bad science!


They expected to replicate the findings. They failed to do so. Just because this contradicts your personal politics isn't a reason to ignore or suppress the information. They promoted it, they need to set the record straight.


"The information" is literally plastered all over a giant article on Slate and we're yelling about it as we speak. That's not very effective suppression.

The demand seems to be that "If Science ran an article they're obligated to publish anyone's refutation of that article". And that's insane. It only seems sane to you because your political outrage glands demand justice for someone using the previous study as spin, but that's not Science's problem, and it shouldn't be. Go run the outrage pieces on Slate, that's what it's for.


I would say that if Science ran an article they are morally obligated to seriously consider a refutation of that article, and if no problems can be found, publish it.

I did not get the sense that the refutation was seriously considered, and my impression is that the original article has not been retracted in any way, correct? So Science's public stance could be seen as the original article is still the best truth they've published on the topic, which feels wrong.

Wiggle room may be appropriate if the article is already retracted or has already been refuted.


No, that's wrong. The purpose of Science is to publish new and interesting results. Some of those will be wrong. It happens. That's science. Sometimes the fact that existing consensus is wrong is notable, and if so, sure: publish the refutation.

But what is up with this "moral obligation" nonsense? Being wrong isn't "immoral", it's just a mistake.

You're trying to apply ideas of politics and rhetoric to argue that Science somehow "advocated" for a result that made democrats look good and republicans look bad, and that they're now "obligated" to make democrats look bad too. And that's wrong. Like I said if you want political argument fodder go to Slate (or, sadly, HN). That's not what Science does.


I don't actually care at all what the subject of the study is, so the Democrat/Republican thing is just a total nothing to me personally.

Being wrong is not immoral. Being wrong, telling bunches of people the wrong information, and then making no effort to fix your mistake once you have the chance is at least dubious on my personal moral spectrum.


The original paper was an n=46 study, and they have regression equations there with 5 coefficients. It has been known for a long time that you can't successfully adjust for confounding variables with such small data. The fact it didn't reproduce is easily predicted and hardly noteworthy. Everybody in the field is aware of this issue. This article is just a bunch of academics tooting their own horn. The Journal's response to their request was 'Who cares?', and this is the right answer.
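To put a rough number on the small-sample point, here is a toy simulation (nothing to do with the paper's actual data or model, and it illustrates noise-mining rather than confounder adjustment per se): fit a five-predictor regression to pure noise at n=46 and you get at least one "significant" coefficient in roughly a fifth to a quarter of runs, purely by chance.

    # Toy illustration: n=46, five predictors, outcome is pure noise.
    # How often does at least one coefficient come out "significant"?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, k, trials = 46, 5, 5000
    false_hits = 0
    for _ in range(trials):
        X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
        y = rng.normal(size=n)                      # no real effect anywhere
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - k - 1)        # residual variance
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        t = beta[1:] / se[1:]                       # skip the intercept
        p = 2 * stats.t.sf(np.abs(t), df=n - k - 1)
        false_hits += (p < 0.05).any()
    print(f"at least one 'significant' coefficient: {false_hits / trials:.0%} of runs")

That comes out around 1 - 0.95^5, i.e. roughly 23%, before you even get to measurement noise, flexible outcome definitions, or the confounding problem itself.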


They published it. Clearly not everyone knew that it was nonsense, or else they knowingly published garbage because it would make for nice press, and now they refuse to retract it. Either it's shameful behavior in the past, or shameful behavior now, or maybe both.

If my friend asks me what the score in last night's game was, and I tell him 4-3, but then later realize that it was actually 4-2, I'll send him a txt to correct the mistake. If your argument is that a lower standard applies to one of the premier scientific journals, I think that's pretty messed up.


There is a reproducibility crisis in psychological research. The main reason for this is that the experimental methods they use are not robust (although they might be rigorous). A second reason for this crisis is that the things they are measuring are often hard to actually define (eg is there a definition of 'conservative views' that we can all agree on, and does any such categorization in an experiment really mirror external reality?).

In reality what happened with the first paper is the editors of the journal sent it out for review, and whoever reviewed it was happy that it met the methodological standards considered sufficient at the time. This is not the same as being happy that it was true. You might consider that 'messed up' (and you are right, it is messed up), but that was how the sausage was made in psychology research in 2008. There is no galactic council that imposes standards of evidence in a journal. They rely on the reviewers, and the reviewers are active researchers in the field.

As an aside, publication in a top journal is not a guarantee that results are watertight, and should never be considered as such. If you want to believe a result, you need to read the paper and think about it carefully. Even then, this approach is susceptible to outright fraud, which also does happen.


If we have a reproducibility crisis in psychological research, it seems to me like one of the best things we can do to address it is to incentivize people to try to reproduce other groups' findings. It's not sexy and it won't earn headlines, but it is important. The absolute best way to kill such an effort is for a journal that published a bad study to refuse to even consider publishing a study that refutes it.


You’ve misdiagnosed the problem. The best thing we can do is incentivize research psychologists to produce robust research in the first place. This is unfortunately at odds with publishing lots of papers, making money off pop-psych books and giving TED talks.


wait, what?

We're allowed to question what a peer-reviewed article published in a top-flight journal says?

There might be a possibility that the science in Science might be wrong, and we need to work that out for ourselves?

This contradicts a lot of the messaging about science these days...


No, you're not allowed to question.

You're required to question!

-At least if you actually want to derive any benefit from the process.

Science is most assuredly a "use your (own) brain" kind of endeavor. ;-)


Unless you try questioning Climate Science, of course. That science is totally settled and unless you have four papers published in a climate-specialist journal you are not qualified to question it.

Climate Science isn't alone in this, though. Archeology also requires a PhD before you can comment on the purpose of any artifacts dug up by archeologists, even though archeology degrees don't teach anything about (for instance) textile production.

And then there's the various interesting parts of Social Sciences, which often require you to have a particular identity or experience before you can question any part of their conclusions.

I agree that Science should be a "use your own brain" kind of endeavour, but increasingly it is not, unless you have the appropriate letters after your name.


> The Journal's response to their request was 'Who cares?', and this is the right answer.

That's what they should have said to the first paper, but since they decided to publish it they can no longer say "who cares" to the correction. They should either retract the original, or publish this.


Problem is it's not a "correction". There was nothing really there in the first place, which is what the first paper said to anyone with a modicum of mathematical knowledge.

It's like any other paper that reports a statistically non significant result. The appropriate response is, "Well, OK. Sure, whatever." But yeah, we're not gonna take up space from something else that may be important, just to go back and forth over this statistically non significant result. You let them do that in your journal and that crap will go on forever and you'll never have space for anything substantive. I'm sure liberal and conservative political types must have some other place they can do that at.

If you have a new result in some new area, that's one thing. But if all you have is the exact same result saying that the old non significant result is, um, non significant? I mean, I gotta agree with HN User "Gatsky". "Who cares?"

But I'm a research scientist by training and a stickler for protocol. So it's likely my age is showing and the rest of the world wants to loosen up these days? I'm just the old man yelling at the liberal and conservative kids to get off my lawn.


Then why was the original study published? And why did it get so much press?


It was likely published because it was "new". You always need a belt-and-suspenders study if you're sniffing around for interesting phenomena. With that said, oftentimes "Extraordinary claims require extraordinary evidence", as they say.

As for why it got lots of press: It's the kind of study that would drive clicks/engagement, increasing ad impressions. So... Capitalism.


So what you're saying is Science publishes studies they know are garbage? Good to know, I'll link your comment anytime someone sends me one of their articles.


Just don't trust anything published in Science or Nature until it's been replicated by independent researchers.

The same is true of all studies, but for some reason people lose their critical faculties when considering work published in the "top" journals.


> The Journal's response to their request was 'Who cares?', and this is the right answer.

Science cares, clearly!


Then why publish the original paper?


Replication might suggest they obtained the same results.

A better title for laypeople would be "We _refuted_ a Science paper that can't be replicated and Science won't publish it"


Except that's not how science works. Publishing a paper with contradictory result is not a refutation. It's a data point.


I'm not educated in how it works for this kind of (sociological? Psychological?) experiment but finding repeated instances - as per the trials reported in the article - seems like more than just a data point. They have many more negative data points than the original paper had positive ones

In pure formal logic they have at least proved it is not true for all cases, so it's a refutation in that sense at least

But in any case, "refutation" is closer in popular lingo to what happened than "replication"


This gets into deep philosophical questions. Logic can be deductive or inductive. Your use of formal logic implies deductive logic, but empirical science is inductive.

With induction, one can only have negative knowledge (this is known as the problem of induction or the black swan problem), but it gets more complicated than that. Most studies are trying to look at probabilistic outcomes and are trying to get significant p-values. There is a myriad of problems with this beyond simply the existing probability that their sample was biased. E.g., their methodology could be improper, or even just less than ideal. There could even be unknown problems with the underlying assumptions and the model being used by researchers.

To suggest that an opposite result refutes a previously obtained result deeply mischaracterizes how science and human understanding work, and this thought process deeply overvalues the knowledge we have and undervalues the more-than-likely seriously uncertain data we work with every day.

If you're interested in these problems, I'd suggest Karl Popper's work on the philosophy of science.


Thanks for your comment, I see it can help frame the conversation, but I meant to say I'm not versed in the terminology for social sciences. I'm about to finish my Doctorate heavily involving formal logic and logic programming, and have read a sufficient bit about the philosophy of science

This is an interesting discussion but I'm unfortunately out of time now. I'd only point out that most of science is abductive rather than inductive, and that the whole concept of refutation would take us into a discussion of what counts as scientific _advance_


best just think in terms of Bayesian update.
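For instance, a toy sketch of what that update might look like here (all the probabilities are made up for illustration, not estimated from the actual papers):

    # Tiny Bayesian update: belief that "the effect is real" after one positive
    # study and several failed replications. Numbers are purely illustrative.
    p_pos_if_real = 0.6   # assumed chance a study finds the effect if it's real
    p_pos_if_null = 0.2   # assumed chance of a false positive if it's not

    def update(prior, positive):
        like_real = p_pos_if_real if positive else 1 - p_pos_if_real
        like_null = p_pos_if_null if positive else 1 - p_pos_if_null
        return like_real * prior / (like_real * prior + like_null * (1 - prior))

    belief = update(0.5, positive=True)               # the original 2008 result
    print(f"after the original study: {belief:.2f}")  # 0.75
    for i in range(3):                                # three failed replications
        belief = update(belief, positive=False)
        print(f"after failed replication {i + 1}: {belief:.2f}")  # 0.60, 0.43, 0.27

Neither side "proves" anything; the weight just shifts with each result.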


Indeed, abduction is a way to base your priors in Bayesian nets!


In the same vein, a positive result doesn't prove anything, it's merely a data point in the opposite direction.


Any publications in particular you suggest?


A failed replication isn’t just a contradictory result. That would be if you tried to test the same hypothesis a different way. A true replication attempt is when you try to repeat the same experiment and fail.

Likewise, if the original paper is a data point, a failed replication gives you a second data point, and the two cancel out.


Science is supposed to be about the scientific method--a testable hypothesis. This strict definition has been forsaken for the benefit of all who would like to publish, and take credit for, science.


This is bound to happen in a social sciences experiment where the environment can’t be completely controlled.


Social science often isn't science at all. It's often really bad.

Social experiments by their very nature tend to pollute the data. This is part of why I don't believe UBI would actually work: Historical real world examples of the idea consistently fail. I think the UBI experiments are all nonsense.


This is clearly a side branch, but I'll bite. Can you point to historical real world examples?


Communism promised to provide equally well to all people. It's considered to be very much a failed experiment.

Early settlers in the US promised to provide for everyone. When freeloaders eventually ruined that deal, they changed it to "If you don't work, you don't eat."

Every time this has been tried out in the world, it works for a small group of committed, like-minded people. When it is opened up to "anyone/everyone," they inevitably have to come up with rules to account for free riders who want to take advantage.

Another real world example: When Spanish Conquistadors sent large quantities of silver back to Spain from the Americas, it didn't make everyone in Spain rich. It merely fueled inflation.

That's what happens when you add money to the system without adding goods and services: Prices go up.

The current UBI experiments are so limited in scope that the conclusion is akin to saying "People with kindly rich uncles are better off than people without such." But when it stops being a small group of privileged experiment subjects and becomes everyone, the conclusions drawn from the experiments won't apply anymore.


From a completely different point of view, a lot of countries already have some form of social security. If you run into some bad luck, you are not left to die in the streets; instead you get enough money to have a roof over your head and food to eat.

The person who originally thought up this concept was Otto von Bismarck, a highly conservative Prussian politician. The idea was that people who are not destitute but merely poor are less likely to start a (communist) revolution.

In more modern times, it means that if you fall on hard times, you need not immediately be written off from society. E.g. people who are made redundant during an economic downturn will still be available to the working pool during an economic recovery.

Of course, the implementations are not perfect.

In e.g. the Netherlands, one issue is that getting a job and going from social security to minimum wage actually leads to a negative change in net income. This means that a rational agent might actually choose not to work once they become jobless the first time.

What one might prefer is a situation where working will always yield a larger net income than not working. In such a situation, the rational choice for most people would be to always at least try to do (some) work.

In such a situation, where funds are already allocated and the downsides are already proven to be minimal, a form of UBI might be one way to provide that dynamic. In such a situation it may prove to be a rational tweak to increase the working population.


You seem to be missing the detail that UBI is being proposed as a means to help keep people afloat in the face of mass permanent unemployment. Yet articles that propose that as the raison d'etre for UBI don't actually explore that dystopian scenario where robots have eliminated jobs paying $20k to $50k and you get like $10k to live on with no hope of a job.

Instead, they tell glowing feel-good stories about how social workers could pay down debt and save a few bucks or whatever.

https://www.santacruzsentinel.com/2017/05/20/tech-giants-elo...


Obviously, a "Kim-style" UBI would be a bit different from the one proposed in that particular article.

I'm looking at something that might be a good workaround to actually get to a workable implementation of Negative Income Tax: https://en.wikipedia.org/wiki/Negative_income_tax

... which is definitely financially feasible and quite likely a good idea (and likely to save money in so many ways). In itself it's just a little bit impractical due to the sheer amount of work that rewriting tax laws would take. Hence: workaround.

Also note that I am speaking from experience-on-the-ground in Europe, and my reasons to propose UBI are thus a little different from those of Elon Musk or Sam Altman.


Ah, saving money. Sure. Let's disinvest in solving the hard problems that are typically the root cause of intractable poverty. To save money.

That makes me think of this line:

UBI Is a Transfer of Wealth from the Needy to… Everyone

From this excellent analysis:

https://medium.com/s/free-money/after-universal-basic-income...

I'm not for UBI. If you are looking to convince someone otherwise, you are wasting your time.

I know plenty of things that actually do work to solve problems and bring down expenses. No one cares to listen to any of that.

But you do you and let's just agree to disagree. Some desperately poor American woman who thinks UBI is nonsense is zero threat to your plans to change European tax codes or whatever.

Adieu.


[flagged]


She is an actual person, she posts a fair bit around here.

I'm 99% sure she wasn't trolling you. Maybe she was just frustrated.

I often appreciate her POV. It's good to have different perspectives in a discussion.


Except that this is social sciences, not science. Unfalsifiable theories and unreplicable "experiments" are common place but unfortunately the word "science" in the title makes it look like it is something else than a mix of journalism and philosophy.


Depends - in physics it’s most definitely a refutation.


Replication in this case refers specifically to the replication of the original experiment itself, not its results: https://en.wikipedia.org/wiki/Replication_(statistics)


It sounds like they gave up after Science wouldn't send it out for peer review themselves; whether it's a replication or a refutation, it is not yet even peer reviewed, much less published.

I don't think they provide any compelling reasons why the same journal is obligated to publish replications of any arbitrary study they've published in the past. It wouldn't exactly scale.


What I don't follow - and this is where I "side" with the authors - is how Science says the field has advanced past that paper if it hasn't been refuted yet and keeps being cited


By 'replications' here, let's mean refutations. I'd unfortunately agree that positive replications would provide relatively little value. But replications that seem to refute previously published studies are critical, for at least two reasons I immediately see, and undoubtedly many more:

1) Replication efforts need to be strongly incentivized. Science in modern times has been taking repeated hammer blows to its credibility. The systems we have in place seem to result in more and more simply 'fake' science being published, including in the top journals. This is not an easy problem to solve, since replication efforts can be very costly, and things like pre-publishing still end up relying on good behavior, which is something we can no longer necessarily take for granted. The one small tool we do have against bad science is replication efforts. Every effort should be made to incentivize these. The willingness journals currently show to publish significant refutations of influential studies is well below the bare minimum we should expect from them on this front.

2) What is the purpose of a journal, outside of making money? It's to inform people by providing a filtered repository of information that rates highly on both reliability and relevance. When a journal publishes a study that may have been false, it ends up doing the exact opposite of this - misinforming society. And this is especially true for the study which seems to have been refuted. It was not a footnote - it has been widely referenced and had a significant impact on public discourse. This all leads to a strong obligation to publish.

3) (another point derived from #1 while writing) - Publication of refutations aligns journalistic interests with public interests. In our current system journals are primarily motivated to publish the most 'meaningful' results, which is ultimately a euphemism for the most shocking results. So they have an incentive to want to give these studies the benefit of the doubt. Yet these are the very studies which should be held to the highest degree of scrutiny and criticism. A journal that is willing to publish refutations of the studies it publishes restores this alignment, since if Science found itself full of little more than refutations of 'meaningful' studies it published in the past, it would quickly lose its prestige. Yet at the same time, if it were willing to publish refutations and they turned out to be few and far between, one could hold what was published within the journal to a much higher standard of reliability - which would be good for both the journal and the authors who were able to meet what would undoubtedly be their now increased standards.


> It wouldn't exactly scale.

Am I the only one here that thinks that there should be no overlap between the set of 'studies that are completely non-replicated' and 'the set of studies that need to be distributed at scale'?


To say nothing of 'studies from which other studies then reference and build upon.'


The thing I find most interesting about this is that I think people's responses here are being driven in significant part by what was refuted. In particular the bad science was, as is frequently the case, on a political topic and one that I think affirmed the biases of many. In this very thread you can already see people literally arguing that the study that refuted it "was just a data point" which is a direct attempt to try to ignore the far more substantial data provided by the replication effort in showing that the original study was unsupported.

Imagine if the paper that was refuted was one that claimed to show that e.g. humans were not playing as significant a role in global warming as thought. And then Science refused to publish a study that seemed to show conclusively that the original study was, at best, deeply flawed. Would people be responding in the same way?


Given that publishing an article costs money, publishing an attempted replication is not likely to happen. If it succeeds, it won't happen because "we basically already gave you the information"; if it fails, it probably won't happen because, so what, a replication failed. Should they publish it? And what if someone then comes along and says "no, we succeeded in our replication" - should that be published as well, because someone else had a replication fail?

How much of the journal should be given over to replications that fail, and then the replications that succeed where another failed, and any potential back and forth?

At any rate, a replication study, while important for the process of science, is not so important for the process of news. Science is a business, and using their resources for publishing replications does not seem likely to increase their profit (it might even reduce it) while it would increase their costs, so I am surprised anyone would even think it should or would happen.

Perhaps what there should be is a reproducibility column, where studies attempting to replicate previously published studies are noted. A paragraph summation could be given. As reproducibility of a finding becomes accepted further replications would be dropped from the column as redundant.

on edit: grammar fix.


If you don't publish replications and failures to replicate, you aren't publishing science. If a study is worth publishing once, it's worth publishing enough times to establish its result as conclusive. Otherwise, you haven't discovered anything, and the whole scientific process has failed, since the purpose of the scientific process is to discover conclusions.

You're right that in business, there's a disincentive to publish replications, because business treats science publication as literary nonfiction: they're trying to publish the most sensational results, because the most sensational results sell journals. This is one of the strongest arguments that scientific publication shouldn't be a business: making scientific publication a business creates incentives for scientific publications to not publish science.


That's all very well and good, but please show how the needs of science to publish all these studies will correspond with the needs of Science to make money to continue publishing studies. My assumption is that the need to make money will beat the need to publish replications.

on edit: because it seems like your argument really is that Science the business should shut down.


I don't care about the needs of Science the business.

Science the business should shut down, not of their own volition, but because the scientific community shifts away from for-profit publications, leaving Science with the respectability of a tabloid.


Great, that's an honest response - an honest argument to be made.

A dishonest response is saying they should publish things that make no business sense for them to publish, and the logistics of which really make no sense for their business model, with the seeming purpose of upbraiding them when they fail to do this thing they would never do.


You're basically saying that Science (the business) should do whatever makes the most money, always, and anyone who criticizes them for doing that is "dishonest".

Science (the business) is being dishonest by pretending to publish science (the method for obtaining facts). It's not okay just because it makes them money, and it's not dishonest to upbraid them for their harmful behavior.


um, what? It isn’t the obligation of Science to publish any random piece that the authors conveniently argue is foundational. Null results are useful, but not the most useful thing.

“Best Actor nominee argues why his performance is the most impactful,” etc, etc.


Sure, Science isn't obliged to publish an article which refutes a previous Science article. They could instead simply retract the article which has since been proven wrong.

But to not publish the refutation and not retract the refuted article is to make a deliberate decision to continue to endorse research which has been proven to be incorrect.


The original article was published in Science, so this isn't a "random piece". It's a specific refutation of something they saw fit to publish.


I’m not sure “refutation” is the right word.

> “We still believe that there is value in exploring how physiological reactions and conscious experience shape political attitudes and behavior, but after further consideration, we have concluded that any such relationships are more complicated than we (and the researchers on the Science paper) presumed.“

That sounds more like a clarification or a letter to the editor. I don’t disagree that nuance is important, but I assume Science believes they can get more nuance out of new studies than out of reinterpreting old ones.


They failed to replicate the study over multiple attempts, one with four times the sample size.

Using the time reversal heuristic, if we flip the order, so the replication came before the original, how would you feel about Science publishing a paper with a positive result, when multiple studies--with larger samples and the same materials--had not obtained the result?

If you think the original should still be published in Science in that case, why? Since the original is published and highly visible, it's dangerous that Science won't correct the highly publicized, unlikely-true result.


> ...publishing a paper with a positive result, when multiple studies--with larger samples and the same materials--had not obtained the result?

That happens all the time. Papers with null results tend to either sit in drawers unpublished, or never get written at all. Nobody knows about the multiple studies because nobody will publish them.


And that is a huge problem. Are you arguing that because null results are rarely published, we should publish them even less?


Where are you possibly getting that from?


We agree that currently it happens all the time.

Some number of people (including eg. Roca) say that this is a problem - a "replication crisis" if you will.

Those people say that perhaps papers with null results should be published. Perhaps -they say- publication of null results could be (one possible) (part of) the solution.

Your reply sounded like you don't completely agree? Is that true, or did Roca read you wrong? Could you clarify?


I agree that null results should be published. I was making the point that some otherwise reasonable people disagree, for career or other reasons. If I were a researcher, I probably would not attempt to get a null result published; I’d work on something else that actually had a chance of going into my tenure file.


A null result is the core mechanism of science. Nothing can be proven true. You can only prove something false, and that is done by providing a counterexample. That's what this group did.


It was rejected by the advisory board, not even sent out for peer review.


Getting into Science is about their editorial board’s wacky definition of notability. It has little to do with quality or even impact.


Perhaps it’s the political nature of the study, but everyone seems to be missing the basic flaw in the argument for publishing this reproduction: the flaws of the original research (small sample size, weak correlation, etc.) were published for everyone to see. We’re talking about a science journal here, and the general mindset of the comments seems to be equating it with a magazine or something. Educated readers, the real audience of this journal, would’ve read the research ‘against the grain’, if you will, and immediately seen that there was little or nothing there. This raises the question of why publish in the first place, to which an answer might be that it was unique research at the time. The methods pass muster and the data is there to be scrutinized by the reader. These guys are basically taking advantage of the obvious and attempting to, as they say themselves, ‘make their careers’.
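To make the 'small sample size, weak correlation' point concrete, here is a back-of-the-envelope power check (the effect size is made up for illustration, not taken from the paper): with n=46 and a weak true correlation, a standard p < 0.05 test finds it only a minority of the time.

    # Toy power check: how often does a weak true correlation (r ~ 0.2)
    # reach p < 0.05 with only 46 subjects? Purely illustrative numbers.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n, true_r, trials = 46, 0.2, 5000
    hits = 0
    for _ in range(trials):
        x = rng.normal(size=n)
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
        _, p = pearsonr(x, y)
        hits += p < 0.05
    print(f"power at n=46, r={true_r}: ~{hits / trials:.0%}")   # roughly 25-30%

Which also means that when a study that underpowered does report a hit, there is a decent chance the estimate is noise or badly inflated.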


But “we” didn’t see the flaws. It has been a high impact study, and people both inside and outside of academia have been working off the assumption that it is true.


There’s no reason not to have seen the flaws (chief among them the minuscule sample size, particularly for such a broad conclusion) if one read the study critically. This gets at a symptom of our culture’s lack of understanding of how research works. That’s why one of the most powerful marketing techniques today is to merely state ‘research shows xyz’, with only a footnote referring to the underlying studies that no one will read, and those that do will do so with specific prejudice rather than general skepticism. Those who work under the assumption you mention are making the same mistake of reading the headline and taking it as knowledge, usually because it supports some bias of theirs or some point they’d like to make one way or the other, just like those who are unnerved by the journal not publishing the replication study.


What's the modern peer review process for science? Is there an open forum filled with reputed researchers who are willing to write their reviews, ask questions or give suggestions to young researchers who are submitting their work for review in a well-formed format? And is there a role for other bystanders who are observers out of curiosity and may be inspired to become scientific researchers or reviewers themselves? In this age of Internet, isn't this kind of online collaborative open forum going to scale much better?

Why are these age-old print journals still in charge of this process, with print-era processes?


In my opinion, the most reliable science comes from mid-tier journals. The highest-impact journals are a mix of the absolute best and the absolute most hyped.


as well as top sub-discipline journals, which often have an impact factor of around 4-5 but, within the sub-discipline (e.g., developmental psychology), have both respect and influence


I'm somehow both surprised at the comments here and also not surprised. A remarkably large number of people posting find no problem whatsoever in a supposedly top journal claiming "the field has moved on" from a major and famous study that turns out to be wrong. Heck, I'm not a psychologist and I've heard of this study!

All sorts of important questions are raised by this:

- Should we trust anything published by Science or other top journals at all?

- Is the lackadaisical attitude related to the subject matter? I've read that studies casting conservatives in a good light get replicated almost immediately, and psychology is completely dominated by liberal voters (>90%). Which leads to:

- Why did it take over ten years for such a widely cited study to be replicated? Nobody cared because it fed their own political biases?

And finally:

- What do we do about it? Can academia reform itself at all, or is that a lost cause? If it's the latter, perhaps psychology isn't really worth studying, given the large expense involved and the fact that the outputs so often seem to be noise.


As an (old/ex-ish) wikipedian, I often hear people say "Wikipedia is unreliable", which is indubitably 100% technically correct.

But the statement is highly misleading, because what are you comparing it with?

Put another way: There's no such thing as a 100% reliable source for anything. You need to apply judgment in all things.

'Science' is actually a pretty decent journal. It didn't get its reputation for nothing. This doesn't mean that you can just assume they are a magical source of all truth; just like you can't assume 'Wikipedia' is (_definitely_ not), or 'Nature', or 'Britannica' or etc.

- Should we trust anything published by Science at all? Only to a degree. One must always be skeptical and think for oneself, no matter what the source.

- Why did it take ten years for the study to be replicated? Maybe it didn't. Perhaps these are just the first authors to speak up about it.

- What can we do about it? Continue working to improve the process, as we have done over the past centuries or millennia. The authors of this particular paper spoke up. That's part of the process too.


Whilst I see your point, the problem here is not "people want Science to be a source of truth" but rather "people want Science (and other journals) to strive for the truth", which is subtly different. A part of striving for truth is being interested in cases where you were wrong as well as right. Science has given the impression here that they don't care if astonishing claims turn out to be false. This is the sort of behaviour associated with tabloids. Is Science an entertainment magazine or is it striving for truth? The standards of behaviour expected are different.


On one hand, their research is important and should be published. On the other hand, do they have a right to demand that it be published in the most prestigious journal in the world? High-profile journals reject good articles all the time for various reasons. "The field has moved on" in particular is a grating reason for rejection, but it is used all the time.


In general they don't have a right to demand it. However, in this case it is a direct refutation of a previous article that the journal published. At the very least, their refutation deserves to be evaluated by a peer review process and considered for publication. In fact, I will go further: if a paper directly refutes older work in a journal, then that journal probably DOES have an obligation to publish it, as long as it is done at least as well as the original.

The claim that the field has moved on is complete garbage and nonsense. It is farcical on its face. The older work is still being quoted.


Maybe you are right, but Science isn't an outlier here. All journals are loath to publish direct refutations of previously published articles. Conference papers are even worse in that regard: zero room for refutations. "The field has moved on" is probably, second only to "The article contains factual errors", the most common reason for article rejections. This problem is not limited to Psychology either. It is just as prevalent, if not more so, in Computer Science.


TL;DR: they replicated the famous study about liberals and conservatives reacting differently to threats, finding no difference. Science (a famous journal) refused to publish the null replication.

>We believe that it is bad policy for journals like Science to publish big, bold ideas and then leave it to subfield journals to publish replications showing that those ideas aren’t so accurate after all. Subfield journals are less visible, meaning the message often fails to reach the broader public. They are also less authoritative, meaning the failed replication will have less of an impact on the field if it is not published by Science.

Agreed, but maybe the issue here is journals shouldn't be prestigious. I understand journals' historical usefulness, but the Internet has made them obsolete.


> Agreed, but maybe the issue here is journals shouldn't be prestigious. I understand journals' historical usefulness, but the Internet has made them obsolete.

Prestige and historical usefulness are practically synonyms. You're arguing that prestige shouldn't command respect, but this is like arguing against human nature.

"Science" should recognize its importance to the world at large and take extra care to publish failures of replications of papers that it reports. The more prestigious the journal, the more carefully they ought to follow this proscription, especially if they wish to maintain their prestige.


Agreed, the whole narrative of the internet democratizing things has been largely overstated and poorly supported.

So many of the existing social and industry hierarchies were merely replicated on the Internet, as we see with the popular print newspapers and magazines, which are often equally or more successful online.

Maybe hierarchies existed for a reason and were a good thing. The goal should be building better hierarchies rather than replacing them entirely with some idealistic academic pipe dream (ditto for markets in economics).


Widespread adoption of the internet has dramatically changed the world. "Democratization" was an early term used to describe the opportunities of the early internet and it remains ongoing. There have been many great collective accomplishments in the last 20 years that are indicative of that democratization.


> TL;DR: they replicated the famous study about liberals and conservatives reacting differently to threats, finding no difference

I think this is a confusing way to word it; it implies that they successfully replicated the original study and found no difference between their results and the original. I might say 'they were unable to replicate the famous study about [...]' or 'they attempted to replicate the famous study [...], finding no difference between liberals and conservatives'.


Wow, English is terrible. You are totally right.


In this particular case I think it's less English's fault and more the improper use of 'replicate'.


> TL;DR: .... Science (a famous journal) refused to publish the null replication.

They refused to send the null replication out for peer review. I think that is an important difference. The optics make it look more like suppression than a rejection based on quality.


> We should continue to have frank discussions about what we’ve learned over the course of the replication crisis and what we could be doing about it (a conversation that is currently happening on Twitter).

What's this twitter conversation they reference?


Failed to replicate != refutation. The latter is definitive, as in deductive logic, but science is inductive. Retracting articles that are no longer convincing in light of new data is the wrong way to go.


I applaud the authors for their laborious attempt at replication. But this comes across as sour grapes. They don't make a convincing case for imposing an obligation on Science to peer review their paper.


While I keep my mind open to the idea that science might have new results in this area, there's a certain degree of ideology that goes into coming up with these studies in the first place. If this study had come out at a different point in history, its significance would be less important. However, because of the breaking down of the political order in the United States, people (and especially liberals) are scrambling to explain it. Why are they different from us? How could they believe something different?

While people do have varying anxieties, knowledge, and reactivity to aversive stimuli, there's a stronger prior that political beliefs are based on a few different things.

1) Class Position: People with money, or who believe they are likely to ascend classes, trivialize the struggles of others and ascribe problems to individual character rather than systemic design flaws. If an airplane crashes, do we blame the pilots?

2) Culture / Milieu: What ideas are floating around? What do people whom you speak to on a regular basis believe?

3) Personal Experience: Have you had a powerful experience that confirms or contradicts the broader culture?

4) (often but not always most importantly) What will personally benefit you or people you care about?

There's a few other things you could throw in there as well, but Marxists would say to liberals that the reason they are different from you is:

1) They aren't that different from you. You both believe in the integrity of the capitalist system.

2) Often they are from dominant racial or economic castes.

3) There's an intentional campaign on the part of the elites to divide the working class against each other, and one way is via scapegoating minorities and women. Everyone likes to say "No matter how bad it is, at least I'm not X".

4) There is an intentional campaign waged on the part of elites to tie American national identity to its military campaigns overseas.

5) There is an intentional campaign on the part of elites to use race and religion as a wedge issue to divide the people so they do not blame the people causing their problems.

That's not a scientific explanation, but my strong prior is that while there's variation amongst people, we're basically all the same, and we're mentally flexible enough to think through adversity and come to our own conclusions.


Imagine if refuting a famous study were guaranteed to get you a publication in a prestigious journal - it would be impossible to disentangle the conflict of interest.


Edit to my own comment to clarify: while I believe replication should be encouraged, because statistics!, this might set a precedent that gives incentives to scientists to replicate high-profile articles AND refute them since the original journal would be guaranteed to publish them. If you've been in academia, a publication like this could grant your grant, pardon the pun.


You don't think there is a problem with it being that easy to convincingly refute articles in the most prestigious science journals? I think that encouraging those refutations would be good, both to force the reviewers to do a better job and, as you said, to encourage opportunists to poke holes in existing papers. It would be good if a significant number of scientists built their entire careers around poking holes in bad papers like this.


Technically, not a refutation, because a statistical analysis never refutes a hypothesis; it only supports it with a certain probability or fails to support it. That is, a negative result is inconclusive. No, I do not think "poking holes" should be incentivised - I believe REPLICATIONS of studies should be incentivised, because every study = a sample. Statistically, the more samples, the better able we are to filter through the noise. What you suggest will encourage scientific sensationalism.
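As a toy illustration of the "every study = a sample" point (my own sketch, with made-up numbers rather than anything from the actual studies), pooling an original result with a few replications via simple inverse-variance weighting shrinks the uncertainty around the estimate:

    # Toy fixed-effect meta-analysis: pool an original study with three
    # hypothetical replications. Estimates and standard errors are made up
    # purely for illustration.
    import numpy as np

    estimates = np.array([0.25, 0.03, -0.02, 0.05])  # effect estimates (e.g. correlations)
    std_errors = np.array([0.15, 0.08, 0.10, 0.07])  # their standard errors

    weights = 1.0 / std_errors**2                    # inverse-variance weights
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"pooled estimate: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")

The pooled standard error comes out smaller than any single study's, which is why accumulating replications, rather than chasing dramatic one-off "refutations", is what actually filters out the noise.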


Perhaps what should happen is that a replication should be published beneath the paper it replicates, as an addendum.


What did the original paper imply? I didn’t see a clear explanation in TFA. I suspect that it implies that conservatives are more likely to ignore threats but I’m not sure.


It was more of the opposite. The study claimed that conservatives had a stronger physiological reaction to threats.


misleading title


* I just failed to read the next paragraph, and should have finished the article first. Just ignore this.

"We conducted two “conceptual” replications (one in the Netherlands and one in the U.S.) that used different images to get at the same idea of a “threat”—for example, a gun pointing at the screen. Our intention in these first studies was to try the same thing in order to calibrate our new equipment. But both teams independently failed to find that people’s physiological reactions to these images correlated with their political attitudes." Emphasis mine.

Why didn't they use the same images?


Continue reading down the article - they did do a study with the exact same images.


Just in the next paragraph. I should have finished the article first.


For those curious, these are the descriptions of the images from the original study, which they linked:

> a very large spider on the face of a frightened person, a dazed individual with a bloody face, and an open wound with maggots in it



