And that's how the most effective fake news works. A tiny fraction of the people who see (and spread) the original article will ever see the retraction. And even fewer will spread it.
To me, though, fake news is deliberately engineered to achieve that effect. So unless this group was deliberately producing fake conclusions, I personally wouldn't call them fake news.
A team I ran at one of my past jobs was interviewed by NYTimes once. What was published was extremely editorialized to drive clicks, and as a result bore little resemblance to what people actually said. We didn't even bother asking for a retraction, but after that incident I've been extremely distrustful of basically everything I read in the media, no matter the source, unless I see direct, unedited evidence. And even then I'm distrustful if evidence appears to be taken out of context, which it is at least 90% of the time. I just wish the "journalists" would stop killing their own profession, and start behaving like adults. The only prominent voice I trust these days is Glenn Greenwald.
This simple practice would take care of all sorts of issues (bugs, fraud, bias, random fluctuations).
Anyway, my point is that if there is an 80% chance of getting a result, and you run two trials with those same odds, then only about 64% of studies will replicate with the correct result (in R; the original snippet was missing the tabulation step):
p = 0.8
res = t(replicate(1e5, sample(0:1, 2, prob = c(1 - p, p), replace = TRUE)))
# Row sums count how many of the two trials succeeded:
# 0 = Replicated wrong result
# 1 = Non-replicated result
# 2 = Replicated correct result
table(rowSums(res))
    0     1     2
 3944 32135 63921
80% power is the usual target, but it's probably optimistic in most cases.
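The simulated counts above agree with the exact binomial probabilities. A quick analytic check (sketched here in Python rather than R, purely for illustration):

```python
# Exact distribution of the number of "correct" results in two
# independent trials, each succeeding with probability p = 0.8.
from math import comb

p = 0.8
probs = {k: comb(2, k) * p**k * (1 - p) ** (2 - k) for k in range(3)}

# k = 0: both trials wrong       -> 0.2 * 0.2       = 0.04
# k = 1: one right, one wrong    -> 2 * 0.8 * 0.2   = 0.32
# k = 2: both right (replicates) -> 0.8 * 0.8       = 0.64
for k, pr in sorted(probs.items()):
    print(k, round(pr, 4))
```

So roughly 4%, 32%, and 64%, matching the simulated frequencies of 3944, 32135, and 63921 out of 100,000.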
And when it's finally published, do you publish both papers and credit both teams? If not, what's the incentive for the replicators to take on a not-yet-influential work?
And really, what's better? 400 studies that don't replicate, or 200 that do?
If not enough is known to do that, then there needs to be a pilot study done to figure it out. Basically what is being published now is a bunch of pilot studies.
Of course, whether the work is taken seriously isn't in the authors' hands. Essentially what OP is saying is that the collective community needs to be more scrutinizing, a statement laced with many levels of irony.
Honestly this seems like more of a media issue. They take a story and run with it and don’t care if it’s right.
That said, I don't think it's normal for a "software bug" to result in such an egregious error as to require a retraction of the study. I'd like to know why the peer review process failed to detect the problem.
Only because most papers never get the work checked at that level. Software bugs causing errors in studies are extremely common. There's a reason only a tiny fraction of researchers share their code without kicking and screaming, and it's not fear of being scooped on their next paper. (I'm slightly jaded here.)
A Bug in FMRI Software Could Invalidate 15 Years of Brain Research 
That being said, as a writer I might not be able to resist pointing out the irony that the paper on fake news was fake news :). I really can't fault the author.
not genuine; counterfeit.
"fake designer clothing"
synonyms: forgery, counterfeit, copy, sham, fraud, hoax, imitation, mock-up, dummy, reproduction, lookalike, likeness;
Part of the purpose for publishing in the first place is to see if other people can find mistakes in your logic. This is particularly the case at the real frontiers of science and theoretical mathematics, where truly new ground is being broken. Computer-aided theorem provers would like to improve this, but they are currently fairly specialized tools with limited use.
It was the demands of early primitive and unreliable computing hardware that forced developers to focus on provable software correctness: it was the best way to find hardware bugs! Dijkstra has a great monograph on developing early interrupt handling correctly. More people should read it, since as anyone who has worked with Unix signals knows, it's easy to get wrong in subtle and nasty ways.
Edit: Also to speak to the pure math point, Dijkstra's official title after graduating was "Mathematical Engineer."
Regarding Dijkstra, I personally consider TCS to be a branch of applied math or maybe applied logic. What most programmers do in practice, however, is more akin to engineering (or I'd say, is a type of engineering).
But it is true that it is a difficult battle to fight.
Just read the news on any political topic to see they have been getting away with reporting rumors from "unnamed officials" as news for decades, with no one even recording the track record of these anonymous sources. So we have no way to figure out if they are reliable or not.
It reminds me of when wikipedia became a thing and no one trusted it since "anyone could edit it". I was like "you should be skeptical of the regular encyclopedia too."
One can't account for everyone, but there's no conversation to be had, ever, about *anything*, if you automatically presume that the other person isn't at least somewhat serious about the subject matter and doesn't care about it at least a little bit.
I'm sure grown men and women will not lose their capacity for critical thinking after a laugh. If anything, the humor's role is to relieve tension. Maybe it would enhance the conversation.
Not that there is any, at the moment. The paper got retracted, and we know why, and we know the reason wasn't a matter of falsification, but of following the ethics of science. The way I see it, there are other subjects around the news that are worth considering; for example, as someone in the comments has already mentioned, how the bug went unnoticed for so long.
I am, of course, guilty of this all the time. The nuanced difference is when the individual no longer cares about correcting that vice (I know because, well, they've said so).
You're right, I am exaggerating the danger. I will assert, however, that the danger is nonzero and deserves more than dismissal. Mostly I urge thought about how the weight of these sorts of jokes can subtly affect the perceptions of people rather than insisting on the triviality of a single event.
They messed up and told everyone so. That's pretty much the opposite of propaganda falsehoods.
The authors get some points for issuing a retraction, but they lose some for not doing proper due diligence first. Especially when they likely published their paper to cash in on the zeitgeist.
And so does its demise. The term backfired and began to be used against mainstream sites when they ran articles that were 'factually challenged'. And though 'fake news' has hardly faded, now that the big players have almost entirely stopped using the term, as quickly as they had chosen to start using it in the first place, it's trending back toward zero. You now even have articles such as CNN suggesting we "ban the term 'fake news'", a WaPo columnist suggesting "It's time to retire the tainted term 'fake news'", and so on.
The moral of the story is yet another telling of Frankenstein. 'Fake news' as a term probably did well in focus tests, but once it was used in the wild it took on a life of its own, leading those who created it to desire nothing other than its extermination.
 - https://trends.google.com/trends/explore?date=all&geo=US&q=f...
 - https://edition.cnn.com/2017/11/26/opinions/fake-news-and-di...
 - https://www.wired.com/2017/02/internet-made-fake-news-thing-...
What once was a real media outlet arguing to ban words they don’t like... hmm
Many of the original news stories about the paper were the "fake news".
I wonder if news organizations will bring in rules not to report on new studies until there have been a number of successful replications. Otherwise they just mislead people.
Consider the replication crisis in psychology as another embodiment of this. It all really started when a reproducibility project attempted to replicate a number of impactful studies from top-tier psychology journals. It turned out that 64% of all studies, including 74% of social psychology studies, could not be replicated. That means if somebody actually tried to replicate any given study, they'd more likely than not find it was dodgy. But nobody did this, because there was not much motivation to do it. Refuting others' studies hurts them, likely creates enemies for you, and doesn't do all that much for your own career.
It's a messed-up system that basically needs ideological heterogeneity to create the motivation needed to ensure good quality. But such heterogeneity is practically nonexistent in many soft sciences nowadays, and to some degree the problem is even starting to seep into the hard sciences.
Maybe the authors had some problems using these methods for another more recent project, and found the error. Retraction doesn’t mean ‘it’s wrong please pretend this never happened,’ they will pull the paper as it contains an error, fix it, then resubmit it.
That's a huge issue with finding credible media sources nowadays. Depressingly few will admit they're wrong/update an incorrect story, and those that do will barely advertise that.
(Source appears hugged to death.)
It wasn't clear from the RetractionWatch page that the authors think that at least some form of the paper is still worth republishing.
I think it's this ambiguity (depending on the speaker) that causes some distaste for this term. But I think we should instead choose to embrace the former meaning and reject the latter. Or if absolutely, positively necessary, consider the latter to be the true meaning and come up with a new term to describe the former.
I don't think we need to throw up our hands in the face of this lexical challenge. To do so would forfeit all debates to some epistemological quandary.
Great summary. My perception is that was the original usage, and I am okay with it. I wouldn’t say it’s the common usage now since the term was hijacked by the president to mean the latter definition, and his reach and influence beats that of those using the term in the original sense. I’m not sure if it can be rescued at this point, so I prefer the solution of ditching it.
I don’t expect to hear the term much by the time we don’t hear about him much.
It would make sense if the term was only leveled against specific news that is not based on openly verifiable fact. However, some individuals use it to refer to entire news organizations, seemingly ones that oppose their views, while not applying it to other organizations that clearly have the same or lower standards for verifiability.