There's an exposé like this every couple of years, but nothing ever changes. My adviser told me entertaining stories of pranksters putting fake articles into CS journals from "The Austrian Naval Academy". I suspect one reason things don't change much is that the publishing game is shades of grey all the way up to the best publications - a third-tier journal won't publish complete gibberish, but you certainly might be able to get some pretty shallow stuff into it, and I would be very surprised if anyone, say, checked the references or reality-tested the results.
I know it's funny now because Austria is landlocked but back when it controlled Croatia they did have a naval academy near Rijeka. It's where Captain von Trapp went (from Sound of Music)
Yes indeed, there was a substantial Austro-Hungarian Navy. I wound up reading a decent amount about this when I went looking for an online account of my adviser's story. Still, I suspect the Austrian Naval Academy was not a powerhouse of computer science research at the time...
Scientific journals are written for a scientific audience. For the most part, scientists get very adept at identifying good articles from bad, even in well regarded journals. Thus the system polices itself, by rewarding good articles with citations and letting bad or trivial articles wither.
Unfortunately, the general public lacks that inherent skepticism, leading them to cite spurious articles from the East-Austrian Journal of Ass-scratch.
Impact and correctness are two very different things. Cool results often get cited quite a bit, without much checking if the result actually is valid/can be repeated.
(but yes, peer review doesn't solve this either, especially not in the current form)
Say you are a researcher leading a group. Then your name is going to appear on all papers published by your group (mentioned last). I don't think that should count the same as being the first-mentioned author. Simple citation count is not good enough.
Another situation: say you are the author of a dataset. Then all papers using your dataset are going to mention you, but such mentions are qualitatively different. They should not be counted in the same category.
I know of research groups that intentionally cite other members of the same group to get citation numbers up. Sure, some of it is legit, but some is clearly not - particularly when actively ignoring relevant work from other groups.
Or the classic... "Reviewer #1: Nice paper, but seems authors have missed this recently published work by TotallyNotReviewer1 et al. which could add..."
>There's an exposé like this every couple of years, but nothing ever changes.
Why would it? These are not "predatory journals," they are akin to diploma mills for quack scientists and it's a mutually beneficial arrangement. Scientists get published, and the publisher gets paid.
The only one harmed in this arrangement is a hypothetical third party, and that is dubious.
I've spent basically zero time in academia. I've heard the phrase 'publish or perish', and I know the evils of Elsevier, and the popularity of sci-hub - but that's about it.
How are these guys making money? Is it just the 'editing fees' for people trying to publish? Presumably advertisers are savvy enough to not throw money at these types of journals? Proper scientists / researchers are (hopefully) also not easily fooled - and whoever peer reviewed this article (assuming it actually was peer reviewed, as claimed) won't want their name attached to the publication.
These stories come up often enough to suggest it's a well known problem, and the culprits are easy enough to identify. But that these stories do keep coming up suggests there's good money being made somehow, and attempts to thwart these publications are failing.
Yes, it is the editing fees. Nth-tier universities in many countries are pushing faculty members to publish more. This is all done with the best intentions -- we all want faculty at the cutting edge of research. So many of these universities offer financial incentives of various sorts to faculty who publish more. For instance, they may tie pay hikes or promotions to the number of publications. There's often grant money available for research (and publication) to support these activities.
This has led to the inevitable unintended consequences. Most of these faculty members do not have the resources or the ability to publish in "real" journals, nor do they even understand what a good journal is. So they publish in whatever venue that accepts their paper. If it requires them to pay, then so be it. And thanks to the beauty of capitalism, a bunch of predator journals have emerged to lighten the purses of these poor folks.
> a bunch of predator journals have emerged to lighten the purses of these poor folks
Are the poor folks not getting exactly what they paid for in this instance? They wanted to be able to hit a published papers quota to get a raise/hired/not fired/whatever and the journal provided that. I don't feel like it's warranted to treat the journals customers as victims here. Both the 'predatory' journal and the customer are just playing the same shitty game set up by silly incentives being put in place by outside forces.
OK, so if the customers are victims of their universities, why would we call the journals who help them out "predatory"?
The quoted phrase, "a bunch of predator journals have emerged to lighten the purses of these poor folks", clearly implies that the journals are out to hurt their customers.
Don't be easily fooled. Even people in 3rd world countries who work at universities understand what a predatory journal is, and in fact publish there because they get easy publication with no sweat.
I'm not talking about faculty members at any halfway reputable university. They obviously publish in reputable journals. The word "these" here refers to faculty members at the bottom of the academic ladder, many of whom are based in developing countries and are being forced to publish (often due to governmental initiatives).
I don't want to pick a particular university or person to name and shame. But my suggestion would be to look at authors in the OP's journal and evaluate for yourself whether these authors are publishing in "respected" journals.
It seems odd to call the journal "predatory" in this situation. Who is it preying on? The person who got rejected from a journal with standards but still wanted to keep his job?
> Ah, then society's trust in science should be warned to watch out.
Perhaps you've noticed an alarming rise in society's distrust of science, partly exacerbated by an increase in poorly peer-reviewed publications.
No, that's mostly a result of global warming denialism paid for by fossil fuel companies, vaccine denialism pushed by chancers like Andrew Wakefield, and the completely awful state of media pop-science.
Except that - if you've ever argued with a conspiracy theorist online, you will know their propensity for posting links to really bad papers as "proof".
I'm not sure on this one; I suspect there's a lot of "media management" going on by Monsanto, as well as a lot of activists who are under-informed on the technology. Personally I thought that Big Ag was in favour of GMO and that organic food wasn't big enough to fund a disinformation campaign.
If a local producer is getting outpriced by a foreign company with more resources, turning it into "stop importing harmful GMOs" issue is rather convenient.
GMO is not inherently unsafe, nor is it inherently safe. Plants can be engineered to produce additional nutrients and better withstand drought, but they can also be engineered with insecticidal proteins that may be harmful to humans. This is perfectly analogous to pharmaceuticals. If someone said "ban all pharmaceuticals", we'd agree that they're crazy, but the correct response is not "pharmaceuticals are perfectly safe". Some pharmaceuticals are perfectly safe. Some are deadly poisons. GMOs need to go through an evaluation process before being approved for the open market.
The problem with these journals isn't that they accept low-quality papers. They're called that because they don't make an honest attempt at peer-review, and the publishing/editing fees are pocketed by the publishers.
IEEE Access is an example of a journal which charges a hefty "article processing fee" and publishes rather mediocre papers. But I wouldn't call them a predatory journal. They do get feedback from real researchers working in the field, they do reject fake and obviously-flawed papers, and they do not just use the publishing fees to enrich themselves. The journal in the OP did none of these things.
I am happy to stipulate that these journals do no review and pocket their editing fees. But they are not victimizing the authors, they're helping them. That's why they can charge fees.
There are also a lot of people naive about how BS publishing has become. Plenty of PhD students get sucked in; a common one is being asked to publish your thesis as a book. Learning that hubris comes before embarrassment is a rite of passage for the vast majority of people who think anyone really wants to read their thesis in bound form from a third-rate publisher created last year.
In some arts/humanities subjects publishing a book is equivalent to publishing papers. These publishers are filling the same gap as the lower quality pay to publish journals.
I would like to cite this comment as a great example of having a potentially valid point but presenting it in such a way as to make the audience reject it.
Real publishers make a good chunk of their profits from selling subscriptions of their journals to libraries. These allow anyone at a university to read the journal, but the fee can be many thousands per year (per university). These are legitimate businesses, problems aside.
Fake ones like this make it off the $800 publishing fee. Since it's "open access" they just put the PDF online and call it a day. $800 isn't bad for a few hours of proofreading and updating one web page.
Note that there are plenty of legitimate journals, like PLOS, that charge a fee to publish. It's a common business model for open access journals. Usually they have some mechanism for waiving the fee if an author can't afford it.
Also, allegedly many of these journals actually have per-paper expenses in the hundreds of dollars (!!), mostly from editor time spent soliciting and managing reviewers. That seems crazy. In contrast, major ML venues like NIPS handle all the review management in a mostly automated way and are run by volunteers. But then again, the average review quality at ML conferences seems to be pretty poor, at least compared to what I've heard about in some other fields.
>>> I've spent basically zero time in academia. I've heard the phrase 'publish or perish', and I know the evils of Elsevier, and the popularity of sci-hub - but that's about it.
I don't know if they are as evil as people say. They send me nice invitations on LinkedIn to interview with them :D
They appear to have interesting data and text analysis problems to solve.
this particular case is what is called an "open-access journal" which is a controversial topic. the nominal goal of journals of this sort is to break up the fairly awful stranglehold on academic publishing that companies like Elsevier have created. however, it is a deeply problematic space and many of these open-access journals are fraudulent publications in that they don't even attempt to do appropriate and necessary peer reviewing. it's just pay to play at a corrupt journal like this. pay your fees get your publication. it's like vanity publishing. independent of academic merit.
the ideal is an open-access journal that is open in terms of who can read it but very closed in terms of who can successfully get an article published. that's not what we're getting these days though.
international policy pissing contest: countries actually rank education internationally by counting the number of published articles. you can't make this up.
I've been drafting a P2P app to (legitimately) share publications without intermediaries. Authenticity and traceability of all data - including metadata such as tags to represent peer approval - are supported using PGP cryptography. If anyone is interested to at least provide a critique to the code (it's my pet project so it's also rife with "let me try this new framework" syndrome) please give me a ping. If you want to contribute yay!
Peer approval at respected journals is anonymous: the writer doesn't know who the reviewers are. Sometimes the reviewers don't even know the identity of the author. How does your system allow for that? PGP signatures do not permit anonymity.
They're anonymous but after the reviews the authors are then usually (always?) revealed to the reviewers. Let's not kid ourselves though, academia, and especially in smaller subfields, is made up of small communities where you can usually guess whose paper you're reviewing unless it's a brand new contribution in a field by someone who doesn't usually publish there. You can often guess who the review was performed by too, since the program committee is always public.
You could implement something like Certificate Transparency's "precertificates" where you store a hash of the digital signature in a public Merkle tree, so during review the reviewer's identity isn't known. But once the paper is published, the actual signature can also be published and checked against the review hashes in the paper. If they're missing or inconsistent, treat the paper as unreviewed.
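A minimal stdlib sketch of that commit-then-reveal idea, assuming SHA-256 as the hash (a real system would commit the hashes into a public Merkle log, as Certificate Transparency does, and use actual PGP detached signatures rather than the placeholder bytes here):

```python
import hashlib

def commit(signature: bytes) -> str:
    """During review, publish only this hash; the signature stays private."""
    return hashlib.sha256(signature).hexdigest()

def reveal_and_check(signature: bytes, published_hash: str) -> bool:
    """After publication, anyone can check the revealed signature
    against the hash that was committed to during review."""
    return hashlib.sha256(signature).hexdigest() == published_hash

# During review: the reviewer signs, but only the commitment is public,
# so the reviewer's identity stays hidden.
reviewer_signature = b"<detached signature over the review text>"
public_commitment = commit(reviewer_signature)

# After publication: the signature is revealed and verified against the commitment.
assert reveal_and_check(reviewer_signature, public_commitment)
# A substituted or missing signature fails the check -> treat as unreviewed.
assert not reveal_and_check(b"forged signature", public_commitment)
```

The hash binds the journal to a specific review signature without revealing it early, which is exactly the property precertificates give in Certificate Transparency.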
Eh ... in my experience as an editor, author, and reviewer, this varies quite a bit.
Sometimes you can take a pretty good guess at who it is based on the topic and the references (e.g., citations to in-press papers on obscure topics are usually a giveaway). Other times, though, it's difficult to tell who the authors are.
People often also think they know who the author and/or reviewers are when they are really totally wrong. I've reviewed papers and seen other reviewers cite my papers in a way that I thought "damn, that looks like something I'd write" when it was not. Similarly, I've seen reviewers think they know who the author is when they're completely off.
I personally think anonymous reviews are critical, because it keeps some modicum of shielding of reviewers against retribution for negative comments. It's not 100% anonymous of course, but it's anonymous enough to create enough doubt about who the reviewers and authors actually are.
> You can often guess who the review was performed by too, since the program committee is always public.
This must vary across disciplines. While in your field the pool of reviewers might be public, in my own field most journals keep confidential their set of preferred people to turn to for reviews.
There was an article on HN a while ago from someone who decided to start signing his reviews. To me, that seemed like a decent initiative.
The only real downside I see is that it allows the authors to contact and potentially bias the reviewer to affect a potential review of the written material.
Well, the other downside is that reviewers may be unduly lax in signed reviews. After all, people are often emotionally invested in their work / publications, and rejection sometimes offends.
SciPy did signed reviews for the 2017 conference for the opposite reason. Obviously SciPy is not Science or Nature, but still. They cite the idea that if your review is tied to your name, you probably aren't going to do a crappy review, because it will hurt your reputation. There's probably a balance.
I'd expect that people might soften the wording of their reviews, but to the detriment of their own time (i.e. spend more time by offering the same critique more constructively). I don't worry that people might soften their reviews by simply being less critical.
PGP keys can be used with any authentication mechanism or none. A PGP signature is always associated with a particular key, sure, but a key ID is just a random number.
Whatever your model is, you can implement on top of PGP. E.g. if you want pseudonymous reviewers selected by the editors of the Foo journal (the current model), there could be a key that's publicly linked to the Foo journal and you could use that key to sign short-lived pseudonymous "review keys" for "approved reviewer of paper xyz" which could sign their reviews.
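To make that delegation chain concrete, here's a toy sketch. HMAC stands in for real public-key signatures purely to show the endorsement chain (journal key endorses an ephemeral review key, which signs the review); with actual PGP, a verifier would need only public keys, not shared secrets, and all the names and key values below are made up for illustration:

```python
import hashlib
import hmac

def sign(secret: bytes, message: bytes) -> bytes:
    """Toy 'signature': an HMAC tag. Real PGP would use public-key signatures."""
    return hmac.new(secret, message, hashlib.sha256).digest()

def verify(secret: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(secret, message), tag)

journal_key = b"foo-journal-editorial-secret"    # publicly linked to the Foo journal
review_key = b"ephemeral-review-key-paper-xyz"   # fresh, pseudonymous, short-lived

# Step 1: the journal endorses the pseudonymous key for one specific paper.
endorsement = sign(journal_key, b"approved reviewer of paper xyz|" + review_key)

# Step 2: the reviewer signs their review with the endorsed key;
# nothing links the review key to the reviewer's long-term identity.
review_sig = sign(review_key, b"Accept with minor revisions.")

# Step 3: a reader checks the chain: journal -> review key -> review.
assert verify(journal_key, b"approved reviewer of paper xyz|" + review_key, endorsement)
assert verify(review_key, b"Accept with minor revisions.", review_sig)
```

The point of the sketch is the structure: trust flows from the journal's well-known key through a disposable per-paper key, so reviews are verifiable without being attributable.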
So the publication gets shared, and if you think it's a good paper you set a tag? This is a pretty good idea - but is the reviewer's identity made known?
hey - I'm also working on something along these lines [1].
v1 will be a search + reading interface for scientific articles with p2p datasources (provided as hyperdrive feeds [2]). v2 will have arbitrary features that can be used to customise the reading experience, such as marking retractions/update/expressions of concern, connecting post-publication peer reviews to articles, etc.
we're at #dat and #sciencefair on freenode, would love to talk
Unfortunately Wakefield's paper that essentially kicked off the "MMR vaccine controversy" was published by the Lancet. They eventually retracted but enormous damage was done (including deaths apparently).
For the unknowing, the Lancet was at the time this paper was published considered to be a top-tier medical journal; the equivalent of Nature or Science for medicine.
I cannot speak to its prestige now, however. I'm sure that its reputation was somewhat damaged by publishing this paper but I imagine it's still considered a "good" journal.
I worked for one of these outfits for a summer job. I needed the money and they seemed alright on the outside.
There was a strange obsession regarding looking official and academically legitimate, to the point where they would attempt to recruit professors to do the peer-reviewing, then override what they said in the final "edit" stage of the review and approve the submission anyway. Since it was anonymous, there was no way to tell if "Reviewer 3" was actually bumped or just that someone else got to that submission first.
That said, their business model was a bit different. They weren't open-access: they made their money selling conferences (which were mandatory to attend if you wanted your paper to actually get published in the journal).
Often, they would resell gifts from the venue such as comped hotel rooms and airport shuttles at above market prices to the attendees as well as part of a "package." As well, the venues usually also matched where-ever the founder wanted to go on vacation.
Out of paranoia as much as cost-cutting, they ran the offices very lean and centralized authority in the founder and his family. They probably would have had a more successful operation had they gotten good lieutenants who were better capable of maintaining the facade. My local university used to warn people off of publishing with them by name, which I thought was a remarkable step considering the precarious state of Canadian libel law.
Other staff was mostly early-stage "green card"-esque workers who they would hold the threat of dismissal over their heads (forcing those workers to rush to get a new job before they timed out and had to leave the country) and students like myself.
The year before I got there, they had a major publicity crisis in which they took substantial heat in academic circles for basically auto-publishing plagiarized articles from anybody with an email address. Part of my work was integrating one of those "turn it in"-style plagiarism detectors into their submission funnel.
By the end of the summer they were in deep with the tax authorities from a backlog of unpaid taxes; the founder bragged to me that he considered paying corporate income tax a kind of "game" in which the penalties for losing were insubstantial. I'm sure by now the penalties have grown in seriousness, though the last time I looked them up they still seem to be publishing journals and hosting conferences.
It was a good lesson for me about what to look out for in the future when trying to select a small business/team to work for.
> which I thought was a remarkable step considering the precarious state of Canadian libel law.
Taking a University to court for calling you a fraud only gives the University a great opportunity to prove it in a public court room :)
Also you can probably interview current and former employees, like yourself. Not to talk about the presumably long list of sketchy things published, which should be sufficient on its own.
If you want to maintain some illusion of legitimacy, suing a University isn't going to get you far.
Journals are flooded in a way you would never believe without spending a day on the other side of the desk. Many papers are presumptuous and preposterous at best, from developing countries but also from reputable institutions. The model is clearly flawed and the ecosystem saturated: a small number of institutions are accepted by default, while most others are not given any sniff in. I am not sure how to rectify this distortion; predatory journals are more an effect than a cause. How do you stop derivative, compilative, small-incremental papers from being put forward and spammed, if not by charging a fee? The more you charge, the more likely they are to desist. The point is that such money should be reinvested into the ecosystem, not drained out by greedy publishing corps, imho.
What people don't realize is that it is YOUR responsibility to make your research any good. Peer review won't significantly improve your research, and by the way it is very rarely constructive criticism; it certainly doesn't make you feel that you and the reviewer are on the same team (though you should be, in a perfect world). So what the author of the original article confirmed here is that he can falsify results, his name, etc. But that's ON HIM!
Wow! There are multiple wars against science, exploiting "science". There seems to be a strong need for some service that can flag fake news, fake research, etc. in facebook, google and others.
The government, which funds the vast majority of scientific research, should insist that real research be freely accessible rather than locked in these ridiculously expensive gated journals (as much as $25k/year for a single subscription), so that there is an easy way to access real scientific research.
Incidentally a similar stunt was pulled with a humanities journal several years ago and the gibberish paper got published...but this was in something a lot more like a real journal.
There is quite a bit of 'regular' research published in Frontiers and they do have peer review, but also a lot of crap (a similar point can be raised for Nature's new Scientific Reports, but also for PLOS ONE, which all cost money to publish in). The border between 'predatory' and just 'kinda crappy' journal is hard to set. The big difference between most predatory OA journals and Frontiers is that Frontiers at least responds to criticism.
>who gets to say what's fake news and fake journals, who gets to be the gatekeeper?
That's not how reputation works. There isn't a single institution who gets to say "This is good science, but this isn't", but scientists as a body can.
An institution can of course choose its own standard regarding specific journals, but it cannot impose them unto others.
You understand that the government has been printing fake science for years now? NIDA is the perfect example. Only science that fits the government's narrative is allowed to be funded and published.
It's just that instead of influencing policy used to target people they deem threats, these guys are simply trolling big pharma and the bureaucratic machine it has manifested.
So essentially the fake journals are selling ad space & resume fillers.
> In the decades that followed, the hoax proved to be a significant setback for modernist poetry in Australia. Since the 1970s, however, the Ern Malley poems, though known to be a hoax, became celebrated as a successful example of surrealist poetry in their own right, lauded by poets and critics such as John Ashbery, Kenneth Koch and Robert Hughes. The poems of Ern Malley are now more widely read than those of his creators, and the affair has inspired works by major Australian writers and artists, such as Peter Carey and Sidney Nolan. American poet and anthologist David Lehman called Ern Malley "the greatest literary hoax of the twentieth century".
Perhaps I'm missing some of the context, but I don't see the problem of publishing low quality fictional literature.
In contrast to science publications, where the reader expects that some scientific quality standards have been met, I'd assume that the market demand will be enough to wipe out low-quality entertainment publications.
There's nothing wrong with printing bad literature. The problem was their lying about being a traditional publisher that only accepted high quality manuscripts, flattering people to fleece money out of them.
That's not how these "publishers" work. They don't rely on the ability to sell the final product to make money. What they do is charge the hapless and naive, budding author for the privilege of being published and various associated "services." i.e. They make money by fleecing authors who don't know any better.
Quote: "My long-term goal—an ambitious one, I know—is to stop the production of predatory journals altogether."
I salute your energy and your objective. Having observed individual scientists over a period of decades and noting the degree to which a modern scientist's professional life is ruled by "publish or perish", I think some very basic changes would have to take place to eliminate phony journals.
On the one hand, phony journals, and corner-cutting episodes like the recent retraction of 107 cancer research papers from otherwise legitimate scientific journals[1], only show the desperation in the lives of many scientists and pseudoscientists.
On the other hand, freedom of the press allows pretty much any nonsense to be put on paper and online, and efforts to stop phony publications often collide with a very permissive attitude toward printed expression, including those that walk a thin line between fact and fiction.
To me, the central problem is that publishers make too much money from technical and scientific publication -- it invites cheating and exploitation. Maybe in the future there will be some kind of publication arrangement that is (a) beyond reproach and (b) not undermined by absurd access prices.
To complete the exposé of broken peer review, there ought to be a kickstarter / crowd funding for the $799 fee.
Cash for publication. Sad state of science. Terribly irresponsible that this is medical science.
I'm not emphasizing blame for the individuals involved...I think the way the system operates incentivizes things that don't work to produce solid results.
A new peer review system that does work to reliably generate reproducible results would be a significant technological innovation that improves basic publicly funded STEM research.
The current system for publishing needs to be overhauled. This is difficult because those early in their careers have to establish themselves, and publications are the de facto metric for doing so. However, number-of-publications is not necessarily an accurate surrogate for the quality of a person's work since all journals do not have equally rigorous review processes (or any review process as shown by this article and many others).
Long term solutions for this problem often focus on (1) new systems for reviewing/publishing scientific work as a means to make number-of-publications a more accurate quantifier of the quality a researcher's body of work, or (2) incorporating metrics other than publications.
What about short term solutions? What about an IMDB-like review and rating system for publications? This way, researchers still publish wherever they like, but could potentially be reviewed by an independent system. Would such a system be too readily game-able?
The main challenge that I can see is that it takes real effort to review papers. Without that effort "popular" publications may end up spreading materially wrong results. The shortcoming of current peer review systems is that anonymous peer reviewers are expected to put in this work without receiving commensurate (not necessarily monetary) compensation.
This is qualitative and easily gamed though. Science can get mean and highly political. Adding a qualitative metric to scientific achievement would just give ruthless people a tool to hurt good scientists.
Science is a small world. Among your peers in the field, it's not your number of publications that matter but the quality of that work. Your results build your reputation, and that is what gets you jobs.
Can't websites like Google Scholar help here? There should be some algorithmic way to identify such journals. One way could be to use the PageRank of papers to build a PageRank of journals. Then take the bottom tier of journals and use either the wisdom of the crowd or human editors to de-list or flag them. If publications from bad journals stop showing up (or at least get flagged) in search results and/or Google Scholar, they practically don't exist.
That is irrelevant. Anyone who is remotely competent to read any scientific article will recognize a predatory journal at a glance. I doubt that predatory articles get even a couple of reads. The problem is something else: those articles are used to beef up the CVs of the "scientists" who are then promoted in their jobs over honest candidates. That may be rare in the USA (but I am not sure), while it is rampant in developing countries (of that I am sure).
This would do more harm than good. Using automated methods to evaluate science can only make things worse.
Hiring committees and funding authorities already rely on far too many automatic indicators such as impact factors and citation indices, despite the fact that nobody has ever been called a great scientist because he or she published a lot. Even worse, in the humanities people often get quoted a lot and make a career because they published a really bad article that somehow made it into a good journal. Everybody tries to correct them, they publish a few even more outrageous and stubborn follow-ups, and pronto - they've got their tenure. I've seen that more than once.
Besides, whether journals show up in search results or not is also not that important. This is important to find readers and followers, but what counts more for many individual researchers nowadays is whether or not publications show up in their CV.
To fix the problem, universities and funding authorities must return to putting a higher emphasis on the scientific evaluation of the actual work of scientists, evaluate their results, their approaches and how promising their line of research is, instead of bean counting. There is no other way to ensure quality other than evaluating on the basis of quality rather than quantity.
Google Scholar already has something called Metrics [0]. You can basically go through any field and its sub category to get ranked lists based on h5-index and h5-median. For example, this is a list of top ranked conferences for Databases & Information Systems via the h5 index [1].
PageRank doesn't work. With both scientific publications and web sites, those wanting to game the system will simply create lots of page/publications that link a lot to each other. That's doable because it is very cheap to produce a zillion web sites/journals and populate them with 'content'/'scientific papers'.
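That failure mode is easy to demonstrate with a toy power-iteration PageRank (a minimal sketch, not Google Scholar's actual algorithm): three papers that only cite each other end up outranking a paper that earned one honest citation.

```python
# Minimal power-iteration PageRank on a toy citation graph.
def pagerank(graph, damping=0.85, iters=100):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outlinks in graph.items():
            if outlinks:
                share = damping * rank[n] / len(outlinks)
                for m in outlinks:
                    new[m] += share
            else:
                # Dangling node (cites nothing): spread its rank evenly.
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# "good" is cited by one honest paper; A, B, C form a citation ring.
graph = {
    "honest": ["good"],
    "good": [],
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B"],
}
ranks = pagerank(graph)
# The ring recirculates its own rank, so each member beats the honest paper.
assert ranks["A"] > ranks["good"]
```

Real search engines counter this with spam detection layered on top of the link analysis, which is precisely the "judgment call" the comment describes.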
In the end, somebody has to make a judgment call as to what a link is worth.
I'm sorry, I only read the abstract[1] but this is excellently written, in a way that only people with real scientific understanding are able to write. This isn't a "hoax", it's simply unconscionable on the part of the author.
Let me translate this to terms people here might understand. This is like writing an O'Reilly book entitled "Essential System Administration In Client-Side Javascript" detailing a cookbook of the most common techniques you would want to use to do local system administration from Angular, React, Meteor/Ember, etc, with special attention to the pitfalls of identifying whether the local system is a Windows,Mac, Linux, a mobile browser, tablet, etc, and listing the various ways you can set up the local user's device to your own liking.
Where it crosses the line is if it's written in a highly informative way without any indication WHATSOEVER of being satire. Local system administration from javascript frameworks doesn't exist. It's not a thing. You couldn't so much as set the time.
But when you write with authority on how to do that and more, suddenly it does exist. What's more, it will be interesting and useful. If you write with authority and an overview of the subject, the way it is done here, you are no longer writing satire or a hoax: you are writing fraud. The abstract starts by identifying the scope of the issue; its very first words are "Uromycitisis is a rare but serious condition that affects over 2,000 mostly adult men and women in the United States each year". From the moment you've written those words (you even say who it affects and the exceptions, since it affects mostly, but not only, adults) you are posing as an expert.
It would be as if I began the above with:
"Web-based administration of local devices with a standard browser serving as the remote administration client is a small but growing choice of hundreds of large companies with 5,000-50,000+ employees. Its advantages include leveraging development practices that may already be familiar, web standards, regular security upgrades from the major browsers, centralized distribution of small updates while leveraging the security mechanisms (such as HTTPS and signed certificates) already present in the browser, and cross-platform availability across many devices. Simply put, System Administration in client-side javascript is the closest you can come to fully controlling your user's devices without administrative tools at all. While it has important limitations, such as being unable to repartition hard-drives or install a standard image while the operating system is running, in other respects it enables all of the power of many administrative tools without many of the downsides. In this book we will cover some of the basic functions you may want to do, such as applying security updates, installing or removing software, creating user accounts and setting their privileges, and, in the case of Windows computers, setting up roaming profiles. Let's get started."
Is that a hoax? No, it's more like simple fraud. (Our author invented a research institution, created fake gmail accounts, etc.)
Sorry. I don't support the author in how they went about this. It would be different if what I wrote were hilarious, such as bemoaning that Apple Watches unfortunately do not yet support a standard browser, so you will have to break into them by finding and applying zero-days for the iPhone, which are unfortunately patched regularly; a list of Russian and Chinese sources is contained in Appendix 1, and you can usually ask for a sample while promising bitcoins. But is the paragraph I wrote above hilarious? No. It's not hilarious, and that means it's not okay.
Yeah, that was more or less my impression. The implication that the reviewers should have caught on because it was a reference to a well-known Seinfeld episode is particularly silly. Many people who grew up in the United States haven't watched Seinfeld. I definitely would not have gotten the joke, and I certainly wouldn't expect a random journal reviewer (who could be from anywhere) to be intimately familiar with American popular culture, and I wouldn't expect them to google the author's name to figure out if they're the target of a practical joke.
Reviewers usually have to assume on some level that those who submit articles aren't lying. They can't verify every claim. Maybe a case could be made that the reviewers in this case didn't do enough due diligence, but just because they didn't get the joke doesn't mean they're doing a bad job.
Yes, thanks. Especially given that he introduces the prevalence as 2,000 people nationwide. (That is beyond rare. Albinism, for example, which is extremely rare, affects about 5 people per 100,000 in the United States. 2,000 people in the whole country is about 0.62 per 100,000.) For such an incredibly rare disease, it would be normal for no one but, say, the sole specialist in the entire country to have even heard of it. It would be different if he had reported that 2 million people are affected. I don't like the whole thing.
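The arithmetic behind those figures, assuming a rough 2017 US population of 323 million:

```python
US_POPULATION = 323_000_000  # rough 2017 figure, an assumption

def per_100k(cases, population=US_POPULATION):
    """Convert a raw case count into a rate per 100,000 people."""
    return cases / population * 100_000

# The abstract's claimed 2,000 cases works out to well under
# 1 per 100,000, far rarer even than albinism's ~5 per 100,000.
print(round(per_100k(2_000), 2))      # -> 0.62
print(round(per_100k(2_000_000)))     # 2 million cases -> ~619 per 100k
```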
It's even worse because the report deserves to be published. As written (though I didn't read beyond the abstract and what he wrote about it), it deserves publication. This isn't like this other hoax, https://en.wikipedia.org/wiki/Sokal_affair -- "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity" which on its face does not deserve to be published.
I would publish this paper, and more than that, I would argue that papers like this must be published even when reviewers can't independently find anything about the condition.
Integrity is important.
Raising the standard of scholarship to "integrity isn't enough; if I can't verify the scholarship independently, then I would rather shut out exemplary papers than let frauds through" is not going to benefit science in the long term.
Many important researchers did not come with ready-made citations. When Einstein published four groundbreaking works in the Annalen der Physik[1] that tore apart the very foundations of space, time, mass, and energy, would someone Googling have found that it almost certainly wasn't a hoax?
At some point you just have to believe the papers, and prefer to believe and publish Einstein on the strength of what he's written than to deny him on the strength of a paucity of corroboration.
This exercise doesn't help anyone. The paper is just too well-written.
Do you believe the author would have been able to expose the lazy and dishonest behavior of a journal like this with a paper that was plainly satirical to a layperson? It has to walk the line in a way to show that the journal purports to contain real papers, but that the staff has no capacity or interest in the veracity of its content, just making money. The fact that the author published the expose gives me everything I'd need to know about how conscionable it is.
He could have written it poorly, though. That abstract is excellently-written. It's better-written than the hoax I put into HN terms with my paragraph above -- my paragraph (by contrast) is not that well-written: its sentences are long and it goes on a few asides. His abstract is a model of clear writing and deserves to be published without question. Such a submission is going too far.
If a paper was published after 2010 and the corresponding author's email is from a free service (gmail, yahoo, yandex) or is not from a university or a firm with an R&D department, you shouldn't trust the journal or the paper.
It's a simple litmus test and while it generates many false negatives, there are almost zero false positives.
EDIT: At least, for biomedical papers... can't speak for other fields
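The litmus test above can be sketched mechanically; the provider list and function name here are my own illustrative choices, not an established screening tool:

```python
# Illustrative, incomplete list of free e-mail providers.
FREE_PROVIDERS = {"gmail.com", "yahoo.com", "yandex.ru",
                  "hotmail.com", "outlook.com"}

def looks_suspect(corresponding_email, year):
    """Flag post-2010 papers whose corresponding author uses a
    free e-mail provider. A cheap heuristic, not a verdict: it
    will wrongly flag some legitimate authors (false positives)
    and pass many bad papers with institutional addresses."""
    if year <= 2010:
        return False
    domain = corresponding_email.rsplit("@", 1)[-1].lower()
    return domain in FREE_PROVIDERS

print(looks_suspect("jdoe@gmail.com", 2017))        # -> True
print(looks_suspect("jdoe@med.example.edu", 2017))  # -> False
print(looks_suspect("jdoe@gmail.com", 2005))        # -> False
```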
Disagree. Sometimes the main (corresponding) author is a Master's or PhD student who leaves the institution after submitting the article but before it is published. Or a professor may change institutions.
I prefer to judge a paper or article on its merits. The author's e-mail address has little bearing on that.
(Speaking for areas outside biomedical papers; for those, I don't know.)
The peer-review system, flawed as it is, protects us from a certain amount of bad and, in some cases, fake science.
It's not very difficult to establish a peer-review ring when you use freely available email accounts, and journals are increasingly aware of this [0,1]. That's why I'm often suspicious of journals that let this slide; it's a sign they don't screen very well.
In 2017, in light of the wildly successful and low-barrier ways to game the peer-review system, I don't buy your "leaving an institution" argument.
Corresponding author is more commonly the professor than the main author, and faculty at academic institutions almost always have "e-mail for life" privileges.
Disagree. Early-career scientists move frequently, and many prefer to use correspondence addresses they will keep. However, the institution and the postal address are worth glancing at.