
The Problem of Publication-Pollution Denialism - denzil_correa
http://www.mayoclinicproceedings.org/article/S0025-6196(15)00190-1/pdf
======
Sharlin
Seems to me these publishers are simply an obvious-in-retrospect symptom of
the perverse incentive facing researchers everywhere to churn out publications
as fast as possible regardless of quality, as long as the papers fulfill some
superficial criteria of "scienceness". It's cargo cult science plain and
simple [0]. Meanwhile, Moloch just laughs [1].

[0]
[http://neurotheory.columbia.edu/~ken/cargo_cult.html](http://neurotheory.columbia.edu/~ken/cargo_cult.html)

[1] [http://slatestarcodex.com/2014/07/30/meditations-on-moloch/](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/)

------
dash2
Speaking as an academic, this seems overblown.

Sure you can publish any crap in one of these fake journals. But nobody cares
about those journals, or has even heard their names. For all the credibility
you gain, you might as well upload it to scribd.

~~~
greenyoda
_"But nobody cares about those journals..."_

People who are familiar with the field don't care about these journals. The
problem is that poorly fact-checked popular media are more than happy to cite
these journals in click-bait articles touting "A Miraculous Cure for
Pancreatic Cancer", which are widely disseminated to readers who don't have
the scientific background to evaluate them rigorously. They see that the
research was published in The Foobar Journal of Crypto-Oncology, and assume
that it's reliable.

And if no reputable journal has published a cure for pancreatic cancer yet,
the junk-journal articles will be at the top of the search results.

------
noobiemcfoob
This has been a well-known problem for a while, though the internet may be a
multiplying force here, and the article doesn't do much beyond stating the
obvious for anyone even peripherally involved in the sciences.

What I would like to see is actual discussions on how to address the issue.
You can't stop those publications. Despite the article saying talk of free
speech is preventing action, these publishers have every right to do what they
are doing, predatory or not.

Doesn't it, then, fall on the science community to build an institution that
can be easily recognized and trusted to aggregate the "true" (or at least
vetted and peer-reviewed) work in a way approachable by the general public?

~~~
eevilspock
Why do we trust journals at all, instead of the community of respected
scientists?

We could capture a network of trust among scientists, where individual
scientists vet other scientists and articles. Think of it as ScienceRank, a
PageRank where the nodes are individual scientists and the individual articles
they publish, and the links are publish, review, reproduce and consistent-with
events:

    - scientist A published article X
    - scientist B gave a positive peer review of article X
    - scientist B gave a positive peer review of scientist A
    - scientist B gave a negative peer review of article X
    - scientist B gave a negative peer review of scientist A
    - scientist C independently reproduced the experiment in article X
    - scientist C failed to independently reproduce the experiment in article X
    - article Y is consistent with article X
    - article Y is inconsistent with article X

Trust would flow from trusted scientists. Scientists gain and lose trust via
the positive and negative reviews they or their publications receive. The
algorithm would be a little more complex than PageRank's, given the different
treatment required for the different links.

Technology could be a multiplying force in the positive direction instead.
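A minimal sketch of how the trust propagation described above could work. The event weights, node names, damping factor, and clamping rule are all hypothetical choices for illustration, not a worked-out design:

```python
# "ScienceRank" sketch: a PageRank-style iteration over a graph of
# scientists and articles, where positive events (publish, positive review,
# successful replication) propagate trust and negative events withdraw it.

EVENT_WEIGHTS = {          # assumed weights; tuning these is the hard part
    "published": 0.5,
    "positive_review": 1.0,
    "negative_review": -1.0,
    "reproduced": 2.0,
    "failed_reproduction": -2.0,
}

def science_rank(events, iterations=50, damping=0.85):
    """events: list of (source_node, event_type, target_node) triples."""
    nodes = {n for src, _, dst in events for n in (src, dst)}
    trust = {n: 1.0 for n in nodes}        # start everyone with equal trust
    out = {}                               # signed outgoing edges per node
    for src, kind, dst in events:
        out.setdefault(src, []).append((dst, EVENT_WEIGHTS[kind]))

    for _ in range(iterations):
        new = {n: (1 - damping) for n in nodes}   # baseline trust per node
        for src, edges in out.items():
            total = sum(abs(w) for _, w in edges)
            for dst, w in edges:
                # Trust flows from src in proportion to edge weight;
                # negative events subtract trust instead of adding it.
                new[dst] += damping * trust[src] * (w / total)
        trust = {n: max(v, 0.0) for n, v in new.items()}  # no negative trust
    return trust

# Hypothetical example mirroring the event list above:
events = [
    ("scientist_A", "published", "article_X"),
    ("scientist_B", "positive_review", "article_X"),
    ("scientist_C", "reproduced", "article_X"),
]
ranks = science_rank(events)
```

A real version would need the asymmetries the parent comment mentions (reviews of scientists vs. articles, replication counting more than review), but even this toy shows the core idea: trust accrues to nodes that well-trusted scientists vouch for.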

~~~
semi-extrinsic
I think negative reviews are hard to interpret, especially algorithmically. My
favorite example is Arthur Kornberg's 1957 JBC papers on DNA polymerase [1],
where the reviewers recommended rejection with, among other comments, “It
is very doubtful that the authors are entitled to speak of the enzymatic
synthesis of DNA”. Just two years later he received the Nobel Prize in
Medicine for that work.

[1]
[http://m.jbc.org/content/280/49/e46.full](http://m.jbc.org/content/280/49/e46.full)

~~~
eevilspock
I'm a big fan of Thomas Kuhn's _The Structure of Scientific Revolutions_. But
revolutions are supposed to be hard, as are changes to the Constitution, etc.
Still, I think an open trust network could actually support revolutionary
research: acceptance or rejection is no longer limited to the small subset of
scientists who control journals. Fringe scientists can give positive peer
reviews to a result and add supportive research, allowing a gradual growth of
support. And if a fringe that went against the grain early is ultimately
proven right, its members gain a lot of trust in the system for being early.

------
epe
Sadly, this isn't exactly news:

[http://www.physics.nyu.edu/sokal/lingua_franca_v4/lingua_fra...](http://www.physics.nyu.edu/sokal/lingua_franca_v4/lingua_franca_v4.html)
[http://www.fudco.com/chip/deconstr.html](http://www.fudco.com/chip/deconstr.html)

------
drpgq
I agree it is a problem, but for those in a particular field, you learn pretty
quickly which journals and conferences are reputable.

~~~
tjradcliffe
This has always been a problem, and "trusting papers written by people I know,
or who have worked with people I know" has always been an adequate solution
for people in the field, whatever the field is.

But with the democratization of scientific communication in the Internet Age,
as well as the massive over-production of science PhDs relative to the number
of academic jobs available, the number and accessibility of bogus publications
have skyrocketed at the same time as the pressure to use them has increased
enormously. Meanwhile, the amount of good science has stayed pretty much
constant, so the good/total ratio is dropping precipitously.

So while insider information is still effective for us, it's much less so for
others. In particular, the popular science press _and the majority of the
people who read it_ are interested entirely in page views on the one hand and
in being wooed by the "next amazing breakthrough!" on the other, rather than
in anything to do with actual science.

There was a comic someplace (should be xkcd but doesn't seem to be) saying of
the average reader of "IFLScience" and similar sites, "You don't love science,
you just want to look at its ass as it walks by." This is not the worst of all
possible worlds (in other places they want science to never show its face in
public, much less its ass), but it does create an environment where science
communication is hard, and the proliferation of bogus journals and of people
willing to publish in them is problematic.

It wouldn't hurt for hiring committees to have an explicit rule that anyone
who has published in a problematic journal has that publication counted
against them, but mostly there needs to be some better form of triage than
conventional peer review (which is weak enough as it is), or we're going to
drown in papers debating the number of angels that can dance on the head of a
pin, and lose the one weird trick, discovered by a post-doctoral mom, that
actually solves an interesting problem.

~~~
semi-extrinsic
The comment you cite about IFLscience etc. is extremely poignant. Would you
mind trying once again to dig up the source? Sounds like a comic I should
read.

~~~
tjradcliffe
Found it! "Cyanide and Happiness".

[http://explosm.net/comics/3557/](http://explosm.net/comics/3557/)

~~~
semi-extrinsic
Sweet, thanks!

------
ssivark
I think this is sharply correlated with what Tom Nichols termed ['The Death of
Expertise'](http://thefederalist.com/2014/01/17/the-death-of-expertise/) and
with how we tend to assign equal weight to all opinions, irrespective of the
voice attached to them. Hence, we end up measuring _volume/loudness_ rather
than quality.

------
innguest
Maybe if academics were engaged in _real work that can't be faked_, this
wouldn't be a problem.

For instance, why not only accept papers that come with cold, hard evidence
for what the paper is claiming? If it's a chemistry paper, there should be a
video of the lab work and of the results being demonstrated. If it's a
computer science paper, there should be a runnable docker image of the
program. If it's a linguistics paper, there should be a program that includes
the corpus, the searches you performed, and your statistical analysis files.

So long as academics don't have to _prove_ they did anything, there will be
fakers and posers.

~~~
tjradcliffe
Since outside of math the goal of science is not proof but plausibility, this
would not be an effective approach. Science is the discipline of publicly
testing ideas by systematic observation, controlled experiment and Bayesian
inference, and as such is aimed at changing the posterior plausibility of some
proposition, not "proving" anything. Proof and certainty are the Alchemist's
Stone: philosophers sought after them for thousands of years the way
alchemists sought after the secret of turning base metals into gold, never
realizing that the fundamental problem wasn't in their methods (although their
methods had problems) but in their goal, which was impossible and wrong. We
should seek knowledge, not certainty.

The range of means by which that can be done is huge (though considerably less
than "anything goes") and there is a definite role for work that is
exploratory and speculative, up to and including stuff that is almost
certainly wrong but worth publishing because a) the error is not obvious and
b) publishing creates the opportunity for others to respond to it, hopefully
putting the error to rest for good and all. Some of the early work on the "no
cloning" theorem was motivated by publications of this type (it turns out if
you could clone a quantum state you could use entanglement to communicate
faster than light, and there was a series of papers in Physics Letters in the
late '80s proposing to do just that).

So trusting in the self-correcting ability of the discipline of science is
fundamental to its progress, and therefore insisting on some alchemical
standard of "proof" as the goal for publication would fatally cripple the
scientific enterprise. For science to work we have to be tolerant of the
publication of error.

But we need to keep the rate of erroneous publications down to a manageable
level. Peer review and society membership were ways of doing this in the past.
They have broken down today, and we are still casting about for new ways to
keep the error rate well above zero, but not so high as to swamp everything
else.

~~~
semi-extrinsic
I think the OP is saying that you should prove that you actually did the
science, not implying you have to prove that you are correct.

