
Academics argue over how many people read and cite their papers (2014)
http://www.smithsonianmag.com/smart-news/half-academic-studies-are-never-read-more-three-people-180950222/?no-ist
======
veddox
I think that _is_ something worth investigating - because if it is true that
50% of all papers written never get an audience, then I would see that as a
pretty major flaw in the way we do science. (As if we don't have enough of
those already...) I mean, many people have been saying for years now that
the scientific community is much too focused on churning out papers, but this
would take things a step further by saying that all those extra papers are
essentially useless. And if they are indeed useless, then they represent a
phenomenal waste of time and money that needs to be stopped ASAP.

(Personally I doubt those numbers, but I have no hard evidence to back that
up.)

~~~
feral
'I know half the money I spent on advertising is wasted, but I don't know
which half.'

A lot of research is supposed to be new, uncertain, high-risk, high-reward. So
it should fail often.

[A lot is also meant to be meticulous fact-checking, building certainty to
later build new breakthroughs on.]

IMO, there is a real problem with people doing bad research they know is bad,
but I wouldn't say that the 50% unread figure is the thing to focus on.

~~~
veddox
Your quote is thought-provoking, but I do not quite follow the rest of your
argument:

> A lot of research is supposed to be new, uncertain, high-risk, high-reward.
> So it should fail often.

Setting aside the question of whether or not science _ought_ to be the way you
describe it, what do you mean by research "failing"?

IMO, there are two levels on which research can fail. The first (and I get the
impression that you are describing this type) is the failure of some new
theory to be corroborated by the facts, or the failure of a new approach to
fulfill its promises, or a later experiment contradicting an earlier one. But
this is part and parcel of the scientific method. In fact, this leads to an
advancement of science, as we then know which explanation is not correct, or
which observations need more detailed scrutiny. And so, such a failure of
research is actually a success of science.

However, this scientific process is based on scientists exchanging their
findings freely so that they can cross-check each other. Thus, the second and
much greater failing of research would be to fail to communicate itself. And
that is exactly what appears to be happening. If scientists aren't reading
what their colleagues publish, that is not a failure of research - the
research itself might be fantastic, but if nobody reads it, who is going to
know? This is not a failure of research, this is a failure of science. And
that is a much more serious issue altogether.

~~~
feral
>whether or not science _ought_ to be the way you describe it,

I'm assuming a charitable readership - but it's frequently stated, and I
agree, that industry is for high-value things we know will work, while
science/research/academia is geared toward less certain / more speculative
projects that are difficult to commercialise yet.

> If scientists aren't reading what their colleagues publish, that is not a
> failure of research

Well, maybe?

Or maybe the paper authors hoped the results would be amazing, but they were
only mediocre [research issue]; but they decided they might as well publish
after getting the results (which is good), and some people skimmed the
abstract; but it wasn't as interesting to others as the authors hoped (diverse
perspectives; good); or the whole field went a different direction; or the
specific competition found a better technique/result in the interim [research
issue] and got all the love.

That's all just how research goes - doesn't mean anyone is failing to
communicate _necessarily_.

> This is not a failure of research, this is a failure of science.

Maybe; maybe not.

I agree there are lots of problems in science, but based on my limited
experience, I'd expect and think it's OK for lots of stuff to be unread; lots
of dead ends.

Lots of startups should fail too; that's not a bad thing. It's like there's
simply a high Bayes error.

IMO real failure is things like people making up data, or doing research they
know is bunk (maybe they hacked their p-values or left out data, or
something), or continuing research they discovered is useless but their
adviser is politically wedded to, etc.

~~~
erispoe
Yes, we should totally incentivize publication of negative results / failed
experiments, even though for the most part they are likely never to be cited.
It's a big problem that many failed experiments go unnoticed because
researchers are disincentivized to publish negative results.

~~~
glandium
This raises an interesting question: how many failed experiments are being
"replicated" just because the past failed attempts were never published? Said
differently, how many researchers are wasting time on something that is bound
to fail, because all the researchers who happened to try the same idea in the
past failed but didn't publish - after all, who wants to publish failure?

------
simonster
> Hopefully, someone will figure out how to answer this question definitively,
> so academics can start arguing about something else.

The reason academics argue about citation counts isn't necessarily that they
care, but that, at many institutions, citation counts are directly tied to
hiring decisions.
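
For context on how citation counts feed into hiring, one widely used measure is the h-index, computed directly from a researcher's per-paper citation counts. A minimal sketch of the standard definition (the paper list here is purely illustrative):

```python
def h_index(citations):
    """h-index: the largest h such that the author has at least h
    papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the sorted counts; position i is a valid h as long as
    # the i-th most-cited paper still has >= i citations.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4 and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note how the metric rewards a body of consistently cited papers rather than one blockbuster, which is exactly why unread, uncited papers contribute nothing to it.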

------
jfoutz
So what's the modern academic clickbait? I've only read the greatest hits like
The Part-Time Parliament or Goto Considered Harmful. But those were really
successful.

Is there something widely read that is not particularly insightful?

Is there a buzzfeed of journals?

Not trying to slight an author, just curious what attention culture looks
like in academia. Perhaps it doesn't exist, or it's driven by author
reputation or some other effect.

~~~
Balgair
Nature and Science, kinda. Those articles, though very good, can be very
buzzfeedy in their 'zeitgeist-ness'.

I got to talking with one of their editors a while back; they know they are
the 'gatekeepers' to a scientific career, and they hate it. He said that the
submissions they receive in just the first week of the year, for the first
issue alone, are so numerous and of such high quality that they could shut
down submissions for the rest of the year and see no drop whatsoever in
quality. Getting into Nature or Science is so prestigious, it will make your
career if you are a grad student. Unfortunately, getting in is effectively a
lottery.

Not to say you can't "make it" in other journals, but Nature and Science are
sure things. If you are a post-doc in some fields, you pretty much have to
get into Nature or Science, sometimes twice, to be considered a faculty
recruit worth mentioning. It's a giant god-damn mess of a system. It is very
'up or out', and as such we waste billions training students who have no
chance, doing work that languishes in unranked journals, and generally losing
the trust of the public that pays us.

NIH is trying to help, but it and the NSF are so inundated with Bullshit
Artists and Yes-men that I doubt they can get away from their sclerotic
bureaucracies and change.

------
veddox
Here are the original papers cited:

Meho 2007:
[http://iopscience.iop.org/article/10.1088/2058-7058/20/1/33/...](http://iopscience.iop.org/article/10.1088/2058-7058/20/1/33/meta)
(not open access)

Evans 2008:
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.183...](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.183.3668&rep=rep1&type=pdf)

Lariviere & Gingras 2008(?):
[https://arxiv.org/pdf/0809.5250v1.pdf](https://arxiv.org/pdf/0809.5250v1.pdf)

The third paper is especially interesting, as it brings forward what seems to
be a pretty solid analysis that goes against what the previous two authors
(and the Smithsonianmag) say about the state of science. So maybe there is
hope yet :-)

------
cbanek
Curious whether this takes into account online publishing and online views.
These days, with search engines indexing everything, just because something
might not seem immediately useful doesn't mean it's worthless.

Sometimes you're just ahead of your time, or in too niche of a field.

All that being said, writing papers does seem to be the academic version of
the corporate hamster wheel.

~~~
tokai
Look into the budding field of altmetrics if that kind of thing interests you.

[https://en.wikipedia.org/wiki/Altmetrics](https://en.wikipedia.org/wiki/Altmetrics)

------
kafkaesq
It's just a symptom of how dysfunctional the current funding & promotion
practices are. For many years now these decisions have been based directly on
measures more or less equivalent to "inbound links" to one's publications, to
the extent that such measures have become a de facto form of capital -- to be
fostered and protected accordingly.

------
usernam
It's not surprising, but it's also not a problem.

First, the need to publish is due to the publish-or-perish strategy, which
forces authors to put out 1-2 papers per year in order to stay afloat. If you
work in research, you realize at some point that advancing the state of the
art in any field is hard. It takes intuition, dedication, some luck, and most
importantly a _LOT_ of time. That is, /years/ of work where nothing comes
out. The world-class papers you see in Nature are works of years, not months.

Ignore bad research for a moment: imagine you had only great researchers,
which is what we'd all like to sustain. This is /fundamentally/ incompatible
with the current model. You have two strategies as a researcher: put out
crappy work to sustain a larger project, or publish bite-sized pieces of your
larger work over time, which leads to incredibly specific papers that, taken
by themselves, seem useless. If you take the second path, it's obvious that
you'll also take advantage of the weak spots of the system and cite yourself
in each subsequent paper you publish. This is common knowledge - you need
whatever you have to stay afloat.

The first strategy (keep the bigger task going) is almost never attainable. In
fact, if you need infrastructure, such as a lab of any kind, it's impossible
to pull off. This is something that only a tenured researcher working in a
very successful team or with good connections might be able to do. Aside from
your vision, and any grant you might have, you need mindshare in your fellows
and your supervisors to simply let you do it. This is much harder than it
needs to be, with too many factors out of your control.

As for the "unread" output, I'm not worried. I assume a good 30% of papers
are just churn due to the above system. As said, this very system has pushed
authors to publish even more, artificially inflating the number of
publications one would otherwise have written. This also places a burden on
new authors who need to do literature research. Finding all relevant articles
on any subject has become a massive task, even ignoring the problem of
getting the articles themselves.

But how can you tell if the published works are worth it or not? It's
impossible. Consider the evolution of all the odd fields of math, most of
which would have seemed just ludicrous at the time of publication for anybody
but the author. Stuff like origami to fold an antenna on a space telescope.
Again, if you considered all past works and projected forward, you'd realize
there's _no_ way an honest reviewer could tell "this is useless".

------
Pica_soO
An idea, probably a bad one - what if, not by accident but by intention,
there was a flaw in every paper, and a reader was only allowed to cite it if
they could state the line and type of flaw?

~~~
TeMPOraL
Someone would quickly set up a database that would tell for each paper where
the mistake is.

------
TeMPOraL
Relevant XKCD: [https://xkcd.com/1447/](https://xkcd.com/1447/).

