
The Glacial Pace of Scientific Publishing - ingve
http://www.fasebj.org/content/26/9/3589.full
======
untilHellbanned
Huge irony in Leslie Vosshall's suggestions. She is an HHMI investigator and
has "made it". Telling others not to try for the top 3 journals, which is the
EXACT path she used to get to her spot, is insulting. Basically: don't do all
this stuff she did to get ahead, because you'll make the system worse.

It's like Mark Zuckerberg telling young programmers: don't start an internet
company, because society is suffering from all the evils of the internet.

~~~
rgejman
I think this is an unfair characterization of her editorial. She did not get
to her position by getting mediocre papers published in good places! She is
just asking authors to be honest about whether their findings are of
sufficient importance/impact to be published in one of the top journals. I
have seen many many colleagues run the journal mill, aiming as high as
possible for as long as possible, just like she describes. It indeed wastes
time and clogs up the system!

~~~
untilHellbanned
I think this is an unfair characterization of my comment. I didn't imply she
published "mediocre papers". I don't think anyone intends to publish mediocre
papers. Everyone is trying their hardest and believes in their work.

What I'm arguing is that she, like you apparently, isn't seeing the situation
from the other person's perspective. Telling everyone else they are "wasting
time and clogging the system" is just a game of "he said, she said". It's a
battle nobody will win.

Papers are subjective. Notice that Leslie Vosshall has continued to publish
frequently in these top 3 journals since her FASEB post (2012). Is she really
being honest with herself? It is pretty easy to come up with reasons why
anybody's papers should or should not be published in the top 3 journals.
Again, it's subjective.

What we need is not a finger-pointing solution, but a meritocratic one like
the internet itself. We need data-driven decisions, not yes-man/woman editors
and 3 wink-wink, nudge-nudge crony reviewers behind closed doors.

It continues to baffle me how the scientific community can be so scientific
about its research and yet so unscientific about how it communicates that
research. Technology is making more science possible, and yet we stick with
centuries-old practices for evaluating and communicating science?

------
OopsCriticality
Perhaps this problem is worse among biomedical journals? I've routinely had
publications in ACS journals that go from submission to acceptance within two
months, usually closer to one---I hear the same from colleagues.

~~~
rcthompson
Yes, I'm not sure what other fields are affected, but I can confirm that this
is a huge issue in biomedical journals. It's especially a problem for
bioinformatics, where multi-year publication latency combined with the
typically fast pace of software development means that tools are often long
obsolete by the time they are published; as a result, researchers relying on
the publication record end up choosing algorithms that are years out of date.

For example, if a researcher chooses an algorithm with a "fairly recent"
publication date of 2 years ago, that paper may have been first written 3
years prior to that, after a 1-year process of developing the algorithm and
another 2-year process of generating and analyzing a dataset that showcases
the algorithm so they can get it published. So that "2 year old" algorithm is
actually 2 + 3 + 1 + 2 = 8 years old. If the algorithm is in a fast-moving
field like next-generation sequencing analysis, it is probably obsolete
several times over (i.e. the tool that obsoleted it is already itself
obsolete), if it isn't completely useless now due to increases in dataset size
during the past 8 years (e.g. it can only handle 1 million sequences, but
today's datasets are 100 million sequences).
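The back-of-the-envelope arithmetic above can be written out explicitly; here is a minimal sketch, where the year figures are just the illustrative assumptions from this comment, not measured values:

```python
# Rough estimate of an algorithm's true age when a researcher picks it up,
# using the illustrative figures from the comment above (all in years;
# assumptions for the sake of example, not data).
years_since_publication = 2   # the "fairly recent" publication date
review_latency = 3            # first write-up to formal publication
development_time = 1          # building the algorithm itself
dataset_time = 2              # generating/analyzing the showcase dataset

true_age = (years_since_publication + review_latency
            + development_time + dataset_time)
print(true_age)  # -> 8, matching the 2 + 3 + 1 + 2 = 8 above
```

The point being that the visible "publication age" is only the first term of the sum.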

Any bioinformatics programmer knows this, of course, and only relies on the
publication record as a last resort for information about bioinformatics
algorithms. But a biologist who has a dataset they want to analyze and is
looking for an algorithm to use probably won't know this, and will end up
selecting a hopelessly outdated algorithm as a result.

~~~
chrisseaton
If you write a research result up as a blog article, that doesn't stop you
from publishing it in a journal, does it? You could use blogs to communicate
and journal articles for more formal write-ups.

~~~
rcthompson
Most journals require that any research submitted to them must not have been
published elsewhere. On the surface, this makes sense, especially in the pre-
internet world in which scientific publishing developed. You wouldn't want to
allow someone to submit the same paper to multiple journals to inflate their
publication count. However, things like preprint servers and blog posts have
become a gray area with respect to these rules, and different journals have
different policies. I haven't heard of any paper being rejected due to a blog
post being considered "prior publication", but I also wouldn't be surprised if
I heard of it happening. But rejections due to using preprint servers
definitely do happen fairly commonly. The idea of preprint servers is
unfortunately still fairly new in the biomedical field and is just starting to
gain traction in the past year or two, so many journals still don't even have
an explicit policy on preprints, leading to surprise rejections.

~~~
toufka
Submit to Biorxiv [1]. Most journals have actually recently changed their
terms to explicitly allow pre-print publication.

[1] [http://www.biorxiv.org/about-biorxiv](http://www.biorxiv.org/about-biorxiv)

~~~
a_bonobo
Here's a useful list of the bigger journals and their preprint policies:

[https://en.wikipedia.org/wiki/List_of_academic_journals_by_p...](https://en.wikipedia.org/wiki/List_of_academic_journals_by_preprint_policy)

------
chrisseaton
I'm glad I'm in programming languages, where we usually publish in conferences
instead of journals and you can get a paper out in 7 months if you want top-
tier, 4 months if you want mid-tier, or as little as 3 months if you just want
it published somewhere respectable.

And really you can shorten those timelines by a couple of months, because
people make their papers available as soon as they're accepted.

~~~
coliveira
This is not a privilege of CS. Most fields will have conferences where you can
learn about new research before it is published. The difference is that CS
people rely on proceedings of the conference, while traditional fields
postpone formal publication to established journals.

In my opinion this puts an unnecessary burden on conference organizers,
because on short notice they have to make a decision about the correctness of
the article, as well as handle more mundane matters such as formatting and
presentation for the proceedings.

------
nonbel
>"Is this you? Your Ph.D. student has just finished 5 years of spectacular
work, the manuscript and figures go through 12 revisions, then your colleagues
and friends provide input for the next 10 revisions, you submit the perfect
“version 22” manuscript to a carefully targeted scientific journal appropriate
for the manuscript, and then you wait. And wait. And wait. Months elapse"

Funny stuff. In the end most of these papers are not reproducible anyway, so
what is taking so long?

[http://www.nature.com/articles/483531a](http://www.nature.com/articles/483531a)

~~~
epistasis
>The term 'non-reproduced' was assigned on the basis of findings not being
sufficiently robust to drive a drug-development programme.

Funny stuff. This editorial is about making it so that preclinical
publications are only published when they're ready for the clinic. It's not
about the same experiments being reproducible, and it's not an indictment of
the quality of research or its utility for advancing knowledge. However, this
editorial is often cited by people outside the field to bolster claims that
it does not support.

~~~
nonbel
Forget whether independent replication worked out; consider how the authors
make confident claims in the paper ("this shows that", "therefore this is
true") and then back down and admit it was preliminary data afterwards.

Can you find one biomed paper published this month that is reproducible in its
current form? I mean no missing/conflicting methodological information.

~~~
epistasis
>Forget whether independent replication worked out

Ok, so totally forget how you're citing an editorial that doesn't support your
point, and move on to new baseless accusations?

>and how the authors make confident claims in the paper ("this shows that",
"therefore this is true") then back down and admit it was preliminary data
afterwards.

This would be of interest if you had documentation of your extraordinary
complaints.

>Can you find one biomed paper published this month that is reproducible in
its current form? I mean no missing/conflicting methodological information.

My experience is that methodological descriptions are sufficient for
reproduction in the papers that I read. Given the outlandish claims you've
made, and your changing the subject when a bad cite is called out, I'd rather
see some documentation from you than go on a fishing expedition.

~~~
nonbel
Sorry for the miscommunication. I was trying to give biomed an easier hurdle
to surmount (reproducible at least in principle), not change the subject.

Your reading of the paper did not make sense to me, so let's figure that out
first. You wrote: "This editorial is about making it so that preclinical
publications are only published when they're ready for the clinic."

To me, the problems identified and suggestions provided by Begley and Ellis
sound more like Science 101 lessons than complaints about premature
publication:

"In studies for which findings could be reproduced, authors had paid close
attention to controls, reagents, investigator bias and describing the complete
data set. For results that could not be reproduced, however, data were not
routinely analysed by investigators blinded to the experimental versus control
groups. Investigators frequently presented the results of one experiment, such
as a single Western-blot analysis. They sometimes said they presented specific
experiments that supported their underlying hypothesis, but that were not
reflective of the entire data set. There are no guidelines that require all
data sets to be reported in a paper; often, original data are removed during
the peer review and publication process.

[...]

As with clinical studies, preclinical investigators should be blinded to the
control and treatment arms, and use only rigorously validated reagents. All
experiments should include and show appropriate positive and negative
controls. Critical experiments should be repeated, preferably by different
investigators in the same lab, and the entire data set must be represented in
the final publication. For example, showing data from tumour models in which a
drug is inactive, and may not completely fit an original hypothesis, is just
as important as showing models in which the hypothesis was confirmed."

So why have we been able to interpret the meaning of these words so
differently?

------
mirimir
The paper doesn't mention [http://arxiv.org/](http://arxiv.org/).

~~~
drmcninjaturtle
Because closed-access journals (which are sadly still the prestigious ones)
prohibit pre-publishing elsewhere.

------
33a
Maybe if they paid reviewers they'd get the job done faster.

~~~
rgejman
You probably couldn't pay them enough to get the job done faster. Any amount
you could pay them would probably be insulting compared to the # of hours it
takes to properly review a paper. Also, we don't really want to increase the
cost of submission/publication/journal subscriptions any more than necessary.

------
brudgers
Date: 2012

~~~
auntienomen
Submitted 2012, published 2015.

------
coliveira
These complaints are just nonsense propagated by people who don't understand
the academic process. As a researcher, I don't need to wait for a paper to be
published in a journal to have access to its contents. Most respectable
research is already publicized at targeted conferences and stored on websites
such as arxiv.org. That's exactly why researchers go to conferences every few
months: to learn about new developments before they appear in archival
journals.

The reason why we still have journals is that they serve as a record of past
research that has been peer reviewed by a well-known group of experts (the
editorial board). In fact, with the Internet, pretty much any journal will
post a list of accepted papers, so you can access them before they are
formally "published". You can read these papers as soon as they receive the
OK from the editorial board.

~~~
et2o
Do you really think this reviewer for FASEB, an HHMI Investigator and a
tenured professor at Rockefeller University, doesn't "understand the academic
process?"

For better or for worse, biology isn't like computer science. Wet lab
experiments take orders of magnitude more time, interpretation of results is
often complex, and far more articles are published, all of which combine to
make the situation a little different.

There have been several moves to establish ArXiv-like compendia for biology,
but they haven't really caught on yet. I'm not sure exactly why.

~~~
coliveira
This only proves that this person is working in a particularly dysfunctional
field, among thousands of other scientific fields that have no problem with
the process.
the process. There is probably no research area more competitive than
astrophysics. The guys working there, however, decided to use arxiv.org to
make available research that costs them millions of dollars and several years
of work to produce. I don't see why the biological science community couldn't
agree on something similar. It is a social problem, not a process or
technological problem.

~~~
untilHellbanned
Yes, it's a social problem. The reason is that in the biomedical sciences
there are many more tangible levels of success (personal, professional,
financial, commercial) to be had. Nobody is gonna win in astrophysics like
they can win in the biomedical sciences. Therefore what you end up with is
much more of a hairball, a mutually-assured-destruction scenario.

