
After 350 years of academic journals it’s time to shake things up - Hooke
http://www.theguardian.com/science/political-science/2015/apr/04/after-350-years-of-academic-journals-its-time-to-shake-things-up
======
arcanus
"Indeed it has been said that Democ­racy is the worst form of Gov­ern­ment
except for all those other forms that have been tried from time to time."

-Winston Churchill

-----------

While I broadly agree that the dissemination of academic research can and
should improve, it is not clear to me what will amount to a superior (while
still robust) system. Nor does the article clearly articulate a viable
alternative.

The mathematical sciences have provided remarkable utility for the last 350
years in part because of the robust feedback loop of 'hypothesis-experiment-
disseminate'. I believe this 'science is broken' hype is largely just that.
No one throws out a codebase before a superior (and regression tested!)
alternative is ready to be deployed.

My 2 cents: 1) Let's push for publicly funded research to be made publicly
available, regardless of what journal it was published in.

2) Let's think about incremental changes to peer review that can improve
dialogue and speed up review times. Instead of waiting 6 months for a late
paper reviewer to drop a series of comments in my lap, why not a wiki or
github-like editing and annotating environment where we can (anonymously)
exchange comments and foster a fruitful dialectic? Then, let's include the
comments (again, anonymously) with the paper. Show how the sausage is made--
scientists have little to hide.

FWIW; I am a practicing scientist in the field of computational engineering.

~~~
Retric
A reasonable compromise IMO is for journals to have a very temporary lock on
new publications. Public access after, say, 1 to 2 months seems to meet the
vast majority of "Open Science" requirements without much change required.

2) IMO a much more stringent form of peer review is necessary. P<0.05 is only
useful in _very_ limited situations. I would argue that unless you publish
your exact experimental design beforehand, you would need to drop that to
P<0.01 to cover for a lot of _marginal_ research out there. The old trick of
running 10 studies so you can likely publish one of them gets harder when you
need to run 50 or fudge the numbers harder.
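The arithmetic behind that trade-off can be sketched directly (a hypothetical back-of-the-envelope calculation, assuming each study independently tests a true null effect):

```python
# Chance of at least one "significant" false positive among n
# independent studies of a true null effect, at significance
# threshold alpha.
def chance_of_false_positive(n, alpha):
    return 1 - (1 - alpha) ** n

# At p < 0.05, ten null studies will likely yield one publishable result:
print(chance_of_false_positive(10, 0.05))  # ~0.40

# At p < 0.01, you need roughly fifty null studies for similar odds:
print(chance_of_false_positive(50, 0.01))  # ~0.39
```

Under these assumptions, tightening the threshold from 0.05 to 0.01 forces roughly five times as many null studies to fish out one spurious "significant" result, which is the point being made above.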

~~~
munificent
> 2) IMO a much more stringent form of peer review is necessary.

That disregards the cost of review. Reviewing and repeating experiments uses
resources that could be spent elsewhere. On the long tail of papers that are
not high impact even if correct, that's probably not a good allocation of
effort.

What I think would be better is that as a paper crawls up the impact curve
(probably measured by citations), then additional resources should be spent
validating and repeating it.

In theory, that's how the system already works since everyone wants to build
on top of or refute famous papers. In practice, I don't know if it does.

~~~
bjelkeman-again
Repeating experiments is not a wasted effort. "More than half of biomedical
findings cannot be reproduced – we urgently need a way to ensure that
discoveries are properly checked" - Elizabeth Iorns, co-founder and CEO of
Science Exchange. [1]

Science which can't be repeated is problematic.

[1] [http://www.newscientist.com/article/mg21528826.000-is-medical-science-built-on-shaky-foundations.html](http://www.newscientist.com/article/mg21528826.000-is-medical-science-built-on-shaky-foundations.html)

------
capnrefsmmat
There's an interesting asymmetry in peer review. If I invite an outside
scientist to review my work once it is complete, that's external peer review,
and it confirms that my work is sound. If I invite the outside scientist to
collaborate on the work from the beginning, so they can catch mistakes before
they ruin the work, that's _not_ peer review -- I'd have to go out for peer
review at the end.

Why the focus on catching errors when they've already been committed?

Some journals
([https://osf.io/8mpji/wiki/home/](https://osf.io/8mpji/wiki/home/)) have
started "registered reports", where you send your _methodology_ out for peer
review before running the experiment, and the journal commits to publishing
the results if the methodology is sound. This seems vastly more reasonable,
though obviously it's only possible if you know exactly what you want to do in
advance.

So why not switch toward reviewing work earlier, through registered reports or
collaboration, and end the artificial bottleneck at publication? It would
prevent wasted time on flawed experiments and remove some of the bias against
negative results.

(Of course, the other problem is that peer reviewers aren't very good at
detecting errors -- even serious ones. Medicine has moved towards standard
checklists for common types of research, requiring papers to report every
important methodological detail so review can be more thorough. But I don't
think you'll improve the overall quality of peer review without changing the
current "fine, I'll review it in between writing three grant proposals and
grading 150 exams" voluntary review culture.)

~~~
Kalium
When collaborating, people become attached to their work. This inevitably
clouds their judgment when it comes time to review, especially given that
financial ties are often involved.

External peer review reduces those problems.

------
benbreen
I just submitted my first peer review for a journal and it was obvious to me
within the first page who had written it. Not necessarily a condemnation of
the process (I think it remains the best we have available) but now that I see
behind the curtain a bit, it's striking to me how peer review is at best a
semi-blind process. (Likewise, looking back at peer reviewers' comments on my
own work, I can now guess who most of them were with the benefit of a few more
years' experience in the field, mainly because everyone tends to cite their
friends.)

The biggest current problem seems to me to be the ridiculous time lag between
submission and acceptance or rejection (up to a year in many cases). If there
were more of a financial incentive, I can imagine someone making a peer review
platform that circumvents having to add track-changes comments in Word and
manage everything via email (which is the norm in my field). The thing I'm
imagining would be both a document sharing and editing platform and a CMS that
allows journal editors to manage scheduling and assigning tasks, sort of like
a customized version of Asana wedded with Google Docs. Anyone know if
something like this already exists? I'm coming from a social sciences and
humanities background so I don't know what the norm is in STEM fields.

------
curo
One thing I noticed is that there's a lit review at the beginning of most
research papers. That's a lot of work just to get people up to speed.

Though not a complete solution, I envision a github-like environment for the
research, with links to wikis and other repos for the lit.

People could openly submit "bugs" with methodology or conflicts between
studies (as a replacement or supplement for peer review). You'd be encouraged
as an author to publish research that was well followed/starred, but most
importantly you'd be flagged if you had a bunch of open bugs.

(Bug resolution then would have to be approved not just by the researcher, but
perhaps by a Stack Overflow-like "close" voting system.)
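A minimal sketch of the data model such a system might use (all names and the close-vote threshold are hypothetical, loosely mirroring Stack Overflow's close-voting):

```python
from dataclasses import dataclass

CLOSE_VOTES_NEEDED = 5  # hypothetical community threshold


@dataclass
class MethodologyBug:
    """An openly submitted issue against a published study."""
    paper_id: str
    description: str
    close_votes: int = 0
    resolved_by_author: bool = False

    @property
    def open(self) -> bool:
        # A bug closes only when the author has addressed it AND
        # the community close-vote threshold is met, so the
        # researcher alone can't dismiss criticism.
        return not (self.resolved_by_author
                    and self.close_votes >= CLOSE_VOTES_NEEDED)


def flagged_papers(bugs):
    """Papers with any open bug get flagged, per the proposal above."""
    return {b.paper_id for b in bugs if b.open}
```

A paper with an unresolved bug (or one the author "resolved" without community agreement) would stay flagged until both conditions are satisfied.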

~~~
irremediable
This is a nifty idea. The only problem I see with it is that it makes papers
far less self-contained. IMO some of the best-written papers let you
understand something of their impact even if you're not too familiar with the
field. Taking away the lit review hurts that.

You might argue that a single collaborative lit review would do -- so readers
could just print out that plus the paper. But often that lit review section is
tailored to your specific goals and audience. To do it well, I think you'd end
up having hundreds of such lit reviews for a given topic. In which case, why
even bother?

(I didn't mean to sound so harsh, by the way! I do quite like the idea of
bringing some of software's collaborative tools over to research.)

~~~
curo
Yea that's true. And I even enjoy that self-contained aspect of papers,
especially when I'm reading up on an unfamiliar field. But perhaps they could
be more high-level.

Some authors go overboard -- say, by explaining philosophy-of-mind concepts in
an addiction paper.

------
b_emery
> “Authors still create journals in prose-style — do we really need to produce
> all that text?”

I used to think that the answer was no, but recently I've become convinced
that the answer is yes, we really do need to produce all that text. First,
'all that text' is really a misnomer; scientific papers are pretty dense (cf.
any letter to Nature). Second, the audience here is not a machine, it's a
human, and humans are inherently biased toward stories. The best papers have a
story (in the journalism sense, not the fiction sense), and they use this story
to make a convincing argument. Without the story, you have data, and data is
not an argument. As much as I'd like to believe that the data would be enough,
we have to remember that we're not producing arguments in a vacuum. The
audience comes to the table with a lot of preexisting ideas, biases and
beliefs, and unwinding these is non-trivial. Think about evolution and On the
Origin of Species. Looking at the data now, it seems so obvious, but even the
data along with stacks of arguments was not enough to convince people for a
very long time. It's work to re-write a person's mental models - all that text
is needed to get the information in there.

------
japhyr
I hope people who care about this issue are aware of the Open Science
Framework [0], which is being developed by the Center for Open Science [1].
It's a great project that's helping to open up the entire process of doing
science.

[0] [https://osf.io](https://osf.io)

[1] [https://centerforopenscience.org](https://centerforopenscience.org)

------
JimboOmega
I work for a company that is trying to change this - academia.edu. We're all
about open access to the world's academic research. Peer review is definitely
something we're working on!

We're also hiring in SF, if this interests you (jon at academia.edu)

~~~
throwaway2934
My privacy expectations for peer review are quite high. Perhaps higher than in
any other case.

However, I expect the entire world to know any time I accidentally follow a
link to academia.edu. And I especially expect that academia.edu will try their
damnedest to figure out who I am and sell that information to someone. Put
simply, academia.edu feels like linkedin.com.

This juxtaposition is going to be a huge issue for academia.edu if they try to
get into the peer review software game.

I would sooner not participate on an editorial board/program committee than
use academia.edu to submit a blind review. I simply would not trust a platform
associated with that site or its parent company to take privacy seriously.

I suspect this attitude is fairly common. Linkedin/Facebook/Academia.edu exist
solely because privacy isn't perceived as more valuable than the services
provided. But every academic I know is extremely serious about protecting the
anonymity of reviewers.

If you want such a product to succeed, you'll need an extremely strong pro-
privacy pitch (backed up by The Truth) along the lines of "your data is used
for exactly nothing other than displaying peer review results for this
particular paper to the authors and the other board/committee members".

Especially since most academics' perception of academia.edu is "that site that
lets me see who reads my papers."

~~~
hoopd
1) Have a good idea X. 2) Reach out for funding and technical knowledge. 3)
Investors and new co-founders want "Facebook for X". 4) The good idea is now a
social network embedded in X's problem space.

------
taksintik
I think the Internet has done that already. Peer review now represents a
much larger critical constituency.

