
Academic publishing as ruinous competition: Is there a way out? - randomwalker
http://33bits.org/2013/07/15/academic-publishing-as-ruinous-competition-is-there-a-way-out/
======
frossie
There's work being done in this area; whether it is going to be ultimately
successful I don't know.

In terms of generating metrics better than journal publications and citations,
there are projects like this:

[http://impactstory.org/](http://impactstory.org/)

(no connection, I just heard of them in this context). These kinds of metrics
would have addressed the OP's "blogger" problem.

Another strand, which at least my small scientific field is pursuing, is to
facilitate citations of datasets, or dataset+software combinations (think of
it as making a directory structure citeable). This (in theory - it's a recent
development) rewards people who take the time to clean up and give open
access to their datasets, instead of doing the minimum to extract a couple of
papers out of it and move on.

------
infinity0
I think academia has a lot to learn from internet-based free open source
projects.

I find it much easier to learn from RFCs, git-hosted documentation projects,
and wikis. (Going from more professional/less open to less professional/more
open - not that these are necessarily exclusive, but it seems costly to have a
system that satisfies both.)

For documentation, I really love sphinx[1] and rst[2]. I know some people love
markdown, and it's good for light manuals/READMEs, but IMO it's not adequate
for full documentation.

[1] [http://sphinx-doc.org/](http://sphinx-doc.org/)

[2] [http://docutils.sourceforge.net/docs/user/rst/quickref.html](http://docutils.sourceforge.net/docs/user/rst/quickref.html)

~~~
javert
I really disagree. Writing actual papers and having them peer reviewed is a
useful model for actually doing science, and you can learn a ton from reading
good academic papers.

I don't know why you bring up documentation; the current tools used to write
academic papers seem fine.

As the OP states, the problem is the incentive system, not the particular set
of tools and practices.

(However, I don't think his suggestions are fundamental enough to change
things, and I am skeptical about his claimed expertise on the stock market and
copyright.)

~~~
jbooth
Writing down your thoughts and having them read by people is indeed pretty
critical for science.

What does the current publishing model do for us that a blog wouldn't? What
does academia-speak accomplish? From the few CS papers I've read, generally
speaking the harder the paper is to understand, the less profound the insights
contained. What does the formalism get us, besides formalism itself?

~~~
glesica
Peer review doesn't demand arcane terminology or difficult-to-understand
writing. Ideally, peer review is just someone writing down some thoughts and
other people who have some expertise in the field providing comments on those
thoughts before they are published.

The problem with a blog is that there is no way to screen for quality or
obvious mistakes. Of course the current method of doing this might be
imperfect, or even completely broken depending on the field, but the solution
isn't to just tell academics to start blogs and call it a day. It's a more
complex problem than that.

~~~
jbooth
I guess I'm not seeing a categorical difference. Someone posts a blog post to
HN that's full of garbage and bad numbers, and it gets debunked. We don't have
journal subscriptions or piles of bureaucratic BS, just some people who are
knowledgeable in their area talking shop.

I don't see where all the ceremony creates enough value to pay for itself.

~~~
glesica
That's true, and it works well for subjective or popular topics. However, it
works much less well for highly specialized, technical topics. I have noticed
that quite a few very technical or highly specialized articles end up with
just a small handful of comments. This is a problem in research because just
about everything is technical and specialized.

Another problem is recruiting enough people to actually review all the
research produced. HN has a handful of links per day. There are tens of
thousands of researchers around the world doing work. Peer reviewers aren't
generally compensated monetarily, but they have a strong moral incentive to
help out and review articles, especially if they themselves have published in
the journal in question.

Again, I don't want to come across as defending the entire system. There are
certainly things to be improved. All I'm saying is that I don't think the
answer is to just tell everyone to start a blog. It's an appealing idea from a
populist, egalitarian, and perhaps "hacker" perspective (just get the
information out there!), but the current system provides a bit more than a
simple communications medium.

------
abecedarius
I would recommend the book mentioned at the bottom as well --
[http://www.amazon.com/Reinventing-Discovery-The-Networked-Science/dp/0691148902](http://www.amazon.com/Reinventing-Discovery-The-Networked-Science/dp/0691148902)

------
ekm2
We need an HN for academic research papers: karma counts as the citation
index, or something similar.

~~~
betterunix
I see a bigger problem: the fact that things like citation counts are being
used as a metric. We should be judging researchers by whether or not they are
solving problems (or whether they are helping other researchers solve problems
-- like developing new research tools and techniques, or publishing negative
results that tell other researchers what to avoid). It makes no difference
whether a paper is cited one time or a hundred times. It makes no difference
if a researcher publishes one big paper or ten smaller papers. What matters is
whether or not progress is being made.

~~~
czr80
But this avoids the central issue - how can we know if a paper helps people
make progress on solving problems? Citation counts are just a proxy for
exactly that - to move away from them you have to come up with a better proxy
or evaluation method.

~~~
testbro
In the UK, universities' research output is assessed by HEFCE using the
"Research Excellence Framework" (which replaced the Research Assessment
Exercise) [1]. Sadly, it suffers the same problem of being just another metric
[2]. It's a time sink for academics and it fosters a cutthroat atmosphere in
the run-up. I've heard some unconfirmed rumours that academics not submitting
case studies to the REF will have their contracts re-assessed.

I think this exposes the bigger problem of metrics in general. It's hard to
determine how much impact research has, particularly in the short term. I'd be
interested to see what the outcome of the REF is, but I remain sceptical and
apathetic towards it, as does everyone else I've spoken to.

[1] : [http://www.ref.ac.uk/](http://www.ref.ac.uk/)

[2] : I get the impression it does anyway - I'm not involved in it this round.

