I am an academic mathematician (albeit one with significantly less clout than Gowers). A few comments on both the article and some of the other comments:
(1) I wish I could upvote this article 100 times. I am in complete agreement with Gowers. I published a couple of articles in Elsevier journals in grad school, because my advisor thought it was important for getting my first job, but I'm pretty confident I can avoid doing so from now on.
(2) There are free online-only journals, e.g. http://www.integers-ejcnt.org/, but unfortunately they are not very prestigious. I don't know what can be done to remedy this.
(3) One commenter suggested that peer reviewers should be compensated, but I disagree. First of all, you don't really "sign up" to do it; typically editors pick someone they know and just ask them to referee. I do a fair bit of this. It is not an unproductive use of time, as keeping up with research literature and thinking critically about it is already part of our job.
In addition, we are paid in a somewhat unusual way; we get flat salaries (plus grants) and are expected to do "service" in addition to research and teaching. If refereeing paid substantial money, where other informal methods of participating in the mathematical community do not, I think this would lead to an odd system of incentives. For example, for me a referee report might well take anywhere between five minutes and twenty hours. What amount of compensation would be fair? And would there be pressure for more favorable reviews?
(Note that "informal methods of participating in the mathematical community do not pay" is only mostly true.)
I'm an academic scientist. There are still huge problems with Elsevier in science, but there has been a huge amount of progress in making open-access journals more prestigious, which may work in mathematics too. My favorite example is the collection of PLoS (Public Library of Science) journals. PLoS Biology was started in 2003, and within 6 years it became the highest-ranked biology journal according to the traditional (though perhaps highly flawed) impact-factor rankings. This is significant because there's more money invested in biology/medicine research than in all other disciplines combined, so winning in such a monetized market represents a great step forward.
The best part of the PLoS journals is that anyone anywhere can read them, and you can send them to your colleagues, since they're licensed under Creative Commons. They also tend to have the best user interface of any journal's online access site.
There's a downside in that PLoS journals tend to ask for contributions from publishing authors to help cover costs. This is less of an issue for, say, a large biology lab with millions of dollars in funding, than it is for a theoretical physicist, or in your case, a mathematician. I think it would be worthwhile to add a part to grants asking for money for publishing in open-access journals. I don't know how grants work in mathematics, but in biology they're pretty flexible.
As another data point, in my area, artificial intelligence, the open-access journals have overtaken the closed-access journals and don't charge any publishing fees. They generally do this by, quite literally, having close to zero expenses, and covering the rest from donations/sponsors.
For example, the Journal of Machine Learning Research is now the most prestigious ML journal, and it has a $0 budget: it's hosted online on donated servers at MIT (http://jmlr.csail.mit.edu/), authors are expected to deliver publication-ready PDFs via LaTeX, administrative work is done by volunteer editors and students, and archival copies are produced print-on-demand by a third-party publisher.
The Journal of Artificial Intelligence Research, the top-ranked journal in general AI, runs a somewhat more institutionalized operation (it actually has a few paid staff), but pays for it out of sponsors rather than author fees: http://www.jair.org/
I confess I'm not familiar enough with academic publishing to know why there's such a large difference in cost structures, and why other areas can't do AI-style free-and-open-access journals.
+1, I'm glad to hear that the rest of science is doing better than us!
Mathematics grants are relatively small, on the order of $20-60k a year. But senior mathematicians (of which Gowers is most certainly one) typically have larger grants. In mathematics, $1-3k (the publishing fee for PLoS) seems like a rather high barrier -- but I hope the coming years prove me wrong here!
The fee does seem high for individual academics, but I bet the costs of running a journal are low compared to what libraries currently pay for getting them. Moreover, storing information and making it available is a library's mission. Therefore, I hope that libraries, rather than individual researchers, start to host the next generation of open access journals.
Libraries also have infrastructure in place - they already host their own websites, so running a journal just requires adding another page to it.
The most annoying part would be coordinating the different reviewers, but I think that job would be done by the editorial boards, composed of researchers in the specific fields.
"There's a downside in that PLoS journals tend to ask for contributions from publishing authors to help cover costs."
If you look at libraries and academics (and their expenses) under the umbrella of the university as a whole, I have to wonder (and have no way of knowing) what the total costs to the university would be, in comparing:
- The current model where the library (university) pays large money for publications they don't want.
- The possible model where the journals are free to the world, and the university pays the donation or fee for publication of specific papers.
Technically, what is required to overtake Elsevier is to retain the same level of editorial oversight, and then to provide more quality signals.
Sadly, none of the business roles of Elsevier can be replaced by an academic, whose schedule is busy enough.
What I'd like to see is more "republishing", like "reblogging" where instead of a journal being a one-shot affair, it is gradually cited up, and then eventually meets a threshold to be republished in a more important journal.
> There are free online-only journals, e.g. http://www.integers-ejcnt.org/, unfortunately they are not very prestigious. I don't know what can be done to remedy this.
Maybe some of the top tier folks who hate this can band together to publish their own research under some prestigious journal, then expand to include other worthy research and create an open and free rival?
The currently pending Research Works Act is a SOPA-like debacle that seeks to impede the free flow of scientific information to a degree previously unheard of. Thank you for lending your voice to stop this counter-productive madness.
Summary: Gowers outlines the extraordinarily oppressive business practices of academic publisher Elsevier, explains why they are able to continue to do so in spite of widespread anger amongst the community (collective action problem), and goes on to explain how we might be able to solve this problem by publicizing the actions of people who've taken a stand.
This is quite important. The main lock-in effect of an Elsevier journal is the impact factor. Researchers' careers are made on the regard for their papers, and the most visible component of that is the citation rate. The academics who can afford to refuse to publish in Elsevier journals are the well-known and well-regarded ones. So getting the top academics - such as Gowers - to publicly disavow Elsevier is the first step.
Considering the entire system of journals and papers is about reputation rather than profit (from what I understand, nobody in academia gets money from the publishing process), it's a prime candidate for disruption. If a small group of universities started publishing all their papers on an official website (maybe with an opportune system of ranking, to somehow reflect the quality of the reviewing process and make it really equivalent to traditional journal publishing), then the incentives to publish in an Elsevier journal would disappear. The system could then grow as more universities join.
I’m surprised nobody has done it yet, there must be some stumbling block I’m not aware of.
A record of publishing in prestigious journals (many of which are run by the major publishers) is generally good for tenure review and grant proposals. Starting a new publication is challenging because you have to attract the best quality work in order to gain prestige.
The current publishing model is a frequent topic of conversation among my colleagues. We share and discuss papers (formerly on Google Reader, now using substitutes like G+) from many sources. I would like to see http://arXiv.org acquire a public review process.
That's the main problem I see, really: buy-in from high-profile institutions. If a decent number of them started to publicize that they see papers published with New-Uber-Journal-System.XYZ as "preferred" over traditional papers, then people would queue up to publish there, forcing others to follow suit or be seen as "2nd rate". To do that, though, NUJS had better have a bombproof method of producing quality reviewing (high-profile reviewers, a system of incentives targeted at quality rather than quantity, etc.).
You can do that de facto in smallish areas through "journal revolts", which have happened a few times, where the senior people of the field all band together to endorse/run a new open-access journal, and basically say, "as far as we're concerned, this is now the top place to publish". For example, when nearly all the senior editors of the journal Machine Learning resigned to form the open-access Journal of Machine Learning Research (http://www.sigir.org/forum/F2001/sigirFall01Letters.html), that also sent a pretty direct signal that JMLR was the new place to publish, at least in these senior researchers' opinion.
I find it baffling that the academics who peer review journals aren't well compensated monetarily for their efforts. Peer review is the only value added to the journals, especially with websites like arXiv around. As I understand it, academics sign up to be peer reviewers because it adds prestige to their careers.
Two things need to happen to change academic publishing:
1. Being a peer reviewer for a corrupt journal needs to be viewed not as a feather in an academic's cap, but as a contribution to a corrupt system. Basically, reviewing for Elsevier journals should hurt your career, not help it.
2. A new system of publishing needs to arise based on well-compensated (and hopefully more effective) peer review and cheap access.
Start-ups can address #2, but you will be dependent on high-profile academics like Gowers constantly speaking out against the old system.
Perhaps the stumbling block is the personalities of the kind of people who want to work on it. They tend to be open, generous, liberal, accepting. Perhaps journals become prestigious only when run by high-handed, arrogant, rejecting editors.
Prestige is not something you decide you want in the morning and have by evening. A lot of the most prestigious universities are very old, and so it is with the most prestigious journals.
The whole academic culture needs to be changed. People like Gowers who already earned their stripes can begin to change attitudes. 99% of mathematicians can do nothing except hope they can get a permanent job one day. And even then their institutions are going to force them to undergo yearly evaluations where they have to give an account of what papers they wrote that year (too bad if you are working on a multi-year breakthrough) and whether the papers were published in journals they deemed respectable (universities are also competing with each other for prestige).
Another problem is that too many mathematicians are almost religiously averse to speaking out. Any kind of rocking the boat is likely to affect your karma (chances of getting tenure, promotion, or funding).
Okay, so where's the Reddit/Digg for scientific research papers and articles? There could be some sort of "dual voting system" where against each article two vote counts are maintained -- one set of votes by the editorial team ("peer review") and the other set of votes by the community at large.
The editorial team could be selected through a semi-democratic process, if required.
Is it that tough? How many big names in science are required to pull this through? The technology is dead-simple - the main problem is to cross the critical threshold of number of articles submitted and number of editors.
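The dual-vote idea above is technically trivial, which supports the point that the threshold is social rather than technical. Here is a minimal Python sketch of an article record keeping the two tallies separate; all names and the weighting are hypothetical, since the comment only proposes maintaining two vote counts per article:

```python
from dataclasses import dataclass

@dataclass
class Article:
    """One submission with the two proposed vote tallies kept separate."""
    title: str
    editorial_votes: int = 0   # votes cast by the vetted editorial team ("peer review")
    community_votes: int = 0   # votes cast by the community at large

    def score(self, editorial_weight: float = 10.0) -> float:
        # One possible combined ranking: editorial votes count more.
        # The weight is an arbitrary illustrative choice.
        return editorial_weight * self.editorial_votes + self.community_votes

a = Article("Open peer review at scale")
a.editorial_votes += 2
a.community_votes += 30
print(a.score())  # 10.0 * 2 + 30 = 50.0
```

How to display or combine the two counts (here a weighted sum) is exactly the kind of design choice such a site would need its editorial process to settle.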
The technology isn't the hurdle here, it's the prestige. The academic publishing world is all about who you are, what institutes you belong to, etc. To put it bluntly, a Reddit-like journal site just wouldn't have enough "cred" to attract decent content.
Indeed, get someone like MIT to do an OpenJournal to complement the rest of their OpenX initiatives and you might start getting buy-in.
The problem is that the system is broken - publish n papers => you're better than before (ignoring the content of the papers of course). Academics are encouraged to publish (and re-publish older stuff with a slight tweak) to meet publishing 'targets' handed down by government.
This model feeds into the closed publishing of Elsevier and the like, as academics are forced to compete to earn their stripes - so now we have a model that encourages republishing crap and then locking it behind a paywall. But when I ask PhDs if they think this is fine, they don't see the problem :(
There's http://www.peerageofscience.org/ which is trying to fix the problem of peer-review quality and speed: instead of submitting to a sequence of journals and waiting for reviews from each one before you can submit to the next one, you would first get your paper properly reviewed and then offer it to journal editors. There would be some kind of peer control of review quality, so reviewers would have an incentive to do a good job.
That company is explicitly not trying to do anything about evil publishing houses, but if it takes off, it will move one part of the publishing process out of the control of publishers, into the hands of scientists. If reviews through this system get to be known for high quality, then maybe a good review from them for your open-access paper will be more prestigious than getting it published in an Elsevier journal.
You could reference another paper that discusses it (assuming there is one), perhaps one that references the paper in question, ideally by the original authors. It's common for authors to write a series of papers on some topic.
This doesn't give the ideal credit, but it addresses the concern you raise.
In some cases that can be true, but with the current shotgun approach to academic publishing, the authors have often republished variations on a paper, covering slightly different aspects with different framing, in 3 or 4 different venues, so sometimes there'd be scope to choose to cite a non-Elsevier one.
OK. This means that even more authors might republish the same research in different venues - just to avoid having their research ignored because of their choice of journal. Seems like another danger of an Elsevier boycott to me.
This whole situation is doubly absurd when you consider clinical trials, where patients have risked their health or died to prove that certain treatments work or do not work. That data then gets locked down by these shameless monopolising profiteers.
I think it is completely unethical that such a patient cannot access the final report about the trial they participated in without paying $31 to Elsevier, or just settling for an abstract.
There are web apps that do this already. The process itself is straightforward (at least for mathematics). An editor-in-chief and a group of associate editors (who are always top academics in related fields) accept publications, sometimes to their email addresses, other times through some automated system. The papers are then assigned to the appropriate associate editor who is handling that particular subfield. The paper is then sent to a referee (sometimes two or three, especially if the first review is not satisfactory). The referee will be an expert in the field capable of evaluating the work. The referee (after some months, or more) will return a written report. Part of the report will be for the editor, the other part for the author of the paper. Some journals also request that the reviewer fill out some additional details, such as scores on how important, correct, appropriate the paper is, etc. The decision is passed on to the editor and if the editor agrees the author is informed of the outcome, along with perhaps an edited version of the author comments. They may be required to amend some minor problems in the paper, rework it substantially and resubmit or it may just be rejected.
Publication itself can be complex. It often involves journal style files, professional typesetters, sending page proofs to the authors for approval, getting transfer of copyright forms signed, dealing with diagrams and so on. This side of things is the side that fewer mathematicians actually care about. Much of it is not relevant for web-only publications.
> high quality work could be measured by community support.
Absolutely not. Assessing the quality and correctness of groundbreaking research requires a large amount of both talent and effort; a reddit or HN style voting system would swamp the signal with uninformed noise.
Absolutely agree. It seems to be really hard even for experienced researchers to assess which papers will be seen as important in the future.
However, maybe something like PageRank might work out well. The more citations your papers get, the more weight your reviews will have. Certainly, there are problems, like people learning how to game the ranking system.
On the other hand, the current system is certainly worse in this regard.
That would certainly help. But more than anything else, I think what we need (at least in computer science) is a mechanism to penalize people who publish the exact same research at half a dozen conferences.
Running pagerank on authors with an unnormalized weighting function of "how many papers does author X have which cite something written by author Y" (so that if author Y publishes the same paper multiple times it doesn't have any effect even if both copies get cited, while author X publishing the same paper several times downweights all of his outgoing links equally) could work, though.
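As a rough illustration of the scheme in the comment above, here is a toy power-iteration PageRank over authors (a hypothetical sketch, not a worked-out ranking system). The edge weight from author X to author Y counts how many of X's papers cite something by Y, and the per-author normalization inside the iteration is what makes X's duplicate papers scale all outgoing weights equally, so they cancel:

```python
def author_pagerank(citations, damping=0.85, iters=50):
    """Toy PageRank over an author citation graph.

    `citations` maps a citing author X to {cited author Y: number of
    X's papers that cite *something* written by Y}.  Counting citing
    papers rather than individual references means a duplicated paper
    by Y earns no extra weight, and a duplicated paper by X inflates
    all of X's outgoing weights equally, which the per-author
    normalization below cancels out.
    """
    authors = set(citations)
    for targets in citations.values():
        authors.update(targets)
    n = len(authors)
    rank = {a: 1.0 / n for a in authors}
    for _ in range(iters):
        new = {a: (1 - damping) / n for a in authors}
        for x, targets in citations.items():
            total = sum(targets.values())
            if not total:
                continue
            for y, w in targets.items():
                # Normalize X's outgoing weights into probabilities.
                new[y] += damping * rank[x] * (w / total)
        # Spread the rank of authors with no outgoing citations evenly.
        dangling = sum(rank[a] for a in authors if not citations.get(a))
        for a in authors:
            new[a] += damping * dangling / n
        rank = new
    return rank

ranks = author_pagerank({"alice": {"carol": 3},
                         "bob": {"carol": 1, "alice": 1}})
print(ranks)  # carol, cited by both, ends up ranked highest
```

Gaming this (citation rings, self-citation farms) remains the obvious open problem the parent comment mentions.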
Don't search engines already penalize duplicate content?
Maybe it would suffice to 'just' put all papers online, convert all references to hyperlinks, and let Google sort out what is important and what not.
Actually, (at least in computer science) I don't think that stretching one's research across several publications is being done widely - at least when looking at 'top' conferences.