> someone has to bear the cost of the review process
Maybe you are aware of it, but this argument is bogus in any case, because reviewing is unpaid labor. You can make an argument that someone has to organize the review process, but publishers usually don't pay for that process (except paying token amounts to an editor for some journals).
The cost of the review process in this particular two sided marketplace (which publishing is - how do you decide which journal to send your paper to?) is not just the unpaid labor of the reviewers, but rather the establishment of the credentials and authority which the journals build up over time, which basically drove down the cost of reviewing to be much smaller than if it were simply a free for all.
To see this in action: many OSS software projects have trouble attracting quality contributions because they are simply not well known. To the community as a whole, the cost of soliciting contributions is not merely the difficulty of modifying the software, but also the cost of promoting the project to the point where the only remaining costs are those of making said modifications.
I could have worded it better (maybe cost of the review + credentialing process), but I think this is going to be the pain point in open access, just like even folks here on HN say that OSS sometimes resembles the wild west.
I think you'd be amazed how small most of the scientific communities in certain areas of research are. If you have the names of a few well-known researchers on the editorial board or in the list of reviewers of an open-access/electronic journal, this will instantly establish its credentials and authority. After all, researchers care mostly about the quality of the work and the results, and not so much about the branding.
To be honest, I don't really know why this hasn't picked up yet. Maybe the scientific community is not very easy to get organized, as researchers are pretty busy with research, teaching, writing grant proposals, serving on committees, acting as reviewers, etc. There have been some attempts in CS with varying success, though.
Another factor may be that most research (at least in CS) gets presented at conferences, gets published in conference proceedings first, and then journal publications are mostly an afterthought, and in many cases are skipped entirely. Organizing a conference with open-access proceedings may not be as cheap and easy as setting up a web-site and getting a few well-known names to serve as reviewers...
Plenty of university rankings are based on research output, which in turn is judged by where the publications appear, often using journal and conference rankings that are rarely updated.
There's not much credential building involved. I was getting requests to review papers shortly after my first one was published, and honestly, it wasn't a particularly impressive paper that could have made me a name.
Publishers just coast on their current name, and are in a winner takes all market where the best authors will try to publish at the most famous journals. They are purely rent seekers, and won't go out without some external intervention.
I believe we both agree. However, when evaluating the costs of the mediating entities such as Elsevier the assumption is that you will be starting from scratch if those entities were to disappear tomorrow, and you will not simply be usurping their brand names/prestige/authority etc. The publishers may be coasting today on their current names - but note the comment in an adjacent thread [1] which talks about new publications and their general challenges - there was a real effort at some point to build up a name they can coast on, and that cost was probably not trivial.
In fact, even though I started the Elsevier bashing in this thread :-) - until we find out what the replacement system looks like I think we may not even completely see all the costs involved. I don't think the existence of the internet is suddenly going to turn the research publication process into a very resource efficient system.
The rent they are coasting on is dictated by government and NGO bureaucracy. Until universities and grant distributors stop judging researchers mainly by how much they publish in Elsevier journals, no newer publication will become prestigious. In fact, a few years ago I doubted newer publications would ever get as far as they have.
Besides, research publication does have high costs. But nearly all of that cost is not borne by the publishers.
> The cost of the review process in this particular two sided marketplace (which publishing is - how do you decide which journal to send your paper to?) is not just the unpaid labor of the reviewers, but rather the establishment of the credentials and authority which the journals build up over time, which basically drove down the cost of reviewing to be much smaller than if it were simply a free for all.
This is a valid point, but why should the commercial publisher reap all the benefit of this authority and brand value? The brand value was built by researchers publishing high quality work and doing high quality reviews.
Further, why couldn't just the same thing happen with non-commercial free open access journals?
Right now, I am outside the academic system, so obviously I only speak for myself, plus based on my knowledge from many years ago. Maybe the whole process has changed drastically - so please correct me if things have changed.
Even today, every single publication already has multiple choices: a) be uploaded as an unreviewed PDF on the authors' websites - this grants the paper absolutely no credibility (unless it happens to be written by a respected authority), and you get no benefit as a researcher; b) be sent for review and become subject to copyright, though in most cases an unformatted version (i.e. a preprint) can be uploaded on the website, with the express understanding that no one will cite that version because its content may change before the final print version; or c) be sent for 'stamping' as the authorized final version, at which point the trouble usually starts.
As you progress along each step, you are basically getting additional benefits - they are somewhat intangible, but for researchers these benefits do translate directly into the currency that they care about - acknowledgment of their work as a part of the citation graph (which then translates into career benefits). So no, the commercial publisher does not "reap all the benefit of this authority and brand value". The commercial publisher does reap all the tangible monetary benefits of course.
Very few authors would willingly submit themselves to the painful process of paper review if they felt someone else was getting ALL the benefits.
Your question could be phrased as whether the publishers get an unfair share of the benefits - yes they do. And do they use some strong arm tactics to preserve their ability to get gobs of money for effectively very little work - yes they do.
And "couldn't just the same thing happen with non-commercial free open access journals?" I think it will, I just feel the road will be quite bumpy until we get there.
I will assume for a bit that the vast majority of participants in the review system are academics. So, one of the main reasons the publishing process moves at glacial speeds is that many reviewers are time-constrained academics. Nothing in the open access process will change that fact, but many people will be affected by the shift to open access - suddenly you don't even have an organization that can be held nominally accountable for speeding it up. Academics very rarely like being told what to do, and at the slightest sign of dissent against their handling of the review process they are likely to stop contributing their efforts towards organizing it, because the rewards are not very tangible.
The OSS ecosystem faces similar issues and still gets a lot of work done at impressive speed - but remember that usually the top contributors in OSS have immediate positive feedback in terms of the adoption of the software (at least) but often much more tangible benefits such as acknowledgement of their efforts in public and VISIBLE forums, sometimes even employment. Very little of this is true for participants in the review process.
Yes, improvements in search will help, but the costs won't go down that easily. In Google's case, the initial seeding of PageRank was quite manual. And then think of the cost of upkeep - people try to game search engines continuously, Google has to update its algorithms on a consistent basis, content farms have profited enormously at various points in time and had to be explicitly engineered against, and finally Google guards the actual search algorithm closely.
In the research domain, solving these problems would actually be even harder (in my view). How do you know if you found the best paper, or just the paper which is the best match for your keywords? At least Google has a feedback mechanism - someone stays a long time on a given webpage if it is very relevant to what they are looking for. Obviously that is not a good metric for papers - it might happen on a research paper simply because it is too obscure :-)
Well, you could always look at citations. And I don't think they are as easy to game as links between general websites, because in research at least the authors publish using their own names (and you don't want to get a bad rep for gaming the system).
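To make the idea concrete: a citation-based ranking like this can be sketched as a toy PageRank run over the citation graph, where a citation passes authority from the citing paper to the cited one. This is just a minimal illustration under my own assumptions, not how Google Scholar or any real service actually ranks papers:

```python
def citation_rank(graph, damping=0.85, iters=50):
    """Toy PageRank over a citation graph.

    `graph` maps each paper to the list of papers it cites.
    A citation transfers a share of the citing paper's rank
    to each paper it cites.
    """
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, cited in graph.items():
            if cited:
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                # papers that cite nothing spread their rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical three-paper graph: C cites A and B, B cites A.
ranks = citation_rank({"A": [], "B": ["A"], "C": ["A", "B"]})
# The most-cited paper, A, ends up with the highest rank.
```

Note that this still has the rich-get-richer property discussed elsewhere in the thread: rank flows toward already well-cited papers, so it measures visibility more than it measures quality.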
With the disclaimer that this is an anecdote and not data:
You would be surprised at how easily a winners win situation happens in research. The citation based search would reinforce it. And while the gaming may not be search engine focused, I think getting the best papers via algorithmic methods can omit the crown jewels through less insidious (but quite common) issues such as citation graphs which orient in the direction of the flow of funding.
But you say, maybe winners win for a reason. This is only personal experience, but the single most profound, creative paper I ever read during my years of research was written by a lone wolf (i.e. no collaborators) in a somewhat unknown institution who turned out to be a sort of one hit wonder. This person's h-index may very well have been exactly 1 at that time. I honestly think algorithmic methods of searching for literature would have skipped past that paper.
You could make the case, though, that a thorough literature survey should be as exhaustive as possible and not omit ANYTHING. Well, very few people are that thorough - and even when they are, there is a tendency of reading papers from the most popular authors first. I am just glad I did my work before the days of Google Scholar becoming the de facto starting point, and I did not have the bias of a pre-ranked list.
I think that is the actual fear: I was able to find this crown jewel precisely because the publishing process at that period was more centralized (although quite likely also less competitive), and that paper was eventually published at a pre-eminent conference - which is how it came to my attention. With a search-engine driven open access, I think this lone wolf would have had a harder time getting that fantastic piece of work in front of a big audience because many of the common signals would have been too weak.
With all that said, when open access becomes more pervasive, great search technology will be a big part of the cost reduction and I definitely look forward to that.
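For readers unfamiliar with the h-index mentioned above: it is the largest h such that the author has at least h papers with h or more citations each. A minimal sketch of the computation, with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that at least h of the papers
    have h or more citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A "one hit wonder" with a single well-cited paper still has h = 1,
# no matter how many citations that one paper gathers.
print(h_index([42]))              # 1
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This is exactly why the metric can miss the lone-wolf case described above: one profound paper and a prolific author of middling papers sit at opposite ends of the index.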
Good point, but if you make the comparison with the web, you can find the more popular pages using a search engine, and you can find those lesser known jewels by using services like HN :)
Seriously, everyone from the reviewer to the AE to the editor is doing this for free in almost every case. Or not for free exactly, but not for money either. For academics, reputation is the coin of the realm. Academic publishers have tried (quite successfully, until the last couple of years) to corner the market on prestige. This means they can get academics to do all kind of work, without spending any actual dollars.
I submit that it is immoral to review for a for-profit journal without receiving appropriate compensation[1]. Doing so and then complaining about the extortionist behavior of the established publishers is also hypocritical and/or stupid.
Attempts to boycott a publisher or two have happened before, but they are useless. Scientists are hurting themselves by not publishing in the high ranking journals or by not reading them. Boycotting their exploitative review process, however, would hit them where it hurts and cost very little.
[1] Uncompensated review services for any non-traditional journal are equally immoral, but that's another topic.