French Universities Cancel Subscriptions to Springer Journals (the-scientist.com)
555 points by apsec112 on Apr 2, 2018 | 160 comments

The alternative business model that the world is moving to is Open Access. The difference being that instead of paying to access the journal or paper, the researcher or institution that wishes to publish pays up front to have the paper published.

Open Access has its own problems, such as predatory journals, where researchers who don't know better, or who are desperate to publish, are more or less lied to about the reach and validity of a journal. It has become an area ripe for a new kind of scammer. This has prompted efforts such as Think, Check, Submit[1].

There's also the problem of raising the funds to publish something as Open Access. It's not always the case that the researcher actually has the means to pay for it.

Nevertheless, Open Access is clearly where the research community is headed, and we're going to see steady growth in the percentage of research published as such over the next decade or so. But it does come with its own set of problems to solve.

1: https://thinkchecksubmit.org/

It is not true that all open-access venues require authors to pay article processing charges (APCs) to publish with them. For instance, in my field, https://en.wikipedia.org/wiki/Logical_Methods_in_Computer_Sc... is a reputable journal which is free to read and free to publish; it is hosted by the Épisciences platform which is managed by a French public agency (CCSD). People usually use the name "diamond open access" to refer to such journals, which I believe are the best direction for scholarly publishing. By contrast, we talk of "gold open access" to refer to the situation where traditional publishers charge APCs: as you point out, this approach has several problems, in particular the fact that the APCs are often extortionate (e.g., $2000 for a 12-page PDF provided by the authors in a completely typeset and publishable form). However it's not correct to say that open access is expensive -- that's only the case if one stays with traditional publishers.

It's also important to keep in mind that "Think, Check, Submit" is an initiative backed by traditional publishers (of the "gold open access" kind), and I think they are making this a bigger problem than it actually is. In my area, everyone knows which conferences/journals are reputable, and everyone knows that all others are essentially scams (especially if they have APCs). I have never seen anyone make the mistake of publishing valuable material with a scam publisher.

This. Besides "diamond open access", some in the open access community are trying to establish the "fair open access principles"[1]. For a journal to comply with these principles means that it is owned by the scholarly community, free for authors and readers, that authors retain copyright, and that any costs paid by the journal to a publisher are low and reasonable. We have started to set up a network of journals that follow these principles, the "Free Journal Network"[2], to promote and support each other.

Concerning other comments on funding, I wanted to add that some journals don't need much money (hosting is basically free, and the workload is shared well by the editorial board). On the other hand, funding can and does come from universities and their libraries (money not spent on subscriptions), research institutions, museums, and donations (sometimes seen as voluntary APCs by those who have the money from grants).

[1] https://www.fairopenaccess.org

[2] http://freejournals.org/

From what we've seen, the problem that Think, Check, Submit addresses is not necessarily a huge one in countries that have a strong academic tradition. However, once you start looking at places where academia is just emerging, the difference can be huge.

Also, it's absolutely worth noting, as you point out, that different fields have different processes for accomplishing the same thing. It seems that younger academic fields are a bit more independent of the traditional structures. I hear that in Computer Science, for example, conferences carry more importance than they do in some other fields.

In other fields, there's a lot of inertia to deal with.

I'm going to be honest. IMHO publishing a paper should be like doing a PR on github and there is no reason it shouldn't be.

You do your research, read the "PR" (i.e., submission) guidelines, and submit it; sure, let the committer and reviewers be anonymized until the PR has been accepted; then just pull it into a "repo", a.k.a. the journal, and be done.

How can this process cost tons of money?

For the same reason that any journal with a full time editor and not a lot of subscribers costs a lot of money per issue or per subscription:

The editor has to put in the hours to reject the crap that people submit, and those hours don't come free.

The vast majority of hours spent have nothing to do with any editorial function. Reviewers all volunteer their time to review submitted papers, so that's not a cost. Typesetting to a publishable state is not even included in some cases (see a3_nm's comment). What else is left? Filtering through already-reviewed papers to list out already-typeset documents? Why are researchers paying $2k+ for that?

I work for one of these publishers doing all of the nothing you're complaining about, so to clarify: how do you suppose we determine who reviews which paper? How do we convince them to actually write the review? How do we know the review is any good? Also, we do typeset and improve prose quality, not sure what those other slackers are doing...

> how do you suppose we determine who reviews which paper? How do we convince them to actually write the review? How do we know the review is any good?

At least in theoretical computer science these tasks are all done by the volunteer conference organizers.

It's possible that this differs in fields that primarily publish in journals, but it certainly seems like the volunteer approach works just fine.

I have also never seen a paper get improved typesetting or prose. The reviewers may reject based on bad writing, or the journal may reject due to LaTeX warnings, but never more than that.

I wonder what journal you work for?

There are many academic fields where people work long hours for little pay, but I imagine theoretical CS is much more comfortable. I'm not surprised people have time and effort to spare, but they are still getting paid, you know. It just happens that the resume byline "conference organizer" is valuable enough in that field.

Sometimes conferences pay a publishing company to do the review process for submissions, by the way. I'm not sure that your submissions are actually getting processed for free by conference volunteers.

I'd rather not disclose details of my employer, sorry.

No conference in CS pays the publishing company (usually IEEE or ACM) to pick reviewers. None whatsoever. Instead, program committee (PC) chairs are selected, and they pick PC members who are responsible for reviews. All of this occurs on a volunteer basis, no matter which institution the PC chairs/members belong to.

> just happens that the resume byline "conference organizer" is valuable enough in that field.

Being editor of a journal/PC chair is a valuable thing in any field, not just CS.

> Sometimes conferences pay a publishing company to do the review process for submissions, by the way. I'm not sure that your submissions are actually getting processed for free by conference volunteers.

Well, I'm sure, because I also review for the same conferences and journals that I submit to!

I don't think anyone will say that publishers should do their work for free or are totally useless.

The problem is with the business practices of the big publishers: asking huge amounts of money for access to journals, forcing universities to buy bundles of journals so that, to get access to interesting articles, they also have to subscribe to stuff they don't want, and so on.

What field are you in? Because at least in mine (information retrieval/machine learning) all the stuff you're talking about is done by volunteers, not paid staff.

> The vast majority of hours spent has nothing to do with any editorial functions.

Unfortunately, I have to admit that most of the money I pay for scholarly papers goes to Springer/Elsevier shareholders and not the editors who barely edit.

But the principle still remains. Reading through crap papers so you don't have to is hard work that should be paid for. (And editors have to do this before sending stuff out to reviewers, or else the reviewers drop out.)

And I would rather pay the editors out of my own pocket than expect the authors to do it, or rely on an ad-supported mechanism, because if you're not the customer, you're the product.

> The editor has to put in the hours to reject the crap that people submit

Does that process need to be any different from PRs on github? Can't all that be distributed across a team of researchers working on the topic of the journal, just as PRs on github are distributed across all developers working on the software?

> Does that process need to be any different from PRs on github? Can't all that be distributed across a team of researchers working on the topic of the journal, just as PRs on github

It's not the same as a github PR because wading through a mountain of badly written papers is work researchers do not want to do. That's what a publisher (like Elsevier) does. See the example of Elsevier employee Angelica Kerr.[1] It should be easy to see that scientists would think it's a waste of their time to do Angelica Kerr's work.

I didn't get a chance to respond to the reply by allenz that suggested that journal chiefs should hire their own editing and administration staff. Again, that "solution" makes the same mistake of thinking scientists will do something they have no interest in doing. They don't want the hassle of "contracting" an editing team.

Research publishing is not a "software problem" that's solvable by a "software platform solution." You can't solve it by applying social platform mechanics such as github PRs and/or StackOverflow/Reddit-style votes. (My parent comment tries to explain this.[2])

[1] https://news.ycombinator.com/item?id=15278621

[2] https://news.ycombinator.com/item?id=15269673

So I read your links. How is Angelica Kerr's job any different from a spam filter function? We've got production-grade techniques for handling that now, and without going into technical details, everything you mentioned as being in her set of tasks could be implemented using ML and NLP techniques, most of them not even that cutting-edge. In many ways, an algorithmic approach could do her job better, because some "crackpot" articles will not be crackpot but genius, and if you give researchers the ability to tweak the filter settings by adjusting sensitivity/specificity to their own tastes, you're mathematically guaranteed to increase the likelihood of getting the next breakthrough out there.
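To make the spam-filter analogy concrete, here's a toy sketch of the kind of pre-filter being proposed: a stdlib-only Naive Bayes classifier over submission text, with a threshold each reader could tune to trade sensitivity against specificity. The training examples and labels below are entirely invented for illustration; a real system would need a large corpus of past editorial decisions, and nothing here claims this would work at journal scale.

```python
import math
from collections import Counter

# Invented toy data standing in for past editorial decisions:
# (text, label) pairs, where "reject" means crackpot-filter rejection
# and "forward" means pass on to the editors.
TRAIN = [
    ("darwin evolution proven wrong by trump", "reject"),
    ("quantum gravity disproved with one simple trick", "reject"),
    ("free energy perpetual motion machine discovered", "reject"),
    ("bounds on mixing times of random walks on graphs", "forward"),
    ("numerical relativity simulation of binary mergers", "forward"),
    ("convergence analysis of stochastic gradient descent", "forward"),
]

def train(pairs):
    """Per-label word counts plus label totals for Naive Bayes."""
    counts = {"reject": Counter(), "forward": Counter()}
    labels = Counter()
    for text, label in pairs:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def score(text, counts, labels, label):
    """log P(label) + sum of log P(word | label), add-one smoothed."""
    vocab = {w for c in counts.values() for w in c}
    total = sum(counts[label].values())
    s = math.log(labels[label] / sum(labels.values()))
    for w in text.split():
        s += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return s

def classify(text, counts, labels, threshold=0.0):
    """Forward unless the reject-vs-forward log-odds exceed the
    threshold; raising the threshold forwards more borderline papers,
    which is the per-reader tuning knob described above."""
    odds = (score(text, counts, labels, "reject")
            - score(text, counts, labels, "forward"))
    return "reject" if odds > threshold else "forward"

counts, labels = train(TRAIN)
print(classify("evolution is wrong because trump said so", counts, labels))
print(classify("random walks on expander graphs mix rapidly", counts, labels))
```

On this toy data the first query is rejected and the second forwarded; the interesting design question is where each field would set the threshold, since a too-aggressive filter throws away exactly the unconventional papers mentioned above.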

The actual problem, which I think you implicitly identify when talking about Nature and Cell, is the "prestige" factor. As long as researchers are motivated by the prestige level of a journal, because they fear being ostracized by their peers or not getting recognition for their research - which has monetary and career costs involved - I think it will be very difficult to convince anyone to switch, regardless of how effective the platform could be.

I haven't thought about that problem before so I'm not sure how to address it as of now.

Some things may be better shown by example.

Here's the first paper off the top of today's submissions to gr-qc: https://arxiv.org/abs/1803.11224

Is it right?

How do you know?

To whom should you send it in order to get an expert review?

Which of those people has an ax to grind with one of the authors?

Of the remaining people, who do you think you could convince to invest a day or more to carefully evaluate the paper?

Okay, so you got replies back from two of your carefully selected referees (after two months of badgering one of them):

Referee A thinks the paper is great, insightful, and advances the field, but wants extensive changes.

Referee B thinks the paper is derivative drek and should be rejected because his friend C has already done something similar.

Your journal publishes twenty similar articles a week but receives three hundred a week. The careers of the authors are partially on the line, as is the prestige of the journal, the attention of the readership, and the future submission of articles by prospective authors.

Good luck training a neural net to do this well. I suspect a neural net can be trained to reject the worst crackpots, but little more, without also rejecting insightful but unique/important papers.

The problem is that most of the (substantial) work you highlight above is done by the academic community (mostly paid by the public purse), while most of the (substantial and above-market) profits go to the private publishers.

I described the job of the editor -- for the good for-profit journals, they are, to the best of my knowledge, paid:

e.g.: https://www.nature.com/nphys/about/editors https://www.nature.com/nature/about/editors

I read from the linked comments:

> In addition to basic copy-editing, she prevents crackpot articles such as "Darwin's Theory of Evolution is proven wrong by Trump" from reaching the journal editors and wasting their time. (She may inadvertently forward some bad articles but she has enough education to reject many of them outright to minimize the effort by the journal's scientists.)

Do I read it right that an Elsevier manager with no degree in the subject rejects research papers without consulting the editors? To me as a scientist, this is a big red flag and a concern about the journal's quality. While the example given here looks obvious (and is probably contrived), most real examples are less so. And while recognizing those takes little time and effort for a trained eye, the task certainly should not be left to a non-specialist in the subject. It will not save much time and will create more concerns than it resolves.

As a researcher, in a lot of cases, you want to be published in a reputable journal. Publishers have learned to charge for the reputation over the years. Reputation has a lot of inertia.

The PR model would have to answer the question of how reputation is built. A "star" system would likely be inadequate for a number of reasons. But the major obstacle would be in getting researchers on board with the new model.

Early adopters would have to withstand scoffs from their peers who haven't accepted the new model yet.

Even Open Access, which could be regarded as an incremental improvement, has years, if not decades, of grueling work behind it.

The network effect is very strong within the research community, which results in a situation which is very close to "all or nothing."

> Early adopters would have to withstand scoffs from their peers

More importantly: punishment from the people who give them money.

Also, the persons most comfortable with a radically different publishing model like PR are apt to be young researchers, but they are the ones most in need of establishing a reputation by publishing in established, prestigious journals.

Spot on. There's a huge potential for optimization in the entire field, but also an amount of inertia rarely seen elsewhere.

Sounds like a case for "shaming" those who rely on 'prestige', since that seems like the only way to break entrenched habits. I am not in research or publishing or anything else directly involved; this is just a generalization that exposure and denigration of egregious cases tend to be the most effective initiators of systemic change, given our hard-wired approval-seeking.

The useful part of the service does not cost a lot, as many examples of community-run journals show, such as http://www.mathoa.org/journal-of-algebraic-combinatorics-fli...

The reason the publishers can charge their exorbitant rates is that they have acquired the journals, leaving no freedom to switch to alternative providers offering better service at lower cost. Nor do they allow saving costs by dropping services that are not needed.

How would you incentivize peer review with your approach?

Do you think Springer pays for reviewing? No publisher does. It's all voluntary work by the same people who write the papers. It's modern-day slavery.

The parent comment implied that reviewers would be anonymized with this Github-based approach unless the PR was approved, which seems like it would lead to more acceptances so that the reviewers could get credit.

By providing opportunity to find errors in research of people you don't like!

More seriously though, peer review is usually free or almost free; reviewers do not get anything close to the ridiculous amounts of money that "open access" journals demand.

Besides, having articles on github, with all the data, so that any reader can check the results, and comment, would be much more powerful than the type of peer review we have today.

First, I'd question that you really need to incentivize it. People - myself included - are doing research in a topic because they are passionate about it and most are more than happy to look at new research and ideas about things they are passionate about. Will you really need incentivization to get people to interact with the topic they have devoted their life towards studying?

Let's say, however, that you do need incentivization. I suppose one approach would be at a legislative level, where you require all researchers to publish on a decentralized open access platform where every citizen - they are the ones funding the research, after all - can freely and uninhibitedly access the information.

> Will you really need incentivization to get people to interact with the topic they have devoted their life towards studying?

Perhaps not. But people keep telling me in this thread that reviewers work because the journals expect them to, or they will get future papers rejected/lose conference access, not for the love of the topic or because they get paid or get any kind of special visibility for it. I wonder whether an approach that promised only publicity for having done reviews, or that relied on altruism, would really produce the quality of reviews necessary.

> I suppose one approach would be at a legislative level where you require all researchers publish on a decentralized open access platform where every citizen - they are the ones funding the research after all - can freely and uninhibitedly access the information.

That's not really incentivizing the _reviewers_, though, right?

The same way we do now? There's a strong cultural expectation that one will agree to review, or have one's graduate students review, a reasonable number of papers. If you're a member of the Foo committee of the Society for Bar, you're not going to refuse to pull your weight as a reviewer for the Society for Bar's International Journal of Foo unless you want to get the cold shoulder at your next conference. It's not like anybody gets paid for this today.

Clearly, blockchain!

But no seriously, this seems like an actual technology / problem fit, although you'd need a decent amount of tooling, product design, etc for it to work - but some of that's already being explored by the "open source + blockchain" endeavors.

In basic form, I imagine you'd turn the transaction fees into review fees.

(The product design and tooling comes from how you prevent abuses to that basic format, I think)

I actually think this is a pretty good use case, since it fits the decentralized trust aspect that many common blockchain use cases lack.

> some of that's already being explored by the "open source + blockchain" endeavors.

Any links?

I work for an Open Access publisher/research discovery technology company, and we literally have this on the drawing board. We're small, but well connected in the world of publishing and research.

I for one would be also be interested in links.

One thing we don't want to do is reinvent the actual blockchain technology, so we're also interested in finding partnerships in terms of platform technology.

A proof-of-stake smart contract would be perfect for this. The currency is reputation, earned by writing well-done reviews and 'staked' behind reviews the staker believes are well-done. Papers need a certain number of well-done reviews, with the stakes on both sides accumulating behind approve/deny. You pay with the currency to have your paper reviewed, and the currency is distributed to the reviewers to allow monetization via exchanges, with a small amount withdrawn as a siphon to support the continued development of the network and marketing/public relations.
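The accounting in that proposal can be sketched off-chain in a few lines. This is only a minimal simulation of the fee/stake/siphon flows described above, not a real smart contract; the fee, network cut, and quorum values are invented for illustration, as are all the names.

```python
from dataclasses import dataclass, field

# Illustrative parameters -- the proposal above fixes no numbers.
REVIEW_FEE = 10    # tokens an author pays to have a paper reviewed
NETWORK_CUT = 0.1  # fraction siphoned off to fund the network
QUORUM = 2         # reviews needed before a paper can be decided

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)
    treasury: float = 0.0

    def submit(self, author, reviews):
        """Author pays the fee; net of the network cut it is split among
        the reviewers, and the reviewers' own staked reputation decides
        accept/reject. `reviews` is a list of (reviewer, verdict, stake)."""
        self.balances[author] = self.balances.get(author, 0) - REVIEW_FEE
        self.treasury += REVIEW_FEE * NETWORK_CUT
        share = REVIEW_FEE * (1 - NETWORK_CUT) / len(reviews)
        approve_stake = reject_stake = 0.0
        for reviewer, verdict, stake in reviews:
            self.balances[reviewer] = self.balances.get(reviewer, 0) + share
            if verdict == "approve":
                approve_stake += stake
            else:
                reject_stake += stake
        return ("accepted" if len(reviews) >= QUORUM
                and approve_stake > reject_stake else "rejected")

ledger = Ledger()
verdict = ledger.submit("alice", [("bob", "approve", 5),
                                  ("carol", "approve", 3)])
print(verdict, ledger.treasury)
```

With these numbers, alice pays 10 tokens, 1 token goes to the network treasury, bob and carol each earn 4.5 tokens, and the 8 units of approving stake accept the paper. The hard product-design problems (stake slashing for bad reviews, collusion, Sybil accounts) are exactly the abuses mentioned above and are not modeled here.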

Agreed! There's a lot in the blockchain model that seems to lend itself to this problem in particular.

Ideally, we'd also like to set up a foundation to maintain and manage the ecosystem and invite both old and new players to participate. A lot of blockchain efforts seem to be a little too much like a hope for a monopoly wrapped in a thick layer of talk about free markets and democracy.

We're actively looking for collaboration and partnerships around this. If you, or anyone, would like to get involved we'd be happy to discuss. My email is in my profile.

Neat. I'm at a crypto conference in a couple weeks, I'll see if there's any sparks on this idea.

I agree with an above post that the starting point is that you get paid to review and you pay to get reviewed, although you'd need another way to monetize the token or it's limited to this use case.

The overlap with open-source seem clear: How do you validate the value of the crowd-sourced contributions? In code, you could do it via TDD...

OH! OR - (well, you could make both) - you pay to review by _betting on its validity_. There may be a way to combine or borrow from each approach to fill in the gaps in the other.

I'm one of the main developers of OpenReview.net, and I would love to discuss this more with you, but like the other commenter, I can't figure out how to get your email from your profile (I'm also generally new to HN and may just be missing something - for example, is there a direct message-like option here?)

> My email is in my profile

I am shamefully bad at encryption stuff. How do I get your email from the string in your profile?

Oh geez. No? I may also just be confusing my own (ancient) thinking on how to do this with things that actually exist.

The central overlap is validating the value of contributions.

Reviewers get their names on the paper too. Which means if the paper was bad, well, they can be held responsible for it too.

In what field is that? I've never seen this happen, nor would I want to be on any paper I merely reviewed.

I think this was meant as a suggestion; I've also never seen this happen. However, I am wondering why, so may I ask why you would not want to be mentioned as a reviewer on a paper, maybe even with your review?

Not parent, but research association conference presentations might help to centralize announcement/review efforts while also keeping things under researcher purview.

I am curious who/what administration is impressed by publications in predatory journals when listed on a researcher's CV.

P.S. Jeffrey Beall kept a list of predatory publishers, but it was shut down.

> In January 2017, Beall shut down his blog and removed all its content, citing pressure from his employer. Beall's supervisor wrote a response stating that he did not pressure Beall to discontinue his work, or threaten his employment; and had tried hard to support Beall's academic freedom.

> The alternative business model that the world is moving to is Open Access. The difference being that instead of paying to access the journal or paper, the researcher or institution that wishes to publish pays up front to have the paper published.

Open Access is not a business model; it is about the access, as the name says. "Author pays" is a business model, not the best one, and the one with many problems.

See the Fair Open Access Principles https://www.fairopenaccess.org/ and the Publishing Reform Discussion Forum https://gitlab.com/publishing-reform/discussion/issues

> Open Access has its own problems, such as predatory journals, where researchers who don't know better or who are desperate to publish are more or less lied to as to the reach and validity of a journal.

This is not specific to Open Access, there are low quality subscription journals as well. No serious scientist would send articles to unknown journals without checking their credentials, whether the journals are open access or not.

> The alternative business model that the world is moving to is Open Access.

That's one possible alternative, not the only one. Another would be to have peer review done as it is today, where authors pay for PDF hosting, server maintenance costs, etc., and not the cost of "publishing" in the traditional sense.

Agreed, clearly not the only model. But it's the one that seems to be gathering the most momentum at the moment.

Yup. PLOS was founded partially to do just that - prove that Open Access can work as a business model.

> the researcher or institution that wishes to publish pays up front to have the paper published

Isn't this the case with the subscription-based model as well?

Yes, many top journals charge for both subscription and publishing. The only thing that open access journals lack in many fields is reputation (a low number of good citations). Hopefully this will change in the next few years.

What has always underpinned this flawed and collapsing business is that successfully published works in “prestigious journals” are a form of private property for the authors. A series of such publications can result in tenure, lifetime job security, and prestige for the author. The PRESENT VALUE of the cash flows associated with such publications runs to millions of dollars in advanced societies, and no cash investment was required on the part of the beneficiary. And the publishing fees are paid by someone else. So the researchers went along with the system, which only began to crack due to two factors: (1) developing economies needed the information but couldn’t afford it at absurd prices; (2) commercial application began to eclipse government and defense as a research driver.

Your post lacks crucial detail. I worked in a scientific journal and am familiar with the pipeline of article creation. Can you elaborate on who "someone else" is that is paying the publishing fees? These are often derived from grants authors have received. Furthermore, where is your evidence that commercial applications have eclipsed government/defense as research drivers? The vast majority of the studies I reviewed in the scientific journal were not directly related to commercial applications, but were most often funded by government organizations of one kind or another (CDC, NIH, defense, tech innovation funds, etc. from various countries). Your comment just doesn't square with my experience in the industry.

As a government/defense researcher, I can say there is definitely more money available from industry sponsors. When I go to internal conferences with my brethren, their salaries are paid and they have interesting problems, but there is no money or political will to organize basic processes, like a common infrastructure.

Meanwhile, my project, a collaboration with industry, has literally anything we need and more.

(1) Someone else in this context refers to anyone other than the author-researcher. The point I am attempting to make is that researchers, as long as they are on an academic grant and salary, and with the prospects of generating millions of dollars of private property, have no incentive to change the system. (2) It is clear that in electronics, biochemistry, and even in my little field, laser physics, the commercial applications overwhelm traditional pure research in degree of economic input, persons employed, turnover, etc. This is true in all hard sciences and life sciences. If it is not true in other fields, do they outweigh the scientific engineering fields? I think not.

The best move would be to institutionally forbid, or somehow disincentivize, publication in non-open-access journals.

Look at the UK's regulations: if UK government money goes into a research project, the results must be open access. The UK is, to my knowledge, the only country with this policy.

That is actually true in the US as well with NIH funded research [1] (the majority of biomedical research). HOWEVER, the research need only be opened after a time-embargo (generally a year). Additionally larger universities like the University of California system also 'require' open access publication [2] - however waivers to that requirement are often sought and easily granted.

In both cases, the policies (NIH in 2009, UC system in 2013) were shots across the bows of the large publishers, and permit a gradual easing of the culture without just blowing it up. So long as the eventual goal of all open-access is met in a relatively timely fashion, I think the strategy of slowly cinching down the rule is a reasonable compromise.

[1] https://publicaccess.nih.gov/policy.htm

[2] https://osc.universityofcalifornia.edu/open-access-policy/in...

The EU as a whole is also moving towards this: https://ec.europa.eu/programmes/horizon2020/sites/horizon202...

Should be easy to do for publicly funded research.

Agreed: if funded with public money, the outcomes (papers, ...) should be open and freely available to everyone.

In related news, the academic publishing problem is still unsolved. There is no standard model to fund the resource-intensive process of peer review in open access journals, or their role as a gatekeeper for scientific relevance, advancement, and funding.

> There is no standard model to fund the resource intensive process of peer review in the open access journals

This process doesn't necessarily need to be funded. In my own field, most journals are published by learned societies. They were founded with endowments large enough to cover the costs of publication (i.e. printing) in perpetuity, but the work of editors and peer review is unpaid. This doesn't strike most of us as a problem.

Even the big publishers do not compensate peer reviewers or sometimes even editors. They don't even provide typesetting or copyediting anymore -- authors are expected to provide camera-ready output. So, a lot of the money being gained by the big publishers does not actually go to fund the whole process of creating those journals.

> Even the big publishers do not compensate peer reviewers or sometimes even editors. They don't even provide typesetting or copyediting anymore

It sounds like they (major journal publishers) provide practically no value whatsoever. Why even use them then?

We are forced to.

For example, in my country, every assessment we go through (be it for a tenure-track hiring process, for getting tenure, for applying for a grant, etc.) has as its most important criterion "publications in journals indexed by ISI JCR", together with their quartile.

Most of the journals in ISI JCR follow this model where they cost money (be it to publish or to read) and provide very little value... except for being on that list and being necessary to (aspire to) stay in academia and feed your family, of course.

Other countries have better systems in the sense that they may be more open to other venues not in ISI JCR, some may even actually look at the quality of the papers instead of just blindly following rules to score quartiles. But scientists everywhere have the same problem in larger or smaller degree.

A solution that is sometimes proposed is that authors who are no longer struggling for their career (e.g. tenured full professors) take a stand and refuse to publish there. Some movements have been made in that direction, e.g. in mathematics. But in most fields a senior professor will work together with Ph.D. students and postdocs who are in the struggle, so it isn't realistic either.

The truth IMO is that the solution must come top-down, from governments. The EU has made some progress, e.g. mandating open access for EU grant holders, but what happens then is that publishers will let you make your paper open access in exchange for a hefty fee (which, again, is paid from taxpayer money). The real solution would be to mandate by law that research paid for by taxpayer money is published in not-for-profit venues, period.

It's a coordination problem - if all scientists in a given field moved away from the established Elsevier journal to a new one, everyone (except Elsevier) would be better off. However, if any individual academic tried to move, he'd be much worse off.

Historically, coordination has worked sometimes. In 2003, after prodding by Don Knuth [1], the editorial board of the Elsevier Journal of Algorithms resigned en masse [2] and started a new cheaper journal, ACM Transactions on Algorithms. A few years later the Elsevier journal was shut down.

But I agree, it seems we can't rely on this process, and the solution must involve regulation.

[1] https://www-cs-faculty.stanford.edu/~knuth/joalet.pdf

[2] https://freedom-to-tinker.com/2004/02/09/journal-algorithms-...

My incomplete understanding is that publishing in big-name journals provides prestige and improves funding prospects for academics. Since academia is very competitive, researchers will do whatever it takes to publish in the most prestigious journals. In other words, they provide a brand that researchers want to associate with; analogous to how rappers mention luxury brands (sometimes with but often without being paid) in their songs.

I don't know what benefits reviewers receive, but they are gatekeepers to the journal's brand, so conceivably they are able to obtain some benefit to themselves.

> I don't know what benefits reviewers receive

As a reviewer, you get to read relevant new research in your field several months before it gets published. This doesn't work in physics and maths, though, where the whole field has the habit of pre-publishing manuscripts on arXiv, so everyone gets to read everything before it's published.

Is there a reason arxiv doesn't directly manage what would be an analog to paper reputation?

My impression is that journal selection is doing some work of signaling how awesome scientists think a particular paper is, either actually or aspirationally, and so is capturing some sense of group regard. It seems like just keeping track of views, downloads, and "likes" on arxiv might serve much the same function although would clearly require a lot of work to get right to be credible.

Overlay journals and the reputation graph as the result of citations. The more reproducible your work is, the higher the reputation should be. Part of getting an undergraduate or masters degree should be in reproducing research.

More easily gamed, I'd think. Also, if you remove the curators (journals), then people looking for research in the first place (who are the ones driving the arxiv stats) won't know where to look first.

I'm not sure the previous poster is 100% correct but honestly I have not looked into it in depth. I know the times I have gone through it the journals did provide typesetting and copy-editing for some aspects of the paper.

However, to get to your actual question a big reason people use journals is simply the name. As a researcher getting your paper published in Nature is not only big because the prestige but presumably more people read/see nature articles so you have a better chance at high impact.

TL;DR Their value is their reputation and reach.

Exactly the point.

> There is no standard model to fund the resource intensive process of peer review in the open access journals

I have been a peer reviewer over the last 5 years or so for multiple journals and conferences in Computer Science run by Elsevier, IEEE, etc. Not once have I received any remuneration for my comments on the articles. Most of us do this as voluntary community service.

Yet the publishers get paid exorbitant fees for access to work they didn't pay for.

I'd imagine that for every Elsevier-published paper that you review, that is a non-Elsevier-published paper that you did not get a chance to review.

Would it be possible for you to prioritize review of open-access work? (This is a question, not a criticism!)

Would you consider not reviewing for Elsevier anymore, given their practices?

Yeah, while it's easy to criticise, based on Elsevier's reputation, reviewing for them strikes me as somewhat unethical by most standards.

EDIT: by most standards that at least don't involve some situation of comfort. If you're comfortable in academia, you're probably enabling some unethical shit, is what I've come to believe. It's up to you to decide if you can live with the degree of it.

You're the resource :) The alternative to you doing it for free would be paying someone. Having said that, I presume that you're doing it because you work in a certain role, and because it's advantageous to your career. I.e. you're getting paid by someone, and it's assumed that you'll review as part of your job.

Peer reviewers are unpaid. Editors are usually unpaid too, though some large / prestigious journals can manage a few paid staff to edit/layout; the publishing decisions are usually taken by committees of unpaid volunteers (remuneration is basically reputational).

The solution is technically easy - reciprocal peer review so you pay other people to review your papers by agreeing to review others’ papers, and then publishing on arXiv. Computer science basically does this but replaces arXiv with an industry body the ACM. I don’t see why it needs to be any more complex than that.

Can't we have a system where a paper is not necessarily reviewed at first, but as time goes by, more and more reviewers endorse the paper?

The review process gives authors feedback which they use to improve their paper before it's published, and authors usually appreciate that.

It's also a chance to stop major errors being published, and again authors usually appreciate things like that being caught before the paper becomes public.

I mean, why not though? We do it with wikipedia. Maybe a publication could be submitted for review, but not made public (like, its up on the wiki, but not 'publicly view-able yet, only available to editors).

This is what happens, it's the "peer review" thing being described. The reviews are issued so that corrections can be made before publication.

Wikipedia/Stackoverflow style review process. Also, ffs can we get hyperlinked citations? It's ridiculous that I can't just click the link for your citation and go to that paper.

This is hard for new authors as you don’t know if the reviewer is competent. So new authors would have to pay. So a problem.

I don’t literally mean you do n reviews to get your review - I mean members of the community generally spend some time reviewing. If you're new you will get some of your papers reviewed before you start to contribute back but it’s not a problem. It’s how it works now in CS.

> reciprocal peer review so you pay other people to review your papers by agreeing to review others’ papers, and then publishing on arXiv.

This is fraught with ethical problems.

It doesn’t need to be directly reciprocal. You could have a system where having your paper peer reviewed costs X tokens, and you earn tokens by peer reviewing other papers that don’t have citation links to yours. Or you buy tokens with cash which sustains a fund for external reviewers. Or other people can transfer their tokens to you.

Would that still have problems?
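
A toy sketch of the token scheme proposed above, to make the moving parts concrete (all names and rules here are hypothetical, filling in details the comment leaves open): reviewing earns a token, submitting a paper spends several, tokens are transferable, and reviews of papers with citation links to your own work earn nothing.

```python
# Toy ledger for the hypothetical review-token scheme described above.
# Assumed rules: a submission costs REVIEW_COST tokens, each completed review
# earns one token, and reviews of papers that cite (or are cited by) the
# reviewer's own papers earn no credit.

REVIEW_COST = 4  # reviews required per submission

class Ledger:
    def __init__(self):
        self.tokens = {}        # author -> token balance
        self.papers = {}        # paper id -> author
        self.citations = set()  # (citing paper, cited paper) pairs

    def submit(self, author, paper, cites=()):
        # Spending tokens to have your own paper reviewed.
        if self.tokens.get(author, 0) < REVIEW_COST:
            raise ValueError("not enough tokens to request reviews")
        self.tokens[author] -= REVIEW_COST
        self.papers[paper] = author
        self.citations.update((paper, c) for c in cites)

    def _linked(self, reviewer, paper):
        # True if the paper cites, or is cited by, any of the reviewer's papers.
        mine = {p for p, a in self.papers.items() if a == reviewer}
        return any((paper, p) in self.citations or (p, paper) in self.citations
                   for p in mine)

    def review(self, reviewer, paper):
        # Earn one token, unless the review is conflicted.
        if self.papers.get(paper) == reviewer or self._linked(reviewer, paper):
            return False
        self.tokens[reviewer] = self.tokens.get(reviewer, 0) + 1
        return True

    def transfer(self, donor, recipient, n):
        # E.g. a supervisor granting tokens to a new author.
        if self.tokens.get(donor, 0) < n:
            raise ValueError("insufficient balance")
        self.tokens[donor] -= n
        self.tokens[recipient] = self.tokens.get(recipient, 0) + n
```

Even in this sketch, the hard part is the conflict rule: citation links are a crude proxy for "same community", which is exactly the population qualified to review.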

Sure, but the identical exchange of value occurs in the current system, it just goes unsaid/unspoken. You review my paper and maybe ignore some of the problems, poor assumptions, whatever crap I used to make the sausage, just to let me get it published; I do the same for you. You may not 'know' who is reviewing your paper, but at least in the natural sciences there are so few technically qualified individuals to review some very domain-specific publications that, although you may not know for sure who is reviewing your paper, you can figure it out without too much work. Since both reviewers and authors benefit from getting their own work published, there is a silent consensus for letting bad work slip through.

No obviously I don't mean you review a person's paper and they review yours - use your common sense, mate.

I mean if you publish a paper and get four reviews, you then review another four papers at some point - perhaps at an entirely different conference later that year. You just make sure you review at least 4n papers for each n papers you publish.

> Computer science basically does this but replaces arXiv with an industry body the ACM.

There's a big difference: arXiv papers are open-access (everyone can download them), but ACM papers are closed-access (you need a subscription to read them). I wish that computer science research were published on arXiv, but we're not there yet...


1) the ACM lets each author link to their papers on the ACM library with free access.


2) in practice you could always find pre-print copies of any CS paper on their author's website already and everyone knew and nobody ever cared

3) the ACM also let you negotiate an agreement so that you retain copyright for the paper so you can publish it yourself - my company does this

I disagree with these points.

1) Essentially no one knows about the author-izer system or understands how it works, especially outside academia. Readers in companies, poorer countries, etc., can't be expected to guess that the way to read an article is to go to the author's webpage and follow author-izer links (assuming they have been set up). What these potential readers will do is: search something on Google (or follow a link from somewhere else), hit the paywalled ACM DL page, give up. This convoluted system of "open-access from one place, closed-access from another" makes no sense.

2) For authors who actually post preprints of their work, yes, you can read it this way. But then you end up with multiple versions of the same work, that are often subtly different: does the author's preprint integrate reviewer feedback? does it fix some bugs that were found after the camera-ready version was submitted? And anyways, preprints posted on authors' websites usually disappear when they change institutions or retire, so it's not a good solution.

3) Yes, you can retain copyright on papers published with ACM, but then you need to give them an exclusive license to publish, so this still limits what you can do with your work (besides some narrowly worded exceptions). There is also a 3rd option of making the work open-access with no exclusive transfer, but this costs at least $700 per article, which is obviously excessive compared to the actual costs of hosting a 12-page PDF.

So I don't think it's fair to compare publishing with ACM and publishing on arXiv, because ACM is not open-access and publishing with them requires you to pay excessive fees or sign agreements restricting how you can publish your work, i.e., the opposite of what's in the interest of science.

Indeed, these and other problems exist with author-posted versions, also putting researchers at disadvantage by being less read and cited: https://gitlab.com/publishing-reform/discussion/issues/17

Re 2), there were instances I seem to recall when academic publishers went after authors that had preprints of their own papers on their own website.

Purely open-sourced publication and review; you have to publish your data along with your paper. A wikipedia style review process (or maybe more of a stackoverflow/ wikipedia hybrid).

The biggest and most critical miss of the whole process is not having the data a paper is based on published along with the paper. If something is irreproducible, is it really scientific? If I don't have your data, can I really reproduce your results?

If you can’t, the results are invalid.

I can give you data that say anything. The reproduction that counts is where you obtain similar data independently.

If you want to validate that my claims are justified BY MY DATA then you need my data.

If you want to validate that my claims are justified by the universe, you need to get your own data and should avoid being polluted by mine.

Having said that, I competely support publishing data by default. But not for the reason you state.

> resource intensive process of peer review in the open access journals

I think you may be unfamiliar with how the reviewing process works. I get an e-mail asking me to review an article for a journal. If I say yes, I donate between two and ten unpaid hours of my time producing a review. After I am done with my review, I send it to the AE. The associate editor donates additional hours of his/her time reading my review, plus one or two others, along with potentially the entire manuscript, in order to make a decision on the paper. This recommendation gets forwarded to the editor, who makes the final call on publication. With few exceptions (the only ones I'm aware of being the biggies like Nature, Science and Cell) every single person involved in this process is uncompensated. In some cases the editor may receive a small honorarium, but it's trifling compared to the amount of time it takes to run a large, prestigious journal.

You're absolutely correct that reviewing is a resource intensive process. But you're wrong if you believe that the publishers are shouldering a significant part of the resource burden. This is exactly why people are so pissed off when these same publishers turn around and charge our own campus libraries a five-figure sum to access the same journals that we work basically for free to produce.

Peer review is rarely compensated, publishing revenue goes to the publisher, not the volunteer reviewers.

Here's a solution: https://medium.com/@followrosemary/a-blockchain-solution-for...

A token-curated registry (TCR) is an emerging model of information curation developed by Mike Goldin & Simon de la Rouviere from Consensys. It's an incentive system which decentralises the work of creating and maintaining high-quality repositories of valuable information.

Academic publishing seems like an ideal use-case for this model. There's a frenzy of activity going on in this space at the moment, and hundreds of details to work out - rather than get wrapped up in theorising, we want to release a v1.0 quickly and learn from there.

Please reach out to me via LinkedIn if you want to contribute to the pilot, or can help us raise awareness among the academic community. We're a small but dedicated group, open to partnering with universities, journals, crypto-developers, and other interested parties (especially academics).

Note: we have no plans or desire to make money out of this project. We're a group of well-connected enthusiasts who want to make headway on solving this problem.

Here's Mike Goldin's intro to TCRs, for context: https://medium.com/@ilovebagels/token-curated-registries-1-0...

I’m not an academic so forgive me for a noob question.

Is there a good introductory essay/book on how the academic publishing industry is setup, the workflow and the incentive structure for each party involved?

As an employee at an open access publisher, I can't agree that funding the peer review process is the biggest problem. Our surveys of our own reviewers have shown that only a small (<20%) minority of reviewers wish to be paid a fee for their reviews. The majority of referees prefer the current model of crediting volunteer reviewers in a regularly published "acknowledgements" article. I assume the incentive may be that reviewers show these to their tenure committee.

In fact the most common complaint from peer reviewers is about the length of the review period. Because open-access journals have authors as our customers, the market pressure is to provide excellent customer service, and authors prefer publishers who will process their papers quickly. This pressure is passed on to reviewers who must complete reviews much faster than the old norms under Springer.

Funding, or a startup enterprise, is certainly needed to administer the peer review process. I'm willing to bet it would be less than $10 per paper in universal use. No money is needed to pay reviewers; in fact the savings would be so great, there could be a courtesy payment. The ruinous censorship and false ownership of the publishers should be broken.

Do you really need peer review? Isn’t number of citations a form of peer review? Of course you sort of need to assign a reputation to these citations.

Citation count isn’t always the best indicator depending on the field.

It’s hard enough to find good papers without the literature being further polluted with substandard work.

The job of a reviewer isn’t a simple yes/no answer. Reviewers often suggest sweeping changes to work to make it publishable. This helps to ensure that the literature is populated with well put together work that is free of bias and glaring mistakes. Otherwise we’d spend every minute helping students to spot the difference and wading through misleading research.

>Otherwise we’d spend every minute helping students to spot the difference and wading through misleading research.

I already spend far too much time doing this. I can't imagine how futile literature research would be without peer review. There are already enough illogical conclusions and poor study designs to wade through.

You absolutely need peer review. You’d be stunned at some of the rejected submissions

Absolute crap will be roundly ignored with or without peer review. Science and the Humanities worked fine before peer review and if it’s dropped they’ll work fine again. If Grigori Perelman puts another paper on arxiv that’s groundbreaking people will look at it without benefit of peer review, just like the last one.

> Science and the Humanities worked fine before peer review

Before peer review they were not dealing with the volume of scientific content that is submitted nowadays. One could debate whether citations are the only metric necessary, but there needs to be a filter before that even happens, or else we will be flooded with an avalanche of garbage research. It would be trivial for someone with deep enough pockets to order a network of junk papers citing each other and debunking climate change or evolution and make a real mess. As painful as reviewing is, it's a necessary evil for now.

> Absolute crap will be roundly ignored with or without peer review.

Things might be ignored, but only after a person has already wasted their time looking at it and determining it is crap.

It was harder for cranks to publish back when journals were purely physical and a crank would have to come up with the printing costs himself. Now that anyone can publish for free on the internet, peer review is even more important for establishing what content out there is worth paying attention to and which is not.

You neglect the costs of submission, revision, resubmission, etc., which are borne by the authors. Even if we assume that all involved are pure of heart and no one is deliberately delaying publication of their rivals' work, submission to publication in economics is on the order of two years. This is insane, so the actual intellectual conversation has moved to working papers, with final publication in a journal being as much for archival and career-progression metrics as anything else. I believe the situation in much of computer science is similar, with conference papers serving as the workaround for the fact that pre-publication peer review is unbearably slow, not working papers.

Peer review happens anyway, but faster, and in public, without the insanity of revise and resubmit.

Journals are not where the action is in Economics or Computer Science. It works for them. Why not for everybody?

> You neglect the costs of submission, revision, resubmission, etc., which are borne by the authors.

I neglect those because journals in my own field are both free to publish in and, more often than not, open-access. I do understand that not everyone is so fortunate, however.

I’m talking about time, not money. Because those costs absolutely are borne by the authors. Insofar as peer review hinders the free dissemination of ideas it also hurts the scientific community and those who depend on its research.

> You’d be stunned at some of the rejected submissions

You'll be stunned by some of the accepted submissions, such as these highlighted by "New Real Peer Review", @RealPeerReview on Twitter:

Glaciers, gender, and science: A feminist glaciology framework for global environmental change research. "Merging feminist postcolonial science studies and feminist political ecology, the feminist glaciology framework generates robust analysis of gender, power, and epistemologies in dynamic social-ecological systems, thereby leading to more just and equitable science and human-ice interactions." http://journals.sagepub.com/doi/abs/10.1177/0309132515623368

Black Anality "In turning attention to this understudied and overdetermining space — the black anus — “Black Anality” considers the racial meanings produced in pornographic texts that insistently return to the black female anus as a critical site of pleasure, peril, and curiosity." https://read.dukeupress.edu/glq/article-abstract/20/4/439/34...

The Perilous Whiteness of Pumpkins (specifically, Starbucks’ pumpkin spice lattes) http://www.tandfonline.com/doi/abs/10.1080/2373566X.2015.109...

EGO HIPPO: the subject as metaphor about a trans animal scholar identifying as a hippopotamus. http://www.tandfonline.com/doi/full/10.1080/0969725X.2017.13...

Rum, rytm och resande: Genusperspektiv på järnvägsstationer PhD thesis on gendered train stations. "The overriding aim of this study is to examine how male and female commuters use and experience railway stations as gendered physical places and social spaces, during their daily travels. [...] Through this theoretical frame the thesis analyses gendered power relations of bodies in time, space and mobility." http://liu.diva-portal.org/smash/record.jsf?pid=diva2%3A7424...

“I'm a real catch”: The blurring of alternative and hegemonic masculinities in men's talk about home cooking "while many participants drew on what they saw as alternative masculinities to frame their cooking, these masculinities may in fact have hegemonic elements revolving around notions of individuality and romantic or sexual allure." https://www.sciencedirect.com/science/article/abs/pii/S02775...

becoming cyborg: Activist Filmmaker, the Living Camera, Participatory Democracy, and Their Weaving "Throughout this article I use lowercase letters to deemphasize the importance of the individualized human in cyborg connection." http://irqr.ucpress.edu/content/10/4/340

Disaster Capitalism and the Quick, Quick, Slow Unravelling of Animal Life "Sea otters have barely survived centuries of colonial and capitalist development." https://onlinelibrary.wiley.com/doi/abs/10.1111/anti.12389

Queer organising and performativity: Towards a norm-critical conceptualisation of organisational intersectionality http://www.ephemerajournal.org/sites/default/files/pdfs/issu...

Speciesism Party: A Vegan Critique of Sausage Party "In this article, we have described how Sausage Party reflects and reproduces intersecting oppressive power relations of species, gender, sexuality, ethnicity and different forms of embodiment." https://academic.oup.com/isle/advance-article/doi/10.1093/is...

"Wow, that bitch is crazy!" PhD Thesis about watching "Bachelor" with friends. https://search.proquest.com/openview/f4a6dbda2dc523609ad56c5...

Diaries, dicks, and desire: how the leaky traveler troubles dominant discourse in the eroticized Caribbean Paper on being a sex tourist in the Caribbean. http://www.tandfonline.com/doi/full/10.1080/14766825.2011.65...

Your choice of articles says more about you than the quality of the articles. Also, a number of these articles were published pursuant to mandatory thesis requirements. They weren't intended to expand the frontiers of human knowledge; they were intended to show that the degree-earner is capable of deep analysis in their respective PhD program.

The otter article, for example, appears to be a well-researched analysis of otter populations in Alaskan waters correlated with human settlement and economic activities. The pumpkin latte article looks into the correlation of light colors and whiteness and advertising in the US, an issue which frequently pops up in marketing gaffes (including one as recently as last week for a beer commercial). The queer organizing article looks at diversity in organizations, analyzed using sociological framework (i.e., "norms") rather than the business management framework. The Sausage Party article is an academic analysis of the movie, which is a surprisingly deep commentary of race, gender, and ethnicity in the U.S.


The above is a completely fraudulent article that was accepted for publication, to demonstrate the counterpoint to your argument. The article is literally gibberish, but got through 'peer review'. I know the lead author on this one, and she did this to make a point about the inconsistency of peer review. The article is so clearly fake that it's laughable, but that is the point.

The choice of articles is not mine, but recent examples from said Twitter account. There are many more (e.g. hashtag #NRPR50K).

I don't want to discuss the merits of individual papers, but I think as a whole they say something worrying about the current state of academic publishing.

So, let me just tell you why I am annoyed and concerned with these and similar papers:

1. Often, they torture language, in that the authors don't seem to write to elucidate and educate, but to obfuscate. (I'm well aware that scientific fields develop their own jargon; that it is useful; that the meaning of a term doesn't necessarily correspond with the ordinary meaning; etc. But even granting all that, it seems to me that a lot of that writing is wilfully opaque and unnecessarily jargon laden.) (Note BTW that you communicated the gist and utility of the articles much better than the authors themselves.)

2. Publishing "pursuant to mandatory thesis requirements" is part of the problem: people get degrees and university positions with research that does not expand the frontiers of human knowledge. Autoethnographic research is particularly galling in that respect (such as a recent paper about the time the author fell off a chair).

3. They delegitimise academia. There used to be a very broad consensus in most societies that education and research is immensely valuable and ought to be supported by government, and that academics should have immense freedom to pursue what they deem important and valuable without any interference and censorship (the essence of tenure). Papers such as these corrode that consensus.

4. Critical theory papers are ineffectual, I'd say, in achieving their laudable goals. I was going to mention MLK and Marx and Keynes ("Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."), but this is too long as it is.

5. "Publish or perish" and Pay-to-publish open access also rear their ugly head.

I maintain that not only rejected papers, but also some accepted papers are shocking.

Number of citations is gamed too easily. You would first get self-citations and, after you started ignoring those, citation rings (groups of people colluding to citate each other excessively).

So, yes, you would need to assign a reputation to citations. I don’t think that’s a solved problem or even simpler than the problem of assigning reputation to papers.

This is the same problem Google faced and 'solved' with the PageRank algorithm. I've seen some attempts at applying this to the reference graph, but it doesn't seem to be taking off.

It's not taking off because, out of the box, it can't replace the gatekeepers: there needs to be a certain time period, sometimes decades, from publication until the moment a major work is recognized and cited. Cites form a DAG (old papers never cite newer ones), so they don't converge under a recursive application of PageRank; the newest works all have identical weights by definition.

So in the short run, you still need a prospective measure of scientific value, and acceptance for publication in a selective journal ("impact factor") serves that purpose well.

Ah, of course. I forgot the convergence requirements for PageRank. Indeed, the fact that cites are a DAG really does kind of screw with PageRank.

Note that my supposed idea was to use 'pagerank' to replace reference count as the value of a work. Not to replace peer review.

Good heavens, no. A large number of citations happen when somebody looks for something to cite and cites the first one that looks relevant.

The key thing not being pointed out is that these papers are generally printed in a journal or as proceedings to a conference. So there needs to be a hard deadline by which the work needs to be complete, and seen as valid, for printing. So in that sense, there needs to be a peer review, because there aren't any take-backs at this stage.

If you're really interested in the question, there is a field of research called scientometrics and that is dedicated, among other things, to study this kind of question.

No one has mentioned the heroic work of Sci-Hub yet, so here's an obligatory link: https://news.ycombinator.com/item?id=16332139

Meanwhile, German universities are teaming up for joint negotiations with journal publishers. Their first target is Elsevier. My university came close to losing access to Elsevier journals as a part of this move.

I work at a small non-profit anthropological database. Our primary target is academia. We can only employ a small team of developers (2) and 12-13 total staff. About a decade ago Germany negotiated with us our only perpetual access license for all of their universities.

Consortia memberships in the US and Canada are not uncommon either. However, Germany possess our only perpetual license.

A lot of us desire open access, however, we are not sure how we would fund ourselves. Our subscription rates are generally very low. Especially compared to these large journals.

Could you get a grant or two?

> Could you get a grant or two?

Re:open-access? Perhaps. I should note that this area is not my expertise. However, what to do when the grant runs out?

Even so most of our revenue comes from the subscription. A grant may pay for hosting, but what about the time of the developers. Or those working on the publication side. We are already on a near skeleton crew. Especially compared to what we had here in the 1940-80s.

> However, what to do when the grant runs out?

Well, you get another. Lots of non-profits run only on grants.

Have you thought about applying for EU grants? Open Access is one of the goals on Horizon 2020, the €80B EU program[1], but even after that there's no shortage of EU cash being dumped on anything even barely related to "innovation".

[1] https://ec.europa.eu/programmes/horizon2020/sites/horizon202...

That's also what the French are doing; first paragraph of the article:

> [...] an impasse in fee negotiations between [Springer Journals] and Couperin.org, a national consortium representing more than 250 academic institutions in France.

They should go even further and unite across nations. If most of the universities of Europe negotiated as a consortium, they would have even more power.

Of course, I would prefer solutions that get rid of Elsevier, Springer, etc.

Science: Journals are too expensive! They should be cheaper!

HN: Just do it yourself! It is easy! You already do the hard part! Yay!

At the same time:

Poster: This is how you run your small DB.

HN: Outsource your databases! Outsource your apps! Outsource your auth! Outsource your mail! It is difficult!

I don’t see the paradox?

The people who tell techies to outsource everything are the same people who tell scientists to insource their journals.

Aren’t they saying scientists can now insource their journals because everything but the labor they are already doing has been outsourced to software?

I’ve been reading these comments because I’m always seeking ideas for how to distribute peer review and provide signals about paper quality. How could we leverage the Internet and a large group of authors and papers to show readers which papers thrown online are of citable quality? Surely there’s something we can do beyond counting page views.

The system as it is requires experts to review - peer review isn't "let any random person review" so I'm not sure leveraging the Internet in the traditional sense is the way to go. There are scientist social networks, but they don't have a lot of traction, and there's something to be said for the refereed process that exists.

Both knowledge extraction and signaling paper quality are fascinating, hard problems in modern science. I wish I had answers beyond criticism of the current system, but it might be a problem for people far smarter than I.

Supposing one of the scientist social networks implemented the refereed peer review process for some or all papers they host, funding it with ad revenue, APCs, and/or freemium instead of charging universities and readers, what more could they do to improve the system? Better post-publication review/retraction/amendment for work we don't realize is wrong until much later? A rating system?

I have not given this much thought or done any research; it is just an idea that popped into my head reading this thread. Why not just replicate the open-source software model? An author opens a git repository on a GitHub-like platform. Reviewers can comment on the repository's content. Article versions and changes are stored for future reference, and even the research data can live in the same repository. Reviewers build up a reputation by reviewing articles. Articles can be forked or referenced, and articles with the most references, reviewed by reviewers with a good reputation, can thus rise to the "top". Technically everything is in place. Or am I too much of a technical optimist?
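The ranking idea in that comment (reputation-weighted reviews plus reference counts) can be sketched in a few lines. This is purely a toy model under assumed definitions: reviewer reputation here is just a diminishing-returns function of review count, and the 0.1 weight on references is an arbitrary illustrative choice, not anything the commenter specified.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str
    score: float  # 0.0 (reject) .. 1.0 (strong accept)

@dataclass
class Article:
    title: str
    references: int = 0                       # incoming citations/forks
    reviews: list = field(default_factory=list)

def reviewer_reputation(history: dict, reviewer: str) -> float:
    """Toy reputation: grows with reviews written, with diminishing returns.
    1 review -> 0.5, 9 reviews -> 0.9, approaches 1.0 asymptotically."""
    n = history.get(reviewer, 0)
    return n / (n + 1)

def rank(articles: list, history: dict) -> list:
    """Sort articles by reputation-weighted review scores plus a small
    bonus per incoming reference."""
    def score(a: Article) -> float:
        weighted = sum(r.score * reviewer_reputation(history, r.reviewer)
                       for r in a.reviews)
        return weighted + 0.1 * a.references
    return sorted(articles, key=score, reverse=True)

# Example: a well-cited article reviewed by a newcomer can outrank an
# uncited one endorsed by a veteran reviewer.
history = {"alice": 9, "bob": 1}            # reviews written so far
a = Article("A", references=0, reviews=[Review("alice", 1.0)])
b = Article("B", references=5, reviews=[Review("bob", 1.0)])
ranked = rank([a, b], history)
```

A real system would need defenses this sketch ignores entirely: citation rings, sockpuppet reviewers, and the usual marketing effects the next comment points out.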

That'd be nice if top repos on github were the best ones. But they're quite often the most marketed instead (though these two qualities are not necessarily mutually exclusive).

As a programmer, it's hard not to wish we had git-for-publishing, git-for-Photoshop, git-for-homework, etc. It's just such a powerful system! I think the steep learning curve is the reason it hasn't already caught on beyond where it's absolutely essential in software engineering. Maybe that means we need a more approachable git. It would also mean getting people onto the platform; maybe a platform that already has the users could gradually introduce a version control system piece by piece.

FWIW, I've already got git-for-homework, git-for-academic-articles, git-for-slide-decks, git-for-lesson-planning, and probably a few other things. I'd be curious to see what passes for version control in fields other than software/CS, but I don't think I've heard of any such thing in use. I don't even hear "git is too complicated" -- people don't seem to talk about (anything I recognize as) version control at all.

From what I've seen, the average number of citations for papers published in the previous year is something like 0.3. A significant proportion are never cited.

Most people give no shit about most papers. The fact that even the shittiest publication will be reviewed by a handful of experts, even though most people will never read the paper, is as good as you're gonna get, imo.

Good thing when rent-seekers are countered. It doesn't happen enough.

Related discussion on the Publishing Reform Forum: https://gitlab.com/publishing-reform/discussion/issues/38

Please help us by expressing your opinion and public support on the forum. There is still a lot of work needed to convince the journals' editors.

Here's a relevant project: SCOAP3[1]. It tries to convert high-quality journals to open access with an interesting model.

[1] https://scoap3.org/

Disclosure: I work in the same section as the SCOAP3 team, at CERN.

Interesting that the publishers are still allowing the universities access (at least for now) while the universities refuse the publishers' demands.

Because if the universities truly leave, the publishers are screwed. This is the cycle of feudalization or corporatization: unchecked power created an imbalance, but that imbalance threatens the power holder, because their power is only symbiotic, not absolute.

Publishers need Universities to both consume the product and create the product. If they cut them off, it will only force the inevitable.

Elsevier and Springer are European companies. I wonder if the EU countries could just nationalize them.

My hope is that the ethos of FAIR* data sharing principles spreads and researchers finally replace the current commercial aspect of publications with an endowment funded system.

* https://www.monash.edu/ands/working-with-data/fairdata

>replace the current commercial aspect of publications with an endowment funded system

The research itself is funded by the taxpayer. The peer review is funded by the taxpayer. What endowment funding is needed to replace whatever Springer is actually providing?

This whole mess is solvable with a single law. Just have the EU or the US legislate that any taxpayer-funded research (including research with external grants but done by researchers at public universities) must be available free of charge for download by any citizen. The whole "sector" would unravel pretty quickly after that, as it should. Charging the taxpayer, through huge contracts like this one being renegotiated in France, huge sums for access to the very research the taxpayer has already paid to have done in the first place is a complete travesty.
