Open Access has its own problems, such as predatory journals, where researchers who don't know better, or who are desperate to publish, are more or less lied to about the reach and validity of a journal. It has become an area ripe for a new kind of scammer. This has prompted efforts such as Think, Check, Submit.
There's also the problem of raising the funds to publish something as Open Access. It's not always the case that the researcher actually has the means to pay for it.
Nevertheless, Open Access is clearly where the research community is headed, and we're going to see steady growth in the percentage of research published that way over the next decade or so. But it does come with its own set of problems to solve.
It's also important to keep in mind that "Think, Check, Submit" is an initiative backed by traditional publishers (of the "gold open access" kind), and I think they are making this a bigger problem than it actually is. In my area, everyone knows which conferences/journals are reputable, and everyone knows that all others are essentially scams (especially if they have APCs). I have never seen anyone make the mistake of publishing valuable material with a scam publisher.
Concerning other comments on funding, I wanted to add that some journals don't need much money (hosting is basically free, and the workload is shared well by the editorial board). On the other hand, funding can and does come from universities and their libraries (money not spent on subscriptions), research institutions, museums, and donations (sometimes seen as voluntary APCs by those who have the money from grants).
Also, it's absolutely worth noting, like you point out, that different fields have different processes for accomplishing the same thing. It seems that younger academic fields are a bit more independent of the traditional structures. I hear that in Computer Science, conferences carry more importance than they do in some other fields, for example.
In other fields, there's a lot of inertia to deal with.
You do your research, read the "PR" (i.e. submission) guidelines, and submit it. Sure, let the committer and reviewers be anonymized until the PR has been accepted; then just pull it into a "Repo", a.k.a. the journal, and be done.
How can this process cost tons of money?
The editor has to put in the hours to reject the crap that people submit, and those hours don't come free.
At least in theoretical computer science these tasks are all done by the volunteer conference organizers.
It's possible that this differs in fields that primarily publish in journals, but it certainly seems like the volunteer approach works just fine.
I have also never seen a paper get improved typesetting or prose. Reviewers may reject based on bad writing, or the journal may reject due to LaTeX warnings, but never more than that.
I wonder what journal you work for?
Sometimes conferences pay a publishing company to do the review process for submissions, by the way. I'm not sure that your submissions are actually getting processed for free by conference volunteers.
I'd rather not disclose details of my employer, sorry.
> just happens that the resume byline "conference organizer" is valuable enough in that field.
Being editor of a journal/PC chair is a valuable thing in any field, not just CS.
Well, I'm sure, because I also review for the same conferences and journals that I submit to!
The problem is the business practices of the big publishers: asking huge amounts of money for access to journals, forcing universities to buy bundles of journals so that, to get access to the articles they're interested in, they also have to subscribe to stuff they don't want, and so on.
Unfortunately, I have to admit that most of the money I pay for scholarly papers goes to Springer/Elsevier shareholders and not the editors who barely edit.
But the principle still remains. Reading through crap papers so you don't have to is hard work that should be paid for. (And editors have to do this before sending stuff out to reviewers, or else the reviewers drop out.)
And I would rather pay the editors out of my own pocket than expect the authors to do it, or use an ad-supported mechanism, because if you're not the customer, you're the product.
Does that process need to be any different from PRs on github? Can't all that be distributed across a team of researchers working on the topic of the journal, just as PRs on github are distributed across all developers working on the software?
It's not the same as a github PR because wading through a mountain of badly written papers is work researchers do not want to do. That's what a publisher (like Elsevier) does. See example of Elsevier employee Angelica Kerr. It should be easy to see that scientists would think it's a waste of their time to do Angelica Kerr's work.
I didn't get a chance to respond to the reply by allenz that suggested that journal chiefs should hire their own editing and administration staff. Again, that "solution" makes the same mistake of thinking scientists will do something they have no interest in doing. They don't want the hassles of "contracting" an editing team.
Research publishing is not a "software problem" that's solvable by a "software platform solution." You can't solve it by applying social-platform mechanics such as github PRs and/or StackOverflow/Reddit-style votes. (My parent comment tries to explain this.)
The actual problem, which I think you implicitly identify when talking about Nature and Cell, is the "prestige" factor. As long as researchers are motivated by the prestige level of a journal, because they fear being ostracized by their peers or not getting recognition for their research (which has monetary and career costs), I think it will be very difficult to convince anyone to switch, regardless of how effective the platform could be.
I haven't thought about that problem before so I'm not sure how to address it as of now.
Here's the first paper off the top of today's submissions to gr-qc: https://arxiv.org/abs/1803.11224
Is it right?
How do you know?
To whom should you send it in order to get an expert review?
Which of those people has an ax to grind with one of the authors?
Of the remaining people, who do you think you could convince to invest a day or more to carefully evaluate the paper?
Okay, so you got replies back from two of your carefully selected referees (after two months of badgering one of them):
Referee A thinks the paper is great, insightful, and advances the field, but wants extensive changes.
Referee B thinks the paper is derivative drek and should be rejected because his friend C has already done something similar.
Your journal publishes twenty similar articles a week but receives three hundred a week. The careers of the authors are partially on the line, as is the prestige of the journal, the attention of the readership, and the future submission of articles by prospective authors.
Good luck training a neural net to do this well. I suspect a neural net can be trained to reject the worst crackpots, but little more, without rejecting insightful but unique/important papers.
> In addition to basic copy-editing, she prevents crackpot articles such as "Darwin's Theory of Evolution is proven wrong by Trump" from reaching the journal editors and wasting their time. (She may inadvertently forward some bad articles but she has enough education to reject many of them outright to minimize the effort by the journal's scientists.)
Do I read it right that an Elsevier manager with no degree in the subject rejects research papers without consulting the editors? To me, as a scientist, this is a big red flag and a concern about the journal's quality. While the example given here looks obvious (and probably contrived), most real examples are less so. And while recognizing those takes little time and effort for a trained eye, this task certainly should not be left to a non-specialist in the subject. It will not save much time, and it will create more concerns than it resolves.
The PR model would have to answer the question of how reputation is built. A "star" system would likely be inadequate for a number of reasons. But the major obstacle would be in getting researchers on board with the new model.
Early adopters would have to withstand scoffs from their peers who haven't accepted the new model yet.
Even Open Access, which could be regarded as an incremental improvement, has years, if not decades, of grueling work behind it.
The network effect is very strong within the research community, which results in a situation which is very close to "all or nothing."
More importantly: punishment from the people who give them money.
The publishers charge their exorbitant rates because they have acquired the journals, leaving universities no freedom to switch to alternative providers offering better service at lower cost, nor to save money by dropping services they don't need.
More seriously though, peer review is usually free or almost free; reviewers do not get anywhere near the ridiculous amounts of money that "open access" journals demand.
Besides, having articles on github, with all the data, so that any reader can check the results, and comment, would be much more powerful than the type of peer review we have today.
Let's say however that you need incentivization. I suppose one approach would be at a legislative level, where you require all researchers to publish on a decentralized open access platform where every citizen - they are the ones funding the research, after all - can freely and uninhibitedly access the information.
Perhaps not. But people keep telling me in this thread that reviewers work because the journals expect them to or they will get future papers rejected / lose conference access, not for love of the topic, payment, or any kind of special visibility. I wonder whether an approach that promised only publicity for having done reviews, or that relied on altruism, would really produce the quality of reviews necessary.
> I suppose one approach would be at a legislative level, where you require all researchers to publish on a decentralized open access platform where every citizen - they are the ones funding the research, after all - can freely and uninhibitedly access the information.
That's not really incentivizing the _reviewers_, though, right?
But no seriously, this seems like an actual technology / problem fit, although you'd need a decent amount of tooling, product design, etc for it to work - but some of that's already being explored by the "open source + blockchain" endeavors.
In basic form, I imagine you'd basically turn the transaction fees into review fees.
(The product design and tooling comes from how you prevent abuses to that basic format, I think)
> some of that's already being explored by the "open source + blockchain" endeavors.
I, for one, would also be interested in links.
One thing we don't want to do is reinvent the actual blockchain technology, so we're also interested in finding partnerships in terms of platform technology.
Ideally, we'd also like to set up a foundation to maintain and manage the ecosystem and invite both old and new players to participate. A lot of blockchain efforts seem to be a little too much like a hope for a monopoly wrapped in a thick layer of talk about free markets and democracy.
We're actively looking for collaboration and partnerships around this. If you, or anyone, would like to get involved we'd be happy to discuss. My email is in my profile.
I agree with an above post that the starting point is that you get paid to review, and you pay to get reviewed; although you'd need another way to monetize the token or it's limited to this use case.
The overlap with open-source seems clear: How do you validate the value of the crowd-sourced contributions? In code, you could do it via TDD...
OH! Or (well, you could make both): you pay to review by _betting on its validity_. There may be a way to combine or borrow from each approach to fill in the gaps in the other.
I am shamefully bad at encryption stuff. How do I get your email from the string in your profile?
The central overlap is validating the value of contributions.
P.S. Jeffrey Beall kept a list of predatory publishers, but it was shut down.
> In January 2017, Beall shut down his blog and removed all its content, citing pressure from his employer. Beall's supervisor wrote a response stating that he did not pressure Beall to discontinue his work, or threaten his employment; and had tried hard to support Beall's academic freedom.
Open Access is not a business model. It is about the access, as the name says. "Author pays" is a business model, not the best one, and the one with many problems.
See the Fair Open Access Principles https://www.fairopenaccess.org/ and the Publishing Reform Discussion Forum https://gitlab.com/publishing-reform/discussion/issues
> Open Access has its own problems, such as predatory journals, where researchers who don't know better or who are desperate to publish are more or less lied to as to the reach and validity of a journal.
This is not specific to Open Access; there are low-quality subscription journals as well. No serious scientist would send articles to unknown journals without checking their credentials, whether the journals are open access or not.
That's one possible alternative, not the only one. Another would be to keep peer review as done today, with authors paying for PDF hosting, server maintenance costs, etc., rather than the cost of "publishing" in the traditional sense.
Isn't this the case with the subscription-based model as well?
Meanwhile, my project, a collaboration with industry, has literally anything we need and more.
In both cases, the policies (NIH in 2009, UC system in 2013) were shots across the bows of the large publishers, and permit a gradual easing of the culture without just blowing it up. So long as the eventual goal of all open-access is met in a relatively timely fashion, I think the strategy of slowly cinching down the rule is a reasonable compromise.
This process doesn't necessarily need to be funded. In my own field, most journals are published by learned societies. They were founded with endowments large enough to cover the costs of publication (i.e. printing) in perpetuity, but the work of editors and peer review is unpaid. This doesn't strike most of us as a problem.
Even the big publishers do not compensate peer reviewers or sometimes even editors. They don't even provide typesetting or copyediting anymore -- authors are expected to provide camera-ready output. So, a lot of the money being gained by the big publishers does not actually go to fund the whole process of creating those journals.
It sounds like they (major journal publishers) provide practically no value whatsoever. Why even use them then?
For example, in my country, every assessment we have to take (be it for a tenure-track hiring process, for getting tenure, for asking for a grant, etc.) has as the most important criterion "publications in journals indexed by ISI JCR" together with their quartile.
Most of the journals in ISI JCR follow this model where they cost money (be it to publish or to read) and provide very little value... except for being on that list and being necessary to (aspire to) stay in academia and feed your family, of course.
Other countries have better systems in the sense that they may be more open to other venues not in ISI JCR, some may even actually look at the quality of the papers instead of just blindly following rules to score quartiles. But scientists everywhere have the same problem in larger or smaller degree.
A solution that is sometimes proposed is that authors who are no longer struggling for their career (e.g. tenured full professors) take a stand and refuse to publish there. Some movements have been made in that direction, e.g. in mathematics. But in most fields a senior professor will work together with Ph.D. students and postdocs who are in the struggle, so it isn't realistic either.
The truth, IMO, is that the solution must come top-down, from governments. The EU has made some progress, e.g. mandating open access for EU grant holders, but what happens then is that publishers let you make your paper open access in exchange for a hefty fee (which, again, is paid from taxpayer money). The real solution would be to mandate by law that research paid for by taxpayer money is published in not-for-profit venues, period.
Historically, coordination has worked sometimes. In 2003, after prodding by Don Knuth, the editorial board of the Elsevier Journal of Algorithms resigned en masse and started a new, cheaper journal, ACM Transactions on Algorithms. A few years later the Elsevier journal was shut down.
But I agree, it seems we can't rely on this process, and the solution must involve regulation.
I don't know what benefits reviewers receive, but they are gatekeepers to the journal's brand, so conceivably they are able to obtain some benefit to themselves.
As a reviewer, you get to read relevant new research in your field several months before it gets published. This doesn't work in physics and maths, though, where the whole field has the habit of pre-publishing manuscripts on arXiv, so everyone gets to read everything before it's published.
My impression is that journal selection is doing some work of signaling how awesome scientists think a particular paper is, either actually or aspirationally, and so is capturing some sense of group regard. It seems like just keeping track of views, downloads, and "likes" on arxiv might serve much the same function although would clearly require a lot of work to get right to be credible.
However, to get to your actual question: a big reason people use journals is simply the name. As a researcher, getting your paper published in Nature is big not only because of the prestige, but because presumably more people read/see Nature articles, so you have a better chance at high impact.
TL;DR Their value is their reputation and reach.
I have been a peer reviewer over the last 5 years or so for multiple journals and conferences in Computer Science, with publishers like Elsevier, IEEE, etc. Not once have I received any remuneration for my comments on an article. Most of us do this as voluntary community service.
Would it be possible for you to prioritize review of open-access work? (This is a question, not a criticism!)
EDIT: by most standards that at least don't involve some situation of comfort. If you're comfortable in academia, you're probably enabling some unethical shit, is what I've come to believe. It's up to you to decide if you can live with the degree of it.
It's also a chance to stop major errors being published, and again authors usually appreciate things like that being caught before the paper becomes public.
This is fraught with ethical problems.
Would that still have problems?
I mean if you publish a paper and get four reviews, you then review another four papers at some point - perhaps at an entirely different conference later that year. You just make sure you review at least 4n papers for each n papers you publish.
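To make the bookkeeping concrete, here is a toy sketch of the credit rule described above (names and numbers are hypothetical illustrations; no journal actually runs this):

```python
# Toy sketch of the review-credit rule: each paper you publish consumes
# roughly 4 reviews from the community, so you owe at least 4 reviews
# back per published paper. All names here are illustrative.

REVIEWS_PER_PAPER = 4

class ReviewLedger:
    def __init__(self):
        self.published = 0
        self.reviewed = 0

    def publish(self):
        self.published += 1

    def review(self):
        self.reviewed += 1

    @property
    def owed(self):
        """Reviews still owed to break even with what your papers consumed."""
        return max(0, REVIEWS_PER_PAPER * self.published - self.reviewed)

ledger = ReviewLedger()
ledger.publish()       # one paper published: ~4 reviews consumed
ledger.review()
ledger.review()
print(ledger.owed)     # 2 reviews still owed
```

The point of the sketch is that the balance need not be settled per venue or per year; any conference or journal later on can absorb the debt.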
There's a big difference: arXiv papers are open-access (everyone can download them), but ACM papers are closed-access (you need a subscription to read them). I wish that computer science research were published on arXiv, but we're not there yet...
1) the ACM lets each author link to their papers on the ACM library with free access.
2) in practice you could always find pre-print copies of any CS paper on their author's website already and everyone knew and nobody ever cared
3) the ACM also let you negotiate an agreement so that you retain copyright for the paper so you can publish it yourself - my company does this
1) Essentially no one knows about the author-izer system or understands how it works, especially outside academia. Readers in companies, poorer countries, etc., can't be expected to guess that the way to read an article is to go to the author's webpage and follow author-izer links (assuming they have been set up). What these potential readers will do is: search something on Google (or follow a link from somewhere else), hit the paywalled ACM DL page, give up. This convoluted system of "open-access from one place, closed-access from another" makes no sense.
2) For authors who actually post preprints of their work, yes, you can read it this way. But then you end up with multiple versions of the same work, that are often subtly different: does the author's preprint integrate reviewer feedback? does it fix some bugs that were found after the camera-ready version was submitted? And anyways, preprints posted on authors' websites usually disappear when they change institutions or retire, so it's not a good solution.
3) Yes, you can retain copyright on papers published with ACM, but then you need to give them an exclusive license to publish, so this still limits what you can do with your work (besides some narrowly worded exceptions). There is also a 3rd option of making the work open-access with no exclusive transfer, but this costs at least $700 per article, which is obviously excessive compared to the actual costs of hosting a 12-page PDF.
So I don't think it's fair to compare publishing with ACM and publishing on arXiv, because ACM is not open-access and publishing with them requires you to pay excessive fees or sign agreements restricting how you can publish your work, i.e., the opposite of what's in the interest of science.
The biggest and most critical miss of the whole process is not having the data a paper is based on published along with the paper. If something is irreproducible, is it really scientific? If I don't have your data, can I really reproduce your results?
I can give you data that say anything. The reproduction that counts is where you obtain similar data independently.
If you want to validate that my claims are justified BY MY DATA then you need my data.
If you want to validate that my claims are justified by the universe, you need to get your own data and should avoid being polluted by mine.
Having said that, I completely support publishing data by default. But not for the reason you state.
I think you may be unfamiliar with how the reviewing process works. I get an e-mail asking me to review an article for a journal. If I say yes, I donate between two and ten unpaid hours of my time producing a review. After I am done with my review, I send it to the AE. The associate editor donates additional hours of his/her time reading my review, plus one or two others, along with potentially the entire manuscript, in order to make a decision on the paper. This recommendation gets forwarded to the editor, who makes the final call on publication. With few exceptions (the only ones I'm aware of being the biggies like Nature, Science and Cell), every single person involved in this process is uncompensated. In some cases the editor may receive a small honorarium, but it's trifling compared to the amount of time it takes to run a large, prestigious journal.
You're absolutely correct that reviewing is a resource intensive process. But you're wrong if you believe that the publishers are shouldering a significant part of the resource burden. This is exactly why people are so pissed off when these same publishers turn around and charge our own campus libraries a five-figure sum to access the same journals that we work basically for free to produce.
A token-curated registry (TCR) is an emerging model of information curation developed by Mike Goldin & Simon de la Rouviere from Consensys. It's an incentive system which decentralises the work of creating and maintaining high-quality repositories of valuable information.
Academic publishing seems like an ideal use-case for this model. There's a frenzy of activity going on in this space at the moment, and hundreds of details to work out - rather than get wrapped up in theorising, we want to release a v1.0 quickly and learn from there.
Please reach out to me via LinkedIn if you want to contribute to the pilot, or can help us raise awareness among the academic community. We're a small but dedicated group, open to partnering with universities, journals, crypto-developers, and other interested parties (especially academics).
Note: we have no plans or desire to make money out of this project. We're a group of well-connected enthusiasts who want to make headway on solving this problem.
Here's Mike Goldin's intro to TCRs, for context: https://medium.com/@ilovebagels/token-curated-registries-1-0...
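For readers who don't want to click through, here is a deliberately simplified, hypothetical sketch of the core TCR mechanic (stake a deposit to list an item, match the stake to challenge it, token-weighted majority vote decides, loser's stake rewards the winner). The numbers and names are illustrative, not Goldin's actual specification:

```python
# Toy model of a token-curated registry: applicants stake a deposit to
# list an item; anyone may challenge by matching that deposit; token
# holders vote, and the losing side's stake goes to the winning side.
# All names and numbers are illustrative, not the real TCR spec.

MIN_DEPOSIT = 100

class Registry:
    def __init__(self):
        self.listings = {}   # listing name -> staked deposit

    def apply(self, name, deposit):
        if deposit < MIN_DEPOSIT:
            raise ValueError("deposit below minimum")
        self.listings[name] = deposit

    def challenge(self, name, challenger_deposit, votes_for, votes_against):
        """Resolve a challenge by simple token-weighted majority."""
        if challenger_deposit < self.listings[name]:
            raise ValueError("challenger must match the listing's stake")
        if votes_against > votes_for:
            # Challenge succeeds: listing removed, challenger wins the pot.
            stake = self.listings.pop(name)
            return "removed", stake + challenger_deposit
        # Challenge fails: listing stays, its owner wins the challenger's stake.
        return "kept", self.listings[name] + challenger_deposit

reg = Registry()
reg.apply("Journal of Good Science", 100)
outcome, pot = reg.challenge("Journal of Good Science", 100,
                             votes_for=60, votes_against=40)
print(outcome, pot)   # kept 200
```

For academic publishing, the listed items could be journals, reviewers, or individual papers; the open design question is who holds the tokens and how you stop wealthy parties from buying the vote.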
Is there a good introductory essay/book on how the academic publishing industry is setup, the workflow and the incentive structure for each party involved?
In fact the most common complaint from peer reviewers is about the length of the review period. Because open-access journals have authors as our customers, the market pressure is to provide excellent customer service, and authors prefer publishers who will process their papers quickly. This pressure is passed on to reviewers who must complete reviews much faster than the old norms under Springer.
It’s hard enough to find good papers without the literature being further polluted with substandard work.
The job of a reviewer isn't a simple yes/no answer. Reviewers often suggest sweeping changes to make work publishable. This helps to ensure that the literature is populated with well-put-together work that is free of bias and glaring mistakes. Otherwise we'd spend every minute helping students spot the difference and wading through misleading research.
I already spend far too much time doing this. I can't imagine how futile literature research would be without peer review. There are already enough illogical conclusions and poor study designs to wade through.
Before peer review they were not dealing with the volume of scientific content that is submitted nowadays. One could debate whether citations are the only metric necessary, but there needs to be a filter before that even happens, or else we will be flooded with an avalanche of garbage research. It would be trivial for someone with deep enough pockets to order a network of junk papers citing each other and debunking climate change or evolution and make a real mess. As painful as reviewing is, it's a necessary evil for now.
Things might be ignored, but only after a person has already wasted their time looking at it and determining it is crap.
It was harder for cranks to publish back when journals were purely physical and a crank would have to come up with the printing costs himself. Now that anyone can publish for free on the internet, peer review is even more important for establishing what content out there is worth paying attention to and which is not.
Peer review happens anyway, but faster, and in public, without the insanity of revise and resubmit.
Journals are not where the action is in Economics or Computer Science. It works for them. Why not for everybody?
I neglect those because journals in my own field are both free to publish in and, more often than not, open-access. I do understand that not everyone is so fortunate, however.
You'll be stunned by some of the accepted submissions, such as these highlighted by "New Real Peer Review", @RealPeerReview on Twitter:
Glaciers, gender, and science: A feminist glaciology framework for global environmental change research. "Merging feminist postcolonial science studies and feminist political ecology, the feminist glaciology framework generates robust analysis of gender, power, and epistemologies in dynamic social-ecological systems, thereby leading to more just and equitable science and human-ice interactions." http://journals.sagepub.com/doi/abs/10.1177/0309132515623368
Black Anality "In turning attention to this understudied and overdetermining space — the black anus — “Black Anality” considers the racial meanings produced in pornographic texts that insistently return to the black female anus as a critical site of pleasure, peril, and curiosity." https://read.dukeupress.edu/glq/article-abstract/20/4/439/34...
The Perilous Whiteness of Pumpkins (specifically, Starbucks’ pumpkin spice lattes) http://www.tandfonline.com/doi/abs/10.1080/2373566X.2015.109...
EGO HIPPO: the subject as metaphor about a trans animal scholar identifying as a hippopotamus. http://www.tandfonline.com/doi/full/10.1080/0969725X.2017.13...
Rum, rytm och resande: Genusperspektiv på järnvägsstationer PhD thesis on gendered train stations. "The overriding aim of this study is to examine how male and female commuters use and experience railway stations as gendered physical places and social spaces, during their daily travels. [...] Through this theoretical frame the thesis analyses gendered power relations of bodies in time, space and mobility." http://liu.diva-portal.org/smash/record.jsf?pid=diva2%3A7424...
“I'm a real catch”: The blurring of alternative and hegemonic masculinities in men's talk about home cooking "while many participants drew on what they saw as alternative masculinities to frame their cooking, these masculinities may in fact have hegemonic elements revolving around notions of individuality and romantic or sexual allure." https://www.sciencedirect.com/science/article/abs/pii/S02775...
Activist Filmmaker, the Living Camera, Participatory Democracy, and Their Weaving "Throughout this article I use lowercase letters to deemphasize the importance of the individualized human in cyborg connection." http://irqr.ucpress.edu/content/10/4/340
Disaster Capitalism and the Quick, Quick, Slow Unravelling of Animal Life "Sea otters have barely survived centuries of colonial and capitalist development." https://onlinelibrary.wiley.com/doi/abs/10.1111/anti.12389
Queer organising and performativity: Towards a norm-critical conceptualisation of organisational intersectionality http://www.ephemerajournal.org/sites/default/files/pdfs/issu...
Speciesism Party: A Vegan Critique of Sausage Party "In this article, we have described how Sausage Party reflects and reproduces intersecting oppressive power relations of species, gender, sexuality, ethnicity and different forms of embodiment." https://academic.oup.com/isle/advance-article/doi/10.1093/is...
"Wow, that bitch is crazy!" PhD Thesis about watching "Bachelor" with friends. https://search.proquest.com/openview/f4a6dbda2dc523609ad56c5...
Diaries, dicks, and desire: how the leaky traveler troubles dominant discourse in the eroticized Caribbean Paper on being a sex tourist in the Caribbean. http://www.tandfonline.com/doi/full/10.1080/14766825.2011.65...
The otter article, for example, appears to be a well-researched analysis of otter populations in Alaskan waters correlated with human settlement and economic activities. The pumpkin latte article looks into the correlation of light colors and whiteness and advertising in the US, an issue which frequently pops up in marketing gaffes (including one as recently as last week for a beer commercial). The queer organizing article looks at diversity in organizations, analyzed using sociological framework (i.e., "norms") rather than the business management framework. The Sausage Party article is an academic analysis of the movie, which is a surprisingly deep commentary of race, gender, and ethnicity in the U.S.
The above is a completely fraudulent article that was accepted for publication to demonstrate the counterpoint to your argument. The article is literally gibberish, but it got through 'peer review'. I know the lead author on this one, and she did this to make a point about the inconsistency of peer review. The article is so clearly fake that it's laughable, but that is the point.
I don't want to discuss the merits of individual papers, but I think as a whole they say something worrying about the current state of academic publishing.
So, let me just tell you why I am annoyed and concerned with these and similar papers:
1. Often, they torture language, in that the authors don't seem to write to elucidate and educate, but to obfuscate. (I'm well aware that scientific fields develop their own jargon; that it is useful; that the meaning of a term doesn't necessarily correspond with the ordinary meaning; etc. But even granting all that, it seems to me that a lot of that writing is wilfully opaque and unnecessarily jargon-laden.) (Note, BTW, that you communicated the gist and utility of the articles much better than the authors themselves.)
2. Publishing "pursuant to mandatory thesis requirements" is part of the problem: people get degrees and university positions with research that does not expand the frontiers of human knowledge. Autoethnographic research is particularly galling in that respect (such as a recent paper about the time the author fell off a chair).
3. They delegitimise academia. There used to be a very broad consensus in most societies that education and research are immensely valuable and ought to be supported by government, and that academics should have immense freedom to pursue what they deem important and valuable without any interference or censorship (the essence of tenure). Papers such as these corrode that consensus.
4. Critical theory papers are ineffectual, I'd say, in achieving their laudable goals. I was going to mention MLK and Marx and Keynes ("Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."), but this is too long as it is.
5. "Publish or perish" and Pay-to-publish open access also rear their ugly head.
I maintain that not only rejected papers, but also some accepted papers are shocking.
So, yes, you would need to assign a reputation to citations. I don’t think that’s a solved problem or even simpler than the problem of assigning reputation to papers.
So in the short run, you still need a prospective measure of scientific value, and acceptance for publication in a selective journal ("impact factor") serves that purpose well.
Note that my supposed idea was to use 'pagerank' to replace reference count as the value of a work. Not to replace peer review.
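To make the idea concrete, here is a minimal sketch of what "PageRank over a citation graph" could look like. The paper names, the toy graph, and the `pagerank` helper are all hypothetical illustrations, not anyone's actual proposal: each paper's score flows to the papers it cites, so a citation from a highly-ranked paper counts for more than a citation from an obscure one.

```python
def pagerank(citations, damping=0.85, iterations=50):
    """Score papers by iterated citation weight, not raw citation count.

    `citations` maps each paper to the list of papers it cites.
    """
    papers = list(citations)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for citing, cited in citations.items():
            if not cited:
                # A paper citing nothing spreads its rank evenly (dangling node).
                for p in papers:
                    new_rank[p] += damping * rank[citing] / n
            else:
                share = damping * rank[citing] / len(cited)
                for c in cited:
                    new_rank[c] += share
        rank = new_rank
    return rank

# Hypothetical toy citation graph: A cites B and C, B cites C, D cites C.
graph = {"A": ["B", "C"], "B": ["C"], "C": [], "D": ["C"]}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # C, cited by everyone, ranks highest
```

Note that this still only ranks papers by who cites them; it says nothing about whether the citing papers' peer review was sound, which is the part the parent comment argues is unsolved.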
Consortia memberships in the US and Canada are not uncommon either. However, Germany possesses our only perpetual license.
A lot of us want open access; however, we are not sure how we would fund ourselves. Our subscription rates are generally very low, especially compared to those of the large journals.
Re: open access? Perhaps. I should note that this area is not my expertise. However, what do we do when the grant runs out?
Even so, most of our revenue comes from subscriptions. A grant may pay for hosting, but what about the time of the developers, or of those working on the publication side? We are already running a near-skeleton crew, especially compared to what we had here in the 1940s-80s.
Well, you get another. Lots of non-profits run only on grants.
Have you thought about applying for EU grants? Open Access is one of the goals on Horizon 2020, the €80B EU program, but even after that there's no shortage of EU cash being dumped on anything even barely related to "innovation".
> [...] an impasse in fee negotiations between [Springer Journals] and Couperin.org, a national consortium representing more than 250 academic institutions in France.
Of course I would prefer solutions which get rid of Elsevier, Springer, etc.
HN: Just do it yourself! It is easy! You already do the hard part! Yay!
-- -- -*-
At the same time.
Poster: this is how you run your small db
HN: Outsource your databases! Outsource your apps! Outsource your auth! Outsource your mail! It is difficult!
Both knowledge extraction and signaling paper quality are fascinating, hard problems in modern science. I wish I had answers beyond criticism of the current system, but it might be a problem for people far smarter than I.
Most people give no shit about most papers. The fact that even the shittiest publication will be reviewed by a handful of experts, even though most people will never read the paper, is as good as you're gonna get imo.
Please help us by expressing your opinion and public support on the forum. There is still a lot of work needed to convince the journals' editors.
Disclosure: I work in the same section as the SCOAP3 team, at CERN.
Publishers need Universities to both consume the product and create the product. If they cut them off, it will only force the inevitable.
The research itself is funded by the taxpayer. The peer review is funded by the taxpayer. What endowment funding is needed to replace whatever Springer is actually providing?
This whole mess is solvable with a single law. Just have the EU or the US legislate that any taxpayer-funded research (including research with external grants but done by researchers in public universities) must be available free of charge for download to any citizen. The whole "sector" would unravel pretty quickly after that, as it should. Charging the taxpayer, through huge contracts like this one being renegotiated in France, huge sums for access to the very research the taxpayer has already paid for in the first place is a complete travesty.