Who's downloading pirated papers? Everyone (sciencemag.org)
1085 points by nkurz on April 29, 2016 | 381 comments



Elsevier and others can go something themselves.

Their behavior over the last two decades has been little more than reprehensible rent-seeking. Whatever goodwill they had disappeared as they sapped the dollars of students and non-profits with increasing ruthlessness. See, e.g., one of many such figures: http://www.lib.washington.edu/scholpub/images/economics_grap...

It is absurd and dishonest to call Sci-Hub "piracy", given that all of its contents were originally created and given away with the express goal of wide dissemination.


Totally agree. I have gone through academia and am now in industry; the other day I wanted to read and browse basic papers in IEEE Xplore for basic knowledge on new topics. Since I am no longer a student, I am willing to pay money - as much as a few hundred dollars for unlimited access. Instead, for hundreds of dollars, IEEE Xplore offers a measly ("generous") 25 downloads a month or some such nonsense. It just seems so miserly of them. We researchers gave them our research to publish essentially for free, and they turn their back on even legitimate paid avenues for us to get the papers. We are willing to pay; but where is the option?

Anyone that has gone through research and academia knows that you often need to browse many papers to even find the ones worth reading. A 25-paper cap includes browsing PDFs; you will easily hit that cap in a single day of browsing alone.

IEEE and IEEE Xplore can go something themselves. This is coming from a published IEEE author and academic researcher.


"Anyone that has gone through research and academia knows that you often need to browse many papers to even find the ones worth reading."

Exactly. And without the monthly subscription plan (something I wasn't even aware of until this comment thread), the going rate seems to be around $30 per paper--rent-seeking to the point of highway robbery. One thing that these publishing companies could have done long ago was allow some sort of "free preview" of the full text or "full refund within 10 min" option to help deal with this problem. I have no idea how this would be done technically to prevent "pirating" but, as it is, the pay-per-paper system is completely disconnected from the way researchers browse papers.


Have you taken a look at deepdyve.com? $40/month ($30/month if bought as an annual subscription) for unlimited online access to a ton of journals, including a bazillion from IEEE: https://www.deepdyve.com/browse/publishers/ieee

If you create an account but do not subscribe, you can view all the papers but only for 5 minutes each. You can then subscribe to read the ones you need more of, or purchase access to 5 papers for $20.

The above is for online reading. If you need PDFs, you can purchase those but they are not cheap. They are usually what the publisher charges on the publisher's site less a 20% discount.


I have no doubt many universities offer this: see if you can join a university's library for access to the papers.


After the IEEE $25 monthly service came out, I subscribed for three years until I finally canceled it. Honestly, most of the papers I downloaded were junk; the only reason I downloaded them was to verify the available references.

Declining paper quality plus ballooning publication quantity are the main reasons for this "Sci-Hub" crisis.


This sounds like an increasingly commented-on issue: there are now hundreds and hundreds of specialty journals, but quality is patchy. In fact, I think we may be seeing inherent problems coming to the fore now that journals are more focused on profits: often there is publication bias towards certain topics; in other cases the bias is towards positive results and breakthroughs, with little enthusiasm for publishing retractions or establishing reproducibility.

One thing that always sticks in my craw is that the raw data is very rarely published for analysis. Many may disagree, but I think that when the raw data for experiments is not published, the temptation to commit academic fraud is very high.

I often wonder, with some of the more recent scandals, whether the problems might have been picked up a lot faster had the data been more readily available.

I also have noticed that many journals don't say who the reviewers are after publication. For instance, I was looking at body psychotherapy the other day and came across something called biodynamic analysis, which appears to be a widely held and well respected view within the subdiscipline. I was amazed to discover that something called "grounding" was a serious concept that underpins this analysis, so I looked at the Wikipedia references and discovered that there was at least one journal citation to The Journal of Alternative and Complementary Medicine. The article seems to be attempting to make a link between blood viscosity and electrical grounding of humans to the earth! [1]

Now there is another article from the same journal showing there is virtually no impact on the body, so I started to wonder how this passed peer review. The answer is - I have no way of knowing, as they don't make it clear what their review policies and procedures are, and they appear to charge authors to publish work.

In other words - it's pseudoscience dressed up in credibility. And it is making a serious impact in the world of psychology!

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3576907/


I have found recently that reviewers in natural language processing, in aggregate, place negative value on reproducibility.

Perhaps 1 out of 3 reviewers gets it, another is bored by the fiddly and detailed methods section, and the third was subconsciously hoping for a "brilliant", mystifying secret sauce. "Oh, that's all you did?" he asks. "I could have done that."

Of course you could have done it, I just told you how and pointed you to the data. I also compiled that data, and you won't even let me tell you that because of blind review.


Wow.

I'm an occultist. One of the first regimens I did to learn occultism was grounding and centering. It's very much an esoteric skill, and not something to be cited in a paper... unless it was being discussed under a 'microscope'.

I would enjoy in using fMRI or other diagnostic tools to see what physiologically happens when I do those things. But I have no qualms; that shouldn't be in any academic paper until my recommendation of measuring it is done.


> I would enjoy in using fMRI or other diagnostic tools to see what physiologically happens when I do those things. But I have no qualms; that shouldn't be in any academic paper until my recommendation of measuring it is done.

That's not how science works...


Oh, so researching an interesting phenomenon that is already cited in journals is somehow "not science"? I'd be careful about letting your own biases affect you negatively.

Simply put, there may or may not be anything there. With diagnostic methods and feedback from the person, we can start to determine if there is a measurable effect. If there's something there, we research further. If not, we publish it as evidence of "no noticeable effect". This also goes to show that we (the academic community) should be much more accepting of papers showing "no effect", rather than only positive papers. Knowing the dead ends that others went down is just as valuable as knowing what works.

But in actuality, I was also giving on-topic discussion about where those phenomena are discussed at length: in studies on occultism. That's just a factual statement with no value judgment attached. Whoever is more interested can do their own research, with this topic in mind.


What did you mean by "that shouldn't be in any academic paper until my recommendation of measuring it is done"? I think that might be the sticking point.


I was using Google's voice keyboard. I was at a stoplight, spoke the comment, and submitted it.

I'm all for the scientific method, be it showing positive, negative, or no results. I also know what isn't currently scientific, although I am curious whether some of those 'things' can indeed be proved.


This is not how science works; that is just paying attention to results that back your hypothesis or biases. In fact, doing this is not science at all.

EDIT: An example of this in the wild is how drug companies would cherry-pick research to back the outcome they wanted; cherry-picking is an anti-pattern of true science.


Happens to the best of us. My editing issue is Apple's autocorrect; I get some truly strange results sometimes.

That makes sense - though I hope you don't mind me asking: when you say "I also know what isn't currently scientific", do you mean untested hypotheses?


Everything is "scientific" so long as it is subjected to unbiased skeptical experimental evaluation. That's all it means to be "scientific."


Science needn't mean experiments. Just ask astronomers.


Hypothesizing then observing or reviewing past observations to verify the hypothesis is a form of experimentation. Every revelation of astronomy has been the result of experiments.


This is what I was responding to.

I happen to think occultists are wrong (as in: factually incorrect in their beliefs). But they are entitled to think whatever they want, so that's not a problem. What is a problem is objecting to data being collected and published.

But it seems that his speech-to-text translator wasn't accurate and that's not what he meant.


Did you really have to give the guy such a dick response?


If you live in a city with a university, like Toronto, check out getting a library card to their library system. It costs me $300/year for unlimited research papers. The only downside is that they are retrievable in person only, so you kind of need to be either close to the university or keep a list of papers you need before you make the trek out.


You are paying $300/year for access to information that was given to the world freely, and sacrificing your time and effort to get it? Do you see the problem here?


He didn't claim there wasn't a problem, just gave practical advice for making the most of an unfair situation.


Even more practical advice: 'use Libgen'.


I somehow misread that as 'use Lojban' and was momentarily both confused and delighted :\


It's unfortunate that you are now clear-headed and disappointed.


Indeed, "Lojban" and "practical" in the same sentence is not something you see every day.


And also all the other resources the university library offers. Especially in the summer months, uni libraries are amazing and massively under-used resources. Working/learning in a public library is usually a drag, but uni libraries are awesome.

I used to live a couple blocks from a uni library, and I practically lived there. In the summers I drove straight there from work, got lost in the stacks learning about random things, and only left when it closed.

Fortunately that library was completely free to the public, but if it weren't I would have happily paid $100/mo during the summer months for access (Probably saved that much just in utilities anyways.)


What if your city doesn't have that?


The IEEE is such BS. I was a member but left because of all the spam. They continued to spam me for years after I left.


IEEE is not for Engineers; it's a Management Overlay.


I recognized what a scam IEEE was my first year of grad school. So many fees, sub-society fees, transaction fees...something themselves indeed. Many times over.


I don't understand academia (genuinely):

Couldn't the researchers simply publish their own papers? Make torrents etc? Or are they prevented from doing that somehow?


Journals are used as a proxy for quality. A publication in "Science" or "Nature" is generally regarded as being better than a publication in, say, the "Proceedings of the National Academy of Sciences".

Bibliographic metrics, like the number of publications and where they are published, are used as a proxy for the quality of someone's research. They can affect grant funding and career advancement.

Researchers can publish their own papers. But researchers also tend to want that work to be disseminated, both to spread knowledge and to get recognition for the work. Journals help simplify that process. A field typically has one or two main journals, which most people in the field track. It's more likely that a publication will be noticed if it's published in one of those journals.


Haven't we solved the quality issue in other situations by giving users the ability to rate a scientific paper?

What's wrong with giving other researchers the ability to rate and comment on a paper?

It's not a perfect filter, but I would argue it works just as well as this journal reputation system, and it prevents issues like people having to publish with these journals to get recognized; not to mention it encourages discussion between academics, which could lead to some breakthroughs as well.

You want peer recognition, so why not go to your peers directly instead of a proxy like these publications?

This system is a product of a bygone era and, like all other systems of its kind, it is resisting change by relying on copyright.

I see copyright as the root of most of our current society's issues; it is just not compatible with how we work. Not to mention, most of these papers were made with public funding, so there shouldn't be any copyright on them to begin with: the work belongs to the public domain, which paid for it.

A publisher just putting a paper on their website and slapping a price on it is pure theft, and they should be treated like the pirates they are pursuing.


"by giving users the ability to rate a scientific paper."

LOL! No. Ratings are nearly useless. By which I mean I don't know how I would incorporate ratings into my own research.

How do you keep a ratings system from being a proxy for popularity, rather than a measure of novelty or utility? How do you keep it from being easily gamed? Does a "5.2" for a 1995 paper have the same meaning as a "5.2" for a 2010 paper?
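
To make the 1995-vs-2010 problem concrete, here is one naive patch people sometimes suggest: normalize each score against its publication-year cohort. This is purely illustrative (the function and the tuple format are invented for this sketch), and it still does nothing about popularity bias or gaming:

    from collections import defaultdict
    from statistics import mean, pstdev

    def year_normalized_scores(ratings):
        """Standardize each paper's raw rating against its publication-year cohort,
        so a score is only compared with scores of papers from the same year.

        ratings: list of (paper_id, year, raw_score) tuples.
        Returns {paper_id: z_score}.
        """
        cohorts = defaultdict(list)
        for _, year, score in ratings:
            cohorts[year].append(score)

        normalized = {}
        for paper_id, year, score in ratings:
            mu = mean(cohorts[year])
            sigma = pstdev(cohorts[year])
            # A cohort whose scores are all identical carries no ranking information.
            normalized[paper_id] = 0.0 if sigma == 0 else (score - mu) / sigma
        return normalized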

"You want peer recognition so why not go to your peers directly instead of a proxy like these publications"

Do tell me how. Post on my blog and hope people come across it?

Journals outsource marketing. Get rid of journals and something must replace that service, or I have to become a marketer.

"This system is a product of a bygone era .. relying on copyright"

You are mistaken. The system does not rely on copyright. The scientific publication system dates from the 1600s, starting with Oldenburg's 'Philosophical Transactions of the Royal Society'. Copyright dates from the 1700s, starting with the Statute of Anne.

The scientific publication system depends on reliable selection and curation. Its current form is dominated by large publishing companies which use copyright law for revenue generation, but that's only been true for the last few decades. Scientific publication also existed in the USSR even without copyright protection. (article 103(2) according to https://en.wikipedia.org/wiki/Copyright_law_of_the_Soviet_Un... gave "permission to reproduce published scientific, artistic, or literary works as excerpts (or even entirely) in scientific, critical, or educational publications").

"most of these papers were made with public funding, so there shouldn't be any copyright on them to begin with: the work belongs to the public domain, which paid for it."

I understand and sympathize with your argument. But in practice things aren't so clear cut. Rarely does the money come 100% from public sources. Is your argument that even 1 cent of public funding means the entire work product must be in the public domain?

Are you making the moral argument that copyright should belong to the funding source? Or is it limited only to public funding sources? For example, if 100% of the money for a project comes from the Bill & Melinda Gates Foundation, then do you want the foundation to control the copyright of the work product? What if it's 80% Gates and 20% publicly funded? Is it public domain then?

If 3 years of research was publicly funded and resulted in a paper, and 2 more years was privately funded by BigCorp to analyze the data further, then must that additional two years of work also be in the public domain?

And on and on. Unless "1 cent => public domain always", then there's no clear line. Is that your argument?

(BTW, there is another clear line - work by a US government employee is in the public domain. A small percentage of the scientific papers are in the public domain because of that.)


> How do you keep a ratings system from being a proxy for popularity, rather than a measure of novelty or utility? How do you keep it from being easily gamed?

To respond to this particular point, I'm not familiar with any characteristics of the academic journal system which prevent these problems either. Name recognition is often key to getting into competitive journals, which is just another way of calling it an easily gamed popularity contest.


Popularity plays a role. Here's a real life example: One of my quantum physics mentors submitted a paper to PRL (a high prestige physics journal) and it was rejected. Why? Because it came from a small, rather unknown university in Spain.

He was later admitted to Cambridge for a PhD and sent the exact same paper (same formatting, content, and layout, but a different department/university). It was immediately accepted. Does it necessarily mean that scientists in poorer countries are doing bad science? Or just that they didn't get their visa, couldn't leave their country, or simply want to stay there?

The first filter most editors use is: "Do I know this research group? Are they part of the club? Have they published in top journals?". If not, it won't get read (unless it's a compelling and exciting title and abstract).


Phys Rev Lett isn't double blind?

Double-blind review doesn't always work; sometimes it's painfully obvious who the PI is. But it should stop stuff like this from happening.


I haven't published there in a while but I don't think so.

> We are no longer able to accede to requests from authors that we withhold their identities from the referees. Such "double-blind" reviewing has been discontinued. [0]

[0] https://journals.aps.org/prl/authors/editorial-policies-prac...


Certainly the current system can be gamed, and is; complaints about problems with bibliometrics started about 10 seconds after the first metric was published.

But the argument is that "giving other researchers the ability to rate ... [would work] just as well as this journal reputation system".

My question is, simply, why should I believe this?

Bibliometrics often look at citations. Citations are, yes, a marker of popularity and social connections, but also have certain scholastic requirements. Editors and reviewers can and do point out missing references or highlight too many irrelevant self-citations. This is a type of moderation system. Imperfect certainly, but it does provide some grounding.
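
For concreteness, citation-based bibliometrics can at least be computed mechanically from public citation data. The h-index is a standard example; this minimal sketch is just an illustration, not an endorsement of the metric:

    def h_index(citation_counts):
        """h-index: the largest h such that the author has at least h papers
        with at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # e.g. h_index([10, 8, 5, 4, 3]) == 4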

Also, it's not like the scientific publication system only contains original research papers. Review papers fill a curatorial role, with the advantage of a single voice (even if from co-authors) making the comparisons, so I at least have a relatively constant baseline.


I think a lot of ratings/reputation/moderation problems can be engineered around if you make all of the moderation metadata publicly available in a form that enables competition in the search functionality. Make articles, users, links, and comments up and down-votable in a way that is public and immutable forever, and de-couple the problem of mining the data.
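
A toy sketch of what "public and immutable forever" might look like in practice (entirely my own construction, not any existing system): an append-only, hash-chained log of raw moderation events that anyone can mirror and mine with whatever ranking algorithm they prefer.

    import hashlib
    import json
    import time

    class VoteLog:
        """Append-only log of moderation events (votes, flags, etc.).
        Each record embeds the hash of the previous one, so published
        snapshots are tamper-evident; anyone can mirror the log and build
        their own search/ranking on top of it."""

        def __init__(self):
            self.records = []
            self.last_hash = "0" * 64  # genesis value

        def append(self, voter, target, action):
            record = {
                "voter": voter,    # e.g. an ORCID or a public key
                "target": target,  # article DOI, comment id, ...
                "action": action,  # "up", "down", "flag", ...
                "ts": time.time(),
                "prev": self.last_hash,
            }
            self.last_hash = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            self.records.append(record)
            return self.last_hash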


Traditionally peer review is done anonymously from both sides. However, it is often possible to get a good idea of who the author is based on their related work, style and choice of references. This is particularly the case where the field is very specialised and there are only a few people qualified to judge papers.


This depends a lot on the field. Every neuro paper I have reviewed has the author's names and affiliations intact, along with unblinded citations like "Experiment was done as we described previously (SameFolks et al., 1999)."

On the other hand, I've always been suspicious that people can reliably guess the authors when doing a blinded review. I've seen one study that said 30% of reviewers (in a small sample) could do so, but that hardly seems like a reason to abandon the idea altogether.


> Unless "1 cent => public domain always", then there's no clear line. Is that your argument?

I don't see a problem with this. If you want public money, you have to put your work in the public domain. That's just a condition of getting the money.

The NIH does this. If you receive a single penny from an NIH grant, the work must be posted to PubMed Central.


Is it practical and desirable to have it both ways? Be published in a journal but also publish it on your own blog for free access?

So if you publish a paper titled "XYZ" in a journal and also put it on your blog, making sure your blog is visible to search engines - anyone searching for the paper's title "XYZ" will find it for free on your blog?

(I'm not involved with the science community)


You're frequently not allowed to self-publish your manuscripts for free if they're submitted to a journal. The publishing agreement you sign precludes it.


Many people do that. Google Scholar, among others, can help find them better than a title search on a general purpose web search engine. For the most part, your proposal represents the status quo.

Note that my comment concerned the proposal "why not go to your peers directly instead of a proxy like these publications". Your proposal replaces "instead" with "in addition to".


Rate papers by successfully driven-back attacks (critiques) and by citations, with ratings that time out; review only the quality of the attacks.


A rating system would generally become an indicator of other academics' level of agreement with a paper, rather than an indication that it passed a quality and significance threshold. Citation ranking schemes already exist, but the citation counts are heavily influenced by academics being unable to NOT mention a dissenting view because it was recently published in a prominent journal. And I'm not sure examples like Amazon book ratings, HN karma or professionally-optimized PageRank can really be claimed with a straight face to be superior to good old-fashioned peer review for curating quality content.

I do think open commenting systems would be a boon to academic discourse though.


Does pre-publication "peer review" not already function to some extent similarly as an indicator of level of agreement rather than a disinterested, objective quality and significance threshold? Everything I know of human processes suggests that it would.


I worry that a rating system would reduce the amount of thought each reviewer puts into their decision, but it's certainly true that peer review isn't perfect. Famously, the Fourier transform was not published for several years due to a disagreement with Lagrange.

> When Fourier submitted his paper in 1807, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: ...the manner in which the author arrives at these equations is not exempt of difficulties and [...] his analysis to integrate them still leaves something to be desired on the score of generality and even rigour.

https://en.wikipedia.org/wiki/List_of_important_publications...


I will comment on it from my point of view (theoretical CS PhD student):

> What's wrong with giving other researchers the ability to rate and comment on a paper?

I do not see anything inherently bad with this (provided very good moderation tools are in place). However...

> I would argue it works just as well as this journal reputation system

As a journal reputation system, possibly; although I find that experts often have very different taste compared to, say, the general Hacker News public -- for instance, machine learning has a very high "coolness factor" these days compared to other areas of computer science, but it would be unwise to make a system where a weaker paper on machine learning gets vastly more credit than a better paper on, say, approximation algorithms.

However, an important part of the journal submission process is the fact that the editors ask members of the academic community to spend several days on the paper, noting major and minor details and checking the validity of the proofs.

Since the reviewers do their work for free, there is no reason why there cannot be an open system that replicates the same process; still, just "adding a comment system to arXiv" will probably not be enough, as people rarely spend that much time on comments (15 minutes tops, not entire days).


Why do reviewers work for free? And why do scientists agree to give away their work for free?


I think it's because no one can agree on a price, taxes and international payments are too expensive, and there's a tit-for-tat understanding.

First, the price. The last paper I reviewed took me about 10 hours. I charge $200/hour. Should I charge $2000 for my time? If I'm paid by the hour, who tells me when to stop? If I'm paid a flat fee, how much is that fee?

I have seen a reviewer write something like "I read the paper twice. It seems good. Publish it." Does that earn the same as someone who spends 10 hours on a detailed critique?

Second, does payment constitute work-for-hire? Does the publisher and/or reviewer need to pay taxes? Might some reviewers have a contract which prevents them from taking on part-time work? What if the reviewer is overseas? Do tax agencies from two different countries need to be involved?

Third, most people expect to publish and get their papers reviewed. If a single author has three reviewers, then that author should expect to do about three reviews. This is an imperfect system which rewards publishing rather than reviewing, but there are also some compensations for the reviewer.

Moreover, I recall one story - I don't know the validity of it, but it has a ring of truth - where a daycare got annoyed with parents who picked up the children late. They decided to fine parents should that happen. As a result, children were picked up late more often because the fine was interpreted as a fee for extra daycare, and no longer social chastisement.

I suspect that reviewers are in a similar boat. I'd feel different about it if I got paid a pittance than if I did it for free, so long as others do the same for me in the future.


Your recall is correct - it was an Israeli daycare: http://priceonomics.com/effectiveness-of-fines-for-late-pick...


Reviewers don't really work for free; it's generally accepted that reviewing is part of their salaried job.


For free in the sense that the journals pay them nothing. So let's recap: the public/university pays for the researcher and the research, the public/university pays for the reviewing and submission process, and finally the public/university pays to access the work they already paid to create and review. And people wonder why everyone but the journals thinks the journal process is broken.


Reviewers work for free for a few reasons. First, it's prestigious to be a reviewer for a top journal and very beneficial for getting tenure. Second, it allows you to create connections with other top researchers. Third, it's giving back to the community by doing a service that needs to be done.


How do people know that I have (or haven't) been a reviewer for a top journal? Do they have a public list of reviewers?

Of the people I meet at a conference, how do I figure out who has been a reviewer for Nature, and how do I verify that claim?


My thoughts exactly. I've never known a journal to do anything to promote the fact that someone has reviewed for them (e.g. produce a letter on request that states which articles someone has reviewed for them). It'd be a nice and free way of creating value for reviewers, who generally work for free, but I don't know of any journal that's ever done this.


It's hard to explain how important the journals are for a scientist. It's not even the effect that a paper in Nature is (in some disciplines) the fastest way to tenure. It's got an even bigger psychological factor: if you get into Science, you frame it and hang it on the most prominent wall in your living room or office. With a Nobel too rare to practically hope for, this system is the only reward all those mice-conditioning mice are conditioned to.


Pretty much this:

Researcher: "I have to go to this club so people think I'm fancy"

Club: "The club charges a lot so only fancy people go there"

Other people: "This person is very fancy because they go to that club"

The end result is that people buy their "fanciness" by being in that club.


I'd like to point out that a journal is not purely a proxy or even an aggregator - I've gotten more detailed peer review with the bigger journals, because the reviewers are more critical and the editors more selective. I didn't start out as a fan of peer review, but now I say the more the better; it has markedly increased the quality of all of my papers which have undergone it.


I really like your counterargument, focusing on the process instead of the output. However, I have to ask: isn't the end goal of submitting a manuscript to get recognition for it? Do you genuinely hope every time that your submission will be mercilessly torn apart and possibly rejected instead of getting recognition for the work you've done?

Truly, I have seen article reviews ruin a budding research scientist's career. Of course, one could argue that other problems contributed to their finding themselves in that position.


If there is something worth tearing apart, I would much rather it be torn apart than published. Recognition for my work is not what scientific articles are for, right? When I read a paper that has unaddressed problems, it's a waste of time - possibly years of my time if I base my own work on that weak foundation. I don't want to inflict that on anyone.

I would think that if peer review ruins a scientist's career, it would only be through a lack of support from the advisor. If the review is accurate the advisor should support the revisions, and if the review is totally mistaken the advisor should support resubmitting elsewhere.


I think reviews tend to be overly critical primarily because reviewers are forced to find excuses to reject 9/10 of the papers that land on their desk, most of which they weren't even interested in reading.

I strongly believe that review could be almost exclusively through positive feedback, with crowd-sourced minimal annotations in the margins, i.e. "the idea in this paragraph came from <earlier paper link>", and "this idea is novel; wish I thought of it", and "there is a strong analogy between this paragraph and <some other tangentially related idea>"


It is not my experience that reviewers are overly critical. The notes I have seen fall into three categories: 1) pointing out the nagging issue that I knew but avoided, 2) misunderstandings, which are annoying but show how the paper is not clear enough, 3) genuine insights which I would not have seen. I have never wanted these to be more positive, but I can see the benefits such a cultural shift would have. Phrasing the notes as positive feedback would benefit the happiness of the authors, but make the notes lengthier and increase the effort for the (volunteer) reviewers. I wonder if the editor could reframe the reviewers' notes to improve the tone before sending them to the authors?

At least in chemistry where I work, a rejection rate of 9/10 at the review stage would be very unusual. It is possible that reviewers in the triage stage are harsher but I never see these reviews.


I'm confused by this. The researchers don't get paid for their paper, and it seems like the peer reviewers don't get paid for doing the peer review:

http://www.nature.com/news/open-access-the-true-cost-of-scie...

So why exactly are publications locking these papers behind massively expensive paywalls when writing them and reviewing them are being done for free?


You confuse two things. The main career goal of a researcher in academia is generally moving up to the tenure track. The way this progress is measured is typically a) publication in journals with a high impact factor and b) being on the technical review committee of said journals.

The reason researchers publish in these journals is that it gives them street cred. There are several open access journals - but participation is scanty, since their impact factor is unknown and does little to promote a researcher's career.

Speaking as someone who has published myself, the paywalls are highway robbery. But, imho - it is a problem only the researchers can solve by concertedly supporting open access journals and getting involved in them (maybe a yc research idea???). The established professors would need to take the lead on this - not the underlings.

Personally - I've seen the "light" and stayed back in industry as opposed to moving into academia as I had hoped. The rat race for funding & publications & street cred seemed a lot worse there.


Another important reason for slow adoption of open access is that most open access journals charge significant publication fees ($1000 or more.) Most academic labs run on shoestring budgets, so that's a lot of money. The principal investigator has to choose between standing on principle and publishing open access, or paying for equipment and reagents or sending students to conferences. That's not an easy choice to make, especially pre-tenure.


Agreed - that's a good point. In fact Elsevier has caught on to this as well. I remember when I published with them they gave me an option of making my paper open access if I paid somewhere around $1000 (or maybe it was $3000). I ignored that option. I agree there are costs - typesetting (well - I did send a LaTeX manuscript), etc. but not $1000 worth. If they'd said $100 - now I would have definitely considered the open access path.


Their re-typesetting your articles can just be pure grief, too.

For the last journal article I published, in a journal run by Springer, we submitted a nicely typeset LaTeX article using the template they provided.

Our best theory is that they then copy/pasted the result into Word. It came back to us as a "pre-print" PDF with tons of errors, font changes, etc. We had no good way to compare it to the original other than printing them both out and going through, line by line, looking for discrepancies. There were dozens and dozens of problems. And it was a 70 page article!
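
For what it's worth, that line-by-line comparison can at least be scripted these days. A rough sketch, assuming the pdftotext tool from Poppler is installed and extracts text well enough from both PDFs to make a textual diff meaningful:

    import difflib
    import subprocess
    import sys

    def pdf_text(path):
        """Extract plain text from a PDF using the external pdftotext tool."""
        result = subprocess.run(
            ["pdftotext", "-layout", path, "-"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.splitlines()

    def diff_pdfs(original, proof):
        """Print a unified diff between our own typeset PDF and the publisher's proof."""
        for line in difflib.unified_diff(
            pdf_text(original), pdf_text(proof),
            fromfile=original, tofile=proof, lineterm="",
        ):
            print(line)

    if __name__ == "__main__":
        diff_pdfs(sys.argv[1], sys.argv[2])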

After sending them corrections we got to repeat this process 3 times.

And now they charge folks for our article.

Fuck. :(


I'm sure nobody would ever blame you if the original typeset paper were to suddenly land on Sci-Hub.

But seriously, what you're basically saying here is that in fact it would have been better to have published it yourself - except you may not have had the same audience who would have known about the paper?

Or did they also go through rounds of peer review and ask for clarification or find errors?


Because they can and academic scientists don't do anything to change the status quo.


That's a bit harsh--many academic scientists can't do much to change the status quo. I would love to submit to open access journals, but I also really like eating, having a place to sleep, and other indulgences that having a job provides.

As a grad student, one basically doesn't have much say. Postdocs have a bit more latitude, but need to "play the game" to land jobs; pre-tenure faculty are in a very similar position if they want to keep theirs. Even tenured PIs are in a pretty precarious position: many are on soft-money positions (no grants, no salary), all of them need grants, and their trainees also presumably want to move up the grad student -> postdoc -> faculty ladder.

Getting this to change is a massive collective action problem and it's really going to need a push from institutions and the big names at the very top. This holds not just for publications, but a lot of other issues with academia too. I'd love to spend more time making our code solid and publicly available, but there's essentially no payoff for that either.


I wouldn't say scientists are doing nothing. See for instance the "Cost of Knowledge" project spearheaded by top mathematicians: https://en.wikipedia.org/wiki/The_Cost_of_Knowledge


Not seeing much output from that initiative, which I signed, btw.


Who gets paid more: a tenure track professor doing research or an Associate Dean for Research?

This is the same phenomenon.


> A publication in "Science" or "Nature"

What excludes the academic from also publishing elsewhere?


Even if they could (and they can publish their preprints), scientists rarely bother. That's because they want to communicate their science to people who matter to them (e.g. grant giving bodies), not to the general public.


Which raises the question of why we the taxpayers should be funding this research if it's not easily available for the public good.


Copyright, according to the US Constitution, exists for the public good:

> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

You may disagree with that conclusion, but there's 200+ years of belief that it's possible to be "for the public good" and be covered under copyright which restricts redistribution. The short term loss is outweighed by the long term gain, or so the belief holds.

Bear in mind that the copyright term back then was decades shorter than it is now.

That said, grant organizations are turning towards requiring publication either in an open access journal, or by having papers restricted for only a short time, rather than the full length of copyright.


The Sonny Bono Copyright Term Extension Act, and even the Act before it (the Copyright Act of 1976), seem rather to violate the Constitution, then:

1. It doesn't promote the Progress of Science and useful Arts

2. 120 years since creation or 95 years since publication for corporations, or the life of the author plus 70 years is not what I would call particularly "limited Times".

There may be 200+ years of belief in copyright, but things have significantly changed in the last thirty years. Copyright doesn't make as much sense as it used to, and though openness and transparency have always been supported in theory (and in practice by a minority), movements that support Open Source across a range of disciplines far greater than just software are now quite significant. Whilst it predates the Internet in its current form, the current web and other global distribution mechanisms powered by the Internet have radically changed a lot of people's views about freedom of expression and ideas.


Your objections were presented to the Supreme Court, which decided in Eldred v. Ashcroft that they were not actually unconstitutional. https://en.wikipedia.org/wiki/Eldred_v._Ashcroft . You won't be the first to disagree with a Supreme Court decision. But the only recourse you have is to get the courts to change their mind in the future, or to amend the Constitution.

I have a hard time empathizing with an argument which, with the change of a few ephemeral names, Mad Libs style, could have been said at any time in the last 200 years.

The statement "things have significantly changed in the last thirty years" has been true for centuries. People in the 1950s, or 1910s, or 1870s, could and likely did say the same thing.

Nor is your rhetoric about "openness and transparency" unique to these last two decades. Look to the populists and muckrakers from around the 1900s as effective proponents of that. Look to the newspapers of the late 1800s, when the Linotype made it possible to have cheap newspapers, and look to the growth of wire services and the telephone, radio, and television, as more recent examples of other technologies which have "radically changed a lot of people's views about freedom of expression and ideas."


"My" objections? Alrighty then.


Are our wires crossed again? I don't understand the point behind your reply.


Your comment seemed a bit... personal. Whilst you had some decent points, it seemed like you were more arguing against my character.


I had no intention to argue against your character, nor upon rereading do I see myself doing that.

I do argue against the meaningfulness of your comments, given the judgement in Eldred, and given the last 250 years of incessant technological change. I also think you, like many, see recent history with much better focus than the more distant past. But that is not a character flaw.


Well, I'm glad to see I was wrong :-) thank you. It's just that you seemed to compare my, uh, rhetoric to what the "populists and muckrakers" of the 1900s were saying. Though in certain company I guess that could be considered high praise, I'm not sure I was too fond of the comparison...


My intent was to say that 100 years ago people would have said that the muckrakers and populists pushed for a level of "openness and transparency" which had never before been seen. I don't see how that is coupled to your character.

In the 1970s, after the Watergate hearings and the new FOIA and Sunshine laws, people again could have, and likely did, say that it was also a level of openness and transparency which had never before been seen.

Your essential argument seems to be "things are different now so throw out the old". But things always change, so that argument is always true, and can therefore be used to justify anything.


Not all things always change. Like change, for instance. Change that doesn't change remains the same. Just sayin' :-)


Or, as I pointed out, the existence of copyright, which brings us full circle.


I think it's a mistake to assume the general public must have access to the paper for it to advance the public good. A climate change paper does good even if the general public doesn't read it.

Furthermore, most of these papers are available to the public, just not online.

Many libraries have copies of the works.

I do think the pricing for non-institutional clients is absurd. Paying 30 bucks just to get past the abstract is nuts.


The terms of the agreement with the journal.

Nature, for example, permits private redistribution of accepted papers - I could send you a copy if you asked me for one - but requires exclusive publishing rights - I couldn't just put it on my website for anyone to take.


There is an expectation that each publication is a new piece of research; having the same work published in more than one journal or conference is considered to be somewhat fraudulent. Just think of it: if researchers started to double-publish, what would happen to the administrative "objective" process of giving the money to the guy with the most bullet points on his list?


The contract you sign with the owners of Science and Nature precludes publishing elsewhere.


If we were to work out what makes a journal so prestigious, what are the objective measures that should be considered?


Mainly it's a chicken and egg situation.

A journal is prestigious because the papers it publishes have an aura of quality, because they are published by a prestigious journal.

A prestigious journal publishes high-quality papers because it can attract high-quality authors and reviewers, because it is a prestigious journal.

A journal is prestigious because researchers choose to (try to) be published in it, because their articles will be widely read by other researchers, because it is a prestigious journal.

If you were trying to bootstrap a new journal, you'd have a tough time; unless you didn't care about its quality.

The bare minimum would be a staff who have the time and skill to sift through obviously junk papers and then know which reviewers to use for the legitimate papers. Your best bet would probably be to assemble a staff of well-regarded researchers in whatever field(s) you wanted the journal to cover.


> If you were trying to bootstrap a new journal, you'd have a tough time; unless you didn't care about its quality.

You could (should?) start by going through freely published (open access) papers, collecting the highest-quality ones, then publishing them in your journal. That's most of the value of journals as well - informing the subscribers of noteworthy research.


But how do I do that? I can interpret a paper and apply it to my problems, but I'm not qualified to tell if it's a good paper[1]. In my field -- computing -- research papers are often written in a language that only just makes sense to me. I'm sure it'd be easier if I was a computer scientist that regularly published, but that's why I suggest you'd need a team of actual scientists who are used to reading, writing, and reviewing papers.

As Kernighan almost wrote:

Reviewing the paper is twice as hard as reading the paper in the first place. Therefore, if you have to be as clever as possible to read the paper, you are, by definition, not smart enough to review it.

[1] Although I can tell whether the code works, the quality of a paper is also about how well-written it is, whether the arguments are valid and supported by the data, and whether it provides something future research can be built on. A terrible paper that happens to contain code that works is useful for me, not so much for science in general.


I believe tomp's suggestion is that you can bootstrap as an overlay journal (see https://en.wikipedia.org/wiki/Overlay_journal ) to build up that expertise. You don't need to have experienced people from the start, because they can learn while doing. At the start, no one is going to pay attention to the journal. Then as it gets known for being a good curator, people will start requesting to be reviewed for inclusion. Poof you've got a journal.

You might even start with only yourself. And in 5 years if you want to distance yourself from your earlier, naïve efforts, then simply announce a name change.

Here's the ugly truth - it's hard to tell if most papers are "good", and there are many types of "good".


Distancing yourself from early failures is never a good strategy. Too often we flee from our past failures. For me, I don't shy from them but admit they occurred, and that I have learned from them since.


Perhaps I used the term incorrectly. I don't mean "distance" to require someone to "flee" nor not admit its connection, nor do I regard naïvety as a failure.

In my field, the main journal used to be "Journal of Chemical Documentation". It spun off from "American Documentation" in the early 1960s. The JCD title changed in the 1970s to "Journal of Chemical Information and Computer Sciences" because "documentation" - a 1920s term - was considered already old-fashioned by the 1960s, and the title didn't capture the new focus on computers and the underlying principles of information science.

It then became "Journal of Chemical Information and Modeling" in 2005 because the 1990s showed how to apply those information techniques to make predictive models.

Each name change is a form of distancing itself from its previous focus as part of a shift to a new focus.

Or is "distance oneself" not applicable for that situation?


I think I probably got my wires crossed. By "distancing" I did indeed think you meant "pretend as best as possible previous work never happened in an attempt to protect your reputation", but that's not at all what you meant! Sorry about that.


Do these actually exist? I would love a "This Week in Vision Research" that has 1-2 paragraph summaries of recent papers, maybe with some attempts at critiques and placing the results in context.

Faculty of 1000 is the only thing I know of that's even close to this.


"Discrete Analysis" - http://www.nature.com/news/leading-mathematician-launches-ar... just started and was in the news

http://processalgebra.blogspot.com/2015/12/the-novelty-of-ar... lists a few more, including "Logical Methods in Computer Science published its first issue ten years ago and has become one of the favourite publication outlets for researchers working on logic in computer science, broadly construed."


Right, you probably can't do it yourself. Even established journals often have a group of peer reviewers etc. Probably the best idea is for a group of academics to get together and start "sharing" the papers they found useful, acting as a "peer review" group of sorts.

I was just commenting about how to get around the "the journal isn't prestigious so no important research is published in it so it's not prestigious" problem.


Your journal wouldn't ever be considered prestigious or even moderately average if you're re-publishing old articles instead of publishing actual novel research.

You could create a newsletter that's useful to some, but if anyone considered publishing their new research findings in your journal, that would be bad for them: it wouldn't be considered anywhere near equal to a "proper" publication (since you openly claim to accept non-novel findings as well), and it would also prevent them from ever publishing the work in any "better" journal, since it would no longer be unpublished research.

In practice new good journals are generally started by (sub)communities who know each other and feel that [a topic of] their discipline is not adequately covered by the existing journals. This means that since day 1 they have (a) a source of good new papers that they will want to send there and not elsewhere - the whole reason for them founding the journal, (b) a pool of respected and knowledgeable editors/reviewers (the same community), and (c) initial interest of the community who would all read (and cite) the journal. Given that, the only missing part is recognition by various indexes and funding agencies, which will arrive within a few years (3-5?) if the new journal is well run, productive and gets appropriate citations outside of it.


It could select papers from preprints. That's what several overlay journals do.

But I'm commenting now more to point out that some journals do accept non-novel findings, like 'Journal of Negative Results in BioMedicine'.


100 years of track record publishing the least amount of junk.


Worst metric evar.


Your career is based on how "good" the journals are that you publish in.

If I publish a paper in Nature, Nature Genetics, Science, etc. then I can hand-pick my next position. The more glam-journals I publish in, the better my chances for tenure.


This is why the scam is so insidious. The journals are really squatting on academic output so they can be gatekeepers of academic status and of career prospects.

The product of publishing isn't the dissemination of insight and innovation, but of gold stars academics can put on their resumes.

Sci-Hub doesn't just threaten copyright, but the entire status factory.


Can you cross-publish / self-publish as well?


In practice, people often just put a "draft" of the paper on their personal website without telling their publisher.

Papers in fancy journals are more prestigious, but the papers that get cited the most are the ones with full text freely available on the Web. So, when they can, scientists choose both.

(The better way to do this is to support a publisher that's both high-quality and open-access and submit to their journal.)


It depends on the venue. In computer science, it is often possible, but there are often weird restrictions: embargo delay, only the preprint, must include a link to the publisher version, only on a personal webpage or on certain kinds of repositories, copyright assignment has been signed as well, etc. So you don't always feel comfortable doing it.

Further, we academics are incentivized, in terms of career, to publish in prestigious venues managed by official publishers. We aren't incentivized to publish in open access repositories as well. So, even though most of us want our articles to be disseminated widely, not all of us can find the time to make sure that the paper is also available on an open repository. This is a factor that explains why many articles are not available for free online even in cases where it would be legal for researchers to upload them.


Don't people in CS just ignore that crap and post free PDFs of "preprints" (taken the second before submission of camera ready) on personal websites?

IME the publishers fuck up the formatting anyways.


Yes. Yes they do. RE formatting, publishers ~do enforce page limits, which has value... if you're publishing on paper.


It depends on the venue.

In my field of computer science (visual analytics / visualizations) at least some of the leading journals/conferences officially allow you to put a version of the paper on your website.

Just today we submitted a paper where the copyright form explicitly mentions that.


How... generous?


I did not imply that I'm happy with the situation, but sadly it is a fact scientists have to deal with right now.


Well... They sort of don't any more. People just go to sci-hub!


It depends upon your field and your country but many journals require a copyright assignment. They also often require that the work has not been published elsewhere previously.


This is actually common, at least in mathematics: Researchers simultaneously submit to a traditional journal and to the preprint server http://arxiv.org.

Usually I don't have any difficulties finding contemporary papers. It is papers from, say, thirty years ago, that are usually stuck behind paywalls.


Interesting. I wonder why nothing of that sort exists for the biomedical sciences? Maybe it's been tried but these publishers won't allow it?


The biomedical sciences have bioRxiv:

http://biorxiv.org/

It is not used as broadly as arxiv.org, but that may be starting to change.


Nice to know, thanks!


If you are funded by most US or Canadian agencies, you're required to make something publicly available. Depending on the journal's contract, it might be the published article, or the final version of the text before they copy-edit/reformat it.


The problem is nobody is going to read them. That's because the people who these scientists are targeting (other established academics, funding sources) choose to ignore the rest of the internet and only read the well known journals. This is almost exclusive to life sciences though. In CS, physics and other disciplines people just upload their preprints on arxiv and don't really bother about impact factors and such.


I don't think most scientists bother much about impact factors. Institutions and funding agencies are the ones who bother about impact factors.

I work in CS and I know that if I want my paper to be read by the community I have to publish in a conference and/or on arXiv, as almost no one reads journals in CS. But then come the almost infinite assessments the bureaucratic system in my university and country puts me through, and what do they look at? Impact factors, and conference publications are worth next to nothing. So I constantly have to juggle between what is good for being relevant and known in the community, and what is good for keeping my job, which are often incompatible.

This depends on country, though. In the US, I think the funding agencies have left behind the impact factor fetishism in CS and authors can (and often do) just publish in conferences. But in Spain if you don't publish in journals you just don't get a position in academia, in fact some universities are starting to require > X ISI JCR-indexed publications to be able to become a PhD. And I have heard that in e.g. China it's the same thing.


> I work in CS

You're fortunate. I bet sci-hub is mostly used by life scientists.


Many researchers do so – but the established journals are used for evaluating the quality of a researcher, and also for ranking universities.

You might notice that universities which encourage their researchers to self-publish do not even show up in any university ranking.


Actually, quite a few of them do publish at least preprints outside of the peer review journals. For about 80% of the papers I needed recently that were behind a paywall, I managed to find PDFs published by the authors somewhere else.


More and more universities are paying for 'gold' (open) access publication. They're beginning to realise that you get more citations if people can actually read your papers without needing to go through the paywall.

One of the advantages of publication is peer review. When you cite someone's work, if it's in a reputable journal then you can assume that someone reviewed it and would vouch for the quality of the research. You have to take it with a pinch of salt (reviewers are never totally unbiased and they make mistakes), but in fields where research is easy to verify like computer science, a publication in somewhere like Pattern Recognition holds a lot of weight.

If you just cite a random paper that someone's self published on arXiv, you have no real idea whether what's being presented is true. Again, this isn't so much of a problem for CS papers where you can implement and test the work yourself, but for fields like physics where you're relying on other people's data, you want to be sure that someone smarter than you checked the numbers.


You are absolutely right. Scientists are the only ones to blame for this. It's true, as someone else replied to you, that scientists tend to choose journals based on their reputations, but it's also true that nothing prevents a scientist from publishing their unedited manuscript on their personal website. It's absolutely legal and absolutely uncommon.


I am with you. Since pretty much any scientist would love to have their work cited by others, any scientist will send you a copy of their paper if you ask them for it. And since the creator of the work is willing to give it away to you for free, it cannot possibly be "piracy".


And since the creator of the work is willing to give it away to you for free, it cannot possibly be "piracy".

My client (who will remain unnamed) insists that I advocate for his position: Just as a work-for-hire belongs to the party who paid the creator, surely it matters that the creator signed a contract with a publisher giving up rights to distribute the work in return for invaluable services? Are you implying that the creator retains some sort of "moral right" to distribute copies of the work to whomever despite the explicit stipulations of the contract?


You might ask your client to consider whether the contract was signed under duress. It is well known that publishing in "impactful" journals is a prerequisite for academic advancement and success; refusing to sign that contract would do grave economic and reputational harm to the author. The journal, taking advantage of the pressure applied by the author's employer, exploits that position to extract an unearned and involuntarily given economic benefit at the author's expense.


Duress? You might do better concentrating on the "freshness of a fine morning when you're young and the taste of food when you're hungry."[1] Duress is a tricky defense, as I think it typically requires that the threat of harm be illegal, and not just the workings of capitalism as it's designed.[2]

Next you'll want to tell me that just because the free market price of an essential medicine is prohibitive, one government should be allowed to flaunt another government's lawfully issued patent just for the benefit of its citizens! Much as you might wish otherwise, in the eyes of the law there simply is no "clear bright line" between infringing on copyright and slaughtering a ship's crew at sea in pursuit of treasure to bury.

[1] http://www.hum.huji.ac.il/upload/_FILE_1370804282.pdf Apologies for the spurious copyright notice at the top. Clearly it's not under copyright, since the author died over 70 years ago! 73, in fact.[3]

[2] http://legal-dictionary.thefreedictionary.com/duress

[3] So as to "promote the Progress of Science and useful Arts", in 1976 Congress passed a 47 year extension to the previous 28 year copyright term (for works that had not yet entered the public domain) giving them a total term of 75 years. Then in 1998 (shortly before the copyright for Mickey Mouse was set to expire[4]) it was realized that even greater progress could be obtained with a slightly longer term. To further promote said progress, the duration of copyright was retroactively extended to be the greater of 75 years since publication or 70 years after the death of the author. Unless it's a work of corporate authorship. Since corporations don't have a natural lifespan, it's only fair to extend the copyright on these works to 120 years after creation or 95 years after publication. In any case, it's clear that "Blackmask Online" couldn't hold any copyright, since all they did is format a public domain text into a PDF!

[4] http://artlawjournal.com/mickey-mouse-keeps-changing-copyrig...


Oops. s/flaunt/flout/. And if not yet clear, my position is devil's advocate. I do think it's worth pointing out, though, that in a world where it is legal to assign copyright in return for compensation, it's difficult to simultaneously assert that a creator permanently retains the right to give away copies of the work for free.


Has anyone got a breakdown of the actual costs incurred by Elsevier? Note I don't mean marketing their journals; I mean a breakdown of labour costs, rent for buildings (and where those buildings are located), and other production costs.

I'd like to know their revenue, and a full price list per journal they publish.

I very much believe that if we looked at what they make in profits, we'd see that they can't justify how much they charge. I, for one, have no sympathy for them.


I've done a lot of work on tools that interface with the academic publishing infrastructure. Elsevier not only make a huge profit margin, but they operate at a staggering level of inefficiency. They are haemorrhaging money and producing precisely nothing of value, and still making profit for their investors hand over fist. Quite incredible.


Hmmm...it seems like the space is ready to be disrupted.

How might that look? Pure self-publishing isn't ideal, as it lacks the so-called quality control and prestige of established publications.

What might be a middle ground for disrupting this crappy monopoly on the world's knowledge?


Academics in some fields have been starting independent open-access journals, sometimes with the explicit goal of replacing Elsevier journals in particular.

https://gowers.wordpress.com/2013/01/28/the-elsevier-boycott...


Need to figure out how to align the incentives and fix the prestige problem of academia in order for this to work.

Open Access exists, but you don't get the status you need from it (and, weirdly, it often costs the graduate student an extra couple hundred dollars, on top of the journal's normal fees, to publish their paper open access).

It'd be nice if there were a way to make the community more like the open source software world, where you get status from sharing and collaborating instead of from hoarding information for fear of being 'scooped'.

As it stands, the incentives run directly counter to collaborating and sharing information, which are really the best parts of science and research.


Bingo. But unfortunately life scientists stubbornly refuse to think like programmers.


Personally, I think it has at least these important aspects:

- built on a foundation of open source software

- new ways of presenting and consuming should be encouraged and easily discoverable

- completely outside the influence of the current publishing industry

There's a community of us working on this stuff. We hang out on gitter (https://gitter.im/codeforscience/community).


You would need a shift away from publishing, I think. The rationale behind publishing in long-form journal articles is not really valid in our century. There is an immense amount of noise in each paper, and findings are typically summarized in their 3-4 figures anyway. There are proposals for new approaches (e.g. http://www.silvalab.com/silvapapers/QAAlcinoSilva.pdf).

I believe we need a directory of scientific findings of sorts. It should be possible for each finding to cite other findings it relied on, thus creating a graph of influence/significance. The current situation with citations often involves friends citing friends, so I don't consider it reliable. It should also include open questions, thus giving research directions.


Out of curiosity, have you spent time in academia?

> The rationale behind publishing in long-form journal articles is not really valid in our century.

No, it's perfectly valid. Some ideas can't be expressed in short form. A 5-page proof requires a 10-20 page article to properly motivate it and provide context/explanation. An evolutionary or piece-by-piece approach doesn't make any sense; you'll end up with fragmented dead ends. Waste of everyone's time.

> and findings are typically summarized in their 3-4 figures anyway

I'm not really sure which fields you're referring to, but I assure you that you're over-generalizing. This isn't remotely true, even for very empirical sciences.

> There are proposals for new approaches

Usually these are appropriate only for a single subdomain or methodology and only provide one particular and extremely opinionated view on the results. Such as the proposal in the article you posted.

The overhead isn't worth it and there's a huge risk the relevant field(s) move quickly enough that the presentation method becomes obsolete before it becomes useful.

> It should be possible for each finding to cite other findings it relied on

...I don't know what to say to this...

> thus creating a graph of influence/significance

Yeah, we have this. It's called bibliometrics. UNIVERSALLY HATED by anyone who's not a bean counter. Good people ignore them. Bad people optimize for the metrics and it becomes a stupid game that has nothing to do with the thing you're trying to measure.

> The current situation with citations often involves friends citing friends, so I don't consider it reliable

This is just one reason among many that bibliometrics (a.k.a. any necessarily poor attempt at "a graph of influence/significance") are a poor mechanism by which to judge science...

> It should also include open questions, thus giving research directions.

In many fields literally every paper includes this. In most it's rather obvious to anyone who comprehends the paper what the next steps are.


As long as you have government power (public funding) in academia, you'll have rent-seeking.


Can you give examples of the inefficiencies you discovered?


Wikipedia:

> In 2015, Elsevier reported a profit margin of approximately 37% on revenues of £2.070 billion.

https://en.wikipedia.org/wiki/Elsevier


I'm wondering if they'd make more money with very cheap subscriptions (a.k.a. "long tail") and no legal department.


They charge $5000 if you want to make your article open access (i.e. they won't put it behind a paywall). So that's a ballpark figure for their costs plus their profits.

Whatever they may claim their costs are, the money does not buy a meaningful contribution. The meaningful contribution (writing and reviewing) is done by scientists themselves, almost 100% independently of Elsevier. Elsevier employs some senior editors who decide which reviewer gets which paper, and that's about their entire contribution to the scientific process. The rest is website costs, PR, and fluff that is not relevant to the science itself.


I once looked into their financial statements and came to the conclusion that per article published, they had about $2500 in costs and $5000 in revenue, so 50% profit margins. If you look at the price of comparable PLoS journals (i.e. not PLoS ONE), the publication costs are also about $2500, so I think this is in the right ballpark.


Completely without source and no breakdown, but I read recently that their margins are in the 20-30% range.


In 2014 Elsevier's profit margin was 34% http://www.ft.com/cms/s/0/93138f3e-87d6-11e5-90de-f44762bf98...


Wonder what the margins on their e-journals are like vs. printed journals.

They get the research papers for free, the editors don't get paid ... it's such a scam, and always has been.


Well, they do need to employ a highly paid CEO... /s


Yeah, and with that setup, 37% seems somehow very low. I wonder what accounting stunts they pull...


This is an interesting point: if the law allows rent-seeking, people will correct it by breaking the law, at enormous personal risk (for real, it's riskier than selling heroin in terms of prison time). Obviously, the best solution would be to change the law, but good luck with that, given the influence of corporate money.


I've got a hard time calling this piracy when the "owners" have next to nothing invested in the material's production, and provide even less in return to those who do. What Elsevier is doing is more like kidnapping.


If anyone is having trouble accessing https://sci-hub.io (the site providing the papers), you can find the site directly at the IP address https://31.184.194.81/ ... Apparently the domain name was seized. The certificate is for sci-hub.io (safe to accept). Or you can just connect to http://31.184.194.81/ if you don't want to bother with the warnings (and are OK with DOIs and papers being transmitted without encryption).

EDIT:

Their other domains:

https://sci-hub.cc (uses sci-hub.io certificate)

https://sci-hub.bz (uses a separate certificate and ip address -- 104.28.20.155)

And a tor site: scihub22266oqcxt.onion
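
As a rough illustration, here is a minimal Python sketch (using the requests library) that simply probes the mirrors listed above and reports which ones answer; the hostnames and IP address are the ones given in this comment and may well be stale:

    # Sketch only: probe the Sci-Hub mirrors listed above and report which respond.
    # The hostnames/IP come from the comment above and may have changed since.
    import requests
    import urllib3

    urllib3.disable_warnings()  # the raw-IP mirror presents a mismatched certificate

    MIRRORS = [
        "https://sci-hub.io",
        "https://sci-hub.cc",
        "https://sci-hub.bz",
        "https://31.184.194.81",  # raw IP; the certificate will not match the name
    ]

    for url in MIRRORS:
        try:
            # HEAD keeps the request lightweight; verify=False tolerates the bad cert
            resp = requests.head(url, timeout=10, allow_redirects=True, verify=False)
            print(f"{url}: HTTP {resp.status_code}")
        except requests.RequestException as exc:
            print(f"{url}: unreachable ({type(exc).__name__})")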


Have there been attempts at dispersing the whole collection of papers through torrents or IPFS? The goal here is not to have a central location with a pretty web page but to make the content freely accessible everywhere by anyone. Distributing it over thousands of nodes would achieve that goal.


I believe that libgen torrents are available, which, as I understand it, are basically a mirror of the content available from scihub. I don't have a lot of information on it now and can't research it from work.


Torrents are pretty bad at the archiving problem, i.e. maintaining copies of things with no readership over a long time. Not a great fit for this stuff.


I agree. Setting up mirrors would probably be a better fit for this kind of content, but mirrors would be susceptible to the same dangers that the primary website may face: forced take-down by the hosting provider, domain blocking, etc.

The fundamental difference of content-addressed networks is their resiliency in the face of a single authority trying to track down all of the sources that have copies of the content.

Even though IPFS is still in its infancy, one of its primary goals is to solve the problem of content suddenly disappearing from the Internet.


Yes and no. You need servers that guarantee availability (but they could hide themselves pretty well if necessary). Then the torrents provide accessibility.


That's a good question. I don't know if there has been an attempt to IPFS/torrent the files, but it would be good. They are referenced by DOI on sci-hub, so they are easily searched. It would probably be helpful to have a DOI -> IPFS/torrent name mapping. For IPFS, a DOI like 10.1037/rmh0000008 would probably work, but I don't know if the '/' character will work in a torrent name.

I don't know that the sci-hub underlying database is available outside of the site. I expect with growth they will need to move to torrents or similar to minimize their bandwidth requirements.
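
As a purely illustrative sketch of the '/' issue: a DOI can be percent-encoded into a file/torrent-safe name and recovered exactly, so the mapping doesn't have to be lossy. The encoding convention below is just one possibility, not anything sci-hub or libgen actually uses:

    # Sketch only: map a DOI to a name safe for filenames/torrent names (no '/')
    # and invert the mapping. The encoding convention here is purely illustrative.
    from urllib.parse import quote, unquote

    def doi_to_name(doi: str) -> str:
        # Percent-encode every reserved character, including '/'
        return quote(doi, safe="")

    def name_to_doi(name: str) -> str:
        # Decode back to the original DOI
        return unquote(name)

    doi = "10.1037/rmh0000008"
    name = doi_to_name(doi)            # '10.1037%2Frmh0000008'
    assert name_to_doi(name) == doi
    print(name)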


> the whole collection of papers through torrents

yes, although their bandwidth is horrible



Eventually they'll just set up their own DNS system, or replace it with something else entirely, something like BitTorrent for nameservers.



NameCoin is a failure. There are two problems. 1. no one uses it. 2. cybersquatters.



Or you can just enter "31.184.194.81 sci-hub.io" into your /etc/hosts, so the certificate will match up with the domain name.
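
For reference, the entry would look like this in /etc/hosts (the IP is the one from the parent comments and may have changed since):

    # /etc/hosts
    31.184.194.81    sci-hub.io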


Good point. Also, they have a facebook page: https://www.facebook.com/sci.hub.org/

.org was their original domain which was pulled a while ago.

They are still posting updates there. Hopefully they don't get that pulled too.


> “I’m all for universal access, but not theft!” tweeted Elsevier’s director of universal access, Alicia Wise

You want everybody to have access, but you don't want them to get it for free.

Wow, so you want the entire world to pay for the material you were given for free. Hmmmm.


I was at a conference with a number of representatives from various publishers. During a discussion about access control I was basically told that publishers want to enforce specific spatio-temporal constraints on who can view (as in read with human eyeballs) their content, based on whether a library or individual paid for it. They make the other content industries look positively benign.


That reminds me of the whole talk around "open standards" that you have to pay for to legally acquire (ANSI, ISO, IEEE, etc.), although in that case the situation is slightly different. Also, the "free software" vs "open source" distinction.


Ugh, yes. I recently wrote a parser for an ISO standard file format. I had to base it on the draft submission because they wanted me to pay them a bunch of money for the actual end standard.

I really hope the final standard didn't have any substantive changes, so that my implementation is correct. But all I can really do is cross my fingers since it at least seems to work with available data.

Charging for standards has a direct negative effect on the proliferation of those standards, especially today when a lot of code is written as open source hobby projects, which are left out in the cold.


You can find some standards at the usual places for books etc. (LibGen has some), but if you are not in a hurry, scouring the Internet very thoroughly will usually yield results. Google won't show you results as easily as it used to, but careful queries and following leads through forums and such can bear fruit. You know you're getting close when Google starts showing "results removed due to DMCA" messages and requiring you to enter a CAPTCHA "because your search queries look unusual"; the latter may even be intended purposely as a form of discouragement, but persevere and eventually you will find what you're looking for.

Then, once you've found a source, always remember to bookmark it and save a copy. They disappear from search indices quickly, despite the site still being around. It also helps to repeat your search periodically as sometimes the "shifting sands can uncover treasure momentarily" ;-)


Since you're posting from a throwaway, what ISO standard was it?

Also put some form of anonymous contact info in your profile. Low chances, but who knows...

EDIT: Looks like your account has a bit of history to it. If there isn't anything particularly identifying in there, the above idea should be good.


Oh this is a huge pet peeve of mine, it's unbelievable.

Granted, a lot of work goes into these standards, and they don't come out of nowhere, but still.


Yep, this is my bête noire too.

It gets even worse. I am working on software that conforms to guidelines set by the Dutch government for handling healthcare data. These Dutch national standards were actually (and justly!) made free instead of available at a fee, because they became requirements instead of just guidelines. But these standards are (naturally) based on international standards. This is fine for the RFCs, which are freely available, but for the three (!) relevant Dutch standards (NEN 7510, 7512, 7513) you would need access to another forty-ish (!!) ISO standards that may, or may not, be relevant to the section you are dealing with.

Of course even a single PDF for these healthcare ISO standards costs up to €100, to be paid in Swiss Francs…


Who pays for the standards committees? I note that the IEEE has a charitable foundation with over $45 million in assets...


Luckily in the UK many libraries have access to BSO (British Standards Online), which allows you to view the standards in a DRM viewer as part of your free library membership.

It's not perfect, but it allowed me to implement an RFID data decoding library (ISO-28560) at my last job.


The publishers are either given the material for free and readers have to pay, or the researchers pay the publishers on the order of $2k per article to have their research published as open access. Not great in either case.

http://openwetware.org/wiki/Publication_fees


Surely, there's a middle way. Since we are talking about a club good [1], a club would be the appropriate alternative. Assuming a fee of $20/month for people in developed countries and 100,000 members, that works out to $2,000,000 per month, which at roughly $2,000 per paper (the ballpark publication cost mentioned elsewhere in this thread) would cover about 1,000 papers per month.

[1] https://en.wikipedia.org/wiki/Club_good


Actually, the material wasn't given for free. In many cases you have to pay to get your articles published. There are fees for color figures, for example. All in all, you end up paying significant money to get published. And later people pay to access your articles.


Well, no, they want some people to have some things for free: https://www.elsevier.com/editors-update/story/access/so,-wha...


Actually people often pay for the privilege of giving them the material.


> “Graduate students who want to access an article from the Elsevier system should work with their department chair, professor of the class, or their faculty thesis adviser for assistance.”

Now THAT is chuckle worthy.


Imagine wasting some researcher's valuable time on tracking papers for you!

The gall of these publishing companies never ceases to amaze.


Not very helpful for Wikipedia researchers. Or people who are outside the walled garden of academia!


"walled garden of academia"

The walled garden of academia is not so fruitful as outsiders are led to think it is.

We struggle year after year to pay salaries and still have money left to buy the equipment and materials we need for our projects; we don't have much left to spend on buying access to articles that we and our peers wrote and reviewed, all for free, while some publisher makes millions on it.


To be fair, I haven't set foot in my university for 3 years (planning to return soon!), but since I still send them the occasional email asking to be registered for a course, I'm considered active there and get access to a lot of publications as long as I'm logged in to their systems.


Do any universities offer something like "passive studentship"? I.e., you pay only a small fraction of the tuition; it does not include any courses, but you may use the facilities, specifically the library, and get access to the university network. If that existed, hobby researchers and the like could get access to all these journals for a lot less money.


The state university library here has a [membership program][0] that gives you access to the library and its various subscriptions -- so effectively library donors get access. Individual membership is under $50/year, so it's definitely an affordable option. Not sure if it gives remote access to their network (they used to have a web proxy server for off campus students, don't know if they still do); you may have to go to the library and use the computers there.

[0]: http://library.sc.edu/p/Develop/Society/ThomasCooperSociety


Many universities actually do have a program to give townfolk library cards, including journal access.

That said, the best way to get a research paper in CS is to check the author or lab's website and, if it's not there, e-mail the first author directly.


My other comment covers this. There is actually an upside to spatio-temporal control, which is that anyone who walks into a university library can access the licensed material for that library. Of course in this day and age such physical restrictions on access seem absurd.


Elsevier believes they have United States law on their side. And they're right; they do have US law on their side. That just doesn't mean much anymore; it's been worn away by decades of conspicuous corruption, and lost most of its respect. In principle, this should be addressed by the US legislature. In practice, academia has effectively voted no-confidence and bypassed the legal system entirely.


> In practice, academia has effectively voted no-confidence and bypassed the legal system entirely.

Yup. Seems like civil disobedience from the research community. Pretty interesting to see a civil rights movement happening online. Who says you can't sit at home and be an activist? ;-)


I've had a feeling, recently, that there should be some maxim that goes something like:

Don't put yourself in a position where the lawyers are the only ones on your side.


I recently wanted to read a five page paper on graph theory from 1977. The company entrusted with it 40 years ago is charging $38 for it. It is just absurd. I can't imagine that the author, now long dead, would have wanted his work to be so difficult to read.


Just out of curiosity, what paper is this?


All of them.


70's graph theory papers FTW! I bet if you actually pay that $40, the pdf you get will still be typeset with a typewriter.


I'm sad Aaron Swartz did not live to see this unfolding. He might have been in prison now, but he would still be a world-class hero.


Yes, it's sad that he would be put in jail for such a "crime", whereas the likes of the CEO of Volkswagen are actually rewarded with millions for much more severe crimes, in the name of punishment.


I think about that a lot (have a number of friends publishing papers in academia). Really sad.


> He might have been in prison now

For hypothetical crimes he could've committed had he lived? Because he certainly wouldn't be in prison for things he actually did.


He likely would be in prison, because he'd be forced to plead out to lesser crimes, or face the prospect of a cripplingly expensive trial with the threat of higher sanctions.


No, that's complete bullshit.

Had Aaron pleaded guilty, he'd be out by now, as both of the plea agreements he was offered would have meant 6 months or less.

And trial? Why would he have gone to trial? There was no question of his guilt.


While he might have been free by now, he would still have been labeled a felon.


I don't think anyone was questioning that.

Also FWIW, so am I and it's not all that bad.


Going to jail at all would have been an injustice. What would it have achieved?


Did I imply otherwise?

I don't personally believe it would've achieved anything, but that's how the world works.


Yes. You said "He'd be out by now". To get out means you must have been in, and jailtime is what you were responding to.


Nothing ryanlol has said even remotely indicates that Swartz going to jail would've been a good thing.


I've reread the thread, looks like I misinterpreted the argument and owe ryanlol an apology. Sorry about that. FWIW, I didn't downvote any of his comments.


For whatever reason lots of people here seem to disagree.

Could any of the 4 downvoters clarify in what conceivable scenario Aaron would still be in prison if he were alive?

Had he pled guilty, he would undoubtedly be out by now. And going to trial obviously wasn't a realistic option considering the plea deals he was offered (BTW, if someone here disagrees about that, I'd really like to know why).

Read:

http://volokh.com/2013/01/14/aaron-swartz-charges/

http://volokh.com/2013/01/16/the-criminal-charges-against-aa...


> going to trial obviously wasn't a realistic option considering the plea deals he was offered

And that's why plea deals are an injustice. The honest are intimidated into felons, and the criminals never face a judge.


They may be, but I think Aaron's case is a particularly bad example, as I don't think there was ever any question of his guilt.

The law may not be just, but there's little doubt that he violated it.

Considering his charges, Swartz received exceptionally good plea deal offers.


> Considering his charges, Swartz received exceptionally good plea deal offers.

A non-violent crime shouldn't result in any prison time.


While I very much agree, the law says otherwise.


This blanket downvoting is somewhat odd; what is it I'm saying that's so disagreeable? As far as I know, I've only presented facts about the time Swartz would probably have spent incarcerated.


Your first comment is open to interpretation (it doesn't say "would not still" or "in and out of" or something like that), so people are reacting pretty much entirely based on their existing emotions. They don't care about analyzing the likely outcome of the case had he lived; it's brought up to share anger and outrage over the injustice.


> They don't care about analyzing the likely outcome of the case had he lived; it's brought up to share anger and outrage over the injustice.

Precisely. He didn't have to kill himself, and if he'd put a bit more forethought into what he was doing, we'd probably be discussing this in a thread around an interview with him rather than with Ms. Elbakyan. But nobody wants to hear that, because it detracts from the story of this generation's Bobby Fischer.


FWIW, there was something about the way that comment was phrased that caused me to parse it badly, hence my response. I didn't downvote you, but perhaps it might help to explain the downvotes.

Incidentally, this is why I often dislike downvotes on non-troll comments from people who don't comment directly. It doesn't add a damned thing to sensible discourse; it just smacks of punitive action against those they disagree with. You really notice it on certain topics, like Aaron, but for some reason it really shows up in anything Apple-related.


It looks like this is EXACTLY what is needed to disrupt this abusive industry. There have been numerous attempts at forcing a change in positive ways: open access journals, campaigns by researchers, and so on. But none of these has had any real effect.

Let Elsevier go down in flames. I have published more than 50 academic papers and have actively avoided Elsevier. To be honest, this was not too difficult, as they have a lot of journals addressing specialized subtopics that seem mostly to attract manuscripts rejected by first-tier journals.


The first three paragraphs of the article clearly tell what is wrong with the system: "Publishers are overcharging for content". Basically, they just carried their business model from the print era into the digital era without much change. The publishers should think about allowing individuals to subscribe to content and charging them (nominally) for what they use, rather than putting the load on universities and making them subscribe to the entire spectrum of journals. Elsevier's pay-per-view model currently charges an individual researcher a staggering "$31.50 per article or chapter for most Elsevier content. Select titles are priced between $19.95 and $41.95 (subject to change)." [0]

[0] https://www.elsevier.com/solutions/sciencedirect/content/pay...


The median wage in Afghanistan is 50,000 AFN per year, and the current exchange rate is about 68.3 AFN to 1 USD. So one article at ~$31.50 comes to about 2,150 AFN, while the median monthly wage is roughly 50,000 / 12 ≈ 4,200 AFN. In other words, a single article costs about half the monthly wage of someone with a median income.

That's for one midrange article.


While pirating music or movies, one may try to justify one's actions, but deep down it feels wrong.

This doesn't even feel wrong. These parasites had it coming.


I don't feel wrong pirating music or movies (especially considering it's legal in my country). I think trying to apply laws designed for limited goods (physical objects) to unlimited goods (digital files) is silly.


The laws are silly, I think we all agree on that. The big record labels are silly, most of us probably agree on that too.

It still feels wrong when popular artists do not get paid for their work because it's more convenient for people to pirate it.

I mean, creative/artistic efforts take a lot of time, regardless of how easily reproducible the results are. By paying for their previous songs, you're basically funding their next songs, something you should be interested in as a fan.


> It still feels wrong when popular artists do not get paid for their work because it's more convenient for people to pirate it.

Popular artists barely get paid for the sale of their music. They earn the bulk of their money from touring and merchandise sales (see: finite things).

Given that, piracy is actually far more detrimental to the publisher/label than it is to the artist, because a pirated copy could mean an additional fan who buys another ticket at a show or another t-shirt at the merch table.

In fact, charging money for the music actively discourages people from listening to it, because they can't afford it or have already spent their budget on artists they know and like. In an industry where gaining fans is the most important thing you can do, this seems counterproductive.


These unlimited goods come from very limited and finite resources called humans. We cannot recklessly punch a loophole in the existing economy. I'm not fond of labels and studios getting fat for absurd reasons, but structure is required for the machine to function.

Personally, I stopped chasing music and movies, mostly because nothing from that era appeals to me anymore :D (I also overdid it when I was younger and don't feel the need to binge-watch like before, and I now prefer to pick a few things to pay for, to support creators putting out nice work).

That said, I do grab some old alternative stuff from time to time. Quite often it's not buyable anyway. Release the krakens, Hollywood.


If it does not feel wrong, it must be right?

We should strive for a logical explanation that transcends emotional response.


The idea of "Right vs Wrong" or "Good vs Evil" is a debate based mostly on emotions rather than logic.


My post was not comparing good or evil. I was saying that simply because you feel something does not make it so.

1 + 1 = 2, regardless of how I feel about the numbers or operators.


Ah, my mistake. What would your logical explanation be for this then?


The math equation?


Which one?


The one that proves my point... :)

Honestly, if you are implying that math is a philosophy that is just as fallible as human emotion I am very interested, although very, very skeptical.

PS - Please do not ruin math. It serves as my only foundation with reality.


It does feel wrong, but it feels equally wrong that the government orders the police to prevent citizens from copying bytes.


The social and cultural value of something is related to the cost of production, not the cost of distribution.

The fact that the distribution cost is close to zero is irrelevant.

The problem with copyright is that a big proportion of the price charged is basically cartel/gatekeeper protection money, and has no useful relationship to the true social cost/value of creative work.


You can spin it however you want, but if you are keeping something of worth artificially scarce when it is in fact infinitely abundant, you are deliberately impoverishing the human race. The rules of our economy, supply and demand, break down when supply is infinite... but that is a problem with the economic system that needs to be fixed (perhaps a state-funded patronage model?); it's not something that pirates need to feel guilty about. Copyright law reduces the net wealth of the world. It is just plain evil.


Copyright law does not reduce the net wealth of the world.

Before I wrote book X, there were no copies of book X in the world. The supply was zero, not infinite.

After I wrote book X, the supply was one. Then it grows as people make copies. Copyright law can limit the growth, certainly. (Or enhance it, as the potential profits help promote distribution.)

But copyright law is meant to encourage the growth from 0 to 1 by making it possible to make money from the growth from 1 to N, before the lack of copyright protection makes it effectively infinite.

It's true that plenty of people will write even without getting paid. It's also true that plenty of people wrote because they will get paid. Heinlein started writing to pay off his mortgage.

Copyright law is also the basis for free software. If you take away copyright law, you end up supporting proprietary software distributed only as compiled machine code, not reusable source. This too "impoverishes" the human race.

