Their behavior over the last two decades has been little more than reprehensible rent-seeking. Whatever goodwill they had disappeared as they drained, with increasing ruthlessness, the dollars of students and non-profits. See, e.g., one of many such figures: http://www.lib.washington.edu/scholpub/images/economics_grap...
It is absurd and dishonest to call Sci-Hub "piracy", given that all of its contents were originally created and given away with the express goal of wide dissemination.
Anyone who has gone through research and academia knows that you often need to browse many papers just to find the ones worth reading. The 25-paper cap includes browsing PDFs; you can easily hit that cap in a single day simply through browsing.
IEEE and IEEE Xplore can go do something to themselves. This is coming from a published IEEE author and academic researcher.
Exactly. And without the monthly subscription plan (something I wasn't even aware of until this comment thread), the going rate seems to be around $30 per paper--rent-seeking to the point of highway robbery. One thing that these publishing companies could have done long ago was allow some sort of "free preview" of the full text or "full refund within 10 min" option to help deal with this problem. I have no idea how this would be done technically to prevent "pirating" but, as it is, the pay-per-paper system is completely disconnected from the way researchers browse papers.
If you create an account but do not subscribe, you can view all the papers but only for 5 minutes each. You can then subscribe to read the ones you need more of, or purchase access to 5 papers for $20.
The above is for online reading. If you need PDFs, you can purchase those but they are not cheap. They are usually what the publisher charges on the publisher's site less a 20% discount.
Declining paper quality plus inflating publication quantity are the main causes of this "Sci-Hub" crisis.
One thing that always sticks in my craw is that the raw data is very rarely published for analysis. Many may disagree, but I think that when the raw data for experiments isn't published, the temptation to commit academic fraud is very high.
I often wonder, about some of the more recent scandals, whether the fraud might have been picked up a lot faster had the data been more readily available.
I have also noticed that many journals don't say who the reviewers are after publication. For instance, I was looking at body psychotherapy the other day and came across something called biodynamic analysis, which appears to be a widely held and well-respected view within the subdiscipline. I was amazed to discover that something called "grounding" was a serious concept underpinning this analysis, so I looked at the Wikipedia references and discovered that there was at least one journal citation to The Journal of Alternative and Complementary Medicine. The article seems to be attempting to make a link between blood viscosity and electrical grounding of humans to the earth!
Now there is another article that shows there is virtually no impact on the body from the same journal, so I started to wonder how this passed peer review. The answer is - I have no way of knowing as they don't make it clear what their review policies and procedures are, and they appear to charge authors to publish work.
In other words - it's pseudoscience dressed up in credibility. And it is making a serious impact in the world of psychology!
Perhaps 1 out of 3 reviewers gets it, another is bored by the fiddly and detailed methods section, and the third was subconsciously hoping for a "brilliant", mystifying secret sauce. "Oh, that's all you did?" he asks. "I could have done that."
Of course you could have done it, I just told you how and pointed you to the data. I also compiled that data, and you won't even let me tell you that because of blind review.
I'm an occultist. One of the first regimens I undertook to learn occultism was grounding and centering. It's very much an esoteric skill, and not something to be cited in a paper... unless it was being discussed under a 'microscope'.
I would enjoy using fMRI or other diagnostic tools to see what happens physiologically when I do those things. But I have no qualms saying that it shouldn't be in any academic paper until the kind of measurement I'm recommending is done.
That's not how science works...
Simply put, there may or may not be anything there. With diagnostic methods and feedback from the person, we can start to determine if there is a measurable effect. If there's something there, we research further. If not, we cite it as evidence of "no noticeable effect". This also goes to show that we (the academic community) should be much more accepting of papers showing "no effect", rather than only positive results. Knowing the dead ends that others went down is just as valuable as knowing what works.
But in actuality, I was also giving on-topic discussion about where those phenomena are discussed at length: in studies on occultism. That's just a factual statement with no value judgment. Whoever is more interested can do their own research, with this topic in mind.
I'm all for scientific method, be it showing positive, negative, or no results. I also know what isn't currently scientific, although I do have curiosity if some of those 'things' can indeed be proved.
EDIT: An example of this in the wild is how drug companies would cherry-pick research to back the outcome they wanted; cherry-picking is an anti-pattern of true science.
That makes sense - though I hope you don't mind me asking but when you say "I also know what isn't currently scientific", do you mean untested hypotheses?
I happen to think occultists are wrong (as in: factually incorrect in their beliefs). But they are entitled to think whatever they want, so that's not a problem. What is a problem is objecting to data being collected and published.
But it seems that his speech-to-text translator wasn't accurate and that's not what he meant.
I used to live a couple blocks from a uni library, and I practically lived there. In the summers I drove straight there from work, got lost in the stacks learning about random things, and only left when it closed.
Fortunately that library was completely free to the public, but if it weren't I would have happily paid $100/mo during the summer months for access (Probably saved that much just in utilities anyways.)
Couldn't the researchers simply publish their own papers? Make torrents etc? Or are they prevented from doing that somehow?
Bibliographic metrics, like the number of publications and where they are published, are used as a proxy for the quality of someone's research. They can affect grant funding and career advancement.
Researchers can publish their own papers. But researchers also tend to want that work to be disseminated, both to spread knowledge and to get recognition for the work. Journals help simplify that process. A field typically has one or two main journals, which most people in the field track. It's more likely that a publication will be noticed if it's published in one of those journals.
What's wrong with giving other researchers the ability to rate and comment on a paper?
It's not a perfect filter, but I would argue it works just as well as this journal reputation system. It prevents issues like people having to publish with them to get recognized, and it encourages discussion between academics, which could lead to some breakthroughs as well.
You want peer recognition, so why not go to your peers directly instead of through a proxy like these publications?
This system is a product of a bygone era, and like all other systems of its kind, it is resisting change by relying on copyright.
I see copyright as the root of many of our current society's issues; it is simply not compatible with how we work. Not to mention that most of these papers were made with public funding, so there shouldn't be any copyright on them to begin with; they belong in the public domain, because the public paid for them.
A publication simply posting a paper on their website and slapping a price on it is pure theft, and they should be treated like the pirates they are pursuing.
LOL! No. Ratings are nearly useless. By which I mean I don't know how I would incorporate ratings into my own research.
How do you keep a ratings system from being a proxy for popularity, rather than a measure of novelty or utility? How do you keep it from being easily gamed? Does a "5.2" for a 1995 paper have the same meaning as a "5.2" for a 2010 paper?
"You want peer recognition so why not go to your peers directly instead of a proxy like these publications"
Do tell me how. Post on my blog and hope people come across it?
Journals outsource marketing. Get rid of journals and something must replace that service, or I have to become a marketer.
"This system is a product of a bygone era .. relying on copyright"
You are mistaken. The system does not rely on copyright. The scientific publication system dates from the 1600s, with Oldenburg's 'Philosophical Transactions of the Royal Society'. Copyright dates from the 1700s, with the Statute of Anne.
The scientific publication system depends on reliable selection and curation. Its current form is dominated by large publishing companies which use copyright law for revenue generation, but that's only been true for the last few decades. Scientific publication also existed in the USSR even without copyright protection. (article 103(2) according to https://en.wikipedia.org/wiki/Copyright_law_of_the_Soviet_Un... gave "permission to reproduce published scientific, artistic, or literary works as excerpts (or even entirely) in scientific, critical, or educational publications").
"most of these papers were made with public funding, so there shouldn't be any copyright on them to begin with; they belong in the public domain, because the public paid for them."
I understand and sympathize with your argument. But in practice things aren't so clear cut. Rarely does the money come 100% from public sources. Is your argument that even 1 cent of public funding means the entire work product must be in the public domain?
Are you making the moral argument that copyright should go to the funding source? Or is it limited only to public funding sources? For example, if 100% of the money for a project comes from the Bill & Melinda Gates Foundation, then do you want the foundation to control the copyright on the work product? What if it's 80% Gates and 20% publicly funded? Is it public domain then?
If 3 years of research was publicly funded and resulted in a paper, and 2 more years was privately funded by BigCorp to analyze the data further, then must that additional two years of work also be in the public domain?
And on and on. Unless "1 cent => public domain always", then there's no clear line. Is that your argument?
(BTW, there is another clear line - work by a US government employee is in the public domain. A small percentage of the scientific papers are in the public domain because of that.)
To respond to this particular point, I'm not familiar with any characteristics of the academic journal system which prevent these problems either. Name recognition is often key to getting into competitive journals, which is just another way of calling it an easily gamed popularity contest.
He was later admitted to Cambridge for a PhD and sent the exact same paper (same formatting, content, layout but different department/uni). It was immediately accepted. Does it necessarily mean that scientists in poorer countries are doing bad science? Or just that they didn't get their visa / couldn't leave their country / just want to stay there, etc.
The first filter most editors use is: "Do I know this research group? Are they part of the club? Have they published in top journals?". If not, it won't get read (unless it's a compelling and exciting title and abstract).
Double blind doesn't always work, sometimes it's painfully obvious who the PI is, but it should stop stuff like this from happening.
> We are no longer able to accede to requests from authors that we withhold their identities from the referees. Such "double-blind" reviewing has been discontinued. 
But the argument is that "giving other researchers the ability to rate ... [would work] just as well as this journal reputation system".
My question is, simply, why should I believe this?
Bibliometrics often look at citations. Citations are, yes, a marker of popularity and social connections, but also have certain scholastic requirements. Editors and reviewers can and do point out missing references or highlight too many irrelevant self-citations. This is a type of moderation system. Imperfect certainly, but it does provide some grounding.
Also, it's not like the scientific publication system only contains original research papers. Review papers fill a curational role, with the advantage of a single voice (even if from co-authors) making the comparisons, so I at least have a relatively constant baseline.
On the other hand, I've always been suspicious that people can reliably guess the authors when doing a blinded review. I've seen one study that said 30% of reviewers (in a small sample) could do so, but that hardly seems like a reason to abandon the idea altogether.
I don't see a problem with this. If you want public money, you have to put your work in the public domain. That's just a condition of getting the money.
The NIH does this. If you receive a single penny from an NIH grant, the work must be posted to PubMed Central.
So if you publish a paper titled "XYZ" in a journal and also put it on your blog, making sure your blog is visible to search engines - anyone searching for the paper's title "XYZ" will find it for free on your blog?
(I'm not involved with the science community)
Note that my comment concerned the proposal "why not go to your peers directly instead of a proxy like these publications". Your proposal replaces "instead" with "in addition to".
I do think open commenting systems would be a boon to academic discourse though.
> When Fourier submitted his paper in 1807, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: ...the manner in which the author arrives at these equations is not exempt of difficulties and [...] his analysis to integrate them still leaves something to be desired on the score of generality and even rigour.
> What's wrong with giving other researchers the ability to rate and comment on a paper?
I don't see anything inherently bad with this (provided very good moderation tools are in place). However...
> *I would argue it works just as well as this journal reputation system*
As a journal reputation system, possibly; although I find that experts often have very different taste compared to, say, the general Hacker News public. For instance, machine learning has a very high "coolness factor" these days compared to other areas of computer science, but it would be unwise to make a system where a weaker paper on machine learning gets vastly more credit than a better paper on, say, approximation algorithms.
However, an important part of the journal submission process is that the editors ask members of the academic community to spend several days on the paper, noting major and minor details and checking the validity of the proofs.
Since the reviewers do their work for free, there is no reason why there cannot be an open system that replicates the same process; still, just "adding a comment system to arXiv" will probably not be enough, as people rarely spend that much time on comments (15 minutes tops, not entire days).
First, the price. The last paper I reviewed took me about 10 hours. I charge $200/hour. Should I charge $2000 for my time? If I'm paid by the hour, who tells me when to stop? If I'm paid a flat fee, how much is that fee?
I have seen a reviewer write something like "I read the paper twice. It seems good. Publish it." Does that reviewer get paid the same as someone who spends 10 hours on a detailed critique?
Second, does payment constitute work-for-hire? Does the publisher and/or reviewer need to pay taxes? Might some reviewers have a contract which prevents them from taking on part-time work? What if the reviewer is overseas? Do tax agencies from two different countries need to be involved?
Third, most people expect to publish, and get their papers reviewed. If a single author has three reviewers, then that author should expect to do about three reviews. This is an imperfect system which rewards publishing rather than reviewers, but there are also some compensations to the reviewer.
Moreover, I recall one story - I don't know the validity of it, but it has a ring of truth - where a daycare got annoyed with parents who picked up the children late. They decided to fine parents should that happen. As a result, children were picked up late more often because the fine was interpreted as a fee for extra daycare, and no longer social chastisement.
I suspect that reviewers are in a similar boat. I'll feel different about it if I get paid a pittance than if I were to do it for free, so long as others do the same for me in the future.
Of the people I meet at a conference, how do I figure out who has been a reviewer for Nature, and how do I verify that claim?
Researcher: "I have to go to this club so people think I'm fancy"
Club: "The club charges a lot so only fancy people go there"
Other people: "This person is very fancy because they go to that club"
End result is that people buy their "fanciness" by being in that club
I have truly seen article reviews ruin a budding research scientist's career. Of course, one could argue that other problems contributed to their finding themselves in that position.
I would think that if peer review ruins a scientist's career, it would only be through a lack of support from the advisor. If the review is accurate the advisor should support the revisions, and if the review is totally mistaken the advisor should support resubmitting elsewhere.
I strongly believe that review could be almost exclusively through positive feedback, with crowd-sourced minimal annotations in the margins, i.e. "the idea in this paragraph came from <earlier paper link>", and "this idea is novel; wish I thought of it", and "there is a strong analogy between this paragraph and <some other tangentially related idea>"
At least in chemistry where I work, a rejection rate of 9/10 at the review stage would be very unusual. It is possible that reviewers in the triage stage are harsher but I never see these reviews.
So why exactly are publications locking these papers behind massively expensive paywalls when writing them and reviewing them are being done for free?
The reason researchers publish in these journals is that it gives them street cred. There are several open access journals, but they see scant participation, since their impact factor is unknown and publishing in them does little to promote a researcher's career.
Speaking as someone who has published myself, the paywalls are highway robbery. But, imho - it is a problem only the researchers can solve by concertedly supporting open access journals and getting involved in them (maybe a yc research idea???). The established professors would need to take the lead on this - not the underlings.
Personally - I've seen the "light" and stayed back in industry as opposed to moving into academia as I had hoped. The rat race for funding & publications & street cred seemed a lot worse there.
For the last journal article I published, in a journal run by Springer, we submitted a nicely typeset LaTeX article using the template they provided.
Our best theory is that they then copy/pasted the result into Word. It came back to us as a "pre-print" PDF with tons of errors, font changes, etc. We had no good way to compare it to the original other than printing them both out and going through, line by line, looking for discrepancies. There were dozens and dozens of problems. And it was a 70 page article!
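(Not what we actually did, but a comparison like that could have been scripted instead of done on paper: extract each PDF's text with poppler's `pdftotext` CLI and diff the lines. A minimal sketch, assuming `pdftotext` is installed; the file names are placeholders.)

```python
# Sketch: diff the extracted text of two PDFs line by line.
# Assumes the `pdftotext` CLI (poppler-utils) is on PATH.
import difflib
import subprocess

def pdf_lines(path):
    """Extract a PDF's text as a list of lines via pdftotext."""
    result = subprocess.run(
        ["pdftotext", "-layout", path, "-"],  # "-" writes text to stdout
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

def diff_lines(old_lines, new_lines, old_name="submitted", new_name="preprint"):
    """Return a unified diff between two lists of text lines."""
    return list(difflib.unified_diff(
        old_lines, new_lines,
        fromfile=old_name, tofile=new_name, lineterm="",
    ))

# Usage, assuming the files exist:
# for change in diff_lines(pdf_lines("submitted.pdf"), pdf_lines("preprint.pdf")):
#     print(change)
```

Extraction isn't perfect (ligatures and hyphenation can produce noise), but it beats proofreading 70 printed pages by hand.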
After sending them corrections we got to repeat this process 3 times.
And now they charge folks for our article.
But seriously, what you're basically saying here is that in fact it would have been better to have published it yourself - except you may not have had the same audience who would have known about the paper?
Or did they also go through rounds of peer review and ask for clarification or find errors?
As a grad student, one basically doesn't have much say. Postdocs have a bit more latitude, but need to "play the game" to land jobs; pre-tenure faculty are in a very similar position if they want to keep theirs. Even tenured PIs are in a pretty precarious position: many are on soft-money positions (no grants, no salary), all of them need grants, and their trainees also presumably want to move up the grad student -> postdoc -> faculty ladder.
Getting this to change is a massive collective action problem and it's really going to need a push from institutions and the big names at the very top. This holds not just for publications, but a lot of other issues with academia too. I'd love to spend more time making our code solid and publicly available, but there's essentially no payoff for that either.
This is the same phenomenon.
What excludes the academic from also publishing elsewhere?
> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
You may disagree with that conclusion, but there's 200+ years of belief that it's possible to be "for the public good" and be covered under copyright which restricts redistribution. The short term loss is outweighed by the long term gain, or so the belief holds.
Bear in mind that the copyright term back then was decades shorter than it is now.
That said, grant organizations are turning towards requiring publication either in an open access journal, or by having papers restricted for only a short time, rather than the full length of copyright.
1. It doesn't promote the Progress of Science and useful Arts
2. 120 years since creation or 95 years since publication for corporations, or the life of the author plus 70 years is not what I would call particularly "limited Times".
There may be 200+ years of belief in copyright, but things have significantly changed in the last thirty years. Copyright doesn't make as much sense as it used to, and though openness and transparency have always been supported in theory (and in practice by a minority), movements that support Open Source over a range of disciplines far greater than just software are now quite significant. While it predates the Internet in its current form, the current web and other global distribution mechanisms powered by the Internet have radically changed a lot of people's views about freedom of expression and ideas.
I have a hard time empathizing with an argument which, with the change of a few ephemeral names, Mad Libs style, could have been said at any time in the last 200 years.
The statement "things have significantly changed in the last thirty years" has been true for centuries. People in the 1950s, or 1910s, or 1870s, could and likely did say the same thing.
Nor is your rhetoric about "openness and transparency" unique to these last two decades. Look to the populists and muckrakers from around 1900s as effective proponents of that. Look to the newspapers of the late 1800s, when the Linotype made it possible to have cheap newspapers, and look to the growth of wire services and the telephone, radio, and television, as recent examples of other technologies which have "radically changed a lot of people's views about freedom of expression and ideas."
I do argue against the meaningfulness of your comments, given the judgement in Eldred, and given the last 250 years of incessant technological change. I also think you, like many, see the near history with a much better focus than the further past. But that is not a character flaw.
In the 1970s, after the Watergate hearings and the new FOIA and Sunshine laws, people again could have, and likely did, say that it was also a level of openness and transparency which had never before been seen.
Your essential argument seems to be "things are different now, so throw out the old". But things always change, so that argument is always true, and can therefore be used to justify anything.
Furthermore, most of these papers are available to the public, just not online.
Many libraries have copies of the works.
I do think the pricing for non-institutional clients is absurd. Paying 30 bucks just to get past the abstract is nuts.
Nature, for example, permits private redistribution of accepted papers - I could send you a copy if you asked me for one - but requires exclusive publishing rights - I couldn't just put it on my website for anyone to take.
A journal is prestigious because the papers it publishes have an aura of quality, because they are published by a prestigious journal.
A prestigious journal publishes high-quality papers because it can attract high-quality authors and reviewers, because it is a prestigious journal.
A journal is prestigious because researchers choose to (try to) be published in it, because their articles will be widely read by other researchers, because it is a prestigious journal.
If you were trying to bootstrap a new journal, you'd have a tough time; unless you didn't care about its quality.
The bare minimum would be a staff who have the time and skill to sift through obviously junk papers and then know which reviewers to use for the legitimate papers. Your best bet would probably be to assemble a staff of well-regarded researchers in whatever field(s) you wanted the journal to cover.
You could (should?) start by going through freely published (open access) papers, collecting the highest-quality ones, then publishing them in your journal. That's most of the value of journals, as well: informing the subscribers of noteworthy research.
As Kernighan almost wrote:
Reviewing the paper is twice as hard as reading the paper in the first place. Therefore, if you have to be as clever as possible to read the paper, you are, by definition, not smart enough to review it.
 Although I can tell whether the code works, the quality of a paper is also about how well-written it is, whether the arguments are valid and supported by the data, and whether it provides something future research can be built on. A terrible paper that happens to contain code that works is useful for me, not so much for science in general.
You might even start with only yourself. And in 5 years if you want to distance yourself from your earlier, naïve efforts, then simply announce a name change.
Here's the ugly truth - it's hard to tell if most papers are "good", and there are many types of "good".
In my field, the main journal used to be "Journal of Computer Documentation". It spun off from "American Documentation" in the early 1960s. The JCD title changed in the 1970s to "Journal of Chemical Information and Computer Sciences" because "documentation" - a 1920s term - was considered already old-fashioned by the 1960s, and the title didn't capture the new focus on computers, and the underlying principles of information science.
It then became "Journal of Chemical Information and Modeling" in 2005 because the 1990s showed how to apply those information techniques to make predictive models.
Each name change is a form of distancing itself from its previous focus as part of a shift to a new focus.
Or is "distance oneself" not applicable for that situation?
Faculty of 1000 is the only thing I know of that's even close to this.
http://processalgebra.blogspot.com/2015/12/the-novelty-of-ar... lists a few more, including "Logical Methods in Computer Science published its first issue ten years ago and has become one of the favourite publication outlets for researchers working on logic in computer science, broadly construed."
I was just commenting about how to get around the "the journal isn't prestigious so no important research is published in it so it's not prestigious" problem.
You could create a newsletter that's useful to some, but anyone considering publishing their new research findings in your journal would be hurting themselves: it wouldn't be considered anywhere near equal to a "proper" publication (since you openly claim to accept non-novel findings as well), and it would also prevent them from ever publishing the work in any "better" journal, since it would no longer be unpublished research.
In practice new good journals are generally started by (sub)communities who know each other and feel that [a topic of] their discipline is not adequately covered by the existing journals. This means that since day 1 they have (a) a source of good new papers that they will want to send there and not elsewhere - the whole reason for them founding the journal, (b) a pool of respected and knowledgeable editors/reviewers (the same community), and (c) initial interest of the community who would all read (and cite) the journal. Given that, the only missing part is recognition by various indexes and funding agencies, which will arrive within a few years (3-5?) if the new journal is well run, productive and gets appropriate citations outside of it.
But I'm commenting now more to point out that some journals do accept non-novel findings, like 'Journal of Negative Results in BioMedicine'.
If I publish a paper in Nature, Nature Genetics, Science, etc. then I can hand-pick my next position. The more glam-journals I publish in, the better my chances for tenure.
The product of publishing isn't the dissemination of insight and innovation, but of gold stars academics can put on their resumes.
Sci-Hub doesn't just threaten copyright, but the entire status factory.
Papers in fancy journals are more prestigious, but the papers that get cited the most are the ones with full text freely available on the Web. So, when they can, scientists choose both.
(The better way to do this is to support a publisher that's both high-quality and open-access and submit to their journal.)
Further, we academics are incentivized, in terms of career, to publish in prestigious venues managed by official publishers. We aren't incentivized to publish in open access repositories as well. So, even though most of us want our articles to be disseminated widely, not all of us can find the time to make sure that the paper is also available on an open repository. This is a factor that explains why many articles are not available for free online even in cases where it would be legal for researchers to upload them.
IME the publishers fuck up the formatting anyways.
In my field of computer science (visual analytics / visualizations) at least some of the leading journals/conferences officially allow you to put a version of the paper on your website.
Just today we submitted a paper where the copyright form explicitly mentions that.
Usually I don't have any difficulties finding contemporary papers. It is papers from, say, thirty years ago, that are usually stuck behind paywalls.
It is not used as broadly as arxiv.org, but that may be starting to change.
I work in CS, and I know that if I want my paper to be read by the community I have to publish in a conference and/or on arXiv, as almost no one reads journals in CS. But then come the almost infinite assessments that the bureaucratic system in my university and country puts me through, and what do they look at? Impact factors, under which conference publications are worth next to nothing. So I constantly have to juggle between what is good for being relevant and known in the community and what is good for keeping my job, which are often incompatible.
This depends on the country, though. In the US, I think the funding agencies have left behind the impact factor fetishism in CS, and authors can (and often do) just publish in conferences. But in Spain, if you don't publish in journals you just don't get a position in academia; in fact, some universities are starting to require > X ISI JCR-indexed publications to be able to get a PhD. And I have heard that in e.g. China it's the same thing.
You're fortunate. I bet sci-hub is mostly used by life scientists.
You might notice that universities that recommend their researchers self-publish do not even show up in any university ranking.
One of the advantages of publication is peer review. When you cite someone's work, if it's in a reputable journal then you can assume that someone reviewed it and would vouch for the quality of the research. You have to take it with a pinch of salt (reviewers are never totally unbiased and they make mistakes), but in fields where research is easy to verify like computer science, a publication in somewhere like Pattern Recognition holds a lot of weight.
If you just cite a random paper that someone's self published on arXiv, you have no real idea whether what's being presented is true. Again, this isn't so much of a problem for CS papers where you can implement and test the work yourself, but for fields like physics where you're relying on other people's data, you want to be sure that someone smarter than you checked the numbers.
My client (who will remain unnamed) insists that I advocate for his position: Just as a work-for-hire belongs to the party who paid the creator, surely it matters that the creator signed a contract with a publisher giving up rights to distribute the work in return for invaluable services? Are you implying that the creator retains some sort of "moral right" to distribute copies of the work to whomever despite the explicit stipulations of the contract?
Next you'll want to tell me that just because the free market price of an essential medicine is prohibitive, one government should be allowed to flout another government's lawfully issued patent just for the benefit of its citizens! Much as you might wish otherwise, in the eyes of the law there simply is no "clear bright line" between infringing on copyright and slaughtering a ship's crew at sea in pursuit of treasure to bury.
Apologies for the spurious copyright notice at the top. Clearly it's not under copyright, since the author died over 70 years ago! 73, in fact.
So as to "promote the Progress of Science and useful Arts", in 1976 Congress passed a 47-year extension to the previous 28-year copyright term (for works that had not yet entered the public domain), giving them a total term of 75 years. Then in 1998 (shortly before the copyright for Mickey Mouse was set to expire) it was realized that even greater progress could be obtained with a slightly longer term. To further promote said progress, the duration of copyright was retroactively extended to the greater of 75 years since publication or 70 years after the death of the author. Unless it's a work of corporate authorship. Since corporations don't have a natural lifespan, it's only fair to extend the copyright on these works to 120 years after creation or 95 years after publication. In any case, it's clear that "Blackmask Online" couldn't hold any copyright, since all they did was format a public domain text into a PDF!
I'd like to know their revenue, and a full price list per journal they publish.
I very much believe that if we looked at what they make in profits, we'll see that they can't justify how much they charge. I, for one, have no sympathy for them.
How might that look? I mean, purely self-publishing isn't ideal, as it lacks the so-called quality control and prestige of established publications.
What might be a middle ground for disrupting this crappy monopoly on the world's knowledge?
Open Access exists, but you don't get the status you need from it (and, weirdly, it often costs the graduate student a couple hundred dollars to publish their paper open access, on top of what the journal already collects).
It'd be nice if there was a way to make the community more like open source software where you get status from sharing and collaborating instead of from hoarding information in fear of being 'scooped'.
As it stands the incentives are directly opposite collaborating and sharing information and really the best parts about science and research.
- built on a foundation of open source software
- new ways of presenting and consuming should be encouraged and easily discoverable
- completely outside the influence of the current publishing industry
There's a community of us working on this stuff. We hang out on gitter (https://gitter.im/codeforscience/community).
I believe we need a directory of scientific findings of sorts. It should be possible for each finding to cite the other findings it relied on, thus creating a graph of influence/significance. The current situation with citations often involves friends citing friends, so I don't consider it reliable. It should also include open questions, thus giving research directions.
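To make the idea concrete, here's a toy sketch of such a directory (all finding names are made up): findings are nodes, "relied on" links are edges, and a finding's influence is the number of findings that transitively depend on it.

```python
from collections import defaultdict

# Hypothetical directory: each finding lists the findings it relied on.
cites = {
    "finding_C": ["finding_A", "finding_B"],  # C relied on A and B
    "finding_B": ["finding_A"],
    "finding_A": [],
}

def influence(finding, graph):
    """Count how many findings transitively rely on `finding`."""
    # Invert the "relied on" edges so we can walk dependents.
    reverse = defaultdict(set)
    for f, deps in graph.items():
        for d in deps:
            reverse[d].add(f)
    seen, stack = set(), [finding]
    while stack:
        for dependent in reverse[stack.pop()]:
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return len(seen)

print(influence("finding_A", cites))  # 2: both B and C ultimately rely on A
```

Open questions could be modeled the same way, as nodes with no supporting findings yet, which would make research gaps directly queryable.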
> The rationale behind publishing in long-form journal articles is not really valid in our century.
No, it's perfectly valid. Some ideas can't be expressed in short form. A 5-page proof requires a 10-20 page article to properly motivate and provide context/explanation. An evolutionary or piece-by-piece approach doesn't make any sense; you'll end up with fragmented dead ends. A waste of everyone's time.
> and findings are typically summarized in their 3-4 figures anyway
I'm not really sure which fields you're referring to, but I assure you that you're over-generalizing. This isn't even remotely true even for very empirical sciences.
> There are proposals for new approaches
Usually these are appropriate only for a single subdomain or methodology and only provide one particular and extremely opinionated view on the results. Such as the proposal in the article you posted.
The overhead isn't worth it and there's a huge risk the relevant field(s) move quickly enough that the presentation method becomes obsolete before it becomes useful.
> It should be possible for each finding to cite other findings it relied on
...I don't know what to say to this...
> thus creating a graph of influence/significance
Yeah, we have this. It's called bibliometrics. UNIVERSALLY HATED by anyone who's not a bean counter. Good people ignore them. Bad people optimize for the metrics and it becomes a stupid game that has nothing to do with the thing you're trying to measure.
> The current situation with citations often involves friends citing friends , so i don't consider it reliable
This is just one reason among many that bibliometrics (a.k.a. any necessarily poor attempt at "a graph of influence/significance") are a poor mechanism by which to judge science...
> It should also include open questions , thus giving research directions.
In many fields literally every paper includes this. In most it's rather obvious to anyone who comprehends the paper what the next steps are.
> In 2015, Elsevier reported a profit margin of approximately 37% on revenues of £2.070 billion.
Whatever they may claim their costs are, the money does not buy a meaningful contribution. The meaningful contribution (writing and reviewing) is done by scientists themselves, almost 100% independently of Elsevier. Elsevier employs some senior editors who decide which reviewer gets which paper, and that's about their entire contribution to the scientific process. The rest is their website costs, PR, and fluff that is not relevant to the science itself.
They get the research papers for free, the editors don't get paid ... it's such a scam, and always has been.
Their other domains:
https://sci-hub.cc (uses sci-hub.io certificate)
https://sci-hub.bz (uses a separate certificate and ip address -- 220.127.116.11)
And a tor site: scihub22266oqcxt.onion
The fundamental difference of content-addressed networks is their resiliency in the face of a single authority trying to track down all of the sources that have copies of the content.
Even though IPFS is still in its infancy, one of its primary goals is to solve the problem of content suddenly disappearing from the Internet.
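The core idea can be sketched in a few lines (a toy illustration only; real IPFS uses a multihash/CID format, not bare SHA-256 hex):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address a blob by the hash of its contents, not by its location."""
    return hashlib.sha256(data).hexdigest()

paper = b"example paper contents"
addr = content_address(paper)

# The address doesn't name a server, so any peer holding the bytes can
# serve them; a client re-hashes what it received to verify it, which is
# why untrusted mirrors are safe to fetch from.
received = paper  # imagine this arrived from some random peer
assert content_address(received) == addr
```

Since the address is derived from the content itself, taking down one host changes nothing: the same address keeps resolving as long as any copy survives anywhere on the network.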
I don't know that the sci-hub underlying database is available outside of the site. I expect with growth they will need to move to torrents or similar to minimize their bandwidth requirements.
yes, although their bandwidth is horrible
.org was their original domain which was pulled a while ago.
They are still posting updates there. Hopefully they don't get that pulled too.
You want everybody to have access, but you don't want them to get it for free.
Wow, so you want the entire world to all pay for the material you were given for free. Hmmmm.
I really hope the final standard didn't have any substantive changes, so that my implementation is correct. But all I can really do is cross my fingers since it at least seems to work with available data.
Charging for standards has a direct negative effect on the proliferation of those standards, especially today when a lot of code is written as open source hobby projects, which are left out in the cold.
Then, once you've found a source, always remember to bookmark it and save a copy. They disappear from search indices quickly, despite the site still being around. It also helps to repeat your search periodically as sometimes the "shifting sands can uncover treasure momentarily" ;-)
Also put some form of anonymous contact info in your profile. Low chances, but who knows...
EDIT: Looks like your account has a bit of history to it. If there isn't anything particularly identifying in there, the above idea should be good.
Granted, a lot of work goes into these standards, and they don't come out of nowhere. Still.
It gets even worse. I am working on software that conforms to guidelines set by the Dutch government for handling healthcare data. These Dutch national standards were actually (and justly!) made free instead of available at a fee because they became requirements instead of just guidelines. But these standards are (naturally) based on international standards. This is fine for the RFCs that are freely available, but for the three (!) relevant Dutch standards (NEN 7510, 7512, 7513) you would need access to another forty-ish (!!) ISO standards that may, or may not, be relevant to the section you are dealing with.
Of course even a single PDF for these healthcare ISO standards costs up to €100, to be paid in Swiss Francs…
It's not perfect, but it allowed me to implement an RFID data decoding library (ISO-28560) at my last job.
Now THAT is chuckle worthy.
The gall of these publishing companies never ceases to amaze.
The walled garden of academia is not so fruitful as outsiders are led to think it is.
We struggle year after year to pay salaries and still have money for the equipment and materials our projects need; we don't have much left to spend on buying access to articles that we and our peers wrote and reviewed, all for free, while some publisher makes millions on it.
That said, the best way to get a research paper in CS is to check the author or lab's website and, if it's not there, e-mail the first author directly.
Yup. Seems like civil disobedience from the research community. Pretty interesting to see a civil rights movement happening online. Who says you can't sit at home and be an activist? ;-)
Don't put yourself in a position where the lawyers are the only ones on your side.
For hypothetical crimes he could've committed had he lived? Because he certainly wouldn't be in prison for things he actually did.
Had Aaron pleaded guilty, he'd be out by now as both of the plea agreements he was offered would have meant 6 months or less .
And trial? Why would he have gone to trial? There was no question of his guilt.
Also FWIW, so am I and it's not all that bad.
I don't personally believe it would've achieved anything, but that's how the world works.
Could any of the 4 downvoters clarify in what conceivable scenario would Aaron still be in prison if he was alive?
Had he pled guilty, he would undoubtedly be out by now. And going to trial obviously wasn't a realistic option considering the plea deals he was offered (BTW If someone here disagrees about that, I'd really like to know why.).
And that's why plea deals are an injustice. The honest are intimidated into becoming felons, and the real criminals never face a judge.
The law may not be just, but there's little doubt that he violated it.
Considering his charges, Swartz received exceptionally good plea deal offers.
A non-violent crime shouldn't result in any prison time.
Precisely. He didn't have to kill himself, and if he'd put a bit more forethought into what he was doing, we'd probably be discussing this in a thread around an interview with him rather than with Ms. Elbakyan. But nobody wants to hear that, because it detracts from the story of this generation's Bobby Fischer.
Incidentally, this is why I often dislike downvotes by those who don't comment directly on non-troll comments. It doesn't add a damned thing to sensible discourse, it just smacks of punitive actions against those they disagree with. You really notice it on certain topics, like Aaron - but for some reason it really occurs in anything Apple related.
Let Elsevier go down in flames. I have published more than 50 academic papers and have actively avoided Elsevier. To be honest, this was not too difficult, as they have a lot of journals addressing specialized subtopics that mostly seem to attract manuscripts rejected from first-tier journals.
That's for one midrange article.
This doesn't even feel wrong. These parasites had it coming.
It still feels wrong when popular artists do not get paid for their work because it's more convenient for people to pirate it.
I mean, creative/artistic efforts take a lot of time, regardless of how easily reproducible the results are. By paying for their previous songs, you're basically funding their next songs, something you should be interested in as a fan.
Popular artists barely get paid for the sale of their music. They earn the bulk of their money from touring and merchandise sales (see: finite things).
Given that, piracy is actually far more detrimental to the publisher/label than it is to the artist, because each pirate could be an additional fan who purchases another ticket at a show or another t-shirt at the merch table.
In fact, by charging money for the music, it actively discourages people from listening to it because they can't afford it or already spent their budget on artists they already know and like. In an industry where gaining fans is the most important thing you can do, this seems counterproductive.
Personally I stopped chasing music and movies, mostly because nothing of that era appeals to me anymore :D (I also overdid it when I was younger and don't feel the need to binge-watch like before; I'd rather pick a few things to give money to, to support creators putting out nice work).
That said, I do grab some old alternative stuff from time to time. Quite often it's not buyable anyway. Release the krakens, Hollywood.
We should strive for a logical explanation that transcends emotional response.
1 + 1 = 2, regardless of how I feel about the numbers or operators.
Honestly, if you are implying that math is a philosophy that is just as fallible as human emotion I am very interested, although very, very skeptical.
PS - Please do not ruin math. It serves as my only foundation with reality.
The fact that the distribution cost is close to zero is irrelevant.
The problem with copyright is that a big proportion of the price charged is basically cartel/gatekeeper protection money, and has no useful relationship to the true social cost/value of creative work.
Before I wrote book X, there were no copies of book X in the world. The supply was zero, not infinite.
After I wrote book X, the supply was one. Then it grows as people make copies. Copyright law can limit the growth, certainly. (Or enhance it, as the potential profits help promote distribution.)
But copyright law is meant to encourage the growth from 0 to 1 by making it possible to make money from the growth from 1 to N, before the lack of copyright protection makes it effectively infinite.
It's true that plenty of people will write even without getting paid. It's also true that plenty of people wrote because they will get paid. Heinlein started writing to pay off his mortgage.
Copyright law is also the basis for free software. If you take away copyright law, you end up supporting proprietary software distributed only as compiled machine code, not reusable source. This too "impoverishes" the human race.