Sci-Hub Is Blowing Up the Academic Publishing Industry (jasonshen.com)
291 points by jasonshen on May 30, 2016 | 115 comments



> In 2015, the company earned $1.1 billion in profits on $2.9 billion in revenue...

How in the world does a company like Elsevier accrue $1.8 billion in expenses? Apple's 2015 worldwide operating expenses were $30 billion [0], or about 17 times higher. Yet Apple's costs for manufacturing, retail, sales, payroll, etc. ought to be hundreds or thousands of times greater than Elsevier's; Apple has 115,000 employees [1], for example, while Elsevier only has 7,200 [2].

It just seems that if their revenue is really $2.9 billion, then even if we weren't in an age of nearly expense-free digital distribution, Elsevier's profit margin should be far higher than it is.

What do they spend all that money on?

[0] http://www.wikinvest.com/stock/Apple_(AAPL)/Data/Total_Opera...

[1] https://en.wikipedia.org/wiki/Apple_Inc.

[2] https://en.wikipedia.org/wiki/Elsevier

Edit: My employee-count example is a bad one, since the ratio of employees for each is about the same as the ratio of operating expenses. But Apple has huge R&D, manufacturing, and retail expenses. Elsevier's equivalents must be minute in comparison.


Terrible comparison. Best to look at other publishers. To start with, let's look at one with free labor for contributions and review. It costs $45.9 million a year. You might have heard of it:

https://annual.wikimedia.org/2014/#s-6

The next comparison would be finding a firm that does all of Elsevier's services. You'd start by knowing what those services are. Elsevier is part of RELX Group, which describes its activities here:

http://www.relx.com/OurBusiness/Pages/Home.aspx

That's a lot. Elsevier itself mainly buys, accepts for free, and hosts research. They might pay reviewers. They also do commercial work for businesses. They also try to make money for shareholders. That always adds up. ;) Here's their annual report.

http://www.relx.com/investorcentre/reports%202007/Documents/...

It has a lot of data but can't directly answer your question. For Elsevier, four things stand out as possibly high-cost items:

1. 17,000 editors reviewing the articles for 2,500 journals. Managing the journals plus the editors already constitutes a large cost.

2. 23% of revenue comes from print sales. Physical printing adds real cost, especially when we're talking about big, technical reports and books.

3. There's in-person sales and sales agents for the subscription models. Probably lots of sales agents. They'll likely take a commission.

4. The amount of money involved in being publicly traded means there's going to be plenty of compliance, management, and executive overhead. On top of nice executive compensation.

Together these would easily put costs in the $100-500 million a year range. Probably more.

EDIT: As gleb said, the Consolidated Income Statement (p. 98 in the PDF reader) will tell you in detail what they spend it on. The above activities are common among these types of organizations, though.


- The 17,000 editors are almost all unpaid. The paid editors who coordinate the free work of others are typically very few per journal (e.g. Cell: 8 editors, Neuron: 6 editors).

- The print sales (along with most of their sales) come from bundling everything into large subscriptions that universities are then forced to accept. 99% of the paper they print is never used; most people download the papers and re-print them anyway. (Not saying paper doesn't cost anything, just that people would not normally pay for such a useless thing.)


Damn. That knocks out quite a bit of the baseline cost for an alternative.


> What do they spend all that money on?

Here is the most recent annual report of their parent company:

http://www.relx.com/investorcentre/reports%202007/Documents/...

I don't have the time nor the knowledge to summarize it, but it's quite interesting - e.g. page 8: 50% of their income is subscriptions, and 80% of those are electronic. The other 50% is "transactional", which I guess means fees.

I can't read investor reports but I guess some answers are in there.


A transactional expense may be purchasing a license to the online version of the content for perpetual access, similar to how you can keep a print journal forever once you buy it. The way some online publishing sales work is that you buy this year's journals as a capital expenditure for perpetual access. Then you usually still need to pay a much smaller hosting/access fee forever, but you've essentially licensed the content for good. This works better for some libraries than the subscription model.


Page 94 - Consolidated income statement


So, roughly, 50% cost of sales (print, inventory, online services), 40% salaries, 10% depreciation


Comparing Sci-Hub to Napster is a bit misleading. Elsevier's customers are not individual scientists; they're university libraries, who pay huge sums for massive bundles of journal licenses. I have an extremely hard time imagining university libraries cancelling their deals with Elsevier and instead telling their users to check Sci-Hub. What it might result in is slowed growth, as new customers feel less pressure from their users to purchase licenses when those users can already access the articles. So, a problem for Elsevier, but very unlikely to be its death knell or a cause for it to change its business model.


I worked for a top-ten academic research library, and even with budgets in the millions to low tens of millions, there were still plenty of journals we were not buying. Outside the big bundles, it was often pressure from professors that would force a specific big-ticket journal to be acquired.

If sci-hub.io stops those professors from making requests to the library, then there won't be as much pressure to pick up every last journal.


OP here. Like most analogies, this one isn't perfect. Napster and Sci-Hub are similar in that they are underground yet widely used platforms for acquiring digital content that would otherwise cost money to purchase but, unlike CDs or actual printed journals, costs nothing to replicate. Many academics are specifically downloading papers they otherwise wouldn't have access to via their institution because access is too expensive. And libraries will eventually look for ways to cut expenses; if people aren't clamoring for more subscriptions (because they know they can already get the papers for free), then over time Elsevier's revenues will decline. There is no need for a death knell. Record companies aren't extinct, but they had to reshape dramatically, in part because of the prevalence of pirated music via platforms like Napster.


To add, it's not like Napster, because none of the "artists" (scientists) will go after Sci-Hub; if anything, almost everyone cheers them on.


If there are nonetheless economic consequences for the publishers, the end result will be the trimming of some fat.


Or, god forbid, creating value that isn't available elsewhere. You know... competition.

Why use Netflix when you can get TV shows/movies for free? Why use Pandora/Spotify when you can get music for free? Why use Steam when you can get games for free? Dropbox? Microsoft Office?

Because those services/products offer something that the free versions don't...

If Elsevier is losing customers because of Sci-Hub, they need to look at why they are losing customers... and what they can do to get them back... or whether they even can (maybe other countries where their services aren't available).


Geographically dividing markets doesn't make any sense for what are essentially information products distributed over the Internet.

Measures like geoblocking just drive people in markets who are blocked from accessing content, or who are charged higher prices, to just pirate the material. Or alternatively they bypass the geoblocks.

Even now people are starting to give up on the various mechanisms used to bypass geoblocks as it's getting harder to achieve, but before publishers start congratulating themselves, they should consider that the people who bypassed their geoblocks were willing to pay what they consider a fair price for that content. The fact is, in most cases they bypass the geoblocks because the pricing is not fair.

I'm not sure these publishers can legitimately complain about how unfair it is that people steal their content when they are literally discriminating against people by charging them higher prices based on their nationality.

There may be an ethical weight you can assign to stealing and a lower ethical weight you can assign to price gouging, but they are both unethical.

I find it interesting that when I bring up the fundamental unfairness of geoblocking, people immediately chime in to state that stealing is wrong, but they don't consider that a case can also be made that price gouging is really a form of theft of a consumer's limited resources.


Geoblocking goes beyond simple greed... different countries have different laws, supply, demand, etc. Some countries may outlaw certain things. Others may have much less money per person/family/GDP.

While I agree on the base level (why should country A pay more than country B... Australia, for example, routinely pays more for software), just saying "it's not fair" or "it's unethical" is too simplistic.


If certain countries outlaw certain things, then it is not up to a country that allows that same material to block the supply.


But if they have an agreement with the other country (e.g. a trade agreement), then they may do it as a favour.


The problem with this is that currently, as best as anyone knows, Sci-Hub works by proxying through legitimate logins. At the university level, there is usually a server running as a proxy so that students and instructors on campus can access journals (all calls get proxied), or students can access them off campus using their federated student logins, which then creates a quick web proxy.

Since it's just logins/access to proxied logins, Sci-Hub has a third option, which they will most definitely double down on: trying to "fix" the login problem they have.

As important as I think piracy is for situations just like this, the problem with trying to understand piracy in a market environment is that piracy doesn't really follow the same rules as other market elements. While there may be some economic value to pirates, for the most part, they can compete on cost, free of the trappings of business deals. They don't have to worry about making a profit, and as such are not influenced by the responses of other elements, such as those they're pirating from.

This isn't "oh woe to the publishers"; I do think they've been bending us over forever. But to expect them to respond to piracy as they would another business is unrealistic in my mind, especially when they can just regulate and sue their way to their goals.

What hopefully happens as a result of Sci-Hub is that it changes the way researchers and their institutions look at paper publication, once they realize that there's no longer a need for Elsevier and its kin. That is change I hope happens. But to expect Elsevier et al. to do anything but try to defend their status quo is kind of absurd.


While I agree on some of the major points - the business will try to maintain the status quo, and reacting to pirates is different from reacting to businesses - I think that, as someone on the outside without a big stake either way, this can't help but have the same issues as the IAA vs. pirates.

At the end of the day, now that the IAA has moved toward models that meet the needs that went unmet for years (streaming, large catalogs at high fidelity, etc.), they are able to (gasp) make money again. They were afraid to leave the "old media", but once forced (kicking and screaming every step of the way), they were able to find new ways to meet demand. People pay for music again.

Not all things are the same between the two (academic papers and music/MP3s), obviously, but the fact is Elsevier can only stick to the "old way" at its own peril. Closing "Napster" didn't solve it - in fact it just created a whack-a-mole game. Attacking Sci-Hub won't solve the underlying problem: Elsevier obviously isn't meeting market needs in an ever-changing technology landscape.


>for the most part, they can compete on cost, free of the trappings of business deals.

Pirate sites are usually run by small teams; they don't have dedicated user-interface experts, can't easily offer apps, etc. Users are willing to pay for convenience, so there's lots of room for big companies to be competitive with pirates.

And yet, according to the article, about 15% of SciHub users use SciHub because it's more convenient than the alternatives. If the pirates are cheaper and have the better user experience, why would anybody not use the pirate platform?

Of course building a product that's competitive with pirate offers doesn't prevent you from legally challenging them. A war can be fought on multiple fronts at once.


Elsevier does not control the product (because they are not making it). They leech off the rents that they gained decades ago. There is no future for them.


The one thing I've not seen mentioned yet in the Sci-Hub discussions is that the national level academic ISPs seem to have put a very effective domain and IP address block on SciHub. I've tested this both from UK and Scandinavian university networks I have access to, and none of the SciHub domains nor the direct IPs are accessible.

Can anyone in the US or rest of Europe with university access test this?

It's certainly ironic when you need an ssh tunnel from your university to your home to be able to read academic papers at work.

Edit: I have not tried eduroam, just uni ethernets. All Norwegian universities have the block both on DNS and on IPs. Imperial College only has the block on DNS, not on the IP.



DTU (Chemical Engineering) has not blocked access to http://sci-hub.cc.

Maybe you can check if you can resolve the domain with an open DNS server like 8.8.8.8 from Google and then access it from your university network.
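For example, from a shell (assuming dig and curl are available; the IP below is the one posted elsewhere in this thread, so it may be stale):

  # resolve via Google's open resolver instead of the campus DNS
  dig @8.8.8.8 sci-hub.cc +short

  # fetch just the headers while pinning the name to that IP,
  # to tell a DNS-only block from an IP-level block
  curl -I --resolve sci-hub.cc:80:31.184.194.81 http://sci-hub.cc/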


http://sci-hub.cc/ works in the UK


This is from Cambridge? Eduroam or uni ethernet?


3 Mobile, UK here -- it's accessible.

It's shocking to me that your university administrator has put a block in place on a website that isn't obviously malicious.

VPNs are cheap. In this case, I think it's worth it to invest in one.


Virgin Media is blocking some addresses, but not that one, so I presume the blocked ones are on the UK blacklist.


Loading fine from ox.ac.uk VPN


You don't need DNS, access it directly by IP: http://31.184.194.81/


My university hasn't blocked them either... JK, sci-hub is my university.


My university in Germany hasn't blocked them either.


My university (in USA) has not blocked SciHub domains.


Works from my university here in New Zealand.


Not blocked here in France.


sci-hub.cc works at the University of Denver

Edit: also accessible via eduroam at DU.


Can you submit a support ticket and find out why?


My university in the US has not blocked it either.


Is this block eduroam wide or *.ac.uk wide?


I haven't tried eduroam, only cabled uni networks.

I just tried several Norwegian ones; traceroute indicates they are blocked by the national-level academic ISP in Oslo.

The one in the UK is stopped by a university "badware" firewall.


sci-hub.cc works in eduroam in my German university.


Available at Cornell through Eduroam.


Works in Cambridge, MA.


One thing that worries me about Sci-Hub is that even though they put all of their PDFs in torrents, it's too much for one individual to mirror. It would be nice if they implemented some scoring system that would help others mirror the most popular papers, or if they worked like The Pirate Bay - just associating names with magnet links. It would then be much easier to help work around the single point of failure. Right now those huge torrents have as few as 4 seeders.


It's not the popular papers that are important. Those will be well spread already. It's the unpopular "rare" ones that need help.

You mean the libgen torrents, or is there something specifically branded Sci-Hub?


It's possible to solve: you just make users download a zip file which includes 10 papers, including the one they're interested in.


BitTorrent kind of does this anyway, because in order to download one file you need to download the entire block it's in. In a case like this, where you have a large torrent with many small files, that will actually cover quite a few other files. If you ordered them strategically, you could even try to make sure every block has a fairly popular file in it.


Yeah, good idea. Sort the papers by popularity, and when grouping them, find the arrangement that minimizes the distance between any paper and the most popular ones, given the block size.


I don't think you need to do anything that complex. Just find the N most popular papers and make sure there's one in each block.

Or thinking about it, you could put them at block boundaries so people grab two blocks when they download them, in which case you'd only need half as many popular ones.
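In Python, a rough untested sketch of that interleaving (the sizes, download counts, and 4 MiB piece size are all assumptions; you'd pass the resulting file order to whatever tool builds the torrent):

  PIECE_SIZE = 4 * 2**20  # hypothetical 4 MiB torrent piece size

  def order_for_torrent(papers):
      """papers: list of (name, size_bytes, downloads) tuples."""
      ranked = sorted(papers, key=lambda p: p[2], reverse=True)
      cut = max(1, len(ranked) // 10)      # treat the top 10% as "popular"
      popular, rest = iter(ranked[:cut]), ranked[cut:]
      ordered, run = [], 0
      for paper in rest:
          ordered.append(paper)
          run += paper[1]
          if run >= PIECE_SIZE:            # nearing a piece boundary:
              hot = next(popular, None)    # drop in a popular paper so
              if hot:                      # this block stays worth seeding
                  ordered.append(hot)
              run = 0
      ordered.extend(popular)              # any popular papers left over
      return ordered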


I don't think that would accomplish much. People would just delete the 9 papers they don't care about rather than keep them around as clutter. And even if everyone did keep the extra files, what use would they be? It would be impossibly difficult to reconstruct the archive from them.

It would probably be better to break down the torrents into smaller, more coherent, and organized collections that contain all associated metadata (e.g. a collection of biology papers from a certain time-span or set of publications). That way you might incentivize people to maintain smaller personal-use archives that could be recombined back into the whole, if the need arises.

You could perhaps even build a system that could automatically generate, publish, and consume torrents with a particular (metadata + content) layout, but that's not restricted to a particular set of torrents. Then it could just consume certain topics from a feed of new torrents, from various sources, on the Pirate Bay.


Why, though? It's a waste of bandwidth to force someone to accept a bunch of information they don't need and didn't ask for.


I've always just thought we'd make a system where storing some extra information was just the price of entry. That is, in order to torrent the papers you want, you need to first share an extra gigabyte or so of papers chosen via some algorithm to ensure their availability.
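As a sketch of that selection (untested; seeder_count is a hypothetical per-paper lookup against the tracker, and the one-gigabyte quota is arbitrary):

  def pick_extras(catalog, seeder_count, quota=2**30):
      """catalog: list of (paper_id, size_bytes) tuples.
      Returns the rarest papers, up to ~quota bytes, as the
      'price of entry' the client must also download and seed."""
      chosen, total = [], 0
      for paper_id, size in sorted(catalog, key=lambda p: seeder_count(p[0])):
          chosen.append(paper_id)          # rarest-first, in the spirit of
          total += size                    # BitTorrent's piece selection
          if total >= quota:
              break
      return chosen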


I'd feel pity if you weren't trying to get something for free...

Heaven forbid you help propagate lesser used stuff as payment for getting access to the few papers that you feel entitled to get for free...


> Heaven forbid you help propagate lesser used stuff as payment for getting access to the few papers that you feel entitled to get for free...

I'd more than likely just delete the unusable stuff after setting aside what I actually needed.

So, yeah, waste of bandwidth to send in the first place.


The idea, especially with Torrents, is that you are sharing what you downloaded...

So even if you download 10 files and then delete 8... BEFORE you delete those 8, you can "share" them for a small period (or until you have a positive download-to-seed ratio), thus helping others.

Pay it forward and what not...

That's the theory... lots of people download and stop, thus leeching without helping.


I believe that when you download only a single file from a large torrent, you still help the swarm with the file you have.

So if they just put torrent links in their search engine and asked people to install a browser torrent extension like [1], it could work pretty well.

[1] For Firefox: https://addons.mozilla.org/en-US/firefox/addon/torrent-torna...



Yeah, the thing is that SciHub controls the information now, and it's up to them how well they secure it against failure. All in all, if we were to adopt IPFS, who would download all the terabytes of papers? And who would index them, and how?


If they used IPFS as the backend, everyone could sync and serve whatever parts they deem important. At least if I understand it correctly; IPFS is way too complicated for me.

I would much prefer an anonymous network anyways.


Freenet offers an anonymous, censorship-resistant distributed data store system like IPFS. It is very well suited to something like the Sci-Hub archive.


This is a work in progress.


> Right now those huge torrents have as few as 4 seeders

Any ideas on fixing this? I've tried offering her storage, bandwidth, or physical delivery of hard drives, but so far no bites (specifically, outright refusal).


Install a torrent client, download the torrents, keep seeding?


Is it safe to do? I mean on the legal side.


Safe? No, it's probably copyright infringement. But you have to consider that so is reading them on sci-hub.

The question is whether you want to engage in this particular act of civil disobedience and bear the risks or not.

The risks can also be reduced by living in the right jurisdiction or obscuring your identity, e.g. by renting a torrent seedbox with bitcoins.


No.


What's wrong with setting up a mirror at archive.org? After all, they /are/ a library, so they get some immunity from copyright. Or can we not do that?


> What's wrong with setting up a mirror at archive.org? After all, they /are/ a library, so they get some immunity from copyright. Or can we not do that?

Pretty sure no-one gets "immunity from copyright," at least not to the point where they could mirror Sci-hub with impunity (at least no one in the US). I'd be very surprised if archive.org did not respond to DMCA take-down requests, etc.


On the other hand, I believe I've read before that DMCA takedowns only cause content to become inaccessible publicly but not actually deleted, which in the case of SciHub sort of defeats the purpose; however, it does mean archive.org retains an "archive" of the content, presumably until its copyright expires or copyright laws change.


> I've read before that DMCA takedowns only cause content to become inaccessible publicly but not actually deleted

This is a violation of the takedown request and can result in harsher penalties for the host. It's not a great example, but MegaUpload was faulted for doing just that: removing the link between a URL and the content that URL pointed to. Granted, they did deduplication, so there could be other URLs pointing to the same "illegal" content, which is what the USA went after them for. But the fact remains that it's very much a grey area whether the host can retain that data, for its own future benefit, after someone claims the right to have it removed.


Looking at the text of the law, it always uses "remove, or disable access to" or similar language: https://www.law.cornell.edu/uscode/text/17/512


> presumably until its copyright expires or copyright laws change.

Then what's the point? The whole point of Sci-hub is to distribute copyrighted papers; an "archive" would be inaccessible until it is useless.


The point would be to keep an archive of scientific papers, but prevent access to them in case SciHub goes down.


Do you have a link for the torrents? I couldn't see anything on the main sci-hub site.


My suggestion for such filesharing: send her money to anonymously ship some people hard drives, or give them access to file-upload sites that have a copy. They can make their own copies that spread. And so on.


The problem with knowledge not being free is that it blocks the very engine of humanity’s progress: building on top of previous knowledge. That effectively slows us down and means we will take longer to solve the big issues of our time.

To me, the only reason why private (some might say evil) corporations like Elsevier still have a place in the world is because they somehow still have a monopoly for quality/prestige. That's basically for historical reasons and it can be solved by creating an open alternative for assuring quality of papers.

We're building just that: a platform for open peer review, meant as a layer over all scientific publications, used to surface the valuable content. You can check its current (early) state at http://peer2paper.org and are very, very welcome to get involved in the development.

Feel free to shoot me an email (me @ iamguico -d0t- com) as well, in case you want to discuss anything related to making the highest quality scientific knowledge available to everyone.

Cheers!


Academic publishers are a parasitic industry. They profit from somebody else's work without contributing anything of value themselves. If they go away, everybody wins.


What the big publishers offer is prestige, and most academics are addicted to that prestige. Getting into Science or Nature is like heroin. No academic-run open-access web journal offers this high. Apart from that, journal prestige also plays a key role in the distribution of funding, for example in the UK. Many researchers feel they just cannot afford to publish in more ethically correct journals because those journals have less prestige, which makes it harder to compete for funding. Of course, none of this changes the fact that this industry is completely parasitic.


Talk about vanity fair.

But I guess it's not only researchers who are to blame, but the people who need to be evaluating researchers without being researchers themselves.

I was disappointed in Hassabis/DeepMind's decision to publish the AlphaGo paper in Nature, though. Unlike most, they didn't have to.


Exactly. If these publishers weren't offering anything of value, then why would authors continue to use them when open source alternatives are available?

No one is holding a gun to their head saying "you must publish your paper in Science!"


Actually there's incredible pressure on academics to publish as much as they can, in journals with as much prestige as possible.

There may not be actual guns involved, but funding and research opportunities are very much on the line.

Which raises the question: who are the pirates here?

In what sense is a corporation holding an entire professional community to ransom while adding no real value not being piratical?

In reality the journal publishing "industry" is just another example of aggressive for-profit enclosure of what was once considered a public good.

I'm more ambivalent about rights issues around creative works, because I think everyone wins when unusually talented artists and creators earn enough to work full time.

But academic publishing seems straightforward extortion of value from universities and governments - ultimately from taxpayers - with no plausible upside.


> But academic publishing seems straightforward extortion of value from universities and governments

It seems universities and governments find value in the service provided by publishers. If they wanted, they could stop making funding and research opportunities dependent on how the results are published, right? I don't see how publishers have much leverage here, let alone a position from which they can extort anyone.


No, universities can't do this. They can't compete with publishers directly because publishers can decide to cut off the supply of journals to libraries.

They also can't set up a competing independent paper service because there's no way New Journal X can compete with the brand recognition of Nature or Phys Rev D.

The publishers have a de facto monopoly on the prominent brands. That's why it's extortion, and not a service. The only service provided by the publishers is access to the goodwill associated with the brand.

What universities can do - and are starting to do - is set up alternative publishing systems that bypass those brands. Arxiv is the most famous example, but increasingly communities of academics - not universities - are creating their own online enclaves, with the prospect of live debate about papers instead of the current somewhat dysfunctional formal peer-review system.

Eventually the goodwill for many disciplines will move to those online enclaves, and that's when publishers will lose their leverage.


I guess we agree, then. Publishers have something that customers want and cannot find elsewhere, and set prices accordingly. I wouldn't call that extortion, though. ("The NFL has a de facto monopoly on the prominent teams. That's why it's extortion, and not a service. The only service provided by the NFL is the ability to watch the games of the teams I care about in the stadium or on TV.")


You nailed it. Ignore the publishers. Just get academic institutions to put as much weight on a publication in an open journal as they do on one in the leading journals. The problem will then solve itself.


This is a dead horse that gets beaten over and over. One powerful value-add that groups like this provide is managing the anonymous peer-review process, a fundamental part of modern science. Do they take too much money? Sure. Are they worthless? Absolutely not.


I doubt that Sci-Hub is doing serious harm to the big publishers. Elsevier doesn't lose any money when someone downloads an article from SciHub; they actually save the money they would otherwise spend on bandwidth. The key here is that hardly anyone buys individual articles. What happens instead is that universities subscribe to journals, and this is not going to change, because a university can hardly tell its staff and students: "Hey everyone, we canceled our Elsevier subscriptions. Please use SciHub from now on." This is also the reason the comparison to the music industry and Napster does not really work. In sum, I doubt that Sci-Hub is such a big deal, much less a game changer. There have always been ways to find pirated PDFs of research articles; SciHub just made it a little more convenient.


More people getting their papers from SciHub means fewer people using their university's subscriptions, and even fewer people complaining when the university wants to cancel a subscription to save money.


If a university cancels its subscription to a journal and the publishers find a way to show that people from that university's network access articles from that journal on Sci-Hub, I'm pretty sure the publishers would be in a great position to sue. Even if the publishers eventually lose, the university would look incredibly bad in the media. Not sure they would risk that.


Sue who? The university? The university doesn't have any control over what their students or professors do in their own home.


Even Lars Ulrich regrets the side he took in 1999: "I wish that I was more...you know, I felt kind of ambushed by the whole thing because I didn't really know enough about what we were getting ourselves into when we jumped. [...] We didn't know enough about the kind of grassroots thing, and what had been going on the last couple of months in the country as this whole new phenomenon was going on. We were just so stuck in our controlling ways of wanting to control everything that had to do with Metallica. So we were caught off guard and we had a little bit of a rougher landing on that one than on other times than when we just blindly leaped. But you know, I'm still proud of the fact that we did leap...and I took a lot of hits and it was difficult."


"I'm not sorry for what I did, I just wish it hadn't, you know, made me look so much like a weird, out-of-touch old man yelling at kids to get off my lawn."


I think you're misreading that, particularly since he also said: "In retrospect, I'm proud of what we did, I really felt sideswiped on that one."

That, plus the end of your quote, makes it pretty clear that he is PROUD to have sued Napster and been shitty to every pirate.


It seems that the only thing keeping Elsevier and their ilk alive is the built-up reputation of the scientific journals that they have control over.

If academics got organized to the point of establishing new journals, with legit peer review, they could make all of the information free. Which it wants to be, right?

Obviously, there is the problem of establishing the credibility of these new "free journals," which is a serious obstacle for the reputation-based "publish or perish" pecking order of academia.

But once such a movement is established, it could eventually crush the paid journals and their rent-seeking profits. The captive journals would also eventually emancipate themselves and come around to this free information model.

Since such free journal articles would also be available on sites like Sci-Hub, the transition to (almost) totally free academic publishing could be unstoppable.


> It seems that the only thing keeping Elsevier and their ilk alive is the built-up reputation of the scientific journals that they have control over.

Isn't the reputation of those journals somewhat derived from the work that Elsevier puts into editing the papers that are submitted to them? I don't have enough knowledge to claim that it's a lot of work, nor that it costs them a lot to edit and review the submissions, but it's starting to sound like most of the arguments against Elsevier are completely ignoring the actual work they do.

"If we could find some way to do the work that makes the Science and Nature Journals desirable we could really change the world here. We already have the distribution portion figured out, so it shouldn't be hard!" I've got some really great ideas for an app, I just need a developer to implement it... etc.

Edit: I feel weird arguing for Elsevier. I personally would love to see all the paywalls and weird academic gateways that hide these fascinating nuggets of knowledge go away, but I have to play devil's advocate on this. Elsevier does do work, and that work is represented in the prestige that journals like Science exhibit.


Most of the review and editing isn't done by employees of the publisher, but by other scientists in the field. Sure, the main editors do important work coordinating the process and providing a point of contact, but the publishers get a lot of value from unpaid work in the community. Which they then charge for access to.

Plus, the status of a journal or conference is somewhat self-sustaining: since people want their papers in the best venues, they'll submit more/better papers to the venues known as the good ones, which means those venues have a large pool of high-quality submissions to select from, which means they can a) boast high rejection rates and b) publish great content, which means they are seen as high-quality, which ...


There have been some efforts along these lines in computer science and math:

http://theoryofcomputing.org/ (has some nice surveys if you're into theoretical computer science)

http://discreteanalysisjournal.com/ (the arXiv overlay model is interesting)

This is a bit apples to oranges—journals are not as central in CS as in biology or other older fields, and the norms about authorship and preprints tend to be more relaxed—but hopefully the trend will spread to more old-world sciences over time.


It's been tried. JHEP [1] started like that, as a journal by high energy physicists, for high energy physicists, online only, with infrastructure provided by SISSA. But after a few years, they turned it over to Springer.

[1] http://jhep.sissa.it/jhep/help/helpLoader.jsp?pgType=about


Really, this couldn't have happened to a nicer bunch. Research staff need to get published in "A+" journals for tenure, promotion and all that jazz. At the same time, they know that these journals are exploitative, bankrupting their University and generally impeding progress in their field. Sci-Hub solves everyone's problem. Who on earth (except the academic publishers, who we all agree are pond scum) is against it?


Cached, as the original seems to be unable to handle HN traffic.

http://webcache.googleusercontent.com/search?q=cache:8Y_lcTf...


Is the CSS messed up in this cache?


Wasn't before, but it looks like Google has updated their cached version.


I wonder how many people have bookmarked sci-hub under its IP address http://31.184.194.81/ so that the bookmark still works every time the domain name changes.


There's no need to bookmark the IP. Sci-hub actually runs a DNS server, and you can simply specify 31.184.194.81 as one of your DNS servers in your computer's network settings. Under OS X you can just add it in Sys Prefs. It's probably similar under other OSes.

If you do that, sci-hub.org, sci-hub.io, sci-hub.club still work just fine.
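From a terminal on OS X, that's roughly (assuming your network service is named "Wi-Fi"):

  networksetup -setdnsservers Wi-Fi 31.184.194.81

(networksetup -setdnsservers Wi-Fi empty reverts to the DHCP-provided DNS.)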


That's putting a lot more trust in them than bookmarking the IP address. (Well, not for https, but for everything else.)


Is this the 2016 version of "This page best viewed in Netscape at 800x600"?


Put the domain name and its IP in your HOSTS file; that's what it's for.
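For example, on Linux or OS X that's a line like this in /etc/hosts (the IP is the one posted elsewhere in this thread, so it goes stale if the server moves):

  31.184.194.81  sci-hub.io sci-hub.cc sci-hub.org sci-hub.club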


relevant xkcd for the map: https://xkcd.com/1138/


If only Aaron Swartz was alive to see this.


Is anyone aware of a Sci-Hub equivalent for accessing content on websites like Forbes for free?


archive.is


Sorry, but that doesn't work. It loops indefinitely on submission.


Try again.

For Forbes, using incognito and clicking twice (find the article in Google, click, close the tab, click again) usually works for me.


Startup theory 101: your customer is the person who pays you. What they buy is the story about how you're solving their problem.

What the universities are buying is not access to the papers. No-one in this whole system gives a monkey about papers.

The universities are buying a solution to their filtering problem. Allowing anyone to produce "proper science" with no filtering mechanism will kill the universities' business model.

There needs to be a filtering mechanism that stops "crackpot" science from getting in. The journal system works well for this. If all "proper" science is in the peer-reviewed journals then everything else can be ignored as crackpottery.

So the universities sell qualifications that are a prerequisite to being published in journals. People without qualifications do not get published. Papers not published in journals are ignored.

Universities pay Elsevier for journals so that they in turn can sell qualifications to students and farm research grants from governments.

None of this has anything to do with science or access to papers.

If SciHub develops an effective filtering mechanism (like ArXiv has apparently done) then that's the existential threat to both Elsevier and the Universities that support it.



