How in the world does a company like Elsevier accrue $1.8 billion in expenses? Apple's 2015 worldwide operating expenses were $30 billion, or about 17 times higher. Yet Apple's costs for manufacturing, retail, sales, payroll, etc. ought to be hundreds or thousands of times greater than Elsevier's; Apple has 115,000 employees, for example, while Elsevier only has 7,200.
It just seems that if their revenue is really $2.9 billion, then even if we weren't in an age of nearly expense-free digital distribution, Elsevier's profit margin should be far higher than it is.
What do they spend all that money on?
Edit: My employee-count example is a bad one, since the ratio of employees for each is about the same as the ratio of operating expenses. But Apple has huge R&D, manufacturing, and retail expenses. Elsevier's equivalents must be minute in comparison.
The next comparison would be finding a firm that provides all of Elsevier's services. You start by knowing what those are. Elsevier is part of Relx Group, which describes its activities here:
That's a lot. Elsevier itself mainly buys, accepts for free, and hosts research. They might pay reviewers. They also do commercial work for businesses. They also try to make money for shareholders. That always adds up. ;) Here's their annual report.
It has a lot of data but can't directly answer your question. For Elsevier, four things stand out as possibly high-cost items:
1. 17,000 editors reviewing articles for 2,500 journals. Managing the journals plus the editors already constitutes a large cost.
2. 23% of revenue comes from print sales. Physical printing adds real cost, especially when we're talking big, technical reports and books.
3. There are in-person sales and sales agents for the subscription models. Probably lots of sales agents, and they'll likely take a commission.
4. The amount of money involved in a publicly traded company means there's going to be plenty of compliance, management, and executive overhead. On top of nice executive compensation.
These together would easily put costs in the $100-500 million a year range. Probably more.
EDIT: As gleb said, the Consolidated Income Statement on PDF reader p 98 will tell you in detail what they spend it on. The above activities are common among these types of organizations, though.
- The print sales (along with most of their sales) come from bundling everything into large subscriptions that universities are then forced to accept. 99% of the paper they print is not used. Most people download the papers and re-print them, actually. (Not saying paper doesn't cost anything, but people would not normally pay for such a useless thing.)
Here is the most recent annual report of their parent company:
I don't have the time nor the knowledge to summarize it, but it's quite interesting - e.g. page 8: 50% of their income is from subscriptions, and 80% of those are electronic. The other 50% is "transactional" - I guess that means fees.
I can't read investor reports but I guess some answers are in there.
If sci-hub.io stops those professors from making requests to the library, then there won't be as much pressure to pick up every last journal.
Why use Netflix, when you can get TV shows/movies for free? Why use Pandora/Spotify when you can get music for free? Why use Steam when you can get games for free? Dropbox? Microsoft Office?
Because those services/products offer something that the free versions don't...
If Elsevier is losing customers because of Sci-Hub, they need to look at why they are losing customers... and what they can do to get them back... or whether they even can (maybe other countries where their services aren't available).
Measures like geoblocking just drive people who are blocked from accessing content, or who are charged higher prices, to pirate the material. Or alternatively they bypass the geoblocks.
Even now people are starting to abandon the various mechanisms used to bypass geoblocks as it's getting harder to achieve, but before publishers start congratulating themselves they should consider that those people who bypassed their geoblocks were willing to pay what they considered a fair price for that content. The fact is, in most cases they bypass the geoblocks because the pricing is not fair.
I'm not sure these publishers can legitimately claim how unfair it is that people steal their content when they are literally discriminating against people by charging them higher prices based on their nationality.
There may be an ethical weight you can assign to stealing and a lower ethical weight you can assign to price gouging, but they are both unethical.
I often find it interesting that when I bring up the fundamental unfairness of geoblocking, people immediately chime in to state that stealing is wrong, but they don't consider that a case can also be made that price gouging is really a form of theft of a consumer's limited resources.
While I agree on the base level (why should country A pay more than country B... Australia routinely pays more for software, for example), just saying "it's not fair" or "it's unethical" is too simplistic.
Since it's just a login/access via proxied logins, Sci-Hub has a third option which they will most definitely double down on: trying to "fix" the login problem they have.
As important as I think piracy is for situations just like this, the problem with trying to understand piracy in a market environment is that piracy doesn't really follow the same rules as other market elements. While there may be some economic value to pirates, for the most part they can compete on cost, free of the trappings of business deals. They don't have to worry about making a profit, and as such are not influenced by the responses of the other players, such as those they're pirating from.
This isn't "oh woe to the publishers"; I do think they've been bending us over forever. But to expect them to respond to piracy as they would another business is unrealistic in my mind, especially when they can just regulate and sue their way to their goals.
What hopefully happens as a result of Sci-Hub is that it changes the way researchers and their institutions approach paper publication in the future, once they realize that there's no longer a need for Elsevier and its kin. That is a change I hope happens. But to expect Elsevier et al. to do anything but try to return to the status quo is kind of absurd.
At the end of the day, now that the RIAA has moved towards models that meet the needs that were unmet years ago (streaming, large catalogs at high fidelity, etc.), they are able to, gasp, make money again. They were afraid to leave the "old media" behind, but once forced (kicking and screaming every step of the way) they discovered new ways to meet demand. People pay for music again.
Not all things are the same between the two (academic papers and music/MP3s), obviously, but the fact is Elsevier can only stick to the "old way" at its own peril. Closing Napster didn't solve it - in fact it just created a whack-a-mole game. Attacking Sci-Hub won't solve the problem that Elsevier obviously isn't meeting market needs in an ever-changing technology landscape.
Pirate sites are usually run by small teams, don't have dedicated user-interface experts, can't easily offer apps, etc. Users are willing to pay for convenience, so there's lots of room for big companies to be competitive with pirates.
And yet, according to the article, about 15% of SciHub users use SciHub because it's more convenient than the alternatives. If the pirates are cheaper and have the better user experience, why would anybody not use the pirate platform?
Of course building a product that's competitive with pirate offers doesn't prevent you from legally challenging them. A war can be fought on multiple fronts at once.
Can anyone in the US or rest of Europe with university access test this?
It's certainly ironic when you need an ssh tunnel from your university to your home to be able to read academic papers at work.
Edit: I have not tried eduroam, just uni ethernets. All Norwegian universities have the block both on DNS and on IPs. Imperial College only has the block on DNS, not on the IP.
Maybe you can check whether you can resolve the domain with an open DNS server like Google's 8.8.8.8, and then try accessing it from your university network.
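To make the comparison concrete, here's a rough sketch in Python of how you might interpret the two answers (the function names are made up; the public resolver's answer would come from an unblocked network, e.g. via `dig @8.8.8.8`):

```python
import socket

def resolve(host):
    """Resolve a hostname with the system (campus) resolver; None on failure."""
    try:
        return socket.gethostbyname(host)
    except OSError:
        return None

def classify_block(campus_answer, public_answer):
    """Guess the kind of block by comparing the campus resolver's answer
    with the answer a public resolver gives from an unblocked network."""
    if campus_answer is None and public_answer is not None:
        return "dns-only block"        # switching resolvers may be enough
    if campus_answer is not None and campus_answer != public_answer:
        return "dns redirect"          # campus resolver returns a bogus address
    if campus_answer is None and public_answer is None:
        return "unresolvable anywhere"
    return "no dns block"              # any block must be at the IP level
```

If the domain resolves everywhere but the site still doesn't load, the block is at the IP level and a DNS workaround won't help.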
It's shocking to me that your university administrator has put a block in place on a website that isn't obviously malicious.
VPNs are cheap. In this case, I think it's worth it to invest in one.
Edit: also accessible via eduroam at DU.
I just tried several Norwegian ones, traceroute indicates they are blocked by the national level academic ISP in Oslo.
The one in the UK is stopped by a university "badware" firewall.
You mean the libgen torrents, or is there something specifically branded sci-hub?
Or, thinking about it, you could put them at block boundaries so people grab two blocks when they download them, in which case you'd only need half as many popular ones.
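The boundary idea can be sketched with a little arithmetic: a BitTorrent client has to fetch every fixed-size piece a wanted file overlaps, so a popular paper that straddles a piece boundary drags a neighbouring piece along with it. A toy illustration (offsets and piece size are made up):

```python
def pieces_for_file(offset, length, piece_length):
    """Indices of the torrent pieces that a file at byte `offset` with size
    `length` overlaps. A client must download all of them to get the file."""
    first = offset // piece_length
    last = (offset + length - 1) // piece_length
    return list(range(first, last + 1))

PIECE = 262144  # 256 KiB, a common piece size

# A paper that straddles a piece boundary forces two pieces to be fetched...
print(pieces_for_file(200000, 100000, PIECE))   # [0, 1]
# ...while the same paper aligned to a boundary needs only one.
print(pieces_for_file(262144, 100000, PIECE))   # [1]
```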
It would probably be better to break down the torrents into smaller, more coherent, and organized collections that contain all associated metadata (e.g. a collection of biology papers from a certain time-span or set of publications). That way you might incentivize people to maintain smaller personal-use archives that could be recombined back into the whole, if the need arises.
You could perhaps even build a system that could automatically generate, publish, and consume torrents with a particular (metadata + content) layout, but that's not restricted to a particular set of torrents. Then it could just consume certain topics from a feed of new torrents, from various sources, on the Pirate Bay.
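A minimal sketch of that grouping step, assuming each paper record carries simple metadata (the field names and the bucketing rule here are hypothetical):

```python
from collections import defaultdict

def group_into_collections(papers):
    """Bucket paper records into smaller, coherent collections (here: by
    subject and decade), each of which could become one torrent with its
    metadata bundled alongside the content."""
    buckets = defaultdict(list)
    for p in papers:
        decade = (p["year"] // 10) * 10
        buckets[(p["subject"], decade)].append(p)
    return dict(buckets)

papers = [
    {"doi": "10.1/a", "subject": "biology", "year": 1998},
    {"doi": "10.1/b", "subject": "biology", "year": 1994},
    {"doi": "10.1/c", "subject": "physics", "year": 2003},
]
for (subject, decade), items in sorted(group_into_collections(papers).items()):
    print(subject, decade, [p["doi"] for p in items])
```

Each bucket could then be packaged as one torrent with a metadata file alongside the PDFs, so a topic feed only needs to announce new buckets.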
Heaven forbid you help propagate lesser used stuff as payment for getting access to the few papers that you feel entitled to get for free...
I'd more than likely just delete the unusable stuff after setting aside what I actually needed.
So, yeah, waste of bandwidth to send in the first place.
So even if you download 10 files and then delete 8, before you delete those 8 you can "share" them for a short period (or until you have a positive download-to-seed ratio), thus helping others.
Pay it forward and what not...
That's the theory... lots of people download and stop, thus leeching without helping.
So if they just put torrent links in their search engine and ask people to install a browser torrent extension like , it could work pretty well.
for firefox: https://addons.mozilla.org/en-US/firefox/addon/torrent-torna...
I would much prefer an anonymous network anyways.
Any ideas on fixing this? I've tried offering her storage, bandwidth, or physical delivery of hard drives, but so far no bites (specifically, outright refusal).
The question is whether you want to engage in this particular act of civil disobedience and bear the risks or not.
The risks can also be reduced by living in the right jurisdiction or obscuring your identity, e.g. by renting a torrent seedbox with bitcoins.
Pretty sure no one gets "immunity from copyright," at least not to the point where they could mirror Sci-Hub with impunity (at least no one in the US). I'd be very surprised if archive.org did not respond to DMCA take-down requests, etc.
This is a violation of the takedown request and can result in harsher penalties for the host. It's not a great example, but MegaUpload was faulted for doing just that: removing the link between a URL and the content that URL pointed to. Granted, they did deduplication, so other URLs could point to the same "illegal" content, which is what the USA went after them for. But the fact remains that it's clearly a grey area whether the host can retain that data for their own future benefit after someone claims rights and has it removed.
Then what's the point? The whole point of Sci-Hub is to distribute copyrighted papers; an "archive" would be inaccessible until it was useless.
To me, the only reason private (some might say evil) corporations like Elsevier still have a place in the world is that they somehow still have a monopoly on quality/prestige. That's basically for historical reasons, and it can be solved by creating an open alternative for assuring the quality of papers.
We're building just that: a platform for open peer review, meant as a layer over all scientific publications, used to surface the valuable content. You can check its current (early) state at http://peer2paper.org and are very, very welcome to get involved in the development.
Feel free to shoot me an email (me @ iamguico -d0t- com) as well, in case you want to discuss anything related to making the highest quality scientific knowledge available to everyone.
But I guess it's not only researchers who are to blame, but the people who need to be evaluating researchers without being researchers themselves.
I was disappointed in Hassabis/DeepMind's decision to publish the AlphaGo paper in Nature, though. Unlike most, they didn't have to.
No one is holding a gun to their head saying "you must publish your paper in Science!"
There may not be actual guns involved, but funding and research opportunities are very much on the line.
Which raises the question: who are the pirates here?
In what sense is a corporation holding an entire professional community to ransom while adding no real value not being piratical?
In reality the journal publishing "industry" is just another example of aggressive for-profit enclosure of what was once considered a public good.
I'm more ambivalent about rights issues around creative works, because I think everyone wins when unusually talented artists and creators earn enough to work full time.
But academic publishing seems like straightforward extortion of value from universities and governments - ultimately from taxpayers - with no plausible upside.
It seems universities and governments find value in the service provided by publishers. If they wanted, they could stop making funding and research opportunities dependent on where the results are published, right? I don't see how publishers have much leverage here, let alone a position from which to extort anyone.
They also can't set up a competing independent paper service because there's no way New Journal X can compete with the brand recognition of Nature or Phys Rev D.
The publishers have a de facto monopoly on the prominent brands. That's why it's extortion, and not a service. The only service provided by the publishers is access to the goodwill associated with the brand.
What universities can do - and are starting to do - is to set up alternative publishing systems that will start to bypass those brands. Arxiv is the most famous example, but increasingly communities of academics - not universities - are creating their own online enclaves, with the prospect of live debate about papers instead of the current somewhat dysfunctional formal peer-review system.
Eventually the goodwill for many disciplines will move to those online enclaves, and that's when publishers will lose their leverage.
That, plus the end of your quote, makes it pretty clear that he is PROUD to have sued Napster and been shitty to every pirate.
If academics got organized to the point of establishing new journals, with legit peer review, they could make all of the information free. Which it wants to be, right?
Obviously, there is the problem of establishing the credibility of these new "free journals," which is a serious obstacle for the reputation-based "publish or perish" pecking order of academia.
But once such a movement is established, it could eventually crush the paid journals and their rent-seeking profits. The captive journals would also eventually emancipate themselves and come around to this free information model.
Since such free journal articles would also be available on sites like Sci-Hub, the transition to (almost) totally free academic publishing could be unstoppable.
Isn't the reputation of those journals somewhat derived from the work that Elsevier puts into editing the papers that are submitted to them? I don't have enough knowledge to claim that it's a lot of work, nor that it costs them a lot to edit and review the submissions, but it's starting to sound like most of the arguments against Elsevier are completely ignoring the actual work they do.
"If we could find some way to do the work that makes the Science and Nature Journals desirable we could really change the world here. We already have the distribution portion figured out, so it shouldn't be hard!" I've got some really great ideas for an app, I just need a developer to implement it... etc.
Edit: I feel weird arguing for Elsevier. I personally would love to see all the paywalls and weird academic gateways that hide these fascinating nuggets of knowledge go away, but I have to play devil's advocate on this. Elsevier does do work, and that work is represented in the prestige that journals like Science exhibit.
Plus, the status of a journal or conference is somewhat self-sustaining: since people want their papers in the best venues, they'll submit more/better papers to the venues known as the good ones, which means those venues have a large pool of high-quality submissions to select from, which means they a) can boast high rejection rates and b) have great content, which means they are seen as high-quality, which ...
http://theoryofcomputing.org/ (has some nice surveys if you're into theoretical computer science)
http://discreteanalysisjournal.com/ (the arXiv overlay model is interesting)
This is a bit apples to oranges—journals are not as central in CS as in biology or other older fields, and the norms about authorship and preprints tend to be more relaxed—but hopefully the trend will spread to more old-world sciences over time.
If you do that, sci-hub.org, sci-hub.io, sci-hub.club still work just fine.
For Forbes, using incognito and clicking twice (find the article in Google, click, close the tab, click again) usually works for me.
What the universities are buying is not access to the papers. No one in this whole system gives a monkey's about the papers.
The universities are buying a solution to their filtering problem. Allowing anyone to produce "proper science" with no filtering mechanism will kill the universities' business model.
There needs to be a filtering mechanism that stops "crackpot" science from getting in. The journal system works well for this. If all "proper" science is in the peer-reviewed journals then everything else can be ignored as crackpottery.
So the universities sell qualifications that are a prerequisite to being published in journals. People without qualifications do not get published. Papers not published in journals are ignored.
Universities pay Elsevier for journals so that they in turn can sell qualifications to students and farm research grants from governments.
None of this has anything to do with science or access to papers.
If SciHub develops an effective filtering mechanism (like ArXiv has apparently done) then that's the existential threat to both Elsevier and the Universities that support it.