Author here. Thanks for submitting this project! I made this because I thought it would be funny if publisher websites had SciHub links that look like they belong on the website [0]. I didn't know about the bookmarklet when I started this. Maybe I should have used that instead. Oh well.
I'm currently waiting for Mozilla to accept this into the add-on store. If that passes, I will submit it to Chrome as well.
Small suggestion: instead of hardcoding the .se domain, you might want to send a request to Wikidata to get the currently used domains. That's how similar Sci-Hub tools stay up to date.
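For anyone wanting to try this, a minimal sketch of the Wikidata approach might look like the following. The item ID (Q21980377 for Sci-Hub) and the use of P856 ("official website") for live mirrors are assumptions worth verifying on wikidata.org before relying on them:

```javascript
// Sketch: build a Wikidata SPARQL request for Sci-Hub's current official
// website(s). Q21980377 (the Sci-Hub item) and P856 ("official website")
// are assumptions -- double-check both on wikidata.org.
const WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql";

function buildMirrorQueryUrl() {
  const query = `
    SELECT ?url WHERE {
      wd:Q21980377 wdt:P856 ?url .
    }`;
  const params = new URLSearchParams({ query, format: "json" });
  return `${WIKIDATA_ENDPOINT}?${params}`;
}

// At runtime you would fetch(buildMirrorQueryUrl()) and read
// data.results.bindings[i].url.value for each mirror.
```

The nice part of this design is that the extension never ships a hardcoded domain list; it degrades to a fallback domain only if the query fails.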
I had never heard of wikidata, but might steal this idea for a similar app I have on F-Droid that pulls PDFs from Sci-Hub when a doi link is clicked using android's intent system https://f-droid.org/en/packages/com.sigmarelax.doitoscihub/
Is Wikidata the proper way to get the currently functioning mirror? I was under the impression that you had to get it from Elbakyan's VK or the SciHub Telegram. I've been assuming that the subreddit would update with accurate links, so I've just been scraping it from there: https://github.com/smasher164/search/blob/53ae11b52f158d1986...
My god, why did you mention wikidata? SPARQL is the most obscure fucking thing I've ever encountered. I've been sitting here for an hour trying to find how to get the data of a specific page!
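If SPARQL feels like overkill for a single page, Wikidata also serves every item as plain JSON via its Special:EntityData endpoint, with no query language involved. The item and property IDs below are assumptions (check them on wikidata.org), but the endpoint itself is standard:

```javascript
// Fetch a single Wikidata item as JSON -- no SPARQL needed.
// Item ID (Q21980377) and property ID (P856) are assumptions; verify first.
function entityDataUrl(itemId) {
  return `https://www.wikidata.org/wiki/Special:EntityData/${itemId}.json`;
}

// After fetching entityDataUrl("Q21980377"), the official-website claims
// sit under:
//   data.entities.Q21980377.claims.P856[i].mainsnak.datavalue.value
```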
I made an iOS Shortcut based on the same code. To use it after installing, access it from the iOS Share screen when you're on a relevant site. It will look for preferred mirrors from Wikidata before running.
There’s also this [0] userscript that does essentially the same thing but doesn’t need a separate extension assuming you already use a userscript manager. It also supports far more domains from the looks of it.
Does this script work for you? I installed it in FF and tried three sample links, and none of them produced the extra link :/ The Greasemonkey icon shows that the script recognized the URL and ran on each site.
Ooh, that's a lot simpler than my attempt to extract the DOI via regex (which is anyway not 100% possible because of how flexible the DOI spec is...)
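For reference, a best-effort regex along these lines catches most modern DOIs, though as noted the spec is loose enough that no pattern is fully reliable. This is a sketch, not a complete matcher:

```javascript
// Heuristic DOI matcher. The DOI spec is loose, so no regex is 100%
// reliable; this best-effort pattern covers most DOIs seen in the wild.
const DOI_RE = /\b10\.\d{4,9}\/[-._;()\/:a-z0-9]+/i;

function extractDoi(text) {
  const m = text.match(DOI_RE);
  // Trim trailing punctuation that often clings to DOIs in prose.
  return m ? m[0].replace(/[.,;)]+$/, "") : null;
}
```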
This post and the four (4) comments suggesting alternate ways to do this are a pretty good indicator that pirating papers is still far easier than going through official channels.
I think it would be marginally quicker for me to access a paper legally if I was on my uni's campus. But I am WFH from the other side of the country and would need to log into the VPN. Sci-hub with one of these solutions would be much quicker!
My company gives me access to a few journals, but at home I have no such thing. $20 is ridiculous for a paper, given that (a) the authors rarely see anything of this money, and (b) you often need to skim 10 papers before you find the 1 that's relevant.
Luckily, many papers in my research domain (compsci/ML) are open access. 90% of it is either on arXiv, or Google Scholar knows a PDF URL.
My main problem with sci-hub right now is that it stopped adding new content to the website a year or two ago. Which means if you want an up-to-date view of the state of the art, you can't use sci-hub. I personally use the bookmarklet; I'm way more inclined towards that than some random browser extension.
The reason they've stopped temporarily is an ongoing court case in India initiated by Elsevier. I'm not sure this is the best article on the case, but basically Sci-Hub agreed not to post any new articles for a period of time (which has been extended) while the case is ongoing:
They did release a bulk issue of 2.7 million articles a few months ago (as part of the torrent collection available from libgen), but nothing new since then.
One of the contributions being solicited on the page is to get DOIs for the given webpage. Right now, it has a few methods to grab DOIs for specific sites.
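Worth noting for contributors: many publisher pages already expose the DOI in a `citation_doi` meta tag (the Highwire/Google Scholar indexing convention), so a generic fallback can cover sites without a dedicated method. A sketch, written at the string level so it's testable outside a browser; in a content script you'd just query the DOM instead:

```javascript
// Many publisher pages carry <meta name="citation_doi" content="..."> per
// the Highwire/Google Scholar convention. In a content script you'd use:
//   document.querySelector('meta[name="citation_doi"]')?.content
// This string-level version is for illustration only and assumes the
// name attribute comes before content.
function doiFromHtml(html) {
  const m = html.match(
    /<meta[^>]+name=["']citation_doi["'][^>]+content=["']([^"']+)["']/i
  );
  return m ? m[1] : null;
}
```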
I've had a lot of success running Zotero's translation server for my own bibliographic needs, but I would really love if I didn't have to host it on a server somewhere (and could actually do that part in a browser engine I depend on to download PDFs anyway). Has anyone here figured out how to wrap the translation server brains (i.e., the recipes for each URL) into a simple library?
I have access to most of the good journals through my institution, but this is more convenient than the typical process, which involves logging in to a proxy and going through one or more gateway sites to find the actual PDF download.
1. Create a temporary bookmark (e.g. by pressing ctrl + D, or clicking on the star in the address bar)
2. Open your bookmarks (e.g. open the bookmark bar, or press ctrl + B)
3. Right click on the bookmark, and choose "edit bookmark" (or right click, then press "i")
4. Fill the "URL" field with
javascript:window.location='http://sci-hub.se/'+window.location
5. Fill the "keyword" field with "shb" (or whatever you want)
That's it: whenever you type "shb" in the address bar on a page and hit Enter, it will navigate you to the Sci-Hub version of the current page.
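The step-4 bookmarklet is essentially this one-liner, shown here as a plain function so it can be tested (switched to https, which sci-hub.se serves; wrap the body in "javascript:" to use it as the bookmark URL):

```javascript
// Same logic as the bookmarklet in step 4, just https and as a function.
// (Sci-Hub accepts the raw page URL appended to its own, so no encoding
// is applied here.)
function sciHubUrl(pageUrl) {
  return "https://sci-hub.se/" + pageUrl;
}
```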
I use something similar with Tampermonkey that works on hundreds of websites and auto-injects the Sci-Hub logo and link into any page, including search results.
[0] https://imgur.com/a/GP7rm43