* PubMed Central
Threads are submitted using the publication ID, DOI, or URL of a pre-print or publication. PDFs are fetched when available and used to generate a thumbnail.
Currently, the site only supports academic email addresses. I am trying to figure out the best way to open the site to all without having to manage spam. Let me know if you have any thoughts/ideas. If you are interested in accessing the site and do not have an academic email address please PM me with an email address to whitelist.
The assumption that anyone who might have anything important to say would be at an academic institution is extremely dangerous (much like the related problem of access to journals outside academic institutions, etc.).
My suggestion: use ORCID.org for authentication with OAuth. It's a researcher profile service. Most academics have one (or should!). It's free and the service is open. I think it strikes the right balance between a barrier to entry for spam and accessibility.
Academic journal papers are meant to be digested by experts in that topic, not the general population. It seems necessary to have a filter on registration to ensure that the platform is useful to these experts. You could imagine the restriction could be relaxed to research institution emails -or- a referral.
Even with a restrictive registration, the platform could still be useful to a non-expert who is interested in the topic by reading the exchanges between experts.
TL;DR: Studies like these need scrutiny from everyone. Not just people that declare themselves experts.
I disagree. The latest e-cig study says they found evidence of tumors in mice that were exposed to cigarette smoke... What they don't tell you in the abstract is that they used 10 mg/ml nicotine e-juice and exposed the mice for 3 h/day for 12 weeks. The highest ratio most people vape at is 9 mg/ml, and no one vapes for 3 hours straight. The study failed to describe how exactly they produced the vapor: no mention of wicking material, coil metallurgy, temperature, or voltage of the coil. It could very well be that the tumors were caused by combustion, whether from the coil being heated above its melting point, the wicking material burning, or the juice being combusted rather than vaporized. Even though I'm not considered an expert in tumors or cancer, I have learned that this test is flawed, is missing information, and doesn't really show anything we as people don't already know: "heating things beyond the combustion point produces byproducts that could be harmful to our health." I can rest assured of that instead of listening to headlines written by people who didn't bother to read or scrutinize the study. I think studies like these need scrutiny from everyone, not just people who declare themselves experts.
Let's also be fair... just because I have an academic email address doesn't mean I'm an expert. I could be a janitor at JHU for all intents and purposes.
I have to agree. I haven't been an academic for almost 10 years now, yet I've spent the last three years working with academics in my free time, and it's really frustrating how hard it is to work without a “proper” affiliation. I've had to create my own “lab” (it's just a name, and it's only me) to get my name on posters, but I'm still unable to get access to some services because, well… because I'm not paid for the work I do, I guess.
She's getting paid a reasonable amount. She gets paid overtime. Her opinions are given weight. Her physical needs are taken seriously. She gets credit for her work. She is, in general, treated as an actual human being capable of having expertise and bringing value.
The total result is that she's spent months going "WHY DID NOBODY TELL ME THE PRIVATE SECTOR WAS AWESOME?!"
- A tool for annotation so that people could comment on specific portions of papers. This could help new researchers in a field to more quickly understand the different broader contexts in which a result can be interpreted, or the technical limitations of a given method and how that should be considered when say using the results to motivate other work.
- An option for some form of anonymised commentary, so that people could voice their views on a particular work without fear of reprisal. I think this would be amazing for non-tenured academics to exert some form of power in the community, if a platform like this were to take off. Of course, it would make for potentially very messy and difficult to moderate discussions.
The basic problem is that I was constantly finding myself struggling with a tricky part of some paper from the 70's ("Why is this sentence true??"), and I knew many students before me had struggled with the same section, but there was no way for one really great expositor to clarify it, get upvoted, and be done.
Swot is a community-driven, crowdsourced library for verifying that domain names and email addresses are tied to a legitimate university or college; more specifically, an academic institution providing higher education in tertiary, quaternary, or any other kind of post-secondary education in any country in the world.
Those are used in many places as a simple way to validate that you are a student or a researcher.
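The idea behind a Swot-style check can be sketched in a few lines. Swot itself is a Ruby gem backed by a large repo of domain files; the seed list and the suffix-matching approach below are simplified assumptions, not its actual implementation.

```python
def is_academic(email: str, academic_domains: set) -> bool:
    """Return True if the email's domain, or any parent domain, appears in the
    known-academic list (e.g. cs.stanford.edu matches via stanford.edu)."""
    domain = email.rsplit("@", 1)[-1].lower()
    parts = domain.split(".")
    # Check the full domain and each parent suffix, but never the bare TLD.
    return any(".".join(parts[i:]) in academic_domains
               for i in range(len(parts) - 1))

# Hypothetical seed list; the real Swot repo ships thousands of domain files.
ACADEMIC = {"stanford.edu", "ox.ac.uk"}

print(is_academic("alice@cs.stanford.edu", ACADEMIC))  # True
print(is_academic("bob@gmail.com", ACADEMIC))          # False
```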
@bringtheaction's comment mentioning Swot is an interesting solution. Although the name of my institution is outdated in it! I'll send a PR.
I have been involved in academia for 10 years now, and I never held an email address that would match against the SWOT list.
Or you can crowdsource this, say I want to register as email@example.com; in the registration form simply require me to give a link to someone's Google Scholar profile with an email address from the same domain.
It should also be noted that e.g. on Figshare, anyone can upload any PDF and get a DOI for it.
Many papers list authors' email addresses.
Scrape PDFs on the services you support for email addresses. If a domain name occurs in enough author email addresses and isn't on a "known generic" list (gmail et al), consider it "safe".
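That scraping heuristic could look roughly like this; the threshold and the generic-provider list are placeholders, not tuned values.

```python
from collections import Counter

# Known free-mail providers to exclude; a real list would be much longer.
GENERIC = {"gmail.com", "outlook.com", "yahoo.com"}

def safe_domains(author_emails, min_count=5):
    """Count how often each domain appears across author emails scraped from
    PDFs; a domain seen at least min_count times, and not a generic free-mail
    provider, is considered safe for registration."""
    counts = Counter(e.rsplit("@", 1)[-1].lower() for e in author_emails)
    return {d for d, n in counts.items()
            if n >= min_count and d not in GENERIC}
```

For example, five author emails from `mit.edu` would whitelist that domain, while any number of `gmail.com` addresses would not.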
(1) the thumbnails indicate that the PDF is available and can be downloaded. This is to promote submissions from pre-print servers and open access journals that make their research readily available. If the PDF is behind a paywall there will not be a thumbnail.
(2) the behavior of the site differs slightly from HN and Reddit. Clicking the title link of a publication takes you to the comment page, whereas the thumbnail takes you to the PDF. Open to hearing what people think about this, but the thumbnail image replaces the title link.
(3) the thumbnails do tend to look similar, but you can often tell what journal a paper is in and recognize familiar papers from them. I like it when authors put a figure up front too, as some of the ML papers do, which adds some variety.
Your first and second points could be satisfied by showing a small PDF icon where the thumbnails currently are, at whatever size allows the listings to be as close together as possible. The PDF icon could also be a different colour depending on whether the paper is paywalled, or have a money symbol over it.
The really big thumbnails that mostly all look the same make the site look very amateurish on first sight.
Here's an example where, on the top row, I changed the thumbnail to 6em height (with width in proportion to the page size) and also aligned the votes and upvote button closer to the content. The second and following entries are left in your style.
(2) I don't think this is good enough justification for the huge amount of space wasted by the thumbnail
(3) Recognizing familiar papers isn't all that useful. Making a specific thumbnail for each sub would be better IMO.
On my phone, I see about 14 listings per screen on HN, 9ish on Reddit (mobile website, I don't use the app). This? Only 4. I would personally prefer something more dense, but good job nonetheless!
Why should I use your service?
You cherry-pick results and polish them to look super flashy so everyone hears about your work over Twitter, Facebook, and r/MachineLearning, because that's how everyone learns about new papers these days.
I do like the idea of having a public forum to comment on work, regardless of where it is published.
Building the site won't be the hard part, though I'm sure you'll get plenty of feedback here.
Attracting a community of scientists that foster real discussion will be. Any ideas there at what will set you apart and properly entice them?
To entice users, I have thought about adding features that users find directly useful to their work (e.g. the ability to take notes on papers within the site and export them), or the ability to export saved articles as bibtex... but I'm still thinking this through. Open to ideas.
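The BibTeX export idea is simple to prototype. A minimal sketch, assuming a flat per-article dict whose field names are illustrative and not the site's actual schema:

```python
def to_bibtex(entry):
    """Render a saved article as a minimal BibTeX @article record.
    The citation key is derived from the author's last name plus the year."""
    key = f"{entry['author'].split()[-1].lower()}{entry['year']}"
    fields = "\n".join(f"  {k} = {{{v}}}," for k, v in entry.items())
    return f"@article{{{key},\n{fields}\n}}"

print(to_bibtex({"author": "Jane Doe", "title": "A Result", "year": 2019}))
```

A real exporter would also need to escape special characters and handle multiple authors, but even this much would let users paste saved articles straight into a `.bib` file.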
Also https://eprint.iacr.org/ is used a lot for publishing crypto preprints
Peer review as it currently works is not perfect, but it is the status quo, so it needs to be taken into account. Obviously many changes could be considered; for example, I would require that anyone who comments be displayed with their real name.
A statement about scientific research should stand on its own merit, regardless of who it came from.
Peer review is to be done by one's peers, not some random anonymous commenter.
Let's say this takes off and is the answer to the publisher fees, and everyone posts their papers here and peer reviews them -- I assume that's the dream end goal.
As the user base grows and engagement with posts grows, comment sections can become overwhelmed by well-meaning, ill-informed people, or even bots with an agenda.
Have you thought about registering people in the space and having a separate section for them to discuss? In your mind, is the site primarily geared towards researchers publishing and their peers, regular people looking for more access to papers and the process, or a mix of both?
Very nice start and cool to see someone actually moving on this problem!
I see this as a tool to increase the hype about Machine Learning. Only people starting in the topic will use it.
It would make sense to filter the whole list by keywords, topics, or anything that narrows the papers down to the ones related to your research. The subs still have too many papers.
Mobile is obviously the extreme majority target now, however it should take less than 20-30 minutes to do a good job adjusting the styling to make it a much better experience on non-mobile.
Small issue: when I click an arrow, it properly gives me a basic alert() that I need to be logged in to do that, but it still changes the arrow to blue. I suspect it's an MVP, and you may already be aware of that.
Also, I was wondering how you went about grabbing content from bioRxiv? Are you using their RSS feed? I built a web scraper myself which grabs the PDFs and relevant info from (https://www.biorxiv.org/content/early/recent) and stores them on my computer (to run some ML algorithms), and it was kind of a pain to do..
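If the site does use the RSS route, the parsing side is the easy part. A minimal sketch with the standard library, using a simplified RSS 2.0 payload (the real bioRxiv feed format and its namespaces are not assumed here):

```python
import xml.etree.ElementTree as ET

# Simplified sample payload standing in for a fetched feed.
SAMPLE = """<rss version="2.0"><channel>
  <item><title>Paper A</title><link>https://www.biorxiv.org/content/1</link></item>
  <item><title>Paper B</title><link>https://www.biorxiv.org/content/2</link></item>
</channel></rss>"""

def parse_feed(xml_text):
    """Pull (title, link) pairs out of an RSS payload."""
    root = ET.fromstring(xml_text)
    return [{"title": item.findtext("title"), "link": item.findtext("link")}
            for item in root.iter("item")]

for entry in parse_feed(SAMPLE):
    print(entry["title"], entry["link"])
```

The painful parts a feed avoids are exactly the ones a scraper hits: pagination, HTML changes, and rate limiting.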
I’d love to add subs for other fields but I wasn’t sure which subfields would be best.
Edit: Downvotes? Please explain how I violated the etiquette?
Correct, it's the most citations.
The site is very much an experiment and can definitely be improved. It's not designed to be a popularity contest. I am hoping old and new research is submitted and that the commenting function of the site can counterbalance articles that are overhyped.
My suggestion is not to restrict it based on email but to have a very very strong voting policy (i.e. HN). If you say something dumb on HN you get downvoted into oblivion, and that is OK.
Updated it to Python 3 and added a number of new features.
Please, disable this function! There's no need to collect such data on today's insecure web...