Journal articles, even review papers, are cramped for space and so tend to be very dense. The author suggests methods for doing battle with this density, but before doing that, I suggest you look for a class of document that's allowed to be as expansive as its author desires, written by someone who has recently struggled to learn and understand the material themselves:
Find out what research group published the research, find out which graduate students have recently graduated from that group, and read their theses (if the author's command of the language of publication isn't what you'd prefer ... find another graduate student). I guarantee it will serve as a much better introduction to what the group does than trying to parse any of their journal publications. In particular, the "draw the experiment" step will often be solved for you, with photographs, at least in the fields where I've done this.
Read 2 or 3 papers.
All that effort you would put into doing these steps? Instead, read 1 or 2 other papers that the author refers to in the beginning.
Science is a conversation. When you read the other papers, even if you don't understand them at first, you will get a sense of the conversation.
Also, some writers are abysmal, and others are amazingly lucid. Hopefully one of the 3 papers you read will be the lucid one that will help you understand the other 2.
Probably the best evidence that a paper is a good entry point is whether the author cared about the abstract. A lot of scientists treat it as a chore, picking some key points from the premise, methodology, and conclusion sections and haphazardly pasting them together into a miniature version of the paper. But an abstract is a sketch of your argument. It's supposed to show how the author thinks about the work they are doing, in terms of how it relates to the work everyone else is doing. Look for an abstract that presents an argument in plain English and isn't afraid to give a little background or motivation. You might have to go through dozens to find one, though.
Another simple trick is to look at the journal title. Articles in journals like "Trends in... " tend to be written for a broader audience, so often have clearer introductions. In general, the less specific the journal, the better the introduction will be for newcomers.
(Be aware that journals with lower word limits / shorter articles may have less rigorous introductions, for better or worse)
Unfortunately the citation graph sites won’t show all the outgoing references from a particular paper (I assume because they’re afraid of copyright issues?) but looking at the incoming references for a paper can be very useful. If you can find an old classic paper in some subject, then most major later papers will have cited it, so you can start from google scholar search’s list of references to the classic paper, which will be sorted by citation count. Doing keyword searches within such a list of references can often quickly surface the most important papers on a subject, especially if you go a couple hops into the “cited by” graph. Often among the most-cited papers is some kind of review paper with a clear explanation of the context, overview of the literature, and extended definitions of important terms.
As a kind of weird aside, if anyone ever emailed me about any of my journal articles, I would 100% respond to them (assuming they weren't a machine). I think most of my colleagues would do the same (except for articles featured in a newspaper, which might garner a lot of weird emails).
That's a great tip. I've found that a lot of papers aren't necessarily complicated; it's just that the vocabulary is unfamiliar (though you experience the same sense of confusion either way). It's interesting that we often conflate complexity with unfamiliarity; my reading comprehension improved quite a bit once I understood the difference between the two.
They're useful for deciding whether you should read this paper or another one, but they're often not useful as a summary of what the paper actually achieves. Often the abstract implies a more interesting result by leaving out key aspects and limitations (which are detailed in the paper and its conclusions) that significantly change the impact of the work; the abstract is frequently more like an advertisement for the paper than an effective summary. I mean, it may be accurate as far as it goes, but if I read just the abstract and went away thinking, "oh, so now there's a way to do X", I'd often be wrong.
A couple of notes: generally, if you email the author of a paper, they will send you a copy. Scholar.google.com can be used to evaluate the other papers referenced; highly cited ones will be 'core' to the question, while less highly cited ones will address some particular aspect of the research.
For any given paper, if it cites one or two seminal papers in the field, you can build a citation cloud to create what is best described as the 'current best thinking on this big question'. You do that by following up the citations, and their citations, for two or three hops (kind of like a web crawler).
With something like sci-hub and some work on PDF translation, it should be possible to feed two or three 'seed' papers to an algorithm and have it produce a syllabus for the topic.
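A minimal sketch of that two-hop traversal, assuming some way to look up a paper's "cited by" list. Here a toy in-memory graph stands in for a real lookup (e.g. scraping Google Scholar's "cited by" pages); the function and paper names are illustrative, not a real API:

```python
from collections import Counter

# Hypothetical citation data: paper -> papers that cite it.
# In practice this would come from a citation index, not a dict.
toy_graph = {
    "classic": ["review", "method_a"],
    "review": ["method_b", "application"],
    "method_a": ["application"],
}

def get_cited_by(paper):
    """Stand-in for a real 'cited by' lookup."""
    return toy_graph.get(paper, [])

def citation_cloud(seeds, hops=2):
    """Follow 'cited by' links for a few hops, counting how often
    each paper is reached. Papers reached via several paths are
    often the 'core' ones for the question."""
    seen = Counter()
    frontier = list(seeds)
    for _ in range(hops):
        nxt = []
        for paper in frontier:
            for citer in get_cited_by(paper):
                seen[citer] += 1
                nxt.append(citer)
        frontier = nxt
    return seen.most_common()

cloud = citation_cloud(["classic"])
# "application" surfaces first: it is reached via two different paths.
```

Ranking by how many paths reach a paper is a crude proxy for importance, but it matches the intuition above: the most-cited descendants of a seminal paper tend to include the reviews and core follow-ups.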
- By knowing the conclusion first, I better understand the motivation and why certain steps are being taken.
- I find out sooner if the paper (or book) is something I am looking for.
I like to read papers unrelated to my field to learn new things to apply. To be honest, some papers still take me a long time to understand, because they usually assume you are already researching the topic (e.g., certain terms, symbols, and variables are never defined).
Then I do ctrl-F "blind" (can't find it), ctrl-F "significance" (see p-value with nearby text indicating it has been misinterpreted). Boom, paper done in under a minute. There is really no reason to study such papers unless they have some very specific information you are searching for (like division rate of a certain cell line or something).
"This means that if the experiment suggests that the probability of a chance event in the experiment is less than this critical value, then the null hypothesis can be rejected."
I was asked to help on a project that needed to identify humans in an audio stream. During my literature review, I came across the field of "Voice Activity Detection" (VAD), which concerns itself with identifying where in an audio signal a human voice / speech is present (as opposed to what the speech is).
I implemented several algorithms from the literature and tested them on the primary test sets referenced in the papers. I spent a few months on this until I finally asked myself, "What would happen if I gave my algorithm an audio stream of a dog barking?"
The barking was identified as "voice".
As it turns out, the "Big Question" in Voice Activity Detection is not to find human voices (or any voices), but to figure out when to pass on high-fidelity signals from phone calls. So the algorithms tend to only care about audio segments that are background noise and segments that are not background noise.
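A minimal energy-threshold sketch (my own illustration, not any specific algorithm from the VAD literature) shows why such a detector happily flags a bark as "voice": it only distinguishes loud segments from background noise, so anything loud passes.

```python
# Naive energy-based "voice" detector: frames whose short-term energy
# exceeds a threshold are marked as voice. Speech, music, and a dog
# barking all clear the bar -- exactly the failure described above.
def frame_energies(samples, frame_len=160):
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    return [sum(s * s for s in f) / len(f) for f in frames if f]

def naive_vad(samples, threshold=0.01, frame_len=160):
    return [e > threshold for e in frame_energies(samples, frame_len)]

# Synthetic stream: near-silence followed by a loud burst (a "bark").
quiet = [0.001] * 320
bark = [0.5, -0.5] * 160
decisions = naive_vad(quiet + bark)
# The bark frames are flagged as "voice" even though nothing spoke.
```

Real VAD algorithms are more sophisticated than a raw energy gate, but if the benchmark task is "background noise vs. not background noise", nothing in the evaluation penalizes this kind of false positive.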
Better advice for laypeople with zero background in science who want to become more scientifically literate would be to read some textbooks.
Later on in the article, she tells people to write down everything they don't understand in an article and look it up later. This is excellent advice for people with a background equivalent to an advanced undergraduate or higher, but people with zero background would do better to read some textbooks and build themselves a foundation first.
Honestly, even when I was in grad school in neuroscience, I asked around for advice on reading papers, and the surprisingly universal response from other grad students was that it took 2 years to become reliably able to read and evaluate a research paper well. And that's 2 years in a research environment with often weekly reading groups where PIs, postdocs, grad students, and some undergrads got together to dissect some paper. These reading groups gave you regular feedback on your own ability to read papers: you saw all the things those more experienced than you noticed and that you missed. A paper that took me 3+ hours of intense study would take a postdoc a good half hour, and they'd get more information out of it.
I feel like this article makes reading articles well seem a lighter undertaking than it really is. It's really no wonder we see studies misinterpreted so often on the internet, where people Google for 5 minutes and skim an abstract.
This completely matches my experience. When I started grad school, it took me a few hours to read one paper, and I probably understood only 50% of the material even though I had some foundations in the research area from my undergrad studies.
Reading textbooks is great advice. Then one can start reading some review papers in the area to gain more depth. I think the difficulty is that it's hard to find good textbooks and review papers for the subject one is interested in, especially in a niche field.
I try to paraphrase the paper into an Acolyer-style 'morning paper' blog post on Evernote while mentally directing a 'two minute paper' video on the paper :)
I think it would be great to have a journal/blog that would construct a bridge between the industry and the university.
On a side note, I'd say that many researchers don't do a good job of conveying their ideas clearly (it gets worse with conference presentations). It won't really matter in what order you try to read their papers.
Look, I get that there's some natural professional context and lingo that goes into these things, but given all the angst about the esteem in which the population at large holds the science community, making their work more accessible to both novices and interested outsiders would be a nice step in the right direction.
This seems like a good approach in my humble opinion.
My question would be why don't donors and taxpayers (who are funding research) demand that researchers do these 11 steps?
Because the overwhelming majority of taxpayers do not read them, or care to. Also, even though a taxpayer may not read papers, they hopefully still see the value in the progression of science, and letting scientists assume a certain level of background in their paper's audience probably lets them spend more of their time on the science itself.
Perhaps there's an opportunity for a motivated individual or group to build something that parses papers and makes them easier to read for those who don't have the proper background?
(His "Academia, Schmacademia" collection is very highly recommended.)
> Before you begin reading, take note of the authors and their institutional affiliations.
> Beware of questionable journals.
Institutional affiliation and journal imprimatur should have no bearing on the evaluation of science. These are shortcuts for the lazy, and they introduce bias into the evaluation of the paper's contents.
Even more than that, dispensing advice along these lines perpetuates the myth that scientific fact is dispensed from on high. If that's the case, just let the experts do the thinking for you and don't bother your pretty little head trying to read scientific papers.
If the author's approach to reading a paper only works by checking for stamps of approval, maybe the approach should be reconsidered.
It's a shortcut fraught with potential for deception, as even a casual glance through a site like Retraction Watch will demonstrate:
I'm not sure what you mean by "evaluating the science." A scientific paper should present a hypothesis, the author's best attempt to disprove the hypothesis, and an interpretation of the evidence gathered in the process of testing the hypothesis. There's going to be a back-story, and it's likely to be quite involved.
The article does a good job of presenting a method for navigating a paper on this basis. I don't see what checking credentials adds to the process. On the contrary, it may do harm.
Bias = priors. Use your priors.
They shouldn't, but they do. This is reality. Industries producing research are often industries that would benefit from the research being biased in their favor. This is an important element to consider.
It's not about dismissing certain affiliations, it's about being conscious of institutional bias.
Journals are something else, mostly because finding out whether it's a "real" journal can help you sort out a lot of crap. If it's a journal where you just pay to get published, without any peer review, that tries to look like a real scientific journal, it's pretty safe to just stop reading.
Using the prestige or impact factor of the journal as a guide for quality is likely misguided as well. Though it is a warning sign to me if a paper makes claims that look like they should be publishable in a prestigious journal but actually appeared in some journal nobody knows.
"Searle had submitted 168 studies on aspartame"