Actual discussions between scientists are much more lively. Either both participants know the literature, so no one bothers with citations and we try to jump directly to the key insights using our own and other people's intuitions; or one side knows the relevant literature and the other does not (happens often in collaborations) and blindly trusts the former; or both sides don't know anything (happens often in bars and at conference cocktails) and just speculate wildly without any seriousness or defensiveness.
The writing part (with all the seriousness, opaqueness and citation overload) is utterly boring, mindless drudgery: I know very few people who actually take pleasure in reading or writing papers, and the few who do are considered weird.
This only works because the people are vetted. Almost everyone on HN is a rando and randos had better be able to back their assertions up. Otherwise you just end up spending all your time humouring crazy people.
This doesn't match my experience. When I talk with researchers (linguists) they are happy to point me to references when I ask.
1) They tend to at least in part be rooted in a problematic stereotype of the 'toxic HN reader,' which is inappropriately generalized to cover the entire community. The stereotype is generally of a rationalist bro/"man child" who thinks he (choice of gender intentional) is much smarter than he really is, with no comprehension of his privilege in society, or any awareness of the arts and humanities, with poor social skills, etc.
> "...the end result is that arguing on HN feels like passive-aggressively LARPing as scientists, but not with any of the good stuff."
Observe the key ingredients: generalizes to all of HN not just to certain users, implication that readers are insecure scientist-wannabes, implication of stunted growth (the choice of "LARPing" here is characteristic), poor social skills, and lack of taste ("but not with any of the good stuff").
2) The critique ends up being vacuous because any online community of sufficient size is going to have negative aspects to it. This fact leads to a necessary reading style where you seek out the best stuff among the less good stuff. I view it like panning for gold, filtering out a bunch of dirt in the process. I think most people are aware it's necessary to use sites like HN in this way.
Once you consider that, the critique starts to sound a little funny: it doesn't make much sense to characterize a large online community by its most boring parts if those parts are easily skipped over in accessing the interesting parts. So what's the critique really about?
3) The other aspect almost always present is that they're structured to make HN a foil for the speaker's own intelligence and enlightenment—and even more importantly, it's often used as a shibboleth to communicate that one is part of the group who has transcended HN.
Spend some time in certain Twitter circles (often made up of accomplished developers and/or researchers) and you'll see that this is so common it's developed abbreviations and can be communicated almost with something like a wink or a nod: snide comments disparaging HN can be tossed out in just about any context for a laugh and shared feeling of superiority.
It's interesting though because I've also noticed the critiques tend to have defensive roots: oftentimes the critic produced something that was not well-received by HN, at which point they become aware of all its problems.
In any case—it's a pattern I think HN readers should be aware of. The parent comment, for instance, is much more insult than substantive critique if you look closely—and yet it was the top comment on the article.
This is pretty much my experience with scientific writing. No one told me how to do it, so I started copying the style from the papers and books I had at hand.
Paper authors are not: they want to look sophisticated, whether their findings are sophisticated or not.
Paper reviewers are not: they want to preserve the usual style, at least for consistency, and also because they are authors of other papers, too.
Readers who are specialists in the field? Maybe, but they are used to the jargon and easily see through it.
Readers outside the field? Maybe, but nobody has an incentive to care about them, they are not reviewers, not potential co-authors (unless you plan a rare cross-disciplinary study), and if they approve grants, it may be better to impress them with the jargon and look important.
The rare curious non-scientist reader? These are a rounding error.
https://philarchive.org/archive/ALEATO-6 "A type of simulation which some experimental evidence suggests we don't live in"
We had a problem at UC Berkeley where a very smart CS guy wanted to publish a paper but since he didn't have an official affiliation, major journals wouldn't publish his papers (they literally wouldn't publish a paper by a person whose correspondence address was their home address).
So we gave him a title at Berkeley and the paper was published.
His work is some of the best around, but the academic community wouldn't pay attention until he ran some benchmarks and had them disseminated by prominent academics.
Then I got reviews saying I have to cite x, caveat y, relate to z and discuss special cases a,b,c... By the time you've done that there is no way it's going to read nicely any more.
But to play devil's advocate, the reviewers are right. The permanent scientific record does need all the nitpicking details. Papers are not supposed to read like news articles. Now I try to get a good abstract, intro and conclusion, and accept that the rest will be nitpicking.
So our best guess at changing the status quo is the reviewers?
If the reviewers would push for readability, this would force authors to adapt, right?
Now, how do we convince people to change expectations and habits?
The most familiar example is nutritional science, and the whole fat vs carbs debate. There are a lot of easy examples in medical science and economics as well. The scary thing is the more scientific articles you read the more you see this pattern everywhere. Scientists have greatly oversold the degree of their knowledge in most fields.
My impression is that the people doing the overselling were very much not the scientists in almost all cases.
I find it far more likely that the mistrust comes from those articles combined with the fake experts that appear on TV. This is less of an issue in Europe, but it does happen here as well.
Structurally it's similar to clickbait. I heard that such an incentive structure did bad things to journalism.
I will be working with a ton of PhDs this year and starting one in fall 2020.
Read the conclusion before the "Results" section. In fact, read the "results" section last.
Read review papers, they are way less bad. In addition they tend to comment on the source papers clearly enough that you can use them as a reading guide. Same goes for theses, but only if they're any good, so review papers work better. Pick a good review journal (impact factor is a decent proxy).
Email the author of a paper if you have a specific question (on the methods, what a result means, why they chose to put some data in the supplementary material and some in the paper, etc.). Usually it works, and they don't actually write emails in academese. Always write to the corresponding author, not necessarily the PI.
If you want to write papers and not merely read them, reading them is the first step so do the above first, and then loosely parrot what you've seen in the wild with some technical term mad libs thrown in.
1. Simon Peyton Jones, How to Write a Great Research Paper
2. Larry McEnerney’s writing workshop
These focus on writing papers, and get as close to talking about “scientific” writing as any resources I have seen.
But I think good general writing advice will help with scientific writing, too.
The book “Style: Toward Clarity and Grace” may be useful for that.
I would also suggest that it is important to remember, even in technical writing, that you are telling a story. The more you practice your storytelling, the better your papers will be.
Besides jargon, I see two common problems in the papers I read. (I am an art history student, but I expect this is not unique to my field.)
1. I read too many articles that fail to get me excited about where they are going. They tell me what they will say, but not why I should care.
2. I read too many essays that fail to conclude with applications, take-aways, or next steps.
Please don’t just give A Novel Approach To Dragon Slaying. Show me the villagers suffering from the dragon’s violence. Then show me how to slay the dragon. And don’t end with the dead dragon—end with the fireworks the villagers light in celebration.
- For the general public (or management) keep it very simple.
- For a specific journal, try to keep to their style. Any deviation from their standard might raise additional scrutiny, which could risk your paper being rejected. And with the current focus on publication metrics, the actual paper count is likely more important than quality.
That said, the one Nature paper I've properly studied was, to me, so obviously making unwarranted claims that I'd never have let it through review. (And I'm usually a very supportive reviewer!) One of the authors admitted as much in another paper a year later. But hey, it had all the wow factor that Nature selects for.
He stopped, stared, and asked, “you can read those?” I was taken aback but realized I knew exactly what he meant. I told him, “about half” and this somehow relieved him a bit.
We need a lot more abstract thought in plain English in this world.
The problem with smart people is that they have the faculties to create elaborate protections against easily bruised egos. If only we could figure out how those faculties could instead be used to get over oneself and be helpful to the world, rather than to become a trumped-up windbag.
Usually after reading what he writes I don't even read the paper because nearly all papers massively overstate the importance of their results and it takes a ton of reading to parse out what little thing they did and how it contributes to our existing massive knowledgebase.
I think scientists use complex language to make it harder for other scientists to figure out how wrong they are.
On the most basic levels, because journals don't want it and many referees want it taken out. (There's still the mindset that physical space on paper is a bottleneck, since most of the big journals also have printed versions.)
On a less cynical level, intuition is highly non-transferrable. What gives me the intuitive understanding of my result probably won't help you (https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...). I think that the established school of thought is therefore that, rather than my giving you my useless esoteric intuition, better to give you the results of crystallising that intuition into a transferrable formalism, and then allow you to decode that formalism into your own custom-built intuition.
This is a fantastic insight. I have been so frustrated trying to teach people monads over the years. People complain that Haskell is only intelligible to those with a math background. Now I understand why!
It’s not because Haskell requires you to know the underlying abstract algebra and category theory to grok monoids (in the category of endofunctors). It doesn’t! It’s because people who have studied math in undergrad have developed the skills to take a bare, abstract definition and work through a few examples on their own to build an intuition for the concept. Regular people for the most part do not do this! Most people are used to having everything explained to them and not used to learning anything really abstract which requires effort to understand. This is where their frustration comes in, just as it does for first year math majors at a rigorous school.
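As a small illustration of that "work through a few examples on your own" process (my own sketch, not from the thread; `safeDiv` and `pipeline` are made-up names), here is the bare `Monad` interface grounded in one concrete instance, `Maybe`, where any failure short-circuits the rest of the chain:

```haskell
-- Take the abstract definition (return and >>=) and ground it in
-- one concrete instance: Maybe models computations that can fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chaining with >>= : if any step yields Nothing, the whole
-- pipeline is Nothing -- no explicit error-checking needed.
pipeline :: Int -> Maybe Int
pipeline n = safeDiv 100 n >>= \a -> safeDiv a 2
```

Evaluating `pipeline 5` gives `Just 10`, while `pipeline 0` gives `Nothing`. Doing a handful of these by hand is exactly the exercise a math undergrad would do unprompted, and it is where the intuition comes from.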
I don't think that many research mathematicians expect that the readers of their papers will be able to derive their work intuitively. I know that I don't expect this, and my papers are no works of high-flown genius, just highly specialised and domain-specific so that even the people most interested in using the results probably won't be as interested in the techniques.
One of the most valuable things I was taught in high school in Argentina was a set of methods for thinking about how and why I think what I think. It's true that you don't have to apply it to every topic, but if you're serious about writing, it's really helpful to grow up with that in mind.
But what I think sometimes happens is even the target audience doesn't understand what the papers are saying.
Goethe said something about this in his autobiography, about health/chemistry books that were popular in his day, where imposters would write some "scientific" book full of esoteric terminology that would look appealing to the lay reader. But once you started analyzing and reading that work, "the book still remained dark and unintelligible; except that at last one became at home in a certain terminology, and, by using it according to one's own fancy, felt that one was at any rate saying, if not understanding, something".
Technical English is much more restricted in its vocabulary (within each field) and conforms to predictable patterns that are markedly different from colloquial English, which was used as a reference for readability. It is this technical English that foreign language scientists pick up and publish in, so perhaps the conclusions are not surprising.
Thoreau wrote that real reading is that which we have to "stand on our tiptoes", and "devote our most wakeful hours" to grasp. The French philosopher Gilles Deleuze believed that we aren't really thinking if we don't struggle with the content. He maybe took it to an overly extreme level in his writing, but I like his general point.
The article cites the increased presence of words such as 'robust’, ‘significant’, ‘furthermore’ and ‘underlying’ as examples of how papers are getting harder to read.
They go on to say,
>The words aren’t inherently opaque, but their accumulation adds to the mental effort involved in reading the text.
The article doesn't sufficiently explain why mental effort is something to be avoided. Or why multisyllabic words are actually bad. Perhaps if one read more texts with lots of multisyllabic words it would get easier over time?
They give a scary example sentence from an abstract (completely taken out of context anyhow):
>Here we show that in mice DND1 binds a UU(A/U) trinucleotide motif predominantly in the 3' untranslated regions of mRNA, and destabilizes target mRNAs through direct recruitment of the CCR4-NOT deadenylase complex.
Well a good reader knows that when you don't know a word, you look it up. If i was reading this paper I would have to look up just about everything in that sentence:
What is DND1?
What is a "UU(A/U) trinucleotide motif"?
What is "the 3' untranslated regions of mRNA", what is translation of mRNA for that matter? What does mRNA even do?
What is a target mRNA and what does it mean for one to be destabilized?
What is the CCR4-NOT deadenylase complex?
Would it take me hours to read this paper and gain an incomplete, novice-level understanding of it? Yes. But just in that one sentence I would learn like 1000% more about biology than I currently know.
You do not need researchers to waste time writing a basic biology textbook in every single one of their papers. You need them to communicate their research and get to the point. If the reader wants to understand it they need to put in the work, science will never be easy and devoid of mental effort.
I realize this will not be a popular post as many people value accessibility in science and more widespread science literacy. But I argue that accessibility is not the same thing as easy reading, and a literacy built on purposefully watered down texts is a cheap knockoff of true understanding won through dedicated effort.
I mean, no, you wouldn't. DND1 is a protein name, and googling it won't tell you exactly what it does, because it may be involved in several pathways. There is probably a gene dnd1 (note the lowercase) that will muddle up your search results. Destabilizing mRNAs can happen in a bunch of ways, and knowing the others won't help you with this one; also, the vast majority of biologists don't care about mRNA being destabilized one way or the other. Biology is a ton of details, and by learning too much too early about the details you miss the big picture. Just sign up for a class if you're at this level.
> You need them to communicate their research and get to the point.
Arguably the problem with the sentence you quoted is that it gets too much to the point. It is very precise and obviously of use to anyone who is interested in mRNA decay. It does not tell you what most HN readers want to know, which is why they should care about mRNA decay.
(And if they want to know that, they should read review articles.)
There is a problem with opaque biology papers, but in my experience, the main problem in those cases are the data (impossible to find) and the figures (tables filled with bad statistics and low-res western blot pics). I understand all the jargon in the sentence you quoted, but none of the implications; and I understand that this means I haven't learned anything by reading it at all (though I do have a grad degree in biology).