Fundamental research and the humanities both have these funding issues. It may be worth discussing whether these fields could benefit from increased public funding, since some forms of research don't produce an immediate economic benefit.
Add to that the number of distractions that everyone is subjected to in the modern world, and I start to feel like it's no wonder we're still struggling with quantum interpretations or going beyond the standard model.
Now, I'm not saying that we should reduce the number of conferences or papers, or the amount of outreach, just the amount of time academics need to spend on them. Of course there will always be a few of the elite who find time to sit and think for hours a day outside of these; nevertheless, the harder you make it, the more knowledge will suffer.
Steve Levitt published Freakonomics together with Stephen Dubner, then went back into academia while Dubner went down the podcaster route.
15 years later, Dubner has reached millions of people with his work, while Levitt recently started a podcast and soft-quit academia after realizing that one of his best papers, the product of years of research and hard work, got 3 citations. He assumes fewer than 20 people will ever read that work.
Levitt in particular says he feels like his research can have a bigger impact if it’s shared broadly with people than if it reaches 10 other academics and that’s it. We’re not talking about likes on social media, we’re talking about breadth of audience for academic research and how scientists can do better than chasing meaningless citation stats.
Levitt talks about his thought process in this wonderful episode of Freakonomics. https://freakonomics.com/podcast/math-curriculum/
> Dubner went down the podcaster route. 15 years later, Dubner has reached millions of people [...]
I assume this is measured in terms of subscribers or listen counts.
I'm not saying the Freakonomics Stevens have/haven't had quality contributions to the world, I'm not even saying anything w.r.t. average podcast quality vs. average academic paper quality. I'm only saying that people often measure impact along only one dimension.
This is the Freakonomics episode where he talked about this https://freakonomics.com/podcast/math-curriculum/
Interesting that they pin their hopes for data fluency on the College Board while the SATs are falling out of fashion.
The ability to use the web, citation indices, etc. to discover and read papers has made the whole of the scholarly literature much more useful to me than it would have been a few decades ago.
The code itself is almost always a mess: no readme, no license, random binaries, and a 'pipeline' that is simply a bunch of Perl and Python scripts cobbled together from snippets found on the web.
People publish, then move onto the next project. There is no maintenance.
I agree with you that this is research and there's no reason for it to be pretty. The fact that it's unsightly and brittle makes a lot of people reluctant to share it online, since it could reflect poorly on the quality of their non-prototype code. Maintenance is a large burden, and once the paper's original goal has been served, there's little benefit to the researcher in keeping the code around, so I understand why they might decide not to release it at all.
A recent example of this: I was building a gesture classifier using an accelerometer to generate data. I decided to use an SVM since, based on what I had read, it would be the simplest to port to the microcontroller I was using. I was getting decent performance but needed to extract better features. I found several papers on the topic with some novel ideas and impressive results. While it was nice to know that accuracy could get that high, none of the papers explained how the features they extracted were computed; it was usually a one- or two-sentence mention of using both time- and frequency-domain data.
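For anyone stuck in the same spot, the features such papers gesture at are usually along these lines. This is only a minimal sketch of the common pattern, not anything from those papers; the window length, the specific features, and the scikit-learn SVM are my assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def extract_features(window):
        """window: (n_samples, 3) array of x/y/z accelerometer readings."""
        feats = []
        for axis in range(window.shape[1]):
            sig = window[:, axis]
            # Time-domain features.
            feats += [sig.mean(), sig.std(), sig.min(), sig.max(),
                      np.mean(np.abs(np.diff(sig)))]  # mean absolute first difference
            # Frequency-domain features: energy in a few coarse FFT bands.
            spectrum = np.abs(np.fft.rfft(sig))
            bands = np.array_split(spectrum[1:], 4)   # drop the DC bin, split into 4 bands
            feats += [float(np.sum(b ** 2)) for b in bands]
        return np.array(feats)

    # Hypothetical usage: `windows` has shape (n_windows, 128, 3).
    # X = np.stack([extract_features(w) for w in windows])
    # clf = SVC(kernel="linear").fit(X, labels)

For a binary classifier, a linear kernel reduces the trained model to one weight vector plus a bias, which is what makes it so easy to port to a microcontroller.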
Excuse me. Is this your actual experience or the meme that everybody is simply repeating?
In the admittedly few papers I have published, the reviewers’ feedback has been a net positive I wouldn’t be without.
Also, the general focus on code in science here on HN is so incredibly myopic and narrow-minded. Research code is not a high-profile software project; it's one piece in the toolbox used to solve a problem. Some of it grows into larger projects, some fulfills its purpose and stays fixed in time.
One of the causes of the slowdown (even if marginal) of science and research in general is the administrative and social-media overhead that has been layered on top of them.
I'm not sure you appreciate how hard this might be for someone who is not closely connected to a project.
The people who can do that reliably will have: a) academic chops on a similar level to a peer reviewer's, and b) social media skills. But a) puts them back in the bracket of people you don't want to 'slow down'.
Moreover: the population of people who are good at doing something isn't necessarily the same as the population that is good at explaining it.
It's like saying Newton would have to spend 20% of his time explaining what he's doing to 14-year-old kids, when someone less capable could do that job just as well and leave Newton free for what he does best.
> change in citations at 1 year (Tweeted +3.1±2.4 vs. Non-Tweeted +0.7±1.3, p<0.001)
I don't know what that p<0.001 means in this context, but given those 95% confidence intervals there is certainly more than a 0.1% chance that the null hypothesis (that the two distributions are the same) holds.
The graphs used to illustrate the paper show completely different confidence intervals from the numbers in the text, and they look too good to be true.
Given that the average tweet was engaged with fewer than 16 times, I really doubt the effect size is this big. This looks to me like a case where a couple of good papers happened to be tweeted, and nothing statistically significant can be gleaned from this paper.
Having said that, I am not a professional statistician, so I would appreciate the input of someone more knowledgeable than myself; a rough version of my back-of-the-envelope reasoning is sketched below.
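For what it's worth, here's that back-of-the-envelope check. It assumes the ± figures are 95% CI half-widths, which is my reading and may be wrong; if they're standard deviations instead, the arithmetic changes completely and you'd need the group sizes.

    from math import sqrt
    from scipy.stats import norm

    # Treat the +/- figures as 95% CI half-widths (my assumption) and
    # z-test the difference between the two group means.
    tweeted_mean, tweeted_hw = 3.1, 2.4
    control_mean, control_hw = 0.7, 1.3

    se_tweeted = tweeted_hw / 1.96            # CI half-width -> standard error
    se_control = control_hw / 1.96
    se_diff = sqrt(se_tweeted**2 + se_control**2)

    z = (tweeted_mean - control_mean) / se_diff
    p = 2 * norm.sf(abs(z))                   # two-sided p-value
    print(f"z = {z:.2f}, p = {p:.3f}")        # -> z = 1.72, p = 0.085

Under that reading the difference comes out around p ≈ 0.085, nowhere near p < 0.001, which is exactly why the reported intervals confuse me.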
Just ignoring the stats, though, the results seem pretty solid: 3x the number of citations with ~100 observations. If the tweets were truly randomly assigned and there aren't big outliers driving the results (big ifs, and I'm not gonna dig deep enough to find out), their conclusion should be fine.
Two distributions can overlap a lot and still produce a low p-value for ANOVA etc. if there are enough data points; see the toy simulation below.
An example of this is the rat maze study that Feynman cites.
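To make the overlap point concrete, here's a toy simulation with arbitrary numbers: two normal distributions whose means differ by only 5% of a standard deviation (so they overlap almost completely) still yield a vanishingly small p-value once the sample size is large.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    a = rng.normal(loc=0.00, scale=1.0, size=100_000)
    b = rng.normal(loc=0.05, scale=1.0, size=100_000)  # shifted by 0.05 SD

    t, p = ttest_ind(a, b)
    print(f"p = {p:.2e}")  # p << 0.001 despite the near-total overlap

Significance only says the means probably differ; it says nothing about whether the difference is big enough to matter.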
The last thing we need is clickbait science.
It’s also perfectly justified to search for papers solely based on how influential they’ve been in your field. Papers which trend on Twitter (perhaps long-awaited results) are already having an impact on labs, probably being discussed in journal club and shaping the trajectory of the field.
Related work sections aren't typically thorough; that's what lit surveys/reviews are for. It's also not a new phenomenon that highly cited work gets cited because it's highly cited, i.e. work is cited simply because it's more readily noticed. It's also common for the same research to arise independently in different fields under different terminology, i.e. whole bodies of research simply go unfound. So, no need to be scared; it's nothing new.
This effect would have to be replicated in some form to map its boundaries, but if it were to generalize, I think it has much broader implications than just Twitter and thoracic surgery.
There have been lots of bibliometric studies of citation impact and the like, but this is one of the first times I've seen an actual experimental study of this sort of thing in academia, and it confirms what a lot of people have been experiencing: actual scientific quality is only part of the reason some work gets lots of attention. Replace Twitter with other forms of social networking and the implications become clear.
There are ups and downs to this: maybe some underappreciated work would get more attention if it were marketed more, for example. But it speaks to the role of things outside the domain of study per se (not sure what the term for this would be; something like 'nondiegetic', but for scientific content).
A thorough literature search is mostly needed to avoid claiming as novel innovations/discoveries things that others have previously published (even if you more or less ended up discovering them independently). Secondarily, it also helps you avoid ratholes that others have stumbled into (although the reduced incentives for publishing negative results make this less effective).
However, citations of papers encountered on (and recalled from) social media are more likely to be for the inspiration they provided for the line of inquiry or methodology in your paper (even if that inspiration was negative, i.e. you're refuting someone's work or conclusions).
However, I think RTs enhance the experience a lot, as you get to discover other people with similar interests kind of organically. But the same rule applies to people who RT a lot of negativity-heavy tweets: just unfollow or mute them.
If you're interested, I recently wrote a bit about how I've been using Twitter quite happily in the past few years.
It seems a strange property that Twitter has developed: being both a terrible experience and incredibly effective at organic advertising.
If it is tweeted on an account with 0 followers, then certainly not, right?
And when tweeted to one million scientists in the same field, then certainly yes, right?