Stepping back a bit, I feel the argument is analogous to how music is becoming more mainstream and generic, how casual gamers "corrupt" the art form of gaming, how clickbait and stupid stuff is overshadowing lovingly curated independent websites, eternal September, etc.
One answer to all these propositions is that the valuable stuff is still there, you just have an additional flood of mediocre boring predictable stuff. You can still find enough indie music and games and movies to spend all your free time on.
Similarly, if you know where to look and whose papers to watch out for, you can read more interesting stuff. The existence of more bullshit makes the filtering take somewhat more effort, but it's bearable.
Most papers are never deeply read anyway, outside of the reviewers. Even citations don't mean someone really read the paper in depth.
So, most mediocre, boring papers get ignored.
It's an illusion to expect that every researcher can pump out multiple really interesting non-boring papers every year. So we pretend. We write, we cite, we present, overinflate, overpromise, etc. It must look like there is steady hard noble work and toil. It's like a ritual. The scientists are writing all these papers so taxpayers can sleep well knowing their money pays for hard work.
I largely agree, but we do not write all these papers "so taxpayers can sleep". We write these papers because the administration incentivizes us to get the highest-profile publications with the lowest effort possible. In relatively creative fields (i.e., much of computer science), this can even mean making friends in the community and co-authoring papers with them without being the one who does any of the hard work. (In other sciences, it means being the one who pulls in the money, as far as I understand.)
A related problem I see is that we treat all papers the same: position (opinion) papers, reviews & surveys, and "hard" results (relevant proofs, strong empirical results, software artifacts that are deployed at scale in practice etc.). As the blog post somewhat suggests, the administration should primarily ask us to summarize these results when assessing whether we are worthy of funding and faculty positions; sure, it takes more effort, but I think it is possible.
Exactly, the big issue is that paper counts have become the dominant metric. And like with all structures dominated by a single metric, it's easy to game the system.
Most professors I have worked with in experimental sciences are completely disconnected from actual research. They just optimize for hiring hordes of people to churn out tons of mediocre papers, which they don't care about. I came from theoretical CS & math, so this was really shocking.
It's really depressing and it needs to change. Academia is now in a phase similar to that of an innovative startup which has been filled with middle managers and has turned into a corporate monstrosity.
"Paper count" is about as reliable a metric as "lines of code". Publish or die is a toxic mentality in academia and you are certainly not the first to have observed this, sadly.
It's pretty outdated. In research schools you need to win funding or die nowadays.
And while publications do help you, say in getting hired and promotions, your publications are dug into pretty deeply. They don't want to be tricked into hiring someone useless who can crank out fluff but can't win a grant.
This is completely true -- something that many outside of academia don't understand -- but I'm not sure it really makes the situation any better. In a lot of ways it just kicks the problems down the road.
By whatever metric you use, it's really about the checklist rather than substance. And the checklists are based on stereotypes, and get gamed.
> This is completely true -- something that many outside of academia don't understand -- but I'm not sure it really makes the situation any better.
Indeed in some ways it's worse. When the money is insufficient, the established researchers get it first. Then everyone else has to increasingly work for established researchers as postdocs and similar positions... helping to yet further boost the advantage of the established researchers. It went from senior researchers versus junior researchers, to senior researchers with a herd of postdocs working for them, versus junior researchers.
At least if it was based on publications, everyone gets co-authorship and in fact the first author counts most. So the postdoc and the PI gain the same new items on their CVs for their collaboration. For grants, credit goes entirely to the PI.
Maybe software solutions can be applied to academia.
In software, we have "source code distributions", such as Debian, Fedora, Gentoo, Arch, Red Hat, SUSE, etc. They are collections of mostly the same free software, with minor differences, but with different policies and purposes. Some are conservative collections of proven software, others are bleeding edge, etc.
IMHO, someone should start to collect papers that are worth reading, and keep this collection up to date, with fixes to papers (akin to patches in software), with one paper per topic, with ready-to-use math models and simulations, with in-software tests for models, etc.
It's a nice idea but there are practical problems. The first is that it supposes that scientific publication is a rational and concerted process. It is not; here is an example:
I am interested in Amyotrophic Lateral Sclerosis and I blog about research done on ALS [0].
However, there are 15,000 scientific papers published each year on this topic; on average only 20 to 30 articles per year are worth spending the time to read (about one in every 500), and a milestone appears only every 3-4 years (TDP-43 in 2006, C9orf72 in 2011, etc.).
After 100 years there is still no clue about ALS etiology. Scientists have strong but widely different opinions: for some it starts in the brain motor area, for others it starts at the peripheral nerve junction with muscles, for others (a newer trend) it starts in the gut.
And scientists happily pursue hypotheses that their colleagues are supposed to have disproved. Sometimes the same lab (I have one in mind which is perhaps the best known in ALS research) publishes articles contradicting its previous research without discussing the issue.
How to rationally blog about this domain, how to make collections that are worth reading?
Every year there are 15 gazillion modules published to NPM; on average only 20-30 are worth spending the time to read, and only a handful of milestones are worth including in said distributions.
After 100 years there is still no clue whether microservices or monoliths are the correct hypothesis, and sometimes the same lab publishes articles contradicting previous research.
In free software distributions, we have "leaders" and "maintainers". Maintainers are volunteers who each keep just a few software packages up to date. Leaders choose the general direction of a distro.
IMHO, something similar can be applied to science. Leaders can choose a general direction of a collection of papers, while maintainers will work on small areas, keeping them consistent and up to date with new research.
For example:
- LIGO/VIRGO discovery of gravitational waves means that the speed of light in vacuum is not constant. It's constant in "steady" vacuum only.
- Discovery of Higgs boson means that vacuum is not an empty space, because Higgs field is present everywhere.
These recent discoveries significantly change the field of science, so a bunch of classic papers must be updated or discarded in favor of new ones.
I'm not sure if those two fields are comparable. In software engineering, at least there's usually a dollar value impact for the code you write. In research, the impact isn't as tangible and probably won't be known for years.
It's TPS reports. That's what it is. And until academia gets their own Bobs to ask academia's Lumbergh, "Yeah, Academia, lemme ask yah... real quick question here... how much time wouldja say yah spend each week dealing with these papers?"
Maybe you didn't mean it, but 'gaming the system' sounds bad. While I agree the outcome is bad, I think that when your boss tells you to publish 10 papers per year and you do that, it cannot be called 'gaming the system'.
It has been like that for ten years and nothing has changed since then. People keep finding out just about the same as you did, but nothing changes. It's such a convenient metric that it will stay for much longer.
Reviews are especially bad. You can tell which people work at schools where citations count in their pay (and reviews are allowed in citation counts) by the ratio of reviews to original research papers they write. Some schools/govts really need to wise up.
I think of it as a kind of predatory or scavenger behavior. They basically steal the citations from the papers they are reviewing in return for a small effort in helping us with information retrieval. We might as well just cite google and elsevier. Ideally, citations should pass through them somehow and get applied to the original sources who did the hard work.
It's true that some review papers are shallow, but it's pretty cynical to paint all contributions of this type with the same brush. In my experience, good review papers are more than summaries. They can connect separate lines of research, for example, compare and contrast different ideas, and re-contextualise a body of research to give a broader picture and reveal new and interesting directions. These papers are hard to write well and take a long time. Even more so in cases where empirical work has been undertaken so as to make direct comparisons and provide reference implementations.
Even the summary papers can be useful. On particularly active problems there can be dozens of papers per year. Sorting out wheat from chaff by highlighting notable works, and pointing out trends in published research, is again a genuine contribution, and helpful for the scientific community.
Of course they are useful, as is Google Scholar and the search field in various journals. Or some blog that tells you what was new at the recent conference. That doesn't mean they deserve to be cited over the real sources. They are cited as a crutch: "go here for a more detailed list of sources". The little meta-analysis they do doesn't explain most of the citations they get.
Hard disagree. We need to incentivize more curation, distillation, summarization, comparing/contrasting, systematizing, categorizing approaches along various axes, meta-analyses, etc., not less.
These are often very valuable, more so than yet another 1% improvement paper that never gets reproduced.
Review papers don't steal citations. They are cited by more distant literature so readers can familiarize themselves with the topic; those works wouldn't cite all the papers mentioned in the review individually.
But as a broader point, yeah, maybe something like PageRank could be used to "pass on" the citation in a sense.
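To make that concrete, here is a minimal toy sketch (my own speculation about what "passing on" could mean, not an established metric; the graph and names are made up): a PageRank-style iteration over the citation graph, so that credit flowing into a review paper is largely redistributed to the papers it cites rather than piling up on the review itself.

    # Toy sketch of "passing on" citation credit, PageRank-style.
    # Edges go from citing paper to cited paper, so a heavily cited review
    # forwards most of that credit on to the original sources it cites.
    def pass_through_credit(cites, damping=0.85, iters=50):
        """cites: dict mapping each paper to the list of papers it cites."""
        papers = set(cites) | {p for refs in cites.values() for p in refs}
        n = len(papers)
        score = {p: 1.0 / n for p in papers}
        for _ in range(iters):
            new = {p: (1 - damping) / n for p in papers}
            for p in papers:
                refs = cites.get(p, [])
                targets = refs if refs else papers  # papers citing nothing spread evenly
                share = damping * score[p] / len(targets)
                for r in targets:
                    new[r] += share
            score = new
        return score

    # Hypothetical example: three later papers cite only the review,
    # which in turn cites two original papers.
    graph = {
        "review": ["orig_a", "orig_b"],
        "later1": ["review"], "later2": ["review"], "later3": ["review"],
    }
    print(sorted(pass_through_credit(graph).items(), key=lambda kv: -kv[1]))
    # With raw counts the originals see none of the later citations;
    # here some of that credit is routed through the review back to them.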
> These are often very valuable, more so than yet another 1% improvement paper that never gets reproduced.
It's not an either-or.
And you should always read and accurately describe the contents of what you cite, meaning of course you wouldn't cite everything in a review paper directly. Only that which is relevant.
Suppose we count both separately, research citations and review citations, the latter for your great review papers or even if people just want to reference the lit review section of your research paper. What would happen to the incentive? I suggest that reviews would be less incentivized as people would primarily be concerned with rating researchers according to their research citations. Just like no one cares about your textbook sales, popular though it may be for introductory uses. This would imply reviews were not "honest" citations but are done to hack the metric.
Well, you guys are pulling me to an extreme position here. Of course I love good textbooks too. And review journals like Signal Processing Magazine are my favorites. I'm sure many people do them for great reasons too. Let's not forget that academics teach too, and there are academics and colleges that are entirely devoted to teaching, not research. People are free to prefer them more if they like. However, when it comes to research metrics, the person who did the research deserves the credit for it. Even if we subtract the citations for obvious review papers (which is commonly done), there are still the citations lost by the original researchers.
Reviews also cause other devious problems, like journals hacking their own impact factors. And don't forget the Matthew principle (aka rich get richer) whereby only big-shot researchers can get invited to have their reviews published in high-impact journals, warping the network effects to their favor even more.
I’ve heard the term “minimum knowledge increment” used as a measuring unit for papers. Dice whatever research you have into papers that contribute the minimum necessary for publication. Dicing may only produce 1 paper, good enough yet meager.
>It's an illusion to expect that every researcher can pump out multiple really interesting non-boring papers every year. So we pretend. We write, we cite, we present, overinflate, overpromise, etc. It must look like there is steady hard noble work and toil. It's like a ritual. The scientists are writing all these papers so taxpayers can sleep well knowing their money pays for hard work.
The flip-side to this is that there is a glut of moderately motivated scientists and PhD candidates. As a result, the demand for conferences and citations is self-sustaining to meet the expectations of millions of people wanting to enter the academic world without a clear plan beyond joining the world itself.
It's as though the English-major-to-professor cycle has also become a reality in the hard sciences.
In 1916, J.J. Thomson (as in Thomson scattering; discoverer of the electron) said it perfectly [1]:
“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it."
I was thinking more along the lines of people who are not suited to do research in the first place. This will either be those who join the academic world out of inertia following their educational path or class/cultural pressure, or those who dislike the general labor market and want to enter the academic world due to its particularities.
On a speculative note: I believe it's yet another argument in favor of UBI.
I'd assign scientists a "pension" every time they make a worthy discovery. That pension would be a monthly payment of a fixed dollar amount; it would be inalienable and paid for the rest of the scientist's life regardless of other achievements. Publishing a minor but worthy paper would give a 100 usd/month grant. Getting a Nobel prize would give, say, 1 million a year. However, inflation would gradually erode the value of the grants.
How would that fix the actual problem? And how do you judge what is a worthy discovery and what isn't?
As parent said, UBI would probably fix the incentive scheme, your solution wouldn't. You need to decouple the reward from the result so that research is done out of sheer curiosity and love of science.
You can't decouple this (fully) with UBI. Only a little research is done at a desk; a lot needs equipment, sometimes very expensive. With UBI you just get the time but not the stuff.
This was the original intention of patents. Let market value decide whether the discovery was worthy and let the inventor earn from licensing it. We all know that's not how it works today, though, thanks to a sloppy patent office and abusive patent trolls.
I kinda would suggest the problem in some ways is the opposite: even people who are well-suited to what academia should be are being driven out.
It's such a mess at the moment that it's not really possible to continue out of inertia. Even if you're well-suited to research and discovery at some level, and motivated, it's a soul-crushing experience because at every step you're incentivized toward something else.
Here is a 1994 essay (also given as testimony to the US Congress) on part of why it is a mess, from Dr. David Goodstein (then vice-provost at Caltech): http://www.its.caltech.edu/~dg/crunch_art.html
In that essay, Prof. Goodstein explains how academia had been growing exponentially for decades since WWII, with each PhD giving birth to another litter of fifteen PhDs (my phrasing...), until finally that exponential process began to hit limits. The end of exponential growth in academia, beginning around 1970, led to a breakdown of peer review and lots of other issues.
That's also part of why those of us (like me) who got honest well-meant advice from extremely successful academics in the 1980s or later about the value of a PhD and a career in academia were misled, even though that advice had worked for the advice giver. Freeman Dyson, by contrast, tried to steer bright people (including me) away from PhDs even in the 1980s -- having an intuition for this sort of thing. Wish I had listened to him more. :-)
Also related by Philip Greenspun from 2006 (later updated), on "Women in Science" but which also applies more broadly and is a good description of the consequences of the situation Goodstein described in 1994: http://philip.greenspun.com/careers/women-in-science "This is how things are likely to go for the smartest kid you sat next to in college. He got into Stanford for graduate school. He got a postdoc at MIT. His experiment worked out and he was therefore fortunate to land a job at University of California, Irvine. But at the end of the day, his research wasn't quite interesting or topical enough that the university wanted to commit to paying him a salary for the rest of his life. He is now 44 years old, with a family to feed, and looking for job with a "second rate has-been" label on his forehead. Why then, does anyone think that science is a sufficiently good career that people should debate who is privileged enough to work at it? Sample bias. ... A good career is one that pays well, in which you have a broad choice of full-time and part-time jobs, in which there is some sort of barrier to entry so that you won't have to compete with a lot of other applicants, in which there are good jobs in every part of the country and internationally, and in which you can enjoy job security in middle age and not be driven out by young people willing to work 100 hours per week. How closely does academic science match these criteria? I took a 17-year-old Argentine girl on a tour of the M.I.T. campus. She had no idea what she wanted to do with her life, so maybe this was a good time to show her the possibilities in female nerddom. While walking around, we ran into a woman who recently completed a Ph.D. in Aero/Astro, probably the most rigorous engineering department at MIT. What did the woman engineer say to the 17-year-old? "I'm not sure if I'll be able to get any job at all. There are only about 10 universities that hire people in my area and the last one to have a job opening had more than 800 applicants." And that's engineering, which, thanks to its reputation for dullness and the demand from industrial employers, has a lot less competition for jobs than in science. What about personal experience? The women that I know who have the IQ, education, and drive to make it as professors at top schools are, by and large, working as professionals and making 2.5-5X what a university professor makes and they do not subject themselves to the risk of being fired. With their extra income, they invest in child care resources and help around the house so that they are able to have kids while continuing to ascend in their careers. The women I know who are university professors, by and large, are unmarried and childless. By the time they get tenure, they are on the verge of infertility. ... I've taught a fair number of women students in electrical engineering and computer science classes over the years. I can give you a list of the ones who had the best heads on their shoulders and were the most thoughtful about planning out the rest of their lives. Their names are on files in my "medical school recommendations" directory. ..."
Anyway, that's all part of why I support a UBI. Give everyone enough money to live as a perpetual graduate student (without publishing and without becoming one of the "Disciplined Minds" of Jeff Schmidt's book on academic perils) and some of them will, which will eventually remake academia in a healthier way (including taking more chances on small-scale but far-reaching basic research, whether towards cold fusion, quantum teleportation, new computer languages, new materials, new batteries, new ways of thinking, new communications patterns, new ways of conflict resolution, or whatever). If for that reason alone the USA should invest in a Universal Basic Income.
I basically mirror Thomson's opinion, but there are other frames on the thinking. E.g., as paying someone to demonstrate a deep understanding of a topic and articulate the uncertainties. That is different from traditional research because they aren't being asked to do anything exactly new. But in practice that is what I think the better researchers actually do.
Maybe there is a clue in the name "re-search". Epistemologically I don't think it is possible to set out to discover something that isn't already known to exist, so the justification for paying people would best be for evidence that they were searching rather than if they find something.
Perhaps it varies by specialty/region, but I have a good friend in the sciences here in Australia; funding has dried right up.
This year, less than 10% of NHMRC grant applications got approved, and of the ones approved most spent nearly as much time on grant applications as they did on the real work. Anyone who is only moderately motivated has retrained, because working for low pay half of the year is not appealing to anyone who can pass any doctorate program.
Even if you took the position that only the good ones are getting funded, spending 50% of your 'good' researchers' time is a tragic waste of human ability.
This is a bit off-topic to papers, but perhaps there's an angle.
> One answer to all these propositions is that the valuable stuff is still there, you just have an additional flood of mediocre boring predictable stuff.
I don't think this is true for most things with a mass market. I think products which have efficiencies of scale - or low marginal costs of reproduction - suck up more capital, which raises certain basic levels of standards and polish and marketing, but also greatly increases capital risk and thus reduces investor risk appetite for product novelty. These mass market products starve the mid-market of attention, which makes the mid-market less capital efficient, which means the mid-market needs to get cheaper and cheaper, lose polish, until it's less attractive to the mass market of consumers, and so on in a spiral until a certain spartan rawness becomes its own aesthetic - indie games, indie films, indie music.
I think there's a concrete example which demonstrates the actual absence of that long tail of interesting content, and not just that it's harder to find: blogs.
Blogs were great in the mid 2000s. Then the market did two things: it matured - the best blogs got more mindshare and turned into something closer to media organizations, much more capital intensive with full-time paid writers - and Facebook and Twitter sucked up the mass market of newsfeed consumers, both on the producer and consumer side, making it easier to produce low-value inanities and easier to consume tidbits from celebrities. This sucked the air of attention from blogs; being long-form, they were too hard to write and too long to read; and being unpaid, they didn't have slick editors or snazzy design. So a lot of them shrivelled up. There are still niche blogs, plenty of them, but a lot fewer - and we can track this loss, because we have (or had) them in our newsreaders.
The problem with research is similar but even deeper. Researchers are supposed to show feasibility or existence of some result, then applied scientists and engineers take this off into industry or elsewhere to build on it.
But there's no direct trading going on at that stage to ensure they deliver what is claimed; the research "product" is just dumped into papers. And those papers serve as an impediment to subsequent researchers, since you can't publish or get funded for the same thing that's already been done and supposedly proven by someone else. Of course you wouldn't have to if they had really solved the problem. But they just threw together some minimal effort to stick their name on it first before moving on to the next low-hanging fruit somewhere else. Doing a literature review can be really frustrating.
Video content could be in itself another argument in favor of GP's thesis. There seems to be a dearth of high-quality video material. You have big-budget high-quality videos, big-budget garbage, and then lots and lots of low-budget garbage: i.e. all the stuff "influencers" and "YouTube personas" create to live off ad revenue.
There is a small and somewhat obscure but relatively stable middle - low-budget, high-quality videos. I attribute its survival to two things: some live off Patreon donations and occasional sponsorship deals, others live because the creators have an actual dayjob, and they do YouTube as a hobby. This is similar to blogs - best ones are done on the side, with no expectation of revenue.
I see more of your latter paragraph. In several of my interest areas there is a good amount of useful content in videos. Best example is probably electronics, where a lot of tribal knowledge is now more accessible than ever before.
Practical mechanics, both old fashioned (metal and woodworking) and new (CAD design, 3d-printing), also has a lot of useful video content.
While there is a lot of pop-commercial "personas" making noise with their videos also, it does not seem to have hurt output by "in it for the passion" people.
I think I agree with a large chunk of this: there are inevitably triage/discovery problems as the number of things/options go up.
The cost of evaluating each remains fairly constant, and at some point production outstrips the ability of even extremely-interested/dedicated individuals to have a good handle on it all. Beyond this point, you either need organized collective effort (and trust) to efficiently distribute the work of keeping up (and synthesize the results) or the larger community will be wasting ever-increasing slices of potential energy to extract increasingly less knowledge of the scope of activity.
But there's a difference between simple volume growth for good reasons (i.e., people chasing incentives that are aligned at multiple levels of society), and volume growth caused by people chasing incentives that are misaligned at one or more points up the stack.
If the researchers are spending their time on things even they find boring because of Goodhart's law, I think it is important to recognize it and try to undo the misaligned incentives. They're not only externalizing a cost on their entire field of knowledge -- we're also collectively suffering some ill-defined opportunity cost of whatever it was they would've spent their time on if they were following curiosity.
Inevitable or not, this has real costs, in that the government will eventually turn off the gravy train and the "good" and "bad" research alike will be negatively impacted.
In other words, academia is sowing the seeds of its own demise, and with it, the official institutional pinnacle of our culture [please don't wince too hard when reading that!].
The amount of library work we are not doing is just staggering. Consider these claims:
- There should be an accredited-author-only "kernel wikipedia" which the main one can choose to incorporate, almost like release vs staging branches.
- Different subfields may wish to keep their own intentionally biased corpora, like https://ncatlab.org/nlab/
- After 6 months, virtually no one should have to read the original research article, because the librarian core will have incorporated its results and claims (however controversial) into the appropriate articles
- Textbooks / "knowledge bootstrap plans" should be continuously updated from the encyclopedia, not unlike how bootstrapping is managed in a package repo
- Librarians do the work, but also adjudicate disputes, as researchers will be naturally incentivized to contest how their work is incorporated as that is the primary way it is consumed.
I agree there is not enough library work and there is too much focus on the 4-15 page publication format. That length is often not enough to fully flesh out an idea or to present something non-incremental. Rather, people tend to chunk up work, which is called salami publishing.
Now, sure doctoral theses also exist but the vast majority of research is presented in about 4-15 pages.
But I disagree that after 6 months everything should be integrated into other summary work.
Most published research is not worthy of being integrated into other work, and 6 months are not enough to decide. Most papers are forgotten, and rightfully so. Publication doesn't mean it's correct or worthy of eternal remembrance. It just means that a scientist wants to share a finding with the research community. It's not established knowledge yet. Only when it is actually adopted and used successfully by the community will it be part of established shared knowledge. After some years such key ideas do get written into textbooks.
Exactly! The fact of the matter is Accreditation and the State go hand-in-hand. You can go full anarchism like Wikipedia (and that's great!) but if you want something a bit more curated, a bit more "official", you need officers (by the etymology, even!) and you need funding, i.e. state support.
> In other words, academia is sowing the seeds of its own demise
This is first order thinking. Yes, if an increasingly large quantity of academic research is becoming bullshit, eventually the ratio of good to bad research will reach an inflection point where policy will be set that reduces research funding.
This reduction in funding will have many effects, but the largest will be that as funding becomes a scarce resource, a higher curation of both researchers and topics will come into effect. There will be less research and papers overall, but the largest decrease in research will be those that are bullshit.
Finally, as more and more research will be fruitful and useful, and fewer and fewer bullshit papers are published, an inflection point will be reached. There will be more funding allocated to these exciting new areas of study, and more research will happen. Of course, with funding no longer a scarce resource, less curation will occur, and a few more bullshit papers will be published... And the cycle will have advanced one wavelength.
It's been happening for years, centuries, as a basic economic cycle, and looks something like this:
_/T\_/T\_/T\
> Question. In what ways can a recession be useful for forcing inefficient public-sector agencies to lay off redundant workers and reduce bloat?
> Answer. None.
And keep in mind that academia might as well be considered public sector, but the idea transcends to most large-scale/loosely-planned cost cutting for efficiency gain schemes.
Your starting point is a lot about individual virtue, and bad apples crowding out the good. I don't disagree that some researchers are better and some are worse, but I think that any "cure" is going to be gamed and worse than the disease.
> but the largest will be that as funding becomes a scarce resource, a higher curation of both researchers and topics will come into effect
Do you have any evidence of this? I disagree in the strongest of terms. By my reckoning, curation, prevention, and other "foresight-driven" work is persistently undervalued by our society and economy. The nastiness that will accompany shrinking funding as research groups fight for short-term survival will only make that worse.
> but the largest decrease in research will be those that are bullshit
Why would those who are the best at gaming incentives now lose that skill?
> It's been happening for years, centuries, as a basic economic cycle, and looks something like this: _/T\_/T\_/T\
So after centuries of Byzantine decline, is there a new group of lean and mean Greek Constantinopolitan bureaucrats?
After Spain grew rich on New World bullion and then poor with a dearth of industrialization, is there a new generation of hyper-efficient factories putting Germany to shame?
I dunno what history you are reading, but the good actors never outlive the broken system and get the last laugh in mine.
> Your starting point is a lot about individual virtue, and bad apples crowding out the good. I don't disagree that some researchers are better and some are worse, but I think that any "cure" is going to be gamed and worse than the disease.
I don't think that there will be any cure, just a change in available resources that changes the number of projects that get funding. Of course people will game it, but the people who dole out the cash will be more selective if there is less cash to dole.
>> but the largest will be that as funding becomes a scarce resource, a higher curation of both researchers and topics will come into effect
> Do you have any evidence of this? I disagree in the strongest of terms. By my reckoning, curation, prevention, and other "foresight-driven" work is persistently undervalued by our society and economy. The nastiness that will accompany shrinking funding as research groups fight for short-term survival will only make that worse.
There isn't any evidence of events that haven't occurred yet. There is logical and historical support, as hundreds of years of economics have shown that market systems tend to optimize around the scarcest resources.
>> but the largest decrease in research will be those that are bullshit
> Why would those who are the best at gaming incentives now lose that skill?
They wouldn't lose that skill, but as there will be fewer research projects funded, more scrutiny will go into which projects get the funding. Sure, some will still game the system, but it will require more work, and more luck, so proportionally fewer will get funded.
>> It's been happening for years, centuries, as a basic economic cycle, and looks something like this: _/T\_/T\_/T\
> So after centuries of Byzantine decline, there is new group of lean and mean Greek Constantinopolitan bureaucrats?
Now you're just being silly... You're saying that the Institution of research will fall, rather than self correct? We'll just give up on scientific research and just... Hang out or something?
> I dunno what history you are reading, but the good actors never outlive the broken system and get the last laugh in mine.
It depends on the system. There have been years with drought, but people haven't given up on farming. Systems only fail when they are unsupported or unnecessary, and Institutional research is neither of those.
Rick Beato rails against gridding and autotune in music over on YouTube.
My wife, a health outcomes researcher, points out that the expectation is that every little publishable bit be kicked out the door for a pharma project.
Historically, they would have dropped a single summary result at the end of the effort.
Now, she's trying to "autotune" the neutral or even negative results. Nominally in the name of "transparency".
If this is true, what's needed is a research Adele to come along and shame the riff-raff with sheer natural brilliance.
I agree you're losing something that might be valuable, but it's not really part of the music, rather part of the performance. I think there's room for both though, just as we've had studio vs live recordings for the last 50 years or so.
> One answer to all these propositions is that the valuable stuff is still there
I don't understand the popularity of this argument. Whilst someone might (theoretically) have written the Great American Novel in his or her basement unbeknownst to pretty much anyone, a lot of art is fairly resource-intensive, and thus you can say with extremely high confidence that it's not being produced and the fault lies not with people not looking hard enough. Same with research and journalism.
Are papers getting more boring, or does the reader just find fewer novel things to read as they read more and more? Maybe the reader is bored of the field in general. The 500th time you do anything isn't as exciting as the 5th time.
On another note, why does everything have to be exciting? Small incremental improvements over the years is what I aim for.
Music is becoming more generic because musicians can rapidly reach a wide audience and iterate quickly, so they see what works and what doesn't and continue down the path of sound that produces the most money.
Isn’t the premise that this analogy extends to academia by “seeing what works and what doesn’t” to produce the most publications to the point where the majority of papers are derivative? While the iteration of the individual may not be quicker, the large increase in the number of publications seems to be indicative of the field iterating quicker.
E.g., the artist takes a big risk to produce a sound that diverges from “what works” just like a scientist takes a big risk to diverge from well-worn theory that has produced past publication fodder.
In the line of your comment about taxpayers' money: one of the biggest issues IMHO is the dysfunctional "market" here, because most of the work is paid for, easily, by the taxpayer. In particular, publishers mostly do not pay for reviews but are paid for journals and conferences largely on quantity (with few noble exceptions). The effect for a long time now has been an ever increasing number of conferences (peer-reviewed proceedings still make up a large share in CS) and journals. What is actually worse IMHO, the review requests get more and more frequent and are largely boring. So you really have to attract the attention of the reviewer to not trigger the reflex of looking for reasons to reject (or worse, the opposite, which may amount to the same because there are always other reviewers, so it's a matter of luck who gets assigned).
I think better structures are needed that actually incentivise good reviews. This would lead to better papers, too. There are multiple options for this. Maybe it is simply time to see reviews as publications themselves and say goodbye to the flawed blinded review process. Another thing would be to take the publishers out of the game. Even IEEE and ACM have become largely a playing field for power politics between the US, China and Europe. I think we could do better and be more inclusive here. Publishing reviews and rejected work would further make meta-studies much more reliable, also removing bias towards positive results (who actually wants to hear about easily replicable negative results at a conference?).
The problem isn’t the existence of the papers, it’s the years of effort that go into producing the papers.
It’s the failure to harness the hard work and brainpower of so many brilliant people towards efforts that would meaningfully improve the lives of others.
Or, even more reductionist: there are only a finite number of talented and smart people ever in history doing truly new and interesting stuff.
It’s just the nature of the game. You can’t just turn a dial and add more talent.
Although a bit off topic, there are some authoritarian nation states who tried to do this by social engineering. The Soviet Union made being an engineer a commonplace job title, and China is trying to produce tons of super smart kids in STEM. But we all know there is more to being smart than forcing kids to go down a certain academic route and get top-end scores (you know, the whole creative side and capacity for original works to be created).
The democratic approach still seems to be the winner (not just due to brains). As I mention, it's always going to be a small minority who do great things. That doesn't scale up artificially.
> Or, even more reductionist: there are only a finite number of talented and smart people ever in history doing truly new and interesting stuff.
> It's just the nature of the game. You can't just turn a dial and add more talent.
It’s true that we can’t turn a dial, but there must be a vast amount of human potential that goes unused every day, because people are born in impoverished countries, or suffer discrimination due to gender or race or something else, get sick with a preventable disease, or whatever other thing denies them the opportunity to make the most of themselves.
This is probably actually the norm in the world, not the exception.
So I think there’s rather a lot we could do as a species (or within individual countries) to give people opportunities and make sure potential for progress is realised
I'm not sure I agree there. It's notoriously hard to get/keep a research position, especially a) in academia and b) outside CS. There are a decent number of graduate student slots, some postdocs, and then you're thrown into the thunder dome.
The competition itself--and the resulting uncertainty--chases a lot of people into other careers. The people in my grad school cohort who "left" research were just as smart and often just as successful (in terms of papers, etc) as those that stayed.
It's not hard to imagine some policy tweaks that could have kept some of them in research.
>The people in my grad school cohort who "left" research were just as smart and often just as successful (in terms of papers, etc) as those that stayed.
Granting your premise (which I don't personally think is accurate): currently every country in the world squanders a massive amount of talent by not providing opportunity, even starting at the most basic level of universal food security and healthcare. You may not be able to infinitely turn a dial and add more talent (or productive research/innovation), but that doesn't mean our many dials are anywhere near their maximum levels either.
Another closely related possibility is that a lot of processes in science are inherently serial, i.e. they can't scale with the number of researchers, because they require slowly integrating information and making consecutive steps of progress.
Maybe after say, quantum computers are available, it takes around 25 years for a team of researchers to slowly make progress and integrate information until they find a killer application. Funding 25 teams of researchers may make them pump out papers 25x faster, but they aren't going to find the killer application in 1 year.
> So we pretend. We write, we cite, we present, overiflate, overpromise etc.
> so tax payers can sleep well that their money is paid for hard work.
It's a bit contradictory.
I won't go into scandalized rant because it's useless, and there's no point shouting against the universe from a bedroom chair. Still it's a bit disheartening. I wish the scientific world could reinvigorate and pump some spark back into the field.
Everything you say is true and I appreciate your deep thoughts.
There is another dimension to that though. It's the long tail. The internet has tremendously facilitated mixing existing content. That's creative too but not in the original sense of creating something new.
Before the internet age, with knowledge more detached, much less content was produced, and I would assume the ratio of novel to mixed content was very much in favour of new content instead of mixed content.
Internet technology brought us search engines, but their search algorithms favour those paying the most instead of providing a service to discriminate remixed content from new content.
One thing to keep in mind is that some of the papers which reviewers consider very marginal turn out to be breakthroughs. For example Jim Allison, who later won the Nobel Prize:
"Allison was hoping to be published by one of the leading peer-reviewed research journals. But nobody at Cell or Nature or any of the A-list, peer-reviewed journals was willing to publish the findings of this junior academic from Smithville, Texas. “Finally, I ended up publishing the results in a new journal called The Journal of Immunology.” It wasn’t Science or the New England Journal of Medicine, but it was in print, and in the world.
“At the end of the paper, I said, ‘This might be the cell antigen receptor, and here are the reasons why I think that it is the T-cell antigen receptor,’ and I just listed it out, all the reasons.” It was a bold announcement regarding the biggest topic in immunology. “And nobody noticed it,” Allison says. “Except in one lab.”"
Isn't it a sign of system failure when a researcher has to beg and plead to get his results published and no one is interested in his publication until someone who understands the breakthrough happens to stumble across it?
Maybe? Sometimes a breakthrough isn't obvious, or can only be understood by a few people on the planet. The story of polar codes comes to mind. How is a journal committee to distinguish such papers from all the other junk?
I think what the parent post is suggesting is almost the opposite maybe? That those few people who should understand the significance are dismissing it.
Katalin Karikó, the Hungarian scientist who pioneered mRNA research, was academically demoted at UPenn because the university considered her research "impractical" and a "waste of time" [0]. That was in 1995.
In 2021, she is a hot candidate for the Nobel Prize for the very same work, which led to mRNA vaccines.
Douglas Prasher, who did a lot of the foundational work on GFP (2008 Nobel Prize), was driving a courtesy shuttle at a Toyota dealership in 2008 because he ran out of funding.
Barry Marshall and Robin Warren had a really tough time convincing people that H. pylori caused stomach ulcers (but won the 2005 Nobel Prize).
Number theory was famously nothing more than an intellectual curiosity, but makes stuff like this website possible. Microbial opsins were also a weird intellectual backwater before becoming vital to neuroscience. Paul Lauterbur famously quipped that "you could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature."
We are really bad at predicting the future impact of projects and people, and it's important to recognize that as a counterweight to schemes that try to select for "excellence." Things that seem laughable at the time can also be the basis for future breakthroughs, so take stupid things like Rand Paul's ranting about silly-sounding projects with a mountain of salt.
But on the other hand, with a severe replication crisis in science (https://www.vox.com/future-perfect/21504366/science-replicat...), isn't it a good thing that we have a bunch of boring papers that are close to stuff that's already happened? It's not people replicating others' findings exactly, but it at least gives points of comparison that can help to weed out some of the problematic work out there.
Boring is often a good thing, not just in science. There's a reason that "May you live in interesting times" is considered a curse.
Think about the year we just had. Whatever else you might say about it, it wasn't boring.
[Clarification] Individual local experiences may have been boring at times, but the year as a whole, from a historical point of view, was not. And the things that made it non-boring also made it suck big fat honking weenies. IMHO of course, but I'm pretty sure most people on earth would agree with that assessment.
I disagree - boring is critical in science. Good science means detailed record keeping and precise experimentation. It's good if the results are exciting, but the process of doing good science is typically quite boring when it's done right.
Edit: This comment makes no sense, because I misread the comment I was responding to as saying "just not in science" not "not just in science"
I was really confused for a bit there, and I finally realized the comment I was responding to said "not just in science," not "just not in science" as I had originally read it. Whoops.
> Think about the year we just had. Whatever else you might say about it, it wasn't boring.
Being restrained to the house, watching the same news over and over again, not being able to see friends, no evenings out, parties, restaurant visits, &c. and you call this "not boring"?
...because of an unexpected, persistent, global threat to the lives of everyone, whose appearance led to economies worldwide shutting down, getting everyone glued to the news you mention, and putting a lot of people in a precarious economic situation? Yeah, I wouldn't consider that boring. Tiring, for sure, but not boring.
You may call the problems "interesting", but the political solutions I've seen so far are far from interesting ...
Yes, we're in economically difficult times (well, most people who are not in IT are), but from an intellectual viewpoint this is not a much more intriguing problem than, say, global poverty or climate change. So it's not like there is a shortage of intellectual challenges. This crisis, therefore, doesn't add much in that sense. Yet it does take away a lot of freedoms that make life interesting.
"Think about the year we just had. Whatever else you might say about it, it wasn't boring. "
Well, from a global perspective maybe not. But subjectively there were lots of freaking times this year that were just boring, because I wanted to go traveling and to festivals and just see something else and not be locked down (even though in my area the lockdown was quite light).
I think we need to take Vox documentaries with a grain of salt. They’re not as thorough as the Economist, etc. and have their own agenda mixed with YouTube content engagement metrics. You can argue that’s the case with any publication but IMO YouTube forces content makers to do crazy shit like exciting facial expressions in thumbnails for better engagement.
I personally know the author of that piece very well and can vouch for their credibility - feel free to ask if you have any questions about their work or about the replication crisis in science.
This sounds like the age-old balance between maintenance/repair and breaking-new-ground/innovation. My personal sense is that there is and always has been a gendered dimension to striking this balance, and varying levels of predisposition that bellcurve along gender-specific neurotypes.
For me, once I was pointed to this axis, I started recognizing it everywhere -- legal institutions, the way companies are run, etc etc
Hofstede's cultural dimension studies commissioned by IBM in the 50s (?) have things to say about this too
Yes, boring in the conventional sense is desirable from a scientific point of view (because the preference for surprising results leads to the biggest problem science has today: most published findings are incorrect). The linked article doesn't really do a very good job of explaining what "boring" means to the author, but my hope is he doesn't have a problem with boring in the sense of negative findings or unsurprising results.
This is a reasonable and well-explained argument, which I’ve heard many times before: the incentives are currently producing a large quantity of mediocre research, rather than groundbreaking science.
So let me ask the research community on HN something I see discussed less often. What alternative incentives exist today, or could be created, that would push more scientists to try higher risk / higher reward research?
I think the problem is requiring excessive polish in academic papers.
This causes two problems:
1) It's easier to polish something which is incremental.
2) It's easier to achieve the look of polish by purposefully hiding issues and smoothing over edges.
I'd personally like every paper about a technique or algorithm to include as many problems where it does terribly as where it does well -- this is annoyingly uncommon. Also, people shouldn't be afraid to discuss all the nasty unfinished bits, but in practice if you mention them, reviewers say "well, go fix that before acceptance".
Yes. I’m into ML/AI and would love to read more papers on cool ideas studied in isolation, but most papers out there are more “we combined these fiftytwelve techniques and look it’s SOTA on this dataset!” (It should also say “we have no idea why” but they tend to leave that out.)
I think this is close to the truth but misses an important factor: journal editors are, as a result of increasing submissions, asking for more refereeing. As a result, an increasingly high proportion of academics don't referee at all and those that do tend to referee more papers in less depth.
With less time spent per referee report, easily appreciated evidence of polish matters more, whereas papers that pay careful attention to issues of methodology, tackle deep problems, etc., require time on the part of the referee that they are unwilling to spend.
This is something Patrick Collison (Stripe CEO) has been talking about these past few years. Here’s an Atlantic article from Collison and Tyler Cowen called “We Need a New Science of Progress”
The elephant in the room, IMO, is that there is only so much ground to be broken. The physical world is fundamentally limited and limiting. There's a limit to how much there is to know about it, and as we come to know more of the big things, it becomes more and more difficult to learn the smaller and smaller things. This outcome -- that the rate of groundbreaking research is slowing -- is what we should expect, even under a consistent policy.
My biggest transformative learning experience regarding this has been how much of even technical (in particular CS, AI, ML) research is about recombination of existing ideas, and that the most successful researchers aren't doing strictly technical contributions and "breaking ground", "discovering new terrain", but selling new stories and narratives involving known concepts and shifting the emphasis from one known aspect to another.
Relatedly, in this lecture [0] he expresses it by contrasting the "positivist" model where knowledge is piled on, linearly expanding vs the model of discourse, where research is a conversation, where participants must know what the others take for granted, assume, doubt etc and new contributions argue that there are better ways to do or conceptualize things. Works are forgotten, left behind, ignored etc. A work can be valuable in one context at one stage in the discourse, while worthless at another time. It's not like a neutral map of some terrain that we can just file and store forever.
He's in the social sciences, where this may be more obvious, but it's also true of sufficiently developed technical sciences too. When the low hanging fruit has been picked, the game becomes closer to zero sum. I mean new value can still be created, but not mainly through bringing in "new characters into the story" but by letting the story unfold using the same characters (main ideas). Only a true paradigm shift may break this equilibrium.
I left academia for industry, so my opinion might be colored by the transition.
The way I see it, the easy problems that can be solved by one or two individuals are mostly gone. You need large diverse teams to tackle interesting problems.
To translate this into policy, you need opportunities for many more first authors, and you need to try to incentivize completely different departments to work together.
I’m not sure this is true in all areas, there are very simple questions left unanswered in health/medicine/nutrition that could be at least partially answered by very small studies.
I'd say this is true in a sense that you "stand on the shoulders of giants", you rely on a lot of technology and science that has been developed by others. But small non-diverse teams can accomplish important (but likely small) breakthroughs in their field. You don't necessarily need a large diverse team to do important science. My anecdote is my PhD work.
> You need large diverse teams to tackle interesting problems
This ‘corporate’ view of science is just not true, much as adding “man-hours” to software projects doesn’t necessarily produce better software, speed those projects up, or make them possible at all.
Large diverse teams just produce more mediocre research, if recent experience with the massive “deep learning” industry is any indication, or fail completely like the “blue brain” project with its multi-billion funding and thousands of participants.
Bigger is not always better and it has never been necessary for good science.
On the other hand, there are pretty hard limits to the sorts of projects that our standard model of a single full-time person[0] can do, even with some part-time collaborators (i.e., middle authors). This is especially true for projects that involve a combination of very different approaches: bioinformatics, HTS, in vivo validation, etc.
I've noticed more and more papers with multiple (sometimes 5!) co-first authors, but that's a kludge around the fact that we can't figure out how to allocate credit to teams.
Percentage ownership of resultant intellectual property.
A good example is the Minneola, a wonderful citrus fruit. The plant patent is owned by three parties: the University of Minnesota and the two inventors.
> Percentage ownership of resultant intellectual property.
Pfft. I've been through this, my ex-supervisor attempted to file a patent which included a bunch of figures and illustrations I drew single-handedly while writing up my PhD and which were lifted from my thesis without my consent.
My PhD was funded by a government research council.
My ex-supervisor was filing a patent on behalf of a private company he set up to run in parallel out of his university laboratory.
There are some people who just don't care about rules and figure they can just ask for forgiveness later. I'm still cross about this, over 20 years later.
Of all the possible alternative measurement or incentive systems for scientific progress, I can hardly think of a more meaningless and damaging one than patents. There's a huge problem of companies submitting vague, overbroad, or superfluous patents. On top of that, it does nothing to address the major problem indicated in the original article: we don't incentivize people to produce important research.
Imagine how much the Maxwell Trust would own today... The biggest problem with a viral IP system is that there would be nothing left for applied scientists.
I'm not sure that's actually something we want within the institution. IMO the most important thing is improving the overall _reliability_ of research.
On this note, and admittedly a bit of a (relevant) plug: I work at a startup called scite that's trying to improve this -- https://scite.ai
Citations are the primary mechanism by which scientific papers "talk" to one another, and one of the systemic issues we see in the status quo is a sort of numerical reductionism where a lot of emphasis is placed on "how many times someone is cited" without any indication of whether those citation counts are from papers that support or dispute someone's findings.
One of the things we do at scite is let you see, for a paper, how it has been cited (i.e. not just a count, but the surrounding textual context around the citation, and a classification from our model as to whether the citing work provides supporting or disputing evidence for the cited paper, or just mentions it).
That information is also aggregated to the author level, or journal, and so on.
The hope is that by providing (and improving) this service so that researchers can see how papers are cited, we're able to promote more reliable science (in addition to letting someone explore new subject areas / doing lit reviews much faster, and other things).
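To make the classification idea concrete, here is a toy sketch in Python. It is emphatically not scite's actual model (which is a trained classifier over citation contexts); the cue lists and the function are invented purely for illustration.

    # Toy keyword heuristic for labelling a citation statement.
    # Hypothetical cue lists; a real system would use a trained model.
    SUPPORT_CUES = ("consistent with", "confirms", "replicates", "in agreement with")
    DISPUTE_CUES = ("contradicts", "fails to replicate", "disputes", "in contrast to")

    def classify_citation(statement: str) -> str:
        text = statement.lower()
        if any(cue in text for cue in DISPUTE_CUES):
            return "disputing"
        if any(cue in text for cue in SUPPORT_CUES):
            return "supporting"
        return "mentioning"

    print(classify_citation("Our results are consistent with Smith et al. (2018)."))
    # -> supporting

The point is only to show the shape of the output: every citation gets a label plus its surrounding text, rather than contributing an undifferentiated +1 to a count.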
I just wanted to share that because a lot of the other comments were more abstract and it might help to include something more tangible that's being actively worked on now. There's a lot of interesting work in general in this space.
If you have any thoughts / feedback / questions, feel free to reply here (I check occasionally) or write to me at the email in my profile.
I think it might be a mistake to frame the issue as scientists not doing enough groundbreaking research. Those scientists do exist in greater numbers than before and have the right incentives guiding them. What is happening instead is that the superfluous group is expanding at an ever greater rate.
These are two parallel worlds being discussed as if they were one.
To answer your question, I believe we need to look outside of academia to relieve the pressures on most people so that fewer are tempted to add to the science glut. I'd venture to mention UBI but that's a whole can of worms
Many novel funding approaches have been proposed in an attempt to free scientists from the endless grant -> publish -> grant cycle.
One is to accept only mini grant proposals, say 10 pages at most, and screen these to meet some minimum threshold, say, aiming for the top 20th percentile. Then a random subset of these get funded. This helps diversify the funding somewhat, and hopefully catch more "risky, but interesting" projects.
Another approach has been universal funding. The most basic is "everyone gets X research dollars", and any additional funding would require submitting grants on top of that. More complex schemes propose "everyone gets 2X dollars, but they must donate X to other researchers", which allows some people to accumulate the funds necessary to conduct big, expensive work. But all of these proposals aim to minimize grant writing and review, and hopefully to diversify the kinds of work being done.
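To make the mechanics of that last scheme concrete, here is a toy simulation. The amounts, the number of researchers, and the random donation rule are all invented; in the actual proposals the donations follow peer judgment rather than chance, which is what lets promising people accumulate funds.

    import random

    # Toy version of "everyone gets 2X, must pass X on to a peer".
    def simulate(n_researchers=100, base=50_000, rounds=10, seed=0):
        rng = random.Random(seed)
        funds = [0.0] * n_researchers
        for _ in range(rounds):
            for i in range(n_researchers):
                funds[i] += 2 * base                  # universal allocation
                peer = rng.randrange(n_researchers)   # pick someone else to donate to
                while peer == i:
                    peer = rng.randrange(n_researchers)
                funds[i] -= base
                funds[peer] += base
        return funds

    funds = simulate()
    print(f"largest pot: {max(funds):,.0f}, smallest pot: {min(funds):,.0f}")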
Longer windows of evaluation could also be useful. Big, interesting ideas take time to develop and support, but the competitive academic environment demands quick evaluations, which makes it difficult to sustain long projects. Evaluating a research project after 5 years, instead of 1 or 2, could help alleviate some of the pressure on researchers so they can pursue longer agendas.
Every researcher can come up with their pet theory and solution. Ultimately, I think that all of these approaches, and many others, have merit. But we need to experiment with different policies to see what works.
A related point is that we could improve how good work is bubbled up. Another poster pointed out Allison’s work that led to the Nobel prize, and he claimed it was appreciated by just one group and that helped it get broader notice. How many great works go unnoticed now just because there is no famous author on the paper who has the bully pulpit to command attention?
The funding structures need to be reorganized. Some ideas I think have merit are basing grant awards on a lottery system, or funding individual researchers based on previous research for a limited, nonrenewable period of time.
Grant review panels need to be mixed up dramatically. Currently they consist largely of people who have received grants in the past. One of the biggest predictors of proposal rating is being a co-author of people on the grant panel. I think grant panels need to be based on some lottery system too, like everyone with an ORCID ID gets thrown into a pool somehow, like jury duty.
Universities need to be funded in a way that's separated from grant receipts.
Not sure if it applies to other fields, but from what I observe in CS, many professors start tackling more interesting and risky projects after getting tenure. Even though their students/postdocs still value paper count a lot, the professors themselves care more about producing groundbreaking work.
In my limited experience as a disillusioned PhD student dropout, the issues are in approximate order:
1. Publications in prestigious journals have become a measure, not a way to communicate. This leads to perverse behavior.
Any kind of project that would involve deep work is very scary, unless it offers early low-hanging fruit (easily publishable papers) as intermediate steps. I personally felt I was encouraged to find such fruit without much coherence in what I was doing. Preferably one does not have to learn or study anything challenging, because that makes the fruit potentially much harder to pick.
My proposed solution: do not judge researchers by the length of their publication list when deciding whom to hire or fund. Judge them by a selection of their recent work instead, and perhaps by a presentation of what they are doing now.
2. This is connected to another problem: corrective feedback comes into play quite late, when most of the work has been done and it is very costly to change anything. I feel better results could be had if reviewers entered the stage while the study was still being drafted.
Currently: submit a manuscript to a journal, get it rejected after a round of substantive, critical but correct reviewer comments? The optimal move is not to revise the work in light of the new ideas as suggested, but to find a less prestigious journal likely to accept it with minimal extra work.
3. This has effects on conferences. While their primary purpose has always been presenting one's research and hearing about others', they have turned too much into a platform for advertising your research so that you get cited. The general feeling can be quite one-way, with less knowledge-enhancing communication through scientific discussion. Some conferences are better than others (the amount of genuine curiosity and interest helps), but sometimes I felt people came to deliver their own little advert, maybe as a chore between meeting their personal friends, instead of coming to talk and to both receive and offer meaningful updates.
4. PI-centered networks and the death of the university department. I believe university departments were originally formed because it was thought beneficial and reasonable to have scholars with similar interests nearby. Social networks are probably quite an important part of science, and networking is easier with people who are often in the same building. Research group culture is a fascinating way to have groups of people nearby who don't communicate. To the PI, each unsanctioned contact from their underlings to anyone outside the group is a distraction at best (if it leads to the underling working on publishing something the PI is not involved in) or an unwelcome threat at worst, because it risks changing their plans, whether those concern the projects underlings should be working on, the author order on planned manuscripts, or something else. Presentations to group outsiders are all about signalling importance, prestige and coolness, but there is no research communication within the department that would help with unfinished work.
A solution would involve no longer funding faux institutions within institutions, and paying researchers in a way that encourages them to actually collaborate within their physical departments when it is useful. (Salary paid by university/dept., no project grants.)
5. I suppose science moving toward very large projects involving lots of contributors cannot be helped, but it is incompatible with publication authorship as the currency of value in academia. It leads to authorship being traded as a currency within projects. One is "paid" with official authorship on some papers; sometimes a person can show their magnanimity by bestowing it upon some of the people who participated. Sometimes technical or statistical help does not get acknowledged at all.
IDK what to do about that. Moving from evaluating publication lists to evaluating personal contribution/capability might help. In addition to the big publications that communicate the main findings publicly but leave outsiders to interpret author order like tea leaves, publicly record documentation of what exactly each individual did, in their own words, and let that documentation be what the contributors refer to in their CVs.
I reject the notion that a dominant proportion of research publications are mediocre. Instead, the problems have gotten harder. Harder research questions force careful, yet incremental, research programs.
I would say we should emphasize bibliometric measures of productivity (how much you publish) less and measures of social impact more. But that is easy to say...
1. National research funding agencies and universities need to stop doing their assessments based on publication counting. Of course, publications need to be taken into account somehow, but at the very least these indicators should be weighed by curated rankings of journals.
2. Existing publications can be used as an indicator for how much output is to be expected in the future, but the goals for research output should be kept reasonably low. You cannot control the output of researchers, you can only control which ones you hire and (to some extent) the type of research they conduct.
3. Hiring in academia needs to be based more often on screening candidates by committees of outside experts in the field instead of local staff and people who don't know the area well. Projects also always need to be evaluated by experts on the project's topic. (As strange as this may sound, this is often not the case!) It might even help to prune away irrelevant indicators, e.g. ask candidates to submit only 3-5 of their best publications and completely ignore the rest. And by "completely" I really mean completely. The reason is that if you have two researchers, one with 20 publications of which five are really good, and the other with 40 mediocre publications and not a single good one, then it is very hard for a normal assessment committee to justify taking the first one, but it's always easy to converge on the second one, even if that is exactly the wrong choice.
4. Assessment committees need to be told to evaluate the quality and originality of the research only. If you really want or need a certain output quantity, then make it part of the formal hiring criteria, not part of the scientific evaluation.
5. Increase funding for risky projects and risky individual grants, and eliminate criteria that exclude unusual CVs (e.g. allow a long time after the PhD, people from other areas, people with time spent in business, unusual research proposals). Originality should be one of the highest-ranking criteria. There should also be a focus on versatility and hard skills. Even in the humanities, never hire anyone who says anything disparaging about mathematical methods and statistics, and never promote people who don't know the tools of their trade into positions of power.
Basically, you cannot push scientists into anything. Once someone is hired at the postdoc level, you cannot steer much; micromanagement and constant evaluation are highly counter-productive. You need to hire scientists who do interesting research. Treat them like an investment in a startup: most of them will fail but some of them will succeed. Give them a second chance, maybe even a third one, but not indefinitely many. The hiring policies and processes at universities are often bad. Candidates are not evaluated by experts, there is plenty of favoritism, and boring high producers are favored over interesting researchers who want to build something up, because the local staff doesn't like scientists who "shake things up", and so on. There is a lot of inertia to overcome in academia, and being a good scientist can sometimes even be harmful to your career. Funding agencies need to steer against this.
Edit: Much of what I mention can also be achieved by getting an absolute top researcher in an area, giving him or her an institute or research unit and a ton of money, and letting them do their thing. They know what to do and whom to hire. But it's expensive.
From my experience, most of your ideas have already been implemented in academic procedures, in different ways and as well as they can be.
1. Funding agencies and universities don't make decisions. Only individuals in those institutions do. And I don't know any individuals who admit to doing assessments based on publication counts. They'd be openly mocked for such an archaic way of thinking. The only people who seem to openly obsess over publication counts are Ph.D. students.
2. Agreed, but output can be measured in different ways besides traditional academic publications.
3. It seems like this is already done, through recommendation letters when people apply and external letters during the tenure process. I see the opposite outcome: whenever I've seen someone with fewer but better publications compared against someone with more but weaker publications, the first is always preferred. It's easy to make the case "Person 1 has fewer papers but they're more impactful and they choose riskier problems, an excellent trait" versus "Person 2 has more papers, so they are more productive." Unless, of course, it's not clear that person 1's papers are actually higher quality, in which case the original premise doesn't hold.
4. This is already the primary factor from what I've seen. That's the whole point of job talks, publications listed on CVs, etc. I've never seen a tenure-track hire without people reading some of the papers listed on the CV, and discussing the presented work in the job talk.
5. Half of the funding agencies now say they're looking for risky ideas, the word "transformative" has been a common part of NSF vocabulary for decades. Originality is one of the NIH core criteria factors, as is criteria about the Investigator themselves. How panelists interpret the criteria is more social and political than a technical thing.
Anyway, maybe we just live in different academic worlds, but I don't see as much blatant expression of the biases you are describing, and I feel the major goals of your ideas have already been implemented, which has led us to the current situation. Maybe not everyone buys into them, but as you say there isn't a way to get a bunch of individuals to follow orders, so the only change that's possible is change to process, which has already happened. I don't agree that we need to add additional bureaucracy like rating applicants on "did they disparage mathematics", which I find extremely problematic.
Even the last idea in your edit is already done in every way possible. The MacArthur award, Turing Award, Nobel Prize, various prizes, etc. all give an absolutely top researcher money and prestige, so they can do their thing.
I think the best way forward are social solutions, rather than procedural. Convince minds, rather than add bureaucracy.
We're definitely living in different parts of the world, work in different areas, and are at different universities. Obviously, I wouldn't have written the points if they had been implemented at my university. (I don't want to write in which country and which discipline, since I don't want to be identified. I'm one of very few foreigners with long-term funding in my area; we are a small country with only a few universities, plagued by all of the problems I've mentioned.)
I have to insist about what I wrote about candidates who disparage mathematical methods in the humanities. These people can destroy whole departments if you let them roam freely.
I have the impression that, yes, the papers are getting boring, and peer review is one of the problems, though I see it as a side effect. The main problem, IMO, is research as a profession, where the researcher is just interested in finishing his/her paper and getting it published, to be able to get the next grant and keep researching a topic that he/she is not really passionate about. The comparison with Einstein is therefore unfair. For a big part of researchers, science is just a job.
I just got my first paper accepted, after 8 months and 2 rounds of R&R (please clap): the reviewers and editors try very, very hard to kill any joy, ambition and big idea. Papers are expected to be written with a lot of academic mannerisms. The goal should be to make the paper clear, but in practice it isn't. It's just a style used to decide whether you are part of the clan or not.
TBH, big ideas should not be in a paper, unless you can really prove it. I've read too many papers with grand new theories that hinged on an experiment testing the tiniest of tiny facets. All of these have been forgotten now, even by their creators.
Once you have enough experience, know the field inside out, and have gotten a bit of a name, you can write a book or edit a special edition of a journal in which you may set out your ideas.
The style of academic papers is intended to encourage objectivity I think. All the same I would agree it can indeed obfuscate at times and a more informal style could be both more enjoyable and easier to read.
And this was how papers were written historically. If you go back and read papers from well known researchers from the early 20th century, they can be a delight to read. We have gone down a road of increased formalism and a removal of any character in the name of "objectivity" and "dispassionate impartiality". This is, of course, utter bunk. Researchers are often not objective or impartial about their work, but it's all carefully phrased to be as dry as possible.
If you read some older papers, you have the author interjecting with their thoughts and opinions directly in the first person, and it can make for much more compelling reading. So long as the data presented is objective and accurate, I can't say I have much problem with it. Today we use the "discussion" section for this, but it's just not the same.
Put it this way, I've fallen asleep reading more papers than I care to admit. But some of those older papers were so enjoyable and interesting to read I'd devour them and go looking for more.
Objectivity is of course important, but I think modern publishing has lost something which is also important: the excitement and interest of the authors in their own work.
I occasionally read geography articles from the first half of the 20th century; they are a joy to read, at least until the 50s or 60s. They read like a book: the vocabulary is very accessible (the few words/concepts which do not belong to common language are introduced and defined), and while the writing flows pleasantly like fiction or a documentary, it conveys much useful information.
The recent ones? They're horrible piles of pretentious jargon, jargon which oscillates between hyper-technical and utter-bullshit. They read like those technical marketing fluff pieces. Philosophico-technocratic verbiage all along; when you're done with it (assuming you didn't give up in despair or anger), you haven't learned anything, basically. Anyway, they usually don't bother really describing concrete things any more, they rather talk about fancy social constructs. The synergy of the Promethean dynamic of the actants of transformative innovation of mountainity. Right...
(I made up the example, but from real pieces from my last 2 attempts. I unfortunately can't remember or find the previous one, it was 10 times worse, I wouldn't even have needed to make up this example, I could just have copy-pasted its abstract.)
The difference is like between reading a page-turner, and trying to read the latest ISO standard about development methodology in safety-related domains (perfect for falling asleep before the second page).
I've had some limited experience with academic writing, and overall I've not been happy with it.
As a PhD student, my supervisor would want to take my writing and reword it to sound "more impressive". That was basically adding in all the pretentious jargon you are referring to. I don't think it aids at all in conveying facts in a clear and simple manner. It's completely unnecessary. More often than not it is deliberately aimed to be non-committal and ambiguous so it ends up saying nothing of any substance. That's completely intentional. There seems to be a school of thought that you should never say you aren't sure or don't understand something, and so you couch that in soft language rather than being direct. I utterly despise it, and regard it as a form of intellectual dishonesty.
Later, as a scientific software developer, I tried to write and submit a technical paper for the software I was developing at the time. I spent weeks writing in detail about what it did, along with lots of figures demonstrating its performance and behaviour. But again, after my supervisor was done "revising" it, it became a lot of pretentious waffle that said almost nothing--all of the detail and simple description was reworded ambiguously or removed entirely. If you wanted to use the software, reading the paper would tell you almost nothing of importance. It seems to me that the primary purpose of papers--to be read and to inform others' work--has not been served for several decades now.
In the end the technical paper mentioned above was rejected, and the reason was utterly ridiculous. The software used a 5D data model inherited as part of its fundamental design from earlier software, deliberately done for interoperability. The reviewer was from a competitor group who had a thing about the software being unusable unless it supported arbitrary numbers of dimensions. The paper was rejected as being "controversial" as a result, despite the fact that it was an evolution of something that had been in production use for over 15 years in institutions the world over, and worked perfectly as intended. That ultimately led to losing funding and being made redundant. Such is the fickle state of academia, where you aren't judged on the quality of your work but are subject to arbitrary and capricious actions like this. And peer-reviewed academic publishing being used as an assessment metric for promotion and funding has led to what it is today: a bizarre meta-game played by academics which has little to do with high-quality research or high-quality publications.
I'm afraid I became sorely disillusioned with the whole thing.
Thank you! Your comment and interest are quite heartwarming!
My research is about household debt: I try to understand why some countries have more private indebtedness than others. Growing up during the American crisis of 2008, and the subsequent global crisis, made me realize that debt can be dangerous, and I wanted to understand what pushes people to borrow money.
Interesting intro. It's even translated into French I noticed. So northern Europe is borrowing three times more (per capita) than the US? That goes against everything I know.
One question: did you separate out consumer credit from mortgages? The amounts are so high that it seems to me they include mortgages.
Because in that case, is it not simply that northern Europe is wealthier and has more expensive houses and more home ownership?
Those mortgages are backed by houses and are a sign of good financial practices, allowing a gradual increase of wealth, whereas consumer credit is backed by nothing and is generally a sign of bad financial practices.
Yes, in the full paper I separated consumer credit from mortgages. In the intro I linked, indeed they are aggregated.
Long story short: in northern and continental Europe consumer credit is almost non-existent. Very tiny numbers. Notable exception: the UK. But more mortgages don't necessarily mean more home ownership eh.
"Those mortgages are backed by houses, and are a sign of good financial practices, allowing gradual increase of wealth. Where consumptive credit is backed by nothing and is generally a sign of bad financial practices."
That's almost my conclusion as well, but I put more emphasis on what social policies encourage you to do, more than on good or bad practices. In the US they really encourage going for consumer credit, also their bankruptcy legislation is more lenient and encouraging a fresh start; while in Europe we are much more draconian with debt.
Well, it's unsecured debt, but a credit card is informally backed by your future income. You might say that all personal debt is backed primarily by your future income.
Even mortgages are backed primarily by future income and secondarily by the house, and this is why the bank wants to know how much money you make.
Whether or not credit card debt results in future wealth depends on what you buy.
I'm not an expert in finance, so don't hold it against me if I'm wrong, but does "backed by" not rather mean that the bank can have your house if you can't make payments?
This is in opposition to credit card debt, where they have no comeback other than the courts and collection agencies.
I was under the impression that this was the reason consumer loans are generally capped far, far lower than mortgages, and have triple the interest rates.
While it is true that I could put my university fees on my credit card, it is very expensive and very rare. I would wager that the majority of these loans are for cars. Which, I agree, can create wealth, but it would have been 'better practice' to buy a cheaper car and switch once that anticipated wealth has been created.
Moreover, those cars explain only part of the loans. The rest is 'bigger TVs' or the like. And those do not even create wealth.
It's ambiguous. I'm pointing out some grey areas. When speaking loosely, we can talk about what a loan is "backed by" meaning how people expect to pay it back. Or you could talk about what property secures the loan, legally.
Regarding investment, there are startups that got off the ground by running up credit card debt. It's not what I would do, but it's more like investment than consumption.
Or you could pay for your car to be repaired using a credit card, and if you need the car to get to work, well...
Congratulations on the paper acceptance, but joy is very optional in research. Joy is emotional, and it clobbers logical insight. Peer review is an attempt to guarantee steady, firm progress. It is a matter of taste --- do you want progress, or do you just want to be comfortable? If you want joy, better to be an SF writer... Still, strong results with good execution are allowed to enjoy some freedom.
Actually, there is a good opportunity to go full-on joy mode after acceptance --- the presentation at the conference that follows. You can say as many sketchy things as you want (with caveats) and crave attention. So it is just this basic point --- there is no need to do this in a paper.
Imagine if research was like GitHub, but deduplicated: a giant graph database of unique facts, hypotheses, assumptions, proofs and experiment results, where anyone can contribute, from fixing a typo to publishing an experiment dataset.
You would reference previous knowledge without having to think about citing authors, because the citations would be generated automatically from the graph.
Then we would have hundreds of "knowledge verifiers", like we have bots scanning github, and ultimately bots deriving new knowledge and contributing to the graph.
The whole world would be building a shared database of knowledge, without any duplicated effort.
Most of the individual facts in this database would probably be boring (like the trivial facts in logic or mathematics), but once in a while, an interesting and important fact would be found.
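As a minimal sketch of what such a deduplicated graph could look like (the schema, node kinds, and example statements here are entirely made up):

    import hashlib

    # Nodes are unique statements (facts, hypotheses, experiment results);
    # edges record what each contribution builds on, so "citations" fall out
    # of the graph structure instead of being maintained by hand.
    graph = {"nodes": {}, "edges": []}

    def add_statement(kind: str, text: str, builds_on=()):
        node_id = hashlib.sha256(text.encode()).hexdigest()[:12]  # content hash = dedup key
        graph["nodes"].setdefault(node_id, {"kind": kind, "text": text})
        for target in builds_on:
            graph["edges"].append((node_id, "builds_on", target))
        return node_id

    hyp = add_statement("hypothesis", "Compound X inhibits enzyme Y")
    exp = add_statement("experiment", "In vitro assay, N=24, IC50 = 3 uM", builds_on=[hyp])
    print(graph["edges"])  # [('<experiment id>', 'builds_on', '<hypothesis id>')]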
This is an interesting idea and would work for something like Math or maybe theoretical physics, but once you get to messier fields like Biology it would be mired in hopeless confusion. Every experiment has so many varied parameters and restrictions due to things like ethics committees that standardizing it is nigh impossible. This is not even going down to fields like psychology.
So, Wikipedia, except with some sort of syntactical structure that allowed for formal evaluation and analysis of arguments...we could call it a "compiler" for "compiling" sets of rules, or "libraries"
I think peer review is an answer, perhaps not a great one, to the question of how we can trust specialist science. In a world where the renaissance-man role is no longer feasible, to move science forward we must narrow our focus to particular sub-fields and problems within those fields. One issue is that it becomes much harder for the general science community to verify your results, which is where peer review attempts to help, by forcing you to get other experts in your sub-field to review your work before it can be stamped as trusted.
Does it work? Kind of. I've personally seen papers in reputable journals that, while not fraudulent, are pretty misleading. At the same time, I'm yet to see a workable alternative that fixes the trust issue.
The question of boringness I think is pretty field dependent. In the ML community I've often seen papers almost be rejected by the peer review system because they're NOT exciting enough, despite them being pretty influential (a great example is AdamW [1]).
Honestly, my assessment of peer review is that not enough trainees (read: grad students/postdocs) are doing reviewing. Trainees are often better acquainted with the details of methods, but also have a more open mind to accept a finding that goes a little against the grain. Additionally, they're often faster and do a better job, since they have more time on their hands than senior researchers. There have been some efforts to fix this, but so far mostly isolated to specific fields.
One big issue is that grants are short-term (meaning researchers spend a stupid fraction of their time writing grant applications) and incredibly competitive. Trying a slightly more ambitious project and failing means the end of your research career. So researchers do the logical thing, which is the kind of boring incremental research the article complains about.
Combine this with the focus on bibliometrics in evaluating research output, and we have the mess we have today.
I read a good study that suggested that so much time and effort (and hence money) is wasted on writing proposals for non-funded grants that we should just assign a sizable fraction of the money by lottery instead, without requiring lengthy proposals. Everyone knows you use the money for other stuff not specific to the grant regardless. The opportunity cost of writing all the rejected proposals would be saved and could be spent on actual science, even if some of what gets funded isn't the top-notch work that would otherwise have been selected (and that's assuming the grant proposal system actually selects the work most likely to be groundbreaking).
Yeah I agree, I would like to see some percentage be lottery based. I think you should still need to write a proposal, but if the proposal isn't selected then it goes in the lottery pool. There are two reasons to still write the proposal: helps organize the researcher's thoughts and shows the researcher is serious.
Doug Altman actually wrote a paper about this issue back in 1994 [1]. A great quote: "We need less research, better research, and research done for the right reasons."
Nearly every "exciting" paper I've encountered in my career (spanning multiple areas of biology and computer science) has turned out to be much less exciting when subjected to critical analysis by a team of grad students (journal club). I've found that many boring papers stand up better.
I can't help but think that a universal basic income, at least in societies that can afford it, would dramatically change the entrenched misalignment that causes this.
The signal/noise issue is real but, as others have noted, not the most concerning one. The main structural problem IMO is that all the structures that support scientific research (academic evaluation, university admin, granting agencies, the publishing industry, etc.) are reinforcing untenable illusions. A few benefit immensely from this; many others have an incentive to change, yet are cornered by their economic circumstances into playing along.
The same can be said about software development: almost all software developed is boring, unnecessary, and sometimes wrong. That said, we still know that developing software is an important activity.
The people who fund research look too hard for ROI. The funds that get invested into research buy hours of time of people doing research and maybe gear. They cannot buy discoveries. Until this mismatch is addressed broadly, we will continue to have both a reproducibility crisis and ‘boring’ papers.
There is a lot of caution in research funding. Some research is considered "safe", other research risky. In a properly balanced portfolio, perhaps 80% of funding should go to safe work and the remaining 20% to riskier but potentially high-reward studies. However, this isn't how funding agencies usually operate.
At least in the context of the U.S. government, funding agencies are terrified of being seen as "wasting" funding, because that is a sure way to have funding cut by Congress. The "Golden Fleece" awards, a senator's attack on perceived frivolous spending, were routinely targeted at specific research funded by the NSF and helped make the agency afraid of being attacked by Congress.
I believe this is close to the truth. Moving one step further upstream, the emphasis on high ROI stems from an overall lack of availability of funding. Since there isn't enough money to go around, it becomes necessary to choose some more selective metric to pare down the pool of grant applicants.
Do we just have too many people doing “science”? Clearly people are getting funding to crank out nonsense, boring papers, studies that can’t be replicated, etc.
Only if you believe that as a society we have solved all of the interesting problems. Since this obviously is not true, I would instead focus on the way we evaluate science to improve it.
I’m mostly suggesting the money isn’t well distributed, and I’m pretty sure that’s due to the misaligned incentives. If we incentivized outcomes we care about, I suspect more money would come naturally from the results.
> The peer-reviewed research papers allows you to “measure” productivity. How many papers in top-tier venues did research X produce? And that is why it grew so strong.
If I were king, my rule would be to look at the number of citations per paper and years of citation, rather than total publication count. I think that would solve a number of issues, although I'm sure that it would eventually be gamed as well.
I should mention though: It turns out that people have been bitching about this for decades. For example: I was at Bell Labs in the 90s, and there was regular lunchtime discussion about the Least Publishable Increment (LPI). Everyone had some example about how the LPI had decreased to near zero, and then a few wags went on to show how the LPI had in fact gone negative in certain subfields, not least because reviewers were overloaded and couldn't keep up.
Hopefully, some of this will be self-correcting: Publications are no longer the money-making activity they used to be, so resources will begin to dry up. Eventually, pubs are going to start rejecting valid papers for being too incremental.
Most people do actually judge by citations rather than raw pubs. It's an open secret that you can shove out papers with enough effort (acceptance at any venue, regardless of the quality of the work, is probabilistic). Getting cites is harder (other than self-cites, which are a separate issue).
When I look up a researcher, the first thing I look at is their number of citations and i10 index, then I look through their top papers (the ones that have been cited a lot). As far as I know, hiring committees also care about these things.
Of course it isn't the only thing that matters, or the most important, but it's much more useful than the raw number of papers published.
They can be gamed (albeit with a bit more effort), they vary a ton between fields and subfields, and citation rates often reflect the "prestige" of the lab/authors and the trendiness of the topic, rather than the intrinsic "merit" of a paper.
I don't think there's any good automatic proxy for research quality.
Garbage is a strong word. I think there's definitely a moderate to strong correlation between h-index and quality of researcher. The downside with the h-index is that it also measures your ability to play the 'game' (i.e network, cite your peers, etc.), but I really see no way around that. You can also argue that this is an important skill to have. If you produce the most amazing research in the world and people don't cite you, that could point toward a number of red flags (e.g you're an asshole, your work isn't clear to understand, you don't appropriately cite your peers, etc.)
This is what's done in science evaluation: the h-index, for instance, is built from a combination of publication and citation counts. Similarly, journal prestige is quantified using the Journal Impact Factor, which reflects the average number of citations a paper in the journal receives.
As you speculate, these can also be gamed. Authors will cite themselves more to inflate their citation counts, or form citation cartels with other authors to cite each other's work. Similarly, journals have been known to coerce authors into citing other works in the venue in order to inflate their impact factor. Beyond these, there are issues with comparing citations across disciplines, article types, and other contexts.
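For concreteness, the h-index itself is only a few lines of arithmetic, which is part of why it is so easy to game. The citation counts below are invented; the second call shows how a handful of extra (self-)citations on a single mid-ranked paper moves the index.

    def h_index(citations):
        # Largest h such that at least h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    papers = [25, 8, 5, 3, 3, 1, 0]
    print(h_index(papers))   # 3
    papers[3] += 10          # ten extra citations on one paper...
    print(h_index(papers))   # 4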
> Eventually, pubs are going to start rejecting valid papers for being too incremental.
Many of the most prestigious journals, such as Science, Nature, and Cell, already do this. Newer mega-journals, like PLoS, have had explicitly the opposite policy, stating that they accept anything that is "sound science", no matter the size of its contribution; they have, however, become more selective over time.
The good papers are still out there but it's hard to slog through the mediocre or downright worthless stuff if you're not an expert in the field. That's a real problem, I think, even venturing into an adjacent field takes a lot of reading and sorting through the chaff unless you have someone to guide you. And number of citations is not a great metric to figure out what's worth reading.
By its nature, science and R&D is very incremental though. I have some sympathy for the PhD student wanting to get something published (or the prof wanting to get the PhD student to publish) and 5 or even 10 years is short for truly groundbreaking results.
Irritating to see this Atlantic "study" being accepted as valid and written about. This happened with the "bullshit jobs" meme where it stirred up enough controversy that it became an accepted fact for many people, even though the original research was highly questionable and didn't actually prove much of anything
I wouldn't call this validation. I think it's being discussed positively because it articulates what many of us feel to be true: peer-review is a broken mess. Same thing about bullshit jobs.
If you have a better system, you should suggest it. Peer review (and any process) may be imperfect. But I trust something written in a peer reviewed journal more than I trust pretty much anything else I read these days.
I am currently pursuing a second master's degree (in management), some of my personal experience in academia:
1) Publish or perish is a real thing. Citation ratings and measurement affect many of the academics I have encountered. This contributes to boring papers.
2) Mentioning the replication crisis in social science gets a lot of the social science academics observably upset and defensive. They have to take care of 1)!
Compare this to the exemplary stance in CS, as demonstrated by researchers criticizing Google for not backing magic AI claims. [ https://www.nature.com/articles/s41586-020-2766-y ]
This vigorous scientific stance is almost impossible to imagine in social science outside of Critical Management Studies.
3) I have enjoyed my time spent with CS academics a lot more, luckily have some interaction even now - the ones I have spoken to seem to lack the self-criticism blind spots the social scientists exhibited. The CS group seems to have a lot more fun with their research.
It was also interesting to observe that for term papers, many CS professors wanted to see a report with working software and a well-thought-out story of what challenges one encountered and how they were overcome. Not one paper was returned because of improper styling or citation.
They appeared to enjoy the battle stories of trouble-shooting and finding solutions to assignment challenges. They even read the sources I linked to, commenting on what they thought was a good find.
From that experience (and others), CS sure feels like more of a meritocracy.
In opposition, some social science professors were obsessed with proper APA or Harvard styling and citing "the right thinkers". Circular citation is a real thing. Perhaps since they can't prove anything, establishing credibility via cliques, mutual citation and signaling may be their academic survival strategy.
It does feel like management and CS are orthogonal, and this has motivated me to direct my research towards a hard realism approach to facilitating communication between developers and value driven management.
I.e. I have observed that even when a manager or leader wants to hire developers, social-science-damaged managers and HR will actively sabotage these efforts because they are still, fundamentally, driven by Taylorist, control-fetish concepts (which they refuse to accept). These social science victims seem to wish to push their personal struggle onto developers: "I suffered through school, you must show a diploma as well." Some even demand transcripts!
When I consulted a company that was complaining about how hard it was to recruit developers, I looked at their application process. They demanded college transcripts even before the review of applicants began. I hope you are laughing with me at the entitled absurdity of this. Oh you want a personal letter as well?
Bitter HR person, do you fail to understand that you are competing in your recruitment against multiple actors actively scraping through code repositories and blogs to find and contact devs? Or does accepting that a developer is incomparably more competitive on the job market than you and has multiple offers of employment at any time just hurt too much?
One of my favorite quotes from a researcher (well cited) about managing developers includes a rant on how creative, industry competitive (can get another job) and financially stable developers are "hard to manage".
This may be how many incompetent managers feel, deeply.
If I can facilitate getting these emotionally insecure individuals out of recruitment and management, assisting developers and managers that want to produce value to find each other, it would be in opposition to many well peer-reviewed publications. Boring publications.
Attitudes towards the replication crisis seem to me to vary between researchers in any given social science discipline. Those (careless?) empirical researchers whose careers are being retrospectively ripped apart are inevitably horrified (and coming up with a huge range of arguments, both reasonable and unreasonable, as to why they shouldn't be), while theorists are mostly happy and younger empiricists are taking up the reproducibility/better-science challenge with some enthusiasm. Likewise, attitudes to perfect citation vary wildly. I don't care as long as it's vaguely consistent, which is a common (but not universal) position in my sub-discipline.
I'm not sure how much of your experiences are specifically a feature of management research rather than the social sciences in general. I spent a bit of time studying management (and being a manager, in pre-academic times), though now I'm a computational political communication academic. Management research is seen by the rest of us as a bit weird - a bit of an outsider discipline, with different attitudes and surprisingly little scholarly overlap. I wouldn't expect that many political scientists or sociologists to be Taylorists, for example.
Publish or perish, boring papers, and problematic academic management on the other hand - that is very familiar.
Popular management research does seem like an extra easy target... I have had exposure to several social science departments across continents, not just management, and those members that could be described as "mainstream" are somewhat lacking in reflexivity, with some exceptions. (When they are critical, they stand out extra bright, like Alvesson, Wilmott, Rowlinson)
But about management research. When a leading critical figure in the field summarizes it as "Whilst there are plenty of theories in management, there are no laws" [P. Griseri, as quoted by P. Morris in "Reconstructing Project Management"], one could wonder whether that state of affairs could be explained by the deeply ingrained subservience of management research to industry [as described by Alvesson and other Critical Management Studies (CMS) practitioners].
As for Taylorism, there may not be self-identification in the field per se, but Taylorism, Puritanism, or any other -ism linked to a fundamental desire for subservience, control, and performance measurement as sound management practice could be described as running deep. (A gem of a paper is "The impact of Puritan ideology on aspects of project management" by Whitty & Schultz.)
I have been doing some preliminary experiments with applying CMS to recruitment - a short summary could be "treating developers with respect by providing necessary information and establishing transparency in the job announcement increases responses dramatically. Who knew!"
Another interest of mine lies in sociophysics, which I have a huge reading list to catch up on. Did you have any sociophysics topics that attracted your interest in relation to political communication?
I had to look up the word 'sociophysics', which isn't used in my part of the discipline, so probably not! There are a lot of physics-derived tools that are coming into increasing use, though. Half of social network analysis seems to come from sociology, and the other half (the highly computational part, in the main) from physics. I've done bits of network analysis (one of my better-received recent papers was applying network partitioning to the problem of identifying the edges of news 'stories' in sets of articles) but I do most of my stuff in the text analysis space where NLP/topic modelling/matrix factorisation stuff is making most of the running, and that's mostly drawing from CS rather than physics.
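For the curious, here is a toy sketch of that kind of "story detection" pipeline (not the actual method from the paper; the example articles, the similarity threshold, and the choice of community algorithm are all placeholders):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    articles = [
        "central bank raises interest rates to fight inflation",
        "inflation pressures push central bank toward rate hike",
        "local team wins championship after dramatic final",
        "championship final decided in extra time",
    ]

    # Build a similarity graph over articles, then partition it;
    # each community is treated as one "story".
    sims = cosine_similarity(TfidfVectorizer().fit_transform(articles))
    G = nx.Graph()
    G.add_nodes_from(range(len(articles)))
    for i in range(len(articles)):
        for j in range(i + 1, len(articles)):
            if sims[i, j] > 0.1:                 # arbitrary threshold
                G.add_edge(i, j, weight=sims[i, j])

    stories = greedy_modularity_communities(G, weight="weight")
    print([sorted(c) for c in stories])          # e.g. [[0, 1], [2, 3]]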
Sociophysics has not been met with a wide welcome in management studies, perhaps due to its approach to proof. It requires a lot more work than popular hand-waving.
It will be interesting to see whether physics-derived tools and approaches will lead to improvement in social studies in regards to current critical concerns.
It is unfortunately (or fortunately, depending on how you look at it) a far-off area for me, as just applying viewpoints outside the mainstream to experimental research in management offers a lot of low-hanging fruit with potentially significant impact.
That could be one of the benefits of the humanities aspect of management studies, it is well justified (though not popular) to take a humanitarian (and thus biased) stance towards improving the well-being of employees.
My doctoral degree has required training in both Computer Science and Social Science, and I regularly interact with both communities. Drawing on this, I feel that this comment is unnecessarily partisan and divisive.
> 2) Mentioning the replication crisis in social science gets a lot of the social science academics observably upset and defensive.
The replication crisis is widely discussed in the social sciences. Nearly every major social science journal will have at least one, and likely many, editorials and articles on the crisis tailored to its field. Maybe this is specifically a Management Studies thing? The field is relatively small and insular, and so cannot be generalized to the wider social sciences. And are your experiences drawn from the community as a whole, or from the faculty in your department?
> 3) I have enjoyed my time spent with CS academics a lot more, luckily have some interaction even now - the ones I have spoken to seem to lack the ego-driven blind spots the social scientists exhibit. The CS group seems to have a lot more fun with their research.
Maybe this is just the local community of wherever you studied? I've met equal parts ego-driven and chill people in both CS and Social Science. A major concept in most social science fields is sampling—I'd urge you to consider how representative the people you are talking to actually are, and how your sample might be biased from the global population. My prior is that there would be little difference in personality, in aggregate, between the fields.
> I.e. I have observed that even when a manager or leader wants to hire developers, social science damaged managers and HR will actively sabotage these efforts because they are still, fundamentally, driven by Taylorist, control fetish concepts (which they refuse to accept)
I mean, that's one possibility. The other is that hiring people is difficult and expensive, and that credentials are a quick and useful, if flawed, method of sorting through the pile.
And are most HR people really trained in Management? I honestly don't know, but I doubt that the typical Psychology or Sociology major, for instance, is going to know of, remember, or consider using Taylorist management theory.
It definitely seems that you enjoyed your time in CS more than your time in Management. That's great! CS is an important field with lots of cool people. But generalizing this preference into a wider philosophy that privileges CS at the expense of (all?) social sciences is silly.
Regarding the replication crisis, can I ask why we don't yet have a central body to which scientists report their intended experimental design, sample size and research hypothesis before the experiment begins? Then we can eliminate publication bias due to null results not being published which is a big part of the replication crisis. The meta-analyses only survey those studies that reported this ahead of time and the scope of possible publication bias can be quantified.
There are registration databases, but there are issues.
Who trawls the (now huge) database to find the registered studies which never reported? In any case, a missing report could indicate a file-drawer publication bias, or simply that the funding disappeared or a researcher decided not to proceed.
How do you move from very spotty preregistration to compulsory preregistration? It would probably take a coordinated push from all the major funders. And they haven't successfully moved to fully open access publication in over a decade of trying, despite the obvious financial gain to the non-publisher world of winning at that coordination game.
How do you handle work on the margins of registerability? Not all scientific work is testing hypotheses, some is exploratory. Should a quantitative description be ruled out if it wasn't registered? If so, how are reasonable hypotheses for future work established? If not, you have to be extra vigilant to stop people smuggling phrases that imply confirmation into nominally exploratory work.
And the other problem with the replication crisis is that even decently planned, pre-registered, straightforward studies can fail to replicate. Sometimes for understandable reasons (the significance was actually by chance, some unanticipated confounder interfered) and sometimes for thoroughly blameworthy ones (rather than p-hacking you can just fudge the data, or make 'accidental' programming errors).
So I'm all in favour of pre-registration, but it's not a magic bullet.
We both probably agree that parapsychology hasn't demonstrated that psychic phenomena are real, and yet its various meta-analyses purport to show that they are. The experimental design of the underlying studies is quite good, so it must be publication bias. I couldn't think of a better demonstration of why preregistration is a critical necessity, even if it is difficult to get going.
All of your questions are very good ones however, and I agree with your conclusion that it isn't going to be a silver bullet.
> Who trawls the (now huge) database
Only the authors of the meta-analyses, as part of their lit review. It's simply a criterion for inclusion in the meta-analysis.
> How do you move from very spotty preregistration to compulsory preregistration?
Eventually, if meta-analyses only look at studies that are preregistered, this would boost citation counts for preregistered studies and therefore incentivize scientists to preregister, since their careers are tied to citation counts. How to arrive at this end state is a bit of a chicken-and-egg problem and requires a cultural shift, so it'll be very difficult. But once the end state is reached it should be sticky without needing to be compulsory, since career incentives are aligned.
> How do you handle work on the margins of registerability?
Preregistration is mostly helpful for research questions that are amenable to statistical meta-analysis, i.e. those questions where the hypotheses tested across papers are sufficiently similar that the quantitative results can be statistically combined and both statistical and clinical significance can be evaluated. I think preregistration is mostly needed for these questions, although it's probably helpful elsewhere too.
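For what it's worth, the statistical combination referred to here is simple in the fixed-effect case: weight each study by the inverse of its squared standard error and pool. The study numbers below are invented; the preregistration flag just illustrates using registration status as an inclusion criterion.

    import math

    studies = [
        {"effect": 0.30, "se": 0.10, "preregistered": True},
        {"effect": 0.10, "se": 0.15, "preregistered": True},
        {"effect": 0.80, "se": 0.20, "preregistered": False},   # excluded from pooling
    ]

    included = [s for s in studies if s["preregistered"]]
    weights = [1 / s["se"] ** 2 for s in included]
    pooled = sum(w * s["effect"] for w, s in zip(weights, included)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    print(f"pooled effect = {pooled:.3f}, 95% CI +/- {1.96 * pooled_se:.3f}")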
These kinds of things exist, though the rate that they are adopted varies widely.
In Psychology, for instance, there was discussion back in 2015 about pre-registration in the field [1]. Since then, tools and repositories have been created to help facilitate pre-registration [2].
But again, they are not universally adopted. Some of this is probably generational—academia is quite conservative, and doesn't change quickly. A new generation will likely use these tools more. Other times, fields are more insular than others, and so such tools will take some time to diffuse into their community. In the meantime though, things have improved, even if only somewhat, across the sciences.
How are there so many papers if papers are so long and complicated these days and acceptance rates are so low? Econ papers, for example, have tons of data and statistical analysis, are often 50+ pages, and have multiple authors because there is so much data to crunch and so many statistics involved. The era of the 5-10 page paper written by a single person that makes an interesting observation or novel insight is over.
Surprised to see no mention of Thomas Kuhn or The Structure of Scientific Revolutions in the blogpost or in the discussion here. Though briefly looking at the linked papers, he's cited by Thurner et al. and directly discussed by Bhattacharya and Packalen. His ideas on incremental science seem pretty appropriate.
I find the same failed incentive scheme in industry. When a company gets big enough (tens of thousands of employees), taking risks to achieve something new and interesting doesn't pay off. It's more reliable to just be present and focus on writing about yourself during performance review time.
I think it's a real thing and has the same nature as the force that keeps a water droplet together: surface tension prevents anyone from sticking out and pushes everyone to be like the others.