Slowed canonical progress in large fields of science (pnas.org)
178 points by phreeza on Oct 10, 2021 | hide | past | favorite | 72 comments



I've also found very similar issues with extremely "data driven" organizations that live and die by A/B test performance. It's not that there is anything fundamentally wrong with the science behind A/B testing, it's just that individuals are incentivized to run tests on things that are easily measured. Things where results will take a long time to show, or things that may be initially disruptive but then beneficial, are discounted.

I see this theme in many, many areas: business, sports, politics, academia, etc. When we have tons of data and a desire to make things as "objective" as possible, it's easy to get stuck in homogeneous "local maxima" because we just grade by the things that are easiest to measure.
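For concreteness, the machinery behind a typical A/B verdict is usually just a two-proportion z-test. A minimal sketch (the conversion counts below are invented illustrative numbers, not data from any real test):

```python
from math import erfc, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))                      # two-sided tail probability

# an easily measured, short-horizon metric: clicks this week
p = two_proportion_z(200, 10000, 240, 10000)
```

Note what the test can and cannot see: it measures only the metric you fed it, over the window you ran it, which is exactly the bias described above.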


I think this is categorically different. What you are talking about is a bias to work on things that are measurable, followed by Goodhart's law; the linked paper is more like "if you throw money at a problem and increase participation in the field, it becomes more difficult to separate wheat from chaff due to sheer volume".


My point is that researchers are "graded" on things that are easily measured, things like publication quantity and "citation impact factor", which leads to a sort of monoculture in academia.


That is at best tangentially related to the issue in the OP.


Yup, it's like sorting through YouTube's 7 trillion 'How to make an omelette' videos. You know there are great ones in there, but it's a crapshoot what gets found.

More awareness is needed of the explore-exploit tradeoff.
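The explore-exploit tradeoff has a standard formalization as a multi-armed bandit. A minimal epsilon-greedy sketch (the arm payoffs and epsilon here are made-up illustrative values):

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=5000, seed=0):
    """Pull arms by estimated value, exploring at rate epsilon."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                     # explore: random arm
        else:
            arm = estimates.index(max(estimates))      # exploit: best so far
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts

# with these settings, the best arm (index 2) ends up pulled far more often
pulls = epsilon_greedy([0.2, 0.5, 0.8])
```

Pure exploitation is the YouTube-ranking failure mode: whatever got found early keeps getting served; the epsilon of forced exploration is what lets buried good content surface.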


From the abstract, I see OP's point applying directly. Simplistic metrics (number of citations, for instance) applied to papers push people toward "easy" decisions (reading the "best" papers first), and they don't get around to discussing the more innovative work.

I see the parallel with people too focused on data and easily measurable properties, missing "riskier" work that would rely on subjective evaluation or less clear selection criteria.


A good rule of thumb I use: if you need a p-value to tell you your result is significant, there's basically no chance you have found something revolutionary.

Real breakthroughs are... obvious, both qualitatively and quantitatively.


It should be easy to find counterexamples, look at experimental physics.

The measurement of the Higgs was not a small, insignificant result, but it didn't go from unexpected to obvious. It was expected yet honestly unknown, until new instruments and fancy statistics could just barely push it over the significance line. It was in a sense a disappointment, but arguably the last big breakthrough for the standard model.

Look at gravitational waves. Suddenly we have a whole new force we can measure the universe with! This one breakthrough opened a whole new field of study. And yet it's at the edge of what we can measure, and even when the data looks consistent with the explanation, it's far from an obvious thing.


I should point out that many fields have commonly used p ≤ .05 as a "significance line".

The significance line in particle physics is generally 5-sigma, or p ≤ 3x10⁻⁷. The Higgs is now at p ≤ 1x10⁻⁹. I think that meets GP's criterion of obviousness, at least in spirit.

Also -- you can look at the plots in the Higgs discovery papers and go "oh yeah, there's definitely a bump there".
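The sigma-to-p conversion quoted above follows from the Gaussian tail. A quick check, using the one-sided convention common in particle physics:

```python
from math import erfc, sqrt

def sigma_to_p(z):
    """One-sided Gaussian tail probability for a z-sigma excess."""
    return 0.5 * erfc(z / sqrt(2))

# 5 sigma, the particle-physics discovery threshold: about 2.9e-7, i.e. p < 3e-7
p5 = sigma_to_p(5)
```

Compare p ≤ .05, which corresponds to a mere ~1.6 sigma one-sided.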


Yeah exactly. Once identified, the measurements and graphs made it blindingly obvious that they'd found it.


Both of those are things that are not revolutionary, but rather logical extensions of what the established models already predict. The Higgs boson was a direct confirmation of what is literally called the "standard model" of particle physics, and gravitational waves follow fairly directly from orbital decay + conservation of energy. Actually detecting them is a fairly impressive feat of engineering, but scientifically it's "yep, the thing we figured was probably true turned out to be true".


Related to that, if (in business) you need detailed quantitative analysis to tell you which of two paths to pursue, it probably doesn’t much matter either way. And extrapolations are a dangerous business anyway.


Hm... both Tesla and SpaceX took about 10 years to be "obvious" breakthroughs.


And neither of them ever needed a P-test to show it.


I was working at an SV company and one of my peers told the PMs that their A/B test design was terrible and no meaningful conclusions could be drawn from it. Naturally he was ignored. And he is a physics PhD who has worked at Los Alamos, so I think he could fairly be called an expert in experiment design.


The flip side is that his standards could be too high for a practically useful test; the standards for searching for something like ground truth and the standards for searching for some more money are different. In the former, being right for the wrong reasons is still being wrong; in the latter, no one cares why you were right.


Smaldino & McElreath (2016) -- The natural selection of bad science

https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.1603...

> Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. [...] Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. [...] To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
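The dynamic the abstract describes can be illustrated with a toy simulation. This is not the paper's actual model; every parameter below is invented. The point is only the mechanism: payoff tracks publication count, low-effort methods yield more papers, and copying successful labs drags average methodological effort down.

```python
import random

def simulate(n_labs=50, generations=500, seed=1):
    """Toy selection dynamics: each generation, one lab copies the
    methods of a publication-count-weighted 'winner' (plus noise)."""
    rng = random.Random(seed)
    efforts = [rng.uniform(0.1, 1.0) for _ in range(n_labs)]
    for _ in range(generations):
        # expected publications: lower effort -> more (sloppier) papers
        payoffs = [1.0 / e for e in efforts]
        winner = rng.choices(range(n_labs), weights=payoffs)[0]
        loser = rng.randrange(n_labs)
        efforts[loser] = min(1.0, max(0.1, efforts[winner] + rng.gauss(0, 0.02)))
    return sum(efforts) / n_labs

# mean effort drifts well below the initial average of ~0.55
mean_effort = simulate()
```

Even this crude sketch reproduces the qualitative claim: no lab needs to intend to do bad science; selection on output alone erodes effort.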


I was excited to read this, but was disappointed upon thinking further about the results. Is progress measured by turnover in citations? It strikes me that the idea here is linked to the great-person concepts in history.

It reminds me of a retrospective I recently read by T. Lomo, who is widely credited with the discovery of LTP. He said that many people ask why his original paper doesn't cite Hebb, who is usually described as the originator of plasticity theory. His response is that Hebb wrote down what everyone already knew, and that no one in the field found it worth citing for that reason. Of course, modern neuroscience and machine learning people cite Hebb's work regularly, but not Lomo and Bliss. Should we then infer that the research dollars given to Hebb were well spent and those that supported the experiments were not?

An alternative view of progress would say that ideas are like accretion turning into a planet: a bunch of seemingly insignificant papers gravitate together and then someone says, "wait, there's a planet here!" But they didn't create the planet, and probably many of the individual rocks already understood exactly what was going on. In the old days the gatekeepers allowed the "this is a planet!" paper to gain influence/significance, but now 50 different people say the same thing and none of them win the credit battle...


Even before Hebb, Cajal and Tanzi had suggested the same idea in the 1800s. Hebb only wrote down a vague hypothesis, but what matters is that he did it much earlier than Lomo, hence the disproportionate credit he receives. References are clearly not a correct mechanism of credit attribution but a very vague one, because that is not even their goal; they are mostly used for explanatory reasons.

That said, while Lomo & Bliss may not get a lot of love from machine learning experts who have a superficial knowledge of the field, they do get a lot of attribution where it matters: in neuroscience.


I recommend the book:

Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions

Generally, all scientific fields have immense issues related to the walled gardens and limitations created by their peers.

I also recommend this discussion between Eric and Brett Weinstein which highlights just one example of the major issues:

https://m.youtube.com/watch?v=JLb5hZLw44s


The article is basically saying that the quantity of papers published per year has a negative influence on the quality.

Quite off-topic, but it reminded me of an interesting book I read some time ago titled "The Reign of Quantity and the Signs of the Times". It is a bit obscure and metaphysical, but it develops the same idea across all aspects of society: economics, society, politics, religion, etc, etc.

Of science it basically proposes that science itself is a product of "quantitative thinking":

> The founding of a science more or less on the notion of repetition brings in its train yet another delusion of a quantitative kind, the delusion that consists in thinking that the accumulation of a large number of facts can be of use by itself as ‘proof' of a theory; nevertheless, even a little reflection will make it evident that facts of the same kind are always indefinite in multitude, so that they can never all be taken into account, quite apart from the consideration that the same facts usually fit several different theories equally well. It will be said that the establishment of a greater number of facts does at least give more ‘probability' to a theory; but to say so is to admit that no certitude can be arrived at in that way, and that therefore the conclusions promulgated have nothing ‘exact' about them;

Anyway, quite an interesting book.


Nice quote, and a very concise diagnosis of what ails much of science: lots of data jockeys, few true scientists. The output of scientific endeavor is not simply truth, but true theories. Or, more accurately, theories which accurately predict (i.e. are not falsified by) the broadest set of relevant observations. The real tragedy is that genuine theoretical advances are often ignored or overlooked because everyone is too busy collecting more data to notice.


The real tragedy is that even theorists often have a hard time staying current with the theoretical developments in their field, because there are so many other theorists publishing new results. The same incentives that value publishing novel results apply to theoretical work as well. Theorists can often publish at a faster pace than experimentalists, because they don't have to spend time and money on experiments.

Genuine theoretical advances are often ignored or overlooked, because the incentives value novel results over simplifying and synthesizing known results. Fixing this, like many other issues with incentives, is much easier said than done.


> Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion.

That's why change in the sciences, and similarly in commerce, industry, the arts and most human endeavours, often has to take the form of generational change, even when the raw facts would suggest otherwise to the naive observer.

It is difficult to get a man to understand something when his salary depends on him not understanding it.


>It is difficult to get a man to understand something when his salary depends on him not understanding it.

Wow, that basically sums up why I quit my PhD in one pithy quote. The entire literature was made up of folks being deliberately obtuse in order to secure grant money.


> one pithy quote.

from Upton Sinclair.

https://en.wikipedia.org/wiki/Upton_Sinclair


People are petty in every field and discipline. It's a race between mediocrities wishing to conceal their mediocrity. That's why virtue is so important. The virtuous who remain have the patience to suffer through it but also to beat back the bullshit by bringing clarity to their discipline in all that they do. They cut through the crap.

(This also reminds me of a certain kind of person who speaks in a stilted way and who somehow believes that studding his sentences with as many "Big Words" as he can somehow demonstrates his intelligence. This is the analogue of newly rich people who think that the more glitter and gold you cram into your shirt, the better it is.)


In what field may I ask?


Machine Learning (specifically its application to Brain Computer Interfaces).

I'm sure all of about 2 HN readers will be shocked to discover this!


BCI is full of people claiming to have cracked the neural code and engineered a device around it.

Neuroscience is full of people debating whether or not there is a neural code in the first place, and certainly don't consider the problem "solved".

I'm not shocked.


Would you be interested in a job trying to build something that can actually connect to a brain?


Not looking for work at this stage, but feel free to reach out and say hi. Email in bio.


Good science will be eventually recognized. Bad science will be eventually ignored. Kuhn called it a paradigm shift.

Poincare and Lorentz got old, basically gave up, and Einstein filled the void.

And in the long run, bombast, cheerleading, and fraud are irrelevant.


Recognized by whom? You have to be a specialist in the area to see what's good and promising, and what is not. But if the area is populated with bad actors that push their own agenda – your voice will be suppressed.

The idea that, eventually, everything will be organized beautifully and there will be heaven on Earth doesn't look obvious to me.


> Recognized by whom? You have to be a specialist in the area to see what's good and promising, and what is not.

And even after that the person who finds this "good science" and sees it has no citations will likely be tempted to rebrand the original idea and present it as his/her own, without referencing the original work.


The "new" science attracts plenty of citations and further development, otherwise it can't become mainstream. Sometimes this happens gradually after a first burst of recognition, sometimes suddenly.

Einstein's Nobel was for his work on the photoelectric effect, and specifically not for his relativity theories.


A lot of the comments are about the rate of scientific progress being unnecessarily low in general, but I understood the article to be about the relative rate of progress in different fields. The authors' findings suggest that not only is the marginal impact of individual publications greater in fields with fewer papers, but even the field as a whole moves forward faster with a slower publication rate!


As a current PhD student, this seems very plausible. Wading through the sheer volume of prior work is ridiculous, especially in my current field (cryptography) where actually reading and understanding all the technical details in a single paper can take a full work day (or more). Fewer papers with more meaningful results would make the field so much more accessible.


Yup. I prefer old papers because, among other reasons, they tended to finish a line of investigation before publishing. Nowadays, due to pressure to inflate publication numbers, the same amount of research gets chopped up and spread out over 2-3 publications. It's not just 2-3 times harder to read -- it's worse because I have to keep referring between them to understand things.


Thankfully there are review articles to quickly get up to speed.


At least in my field (plasma physics), review articles are published infrequently, at random intervals, and their coverage is arbitrary (ie, even if they are nominally concerned with the topic you are interested in, which isn't guaranteed, they may leave out the particular facet of the topic that is of interest to you).

I wish there were incentives/support for more regular/frequent review articles with broader coverage; that would have been a huge help as a grad student.


There are issues with trying to quantify science (e.g. using citation counts) - and of course this paper uses metrics to try to show that metrics-based science is flawed.

It couldn't be any other way, of course - nobody would accept what they're arguing without data.

My favourite example of how "high impact" science isn't recognised by citations is genome mappers/aligners, building blocks for pretty much all modern high-throughput sequencing.

The two most popular alignment tools, BWA with 30,868 citations and Bowtie with 12,810 citations, are implementations of the Burrows-Wheeler Transform, and have it in their names.

They are excellent tools and implementations, but in terms of novelty, the first to apply the BW transform to genomes was "Space-efficient whole genome comparisons with Burrows–Wheeler transforms", RA Lippert 2005.

Both BWA and Bowtie cite Lippert, making up 2 of Lippert 2005's 53 citations.
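For context, the transform these tools are built on is simple to sketch naively (real aligners use suffix-array constructions, not this quadratic version):

```python
def bwt(s):
    """Naive Burrows-Wheeler transform: last column of the sorted rotations."""
    s = s + "$"                      # unique end-of-string sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

bwt("banana")   # 'annb$aa'
```

The transform groups identical characters into runs, which is what makes compressed full-text indexes, and hence tools like BWA and Bowtie, practical at genome scale.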


Science is largely directed by the people who give grants. It's these guys' fault if things don't advance. They have failed spectacularly at cancer and Alzheimer's research by only giving to people who follow approved treatment approaches. Now, after many decades, we find out that removing tau proteins does nothing, and big-data analytics and gene sequencing have shown us that there is no cancer gene.

There are efforts to break out of the grant writer monopolies and fund, via crowdfunding or just interested groups of philanthropists, different approaches to various long standing problems. For example, MAPS has fought long and hard to get the regulatory approval and raise the money to test psychedelics and MDMA for depression and PTSD respectively.


Dunno about Alzheimer's, but as a result of Nixon's "war on cancer", there is a ton of money going into cancer research. It's not only going into narrow, conservative research directions: a good chunk of cancer research cash (if not most of it) is funding stuff that's only tangentially cancer research. Talk to biochem researchers and they'll tell you: if you want to get a grant, you write some bullshit story about how it is relevant to cancer research, and apply for a cancer grant. It doesn't have to be very convincing, because the grant committees are not stupid and just play along, since cancer gets more money than it should, relative to other research needs.


Maybe this is the reason that cancer hasn't been cured...


That’s why we have the great Perimeter Institute.



Upvote for interesting! I'll probably read it.

But Nature. And "but Magueijo", who is a bit of a crank in his own right (FTL, VLS), and who seems like a sour dude: per WP: "In [his] book, Magueijo described British culture as the "most rotten societies in the world"." A real hatchet job!

Perimeter gives a home to people who would otherwise have difficulty finding funding. Most will fail. But then...

Einstein could never have got a grant for his "annus mirabilis". And probably not even one to fund his GR work.


> Perimeter gives a home to people who would otherwise have difficulty finding funding.

Gave. Past tense.

https://nautil.us/issue/38/noise/this-physics-pioneer-walked...

http://backreaction.blogspot.com/2019/01/new-scientist-creat...


Fascinating articles. I think Bee analyzes the situation at Perimeter correctly. With Turok gone, maybe the pendulum will swing back a ways. One can hope.

Markopoulou, 38, hooked up with Doyne Farmer, 57, and got pregnant while taking a road trip from Santa Fe to San Francisco in a 42-year-old Datsun convertible. And they are still together. I've been following Farmer's progress since the '70s, but I hadn't heard that one.

Great stuff, thanks very much!! Got me to break out my old copy of Kevin Kelly's Out of Control.


Reading these articles, I think somebody might be able to make a difference by funding experiments only. This would make sure to avoid string theory stuff and other non-falsifiable dead ends.

You go write your theoretical paper somewhere else, but if you need to do an actual experiment, come to the perimeter and they'll pay for the equipment and such for you to try it.


While there are many things wrong in science, and the large number of publications is probably among them, the premise of the argument here is weak at best. The authors essentially say that because people in established fields cite established papers, no progress is made. That is quite a leap; just because people continue to cite Newton does not mean that no progress is being made.

The argument seems to be that new "disruptive" science needs to replace the old, but that is hardly ever the case. Instead of replacing it, it often extends it (see my Newton example, which was extended by e.g. quantum mechanics).


Sometimes I wonder if human innovation sits on an S-curve and we alive today plus a few generations before us just experienced the steep part. Maybe it will flatten out again. Although who knows, maybe not.


I can see this but the fact that the thought is so tempting makes it suspect.


Not surprised. I've just read a book about science-metric approach implemented in different countries.

Sociologists have known for a hundred years that any measure, taken as an ultimate criterion for reward, becomes manipulated. But the British neo-liberal government in the 1980s started using it as a measure of efficiency, out of fear that scientists live off public money and produce no meaningful output. It introduced a nationwide rating of scientists (per field), compiled once every 10 years, giving bonuses to those producing more research that gets cited.

Scientists now have to output an article at least every 2 years and get it published. This makes people resort to various tricks: having produced good research, they split the results into 2 or 3 articles, or attract more co-authors (doing extra work in exchange for extra citation points).

Journals also have weights, so the more prestigious ones get more proposals. The more competition there is for being published, the higher the pressure and the demands. And this leads scientists to define a bulletproof, narrow, and precise hypothesis, and then prove or disprove it very rigorously.

This essentially led to the abandonment of holistic research and monographs, and to writing in "bird language", incomprehensible to outsiders.

The system has since been copied by France (1990s-2000s) and Russia (2000s-2010s).


As the philosopher of science Paul Feyerabend wrote and made the convincing case in his book Against Method:

"Science is an essentially anarchic enterprise: theoretical anarchism is more humanitarian and more likely to encourage progress than its law-and-order alternatives."

"The consistency condition which demands that new hypotheses agree with accepted theories is unreasonable because it preserves the older theory, and not the better theory."

"Science is neither a single tradition, nor the best tradition there is, except for people who have become accustomed to its presence, its benefits and its disadvantages. In a democracy, it should be separated from the state just as churches are now separated from the state."


  "As the philosopher of science Paul Feyerabend wrote and made the convincing case in his book Against Method"
Convincing case? I think not.

Feyerabend is about the worst of the philosophers of science, and that's saying something[0]. He forms part of the tradition of humanities scholars who feel they have something of insight and utility to add to the understanding of science, but who offer insufficient evidence to match the claims they make.

It's with value-vacuum and nonsense statements like this:

  "Science is an essentially anarchic enterprise: theoretical anarchism is more humanitarian and more likely to encourage progress than its law-and-order alternatives."
and this:

  "Science is neither a single tradition, nor the best tradition there is, except for people who have become accustomed to its presence, its benefits and its disadvantages. In a democracy, it should be separated from the state just as churches are now separated from the state."
that these people trade in. What does this even mean? Where is the actual evidence to support such claims?

Science is far from an 'anarchic enterprise'. It has the most rigidly regulated mechanisms for knowledge procurement and knowledge dissemination available to us.

Further, science is a single tradition (underpinned by the scientific method) and it is the best tradition we have; no other field of human endeavour has progressed as far in 2,500 years as science has. This is in sharp contrast to Feyerabend's own field, philosophy, where such a claim cannot be sustained.

[0] Popper is easily the least-worst of this group.

[EDIT] Grammar clarity.


> rigidly regulated mechanisms for knowledge procurement and knowledge dissemination available to us.

Exactly.

And the DSL in which science should be written as much as possible, in order to make scientific results as reproducible as possible (which includes spelling out as many underlying assumptions as possible), is called mathematics. In recent years mathematics has been improved into formal mathematics, which is mechanically checkable through (interactive and automatic) proof assistants. This 2.5k-year-old human endeavour of improving science has not yet finished. The next big milestone, which I expect to see completed before the year 2200, is the mechanisation of all existing mathematics.
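As a minimal taste of what "mechanically checkable" means, a proof assistant like Lean only accepts a theorem once its kernel has verified every step:

```lean
-- Lean 4: the kernel checks this equality holds by computation.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A genuinely general statement, discharged by a library lemma
-- that was itself checked down to the axioms.
example (n m : Nat) : n + m = m + n := Nat.add_comm n m
```

No referee's goodwill is involved: either the proof term type-checks or the statement is rejected.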


> Science is far from an 'anarchic enterprise'. It has the most rigidly regulated mechanisms for knowledge procurement and knowledge dissemination available to us.

> Further, science is a single tradition

Have these rigidly regulated mechanisms been in place for 2500 years? Because if not, that seems like the sort of thing that Feyerabend might have meant by saying science isn't a single tradition.


  Have these rigidly regulated mechanisms been in place for 2500 years? Because if not, that seems like the sort of thing that Feyrabend might have meant by saying science isn't a single tradition.
The programme/process of Science - the overarching belief that the natural world can be understood through rational means - has itself gone through numerous process improvements during those 2500 years; yet it remains the same endeavour and retains the same core impetus.


Okay, but it seems like "science isn't a single tradition" is a perfectly reasonable way to describe this state of affairs. So when you ask "what does this even mean" I think there's a fairly straightforward answer.


Low hanging fruit has been picked.

And scientists now work within the walls of corporations.


And administrators have taken over universities


this


And the referee process can suppress innovative work.


I am not sure what it means, really? Progress comes from necessity, in that science follows need… and need these days asks for engineering (makers) more than fundamentals (theorists). I mean, fundamental science's accepted knowledge is still so far ahead of engineerable results in so many fields!


MMT proponent Bill Mitchell calls that "groupthink". It's really hard to get heterodox economic theory published and accepted.


Can't blame them if "researchers" are spending time on problems like this: "A Time-Series Analysis of my Girlfriends Mood Swings" https://www.reddit.com/gallery/q17vtl


Because scientists aren't allowed to have a sense of humor?


By Dr Chad Broman in the Journal of Astrological Economy. I'm not sure if you're being sarcastic or didn't get the joke.


Maybe Sokal Squared shouldn't be our primary concern.


What is the nearest solution here?



