Imo, it has to do with incentives and rewards (in CS at the very least).
The h-index is a measure that's become the target. So 1 seminal paper is much, much worse than 10 completely forgettable papers. It is also common to divide seminal work into smaller publishable morsels to rack up the h-index, at the cost of novelty and clarity. The lack of incentive to pursue novel work also means that most papers are incremental by design. Imo, this is the biggest waste of money in academic PhD programs. However, the students need to take their careers somewhere, so an uncomfortable compromise is reached.
Another problem is how every conference is converging into the same impact-maximizing mush, without any meaningful differences between them. This has massively affected searchability, which as we know leads to even greater 'agglomeration at the top'. Having different standards for novelty, experimental rigor, math rigor, scale, technical fit and the like would allow accepted papers to follow internally consistent, searchable constraints, while the diversity of target audiences across conferences would allow different types of research to coexist. In my field, conferences are only differentiated by deadlines and status. Everything else is secondary. There aren't too many papers. It's just that the quality of curation has gone to shit.
The bigger problem is structural. Academic research is mostly done by early-career researchers in short-term positions. When you are a PhD student or a postdoc with only a few years to do research, you don't have the time for long-term projects. And because your career prospects are determined by your success in your current position, you have to manage the risks by working on multiple projects. By the time they graduate, the average successful PhD will then have several publishable results where they were the primary contributor.
There are very few positions for mid-career individual contributors in the academia. After ~10 years, you are expected to take a leadership position. Instead of doing research yourself, you are now expected to convince your peers that your ideas are worth funding and that they should collaborate with you. If you get the funding, you will hire people to do your research. Your job is then guiding them through the early career. You can start thinking about ambitious long-term projects, but in order to make them viable, you have to break them down into smaller publishable subprojects suitable for your students.
You accidentally conflated "conference" and "paper" which is true in computer science and also relevant to other fields. The crux of the problem is that almost no one reads papers. People get exposed to ideas at conferences, and conference organizers invite the people that wrote the seminal papers many years ago to be the keynote speakers. Even if a paper is cited, it is usually because they saw the author speaking at a conference.
It's painful because (IMHO) the H index is just a much worse approximation of something that we could actually achieve with PageRank for academic citations. In that case, a bunch of middling papers would be rewarded, but so too would one critical paper that lays a foundation for a field.
Think of an army randomly moving through your citation graph; the more a particular node is trampled over, the more PageRank it has.
Now: if this army is informed about the shortest routes and instead moves about optimally, the most trampled-over places are the ones with high betweenness centrality. I'd like my simulated citing scientist to be smart.
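To make that concrete, here's a rough NetworkX sketch on a toy citation graph (made-up paper names; an edge A -> B means "A cites B"). PageRank is the randomly wandering army, betweenness centrality is the one that knows the shortest routes:

```python
# Toy sketch (hypothetical papers, not real data): compare PageRank and
# betweenness centrality on a small citation graph.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("follow_up_1", "seminal"),
    ("follow_up_2", "seminal"),
    ("follow_up_3", "seminal"),
    ("survey", "seminal"),
    ("survey", "follow_up_1"),
    ("survey", "follow_up_2"),
    ("recent_a", "survey"),
    ("recent_b", "survey"),
])

# Importance from random walks over the citation graph.
pr = nx.pagerank(G, alpha=0.85)

# Importance from lying on many shortest paths.
bc = nx.betweenness_centrality(G)

for node in G:
    print(f"{node:12s}  pagerank={pr[node]:.3f}  betweenness={bc[node]:.3f}")
```

On a graph like this, a single foundational paper picks up a large share of either score, which is the point about one critical paper being rewarded.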
I'll give you a bonus: the Louvain algorithm for community detection. Whatever ships with Networkx, Gephi, etc. doesn't work for my (correlation-derived, pruned with graphical lasso) networks, but the Louvain method (a greedy approximation to modularity maximization; the real math magic is in the concept of modularity and the configuration model) is awesome.
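If anyone wants to poke at it, here's a minimal sketch using the Louvain implementation that ships with recent NetworkX (2.8+); the built-in Les Misérables co-occurrence graph stands in here for whatever correlation-derived, pruned network you actually have:

```python
# Minimal Louvain community-detection sketch with NetworkX (>= 2.8).
# The toy graph below is a stand-in; in practice you would first build
# your graph from the graphical-lasso-pruned precision/correlation matrix.
import networkx as nx

G = nx.les_miserables_graph()  # built-in weighted co-occurrence network

communities = nx.community.louvain_communities(G, weight="weight", seed=42)
q = nx.community.modularity(G, communities, weight="weight")

print(f"found {len(communities)} communities, modularity = {q:.3f}")
for i, nodes in enumerate(communities):
    print(i, sorted(nodes)[:5], "...")
```

The greedy part is just the optimizer; as noted above, the real conceptual work is in modularity itself (comparing observed edge density inside groups against the configuration-model expectation).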
I always thought that PageRank was inspired by methods previously used for scoring academic papers. Now I'm wondering if I misunderstood something, or my professor misunderstood it first. Damn.
PageRank has nothing to do with h-index to my knowledge. The problem with using PageRank-like methods for academic papers is that academic papers mostly reference backwards in time (except for the occasional draft or work-in-progress being referenced, but proper cycles are rare). A triangular matrix doesn't yield an interesting stationary distribution...
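A quick sketch of that point: on a tiny hypothetical acyclic citation graph, NetworkX's pagerank still returns a distribution (thanks to the damping/teleportation term), but the mass piles up on the oldest papers, i.e. the sinks, more and more so as the damping factor approaches 1:

```python
# Sketch of the "citations mostly point backwards in time" issue.
# Hypothetical papers; an edge u -> v means "u cites v" (v is older).
import networkx as nx

dag = nx.DiGraph([
    ("p2020", "p2010"), ("p2020", "p2000"),
    ("p2010", "p2000"), ("p2015", "p2000"),
    ("p2021", "p2015"), ("p2021", "p2020"),
])

for alpha in (0.85, 0.99):
    pr = nx.pagerank(dag, alpha=alpha)
    print(f"alpha={alpha}:", {k: round(v, 3) for k, v in sorted(pr.items())})
```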
I don't know if it makes any difference, but if PageRank should replace the h-index, the most direct equivalent would be to rank the authors rather than the papers, no?
The incentives in academia are to publish and receive citations. This is often very political and citations are made as a favor or trade, completely independent of the research.
Unfortunately this structure forces out productive scientists who want to do interesting research rather than churn papers.
Doesn't the h-index prevent exactly the scenario of having 10 forgettable papers? Having a couple of great papers yields a high h-index, but 10 forgettable papers would hold the number at a low count, because you need N papers with at least N citations each. So a seminal paper keeps adding to the h-index indefinitely, whereas low-value papers cap the h-index at their citation count.
It kind of regresses to a median. 1 big paper, and 100 papers with zero citations aren't that useful.
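To make the arithmetic concrete, here's a tiny sketch with made-up citation counts; one blockbuster plus a pile of uncited papers still gives h = 1, while ten modestly cited papers give h = 10:

```python
# Minimal h-index sketch: h is the largest N such that N papers have
# at least N citations each. Citation counts below are hypothetical.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

# One seminal paper plus 100 uncited ones: h stays at 1.
print(h_index([5000] + [0] * 100))  # -> 1

# Ten forgettable papers with ~10 citations each: h climbs to 10.
print(h_index([10] * 10))           # -> 10
```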
However, most top PhD students/assistant professors hover around the nebulous 5-30 h-index range, where getting 30 citations is a lot easier than publishing 30 papers. So, in most cases, you will prefer to optimize for quantity, because the quality bar is so low. Additionally, they and their lab mates always cite each other, which leads to a free 10-ish citations over time anyway. Lastly, authorship priority is not taken into account in the h-index. So a bunch of secondary authors can easily get those numbers up at massively industrialized labs, and a small set of productive first authorships is given lower weight than a long list of low-contribution second authorships. Almost all super-high-h-index professors are more like CEOs of a research company than primary researchers.
The h-index, like all metrics, is useful. It sort of shows the median quality of papers by an author, assuming that equal time is spent on all papers. It is informative, but making it too important in academia has led to it being gamed, with counter-productive incentive structures.
The h-index ignores a lot of qualities that are incredibly important to being a productive researcher, and researchers with those qualities have been progressively pushed out of academia ever since it became THE target.
> assuming that equal time is spent on all papers.
And that's exactly what makes the h-index useless: every paper has a different amount of effort put into it. I've had papers that required 5 years of work, others only a month... and yet they are counted the same.
Also, the number of citations depends on the field: for people working across multiple fields, papers that are highly important to a small community get fewer citations than low-quality papers in a large community... Most papers are cited "by chance". A researcher types two keywords, cites the paper that seems to go somewhat in the wanted direction, rinse, repeat.
> Lastly, authorship priority is not taken into account in the h-index.
Given that there is no standardised way to determine authorship priority - given that this doesn't even make sense in plenty of cases - I really don't see that as a problem.
Apparently, some folks divine some nebulous properties from author order. Others just put authors in alphabetical order.
If you want to know who contributed what, ask the authors. You may get different answers from different authors, which is emphatically not a flaw.
(In contrast to the aforementioned divination procedure.)
One obvious issue is the breakdown between theory and experiment in physics.
Newton's great accomplishment was discovering a link between terrestrial and extraterrestrial physics.
Today on the large scale we see it takes multiple kinds of "dark matter" to explain the rotation of galaxies, structure of galaxy clusters, and cosmology. "Dark matter" evades any attempt to detect it on Earth.
Einstein's prediction of how light was bent in gravitational fields was tested by Sir Arthur Eddington in a few years. Last year observational evidence was found for a circa-1980 theory of black hole jets. Neutrino Oscillations were detected in 1998 and have been one of the few areas where particle accelerators get non-null results; the theory for that was developed in 1957 by an Italian physicist who defected to the Soviet Union. (No Nobel Prize!)
Since the gap between theory and confirmation of the theory could span a whole career, young physicists need to survive by pleasing their elders with fashionable theories for a decade without any feedback from the physical universe.
A strange counterexample was the 1980 rise of inflationary cosmology, where the problem and solution were discovered together. (Somehow nobody was bothered by the "Horizon Problem" until then.) It was Alan Guth's answer to survival in the physics hiring drought of his time.
I don't understand why consensus jumped to the conclusion that there must be a new form of undetectable matter instead of trying to figure out what we don't know about gravity.
Theoretical particle physics is even worse. There haven't been any advances since the 1970's. Experimental particle physics has done a great job in verifying/testing the predictions from the 1960's/1970's but theoretical physicists are stuck in a rut.
> I don't understand why consensus jumped to the conclusion that there must be a new form of undetectable matter instead of trying to figure out what we don't know about gravity.
Is it implied in "I don't understand" that it's not understandable? There is quite a bit of evidence that points towards dark matter and away from problems in our theories of gravity. We have direct evidence for forms of matter that do not interact with particular fields; it would be unsurprising to find other forms that are extremely difficult or impossible to detect.
Most notably, if it were just gravity, we'd expect the effect to be more homogeneous, but it isn't. The inferred distribution of dark matter is highly non-homogeneous and acts just like matter does, creating webs and clumps. It would be weird for a field to do this, especially since fields are associated with particles (the graviton being the exception so far: we still haven't found it, but we also don't expect to without a substantially larger accelerator, one that would be difficult to build on Earth).
My immediate take on "dark matter" was that there is something strange about gravity and inertia.
At the galactic level, however, there are a lot of cases where it seems you can see the mass distribution of dark matter. They've found starless hydrogen clouds that seem to be dragged around by a dark matter halo.
If you think about the evidence from galactic rotation curves you are likely to think that "this galaxy has some dark matter in it" but the modern point of view (which seems to work) is that "this dark matter has a galaxy in it."
When it comes to cosmology at the larger scales I don't really believe in the "multiple flavors of dark matter and dark energy" that is fashionable now. I wonder, for one thing, if the universe is really homogeneous at large scales and if that breaks the assumptions of current models.
Part of why dark matter is such an attractive explanation is because there are no constraints on it. Unfortunately this seems to have relieved pressure to explore and test modified gravity.
For example, how much of it there can be or where it comes from. Contrast with Neutrinos.
But you are correct, I don't know what I am talking about which is why my comments were phrased in the form of a question or literally with the words "I don't understand"
Is that a chart of how much we think there is (model fitting based on observations) or how much we think there should be based on how it is created?
With Neutrinos we know how they are created and have a very good upper limit on how much there should be and it's not nearly enough to explain the observed effects. Same for the CMBR.
I'm not saying there couldn't be much more Dark matter/energy than neutrinos or photons but it's a bit too convenient to introduce a variable that is allowed to take any value and match it to observations without an explanation of what it is (besides having mass) or how it is created.
I think they mean that the theory of dark matter has an infinity of parameters. For every point in space, one can (rather arbitrarily) assign a mass of dark matter to make the theory fit observations.
No, not really. We can observe it indirectly and have mapped out its structures. And intuitively, why do you find it implausible there is a massive particle that only interacts via gravitational force?
One of the more convincing arguments I've heard is the "Bullet Cluster". Basically, it's composed of two clusters of galaxies that recently collided. Since dark matter and normal matter interact differently (dark matter interacting only weakly), you could imagine that the two would have different centers of mass following such a collision.
If there's really just modified gravity without any dark matter, the distribution of the regular matter would be sufficient to model the gravitational dynamics but if there is such a thing as dark matter, you'd see that the gravitational effects would be consistent with a center of mass which is displaced from the center of mass for the regular matter. This is, in fact, what you see, suggesting that there really is some type of dark matter.
What makes you think anyone is jumping to conclusions? Many explanations have been proposed but the undetectable matter explanation is still the leading candidate.
From observations, we can see dark matter acting independently of visible matter. It's very difficult to explain this any other way.
There's a lot of different data out there and a lot of different theories. The dark matter hypothesis fits a lot of the data really well with a very simple model, few extra constraints or variables.
The alternative theories don't fit the entire set of data as well. Or they do, but only by adding complexity: rules and constants chosen to make the model work that can't be explained otherwise.
Dark matter isn't a full explanation, no doubt, but imho it's the best we have.
Well you never really “see” anything, you see effects of a thing. Especially at the edges of physics where all the low hanging fruit is understood.
We see an effect which isn’t accounted for by the things we understand well and come up with several theories to explain that effect. Eventually we gather enough evidence to confirm or deny those theories and science marches on.
There are several theories as to what causes the effects that are primarily attributed to dark matter, by no means is it settled. But the theory that fits the best is that there is quite a lot of mass out there that we can only observe so far by large scale gravitational effects on matter we can see more easily.
Sure, it could be something else but a really convincing candidate hasn’t come up.
We’re in the same situation physics was in towards the end of the 19th century. It seems like physics is nearly “done” with only a handful of odds and ends left unexplained. Maybe it is, maybe we’ll get a breakthrough that opens up a whole new world of physics. It’s hard to be sure but over and over we keep probing and not really finding significant “new physics”.
The trouble is that astronomers have limited ability to detect matter and estimate its mass. Given the predictions from GR, the idea that there was new astronomy and particle physics was easily more attractive.
~50 years on without significant progress for particle physics + GR means we are starting to be interested in alternatives.
Consensus shifted away from MOND and towards LCDM due to degrees of freedom in observations. A trivialized comparison would be "Why did artists give up on finding the best color and instead focus on finding the best arrangement of color on canvas?"
> A strange counterexample was the 1980 rise of inflationary cosmology, where the problem and solution were discovered together.
Yeah, and then in 2004, 30 scientists signed an open letter[0] to stop the pushing of creationism into physics.
Moreover, it was predicted[1] that the dispute between the ether and GR would be resolved in favor of the ether when the Higgs boson was discovered, because a continuous Higgs «field» (a medium) must be present everywhere for Higgs bosons to create mass. The Higgs boson was discovered about 10 years ago.
Heliocentrism was proposed in the 3rd century BC. When was it confirmed? Microorganisms were hypothesized in the 11th century and confirmed in the 17th century.
Looks like the scientist who predicted neutrino oscillations is Bruno Pontecorvo, and his life story is quite interesting. Here's what he had to say about the Soviet Union in 1992:
Now, for the first time, he is prepared to talk about the choice he made. But, with most Communist countries having changed their colours, how does he feel about the dedication of his life to the Communist cause?
'The simple explanation is this: I was a cretin,' he said. 'The fact that I could be so stupid, and many people close to me should have been quite so stupid . . .' The sentence was left unfinished.
Communism, he went on, was 'like a religion, a revealed religion . . . with myths or rites to explain it. It was the absolute absence of logic.' He stuck by his faith, even after the invasion of Hungary in 1956. When Andrei Sakharov, a fellow physicist, turned against the system, it made no difference. 'I had always admired him as a great scientist and a man of integrity. However, my idea was that he was naive . . . it was I who was naive.'
>>Since the gap between theory and confirmation of the theory could span a whole career, young physicists need to survive by pleasing their elders with fashionable theories for a decade without any feedback from the physical universe.
There are multiple ways of defining science: a method, a methodology, an epistemology... One definition is that science is the scientific culture. Stuff that impacts the culture, without conflicting with the methods or epistemology, can still change science. Science in a world where generations pass between hypothesis and test is a different kind of science.
No, it isn't waning. There are struggles, but that doesn't mean it's waning. We're making massive progress in medicine, space, cosmology, the environment, biology, food, and transportation.
Hell, the only thing that's waning is public perception of science and that seems to be a deliberate political attack.
Agreed. I only see my little corner of biology, but I am continually astonished at what is being learned. Oftentimes the pace of learning and the diversification is such that I don't find out about significant discoveries until a couple of years after they happen, and I can still bring the ideas to others years after that and people will say "holy crap, that's amazing, I can't believe I hadn't heard of that." Even scientists can't keep up outside of their area of expertise, and there will be even greater discoveries as different specialties are connected together.
Media coverage is terrible, which is just fine, because the fields are changing so quickly that who knows where things will be in a few years, and understandable lay explanations take a long time to develop.
I used to learn about, say, physics from popular media, but I don't get that any more. Presumably some things are going on in those fields.
I think there are two effects: the expanse of human knowledge is now so wide that the human mind is having trouble keeping up even compared to a few decades ago, and also the media has changed massively over the last few decades as internet advertising has gutted its funding model.
Biology is one field that seems to be making massive breakthroughs recently. On the other hand, it seems like a lot of theoretical sciences like physics haven't had similar breakthroughs in decades.
I believe that the “hard” sciences are much much easier than any soft science, which is one reason they have progressed so far. When you are arguing with a presumably objective reality, one party remains rational.
Hard sciences are seen as harder due to university systems, not because of the actual difficulty of the science.
Soft sciences are much harder to tease out fact from fiction, and your discoveries often modify reality so even your facts actually change (macro economics). Also for a lot of soft sciences, there are a lot of facts that don’t have to make sense (path dependencies for phenotypes of random mutations).
Is it? Biology is one of the fields most often cited as stagnant. Yes there's lots of papers, and some progress in understanding at the micro level, but progress against the most important diseases is nearly zero (see: Alzheimers, obesity, heart disease).
Also, the field is full of low standards and fraud. That's one reason why there's so much arguing over COVID, and why tech VCs stay away from it. Way too many impressive sounding results that you only discover don't replicate once you funded and spun up a lab. In biology you get things like Theranos, which is the sort of long term scam that's much harder to pull off in the software world.
Biology is also one of the few fields that appears to have serious problems with undetected paper forgery, with professional firms that create entirely fictional experiments and sell them. Browse through https://pubpeer.com/ and you'll see that the papers getting flagged are nearly always biology papers.
> In biology you get things like Theranos, which is the sort of long term scam that's much harder to pull off in the software world.
It's funny that in biomedical circles, it's been remarked that Theranos is what happens when the software world (Silicon Valley) tries its hand at biology. You can't fake it till you make it here: if the science doesn't work, it doesn't work, no matter how much VC funding and how many employees you throw at the problem.
Wow. I really hope that's not really the takeaway biotech people are getting from Theranos.
Firstly, Theranos's investors were mostly not tech VCs - I can only find one in the list actually, unless you count Larry Ellison in a private capacity. Theranos got a lot of money from non-tech investors like Rupert Murdoch, Walgreens, private equity etc. One reason they had to do this is exactly because tech VCs know to stay away from anything biotech related - the people who work in that space should be wondering why.
Secondly, Holmes did fake it to make it. That's the point. Faking it was sufficiently easy that they went years before the scam fell apart, they even got their machines deployed!
What's the software equivalent of such a large scale, 100% fraudulent operation? I can't think of any offhand, because it's normally pretty clear early on whether software is working or not.
That's exactly what I take away from the whole Theranos story, non-scientists scamming non-scientists out of their money, because neither of them understood the science, or were too arrogant to admit some things could not be done despite the flashy presentations.
You should also note that none of her funders or advisory board had much to do with biotech or diagnostics either. Might be because those with knowledge of the area had been calling it a scam since 2014.
I think it's obviously waning if you look at any field that deals with the physical world: a car from 1940 looks totally different from one from the 1980s, but a car from 2020 looks almost the same as one from 1980.
Space rockets are the same as they were in the '80s; where are the nuclear engines, fusion propulsion, etc.?
It's the same for airplanes, appliances, and everything except computers.
I started driving in 1982... and I can say with 100% certainty that cars of 2020 may not look all that different from cars of 1980, but under that skin of glossy steel and glass, they are very different. The engines are different (electronic fuel injection, timing and compression ratio monitoring), the transmissions are different (dual clutch, torque vectoring differential), the electronics are massively different, safety features are like out of sci fi (parking cameras, adaptive cruise control, anti-lock brakes, in-dash navigation). All that and the quality (fit & finish, reliability, durability) is hugely better.
A typical mid-market car of today would have been considered absurdly high quality and uber-luxurious in 1980. The difference is night and day.
I started driving around that time, too... in a 1950 Studebaker. Not only do I agree with everything you say about the difference between cars of today and 30 years ago, the magnitude of advances has accelerated too.
In the 50s, they were predicting that cars would be flying and nuclear powered by the 80s. That's what I think about with accelerating magnitude of advances. What we have now seems much more like the expected linear progress of 70 years, helped along by the computing revolution, where most of the actual magnitudes of progress has occurred.
Cars, planes, and rockets all have optimal shapes for their environment, of course they're not going to change much once they get to that point.
Cars are still vastly different today than in the 80s in (at least) performance, efficiency, and safety.
Planes are also getting quite a bit better, although adoption of these planes is slow as most airlines want to get as much as they can out of the old fleet.[0]
SpaceX has been landing their rockets for several years now, and are about to take an even bigger step with Starship.
I was watching Boardwalk Empire not so long ago. When the main character was a boy, people were wearing fancy clothes, sending letters, and riding horses. As a grown man, he talked on the phone and flew a plane. Whereas my parents were flying planes when they were young. I fly almost the same planes (yes, safer and more efficient) and it takes the same time to get from A to B.
Now you can contact almost anyone on earth using a pocket size computer and even see them, and share with them a very large portion of humanity’s knowledge.
I know that's the meme, but I think it's false and a dangerous thing to tell ourselves.
Walk into any university library, pull a random book off the shelf, and flip to a random page. What are the odds that the information on that page can be found in a google search?
And that's just the things that are publicly documented at all. There's libraries worth of implicit industrial knowledge too, including material that is explicitly proprietary. How does Intel or AMD design a modern computer chip? How does Rolls Royce design a jet engine? How do you fabricate a mono-crystalline solar cell? How do you mine for raw materials?
This is "I, Pencil" writ large. I would estimate only the smallest fraction of humanity's knowledge can be found on the internet - well under a percent, at least if you don't count "emailing an expert". If we had to rebuild society on the basis of what we could find on the internet, we'd be lucky to reach 20th century technology levels.
> Walk into any university library, pull a random book off the shelf, and flip to a random page. What are the odds that the information on that page can be found in a google search?
If you include pirating sites? Close to 100%; most books are scanned into PDFs and can be found free online. So the only thing stopping this is legal, not technological.
General aviation was destroyed for many reasons, none of which were really technology or innovation problems. Commercial general aviation was decimated by NIMBYs, commercial airline pressure, massive population growth, the expense of insuring/maintaining/owning aircraft, and, to be honest, a lack of care or passion for aviation over the past few generations.
But... the homebuilt and sport space has innovated quite a bit: glass cockpits, autopilots, efficient engines, electric power plants, micro jets, composite aircraft...
I think the appropriate comparison would be: what would the equivalent travel have cost, in money and time, for your parents' parents?
The argument is not that nothing is getting better, but that the rate of improvement has slowed. So the frequently touted 'exponential progress' is a myth.
And you could make the same case for other inventions.
Hammers in 1860 looked like hammers. In 1880? Same thing. In 1900? Same old hammers. 1920? 2020? Yep you guessed it, still a hammer.
Some things are invented and then perfected to a point where you can't really improve them much in a cost effective way. That doesn't mean that new stuff isn't being discovered and worked on at the same or faster rate.
"SpaceX has been landing their rockets for several years now, and are about to take an even bigger step with Starship."
From the 1940s, in 20 years we invented jet planes and rockets, and for the next 60 years we have been fiddling with the same basic rocket design. Are you seriously pretending that taking three generations to learn to land them is as big an achievement?
If we had kept up the pace of progress, we would have skyhooks in service, nuclear thermal rockets, nuclear electric propulsion, fission fragment rockets, and dozens of others.
We have arguably regressed, as Starship will just take us back to where we were before: being able to reach the Moon.
Starship is designed for Mars. They built a new spacecraft that will be reusable, at a cost efficiency never seen before, using technology specially designed for Mars. Its engines use methane, something no other manufacturer was able to master because of specific issues with that style of engine, and they did it with the purpose of generating methane on Mars to be able to fly back. The methane can be synthesized on Mars from CO2 in the atmosphere and hydrogen in ice. They had to invent the largest reusable rocket platform, the first reusable, working methane engine, and the first flight computer that could take off and land...
Not only that, but they invented or invested in massive technology for manufacturing all of this such that the engines are often 3d printed and designed with precision only dreamed of before.
"First flight computer that could tiake off and land"
The Soviet Buran could do that in the '80s.
"Starship is designed for mars"
The original Starship design was 5x larger and could take a meaningful payload to Mars. It had to be scaled down so that it could use the ancient Saturn V launchpad and other infrastructure and be more affordable. The current Starship is in the same lift category as the Saturn V.
"First engines to use Methane"
So what? If it were the first one to use uranium, that would be a revolution. This is just burning a different propellant. It's an incremental step, not a 60-year milestone. It's like saying "I upgraded my home boiler from coal to oil" - so what? You are still stuck with low-energy fossil fuels.
Landing a winged aircraft is different than landing a rocket.
Starship is designed for Mars, and its design changes pending mission realities. The fact that they're progressing so quickly is awesome.
The first engines to use methane are great, and it shows a mission profile that is correct for a trip to Mars, since they can use science to generate fuel while on Mars.
Uranium wouldn't be a revolution and there is no way it would ever pass certification for leaving earth orbit beyond small decay batteries that have been used for 60+ years.
Just because you don't agree with the cool shit going on, doesn't mean it isn't cool.
And if it could have been done 60 years ago, it would have been done 60 years ago.
No, not a "fan" of spacex, i'm a fan of space in general. my passion is cosmology, where were discovering more about black holes and gravity waves and new kinds of stars and getting closer and closer to possibly figuring out what all the dark matter and dark energy is.
But hey, two can play this game: you seem to hate SpaceX, so you throw out the science and engineering too...
There are multiple space companies sending up tourists now and dropping the cost per kilo of space travel while creating new tech to do it. Sure, it may look like a rocket is a rocket, but it's certainly not the same rocket from the 1960s - these are rockets that can take off, launch multiple payloads, and have components return to Earth and land where they started.
Absolutely not! I love SpaceX and the all-composite Dreamliner, but the benchmark is not "pretty good"; the benchmark being discussed is "exponentially better than anything that came before". Otherwise we don't have "exponential progress" - perhaps we have linear progress, or quadratic, or asymptotic.
I think they are not in the same league as Wernher von Braun or the Wright Brothers - those folks changed the world as we know it:
- when von Braun was designing rockets, we didn't even know whether the human body could function in space at all. There were no engines to look at as examples. There was nothing.
Turns out we dedicate tons of resources to advanced technology when we are at war (WW2/cold war). Sounds like we need another good war with a power who is actually a threat to rapidly innovate.
Cars are vastly different than they were in 1980. Fuel injection, aerodynamics, airbags, driver assist, hybrid and electric drivetrains, etc… the difference between a 1980 Honda Civic and a 2020 civic is massive.
In 1980 we weren’t landing self piloting rocket stages on drone ships and reusing them a month later. The engines powering our rockets were completely different chemistry and metallurgy. We weren’t building hyper efficient carbon fiber airframes with high bypass turbofans (Dreamliner). Even my washing machine is using about half the energy compared to a washing machine in 1989.
Composites, manufacturing, and design are all completely different to how things were in the 80s
Just because something has a similar form factor doesn’t mean it is the same. There is a strong argument that the past 40 years has seen some of the fastest progress in the history of the human race when it comes to making things.
> There is a strong argument that the past 40 years has seen some of the fastest progress in the history of the human race when it comes to making things.
I still think the period somewhere between the late 19th century and the middle of the 20th century saw the most impactful change in human history, and things have slowed somewhat in terms of overall impact since then, with more gradual improvements. 1881-1951 saw more transformative changes than 1951-2021. As in, the world changed more in the earlier 70 years.
There's still a lot of change going on, and some of it is transformative. But not to the extent of the radical transformations from the late 19th to mid 20th centuries, with revolutions in science, technology, economics, trade, transportation, communication and political structures.
This might be more true of the developed world than the developing, which probably has seen those transformative changes more in the last 70 years. But in terms of what came to exist, it's hard to beat that period of time.
"We weren’t building hyper efficient carbon fiber airframes with high bypass turbofans (Dreamliner)."
Sir Joseph Wilson Swan first created carbon fiber in 1860. Carbon nanotubes were discovered in the 1950s. If the Dreamliner were built out of carbon nanotubes, then I would concede. Like this, meh.
"There is a strong argument that the past 40 years has seen some of the fastest progress in the history of the human race when it comes to making things."
To test this hypothesis, let's stop arguing about somewhat subjective things, like whether fuel injection is more significant than the invention of the nuclear reactor, and think about things we can measure objectively.
Take anything at all we can measure in the developed world, from wage growth to energy production to steel production: by every metric I have seen, the rate of increase was higher in 1955 than in 2015. Feel free to take your pick and show any statistics that disagree with me.
Let's just get back to tech. In the 1980s, a 4 MHz computer with 360K floppy disks and a CGA monitor cost about 2,500 bucks... Today, you have a phone in your pocket that is orders of magnitude faster and smaller - all in a few decades.
The Apple Watch has hundreds of times more compute/storage and resolution than that computer... and some of those watches have 4G/GPS and things we could only have dreamed about.
In the late '90s, we used to watch "videos" in RealPlayer in a 1-inch box... 20 years later, we're streaming 4K on demand on every device in the home across WiFi networks and gigabit connections.
It wasn't that long ago most of the world was excited about 56k
The only thing that's happening, is you seem hellbent on taking the science/engineering marvels of our generation for granted.
We could go on and on but nothing would change your apparent disdain for the future
And what has all that computational improvement bought?
Modest improvements in communications (the bulk of which are taken up by ... decidedly non-critical messages). Adtech. Another few days on the long-term weather forecast. Pervasive Panopticon surveillance, both capitalist and state, largely symbiotic. Random death from the sky if you happen to be brown-skinned.
For millionsfold increases in CPU power, and billionsfold increases in the actual number of CPUs available.
You are using a platform that can distribute that sentence to a majority of the human population, at a marginal cost that rounds to 0, in less time than it would take a human to type it.
Another few days on the weather forecast for heavily populated areas that already have a team of forecasters on staff. For the rest of the earth, we have enabled forecasts to exist at all. In 1980, if I wanted the forecast for a remote spot on earth the answer was "hire a meteorologist". Now I can have a reasonably accurate forecast sent to me hourly via a satellite constellation for a single-digit percentage of the cost of hiring a meteorologist. This has HUGE implications for logistics, agriculture, scientific research, etc...
Surveillance and death from the sky? The west has had no problem spying on citizens since the 1930s, and randomly killing people in less developed civilizations is a centuries long tradition for the west. All technologies have their downsides. Much like computing, metallurgy is one of the core technologies behind warfare (modern guns don't exist without steel, planes without aluminum, etc...).
Niche artists and businesses can find an audience using the internet (using ad tech, I'll note). Lives are so routinely saved by cell phone technology, that we don't even bother noting the role that technology plays. I can use google translate to have a real time conversation with a stranger on the other side of the earth without having to worry about language. I taught myself how to design PCBs and had a prototype run made by a factory in a country that I've never been to, in less than a month.
The effects of computer technology are so ubiquitous that I think they are almost invisible
Broadcast communications is an inherently rivalrous space, and the ability for any one person to reach others means of necessity that they are preempting anyone else from doing so. Given present population and life expectancy, the amount of time that can be equitably granted any individual on Earth by others is ... a small fraction of a second. The likelihood that any of my online utterances is seen by more than a few hundreds of people is very slight. I'd have a similar audience at any crowded city street corner, or on a university quadrangle.
Worse, the existence of such a channel incentivises low-value ways to fill it. Advertising, robocalls, spam, phishing attempts, and fraud. It's somewhat enlightening to look over the Greek and Roman pantheon to realise that Fama was interested only in trumpeting the pronouncements of the gods, not considering their value or veracity, and that Mercury was the god not only of messengers, but of tricksters and thieves. Herman Melville's The Confidence-Man, from which the modern term derives, was set on the first great superhighway of the United States, the Mississippi River.
Improvements in communications technology have much the same effect as handing out party favours to a room full of five-year-olds. Communications and intelligence aren't improved, though the noise floor is considerably raised.
My point isn't that a few days' additional improvement on weather forecasts have no value. It is that in order to achieve this, MILLIONS OF TIMES increases in computer power are necessary. Which is to say that information-processing advances of an extreme degree deliver very, very modest additional benefits.
Niche artists and businesses used to find an audience the old-fashioned way: locally, because of a phenomenon known as "friction". It cost too much to move goods (or audiences) long distances, so provisioning was local. It wasn't until the rise of the factory system, mechanised transportation, branding, and advertising, that it became possible to sell goods over a range of more than a few villages or towns (with rare exceptions). The consequence now is that:
- Every manufacturer of a durable good is competing on a global basis, and production has a strong tendency to shift to where labour and environmental regulations can be most heavily suppressed.
- Ephemeral production, as with software, video entertainment, music, fashion, banking, and propaganda centralises to where the non-ephemeralisable components of production are most suitably concentrated, giving rise to toponymic global centres (with possible variations based on language, culture, and regulation or legal jurisdictions): Hollywood and Bollywood, Nashville and Motown, Silicon Valley, New York / London / Frankfurt / Tokyo / Shanghai, Milan and Paris, Moscow and Macedonia.
Cellphones are so gunked with junk calls that actual lost hikers will ignore incoming calls from unknown numbers, a problem communications service providers have seen coming for years. (https://news.ycombinator.com/item?id=29003383)
I'm not saying that there are no benefits. But there are also costs and consequences and countervailing trends, and many of the so-called gains have been offset by losses either in the same domain or elsewhere.
- An understanding of human impact on climate change
- Discovery of extrasolar planets (and even potentially an extragalactic one)
- Self-driving taxis (Waymo, in Phoenix)
- CGI in movies (and CGI movies) revolutionising entertainment
- Scientific advances (simulations in physics, astronomy, biomedicine; formal verification in CS)
- Untold quality of life improvements (e.g. cars, apps that translate signs/menus, ...)
Wrt cars: the chip shortage is reducing car production. That says something about how much modern cars need that computational power.
- Realisation of climate change puts additional constraints on human activity, it doesn't lift them. It's what I call a "hygiene factor", along with other realisations of unintended consequences of activities. Note that the constraints existed all along, we're just made aware of them at some point.
- Extrasolar planetary systems have virtually no practical impact for life on Earth. They represent an extension of understanding but not of means.
- Self-driving taxis ... are solving a wrong problem for most people. They don't solve transportation problems (congestion, cost, pollution, land-use), they are far more capital-intensive than increased land-density, walkable (or bikeable) cities, or mass-transit. They seem to exacerbate, not solve, inequality issues amongst both drivers and riders. There may be a slight safety improvement, but that remains theoretical rather than demonstrated. On every other touted benefit, ride-hailing / self-driving vehicles have failed to deliver.
- Movie CGI is an example of a technology with very minimal upside benefits and huge downside risks. The positives are ... making it easier to tell lies in entertainment. The negatives are ... making it easier to tell lies for any other cause at all: deepfakes, porn, fraud, spearphishing, propaganda, etc., etc., etc. There may be some benefits in situation / scenario modeling and simulation, though much of that seems to have military and offensive applications as well.
- Physics, astronomy, and medicine are all domains in which return to knowledge seems to fall rapidly. Basic mechanics gave us simple machines, chemistry and gas laws gave us our prime movers, steam engines, ICEs, and turbines. Electromagnetic experiments gave us generators, electric motors, and a wide range of electromechanical and electronic systems virtually all conceived of in the 19th century. Of quantum-based systems, the most widely used are probably LEDs, lasers (in information transmission and preservation), and photovoltaics. Returns to basic medical interventions (public health, nutrition, food quality, antiseptics) are vastly greater than those to acute interventions. And we can't even persuade a huge fraction of the population to practice basic hygiene and get a proven vaccine.
- Modern cars are dependent upon computational power. It is not necessary to use computational power in those cars, and vehicles lacking any electronics do in fact function. There are marginal at best improvements in performance and safety. Pollution controls (through improved real-time combustion tuning) is probably the largest impact. Note that the primary alternative to ICE-based automobiles are electric cars, themselves a development track abandoned over a century ago in favour of (at the time, and by badly flawed economic pricing functions) cheaper fossil-fueled internal combustion engines.
Meantime, pervasive problems exist in pollution (global warming from CO2 emissions, ozone from CFC releases, lead largely from use in motor fuels though also paint and other applications, other heavy metals, endocrine disruptors, plastics), environmental and ecosystem devastation (multiple causes), economic inequity, political imbalance, massive injustice, discrimination and genocide, epistemic warfare, and fundamentally, a failure to accept and act on the reality of limits to human expansion and progress.
Maybe your car hasn't changed, but your cancer care sure as hell has. The medical and biological sciences are where a lot of innovation has taken place, and the pace of technological advancement in things like DNA sequencing (as one example) is breathtaking.
People are living longer and age is a dominant risk factor for cancer. In a world where cancer care wasn't improving, we'd expect to see massive increases in deaths due to cancer. What's also not captured by that number are the years of life after cancer diagnosis, which for many cancers has gone up dramatically. It's a tough problem, but the curves are bending in the right direction.
If you want a more punctuated example, how about gene therapy, which is doing things like restoring (partial) vision to the blind. https://www.smithsonianmag.com/smart-news/new-gene-therapy-p... Early days still, but we've laid the foundation for a really exciting next decade or so in genomic medicine.
Same disease that would've killed my friend 30 years ago would kill him today, with the same treatments being used as had been available another 30 years before that, based on WWI chemical warfare.
Healthcare's shining decade was the teens.
The 1910s.
Largely based on public health measures. Much as today.
Car styling being somewhat static does not imply that Scientific Progress is waning. You could say something similar about clothing styles: comparing clothing styles between 1980 and 1990 there was a lot of difference. Comparing between 2010 and 2020 - not much difference. This has a lot more to do with economics and tastes than scientific progress (or lack thereof).
Car styling being static is because we are likely very close to, or at, some local optimum for minimising drag. In the past we may have had some idea, but now we can compute the best shape. Plus, there is regulatory pressure that makes some solutions illegal.
Safety, performance, handling, comfort, ease-of-use, reliability, and efficiency have all significantly improved.
The difference in experience of driving a 2020 Tesla Model S vs a 1980 Cadillac Seville is more drastic than that same Cadillac and its 1940s equivalent.
I still beg to differ... The shape of cars is even changing... We have the Cybertruck coming out, we have several EV trucks coming out; battery tech is improving, engine performance is still improving, and fuel efficiency is only being attacked for political reasons. It's pretty cool that a Jeep, a brick on wheels, has several options for efficiency and power that are major improvements over just a few years ago: from a 4xe hybrid to a turbocharged engine to a diesel option.
Airplanes? There is a really good Nova episode on the electric race. We're nearing an electric age with airplanes, even in commercial aviation. In the next 5-10 years we'll probably have short hops covered by quiet electric planes. That innovation isn't necessarily paced by science, but by safety, engineering, and certification - things we don't want to shortcut, since humans are involved in these systems.
Hell... we've been dabbling with autonomous cars too, and driver assist and lane assist... I can go on and on.
That ignores an enormous amount of vehicle safety improvements (roof crush resistance, various collision detection/drift detection, electronic stability control, crumple zones... ad nauseam).
That also ignores really impressive improvements in ICE based cars (idle shutdown, cylinder deactivation, reduced pollution).
That's not even touching on electric vehicles and self-driving car advances.
The fact that we are still discussing ICE cars proves my point. The first electric car was built 120 years ago, and London had an electric bus company (!) in 1903.
Yeah, we still use ICE cars, which is unfortunate, but outside of greenhouse gases they're immensely less polluting than they used to be[0]. 1968->2010 reduced NO 99%, CO 95%, and particulate matter 99.92%. (I'm guessing this is looking at engines for PM. Tires/brakes still produce a lot, as I understand it.)
The EPA says tailpipe emissions are 98%-99% cleaner, with a 71% overall drop across "six common pollutants" despite miles traveled climbing 114%[1] (They use a few different starting and end points throughout that article.)
It's easy to miss how much cleaner modern cars are.
Two inventions were required for BEVs to compete with even 1920s ICE vehicles: the power MOSFET (1970s) and the lithium-ion battery (1990s).
GM was about to put lithium-ion batteries in the EV1 when the project was cancelled in 1999. Less than 10 years later, Tesla introduced its game-changing Roadster. In between, Toyota's hybrid technology both helped and hurt EV development: it helped advance EV tech but reduced commercial and regulatory pressure for pure BEVs.
The first electric car is unrecognizable next to a Tesla Model 3; it is not appropriate to compare them. To me, it sounds just like saying SpaceX and Starlink are fundamentally the same as an R-7 + Sputnik. It's a narrow view of progress to ignore substantial incremental improvements over time. While you might not see big external differences between a 1980s car and a 2020 car, a number of engineering professionals working in different disciplines would be incredibly impressed.
Cars are massively different. Anyone who works on cars in the garage will know that our modern vehicles are very different than the old carburetor and magneto machines.
Drivetrain, running gear, and fuel systems are completely different systems that would be unrecognizable to a 1985 mechanic.
This statement sounds like it came from a software engineer who has never washed grease off his hands.
There is no way to measure the 'progress' between different types of systems. The manufacturing process is exponentially faster and more efficient between 1980 and 2020.
As far as cars go, I would say a 1980 mechanic could work on a 1940 car. A 1980 mechanic could NOT work on a 2020 car.
I think a lot of non-mechanically-inclined people don't understand the massive revolution that happened in car engines in the 2000s.
I think "scientific progress is waning" is maybe not the correct phrase. I think it's more like "scientific establishment is waning" or "scientific efficiency" or something like that. The issue isn't so much "is progress being made?" I think that's clearly the case. The question is "are resources being wasted?" or "what are the opportunity costs?"
The blog post at the end kind of focuses on turnover of ideas and progress, which I'm not sure is quite the right focus. I think the original question, about how papers are being cited, are good papers being cited enough, are bad papers being cited too much, are papers being cited appropriately, is probably more on-point.
I think people have this schema that academic science is a bunch of brilliant people just looking around, and when one of them comes up with a brilliant idea, others recognize it because they're brilliant, and then it floats to the top. What happens in reality is really different: you have a bunch of people who are pretty smart, but not always as brilliant as they are made out to be, and they have their own misunderstandings, blind spots, and biases. Ideas explode in popularity because the field as a whole is ready to understand or accept them, not because of the ideas per se.
Re: "political attacks" I think this that's self-inflicted in that the worst part of all of this is the denial of how broken academics is at the moment among the scientific establishment. In any event, the focus of the article isn't really even about typical conservative anti-climate, anti-vaccine research, it's about citation patterns, written by academics, about academics.
Whatever politics touches, it taints. No surprise here.
But one of the items you mentioned is food. I sort of doubt that we have great progress in food. People seem to be much fatter than ever before. That would suggest a lot of cheap calories, but less quality of food overall. Unless we measure progress in food by raw caloric content, we might actually be regressing.
The burden of metabolic diseases is certainly at an all-time high and not a single country in the world managed to reduce it meaningfully again.
> Hell, the only thing that's waning is public perception of science and that seems to be a deliberate political attack.
No doubt there are political attacks on science, but it's also true that science is often politicized and also corporatized, both by forces outside of science and those who practice it.
How come a person who is interested in doing science politicizes it? I think the first party interested in doing that is people who live from politics, not from science.
At least not people who honestly live from science.
Well because they see science as a means to an end (political control), not an end in and of itself.
For example, epidemiology has this problem. Epidemiologists routinely publish supposedly scientific papers that are actually policy papers in disguise. Often these papers violate the scientific method in some major way. Nobody seems to care. They get published regardless. Here's an example:
It unashamedly cherry-picks the UK, Denmark and Sweden to try and argue that the UK/Sweden should have adopted Danish policy. Why not study all countries for which data is available? They easily could have done so; the data is there. But they don't, because if you do that you end up with a null result (no policy reliably makes any difference). So they cherry-pick in order to be "informative" (as they put it) with respect to policy.
One reason is money. Government institutions are one of the biggest sources of funding, so if you're a scientist, avoiding anything that could be politically connected can be difficult to do.
~30 years ago I had a bone marrow transplant. I lived in an isolation bubble for a month. 95% mortality rate in the first year. A year of recovery.
Today, treating my original disease is practically an outpatient therapy.
To say nothing of immunotherapy replacing chemotherapies. My buddy had a late stage solid organ tumor. Cured with immunotherapy. For a condition that was absolutely fatal just a few months prior.
Agree with your last sentence. Everything is analyzed/used from a political point of view.
We need de-polarization. The first thing would be to strip away so much of the power held by these people who rule all of us at their convenience. I really think it would be a better environment for everyone.
I think this comes down to the growth of scientific management and technocracy in research and that bureaucracy's attempt to fight over dwindling public funding for the sciences coupled with lack of private non-commercial sources of funding. Universities are mostly made up of managers or researchers that end up acting like managers in order to justify their position. This leads to a set of bureaucratic rules for scientific success and a range of conferences that prop that system up. Disagreement, the possibility to be stupid and wrong, and the ability to take random choices based on intuition are eliminated when the majority of a field acts like scientific managers.
I highly recommend reading "The Body Electric" which details an excellent example, in both technical and social terms, of how structural effects like the above impede highly "random" or creative ideas in science.
> Disagreement, the possibility to be stupid and wrong, and the ability to take random choices based on intuition are eliminated when the majority of a field acts like scientific managers.
How do we democratize physical sciences in the same way as CS? My bet would be on the combination of the two (simulation) and providing the high-level tools to the masses.
By focusing on Small Science. Big Science needs big money, big equations, big machines, big careers, big meetings, etc, so big science should be last resort. Make experiments cheap, cheap enough that it becomes embarrassing not to double check. Cheap enough that half of the comments in a science article are about people doing the experiment themselves right there.
That doesn't work. The scientific literature is already flooded with cheap theory papers and a clear absence of real heavy lifting. Such papers are routinely filled with basic errors or flawed methodologies. There's no embarrassment associated with not double checking, failed replications happen all the time and there are usually no consequences.
are there guarantees that "Small Science" is productive? Regardless of how knowledgeable I am I don't think I can do experimental particle physics in my garage.
For this one, the main relevance is that the author came up against a bio and medical establishment that refused to accept even the scientific interest of studying the effect of small voltage profiles/currents on tissue and bone growth. One reason was that the main source of info in the mid-20th century was a Soviet scientist, but that's definitely not the whole reason, as it comes back to debates about vitalism and the role of electricity in bodily functions/philosophy (etc). There's a whole lot of similar work in studies on bioelectricity at the moment which, if successful, seems like it could become a dominant approach too lol. Pretty tricky to think about how that process works over decades or hundreds of years.
Understandability is waning. Which in many places amounts to the same thing.
Newtonian physics could be understood by an average child. General relativity could be somewhat understood by intellectuals. The cutting edge of physics now is barely understood by the people who are publishing the papers.
For additional progress to continue in a lot of fields, we're giving up a lot of understanding.
If we give a mouse a maze that requires understanding of calculus or trigonometry to get to the cheese - the mouse just won't get there. Doesn't matter how many attempts we give it - the reasoning is beyond its capacity.
Why would humans be any different to our own upper limits of understanding?
(mostly stolen from a chomsky lecture called "the ghost and the machine")
This. I don't even bother to read physics articles in Quanta anymore. I won't compare some theories to astrology, but if someone does, I won't run to defend them either.
I struggle with their math articles, but I know that if I find time on a weekend, I'll get the theorem (maybe not the proof). Knuth's books feel the same: hard reading but rewarding.
Biology is always pleasing to read. CS is my bread and butter, so I usually bookmark those.
PS: Masters in electrical engineering and PhD in system biology.
As a general rule, if you are hearing about some scientific endeavor in the popular press, it is because that science isn't very important, and they need publicity to get funding.
What a lot of people don't understand is that there is actually a lot of real science going on in physics. There are two branches of physics: what you'd call condensed matter / atom optics, and then cosmology / high-energy physics.
Condensed matter / atom optics is where the real science is happening, and those who work in those areas consider the second group to be an absolute joke. The thing is, there is also a feeling of everyone working together to try to get as much money from the government as possible, which is why no one blows the whistle on what a complete scam cosmology and the like is. It is understood at a subconscious level that everyone could be hurt if academics start infighting, and people would be ostracized for doing it. Also, there are a lot of bad scientists / zealots in condensed matter / atom optics, just as there are in cosmology, and they would try to ruin anyone who said a bad word about the church of academia.
Anyway, as far as real physics goes, there was a great article on here a while back about how we finally got to look at the atomic structure of glass, and how we can finally try to work out how it is put together. No one knows how glass is put together; there are a number of different theories, and none of them agree. That is the absolute peak of human achievement in science right now: trying to understand how things like glass are put together.
So if someone tries to tell you they know how the universe was formed and all of creation came about, but they can't explain to you how that window next to them works, then they are clearly a crackpot, not a scientist. The most hilarious part is that if you pull them up on it they will say "Oh well you see the whole creation of the universe and everything in it is actually much less complicated than glass, so that is why we can get results in this area easier".
haha.. funny because it is literally exactly the same thing. No we do not know more about the surface of the moon than our oceans. When academics can't be proven wrong, they come up with all sorts of theories that they are certain of.
We know so little about the surface of the moon that we don't even know if it is possible to land a rocket on it or not.
Indulge me with an odd potential counter-point though.
What if human knowledge is fundamentally both more inductive and collectivist than we care to admit? After all, Hume's problem of induction (that deductive reasoning stems from induction) does seem to suggest this as a potential resolution.
Isn't understanding mostly a set of connections and relationships about a thing? I can use memorized/practiced knowledge of trig and calc to solve problems, sure, but just like the rat, if I had been born 4000 years ago I'd probably struggle with the concept of negative numbers; with near certainty I wouldn't be able to invent them to solve a maze either.
So I would argue that perhaps all knowledge and understanding is fundamentally inductive, and is hard to conceive of for a single person in isolation, same as for a mouse. Large communities of people with millennia of progress, useful abstractions, and recorded insight, though?
Perhaps understanding scales with communities and time, and thinking of understanding at the individual level of a mouse or a human is missing the forest for the trees?
GR is hard to understand, because it uses wrong postulates. A medium (Higgs «field») is present everywhere, so we can use it (or CMB) as 0 point.
BB is wrong theory because photons are not immortal things, they are losing energy with time, thus H0(s) are representing rates of loss for different frequencies. Our local group of galaxies is expanding, because we are falling into Great Attractor and Shapley Attractor, but it's coincidence.
QM can be reproduced and studied at macro scale using walking droplets or air bubbles in water bubble in microgravity.
The entire college PhD system is toxic and self-serving.
At my local college, the guy the physics department building was named after was obsessed with some weird thing that no one understood, and when he died they just mothballed all his "projects" equipment. No one had any idea what it was or what to do with it, and it filled a good part of the building. He was not easy to get along with and shot down anything he didn't like without even a discussion. I'm told this is not in any way uncommon in the field.
Academia now selects for people who are good at navigating bureaucracy and getting research funding rather than people who are good at doing actual research
Shitting on academia is certainly one of them, but let's look at the merits of each comment on its own.
As someone with a lot of friends in academia, but luckily not dependent on it myself, I was quite shocked by the amount of politics, polishing, back-scratching etc. going on. Scientists present as very clean and orderly to the outside, but the process of writing a paper and getting it published is usually super messy.
I don't want to say of course it is, but of course it is.
It's an environment full of smart and hungry and competitive people. There are politics, yes, but you can damn well choose to avoid them, especially if you offer value.
Nobody in any industry presents all of the warts and difficulties of getting to a solution. If you wanted to hear about six years of failed experiments, I've got lots of time, but I feel like you don't want to hear it and neither do the people reading and writing research papers.
You'll find that outside of the superstar schools, the smaller schools (certain depts) are staffed with brilliant people. They'll tell you about the nuances of academia if you're a normal person but they're not going to show up on HN where people say what they do is worthless, so people get warped views of what the majority of it is.
The kind which agree with the HN group think - "nuclear energy is good", "Electron is bad", "Chrome is good", oh wait, that was 2009 HN group think, today is "Chrome is bad".
Academia and any sort of formal education is a waste of money. Math is difficult because of the notation used. The random guy who shows up in any ML/AI thread and starts talking about how useless it is because it’s not AGI. How stupid every hiring process is, especially anything involving testing technical competency.
True, but he died before he ever published anything about it. The only reason I got to see any of this is that I was dating a girl who worked there at the time. There were hundreds of these large 6-10 ft tall cylinders with lots of science-looking stuff in them, made, I guess, for swapping in and out of some sort of system in a sub-zero room full of other equipment. I was kind of impressed at first, then more disappointed that literally no one could even tell me what the stuff was or what he was working on. I didn't get to really see what was in the cold room, since everything was already disassembled and boxed, or piled up where it didn't fit in boxes. Also, I don't know what the room is really called, but it was made to be cold and was not active at the time I was there.
There were also piles of very expensive scientific machinery that he had gutted for single parts. Apparently simply sourcing the part he needed, for say $250k less, was beneath his time, so since he had tenure and budget he would just order something he already knew contained what he wanted and gut it, leaving behind a very expensive broken machine/instrument.
He also believed that women's brains were incapable of doing science, so she didn't really like him much, as you can imagine.
I am not in favour of naming people, but in the case of academics I think being a senior professor is something like being an employer. It's not going to change when people see it as a distant example. You could say: I don't understand this person's methods and don't like his/her approach.
I have worked in academia. I can't think of a single prof like that in my department. They exist, but that's not why people choose to spend decades of their life in poor paying jobs.
This is likewise anecdata. Experiences vary, so I don't know if I'd throw "normal academia" out there without having a full view of the sector as a whole, which few if any do.
My view is that, like many other fields (including notably software engineering) before it, Academia has fallen victim to Goodhart/Campbell's law.
Goodhart's law is an adage named after economist Charles Goodhart: "When a measure becomes a target, it ceases to be a good measure."
This follows from individuals trying to anticipate the effect of a policy and then taking actions which alter its outcome.
Campbell's law (by Donald T. Campbell, a psychologist and social scientist), is similar, but has a more concrete focus on the predictably negative unintended consequences of using such indicators for decision / policy making: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
For example, schools evaluated by marks end up 'teaching to the test' or outright helping children cheat.
A simple combination of the two (to borrow the Donald Trump style) would basically say of using metrics: sounds good, doesn't work; worse still, it backfires. Goodhart's law focuses on the fact that metrics don't work; Campbell's on the fact that they tend to backfire.
We have seen this time and time again in software engineering, when managers try to use crap like LOC metrics, or more recently "slack activity" to judge the quality of a software engineer.
Academia is now experiencing its own version of Goodhart/Campbell's law. Between impact factors, h-indices, and now REF exercises, scientific progress is but an afterthought, and the system self-selects either those with a high ability of navigating this monstrous maze of inane metrics created by bureaucrats, or those with the ability to successfully commit academic dishonesty (p-hacking, optimal division of publication units etc) without getting caught, or both. And only extremely occasionally, people with truly novel ideas and output, which happen to somehow still manage to obtain funding despite not fitting into any of the tight little checkboxes that need to be ticked to get a grant on the latest grant-bandwagon.
Ah it's a UK thing. Not sure what kind of equivalents other countries have, but I'm sure they do in similar forms.
It stands for Research Excellence Framework, and effectively it's a metric used by government and research councils to allocate funding to more 'deserving' universities / groups, by rewarding departments with "high quality paper outputs".
This has effectively led to departments launching tedious bureaucratic exercises and workshops where all members of staff need to submit summaries for "star ratings" for departmental sifting; submitting only certain kinds of papers to particular journals; eschewing both high-risk and low-hanging-fruit research in favour of controversial or 'popular' topics which are more likely to gather citations; "buying" researchers for their paper co-authorships within the current REF cycle rather than for their research/teaching skills or interests; etc.
> the newbies compete so intensely amongst themselves that they can’t compete with the established dominant choice — in the case of research, that often means [mostly only old papers get cited].
The article does not mention tenure and the publish or perish incentive (which forces academia to work like the auto industry, just-in-time manufacturing of incrementally better output).
From the study's abstract:
"The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea."
- This appears to be a key problem in academic research that matches my personal experience. More papers with less novelty are not only not beneficial, but beyond some point they become a net tax on everyone in the field. Without innovative approaches to fix the incentive structure of modern science, this is going to get worse over time.
Are there any obvious solutions aside from "hacks" such as e.g. private foundations flooding specific fields with cash to reduce the need to publish?
Was academia terribly relevant in my field when I was a grad student? Nope. In general, the work was poorly supervised, not-reproducible, and the peer review process was completely broken. Students just wanted to graduate and professors just wanted tenure and funding points.
Something has changed. I can't imagine Einstein's generation functioning like that.
Have pioneers of innovation moved to private corporations now that they have capital that rivals academia? Private companies can reward innovators with more than just credentials.
Bell Labs, Xerox Parc, Google Brain, OpenAI, Tesla, SpaceX, ...
Granted, this isn't even across all fields as they are not all economic drivers.
Something I often think about is how eccentric the leading researchers were in Einstein’s generation. Just look at a picture of the guy himself. Or consider Kurt Gödel, who starved to death when his wife was hospitalized because he could only eat food prepared by her.
Different times/cultures tend to put different personalities in charge, and that has a huge impact on what gets done. Overall I have a feeling the curious eccentrics are now out, and the charismatic corner cutters are in…
Bell Labs was only pseudo private. The government required they have that research lab in exchange for being allowed to have a monopoly. One should also consider that institutions like that were partly so productive for being at the right place and time in history, when we were developing the tools to exploit the low hanging fruit nature has to offer us. That whole era, we snatched it all up quick. Spaceflight and such didn't stagnate after because we turned to idiots, but because mass and aerodynamic drag and Newton impose pretty inalienable constraints.
I was watching a good documentary about Bardeen and Shockley and their development of semiconductor tech in the 50s. The military got their hands on some of their samples and work and put together a team to try to work out how they were doing it. The scientists they were interviewing were very depressed because for every month of progress they made catching up to Bardeen and Shockley, those guys would be a further 3 months ahead of them by then.
My point is, you can make claims about Bell labs being semi private, but that doesn't explain why all the innovation happened at Bell Labs and not at some fully government run lab or the military. The government couldn't even keep up with them when they knew what to do, forget about the government actually initiating that kind of research.
In the last 50 years almost nothing has come out of government research. All innovation has occurred in the private sector, or privately owned research universities. At best, the government has succeeded in some cases where government funded academics managed to get private funding from industry.
An excellent book that needs to be read carefully by many college students, including engineers, scientists, political scientists, and business students.
And more relevant to the actual Einstein's generation (and the few before), GE, GM, Bayer, IBM, 3M...
Bell Labs and Xerox Parc are gone. OpenAI, Tesla and SpaceX are very different places, and Google has Alphabet that actually tries to be like those but fails. And I imagine that cutting funding from projects before they can mature is a large cause of that failure.
Well but Einstein wasn't an academic when he was doing his best work. After he became famous he became an academic in the USA and (I read somewhere) spent much of the rest of his life being quite depressed because he never again reached the epic highs he achieved in his 20s.
I wish the article had addressed the other issue in the area: funding. The US stopped lavishly funding scientific research sometime in the 1970s. Private industry has taken up the difference, but private industry wants to focus on immediately usable research for profit, rather than fundamental stuff that’ll be useful for the next century for all of society.
Is it that surprising that the most cited research papers come from the tail end of the federally funded research era?
That "lavish funding" era extended largely from WWII, with specific focus on technologies such as radar, fire-control (computers), and the Manhattan Project, was inspired strongly by Vannevar Bush's "Science: The Endless Fronteir" (itself something of an HN perennial), and kicked mightily in the keister by the Sputnik scare and nuclear / missile arms race of the 1960s.
By a decade later, numerous factors had taken much of the steam out of the sails (to mix metaphors): the Vietnam war, foreign exchange and major changes in global currency, and the emergence of domestic peak oil in the US (the lower 48, at least), with control over global petroleum production and prices ceded to the Middle East, along with numerous consequences there. At the same time, Détente and the opening of China, political scandals (most notably Watergate), and the civil rights and anti-war movements changed attitudes toward government (amongst the Left) and toward academia (amongst the Right). The former is well documented through the general counterculture movement, the latter probably through the Lewis Powell Memorandum.
At the same time, there was what I'd see as a real decline in the pace of both scientific and technological progress in almost all areas, save information technology and some materials science.
TFA actually focuses fairly narrowly on one element, which is the explosion in publishing. I'll address that in a top-level comment, as I feel it's been overlooked by most other comments.
I'd add a growing awareness of and concern for the environment. The United States had its "Moore's Law" era for nuclear technology for about 20 years after WW II. After that, concern about weapons test fallout and other environmental releases of radionuclides made experiments much slower and more expensive. To the extent that many ideas never left the drawing board.
There are similar stories with chemical technology, manufacturing, even electricity generation. Fossil fuel depletion is one example of overtaxed sources. Strontium 90 in human teeth, acid rain, phosphate driven algal blooms, etc. are emblematic of overtaxed sinks. The US circa 1960 enjoyed a faster-than-sustainable pace of development (scientific and technological) by borrowing from the future on multiple axes.
I didn't want to head down that rabbit hole, but there are a few lines of argument which lead to the conclusion that the end-stage of most technologies involves both ever-diminishing positive returns and an increased concern in dealing with unintended consequences. I call these "hygiene factors", though environmental concerns would certainly be a prime example.
One framing of this looks at the mechanisms by which technologies achieve results. I've identified nine of these: fuels, materials, energy & power transmission and transformation, technical knowledge ("technology"), causal knowledge ("science"), networks, systems, information, and hygiene. These seem reasonably well-defined.
The area of accelerating rates of return seems specific to network / dendritic structures (physical, conceptual, or both). Even here, growth ultimately slows, probably best considered as governed by a logistic function.
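As a concrete illustration of that last point, here is a minimal sketch in Python of a logistic curve, where growth looks exponential early on and then saturates; the parameters K, r and t0 below are arbitrary illustrative values, not estimates of anything real.

  import math

  def logistic(t, K=1.0, r=0.5, t0=10.0):
      # Logistic curve: near-exponential growth early on, saturating at K.
      return K / (1.0 + math.exp(-r * (t - t0)))

  # The per-step increment rises at first, peaks around t0, then declines --
  # the "accelerating returns that eventually slow" pattern described above.
  for t in range(0, 21, 2):
      print(t, round(logistic(t), 3), round(logistic(t + 1) - logistic(t), 3))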
The same is true in software. The first people in a company or on a project have by far the biggest impact on the structure and future of the software. People who join much later focus on smaller parts, and they might even be geniuses and coding superstars, but they will (naturally) work harder for narrower reasons on a smaller part of the project, relative to the whole.
Papers in established fields naturally have narrower and more specific, i.e., smaller impact over time because most papers are fine-tuning things, asking smaller questions, and not building brand new theories or frameworks from the ground up. It’s expected that Newtonian physics is not going to be re-invented every year, right? Newton did it, and now the questions left are how gravity works at scales and speeds we can’t observe everyday on earth. Nobody will ever supplant Newton, because he was first.
I’ve watched this happen in my own field, computer graphics. The early papers that have lasting impact are the ones that were inventing the field and laying the frameworks for how to think about it. The rendering equation, the shadow map and Phong shading are ideas that wouldn’t get published today; however, they were pioneering at the time. Now the questions we have are about things like the true microfiber surface shape of human hair strands, so that we can increase realism by 1% compared to the previous hair models.
If you compare them side by side in the context of today, increasing realism of hair shading is a more difficult question to answer than the earlier question of how to interpolate a shading normal across a triangle.
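For contrast, here is a toy Python sketch of that older problem, using the standard textbook formulation rather than any particular paper's code: Phong-style interpolation of a shading normal across a triangle with barycentric weights. The vertex normals and the sample point below are made up purely for illustration.

  import math

  def normalize(v):
      # Scale a 3-vector to unit length (assumes a non-zero input).
      n = math.sqrt(sum(c * c for c in v))
      return tuple(c / n for c in v)

  def interpolated_normal(n0, n1, n2, u, v):
      # Blend the three vertex normals with barycentric weights (w, u, v)
      # and renormalize, giving a smoothly varying normal across the face.
      w = 1.0 - u - v
      blended = tuple(w * a + u * b + v * c for a, b, c in zip(n0, n1, n2))
      return normalize(blended)

  # Made-up vertex normals; sample at the triangle's centroid (u = v = 1/3).
  n0, n1, n2 = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)
  print(interpolated_normal(n0, n1, n2, 1/3, 1/3))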
So, yes scientific progress is waning in the sense that we’re inventing fewer fields and fewer new theories. There are fewer papers that are expected to be or even trying to be foundational, because the foundations already exist. And it’s not waning in the sense that scientific output has never been higher, and today’s papers are answering harder (and more specific) questions.
Emphatically not in global terms, but probably so in the United States.
Diffuse national priorities since 2000 have misallocated academic research, corporate R&D and government investment from hard-science to social science.
Also, the inverse relationship between the "financialization" of the US economy & the decline of applied science innovation is stunning. By way of example, applied sciences have lost a lot of talent to fintech jobs.
Similarly, the US-Russia cold war, for all its many downsides, did drive applied research. By contrast, 20 years or so of low-intensity conflict in the Middle East wasted vast amounts of capital on logistics, munitions and purchased alliances that would otherwise have found its way to various DOD research programs.
I enjoyed this podcast[0] with Peter Thiel and Eric Weinstein, where one of their central discussion points is the current stagnation of technological innovation, beginning in the mid 1970s, with the sole exception of computer software.
Peter Thiel aka Computer Software businessman and Eric Weinstein aka theoretical mathematician and his pal in software business. Like they know anything about the technological innovation, outside of computer software.
The nature of scientific progress isn’t linear so it’s entirely possible that progress is slowing down. I’m not sure how you’d quantify scientific progress though.
If you’ve not read Thomas Kuhn’s Structure of Scientific Revolutions, I highly recommend it because it lays out how science progresses. If you think about it, most of our current technology stems from late 19th Century and early/mid 20th Century science. There are exceptions and different disciplines experience revolutions perhaps at different rates. I don’t know much about biological science but CRISPR comes to mind in that regard.
> When the rare paper does break through, it usually does so in less than 12 months, suggesting that popularity comes from social media, news coverage, or via existing networks of people who are already well-connected in the subject area—rather than from citations in other work
I buy ads for one of my papers. Only a few dollars per month but I like to think it's worth it
Fascinating. If I understand correctly, you are saying that you buy paid advertising for academic papers that you have authored, is that right? Can you please elaborate? e.g. what platforms do you use, which keywords / audiences are you targeting?
I tried Twitter (the paper is about Twitter), but it seems like the minimum spend is $50 per day. I also got the feeling that the ads weren't targeted enough, based on who followed me.
I've had a small promo running on Google search ads for a few years (disclaimer I now work at Google). The keywords are roughly what's in the title. I don't know if it's leading to citations.
I think there are a lot of major, groundbreaking discoveries ahead of us in fields like biology and applied physics. I think it's plausible that we may be able to cure aging in our lifetimes. I think that we're going to see breakthrough discoveries in socioeconomics and anthropology too.
With that said, you can only discover electrons once. The last new particle discovered was the Higgs Boson, and what are the practical applications of the Higgs? Nothing compared with the practical applications of the electron.
So, waning? Probably not, at least, not yet. But, we are getting out into the branches of science and not working on the roots anymore.
We need fewer PhD candidates, and more auxiliary roles.
Pay people to work on wikis; every field needs an n-lab.
Pay people to work on open source software, and distros.
Pay people to develop existing research independent of there being enough demand (because Say's law is not a thing, monetarily).
Confining the academy to the R of R&D kills virtuous cycles and saps productivity as no one can take risks to improve the process separate from chasing outcomes.
There are probably 100 times more scientists doing research than in the 1920's and the technological means for research are incomparably more advanced, so in that regard we're definitely moving at a much faster pace.
Progress in fields like physics is definitely more incremental, though, compared to e.g. the first half of the last century, when the foundations of modern physics were laid (quantum mechanics, special and general relativity, quantum field theory, ...). Then again, there are many "small" cracks starting to show up in various theories (dark matter & dark energy being one), so I hope we'll soon discover something as groundbreaking as general relativity or quantum mechanics that explains some of them and makes the universe even more interesting.
But what about the replication crisis? Was there such a crisis in the 1920's? Because the motivations for things like p-hacking all point at something broken in science.
If one wants to limit focus only to the field of psychology, there's been a very long history of exceedingly flawed theory and experimentation, dating to the 19th century.
It's worth noting that scientific progress in the Soviet Union seriously stagnated in the Lysenko era due to the prioritization of ideology and the destruction of independent science.
I'd argue we are seeing the same thing in many western countries, especially the USA, under the ideology of corporate control of academic research. Much can be traced back to Bayh-Dole legislation in the 1980s, which allowed universities to exclusively license patents (which had been developed with taxpayer money) to private entities.
This created a new system of control and influence in academics, i.e. the Intellectual Property Office. What it really represented was the offloading of R & D burdens from the private sector to the public sector, while retaining private control of the patents generated in the public sector.
This means academic scientists in the USA today labor under the constraints imposed by large profit-minded corporations, just as academic scientists in the Soviet Union labored under constraints imposed by communist ideologues.
This is clearly seen in the pharmaceutical and medical sectors, where research into treatments is limited to patentable drugs only; older out-of-patent drugs are seen as unprofitable even if they're shown to be effective treatments for off-label conditions.
Another good example is the state's elimination of R&D programs for renewable energy: as fossil fuel interests infiltrated government and exerted regulatory capture at institutions like the Department of Energy, solar R&D programs in the USA were basically eliminated in the 1980s and 1990s (leading countries like China, Germany and Japan to become the world leaders).
There was also the gutting of environmental pollution research that used to be funded by the USGS, again due to regulatory capture and threats to deny funding in the 1990s.
Basically, the new ideology in American science institutions seems to be 'only do research into subjects that can generate profits for our corporate sponsors', much as it was in the Soviet Union, where the line was 'only do research whose conclusions support the communist ideology'.
Excellent observation apart from the profit-minded corporations bit. Corporate R&D spending has greatly increased, while USGOV spending has remained mostly flat.
Every major pharmaceutical drug I know of (including the mRNA vaccines; see Pfizer) is based on publicly financed (NIH) research done at public universities and transferred to the private sector under Bayh-Dole exclusive licensing regimes.
Now, would a university academic overseer be pleased to find their chemistry professors doing 'open-source drug discovery', or focusing on alternative uses of old drugs that cannot be patented (say, cannabis extracts as pain medications competing with new patented opiate derivatives)?
I don't see how anyone can honestly argue that the profit motive isn't seriously skewing (and limiting) the kinds of academic research being done in US universities today.
This may be a problem, but it is not a problem at the level of the university IME. Universities are happy with any research that brings in a good amount of grant money. Sure it is gravy when they can make money on something like CRISPR, but the vast majority of labs will never make money on their results, and tenure track decisions don't appear to be made based on potential for those kind of pipe dreams, at least anywhere I've been. Universities want PIs that can regularly rake in money via grant proposals.
In the biological sciences the vast majority of grant money is coming from the government. The NIH could very easily push open source drug discovery by changing the way some grants are allocated, without any other change to existing policies.
I also think industry influence on research will always be a double edged sword. Yes there are important research topics that would be understudied in a purely profit-driven system. But there is also a big lack of accountability in a purely "academic" system. Industry forces replication in a way academia never can, for the topics it does tackle.
This definitely also applies to the realm of programming languages. Clear back in the 1990s, Richard Hamming said that we used to stand on the shoulders of giants, but today we stand on each other's feet.
Scientific progress is still way ahead of engineering readiness in terms of results… and engineering readiness, while catching up, validates and enables further scientific progress.
This is not a question that would be answerable until at least 50 years from now. Discoveries that seem inconsequential are often profound, and vice versa.
Rise and decline of large scale human efforts and societies is generally only visible in retrospect. A good chunk of the people in any golden age think they are in a dark age or that doom is on the immediate horizon.
It's my view that most unexplained areas today are chaotic systems where reductionism fails: brain-body, ecological systems, etc. Even, I think, extremely fundamental physics.
I have the sense that "robust" science doesn't work here: the "explanation" is precisely the irreducible chaos. There isn't much more to be said than to point.
I think we're just seeing a sector getting used to a systemic shift from for-profit vetting by name brand publications and universities to a freer publication system with lower average quality. As an engineer I'm a consumer of science publications and the way I get what I need has definitely been affected by this.
More empty "paradox of choice" cliches. It's not that we have too many choices, it's that our many choices are a sea of crap, we know it, and there's no way to find one good thing in ten thousand craps.
I read machine learning research papers, among others. The explosion of papers from China, at all quality levels, at orders of magnitude more than a year before, was and is hard for me to deal with.
This article and the HN discussion (as I write this) themselves illustrate a major component of the problem, with rich irony.
The full title of the article is "Is scientific progress waning? Too many new papers may mean novel ideas rarely rack up citations".
As submitted, the first, generic, clause was chosen. The second, more specific clause, might at least tip off readers that there's something more afoot.
The article narrowly addresses a specific premise: "There are so many papers coming out in the largest fields of science that new ideas can’t get a foothold". And indeed, that notion itself has failed to gain a foothold in the ensuing HN discussion. Instead, I see numerous threads in which some popular narrative, many with merits, but not specific to the contents of the article itself are being advanced and discussed. (There are a few notable exceptions, of course.)
As the article notes, even within single disciplines, well over 100,000 articles may be published annually. No single researcher within a field can keep up with even the titles published on a daily basis, on top of their other research load. As an empirical validation of this, I'll point to several instances of high-volume data assessment:
- The New York Times content-moderation desk manages a sustained rate of about 700--800 comments moderated per moderator per day.
- Facebook's content moderation data suggest similar rates.
- Data by Stephen Wolfram ("quantified life") and Walt Mossberg (general interviews) suggest that people can handle a peak of about 100--300 email messages of any significance and complexity, per day.
At roughly 100,000 articles per year in a large field, a researcher would be faced with about 274 titles per day, every day, 7 days a week, 365 days per year.
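A back-of-envelope check of that rate (in Python, assuming the 100,000 figure is an annual count for a single large field):

  papers_per_year = 100_000          # rough annual volume for one large field
  titles_per_day = papers_per_year / 365
  print(round(titles_per_day))       # about 274 titles to scan, every day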
- What is driving publication of papers? Is it advancement of knowledge or gatekeeping functions within institutions and disciplines?
- What methods for capturing useful and valid information should be applied in cases of information overload? I've argued for years that in such cases, selection is less of a concern than rapid, low-cost, and unbiased elimination. That is, it's essential to discard information which cannot be usefully utilised and which will in fact impair the ability to process relevant available information.
- There's the meta-question (addressed by most comments so far on this thread) of what the limits and value, or even definition of science are. Whilst that's an interesting question of itself, and should probably have its own conversation, it's the least part of this specific article's merits.
Note that HN itself faces this issue, with numerous submissions daily, of which about 30 count as having made the front page. I'm increasingly going through the "Past" or "Front" links to find what's been curated on a given day, or using Algolia to search for the top submissions for a given week, month, or year. That last is somewhat awkward when the immediately prior interval isn't selected, but illuminating. Rates of progress and/or stasis, as well as tropes and remarkable incidents, become much clearer when aggregated.
Ok, we've changed from the title to the subtitle above. Thanks!
More than 30 stories make the front page per day - how many depends on how you want to count them, but actually 30 would be the lower bound of all such numbers.
On the topic of progress in science, I personally believe that fields where computation is useful/important (such as biology, chemistry, etc.) will continue to progress without much issue, as computation itself is, for now at least, continuing to grow without much issue, and as it grows it will take at least some time for the appropriate computational methods to be developed. However, for other fields where computational resources may not have as much significance (such as theoretical particle physics), I can see that stagnation may be an issue.
As many other commenters have noted, there are lots of problems in academia to do with funding, how ideas are spread, etc. However, these problems seem to be fundamentally economic or political in nature, so just throwing more bodies/scientists at them will not resolve them; there must be systemic change on the economic/political side to improve the situation.
Another thing that comes to mind is low-hanging fruit. An obvious explanation for why progress may be waning in fields where computation is not important is that all the easy work has already been done (see Dirac's quote about 2nd-rate physicists doing 1st-rate work in the late 1920s with QM), and thus in order for there to be progress:
1) students must have significantly more background knowledge as they need to know more about what does or doesn't work
2) creative ideas should not be shunned
1) imo is already a bit of a problem, and I guess at some point in the future there may be issues once humans reach limits to how much they can learn in a given time, but for now this can be mitigated by being more efficient in how students are taught. For example, in my own personal experience in math, once you get to PhD/research-level topics nobody reads textbooks to gain broad knowledge on a topic but rather reads them for reference. There's just too much to learn and know, so instead, if you're researching some topic, you try to learn whatever you need as you go rather than take a bottom-up approach.
2) is linked to the problems in academia I mentioned, but I guess, since more technologically advanced societies have the advantage over less technologically advanced ones, and science is of course basically a prerequisite for technological advances, this problem will solve itself, natural-selection style.
One other thing I would note about academia, with regard to inefficiencies, is how much is still locked away behind paywalls or lost to decentralization. For example, lots of historical journals I've seen only have their content in a certain language, and the like. For scientists, and for anyone interested in science in general, it would be easier if there were a central place to look up a topic that included all the historical content in some common language, so it would be easily accessible. An example, again from math: I needed information from a Hungarian journal that no longer exists. Not only was the information behind a paywall, it was also in Hungarian and only available as photo scans, so I had to OCR it myself. This is the kind of inefficiency where I doubt most people would go as far as I did to find that information, so potentially huge quantities of information may be lost unless they are cited by the modern literature on a topic, which is not always the case.
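For anyone facing the same dead-journal problem, a minimal Python sketch of that DIY route, assuming the Tesseract binary and its Hungarian language data are installed along with the pytesseract and Pillow packages; the file name is hypothetical:

  # OCR a scanned journal page with Tesseract's Hungarian model; the resulting
  # Hungarian text still needs translating afterwards.
  from PIL import Image
  import pytesseract

  text = pytesseract.image_to_string(Image.open("scan_page_12.png"), lang="hun")
  print(text[:500])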
If you consider the set of all things to be learned, some of those things will be easier to learn than others. Any intelligence setting itself to the task of learning things from this set is going to learn the easier things first. For a time, the power gained from learning the early, easy things may allow you to accelerate the rate of learning. But eventually you will hit a point where the things left to learn are so hard to learn that the rate will start to decline.
This does not imply that something is wrong with our approach now, or wrong with people now. It is a natural and unavoidable thing that the rate must at some point slow. Newtonian mechanics is incredibly simple to figure out, as evidenced by multiple people working it out about as soon as the tools were there to do so. General relativity is quite a bit harder and more complex. Whatever rules tie the quantum world to general relativity appear to be trickier still. Hopefully we get there some day.