“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it."
-- Attributed to J.J. Thomson (although I've not been able to turn up a definitive citation -- anyone know where it comes from?)
 Full biography: https://archive.org/details/b29932208
 Screenshot of relevant quote: https://imgur.com/a/qicgorD
"Not all famous quotes are by famous people"
- Abraham Lincoln
Googling this quote points back to RcouF1uZ4gsC as an original author.
I think allowing people to explore their passions is essential to bringing new ideas into fields, and will likely bring a renaissance to some areas of study, and at least portions of other fields, but I'm at a loss as to how it can help at the forefront of fields that require a lot of investment. Rocketry, for example. Can anyone make a realistic case that another 10,000, or even 100,000, passionate people could achieve what SpaceX has over the last few years? I don't doubt they'd come up with many or all of the same ideas, but the testing of those ideas requires a lot of money.
Even then, playing Kerbal Space Program does not make you a rocket scientist. It requires a certain dedication to master the various physics and mathematics disciplines needed to contribute to even part of one, which is itself a significant investment that you won't find possible with UBI alone; there will still need to be some form of dedicated funding through either specific government programs or commercial enterprise.
Salary also has a bit of a sticky effect; people change fields less than jobs, and spend more time in jobs than in funsies hobbies. A rocket designed by a committee of hobbyists will likely perform much like any other design-by-committee process or product.
ROBERT Hughes, the Australian art critic, filmmaker and writer, wandered into the kitchen of his fashionable loft home in New York’s SoHo to see how the plumber was going, setting up his new dishwasher.
On his knees grappling with the machine, the plumber heard a noise and looked up.
Hughes gasped: “My god, you’re Philip Glass. I can’t believe it. What are you doing here?”
Glass, one of the world’s most famous composers, said afterwards: “It was obvious that I was installing his new dishwasher, and I told him I would soon be finished.”
“But you are an artist,” Hughes protested.
Glass said: “I explained that I was an artist but that I was sometimes a plumber as well, and that he should go away and let me finish.”
Fun fact: most historical republics and democracies filled political offices by lottery. https://en.wikipedia.org/wiki/Sortition
What is that?
heh, I'm currently involved in exactly the process of constructing such an environment. And we are doing exactly that: there is a high-value deliverable that is relatively easy and predictable to achieve (but requires such specialised knowledge that it cannot be attained any other way than through very rare and highly skilled people), and then, to attract actual good people, the people themselves are free to take the rest of their job / time and apply it to solving whatever they see as the most valuable contribution they can make, on any time scale that is relevant.
On the one hand you can look at it as an embarrassing political fig leaf; on the other, you can actually see it as optimal. I actually don't think completely detaching people from obligations to reality often works out to be ultimately optimal anyway. You need something to calibrate what you are doing against.
(from the book The Truth About Everything by Matthew Stewart)
But having seen how poorly we adopted agile, I'm skeptical this will happen (we devolved to "waterfall w/sprints" due to not properly managing expectations with our users). If we can't get people to move away from hard deadlines for regular releases, how are we going to make them wait for a properly verified hypothesis?
One solution I've seen is organizationally "hiding" science teams from users, and only allowing a select few drive the direction of the team. But it still comes down to that select few properly managing everyone else's expectations.
I think “fundamental research” is not a thing, except possibly in mathematical theory. There is only incremental research + good salesmanship.
Research results should show up at a predictable regular pace, or else the result of that effort should be better & deeper explanation of a negative result which is also valuable.
If you’re sinking 2 years of R&D costs on something and you’re not getting the positive result you wanted and the negative results are not incrementally adding up to a clearer and clearer diff between the state you’re in and the state you’re trying to get to, then it is wasted money and the researcher isn’t being effective in their job.
I really think even difficult research tasks need to be rescoped and broken up into a series of incremental challenges, each of which has a known way to address it. You must treat it with reductionist dogma and eventually you’ll keep breaking it down into constituent parts that have known solutions until you hit on the novel problems to solve and it will be at a level of scope small enough that you can infer the solution from existing methods.
That’s all there is in the world. There’s no miracle cure for cancer or climate change or aging or social inequality. Let alone random business problems. There’s just a big bunch of little tiny problems with boring solutions that get all glued together into bigger messes that are hard to figure out. You can try smashing with a hammer or basically scaling the hammer up or down, that’s about it.
edits: fixed typos
That seems like a blanket assertion. It certainly seems like the most "manageable" mode of operating, but looking back at human history it seems like the "big breakthroughs" rarely happened that way.
The model of scientific research as harvesting the tail events (low probability & massive payoff) seems quite incompatible with what you've said.
That said, you probably think the way you do because of your experiences. Can you articulate that better?
> Research results should show up at a predictable regular pace, or else the result of that effort should be better & deeper explanation of a negative result which is also valuable.
To make a slightly more provocative claim, I think that this fetish for predictable/steady research progress is one of the primary causes of the problem discussed in the article. While papers can be generated steadily, piling on details doesn't necessarily make insight. (To quote Alan Kay: "A change of perspective is worth 80 IQ points")
PS: There's an apt SMBC comic (aargh, I'm unable to find it right now!) where researchers keep digging a tunnel linearly and declare the field dead, while there is a big gold mine slightly off to the side.
The problem is when people have been wasting time on big leaps of progress and then feel pressured to deliver something when they haven’t been pursuing incremental progress all along.
Think special theory of relativity or quantum mechanics kind.
On the other hand:
> The problem is when people have been wasting time on big leaps of progress and then feel pressured to deliver something when they haven’t been pursuing incremental progress all along.
That’s why the most upvoted comment with Thomson’s quote is so true. You need to pay gifted people to do simple, predictably realisable things, but give them lots of leisure time to pursue big leaps.
Many problems with universities and the academic system come from the fact that the granting system sweats the small stuff, not leaving enough leisure time to pursue big ideas.
If anything, examples like these highlight why the Thomson quote _is wrong_.
It has been more than 90 years since they were invented. At the time they were proposed, they were unexpected and quite unwelcome.
I definitely agree that right now the progress is more linear, distributed and steady, but I wouldn't exactly call quantum computing (an application of theory that was created ~90 years ago) a "big leap". I would rather compare it to the development of the heat engine from thermodynamics.
By big leaps I meant "paradigm-shifting" kind of things. These things cannot be expected to come from steady pipelines and projects with expected outcomes with anticipated results.
A flaw in Thomson's idea is that some research is expensive, costing much more than leisure time.
Another approach is angel-investing / business-loan style: Give a large grant, wait a long time, and then call in the note -- to return research that justifies the investment, or repayment with interest.
Or X-Prize style: Give outsize awards for outsize accomplishments, and let the researchers take on the risk.
And what are the consequences if the scientist has frittered that money away? How do you decide if a scientist/manager is even deserving of that level of confidence in the first place?
That discovery would lead to radio astronomy ... no thanks to established interests.
There's a basic assumption you're making here and which underlies a lot of the writing on this topic - that it's desirable, even morally virtuous, for research funding to be disconnected from application.
Bell Labs funded the transistor and not radio astronomy because apart from making cool TV documentaries, radio astronomy isn't actually useful for much. If we knew how to travel faster than light and explore the universe it'd be extremely useful, but we don't, so learning things about what a remote corner of the galaxy looked like a few billion years ago is easily argued to be a rather absurd waste of limited research dollars.
It's exactly what this op-ed in the Scientific American is talking about: a research field optimised to produce papers independent of any concrete economic utility function. In a world where such things get funded, what exactly should scientists be measured by? They can't be measured by market success because nobody cares or has any use for their output: their work is pure academic navel/star-gazing. So they pretty much have to be measured by volume of output or respect of their peers, both of which are closed and circular systems of measurement.
In my view the right fix for the science crisis is not to pay scientists to research whatever the hell they like with no success measurement at all: that really is directly equivalent to just firing them all and putting them on social security (or "UBI" as HNers like to call it). The right thing to do would probably be to just slash academic funding dramatically and reduce corporation taxes so corporate research can be given more funding. The net result would still be a drop in the amount of science done, but as Bayer's study makes clear, "not enough science" is not the world's problem right now.
The person who did lead the way to the (now enormous) field of radio astronomy was Grote Reber. He had a BSEE degree. It was his life-long passion, in his free time, at his own expense, and he had to struggle to get anyone to pay attention. He personally discovered Cygnus A in 1939 (and lots more). But he didn't get the physics Nobel in 1974. Instructive story:
1. The first and most egregious type is outright fraud. This is when you intentionally manipulate or fake data. Everyone agrees this is bad, and honest actors are enough to prevent it. In some cases, other honest actors are sufficient to determine if the claims are fishy.
2. The second more subtle type is not paying attention to adaptivity. For example, maybe an investigator wants to look at the data before coming up with a hypothesis to test. This is dangerous because the investigator is already overfitting, so any p-values the investigator computes afterward do not mean what they're supposed to mean. This is less egregious because it's easy to do this just by not being careful or not knowing your statistics very well. A scientist can be honest, but imperfect, and do this. It's also not easy to sniff this out as a reviewer -- the scientist might just omit all the stuff that didn't work. But there appears to be growing awareness of this kind of problem.
3. The third, and hardest to solve, problem is not factoring in the whole population of experiments. This is where 100 labs independently try an idea and one of them gets a genuine (from their limited view) result with a genuine p-value. It's novel and that lab has (in the limited view) been careful about adaptivity and keeping their hypotheses carefully generated. Maybe they've even used carefully generated noise to ensure their conclusions generalize  (which would definitely cut down on this problem). So it's pretty much impossible for a reviewer to tell there's a problem, because they don't know about the 100 other people that tried this and failed because the randomness didn't go their way. Short of a public experiment registry, this one is hard to fix, especially because it may be that nobody's being malicious or ignorant.
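A minimal sketch of that third failure mode, with invented numbers (100 honest labs, a true null effect, a two-sided z-test at alpha = 0.05), shows why some labs will "discover" an effect anyway:

```python
import random
import statistics

random.seed(0)

def run_lab(n=30):
    """One lab's experiment: two groups drawn from the SAME distribution
    (i.e. the null hypothesis is true), compared with a z-test."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Standard error of the difference in means (known sigma = 1)
    se = (1 / n + 1 / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # "significant" at p < 0.05, two-sided

# 100 independent labs test the same (null) idea
significant = sum(run_lab() for _ in range(100))
print(f"{significant} of 100 labs found a 'significant' effect")
```

On average about five of the hundred labs see p < 0.05 by chance alone, and since only the "winners" write papers, a reviewer never sees the denominator — which is exactly why a public experiment registry is the only structural fix.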
I wish there was more praise for negative results in publication, because whether confirmed or not, the knowledge has value.
It's hard to argue that I had justification to think that novel intervention X would have an effect. It turns out it doesn't. Science is often very specialized, there's little chance others would have the very same idea. If it works out, the argumentation would have to be reversed: my idea was very novel and non-obvious but as I show it actually works, which no one would have guessed.
The negative result story only works if the research community would have very strongly expected to see the effect, almost reversing the role of the null and the alternative.
Depends what you mean by "works". If you mean "is reasonably publishable in the current academic climate, then I agree. If you mean "has value", then I disagree.
In particular as that would mitigate problem #3 above ("3. The third, and hardest to solve, problem is not factoring in the whole population of experiments.")
I wrote an analysis of their data here: https://www.baybridgebio.com/blog/aducanumab-analysis.html
#1 and #2 are difficult to solve, given the near-exponential growth of funding for academic research.
Could you show any data that supports the claim about nearly exponential growth of funding for academic research?
AFAIK the funding, especially considering growing number of people in the field, gets rather worse than better.
That seems like a really good idea, that would solve a lot of problems. Can we do that, please?
We, the other scientists (just like the voters) incentivize certain behaviors and with that favor a certain type of scientists (and politician) to prosper.
There are all kinds of scientists (and politicians) competing for your trust (and votes). There are good and bad among them. As long as we reward the bullshitters more, they are the ones that will outcompete the others.
All these rules and regulations that people propose are ineffectual, as long as a certain level of self-criticism is not being applied:
- Stop believing and propagating the bullshit even if it seems to support your preconceived notions (or even the truth). This is very hard to do in practice.
As sad as it sounds: the greatest enemy of good science are the other scientists.
feels like some sort of prisoner's dilemma; it only works if all do it, otherwise it is best to just not admit anything
> Politics does not lead to a broadly shared consensus. It has to yield a decision whether or not a consensus prevails. As a result, political institutions create incentives for participants to exaggerate disagreements between factions. Words that are evocative and ambiguous better serve factional interests than words that are analytical and precise.
> Science is a process that does lead to a broadly shared consensus. It is arguably the only social process that does. Consensus forms around theoretical and empirical statements that are true. In making these statements, a combination of words from natural language and tightly linked symbols from the formal language of mathematics encourages the use of words that are analytical and precise.
While I often bash for-profit journals for being parasites that do little actual work and profit from withholding access to science that should be public (and for this I would open a bottle of champagne if they disappeared), I don't think journals have much to do with this particular problem of incentivizing bad science. Journals just respond to the demand to publish more, and shallower, papers. That demand comes from hypercompetitiveness in academia, where researchers need to fight for scarce positions and scraps of funding, often paired with too much bureaucratization (selection processes that look at "objective" and "verifiable" metrics like the number of papers published in a given impact factor quartile, etc., instead of just asking a bunch of neutral experts whether the person is doing good research, which may be more opaque but is also much more meaningful).
As evidence that journals are not the problem in this particular case, in fields like machine learning, where publication happens mostly in arXiv and conferences that don't charge for publishing or reading papers, the problems pointed out in the post also exist. Published models that only beat previous ones because they were lucky with random seeds or data splits are widespread.
there is quite a lot of funding
> often paired with too much bureaucratization (selection processes that look at "objective" and "verifiable" metrics
this is the problem.
The whole OP reads like a bizarre hit piece on open access.
How could scientists paying to publish their work incentivize them to publish more? How would spamming the world with more publications inflate a scientist's impact factor? (It wouldn't -- impact factor would be diluted by the spam)
It was always possible to self-publish and to cite self-published work, and even without journals, a modern scientist can publish on a free webhost for even cheaper than an open-access journal.
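For what it's worth, taking "impact factor" in its usual journal-metric sense (citations in a year to items from the previous two years, divided by the number of those items; the numbers below are invented), the dilution claim is just division:

```python
def impact_factor(citations_to_recent_items, items_published):
    """Journal impact factor: citations this year to items from the
    previous two years, divided by the count of those items."""
    return citations_to_recent_items / items_published

# A journal with 100 recent papers drawing 500 citations:
before = impact_factor(500, 100)   # 5.0
# Add 100 low-quality papers that attract only 10 extra citations:
after = impact_factor(510, 200)    # 2.55
print(before, after)
```

So spam papers do pull the average down, unless they also attract citations.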
I assumed that the point was that the journal is incentivized to publish as many articles as possible, and hence to lower review standards.
If you're just charging for publishing articles, you don't care about whether anyone reads them or about what your "prestige" is, since you don't make any money off of that.
It's true that if I publish a paper it's better for me if it's read and thus cited, but that's much less of a difference compared to published vs not published at all. The entire problem starts with authors not being incentivized to publish a few good articles over churning out as many as possible.
It would also help if we could open up science to people outside academia, and begin the process of de-pedestalization of academia altogether, but not in an unregulated, completely flat way - academic discourse cannot be done on Facebook. We know it has to happen and it will happen, but we pretend the current situation can last forever. Academia is turning into a place that sells indulgences.
But, nowadays, you can also use your TV to watch a French arthouse film, go to YouTube and be recommended a Japanese jazz album from 1974, or join the conversation on Twitter and ask questions of leaders in their respective fields.
Now you can swim against the current: Force all these power - and money - hungry institutions to fundamentally change their tune. Or you can find one of the many new waves to surf. Life is good, science is good, progress is good. The choice, as a scientist, is up to you. Can't write one groundbreaking paper a year? Write two or three mediocre ones. No amount of foundational change is going to make you a groundbreaking scientist. And change the channel once in while: the world is only getting bigger and more connected.
I think a few things could improve the quality and discovery of published papers:
- After so many publications, could it be mandatory for random samples of that author's publications to be tested? I get that there is a limit on resources, but some advanced undergrads could do this with guidance.
- I would love to see some version of a journal of failures: that is, well-intentioned research that had poor outcomes. It happened so frequently in my chemistry research that my compounds were useless or the methods to synthesize them did not work, and it would have been helpful to document that. Unfortunately, there is no “Journal of Failed Chemistry.” Only research that ostensibly makes a contribution with a clear outcome gets published. So much time is wasted experimenting where you could save another scientist time and encourage them down another path.
This mirrors my experience where a lot of the post-docs carried war stories of things like lab/country-specific humidity playing a role in synthetic methods succeeding (or failing). There were a lot of dark arts/tricks of the trade that people carried around with them: stuff like going that extra mile to dry things of water super thoroughly (even if it was not mentioned in the paper we were referencing).
An approximate CS analog: Writing great commit messages, and using an SCM.
You don't have to do them, nobody writes it up because those who know regard it as trivial, but if you don't do them, almost nothing works. Everyone who knows what they're doing does it.
This is definitely not the analogue. The analogue is always doing a clean install of the OS before running your experiment. Or only ever using Arch version x.y.z for replicating lab 1 and maybe a.b.c for replicating lab 2.
It's knowing all the magic undocumented JVM flags ahead of running the application.
Some you know to use as part of war scars/best practices. Some are just pure inside information from working in that lab or having a personal relationship with people in that lab.
However, for some I know, it's really difficult to get in, because there are many more journals of positive than negative results, and there are probably many more negative results than positive... so your negative result had better be especially interesting.
I just wish there were more, or at least more journals rewarding sound methodology instead of unsound "results". Who knows how much effort is wasted reproducing sound but unremarkable studies, simply because they weren't published.
Obviously there would be no awards for the poor outcomes and failed results and we would be back to square one where no one is incentivized to look at them positively.
This is of course some work, but if you really want it to exist, it’s better not to wait for someone else to do it. Also, allowing anonymous authors would help, I think (to avoid the name being associated with failure).
Edit: openedition.org is for humanities, but there probably is alternative for other fields.
Creating a journal is a tremendous task, and only highly respected researchers have any chance.
> if you really want it to exists it’s better not to wait for someone else to do it
Again, most people simply cannot create a credible journal. Let's be realistic here.
Very unlikely to be effective.
Time and again people think that when they have a problem it must mean the punishment was not severe enough. It won't work.
The problem is that good science is not rewarded enough. Nobody rewards you for having published reliable stuff five years ago.
Disconfirmation is also important.
The big experiments don't have publication bias: they proudly say exactly what they did, even if 90% of the time there are only negative results, because exclusions are important too. Experiments are inherently replicated, with multiple independent simultaneous experiments (LHC) or multiple independent analyses (EHT), data blinding throughout, and even occasionally a further layer of blinding using decoy signals (LIGO). The statistical standards for discovery are, in terms of p-values, about 10,000 times more stringent, and even still people are moving away from p-values entirely.
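For reference, the "5 sigma" discovery convention referred to above corresponds to a one-sided tail probability near 3 × 10⁻⁷; a stdlib-only sketch of the arithmetic:

```python
import math

def one_sided_p(sigma):
    """One-sided tail probability of a standard normal
    beyond `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"3 sigma: p = {one_sided_p(3):.1e}")  # ~1.3e-3, conventionally "evidence"
print(f"5 sigma: p = {one_sided_p(5):.1e}")  # ~2.9e-7, the discovery threshold
```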
The resulting publications are put out for free, publicly, on the arXiv. Later they are submitted to a relatively small family of low-cost journals, whose reputations everybody knows.
Hopefully some of these lessons can be adapted to other fields.
But this has mostly to do with your subject matter being better behaved than others, no? The most profound changes and the healthiest research culture could not make 5 sigma a reasonable goal in psychology.
Since they trusted the original result, if their results varied too much they discarded them. Ultimately they redid the experiment until they got something closer, and the only values that ended up getting published were results that were slightly bigger.
Correct me if I am wrong, as I read this story a while ago.
More details here: https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan's...
They proposed and agreed upon new "standards" based upon measurements with better precision and accuracy. See the Wikipedia summary.
Also, the charge-of-the-electron experiments you are talking about happened over 100 years ago, so they're not that relevant to today :)
Hundreds of authors on each publication. Whose contributions are real and who is just cruising?
I would expect that recommendation letters heavily favor some individuals for subjective reasons.
We are talking about the case where authorship on a scientific publication is insufficient evidence that someone made any contribution whatsoever to the paper. So one needs a recommendation letter, in addition to the authorship, where the recommendation letter would presumably state that this individual did actually do some work and is not just cruising.
I have tried to figure out what would be required to remove myself from 98% of the papers we publish, but it turns out to be a lot of work (essentially you have to argue with the head of the experiment, who has other things to worry about).
And I think it's tough on a lot of people: many people in the experiment feel that they should read every paper, because they feel personally accountable to it. They feel they should insist on changes because otherwise it will reflect badly on them.
To me that's an enormous amount of work and gets in the way of the really interesting science. Within the field everyone knows that I'm not directly accountable for a paper just because my name is on it. Most work is done by small teams of less than 20 people (sometimes just one or two) so it would be absurd to ask me to fully understand, much less feel accountable for, 95% of what comes out.
Personally I'd love an opt-out option. I don't think my colleagues should feel burdened by any weight that my name adds. Beyond that it's a convenience thing: I was updating my CV and thinking "if only there was some automated system to keep track of the papers I contributed to...". Incidentally, we have such a system, but it's only visible within the collaboration.
That’s an understatement.
* Couldn't get DAMA to work :(
Speculative theory is not reliable, which is why all the papers on the 750 GeV bump were wrong. But it has never been reliable, because, well, it's speculative! For each new phenomenon there's probably only one right explanation, but far more than one paper. Being 99% wrong has been par for the course for almost a century.
OPERA is a bit of a special case, but BICEP just straight up declared discovery and wanted the Nobel, have you seen their video of when they tell Linde?
* Any time I can, I link to the conclusion: http://resonaances.blogspot.com/2016/06/game-of-thrones-750-...
That’s a bit exaggerated. You mean the champagne-popping video? Well, it’s somewhat awkward in hindsight, but a bit of excitement was warranted at the time. I was at Stanford around then, might even have talked to Andrei right after, and don’t recall a festive atmosphere or anything.
It's certainly true that they announced early, but it's also true that the community at large regarded it with appropriate skepticism, causing the whole thing to be self-corrected in months.
the primary lesson here is to get a commitment for decades of government funding
And I know that few fields get that kind of commitment. Maybe cancer, maybe something in semiconductors. Lots of other fields would produce things if you offered them many years of billion-dollar commitments.
I'm not saying that it was wrong to give so much to fundamental physics. I'd just be very happy if we gave that level of funding to lots of other things.
That’s not unique to the LHC, or CERN, or physics. It’s a general problem of academia, where PhDs and postdocs are paid a pittance compared to what they could otherwise earn in industry. This problem is especially bad in high energy physics of course, since jobs are especially limited, and it’s the brightest people competing against each other, who could easily land jobs on Wall Street or in Silicon Valley.
> Then there is literally decades of over promising on ground breaking discoveries right around the corner (super symmetry, extra dimensions, dark matter).
Standard Model works exceedingly well at LHC. No one was actually sure about BSM (beyond Standard Model) so there was no “promise” really. Or the promise is: we may see something interesting, or we may disprove some otherwise interesting theories.
> Defunding the super collider in the 90s in the US was probably one of the best science policy decisions they made.
Cancelling SSC was such a stupid waste of labor and money, it’s painful to see someone touting it as a triumph. Two words: defense budget. Enough said.
Disclosure: I worked for CMS for a while. (Not physically at CERN; was doing data analysis for CMS in the U.S.)
No, it wasn't. The construction site was chosen for political reasons, and the entire bidding process was beset with problems that would have made it cost even more money, run into a ton of further trouble (for example, fire ants causing multitudes of delays, and groundwater seepage making the installation of sensitive electronics probably impossible), and probably never be finished. The administrators kind of knew it was going to cost more than they pitched even under the best of circumstances, and were counting on funding momentum to keep it going ("well, we've put this much money into it, so we can't just stop now").
What the US did with the SSC money was to put it into LIGO instead.
source: growing up my neighbor was this guy: https://www.npr.org/2019/05/19/723326933/billion-dollar-gamb... and he told me about these things.
Anderson made an argument against the SSC https://www.the-scientist.com/opinion-old/the-case-against-t..., which I pretty much agree with. Science funding is finite, and so is a country's physics talent. Many really good students are funnelled into dead end careers in high-energy physics (whether theoretical or experimental). It's just a huge waste of human potential, especially given how ruthlessly they are exploited. I know people in the field; a hiring decision between three people was recently described to me as a choice between a 'social case' and two competent workers, one of whom happens to be a friend of mine.
Funnily enough, lots of institutions doing fundamental research in high energy physics either also do military research or receive military funding. Most of Witten's work, for example, has been funded by the Department of Energy. The whole reason CERN was built in a neutral country was that people worried a post-war nuclear arms race would break out otherwise. In France, one of the major institutes contributing to particle physics (Saclay Nuclear Research Centre) also developed the country's nuclear arsenal and is located next to a major arms manufacturer's research center (Thalys).
Yeah, “more likely than not” isn’t a promise. Sure, a lot of people firmly believe in their theories, so me saying “no one was actually sure” seems wrong, but I was talking about a different kind of “sure”. The community overwhelmingly agreed on the SM, whereas there were huge divides on where the BSM bets should go, or, even within roughly the same bet, on where the SUSY scale lies, etc.
> Many really good students are funnelled into dead-end careers in high-energy physics, ...
I was one of the funnelled. We signed up because we were drawn to the fundamental questions, not because of glowing job prospects, which were laid out plainly for anyone paying a little bit of attention. Cancelling things and decreasing funding certainly didn’t help; it only led to worse “exploitation”, in your words.
> Funnily enough, lots of institutions doing fundamental research in high-energy physics also do military research or receive military funding.
Institutions do lots of things. Most also receive funding for medical research, so?
In general, modern-day HEP in and of itself hardly contributes anything to the military sector. On the more practical side, powerful magnets, computational methods, etc. should be useful in military applications, but a lot of different areas have such second-order effects. Nevertheless, I’m neither knowledgeable nor enthusiastic about killing machines, so I could be missing some obvious connections.
> Most of Witten's work for example has been funded by the Department of Energy.
Why would you put all DOE funding under defense budget? It’s not DOD. Or would you characterize all renewable energy spending as military spending too?
> In France one of the major institutes contributing to particle physics (Saclay Nuclear Research Centre) also developed their nuclear arsenal...
Particle physics has largely moved on from nuclear physics. (I know, many particle physicists are still interested in cold fusion etc.)
Every statement in your diatribe seems predicated on the assumption that discoveries in theoretical physics (what you're complaining about) will result in revolutionary changes to your day-to-day experience (i.e., applied physics) within a short enough span of time that you can actually enjoy them with an able body.
Probably not. Flight/space and semiconductors were notable exceptions, and they were largely driven by the century of hot and cold war that we know to be the 1900s.
Bad science is science that fundamentally doesn't yield a new understanding of the world, either by falsifying theories or confirming them. What you described is:
1. failure to meet exaggerated lay expectations, and
2. poor working conditions in the nonprofit space.
#2 (and possibly #1 depending on where the theoretical discoveries take us) can largely be resolved by pouring more money into science, not less. I'll point back to our century of war if you need past precedent.
Edit: removed a line that was unnecessarily insulting. Sorry about that.
I would be happy if they found any experimental evidence for their predictions. If you have followed the field, the majority opinion was basically that supersymmetry would inevitably be found at the LHC, as long as the Higgs mass was in a certain range (the naturalness argument). It turns out it (narrowly) was in that range, but supersymmetry was not found anyway. Arguably a lot of particle phenomenology in the last 20+ years was bad science in that way, because no new phenomena had actually been observed or measured that needed model building. Of course there is nothing inherently bad about conjecture or theory, but given that there are lots of unsolved problems in physics on which progress was made in that time, a lot of what they were doing seems like a huge waste of time.
Philip Anderson made his case for refocusing physics a long time ago: see his case against the SSC https://www.the-scientist.com/opinion-old/the-case-against-t... and "More is Different" https://science.sciencemag.org/content/177/4047/393.
Perhaps that wouldn't have gone on so long if the SSC hadn't been fucking cancelled in '93.
That is an interesting observation. Would you be willing to guess specifically why there was such a long gap between the LHC and the previous collider? The answer is already in your comment.
I suppose I'd be much more interested in learning about the latter half of your comment (specific scientific failures).
Suppose all TeV-scale colliders had been defunded in the 90s, including the LHC. Then we would forever have been stuck with the strong suspicion that a Higgs is there, but no proof, along with untested but well-motivated speculation over what else could be. Is that not worse than actually knowing?
Sure, this is your subjective perception, maybe fuelled by the positive reinforcement that rituals of accomplishment provide and the fact that working on physics is really fun. That's fine, as long as you realise that you are playing a game that is unlikely to yield any economic or societal value (in the case of high-energy physics) at any time scale, with a group of people who have been playing the same game for >20 years with no discernible progress except for the discovery of one particle predicted >40 years ago.
> Suppose all TeV-scale colliders [..]
Well, this is hypothetical, but what would most likely have happened is that the ~10,000+ scientists involved with the LHC and high-energy physics would have gone into different subfields of physics. Hopefully the same would have happened with the funding (we don't want it to go to biologists, do we?). Since high-energy physics still attracts some of the best students, this would have disproportionately improved human capital in other areas of physics. Whether the Higgs was there or not was never a super pressing concern anyway. Condensed matter physics, biophysics, environmental physics, and all kinds of (mostly experimental) quantum physics still have discoveries to be made on budgets of ~1 to tens of millions. Not only that, a lot of those discoveries will have society-level consequences on a time frame of decades. We are, in contrast, very unlikely to derive any benefit from studying the energy scales high-energy physics has reached. For those reasons I think funding high-energy physics is a huge net negative to overall societal progress.
Most young people who pick science do not understand the "real" rules of science and what it takes to "succeed". Being a good, reliable, hard-working scientist will not ensure your success.
Whereas in many if not most jobs, being good at the job itself usually suffices.
The struggle for jobs is the result of the government funding structure, plus supply and demand. It's not perfect, but it's infinitely preferable to the older system, where you could reliably do science only if you had great personal wealth or were favored by somebody who did.
Then I will say, job offers are not relevant here. Who gets the job offer? What do they have to actually do in that job? Where are they going to be in 10 years? What kind of life, job security, job conditions, career advancement prospects do you have over the long term? For most scientists, these are the most nebulous concepts ever.
You may think you know what the job is - maybe, I wouldn't know.
What I do know is that most people do not understand that a PI (principal investigator) at a university does nothing even remotely similar to what a postdoc (who wants to be a PI) does. The churn and exploitation are very common.
Any references to this? First I've heard of it.
> Unless and until leadership is taken at a structural and societal level to alter the incentive structure present, the current environment will continue to encourage and promote wasting of resources, squandering of research efforts and delaying of progress; such waste and delay is something that those suffering diseases for which we have inadequate therapy, and those suffering conditions for which we have inadequate technological remedies, can ill afford and should not be forced to endure.
There is a conceit in the final paragraph, where it is implied that we are missing out on cures for diseases etc due to wasteful scientific endeavours. This is not necessarily true. There have been many successes in the current era of medical science. Generally these are driven by technological advances such as monoclonal antibodies or next-generation sequencing.
People in glass offices run everything in 2019 and the more of an expert you become in your field the more professional managers will feel you are a pest to be silenced/hated/removed.
> Of course, scientific publication is subjected to a high degree of quality control through the peer-review process, which despite the political and societal factors that are ineradicable parts of human interaction, is one of the “crown jewels” of scientific objectivity. However, this is changing. The very laudable goal of “open access journals” is to make sure that the public has free access to the scientific data that its tax dollars are used to generate.
Meanwhile in China, there are numerous article-factory journals that are pay-to-publish; you can put your shoddy work in those and amp up your publication count easily. Surely these exist in the USA too, but are career scientists at major institutions utilizing these shady Chinese journals? There is evidence that some of these Chinese journals are publishing straight-up BS, which is especially easy to do with data analysis, where you could "clean" your data easily. Perhaps it is a cultural or political difference, but I don't see nearly as much rigorous self-reflection from Chinese scientists on this front.
Second issue, grants: publications are a small slice of this story. Science departments (not humanities) are MAJOR revenue generators for universities. My university took 1/3 of your grant straight off the bat to cover overheads like shiny facilities, administration, and marketing. Meanwhile, the scientists themselves may make huge salary bonuses or advance their tech/staff substantially when they have substantial grants. So getting a grant is great for you personally, and improves your chances of further grants.
So, is it really that surprising that there may be pressure to publish at all costs, to p-hack, to reach for those low-impact journals despite their lower reputation, given that universities and scientists BOTH benefit massively, financially, from this incrementalism? Does it really pay to reach for pie-in-the-sky, fundamental sea changes in your field? It seems like a high-variance, high-risk strategy that only very bold, well-funded, devil-may-care scientists would employ.
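Since "p-hack" is doing a lot of work here, a minimal simulation (my own illustration, not anything from the thread) shows why it pays off: test enough pure-noise outcomes and report only whichever one crosses p < 0.05, and a "significant" result becomes the norm rather than the exception.

```python
import math
import random

def two_sided_pvalue(sample):
    # p-value for "mean == 0" using a normal approximation to the
    # t-statistic (good enough for illustration at n = 30)
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

def finds_effect(n_outcomes, rng):
    # Every outcome is pure noise, so the null hypothesis is true
    # throughout; "publish" if ANY single test looks significant.
    pvals = [two_sided_pvalue([rng.gauss(0, 1) for _ in range(30)])
             for _ in range(n_outcomes)]
    return min(pvals) < 0.05

rng = random.Random(0)
trials = 2000
honest = sum(finds_effect(1, rng) for _ in range(trials)) / trials
hacked = sum(finds_effect(20, rng) for _ in range(trials)) / trials
print(f"false-positive rate, 1 pre-registered test: {honest:.2f}")  # near the nominal 5%
print(f"false-positive rate, best of 20 tests:      {hacked:.2f}")  # well over half
```

With exact tests the best-of-20 rate is analytically 1 − 0.95²⁰ ≈ 0.64, which is the whole incentive problem in one number: the "discovery" costs nothing but extra measured outcomes.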
When was there ever any more ethics or integrity in science than at any other time? The AIDS crisis was a shitshow of choosing prestige and recognition over the lives of a generation. The discovery of DNA's structure was built off the work of a woman who was hardly recognized. Henrietta Lacks' cells. The syphilis experiments.
What you are referring to is the use of Rosalind Franklin's X-ray fibre diffraction images by Watson and Crick to elucidate the 3D structure of DNA, and, depending on the accounts you read, whether she got due credit is arguable. She did publish in the same Nature journal issue as W&C (https://www.nature.com/articles/171740a0.pdf), she got credit for the photos (see the acknowledgements in the W&C paper, http://www.nature.com/genomics/human/watson-crick/), and she was dead by the time the Nobel Prize decision was made (so she could not have received the prize).
I understand many feel very strongly that she was cheated, and while I do believe she was definitely slighted and not given enough credit, the underlying story is fairly complicated. I recommend reading both Dark Lady and Eighth Day of Creation and then forming your own opinion. Personally, I thought her personal diaries, which she willed to Aaron Klug and were used in the writing of Eighth Day, were really illuminating.
This is the exact point of the replication crisis. Within social science, the answer is an absolute yes. That's why we have been having this conversation more and more over the past decade.
> Meanwhile in China, there are numerous article-factory journals that are pay-to-publish; you can put your shoddy work in those and amp up your publication count easily. Surely these exist in the USA too, but are career scientists at major institutions utilizing these shady Chinese journals?
In China, people know about scientific reputation perfectly well, which is why publication in Nature/Science/Cell ensures great financial reward.
There are shoddy journals everywhere, of course. Yes, top scientists at top institutions in rigorous fields in the US don't use them. The same is true for top scientists at top institutions in China. The average quality is probably lower (due to the inherent difficulties of doing cutting-edge science only two generations removed from mass starvation), but the principle is the same.
Of course, there is some difficulty in determining which work is going to be brilliant before it is done. But he decided that he could do that, seemingly based on how PR-worthy the proposal was.
At any rate it did immense damage and set back deep research by years. Naturally he left when he could wangle a better job elsewhere.
It is very easy to point fingers, to blame it on funding, blame it on journals, blame it on media, but in the end:
- scientists decide who gets funded
- scientists decide who gets published
- scientists make exaggerated claims in the media
The source of the problem is the scientists not understanding the damage they are doing to themselves.
I do foresee downvotes, because scientists do not like this idea at all :-)
When it comes to abstract science, e.g. the ultimate origin of the universe, it doesn't materially affect anything whether it is believed or not. But if someone produces a cheaper or longer-lasting battery, the proof is in the pudding, and that basic research will have made a difference.
Then there is the category of hard science which is disbelieved because moneyed interests wish to discredit it, and/or it has become a political shibboleth to discredit the science. Those aren't due to bad science.
Can you give a reference to back up your interpretation?
Alas, no. I do recall being astonished at my claim, and then being convinced by a colleague (which was backed up by plain language on the CMI page, in my memory...) but now that I'm re-reading (current and archive.org'd) I cannot find such a thing. Disturbing. Yet relieving. Fuck my memory.