
We're Incentivizing Bad Science - slowhand09
https://blogs.scientificamerican.com/observations/were-incentivizing-bad-science/
======
RcouF1uZ4gsC
From a previous HN comment
[https://news.ycombinator.com/item?id=14022158](https://news.ycombinator.com/item?id=14022158)
by dasmoth

“If you pay a man a salary for doing research, he and you will want to have
something to point to at the end of the year to show that the money has not
been wasted. In promising work of the highest class, however, results do not
come in this regular fashion, in fact years may pass without any tangible
result being obtained, and the position of the paid worker would be very
embarrassing and he would naturally take to work on a lower, or at any rate a
different plane where he could be sure of getting year by year tangible
results which would justify his salary. The position is this: You want one
kind of research, but, if you pay a man to do it, it will drive him to
research of a different kind. The only thing to do is to pay him for doing
something else and give him enough leisure to do research for the love of it."

\-- Attributed to J.J. Thomson (although I've not been able to turn up a
definitive citation -- anyone know where it comes from?)

~~~
kbenson
Taken to an extreme, this is UBI, and I think we would all benefit to some
degree from a system like that if we could get from here to there. On the
other hand, I imagine there's quite a bit of research that requires expensive
tests and/or apparatus to get right, and I'm not sure how you feasibly come to
a situation where you have a bunch of hobby physicists on staff and provide
them with a particle accelerator, but just for funsies.

I think allowing people to explore their passions is essential to bringing new
ideas into fields, and will likely bring a renaissance to some areas of study,
and at least portions of other fields, but I'm at a loss as to how it can help
at the forefront of fields that require a lot of investment. Rocketry, for
another example: can anyone make a realistic case that another 10,000, or even
100,000, passionate people could achieve what SpaceX has over the last few
years? I don't doubt they'd come up with many or all of the same ideas, but the
testing of those ideas requires a lot of money.

~~~
Fomite
There have been suggestions that doing science funding via lottery (with some
caveats) would be more effective and more efficient than our current grant
proposal based system.
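One way to make such a lottery-with-caveats concrete (a hypothetical sketch with made-up scores and function names, not any agency's actual procedure): screen proposals for a minimum quality score, then draw winners uniformly at random from the qualifying pool.

```python
import random

def lottery_fund(proposals, budget, min_score, seed=None):
    """Fund `budget` proposals drawn uniformly at random from those
    that pass a basic quality screen (score >= min_score)."""
    pool = [p for p in proposals if p["score"] >= min_score]
    rng = random.Random(seed)
    return rng.sample(pool, k=min(budget, len(pool)))

# 200 hypothetical proposals with reviewer scores from 1 to 10
rng = random.Random(1)
proposals = [{"id": i, "score": rng.randint(1, 10)} for i in range(200)]
funded = lottery_fund(proposals, budget=20, min_score=6, seed=42)
print(f"{len(funded)} of {len(proposals)} proposals funded by lottery")
```

The quality screen is the "caveat": the draw is random only among proposals already judged sound, so the lottery removes the incentive to oversell without funding junk.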

~~~
yvdriess
There should first be a good base of teaching assistant funding with enough
free time to do research work. A lottery for extra PhD grants would indeed
work well on top of this. Right now, those grants often go simply to applicants
with top-k grades.

Fun fact, most historical republics and democracies elected political offices
by lottery.
[https://en.wikipedia.org/wiki/Sortition](https://en.wikipedia.org/wiki/Sortition)

------
majos
It's worth distinguishing between different types of bad science.

1\. The first and most egregious type is outright fraud: intentionally
manipulating or faking data. Everyone agrees this is bad, and honest actors
are enough to prevent it. In some cases, other honest actors can also
determine whether the claims are fishy.

2\. The second more subtle type is not paying attention to adaptivity. For
example, maybe an investigator wants to look at the data before coming up with
a hypothesis to test. This is dangerous because the investigator is already
overfitting, so any p-values the investigator computes afterward do not mean
what they're supposed to mean. This is less egregious because it's easy to do
this just by not being careful or not knowing your statistics very well. A
scientist can be honest, but imperfect, and do this. It's also not easy to
sniff this out as a reviewer -- the scientist might just omit all the stuff
that didn't work. But there appears to be growing awareness of this kind of
problem.

3\. The third, and hardest to solve, problem is not factoring in the whole
population of experiments. This is where 100 labs independently try an idea
and one of them gets a genuine (from their limited view) result with a genuine
p-value. It's novel, and that lab has (in its limited view) been careful about
adaptivity and has generated its hypotheses carefully. Maybe it has even used
carefully generated noise to ensure its conclusions generalize [1]
(which would definitely cut down on this problem). So it's pretty much
impossible for a reviewer to tell there's a problem, because they don't know
about the 100 other people that tried this and failed because the randomness
didn't go their way. Short of a public experiment registry, this one is hard
to fix, especially because it may be that nobody's being malicious or
ignorant.

[1] [https://arxiv.org/abs/1411.2664](https://arxiv.org/abs/1411.2664)
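The population-of-experiments point can be illustrated with a small simulation (assumed sample sizes and thresholds, not from the comment or the cited paper): 100 labs each run an honest test of an effect that does not exist, and with overwhelming probability at least one clears p < 0.05.

```python
import math
import random

random.seed(0)
N_LABS = 100   # independent labs testing the same true-null effect
N = 30         # samples per lab
ALPHA = 0.05

def one_lab_p():
    """Two-sided z-test p-value for the mean of N standard-normal
    draws under a true null (no effect)."""
    xs = [random.gauss(0, 1) for _ in range(N)]
    z = (sum(xs) / N) * math.sqrt(N)       # standardized sample mean
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

pvals = [one_lab_p() for _ in range(N_LABS)]
hits = sum(p < ALPHA for p in pvals)
print(f"{hits} of {N_LABS} labs 'discover' an effect that does not exist")
# Analytically: P(at least one false discovery) = 1 - 0.95**100 ≈ 0.994
```

Each individual lab's analysis is perfectly honest; only the unpublished population of failures reveals the problem, which is why a public experiment registry is the usual proposed fix.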

~~~
kradroy
My advisor was guilty of pushing me to do #2. "Your original hypothesis was
disconfirmed? Find one that the data does not disconfirm and that'll lead to a
follow-up paper!" I dropped out of grad school because I assumed this was the
norm.

~~~
unlinked_dll
I feel like evidence disconfirming a hypothesis is as valuable as evidence
confirming it. How many other grad students are going to attempt the same
experiment/concept if null results are never published?

I wish there were more praise for negative results in publication, because
whether the hypothesis is confirmed or not, the knowledge has value.

~~~
bonoboTP
I think it's partially embarrassing when that happens. Hindsight is 20/20, and
people will think it was obvious that it wouldn't turn out the way you expected.

It's hard to argue that you had justification to think that novel intervention X
would have an effect when it turns out it doesn't. Science is often very
specialized; there's little chance others would have the very same idea. If it
works out, the argumentation has to be reversed: my idea was very novel
and non-obvious, but as I show, it actually works, which no one would have
guessed.

The negative result story only works if the research community would have very
strongly expected to see the effect, almost reversing the role of the null and
the alternative.

~~~
thfuran
>The negative result story only works if the research community would have
very strongly expected to see the effect, almost reversing the role of the
null and the alternative

Depends what you mean by "works". If you mean "is reasonably publishable in
the current academic climate", then I agree. If you mean "has value", then I
disagree.

~~~
bonoboTP
I mean "sufficiently impresses and/or catches the attention of others,
especially other scientists".

------
glofish
The real problem with science is the same as in politics.

We, the other scientists (just like the voters), incentivize certain behaviors
and thereby favor a certain type of scientist (and politician) to prosper.

There are all kinds of scientists (and politicians) competing for your trust
(and votes). There are good and bad among them. As long as we reward the
bullshitters more, they are the ones that will outcompete the others.

All these rules and regulations that people propose are ineffectual as long
as a certain level of self-criticism is not applied:

\- Stop believing and propagating the bullshit even if it seems to support
your preconceived notions (or even the truth). This is very hard to do in
practice.

As sad as it sounds: the greatest enemy of good science is other scientists.

It feels like some sort of prisoner's dilemma: it only works if everyone does
it; otherwise, it is best to just not admit anything.

~~~
leftyted
A different framing of this is that politics is, more and more, intruding into
science.

Paul Romer:

> Politics does not lead to a broadly shared consensus. It has to yield a
> decision whether or not a consensus prevails. As a result, political
> institutions create incentives for participants to exaggerate disagreements
> between factions. Words that are evocative and ambiguous better serve
> factional interests than words that are analytical and precise.

> Science is a process that does lead to a broadly shared consensus. It is
> arguably the only social process that does. Consensus forms around
> theoretical and empirical statements that are true. In making these
> statements, a combination of words from natural language and tightly linked
> symbols from the formal language of mathematics encourages the use of words
> that are analytical and precise.

from
[https://paulromer.net/mathiness/Mathiness.pdf](https://paulromer.net/mathiness/Mathiness.pdf)

~~~
tobias3
He also left his post as World Bank chief economist after complaining that the
"Ease of Doing Business" index had suspiciously gone down for Chile while a
socialist party was in power.

------
Al-Khwarizmi
I agree with everything except for the jab at (paid) open access, which looks
quite gratuitous to me. It may be true that "authors are willing to pay more
to get their articles published in more prestigious journals. So, the more
exciting the findings a journal publishes, the more references, the higher the
impact the journal, the more submissions they get, the more money they make".
But this is also true of non-open-access journals. Journals live off their
prestige, and before paid open access was a thing, publishers still wanted to
have high-impact journals so that university libraries would subscribe to
them. It was a very distorted market (as it is now), but a market
nevertheless.

While I often bash for-profit journals for being parasites that do little
actual work and profit from withholding access to science that should be
public, and I would happily open a bottle of champagne if they disappeared, I
don't think journals have much to do with this particular problem of
incentivizing bad science. Journals just respond to the demand to publish
more, and shallower, papers. That demand comes from
hypercompetitiveness in academia, where researchers need to fight for scarce
positions and scraps of funding, often paired with too much bureaucratization
(selection processes that look at "objective" and "verifiable" metrics like
number of papers published at a given impact factor quartile, etc., instead of
just asking a bunch of neutral experts whether the person is doing good
research, which may be more opaque but also much more meaningful).

As evidence that journals are not the problem in this particular case: in
fields like machine learning, where publication happens mostly on arXiv and at
conferences that don't charge for publishing or reading papers, the problems
pointed out in the post also exist. Published models that only beat previous
ones because they were lucky with random seeds or data splits are widespread.

~~~
gowld
> researchers need to fight for scarce positions and scraps of funding

there is quite a lot of funding

> often paired with too much bureaucratization (selection processes that look
> at "objective" and "verifiable" metrics

this is the problem.

The whole OP reads like a bizarre hit piece on open access. How could
scientists _paying_ to publish their work incentivize them to publish more?
How would spamming the world with more publications inflate a scientist's
impact factor? (It wouldn't -- impact factor would be diluted by the spam)

It was always possible to self-publish and to cite self-published work, and
even without journals, a modern scientist can publish on a free webhost for
even cheaper than an open-access journal.
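For what it's worth, impact factor is a citations-per-paper ratio, so padding the denominator with rarely cited papers does dilute it; a toy calculation with entirely made-up numbers:

```python
def impact_factor(citations_to_recent, items_published_recent):
    """Journal impact factor: citations this year to items from the
    previous two years, divided by the number of such items."""
    return citations_to_recent / items_published_recent

base = impact_factor(500, 100)     # a selective journal: IF = 5.0
# same journal after adding 100 low-interest papers drawing 10 citations
spammed = impact_factor(510, 200)  # IF drops to 2.55
print(base, spammed)
```

The same arithmetic applies to any per-paper citation average, which is why volume alone doesn't inflate prestige metrics.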

~~~
lutorm
_How could scientists paying to publish their work incentivize them to publish
more?_

I assumed that the point was that the _journal_ is incentivized to publish as
many articles as possible, and hence to lower review standards.

~~~
cbkeller
Sure, but if the journal publishes too many articles it'll lose prestige.
Nature publishes what, 8% of submissions? And that's apparently what they
consider financially optimal -- they're a for-profit corporation, they could
publish more if they wanted to. I guess the question is just whether this
applies equally to open-access and traditional subscription journals.

~~~
lutorm
But Nature doesn't make its living by publishing articles; it does so by
selling subscriptions. This requires having high "prestige" so that people
want to subscribe.

If you're just charging for _publishing_ articles, you don't care about
whether anyone reads them or about what your "prestige" is, since you don't
make any money off of that.

It's true that if I publish a paper it's better for me if it's read and thus
cited, but that matters far less than the difference between published and not
published at all. The entire problem starts with authors not being
incentivized to publish a few good articles rather than to churn out as many
as possible.

------
hardtke
I've noticed a trend across many scientific fields where publications are
basically irrelevant since nobody reads them. Status is conferred by high
profile talks, which tend to be invited. The material discussed in high
profile talks is eventually published in a journal or conference proceeding
(in the case of CS) and will get citations, but ideas only spread if they are
picked up by the program committees. The academics who publish many low-impact
papers are not really relevant.

~~~
Fomite
This is _not_ applicable to a lot of fields. CS and its ancillary fields are
generally the only ones I've encountered where talks and conferences are the
primary "currency". In biomedicine, talks only follow _after_ publications,
and the two are often intentionally paired.

------
buboard
I think we also need to facilitate the work of reviewers. With so many papers
and such a vast literature, reviewing is a PITA, and errors keep slipping
through the cracks. I'd like a simple wiki of "argument refutations", where we
could look up previous objections to arguments being made. This is useful work
that is currently being lost in the email archives of journal editors.

It would also help if we could open up science to people outside academia, and
begin the process of de-pedestalizing academia altogether, but not in an
unregulated, completely flat way: academic discourse cannot be done on
Facebook. We know it has to happen, and it will happen, but we pretend the
current situation can last forever. Academia is turning into a place that
sells indulgences.

------
ipsa
Television incentivizes forgettable reality TV, radio incentivizes meaningless
poppy music, and social media incentivizes bickering about the controversies
of the day.

But nowadays you can also use your TV to watch a French arthouse film, go to
YouTube and be recommended a Japanese jazz album from 1974, or join the
conversation on Twitter and ask questions of leaders in their respective
fields.

Now you can swim against the current: force all these power- and money-hungry
institutions to fundamentally change their tune. Or you can find one of the
many new waves to surf. Life is good, science is good, progress is good. The
choice, as a scientist, is up to you. Can't write one groundbreaking paper a
year? Write two or three mediocre ones. No amount of foundational change is
going to make you a groundbreaking scientist. And change the channel once in a
while: the world is only getting bigger and more connected.

------
Vaslo
I agree with this article and have always thought about this but never voiced
it.

I think a few things could improve the quality and discovery of published
papers:

-After a certain number of publications, could it be mandatory for a random sample of that author's publications to be tested? I get that there is a limit on resources, but some advanced undergrads could do this with guidance.

-I would love to see some version of a journal of failures. That is, well-intentioned research that had poor outcomes. It happened so frequently in chemistry research that my compounds were useless or the methods to synthesize them did not work, and it would have been helpful to document that. Unfortunately, there is no “Journal of Failed Chemistry.” Only research that ostensibly makes a contribution with a clear outcome gets published. So much time is wasted experimenting when you could save another scientist time and encourage them down another path.

~~~
djtango
>It happened so frequently in chemistry research that my compounds were
useless or the methods to synthesize them did not work, and it would have been
helpful to document that.

This mirrors my experience where a lot of the post-docs carried war stories of
things like lab/country-specific humidity playing a role in synthetic methods
succeeding (or failing). There were a lot of dark arts/tricks of the trade
that people carried around with them: stuff like going that extra mile to dry
things of water super thoroughly (even if it was not mentioned in the paper we
were referencing).

~~~
ISL
Many "dark arts" aren't known as dark arts to the people who know them.
They're just the things you always do, because if you don't do them, nothing
works.

An approximate CS analog: Writing great commit messages, and using an SCM.

You don't have to do them, and nobody writes them up because those who know
regard them as trivial, but if you don't do them, almost nothing works.
Everyone who knows what they're doing does them.

~~~
djtango
>An approximate CS analog: Writing great commit messages, and using an SCM.

This is definitely not the analogue. The analogue is always doing a clean
install of the OS before running your experiment. Or only ever using Arch
version x.y.z for replicating lab 1 and maybe a.b.c for replicating lab 2.

It's knowing all the magic undocumented JVM flags ahead of running the
application.

Some you know to use as part of war scars/best practices. Some are just pure
inside information from working in that lab or having a personal relationship
with people in that lab.

------
rwj
There are other ways in which the incentives are misaligned. Granting agencies
that prefer to fund researchers who nearly always "succeed" in proving their
hypotheses will find that research proposals become low-risk and
low-information. In at least one sense, the ideal experiment is one with a
50/50 chance of failing.

Disconfirmation is also important.
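The 50/50 intuition has an information-theoretic reading: a binary experiment's expected information content (its entropy) is maximized when success and failure are equally likely. A small sketch:

```python
import math

def bits_of_information(p):
    """Binary entropy: expected information (in bits) from an
    experiment that succeeds with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a foregone conclusion teaches us nothing
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.05, 0.5, 0.95):
    print(f"P(success) = {p}: {bits_of_information(p):.3f} bits")
# The 50/50 experiment yields a full bit; near-certain ones yield far less.
```

A proposal that is nearly guaranteed to "succeed" thus carries little information, which is exactly the low-risk, low-information failure mode described above.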

------
knzhou
I don't want to sound too triumphant, but over here in fundamental physics we
have decades-old safeguards in place against all these problems.

The big experiments don't have publication bias: they proudly say exactly what
they did, even if 90% of the time there are only negative results, because
exclusions are important too. Experiments are inherently replicated, with
multiple independent simultaneous experiments (LHC) or multiple independent
analyses (EHT), data blinding throughout, and even occasionally a further
layer of blinding using decoy signals (LIGO). The statistical standards for
discovery are, in terms of p-values, about 10,000 times more stringent, and
even still people are moving away from p-values entirely.

The resulting publications are put out for free, publicly, on the arXiv. Later
they are submitted to a relatively small family of low-cost journals, which
everybody knows the reputations of.

Hopefully some of these lessons can be adapted to other fields.
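For context, the "5 sigma" discovery convention behind that stringency claim translates to a p-value via the normal tail function (a sketch; exact figures depend on one- vs. two-sided conventions):

```python
import math

def p_from_sigma(sigma, two_sided=False):
    """p-value for a z-score of `sigma` standard deviations
    (normal tail probability)."""
    p = 0.5 * math.erfc(sigma / math.sqrt(2))  # one-sided tail
    return 2 * p if two_sided else p

# the common ~2-sigma threshold vs. the physics discovery threshold
print(f"~2 sigma (two-sided): p ≈ {p_from_sigma(1.96, two_sided=True):.3f}")
print(f" 5 sigma (one-sided): p ≈ {p_from_sigma(5):.2e}")
```

The one-sided 5-sigma tail comes out around 3 × 10⁻⁷, many orders of magnitude below the p < 0.05 convention common elsewhere.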

~~~
orbifold
CERN and the LHC are to me prime examples of bad science. You build huge
machines at incredible cost, wasting hundreds of thousands of man-hours of
brilliant young people’s time. They are paid almost nothing (a checkout clerk
at a Swiss supermarket makes more than a PhD student at CERN). Then you string
along a subset of those for years, exploiting them for further cheap labor,
somehow making them believe they are “lucky” to get that opportunity (a
postdoc at CERN pays a fraction of what you can make at Google Zurich). In the
time between experiments people only ever see simulated data, leading to a
rude awakening when actual experimental data comes in (cf. ALICE’s disaster
of an analysis pipeline). Then there are literally decades of overpromising on
groundbreaking discoveries right around the corner (supersymmetry, extra
dimensions, dark matter). Defunding the super collider in the 90s in the US
was probably one of the best science policy decisions they made.

~~~
oefrha
> They are paid almost nothing (a checkout clerk at a Swiss supermarket makes
> more than a PhD student at CERN). Then you string along a subset of those
> for years, exploiting them for further cheap labor, somehow making them
> believe they are “lucky” to get that opportunity (a postdoc at CERN pays a
> fraction of what you can make at Google Zurich).

That’s not unique to LHC, or CERN, or physics. It’s a general problem of
academia, where PhDs and postdocs are paid a pittance compared to what they
could otherwise earn in the industry. This problem is especially bad in high
energy physics of course, since jobs are especially limited, and it’s the
brightest people competing against each other, who could easily land jobs on
Wall Street or Silicon Valley.

> Then there are literally decades of overpromising on groundbreaking
> discoveries right around the corner (supersymmetry, extra dimensions, dark
> matter).

The Standard Model works exceedingly well at the LHC. No one was actually sure
about BSM (beyond the Standard Model) physics, so there was no “promise”
really. Or the promise was: we may see something interesting, or we may
disprove some otherwise interesting theories.

> Defunding the super collider in the 90s in the US was probably one of the
> best science policy decisions they made.

Cancelling SSC was such a stupid waste of labor and money, it’s painful to see
someone touting it as a triumph. Two words: defense budget. Enough said.

Disclosure: I worked for CMS for a while. (Not physically at CERN; was doing
data analysis for CMS in the U.S.)

~~~
orbifold
I'm pretty positive that people made firm predictions and bets that the LHC
would see superpartners (because otherwise the "naturalness" argument would go
away). Whether it would be the MSSM or something else was up for debate, but
people thought it more likely than not that they would see them if the Higgs
was found in the predicted energy range. In any case, physics departments at
top universities all over the world are stacked with phenomenologists who made
their careers working out these predictions.

Anderson made an argument against the SSC ([https://www.the-scientist.com/opinion-old/the-case-against-the-ssc-63734](https://www.the-scientist.com/opinion-old/the-case-against-the-ssc-63734)),
which I pretty much agree with. Science funding is finite, and so is a
country's physics talent.
Many really good students are funnelled into dead-end careers in high-energy
physics (whether theoretical or experimental). It's just a huge waste of human
potential, especially given how ruthlessly they are exploited. I know people
in the field; a hiring decision between three people was recently described to
me as a choice between a 'social case' and two competent workers, one of whom
happens to be a friend of mine.

Funnily enough, lots of institutions doing fundamental research in high-energy
physics either also do military research or receive military funding. Most of
Witten's work, for example, has been funded by the Department of Energy. The
whole reason CERN was built in a neutral country was that people worried a
postwar nuclear arms race would break out otherwise. In France, one of the
major institutes contributing to particle physics (the Saclay Nuclear Research
Centre) also developed the country's nuclear arsenal and is located next to a
major arms manufacturer's research center (Thales).

~~~
oefrha
> more likely than not

Yeah, “more likely than not” isn’t a promise. Sure, a lot of people firmly
believe in their theories, so me saying “no one was actually sure” seems
wrong, but I was talking about a different kind of “sure”. The community
overwhelmingly agreed on SM, whereas there were huge divides on where the BSM
bets were, or even on roughly the same bet, where SUSY scale lies, etc.

> Many really good students are funnelled into dead-end careers in high-energy
> physics, ...

I was one of the funneled. We signed up because we were drawn to the
fundamental questions, not because of glowing job prospects, which were laid
out plainly for anyone paying a little bit of attention. Cancelling things and
decreasing funding certainly didn’t help; it only led to worse
“exploitation”, in your words.

> Funnily enough, lots of institutions doing fundamental research in
> high-energy physics either also do military research or receive military
> funding.

Institutions do lots of things. Most also receive funding for medical
research, so?

In general, modern day HEP in and of itself hardly contributes anything to the
military sector. On the more practical side, powerful magnets, computational
methods etc. should be useful in military applications, but a lot of different
areas have such second-order effects. Nevertheless, I’m neither knowledgeable
nor enthusiastic about killing machines, so I could be missing some obvious
connections.

> Most of Witten's work for example has been funded by the Department of
> Energy.

Why would you put all DOE funding under the defense budget? It’s not the DOD.
Or would you characterize all renewable energy spending as military spending
too?

> In France one of the major institutes contributing to particle physics
> (Saclay Nuclear Research Centre) also developed their nuclear arsenal...

Particle physics has largely moved on from nuclear physics. (I know, many
particle physicists are still interested in cold fusion etc.)

------
rblion
> Self-regulation by scientists of decades and centuries past has created
> modern science with all its virtues and triumphs. However, much like the
> bankers of the early 21st century, we risk allowing new incentives to erode
> our self-regulation and skew our perceptions and behavior; similar to the
> risky loans underlying mortgage-backed securities, faulty scientific
> observations can form a bubble and an unstable edifice. As science is
> ultimately self-correcting, faulty conclusions are remedied with ongoing
> study, but this takes a great deal of time.

> Unless and until leadership is taken at a structural and societal level to
> alter the incentive structure present, the current environment will continue
> to encourage and promote wasting of resources, squandering of research
> efforts and delaying of progress; such waste and delay is something that
> those suffering diseases for which we have inadequate therapy, and those
> suffering conditions for which we have inadequate technological remedies,
> can ill afford and should not be forced to endure.

I agree.

------
Gatsky
Yes, well, every solution we come up with invariably favours well-established
scientists... how can a new scientist get anywhere if it takes 10 years to
produce substantive work?

There is a conceit in the final paragraph, where it is implied that we are
missing out on cures for diseases etc. due to wasteful scientific endeavours.
This is not necessarily true. There have been many successes in the current
era of medical science. Generally these are driven by technological advances
such as monoclonal antibodies or next-generation sequencing.

------
dr-detroit
Applied research has always been in fashion; theoretical research is always
rare and underfunded. Many books have been written on this subject; I
recommend "A University for the 21st Century".

People in glass offices run everything in 2019, and the more of an expert you
become in your field, the more professional managers will feel you are a pest
to be silenced/hated/removed.

------
cbkeller
The article makes some fair points, but strangely seems to imply that open
access is somehow opposed to rigorous peer review, which certainly leaves an
odd taste in my mouth.

> _Of course, scientific publication is subjected to a high degree of quality
> control through the peer-review process, which despite the political and
> societal factors that are ineradicable parts of human interaction, is one of
> the “crown jewels” of scientific objectivity. However, this is changing. The
> very laudable goal of “open access journals” is to make sure that the public
> has free access to the scientific data that its tax dollars are used to
> generate._

------
WhompingWindows
First issue: publications. I think this is not a big problem in the USA
relative to China. Is there evidence of American scientists who have greatly
benefited in reputation and prestige, despite doing very shoddy work? Many
scientists have been rebuffed and rejected from grants/awards due to not
having enough publications. American scientists, though, are more attuned to
the quality and integrity of journals in which publications appear. Whether
for tenure or grants, a Nature or Science or Cell paper will mean a LOT, and
many scientists evaluating other scientists wouldn't care much about a journal
publication with an impact factor under 1.0.

Meanwhile in China, there are numerous pay-to-publish article-factory
journals; you can put your shoddy work in those and pump up your publication
count easily. Surely these exist in the USA too, but are career scientists at
major institutions utilizing these shady Chinese journals? There is evidence
that some of these Chinese journals are publishing straight-up BS, which is
especially easy to do with data analysis, where you could "clean" your data
easily. Perhaps it is a cultural or political difference, but I don't see
nearly as much rigorous self-reflection from Chinese scientists on this front.

Second issue, grants: publications are a small slice of this story. Science
departments (not humanities) are MAJOR revenue generators for their
universities. My university took 1/3 of each grant straight off the bat to
cover overheads like shiny facilities, administration, and marketing.
Meanwhile, the scientists themselves may earn large salary bonuses or
substantially expand their tech/staff when they hold substantial grants. So
getting a grant is great for you personally, and it improves your chances of
further grants.

So is it really that surprising that there may be pressure to publish at all
costs, to p-hack, to reach for those low-impact journals despite their lower
reputation, given that universities and scientists BOTH benefit massively,
financially, from this incrementalism? Does it really pay to reach for
pie-in-the-sky, fundamental sea changes in your field? It seems like a
high-variance, high-risk strategy that only very bold, well-funded,
devil-may-care scientists would employ.

~~~
rocketflumes
I can't comment on China or other fields, but in the US, for AI and robotics,
in which I do research and publish, there is definitely a growing trend of
over-emphasis on novelty research that disregards reproducibility in favor of
"wow" factors and fancy demo videos. A lot of highly cited research
papers/labs tend to be the most heavily promoted ones (on Twitter, in the
press, etc.), and they're not necessarily impactful/useful for the rest of the
field.

~~~
K0SM0S
Amidst this worrying trend of shallow research, begging for petty cred and
grants more than knowledge, you have to wonder what has become of ethics, of
integrity in science. Globalized academia needs some soul-searching, IMHO.

~~~
SolaceQuantum
_" you have to wonder what has become of ethics, of integrity in science"_

When was there any more ethics or integrity in science than at any other time?
The AIDS crisis was a shitshow of choosing prestige and recognition over the
lives of a generation. The discovery of DNA was built off a woman who was
hardly recognized. Henrietta Lacks' cells. The syphilis experiments.

~~~
dekhn
The discovery of DNA was made by Friedrich Miescher, a brilliant Swiss
biologist. At the time, nobody knew what the nuclein he purified did (and the
results were considered so surprising that his own thesis advisor redid all
the experiments manually before letting the data be published).

What you are referring to is the use of Rosalind Franklin's X-ray fibre
diffraction images by Watson and Crick to elucidate the 3D structure of DNA,
and, depending on the accounts you read, whether she got due credit is
arguable. She did publish in the same Nature journal issue as W&C
([https://www.nature.com/articles/171740a0.pdf](https://www.nature.com/articles/171740a0.pdf)),
she got credit for the photos (see the acknowledgements in the W&C paper,
[http://www.nature.com/genomics/human/watson-
crick/](http://www.nature.com/genomics/human/watson-crick/)), and she was dead
by the time the Nobel Prize decision was made (so she could not have received
the prize).

I understand many feel very strongly that she was cheated, and while I do
believe she was definitely slighted and not given enough credit, the
underlying story is fairly complicated. I recommend reading both The Dark
Lady of DNA and The Eighth Day of Creation and then forming your own opinion.
Personally, I found her diaries, which she willed to Aaron Klug and which were
used in the writing of The Eighth Day, really illuminating.

~~~
SolaceQuantum
Sorry, you're right, the story is complicated. But that it is so complicated
only raises the question further: _where are we getting the idea that science
was full of ethical humans doing purely ethical things_?

------
angry_octet
A previous Great Leader decided that he would improve our research output in a
simple way: we should stop doing humdrum work, and only do work which would
succeed famously.

Of course, there is some difficulty in determining which work is going to be
brilliant before it is done. But he decided that he could do that, seemingly
based on how PR worthy the proposal was.

At any rate it did immense damage and set back deep research by years.
Naturally he left when he could wangle a better job elsewhere.

------
Dowwie
Every time an arxiv paper gets published to HN, I wonder whether it's even
worth reading until findings are peer reviewed..

~~~
enjoyyourlife
*worth

~~~
Dowwie
thx

------
SirLuxuryYacht
How do you quantify rigor?

~~~
dredmorbius
In mortises?

------
xvilka
This could be partially solved with technology: all data and programs should
be self-contained and interactive, the editing and modification process should
be visible to the general public, and papers should be better connected to one
another than the current citation mechanism allows.
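One small, concrete piece of the "self-contained data" idea above is already cheap to do: pin the exact dataset a paper used by its content hash, so any reader can verify they are re-running the analysis on the same bytes. A minimal sketch (the file name and contents here are hypothetical stand-ins):

```python
import hashlib

def dataset_fingerprint(path, algo="sha256"):
    """Return a hex digest of the file's contents, streamed in chunks so
    large datasets don't need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a tiny stand-in dataset, then fingerprint it. A paper would publish
# this digest alongside the data download link.
with open("measurements.csv", "wb") as f:
    f.write(b"trial,value\n1,0.42\n2,0.57\n")

print(dataset_fingerprint("measurements.csv"))
```

Content-addressed storage systems build on exactly this primitive; publishing the digest makes silent post-hoc edits to the data detectable.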

------
cde-v
Market demands Bad Science, market supplies Bad Science. I do not see a
problem here.

~~~
glofish
A cynical view - alas it captures the truth.

It is very easy to point fingers, to blame it on funding, blame it on
journals, blame it on media, but in the end:

\- scientists decide who gets funded

\- scientists decide who gets published

\- scientists make exaggerated claims in the media

The source of the problem is the scientists themselves, who do not understand
the damage they are doing to their own enterprise.

I do foresee downvotes, because scientists do not like this idea at all :-)

------
notadoc
Not surprising when science has become a tool for political and industrial
purposes.

------
DayDollar
Actually, I think some part of this bad science and low innovation rate is
deliberate. Governments everywhere have realized that, unless they have total
social control, people cannot be trusted with exponentially powerful
technology of all kinds. So I predict that unless we as humans are totally
surveilled and controlled, we will not see great leaps of technology in the
foreseeable future.

~~~
rapjr9
When I was doing laser spectroscopy research as an undergrad my prof said
there was an informal physics "association" that had taken an oath not to
develop anything like another atomic bomb. He asked if I wanted to join and I
said yes. So I certainly see some basis for people being scared of fundamental
advances in science and technology and wanting to control and place limits on
what is developed. There have been a lot of advances in astrophysics (little
potential harm to things here on Earth) and not much research in gravity (big
consequences if we could actually control gravity.) That doesn't require
"total social control", just a strong influence at key checkpoints (like
funding). Whether there is an active effort to reduce investigation in
specific areas of science might be discoverable by a meta-analysis of papers:
identify which areas of research tend not to get funding, and see whether that
correlates with their potential for social disruption.
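The meta-analysis sketched above ultimately boils down to a correlation between two per-field scores. A toy illustration of that final step, where the field names and all numbers are entirely made-up placeholders, not data:

```python
from statistics import mean, stdev

# Hypothetical per-field scores: funding success rate vs. an (imagined)
# 'social disruption potential' rating. Every value is a placeholder.
fields = {
    "astrophysics":    (0.30, 1.0),
    "gravity_control": (0.05, 9.0),
    "gene_editing":    (0.12, 7.5),
    "materials":       (0.28, 2.0),
}

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

funding, disruption = zip(*fields.values())
print(f"r = {pearson(funding, disruption):+.2f}")
```

With these invented numbers the correlation comes out strongly negative; a real study would of course need defensible disruption ratings and a much larger, systematically collected sample of funding outcomes.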

------
jplayer01
At this point, what even is the point of good science if nobody believes it?

~~~
tasty_freeze
If what you said was literally true, that nobody believes it, then there would
be little point. But "nobody" is an exaggeration. Scientists are likely to
believe it, or at least are in a much better position to evaluate the truth
and utility of it, and build upon it.

When it comes to abstract science, e.g., the ultimate origin of the universe,
it doesn't materially matter whether it is believed or not. But if someone
produces a cheaper or longer-lasting battery, the proof is in the pudding, and
that basic research will have made a difference.

Then there is the category of hard science which is disbelieved because
moneyed interests wish to discredit it, and/or it has become a political
shibboleth to discredit the science. Those aren't due to bad science.

~~~
jplayer01
The problem is that significant portions of the population refuse to believe
anything that comes out of science. This directly affects what politics
decides to do about real problems like climate change. This isn't some
abstract issue, it's something that's affecting us on an everyday level
because people are unwilling to listen to the evidence.

------
stevenwliao
We need to incentivize thorough reviews as well.

------
klyrs
I've always found it distasteful that the Clay Math Prize offers a million
dollar reward for proving that P=NP but nothing for revealing the truth should
it be otherwise.

~~~
bonoboTP
I think you misunderstood something. The problem itself is sometimes referred
to as the "P=NP" problem, but that's just a name, it is also called the "P vs.
NP" problem. If you can prove either P!=NP or P==NP, you get the prize.

Can you give a reference to back up your interpretation?

~~~
klyrs
> Can you give a reference to back up your interpretation?

Alas, no. I do recall being astonished at my claim, and then being convinced
by a colleague (which was backed up by plain language on the CMI page, in my
memory...) but now that I'm re-reading (current and archive.org'd) I cannot
find such a thing. Disturbing. Yet relieving. Fuck my memory.

