
Cracking down on research fraud - apsec112
https://undark.org/2020/07/23/cracking-down-on-research-fraud
======
gwerbret
These issues of research fraud come up often, but the root of the problem is a
bit more subtle.

In North America at least, biomedical research labs operate largely as
fiefdoms of the individual principal investigators (PIs). The actual research
work falls almost entirely on the backs of grad students and postdoctoral
fellows. The grad students need to generate "good" data in order to graduate,
the postdocs need the same in order to gain real employment (with only about
10% gaining faculty positions themselves after many years of postdoctoral
training). The PIs need such "productivity" from their trainees in order to
gain the funding that keeps the labs going. The PIs themselves face success
rates in grant applications that are often 10% or lower and, particularly
early in their careers, their job security depends almost entirely on their
ability to secure grant funding.

These competitive pressures create enormous incentives for otherwise
conscientious people, all the way along the hierarchy described above, to
fudge their research data. Research fraud is thus a direct outcome of a
fundamentally-broken approach to the structure of research funding.

There are exceptions to that approach, however. The not-for-profit Howard
Hughes Medical Institute [1], and to an extent the intramural research
programs of the NIH [2], offer funding for PIs to do what they do best,
without the pressure of competing for scant funds. Perhaps not
coincidentally, some of the best science comes out of these sites.

1: [https://www.hhmi.org/scientists](https://www.hhmi.org/scientists)

2: [https://irp.nih.gov/about-us/what-is-the-irp](https://irp.nih.gov/about-us/what-is-the-irp)

~~~
dasudasu
The topic of funding is a recurrent one in these types of discussions. There
are always more people wanting to do science than there is funding available,
so there needs to be a process for allocating resources that takes competency
into account. Just granting funds at random, or equally, is probably very far
from optimal if it means that an exceptional PI cannot even push his/her
research to the next level. Research funding is a bit like a command economy
in that there are no market forces to sort it out naturally, so of course it
ends up being especially thorny.

~~~
wisty
> Just granting it at random or equally is probably very very far from optimal
> if it means that an exceptional PI cannot even do the things to push his/her
> research to the next level.

What does "far from optimal" mean? Let's pretend the allocators are not god-
like decision makers (if they are, let's fire all the scientists because our
god-like decision makers can do the research for them).

Let's just accept that if we have a large bunch of diverse potential
researchers, and limited funding, the best way to get a breakthrough is by
funding a random sample of these researchers.

Maybe this sample should be stratified, but it should not be stratified in
such a way that it incentivizes chasing grants (effectively making the grant-
allocators the real PIs, with all their flaws and biases systematically
driving the process, but without any accountability or management effort on
their part).

We can't get optimal without unlimited wisdom, and if we had unlimited wisdom
we wouldn't need scientists.

~~~
TomMarius
Why do you need unlimited wisdom to rate the merits of the people?

------
tlb
Estimating from my own reading, 1% of research is fraud and 80% is worthless
for other reasons. Numbers vary between fields.

If that's the case, what's the argument for why we should spend time doing
something about the 1%? Solving 100% of the 1% wouldn't change the overall
situation much.

Possible arguments include:

\- Fixing the 80% is hard, but fixing the 1% is satisfying (to the
aggressively conventional-minded, at least).

\- The 1% is wrong in a more harmful way than the 80%. Perhaps falsifying data
is worse than hand-waving conclusions.

So if the maximum upside is 1% of wrong research removed, and the downside is
quenching some fraction of the good 19%, it's probably better to leave it
alone.

~~~
ketzo
I think the disproportionate harm argument is a big deal. A _single_ high-
profile case of fraud undermines trust in the entire field, and that’s
critical to maintain.

~~~
galimaufry
I don't think that's true. There have been frauds in materials science, but
people still trust it. The fields that are untrusted are the ones that are so
hard that it's plausible that the conventional wisdom might be wrong even if
everyone is honest.

------
logicslave
The root of the problem is that we allow people with high esteemed credentials
to run society. They jumped through a hoop.

"You got straight As at 14 - 18 years old and got into an Ivy League school as
a result? Here, run this venture fund."

"You got a PhD in Economics with a good publication by p-hacking your secret
data? Here, take a run at the Fed with power over the US economy. Your PhD
shows that you are the man for the job."

"You got a PhD in ML by making some incremental improvement on an already
existing model and then doing massive hyperparameter tuning? Here, become
director of research at this big corporation."

Research will never be fully productive in this system, there are too many
people who have too much to gain from gaming the publication system.

~~~
opportune
All three of those examples are missing a step of 5-20 years in the middle.
Nobody is _running_ a venture fund as a 22-year-old Ivy grad (barring someone
who's taking over for a family member, but in that case it has nothing to do
with being an Ivy grad), nobody is running the Fed right out of a PhD, and
nobody is a director at a big corporation right out of a PhD.

Presumably once those people are in such high-power positions, they also have
a track record of real accomplishments behind them; it's certainly possible
they've lied and cheated their entire career but it's definitely less likely
they'll make it that far that way.

~~~
logicslave
Part of the point is that the gap in the middle often doesn't matter. I've
hung around the Stanford crowd, and as I enter my thirties I am awestruck by
the opportunities they have regardless of what they have done since
graduation. The credential is almost all that matters, as long as they don't
mess up massively.

A common path is:

Graduate Stanford => Work at McKinsey => Get hired into VC.

Graduate Stanford => Raise a $15 million Series A with your buddy => No
matter what happens, you will end up rich.

It's not a cynical opinion; I have seen it myself, and I am doing very well
for myself. I worked under an Economics PhD who was number one in his class
(top-5 PhD program) and graduated top of his undergrad at UPenn. His
incompetence relative to his credentials shattered my respect for the way we
allocate positions of power.

~~~
apsec112
I think most people who raise series A and then fail don't get rich. I've
certainly known people who raised more than that, couldn't make the business
work and then wound up personally bankrupt.

~~~
logicslave
Possibly, but when you raise it at 24 or 25 years old it looks extremely
impressive and sets you up for upper management at other companies.

------
DoreenMichele
So, some years ago, Temple Grandin wrote a set of standards, McDonald's
adopted them, and because McDonald's buys so much beef, they became the de
facto new standard for the beef industry. And it's a set of standards that
helps beef producers succeed rather than a "gotcha" trying to find who is
guilty.

And that's the way you make the world a better place. Not by looking for new
and creative ways to nail "bad guys" to the wall after you started from an
_assumption of guilt._

I don't like this article. I don't like it at all. My feeling is that it was
written as an emotional response to the pandemic and it is getting traction on
HN for the exact same reason.

People are stressed out and they are looking for a villain to go after. It
won't fix the real problem -- the pandemic -- but that's how people tend to
behave in a crisis.

And it's a slippery slope towards a more draconian world. It doesn't make
things better.

~~~
_Microft
The McDonald's story reminds me of the Brussels effect [0], where legislation
in the EU extends (not by law but in its effect) to other parts of the world,
because it is easier or cheaper to comply with it for all customers than to
treat EU customers and others differently.

[0]
[https://en.wikipedia.org/wiki/Brussels_effect](https://en.wikipedia.org/wiki/Brussels_effect)

~~~
haihaibye
So that's why I have to accept cookies all the time.

~~~
_Microft
No, this is either because they have no idea what they are doing, or because
they like punishing users for the EU not letting them have their way with
user data, while pretending this behaviour is forced by consumer-unfriendly
EU regulations.

It's perfectly fine to set cookies that are necessary for providing the
service _without_ having to ask the user.

They only need to get one's permission if they want to do other things like
selling data to advertisers.

------
viburnum
A friend of mine was doing a chemistry phd when he discovered his supervisor
was falsifying data. If he had blown the whistle it would have ended his
career, but if he had played along his dissertation would have been based on
false data. Both options were bad so he just quit.

[https://www.statnews.com/2016/11/25/postdocs-grad-students-fraud/](https://www.statnews.com/2016/11/25/postdocs-grad-students-fraud/)

~~~
kovac
That's some guts to do the right thing. Respect.

~~~
ramraj07
I've never been faced with such a shitty situation, and I probably would have
done the same, but the right thing would have been to report it.

~~~
viburnum
If you report it then the grant money is gone and everybody in the lab loses
their job and all the PhD students are derailed.

~~~
chrstphrknwtn
That’s the fault of the person falsifying data, not the person reporting the
fraud.

~~~
jjoonathan
Yeah but it still happens, and to people you care about who didn't deserve it.

~~~
henriquemaia
That's true. Several people will probably lose their jobs.

But that is the kind of thinking that condemns whistleblowing for the wrong
reasons. To put it differently, people would rather keep pretending, if that
keeps their jobs, than do the right thing. And that makes everyone a
fraudster.

------
OminousWeapons
I'd argue that a lot of what the authors describe isn't actually "fraud", it's
more exploitation of an intentionally broken system.

I've definitely had authors on my papers who didn't do work. I've definitely
written papers for people who didn't do work. I've definitely done peer
reviews on behalf of PIs. Why do people do this? Because the regulators allow
it and they want the system that way. Why should who wrote the paper have any
impact on review? Why should it matter who the journal editors are? Why should
it matter where the paper is from? Etc...

~~~
MattGaiser
Because we use heuristics instead of actual analysis.

How many people citing a paper or reading it actually have time to think that
deeply about it?

The alternative to a world of trust and heuristics is a world where we are all
bogged down trying to make decisions.

This is heavily demonstrated in recruiting. Resume reading is about 7 seconds
a person. How long would it take if they spent a minute for each?

------
owenshen24
Interesting notes from the paper mentioned in the article:
[https://journals.sagepub.com/doi/pdf/10.1177/174701611989840...](https://journals.sagepub.com/doi/pdf/10.1177/1747016119898400)

\- "only 39 scientists from 7 countries have been subject to criminal
sanctions between 1979 and 2015 (Oransky and Abritis, 2017)" That seems...very
low.

\- "The Retraction Watch database—the largest of its kind—currently includes
more than 18,500 retracted articles (Retraction Watch database, 2019). A
recent analysis of 10,500 retracted papers up to 2016 showed that 0.04% of
papers are retracted." This is once again a lower-bound; presumably if you
account for additional authors and p-hacking the numbers go up a lot.
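As a rough sanity check on those figures, taking the quoted 0.04% retraction rate at face value implies an enormous total corpus, which helps explain why manual vetting cannot keep up (a back-of-envelope sketch; the numbers are just the ones quoted above):

```python
# Back-of-envelope check on the retraction statistics quoted above.
# Assumption: the 0.04% retraction rate applies to the same corpus
# that produced the ~10,500 retracted papers analysed up to 2016.

retracted = 10_500
retraction_rate = 0.0004  # 0.04%

implied_total_papers = retracted / retraction_rate
print(f"Implied corpus: {implied_total_papers:,.0f} papers")
# ~26 million papers, so even a tiny fraudulent share is a huge
# absolute number of papers to vet
```

Even if the true rate of problematic papers were 100x the retraction rate, that would still be only 4%, consistent with retractions being a lower bound.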

Pushing for replication and improved methodology can help, but some of these
issues seem to be related to scale. There are many more people outputting
papers than there are people willing to vet them (outside of peer review).
Furthermore, when you have many people researching hot fields, you should
expect false positives and overestimates to dominate published results, even
when everyone is trying to practice good statistical hygiene.
([https://journals.plos.org/plosmedicine/article?id=10.1371/jo...](https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124))

------
woofie11
Basic set of checks-and-balances:

* Preregistration and adoption of open science practices

* Public access to research results, methods, and data, with some exceptions (such as PII)

* Federally-funded universities can't use NDAs or non-disparagement agreements

* Federally-funded universities must respond to records requests under terms similar to FOIA (note that under FOIA the requestor pays costs)

* Federally-funded universities must adopt transparent governance

* Salary caps at federally-funded universities and affiliated organizations

* Conflict-of-interest laws with hard enforcement

* Federally-funded universities must publicly publish research misconduct and alleged research misconduct. The latter is tricky, since you don't want to smear the researcher without proof, but you also don't want to trust results.

This really needs reform.

~~~
seibelj
Salary caps? So the best people go into the private sector where they can
actually get paid?

~~~
wjn0
Agree, I don't think salary caps have much to do with the poor alignment of
incentives under discussion here.

------
DanielleMolloy
I've read that before 2003, humanity as a whole had published as many
scientific papers in total as were then published from 2003 to 2016.

So what happened around 2000? Who has turned the scientific mission into a
blind competition for superficial metrics? So many people in science I meet
(apart from the few who benefited from this system, and therefore were
selected by it) are frustrated by publishing for the sake of publishing (not
science) and the bad incentives this system creates.

Who has thought that these superficial metrics would improve anything about
science and why?

~~~
nitrogen
With any exponential growth curve, you can point to some semi-recent point on
the curve and say 50% is on the right side of that point. If the population is
growing exponentially (it is), and the fraction of the population in academia
is constant or growing (that I'm not sure of), then you could reasonably
expect the quantity of research to be growing exponentially as well. Maybe
it's just a curve with a doubling period of 13 years.
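The doubling-period point can be checked numerically: if output grows exponentially, its cumulative total is itself exponential, so everything before 2003 equals everything from 2003 to 2016 exactly when the doubling period is 13 years (a sketch; the 13-year period is the only assumption):

```python
# For exponential growth, output(t) ~ 2**(t/T), and the cumulative
# total up to time t is also proportional to 2**(t/T). So the total
# doubles every T years: with T = 13, papers published 2003-2016
# equal all papers published before 2003.

T = 13.0  # doubling period in years (the assumption being tested)

def cumulative(year, ref=2003):
    """Cumulative papers up to `year`, normalized so cumulative(2003) = 1."""
    return 2 ** ((year - ref) / T)

before_2003 = cumulative(2003)                           # 1.0
from_2003_to_2016 = cumulative(2016) - cumulative(2003)  # 2.0 - 1.0 = 1.0

print(before_2003, from_2003_to_2016)  # equal halves
```

So the "half of all papers since 2003" factoid requires no change in incentives around 2000 at all, only steady exponential growth.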

------
RcouF1uZ4gsC
I think one of the big things that you can do is split up the data gathering
and data analysis.

Kind of like we don’t trust the companies to audit themselves, instead we have
an outside firm.

In this model, a researcher would create a hypothesis and collect the data.
Their team would write the background and methods sections of the paper.

Then the entirety of raw data would be sent off to a third party for data
analysis and they would write the results part of the paper.

The original team would then write the discussion part discussing the
implications of the study.

All papers would be required to be made public.

The idea is that there would be specialized firms that do the analysis on the
raw data for everyone. These would be carefully audited and certified by the
government. They have no incentive to play statistical games. If they get
caught cheating, they have to pay for the analysis on all the papers they
handled to be redone by a competitor, and if any errors are found in a
paper's analysis it is automatically retracted. While this re-analysis is
going on, all affected papers would be quarantined with a note stating that
the analysis is being redone. This incentivizes the analysis firm to be
ethical, as well as incentivizing researchers to pick ethical analysis firms.

Separating data collection from data analysis would help align incentives
better.

~~~
OminousWeapons
You would just get people massaging or generating the data to fit a conclusion
prior to sending it out for statistical analysis. If people want to cheat they
can find a way. The only way to really do what you are getting at is to
construct core labs or CROs to run all experiments on behalf of the
investigators. This is not infeasible in many cases (and is already done in
narrow ways) but it requires hiring staff scientists to run every experiment
rather than grad students or post docs and costs / complexity will explode.

The real way to defend against a lot of fraud is to force people to submit
actually detailed methods sections so experiments are legitimately
reproducible (they largely aren't now). This would catch a lot more fraud
quickly, although even this won't fully work as some experiments are simply
too costly to reproduce for validation purposes (eg animal studies).

~~~
guerby
This.

And also actually fund researchers who do reproducibility work. Maybe even
fund specialized teams that do only reproducibility work.

------
Gatsky
Research fraud gets a lot of attention because it is so black and white. But
it is a symptom of larger problems. One issue is that the pace of progress is
slowing, and as a result incremental gains are more prominent. This is fertile
ground for fraudsters, as they can produce results which are plausible enough
while seeming to be an important contribution to the field. All fraud fits
into this category. Nobody makes a new grand unified theory of everything
which they know is bogus. That would be too much work for a start.

The other issue is the huge expansion in university size. Most of the fraud
I've seen or heard about happens in university research departments, which
shows you the importance of their incentive structure. One can make things up
and not only succeed, but do better than your competitors in this research
setting, AND get a tenured position with life-long security. All competitive
fields where achievement receives external and highly persistent rewards
suffer from this problem, whether it be sport and performance-enhancing
drugs, Ivy League university admissions, or even venture capital funding
(Theranos).

The natural response is to ask for more regulation and structural change in
how research is conducted eg pre-registration, different statistical standards
etc. But this has the major disadvantage of making life harder for the honest
people. It also requires the creation of some parallel work force to handle
all the checking. Research is already so difficult. Paradoxical effects, where
such measures increase fraud, are definitely possible.

There will never be zero fraud. The aim should be to change how research is
done to make the experience more humane, train and mentor young scientists
carefully and avoid perverse incentives. As far as I can tell, nobody has any
idea how to do this. Instead they want to create investigatory bodies which
will siphon off money that could be used for research, and then ruin lives
pursuing some key performance indicator like N successful fraud cases per
quarter. This experiment was already run in the USA with the Office of
Research Integrity, and it failed. Malcolm Gladwell, who I am not a fan of in
general, has a good podcast about it [1].

[1] [http://revisionisthistory.com/episodes/28-the-imaginary-crimes-of-margit-hamosh](http://revisionisthistory.com/episodes/28-the-imaginary-crimes-of-margit-hamosh)

------
DrNuke
Bluntly put, peer review is an overwhelming, unpaid, unsatisfying, time-
expensive task. The only step forward is to force the release of both data
and implementation, at least for the highest-ranked journals, though this may
open more unpredictable cans of worms. On one side is the elitist academy
model, which no longer works; on the other, the democratisation of research,
which is still just a torrential flow of noise.

------
brownbat
You can't really improve this until the lead author doesn't have a final say
on how to treat outliers.

In any given study, there are going to be hundreds of special cases in the
data that you didn't anticipate, and you have to decide whether to include or
exclude them.

Any researcher will subconsciously be more sympathetic to arguments to exclude
subjects that go against the principal theory, and less so to subjects that
confirm it.

And it's a battle of reasonable arguments, most of the cases aren't bright
line fraud or misconduct, they're just humans finding some arguments more
compelling and the impossibility of escaping our own biases. (And yes,
sometimes it's fraud, but fraud is just the tip of the iceberg if we're
talking about genuinely improving the reliability of scientific findings.)

Preregistration is helpful, but more and more I'm convinced that the only way
to do proper science would be to completely disaggregate study design from
study execution.

------
lrnStats
Similar to research fraud, I want the medical field examined for anti-science
fraud.

I caught my physician recommending an expensive and dangerous surgery that
could have been done by a dentist or surgeon. I asked if there was data; she
said yes. There was no data. And the trend was toward using lasers rather
than surgery, since it's safer. I confronted her and she said:

"If you ask a physician, they will recommend a physician. If you ask a
dentist, they will recommend a dentist."

This physician used factionalism rather than science.

I imagine this has happened on a massive scale.

------
lettergram
I think there is outright fraud and subtle fraud.

For instance, most mouse studies specifically as it relates to aging, drug
safety, and cancer research should be thrown out. This is well covered here:

[https://m.youtube.com/watch?v=pRCzZp1J0v0](https://m.youtube.com/watch?v=pRCzZp1J0v0)

That’s not the only issue, but it’s a known issue that everyone is ignoring,
and it affects all studies using the most common lab mice.

~~~
Gatsky
This Weinstein mouse model thing is blown completely out of proportion, and is
only so prominent because his own brother Eric put it on The Portal.

The point is that everyone knows mouse models are imperfect, but they are
still useful. Pointing out one of the ways they are imperfect is fine.
Pretending this is a nobel prize winning earth shattering discovery that is
suppressed by the establishment is just annoying. Research about how mouse
models are imperfect comes out all the time, eg there was a paper about how
the relatively sterile lab environment changes the murine microbiome and
immune system. Why aren't these researchers on Joe Rogan's podcast?

~~~
aaron695
He specifically mentions that comments like yours are not helpful. This is
the kind of research fraud that is the problem in science we are talking
about: issues at hand are ignored or twisted so the paper mill can continue.

The concept is that all lab mice in the USA come from one laboratory and have
a common attribute that's different from wild mice and mice from European
suppliers.

And it's spelt out how, in theory, this different attribute could change
medical results.

Coming back with "we don't care; without proof it's not an issue" reflects a
deeply broken system, but that is exactly how current universities work.

Here's the actual segment (18 mins) -
[https://www.youtube.com/watch?v=ve4q-1D_Ajo](https://www.youtube.com/watch?v=ve4q-1D_Ajo)

Related (12 mins) -
[https://www.youtube.com/watch?v=8ygLNOt43So](https://www.youtube.com/watch?v=8ygLNOt43So)

~~~
Gatsky
I've heard everything he said. I'm well aware of the concept. Again, everyone
knows that all mice come from the same place (because it's written in the
papers), and that they are very different to wild mice or mice from elsewhere.

I also didn't say we don't have proof that it is a problem. Again, all models
are imperfect, but some are useful.

I am telling you what I see as the significance of Weinstein's findings as
someone that works in the field.

It is nonsensical to me that this particular concept is suddenly so prominent
because of Joe Rogan, whereas a multitude of other issues with research and
human translation are not talked about at all. To me what this is really
about is Weinstein's perception of the significance of his findings. He, like
a lot of scientists, thinks his work is hugely important and that he deserves
more credit. Others do not. This happens all the time. The only difference is
that he has found a way to disseminate his findings to a non-specialist
audience.

~~~
aaron695
You are still saying what Weinstein said you would say.

Weinstein has laid out a case. It makes theoretical sense. It is
quantifiable. He makes a good argument that it is massively significant. If
confirmed, it is fixable. If confirmed, it has implications for other fields.

This is in pop culture, anyone researching with mice should have heard of it,
so the fact no one can whip out an article disputing it says a lot.

Science is broken and scientists are not to be trusted. I think this is a good
test case how science deals with this and if it can actually move forward. It
needs a good rebuttal to address the issue.

~~~
Gatsky
I don’t think you are listening to me. I am not disputing his point. I think
it could well be valid and is certainly plausible. The idea is pretty clever.
Attempts should be made to understand it more and minimise the effect. Great,
add it to the list of problems with mouse models. It certainly isn’t at the
top.

~~~
aaron695
Honestly, the video is only 18 mins (9 mins at 2x) long. Just give it a watch.

He says it's the top. He explains why it is the top. It is logical and well
thought out with theoretical and practical evidence it is the top.

I'm happy to consider it's not the top.

But you have to actually say what is at the top.

------
jostmey
I’ve seen lots of “questionable” research, so much that I rarely read papers
published by other researchers because chances are the data is junk. And I’m
in academic biomedical research!

------
hlfy_hn
Makes me think of Hans Lehrach. Dunno why

[https://www.molgen.mpg.de/hans-lehrach](https://www.molgen.mpg.de/hans-
lehrach)

------
BurningFrog
Starting to think it's time to close down the university system and start a
new one...

~~~
pas
Sure, but how would you organize the new one?

~~~
BurningFrog
I'd just hope new alternatives would emerge as the old system gets out of the
way.

Then we'll see what develops.

Not a selling argument, I know :)

------
IceCreamJonsey
Coincidence for me to see this here, as a book by Dr. Stuart Ritchie was just
published about this very thing.

[https://www.amazon.com/Science-Fictions-Negligence-Undermine-Search/dp/1250222699](https://www.amazon.com/Science-Fictions-Negligence-Undermine-Search/dp/1250222699)

------
jcahill
Certain groups have been serious about the replication crisis for 10-15
years, but academic culture at large is simply not cut out to discuss fraud
in a 'street epistemology' sort of way, such as the way security researchers
might discuss cybercrime.

There's a wild amount of pushback to any amount of meta-criticism. But once
you get past that point, many roadblocks remain.

In particular, there's extreme bias for meta-statistical methodologies that
infer QRPs over other methods of investigation. Often, these methods aren't
strictly necessary in context, and afford the opportunity to turn the metasci
discourse into endless bikeshedding about the meta-framework rather than the
object of dispute.

Many interested parties will participate/perform in this discourse, but few
will sit down and really _look at_ things as simple as the logical structure
of the paper's claims, or even simpler problems with its content.

In the case of social psych, for instance, many problems lie with (a) stimuli
and (b) unexamined assumptions on the part of the researchers that a certain
manipulation holds, so these are important to assess. But extending critique
to these things is seen as "reviewer 2" behavior — uncollegial, unfair,
sniping, waaah etc.

Since "researcher x produces bad research, but evidence of fraud is only
circumstantial" isn't sufficient grounds for doing much of anything, little
comes of these efforts. When an investigation _does_ nail a fraudster to the
wall, well, so? The papers remain, the poisoned citation tree remains, the
culture remains.

More than anything, the pushback in the form of tone-policing is what gets me.
"Methodological terrorism" this, "reviewer 2" that. It shows how far from
consequences the gatekeepers are. If you're a young person, whole branches of
the academy have been pre-bankrupted for you. There's no hope there, short of
a research path that manages to avoid citing any prior literature. But don't
get tetchy with the grantlords!

Meanwhile, all around the US, real effects of this dogshit excuse for
scientific inquiry can be seen every single day. Police departments are
adopting new policies around known-bad implicit bias papers. These won't
work. _We know they won't work. We've known this for years._

What would make it stop? Every time cops kill an unarmed person, academics who
still haven't retracted their implicit bias papers get fined?

There's the rub. You can't do much to force the point. Effective, timely
measures would be fairly brutal ones, and academics aren't ready to admit this
to themselves. In some ways, COVID-19 may end up being one of those measures,
though it will disproportionately affect younger researchers.

------
cblconfederate
Equalize the field, radically remove research hierarchies. Liberate research
funding from pointless bureaucracies and 4-year grants and instead pay
scientists to work on whatever they like most. Let science be fun again

I always return to this article from David Hubel (of Hubel&Wiesel):
[https://www.cell.com/neuron/fulltext/S0896-6273(09)00733-8](https://www.cell.com/neuron/fulltext/S0896-6273\(09\)00733-8)

 _I arrived at his lab around noon and found him working alone, recording from
an anesthetized macaque monkey. I asked him when he had started the
experiment, and he answered “in the morning”, which I finally realized was the
morning of the day before. So he had worked, by himself, all the day before,
all that night, and that day until noon. What was typical, in that era, was
not only the long hours but the fact that the project was done by one person,
single handedly. The major papers were either by Mountcastle alone or in
partnership with one other person. The leader of the physiology department was
Philip Bard, but the idea that Bard should have asked to have his name on any
of Vernon's papers surely never occurred to anyone._

~~~
rumanator
> Liberate research funding from pointless bureaucracies and 4-year grants and
> instead pay scientists to work on whatever they like most.

Bureaucracy exists to ensure that funds are indeed spent on research instead
of jet skis and shiny toys, and hierarchies are established to focus work on
specialized topics pertaining to the state of the art of a field of research.

All your suggestions would do is ensure that less money is spent on research,
and that what little work is performed cannot be any form of deep dive or
continuous effort on a topic, thus yielding only low-hanging fruit.

And by the way, the minute a researcher complains that he needs help with his
research is the moment you get a hierarchy.

~~~
cblconfederate
Bureaucracy cannot guarantee the quality of research beyond the jet-ski bar.

Hierarchies are established to ensure that the PI gets citations from the
work of postdocs and postgrads.

Typical grants encourage shallow trend-chasing or incremental continuation of
the research of (often disinterested) senior PIs who simply get grants
because they have gotten a lot of grants.

The chase of the state of the art is just another incremental step; that's
shallow.

A research partner is not necessarily a subordinate.

