
Fund ideas, not pedigree, to find fresh insight - seagullz
https://www.nature.com/articles/d41586-018-02743-2
======
cassowary37
Over the past decade I've sat on 10+ NIH study sections, and chaired a few.
This commentary is spot on. Consider the response to a flaw in a grant. For a
newer investigator or one no one has heard of, there will be much hemming and
hawing about their ability to execute the project. This is code for,
"who is this person in our sandbox anyway?" For one of us older guys (yes,
unfortunately the pronoun is mostly right), the response is, "but I'm sure he
can get this done, he must have thought about this." Even a modest scoring
'bonus' for new investigators doesn't correct this bias.

On the novelty front, reviewers are trained to focus on finding flaws or
absence of detail or preliminary data. The presence of any of these requires
that we score down - so it's just about impossible for something truly new to
get funded. Until we do the project, we don't necessarily know what problems
we'll encounter or what the data will look like. The small number of NIH
grants aimed at funding high-risk work by earlier-career folks (eg, the DP2
mechanism) are a great development but a tiny fraction of the portfolio. The
key thing to recognize is that any move to distribute funding towards younger
folks or higher-risk work causes the old guard to scream bloody murder - "what
do you mean I can't have 5 R01s at a time?". For study sections, I've
observed first hand that old habits favoring established investigators and
low-risk, incremental work are hard to break.

~~~
dekhn
By the way, I'm one of those people whose grants you turned down (well, I
don't know if it was your study section). After several attempts at getting
R01s with smart new ideas and receiving no feedback and no scores, and seeing
the same proposals being funded a few years later for more experienced
applicants, I reasoned that I could get more done in industry.

I left for Google, and got more done in 20% time (research-wise) than I ever
did using 100% time in academia.

When I did sit on study sections, I worked hard to help NIH understand why
the older groups asking for closet clusters on a cloud grant weren't helping.
But NIH is very slow to move; it took at least 7 years to get to the point
where people could even apply for cloud credits instead of closet clusters.

~~~
cli
Is your current research done during that 20% time theoretical work (in
biology)?

~~~
dekhn
Mostly theoretical, see
[https://research.google.com/pubs/pub41893.html](https://research.google.com/pubs/pub41893.html)
although I have built a high throughput microscope for computer vision
experiments.

------
dalbasal
I think people are hesitant to over-criticise academia. Maybe they can already
hear the resulting anti-science sentiment, and fear losing hard fought ground
if academia is called into question.

However, I think there are serious signs that we should be looking for radical
changes to the meta-structures of academia. The signs are not necessarily
related, but they point to something being really wrong.

First is the replicability crisis. Have we actually been accumulating
knowledge in fields where this is bad? Do we know more about human behaviour
due to the human behaviour research done in the last generation? What's the
point of all these "more research needed" conclusions? The process of academic
publishing and peer review directs this whole thing, and it doesn't seem to be
all that well directed.

While we're on publishing, everyone in academia complains bitterly about the
publish-or-perish problem and about grant/funding politics.

One of the big complaints about grants is that senior academics spend all
their time on grants and administration. Success in many fields is more a
function of being good at administration and politics than of being good at
research.

While we're on administration... the number of administrators (not teachers or
researchers) has skyrocketed over the last 2 generations.

Ultimately, I think that the "scientific method" as practiced is in large part
embodied in the academic publishing system... the algorithm determining how
science works.

~~~
wgerard
> I think people are hesitant to over-criticise academia.

Really? Maybe inside academia itself that's true, but I think people outside
are fairly eager to criticize academia and put down academics.

I feel like anytime a study with even a slightly intriguing/controversial
abstract gets circulated, one of the following happens (in order of
frequency):

1) Narrow criticism of something very specific (usually sample sizes) with
very little understanding of why that criticism might be misinformed (again,
usually relating to sample sizes and how statistical significance works),
which is then used to discredit the whole paper

2) Pointing to previous papers that may (but often don't) contradict what the
paper in question is suggesting - and "suggesting" is a loose term here since
people tend to draw all sorts of conclusions that the author often doesn't.
This is then used to basically throw hands up and say "it's unknowable."

3) Just plain dismissal, especially when the paper's implication is something
that the reader doesn't want to hear. This comes in all forms, from the
pedigree-based "oh who cares what someone from Montana State says what do they
know" to the more direct ones like "why the hell would I listen to Bernanke
talk about business, he's an ivory tower economist"

~~~
castle-bravo
I think there are two main kinds of criticism here; the first comes from
people who think a lot of published work is misleading at best and fraudulent
at worst (see replicability crisis); the second comes from people who think
there's something horribly wrong with the culture of academia, notwithstanding
the quality of output. Criticism in the second category touches on topics like
adjunct professors (most instructors of undergraduate courses are temp workers
making near minimum wage with no benefits), the takeover of universities by a
class of professional administrators (many of whom have unbelievably cushy
jobs), the distribution of research grants to researchers with the most
political pull rather than those doing the most interesting work, the pressure
put on researchers to acquire grants (see Stephan Grimm), and the crushing
debt burden most undergraduates take on to get a university degree (most of
which is spent frivolously while instructors and researchers work themselves
to death). I think what you've described sits more in the first category than
the second, while the parent belongs more to the second.

~~~
pdfernhout
Here are some related quotes on social problems in science covering a wide
range of concerns (from an essay I wrote in 2011: [http://pdfernhout.net/to-james-
randi-on-skepticism-about-mai...](http://pdfernhout.net/to-james-randi-on-
skepticism-about-mainstream-
science.html#Some_quotes_on_social_problems_in_science) ) -- although perhaps
they mostly all fit under the broad categories of fraud or culture as you
suggest? Even if they all do fall into one or the other, perhaps one could use
them -- along with your examples -- to begin to categorize the specific types
of fraud and the types of dysfunctional cultural interactions, and then begin
to try to assess their frequency and impact?

From an article about a sociologist and anthropologist who studies science and
technology, Bruno Latour:
[http://en.wikipedia.org/wiki/Bruno_Latour](http://en.wikipedia.org/wiki/Bruno_Latour)
"In the laboratory, Latour and Woolgar observed that a typical experiment
produces only inconclusive data that is attributed to failure of the apparatus
or experimental method, and that a large part of scientific training involves
learning how to make the subjective decision of what data to keep and what
data to throw out. To an untrained outsider, Latour and Woolgar argued the
entire process resembles not an unbiased search for truth and accuracy but a
mechanism for ignoring data that contradicts scientific orthodoxy."

A quote from another academic, Brian Martin, involved with Science and
Technology Studies:
[https://web.archive.org/web/20100221213343/http://www.suppre...](https://web.archive.org/web/20100221213343/http://www.suppressedscience.net/physics.html)
"Textbooks present science as a noble search for truth, in which progress
depends on questioning established ideas. But for many scientists, this is a
cruel myth. They know from bitter experience that disagreeing with the
dominant view is dangerous - especially when that view is backed by powerful
interest groups. Call it suppression of intellectual dissent. The usual
pattern is that someone does research or speaks out in a way that threatens a
powerful interest group, typically a government, industry or professional
body. As a result, representatives of that group attack the critic's ideas or
the critic personally-by censoring writing, blocking publications, denying
appointments or promotions, withdrawing research grants, taking legal actions,
harassing, blacklisting, spreading rumors. (1)"

From David Goodstein, who was Vice Provost of Caltech:
[http://www.its.caltech.edu/~dg/crunch_art.html](http://www.its.caltech.edu/~dg/crunch_art.html)
"Peer review is usually quite a good way to identify valid science. Of course,
a referee will occasionally fail to appreciate a truly visionary or
revolutionary idea, but by and large, peer review works pretty well so long as
scientific validity is the only issue at stake. However, it is not at all
suited to arbitrate an intense competition for research funds or for editorial
space in prestigious journals. There are many reasons for this, not the least
being the fact that the referees have an obvious conflict of interest, since
they are themselves competitors for the same resources. This point seems to be
another one of those relativistic anomalies, obvious to any outside observer,
but invisible to those of us who are falling into the black hole. It would
take impossibly high ethical standards for referees to avoid taking advantage
of their privileged anonymity to advance their own interests, but as time goes
on, more and more referees have their ethical standards eroded as a
consequence of having themselves been victimized by unfair reviews when they
were authors. Peer review is thus one among many examples of practices that
were well suited to the time of exponential expansion, but will become
increasingly dysfunctional in the difficult future we face. "

About a book by Jeff Schmidt, a previous editor of Physics Today magazine:
[http://www.disciplined-minds.com/](http://www.disciplined-minds.com/) "In
this riveting book about the world of professional work, Jeff Schmidt
demonstrates that the workplace is a battleground for the very identity of the
individual, as is graduate school, where professionals are trained. He shows
that professional work is inherently political, and that professionals are
hired to subordinate their own vision and maintain strict "ideological
discipline"."

From Marcia Angell:
[http://www.nybooks.com/articles/archives/2009/jan/15/drug-
co...](http://www.nybooks.com/articles/archives/2009/jan/15/drug-companies-
doctorsa-story-of-corruption/) "The problems I've discussed are not limited to
psychiatry, although they reach their most florid form there. Similar
conflicts of interest and biases exist in virtually every field of medicine,
particularly those that rely heavily on drugs or devices. It is simply no
longer possible to believe much of the clinical research that is published, or
to rely on the judgment of trusted physicians or authoritative medical
guidelines. I take no pleasure in this conclusion, which I reached slowly and
reluctantly over my two decades as an editor of The New England Journal of
Medicine."

From the Atlantic from a few years ago: "The Kept University"
[http://www.theatlantic.com/past/docs/issues/2000/03/press.ht...](http://www.theatlantic.com/past/docs/issues/2000/03/press.htm)
"Commercially sponsored research is putting at risk the paramount value of
higher education -- disinterested inquiry. Even more alarming, the authors
argue, universities themselves are behaving more and more like for-profit
companies..."

Also from the Atlantic, just recently: "Lies, Damned Lies, and Medical
Science" [http://www.theatlantic.com/magazine/archive/2010/11/lies-
dam...](http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-
and-medical-science/8269/) "Much of what medical researchers conclude in their
studies is misleading, exaggerated, or flat-out wrong. So why are doctors --
to a striking extent -- still drawing upon misinformation in their everyday
practice? Dr. John Ioannidis has spent his career challenging his peers by
exposing their bad science."

From a book about how mainstream biologists have systematically edited out or
been oblivious to evidence for homosexuality in animals:
[http://www.amazon.com/Biological-Exuberance-Homosexuality-
Na...](http://www.amazon.com/Biological-Exuberance-Homosexuality-Natural-
Diversity/dp/0312192398) "Bruce Bagemihl writes that Biological Exuberance:
Animal Homosexuality and Natural Diversity was a "labor of love." And indeed
it must have been, since most scientists have thus far studiously avoided the
topic of widespread homosexual behavior in the animal kingdom--sometimes in
the face of undeniable evidence. Bagemihl begins with an overview of same-sex
activity in animals, carefully defining courtship patterns, affectionate
behaviors, sexual techniques, mating and pair-bonding, and same-sex parenting.
He firmly dispels the prevailing notion that homosexuality is uniquely human
and only occurs in "unnatural" circumstances. As far as the nature-versus-
nurture argument--it's obviously both, he concludes. An overview of
biologists' discomfort with their own observations of animal homosexuality
over 200 years would be truly hilarious if it didn't reflect a tendency of
humans (and only humans) to respond with aggression and hostility to same-sex
behavior in our own species. In fact, Bagemihl reports, scientists have
sometimes been afraid to report their observations for fear of recrimination
from a hidebound (and homophobic) academia. Scientists' use of
anthropomorphizing vocabulary such as insulting, unfortunate, and
inappropriate to describe same-sex matings shows a decided lack of objectivity
on the part of naturalists. ... Throw this book into the middle of a crowd of
wildlife biologists and watch them scatter. ..."

Some more links I've collected about failures of science as a social
enterprise (including educational aspects, like David Goodstein also talks
about) are posted in comments here:
[http://science.slashdot.org/comments.pl?sid=1932134&cid=3474...](http://science.slashdot.org/comments.pl?sid=1932134&cid=34740048)
[http://science.slashdot.org/comments.pl?sid=1932134&cid=3474...](http://science.slashdot.org/comments.pl?sid=1932134&cid=34740098)

More on the schooling aspects of dumbing people down and making them
conformists (according to New York State Teacher of the Year, John Taylor
Gatto):
[https://web.archive.org/web/20110815021909/http://listcultur...](https://web.archive.org/web/20110815021909/http://listcultures.org/pipermail/p2presearch_listcultures.org/2009-October/005379.html)
[https://web.archive.org/web/20110815021909/http://listcultur...](https://web.archive.org/web/20110815021909/http://listcultures.org/pipermail/p2presearch_listcultures.org/2009-November/005584.html)
[https://web.archive.org/web/20110815021909/http://listcultur...](https://web.archive.org/web/20110815021909/http://listcultures.org/pipermail/p2presearch_listcultures.org/2009-November/006005.html)

No doubt one could find quotes celebrating science (or even schooling, as
opposed to true education). I am not denying that science (and even some
schooling) has been useful in some cases. I agree science may move by fits and
starts and to an extent be self-correcting -- even as science and academia
tend to take the credit for a lot of engineering skill learned on-the-job and
the innovation that such skills may lead to. :-)

Still, if you think about all these quotes from professionals in the field of
science, you can see that science, as a human enterprise, has some major
social problems operating within a capitalistic framework. Now, science also
has problems operating within a feudal/religious framework, like Galileo
encountered (and David Goodstein discussed in "The Mechanical Universe"). And
science has problems operating within a totalitarian framework like with
Lysenkoism in the USSR.
[http://en.wikipedia.org/wiki/Lysenkoism](http://en.wikipedia.org/wiki/Lysenkoism)

But the key point is, science can have major systematic problems related to
the socioeconomic system it is part of. No amount of skepticism can really fix
that as a big issue. Skepticism can help us to deal somewhat with the
consequences, but ultimately, pervasive skepticism related to worries about
fraud and dumbed-down people everywhere is very wearing and psychologically
expensive.

~~~
castle-bravo
Thanks for taking time to write such a long response. I've bookmarked your
website.

------
abhv
I have served on dozens of NSF computer science funding panels at the small
(500k), medium (1.5m), and large (3m) levels. These review panels are not
blind, and decisions are based on (a) what problems are explained in the
grant, and (b) prior track record and proven expertise in the area.

In my own circumstance and what I know from my colleagues, CS researchers
_rarely_ write their best ideas in grants---not because they are afraid the
ideas are too bold---but rather because those ideas are often not fully worked
out, and nobody wants to just give those away to a review panel full of top-
rank scientists who might make connections faster than you!

The problem with "blind review" is that project proposals can rarely be
anonymous, because just explaining the work and citing the relevant prior work
leaks a lot about the likely author. So making review blind can often be an
advantage to the higher-profile researcher and a disadvantage to the capable
but slightly less well-known one.

That said, as my own experiment, during my next NSF review, I am going to tear
away the first pages of all the proposals, and make my first pass without
knowing the authors to see if it makes a difference.

------
supernova87a
Upvoted the comment that says "I think this goes against YC philosophy". The
YC guide explicitly calls out team experience and likelihood to succeed as
criteria.

I think there's a fetishization of complete blind objectivity in so many
fields right now, which at its root is an attempt to iron out inequality. A
laudable goal, but you shouldn't let that ideal completely upend the practical
question you have to ask yourself: where am I going to place my bets?

An unavoidable fact is that ideas alone are not enough to make some technology
or venture successful. Sure, some one-off inventor may have a brilliant idea,
but the idea is just like 1% of the problem.

If your model is to fund a team to develop an idea, the track record /
personalities of the team are quite important.

Most people in this world are barely able to get their own lives under
control. What's the likelihood that a random person with a great idea can take
that to something commercially viable? There's a reason that who it is
matters, and unfortunately, that still leads to people being selected who
reflect the starting set of less-than-diverse people.

~~~
neuromantik8086
Most researchers applying for NIH grants aren't "random people"; all of them
have gotten through PhD programs and potentially have done one or two
postdocs, which already indicates that they have sufficient "team experience"
and are already quite accomplished in their own right.

This article isn't saying that _anyone_ should be able to get an NIH grant,
but rather that grants should be awarded less on politics and more on
potential merit and the promise of an idea. I.e., this editorial is being
published in Nature, not viXra.

The central issue is not one of team dynamics, or rooting out inequality on a
societal level, but rather about rooting out inequality among equally capable
candidates. Currently, many academic fields are dominated by cronyism and that
is what the article is advocating against.

------
danieltillett
Peer review is not about funding the best science, it is about funding the
best connected and ensuring those that have climbed to the top keep their
position even if they are out of ideas.

More seriously, the problem with approaches like this is that the people who
are chosen would be mad to actually work on the idea they put up. At most the
funding is for 18 months, and what happens if at the end of that time you have
nothing?

If you want this idea to work the funding has to be for at least 10 years so
that people can afford to take real risk and have enough time to build a track
record if the idea doesn't pan out.

~~~
cassowary37
not sure this is entirely fair. While there's certainly some cronyism (see my
earlier comment), study sections are mostly about funding the _safest_ science
- i.e., the project with the least chance of failure and the fewest flaws. But
they don't have to be. Many of the 'big idea'-focused mechanisms at NIH are 4
or 5 years - plenty of time to make real headway. These would work if we
supported enough of them. Parenthetically, high-risk science does not mean
shooting the moon - a well-designed project, even if it's high risk, will
still yield /something/ valuable both for the field, and for the
investigator's career development. That's why reviewers tend to look closely
at the alternatives/pitfalls section of grants - what will you do if the
experiment blows up?

~~~
danieltillett
I am somewhat biased by having been through the Australian system where
cronyism is alive and well.

If you have 4 or 5 years, you really only have 2 years before you have to
switch to low-risk work if things aren't working out, to give yourself time to
get enough papers out before you need to apply for the next grant.

What I did when I was an academic was spend 70% of my and my
students'/postdocs' time on low-risk activities and 30% on high-risk ones.
This keeps the risk down and still allows for some dreams.

------
yetihehe
The biggest problem with funding ideas is that they fail more often than
proven conservative projects. When you select an idea that fails, you can be
accused of wasting funds, and you won't be selected for the next funding
meeting.

~~~
microtherion
It seems from the article that the funding agencies were pleased with the
return they got from an "ideas" portfolio. Maybe more ideas failed, but the
projects that succeeded did so in more spectacular ways, a bit like a VC
portfolio.

The problem, it seems to me, is similar to VC: The funding agencies can spread
their risk across a pool of ideas, but the investigators generally have to
commit to one or a small number of projects for some time, so they risk
spending several years without much to show for it if the idea fails.

------
pmiller2
When do we get to see the corresponding article for hiring? "Stop hiring
exclusively from the same handful of schools"?

------
mkollo
One other thing worth mentioning is the obsession of funders with hypothesis-
driven research. They do this out of defensiveness to justify spending, but in
the long term, it reinforces a daisy-chain of stale ideas and a selection bias
that distorts data to adhere to popular models.

------
mempko
This might be a fine way to fund innovation (taking invention and turning it
into capital), but it's a terrible way to fund invention. By definition there
is no idea when you set out to invent.

Innovation produces billions of dollars of wealth. Invention produces
trillions. Are we sure we have our priorities straight as a species?

------
EGreg
I think this goes against the YC thesis.

~~~
david927
And the thesis of almost all of venture capital, which has made a lot of
money on the insight and innovation of the past, but has failed to create any
new innovation of any significance.

This should be better said as, "Fund fresh insight, to find fresh insight."
And while that sounds tautologically obvious, almost no one is doing it.

------
mankash666
I think this applies to YC-type incubators and VC investment as well.
Oftentimes pedigree seems to outweigh fundamentals of execution in VC
funding.

