
Science has lost its way, at a big cost to humanity - smanuel
http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column
======
FD3SA
_" The demand for sexy results, combined with indifferent follow-up, means
that billions of dollars in worldwide resources devoted to finding and
developing remedies for the diseases that afflict us all is being thrown down
a rathole. NIH and the rest of the scientific community are just now waking up
to the realization that science has lost its way, and it may take years to get
back on the right path._"

The institution of science is undergoing a catastrophic decline. The reason
behind this is simple: it is no longer a growing economy. Public funding for
science is frozen or being cut, private R&D labs are shuttering their doors,
and companies are increasingly concerned with quarterly results at the expense
of long term research.

And why should it be otherwise? Science has never paid off as a logical
financial investment. It is the riskiest of gambles by definition, requiring
inordinate expenditures of time and resources in the present for a chance at
some distant breakthrough decades or even centuries in the future.
Institutional science is not an economically sound choice in the best of
times, let alone during the current span of never-ending recessions.

The truth is, science is a creative pursuit much like the arts. Like the
creation of literary masterpieces or profound paintings, it has never made
economic sense in the present. Only afterwards, once the impact can be seen,
do we understand its significance. And that is why it will always be worth
pursuing.

The reality is that, increasingly, we live in a society that does not
understand this philosophy of life. People only care about how they will
survive tomorrow, and who can blame them, as the world economy gets ever more
competitive and cut-throat.

Increasingly, it has become clear that our society does not reflect one
designed with its own best interests at heart. Why this is, how it happened,
and how we can change it, will be the greatest challenge of our lifetimes.

~~~
freyrs3
> Public funding for science is frozen or being cut, private R&D labs are
> shuttering their doors, and companies are increasingly concerned with
> quarterly results at the expense of long term research.

It's ironic that Silicon Valley owes most of its existence to investments by
the DoD and ARPA. Most of the technology we use in the electronics industry
today would not exist except for all the public money that was poured into
semiconductor research. Slashing public funding, especially of fundamental
research, is one of the worst things we could probably do for our economy's
future.

~~~
WalterBright
The transistor invention was privately funded.

~~~
cowsandmilk
The semiconductor material used by Bardeen, Brattain, and Shockley for their
Nobel Prize work at Bell Labs was developed by researchers at Purdue
University under a grant funded by the National Defense Research Committee.
Their developments were hardly independent of publicly funded research.

~~~
jlgreco
If you want to play this game, you can go back even further and find that they
built on work done by privately funded individuals during the Victorian Era
and during the Enlightenment. Go back even further and you'll find that much
(but by no means all) of the wealth that funded that was acquired thanks to
Feudalism.

I therefore credit feudalism for the creation of the transistor.

~~~
anaccountonhn
You seem to have misunderstood the comment, (the sibling comments should help
explain, or maybe some light reading about the invention of the transistor).

~~~
jlgreco
I have no idea what you are getting at. I am familiar with the invention of
the transistor, and understand the comment I was replying to completely.
Furthermore, there are no sibling comments to my comment....

What I am guessing went wrong here is that you are unfamiliar with sarcasm, so
I'll help you out: I don't actually credit Feudalism with the invention of the
transistor.

~~~
drzaiusapelord
Don't bother. HN is libertarian leaning thus will defend military spending and
fight any criticism of the US military or its massive spending to the death.

I'm not even going to go into how a lot of that spending, if freed up, would
go toward endeavors that are not military related and to a parallel timeline
we can never know. Imagine NASA with 10 or 50x the budget. Or NSF with 10x the
budget, etc. We'd probably be typing this on a moonbase or on a cottage on
Alpha Centauri's earth-like planet.

Instead we fawn over the peanuts that fall out of the elephant's mouth and
praise its generosity for feeding the hungry.

~~~
jlgreco
I think you may have responded to the wrong person. I'm just making a point
that who you credit with a discovery can change many times if you are willing
to point at the owner of the shoulders that the inventor was standing on.

Transistors discovered by a private lab? A private lab with government
funding? A private lab building on work done by individuals such as Faraday?
Individuals who in many cases received government funding in the UK?
Individuals who were building on work by Benjamin Franklin, a self-made and
self-funded man? Benjamin Franklin, who doubtlessly was enabled by early work
on the scientific method itself by Roger Bacon? Roger Bacon, who was supported
by the catholic church? Roger Bacon, who built on work of earlier Muslim
scholars?

If we want to play the _"who gets credit"_ game, we need to decide beforehand
how many times we are going to go down the _"who funded whom"_ tree, and the
_"who researched the prerequisites"_ tree.

I'm not talking about politics; I am pointing out that you people are talking
past each other because you all have different ideas of how to assign credit.

~~~
drzaiusapelord
Right, my point is that a lot of people here are invested in the idea of
"military solves all" and will try to disingenuously tie all innovations to
military or defense financing.

------
timr
There's nothing wrong with criticizing science, but the reaction to _The
Economist_ article -- which itself was a bit too breathless for comfort -- is
heading rapidly into tiny-green-football-linkbait territory.

The scientific funding and publication system has problems that deserve
scrutiny, but _science_ itself is far more rational than nearly any large,
human-maintained system I can think of.

When we resort to hyperbole like _"science has lost its way"_, we give a
group of vocal, clueless idiots more power to undermine the most consistently
productive engine for progress that humanity has ever devised. So let's talk
rationally about the problems, but don't throw the baby out with the
bathwater.

~~~
XorNot
"Science has lost its way" is usually a statement I see trotted out by people
in the media, who think that the quality of scientific journalism by the
media, represents the quality of the actual science it fails at reporting on
accurately.

"Scientific journalism has lost its way" would be more accurate were it not
for the fact it clearly never had one.

------
beloch
First of all, criticizing science is like criticizing democracy. Yes, it's
flawed, but still far better than anything else we've tried so far!

If you look at the typical application for funding, you'll see questions that
basically prod you to explain why your research/students/etc. are
exceptional/revolutionary/ground-breaking. Everything must look like a Nobel
prize waiting to happen if you're going to have a chance at beating out
everyone else making the same application (and exaggerations). It's utterly
ridiculous! It's as if thousands of guitarists were auditioning at the same
time and, in an effort to be heard, each has cranked their amp to 11. The
result is a cacophony where even each individual sounds awful because of the
distortion. If everyone dialed it down to 5 things would be bearable, but
there's always someone willing to nudge it up to 6 or 7...

A nice long list of high profile publications is great to hype when your amp
is set to 11. If you have published many papers in high impact factor journals
(again, often by inflating the significance of your work), you must be worth
funding!

Perhaps scientific funding needs to be awarded in a manner that is more...
scientific. Heck, perhaps funding agencies should reserve a certain percentage
of their funding specifically for reproduction of results. Currently, if you
apply for a grant to check other people's work, people doing original research
will win absolutely every single time. Unfortunately, the preference for
original research goes right to the very top of governments. Politicians want
brilliant Nobel Prize winners, not competent fact-checkers.

------
tensor
One idea to improve the state of things is to require graduate students to
verify some number of external studies. In addition to helping with the
problem of not enough review, it would make an excellent practical test for
doctoral candidates.

It wouldn't work for every field and area, but it could work for a significant
subset of research.

~~~
PeterisP
Motivation is an issue - each verification would require a bunch of man-months
of effort, so it won't happen unless there is separate funding for it, or
unless verification somehow magically became as prestigious as putting the
same effort into a new experiment/publication.

"Requiring" has the same motivation problem - those who could require it,
currently would rather require those students to do something that brings
funding or prestige, so they won't.

~~~
tensor
Graduate students are already required to take courses and various tests that
don't bring funding or prestige. This could simply be an additional
requirement, or replace an existing requirement.

In time, such a program would bring quite a lot of prestige as flaws are
discovered and fixed in existing work. It's really the easiest and most
immediate way to address this problem since it only needs to involve single
institutions (whose faculty presumably care deeply about this issue).

It doesn't seem likely that journals will suddenly start valuing verification
work. Similarly, politicians and funding agencies appear uninterested in
actual science; they care only about their careers or immediate application to
the politically popular cause of the day.

~~~
XorNot
Graduate students already verify people's work, because every new graduate
project involves building on the work of previous science, which by definition
involves re-verifying that work to show you get the same results.

If the results are hard to replicate, or dependent on other causes, then
usually that's when a project shifts or when the knowledge pool expands (for
example chemistry is fraught with environmental effect dangers - fluorescent
lights provide UV to reactions, your glassware has imperfections, the
temperature and humidity of labs varies with climate).

------
omnisci
I’m a scientist and I agree with this article. Fact is that the way we
incentivize science is what is causing these issues (that and tenure, but that
is a longer post).

Science, at least in biology, is just like any business. Both are motivated by
$.

The good news is that open science and the impact that the internets is having
on science can help this problem. In my opinion, transparency in science will
fix many of these issues.

------
brownbat
A nod should be thrown out here to John Ioannidis, who has been banging this
drum for a while.

1\. [http://www.theatlantic.com/magazine/archive/2010/11/lies-
dam...](http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-
and-medical-science/308269/)

2\.
[http://marginalrevolution.com/marginalrevolution/2005/09/why...](http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html)

Marginal Revolution also throws a tip of the hat to Brad deLong and Kevin
Lang's paper condemning econ journals for refusing to print any piece that
fails to reject the null hypothesis:
[http://www.jstor.org/discover/10.2307/2138833?uid=3739936&ui...](http://www.jstor.org/discover/10.2307/2138833?uid=3739936&uid=2&uid=4&uid=3739256&sid=21102824481747)

Speaking of econ, this feels like a market failure to me. I'd be in favor of
redirecting some portion of government research dollars towards an independent
"validation" shop staffed by scientists who attempt to independently replicate
submitted findings, and vow to write up all results (even detailing events
that lead to the interruption or cancellation of an experiment). Findings that
cannot be replicated by the validation shop should be viewed with extreme
skepticism. Researchers would quickly learn not to fudge the results, and find
more effective ways to control their own unintentional biases.

It wouldn't have to be government. It could be useful as a nonprofit, I just
don't think it'd be sexy enough for anyone to support, despite the sort of
urgent necessity of something like this.

~~~
aaren
For subjects with a lot of data analysis you could largely automate a part of
the validation as scientists use programming languages.

Papers would have to provide access to their raw data and the code used to
process it. You could then just run code(data) to generate the figures /
results in the paper.

Two problems here:

1) assumes that the raw data is ok (which is a big assumption).

2) existing scientific code is largely terrible (completely imperative, no
abstraction, no documentation, poor typing conventions), and it is very
unlikely you can do

    figures <- code(data)
to regenerate a paper.

This is what forced validation would aim to change though.
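A minimal sketch of this `code(data)` check (all names hypothetical, not any
journal's actual tooling): re-run a paper's archived analysis code on its
archived raw data, and diff what the code produces against the numbers the
paper actually printed.

```python
def validate(analysis, raw_data, published_results, tol=1e-9):
    """Re-run a paper's analysis on its raw data and compare each
    reported number against what the code actually produces."""
    regenerated = analysis(raw_data)
    report = {}
    for key, published in published_results.items():
        regen = regenerated.get(key)
        ok = regen is not None and abs(regen - published) <= tol
        report[key] = {"published": published,
                       "regenerated": regen,
                       "match": ok}
    return report

# A toy "paper": its analysis code plus the numbers it claims.
def analysis(data):
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    return {"mean": mean, "variance": var}

data = [2.1, 2.5, 1.9, 2.4, 2.2]          # archived raw data
claimed = {"mean": 2.22, "variance": 0.058}  # as printed in the paper

# Tolerance matches the paper's rounding (2-3 decimal places).
report = validate(analysis, data, claimed, tol=1e-2)
```

Within rounding tolerance both claimed numbers regenerate, so this toy paper
passes; a mismatch would flag exactly which figure cannot be rebuilt from the
deposited data.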

~~~
a-priori
That only proves that their analysis is right. It doesn't rule out the
possibility that their results are caused by a quirk of their implementation.

To really validate a paper you have to be able to recreate their code from a
high-level description, then check that.

~~~
aaren
I agree, but this would give you a starting point in that you could see
exactly what steps were used to create the figures in a paper.

It might be that their high level description (usually in papers anyway) is
correct but their implementation is flawed in some subtle way that peer review
doesn't pick up.

Assuming a correct implementation this would be useful for anyone wanting to
use the methods of the paper.

For example, I've spent the last 3 weeks coding a numerical solver for some
equations in a fluid mechanics paper. Having the code for this available would
have saved me figuring out all the quirks of the solution.

------
wallflower
"Because of science - not religion or politics - even people like you and me
can have possessions that only a hundred years ago kings would have gone to
war to own. Scientific method should not be taken lightly.

The walls of the ivory tower of science collapsed when bureaucrats realized
that there were jobs to be had and money to be made in the administration and
promotion of science. Governments began making big investments just prior to
World War II...

Science was going to determine the balance of power in the postwar world.
Governments went into the science business big time.

Scientists became administrators of programs that had a mission. Probably the
most important scientific development of the twentieth century is that
economics replaced curiosity as the driving force behind research...

James Buchanan noted thirty years ago - and he is still correct - that as a
rule, there is no vested interest in seeing a fair evaluation of a public
scientific issue. Very little experimental verification has been done to
support important societal issues in the closing years of this century...

People believe these things...because they have faith."

From Kary Mullis, the Nobel Prize in Chemistry winner (and the genius inventor
of PCR) in an excellent essay in his book "Dancing Naked in the Mind Field".

------
learc83
Real science depends on room to fail, but starting in middle school science
fairs, it's clear that negative results and "failed" experiments aren't what
the teachers/judges are looking for.

I made it through to the state science fair in 8th grade. It was based around
magnetism, and after months of work, it turned out my tests just weren't
sensitive enough to measure any difference in any of the electromagnets I
built.

When I mentioned this to my teachers, they encouraged me to _fix_ the results
with a wink and a nod. Sure I could have turned in a "failed" project and
maybe got a B, but I was an A student and there was no room for failure.

I'd love to see some data on how many high level science fair projects are
faked each year.

~~~
XorNot
Except this is completely mis-stating it.

People love negative results. But proving something _doesn't_ work requires
proving you didn't stuff up the implementation. That is _amazingly_ hard, and
about 10x more work than showing a positive result.

For example, what you're talking about with magnetism isn't a negative result
- you can prove that an effect is bounded by the lower limit of accuracy of
your instruments. Physicists do this all the time, because it's the correct
way to phrase the result. I'll not speak to the quality of your teachers,
though.

------
DougN7
This is not only in medical science, but apparently archaeology too. (Anecdote
alert) A very close friend took part in a dig by a very well known professor.
Artifacts that were found that disproved the professor's theories were
destroyed before my friend's eyes.

It really makes you wonder what percentage of what we "know" is true.

~~~
PeterisP
Nothing new here - read up on, for example,
[http://en.wikipedia.org/wiki/Bone_Wars](http://en.wikipedia.org/wiki/Bone_Wars)
almost 150 years ago.

------
aheilbut
There are a lot of issues flying around that are being inappropriately mixed
up for all sorts of political purposes.

The Begley 'study' is impossible to assess, because they didn't report what
the studies were, nor any of their methods, or anything. It's BS and hearsay,
not science. Moreover, according to the Begley article itself, "The term 'non-
reproduced' was assigned on the basis of findings not being sufficiently
robust to drive a drug-development programme."

Nobody said that the purpose of every single scientific paper was to enable
Amgen to go start a drug-development program.

There are many problems with the science funding situation, the glamour pub
game, excess hype, funding getting sucked up by mega-projects, lack of open-
access, inability to publish negative results, etc, etc, etc, but in general,
it is not true that scientists are making up sexy results to get them into
Nature and Science.

------
friendly_chap
This is just way too true. I know someone (anecdote alert) studying a medical
field at a really prestigious university, and she told me multiple times that
they intentionally cheat the results to match the expected output.

By cheating I mean... flat out lying. I don't know the implications of this
(how far misinformation can get) but it seems like a wrong culture and
attitude, especially for science.

~~~
dnautics
Such a thing happened in my graduate school lab. I'll even cop to having a
data point in one graph where I just got sick and tired of doing the
experiment so there's an N of 4 instead of an N of 5 as it states it is in the
methods section (this is actually impressive for the field I was in at the
time, which typically didn't even do replicates at all. I am sure my
colleagues' results are, at best, cherry-picked).

------
magicalist
"lost its way" suggests that science was once firmly on a sure path of
rigorously verified studies, never a thrice-checked statement assumed. That
was never the case.

Acknowledging, attempting to quantify, and then (some institutions) attempting
to fix systemic issues in the peer review system is not an emerging crisis. We
have to fix incentives, but we aren't about to see some fundamental tenets
about to be overthrown here.

It isn't clear what the "big cost" is referring to. Certainly money has been
spent on poorly-founded studies with fundamentally inconclusive results. If it
instead refers to opportunity cost, fortunately we have the entire future of
humankind to pick up what we might have figured out earlier.

------
cossatot
While I certainly agree with much of the factual content presented both in the
article and in the comments, I think that science already has a lot of self-
correction mechanisms built in. None are perfect individually, but the big,
messy system has a lot of redundancy built in. It's just not always so visible
to journalists or science writers, who don't hang around the scene for the
years that it often takes for science to find its way again, so to speak.

For example, many of these high-profile, possibly erroneous (or occasionally
fraudulent, it seems) Nature or Science articles are high-profile because they
seek to address a contentious or long-standing problem in the field. When this
happens, there are typically existing alternate hypotheses. It's much easier
to get papers published or grants funded by seeking to test competing
hypotheses than to simply try to verify an isolated study. It can also be
easier to find weaknesses in an individual study by testing it in a different
way, or against other models, or whatever, than by simply trying to reproduce
it. Often, a single study might be impossible to directly replicate, or the
underlying flaws may not be apparent until the problem is approached from a
different angle.

Granted, this can take a couple years or even decades, but falsehoods
(intentional or not) tend to become more apparent as their context becomes
more clear.

------
pg
Science Exchange is on it:
[https://www.scienceexchange.com/reproducibility](https://www.scienceexchange.com/reproducibility)

------
snowwrestler
Science is working exactly the way it has always worked.

Most papers have always been flawed, wrong, or not reproducible. There has
always been pressure to publish--going back even to Newton's battles with
Hooke over gravity, or Darwin's rush to publish _On the Origin of Species_
before Wallace.

What has changed are the cultural expectations. Culturally, we've become
spoiled by physics. We're used to the precision, speed, and accuracy of
physics and engineering. Moore's law, the iPhone, incredible bridges, the 787
and A380 airplanes--they all just work, safely and reliably.

Note that the reproduction problems are most prevalent in chemistry, biology,
medicine, etc. These are areas of science that are far more complex, and about
which we know far less, than physics. It will take a long time, and a lot of
failed research, to even start to approach that level of knowledge. Given the
complexity, it might be impossible.

------
PeterisP
What I'm reading in the article is claims that science _funding_ has lost its
way, and is rewarding exactly the wrong actions with money and prestige.

~~~
omnisci
Science funding is what drives science now. So the article is correct as are
you.

------
StandardFuture
The title should read: "A subset of the scientific community has lost its
integrity, at a not so easily quantifiable cost to humanity"

------
whyenot
If you are a biologists and you want to keep your lab going, and you want to
have RA-ships for your graduate students, you need to get grants. You aren't
going to get grants unless you are cranking out publications. The days when as
a biologist, you could work on a problem for several years, being careful,
checking your work before you publish... those days are over. I'm confident
the system will right itself eventually, hopefully in my lifetime.

------
Houshalter
It seems like the solution to this is fairly simple. Use some statistical or
machine learning method to figure out the probability that a certain thing is
true using the information we know about it, like what journal it was
published in, the results of replications, maybe even stuff like how crazy the
result seems or the experience/reputation of the scientists, etc. There is a
ton of data to work with, on top of the actual data itself.

You could predict with decent accuracy how probable a study is to turn out to
be true or false. Then you can use that information to decide whether it would
be worthwhile to do more studies.
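A minimal sketch of that idea, on entirely fabricated data (the features,
their weights, and the labeling rule are all invented for illustration, not
drawn from any real replication dataset): fit a plain logistic regression on
study metadata and read off a replication probability.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Batch gradient-descent logistic regression, stdlib only."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic studies: [log sample size, reported p-value, "surprise" 0-1].
# Invented ground truth: big samples, small p-values, and mundane claims
# tend to replicate (label 1); noise keeps it from being deterministic.
random.seed(0)
X, y = [], []
for _ in range(400):
    logn, p, surprise = (random.uniform(2, 7),
                         random.uniform(0.001, 0.05),
                         random.random())
    score = 0.8 * logn - 40 * p - 2 * surprise
    X.append([logn, p, surprise])
    y.append(1 if score + random.gauss(0, 1) > 2 else 0)

w, b = train_logistic(X, y)

def predict(x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

prob_hi = predict([6.0, 0.005, 0.1])  # large n, tiny p, mundane claim
prob_lo = predict([2.5, 0.045, 0.9])  # small n, marginal p, wild claim
```

On this toy data the model recovers the obvious ordering (the well-powered,
unsurprising study scores higher than the underpowered, surprising one);
the hard part in practice is exactly what the parent worries about:
assembling labeled ground truth on which studies actually replicated.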

~~~
eli_gottlieb
If you can work out what learning problem you've got here, and what methods
you can apply towards a solution, you should go and pitch that as a start-up.

~~~
Houshalter
I'm not sure there would be any way to monetize it, but I considered trying it
as a personal project. It would be way too much work to manually enter the
data
of thousands of papers into the computer though. I would also need objective
data on which studies actually turned out to be true or false. Or at least
which ones could be successfully replicated.

------
tokenadult
The PubMed Commons initiative[1] by the National Institutes of Health,
mentioned in the article kindly submitted here, is a start at addressing the
important problems described in the article. One critique[2] of the PubMed
Commons effort says that that is a step in the right direction, but includes
too few researchers so far. A blog post on PubMed Commons[3] explains a
rationale for limiting the number of scientists who can comment on previous
research at first, until the system develops more.

[1]
[http://www.ncbi.nlm.nih.gov/pubmedcommons/](http://www.ncbi.nlm.nih.gov/pubmedcommons/)

[2] [http://retractionwatch.wordpress.com/2013/10/22/pubmed-
now-a...](http://retractionwatch.wordpress.com/2013/10/22/pubmed-now-allows-
comments-on-abstracts-but-only-by-a-select-few/)

[3] [http://www-stat.stanford.edu/~tibs/PubMedCommons.html](http://www-
stat.stanford.edu/~tibs/PubMedCommons.html)

USING MY EDIT WINDOW:

Some of the other comments mention studies with data that are just plain made
up. Fortunately, most human beings err systematically when they make data up,
making it look too good to be true. So an astute statistician who examines a
published paper can (as some have done) detect made-up data just by analyzing
what data are reported in a paper. A researcher who does this a lot to find
made-up data in psychology is Uri Simonsohn, who publishes papers about his
methods and how other scientists can apply the same statistical tests to find
made-up data.

[http://opim.wharton.upenn.edu/~uws/](http://opim.wharton.upenn.edu/~uws/)
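A minimal sketch of the "too good to be true" intuition (a toy illustration
in the spirit of Simonsohn's tests, not his published method; all numbers are
invented): if k supposedly independent experiments report sample means that
cluster far more tightly than sampling noise allows, simulate honest data and
see how rarely that happens.

```python
import random
import statistics

def too_good_p(means, n, sd, sims=5000, seed=1):
    """Monte Carlo: if k independent experiments (each n subjects,
    population SD sd) truly sampled noisy data, how often would their
    k sample means cluster as tightly as the reported ones?
    A tiny p-value suggests the spread is too small to be real."""
    observed = statistics.pvariance(means)
    rng = random.Random(seed)
    k = len(means)
    hits = 0
    for _ in range(sims):
        sim_means = [statistics.fmean(rng.gauss(0, sd) for _ in range(n))
                     for _ in range(k)]
        if statistics.pvariance(sim_means) <= observed:
            hits += 1
    return hits / sims

# Fabricated-looking data: five "independent" experiments, n=20, sd=10,
# yet every reported mean lands within 0.3 of 50. Honest sampling noise
# would spread these means with SD ~ 10/sqrt(20) ~ 2.2.
fake = [50.1, 49.9, 50.2, 50.0, 49.8]
honest = [52.3, 47.1, 50.9, 46.0, 53.8]
p_fake = too_good_p(fake, n=20, sd=10)
p_honest = too_good_p(honest, n=20, sd=10)
```

The fabricated-looking set comes back with a vanishingly small p-value while
the honestly noisy set looks unremarkable; the real methods are of course far
more careful about assumptions than this sketch.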

From Jelte Wicherts writing in Frontiers of Computational Neuroscience (an
open-access journal) comes a set of general suggestions

Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting
the daylight in: reviewing the reviewers and other ways to maximize
transparency in science. Front. Comput. Neurosci., 03 April 2012 doi:
10.3389/fncom.2012.00020

[http://www.frontiersin.org/Computational_Neuroscience/10.338...](http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00020/full)

on how to make the peer-review process in scientific publishing more reliable.
Wicherts does a lot of research on this issue to try to reduce the number of
dubious publications in his main discipline, the psychology of human
intelligence.

"With the emergence of online publishing, opportunities to maximize
transparency of scientific research have grown considerably. However, these
possibilities are still only marginally used. We argue for the implementation
of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and
(3) online data publication. First, peer-reviewed peer review entails a
community-wide review system in which reviews are published online and rated
by peers. This ensures accountability of reviewers, thereby increasing
academic quality of reviews. Second, reviewers who write many highly regarded
reviews may move to higher editorial positions. Third, online publication of
data ensures the possibility of independent verification of inferential claims
in published papers. This counters statistical errors and overly positive
reporting of statistical results. We illustrate the benefits of these
strategies by discussing an example in which the classical publication system
has gone awry, namely controversial IQ research. We argue that this case would
have likely been avoided using more transparent publication practices. We
argue that the proposed system leads to better reviews, meritocratic editorial
hierarchies, and a higher degree of replicability of statistical analyses."

~~~
GuerraEarth
Tokenadult provides helpful input.

Like all other sectors, scientific research can get inbred and peer review
corrupted as a mechanism. It's similar to a character/job/performance
reference: "we want to hear what other people say about you" can be
problematic when the folks talking are untrustworthy--yet they wield
credentials imparting trustworthiness. Peer review only worked when the
majority of peers were rock-solid scientists, back when there were fewer of
them, with personal reputations and the discoveries to back those reputations
up. Wouldn't it be great if Pierre Curie, Darwin, or Tesla were doing peer
reviews (or men and women of similar caliber)?

At leading research schools (they aren't all universities), falsification of
data exists at the student and professorial level.

Funding is definitely cut at NASA, whereas "sexy" research funding is
increasing. We need excellent researchers across all areas of expertise. And
we need increased accountability and transparency. And more funding.

I argue that what is most needed is increased scientific literacy at the level
of political leadership and the general population so that findings can be
accessible/understood/evaluated on a more concrete level by all.

Coming pathogen shifts associated with climate change, and extreme weather
events, etc. alarm people. And people want to be able to trust science and to
trust science reporting.

------
daughart
There is no reproducibility crisis in science.

1\. Reproducing work is a waste of resources. Use the money and researcher
hours to develop tools that are more reliable, cheaper, and easier to use. A
major reason why no one reproduces experiments is that the initial work was
very difficult. Let's invest in technologies that make science easier.
Reproduction (of experiments) should be done in high school biology classes.

2\. Technical reproduction is rarely done, but conceptual reproduction is
common. Findings in the literature become incorporated into disparate
subsequent hypotheses tested by many other labs. If something doesn't add up,
this will often increase the impact of the paper and eventually be addressed
through experiments to resolve different models of the phenomenon.

3\. There is no widespread fraud in science. Your academic career rests on
your integrity. When I publish a paper, I do my damn best to make sure it is
accurate. My reputation relies on it. When scientists continue to publish
results that are false or fraudulent, they become discredited within the
community. All graduate students in the life sciences are required to take a
class on the ethics of science.

4\. Publishing is a bitch and a source of real rot within the community.
Fortunately, many researchers and academics recognize this problem and are
addressing it. Look at the new journal eLife, or open access journals, or the
increasing interest in arXiv.org (moving to a publishing model closer to that
found in math and physics, which appear to be healthier research communities
than the life sciences). As experiments become more technically advanced,
expectations for methods sections have increased, not decreased.

5\. People want to commercialize scientific findings that are relatively new -
it's obvious that's risky! Why put the burden back on (under-funded)
scientists? Drug companies are the ones that would benefit financially. Or
they could wait until the phenomenon is better understood. Notice they're
talking about drugs for complex diseases like cancer, metabolic disorders,
etc., not Mendelian diseases. It's as if people complained that they couldn't get their
lasers to work using a 1917 understanding of the physics of light. But
Einstein demonstrated the fundamentals! Why did it take 40 years to make it
work in practice?

------
knappador
Someone do a sentiment plot with "goodness" on the Y-axis and years ago
relative to writing on the X-axis. I won't be surprised if there's a positive
correlation. Successes, new challenges, and shortcomings become apparent.
Whatever worked looks like it was good principle in hindsight. Whatever hasn't
panned out due to new challenges looks terrible. Cherry picking in order to
build a case that allows one to write authoritatively doesn't make anyone a
saint or cultural leader.

Therefore, when I see an article like this with such a broad, generalizing
headline, I just think it's click-bait. Lost its way? I've read some
absolutely terrible papers. "Theory of the Origin, Evolution, and Nature of
Life" by Erik Andrulis is an excellent example of such unfathomably
speculative garbage. I've also read a huge number of well-done papers on
topics in aerospace engineering and materials science. It's always on the
reader to reproduce experiments if they depend on the result, to understand
the paper correctly, etc. This is what my professors did. If part of the
community is circle-jerking, let evolution run its course. We used to treat
Aristotle as canon in the western world. Obviously things get better over
time.

Skimmed article. Old news. The fact that someone is raising the flag, saying
"there's a lot of low-hanging fruit to use to establish yourself as a more
accurate researcher," just means we will see more of such review activity,
making the title seem inaccurate. You never know when you might open up an
adjunct professor position for yourself in exactly your preferred field of research.

------
th0br0
Especially in medical studies, where you've often got cases of n=40 or similar
(even in later stages!), this is a huge issue. In contrast, just think of the
size of n you need in physics to be taken seriously!

The major reason for that, however, is that most people in the medical &
biological fields lack a solid mathematical education. There are even cases
where papers get rejected because they are too mathematical.

~~~
aggie
There are 2 reasons you want a large sample size: (1) to have enough subjects
to expect a reasonably representative random sample (typically ~30+ for social
science [1]) and (2) to have sufficient statistical power. There's nothing
inherently wrong with n=40 and n=40 from an unbiased sample is better than
n=400 from a biased sample.

Physicists are generally looking for very very small effects, hence very high
n (higher n = higher power = more sensitive to treatment effects). This
doesn't mean lower sample sizes are insufficient for other areas of research.

Anyway, the real issue is over-reliance on convenience sampling.

[1] [http://sph.bu.edu/otlt/mph-
modules/bs/bs704_probability/BS70...](http://sph.bu.edu/otlt/mph-
modules/bs/bs704_probability/BS704_Probability11.html#centrallimittheorem)
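The point about power and sample size is easy to check with a quick simulation. Here is a minimal sketch in plain Python (the function name and the z-approximation are my own assumptions, not anything from the thread): it estimates the power of a two-sample comparison by Monte Carlo, and shows that n=40 per group is plenty for a large effect, while tiny effects need physics-scale samples.

```python
import math
import random

def estimated_power(n, effect_size, z_crit=1.96, trials=5000, seed=42):
    """Monte Carlo estimate of the power of a two-sample comparison.

    Draws two groups of size n from unit-variance normal distributions whose
    means differ by `effect_size` (in standard-deviation units) and counts
    how often the difference in means is detected at roughly the two-sided
    5% level. Uses a z-approximation instead of a proper t-test, which is
    fine for n this large.
    """
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n)  # standard error of the mean difference (sigma = 1)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        mean_diff = sum(b) / n - sum(a) / n
        if abs(mean_diff) / se > z_crit:
            hits += 1
    return hits / trials

# A large effect (d = 0.8) is detected almost every time at n = 40;
# a tiny effect (d = 0.1) almost never is.
print(estimated_power(40, 0.8))
print(estimated_power(40, 0.1))
```

So n=40 isn't inherently underpowered; it depends entirely on the effect size being chased.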

------
pjc50
In the UK, there is the widely-derided "Research Assessment Exercise":
[http://en.wikipedia.org/wiki/Research_Assessment_Exercise#Cr...](http://en.wikipedia.org/wiki/Research_Assessment_Exercise#Criticism)

There is also the criterion of "impact factor", for papers and publications.
It's very similar to a "Karma" system as used by HN, Reddit, etc., and it has
many of the same problems. Imagine a system where you have to choose between
doing research that might be vital but probably won't, versus something safe
and predictable that ensures that you get paid next year.

The problem is not so much science as the _management_ and _funding_ of
science, which have been infected by the same managerialism that causes so
many problems in big government and corporate projects.

------
douglasgalbi
Here's a case study of mass-media attention-seeking through pseudo-science: an
evaluation of behavior on sinking ships, coincidentally issued five days
before the centennial of the Titanic's sinking. POS, but well, ok, surely
serious scientists wouldn't take such work seriously. The Proceedings of the
National Academy of Sciences of the United States of America (PNAS) received
the paper for review on May 2, 2012, approved it on June 29, 2012, and
published it on July 30, 2012. Science has lost its way.... For details, see
[http://purplemotes.net/2012/04/22/deadly-sex-
discrimination-...](http://purplemotes.net/2012/04/22/deadly-sex-
discrimination-in-titanic-chivalry-myth-reporting/)

------
tehwalrus
While I would say that the drop in rigour is worse in Medicine than Physics,
it is clearly still present even there.

The way funding works, in particular, means people publish stuff-that-will-
get-references with a similar attitude to web start-ups iterating their code
(by which I mean too damn fast and without listening to the peer reviews).

I'd love to see this change, but I don't know how central agencies can
easily/affordably work out which research(ers) to fund and which to cut. As
others have said, by definition we don't know what work was
useful/valid/critical until many years later.

------
rationalthug
Why do authors and publishers of articles like this, which invariably turn out
to be about medical knowledge/studies/research, resort to using the
misleading, incredibly broad word "Science" in their titles? All of "Science"
has lost its way? Really? Physics? Chemistry? How about this for a headline:
"Journalism has lost its way"? The article could then simply be a list of all
the sensationalist, purposely misleading crap that's published in major
publications. Long list.

------
djillionsmix
This will continue because the tendency to make excuses for it is directly
proportional to the ability to do anything about it, same as with most/all
social ills.

------
gwu78
The article refers to a move several years ago by one biotech company, Amgen,
to attempt to validate the results of some well-known studies.

Where can we find the list of these studies?

~~~
nkurz
This information has not been released:

[http://www.nature.com/nature/journal/v485/n7396/full/485041e...](http://www.nature.com/nature/journal/v485/n7396/full/485041e.html)

The irony is thick:

1) They were unable to reproduce the experiments from the papers alone, and
this is considered normal.

2) The "scientists" required them to sign an NDA before helping them to
reproduce their published results.

3) We're left here drawing conclusions from anecdotal evidence from a study
that cannot be reproduced.

Luckily, science is self-correcting so we need not be overly concerned.

------
jpadkins
Reproduction of results sounds like a good area for AI and automation. It's
thankless, not very gratifying work, and largely mechanical (not creative).

It would be very cool if someone came up with a unit test framework for
various fields of science. Then we could make reproduction unit tests a
requirement of publishing, so anyone with the proper equipment/framework could
sync and run the tests themselves.
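A minimal sketch of what such a "reproduction unit test" might look like, in Python. Everything here is hypothetical: the class name, the API, and the example numbers are mine, not an existing framework. The idea is just that a published quantitative claim gets packaged with a tolerance and a callable that re-runs the measurement.

```python
class ReproductionTest:
    """A published quantitative claim plus a tolerance for re-running it.

    `run` takes any callable that re-executes the experiment (or drives the
    lab equipment / simulation) and returns the observed value, then checks
    it against the published figure within the stated tolerance.
    """

    def __init__(self, claim, expected, tolerance):
        self.claim = claim
        self.expected = expected
        self.tolerance = tolerance

    def run(self, measure):
        observed = measure()
        passed = abs(observed - self.expected) <= self.tolerance
        return passed, observed

# Hypothetical usage: a paper claims a reaction yield of 0.72 +/- 0.05.
# The lambda stands in for whatever re-runs the actual experiment.
test = ReproductionTest("reaction yield", expected=0.72, tolerance=0.05)
ok, seen = test.run(lambda: 0.69)
```

Anyone with the right equipment could then "sync and run" a journal's suite of these the way developers run a CI pipeline.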

------
mathattack
In the spirit of inquiry, I'm waiting for the other half of this story. These
articles seem sensationalist. So what's the catch? Does it have to do with
studies being pre-clinical and more likely to be wrong? Or is it that they're
being held to a different level of scrutiny? Or the evidence shows
correlation, but not statistical significance?

Or are things really this bad?

------
pbreit
Does this impact venues like HN and Wikipedia that value citations
substantially more than common sense?

~~~
tensor
No. Any attempt to do better than guessing should be valued above guessing.

------
return0
Why not allow comments from everyone? We all know that closed clubs lead to
politics. Many life scientists are extremely allergic to feedback, and they go
to great lengths to avoid scrutinizing their own or others' results.

------
anuraj
When you make publishing a certain number of papers mandatory and tie it to
tenure, it is inevitable that quality will suffer. We need to rethink
quantitative metrics and move to more quality-oriented ones.

------
pvdm
Society has lost its way.

------
RA_Fisher
I would highly recommend that anyone with further interest check out the book
"The Cult of Statistical Significance." It's eye-opening.

------
jgamman
Science (management) has lost its way, at a big cost to humanity

------
tpainton
climate change.

