
John Ioannidis has dedicated his life to quantifying how science is broken - based2
http://www.vox.com/2015/2/16/8034143/john-ioannidis-interview
======
closed
> Currently we have peer review done by a couple of people who get the paper
> and maybe they spend a couple of hours on it. Usually they cannot analyze
> the data because the data are not available – well, even if they were, they
> would not have time to do that. We need to find ways to improve the peer
> review process and think about new ways of peer review.

The whole article is worth reading, but this stood out to me. I've never felt
satisfied with the review process, even when it's been very friendly toward a
paper. I think it's the speed of the process: wait a couple of months for
reviews, send a response a few weeks or months later. In contrast, Twitter
seems like a nice place to get things churning. I realize these are two
different processes, and a person could throw a paper that's in review into
the Twittersphere. I guess I'm interested in seeing where the process goes
(and what events precipitate a change to the traditional review structure).

------
tokenadult
This is a good overview of the research work and personality of Dr. Ioannidis.
He looks forward to improvements in the process of science, as this interview
transcript reports:

"Q: Are you optimistic or pessimistic about the direction science is going in?

"John Ioannidis: I am optimistic. I think that science is making progress.
There’s no doubt about that. It’s just an issue of how much and how quickly."

Not mentioned in the article kindly submitted here, but likely to be of
interest to many readers of Hacker News, is the site PubPeer[1] where you can
search by topic for discussions among researchers (some named, some anonymous)
about scientific papers that may have problems. Some papers have been
retracted after PubPeer discussion revealed problems with the underlying data
for study results.

[1] [https://pubpeer.com/](https://pubpeer.com/)

[http://retractionwatch.com/2015/08/31/pubpeer-founders-reveal-themselves-create-foundation/](http://retractionwatch.com/2015/08/31/pubpeer-founders-reveal-themselves-create-foundation/)

------
bainsfather
The main problem is the incentives. A researcher's future employment is
dependent on their list of publications.

The question that researchers face is not 'how best can I contribute to
science?' but is instead 'what can I get published?'. Anyone
naive/stubborn/foolish enough to focus on the former question will be weeded
out by the system, unless they are exceptionally good.

We shouldn't be surprised by the outcomes we see: 'p-value hacking', accepting
significant results straight away while fiddling with the results you don't
like until they become publishable, not wanting to 'waste time' demonstrating
that someone else's research isn't really reproducible, not publishing
negative (i.e. unsuccessful) results, and so on.
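The "fiddle until it's publishable" effect is easy to demonstrate with a simulation. Here's a minimal sketch of the optional-stopping flavor of p-value hacking; the test statistic, sample sizes, and threshold are all illustrative choices, not anyone's actual protocol:

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided two-sample z-test p-value (normal approximation)."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def peek_until_significant(rng, start_n=10, max_n=60, alpha=0.05):
    """Add one observation per group at a time, re-testing after each,
    and stop as soon as p < alpha.  Both groups are drawn from the SAME
    distribution, so every 'significant' result is a false positive."""
    a = [rng.gauss(0, 1) for _ in range(start_n)]
    b = [rng.gauss(0, 1) for _ in range(start_n)]
    while True:
        if two_sample_p(a, b) < alpha:
            return True  # "significant" -- write it up
        if len(a) >= max_n:
            return False
        a.append(rng.gauss(0, 1))
        b.append(rng.gauss(0, 1))

rng = random.Random(0)
trials = 1000
rate = sum(peek_until_significant(rng) for _ in range(trials)) / trials
print(rate)  # far above the nominal 5%
```

Even though every single test uses the conventional 0.05 threshold, repeatedly peeking and stopping on the first "hit" inflates the false positive rate several-fold.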

I wish I had an easy solution for this problem, but I do not see one - the
hiring metric of 'how many publications do you have? which journals?' is a
nice simple one that administrators and funding agencies like to use, for all
that it leads to bad incentives and actions.

~~~
soyiuz
The "how can I get published" part of the incentive is fine. The problem is
that the journal infrastructure does not always embody values that lead to the
"best science." Things could be fixed at the systems level, with better peer
review, more stringent requirements for replication, etc.

------
nonbel
I take issue with counting as science research that is non-reproducible (even
in principle) and non-prediction-generating (it just "rejects" a null
hypothesis without specifying a precise research hypothesis). If anyone has
missed Feynman's discussion of cargo cult science, check it out, because it
describes what is going on pretty well:
[http://calteches.library.caltech.edu/51/2/CargoCult.htm](http://calteches.library.caltech.edu/51/2/CargoCult.htm)

It is really something very different from the method that gave us the
benefits we see all around today. For example, the cancer reproducibility
project had to drop 1/4 of the studies before even trying because it was so
difficult to get the required protocols and materials:
[https://news.ycombinator.com/item?id=10687879](https://news.ycombinator.com/item?id=10687879)

"Research", OK. But "science"? I would prefer to reserve that word for
something better: something better than a group of people publishing reports
that lack necessary methodological information amounting to two weeks' worth
of full-time work.

~~~
stdbrouw
Feynman is great, and I'd add Meehl's paradox:
[http://www.fisme.science.uu.nl/staff/christianb/downloads/meehl1967.pdf](http://www.fisme.science.uu.nl/staff/christianb/downloads/meehl1967.pdf)

A lot of the research that relies on null hypothesis significance testing
really is lousy. But it is not just a matter of sloppiness, it's also a
function of the area of study. For example, there is a lot of evidence that
people who had a bad childhood and little economic opportunity are more likely
to commit crime. Is the study of that phenomenon not science just because a
sociologist cannot mechanistically predict that if your mother didn't read you
bedtime stories, you are guaranteed to rob a neighborhood store at the age of
23.2 years? Is research into cancer drugs not science because we can't predict
beforehand whether the drug will help a little or a lot, even though we know
of the causal mechanism that should make the drug effective?

~~~
nonbel
>"Is research into cancer drugs not science because we can't predict
beforehand if the drug will help a little or a lot, even though we know of the
causal mechanism that should make the drug effective?"

It is about the methods used rather than the level of understanding. Science
is about people performing independent replications of each other's
measurements to ensure they are reliable and to identify important confounds,
then coming up with models that explain these reliable observations.

The issue with cancer research right now appears to be that it is common not
to publish even enough information to do the replications, let alone actually
perform them. Once we had some data to "hang our hat on", then it would be
time to try to explain it quantitatively. If you can't predict whether a drug
will help a little or a lot, I am extremely doubtful that the causal mechanism
is known.

------
danharaj
Maybe there should be peer review conferences comprising technical workshops
where researchers methodically go through chosen results in fine detail. If
peer review is so important to how science gets done without going off the
rails then it shouldn't be hidden away in an editorial backroom.

------
darawk
I see prior probabilities talked about often in relation to scientific studies
lately. But is it really appropriate to assign a prior probability to
something that is already true or false?

The theory's truth or falsehood is a fact in the world that exists
independently of any of our models. Does it really have a 'prior probability'
of being true or false? And if so is there really any sense in which we could
realistically assign it one? Should we look at the history of the field or the
researcher?

~~~
stdbrouw
Think of a prior as what's plausible or not plausible given the current state
of our knowledge, rather than as an objective state of the world.

Mind you, that doesn't mean priors are necessarily subjective.

In medicine, we often have information about the prevalence of diseases, which
gives us a prior that we can take into account when looking at the result of a
test: if someone tests positive on an incredibly rare disease, chances are the
test is wrong because the prior probability of having the disease is so
exceedingly low.
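That base-rate effect falls straight out of Bayes' rule. A minimal sketch, with hypothetical numbers for the prevalence and the test's accuracy:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prior            # sick and correctly flagged
    false_pos = (1 - specificity) * (1 - prior)  # healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: a disease affecting 1 in 10,000 people,
# detected by a test with 99% sensitivity and 99% specificity.
p = posterior(prior=1e-4, sensitivity=0.99, specificity=0.99)
print(round(p, 4))  # → 0.0098
```

Even with a 99%-accurate test, a positive result leaves under a 1% chance of actually having the disease, because the false positives from the huge healthy population swamp the true positives.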

In psychology, if you read research that claims to have found statistically
significant evidence for extrasensory perception, you might not believe that
straight away, because it contradicts everything we know about the physical
universe and thus is a priori very improbable.

In marketing, if you're doing an A/B-test where you change some body copy, in
the very best case you might get a 100% increase in conversion, but certainly
not a 100,000,000% increase. A prior can be as subtle as "no effect is a tiny
bit more likely than an effect of 10^12" and this can be enough to get better
estimates when you have very little data.
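A weak prior like that can be sketched with a Beta prior on the conversion rate. The specific numbers below are illustrative: Beta(2, 98) stands in for "rates around 2% are plausible, astronomical ones are not":

```python
def posterior_mean(conversions, visitors, prior_a=2.0, prior_b=98.0):
    """Posterior mean of a conversion rate under a Beta(prior_a, prior_b)
    prior.  The prior acts like prior_a + prior_b pseudo-observations."""
    return (conversions + prior_a) / (visitors + prior_a + prior_b)

# With only 10 visitors and 3 conversions, the raw estimate is 30%,
# but the posterior mean is pulled back toward the ~2% baseline.
raw = 3 / 10
shrunk = posterior_mean(3, 10)
print(raw, round(shrunk, 3))  # → 0.3 0.045
```

With lots of data the prior washes out; with ten visitors it dominates, which is exactly the "better estimates when you have very little data" effect.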

Your interpretation is not unusual, though: in frequentist statistics,
parameters are considered fixed values and as such not something you can
assign a probability to.

~~~
darawk
Yeah, I suppose my point is that such assignments are inherently subjective
and derived from limited data (except in some limited cases, e.g. disease
_screening_, though not disease testing in general).

Consider the theory that there is an unidentified teapot floating around the
moon. What prior would you assign that? 1x10^-20? What if I then told you that
a teapot was on board the space shuttle during the first moon landing? Maybe
then the exponent rises to -10. What if I then further told you that when it
came back, that teapot was unaccounted for? Now, all of a sudden, what seemed
wildly improbable actually seems remotely possible. Not because anything
changed in the world, mind you, but because your knowledge of the world became
more complete.
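The updating in the teapot story is just repeated multiplication of the prior odds by likelihood ratios. A sketch in those terms; every number here is made up purely for illustration:

```python
# Prior odds of a teapot orbiting the moon: roughly 1 in 10^20.
prior_odds = 1e-20

# Each piece of evidence multiplies the odds by a Bayes factor:
# how much more likely that evidence is if the teapot is really there.
bayes_factors = [
    ("a teapot was on board the mission", 1e10),
    ("that teapot was unaccounted for afterwards", 1e6),
]

odds = prior_odds
for evidence, bf in bayes_factors:
    odds *= bf
    print(f"after learning {evidence!r}: odds = {odds:.0e}")

posterior_probability = odds / (1 + odds)
print(posterior_probability)  # still tiny, but 16 orders of magnitude larger
```

Nothing about the moon changed between the three estimates; only the knowledge being conditioned on did.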

Particularly in the case of a scientific theory, it seems almost completely
impossible to adequately estimate the probability of its truth prior to
testing. For instance, if I were to propose that conservation of energy is
incorrect, most people would assign that an extraordinarily low prior. But
would they really be justified in so doing? Is there _really_ evidence that
conservation of energy is a fundamental property of the universe, or does
there just happen to be a lot of stuff that we've observed that comports with
it? I suppose I'm essentially making Plato's cave argument here.

------
davidgerard
Great article, but a year old - please label "(2015)"

------
rl3
I'd argue the research paper format itself is a significant part of the
problem. It's simply far too verbose.

~~~
pavpanchekha
I can't recall the last paper that I had to fluff up to fill all 10 or 12
pages; nor do I remember any colleagues doing anything like that. In truth,
every paper halves in size between the first good draft and the submission.
Writing a good paper can take a month, and involve writing dozens of drafts (I
wrote a post about this at [https://pavpanchekha.com/blog/paper-section-stats.html](https://pavpanchekha.com/blog/paper-section-stats.html)).

Now, you might say that the paper should contain less content. But what do you
want to chuck? The technical sections are the whole point of the paper, and
they're already squeezed to fit. Evaluation? But that only _hurts_ the causes
in this article. And the introduction, background, overview, and related work
sections are frequently described by non-specialists as the most useful parts
of a paper.

Here's a paper I'm very proud of:
[http://herbie.uwplse.org/pldi15.html](http://herbie.uwplse.org/pldi15.html)

What should I have cut?

~~~
rl3
> _What should I have cut?_

Nothing, because what you wrote conforms to expectations of the existing
system. It's not your fault it works that way.

I'm just saying that, as far as the exchange of scientific knowledge is
concerned, having everyone labor away on their respective research papers, and
then spend inordinate amounts of time consuming others' papers, probably isn't
the most efficient or effective system.

~~~
stdbrouw
Why? What would be more efficient or effective?

~~~
rl3
[https://news.ycombinator.com/item?id=2425823](https://news.ycombinator.com/item?id=2425823)

------
aflyax
The simple truth is that science is a branch of entertainment industry. It’s
part of the bread-and-circuses program funded by the US government. That’s all
there is to it.

If science were solely funded by private entities, the situation would be
different. (And that includes private grants whose sole purpose is not to make
a profit but to fund fundamental science. Still, they should be private. There
should be many of them. And they should compete for their sponsors' private
donations by demonstrating regular quality control. When you have a monopoly,
the product always sucks compared to a non-monopoly alternative. Basic
economics.)

~~~
reitanqild
If you were to say 'parts of science', I would ask you: which parts?

When you say "...science is a branch of entertainment industry", it seems you
are just in the wrong forum.

