
Robust research needs many lines of evidence - lainon
https://www.nature.com/articles/d41586-018-01023-3
======
nonbel
This paper really sounds like it advocates the status quo of attempting to
take the "sum of the evidence", which usually leads to the generation of many
different weak (low sample size, etc.) lines of evidence. They don't seem to
get that what made science so successful was its ability to generate simpler
models of phenomena. To do this you need to come up with (most often
quantitative) theories that can synthesize the various lines of evidence.

The practice of "counting heads" seen in fields like medicine is totally
inappropriate (e.g. 2 out of 3 results support the theory, so it looks good).
The correct thing to do is more like what is seen in physics.

E.g., take the idea that "cancer is due to mutations that accumulate as cells
divide, therefore cancer rates should be related to the number of divisions and
the mutation rate." The status quo is to do something like:

      1) "cancer is positively correlated with number of divisions"
      2) "cancer is positively correlated with mutation rate"
      3) "cancer is positively correlated with number of accumulated mutations"

Instead, we should say "this means the probability of accumulating at least
_n_ mutations after _d_ divisions given mutation rate _r_ should follow a law
similar to":

       p(x >= n) = (1 - (1 - r)^d)^n

The two approaches are similar in that you measure mutation rate, number of
divisions, accumulated mutations and frequency of cancer cells. The difference
is that for the latter approach all the lines of evidence combine in a
_nonlinear_ fashion.

This method allows a myriad of factors, such as genetic predisposition,
exposure to toxins, etc., to be incorporated via a single value (mutation
rate). So rather than an ever-expanding apparent complexity as more factors
are incorporated, there is an accumulation of knowledge as we estimate the
model parameters more and more accurately and either reject, modify, or keep
our models of the phenomenon.
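The toy model above can be sketched in a few lines of Python. This is only an illustration of the commenter's point, not the article's method; the function name and the parameter values (n=5 driver mutations, r=1e-4 per division) are made up for the example. It treats each of the n required mutations as independently arising with probability 1 - (1 - r)^d over d divisions:

```python
def p_all_n_mutations(n, d, r):
    """Toy model: probability that each of n specific mutations
    has occurred after d cell divisions, given a per-division
    mutation rate r. Each mutation independently arises with
    probability 1 - (1 - r)^d, so requiring all n of them
    gives (1 - (1 - r)^d)^n."""
    return (1.0 - (1.0 - r) ** d) ** n

# The lines of evidence combine nonlinearly: here, doubling the
# number of divisions raises the predicted probability by far
# more than a factor of two.
p1 = p_all_n_mutations(n=5, d=1000, r=1e-4)
p2 = p_all_n_mutations(n=5, d=2000, r=1e-4)
print(p1, p2, p2 / p1)
```

Because divisions, mutation rate, and mutation count all enter one formula, a single measured quantity constrains the others, which is exactly the synthesis that a list of separate correlations lacks.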

~~~
SiempreViernes
Which paper are you talking about?

Do you actually mean the linked article? Because it contains the following:

> But replication alone will get us only so far. In some cases, routine
> replication might actually make matters worse. Consistent findings could
> take on the status of confirmed truths, when they actually reflect failings
> in study design, methods or analytical tools.

This is a strong criticism of the "counting heads" method you also object
to.

~~~
nonbel
Look at their example.

Theory: _"One example in which triangulation has helped is in establishing
that smoking during pregnancy results in babies with lower birth weights"_

Lines of evidence:

1) _"women who smoke are more likely to have babies who weigh less"_

2) _"if a woman’s partner smokes during her pregnancy, many of the same
confounders apply as in maternal smoking, but the association with lower birth
weight is much weaker"_

3) _"Birth weight can also be analysed according to levels of cigarette
taxation across US states, which reduces the effects of confounders."_

4) _"the birth weights of siblings whose mother smoked during one pregnancy
but not another."_

5) _"In cohorts grouped according to whether or not people carry a genetic
variant associated with greater cigarette consumption in those who smoke,
mothers who smoke and carry the variant tended to have babies who weighed
less; non-smokers with the same variant did not."_

Conclusion: _"Taken together, these studies make it clear that maternal
smoking affects birth weight directly"_

This looks exactly like what I described as "counting heads". There is no
synthesis of this evidence, just a disconnected list of things (and who knows
whether contrary evidence has been left out or never checked).

------
SubiculumCode
I wanted to say that converging evidence is highly valued in neuroscience
already. We know every single study is flawed in some way, but if you attack
the same problem from three angles and get similar answers, you might really
have something.

~~~
mturmon
Yes, you can see this in astronomy and planetary science as well. As a basic
example, the Wikipedia pages on dark matter and dark energy cite various
sources of converging evidence. In planetary science, one example is the
multiple lines of evidence that there were once significant water flows on
Mars.

------
aaavl2821
Often the first thing companies do when they license tech from academia is try
to reproduce the findings in a rigorous setting, rather than jumping straight
into novel work. Behind closed doors, big pharma scientists complain about the
quality of academic work, even work published in leading journals.

This problem is real and is probably not going to be addressed within the
academic community. However, as more scientists realize that the path to
running your own lab and directing your own research agenda lies in working at
startups rather than in academia / big pharma, hopefully the way science is
done will change

In CA at least, venture capital funds more life science R&D than do NIH
dollars, and probably more than big biotech (Genentech / Amgen / Gilead etc)

~~~
epistasis
There have been public complaints about this too, such as the "cancer
findings don't reproduce" editorial (where "reproduce" was defined as
"generalize") that got lots of press five years ago. My best understanding of
the situation is that big pharma and academic research are working at cross
purposes. Big pharma wants something that generalizes and can lead to drugs
that hit almost all of the target population. Research emphasizes novelty and
finding new stuff, which most often won't generalize.

Chasing the findings in "leading journals" is going to escalate this
disconnect. Leading journals select for novelty and surprise, and these are
exactly the findings that are least likely to generalize.

I'd love to see how and where VC is funding life science research. It seems
unlikely that the types of projects that VC funds would overlap the type of
research that NIH funds, but I would love to be wrong on that!

~~~
aaavl2821
It's definitely true that big pharma and academic research are working toward
divergent purposes, which isn't always a bad thing, but I agree that when
quality is sacrificed for "publishability", no one wins.

A more somber issue with reproducibility is that you often simply can't
replicate a study's results independently even if you do it the exact same
way. Not only is the finding not generalizable, it isn't even robust in the
specific case. A lot of this is just due to chance, but in the worst
instances it happens because researchers are able to bias experiments to get
a result. There are so many tiny details of experimental design and execution
that influence results, and a lot of these don't end up in the "methods"
section of a publication.

Some big pharma companies and startups are actually starting to look at
specific rather than general populations (the whole "precision medicine"
thing), which is good. This lowers revenue potential, but decreases the risk
of clinical-trial failure, because you are only targeting a group of patients
where you have high-ish confidence it will work.

VC is increasingly funding work that would otherwise be done in academia.
Biotech startups have been so successful in recent years that investors are
taking on more risk, which means earlier-stage, bolder science, and this
overlaps with some academic work. Most VC-funded work is translational in
nature, though, i.e. projects in between academia and big pharma. However,
only a few "profitable" diseases get funded, like cancer and orphan diseases;
very little is going on in psychiatric disease, addiction, heart disease, or
diabetes.

