
The Neutral Theory in Light of Natural Selection - ArtWomb
https://academic.oup.com/mbe/article/35/6/1366/4990884
======
NPMaxwell
Wikipedia explanation of Neutral Theory: "most evolutionary changes and most
of the variation within and between species is not caused by natural selection
but by genetic drift of mutant alleles that are neutral. A neutral mutation is
one that does not affect an organism's ability to survive and reproduce. The
neutral theory allows for the possibility that most mutations are deleterious,
but holds that because these are rapidly removed by natural selection, they do
not make significant contributions to variation within and between species at
the molecular level. Mutations that are not deleterious are assumed to be
mostly neutral rather than beneficial. In addition to assuming the primacy of
neutral mutations, the theory also assumes that the fate of neutral mutations
is determined by the sampling processes described by specific models of random
genetic drift."
https://en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution

~~~
nwatson
The article is long and I don't have time to read it, but it seems to imply
that "neutral" mutations aren't driving the split between species.

I would have thought, though, that for a given species in a given timeframe,
facing many-dimensional gradient pressure for "fitness":

- few outlandishly positive mutations would crop up

- natural selection would quickly weed out deleterious mutations (the article
addresses this)

- over time, a very large number of "neutral" mutations, and therefore neutral
variations within the species, would accumulate, since their effect is
completely orthogonal to the current pressures the species faces (the article
implies that this is not relevant for evolution; however, I think it would be
extremely relevant, as I describe below ...)

The "neutral" mutations would form the "library" of variations that would
become relevant at a LATER time when conditions change for a subset of the
population of the species and a different gradient becomes relevant. Some of
the previously neutral changes would become deleterious in the new setting,
while others would provide outright advantages. It is these surviving
mutations that would give the sub-group of the original species its new
identity.

Conclusion: I would think that the mutations that drive the new species'
initial offshoot would come from the inherent variation built up by the prior
neutral mutations ... i.e., the evolution was already primed by the prior
library of previously neutral mutations and not something that happened solely
after isolation and changes in conditions.

EDIT: wording

------
ArtWomb
It's amazing to think that the first-principles mechanisms which trigger
mutations at the genetic level are still shrouded in mystery. Cold War era
research sought to establish the effects of radiation exposure in pursuit of
the question: can our species survive the fallout of a general nuclear
exchange? More recent imaging techniques have actually been able to visualize
the nucleotide "wobble" that results in replication errors. But the
environmental information that directly codes for beneficial advantage seems
to be a deep secret. And one that could benefit from active inquiry ;)

~~~
simonh
>But the environmental information that directly codes for beneficial
advantage seems to be a deep secret.

How can any natural process directly code for beneficial advantages? Isn’t
whether or not it turns out beneficial determined by selective pressures, not
the mechanism causing genetic change?

------
kurthr
Abstract:

We evaluate the explanatory power of the neutral theory of molecular
evolution, 50 years after its introduction by Kimura. We argue that the
neutral theory was supported by unreliable theoretical and empirical evidence
from the beginning, and that in light of modern, genome-scale data, we can
firmly reject its universality. The ubiquity of adaptive variation both within
and between species means that a more comprehensive theory of molecular
evolution must be sought.

------
nonbel
>"Furthermore, the neutral theory is not merely a neutral model, to be used as
a null hypothesis against which more interesting hypotheses can be tested."

What possible scientific purpose would this serve? Why not just test the
interesting hypothesis and leave out the strawman?

~~~
ssivark
When the model predictions are deterministic and easy to observe, the null
hypothesis is not needed. When model predictions are statistical, then, you
want to answer the question: how "likely" is this model, given the data?
Answering that with a probability (as would be done in a Bayesian approach) is
hard because there is no natural "space of models" over which to compute that
probability. So, in the frequentist approach, you check whether the data is
sufficiently more likely to have resulted from your model, rather than the
null hypothesis model. The "sufficiently more likely" is quantified by a
p-value.
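
To make that concrete, here is a minimal sketch in R (my own illustration,
with invented counts, not anything from the article): binom.test computes the
p-value of an observed head count under a fair-coin null hypothesis.

      # Null hypothesis: the coin is fair (p = 0.5).
      # Suppose we had observed 287 heads in 500 flips (invented numbers).
      binom.test(x = 287, n = 500, p = 0.5)
      # The p-value is the probability, under the fair-coin null, of a
      # result at least as extreme as 287. A small value counts against
      # the null; it is not the probability that some alternative is true.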

~~~
nonbel
>"When model predictions are statistical, then, you want to answer the
question: how "likely" is this model, given the data?"

Huh? Here is a model with statistical predictions. It's a model of flipping a
fair coin 20 times a day for 365 days:

    
    
      p       = 0.5
      nDays   = 365
      nPerDay = 20
      dat = rbinom(nDays, nPerDay, p)
    
      hist(dat)
    

https://image.ibb.co/bwuYCT/flips.png

Here is how I compare theoretical to actual results:

    
    
      > mean(dat)
      [1] 10.06575
      > var(dat)
      [1] 5.479181
      > 
      > nPerDay*p
      [1] 10
      > nPerDay*p*(1-p)
      [1] 5
    

>"So, in the frequentist approach, you check whether the data is sufficiently
more likely to have resulted from your model, rather than the null hypothesis
model. The "sufficiently more likely" is quantified by a p-value."

A p-value says something about the fit of the model used to calculate it. It
doesn't say anything about any other models.
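
A quick sketch of this point in R (my illustration, not from the thread): the
same data can be unsurprising under two different models at once, and each
p-value speaks only to the model it was computed from.

      # 11 heads in 20 flips is unsurprising under a fair coin...
      binom.test(11, 20, p = 0.5)$p.value    # large
      # ...and also unsurprising under a 0.55-heads coin:
      binom.test(11, 20, p = 0.55)$p.value   # also large
      # Failing to reject one model says nothing about the other.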

~~~
ssivark
Your analysis tells us nothing about whether the fair-coin model is confirmed.
You see a deviation of 0.06 in the mean and 0.47 in the variance. Is that
negligible? Imagine you had much less data, i.e. nDays=5. What if your sample
had a mean of 10.6 and a variance of 4.8? Could you confirm or rule out the
fair-coin model? Or confirm or rule out the 0.6-heads-0.4-tails model? Within
the frequentist framework, you'd need a null hypothesis to answer that
question.
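
As a rough check of that small-sample intuition, here is a simulation sketch
in R (mine, not from the thread): how often does each model produce a five-day
sample mean of 10.6 or higher?

      nPerDay = 20
      nDays   = 5
      nSim    = 1e5

      meansFair   = replicate(nSim, mean(rbinom(nDays, nPerDay, 0.5)))
      meansBiased = replicate(nSim, mean(rbinom(nDays, nPerDay, 0.6)))

      mean(meansFair   >= 10.6)   # roughly 0.3: the fair coin is not ruled out
      mean(meansBiased >= 10.6)   # roughly 0.9: neither is the 0.6 coin
      # Both models comfortably produce this sample, so five days of data
      # cannot separate them without a formal test.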

Regarding p-value, I agree I could have phrased my comment better.

In the frequentist framework, there is no way to confirm a model. The only way
to arrive at a model is to falsify all “simpler” models one can think of, that
might possibly have explained the data.

~~~
nonbel
>"Your analysis tells nothing about whether the model of fair coin being
flipped is confirmed. You see a deviation of 0.06 in the mean and 0.47 in the
variance. Is that negligible?"

The easiest way is to just run the simulation thousands or millions of times,
then check how often the observed results occur. E.g., using R again to get
distributions of the mean and variance under the model and computing the
one-sided p-values:

      p       = 0.5
      nDays   = 365
      nPerDay = 20

      obs   = rbinom(nDays, nPerDay, p)                   # the "observed" data
      sim   = replicate(1e3, rbinom(nDays, nPerDay, p))   # 1000 simulated datasets
      stats = apply(sim, 2, function(x) c(mean(x), var(x)))

      mean(stats[1,] > mean(obs))   # one-sided p-value for the mean
      mean(stats[2,] > var(obs))    # one-sided p-value for the variance

>"In the frequentist framework, there is no way to confirm a model. The only
way to arrive at a model is to falsify all “simpler” models one can think of,
that might possibly have explained the data."

That isn't frequentist. It's the so-called "NHST hybrid", which, it seems,
confused stats-101 authors in the 1930s-1940s cobbled together from the ideas
of Ronald Fisher, Jerzy Neyman, and Egon Pearson. This is a decent intro to
the topic, but there are thousands of papers on it:
http://library.mpib-berlin.mpg.de/ft/gg/GG_Mindless_2004.pdf

~~~
ssivark
Thanks for the Gigerenzer article. While he points out unfortunate problems
with the way NHST is commonly (and mindlessly) used, none of that changes the
fact that in many cases there is a natural null hypothesis arising from an
application of Occam's razor (which was the point of the response to the OP's
question).

~~~
nonbel
Great to hear! Here is another you may find interesting:

Meehl, Paul E. (1967). Theory-testing in psychology and physics: A
methodological paradox. Philosophy of Science, 34(2), 103–115.
http://www.jstor.org/stable/10.2307/186099

