
Evolution Is the New Deep Learning - jonbaer
https://www.sentient.ai/blog/evolution-is-the-new-deep-learning/
======
mav3r1ck
Having studied this extensively back when they were called Genetic Algorithms,
I would like to offer a few insights.

1) One of the biggest reasons they fell out of favor for more "mathematical"
approaches was that no one could really explain why exactly they worked. It
makes sense on the surface that "survival of the fittest" and doing something
akin to multiple stochastic gradient descents would work, but no one has
really been able to produce a mathematical proof as to why.

Since other folks are producing good examples of "explainable AI", I don't
know how Genetic Algorithms/programming could be made 'explainable' as to why
they achieved an optimal solution other than hand-waving to how evolution
works in nature.

2) The most important thing to define is the fitness function; it determines
what the search space looks like and how easily a globally optimal solution
can be found (see the sketch below). For a good example of an interesting
search space that a genetic program would have a difficult time with, see
Schwefel functions [0]. Back when I researched these things closely, my
intuition was that reality rarely fits neatly into good fitness functions, and
I felt that by the point you understand the problem well enough to define one,
you may just be better off with a direct approach, which leads to

3) Genetic programming should only really be considered when there are no
known alternatives or they are way too computationally expensive.

In either case, I would welcome a resurgence in a topic I once knew quite
well, though I haven't been in that field for a few years now.

[0] https://jamesmccaffrey.files.wordpress.com/2011/12/schwefelsfunction.jpg
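
To make point 2 concrete, here is a minimal sketch (my own illustration, not from the article; every parameter choice is arbitrary) of a toy GA whose fitness function is the Schwefel function from [0]. The deceptive local minima of that landscape are exactly why the fitness function, not the GA machinery, is the hard part:

```python
import math
import random

def schwefel(x):
    # Schwefel function: global minimum near x_i = 420.9687, with many deceptive local minima
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def fitness(ind):
    # The fitness function *is* the search landscape; here, lower Schwefel value = fitter
    return -schwefel(ind)

def mutate(ind, sigma=30.0, low=-500.0, high=500.0):
    return [min(high, max(low, xi + random.gauss(0, sigma))) for xi in ind]

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(dims=2, pop_size=100, generations=200):
    pop = [[random.uniform(-500, 500) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]  # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(best, schwefel(best))  # may settle in a deceptive local minimum far from (420.97, 420.97)
```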

~~~
resu_nimda
_1) One of the biggest reasons they fell out of favor for more "mathematical"
approaches was that no one could really explain why exactly they worked._

Kind of like how nobody can really explain how the brain works, or life in
general. My gut feeling is that it is hubris to think that we are going to
"figure out" intelligence with increasingly sophisticated mathematical models
anytime soon. We are not giving proper credit to how complex it is, and the
multi-billion year developmental process that it took. We think we can just
short-circuit that with some fancy math because we've had success with
planetary orbits and other comparatively rudimentary phenomena.

The current industry approaches are great for extracting certain kinds of
value out of large data sets, but in terms of producing a result that could
even begin to be considered as interesting as life (i.e. AGI or "strong AI"),
I believe we will have to rely on creating a system whose inner workings are
too complex for us to understand.

In other words, going off of Arthur C Clarke's definition, life is magic. And
we're trying to create something equally magical. Almost by definition, if we
can analytically understand it, it's not going to be interesting enough.

~~~
pas
> We are not giving proper credit to how complex it is, and the multi-billion
> year developmental process that it took.

Or we are simply not ready to accept that it's simply a big book of heuristics
fine-tuned over biological eons.

It's just big. We have too many interwoven, interdependent, synergistic
faculties. Input, output, and a lot of mental stuff for making the right
connections between the ins and the outs. Theory of mind, basic reasoning, the
whole limbic system (emotions, basic behavior, dopaminergic motivation), the
executive functions in the prefrontal cortex, all are very specialized things,
and we have a laundry list of those, all fine-tuned for each other.

And there's no big magic. Nothing to "understand", no closed formula for
consciousness. It's simply a faculty that makes the "all's good, you're
conscious" light go green, and it's easy to do that once all the other
machinery that does the heavy lifting of making sense of reality is working well.

~~~
resu_nimda
Your last paragraph seems to contain the kind of overconfidence that I'm
talking about. I don't understand how you can say "consciousness is simply X"
or "it's easy to do that [if you handwave away the hard parts]." Clearly it's
not that simple or easy, or we would have done it.

We can't even create life from non-life. How can we begin to understand all
the stuff you're talking about that's been layered on top? We don't understand
this stuff well enough to just handwave it away as unimportant or trivial.

~~~
pas
I'm assuming a simpler model, no need for magic, because so far I don't see
what behavior/data this simple model cannot explain.

> Clearly it's not that simple or easy, or we would have done it.

We don't have the computational power yet. Not to mention the vast amount of
development required. Think of climate models, which are huge (millions of
lines of code) but still nowhere near complete enough, and they only
have to model sunlight (Earth's rotation, orbital position, albedo), clouds,
flows (winds and currents), some topography (big mountains, big flats), ice
(melting, freezing), some chemistry (CO2, salts). And they only have to match
a simple graph, not the behavior of a human mind (e.g. the Turing test).

So, it's not easy, even if simple.

> We can't even create life from non-life.

We understand life. Cells, RNA, DNA, proteins, mitochondria, actins, etc. It's
big, it's a lot of moving parts, and we understand it, but we can't just pop a
big chunk of matter into an atomic assembler and make a cell.

And I think intelligence/sentience is similar. It's big, not magic.

~~~
textor
> We don't have the computational power yet

Surely you realise that this has been a moving goalpost for half a century? It
seems that lately people have started to avoid giving a concrete estimate of
the power required, though. It was so easy in the '90s! «The human
visual/verbal system processes gigabytes per second / has n flops to the xth»
or the like. Well, now we have that and more; how is it that a deeper modeling,
a finer processing, a more complicated network is now needed?

And your examples are incorrect; for example, the Navier-Stokes equations plus
some general physics knowledge have always allowed us to estimate how much data
we need for a given fidelity of a finite-term weather forecast. Certainly we
need more for a complete climate model, but we know what we need. No such thing
can be said about the brain.

It's an easy way to score some rationality points by voicing rejection of
"magic", but it's a strawman. Nobody will bother arguing for a mythical
homunculus in the seat of the soul, nor even for a concise formula summing up
the workings of the mind. Pick harder targets. "It's just big" or "it's just a
bunch of heuristics cobbled together" is a non-explanation. The brain is not a
Rube Goldberg machine that manages to produce any sort of work simply due to
its excessive complexity – it is energetically economical, taking into account
that neurons are living cells that need to sustain their metabolism and not
merely "compute" when provided with energy. Its discrete elements aren't
really small by today's standards, nor are they fast. The number of synapses
is ridiculous, but since they aren't independent, at a glance it doesn't add
that much complexity either (unless we abandon reason and emulate everything
close to the physical level).

Yet we have failed to realistically emulate a worm. By all accounts we have
enough power for 302 neurons already. There's no workload we could give that
would overwhelm the available supercomputers. It's knowledge and understanding
that we lack, and it's high time to give up on the delusion that more power,
naturally coming in the future, will somehow enable the creation of a
predictive brain model, for this would truly be magic.

~~~
pas
I know that people have constantly underestimated the required computing power,
as finer and finer details of the brain and cognition are unraveled. That
doesn't make my argument invalid. I don't think we need to do a full brain
emulation; that's the worst-case scenario.

We're getting pretty good at computer vision; what's lacking is the backend
for reasoning, for generating the distributions for object segmentation and
scene interpretation. Basically the supervisor. (Unsupervised learning of
course just means that the supervision and goal/utility functions are
external/exogenous to the ML system, such as natural selection in the case of
evolution.)

My example illustrates that, yes, we can give an upper bound for molecule-by-
molecule climate modeling, but that's just a large exponential number and not
interesting. What we're interested in is useful approximations, which are
polynomial, but being models they need a lot of special treatment for the edge
cases. (Literally the edges of homogeneous structures, like ice-water-air,
water-air, water-land, and air-land [mountains, big flats, etc] interfaces.
And the second-order induced effects, like currents, and so on.) That means
precise measurements of these effects, and modelling them. (Which would be
needed anyway, even if we were to go back to basics with an N-S hydrodynamics
model, as there are a lot of parameters to fine-tune.)

For the brain we know the number of neurons, the firing activity, the
bandwidth of signals, etc. We can estimate the upper limit in information
terms, no biggie, but that doesn't get us [much] closer to the requirements
of a realistic implementation.

> Yet we have failed to realistically emulate a worm.

[http://openworm.org/getting_started.html#goal](http://openworm.org/getting_started.html#goal)
seems to be a matter of time, not a lack of understanding. (
[https://github.com/openworm/OpenWorm#quickstart](https://github.com/openworm/OpenWorm#quickstart)
) But maybe I'm not up to date on the issues.

> it's high time to give up on the delusion that more power, naturally coming
> in the future, will somehow enable a creation of predictive brain model, for
> this would truly be magic.

a) people have been saying exactly this for years, that we have enough data
already and we need better theories/models

b) they fail to accept that more computing power and data _is_ the way to test
and generate theories.

> The brain is not a Rube Goldberg machine that manages to produce any sort of
> work simply due to its excessive complexity

A Rube Goldberg machine is simple; it just has a lot of simple failure modes. (A
trigger fails to trigger the next part, either because the part itself fails,
or the interface between parts failed.)

> Its discrete elements aren't really small by today's standards,

If you mean cells, or cortices, agreed.

If you mean functional cognitive constituents, I mostly agree, but partly
disagree, as they are small parts of a big mind, all interwoven, influencing,
inhibiting, motivating, restricting, reinforcing, calibrating, guiding, and
enhancing each other to varying degrees.

So in that sense consciousness is a big matrix which gives the coefficients
for the coupling "constants" between parts. A magical formula, if you will.
But no more magical than the Standard Model of physics.

------
peterstoziek
As expected, the article seems to be a typical content marketing piece. If
you're looking for real insights into evolutionary algorithms, specifically
"neuroevolution", I highly recommend to read this article:
[https://www.oreilly.com/ideas/neuroevolution-a-different-
kin...](https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-
learning)

I enjoyed it much more than - what feels like - a quickly thrown together
marketing piece with no real value for the reader.

~~~
vanderZwan
Thank you, saved it for later. Do you have any other links to offer?

~~~
ofrancon
A few links you can look at if you're interested in neuroevolution, from the
same group of researchers:

Ken Stanley and Risto Miikkulainen original NEAT (NeuroEvolution of Augmenting
Topologies) paper:
[http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf)

Ken Stanley's novelty search page, and a link to his book, "Why Greatness
Cannot Be Planned: The Myth of the Objective":
[http://eplex.cs.ucf.edu/noveltysearch/userspage/](http://eplex.cs.ucf.edu/noveltysearch/userspage/)

Risto Miikkulainen's Evolving Deep Neural Networks paper:
[https://arxiv.org/abs/1703.00548](https://arxiv.org/abs/1703.00548)

Ken Stanley & team's work at Uber, with links to some recent papers:
https://eng.uber.com/deep-neuroevolution/

~~~
yazr
Are these evolutionary techniques considered more or less sample-efficient
(and/or CPU-efficient) compared to DL with gradient descent?

------
andreyk
"Like Deep Learning (DL), EC was introduced decades ago, and it is currently
experiencing a similar boost from the available big compute and big data.
However, it addresses a distinctly different need: Whereas DL focuses on
modeling what we already know, EC focuses on creating new knowledge."

What utter nonsense. Genetic Algorithms do exactly the same thing that Deep
Learning methods do: optimize a function for a particular criterion. Genetic
Algorithms are useful when taking gradients is not viable, as with RL methods
- and RL methods can use Deep Learning! Seriously misleading.

Also, this: "Remarkably, although several human-designed LSTM variations have
been proposed, they have not improved performance much—LSTM structure was
essentially unchanged for 25 years. Our neuroevolution experiments showed that
it can, as a matter of fact, be improved significantly by adding more
complexity, i.e. memory cells and more nonlinear, parallel pathways." This is
not at all a new idea. How can you even pretend that what you're doing is novel?

~~~
antirez
Genetic algorithms, yes, but genetic programming is actually able to invent new
things. Arguably, optimizing the set of weights of a neural network to solve a
given problem is more similar to genetic programming, if you see the NN as a
computational unit.

~~~
andreyk
Genetic algorithms 'invent new things' by doing exactly the same thing as
other optimization methods - tweaking the values of particular parameters. The
only difference is in how they do so.

~~~
antirez
Optimizing genes leading to an intelligent neural network is basically what
biology did. If your optimization problem changes the behavior of something able
to compute in order to maximize the ability to solve a task, then that's
inventing things IMHO.

~~~
pmalynin
Respectfully,

We've had "learning-to-learn" algorithms for a few years now, including LSTMs
that can learn the gradients for other LSTMs, and deep RL algorithms that can
optimize neural networks.

It's hard to say it's inventing new things... there is still a clear goal - a
loss - and we are optimizing it; poorly, in the case of genetic algorithms.

I would say genetic methods are simply poorly described reinforcement learning
problems, which means that there is a 1-to-1 mapping between Deep Learning and
Genetic Algorithms.

PS. Making a distinction between "Genetic Algorithms" and "Genetic
Programming" is like calling Deep Learning "Differentiable Programming" --
changing the name of a thing does not change the thing itself.

~~~
antirez
I don't mean to say that this is "novel" at all; I played a lot with genetic
programming about 20 years ago. However, in my opinion genetic algorithms and
genetic programming are _not_ the same thing, while closely related: genetic
programming uses genetic algorithms as a search strategy to find a way to
write a program that solves a problem. It is more closely related to
reinforcement learning in theory: you just have a problem and a fitness
function, and you provide no hints about how to solve it. Note how this is
fundamentally different from already having an algorithm that merely lacks
good _parameters_, which is what GAs do. When such parameters are computer
instructions, things start to be semantically interesting.

However, there is a big difference between reinforcement learning and genetic
programming: the second will output a program that can be simplified and
understood. NNs are much more opaque, so even when they outperform known
techniques, what they do is not clear.

A more practical example: 20 years ago I wrote a GP framework based on a
simple stack language (so that programs are always valid; it's an alternative
to using S-expressions). Then I used it to generate a new hashing function to
minimize the collisions I had in my hash table. The output of the GP was a
code snippet that I could understand, translate into a specification, and
rewrite in C. It effectively invented a good hashing function. You can see
_everything_ as an optimization problem, but at the end of the day, if the
output of a GP is similar to the output of a mathematician that you hired to
write a better hash function, it's questionable whether it was just optimizing
or whether, while doing so, the program invented something.

Also note that in the '90s things produced by genetic programming were
patented, because they were novel algorithms.
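
To make the stack-language idea concrete, here is a rough sketch in the same spirit (my own toy version, not antirez's actual framework; the op names, the fitness, and every parameter are made up for illustration). Programs are flat lists of stack ops, so any list is a valid program, and fitness is simply the (negated) number of bucket collisions when hashing a word list:

```python
import random

# Ops for a tiny stack language over 32-bit integers; H and B push the inputs.
OPS = ["H", "B", "C33", "DUP", "ADD", "XOR", "MUL", "SHL5", "SHR3"]
MASK = 0xFFFFFFFF

def run(program, h, b):
    """Interpret a stack program over (current hash h, next byte b)."""
    stack = []
    pop = lambda: stack.pop() if stack else 0  # underflow yields 0, so every program is valid
    for op in program:
        if op == "H":     stack.append(h)
        elif op == "B":   stack.append(b)
        elif op == "C33": stack.append(33)
        elif op == "DUP": stack.append(stack[-1] if stack else 0)
        elif op == "ADD": stack.append((pop() + pop()) & MASK)
        elif op == "XOR": stack.append(pop() ^ pop())
        elif op == "MUL": stack.append((pop() * pop()) & MASK)
        elif op == "SHL5": stack.append((pop() << 5) & MASK)
        elif op == "SHR3": stack.append(pop() >> 3)
    return pop()

def hash_word(program, word):
    h = 5381
    for ch in word:
        h = run(program, h, ord(ch))
    return h

def fitness(program, words, buckets=64):
    # Fewer bucket collisions = better hash; negate so that higher fitness is better
    counts = {}
    for w in words:
        slot = hash_word(program, w) % buckets
        counts[slot] = counts.get(slot, 0) + 1
    return -sum(c - 1 for c in counts.values() if c > 1)

def evolve(words, pop_size=200, generations=100, max_len=12):
    pop = [[random.choice(OPS) for _ in range(random.randint(3, max_len))] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, words), reverse=True)
        survivors = pop[:pop_size // 5]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.choice(survivors), random.choice(survivors)
            child = (a[:random.randrange(len(a) + 1)] + b[random.randrange(len(b) + 1):])[:max_len]
            child = child or [random.choice(OPS)]     # never let a program become empty
            if random.random() < 0.3:                 # point mutation
                child[random.randrange(len(child))] = random.choice(OPS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, words))
```

The evolved winner is just a list of ops, which is the point antirez makes: it can be read off, simplified by hand, and ported to C.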

~~~
pmalynin
Right, but what would be the difference if this program were produced by a deep
RL algorithm, where the state is the current program (an encoding of the
program) and the actions are valid instructions in your stack language?

An episode in this case is an iterative call to the deep RL system until it
outputs <STOP>, at which point you give it a reward (say, the negative number
of collisions); before that, you can send the actions to your stack machine.

Once you're satisfied with the final number, just concatenate all the produced
actions to get your final program.

I just don't see a fundamental difference, especially once you start doing
things like Asynchronous Actor-Critic et al. And then you can start doing
Monte Carlo Tree Search on top of that since you have a simulator available...
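
For concreteness, the episode structure being described might look like the sketch below (my own illustration; the policy is a random stub standing in for a trained deep RL agent, and the op names and reward are placeholders):

```python
import random

# Actions are instructions in some stack language plus a STOP token; reward() is a
# stand-in for "negative number of collisions" from the hash-function example.
OPS = ["ADD", "XOR", "MUL", "PUSH_H", "PUSH_B"]
ACTIONS = OPS + ["STOP"]

def policy(state):
    # A trained deep RL agent would map the partial program (the state) to action
    # probabilities; a uniform random choice stands in for it here.
    return random.choice(ACTIONS)

def reward(program):
    # Placeholder terminal reward; in the hash example this would be -collisions.
    return -abs(len(program) - 4)

def run_episode(max_len=12):
    program = []                         # state: the program built so far
    while len(program) < max_len:
        action = policy(tuple(program))
        if action == "STOP":
            break
        program.append(action)           # send the instruction to the stack machine
    return program, reward(program)      # the reward only arrives at the end of the episode

print(run_episode())
```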

------
boffinism
Interesting to see that we're a long way from this technique actually
generating anything valuable.

E.g. if you try the LSTM Music Maker they link to
(https://www.sentient.ai/sentient-labs/ea/lstm-music/) and enter a melody, the
resulting 'improvisation' will neither have any of the hallmarks of your
initial input nor obey any of the conventions of any genre of music I
recognise. It'll just spew out a random-seeming spray of notes. Compared to,
say, JukeDeck (https://www.jukedeck.com/make/track-generator/essential), which
uses bog-standard AI, it's got a long way to go.

~~~
TheOtherHobbes
Evolutionary approaches won't necessarily produce better music. This is a
well-researched area.

The challenge with all GA work is finding the right fitness function. If you
don't understand your domain well enough to define a good fitness function
you're going to be wasting your time, and no amount of algorithmic and/or AI
magic is going to help you.

~~~
maksimum
I think the point here is that we don't need to define a fitness function for
"good music", but a fitness function for e.g. "reconstruction error" for auto-
encoders. While the latter is still challenging, it seems slightly easier,
although I'm not sure that it's significantly easier than general manifold
learning...

------
IAmEveryone
If only wishing made it so...

Evolutionary approaches have always had one big feature in their favour: they
are far more fun to work with.

They produce all these fascinating oddities, like the one that learnt to
outsmart its opponents at infinite tic-tac-toe by playing coordinates like
10^2312 and 4^7875 and watching them run out of memory trying to build a data
structure for the board.

They are also far easier to combine with human expertise, i.e. “I can do
this! Let’s throw some of my ideas into the gene pool”.

But empirically it’s hard to deny that neural nets have been used for some
incredible things over the last few years. I think it’s the rare hype that is
deserved.

~~~
simion314
As others mentioned, the hard part is modeling your problem: how to encode the
"individuals", defining a good fitness function, etc. I think neural networks
are more popular now because they are easy to use: you throw data at them and
you get some results (though IMO I am scared that we are using things without
understanding exactly how they work).

------
chrisfosterelli
These "evolved neurons" look ridiculously more complicated and expensive to
evaluate than a base neuron for a minor improvement in performance. The Music
Maker paper mentioned that LSTMs haven't changed in "25 years", but didn't
mention the 2014 Gated Recurrent Units that have become so popular
specifically because they are much simpler to understand and evaluate.
This approach seems to be going in the opposite direction.

I guess the idea of them being evolved is that you don't _need_ to understand
them, but evaluation performance is definitely a concern and doesn't seem to
be addressed much. Interesting early work though!

~~~
verdverm
Likely solution bloat, another of the EC/GP oddities that arise. During
reproduction you randomly disrupt solutions, so to protect good solutions from
being disrupted, the solutions increasingly accumulate irrelevant stuff. This
lowers the probability that the good part gets messed up, but it leads to a
loss of population diversity, as there is an infinite number of useless things
you can add.

------
fnbr
I think this is true, broadly. Biological evolution has a large advantage in
that it introduces a large amount of exogenous variation into the system (it
just randomly changes DNA in a number of ways and selects the "best" changes).
This can be good, as it allows you to do a lot more exploration than a
comparable gradient-based approach; the gradient-based approach can't explore
entirely new parameters, it can only look at parameters that have already been
shown to be good. This is why, for instance, dead neurons are a problem with
ReLUs. I don't think (but have no proof) that this would be a problem with
evolution.

I think that evolutionary approaches are a lot better at the exploration
phase, but weak at exploitation; gradient methods tend to be the opposite.

I think some sort of combination of the two, like Max Jaderberg's "Population
based training of neural networks" [1], is the way to go.

[1]: https://deepmind.com/blog/population-based-training-neural-networks/
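
A rough sketch of that combination in the spirit of population-based training (my own toy version on a one-parameter objective, not DeepMind's implementation; every constant is arbitrary): each worker takes ordinary gradient steps, and periodically the worst workers copy a better worker's parameters and perturb its hyperparameters.

```python
import random

def loss(theta):
    return (theta - 3.0) ** 2                 # toy objective with its minimum at theta = 3

def grad_step(theta, lr):
    return theta - lr * 2.0 * (theta - 3.0)   # one step of plain gradient descent

# Each worker holds parameters (theta) and a hyperparameter (learning rate).
workers = [{"theta": random.uniform(-10, 10), "lr": random.uniform(0.001, 0.5)}
           for _ in range(10)]

for step in range(200):
    for w in workers:                          # exploitation: gradient-based learning
        w["theta"] = grad_step(w["theta"], w["lr"])
    if step % 20 == 19:                        # exploration: the evolutionary part
        workers.sort(key=lambda w: loss(w["theta"]))
        for bad, good in zip(workers[-3:], workers[:3]):
            bad["theta"] = good["theta"]                                   # copy a better worker's weights
            bad["lr"] = min(0.9, good["lr"] * random.choice([0.8, 1.2]))   # perturb its hyperparameter

print(min(workers, key=lambda w: loss(w["theta"])))
```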

------
patkai
I wrote my MSc thesis on Genetic Algorithms in 1998, and I've been waiting for
them to become popular ever since. There is a lot of discussion going on about
the need for "explainable models" and I'm quite surprised that it's not
considered trivial that e.g. Genetic Programming builds computer programs that
are actually models. Explainability is of course harder but how much closer
can you get to building models automatically?

------
rdlecler1
The problem is in the genotype-phenotype map and getting that right. We should
be looking at developmental evolutionary neural networks. That is,
evolutionary algorithms evolving the genetic architecture (genotype) that
generates (development) neural networks (phenotype).

------
baxtr
This is funny. I did my master's thesis back in 2003 using "genetic"
algorithms. Maybe I should try to get funding for an AI startup.

~~~
hmm_really
I did a 1st year Uni project using GA to evolve neural network topology in
1996 ... maybe I should dig that out and get me some VC $.

~~~
jnwatson
That stuff is hot right now.

------
azinman2
Everything old is new again

~~~
levesque
I guess we should get ready for another AI/neural network winter. Kernel
methods will be popular again!

~~~
azinman2
Only if you can fit the buzzwords neuro or deep into empirical/statistical
methods ;)

------
raker
This article explains very little about the actual technique. Digging into the
paper, it seems like this has more to do with training a network on multiple
data sources at the same time, to take advantage of related patterns across
data sets.

I'm not professionally versed in NNs myself, but what would be the closest
equivalent to recreating this sort of training with tools available today? (in
Keras or R for instance)

------
foobaw
I just tried their music maker (https://www.sentient.ai/sentient-labs/ea/lstm-music/).

Was this ONLY trained on Bach chorales? Everything sounds so fugue.

------
LMSchmitt
Schmitt, Lothar M (2001), Theory of Genetic Algorithms, Theoretical Computer
Science 259: 1–61

Schmitt, Lothar M (2004), Theory of Genetic Algorithms II: models for genetic
operators over the string-tensor representation of populations and convergence
to global optima for arbitrary fitness function under scaling, Theoretical
Computer Science 310: 181–231

------
XnoiVeX
A philosophical tangent: Evolution did not always produce the best outcomes.
Would neuroevolution be vulnerable to similar effects? Probably a good
research area. https://www.wired.com/2009/07/st-best-5/

~~~
cvaidya1986
What's the definition of best?

~~~
goatlover
350,000 species of beetles.

~~~
TheOtherHobbes
In music, evolution produced Bach, so it can't really be considered a failure.

It took a while though, and it's possible alternative approaches would have
produced a similar result more efficiently.

~~~
goatlover
It also produced Justin Bieber and Weird Al. Point being, if the goal of your
evolutionary algorithm is Bach, how do you filter for that result among the
other million possibilities? And what makes Bach the success of music? Maybe
more people prefer listening to Taylor Swift.

------
jnwatson
I had success with GP because, as a programmer, I found it straightforward to
map the problem domain onto a set of operations. Mapping the problem domain,
IMHO, is the hardest part, and in all of my problems, it was hard to map it to
differentiable functions. GP doesn't have that problem.

------
cruzai
Don't want to jump into the fight between DL and Genetic algorithms, but can
somebody explain their experience with the music paper, demo, and work? I
personally am not impressed...Are you?

~~~
cruzai
I believe these types of algorithms work really well for stocks but not music;
GANs work better for music. What is your take?

------
m3kw9
Gradient descent is already a type of evolution that converges to a truth

------
badloginagain
"In this Omniglot multitask character recognition task, our research team
improved error of character matching from 32% to 10%"

Death to CAPTCHA

------
Mc_Big_G
I submitted 4 bars of a Boston song and what the AI generated was incredible
if you're deaf or enjoy bleeding from the ears.

------
make3
It's just not. Evolution learns more slowly than reinforcement learning, which
itself learns much more slowly than supervised deep learning. Evolution
doesn't have gradients, and it needs a huge number of samples to learn anything.

~~~
cshenton
Actually, evolution strategies make use of either empirical or natural
gradients on the search space.

~~~
make3
I guess sexual reproduction is kind of like sharing gradients. Is that what
you mean?

~~~
cshenton
Not quite: genetic algorithms usually represent the search space as bit
sequences, so you can just mix subsequences from two parents; that's
reproduction.

In contrast, evolution strategies have a search distribution, and compute a
gradient update to that distribution from an entire generation. So it's really
more 'search gradients' than evolution. The only thing 'evolutionary' about it
is that the minibatches are called generations.
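
A minimal sketch of that "search gradient" view (my own toy version in the style of OpenAI-style evolution strategies; all constants are arbitrary): sample a generation of perturbations around the current parameters, then move the parameters along the fitness-weighted average of the noise, which estimates the gradient of the smoothed objective.

```python
import numpy as np

def fitness(theta):
    return -np.sum((theta - 3.0) ** 2)   # toy objective, maximised at theta = [3, 3, 3, 3, 3]

theta = np.zeros(5)                      # mean of the search distribution
sigma, lr, pop_size = 0.1, 0.02, 50      # noise scale, step size, generation size

for generation in range(300):
    noise = np.random.randn(pop_size, theta.size)                  # one generation of perturbations
    rewards = np.array([fitness(theta + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # standardise for stability
    theta = theta + lr / (pop_size * sigma) * noise.T @ rewards    # search-gradient step

print(theta)  # should end up near [3, 3, 3, 3, 3]
```

Note there is no crossover anywhere in the loop, which is why it reads more like "search gradients" than evolution.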

------
yters
If evolutionary algorithms are so magical, why do I only encounter them in
academia?

~~~
randomsearch
They're fairly widely used, but often in places where they are not well
publicised - e.g. engineering optimisation or financial trading.

~~~
yters
My MSc advisor wrote a big book on multi-objective evolutionary optimization.

[http://www.springer.com/gp/book/9780387332543](http://www.springer.com/gp/book/9780387332543)

He liked to have his students compare random search to evolutionary
algorithms. There tended not to be a huge difference. I think that's why they
are not so widely used when there is any kind of better method around.
Probably in most cases you'd want to just understand the domain better.

Hence, I don't believe EAs are the next big thing.

------
billconan
The music generation demo isn't really impressive to me. The outputs were
pretty random.

~~~
nathan_f77
Yeah, this demo was not great. But I just found this article about AI-generated
music, and it's actually very good:
https://futurism.com/a-new-ai-can-write-music-as-well-as-a-human-composer/

------
pishpash
Would the B-school types leave science and engineering to real scientists and
engineers, please?

~~~
jmmcd
Look up the author's name and then see if you can stand behind your comment.

------
yters
The only thing EAs have going for them is a biological metaphor: the magic of
Darwinian evolution, a fountain of endless novelty. But modern science shows
evolution does not really work in a Darwinian manner, and thus the metaphor
ends.

~~~
_mhr_
Can you provide a source for what modern science you're referring to? What
alternative to Darwinian evolution do you have in mind?

~~~
yters
[https://en.wikipedia.org/wiki/Extended_evolutionary_synthesi...](https://en.wikipedia.org/wiki/Extended_evolutionary_synthesis)

Essentially, variation comes from many other sources than random mutation on
DNA. Random mutation itself is seen as mostly a bad source of variation,
leading to destruction of the genome.

~~~
_mhr_
That's a very interesting hypothesis, thank you for the link. However, it
doesn't seem to support what you're claiming about mutations, from my cursory
reading. Can you provide a source for this claim that random mutation is a bad
source of variation?

http://extendedevolutionarysynthesis.com/about-the-ees/why-is-the-ees-contentious/
explicitly rejects the idea of a revolution regarding mutation and other
genetic sources of variation:

> How can the EES seek 'profound change' and yet disavow 'revolution'?

> All recognized causes of both evolution (e.g. natural selection, genetic
> drift, mutation, etc) and inheritance (e.g. genes), as well as the vast body
> of empirical and theoretical findings generated by the field of evolutionary
> biology, are accepted by the EES.

> Hence the EES does not entail a rejection of current understanding within
> the field, and does not require revolution. The EES seeks only to supplement
> the existing causal framework through recognition of additional causes of
> evolution (e.g. developmental bias) and inheritance (e.g. epigenetic
> inheritance), entirely complementary to those long-established within the
> field. Nevertheless, should these additional processes prove to be
> important, the conceptual change to evolutionary biology could very well be
> fundamental.

Although not in your source, I did find the following. Known as Drake's rule:
as a genome's size increases, the mutation rate also tends to decrease.
[https://en.wikipedia.org/wiki/Genome_size#Drake's_rule](https://en.wikipedia.org/wiki/Genome_size#Drake's_rule).
A recent update to Drake's rule suggests that mutation rate is also negatively
correlated with population size too, not just genome size
([https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3494944/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3494944/)).
And another interesting article about mutation in reproduction:
[https://www.nature.com/articles/ncomms15183](https://www.nature.com/articles/ncomms15183)
(somatic mutation is two orders of magnitude higher than germline mutation).
This is intriguing, though I have to wonder how the genome ever grows if
mutation supposedly becomes so strict over time as to become a non-factor
(transposons, gene duplication?). It's worth noting that mutations do occur
more frequently in non-coding sections of the genome, which may not be
accounted for in the studies I've linked to.

Still, regardless of my sources, I would like to see you provide support from
specifically EES proponents for the devaluation of mutation as a source of
genetic variation, as you implied.

~~~
yters
There is John Sanford who claims random mutation is destroying our genome.

[http://www.geneticentropy.org/](http://www.geneticentropy.org/)

~~~
n4r9
Right, but he also believes the Earth is less than 100,000 years old. Do you
believe that, too?

~~~
yters
I don't know :| It seems to be the most plausible explanation if genetic
entropy is true. I'd be interested in a good refutation of his thesis, but it
makes a lot of sense mathematically.

