
Google Engineers Mutate AI to Make Systems Evolve Faster Than We Can Code Them - jzer0cool
https://www.sciencealert.com/coders-mutate-ai-systems-to-make-them-evolve-faster-than-we-can-program-them
======
rvz
> The system starts off with a selection of 100 algorithms made by randomly
> combining simple mathematical operations. A sophisticated trial-and-error
> process then identifies the best performers, which are retained - with some
> tweaks - for another round of trials. In other words, the neural network is
> mutating as it goes.

This approach sounds quite similar to this neuroevolution paper [0], but it presumably only works if you have a system with enough memory to evolve these neural networks simultaneously. The hardware requirements are therefore unsurprisingly demanding; Google, at least, can just use their own infrastructure and train on their TPU hardware.

Unless you're Google, or running on their GCP infrastructure, that's about the only setting in which applying neuroevolution to search for the optimal network makes sense.
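The quoted loop (population of random candidates, select the best, mutate, repeat) can be sketched roughly as below. This is a toy illustration, not the actual system: the "algorithms" here are just parameter vectors and the fitness function is made up.

```python
import random

def fitness(candidate):
    # Toy objective: negative squared distance to a fixed target vector.
    target = [1.0, -2.0, 0.5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate, scale=0.1):
    # Small random tweak to each component.
    return [c + random.gauss(0, scale) for c in candidate]

def evolve(pop_size=100, generations=50, keep=10):
    # Start with 100 random candidates, as in the quoted description.
    population = [[random.uniform(-3, 3) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Retain the best performers, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - keep)]
    return max(population, key=fitness)

best = evolve()
```

The memory point above shows up even in this sketch: the whole population has to be held and evaluated each generation, which is cheap for vectors of three floats but not for full neural networks.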

[0]
[http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf)

~~~
jzer0cool
Thank you for sharing the above paper. It describes modifications to the network topology, which I think is just another dimension along which the system could mutate.

------
jzer0cool
Here is a paper they referenced in the article:

(2020) AutoML-Zero: Evolving Machine Learning Algorithms From Scratch -
[https://arxiv.org/abs/2003.03384](https://arxiv.org/abs/2003.03384).

Although I have not finished reading all of it yet, I think here the algorithms themselves mutating is yet another dimension along which mutation can act, which in turn of course produces new topologies, weights, and other factors.
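The idea of mutating the algorithm itself (rather than just weights or topology) can be sketched as below. This is a hypothetical toy, not AutoML-Zero's actual representation: candidates are tiny register-machine programs, and mutation rewrites an instruction.

```python
import random

# A candidate is a list of (opcode, dest, src1, src2) instructions
# over a small bank of registers; these ops are illustrative only.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_instruction(n_regs=4):
    return (random.choice(list(OPS)),   # opcode
            random.randrange(n_regs),   # destination register
            random.randrange(n_regs),   # source register 1
            random.randrange(n_regs))   # source register 2

def run(program, x, n_regs=4):
    regs = [0.0] * n_regs
    regs[0] = x                         # input goes in register 0
    for op, dst, a, b in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[1]                      # output read from register 1

def mutate(program):
    # Mutation edits the program itself: swap out one instruction.
    prog = list(program)
    prog[random.randrange(len(prog))] = random_instruction()
    return prog

program = [random_instruction() for _ in range(5)]
child = mutate(program)
```

Mutating at this level changes what computation is performed at all, which is why it subsumes changes to topology and weights as special cases.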

