
Evolutionary algorithm outperforms deep-learning machines at video games - hunglee2
https://www.technologyreview.com/s/611568/evolutionary-algorithm-outperforms-deep-learning-machines-at-video-games/
======
mindcrime
There are some things about this article that irk me, like this:

 _Neural networks have garnered all the headlines, but a much more powerful
approach is waiting in the wings._

Granted, it's probably written for lay-people who don't know much about AI/ML
techniques, but this is still pretty sloppy. It hasn't been proven that EAs
are "more powerful" than NNs. And so far as that goes, I'm not even sure it
makes sense to use a term like "more powerful" at all in this context. And EA
approaches aren't some new up-and-comer that hasn't had its chance yet...
those techniques have been around for a long time as well.

All of that said, I'm a big fan of EA's, dating back to when I implemented a
parallel GA optimization algorithm in a parallel programming class in college.
I've been fascinated with them ever since, and I _do_ think that evolutionary
approaches have a lot of potential. So I'm happy to see some positive press
being directed in that direction, but I still get annoyed with some of the
hand-wavy stuff and sloppy language.

Anyway, one thing about (some|many|most?) EA approaches is that they
parallelize very well. And depending on what you're doing, you aren't
necessarily doing Linear Algebra / matrix math, so you can likely accomplish a
lot without spending beaucoup $$ on GPUs. A Beowulf cluster of CPU machines
can be pretty effective.
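
To make the parallelization point concrete, here's a minimal sketch of the
embarrassingly-parallel part: a toy GA whose fitness evaluations fan out over
CPU cores with Python's multiprocessing. The OneMax objective and all the
parameters are stand-ins, not a recommendation.

```python
import random
from multiprocessing import Pool

def fitness(genome):
    # Stand-in objective ("OneMax"): maximize the number of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=40, genome_len=64, generations=20, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    with Pool() as pool:
        for _ in range(generations):
            # Fitness evaluations are independent, so they map cleanly
            # onto a pool of CPU workers -- no GPU or matrix math needed.
            scores = pool.map(fitness, pop)
            ranked = [g for _, g in sorted(zip(scores, pop),
                                           key=lambda t: t[0], reverse=True)]
            elite = ranked[:pop_size // 2]          # keep the top half
            pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(sum(best))
```

On a real problem the fitness function is usually the expensive part (a whole
simulation or game episode), which is exactly why this pattern scales so well
across a cluster of cheap CPU boxes.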

~~~
eggy
I read Koza's "Genetic Programming: On the Programming of Computers by Means
of Natural Selection" back in 1993, and a lot of the holdups were that
computers and memory just weren't good enough back then to deal with large
solution-space trials. There were also dead ends due to the choice of starting
functions to evolve. Today, using TWEANNs (Topology and Weight Evolving
Neural Networks) and other methods is really a good way to go. I still like
the early examples in Koza's GP where you give it a few functions to evolve a
boolean function. I had started studying neural networks way back in the 80s
and hit pay dirt with Mark Watson's 1991 "Common LISP Modules: Artificial
Intelligence in the Era of Neural Networks and Chaos Theory", which threw me
into NLP, Chaos, and ANNs.
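
For flavor, a toy version of that Koza-style exercise: evolving a boolean
expression (target: XOR) out of a small primitive set. This is a crude
generate-and-keep loop, not Koza's actual tree-crossover GP, and the
primitives and parameters are made up for illustration -- but it shows how the
choice of starting functions bounds what can be evolved.

```python
import random

# Primitive set: the "starting functions" the search is allowed to compose.
PRIMS = {"and": 2, "or": 2, "not": 1}
TERMS = ["a", "b"]

def rand_tree(depth=3):
    # Random expression tree, biased toward terminals near the leaves.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(PRIMS))
    return (op,) + tuple(rand_tree(depth - 1) for _ in range(PRIMS[op]))

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    op, *args = tree
    vals = [evaluate(a, env) for a in args]
    if op == "and":
        return vals[0] and vals[1]
    if op == "or":
        return vals[0] or vals[1]
    return not vals[0]

def fitness(tree):
    # Score = number of XOR truth-table rows reproduced (0..4).
    cases = [(a, b, a != b) for a in (False, True) for b in (False, True)]
    return sum(evaluate(tree, {"a": a, "b": b}) == want
               for a, b, want in cases)

def evolve(pop_size=50, generations=40, seed=1):
    random.seed(seed)
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 4:
            break
        # Keep the top half, refill with fresh random trees (a crude
        # stand-in for Koza's crossover and mutation operators).
        pop = pop[:pop_size // 2] + [rand_tree()
                                     for _ in range(pop_size // 2)]
    return max(pop, key=fitness)
```

Note the dead-end eggy mentions: drop "not" (or "and") from PRIMS and XOR
becomes unreachable no matter how long you run the search.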

------
aub3bhat
To quote Ben Recht

"When you end up with a bunch of papers showing that genetic algorithms are
competitive with your methods, this does not mean that we’ve made an advance
in genetic algorithms. It is far more likely that this means that your method
is a lousy implementation of random search."

[http://www.argmin.net/2018/02/20/reinforce/](http://www.argmin.net/2018/02/20/reinforce/)

~~~
soVeryTired
The article seems reasonable and well-argued. But policy gradients are a major
cornerstone of reinforcement learning - just about every textbook will
dedicate some time to them.

So how can we reconcile that observation with the arguments in the article? Is
Recht overstating his case, or is this a big screw-up in the field in general?

Can anyone who knows about reinforcement learning weigh in?

~~~
johnmoberg
Ben's blog series culminated in a nice article[1] touring reinforcement
learning. He also held a tutorial on the topic at ICML[2]. They might address
some of your concerns.

[1]: [https://arxiv.org/abs/1806.09460](https://arxiv.org/abs/1806.09460)

[2]:
[https://www.facebook.com/icml.imls/videos/428757614305426](https://www.facebook.com/icml.imls/videos/428757614305426)

------
wholemoley
I've been using the python-neat library in OpenAI's retro with some success.
And while it works quickly, it normally finds local maxima. It seems to
struggle with long sequences. And defining the fitness function/parameters is
an art form.
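
To illustrate the "art form" point, here's a hypothetical shape for a
side-scroller fitness function. This is not my actual code; the signals
(max x-position, frame count, stall detection) and all the weights are
invented knobs you'd have to tune per game.

```python
def fitness(max_x, frames, beat_level, stall_frames,
            progress_weight=1.0, time_penalty=0.01, finish_bonus=1000.0):
    """Hypothetical platformer fitness: reward rightward progress,
    penalize wasted time, and heavily reward finishing the level.

    A naive distance-only fitness tends to breed agents that rush to
    the farthest point before a hard obstacle and camp there (a local
    maximum); the stall cutoff and time penalty are knobs for tuning
    around that.
    """
    if stall_frames > 600:  # kill runs that stop making progress
        return 0.0
    score = progress_weight * max_x - time_penalty * frames
    if beat_level:
        score += finish_bonus
    return max(score, 0.0)
```

Every term here is a trade-off: too big a time penalty breeds reckless
agents, too big a finish bonus and nothing gets rewarded until one lucky
genome finishes the level.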

Here's a video of Donkey Kong Country played by python-neat in OpenAI's
retro. It took 8 generations of 20 genomes to beat level one. I'll post the
code if anyone's interested.

[https://vimeo.com/280611464](https://vimeo.com/280611464)

~~~
godelski
I'd be interested in the code.

~~~
wholemoley
Here you go!

[https://www.youtube.com/watch?v=CFa6NhLgeL0&list=PLTWFMbPFsv...](https://www.youtube.com/watch?v=CFa6NhLgeL0&list=PLTWFMbPFsvz3CeozHfeuJIXWAJMkPtAdS)

[https://gitlab.com/lucasrthompson/Sonic-Bot-In-OpenAI-and-NEAT](https://gitlab.com/lucasrthompson/Sonic-Bot-In-OpenAI-and-NEAT)

------
rdlecler1
It seems clear to me that the biggest advances will be made through
evolutionary and developmental neural networks, where evolution lays down the
algorithm that builds the gross neural network architecture and learning then
refines it. However, this will need massive amounts of computational power,
because you have G generations of population size P, and each individual
phenotype needs to go through a developmental step (neurogenesis) and then an
evaluation step. On top of that, we need a good genotype-to-phenotype map
specifically for neural networks.
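
That cost multiplies out quickly. A back-of-the-envelope sketch, with every
step count invented purely for illustration:

```python
def evodevo_cost(generations, pop_size, devo_steps, eval_steps):
    # Every individual in every generation pays for development
    # (growing the network) plus evaluation (running it on the task).
    return generations * pop_size * (devo_steps + eval_steps)

# Invented numbers: 1000 generations, population of 200, and per-individual
# development/evaluation budgets of 1e6 and 1e8 "steps" respectively.
total = evodevo_cost(1000, 200, 1_000_000, 100_000_000)
# ~2e13 steps -- and note evaluation dominates development here, so the
# inner learning loop is where the compute actually goes.
```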

Conveniently, the gene regulatory networks that would control cell growth,
division, and wiring up the neurons are themselves represented mathematically
as neural networks, so in effect you're evolving one class of neural networks
that builds another class of neural networks. Nature is quite elegant.

~~~
MrQuincle
Check out HyperNEAT and, for example, also the work from Josh Bongard on
GRNs. Shorthand is evo-devo.

[https://m.youtube.com/playlist?list=PLAuiGdPEdw0iRhEnF5yPuqe...](https://m.youtube.com/playlist?list=PLAuiGdPEdw0iRhEnF5yPuqegKVjKktDni)

I once built a simulator that used evo-devo to evolve the morphology of
modular robot organisms in response to environmental factors, completing
evo-devo-eco. If I remember right, it was creating a snake at night and
disassembling it during the day. The fitness function becomes complicated,
though. I think it was something like a distance function on body morphology
plus an environmental condition (light).

------
Drdrdrq
It would be interesting to see how good the combination of EA and NN would
be... This is basically us, humans: evolution + learning. Have there been any
attempts to combine the two?

~~~
ufo
Despite the catchy names, Genetic Algorithms and Neural Networks don't work
quite the same way as their biological counterparts.

~~~
sprt
I thought GAs did, but I do lack knowledge in the area. Could you highlight
some differences?

~~~
ufo
"Genetic Algorithms" describes a wide range of techniques for adding some
extra variety to search algorithms. You maintain a population of candidate
solutions and at each step you improve them by a combination of local search
(like gradient descent or greedy neighbor search) and solution mixing (to
help escape local minima). There are lots of ways to go about it: how to
decide which population to keep, which local optimization to use, and how to
mix the candidate solutions.

If you look closely, none of this is quite how biology works. In biology the
population is decided by natural or artificial selection, while in a GA there
may be more factors at play (such as favoring a more diverse pool). The way
solutions are represented is also different. In biology you have genes which
behave according to the laws of genetics. In a GA, on the other hand, the
solution representation and the operations to mutate and mix solutions are
carefully planned by an intelligent designer.
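
A minimal sketch of that recipe (a population, greedy local search, and
one-point crossover as the mixing step) on a toy bit-string objective. Every
detail here is one arbitrary choice among the many just described:

```python
import random

def score(sol):
    # Toy objective: count of 1-bits (maximize).
    return sum(sol)

def local_search(sol):
    # Greedy neighbor search: flip the first bit that improves the score.
    for i in range(len(sol)):
        trial = sol[:]
        trial[i] ^= 1
        if score(trial) > score(sol):
            return trial
    return sol

def mix(a, b):
    # One-point crossover: the "solution mixing" step that helps escape
    # local minima that local search alone gets stuck in.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def ga(pop_size=20, length=32, generations=30, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(s) for s in pop]          # local improvement
        pop.sort(key=score, reverse=True)
        survivors = pop[:pop_size // 2]               # population decision
        children = [mix(random.choice(survivors), random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children                    # solution mixing
    return max(pop, key=score)
```

And the "intelligent designer" point is visible right in the code: the bit
flip, the crossover cut, and the truncation selection were all hand-picked to
suit this representation; nothing analogous to Mendelian genetics falls out
of the problem itself.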

------
jaclaz
Maybe largely off-topic but I still remember the fun I had actually building
as a kid (after reading an article in Scientific American by Martin Gardner)
the MENACE (Machine Educable Noughts And Crosses Engine):

[http://www.mscroggs.co.uk/blog/19](http://www.mscroggs.co.uk/blog/19)

For the beads I used Smarties, so each time the machine lost I ate the "wrong
colour one".

------
plainOldText
I've been meaning to read _Handbook of Neuroevolution Through Erlang_. Has
anyone done it? If so, what's your opinion about it?

~~~
mindcrime
I'd like to read that book, but the pricing... wow. On Amazon the cheapest new
copy is ~ $138.00. And this is one of those cases that shows how goofy pricing
for marketplace sellers gets: the cheapest used copy is even worse, at a
whopping $230.59. And that's for one in "Good" condition.

Probably all of these resellers are using some brain-dead stupid bot to set
their prices for them. Too bad - if somebody was selling a used copy for,
say, $75.00, I'd probably order it right now. Instead, it's going to sit in
their store forever, because who would pay $230 for a used copy of a book
when they can get a new copy for $138?

SMH.

~~~
p1esk
[http://libgen.io/search.php?req=Handbook+of+Neuroevolution+T...](http://libgen.io/search.php?req=Handbook+of+Neuroevolution+Through+Erlang)

~~~
avshyz
Thanks mate!

------
0xBA5ED
I didn't know this was considered new. Most of the neat NN experiments you
find on YouTube from the past several years use genetic algorithms. Good ol'
MarI/O, for example. The rigged 3D human models "learning" how to walk and
run. The various "navigate the maze" ones. Etc.

~~~
bobbean
There's an entertaining guy on YouTube, Code Bullet, who does a bunch of stuff
like this, genetic algorithms and neural networks. It's pretty interesting
because he rewrites all the games from scratch.

Here's a video of him playing with The World's Hardest Game:
[https://youtu.be/Yo2SepcNyw4](https://youtu.be/Yo2SepcNyw4)

