Neuro Evolution of Augmented Topologies (fev.al)
95 points by charlieirish 14 days ago | 9 comments



Apart from NEAT/HyperNEAT there are also other approaches to neuroevolution (I think in this context it is referred to as "Evolutionary Neural Architecture Search" [0]). Evolution in general can be applied in different ways (e.g. optimizing the architecture, replacing gradient-descent training, etc.).
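
For concreteness, here's a minimal toy sketch of the "replacing gradient descent" flavour: the topology is fixed and only the weights are evolved with a simple (1+lambda) scheme. The task, network size and hyperparameters are made up for illustration; this is not the code from [2].

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)      # XOR-like toy target

    def forward(w, X):
        # fixed 2-4-1 topology; w packs all 17 weights and biases
        W1, b1 = w[:8].reshape(2, 4), w[8:12]
        W2, b2 = w[12:16].reshape(4, 1), w[16]
        h = np.tanh(X @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)).ravel())

    def fitness(w):
        return -np.mean((forward(w, X) - y) ** 2)  # negative MSE, higher is better

    parent = rng.normal(scale=0.5, size=17)
    for gen in range(300):
        offspring = parent + rng.normal(scale=0.1, size=(20, 17))  # mutate
        best = max(offspring, key=fitness)                         # select
        if fitness(best) > fitness(parent):
            parent = best
    print("final fitness:", fitness(parent))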

A while ago I co-authored a paper in this space [1] and released some code for interested folks [2].

[0]: https://arxiv.org/pdf/2008.10937.pdf

[1]: https://arxiv.org/abs/1801.00119

[2]: https://gitlab.com/pkoperek/pytorch-dnn-evolution/-/tree/mas...


I also find the related idea of neuroevolution of the weights of a neural network to be fascinating in its own right.

I've implemented "cooperative coevolution", which felt like magic when I saw how well it performs (on some tasks, like continuous-control RL problems) relative to known-good methods, including anything involving gradients.
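
For anyone unfamiliar with the term, here's a rough sketch of the cooperative-coevolution idea (in the CoSyNE/ESP spirit): each weight gets its own subpopulation, and a candidate is scored by plugging it into a weight vector assembled from the current best member of every other subpopulation. The "environment" below is just a made-up stand-in for an RL rollout, not what I actually ran.

    import numpy as np

    rng = np.random.default_rng(1)
    N_WEIGHTS, POP = 6, 16
    TARGET = np.array([0.5, -1.0, 2.0, 0.0, 1.5, -0.5])

    def episode_return(w):
        # stand-in for an RL rollout; higher is better
        return -np.sum((w - TARGET) ** 2)

    pops = rng.normal(size=(N_WEIGHTS, POP))   # one subpopulation per weight
    best = pops[:, 0].copy()                   # current best collaborator set

    for gen in range(200):
        for i in range(N_WEIGHTS):
            # score each candidate for weight i alongside the other bests
            scores = []
            for cand in pops[i]:
                trial = best.copy()
                trial[i] = cand
                scores.append(episode_return(trial))
            pops[i] = pops[i][np.argsort(scores)[::-1]]   # best candidates first
            best[i] = pops[i][0]
            half = POP // 2  # replace worst half with mutated copies of best half
            pops[i][half:] = pops[i][:half] + rng.normal(scale=0.1, size=half)

    print("evolved weights:", np.round(best, 2))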

I wish this stuff were explored a bit more. It seems we are leaving the paradigm of evolutionary methods behind...


They seem similar to Gene Sher's TWEANNs - Topology and Weight Evolving Neural Networks - which I learned about in his 2012 book, "Handbook of Neuroevolution Through Erlang" (sure, it didn't catch on because Erlang ;))

Gene's specific implementation is DXNN (Discover and eXplore Neural Network) implemented in 2010 [1]

[1] https://arxiv.org/abs/1008.2412


Sher's TWEANN work cites NEAT and HyperNEAT quite extensively — Stanley's original work here is pretty influential in the neuroevolution space!

I missed that, thanks! I'll have to take another look.

When I think of neural networks that evolve changes in their topology and the weights of those connections, over and over, optimizing toward a working solution, I get the vision of the recursive Life-in-Life video [1], where Conway's Game of Life is emulated within Game of Life itself.

[1] https://www.youtube.com/watch?v=xP5-iIeKXE8


Author here, I'm curious how you found this post when it wasn't indexed yet? :) (and wasn't finished at the time)

There's also, for example, HyperNEAT. I'm out of the field now. Has any recent progress been made with these techniques?

I'm by no means an expert in the field, but I do find it exceptionally interesting, so I try to keep tabs on some of the research done by people who came out of the same group as Kenneth Stanley.

It seems like many from this group now pursue open-endedness in AI, and view evolution as a way towards this goal (or lack thereof).

A very interesting evolution (ha!) of these ideas was presented in POET [0], towards evolving agents in evolving environments.

There is also an interesting paper about accelerating neural architecture search by generating synthetic training data with Generative Teaching Networks [1].

Lastly, a paper that I find very interesting, though it might not be as relevant here, is 'First return, then explore' [2].

[0]: https://eng.uber.com/poet-open-ended-deep-learning/

[1]: http://proceedings.mlr.press/v119/such20a.html

[2]: https://arxiv.org/pdf/2004.12919.pdf


For the logical endpoint of this approach, check out FreeWire:

https://github.com/noahtren/Freewire

You can experiment with freely wired neural networks without traditional layers.
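
To picture what "without traditional layers" means: a freely wired net is essentially a DAG of individual neurons evaluated in topological order. The sketch below is only a toy illustration of that idea, not FreeWire's actual API (node ids and weights are made up).

    import numpy as np

    # node -> list of (source_node, weight); nodes 0-1 are inputs, 5 is the output
    edges = {
        2: [(0, 0.7), (1, -1.2)],
        3: [(0, 1.5), (2, 0.9)],
        4: [(1, 0.3), (2, -0.8), (3, 1.1)],
        5: [(3, 0.6), (4, 1.4)],
    }

    def forward(x0, x1):
        act = {0: x0, 1: x1}
        for node in sorted(edges):   # node ids happen to be a topological order
            act[node] = np.tanh(sum(act[src] * w for src, w in edges[node]))
        return act[5]

    print(forward(0.5, -0.3))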



