A while ago I co-authored a paper in this space and released some code for interested folks.
I've implemented "cooperative coevolution", which felt like magic given how well it performs on some tasks (like continuous-control RL problems) relative to known-good methods, including anything gradient-based.
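To make the idea concrete, here is a minimal sketch of cooperative coevolution under assumed details: the parameter vector is split into components, each evolved by its own subpopulation, and an individual is scored by combining it with the current representatives of the other subpopulations. The toy objective, population sizes, and mutation scheme are all made up for illustration, not the commenter's actual implementation.

```python
import random

def fitness(params):
    # Toy objective: maximize -sum(x^2), optimum at all zeros.
    return -sum(p * p for p in params)

def coevolve(n_components=4, pop_size=20, generations=50, sigma=0.3, seed=0):
    rng = random.Random(seed)
    # One subpopulation of scalars per component of the parameter vector.
    pops = [[rng.uniform(-1, 1) for _ in range(pop_size)]
            for _ in range(n_components)]
    best = [pop[0] for pop in pops]  # one representative per subpopulation

    for _ in range(generations):
        for i, pop in enumerate(pops):
            scored = []
            for individual in pop:
                # Evaluate in the context of the other components' representatives.
                candidate = best[:i] + [individual] + best[i + 1:]
                scored.append((fitness(candidate), individual))
            scored.sort(reverse=True)
            best[i] = scored[0][1]
            # Keep the top half, refill with mutated copies.
            survivors = [ind for _, ind in scored[: pop_size // 2]]
            pop[:] = survivors + [s + rng.gauss(0, sigma) for s in survivors]
    return best

solution = coevolve()
```

The key trick is that no subpopulation ever sees the whole problem; credit assignment happens implicitly through the shared evaluation context.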
I wish this stuff were explored a bit more. It seems we are leaving the paradigm of evolutionary methods behind...
Gene's specific implementation is DXNN (Discover and eXplore Neural Network), built in 2010.
When I think of neural networks that repeatedly evolve their topology and connection weights toward a solution, I picture the recursive "Life in Life" video, where Conway's Game of Life is emulated inside the Game of Life itself.
It seems many from this group now pursue open-endedness in AI and view evolution as a path toward that goal (or rather, toward the deliberate absence of one).
A very interesting evolution (ha!) of these ideas was presented in POET, which co-evolves agents alongside their evolving environments.
There is also an interesting paper on accelerating neural architecture search by generating synthetic training data with Generative Teaching Networks.
Lastly, a paper I find very interesting, though perhaps less relevant here, is 'First return, then explore'.
POET: https://eng.uber.com/poet-open-ended-deep-learning/
Generative Teaching Networks: http://proceedings.mlr.press/v119/such20a.html
First return, then explore: https://arxiv.org/pdf/2004.12919.pdf
You can experiment with freely wired neural networks without traditional layers.
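One way to sketch what "freely wired" could mean: neurons form an arbitrary DAG instead of layers, so inputs can feed any node and skip connections are just ordinary edges. The node ids, weights, and wiring below are made-up illustrative values, not any particular library's API.

```python
import math

def forward(inputs, connections, output_ids):
    # connections: {target_node: [(source_node, weight), ...]},
    # listed in topological order so every source is computed first.
    values = dict(inputs)
    for target, incoming in connections.items():
        total = sum(values[src] * w for src, w in incoming)
        values[target] = math.tanh(total)
    return [values[o] for o in output_ids]

# No layer structure: node 3 receives a skip connection straight from
# input 0, and the output node 4 mixes inputs and hidden nodes freely.
net = {
    2: [(0, 0.5), (1, -1.0)],            # hidden node fed by both inputs
    3: [(0, 0.8), (2, 1.5)],             # skip connection from input 0
    4: [(2, -0.7), (3, 0.9), (1, 0.2)],  # output mixes everything
}
out = forward({0: 1.0, 1: 0.5}, net, [4])
```

Because the wiring is just a dict of edges, topology mutations (adding a node or a connection, as in NEAT-style methods) are simple data edits rather than architectural surgery.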