This reminds me a lot of the work on compressed neural networks from Jan Koutnik and his colleagues. They don't evolve the topology of the network; instead they learn its weights in a compressed space, which seems very similar to weight sharing.
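A minimal sketch of the idea (my own illustration, not their implementation): the full weight vector is represented by a handful of low-frequency DCT-like coefficients, and decoding is just an inverse transform, so a short genome controls many weights. The genome length and weight count below are made up for the example.

    import numpy as np

    def decode_weights(coeffs, n_weights):
        """Decode a short genome of frequency coefficients into a full weight vector.

        Each weight is a sum of low-frequency cosine basis functions, so a few
        coefficients (the searched parameters) control many weights -- in effect
        a soft form of weight sharing.
        """
        k = np.arange(len(coeffs))        # frequency indices of the genome
        n = np.arange(n_weights)          # positions in the full weight vector
        # DCT-II style basis: cos(pi * k * (2n + 1) / (2 * n_weights))
        basis = np.cos(np.pi * np.outer(2 * n + 1, k) / (2 * n_weights))
        return basis @ coeffs

    # e.g. 3 evolved coefficients expand into 16 network weights
    genome = np.array([0.8, -0.1, 0.05])
    weights = decode_weights(genome, n_weights=16)
    print(weights.shape)  # (16,)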
For example, the cart pole benchmark (without swing-up) only requires a simple linear controller with equal positive weights, which can easily be encoded with this approach.
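To illustrate with the decoder sketched above (again my own simplification, not the paper's setup): a genome with a single nonzero zero-frequency coefficient decodes to a weight vector whose entries are all equal and positive, which is exactly the kind of controller that suffices here, so the search collapses to one parameter.

    def cart_pole_action(state, genome):
        """Bang-bang controller: push right if the weighted state sum is positive.

        With genome = [c] and c > 0, the decoded weights are all equal and
        positive, which is enough for the standard (no swing-up) cart pole task.
        """
        weights = decode_weights(np.asarray(genome), n_weights=len(state))
        return 1 if weights @ state > 0 else 0  # 1 = push right, 0 = push left

    # state = [cart position, cart velocity, pole angle, pole angular velocity]
    action = cart_pole_action(np.array([0.0, 0.1, 0.05, 0.2]), genome=[1.0])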
Here are some related papers:
- original idea: http://people.idsia.ch/~tino/papers/koutnik.gecco10.pdf
- vision-based TORCS: http://repository.supsi.ch/4548/1/koutnik2013fdg.pdf
- backpropagation with compressed weights: http://www.informatik.uni-bremen.de/~afabisch/files/2013_NN_...