This looks really great!
To be honest I tend to tinker a bit in the simulation space, so all of this is really interesting. It seems like a GitHub pledge is better for you than Patreon at the moment, as you keep all the money; would you prefer that?
I'm glad you like it! I'm trying out both GitHub sponsors and Patreon at the moment to fund more projects, I'm not sure which will stick! So far it looks like Patreon has better support for giving back to your backers, but GitHub also doesn't take a cut. I am of course very grateful and flattered - please choose whichever you prefer!
If you are thinking of making Patreon-only content, then that's better.
Hence my asking ;)...
- Have you looked into Latin Hypercube Sampling for your initial population creation? Or in general, some initial set that tries to cover the space better than a pseudo-random sample? LHS tries to guarantee some distance between the points, which can be beneficial for rather non-convex problems.
- For very high dimensionality, have you considered classification to identify which dimensions are significant?
With a sufficient population size, taking advantage of better sampling techniques for initial population generation can make a significant difference. I've used LHS and modified LHS approaches before, but I wanted to keep things simple, at least in the earlier parts of the book!
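For readers unfamiliar with the technique, here is a minimal sketch of LHS over the unit hypercube: each dimension is split into as many equal strata as there are points, exactly one point lands in each stratum, and the stratum order is shuffled independently per dimension. The function name and bounds below are illustrative, not from the book.

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    """Latin Hypercube Sample of n_points in the unit hypercube [0, 1)^n_dims.

    Each dimension is divided into n_points equal strata; exactly one
    sample falls in each stratum, which guarantees coverage along every
    axis that a plain pseudo-random sample does not.
    """
    rng = np.random.default_rng(rng)
    # One uniformly random offset inside each stratum, per dimension.
    samples = (rng.random((n_points, n_dims)) + np.arange(n_points)[:, None]) / n_points
    # Shuffle the stratum order independently for every dimension.
    for d in range(n_dims):
        rng.shuffle(samples[:, d])
    return samples

# Hypothetical initial population for a problem bounded by [-5, 5]^3:
unit = latin_hypercube(20, 3, rng=42)
population = -5.0 + unit * 10.0  # rescale to the decision-space bounds
```

Rescaling to the real decision-space bounds, as in the last line, is all that is needed to use this as the initial population of an evolutionary algorithm.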
Something I've worked on with a colleague recently is a more data-science-oriented approach with pre-optimisation exploration of the problem (https://link.springer.com/chapter/10.1007/978-3-030-43722-0_...); I'm hoping to do more work on this soon. With regard to high dimensionality in the search space, this kind of approach could be useful.
Previously I've been more interested in high-dimensional objective spaces and how to deal with them, primarily using progressive preference articulation.
If you have any requests for additional sections in the book I would love to hear them! Something high up on my list is doing a section or two on my neuroevolution algorithm, but keeping it at the right level is tricky.
Do you have any favorite approaches to reduction?
My main use of dimensionality reduction is to improve visualisation to support decision making. I've seen PCA and differential-evolution-based approaches in the decision space show promising results on benchmarks.