In other words, rather than apply GAs to the optimization of a large complex system, these days the paradigm seems to be to find a way to derive the needed gradients (or extract them via automatic differentiation). Does this seem to be the case, or are GAs still shining in some applications?
Instead, genetic methods work well on various messy industrial engineering problems such as strip packing or antenna design. One of my favorite recent results solves a structural engineering problem using grammatical evolution.
I seem to remember encountering some work in ML hyperparameter selection (which can often be reduced to a combinatorial problem) that made use of metaheuristics. I'm having trouble recalling the group that was working on it right now, though.
They've also been pushed out of most combinatorial optimisation problems by Constraint Programming methods and by other more principled metaheuristics such as Large Neighbourhood Search.
In my opinion, part of the problem is that so much of the literature around GAs involves what I like to think of as voodoo optimisation. You take a simple-to-implement randomised algorithm, apply it to some poorly studied but high-dimensional problem, and then poke things as appropriate until you eventually find some feasible solution. Maybe.
If I'm being honest, the only time I'd consider GAs is if I had very little insight into the problem at hand and even then only after I've run out of other things to try.
For example, a 'random' tour in the traveling salesman problem might have a nice, sensible path between Paris and Moscow, while another solution has a good path between San Francisco and Mexico City. Chromosomal crossover (the fundamental operation of genetic algorithms) can create "offspring" tours that combine both of these components.
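To make that concrete, here's a minimal sketch of order crossover (OX1), one standard way to recombine two tours so that the child inherits a contiguous sub-path from one parent while preserving the relative city ordering of the other. The city names are just illustrative; any permutation works.

```python
import random

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX1): copy a slice of parent 1 into the child,
    then fill the remaining slots with parent 2's cities in their
    original relative order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]                      # inherit a sub-tour from parent 1
    fill = [c for c in p2 if c not in child]  # parent 2's ordering of the rest
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

tour_a = ["Paris", "Moscow", "Berlin", "Rome", "Madrid"]
tour_b = ["Madrid", "Rome", "Paris", "Berlin", "Moscow"]
child = order_crossover(tour_a, tour_b, rng=random.Random(0))
# Whatever slice is chosen, the child is always a valid tour: a
# permutation of the same cities, never a city repeated or dropped.
assert sorted(child) == sorted(tour_a)
```

The key point is that permutation-respecting crossover operators like this exist precisely because naive bit-string crossover would produce invalid tours.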
This is not at all analogous to exploring the middle between two local optima - the solution space often has no meaningful geometric structure, so being located "between" two other solutions may not make sense at all.
This is an annual competition on evolving "human competitive" solutions to difficult real world problems.
Anybody who's interested in aircraft design should start by looking at Dan Raymer's website. He wrote the book. It's about 800 pages long.
But back to GAs. A lot of comments in this thread speculate as to why they're not used more. The paper illustrates one of the main reasons: Goldberg's GA. It's got a lot of problems. At the root is Goldberg's attempt to build a theory that makes GAs seem well-founded, more than just a heuristic search. If you read the GA literature from the late 1980s and early 1990s, you'll see a lot of talk about "Schema Processing," "Temporal Salience Structures," and "Building-Block Theory." The reality is, it's all hand-waving that tries to justify in principle something that doesn't work very well in practice: binary encoding and roulette-wheel selection. Even at the time (1989) when Goldberg published his GA book, the Germans had come up with something much better: the Evolution Strategy (ES). The ES encodes real-valued variables with floating-point numbers. An ES manages the population much more sensibly and tends to work in "steady-state" rather than "generational" mode (i.e. it uses new information immediately rather than waiting for a whole generation to be evaluated). The most successful modern descendants of the ES are differential evolution (DE) and CMA-ES.
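For a sense of how little machinery DE actually needs, here's a sketch of the classic DE/rand/1/bin scheme on real-valued vectors. Note the selection step: each trial immediately replaces its parent if it's at least as good, which is the "use new information immediately" behaviour mentioned above. The test function and parameter values are just illustrative defaults.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate by adding a scaled
    difference of two population members to a third, binomial
    crossover with the parent, then greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fitness[i]:   # trial replaces parent right away
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Minimize the sphere function on [-5, 5]^3:
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5, 5)] * 3)
```

No binary encoding, no roulette wheel: real-valued vectors and difference vectors do all the work.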
This is unfortunate, because Goldberg's approach ignores the single biggest advantage available to population-based heuristic search: it can be used for multiobjective optimization. This is known to the aerospace community, and indeed it was already known in 2003 when the paper we're discussing was published. The very next year, Simpson and D'Souza published an aircraft design paper based on work with a multi-objective evolutionary algorithm.
Unfortunately, multiobjective optimization algorithms are inherently hard to compare with single-objective algorithms. This means that comparative studies almost always favor single-objective optimization, because it does a better job optimizing a single objective. The strength of multiobjective optimization is that it helps its users understand the problem space -- single-objective optimization works great if your mind is already made up about what performance characteristics you want.
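The artifact a multiobjective EA returns is a Pareto front rather than a single number, which is exactly why head-to-head comparisons are awkward. A minimal sketch of the underlying dominance relation (the design data here is hypothetical, just illustrating e.g. weight-vs-drag trade-offs):

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective and
    strictly better in at least one (all objectives minimised)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the non-dominated points: the trade-off surface a
    multiobjective EA hands back to the user."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (weight, drag) pairs for candidate aircraft designs:
designs = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (2.5, 2.5), (3.5, 3.5)]
front = pareto_front(designs)
# (2.5, 2.5) and (3.5, 3.5) are dominated by (2.0, 2.0); the other
# three are mutually incomparable trade-offs.
```

A single-objective run would collapse this to one point by committing to a weighting up front; the front is what lets the user explore the trade-offs first.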
I could go on all day; this was a chapter in my dissertation. But I'd better wrap it up. I'll conclude by saying that there are people using multi-objective evolutionary algorithms both inside and outside academia. You'll meet them if you go to an SMO conference. GAs aren't dead, they're just not where you're looking for them.
>>I could go on all day; this was a chapter in my dissertation
I'd love a link (assuming it is publicly available!)
Here's a link to my dissertation:
I forgot to put a disclaimer in my post: Tim Simpson was my advisor, which is how I know about his paper with D'Souza.
For DE, your best bet is probably to start with Rainer Storn's website: http://www1.icsi.berkeley.edu/~storn/code.html
For CMAES, Hansen has a good website: https://www.lri.fr/~hansen/cmaesintro.html