
Parametric Study of a Genetic Algorithm (2003) [pdf] - happy-go-lucky
http://www.genetic-programming.org/sp2003/Marta.pdf
======
thearn4
I remember being fascinated by GAs as an undergraduate, but haven't seen much
discussion come out of the space in a while. I'm guessing machine learning
took off (in particular deep learning) and shifted the emphasis toward
large-scale gradient-based optimization via backpropagation/adjoint
approaches.

In other words, rather than apply GAs to the optimization of a large complex
system, these days the paradigm seems to be to find a way to derive the needed
gradients (or extract them via automatic differentiation). Does this seem to
be the case, or are GAs still shining in some applications?

~~~
TezlaOil
Machine learning is automated classification, regression, and decision-making.
Genetic algorithms don't tend to perform so well in these areas (just as ML is
not so appropriate for combinatorial optimization).

Instead, genetic methods work well for various messy industrial engineering
such as strip packaging design or antenna design. One of my favorite recent
results solves a structural engineering problem using grammatical evolution
[1].

[1]
[http://www.sciencedirect.com/science/article/pii/S0926580513...](http://www.sciencedirect.com/science/article/pii/S0926580513002124)

~~~
amelius
Perhaps genetic algorithms could be useful for hyperparameter optimization of
deep learning methods? ;)

~~~
thearn4
I remember using GAs to tune SVM kernel hyperparameters a few years ago. They
worked well enough, but I mostly did it because I was lazy.

------
yters
Much of the interest in GAs comes from their analogy with biological
evolution, with its amazing ability to generate novel and effective solutions
to very difficult problems in relatively few iterations. Unfortunately,
computational evolution has not discovered the secret sauce that makes
biological evolution so effective, and so it has lost ground to quicker and
simpler methods such as simulated annealing and particle swarm optimization.

------
gradstudent
GAs are in this weird place where much of their lunch (by which I mean tasks
such as classification and regression), has been eaten by more principled
methods from Machine Learning.

They've also been pushed out of most combinatorial optimisation problems by
Constraint Programming methods and by other more principled meta-heuristics
such as Large Neighbourhood Search.

In my opinion part of the problem is that so much of the literature around GAs
involves what I like to think of as voodoo optimisation. You take a
simple-to-implement randomised algorithm, apply it to some poorly studied but
high-dimensional problem, and then poke things as appropriate until you
eventually find some feasible solution. Maybe.

If I'm being honest, the only time I'd consider GAs is if I had very little
insight into the problem at hand and even then only after I've run out of
other things to try.

~~~
radarsat1
I think there are lots of applications for gradient-free methods, but it's
never been clear to me what the advantage is of the "genetic" aspect as
compared to other randomized search strategies. However, the ability to
dynamically grow a program tree is interesting and somewhat difficult to model
in an ANN-style framework. But in gradient free methods, there are lots of
interesting ideas like particle swarm optimisation and simulated annealing, or
even just random search. It's not really clear to me what the "gene swapping"
approach brings to the table, except that it probably encourages exploring the
area in the middle between some good minima that are potentially far apart in
the search space. But you could formulate this same idea without appealing to
evolutionary metaphors.

~~~
TezlaOil
Gene swapping makes perfect sense when the success of a given solution is due
to the various contributions of the solution's components (this is closely
related to the Credit Assignment Problem).

For example, a 'random' tour in the traveling salesman problem might have a
nice, sensible path between Paris and Moscow, while another solution has a
good path between San Francisco and Mexico City. Chromosomal crossover (the
fundamental operation of genetic algorithms) can create "offspring" tours
that contain both of these components.

This is not at all analogous to exploring the middle between two local optima
- the solution space often has no meaningful geometric structure, so being
located "between" two other solutions may not make sense at all.
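To make that concrete, here's a toy sketch of crossover on tours, using order
crossover (OX), a standard permutation-preserving operator. The city labels
just echo the examples above; real TSP code would work on city indices and
check tour length with a fitness function.

```python
import random

def order_crossover(parent_a, parent_b):
    """Order crossover (OX): copy a random slice of parent_a intact, then
    fill the remaining positions with parent_b's cities in the order they
    appear there. A good sub-tour (say, a sensible Paris-Moscow leg) can
    survive as a unit while the rest of the route comes from the other parent.
    """
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]  # inherit a contiguous sub-tour unchanged
    remaining = [c for c in parent_b if c not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child

tour1 = ["SF", "MexicoCity", "Paris", "Moscow", "Berlin"]
tour2 = ["Berlin", "Paris", "SF", "MexicoCity", "Moscow"]
child = order_crossover(tour1, tour2)  # always a valid tour over all 5 cities
```

The point is that the child is always a valid permutation, so the operator
recombines structural pieces of solutions rather than averaging coordinates.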

------
ramgorur
This might also be relevant:
[http://www.human-competitive.org/](http://www.human-competitive.org/)

This is an annual competition on evolving "human competitive" solutions to
difficult real world problems.

------
sevensor
It's interesting that this was posted, in that (1) it uses an approach to GAs
that was already well out of date by 2003, and (2) the problem domain
(aerospace) has been beaten to death with GAs. The problem with structural and
multidisciplinary optimization (SMO) is that it's really hard. Each domain has
complex, computationally expensive models that interact with all of the other
domains. So to optimize across all domains, you have to propagate constraints
across the different models (e.g. between your model for the hydraulic system
and your fluid dynamics model for the wing.) Optimizing for range, as this
paper does, is a gross oversimplification that results in designs that might
not be remotely flyable.

Anybody who's interested in aircraft design should start by looking at Dan
Raymer's website. He wrote the book. It's about 800 pages long.

But back to GAs. A lot of comments in this thread speculate as to why they're
not used more. The paper illustrates one of the main reasons: Goldberg's GA.
It's got a lot of problems. At the root is Goldberg's attempt to build a
theory that makes GAs seem well-founded, more than just a heuristic search. If
you read the GA literature from the late 1980s and early 1990s, you'll see a
lot of talk about "Schema Processing," "Temporal Salience Structures," and
"Building-Block Theory." The reality is, it's all hand-waving that tries to
justify in principle something that doesn't work very well in practice: binary
encoding and roulette-wheel selection. Even at the time (1989) when Goldberg
published his GA book, the Germans had come up with something much better: the
Evolution Strategy (ES). The ES encodes real-valued variables with floating-
point numbers. An ES manages the population much more sensibly and tends to
work in "steady-state" rather than "generational" mode (i.e. uses new
information immediately rather than waiting for a whole generation to be
evaluated). The most successful modern descendants of the ES are differential
evolution (DE) and CMAES.
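For readers who haven't seen an ES, here's a minimal sketch of the classic
(1+1)-ES in that spirit: real-valued encoding, Gaussian mutation, and
steady-state replacement (each child competes with its parent immediately,
rather than waiting out a generation). The step-size adaptation is
Rechenberg's 1/5 success rule; the sphere function is just a stand-in test
problem, not anything from the paper.

```python
import random

def es_one_plus_one(f, x, sigma=1.0, iters=500):
    """(1+1) Evolution Strategy sketch, minimizing f.
    - Real-valued variables, mutated with Gaussian noise (no binary encoding).
    - Steady-state: the child replaces the parent as soon as it's evaluated.
    - Step size sigma adapts by the 1/5 success rule: widen the search when
      mutations succeed often, narrow it when they rarely do."""
    fx = f(x)
    successes = 0
    for t in range(1, iters + 1):
        child = [xi + random.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:                # new information used immediately
            x, fx = child, fc
            successes += 1
        if t % 20 == 0:             # adapt sigma every 20 trials
            sigma *= 1.5 if successes / 20 > 0.2 else 0.6
            successes = 0
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)  # toy objective
best, best_f = es_one_plus_one(sphere, [5.0, -3.0, 2.0])
```

Compare this with roulette-wheel selection over bitstrings: there's no
encoding/decoding step, and selection pressure comes from direct
parent-vs-child competition.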

Anyway, Goldberg's GA got an early head-start on popularity, especially in
North America, because he published source code in the back of his book.
Anybody could pick it up and write a GA, and many did. This is the lineage for
every JavaScript GA demo I've ever seen, whether the authors know it or not
(likely not.) This is unfortunate, because Goldberg's approach ignores the
single biggest advantage available to population-based heuristic search: it
can be used for multiobjective optimization. This is known to the aerospace
community, and indeed it was already known in 2003 when the paper we're
discussing was published. The very next year, Simpson and D'Souza published an
aircraft design paper based on work with a multi-objective evolutionary
algorithm.

Unfortunately, multiobjective optimization algorithms are inherently hard to
compare with single-objective algorithms. This means that comparative studies
almost always favor single-objective optimization, because it does a better
job optimizing a single objective. The strength of multiobjective optimization
is that it helps its users understand the problem space -- single-objective
optimization works great if your mind is already made up about what
performance characteristics you want.

I could go on all day; this was a chapter in my dissertation. But I'd better
wrap it up. I'll conclude by saying that there are people using multi-
objective evolutionary algorithms both inside and outside academia. You'll
meet them if you go to an SMO conference. GAs aren't dead, they're just not
where you're looking for them.

~~~
cbennett
Thanks for this extremely educational comment. I'm excited to read a bit more
about DE and CMAES in particular!

>>I could go on all day; this was a chapter in my dissertation

I'd love a link (assuming it is publicly available!)

~~~
sevensor
Glad it was useful! Once I get started writing about GAs I have trouble
slowing down. By the time I posted my comment it was three or four pages back,
so I'm encouraged that somebody actually read it!

Here's a link to my dissertation:

[https://etda.libraries.psu.edu/catalog/28690](https://etda.libraries.psu.edu/catalog/28690)

I forgot to put a disclaimer in my post: Tim Simpson was my advisor, which is
how I know about his paper with D'Souza.

For DE, your best bet is probably to start with Rainer Storn's website:
[http://www1.icsi.berkeley.edu/~storn/code.html](http://www1.icsi.berkeley.edu/~storn/code.html)

For CMAES, Hansen has a good website:
[https://www.lri.fr/~hansen/cmaesintro.html](https://www.lri.fr/~hansen/cmaesintro.html)

