
Introduction to Genetic Algorithms - ReginaDeiPirati
https://blog.floydhub.com/introduction-to-genetic-algorithms/
======
privong
One of the neater implementations/uses of genetic algorithms that I have seen
was the design of a spacecraft antenna.

This is what the GA came up with for the design given the constraints:
[https://en.wikipedia.org/wiki/Genetic_algorithm#/media/File%3ASt_5-xband-antenna.jpg](https://en.wikipedia.org/wiki/Genetic_algorithm#/media/File%3ASt_5-xband-antenna.jpg)

And this paper describes the process: [http://ti.arc.nasa.gov/m/pub-archive/1244h/1244%20(Hornby).pdf](http://ti.arc.nasa.gov/m/pub-archive/1244h/1244%20\(Hornby\).pdf)

~~~
Gibbon1
I was just thinking about that recently. One of the antennas my company
designed had some weird constraints based on the plastics. The engineer took a
wire and bent it by hand to fit while looking at the impedance on a signal
analyzer. And... it's a wire with some odd kinks in it.

Worked 'fine'

Later they redesigned the product to use a better classic PCB antenna and it
'sucked'.

~~~
pkolaczk
Impedance is easy to measure. But this is not the only parameter. A 50 Ohm
resistor has a perfect impedance, but makes a terrible antenna. I wonder how
they optimized the radiation characteristic this way.

~~~
Gibbon1
I mistyped: he uses a vector network analyzer for tuning RF stuff. It measures
the amplitude and phase of a circuit and generates a Smith chart in real time.
They've come down in price a lot, but they used to be $50k or more.

There are also simulation tools that allow you to simulate antennas.

------
wwweston
Took a GA class during my undergrad years -- actually went to a neighboring
university to do it because mine didn't offer it -- and it informed a research
project I got a tiny grant for. One of the things my prof noted was that many
people looooved GAs as a research topic because, at least at the time, a good
chunk of related work was in coming up with ideas for fitness functions and
examining them and that's a vein a reasonably creative person could mine for
publishing for a long time.

~~~
philipkiely
I think that the most fun part of writing this article was coming up with
simple examples that adequately demonstrated the components of a GA. I agree
that creativity is a huge asset in the topic!

------
snrji
When I was in college I was mystified by genetic algorithms, without knowing
much about them. After taking two courses on the subject and reading some
books, I came to the conclusion that apart from being inherently inefficient
(which is why you only apply them when you have no alternative), they are
actually outperformed by hill climbing (which can be seen as a special case of
a GA with population = 1). Also, the crossover operator seems to do more harm
than good, and its usefulness in nature is not fully understood, although
there are some theories (this last point is taken from Pedro Domingos's book).
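For reference, the hill-climbing baseline being described fits in a few lines. This is a generic sketch (toy OneMax problem, names and parameters are mine, not from any particular course or library):

```python
import random

def hill_climb(n_bits=20, iters=500, seed=0):
    """Stochastic hill climbing on OneMax (maximise the number of 1s):
    flip one random bit and keep the change only if fitness doesn't drop.
    This is effectively a GA with population = 1 and no crossover."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iters):
        i = rng.randrange(n_bits)
        y = x[:]
        y[i] ^= 1                  # flip one bit
        if sum(y) >= sum(x):       # accept neutral or improving moves
            x = y
    return x

best = hill_climb()
```

On a unimodal toy problem like this, the single climber typically reaches the optimum quickly, which is the point of the comparison.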

~~~
sago
Interesting. I want to ask this in a way that is respectful because I'm not
grandstanding, but coming to that conclusion after two undergraduate courses,
based on preceding mystification, did it not occur to you that it might be
your understanding, rather than the whole field?

My personal story: I worked for just under a decade on EAs (PhD and
professionally, late 90s and early 00s). I don't entirely disagree with you
about pure GAs. But even back then, pure GAs were rarely used. But I do
disagree about evolutionary algorithms in general.

The usefulness of EAs is highly problem dependent. And I will definitely
concede that the space of problems in which they excel and are tractable is
smaller than the space of problems for something like a neural network, in my
experience.

As for crossover, when it works (and in engineering contexts the operator
needs to be designed with the problem in mind) it does so by allowing
individuals at different points on the landscape to share the optimisations
they find. At the cost of increased genetic load. This, of course, is only
useful if the fitness landscape is both heavily multimodal and self-similar.
And it requires some effort to avoid the population converging around a single
solution (where you do have a stochastic hill climb).

In my experience, EAs come into their own with non-fixed genotypes or complex
genotype-to-phenotype expression. Multivariate optimisation with a fixed
number of variables is probably best done with other tools.
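For concreteness, the textbook version of the operator being discussed is one-point crossover on fixed-length genomes; it's what lets two individuals exchange the partial solutions they found separately (a generic sketch, not any specific engineered operator):

```python
import random

def one_point_crossover(a, b, rng=random):
    """Classic one-point crossover for fixed-length genomes: each child
    takes a prefix from one parent and the suffix from the other, so
    building blocks discovered by different individuals can combine."""
    assert len(a) == len(b)
    cut = rng.randrange(1, len(a))  # cut point strictly inside the genome
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```

Whether this helps or hurts depends, as noted above, on the fitness landscape being multimodal and self-similar enough for shared building blocks to mean anything.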

~~~
geezerjay
> And I will definitely concede that the space of problems in which they excel
> and are tractable is smaller than the space of problems for something like a
> neural network, in my experience.

The only thing that EAs and neural networks have in common is the buzzword
abuse. Neural networks are actually useful as they are at their core an
efficient way to approximate multivariate functions, while EAs are
computationally expensive heuristics abused by being passed off as
optimization algorithms even in domains where deterministic algorithms (and
even brute force approaches) actually perform better.

------
benrbray
Out of curiosity, does anyone know of examples where genetic algorithms are
the "right choice"?

I thought they were really nifty when I first heard of them, but thinking of
them as an optimization procedure, they don't really stand up in my experience
to basic gradient descent methods. Sure, you can e.g. train a neural network
with genetic algorithms, but why would you?

I'd love to be proven wrong though :)

~~~
chrxr
I wrote a GA for picking an optimal Fantasy Football (soccer, Brit here) team
based on player scores from the previous season(1). The team did above
average, even though it was hobbled by the fact that I didn't do any transfers
after the initial pick.

The knapsack problem(2), of which fantasy team picking is an example, is a
classic use case for genetic algorithms.

1) [https://github.com/chrxr/ga-prem-team/blob/master/scripts/ga_script.py](https://github.com/chrxr/ga-prem-team/blob/master/scripts/ga_script.py)
2) [https://en.wikipedia.org/wiki/Knapsack_problem](https://en.wikipedia.org/wiki/Knapsack_problem)
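To show the general shape of such a GA (a minimal generic sketch of the knapsack formulation, not the linked script): genomes are one bit per item, infeasible picks score zero, and tournament selection plus crossover and mutation do the rest.

```python
import random

def ga_knapsack(values, weights, capacity,
                pop_size=50, generations=100, mut_rate=0.05, seed=1):
    """Minimal GA for 0/1 knapsack: one bit per item, hard penalty
    for overweight solutions, binary tournament selection."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(bits):
        w = sum(wt for b, wt in zip(bits, weights) if b)
        v = sum(vl for b, vl in zip(bits, values) if b)
        return v if w <= capacity else 0   # infeasible -> worthless

    def select(pop):
        a, b = rng.sample(pop, 2)          # binary tournament
        return a if fitness(a) >= fitness(b) else b

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(pop), select(pop)
            cut = rng.randrange(1, n)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([b ^ (rng.random() < mut_rate) for b in child])
        pop = nxt
    return max(pop, key=fitness)

# Toy instance: the optimum is the items worth 100 and 120 (weight 50).
best = ga_knapsack([60, 100, 120], [10, 20, 30], 50)
```

A real fantasy-team picker adds constraints (positions, budget, squad size) to the fitness function, but the loop is the same.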

~~~
pwbdecker
Wow that's funny I did the exact same thing! I was working at a web/mobile
gaming company building a sports draft platform and I was tasked with building
a bot player to fill in empty player slots. I used a GA and it worked really
well even though I didn't know anything about sports.

------
closetCS
This is probably extremely cynical, but does FloydHub have a vested interest
in promoting genetic algorithms because they can take a long time to train,
especially with neural network hyperparameters as the search space? These
kinds of beefy, complex training processes would be fantastic for its business.

~~~
nurettin
I think this is a reasonable insight into their business model. Especially if
they are able to somehow cheaply spin up long-running processes. Not sure how
they might be able to do that, though. Conventional tech is all about short
term processes.

------
joe_the_user
Hmm, so what is the theory of the mutation function? Is it just determined ad-
hoc by looking at the problem and throwing some standard examples at it?

It seems like you could implement effectively any function with a complicated
enough combination of fitness, selection and mutation functions but without a
theory of which to use, progress would be a bit hard.

~~~
thephyber
I think it's usually a function of what data structure your "genotype" is
stored in.

In my experiments with GAs, defining a coefficient for mutation that is either
far too high or far too low means you search too much of the problem search
space or far too little, respectively. Testing parameters such as the
frequency of mutation is part of the iterative process of tuning your GA.
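To make the "coefficient for mutation" concrete: on bit-string genomes it's usually a per-gene flip probability. A common rule of thumb (my addition, not from the comment above) is rate ≈ 1/genome_length, i.e. about one expected flip per offspring:

```python
import random

def mutate(bits, rate, rng=random):
    """Flip each gene independently with probability `rate`.
    rate ~ 1/len(bits) changes roughly one gene per offspring; much
    higher degenerates toward random search, much lower stalls it."""
    return [b ^ (rng.random() < rate) for b in bits]
```

At rate=0 the genome is untouched and at rate=1 every gene flips; tuning lives between those extremes.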

~~~
jacques_chester
Some schemes add those parameters to the genome input itself. IIRC it's not a
big win, but I don't recall at the moment why.

------
jfz
I've been working on a Genetic Algorithm/Evolutionary Computing framework in
Scala, using network-parallelism to solve optimization problems fast. If
you're interested, check it out at [https://github.com/evvo-labs/evvo](https://github.com/evvo-labs/evvo)

------
hazeii
An interesting case is where a human does the selection; this was pretty well
covered in Dawkins's 'The Blind Watchmaker' [0] with his program for generating
biomorphs. Some nice speculation in there too:

'Dawkins speculated that the unnatural selection role played by the user in
this program could be replaced by a more natural agent if, for example,
colourful biomorphs could be selected by butterflies or other insects, via a
touch-sensitive display set up in a garden.' (from the Wikipedia article)

[0]
[https://en.wikipedia.org/wiki/The_Blind_Watchmaker](https://en.wikipedia.org/wiki/The_Blind_Watchmaker)

------
GordonS
A lot of the time there are better solutions than a GA, but something I've
always liked about them is how easy they are to understand, even for AI noobs.

I guess it's because of the similarities to the evolution of life, but also
because they're just quite simple.

------
jacques_chester
GAs are a lot of fun, but a lot of the time they are very time-consuming in
the case of fairly straightforward optimisation problems. Other iterative
searches like simulated annealing or just plain ol' dumb hill climbing are a
lot faster most of the time.

~~~
marcosdumay
If your problem is simple enough to solve with SA (let's face it, most
problems are), there is really no gain in using a GA.

Genetic algorithms beat simulated annealing when different dimensions of your
solution interact strongly and unpredictably; SA and gradient searches
completely fail in those cases.
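For comparison, the entire SA loop is tiny. This is a generic textbook sketch (maximisation form; the toy fitness and neighbour function are just for illustration):

```python
import math
import random

def simulated_annealing(fitness, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=2000, seed=0):
    """Minimal simulated annealing: always accept improvements, accept
    worse moves with probability exp(delta / T), cool T geometrically."""
    rng = random.Random(seed)
    x, t, best = x0, t0, x0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = fitness(y) - fitness(x)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = y
            if fitness(x) > fitness(best):
                best = x
        t *= cooling
    return best

# Toy usage: walk an integer toward the peak of -(v - 3)^2.
peak = simulated_annealing(lambda v: -(v - 3) ** 2,
                           lambda v, rng: v + rng.choice([-1, 1]), 20)
```

The one-candidate-at-a-time structure is exactly why it struggles when strongly interacting dimensions make the landscape rugged: there is no population to carry alternative combinations forward.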

------
philipkiely
Hi, author here!

I’m really excited by the conversation this has generated and I’m happy to
answer any questions.

~~~
vanderZwan
> _When you're solving a problem, how do you know if the answer you've found
> is correct? In many domains, there is a single correct answer. A
> mathematical function may have a global maximum or other well-defined
> attributes. However, other problems, like how a cell behaves in a petri
> dish, do not have clear solutions. Enter evolution, which does not design
> towards a known solution but optimizes around constraints._

What about situations when there are no real optima and minima, but only
intransitive relations? So "rock/paper/scissor"-like scenarios[0]. Could you
use genetic algorithms in any way in those circumstances?

[https://aeon.co/essays/attempts-to-choose-the-best-life-may-be-doomed-to-failure](https://aeon.co/essays/attempts-to-choose-the-best-life-may-be-doomed-to-failure)

~~~
philipkiely
One issue with “rock paper scissors” is the fixed input domain; there is
nothing to mutate if your three starting options are the only possible inputs.

If you’re trying to do something like predict the next move based on
historical patterns, I think a classifier would be better.

