
How Do Genetic Algorithms Work? [video] - henning
https://www.youtube.com/watch?v=ziMHaGQJuSI
======
cmsmith
My issue with GAs is that they are definitely the sexiest/best-known
evolutionary algorithm in their class, so they get used for a lot of things
they're not really designed for. GAs work best when you have lots of
discrete parameters that combine in weird ways to determine fitness (you know,
like DNA). I'm a structural engineer, and most search problems I run into have
a small number of continuously valued parameters (like the locations of the
joints in the car in the video). GAs tend to swap parameters between
the fittest individuals of the previous generation, instead of searching the
region between those individuals.
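A toy sketch of that complaint (hypothetical operators, not from the video): for two continuous parents, classic uniform crossover only shuffles coordinate values between them, while a blend/interpolation operator can actually land in the region between the parents:

```python
import random

def uniform_crossover(a, b):
    # GA-style: each child coordinate is copied from one parent, so the
    # child stays on the "corners" spanned by the two parents
    return [random.choice(pair) for pair in zip(a, b)]

def blend_crossover(a, b):
    # Interpolating operator: the child can land anywhere on the segment
    # between the parents (often better suited to continuous parameters)
    t = random.random()
    return [t * x + (1 - t) * y for x, y in zip(a, b)]

parent1 = [0.0, 0.0, 0.0]
parent2 = [1.0, 1.0, 1.0]
child_swap = uniform_crossover(parent1, parent2)  # each coord is 0.0 or 1.0
child_mix = blend_crossover(parent1, parent2)     # coords lie in between
```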

~~~
Houshalter
Yes, I believe there was a paper showing that hill climbing with random
restarts almost always did better than genetic algorithms.
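For reference, hill climbing with random restarts is only a few lines; the objective below is a made-up toy function, not from any paper:

```python
import random

def objective(x):
    # toy function to minimize; global minimum at x = 3 (made-up example)
    return (x - 3.0) ** 2

def hill_climb(start, step=0.1, iters=1000):
    # greedy local search: propose a nearby point, keep it only if better
    x, fx = start, objective(start)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = objective(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def with_restarts(n=20):
    # random restarts: rerun from random starting points, keep the best
    return min((hill_climb(random.uniform(-10, 10)) for _ in range(n)),
               key=lambda result: result[1])

best_x, best_f = with_restarts()
```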

However, does it really matter? It might save a little time at best. But
almost everyone has heard of genetic algorithms, so they're easier to talk
about than particle swarm optimization or whatever. Also, genetic
algorithms tend to excite people in a way that other metaheuristics just
don't. People are fascinated by genetic algorithms, not so much by
hill climbing.

~~~
ai_maker
I would say that hill climbing with random restarts works better because you
must first define the cost function in a precise (and concise) symbolic
mathematical way, much as with simulated annealing, stochastic
gradient descent and the like. GAs, however, can be applied without the
knowledge required to build such a cost function. The GA is a
universal optimisation technique: as long as you can encode features in gene
form (very convenient for binary attributes) and rate the fitness of
a gene, you are ready to search for the optimum.
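A minimal sketch of that recipe, with a made-up fitness function (OneMax, i.e. just count the 1-bits in a binary gene):

```python
import random

GENES, POP, GENS = 20, 30, 60

def fitness(gene):
    # OneMax: toy fitness, reward the number of 1-bits (made-up problem)
    return sum(gene)

def crossover(a, b):
    cut = random.randrange(1, GENES)  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(gene, rate=0.02):
    # flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in gene]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # truncation selection, elitist
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
```

Note that nothing here needed a symbolic cost function, only the ability to score a gene.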

My stab at explaining GA's: [http://ai-maker.atrilla.net/the-%EF%BB%BFgenetic-
algorithms/](http://ai-maker.atrilla.net/the-%EF%BB%BFgenetic-algorithms/)

------
theatraine
I'm curious to see what people have used genetic algorithms for in practice.
Personally I've used them for feature selection when I was trying to reduce a
high amount of features to smaller amount for a classification problem.
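The feature-selection version is the usual GA loop over a bitmask genome; the fitness below is a stand-in (in practice it would be a cross-validated classifier score), and the "informative" feature indices are invented for the example:

```python
import random

N_FEATURES = 15
INFORMATIVE = {0, 3, 7}  # pretend these features carry signal (made up)

def fitness(mask):
    # stand-in for a cross-validated score: reward informative features,
    # penalize subset size (real code would train a classifier here)
    hits = sum(mask[i] for i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

def evolve(pop_size=20, gens=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]     # elitist truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]       # one-point crossover
            child = [1 - g if random.random() < 0.05 else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_mask = evolve()
selected = [i for i, bit in enumerate(best_mask) if bit]
```

The size penalty is what drives the reduction from many features down to a small subset.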

Of course they're definitely great for evolving little cars, like this other
example shows:
[http://rednuht.org/genetic_cars_2/](http://rednuht.org/genetic_cars_2/)

~~~
nickpsecurity
Google "applications of genetic algorithms". I got results that way years ago.
The abstracts of conference papers are also informative. I remember them used
for scheduling/planning and optimization problems. One person even used one to
find the optimal set of compiler flags for best performance in Haskell
programs. Look up the "Humies" awards for genetic and evolutionary computation
for a long list of human-competitive results too.

------
dyarosla
How hard is it to break out of local optima several
generations into a genetic algorithm? If we prune the domain space from the
get-go randomly, and then keep creating sub-domains of the original large
(or potentially infinite) domain, wouldn't we generally only head toward
local optima, albeit quickly?
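One concrete version of that worry (a toy demonstration, not from the thread): once every individual shares the same value at some position, crossover alone can never reintroduce the lost allele, so that region of the space is gone for good. Mutation is the standard escape hatch:

```python
import random

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# every individual has a 0 at position 2: crossover alone can never
# produce a 1 there, no matter how many generations we run
pop = [[1, 0, 0, 1, 0], [0, 1, 0, 0, 1], [1, 1, 0, 1, 1]]
for _ in range(1000):
    pop = [crossover(random.choice(pop), random.choice(pop)) for _ in pop]
assert all(ind[2] == 0 for ind in pop)  # the allele is lost for good

# mutation keeps injecting points outside the current sub-domain,
# so it can flip the lost bit back on
def mutate(gene, rate=0.1):
    return [1 - b if random.random() < rate else b for b in gene]
```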

------
lateguy
What is the difference between reinforcement learning and genetic
algorithms? This Stack Overflow question has mixed responses:
[http://stackoverflow.com/questions/12411197/can-
evolutionary...](http://stackoverflow.com/questions/12411197/can-evolutionary-
computation-be-a-method-of-reinforcement-learning). I have
implemented reinforcement learning once
([http://somedeepthoughtsblog.tumblr.com/post/134793589864/mat...](http://somedeepthoughtsblog.tumblr.com/post/134793589864/maths-
versus-computation)), and to me they look the same, as both boil down to
exploring and exploiting the environment for maximum reward.

~~~
danielbarla
Reinforcement learning refers to the general problem of reward maximization
where the agent must try solutions for itself and get feedback on the results
via some kind of evaluation / fitness function. (This contrasts with supervised
learning, where the agent would be given examples and would try to learn
from them.) Genetic algorithms are one specific class of optimization
algorithms that can accomplish this.

Exploration vs. exploitation is just a natural consequence of trying to
make the best sense of an unknown environment. Different algorithms have
different strategies for this, and are generally suited to different classes
of problems (generally, the issue is matching the nature of the problem
to the nature of the optimization algorithm, which brings up things like
the no-free-lunch theorem).

------
razerw0lf
On a side note, the item id of this post is a palindrome: "11066011", the
kind of pattern you can often see in GAs because of how sequence splits happen.

