
Intro to Genetic Algorithms in Javascript Part 2 - bkanber
http://burakkanber.com/blog/machine-learning-genetic-algorithms-in-javascript-part-2/
======
xxbondsxx
These are great articles, but I'm hesitant to lump genetic algorithms in
with the rest of machine learning. They're essentially non-convex
optimization and one of the more popular metaheuristics, but I think of ML
as classification / tracking / reinforcement learning.

That being said, GAs are a great starting point for getting into the world
of non-convex optimization. I actually wrote a version of GAs that runs on
the GPU using WebGL -- the demo runs in the browser:

<http://petercottle.com/GeneticGPU/index.html>

It's also distributed, so you can send the link to a friend and both
browsers will divide up the search space, each optimizing over its half
(and comparing minimums). A very hacky but fun project for a grad class at
Berkeley.
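
Not the actual GeneticGPU code, just a hypothetical sketch of the
partitioning idea -- two clients each scan one half of a 1-D domain and
compare their best minimums (the function names and toy objective here are
made up for illustration):

    // Hypothetical sketch: two clients split a 1-D search domain and
    // each grid-scans its own half for the smallest objective value.
    function searchHalf(f, lo, hi, samples) {
        let best = { x: lo, y: f(lo) };
        for (let i = 1; i <= samples; i++) {
            const x = lo + ((hi - lo) * i) / samples;
            const y = f(x);
            if (y < best.y) best = { x, y };
        }
        return best;
    }

    const f = x => Math.sin(3 * x) + 0.1 * x * x; // toy non-convex objective
    const a = searchHalf(f, -10, 0, 100000);      // "client A" takes the left half
    const b = searchHalf(f, 0, 10, 100000);       // "client B" takes the right half
    console.log(a.y < b.y ? a : b);               // compare minimums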

~~~
Dn_Ab
My definition of machine learning is more inclusive than yours. What about
regression? I suppose you would put that with tracking. Density estimation,
structure learning, matrix factorization, ensemble methods, boosting,
clustering, or merely estimating a joint distribution? None of those
cleanly fits your categories, yet all are machine learning techniques.

You also limit yourself to gradient-based techniques, but why emphasize
convex optimization? That leaves out neural nets and other settings where
SGD can do very well.

Machine learning is basically the combination of statistics and
optimization, where you can work with a lot of data and the output of the
computation is more important than the model. Here, GAs and other
heuristics such as tabu search and simulated annealing are just
search/optimization techniques for gradient-free or NP-complete problems.
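
For instance, simulated annealing fits in a few lines -- a gradient-free
search that sometimes accepts worse neighbors so it can escape local
minima (a toy sketch on a 1-D objective, not production code):

    // Minimal simulated annealing sketch on a 1-D toy objective.
    // Worse candidates are accepted with probability exp(-delta / temp),
    // which shrinks as the temperature cools toward zero.
    function anneal(f, x0, steps) {
        let x = x0;
        let fx = f(x);
        for (let i = 0; i < steps; i++) {
            const temp = 1 - i / steps;              // simple linear cooling
            const candidate = x + (Math.random() - 0.5);
            const fc = f(candidate);
            if (fc < fx || Math.random() < Math.exp((fx - fc) / temp)) {
                x = candidate;
                fx = fc;
            }
        }
        return { x, fx };
    }

    console.log(anneal(x => Math.sin(3 * x) + 0.1 * x * x, 5, 100000));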

------
Swizec
I thought the knapsack problem had a pretty good deterministic solution via
the magic of dynamic programming?

That said, isn't it usually best to use GAs only where a simpler algorithm
does not exist? Or does the GA version actually produce better results than
dynamic programming?

~~~
bkanber
Correct and correct.

The knapsack problem does have an excellent dynamic programming solution. And
of course, where there's a simpler algorithm, you should use it.
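
For reference, the standard 0/1 knapsack dynamic program, assuming integer
weights, fits in a few lines (a sketch, not the article's code):

    // Classic 0/1 knapsack DP: dp[w] is the best total value
    // achievable with remaining capacity w. O(items * capacity).
    function knapsackDP(items, capacity) {
        const dp = new Array(capacity + 1).fill(0);
        for (const item of items) {
            // Walk capacities downward so each item is used at most once.
            for (let w = capacity; w >= item.weight; w--) {
                dp[w] = Math.max(dp[w], dp[w - item.weight] + item.value);
            }
        }
        return dp[capacity];
    }

    console.log(knapsackDP([
        { weight: 3, value: 40 },
        { weight: 5, value: 70 },
        { weight: 2, value: 20 }
    ], 7)); // => 90 (takes the second and third items)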

This article isn't about the knapsack problem; it's about learning GAs. The
knapsack problem is just an easy-to-visualize and easy-to-implement problem to
solve in order to demonstrate features of genetic algorithms.
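
For anyone skimming, the GA formulation is roughly this (a stripped-down
sketch, not the article's actual code): each chromosome is a bit array,
fitness is the total value unless the weight limit is broken, and new
generations come from crossover plus mutation.

    // Stripped-down knapsack GA: bit i of a chromosome means "take item i".
    const items = [
        { weight: 3, value: 40 },
        { weight: 5, value: 70 },
        { weight: 2, value: 20 },
        { weight: 4, value: 50 }
    ];
    const capacity = 9;

    const randomChromosome = () =>
        items.map(() => (Math.random() < 0.5 ? 1 : 0));

    // Fitness: total value, or 0 if the knapsack is over capacity.
    const fitness = bits =>
        bits.reduce((w, b, i) => w + b * items[i].weight, 0) <= capacity
            ? bits.reduce((v, b, i) => v + b * items[i].value, 0)
            : 0;

    // Single-point crossover followed by per-bit mutation.
    function breed(a, b) {
        const cut = Math.floor(Math.random() * a.length);
        return a.slice(0, cut).concat(b.slice(cut))
            .map(bit => (Math.random() < 0.05 ? 1 - bit : bit));
    }

    let pop = Array.from({ length: 20 }, randomChromosome);
    for (let gen = 0; gen < 100; gen++) {
        pop.sort((x, y) => fitness(y) - fitness(x));
        const parents = pop.slice(0, 10); // truncation selection
        pop = parents.concat(parents.map((p, i) =>
            breed(p, parents[(i + 1) % parents.length])));
    }
    pop.sort((x, y) => fitness(y) - fitness(x));
    console.log(pop[0], fitness(pop[0])); // best chromosome found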

~~~
Swizec
Fair point.

I once used genetic algorithms for path planning where A* would do. While it
was fun to implement, the scoring was based on "goodness of path found in time
X" ... I didn't get a very high score on that homework.

On the other hand, I got a surprisingly high score for what was essentially
"random guessing until you run out of time".

------
torstein
Here is an exercise proposal: find the hidden terms of a polynomial
expression, where the fitness function is the integral of the difference
between the hidden function and the candidate one.
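
One way to make that fitness concrete (a sketch of my reading of the
proposal, with made-up names): numerically integrate the absolute
difference between the hidden polynomial and a candidate over some
interval, and evolve coefficients to minimize it.

    // Fitness sketch: integral over [a, b] of |hidden(x) - candidate(x)|,
    // approximated with the midpoint rule. Lower is fitter.
    const evalPoly = (coeffs, x) =>
        coeffs.reduce((sum, c, i) => sum + c * Math.pow(x, i), 0);

    function fitness(hidden, candidate, a, b, steps) {
        const dx = (b - a) / steps;
        let integral = 0;
        for (let i = 0; i < steps; i++) {
            const x = a + (i + 0.5) * dx; // midpoint of each slice
            integral += Math.abs(evalPoly(hidden, x) - evalPoly(candidate, x)) * dx;
        }
        return integral;
    }

    // e.g. hidden f(x) = 1 + 2x + 3x^2 against a close guess:
    console.log(fitness([1, 2, 3], [1, 2, 2.9], -1, 1, 1000));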

------
TazeTSchnitzel
Maybe I'm not thinking hard enough about this, but wouldn't the best
solution be to just stock up on elements that have the best value per
weight, starting with the highest?

~~~
bkanber
I talk about that technique in the article. It's called the "greedy algorithm"
or "greedy solution". In this example, the GA handily defeats that approach.

Edit: the greedy algorithm gives you a solution with a total value of $4,901.
So far, the best observed GA solution yields $5,968. 20% or so is a big
difference!
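
For anyone curious, the greedy heuristic amounts to a sort by
value-per-weight ratio followed by a single pass (a sketch, not the
article's code):

    // Greedy heuristic: take items in order of value/weight ratio,
    // skipping anything that no longer fits. Fast, but not optimal.
    function greedyKnapsack(items, capacity) {
        const byRatio = items.slice().sort((a, b) =>
            b.value / b.weight - a.value / a.weight);
        let remaining = capacity;
        let total = 0;
        for (const item of byRatio) {
            if (item.weight <= remaining) {
                remaining -= item.weight;
                total += item.value;
            }
        }
        return total;
    }

It fails exactly when a high-ratio item crowds out a combination of
lower-ratio items that would have filled the capacity better.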

~~~
TazeTSchnitzel
Ah, whoops. My bad.

