

Coding a Tetris AI using a Genetic Algorithm - mukyu
http://luckytoilet.wordpress.com/2011/05/27/coding-a-tetris-ai-using-a-genetic-algorithm/

======
matt1
Similar story...

My senior year in high school I got hooked on an online multiplayer Tetris
game called TetriNET. I eventually created a pretty sophisticated bot to play
for me, which absolutely crushed the games. A friend recommended I submit it
for consideration in the regional science fair, where I wound up taking first
in the computer science category.

Lots of details on the blog [1], including screenshots, source code, and
details of the algorithms used.

[1] <http://www.mattmazur.com/2009/05/creating-a-tetrinet-bot/>

------
lincolnq
I would be interested to see whether a genetic algorithm performs better than
a hill-climbing one in this context. By hill-climbing I mean searching for a
vector in the seven-dimensional space you've defined which makes the biggest
improvement in the fitness function. I tend to think hill climbing would
converge faster than evolution among random mutations.
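
Sketched in Python, with a hypothetical `evaluate` function standing in for actually playing games with a given weight vector, such a hill climber might look like:

```python
import random

def hill_climb(evaluate, dims=7, step=0.05, iters=1000):
    """Greedy hill climbing over a weight vector.

    `evaluate` is a placeholder: in the Tetris setting it would
    score a weight vector by, say, average lines cleared per game.
    """
    best = [random.uniform(-1, 1) for _ in range(dims)]
    best_score = evaluate(best)
    for _ in range(iters):
        # Perturb one randomly chosen dimension by a small step.
        cand = best[:]
        i = random.randrange(dims)
        cand[i] += random.choice([-step, step])
        score = evaluate(cand)
        if score > best_score:  # keep only strict improvements
            best, best_score = cand, score
    return best, best_score
```

(`evaluate`, the step size, and the iteration count are all assumptions here, not anything from the article; the point is just that the inner loop is a single candidate per step rather than a whole population.)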

~~~
loboman
Yes, a comparison with hill climbing or with simulated annealing would be nice.

------
apu
I hate to be "that guy", but I'm somewhat confused why everyone online seems
to be so obsessed with genetic algorithms. What they are is a kind of
optimization technique, and one of the worst ones at that. (Genetic algorithms
are what you resort to in the rare cases when no other optimization technique
will work.)

As 'lincolnq said, standard hill-climbing would probably work much better for
this kind of problem.

Still, cool results, I guess.

~~~
yid
In my experience (I worked on convergence analysis of some early GAs), genetic
algorithms owe their popularity to a combination of an extremely easy
implementation, a "cool" factor like the one that plagued
neural-networks-as-classifiers research in the early 90s, and a belief that
biological "analogies" would outperform decades of research in stochastic
optimization.

Still, they have pedagogical value because they are easy to implement, and for
many students, are their first real soup-to-nuts "AI" algorithm
implementation. I've seen them serve as the gateway for students to get into
CS research, but IMO aspects such as the last-resort principle, the No Free
Lunch theorem, and the principles of stochastic optimization are generally
under-stressed, which leads to some abhorrent research papers along the lines
of "Problem X using GAs".

------
mdda
Population size of 16 is _way_ too low: it's not surprising that he's getting
premature convergence. Also, his mutation is huge, so his overall process is
more like a directed random walk.

More reasonable parameters would be 100-1000 for the initial population, and
+/- 2% on a parameter for mutations.
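
For reference, a toy generational GA with parameters in that range might look like the following sketch (the fitness function, elitism fraction, and generation count are illustrative assumptions, not the article's setup):

```python
import random

POP_SIZE = 200   # within the suggested 100-1000 range
MUTATION = 0.02  # +/- 2% tweak on a single parameter
DIMS = 7

def evolve(fitness, generations=20):
    """Tiny generational GA: elitism, one-point crossover,
    and a small multiplicative mutation on one gene."""
    pop = [[random.uniform(-1, 1) for _ in range(DIMS)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: POP_SIZE // 5]  # keep the top 20%
        children = []
        while len(children) < POP_SIZE - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, DIMS)
            child = a[:cut] + b[cut:]  # one-point crossover
            i = random.randrange(DIMS)
            # Mutate one parameter by at most +/- 2%.
            child[i] *= 1 + random.uniform(-MUTATION, MUTATION)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

With mutations this small, crossover and selection do most of the exploring, which is the opposite of the large-mutation "directed random walk" behavior described above.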

------
uberstuber
[http://www.ryanheise.com/tetris/tetris_artificial_intelligen...](http://www.ryanheise.com/tetris/tetris_artificial_intelligence.html)
Here's one using Google's PageRank algorithm.

------
positr0n
That is very impressive for a 16-year-old.

