When I first saw this thread on teamliquid, I thought "Oh man, proggit and HN will love this" so I wrote it up. And sure enough, it was near the top of both, for a day. Next came the random news articles (like this one). And here several years later is an HN repost of a blog article that only exists because HN and proggit upvoted the original blog post. You guys created the article that's being reposted :)
From the last time around, the most common criticisms were:
1. Without any form of crossover, it's just hill-climbing/gradient-descent, it's not really a "genetic algorithm".
2. BFS would work better. (Maybe. IIRC people tried BFS, and there are a bunch of subtleties that make the BFS search space huge, e.g. putting workers on and off gas at certain times. Of course, since we have a goal state, something like A* might win the day.)
Real-time strategy games have recently stimulated a lot of great research because they exhibit several interesting subproblems, such as resource-allocation optimization, strategy selection, or plan optimization (like here). They also have the nice side effect of making it easy to evaluate the quality of the resulting plan in a real-world setting and see how it fares when certain assumptions are relaxed (non-static opponents, effect of imperfect strategy execution, etc.).
In this case they solved a multi-objective optimization problem using a genetic algorithm (a popular way to do so). The goal is to optimize some metric (e.g. time) with respect to some objective (e.g. "build 8 units of this type") and some constraints (resource requirements, unit tech trees). Of course you could perform some form of search in the solution space, but this space can get very big very fast, and for complex games it's fairly easy to get stuck in a local optimum because the net effect of some choices is hard to assess (some are bad in the short term but set you up well in the medium term, and two bad tactics can interact to form a good strategy). In contrast, a genetic algorithm can help find the global optimum.
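For the curious, the basic loop is small. Here's a minimal Python sketch under a made-up toy model (the action names and timings below are hypothetical, not real SC2 numbers): fitness is the plan's total time plus penalties for broken prerequisites and missed goals, and truncation selection plus one-point crossover and point mutation do the rest:

```python
import random

# Toy model with made-up numbers (not real StarCraft data): each action
# takes a fixed time; roaches need a roach warren, which needs a pool.
ACTIONS = {"drone": 17, "pool": 65, "roach_warren": 55, "roach": 27}
GOAL_ROACHES = 7

def fitness(plan):
    """Lower is better: total build time plus penalties for violated
    prerequisites and for falling short of the unit goal."""
    time, penalty, roaches = 0, 0, 0
    built = set()
    for action in plan:
        time += ACTIONS[action]
        if action == "roach":
            if "roach_warren" in built:
                roaches += 1
            else:
                penalty += 1000          # prerequisite violated
        else:
            if action == "roach_warren" and "pool" not in built:
                penalty += 1000          # prerequisite violated
            built.add(action)
    penalty += 1000 * max(0, GOAL_ROACHES - roaches)
    return time + penalty

def mutate(plan):
    """Point mutation: replace one random action."""
    plan = list(plan)
    plan[random.randrange(len(plan))] = random.choice(list(ACTIONS))
    return plan

def crossover(a, b):
    """One-point crossover of two parent plans."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, length=15, generations=200):
    pop = [[random.choice(list(ACTIONS)) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]    # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)
```

The real problem has a much richer state (larvae, minerals, gas, supply), but the shape of the algorithm is the same: a penalized time-to-goal fitness and a population churning through selection, crossover, and mutation.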
While it's not as big a deal as the article makes it sound (it's not a "winning build" per se -- more like a method to find the fastest way to achieve a known opening), if you combined plan-selection methods (e.g. ) with plan-optimization techniques like this one, you could come up with an AI that is able to produce both near-optimal tactics (how to achieve an objective) and strategies (which objectives to pursue). Pretty interesting stuff overall, and applicable to more than just real-time strategy games.
Well, just to add, the algorithm used in the article was "evolution with asexual reproduction". Genetic algorithms rely on sexual reproduction, to the point that the asexual variant has a completely different name (and origin): it's called "simulated annealing".
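For reference, that mutation-only accept/reject loop looks something like this (a generic minimization sketch; `f`, `x0`, `neighbor`, and the cooling schedule are placeholders your problem supplies):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, steps=10000, t0=10.0):
    """Mutation-only search: propose one 'mutated' candidate per step,
    always accept improvements, and accept regressions with a probability
    that shrinks as the temperature cools."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9   # linear cooling schedule
        y = neighbor(x)
        fy = f(y)
        # fy >= fx gives a non-positive exponent, so exp(...) is in (0, 1].
        if fy < fx or random.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest
```

Minimizing `(x - 3) ** 2` over the integers with a ±1 neighbor step finds 3 almost immediately; swap in a build-order encoding and its mutation operator and you get the crossover-free version of the article's search.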
So it took a few days and all the players at my level knew about it and could handle it easily.
It's hard to beat a general purpose strong AI in the long run!
Sure 'genetic' sounds cool, but when the search space is as limited as it is in a build order optimization like this, you should use the right tools for the task.
What stands out is that if you had asked anyone, even a professional player, to build this exact army composition (7 roaches), they would almost certainly not have pulled it off as fast as the plan this algorithm discovered. The algorithm managed it through a few tricks that normally aren't intuitive but work out so that your resources match up perfectly to make the build happen.
Also, I can confirm as a mediocre Starcraft player the build was quite powerful.
How would a breadth first search work better? The search space is huge.
If your edges are "building things" and your nodes are records of some type (current time, counts of stuff, stuff currently building), then you can calculate when you will be able to build what next, and generate the descendant states.
Given we are interested in four types of buildings and three types of units, that's at most 7 descendants per node. Getting to 7 roaches doesn't require more than 20-30 actions. That's 7^30, which is too much, but a lot of states are easy to prune (e.g. don't build two zergling dens), and on top of that you can add goal-directed heuristics, like A*.
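To sketch that idea (with a hypothetical three-action tech tree and made-up build times; the real game has far more state, like larvae and minerals): since every edge has a non-negative cost, uniform-cost search (Dijkstra, i.e. A* with a zero heuristic) pops the goal state at the optimal time, and the two pruning rules above keep the frontier small:

```python
import heapq
from itertools import count

# Hypothetical toy tech tree (made-up times, not real SC2 data).
TECH = {
    "pool":   {"time": 65, "needs": None},
    "warren": {"time": 55, "needs": "pool"},
    "roach":  {"time": 27, "needs": "warren"},
}

def fastest_plan(goal_roaches):
    """Uniform-cost search: nodes are (buildings built, roach count),
    edges are build actions weighted by build time, so the first goal
    node popped off the frontier is reached at the optimal time."""
    tie = count()   # tiebreaker so heapq never compares state tuples
    frontier = [(0, next(tie), (frozenset(), 0), [])]
    seen = set()
    while frontier:
        t, _, (built, roaches), plan = heapq.heappop(frontier)
        if roaches >= goal_roaches:
            return t, plan
        if (built, roaches) in seen:
            continue                 # already expanded via a faster path
        seen.add((built, roaches))
        for name, a in TECH.items():
            if a["needs"] is not None and a["needs"] not in built:
                continue             # prune: prerequisite not met
            if name != "roach" and name in built:
                continue             # prune: no duplicate buildings
            new_built = built | {name} if name != "roach" else built
            new_roaches = roaches + (name == "roach")
            heapq.heappush(frontier, (t + a["time"], next(tie),
                                      (new_built, new_roaches),
                                      plan + [name]))
    return None
```

In this toy model the optimizer correctly serializes pool, then warren, then seven roaches; the hard part in the real game is that the node record (workers, gas timing, larvae) explodes the branching factor.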
That being said, there are usually better local/heuristic search methods (even though they are also often quite a bit harder to understand, unless you just want to use them).
There is also a paper co-authored by the inventor of genetic algorithms, where they try to come up with a toy problem where GAs beat hill climbing. They start off with a toy problem that is designed to be an example where GAs shine, but they show that hill climbing actually works (far) better. Then they modify the toy problem in several steps, yet they only barely succeed in coming up with a toy problem where GAs beat hill climbing. Keep in mind that this is an extremely contrived toy problem. Suffice it to say that if coming up with a toy problem is that hard, it's unlikely that you will see a real-world problem where GAs do better.
Every field has its dark areas, like chemistry has had alchemists and their quest to create gold. GAs are one of the dark areas of computer science, where a lot of papers have been published but it ultimately turned out to be all bullshit.
Anyway, my experience with automatic programming is completely opposite to yours, and in line with comicjk. I've always had better results with GAs than with hill climbing* or simulated annealing. I have the same experience with resource scheduling. Of course, your experience will vary a lot depending on the data you want to fit/schedule.
Also, nothing is stopping you from including hill climbing as a kind of mutation...
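Something like this toy sketch (the `fitness`, `neighbors`, and step-budget parameters are placeholders): mutate once at random, then greedily descend for a few steps before handing the result back to the GA. This hybrid is sometimes called a memetic algorithm:

```python
import random

def hill_climb_mutation(genome, fitness, neighbors, max_steps=10):
    """GA mutation operator that embeds a short greedy hill climb:
    apply one random mutation, then locally improve the result."""
    x = random.choice(neighbors(genome))    # the random mutation itself
    for _ in range(max_steps):
        better = [n for n in neighbors(x) if fitness(n) < fitness(x)]
        if not better:
            break                           # local optimum reached
        x = min(better, key=fitness)        # steepest-descent step
    return x
```

On a 4-bit genome with `fitness=sum` (minimize the ones) and single-bit-flip neighbors, this returns the all-zero genome no matter which bit the initial mutation happens to flip.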
* Last time I used it, I didn't even have a well-behaved gradient to climb; it was zero at a huge number of places, and I knew that beforehand. Automatic programming applies to all kinds of crazy domains.
I was trying to fit a potential field that I knew was a map from 12 dimensions into 1. I could approximate it pointwise, but only with a very slow procedure, and I needed both a faster one and a derivative.
And as the other comment pointed out, genetic programming is far too weak to achieve anything this complex. You need to simplify the problem tremendously by hand and then have the computer optimize the last few bits. Though seeding it with existing heuristics as a starting point might work.
By the way, in my automatic programming experiments hill climbing hardly did better than plain brute-force search. It fails for all but the smallest examples. So that's another one of those things that doesn't really work in practice.
The biggest problem is that the search space isn't smooth and even crossover is very mutative on discrete, arbitrary graphs.
Thank you for saying this.
Realizing this is sort of similar to when I first learned how standard error-backprop neural networks really work, and realized that they're "just" a stochastic optimization algorithm, not actually "smart like a brain" that can learn everything (I was young :) ).
About, a "practical case" where GAs work better, I'm going to go with (generative/digital) art projects. The fact that a GA is sort of based on a simulation of a biological process lends extra flavour to an art piece that is somehow crafted using a GA, more so than if it had used hill-climbing or brute force. It'll help ask questions like "who is the artist here?" and "what is creativity" and "can a computer be alive?" etc etc.
And just for fun, a more optimized version using a 13 pool in place of the 11 overpool to get the seventh roach 4 seconds earlier:
If you're looking for a current GA build order optimizer, SCFusion is probably the best one: https://github.com/Carbon-12/SCFusion
I made a Tic-Tac-Toe GA in undergrad as my senior project. It was horribly naive: I didn't reduce the problem space at all, even though I could easily have implemented rotation and flip mapping to shrink the search space significantly (to just over 7% of the naive search space). Still, the algorithm worked pretty well and served as a great learning experience.
The genetic part is ridiculously simple, IMO. The hard part is figuring out how to define the rules of the game, break them out, and figure out what maps to the genome and how it maps. Evolution from that point is simply modifying the genome in different ways.