So let me get this straight: there are Google software engineers who passed the whole 3-month interview process, which some consider the toughest in the world, and they have never heard of Conway's Game of Life.
This makes me feel a little better about myself now.
Also, their interview process is usually not 3 months. Most engineers have a 45-minute phone interview or two, followed by a day of onsite interviews lasting 4-5 hours.
Many of those who cleared it might be the kind of people who read interview questions on programming forums every day and have mastered the art of gaming interviews.
I've heard of the Game of Life, but I couldn't tell you the rules or describe it exactly. I never bothered to look into it. Judge away!
RWW: So it took two months to do this?
PD: It’s not a trivial problem. It’s tricky.
I’m quite proud of the way I did it.
I can’t talk about the way we actually implemented it.
-oo- =\ -oo-
-oo- =/ -oo-
----- ----- -----
--o-- ----- --o--
--o-- =\ -ooo- =\ --o--
--o-- =/ ----- =/ --o--
----- ----- -----
------ ------ ------
-oo--- -oo--- -oo---
-oo--- =\ -o---- =\ -oo---
---oo- =/ ----o- =/ ---oo-
---oo- ---oo- ---oo-
------ ------ ------
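For anyone who wants to poke at these, here's a minimal, unoptimized step function in Python (the names are mine, not from any of the linked implementations) that reproduces the blinker sequence above:

    from collections import Counter

    def step(cells):
        # `cells` is a set of (x, y) live coordinates.
        # Count the live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Live next generation: exactly 3 neighbors, or 2 if already live.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    blinker = {(2, 1), (2, 2), (2, 3)}                 # the vertical triple above
    assert step(blinker) == {(1, 2), (2, 2), (3, 2)}   # horizontal phase
    assert step(step(blinker)) == blinker              # back after two steps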
1. The description line for the GitHub package reads "Gosper's HashLife in CoffeeScript"
2. The readme says: "Cafe au Life implements Bill Gosper's HashLife algorithm."
3. The docs say "Cafe au Life is based on Bill Gosper's brilliant HashLife algorithm" (linking to http://en.wikipedia.org/wiki/Hashlife).
Apologies for not being able to use a browser and falsely accusing you!
The benchmark can be really slow in some steps.
Edit: found it http://www.ibiblio.org/lifepatterns/lifeapplet.html
I tend to think of cellular automata optimization as being related to data compression. This, too, is a simple concept with no simple solution, and which solutions are best depends on the type of data being processed. In Conway's Life, patterns tend to be blobby.
For blobby universes, one should probably consider dividing the universe up into blocks approximately the size of the blobs. For Life, 4x4 to 8x8 seems reasonable. I chose the upper bound, 8x8, for reasons of convenience: there happen to be 8 bits in a byte. I strongly considered 4x4, but it didn't work out as nicely....
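To make that concrete, here's one hypothetical Python representation (mine, not the author's actual code): an 8x8 block fits in eight bytes, one per row:

    class Block:
        """One 8x8 tile of the universe; bit x of rows[y] holds cell (x, y)."""
        def __init__(self):
            self.rows = bytearray(8)       # current generation, one byte per row
            self.prev_rows = bytearray(8)  # previous generation (see the two-loop note below)
            self.neighbors = {}            # (dx, dy) -> Block, wired up in the hash-table sketch below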
You should put the blocks in some kind of list, so that you waste zero time in the empty parts of the universe.
Already, note a complication: new elements must be added to the list if the pattern grows over a block's boundaries, but we have to know whether the block's neighbor already exists. You could do a simple linear search of the list, or a binary search, or keep some kind of map. I chose to make a hash table. It is used solely for finding the neighbors of a new block; each existing block keeps pointers to its neighbors, since they are referenced often.
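A sketch of that scheme, assuming the Block class above and using Python's dict as the hash table; the map is consulted only when a block is created, after which the cached neighbor pointers do the work:

    blocks = {}   # (bx, by) -> Block; the hash table for finding a new block's neighbors

    def get_or_create(bx, by):
        b = blocks.get((bx, by))
        if b is None:
            b = Block()
            blocks[(bx, by)] = b
            # Wire up neighbor pointers in both directions for any existing neighbors.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) == (0, 0):
                        continue
                    nb = blocks.get((bx + dx, by + dy))
                    if nb is not None:
                        b.neighbors[(dx, dy)] = nb
                        nb.neighbors[(-dx, -dy)] = b
        return b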
There must also be an efficient algorithm within the blocks. I chose to primarily blaze straight through each block. There are no inner loops until all cells in a block are processed. Also, fast lookup tables are employed: I look up 4x4 blocks to determine the inner 2x2.
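One plausible way to build such a table (a sketch, not the author's implementation): for each of the 65,536 possible 4x4 neighborhoods, precompute the inner 2x2 after one generation:

    def life(alive, n):
        # Conway's rule: birth on 3 neighbors, survival on 2 or 3.
        return n == 3 or (alive and n == 2)

    def inner_2x2(bits16):
        # bits16 encodes a 4x4 patch; bit (y * 4 + x) holds cell (x, y).
        cell = lambda x, y: (bits16 >> (y * 4 + x)) & 1
        out = 0
        for i, (x, y) in enumerate([(1, 1), (2, 1), (1, 2), (2, 2)]):
            n = sum(cell(x + dx, y + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            out |= life(cell(x, y), n) << i
        return out

    TABLE = [inner_2x2(v) for v in range(1 << 16)]   # 64K entries, built once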
Note: CA programs typically consist of two main loops (plus a display loop), because CA rules operate on the cells in parallel, while the microprocessor is conceptually serial. This means there must effectively be two copies of the universe, so that no important information is destroyed in the process of creating the next generation. Often these two copies are not symmetrical. It was a great struggle for me: almost every time I took something out of one loop to make it faster, I had to add something else to the other loop! Almost every time, that is; the exceptions to that rule led to the best optimizations. In particular, there are good tradeoffs to be considered in bit manipulations: shifting, masking, recombining to form an address in the lookup table....
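The two-copies idea in its simplest form (a toy dense-grid sketch; a block-based version would keep two bit-planes per block instead):

    W, H = 64, 64                       # toy dimensions, wrapping at the edges
    cur = [[0] * W for _ in range(H)]   # generation being read
    nxt = [[0] * W for _ in range(H)]   # generation being written

    def generation():
        global cur, nxt
        for y in range(H):
            for x in range(W):
                n = sum(cur[(y + dy) % H][(x + dx) % W]
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
                nxt[y][x] = 1 if n == 3 or (n == 2 and cur[y][x]) else 0
        cur, nxt = nxt, cur             # swap buffers; nothing is copied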
Sometimes the contents of a block stabilize, requiring no further processing. You can take the block out of the list, putting it in a "hibernation" state, to be re-activated only if a neighboring block has activity spilling into it. These blocks take zero processing time, just like a blank region of the universe.
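A minimal sketch of hibernation, assuming each block keeps its previous generation around (as Block.prev_rows does above):

    def end_of_generation(active, b):
        # `active` is the set of blocks processed each generation.
        if b.rows == b.prev_rows:       # contents stabilized (a still life)
            active.discard(b)           # hibernate: zero cost from now on

    def wake(active, b):
        # Called when a neighboring block spills activity into b.
        active.add(b)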
Period-2 oscillators might also not be very difficult to detect and remove from the processing load. This might be worthwhile in Life, because the blinker is the most common kind of random debris; higher-period oscillators are much rarer. It is also possible that gliders could be detected and simulated. You will get diminishing returns from this kind of optimization, unless you take it to an extreme (cf. HashLife).
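Extending the same comparison one generation further would catch period-2 oscillators (a sketch; `rows_two_ago` is a field you would have to maintain):

    def is_period_two(b, rows_two_ago):
        # Equal to the state two generations back, but not one generation
        # back: a period-2 oscillator such as a blinker. It can be skipped,
        # with its two phases replayed for display.
        return b.rows == rows_two_ago and b.rows != b.prev_rows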
Also, a block of cells that's completely empty might not be worth deallocating and removing from the hash table for a while. That takes some processing time, which could be significant in the case of an oscillator moving in and out of its space repeatedly. Only when memory gets low should the oldest blocks from the "morgue" be recycled.
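A sketch of such a "morgue", recycling oldest-first and only under memory pressure:

    from collections import OrderedDict

    morgue = OrderedDict()              # empty blocks, in order of retirement

    def retire(pos, block):
        morgue[pos] = block             # keep it allocated; reviving is then free

    def reclaim(max_dead):
        while len(morgue) > max_dead:   # only when memory gets low
            pos, _ = morgue.popitem(last=False)   # oldest first
            del blocks[pos]             # finally drop it from the hash table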
Once the program is fast enough, note that it isn't worth displaying generations any faster than the eye can see, or at least not much faster than the monitor's refresh rate. Especially in windowed environments, display time can be a real bottleneck.
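A sketch of throttling the display (`draw` here stands in for whatever your display layer actually provides):

    import time

    FRAME = 1 / 60                      # no point outrunning the monitor

    last = 0.0
    def maybe_display(universe, draw):
        global last
        now = time.monotonic()
        if now - last >= FRAME:         # skip the draw call between frames
            draw(universe)
            last = now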
Edit: it seems to always appear like that, so it's probably not organic, and I'm not up for searching minified sources to verify.
You can confirm this by pausing the simulation and clicking on all the cells in the region to switch them to the 'active' state: http://imageshack.us/a/img138/2751/71405409.png
Like "zerg rush", "kerning" and "keming", searching for bacon numbers, etc...
I know there are a ton more phrases that work as calculator aliases, but I can't think of any off the top of my head.
And from the 2010 archives: "Conway's Game of Life is a Monad."
Here is the original:
3% CPU max
E.g. perhaps people see what's at the top of HN (a legitimate article) and search on the topic, increasing the probability of more posts on that topic. Also, psychologically, if A is popular, then when you see something that looks like A, you might upvote it just by assuming it's popular too.
The seed of the system is a random cluster of N stories (cells) that represent the "front page". Each cell has votes and topics (Erlang, Apple, X in CSS, etc.).

1. Any cell with fewer than N votes and fewer than two neighbors dies (due to lack of interest and/or promotion).
2. Any cell with two or three live neighbors lives on to the next generation (riding along the front page wave).
3. Any cell with more than three live neighbors that share the same topic dies (due to overexposure of the topic).
4. Any dead cell with exactly three live neighbors becomes a live cell with a topic shared by one of the neighbors (the spread of interest in a topic).

After any generation, votes and stories can be added. The low vote threshold (N) is determined as some percentage of all votes in the system. A toy sketch of these rules in code follows below.
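Just for fun, a fairly literal reading of those rules in Python; the spec leaves some rule interactions ambiguous, so the tie-breaking (and all the names) here are mine:

    import random

    class Cell:
        def __init__(self, topic, votes=0):
            self.topic, self.votes = topic, votes

    def generation(grid, n_threshold):
        # grid: dict mapping (x, y) -> Cell for live stories
        def neighbors(x, y):
            return [grid[(x + dx, y + dy)]
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]
        new = {}
        for (x, y), c in grid.items():
            nbrs = neighbors(x, y)
            if c.votes < n_threshold and len(nbrs) < 2:
                continue                                  # rule 1: dies
            if sum(n.topic == c.topic for n in nbrs) > 3:
                continue                                  # rule 3: overexposure
            new[(x, y)] = c                               # rule 2: lives on
        # rule 4: births on empty cells with exactly three live neighbors
        empty = {(x + dx, y + dy) for (x, y) in grid
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - grid.keys()
        for (x, y) in empty:
            nbrs = neighbors(x, y)
            if len(nbrs) == 3:
                new[(x, y)] = Cell(random.choice(nbrs).topic)
        return new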
Also, http://www.amazon.com/Bursts-Hidden-Pattern-Behind-Everythin... (though it's a mediocre book, IMO).
Edit: I've realized that certain blocks are darker blue, in the shape of the word Google.
It really seems Google is trying to be more and more like Wolfram|Alpha!
(and my comment was meant tongue in cheek)
How would you explain it?
I'll give it another shot with that link, perhaps tomorrow :)