Very sparse on details. Are they partitioning the data? If so, how? Is it MongoDB sharding, partitioning at the application level, or something else? What's their average record size? Do they see a long-tail access pattern, such that MongoDB really does keep the most commonly used records in memory all the time? How many machines are they serving from? What are their specs (particularly RAM)?
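For what it's worth, here's a rough sketch of the two partitioning options I mean, in Python with pymongo. This is purely hypothetical: the hosts, the "wordnik" database, and the "words" collection are made-up names, not anything from the post or slides.

    # Hypothetical sketch of the two approaches, not Wordnik's actual setup.
    import zlib
    from pymongo import MongoClient

    # Option 1: MongoDB-level sharding. A mongos router spreads the
    # collection across shards based on a declared shard key.
    client = MongoClient("mongodb://mongos-host:27017")
    client.admin.command("enableSharding", "wordnik")
    client.admin.command("shardCollection", "wordnik.words", key={"word": 1})
    client.wordnik.words.find_one({"word": "serendipity"})  # mongos routes this

    # Option 2: application-level partitioning. The app keeps a client per
    # mongod instance and hashes the key itself to decide where a record lives.
    shards = [MongoClient(h) for h in ("mongodb://db0:27017", "mongodb://db1:27017")]

    def shard_for(word):
        # Deterministic hash so the same word always lands on the same node.
        return shards[zlib.crc32(word.encode("utf-8")) % len(shards)]

    shard_for("serendipity").wordnik.words.find_one({"word": "serendipity"})

The practical difference is mostly where the routing and rebalancing logic lives: in mongos, or in your own code. Which is why I'd like to know which one they picked.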
The 47.7 queries per second figure in those slides surprised me a bit. I'd like to know more details there: is that for a single node, or for the entire cluster?
Some of these questions are answered in a separate set of slides, which the author links to in the comments of the post. The hardware looks like a single server: two 4-core CPUs, 32 GB of RAM, and an FC SAN.
Well, I'd never heard of Wordnik, but this is very cool and I'll be using it. It's like a respectable Dictionary.com plus Urban Dictionary. I don't see the category-theory definition of a monad on it, though, which is somewhat odd.