1) Make sure to permanently increase the hard and soft limits for open files and user processes for the user that runs MongoDB. If you don't, MongoDB will segfault under load, and when that happens the automatic recovery process is painfully slow. Getting this right is trickier than it should be, depending on your level of sysadmin knowledge, and 10gen doesn't emphasize or explain the issue very well in their docs: "Set file descriptor limit and user process limit to 4k+ (see etc/limits and ulimit)" That probably makes sense to about 0.1% of the people setting up MongoDB: http://www.mongodb.org/display/DOCS/Production+Notes#Product...
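For anyone in the other 99.9%, here's roughly what that one-liner is gesturing at. This is a sketch, assuming mongod runs as a user named "mongodb" (adjust the name, and pick limit values appropriate for your load; anything well above the 4k minimum the docs mention):

```shell
# /etc/security/limits.conf -- persistent hard/soft limits for the mongodb user
# (ulimit alone only affects the current shell session, which is why people
# get bitten: the limits silently reset after a reboot or new login)
mongodb  soft  nofile  64000
mongodb  hard  nofile  64000
mongodb  soft  nproc   32000
mongodb  hard  nproc   32000
```

Then verify from a shell running as that user:

```shell
ulimit -n   # max open file descriptors
ulimit -u   # max user processes
```

Note that limits.conf is applied via PAM, so it won't affect daemons started from an init script that bypasses PAM; in that case you may need to call ulimit in the init script itself before launching mongod.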
2) Make sure to disable NUMA. This 10gen documentation note is a great example of clear writing: "Linux, NUMA and MongoDB tend not to work well together ... Problems will manifest in strange ways, such as massive slow downs for periods of time or high system cpu time." Massive slowdowns and mysteriously pegged CPU usage on a production database are definitely 'strange'. I would probably choose stronger and more precise language, but 10gen clearly knows what they're doing: http://www.mongodb.org/display/DOCS/NUMA
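To save you the click, the workaround that page describes boils down to two things: start mongod with memory interleaved across NUMA nodes, and turn off zone reclaim. A sketch (the config file path is an assumption; adjust to your install):

```shell
# Start mongod with memory allocation interleaved across all NUMA nodes,
# instead of Linux's default node-local allocation policy
numactl --interleave=all mongod -f /etc/mongod.conf

# Disable zone reclaim, so the kernel doesn't aggressively evict
# cached pages to keep allocations node-local (run as root;
# make it persistent via /etc/sysctl.conf or equivalent)
echo 0 > /proc/sys/vm/zone_reclaim_mode
```

If mongod is started from an init script, the numactl prefix has to go into that script, or you're back to mysterious slowdowns on the next restart.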
tl;dr If you have problems with MongoDB, you aren't using it right. Read the documentation more carefully, and then when that doesn't work, hire an expert.
I'm getting the idea that it is rather challenging to use MongoDB right. While there's certainly a place for power tools that only highly trained experts can operate without risking disaster... that kind of goes against the idea that MongoDB has anything to do with 'simplicity', doesn't it?