Hacker News
[benchmarks] MongoDB kicks MySQL's ass no matter the circumstances
2 points by zeeone on June 25, 2011 | 1 comment
A recent post on HN claimed that MongoDB doesn't scale well because performance drops once the indexes grow bigger than the available memory.

Scaling with MongoDB (or how Urban Airship abandoned it for PostgreSQL) - http://news.ycombinator.com/item?id=2684423

I decided to put this claim to the test, so I set up an Arch Linux virtual machine with 128 MB of RAM and wrote a quick and dirty benchmark. The test simulates a simple blog with N user accounts and M posts per account. It creates all accounts first, then creates the same number of posts for each account. The posts are not created in consecutive order, and they are indexed by their "account_id" field. Once all accounts and posts are created, the script runs a query to find all posts per account, for every account.

So this script does two things: 1. Writes a lot of records. 2. Reads all records using an index.

You can download it from Github and use it to test in your own environment: https://github.com/naturalist/MongoDB-MySQL-compare
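The shape of the workload can be sketched roughly as follows. This is not the script from the repo, just an illustration of the two phases; Python's built-in sqlite3 stands in for the database under test, and all names here (table and index names, N_ACCOUNTS, POSTS_EACH) are made up for the example:

```python
import random
import sqlite3

N_ACCOUNTS = 50   # the real runs used 5000
POSTS_EACH = 5    # the real runs used 50

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, account_id INTEGER, body TEXT)")
db.execute("CREATE INDEX idx_posts_account ON posts (account_id)")

# Phase 1: create all accounts first.
for i in range(N_ACCOUNTS):
    db.execute("INSERT INTO accounts (name) VALUES (?)", (f"user{i}",))

# Phase 2: create posts in non-consecutive account order,
# one post per account per round.
account_ids = [row[0] for row in db.execute("SELECT id FROM accounts")]
for _ in range(POSTS_EACH):
    random.shuffle(account_ids)
    for aid in account_ids:
        db.execute("INSERT INTO posts (account_id, body) VALUES (?, ?)",
                   (aid, "lorem"))

# Phase 3: read all posts per account through the account_id index.
total = 0
for aid in account_ids:
    rows = db.execute("SELECT * FROM posts WHERE account_id = ?",
                      (aid,)).fetchall()
    total += len(rows)

print(total)  # N_ACCOUNTS * POSTS_EACH
```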

I used MongoDB 1.8.2 and MySQL 5.5.13. My laptop is a Toshiba Satellite with an Intel Core i5 @ 2.53GHz, and the virtual machine had 128 MB of RAM, running Arch Linux.

Here are the results of my tests:

  MongoDB 
  5000 accounts
  50 posts each
  using MongoDB's native _id

  Create: 317 secs | 185.94 CPU
  Read: 595 secs | 480.64 CPU

  ---

  MySQL
  5000 accounts
  50 posts each

  Create: 322 secs | 99.77 CPU
  Read: 740 secs | 487.50 CPU

  ---

  MongoDB
  5000 accounts
  50 posts each
  using an auto incrementing int for _id

  Create: 252 secs | 112.31 CPU
  Read: 520 secs | 218.29 CPU

So it looks like MongoDB is significantly faster not only when writing but also when reading. The test above completely exhausted the VM's memory and kept using the swap partition.
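A note on the "auto incrementing int for _id" run: MongoDB's native _id is a 12-byte ObjectId, and the server has no built-in auto-increment, so an integer _id has to be generated client-side. A common pattern (an assumption here, not necessarily what the benchmark script does) is a counters document bumped atomically; in today's pymongo that would be `db.counters.find_one_and_update({"_id": "posts"}, {"$inc": {"seq": 1}}, upsert=True, return_document=ReturnDocument.AFTER)`. The sketch below shows the same atomic-increment logic with an in-memory stand-in so it runs without a server:

```python
import threading

class Counters:
    """In-memory stand-in for a MongoDB 'counters' collection."""

    def __init__(self):
        self._lock = threading.Lock()
        self._seq = {}

    def next_id(self, name):
        # The lock plays the role of MongoDB's atomic $inc:
        # read-increment-return happens as one indivisible step.
        with self._lock:
            self._seq[name] = self._seq.get(name, 0) + 1
            return self._seq[name]

counters = Counters()
ids = [counters.next_id("posts") for _ in range(3)]
print(ids)  # [1, 2, 3]
```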

Next, I decided to really push the limits of both databases and ran the test with 10,000 accounts and 300 posts each. That's 3 million records. MongoDB created the records in 3643 secs using 1831.45 CPU, and read them in ~8000 seconds using ~3000 CPU.

MySQL could not complete the above test. The MySQL server froze and I had to manually kill the process and restart the server.

Disclaimer: I do not work for 10gen, I don't even know them. I'm just a guy who's considering MongoDB for his next project.




From the perspective of that topic, IMHO it would be better to 1) use PostgreSQL, and 2) use tests that are more complex than random gets/puts by key, e.g. a few consecutive updates that modify intersecting sets of indexes.
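The kind of test the comment suggests would look something like this. It's a sketch only, again using sqlite3 as a stand-in engine with made-up table and index names; the point is the workload shape: each update touches several indexed columns at once, so the engine must maintain multiple intersecting indexes on every write rather than just appending by primary key:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    account_id INTEGER,
    status TEXT,
    score INTEGER)""")
db.execute("CREATE INDEX idx_account ON posts (account_id)")
db.execute("CREATE INDEX idx_status  ON posts (status)")
db.execute("CREATE INDEX idx_score   ON posts (score)")

# Seed 100 rows spread across 10 accounts.
db.executemany(
    "INSERT INTO posts (account_id, status, score) VALUES (?, ?, ?)",
    [(i % 10, "draft", 0) for i in range(100)])

# One consecutive update that modifies two indexed columns, forcing
# maintenance of idx_status and idx_score on every row it touches.
with db:
    db.execute("UPDATE posts SET status = 'published', score = score + 1 "
               "WHERE account_id = ?", (3,))

published = db.execute(
    "SELECT COUNT(*) FROM posts WHERE status = 'published'").fetchone()[0]
print(published)  # 10
```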





