It certainly seems like the MongoDB client library you are using has lower latency; perhaps it is a C extension? Redis-rb, by contrast, is written in pure Ruby.
Another big problem with this benchmark is that it should increment a random key on every iteration, drawn from a dataset of a few million keys, because that is what you realistically need. You'll see that Redis with 50 clients will increment these counters 100k times per second or more, with no performance degradation as the number of keys grows.
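A sketch of what that loop might look like. This is a self-contained stand-in: an in-memory Hash plays the role of the Redis connection, and the key names and iteration counts are made up for illustration; with redis-rb you would create `redis = Redis.new` and call `redis.incr(key)` instead.

```ruby
# Increment a random key per iteration out of a large keyspace, instead of
# hammering a single key. A Hash stands in for the Redis connection here;
# in the real test each increment would be `redis.incr(key)`.
KEYSPACE   = 1_000_000   # "a few million keys" in a realistic dataset
ITERATIONS = 100_000

store = Hash.new(0)      # stand-in for Redis

start = Time.now
ITERATIONS.times do
  key = "counter:#{rand(KEYSPACE)}"  # pick a random key each time
  store[key] += 1                    # redis.incr(key) in the real benchmark
end
elapsed = Time.now - start

puts "#{(ITERATIONS / elapsed).round} increments/sec across #{store.size} distinct keys"
```

With a spread of keys like this, you also exercise the server's hash table growth rather than a single hot entry.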
If I were writing a high-concurrency multi-threaded system to handle millions of keys at a time, Ruby would not be my first choice by far.
No, it doesn't. You ran 100k increments on one key, divided by the elapsed time ONCE, and printed the result.
Plus this was on your MacBook Pro, which was probably running a mail client, a Twitter client, a web browser, and hundreds of other things that could have affected the timings.
Graphing the response time of each individual increment would be a good way to start looking at what the data really shows, instead of relying on a single opaque, fairly useless number.
It could even be a dummy server just replying +OK, and the result would not change.
You should try this against MySQL too, and you'll see similar numbers, certainly not an order of magnitude different, even though SQL involves much more query processing and so forth.
(Mongo + Mongo Ruby Gem) Vs (Redis + Redis Ruby Gem), The Increment Battle with No Useful Statistical Analysis Included
When it comes to things like benchmarking, I find it best to avoid absolute conclusions like "X is faster than Y" and instead merely present the data, all the data, and nothing but the data.
In this case, I'd have wanted to see the response times per request, and perhaps the memory trend for the client. At the very least, I'd like to know how Mongo and Redis were set up, and whether either had begun to swap.
Running the redis-benchmark from http://redis.googlecode.com/files/redis-2.0.4.tar.gz with 50 parallel clients: 37,800 requests per second.
I would like to see the numbers for 50 clients to mongo doing the same.
INCR: 126743.98 requests per second
Just curious: were you using Redis 2.0 or possibly 2.2? Also, which version of the Redis client were you using? There have been some performance improvements recently.
Edit: thanks for sharing your experience. Even if your test wasn't done with the help of a Nobel-winning computing team, it can still be useful data.
Basically every kind of networked server with a simple request-response protocol, run in a busy loop over loopback, will show performance between 10k and 20k requests per second, limited mostly by the speed of the client implementation.
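That claim is easy to check with a toy loopback server that just replies +OK, as suggested above. This is a self-contained sketch (port, request text, and iteration count are all arbitrary); actual throughput numbers depend entirely on the machine:

```ruby
require 'socket'

# Dummy request/response server over loopback: read a line, reply "+OK".
server = TCPServer.new('127.0.0.1', 0)   # port 0 = let the OS pick a free port
port = server.addr[1]

server_thread = Thread.new do
  client = server.accept
  client.write("+OK\r\n") while client.gets   # ignore request content entirely
end

sock = TCPSocket.new('127.0.0.1', port)
requests = 5_000
last_reply = nil

start = Time.now
requests.times do
  sock.write("INCR foo\r\n")   # fake request; the server never parses it
  last_reply = sock.gets
end
elapsed = Time.now - start

sock.close
puts "#{(requests / elapsed).round} request/response round-trips per second"
```

Since the server does no work at all, whatever rate this prints is roughly the ceiling imposed by the client loop and the loopback round-trip, not by the database.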
Generally the popularity of a comment is a good enough indicator of how well it was received. If I got 100 upvotes on that comment, I might get the impression that people liked it. Considering I have < 0 on the comment, I can take the hint. I don't need the propriety patrol reprimanding me about a non-stipulated rule.