
MemcacheDB, Tokyo Tyrant, Redis performance test - _pius
http://timyang.net/data/mcdb-tt-redis/
======
codahale
These benchmarks don't seem to take HotSpot compilation into account, which
means some of those numbers are for interpreted Java bytecode and some are for
JIT-compiled native code, and you can't tell which is which.

The behavior of the DBs as far as resource consumption is interesting, but the
numbers are meaningless.
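
The warm-up problem above can be sketched as follows: discard a batch of untimed iterations first, so HotSpot has a chance to JIT-compile the hot path before anything is measured. This is a minimal illustrative harness, not the benchmark from the article; the workload and names are made up.

```java
// Minimal sketch of a warm-up-aware micro-benchmark harness.
// `work` is a placeholder workload; real benchmarks should use a
// proper harness, but the principle is the same: never time the
// first iterations, because they may run as interpreted bytecode.
public class WarmupBench {
    // placeholder operation under test
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i * 31L;
        return acc;
    }

    // run `warmup` untimed iterations (letting the JIT kick in),
    // then time `runs` iterations and return each duration in ms
    static double[] measure(int warmup, int runs, int n) {
        for (int i = 0; i < warmup; i++) work(n);
        double[] times = new double[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            work(n);
            times[i] = (System.nanoTime() - t0) / 1e6;
        }
        return times;
    }

    public static void main(String[] args) {
        double[] t = measure(10_000, 20, 100_000);
        System.out.println("timed runs: " + t.length);
    }
}
```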

Further reading:

<http://www.ibm.com/developerworks/java/library/j-jtp12214/>

<http://www.ibm.com/developerworks/java/library/j-jtp02225.html>

~~~
Periodic
I had a physics professor who used to drill into us during our labs that "a
number without an error estimate is meaningless." I didn't really get it at
the time, and thought he was just being curmudgeonly.

Now I understand how important it is. What's the error in these tests? The
standard deviation? What was done to limit the error?

It's great to see people putting numbers behind their claims, but let's get
some real science back into computer science and do some serious data
analysis.
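
The kind of basic analysis being asked for is not much code. A hedged sketch of computing a mean and sample standard deviation over repeated benchmark runs (the numbers here are made up for illustration):

```java
// Mean and sample standard deviation over repeated benchmark runs.
// Reporting "mean ± stddev" is the minimum needed to judge whether
// two throughput numbers actually differ.
public class Stats {
    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    // sample standard deviation (n - 1 denominator)
    static double stddev(double[] xs) {
        double m = mean(xs), s = 0;
        for (double x : xs) s += (x - m) * (x - m);
        return Math.sqrt(s / (xs.length - 1));
    }

    public static void main(String[] args) {
        // ops/sec observed across five runs (invented numbers)
        double[] runs = {9120, 8870, 9300, 9050, 8990};
        System.out.printf("%.0f +/- %.0f ops/sec%n",
                          mean(runs), stddev(runs));
    }
}
```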

------
Maro
You can get 100,000 ops/sec out of BDB even in transactional mode (MemcacheDB
uses BDB); look at the various BDB flags. Also, when storing larger values,
you should increase the BDB pageSize parameter (default: 4096 bytes).
Otherwise BDB will allocate overflow pages (by default, whenever key+value
exceeds 1007 bytes) and you will see severe performance degradation.
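
As a rough sketch of the pageSize tuning, assuming the native Berkeley DB Java binding (com.sleepycat.db): the page size must be set via DatabaseConfig before the database file is created, since it is fixed at creation time. The filename and size here are illustrative only.

```java
// Hedged sketch: raising the BDB page size at database creation so
// larger key+value pairs stay on-page instead of spilling to
// overflow pages. Assumes the native com.sleepycat.db binding.
import com.sleepycat.db.Database;
import com.sleepycat.db.DatabaseConfig;
import com.sleepycat.db.DatabaseType;

public class LargeValueConfig {
    public static void main(String[] args) throws Exception {
        DatabaseConfig cfg = new DatabaseConfig();
        cfg.setType(DatabaseType.BTREE);
        cfg.setAllowCreate(true);
        // default is 4096 bytes; only takes effect on a newly
        // created database
        cfg.setPageSize(16 * 1024);
        Database db = new Database("test.db", null, cfg);
        db.close();
    }
}
```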

Also, the Keyspace KV store can do ~100,000 ops/sec as described in our
whitepaper at <http://scalien.com/whitepapers>

~~~
henryl
Perf is only that high for BDB because it is in-process. Tokyo Tyrant has to
communicate over the network layer.

Also, how often are any of these databases actually _syncing_? For TT, not at
all until you terminate the process or call it manually.

~~~
Maro
The 100,000 number is for a performance benchmark over a LAN with grouped
commits.

