Note that Redis 0.100 adds non-blocking replication, so you can connect a slave at runtime while the master continues to happily serve queries. There is also a SLAVEOF command now that makes it possible to control replication at runtime: to make a running server a replica of some other server, to stop replication and turn a slave into a master, and so on.
Redis replication is trivial to set up.
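As a minimal sketch of both ways to do it (host and port here are made up for the example):

```
# In the slave's redis.conf, point it at the master:
slaveof 192.168.1.10 6379

# Or at runtime, from redis-cli connected to the slave:
#   SLAVEOF 192.168.1.10 6379
# And to stop replication, promoting the slave back to a master:
#   SLAVEOF NO ONE
```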
You may try to load a dataset larger than your memory in Redis and see what happens
This is what threw me off when I evaluated Redis.
When choosing a database, I really don't want to "try and see what happens". I want defined and documented behaviour, please.
Anyway, this is an edge case: even if it works, it's not a good idea to have datasets bigger than the available RAM.
Before 1.0 stable we are even introducing a 'maxmemory' config option. If the dataset starts to get bigger than 'maxmemory', Redis will try to free RAM by removing keys with a timeout set (starting from the older ones), cached objects in free lists, and so on. If it is still out of memory it will start to reply only to read-only operations, and will return an error like "-ERR used memory bigger than maxmemory parameter" if you try to write more data.
Btw the way to go is maxmemory, which will be in the next tar.gz and in Git within days. maxmemory also allows you to use Redis with memcached-like semantics: volatile keys expire to make room for new ones when we are low on memory.
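Something along these lines in redis.conf (the byte value is just an example, not a recommendation):

```
# Cap memory usage at roughly 100 MB. Once the limit is reached,
# Redis frees volatile keys (those with an expire set) to make
# room for new data, memcached-style; if nothing can be freed,
# writes are refused while reads keep working.
maxmemory 104857600
```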