You can select how frequently to save based both on time and on the number of updates, but if that's not enough, this is where Redis replication comes into play: just sync another server and you can be sure the dataset is durable.
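For example, the snapshot rules in redis.conf look like this (a sketch; the thresholds below are just the usual example values, not a recommendation):

    # save <seconds> <changes>: dump the dataset to disk if at least
    # <changes> writes happened in the last <seconds> seconds
    save 900 1       # after 15 minutes if at least 1 key changed
    save 300 10      # after 5 minutes if at least 10 keys changed
    save 60 10000    # after 1 minute if at least 10000 keys changed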

Note that Redis 0.100 adds non-blocking replication, so you can connect a slave at runtime while the master continues to happily serve queries. Also we have a SLAVEOF command now that makes it possible to control replication at runtime: to make a running server a replica of some other server, to stop the replication and turn a slave into a master, and so on.
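As a sketch of that runtime control (the host and port are placeholders):

    SLAVEOF 192.168.1.10 6379   # make this running server a replica of that master
    SLAVEOF NO ONE              # stop replication and turn the slave back into a master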

Redis replication is trivial to set up.
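For a permanent setup, a single line in the slave's redis.conf is enough (again, the host and port here are placeholders):

    # in the slave's redis.conf
    slaveof 192.168.1.10 6379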




From the Redis FAQ:

You may try to load a dataset larger than your memory in Redis and see what happens

This is what threw me off when I evaluated redis.

When choosing a database I really don't want to "try and see what happens". I want defined and documented behaviour, please.


When your program is using more RAM than is available, it's all up to the memory usage pattern. If your dataset is 4x bigger than RAM, but you happen to use only the latest 10% of the keys inserted, then most pages will be swapped to disk and rarely touched, and it will work; otherwise not.

Anyway this is an edge case; even if it will work, it's not a good idea to have datasets bigger than available RAM.

Before 1.0 stable we are even introducing a 'maxmemory' config option. If the dataset starts to get bigger than 'maxmemory', Redis will try to free RAM by removing keys with a timeout set (starting from the oldest ones), cached objects in free lists, and so on. If it is still out of memory it will start to reply only to read-only operations and will issue a "-ERR using more usage bigger than maxmemory parameter" error if you try to write more data.
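A minimal sketch of how that could look in redis.conf (the limit here is just an example value, expressed in bytes):

    # cap Redis memory usage at roughly 1 GB (value in bytes)
    maxmemory 1073741824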


And what happens when my dataset is bigger than RAM + swap?


You should use maxmemory to avoid this, but before you reach this condition Redis will of course start to be very slow; it's hard to reach this condition without noticing it.

Btw the way to go is maxmemory, which will be in the next tar.gz and in Git within days. maxmemory also allows you to use Redis with memcached-like semantics: volatile keys expire to make room for new ones when we are low on memory.
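A sketch of that memcached-style usage, assuming maxmemory is set (the key name and timeout are just examples): give cached values a timeout so they become volatile keys that can be dropped when memory is needed.

    SET cache:user:1234 cachedvalue
    EXPIRE cache:user:1234 3600   # volatile key: candidate for removal under maxmemory
    TTL cache:user:1234           # seconds left before it expires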


With these constraints I'd say Redis is more a persistent cache than a database, don't you think?



