However, whether you should do it or not depends on the data you need to store, and even more on the kind of queries you want to run against the data set. If you end up with a complex schema in order to support SQL-like queries, it is a bad idea. If you want to query data in a very Redis-API-obvious way, it is a good idea.
A few remarks about incorrect ideas expressed in different posts here:
1) Redis transactions are not good since there is no rollback: not true, since in Redis query errors can only happen in the case of a type mismatch, more or less. So if you have a MULTI/EXEC block and there are no obvious bugs in your code, you are going to see all the commands executed.
2) Redis durability is weak. Not true, since with AOF + fsync it offers a level of durability that is comparable to many other data stores. It depends on the actual configuration used, of course. Your usual MyISAM table is certainly not more durable than Redis with AOF + fsync everysec, just to make an example. Replication can add another level of durability, of course.
3) RDB persistence is instead a weak durability configuration, but it works for many applications where, in the case of a bad event, losing 15 minutes of data is a viable option.
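To illustrate point 1: here is a toy in-memory model of MULTI/EXEC semantics (a sketch of the behavior, not the real implementation), showing that a runtime type error on one queued command does not roll back the others:

```python
# Toy model of Redis MULTI/EXEC semantics: commands are queued, then ALL of
# them run at EXEC time. A runtime error (e.g. a type mismatch) on one
# command is reported in the results, but nothing is rolled back.

class MiniRedis:
    def __init__(self):
        self.data = {}

    def set(self, key, value):
        self.data[key] = value
        return "OK"

    def incr(self, key):
        value = self.data.get(key, 0)
        if not isinstance(value, int):
            raise TypeError("value is not an integer")  # type mismatch
        self.data[key] = value + 1
        return self.data[key]

    def exec_multi(self, queued):
        """Run every queued (method_name, args) pair; collect results/errors."""
        results = []
        for name, args in queued:
            try:
                results.append(getattr(self, name)(*args))
            except TypeError as err:
                results.append(err)  # reported to the client, not rolled back
        return results

r = MiniRedis()
queue = [("set", ("name", "hello")),   # fine
         ("incr", ("name",)),          # type mismatch: fails at EXEC time
         ("set", ("count", 1)),        # still executed afterwards
         ("incr", ("count",))]
results = r.exec_multi(queue)
print(results)          # ['OK', TypeError(...), 'OK', 2]
print(r.data["count"])  # 2 -- the later commands ran despite the earlier error
```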
So if, for instance, you are planning to use Redis to store your metadata, you can avoid having this same dataset duplicated in other places.
Redis is also pretty solid in providing solutions to back up your data. For instance, while you use AOF + fsync you can still call BGSAVE and create a compact .rdb file as a backup. You'll always be able to restore the data set from the RDB file.
That said, I think that in most complex applications, using Redis plus an on-disk, "more query friendly" data store makes a lot of sense. Redis + Mongo, Redis + MySQL, Redis + Riak and so forth are very interesting configurations IMHO.
There are some rather relational-data-intensive projects I've been involved with where that would have been an Exceptionally Poor Idea: both for the amount of pain one would go through writing a poorly tested version of LEFT JOIN to get it to work, and because one eventually discovers that your SQL database of choice has been improved over hundreds of man-years along axes you care about, and Redis has not.
But the real question is: why do you think a SQL database will not be scalable to begin with? I'm going to say that well over 75% of the top sites use a SQL database.
In fact, Redis' persistence layer is best understood as a best-effort value-add. If the server shuts down for any reason, you simply have to hope the last disk flush was recent and successful. Otherwise, your data is lost. This is (again, to me) fundamentally at odds with the contracts that any database should provide. Also, Redis Cluster is not yet released, which means running more than one Redis server requires you to manage keyspace sharding at the app layer.
Not that any of this is a knock against Redis. Even with those caveats, there is a huge class of problems that Redis is perfectly suited for. I love the software and use it daily. But Redis competes with memcached, not MongoDB; if you ever find yourself shoe-horning Redis into a role where total data loss is anything other than temporarily annoying, you're doing it wrong.
tl;dr: IMO, using Redis as a database is a really bad idea, for most common definitions of "database."
1. No disk. Everything is in memory, and if redis dies, so does your data. This is closest to memcached.
2. In memory, with periodic background flushes to disk. After a timeout (shorter if there's a lot of modification to your data), redis will fork a background process and write out all its data to a file. (Then it will atomically rename the file into place over the previous dump file.) This is the easiest form of disk persistence, and good enough for most of what people use redis for.
3. You can also configure redis to write to an append-only file that logs every operation, which you can periodically compact with a cron job or something. The flushing interval is configurable, and makes a big difference in speed. This is not particularly convenient -- who the hell wants to write a cron job to compact database logs? -- but it gives you durability on par with a conventional database.
4. If you have another machine lying around, there's an option that you can combine with any of the three options above: master-slave replication. A read-only slave redis server receives a stream of all writes to the master, and changes itself to match. This gives a small data-loss window in the event of a master failure, and makes fail-over possible. If the master goes down, you can promote a slave to be the new master. Coordinating this can be tricky, but it can certainly be done.
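For reference, options 2 through 4 above map onto a handful of redis.conf directives. A sketch (the snapshot thresholds and the master's host/port are placeholders, not recommendations):

```
# Option 2: periodic background snapshots -- "save <seconds> <min-changes>"
save 900 1
save 300 10
save 60 10000

# Option 3: append-only file; the fsync policy trades speed for durability
appendonly yes
appendfsync everysec    # alternatives: always (safest), no (fastest)

# Option 4: replication -- set on the slave, pointing at the master
slaveof 192.0.2.10 6379
```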
tl;dr: If the reliability approaches above look good enough for your application, and redis looks like the best match from a semantic or performance standpoint, then go for it!
If Redis were a database, I would expect a successful HMSET to generally be available in perpetuity, even if the machine was rebooted immediately after I got the "OK".
Append-only command logs don't solve the problem; replaying a complex series of state transitions is not a viable substitute for the storage of the end result of those transitions. It's computationally correct, and a straightforward implementation, but quickly and easily becomes unacceptably inefficient. I hope that's clear enough that I don't need to provide an example.
Replication is a solution for the problem of network or machine instability, assuming a valid Redis use-case. It doesn't address persistence in the sense of databases. In distributed computing, High-Availability is orthogonal to Persistence.
Periodic background flushes to disk in another thread come the closest to solving the persistence problem, but to get database-class QoS you'd need to trigger on every change (or, say, every second). Obviously this is a bad idea and not what the feature is designed for, which circles back to my main point: Redis is not designed to be a database. If it's operating as a memcached replacement in your stack, great. If it's standing in for authoritative, long-term storage of critical data, it's being misused.
The difference here is not one of durability, but in how the data is stored on disk. InnoDB keeps the logs small by periodically updating a B-tree with the changes in the logs, after which those changes can safely be removed from the logs. The result of this is strong durability, a reasonably compact on-disk representation, and fairly fast recovery when someone trips over the power cord.
Redis, in AOF mode, logs every command to the log file and (if you specify it in the config file) flushes to disk after every write. The problem is that this file grows without bound: if you leave redis running forever, it will eventually fill up your hard drive, and recovering from a restart will take way too damn long if you have to replay a 1 TB log file. The conventional way of dealing with this is to periodically use the BGREWRITEAOF command, which does essentially the same thing as a background data dump: it writes out a new AOF file from the current contents of redis in memory, and replaces the old AOF file. This is roughly equivalent to augmenting the usual periodic-data-dump behavior of redis with periodically-flushed logs, just like a more conventional database.
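The periodic rewrite doesn't have to be driven by hand or by cron, either; newer Redis versions can trigger it themselves from redis.conf (the thresholds below are illustrative):

```
appendonly yes
appendfsync everysec
# rewrite the AOF automatically once it doubles in size past 64 MB,
# instead of (or in addition to) calling BGREWRITEAOF by hand
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```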
If there's something I'm missing here, I'd love to hear it.
Redis does have a number of shortcomings. Firstly, it doesn't provide a very sophisticated transactions system. You get multi blocks, and the ability to watch keys (which is like a check and set), but you don't get true transactions. For example, there's no rollback mechanism, and commands will still be executed in a multi block if one of them fails.
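The WATCH-based check-and-set can be modeled in a few lines. A toy sketch of the semantics (not the real server): EXEC aborts when a watched key has changed since WATCH:

```python
# Toy model of Redis WATCH/MULTI/EXEC optimistic locking (check-and-set):
# EXEC aborts (returns None) if a watched key changed after WATCH.
class MiniCAS:
    def __init__(self):
        self.data = {}
        self.version = {}   # bumped on every write to a key

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def watch(self, key):
        return self.version.get(key, 0)   # remember version at WATCH time

    def exec_if_unchanged(self, key, watched_version, new_value):
        if self.version.get(key, 0) != watched_version:
            return None                    # someone else wrote: abort
        self.set(key, new_value)
        return "OK"

r = MiniCAS()
r.set("balance", 100)
v = r.watch("balance")
r.set("balance", 50)                          # concurrent writer sneaks in
print(r.exec_if_unchanged("balance", v, 90))  # None: transaction aborted
print(r.data["balance"])                      # 50: the concurrent write wins
```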
Secondly, Redis by default does not provide strong guarantees of durability. It writes a snapshot to disk of your data periodically, so if something happens to the server that causes the program to shut down unexpectedly, you'll lose a lot of data. Redis can be configured to provide stronger guarantees of durability, but at the expense of speed.
Thirdly, there is currently no sharding mechanism built into Redis. They're working on Redis Cluster, which will allow your data to be spread across multiple servers, but it won't come out for some time. You can build your own distribution system into your application, however; reading up on consistent hashing algorithms will give you ideas for that.
Fourthly, everything is in memory at all times. That's pretty expensive, though Redis is quite efficient with memory usage.
Redis is really fast, which is awesome, but we still use MySQL as our primary datastore. We have a write-through/read-through caching layer, and if the transaction ever fails in part, we just rollback the MySQL transaction and invalidate the key in Redis, because we can trust that MySQL's records are more authoritative.
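That write-through/rollback flow can be sketched like this, with plain dicts standing in for MySQL and Redis (the names and structure here are illustrative, not anyone's actual code):

```python
# Sketch of a write-through / read-through cache with rollback: the SQL
# store is authoritative; on a partial failure we roll it back and
# invalidate the cache key rather than trust a possibly-stale cache.
class Store:
    def __init__(self):
        self.db = {}      # stand-in for MySQL (authoritative)
        self.cache = {}   # stand-in for Redis

    def write(self, key, value, fail_cache=False):
        snapshot = dict(self.db)     # stand-in for BEGIN
        try:
            self.db[key] = value     # write the authoritative store first
            if fail_cache:
                raise RuntimeError("cache write failed")
            self.cache[key] = value  # then write through to the cache
        except RuntimeError:
            self.db = snapshot           # stand-in for ROLLBACK
            self.cache.pop(key, None)    # invalidate the possibly-stale key
            return False
        return True

    def read(self, key):
        if key in self.cache:            # read-through: cache first,
            return self.cache[key]
        value = self.db.get(key)         # then fall back to the database
        if value is not None:
            self.cache[key] = value      # repopulate the cache on a miss
        return value

s = Store()
assert s.write("user:1", "alice")
assert s.read("user:1") == "alice"
assert not s.write("user:2", "bob", fail_cache=True)
assert s.read("user:2") is None   # rolled back everywhere
```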
Redis does what it does well -- make great use of the CPU and the memory on a single box. SQL keeps your data consistent long-term and makes it easy to do ad-hoc queries.
A few things you might want to consider:
* depending on the size of your dataset, starting/rebooting redis can take a while; the bigger your dataset is, the longer it takes
* AOF can be a pain to maintain, since you need to allocate enough disk space for it, plus extra for when you back it up.
* if you plan to have millions of keys, compressing them isn’t a bad idea.
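On the millions-of-keys point, one common memory trick is to bucket many logical keys into a smaller number of Redis hashes with short names, since small hashes are stored very compactly. A sketch (the bucket count and `b:` prefix are arbitrary choices, not a convention):

```python
# "Compress" millions of top-level keys by bucketing them into hashes:
# instead of SET user:profile:1001 <val>, do HSET b:<n> user:profile:1001 <val>.
# The mapping must be deterministic so reads find the same bucket as writes.
import zlib

def bucket_for(key, buckets=1024):
    """Map a logical key to a (short hash name, field) pair."""
    n = zlib.crc32(key.encode()) % buckets
    return f"b:{n}", key

name, field = bucket_for("user:profile:1001")
print(name, field)  # e.g. "b:<some bucket number>" plus the original key
```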
I wrote model/helper classes to wrap repetitive redis code; you can check the source at the Draw! GitHub repository.
Anyway, after this experience I learned that Redis is a great tool, but it isn't a good fit as a datastore for "everything" you need to store for your app.
It really depends on how much data your application uses (and how much you expect it to use as you grow).
EDIT: If it meets your application's logical and scaling needs, there's no reason you couldn't use it