[dupe] Redis as the primary data store? (moot.it)
36 points by dpaluy 1633 days ago | 10 comments

On the one hand, I think it's a bit unfortunate that Redis has been stagnant on the persistence side since the virtual memory experiment went nowhere. It's a wonderful tool with so much potential as a primary data store, not merely as a smart cache. But people are bound to get hesitant about a data store if it offers no straightforward way to persist large amounts of data.

On the other hand, Redis was probably born 10 years ahead of its time. If and when we finally get to mass-produce persistent storage media with the speed of RAM and the capacity of HDD -- SSDs are getting there, but not quite yet, and we don't know when memristors will become commercially available -- Redis will be the most obvious database to run on it.

Once we get the kind of storage you're talking about, Redis will be much less interesting, since its big differentiator is that it stores everything in RAM. If that's no longer an advantage, Redis will become a much more niche tool, because you'll be able to get similar speeds out of other databases once they aren't constrained by disk IO.

Theoretically, you may be right. But in practice, Redis was designed from the ground up with data structures heavily optimized for RAM, whereas traditional databases like PostgreSQL are heavily optimized for disk storage with RAM buffers. Although every database will benefit from faster storage, it will take at least several months, and more likely a few years, to modify traditional databases to take full advantage of the new paradigm. Redis, on the other hand, will probably require far fewer modifications.

The previous link had some questions about performance and equivalence for in-memory use for redis vs. MongoDB.

I can tell you that we tried to make MongoDB handle a high rate of write updates, and a 24-server cluster (8 shards with 2 replicas per shard) was unable to keep up due to massively slow response times.

I replaced it with a single server running a twemproxy cluster of 4 redis instances and that single server was able to handle more than 10x the load.

So there are many cases where using Redis makes a lot more sense, even if it complicates the persistence model.
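For reference, a setup like the one described above (one twemproxy, a.k.a. nutcracker, proxy sharding across four local Redis instances) can be sketched with a config along these lines. The pool name, ports, and weights here are illustrative assumptions, not the commenter's actual values:

```yaml
# nutcracker.yml -- hypothetical twemproxy pool sharding over 4 local Redis instances
redis_pool:
  listen: 127.0.0.1:22121      # clients speak the Redis protocol to this port
  redis: true                  # proxy Redis commands, not memcached
  hash: fnv1a_64               # key hash function
  distribution: ketama         # consistent hashing across the servers below
  auto_eject_hosts: false      # keep shards in the ring even if one times out
  servers:                     # host:port:weight for each Redis instance
    - 127.0.0.1:6379:1
    - 127.0.0.1:6380:1
    - 127.0.0.1:6381:1
    - 127.0.0.1:6382:1
```

Clients then connect to the proxy port as if it were a single Redis server, and twemproxy routes each key to one of the four instances.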

From the article:

    > These are the two primary reasons Redis sucks as a primary store:
    > You have to be able to fit all your data in memory, and
    > If your server fails between disk syncs you lose anything
    > that was sitting in memory.
Antirez explained how Redis persistence works[0] some time ago. It's a good read, and only after it can one evaluate the "sucks" part for one's particular use case.

[0]: http://oldblog.antirez.com/post/redis-persistence-demystifie...
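For context on the trade-off the quoted article describes, Redis offers two persistence modes, both configured in redis.conf. A minimal sketch (the exact thresholds below are illustrative, not recommendations):

```
# redis.conf sketch: the two persistence modes discussed above

# RDB snapshots: periodically fork and dump the whole dataset to disk.
# Writes made after the last snapshot are lost on a crash.
save 900 1        # snapshot if at least 1 key changed in 900 seconds
save 60 10000     # ...or if 10000 keys changed in 60 seconds

# AOF: append every write command to a log, replayed on restart.
appendonly yes
appendfsync everysec   # fsync once per second: at most ~1s of writes at risk
```

With `appendfsync everysec`, the "lose anything sitting in memory" window shrinks to about a second, at the cost of extra disk traffic.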

RAM generally doesn't get more expensive over time, so now might be the right time to do it.

RAM prices have increased by a third since last year; for example, this[1] item was $60 back then.

[1] http://www.newegg.ca/Product/Product.aspx?Item=N82E168201391...

They could switch to Edis when it gets too expensive to keep everything in memory.

As I didn't know the Edis project existed and had to google for it, here's a link: http://inaka.github.io/edis/

Its summary is: Edis is a protocol-compatible server replacement for Redis, written in Erlang. Edis's goal is to be a drop-in replacement for Redis when persistence is more important than holding the dataset in memory.

And so as not to hijack this thread, here's a separate discussion: https://news.ycombinator.com/item?id=5621574
