Random notes on improving the Redis LRU algorithm (antirez.com)
74 points by dwaxe on July 29, 2016 | 32 comments



Anyone familiar with caching algorithms will groan when they read the description of the "sampling" LRU. First, LRU isn't that great--it is a _lot_ better than nothing and having a suitably large cache tends to cover up cache eviction problems, but LRU is still susceptible to a number of problems. The pseudo-LRU redis used to have is appropriate for very high-speed caches (it's essentially a 5-way associative cache), but is so very not appropriate when going out to disk, where the access latency is horrible.
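For the unfamiliar, the sampling approach is roughly this; a minimal sketch with made-up helper names, not the actual Redis code:

    /* Pick a handful of random keys and evict the one accessed least
     * recently; with enough samples this approximates true LRU. */
    #define EVICTION_SAMPLES 5

    typedef struct { const char *key; long long last_access; } entry_t;

    extern entry_t *random_entry(void);  /* assumed: uniform random key */
    extern void evict(entry_t *e);       /* assumed: drops the entry    */

    void evict_one_sampled(void) {
        entry_t *victim = NULL;
        for (int i = 0; i < EVICTION_SAMPLES; i++) {
            entry_t *e = random_entry();
            if (!victim || e->last_access < victim->last_access)
                victim = e;
        }
        if (victim) evict(victim);
    }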

ARC as mentioned below incorporates the LFU statistics, but it is patent encumbered. A number of other NoSQL systems implement either 2Queues ("Segmented LRU") or LIRS. My team recently implemented both of these for a block-caching system and found LIRS to perform better for our purposes; essentially LIRS has a bit more information that it collects so it can evict low future-probability data faster and thus retain more data that is likely to be accessed again in the future. It only takes a few more hits to be worth it. The downside to LIRS is that it's more complicated than 2Queues, and far less well-described in the literature.
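To make the 2Queues idea concrete, here's a minimal sketch; all the queue helpers are hypothetical:

    /* New keys enter a probationary queue, and only keys that are hit
     * again get promoted to the protected queue, so a one-shot scan
     * cannot flush hot data. */
    typedef struct entry entry_t;

    enum { PROBATION, PROTECTED };

    extern void push_mru(int q, entry_t *e);  /* assumed: insert at MRU end  */
    extern entry_t *pop_lru(int q);           /* assumed: remove the LRU end */
    extern int queue_full(int q);             /* assumed: capacity check     */
    extern void evict(entry_t *e);            /* assumed: drops the entry    */

    void on_insert(entry_t *e) {
        push_mru(PROBATION, e);               /* new keys start on probation */
        if (queue_full(PROBATION))
            evict(pop_lru(PROBATION));        /* scans evict only themselves */
    }

    void on_hit(entry_t *e, int in_probation) {
        push_mru(PROTECTED, e);               /* promote or refresh position */
        if (in_probation && queue_full(PROTECTED))
            push_mru(PROBATION, pop_lru(PROTECTED)); /* demote the coldest   */
    }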

See https://en.wikipedia.org/wiki/Cache_algorithms

The thing that's really necessary, though, is to collect metrics and to have an instrumented version that will log all accesses. Given a log of all accesses, you can compare whatever implementation you use to Bélády's algorithm (i.e., perfection). Without instrumentation and testing, it is very very difficult to know what the right strategy is.
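As a concrete illustration, here's a naive offline simulator for Bélády's algorithm over a recorded trace; it's O(n^2) and purely illustrative, but it gives you the upper bound to compare real policies against:

    /* Given a trace of key accesses, always evict the cached key whose
     * next use is farthest in the future (or never comes). */
    #include <stdio.h>

    #define CACHE_SLOTS 3

    static int next_use(const int *trace, int n, int pos, int key) {
        for (int i = pos + 1; i < n; i++)
            if (trace[i] == key) return i;
        return n;  /* never used again: the ideal victim */
    }

    int main(void) {
        int trace[] = {1, 2, 3, 1, 4, 1, 2, 5, 1, 2};
        int n = (int)(sizeof trace / sizeof *trace);
        int cache[CACHE_SLOTS], used = 0, hits = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (cache[j] == trace[i]) { hit = 1; break; }
            if (hit) { hits++; continue; }
            if (used < CACHE_SLOTS) { cache[used++] = trace[i]; continue; }
            /* evict the slot whose next use is farthest in the future */
            int victim = 0;
            for (int j = 1; j < used; j++)
                if (next_use(trace, n, i, cache[j]) >
                    next_use(trace, n, i, cache[victim])) victim = j;
            cache[victim] = trace[i];
        }
        printf("optimal hits: %d/%d\n", hits, n);  /* 5/10 for this trace */
        return 0;
    }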

Anyway, this helps confirm redis in my mind as a great amateur NoSQL system.


Hello jonstewart,

That LRU is not that great is the main argument of the blog post; this is why Redis now contains an eviction policy with LFU elements. Pseudo-LRU in Redis is a compromise between different goals: Redis has many use cases, so there are tensions between different features. The pseudo-LRU/LFU, augmented with the pool of visited objects, as you can see by reading the whole blog post, provides quite good results without forcing Redis to bind the eviction policy to the underlying data structure used to represent the key space, and especially, without using additional memory.

All we can do for now is to use 24 bits per object, which is different from a 24-bit overhead per object, as the 24 bits must be stored in the object itself.

With the new LFU policy I hope to provide a higher-quality eviction strategy for Redis compared to the previous one; however, note that in practice the sampling LRU plus the pool provided acceptable real-world performance in many use cases.
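For reference, this is roughly how the logarithmic counter works; a minimal sketch in the spirit of the blog post, with illustrative constants rather than the exact Redis values:

    /* Morris-style probabilistic counter: the increment probability
     * shrinks as the counter grows, so 8 bits can represent a huge
     * range of access frequencies. LOG_FACTOR is illustrative. */
    #include <stdint.h>
    #include <stdlib.h>

    #define LOG_FACTOR 10

    uint8_t log_incr(uint8_t counter) {
        if (counter == 255) return counter;           /* saturate at 8 bits */
        double r = (double)rand() / RAND_MAX;
        double p = 1.0 / (counter * LOG_FACTOR + 1);  /* decreasing odds */
        if (r < p) counter++;
        return counter;
    }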

What I think you don't get is that providing real-world software with many features is not like: let's read the Wikipedia page about caching algorithms plus 100 papers, select the best, and implement it. System software that works in many use cases and environments is an exercise in compromises, not "pick the best". Otherwise whoever is able to read the latest paper on a topic and implement it would be the winner, which is not the case.


Hello antirez,

Indeed, I think the pseudo-LRU approach as you describe, especially when combined with a pool, is a pragmatic caching solution for when you don't want to make major changes to your codebase. You found 24 bits, you dedicated a small data structure off to the side, and those two in combination are a really good approximation of LRU, without a lot of code.

I'm obviously not as familiar with the guts of redis's implementation as you are, so I don't know all the ways that keys can be stored and what the implications for caching are. Generally a caching data structure can be used that references the cached data/objects (pointers, basically), so it could be external to how you otherwise store data. That may not be the right approach with a low-level C/C++ block caching server, but that's not what redis is, either. I think you would find that adding a separate data structure would help you out, but that's just naïve advice; I don't know the guts of redis so perhaps you can't store keys in a separate structure.

I also well understand that providing real-world software is an exercise in compromises. However, I recently discovered how horribly an existing LRU implementation performed in practice for my application (digital forensics) and spent considerable resources exploring alternatives. And what we really found out during all of this is that instrumentation and testing under a diverse set of inputs is necessary in order to make an informed decision. For some inputs, LRU was gangbusters. For others, it was lousy. I've also learned that sometimes it's necessary to put pragmatism aside and go off and implement the ideal approach; it can pay off.

Cheers,

Jon


Hello Jon, adding an additional data structure has the problem of slowing down the ability to serve queries, and there is a memory overhead problem as well. A lot of the time spent when modifying the eviction policy goes into making sure there are no speed regressions. Moreover, in Redis you can switch the "maxmemory" setting on and off and change the eviction policy at runtime: when it's off, no resources should be needed; when it is turned on, no pause should happen. So it is very hard to find both the time and space resources to implement the perfect solution.

However, the blog post in its latter part describes the LFU implementation that is now in Redis (unstable branch). This implementation was derived with just a few days of work, I had only that much time to work on the problem, but it was obtained by writing a few simulation programs and traces in order to check the behavior, so I don't think it's an "obvious loser". For instance, I can imagine workloads where it can work better compared to LIRS. For example, when you get accesses in "spaced bursts", LIRS, by remembering only the last two access times, may get confused about what data is really worthwhile to retain.

In teams like yours, where you have a specific problem to address and you can test N approaches, you can select the specific algorithm that works best for you, but for Redis it would be a lot better to have a more adaptive approach. Intuitively, and from the traces I was able to run on it, it looks like the logarithmic counter plus the halving timer, while not always optimal, should be able to provide decent performance under different access loads, since in some way it really is an LFU algorithm (in that the frequency of accesses is retained by the algorithm directly, instead of being guessed from secondary data), so it should not be trivially fooled by odd access patterns. Also, since it is tunable in its two main parameters, it should be possible for users to improve it for application-specific loads.

I understand your frustration with LRU implementations, but given that you care about and have experience with this stuff, I invite you to take a look at the performance of the new Redis algorithm; maybe after all it could work better than expected, or you could make trivial changes to it that significantly improve how it works, so that we all could benefit. Regards.
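For the curious, the decay side looks roughly like this sketch; names and the halving period are illustrative, not the exact Redis code. The other 16 of the 24 bits store a minutes-resolution timestamp of the last decay:

    #include <stdint.h>
    #include <time.h>

    #define DECAY_PERIOD_MIN 1  /* illustrative halving period, in minutes */

    static uint16_t now_minutes(void) {
        return (uint16_t)(time(NULL) / 60);  /* truncation to 16 bits is fine */
    }

    void maybe_decay(uint16_t *last_decay, uint8_t *counter) {
        uint16_t elapsed = (uint16_t)(now_minutes() - *last_decay); /* wrap-safe */
        if (elapsed < DECAY_PERIOD_MIN) return;
        while (elapsed >= DECAY_PERIOD_MIN && *counter > 0) {
            *counter /= 2;                   /* halve once per elapsed period */
            elapsed -= DECAY_PERIOD_MIN;
        }
        *last_decay = now_minutes();
    }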

EDIT: p.s. if somebody is inclined to test it: in 24 bits, I'm sure it's possible to implement some kind of LIRS approximation easily, by using two 12-bit timestamps. The problem with a 12-bit timestamp is that all objects older than about one hour are ranked the same, but in the case of a cache like Redis, it could perhaps be acceptable... Anyway, trying different approaches in the unstable branch is just a matter of writing a few lines of code AFAIK.
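One way the packing could look, purely hypothetical; twelve bits at one-second resolution wrap after about 68 minutes, hence the "older than one hour" caveat:

    #include <stdint.h>

    #define TS_MASK 0xFFFu  /* 12 bits */

    /* shift in a new access: previous <- last, last <- now */
    uint32_t record_access(uint32_t field, uint32_t now_sec) {
        return ((field & TS_MASK) << 12) | (now_sec & TS_MASK);
    }

    /* LIRS-style signal: gap between the two most recent accesses */
    uint32_t inter_reference_gap(uint32_t field) {
        uint32_t prev = (field >> 12) & TS_MASK;
        uint32_t last = field & TS_MASK;
        return (last - prev) & TS_MASK;  /* modular distance */
    }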


The "ideal approach" to caching, Jon, is an oracle. There is no such thing as an "ideal cache eviction algorithm", just FYI.


Also known as Bélády's algorithm.


This is my final note (and I am not downvoting you, fyi) but you said "implement the ideal approach". Magic?

Let's see the code. I checked your github repo and can't find this "implementation".


The last sentence I meant more generally than the domain of cache algorithms, setting up "ideal" as a contrast to "pragmatic", since I'm taking flak for being "academic" (haha, you should have seen my grades). One lesson I've learned in my career is that there are instances where it pays to take some time, see what the research says, experiment, and then refactor. It takes a lot of time, but the best software is also long-lived so it can pay to take the time to make one's software the best. antirez has 24 bits to play with and doesn't want to make major changes, probably also doesn't have a lot of spare time, and that's his choice.

But I am also aware of the oracle algorithm.


You still don't understand what Redis is and what antirez wrote in his first reply to you. Your "major changes" would have an impact on my use cases. Redis is not only used for caching: I am not using Redis in production for caching, and many other people also do not use Redis for caching.


> setting up "ideal" as a contrast to "pragmatic"

Have to retract the last comment['s promise of final note] and (academically) note here that "ideal" and "optimal" are distinct notions in computer science.

A provably "optimal" algorithm is implementable. An "ideal" that requires magical facilities is a trope to make a point and spare one from attempting the impossible.


This is probably over-complicated and would cause hard-to-diagnose behaviour, but here's a random idea: use a multi-armed bandit algorithm to pick which eviction policy to use.

You'll want to keep a queue of recent evictions. When you get a cache miss, check if the key was recently evicted, and if so, penalize the algorithm that evicted it. Otherwise, if after a period of time you don't get a cache miss, credit the algorithm that evicted it so it's used more often.

This way Redis would self-tune its overall eviction policy to the workloads it's presented with. The idea is that, hopefully, you wouldn't have to do as many upfront trade-offs between conflicting use cases.
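Something like this epsilon-greedy sketch; all names and constants are made up for illustration:

    #include <stdlib.h>

    #define NUM_POLICIES 3   /* e.g. LRU-ish, LFU-ish, random */
    #define EPSILON      0.1 /* exploration rate */

    static double score[NUM_POLICIES];  /* running reward per policy */

    int pick_policy(void) {
        if ((double)rand() / RAND_MAX < EPSILON)
            return rand() % NUM_POLICIES;        /* explore occasionally */
        int best = 0;
        for (int i = 1; i < NUM_POLICIES; i++)
            if (score[i] > score[best]) best = i;
        return best;                             /* exploit the leader */
    }

    /* Call with a negative delta when a recently evicted key misses,
     * a positive one when it survives its grace period untouched. */
    void reward_policy(int policy, double delta) {
        score[policy] = 0.99 * score[policy] + delta;  /* decayed score */
    }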


The problem is that then you have K caching algorithms in use, and the memory used by K-1 of them would be better used if given to the winner. The more sophisticated algorithms that use a combination of recency and frequency, like LIRS and ARC, will adapt to most workloads. 2Queues does better than LRU at not letting sequential reads evict frequently accessed data and it's pretty simple to implement, but, in my experience, LIRS seems to evict low probability data faster, letting you use more memory for higher probability data.

Regardless, if you have instrumentation, you can do a lot of testing and offline analysis.


> "amateur NoSQL"

That was entirely unnecessary and imho open to question given that quite a lot of /businesses/ run on Redis.


Lots of them run on visual basic too.


A lot of businesses also run on MS Access. I think redis is in the great tradition of amateurism, and is about as good as you can get under that approach. It's a lot better than a number of other systems, and it's also somewhat limited.


Redis is no Access, but at least it goes to show that heavily-funded enterprise software is often worse, not better.

To me this smacks of academic elitism. At the scale and density at which it is used, Redis is by definition not amateurish. Amateur means either unpaid or inept. Describing Redis as inept can only mean you've never used it and are unable to appreciate the incredible diversity of problems for which Redis provides truly excellent reliability, usability, and performance all at once.

Antirez succeeded in providing a great measure of all three of those traits, in a code base which is surprisingly easy to read and maintain.

To me Redis is in fact a modern marvel. Proof of what a 10x engineer can achieve when left alone to get shit done.


I definitely don't mean to suggest that antirez is inept. While redis is technically now a commercial/professional endeavor, I don't think that the commercial side of it (which is not a technical comparison at all) compares to some of the other NoSQL vendors, and that means it's hard to hire experts. Hence, redis is a great amateur NoSQL system.

And to be clear, it is way better than Access.


I look at it a bit differently. Would Redis be better today if antirez had raised $10m and hired 50 engineers and then gone out to maximize profit? Certainly that option existed. Personally I doubt the end product really would be that much better, and it would be a tragedy to lose such a great piece of free and open source software.


>> (which is not a technical comparison at all)


Yes, I saw that. Which is why I responded to discuss the economic aspect -- like other NoSQL startups, should Redis have raised a war chest and gone out and hired all those experts? I just have a gut feeling that VC funds would endanger the Redis we know and love more than they could actually help it.


Redis is brilliant. It's preposterous to think that systems have to be complex or featureful in order to be "professional".


Perhaps the problem is the overspecification of words like "amateur" and "professional" to mean things they don't.


> The pseudo-LRU redis used to have is appropriate for very high-speed caches (it's essentially a 5-way associative cache), but is so very not appropriate when going out to disk, where the access latency is horrible.

Redis is an in-memory database, so what's the problem?

Also, keep in mind that it has to fit in 24 bits, so not all solutions are possible, even if they are theoretically better.


It's possible that he refers to a cache miss "going out to disk" by requiring a read from a different data store (a SQL database, for example).

But this is really kind of silly - the entire idea with caching is that misses are much slower than hits. The statement is something like "you need a stupid good cache because misses hurt". Software engineering is always about tradeoffs, and antirez does a fantastic job of describing them in this post.


I also ran an exhaustive analysis and reached a similar conclusion. I found that TinyLFU (mentioned in the article) matches or beats LIRS in database and search workloads. It is also much simpler to implement and is more memory efficient.

https://github.com/ben-manes/caffeine/wiki/Efficiency

The W-TinyLFU variant better handles recency by using an admission window so that new entries are not immediately discarded. This resolves antirez's concern and improves performance in sparse burst workloads. In a way it's similar to the HIR/LIR regions (window/main), using TinyLFU as a frequency filter.
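At its core the admission test is tiny; a minimal sketch, where freq_estimate() stands in for a CountMin-style sketch lookup (the name is hypothetical):

    /* On eviction, admit the candidate only if its estimated access
     * frequency beats the would-be victim's; otherwise keep the victim. */
    extern unsigned freq_estimate(const char *key);  /* assumed sketch lookup */

    int admit(const char *candidate, const char *victim) {
        return freq_estimate(candidate) > freq_estimate(victim);
    }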

Happy to work with you, antirez, and others to simulate more policies and add trace files.


Thanks for the insight on the cache algorithms. Just wondering, are your implementations of 2Q and LIRS open-sourced?

Also I agree that not having more expertise on caching is amateur-ish for maintainers of redis. But as long as they keep up this level of openness and learn from the community calling them out on it, they won't/can't stay amateurs for too long!


No, they're not open source at this time. It's possible to find some Java ones out there.


I wonder if he looked at Adaptive Replacement Cache[1]? It's supposedly the best of both worlds of least recently and least frequently used and the only reason people don't use it more is because of IBM's patent.

[1] https://en.wikipedia.org/wiki/Adaptive_replacement_cache


With random sampling?


That logarithmic counter trick is awesome. It's obviously not something you'd implement everywhere, but it seems really elegant here. I've never really thought of treating memory that way; it blew my mind a little.


Great post. Antirez delivers once again!

I think this is an excellent read for anyone who wants to gain some insight into coming up with pragmatic, real world solutions to problems (caching) that are usually taught in a very formal, "use this bestest algorithm or you're a dummy" kind of way.

Also the tricks used here are beyond clever.


I love reading the antirez blog; he offers the kind of great analysis you rarely get to read.




