
One thing you can do is use :ets.slot/2 to randomly probe the cache and evict entries. The approach is simple: every few milliseconds, have a process check a random sample of keys (say, 50) and evict any that have expired. If more than 25% of the sampled keys were expired, repeat the process again immediately; once a batch comes back with fewer than 25% expired, go back to periodic polling.
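A language-agnostic sketch of that probing loop, modeling the cache as a dict of key -> (value, expires_at). In the actual Elixir version you'd sample via :ets.slot/2 rather than random.sample; the names and thresholds here are just illustrative:

```python
import random
import time

SAMPLE_SIZE = 50
REPEAT_THRESHOLD = 0.25

def sweep_once(cache, now=None):
    """Probe a random sample of keys, evict expired ones,
    and return the fraction of the sample that was expired."""
    now = time.monotonic() if now is None else now
    if not cache:
        return 0.0
    keys = random.sample(list(cache), min(SAMPLE_SIZE, len(cache)))
    expired = [k for k in keys if cache[k][1] <= now]
    for k in expired:
        del cache[k]
    return len(expired) / len(keys)

def sweep(cache, now=None):
    """Repeat immediately while >25% of the sample was expired,
    then return to the caller's periodic polling schedule."""
    while sweep_once(cache, now) > REPEAT_THRESHOLD:
        pass
```

The sweep stops as soon as a sample comes back mostly fresh, so some expired entries can linger until a later pass; that's the deliberate trade-off of this design.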

This is how Redis versions prior to 6.0 implemented key expiry. It's a very simple algorithm and can prove quite efficient at evicting expired keys without having to constantly scan the entire keyspace.




How does Redis do it now in 6.0 and later?


Why not use a min heap to track expiry times? (Or a doubly-linked list, if lifetimes are uniform?)


The min heap would need to live in the process heap of a particular process. It would only be accessible by that process, so other processes would need to interact with it by sending messages to the "min heap" process and waiting for responses. The min heap process would become a massive throughput bottleneck at much lower QPS than ETS, just due to the overhead of processing messages. This would be exacerbated by GC pauses as the min heap grows beyond around 100k or 1M entries, whereas an ETS table can handle 100M entries or more (I've never actually hit a limit).

Large, mutable, shared state like a cache works much better if it lives in ETS. Any process can read/write to a properly configured table in a few microseconds. Scales very well to large data sizes and high QPS.

The random probing approach has the disadvantage that it stores expired entries longer than necessary. This either wastes memory or reduces the cache hit rate. Either of these seems preferable to limiting throughput and maximum entry count, as the min heap would.
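For contrast, the min-heap index proposed in the question above can be sketched like this (single-owner in-process state, which is exactly the limitation described; names are illustrative):

```python
import heapq
import time

class HeapExpiringCache:
    """Cache with a min heap ordering entries by expiry time.
    The heap is ordinary in-process state, so only the owning
    process can touch it -- hence the bottleneck described above."""

    def __init__(self):
        self._data = {}   # key -> (value, expires_at)
        self._heap = []   # (expires_at, key), soonest expiry first

    def put(self, key, value, ttl, now=None):
        now = time.monotonic() if now is None else now
        expires_at = now + ttl
        self._data[key] = (value, expires_at)
        heapq.heappush(self._heap, (expires_at, key))

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        self.evict_expired(now)
        entry = self._data.get(key)
        return entry[0] if entry else None

    def evict_expired(self, now):
        """Pop heap entries until the soonest expiry is in the future.
        Stale heap entries (key since overwritten with a different
        expiry) are skipped rather than evicted."""
        while self._heap and self._heap[0][0] <= now:
            expires_at, key = heapq.heappop(self._heap)
            entry = self._data.get(key)
            if entry and entry[1] == expires_at:
                del self._data[key]
```

Eviction is exact and O(log n) per entry, but every operation funnels through the one process that owns the heap, unlike a shared ETS table.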


that was my immediate first question



