

LRU implementation in C++ - ahanjura
http://www.bottlenose.demon.co.uk/article/lru.htm

======
tlack
For comparison's sake, here's a simple LRU in Erlang:
[http://erlangquicktips.heroku.com/2008/10/05/an-erlang-
lru-m...](http://erlangquicktips.heroku.com/2008/10/05/an-erlang-lru-module/)

~~~
rudiger
In Java, LinkedHashMap[1] is well-suited to building LRU caches.

[1]
[http://docs.oracle.com/javase/6/docs/api/java/util/LinkedHas...](http://docs.oracle.com/javase/6/docs/api/java/util/LinkedHashMap.html)
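For reference, a minimal sketch (not from the linked article) of the standard LinkedHashMap pattern: the three-argument constructor's access-order mode plus the `removeEldestEntry` hook. Class and field names here are just illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        // accessOrder = true: get() moves an entry to the most-recently-used end
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the LRU entry
        return size() > capacity;
    }
}
```

With this, eviction is automatic: a `put` beyond capacity drops the least-recently-accessed key.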

~~~
Someone
However, one should be aware that, as the docs note, "insertion order is not
affected if a key is re-inserted into the map".

Because of that, a recency update must do both:

    
    
      Cache.remove(key);
      Cache.put(key,value);
    

Only doing the latter will not move the item to its correct location.

Also, I am too lazy to check it, but I do not think this will have the same
big-O as the C++ code.
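A quick check of the caveat above, for an insertion-order LinkedHashMap (the default): re-putting a key updates its value but leaves its position alone, while remove() followed by put() moves it to the end. This is just a demo, not code from the thread.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReinsertDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>(); // default: insertion order
        m.put("a", 1);
        m.put("b", 2);

        m.put("a", 10);                 // value updated, but "a" keeps its slot
        System.out.println(m.keySet()); // [a, b]

        m.remove("a");                  // remove + put moves "a" to the end
        m.put("a", 10);
        System.out.println(m.keySet()); // [b, a]
    }
}
```

Note that if the map is constructed with the access-order flag set (the three-argument constructor with `true`), get() itself does the reordering and the remove/put dance is unnecessary. As for big-O, both remove() and put() are expected constant time on a hash map, so as far as I can tell this matches the list-plus-hash C++ approach.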

------
jsaxton86
This algorithm is arguably already patented:
<http://www.patentstorm.us/patents/5893120.html>

In fact, you may remember that Google was sued over this last year:
[http://news.cnet.com/8301-13577_3-20056192-36.html?part=rss&...](http://news.cnet.com/8301-13577_3-20056192-36.html?part=rss&subj=news&tag=2547-1_3-0-20)

Granted, using a timestamp to expire hash table entries is probably different
enough from using a timestamp to evict cache entries that I would hope this
wouldn't hold up in court, but IANAL and I could be totally wrong here.
Another possibility is that a completely different patent already covers this
algorithm.

------
albertzeyer
Here is another simple implementation in C++:
[https://github.com/albertz/grub-
fuse/blob/master/include/cac...](https://github.com/albertz/grub-
fuse/blob/master/include/cache.h)

Usage example: [https://github.com/albertz/grub-fuse/blob/master/grub-
core/f...](https://github.com/albertz/grub-fuse/blob/master/grub-
core/fs/reiserfs_key_cache.cpp)

------
bborud
If this article shows anything, it is that C++ never really "grew up". In
Java, implementing an LRU cache is a trivial thing to do because most people
actually use, and build on, the standard library.

------
abcd_f
How is this notable?

~~~
jcapote
How is your comment worthwhile?

~~~
abcd_f
In condensed form, as a rhetorical question, it states that LRU cache
containers are fairly trivial and are routinely implemented as part of
larger projects. It typically takes under an hour to do, and it comes out
lighter and more legible if one does not depend on Boost.

So, how is this notable? Is it L2-cache friendly? Is it optimized not to
fragment the heap? Perhaps it's lock-free?

