Building a Bw-Tree Takes More Than Just Buzz Words [pdf] (cmu.edu)
63 points by jsnell 11 months ago | 6 comments



This paper is very well laid out and a joy to read.

The conclusion seems to be that Adaptive Radix Trees [1] outperform all B-tree variants, skip lists, and Masstree [2] for all operations except range scans. The claim is more credible if one considers that the authors of this paper are not the authors of [1].

Since ARTs are significantly faster, it would be nice to add at least one hash map to the benchmark mix as well, such as dense_hash_map<>, if only to see the current gap between ordered and unordered containers.

[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=654...

[2] a hybrid trie/B+tree data structure


ART is just another Judy variant. The idea that they are faster than a hash table is a popular myth, but wrong: they are only faster than a very bad hash table. The Linux page table implemented as a hash, for example, would be much faster than the current radix tree plus cache. The POWER CPU, for instance, uses a hash, not an ART, for its page table cache.

A Comparison of Adaptive Radix Trees and Hash Tables: https://bigdata.uni-saarland.de/publications/ARCD15.pdf


Thank you for the reference [1]. Maybe I missed something in the posted paper, but I don't think it makes any claims about ARTs vs. hash tables. It does seem to conclude that ARTs are better than the ordered containers they were compared against.

[1] https://bigdata.uni-saarland.de/publications/ARCD15.pdf


I actually ran into some of the same problems while trying to write a Bw-tree implementation a couple of years ago. My naive implementation ended up having very poor performance compared to other in-memory KV stores due to inefficient busy waiting under contention. There were a few tricks that would have made it more efficient, but I never got around to fleshing them out.
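For anyone curious what that busy waiting looks like: appending a delta is typically a load, a pointer write, and a compare-and-swap in a loop, and under contention the loop just spins. Here's a minimal, hypothetical Go sketch of that pattern plus one common mitigation (randomized exponential backoff); it's only an illustration of the general idea, not code from skeletonDB.

    package main

    import (
        "math/rand"
        "runtime"
        "sync/atomic"
        "time"
    )

    // casUpdate retries a compare-and-swap until it wins. Under heavy
    // contention the bare loop busy-waits; the backoff below is one common
    // way to soften that. (Illustrative only; requires Go 1.19+ for atomic.Pointer.)
    func casUpdate(p *atomic.Pointer[int], update func(old *int) *int) {
        backoff := time.Microsecond
        for {
            old := p.Load()
            if p.CompareAndSwap(old, update(old)) {
                return
            }
            // Lost the race: yield and sleep a small randomized interval
            // that grows with each failure instead of spinning flat out.
            runtime.Gosched()
            time.Sleep(time.Duration(rand.Int63n(int64(backoff))) + 1)
            if backoff < time.Millisecond {
                backoff *= 2
            }
        }
    }

    func main() {
        var p atomic.Pointer[int]
        zero := 0
        p.Store(&zero)
        casUpdate(&p, func(old *int) *int { v := *old + 1; return &v })
    }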

I guess I should have polished it up and written a paper about it.

If anyone is interested the code is available at https://github.com/d4l3k/skeletondb


Interesting work. In general, how does it perform compared to LMDB, BoltDB, Badger, etc.? What kind of performance increases would you expect from the optimizations you didn't get around to?


This was mostly just a toy project; I wouldn't recommend actually using it for anything. It kind of depends when it comes to performance. A perfect implementation would likely get better performance than all of those databases, since it resides entirely in memory. I'm not sure how it currently compares.

The main thing that caused issues was that compaction of logs into the main tree conflicted with appends. This ended up causing a lot of thrashing, since the compaction cycle would run far more often than necessary.

The key point of Bw-trees is that once changes have been added to the log, only the last element will ever change (it gets updated to point to the next appended item). So instead of trying to compact the entire log, you can compact everything but the last item, which makes compaction conflict-free with appends.
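To make that concrete, here's a rough Go sketch of the same idea in the usual Bw-tree "prepend at the head" formulation: the head pointer is the only word appenders race on, so everything behind the newest record can be consolidated without conflicting with concurrent appends. Types and names here are made up for illustration and aren't how skeletonDB is actually structured.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // delta is one record in a per-node delta chain. New records are
    // prepended at the head, so once a record is published its next field
    // is never written by appenders. (Hypothetical layout, Go 1.19+.)
    type delta struct {
        key, value string
        next       atomic.Pointer[delta]
    }

    // node holds the chain head; concurrent writers race only on this pointer.
    type node struct {
        head atomic.Pointer[delta]
    }

    // prepend publishes d with a CAS on the head, retrying if another writer wins.
    func (n *node) prepend(d *delta) {
        for {
            old := n.head.Load()
            d.next.Store(old)
            if n.head.CompareAndSwap(old, d) {
                return
            }
        }
    }

    // compactBehindHead consolidates every record behind the current head
    // into a single merged record. Writers never touch a published record's
    // next field, so rewriting head.next cannot conflict with a concurrent
    // prepend. Assumes one compactor per node at a time.
    func (n *node) compactBehindHead(merge func([]*delta) *delta) {
        head := n.head.Load()
        if head == nil {
            return
        }
        var rest []*delta
        for d := head.next.Load(); d != nil; d = d.next.Load() {
            rest = append(rest, d)
        }
        if len(rest) < 2 {
            return // nothing worth compacting
        }
        head.next.Store(merge(rest))
    }

    func main() {
        n := &node{}
        n.prepend(&delta{key: "a", value: "1"})
        n.prepend(&delta{key: "b", value: "2"})
        n.prepend(&delta{key: "c", value: "3"}) // newest record, left alone
        n.compactBehindHead(func(ds []*delta) *delta {
            // Toy merge: squash the older records into one placeholder.
            return &delta{key: "merged", value: fmt.Sprintf("%d records", len(ds))}
        })
        for d := n.head.Load(); d != nil; d = d.next.Load() {
            fmt.Println(d.key, "=", d.value)
        }
    }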



