I wrote my own hash table[1], in the intrusive style, which puts the burden of checking for double insertion on the user. This allows O(1) insert and delete, with rehashing only when lookups become expensive due to collisions. For some use cases, it can speed things up greatly.
Also, due to the intrusive style, the table is usable without any dynamic allocations.
I wonder how it compares to the one in the article, if anyone is willing to adapt the benchmarks... :-)
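To illustrate what I mean by the intrusive style (this is a rough sketch with invented names, not the actual small_hash API): the caller embeds the link node inside their own struct and supplies the bucket array, so inserting is just linking a pointer, and the table itself never allocates.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical intrusive hash table sketch; names/API are illustrative only. */

typedef struct ih_node {
    struct ih_node *next;   /* chain link within a bucket */
    uint32_t hash;          /* cached hash of the key */
} ih_node;

typedef struct {
    ih_node **buckets;      /* caller-provided bucket array */
    size_t n_buckets;       /* assumed to be a power of two */
} ih_table;

static void ih_init(ih_table *t, ih_node **buckets, size_t n_buckets) {
    t->buckets = buckets;
    t->n_buckets = n_buckets;
    memset(buckets, 0, n_buckets * sizeof *buckets);
}

/* O(1): just link the node in. The caller guarantees the key is not
 * already present, so no lookup or collision scan happens here. */
static void ih_insert(ih_table *t, ih_node *node, uint32_t hash) {
    size_t b = hash & (t->n_buckets - 1);
    node->hash = hash;
    node->next = t->buckets[b];
    t->buckets[b] = node;
}

/* Lookup walks the bucket chain; `eq` compares a node against the key. */
static ih_node *ih_find(ih_table *t, uint32_t hash, void *key,
                        int (*eq)(ih_node *, void *)) {
    for (ih_node *n = t->buckets[hash & (t->n_buckets - 1)]; n; n = n->next)
        if (n->hash == hash && eq(n, key))
            return n;
    return NULL;
}

/* Example user struct: the node lives inside the entry itself,
 * so inserting it requires no allocation by the table. */
typedef struct {
    ih_node node;           /* must be first so the cast below is valid */
    int key;
    const char *value;
} entry;

static int entry_eq(ih_node *n, void *key) {
    return ((entry *)n)->key == *(int *)key;
}

int main(void) {
    ih_node *buckets[16];   /* static storage, no malloc anywhere */
    ih_table t;
    ih_init(&t, buckets, 16);

    entry e = { .key = 42, .value = "hello" };
    ih_insert(&t, &e.node, (uint32_t)e.key * 2654435761u);

    int k = 42;
    ih_node *found = ih_find(&t, (uint32_t)k * 2654435761u, &k, entry_eq);
    printf("%s\n", found ? ((entry *)found)->value : "not found");
    return 0;
}
```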
[1] https://github.com/Peaker/small_hash