
Also, there is nothing magic about his Fibonacci constant. Any large odd constant with roughly half the bits set at random will do the same thing.
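A minimal sketch of that idea in Python, assuming 32-bit keys (2654435769 is floor(2^32/phi), which happens to be odd; any odd constant with well-mixed bits behaves similarly):

```python
# Fibonacci (multiplicative) hashing: multiply by an odd constant close
# to 2^32/phi, then keep the top `bits` bits of the 32-bit product.
GOLDEN32 = 2654435769  # floor(2^32 / phi), an odd constant

def fib_hash(key: int, bits: int) -> int:
    """Map a 32-bit key to a table index of `bits` bits."""
    return ((key * GOLDEN32) & 0xFFFFFFFF) >> (32 - bits)
```

With this constant, consecutive integer keys land in well-dispersed buckets rather than adjacent ones.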



I also did not follow what is special about 2^n/phi. I found this more concise justification: http://mathforum.org/kb/message.jspa?messageID=431065

"Knuth's finding was that the dispersion of indexes for a sequence of consecutive keys is maximized when M is chosen this way, thus a multiplicative hash table with a dense set of keys will have the fewest possible collisions when M approx= 2^N * R."

Although, if the keys are dense, then we could just use the low bits directly. I guess the unstated assumption is that in the real world, we'll have a mix of keys that are sequential and keys that are strided in such a way that using low bits directly would cause a lot of collisions. So we need a multiplicative hash to take care of the latter and we should use 2^n/phi to take care of the former.
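A small illustration of that assumption in Python: strided keys (here multiples of 1024) all collide if you index by the low bits directly, but spread out under a multiplicative hash (the constant is assumed to be floor(2^32/phi)):

```python
# Strided keys collide badly under a plain low-bit mask, but a
# multiplicative hash mixes them into many distinct buckets.
GOLDEN32 = 2654435769  # floor(2^32 / phi)

keys = [i * 1024 for i in range(256)]
mask_idx = {k & 0xFF for k in keys}                              # low 8 bits
mult_idx = {((k * GOLDEN32) & 0xFFFFFFFF) >> 24 for k in keys}   # top 8 bits
```

Every key is a multiple of 1024, so the low 8 bits are always zero and the mask maps all 256 keys to one bucket, while the multiplicative hash spreads them widely.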

Austin, you've done a lot of great work on this. What is the hash function you'd use today for small (<=32byte) keys?


Phi is the most irrational of the irrational numbers, meaning that successive rational approximations are just slightly better: 1/1, 1/2, 2/3, 3/5, 5/8, 8/13, etc. There are no rational approximations that "jump out" as being much better, so you won't find any unexpected patterns when using it.

The Golden Ratio (why it is so irrational) - Numberphile https://www.youtube.com/watch?v=sj8Sg8qnjOg
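The convergents listed above come from ratios of consecutive Fibonacci numbers; a small sketch:

```python
# Rational approximations to 1/phi from consecutive Fibonacci numbers:
# 1/1, 1/2, 2/3, 3/5, 5/8, ... The error shrinks as slowly as possible,
# which is the sense in which phi is "the most irrational" number.
def fib_convergents(n: int) -> list[float]:
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(a / b)
        a, b = b, a + b
    return out
```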


Murmur3 has good distribution properties and is much less code than City/Highway/Spooky. There are also some hashes based on hardware AES instructions (not crypto, just taking advantage of the mixing properties) that are near perfect but I don't recall their names and haven't benchmarked them.


The code for these hash tables is open source. You keep repeating that he should be using your hash functions for reasons xyz. Honestly, if you really believe this, show that doing so is better.

Just because you are the author of uvw does not make you right.




The best is currently Leonid Yuriev's t1ha. See https://github.com/rurban/smhasher/

Note that, in contrast with what Andy or DJB say, collision safety is not a property of the hash function per se: you cannot prevent collision attacks with any "safer" hash function. Even siphash, the most defensive of them all, can be brute-forced in under 4 minutes. Real safety begins at 256 bits, while a hash table typically exposes only 10-14, at most 32, bits to attack. A "safer" function makes the table only marginally safer, but much slower. Collision attacks can only be handled in the collision scheme itself, either by collision counting or by using a non-attackable collision resolution scheme.

Unfortunately most implementations have no idea, and just go with the slowest of all. The latest such sin was committed in the Linux network stack. I thought at least those guys were above that, but apparently not.

This simple multiplicative hash is indeed superior; it only runs into problems with linear probing. In that case you would use double hashing with a second, similar constant, or find two such constants dynamically, which would make the scheme universal.
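A sketch of that double-hashing idea, with two fixed odd constants chosen purely as an illustration (the first is floor(2^32/phi), the second is one of xxHash's 32-bit primes):

```python
# Double hashing: probe idx, idx+step, idx+2*step, ... where the step
# comes from a second multiplicative hash and is forced odd, so it is
# coprime to the power-of-two table size and visits every slot.
C1 = 2654435769  # floor(2^32 / phi)
C2 = 2246822519  # an odd mixing constant (one of xxHash's primes)

def probe_seq(key: int, bits: int, max_probes: int) -> list[int]:
    size = 1 << bits
    idx = ((key * C1) & 0xFFFFFFFF) >> (32 - bits)
    step = ((((key * C2) & 0xFFFFFFFF) >> (32 - bits)) | 1)  # odd => full cycle
    return [(idx + i * step) % size for i in range(max_probes)]
```

Because the step is odd and the table size is a power of two, the probe sequence is a full cycle: it touches every slot exactly once.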

Also note that the upper bits of a hash function are always superior to the lower bits. Masking with a power-of-two (&) is thus always worse than right-shifting, as done here.
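A small demonstration of why, for the multiplicative case: each bit of the product depends only on key bits at or below its position, so the low bits of the product never see the key's high bits. Keys that differ only in their high bits all collide under a low-bit mask, but not under a right shift (sketch, assuming a 32-bit multiplicative hash):

```python
# Upper vs. lower bits of a 32-bit multiplicative hash.
C = 2654435769  # floor(2^32 / phi)

def low_bits(k: int) -> int:
    return (k * C) & 0xFF                       # low 8 bits: mask

def top_bits(k: int) -> int:
    return ((k * C) & 0xFFFFFFFF) >> 24         # top 8 bits: shift

# Keys that differ only in their high 8 bits.
keys = [i << 24 for i in range(256)]
```

Every such key is a multiple of 2^24, so the low 8 bits of the product are always zero, while the top 8 bits remain a permutation of the high key bits.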


Thanks. Also, from smhasher text:

"See e.g. A Seven-Dimensional Analysis of Hashing Methods and its Implications on Query Processing

https://infosys.cs.uni-saarland.de/publications/p249-richter...

for a concise overview of the best hash table strategies, confirming that the simplest Mult hashing (bernstein, FNV*, x17, sdbm) always beat "better" hash functions (Tabulation, Murmur, Farm, ...) when used in a hash table."

The conclusion is overstated: it's surely not "always", but there are enough use cases where simple multiplication (or, even more likely, the "rotate, multiply, xor" variants) is the fastest in practice.

From the paper:

"We could observe that Mult produces indeed more collisions than the expected amount on uniformly distributed keys. However, this larger amount of collisions does not get highly reflected in the observed performance. Thus, we consider Mult as the best candidate to be used in practice when quality results on high throughputs is desired, but at the cost of a high variance across data distributions."

It’s still a trade-off, but a good choice in some use cases.


I haven't found a use case yet where a better hash function leads to a faster hash table. Theoretically this would happen on CPUs with extremely slow or small caches.

This is also confirmed by the recent trend of adding slower hash functions everywhere, leading to dramatic performance regressions.


> You can easily brute-force even the worst of all siphash in under 4min

You assume that you know the seed or can directly observe the output, which is rare in practice.

> upper bits of a hash function are always superior to the lower bits

Why is that? The identity hash for ints is quite common. And for good hashes, the difference should be negligible.


Actually there is something special about using the golden ratio. By some measure it is "the most irrational" number, and that property makes it really good at distributing numbers evenly along an interval.

You can use it to generate a nice sequence of different colours - set the hue to (n * the golden ratio) mod 1.
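A sketch of that colour trick:

```python
# Evenly spaced hues: each new colour lands in one of the largest
# remaining gaps, because multiples of phi fill the unit interval
# as uniformly as possible.
PHI = (1 + 5 ** 0.5) / 2

def hue(n: int) -> float:
    return (n * PHI) % 1.0
```

Feed the result into any hue-to-RGB conversion (e.g. HSV with fixed saturation and value) to get visually distinct colours.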

But I don't know if that property is really necessary for a hash table. I guess only if you are using a terrible key hash.


In fact, if you actually choose a random odd number, you'll get multiply-shift, which is probably the fastest universal hash function that exists.
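A sketch of multiply-shift in that spirit (details here are illustrative, not a reference implementation): pick a random odd 32-bit multiplier once, then multiply and keep the top bits:

```python
import random

# Multiply-shift: h(x) = (a*x mod 2^32) >> (32 - bits), with `a` a
# random odd 32-bit constant chosen once per table. This family is
# known to be (approximately) universal.
def make_multiply_shift(bits: int, rng=random.Random(0)):
    a = rng.getrandbits(32) | 1  # force odd
    def h(x: int) -> int:
        return ((a * x) & 0xFFFFFFFF) >> (32 - bits)
    return h
```

The fixed seed here is only for reproducibility of the sketch; a real table would draw `a` from a proper entropy source at construction time.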



