Heheh a bit tangential, but a long time ago, I had a similar thought: how much performance could we gain if we just compared hash values (typically integers) and avoided comparing actual keys -- and the pointer-chasing that entails -- as far as possible?
The problem is that for a regular hash table, eventually keys must be compared, because two keys could have the same hash value. So maybe we could relegate key comparisons to only the cases where we encounter a collision.
The only case where this can work is when the set of keys that could ever be looked up is static. Otherwise we could always get a lookup for a new, non-existent key that creates a new collision and returns the wrong value. Still, there are cases where this could be useful, e.g. looking up enum values, or a frozendict implementation. Something like minimal perfect hashing, but simpler.
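If it helps to see the idea written down, here's a minimal sketch of the "compare hashes first, keys only on collision" approach, assuming the key set is frozen up front (the class name and layout are made up for illustration):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: a read-only map over a fixed key set that compares
    // hash codes first and only calls equals() on keys whose hash collides
    // with another key already in the set.
    final class HashOnlyFrozenMap<K, V> {
        private final Map<Integer, V> byHash = new HashMap<>();
        private final Set<Integer> collidingHashes = new HashSet<>();
        private final Map<K, V> slowPath = new HashMap<>(); // consulted only on collisions

        HashOnlyFrozenMap(Map<K, V> source) {
            for (Map.Entry<K, V> e : source.entrySet()) {
                int h = e.getKey().hashCode();
                if (byHash.containsKey(h)) {
                    collidingHashes.add(h); // two distinct keys share this hash
                } else {
                    byHash.put(h, e.getValue());
                }
                slowPath.put(e.getKey(), e.getValue());
            }
        }

        V get(K key) {
            int h = key.hashCode();
            if (collidingHashes.contains(h)) {
                return slowPath.get(key); // rare path: real key comparison
            }
            // Only safe if lookups are limited to keys from the original set; an
            // unknown key that happens to share a hash would return a wrong value.
            return byHash.get(h);
        }
    }

It keeps exactly the weakness described above, so it only makes sense for closed key sets like enum lookups or a frozendict-style structure.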
In SwissTable, the main difference is that you only get a probabilistic signal of presence (kind of like a Bloom filter), but the core idea feels very similar.
Not a different algorithm, but around 15 years ago I looked at the JDK’s HashMap implementation and saw it was doing a loop to calculate the next highest power of two (after some multiplier) to determine the table capacity.
For fun, I swapped it for a bitwise calculation from Hacker’s Delight. It benchmarked surprisingly well: a multiple-percent improvement at various map sizes (even after JIT warmup). Swapping a 3-line loop for a one-line bit twiddle wasn’t the most beautiful change, but I was surprised I could beat the original with such a trivial change. It also made me wonder why it hadn’t already been done.
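For context, this is roughly the shape of the change (my reconstruction, not the exact JDK source): the doubling loop versus the Hacker’s Delight style bit twiddle.

    // Roughly the old approach: double the capacity until it reaches the target.
    static int nextPowerOfTwoLoop(int required) {
        int capacity = 1;
        while (capacity < required) {
            capacity <<= 1;
        }
        return capacity;
    }

    // The bit-twiddling replacement: smear the highest set bit downward, then
    // add one. (Assumes 0 < required <= 2^30, which HashMap's capacity bounds
    // guarantee anyway.)
    static int nextPowerOfTwoBits(int required) {
        int n = required - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return n + 1;
    }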
I didn’t have any interest in trying to push it back upstream, but it was fun to do.
Ohhh, I went digging after your comment and I think this is exactly what you were referring to: JDK-7192942 ("Inefficient calculation of power of two in HashMap").
I honestly love hearing about these hidden gem micro-optimizations.
The SWAR (SIMD-within-a-register) numbers are strictly better than the SIMD versions as well as the standard library baseline. Why is that? SIMD should be strictly faster if the machine supports it, since the SWAR max bitwidth is 64, while SIMD starts at 128 bits.
The Java SIMD API used here must not result in using actual SIMD machine code.
Thanks for the great point.
This is actually the main topic I'm working on for the next post.
It's understandable to expect SIMD to win purely because it's wider, but in practice the end-to-end cost matters more than the raw vector length.
With the Java Vector API, the equality compare can indeed be compiled down to real SIMD instructions, yet the overall path may still lose if turning a VectorMask into a scalar bitmask is expensive. The "best case" is a vector compare followed by a single instruction that packs the result into a bitmask; if the JIT doesn't hit that lowering, it can fall back to extra work such as materializing the mask and repacking it in scalar code. From what I can tell, they have been working on an intrinsic for VectorMask.toLong (https://bugs.openjdk.org/browse/JDK-8273949).
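For reference, that best case looks roughly like this with the Vector API (my sketch, requires the jdk.incubator.vector module; names are illustrative):

    import jdk.incubator.vector.ByteVector;
    import jdk.incubator.vector.VectorMask;
    import jdk.incubator.vector.VectorSpecies;

    // Sketch only: compare a 16-byte group of control bytes against a tag and
    // pack the result into a scalar bitmask. Whether toLong() lowers to a single
    // move-mask style instruction is exactly the open question here.
    final class VectorProbe {
        private static final VectorSpecies<Byte> SPECIES = ByteVector.SPECIES_128;

        // Bit i of the result is set if ctrl[offset + i] == tag.
        static long matchMask(byte[] ctrl, int offset, byte tag) {
            ByteVector group = ByteVector.fromArray(SPECIES, ctrl, offset);
            VectorMask<Byte> eq = group.eq(tag);
            return eq.toLong();
        }
    }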
Also, SWAR avoids that entire transition by staying in general-purpose registers and producing the bitmask directly with a small, predictable sequence of bit operations. For small fixed-size probes, that simplicity often outweighs SIMD's theoretical advantage, and on some CPUs heavier vector usage can even come with frequency effects that further narrow the gap. So, I'd guess the more likely explanation isn't that the Vector API never uses SIMD, but that the overhead around the compare (especially the mask-to-scalar step) eats up the width advantage.
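To show what staying in registers looks like, here's a SWAR sketch of the same "compare a group against a tag" probe, using the well-known zero-byte detection trick (again just a sketch, not the article's exact code):

    // SWAR sketch: flag the bytes of a 64-bit group equal to `tag` using only
    // general-purpose-register arithmetic. Borrow propagation can produce false
    // positives in bytes above a real match, which open-addressing tables
    // tolerate because the subsequent key comparison filters them out.
    final class SwarProbe {
        private static final long LSB = 0x0101010101010101L;
        private static final long MSB = 0x8080808080808080L;

        // The high bit of each byte in the result is set where group's byte equals tag.
        static long matchMask(long group, byte tag) {
            long x = group ^ (LSB * (tag & 0xFFL)); // matching bytes become 0x00
            return (x - LSB) & ~x & MSB;            // standard zero-byte detection
        }
    }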
I'll take a closer look at how it compiles down to machine code and share what I find.
P.S.
Benchmark results can vary a lot depending on the environment (OS, CPU and JDK/JIT version and flags), so it’s also possible the benchmark picture changes on a different setup.
The article wasn’t great at laying out the concepts at the start. As I understand it, the big idea is essentially a Bloom filter as the first phase of a retrieval.
In a concurrent environment, I wonder if the overhead of wrapping every API call in synchronized would make this significantly slower than using ConcurrentHashMap.
Thanks. This is actually one of the topics I really want to tackle next.
If we just wrap every API call with synchronized, I'd expect heavy contention (some adaptive spinning and then OS-level park/unpark), so it'll likely bottleneck pretty quickly.
Doing something closer to ConcurrentHashMap (locking per bin rather than globally) could mitigate that.
For the open-addressing table itself, I'm also considering adding lightweight locking at the group level (e.g., a small spinlock per group) so reads stay cheap and writes only lock a narrow region along the probe path.
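To make that concrete, here's a very rough sketch of just the locking scaffold for that idea (all names hypothetical, the table itself and the cheap-read memory-ordering story are omitted, and the group count is assumed to be a power of two):

    import java.util.concurrent.atomic.AtomicBoolean;

    // Hypothetical sketch of per-group locking: one tiny spinlock per group, so
    // a writer only locks the group(s) along its probe path instead of taking a
    // single table-wide monitor.
    final class GroupLocks {
        static final class SpinLock {
            private final AtomicBoolean held = new AtomicBoolean(false);
            void lock()   { while (!held.compareAndSet(false, true)) Thread.onSpinWait(); }
            void unlock() { held.set(false); }
        }

        private final SpinLock[] locks;

        // groupCount is assumed to be a power of two so masking works as modulo.
        GroupLocks(int groupCount) {
            locks = new SpinLock[groupCount];
            for (int i = 0; i < groupCount; i++) {
                locks[i] = new SpinLock();
            }
        }

        void withGroupLock(int groupIndex, Runnable criticalSection) {
            SpinLock lock = locks[groupIndex & (locks.length - 1)];
            lock.lock();
            try {
                criticalSection.run();
            } finally {
                lock.unlock();
            }
        }
    }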
I think that's a great idea! I just checked one of my larger projects and it's 55% ConcurrentHashMap and 45% HashMap, so I'd personally benefit from this plan.
So I came up with a silly project around that hash-only-comparison idea and benchmarked it in Java, C# and a hacky CPython "extension": https://github.com/kunalkandekar/Picadillo
A very micro optimization, but it turned out there was a 5% to 30% speedup on a PC of that era.