
Pure Go implementation of D. J. Bernstein's cdb constant database library - SlimHop
https://github.com/jbarham/go-cdb
======
cypherpunks01
What are people using this 'cdb' for, IRL? Sounds interesting, but this is the
first I've heard of it.

~~~
tlack
It's useful if you have data that is easy to cache (e.g., rebuilt every 6
hours) but very commonly accessed. Because the lookups are so quick (two
disk seeks) it's almost raw disk speed. But yeah, rebuilding the files is an
offline process (build a new file and swap it in using a rename), so your
data has to be cache-friendly.
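
The "two seeks" come from cdb's on-disk layout: a 256-entry pointer table at
the front of the file selects one of 256 hash tables, and the remaining hash
bits pick a slot to start probing in that table. A minimal Go sketch of the
hash and slot arithmetic from DJB's cdb spec (file I/O omitted; `tableLen`
here is a made-up table size for illustration):

```go
package main

import "fmt"

// cdbHash is the hash from DJB's cdb spec: start at 5381, then
// h = (h*33) XOR c for each byte c of the key.
func cdbHash(key []byte) uint32 {
	h := uint32(5381)
	for _, c := range key {
		h = ((h << 5) + h) ^ uint32(c)
	}
	return h
}

func main() {
	key := []byte("hello")
	h := cdbHash(key)

	// Seek 1: the header is 256 (pos, len) pairs; the low 8 bits of
	// the hash pick which of the 256 hash tables holds this key.
	table := h & 0xff
	fmt.Printf("hash=%d table=%d\n", h, table)

	// Seek 2: within that table, the remaining hash bits pick the
	// slot to start probing from.
	tableLen := uint32(1024) // hypothetical table size for illustration
	slot := (h >> 8) % tableLen
	fmt.Printf("start probing at slot %d\n", slot)
}
```

In the common case the probe hits on the first slot, so a lookup really is
one seek into the pointer table and one into the hash table's region.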

It's a good alternative to memcache if your data is larger than what memcached
can support in RAM.

In the early 2000s I used it to implement most of the frontend for a PPC
marketplace for search engines. Held up well. These days I'd just use
memcached or redis.

~~~
yummyfajitas
_It's a good alternative to memcache if your data is larger than what
memcached can support in RAM._

Unless you are running memcached on an EC2 Large instance (8 GB) or bigger:

"No random limits: cdb can handle any database up to 4 gigabytes."

~~~
jbooth
It's pretty easy to throw together a variant using longs for position instead
of unsigned ints, the rest of the code stays the same. Slightly more overhead
in the file but as long as the items you're storing are bigger than a few
bytes it's not a huge deal.

Anyway, it's useful for cases where you want to ship out a big dictionary
once a day or so and you need fast lookups, but it doesn't have to be updated
transactionally.

------
hncommenter13
For those interested, there are also implementations of cdb in java:
<http://www.strangegizmo.com/products/sg-cdb/>
<https://github.com/sunnygleason/g414-hash>

------
gringomorcego
Okay, semi-related:

Why is the speed of the Go compiler so important? Why not just use an
incremental compiler? Why the hell would you want to recompile a 100k+ line
program when there are known better alternatives?

Just doesn't make sense to me.

~~~
pcwalton
When your compiler is doing interprocedural optimizations like inlining (which
Go's does), then changing an upstream dependency generally requires that all
downstream dependencies be recompiled as well. So incremental recompilation
isn't a panacea, and I think the Go designers made the right choice in
striving to make compilation fast.

Of course, you can do something like incremental compilation only at -O0 with
no inlining, which is what I suspect we'll end up doing in Rust (the relevant
bug is [1]).

[1]: <https://github.com/mozilla/rust/issues/2369>

