

Bigdis, Redis' ugly brother - uggedal
http://github.com/antirez/Bigdis

======
pierrefar
My first thought: pair Bigdis with a FUSE filesystem mapping to Amazon S3, and
you can have a huge database that's stored in triplicate, backed up, accessible
from anywhere, and scalable.

~~~
antirez
good point IMHO. The not-exactly-efficient encoding of one file per key was
chosen not just because the idea is centered on large values, but also because
it lets Bigdis exploit filesystem-level features.

For instance, if you happen to have a distributed filesystem running, the
effect is a distributed DB.
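The one-file-per-key encoding antirez describes can be sketched in a few lines. This is a minimal illustration of the idea, not Bigdis' actual code or API; the class and method names are hypothetical, and keys are hex-encoded to keep filenames filesystem-safe.

```python
import os
import tempfile

class FileKV:
    """Minimal sketch of a one-file-per-key store: each key maps to a
    filename, each value is that file's contents. Point `root` at a
    distributed filesystem mount and you get a distributed DB for free."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _path(self, key):
        # Hex-encode the key so path separators and other unsafe
        # characters can't escape the root directory.
        return os.path.join(self.root, key.encode("utf-8").hex())

    def set(self, key, value):
        # Write to a temp file, then rename: on POSIX filesystems the
        # rename is atomic, so readers never see a half-written value.
        fd, tmp = tempfile.mkstemp(dir=self.root)
        with os.fdopen(fd, "wb") as f:
            f.write(value)
        os.rename(tmp, self._path(key))

    def get(self, key):
        try:
            with open(self._path(key), "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def delete(self, key):
        try:
            os.remove(self._path(key))
        except FileNotFoundError:
            pass
```

The inefficiency is real (one inode and at least one filesystem block per key), which is why this trade-off only makes sense for large values.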

~~~
pierrefar
I've been thinking exactly along those lines for a while now. It started with
me backing up MongoDB to S3, using one document = one S3 object. Each
S3-stored document is a gzipped JSON string. From there, the idea of a
write-through memory cache that saves to S3 is a quick leap, followed by
allowing multiple servers to access the same S3 store.

The end result is a really eventually consistent, scalable key => document store.
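The pattern described above can be sketched as follows. This is a hypothetical illustration, not pierrefar's actual code: `backend` stands in for the S3 bucket (in practice a wrapper over something like boto3's `put_object`/`get_object`), and here any dict-like object works, which also keeps the sketch self-contained.

```python
import gzip
import json

class WriteThroughDocStore:
    """Documents are serialized to gzipped JSON and written through an
    in-memory cache straight to a blob backend. Multiple servers sharing
    the same backend see each other's writes only on cache misses, hence
    'eventually consistent'."""

    def __init__(self, backend):
        self.backend = backend  # stand-in for an S3 bucket; a dict here
        self.cache = {}         # write-through memory cache

    def put(self, key, document):
        blob = gzip.compress(json.dumps(document).encode("utf-8"))
        self.cache[key] = document   # update the cache...
        self.backend[key] = blob     # ...and persist immediately

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        blob = self.backend[key]     # cache miss: fetch and decompress
        document = json.loads(gzip.decompress(blob).decode("utf-8"))
        self.cache[key] = document
        return document
```

A second server pointed at the same backend reads the same documents, which is the multi-server sharing step described above.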

Yes I'm quite excited by Bigdis!

------
rb2k_
It would be awesome if redis supported a "bigdis" type that one could use to
archive things that should still be reachable by key, but don't need any of
the fancy data structures that redis provides.

~~~
petrilli
That sounds like any other KV store then. One of the usefulnesses (wow that's
actually a word) of Redis is that it has all the rich data types and atomic
operations on them. If you're not going to use them, why not look at
TokyoCabinet or something similar?

~~~
rb2k_
I didn't say that redis should DROP the other types. I'd just like to be able
to have a redis db for data that I'd like to keep around, but don't plan on
working with a lot.

