
Show HN: Akumuli – time-series database - chaotic-good
https://github.com/akumuli/Akumuli
======
misframer
The wiki states that random reads are not needed for a time series
database[0]. I think they are, at least for the use case I'm interested in.

Let's say I have 1000 different metrics at 1-second resolution for a day.
That's 1000 * 86400 values. Now suppose I want to get the points for a single
metric for the entire day. Won't that require a scan of the entire data set
(in other words, will this read 999 * 86400 values I'm not interested in)? How
is this different from a table in an RDBMS that's only indexed by a timestamp?

[0] [https://github.com/akumuli/Akumuli/wiki/How-it-
works](https://github.com/akumuli/Akumuli/wiki/How-it-works)
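To put rough numbers on the question (8 bytes per value is my assumption, not from the wiki):

```python
# Back-of-envelope cost of a full scan vs. an ideal per-series read,
# using the numbers from the example above (1000 metrics, 1s resolution, 1 day).
metrics = 1000
seconds_per_day = 86_400
value_size = 8  # bytes per value -- an assumption for illustration

total_values = metrics * seconds_per_day   # 86,400,000 values scanned
wanted_values = seconds_per_day            # 86,400 values actually needed
wasted = total_values - wanted_values      # 999 * 86,400 values read for nothing

print(f"scanned: {total_values:,} values ({total_values * value_size / 1e6:.0f} MB)")
print(f"useful:  {wanted_values / total_values:.2%} of the scan")
```

So a single-metric query touches roughly 700 MB to return 0.1% of it, which is the same asymptotic behavior as a timestamp-only index in an RDBMS.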

~~~
chaotic-good
Yes, it will be equivalent to a DB table indexed by timestamp. All data will be
scanned, but 86,400,000 values can be scanned in roughly 10 seconds. In the
future I'm planning to implement some indexing based on bloom filters to speed
this up. For now it is a write-optimized storage engine.
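The bloom-filter idea would work something like this: keep a small filter per on-disk chunk recording which series IDs that chunk contains, and skip any chunk whose filter says the series is definitely absent. A minimal sketch (the per-chunk layout and the hashing scheme are my assumptions, not Akumuli's actual format):

```python
import hashlib

class BloomFilter:
    """Tiny bloom filter over series IDs. False positives are possible,
    false negatives are not, so skipping a chunk on a miss is always safe."""
    def __init__(self, bits=1024, hashes=3):
        self.bits = bits
        self.hashes = hashes
        self.array = bytearray(bits // 8)

    def _positions(self, key):
        # Derive k bit positions from salted SHA-256 digests of the key.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def add(self, key):
        for p in self._positions(key):
            self.array[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        return all((self.array[p // 8] >> (p % 8)) & 1
                   for p in self._positions(key))

# One filter per chunk: a query scans only chunks that might hold the series.
chunk_filters = []
for chunk_series in (["cpu", "mem"], ["disk"], ["cpu"]):
    f = BloomFilter()
    for s in chunk_series:
        f.add(s)
    chunk_filters.append(f)

# Chunks 0 and 2 must appear; chunk 1 is almost certainly skipped.
to_scan = [i for i, f in enumerate(chunk_filters) if f.might_contain("cpu")]
print(to_scan)
```

This keeps the write path append-only (filters are built as chunks are written) while cutting the read amplification for sparse queries.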

~~~
misframer
How is this any better than LevelDB?

~~~
chaotic-good
Faster sequential writes. No compaction step needed. Constant amount of disk
space is used.
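The "constant disk space" claim suggests a preallocated circular log: values are appended sequentially, and once the file is full the oldest records are overwritten in place, so no LevelDB-style compaction pass is ever needed. A toy illustration (fixed 16-byte records in an in-memory list standing in for a preallocated file; this is my sketch of the design, not Akumuli's actual on-disk format):

```python
import struct

class RingLog:
    """Fixed-capacity append log: sequential writes, oldest entries
    overwritten when full, disk usage constant by construction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity   # stand-in for preallocated file slots
        self.head = 0                    # next write position
        self.count = 0

    def append(self, ts, value):
        # Pack a fixed-size binary record: int64 timestamp + float64 value.
        self.slots[self.head] = struct.pack("<qd", ts, value)
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def records(self):
        # Yield surviving records from oldest to newest.
        start = (self.head - self.count) % self.capacity
        for i in range(self.count):
            yield struct.unpack("<qd", self.slots[(start + i) % self.capacity])

log = RingLog(capacity=4)
for t in range(6):                 # write 6 records into 4 slots
    log.append(t, float(t))
print([ts for ts, _ in log.records()])  # oldest two overwritten -> [2, 3, 4, 5]
```

The trade-off is that retention is bounded by the preallocated size rather than growing with the data.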

------
fasteo
Looks good. Is it deployed in production? Any figures you can share?

~~~
chaotic-good
It's not deployed in production yet. What figures are you interested in?

~~~
fasteo
Figures from a production deployment

