Looks really interesting for Graphite-like use cases.
Anyway, I'm interested in time series, so I read the article and tried to understand the data structure, but to be honest it opens up more questions than it answers. I do applaud people who try to describe the data structures at the core of their app, though. Thanks for that.
1. What is the exact data structure of a leaf? You mention a leaf node can hold 1000 data points.
2. Why is using a WAL impossible? That should be possible for any data structure.
3. In your example, if level 3 is full and everything gets merged into level 4, there are no levels 1-3. How does one query the data structure? Does it maintain a pointer from the root to level 4?
4. Related to the above: if I want to query all data from last month until now, which happens to be the start of level 4, will it first go to the root, then to the level 2 leaf, from there to the level 2 SBlock, from there to the level 3 leaf, then the level 3 SBlock, then the level 4 leaf, then the level 4 SBlock, then the first level 4 leaf? That seems like a lot of random access. How many IOPS does a lookup need?
5. SBlocks need to be kept in memory. If I have ten million time series (not uncommon; it can be much more), each with 3 levels, then upon startup the app has to load 30M SBlocks into memory?
6. You say that several trees can be interleaved in a single file; how does that not break the linear I/O advantage of columnar layouts?
7. How do you store information in the "inner nodes" (SBlocks?) to speed up aggregation? Do you store every possible aggregation in there, e.g. sum, avg, min, max, stddev, ...?
8. The storage format of an individual time series is only part of what a TSDB needs to do; another part is finding all the time series relevant to a particular query. How does that work?
A) good write performance
B) no write amplification
C) low RAM usage
I think there are some interesting ideas in this structure; to me it looks more like a linked list of one-level-deep B+ trees than one big overall tree.
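Rough sketch of the shape I mean (the names are mine, guessed from the article, not its actual code):

    type Point struct {
        TS  int64
        Val float64
    }

    type Leaf struct {
        Points []Point // ~1000 data points from a single series
    }

    type SBlock struct { // "superblock", one level of inner node
        Leaves []*Leaf // up to 32 leaves, with per-leaf summaries for pruning
        Next   *SBlock // sibling link, the "linked list" part
    }

    type Tree struct {
        Roots []*SBlock // a list of roots rather than one big root
    }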
2. Because there are a lot of data structures. I'm using a tree per series, and the database can easily store hundreds of thousands of series. Creating a WAL per series is not feasible.
3. It maintains a list of roots.
4. One I/O operation per node. You will fetch a leaf node for every ~1000 data points and a superblock for every 32 leaf nodes. It's not as bad as it sounds, because you will read data for one series only. To span 4 levels the series would have to contain tens of millions of points.
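Roughly, with ~1000 points per leaf and 32 leaves per superblock, each level holds 32x the previous one:

    level 1:  1 leaf        ~1 thousand points
    level 2:  32 leaves     ~32 thousand points
    level 3:  32^2 leaves   ~1 million points
    level 4:  32^3 leaves   ~33 million points

Hence "tens of millions" before a series grows a fourth level.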
5. Yes. You will need a beefy machine with a lot of RAM for this.
6. Random reads are fast on modern SSDs. It's optimized for SSDs (I simply don't have a computer with an HDD).
7. It stores only composable aggregations: min, max, count, sum, and min/max timestamps.
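"Composable" here means a parent node can combine its children's summaries without rereading any raw points. A minimal sketch of the idea (my illustration, not the project's actual code; the builtin min/max need Go 1.21+):

    package main

    import "fmt"

    type Summary struct {
        Min, Max     float64
        Sum          float64
        Count        uint64
        MinTS, MaxTS int64 // earliest/latest timestamp in the subtree
    }

    // merge combines two subtree summaries into the parent's summary.
    func merge(a, b Summary) Summary {
        return Summary{
            Min:   min(a.Min, b.Min),
            Max:   max(a.Max, b.Max),
            Sum:   a.Sum + b.Sum,
            Count: a.Count + b.Count,
            MinTS: min(a.MinTS, b.MinTS),
            MaxTS: max(a.MaxTS, b.MaxTS),
        }
    }

    func main() {
        l := Summary{Min: 1, Max: 5, Sum: 9, Count: 3, MinTS: 100, MaxTS: 120}
        r := Summary{Min: 2, Max: 8, Sum: 10, Count: 2, MinTS: 130, MaxTS: 140}
        fmt.Printf("%+v\n", merge(l, r))
    }

avg falls out as Sum/Count; stddev would compose too, but only if you also kept a sum of squares.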
8. All series names are stored in memory. At query time this memory is scanned using a regexp to find the relevant series names and their ids. This is a kind of temporary solution, but it works well enough for datasets with small cardinality (around 100K series).
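Something like this, conceptually (a toy sketch, not the actual code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // All series names live in RAM, mapped to their ids.
    var seriesIDs = map[string]uint64{
        "cpu.user.host1": 1,
        "cpu.user.host2": 2,
        "mem.free.host1": 3,
    }

    func findSeries(pattern string) ([]uint64, error) {
        re, err := regexp.Compile(pattern)
        if err != nil {
            return nil, err
        }
        var ids []uint64
        for name, id := range seriesIDs { // linear scan: O(total series) per query
            if re.MatchString(name) {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, _ := findSeries(`^cpu\.user\.`)
        fmt.Println(ids) // ids of every series whose name matches
    }

The linear scan is why it only holds up at small cardinality.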
Nice work and thank you for sharing it.
Random reads and writes are significantly slower if you issue everything from one thread. To speed things up you should write in parallel (for example using Linux AIO + O_DIRECT, or libuv + O_DIRECT). OS-level buffering plus many OS threads will deliver good random write throughput as well.
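The cheap version of the second option in Go is just several goroutines issuing positioned pwrite(2)-style writes to the same file and letting the page cache absorb them. A sketch under made-up assumptions (file name, block size, and worker count are invented; the O_DIRECT/AIO variants are omitted):

    package main

    import (
        "math/rand"
        "os"
        "sync"
    )

    func main() {
        f, err := os.OpenFile("data.bin", os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        block := make([]byte, 4096) // one 4 KiB block, shared read-only
        var wg sync.WaitGroup
        for w := 0; w < 8; w++ { // 8 concurrent writers instead of 1
            wg.Add(1)
            go func(seed int64) {
                defer wg.Done()
                r := rand.New(rand.NewSource(seed))
                for i := 0; i < 1000; i++ {
                    off := int64(r.Intn(1<<20)) * 4096 // random aligned offset
                    f.WriteAt(block, off)              // WriteAt is pwrite: safe to call concurrently
                }
            }(int64(w))
        }
        wg.Wait()
    }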
There are other effects to consider, e.g. read-write interference.
So as I see it, you'd be constrained by 32 GB Optane modules today, but they will eventually (one, maybe two years) be 2 TB modules, like the Samsung 960 Pro modules are today. And an M.2 port is really just a PCIe slot, so you're looking at systems with maybe 32 TB of Optane storage on the high end within the next 5 years.
Otherwise, a lot of these "actually, you need X for time series!" arguments are just talking past each other, because "time series" covers any number of actual access patterns.
- "there is no longer a single file per series but instead a handful of files holds chunks for many of them"
- "We partition our horizontal dimension, i.e. the time space, into non-overlapping blocks. Each block acts as a fully independent database containing all time series data for its time window."
I don't believe this will work out well, because it will introduce read amplification at query time (compared to the file-per-series approach they're using now).
And I'm really curious how they managed to get 20M writes per second on a laptop. The article states that they're using the compression algorithm from the Gorilla paper, and the Gorilla paper's authors claim they managed to get 1.5M on a single machine.
Maybe I'm getting this all wrong, but don't the leaves also represent chunked data, which is compressed?
The Prometheus solution also sequentially places compressed chunks for the same series. The time slicing actually has a lot of benefits and can simply be seen as the first level of the described B+ tree. An index of chunks for a series can then be seen as the second level.
The potential read amplification here seems completely equivalent. Just from my high-level view, all properties of the read and write path seem almost identical.
Leaf nodes contain data from one series (data that should be read together), whereas an SSTable with time-series data contains many series, and there is no guarantee that all of them will be used by the query.
>> The Prometheus solution also sequentially places compressed chunks for the same series.
I'm not really that familiar with Prometheus internals, especially the indexing part. As I understand it, it doesn't align writes, so there is a lot of write amplification at the lower level, which translates to cell degradation and suboptimal performance, but I could be wrong here.
It'll end up about the same in practice: only the time-series data that needs to be read is read.
Query performance is looking quite a bit better with this design.
> And I'm really curious how they managed to get 20M writes per second on laptop.
I understand that was a micro-benchmark of one part of the system. The whole system looks to be roughly in line with the Gorilla numbers.
This makes sense now. I've found that the compression algorithm's performance affects the overall throughput in a big way. On a modern SSD the entire workload is CPU-bound.
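To make the CPU cost concrete: Gorilla-style encoding does per-point bit twiddling. Here is a stripped-down delta-of-delta pass over timestamps (my simplification; the real codec packs the results into variable-length bit buckets and also XOR-compresses the values):

    package main

    import "fmt"

    // deltaOfDelta emits the first timestamp raw, then the change in deltas;
    // for regularly spaced points almost every output is zero.
    func deltaOfDelta(ts []int64) []int64 {
        out := make([]int64, 0, len(ts))
        var prev, prevDelta int64
        for i, t := range ts {
            if i == 0 {
                out = append(out, t)
            } else {
                delta := t - prev
                out = append(out, delta-prevDelta)
                prevDelta = delta
            }
            prev = t
        }
        return out
    }

    func main() {
        fmt.Println(deltaOfDelta([]int64{1000, 1010, 1020, 1031})) // [1000 10 0 1]
    }

Every ingested point pays for work like this, so once the SSD keeps up, the encoder sets the ceiling.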