"time-series database" is some of the most overhyped nonsense since noSQL.
Time-series data is just data with time as a primary component. It comes in all shapes and volumes, but if you have a lot of data and are running heavy OLAP queries, then we already have an entire class of capable databases.
Use any modern distributed relational column-oriented database, set primary key to metric id + timestamp, and you'll be able to scale easily with full SQL and joins. You can keep your other business data there too, along with JSON, geospatial, window functions, and all the other rich analytical queries available with relational databases.
We have trillion row tables that work great. No special "TSDB" needed.
While I understand your point, you are quite mistaken if you think that time is just another key. Dealing with time properly requires a concept of point distance, similar to GIS systems requiring 2d distance understanding. You cannot do joins on time with SQL databases unless you want to throw away important data.
As an example, in the industry I work in, you may have no readings for days or weeks, and then hundreds of readings from the same sensor. Why? Many systems in industrial environments send new readings only "on-change", and assume the underlying data storage architecture will forward fill to in-between times. This is why the practically ancient time series architecture of data historians still dominates in these use cases.
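To make the "forward fill" semantics concrete, here's a minimal sketch in Python (function names and sample values are made up): given sparse on-change readings, the value in effect at any query time is the last reading at or before that time.

```python
# Sketch of "forward fill" (last observation carried forward), the
# semantics on-change sensors assume from the storage layer.
import bisect

def forward_fill(readings, query_times):
    """readings: sorted list of (timestamp, value); returns the value
    in effect at each query time (None before the first reading)."""
    times = [t for t, _ in readings]
    out = []
    for qt in query_times:
        i = bisect.bisect_right(times, qt) - 1  # last reading at or before qt
        out.append(readings[i][1] if i >= 0 else None)
    return out

# A sensor that reported at t=0 and then not again until t=1000:
print(forward_fill([(0, 20.5), (1000, 21.0)], [0, 500, 999, 1000]))
# → [20.5, 20.5, 20.5, 21.0]
```

The point is that the "missing" timestamps aren't missing data; they carry the implicit value of the last report, which a plain row-at-a-time model doesn't represent.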
In fact, for many time series functions you actually have to throw away the notion of relational joins to be able to efficiently perform time-preserving joins. Window functions only work in basic use cases with relatively small amounts of data where you're aggregating.
Vertica has this functionality and it has been there for years. It's a fully functional database and you can do time series joins, and gap filling with linear interpolation or a constant. You can define the intervals at which you want the data points. And you can scale from a few gigs to petabytes of data. https://www.vertica.com/docs/9.1.x/HTML/index.htm#Authoring/...
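For readers who haven't seen it, a rough sketch of what interval-based gap filling with linear interpolation does, in plain Python (this is the idea behind Vertica's time series clause, not its implementation; all names here are made up):

```python
# Resample sparse (t, v) points onto a fixed grid, linearly interpolating
# inside gaps between known points.
import bisect

def resample_linear(points, start, stop, step):
    """points: sorted (t, v) pairs; emit one value per step in [start, stop]."""
    ts = [t for t, _ in points]
    out = []
    t = start
    while t <= stop:
        i = bisect.bisect_left(ts, t)
        if i < len(ts) and ts[i] == t:
            out.append((t, points[i][1]))           # exact hit
        elif 0 < i < len(ts):
            (t0, v0), (t1, v1) = points[i - 1], points[i]
            frac = (t - t0) / (t1 - t0)
            out.append((t, v0 + frac * (v1 - v0)))  # interpolate in the gap
        else:
            out.append((t, None))                   # outside the data range
        t += step
    return out

print(resample_linear([(0, 0.0), (10, 10.0)], 0, 10, 5))
# → [(0, 0.0), (5, 5.0), (10, 10.0)]
```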
I agree! Vertica's temporal capabilities are marvelous and the engineers who worked on the planner optimizations for the time extensions are brilliant.
There are of course trade-offs to the approach Vertica takes -- look at StreamBase for a very different take on the problem, another Stonebraker project.
And of course historians represent yet another take, optimized for point-in-time queries that are native and don't need the processing extensions Vertica uses.
I'm not sure where the complexity is that you claim, nor what it has to do with data historians?
So what if there are missing rows? This doesn't affect the database and any aggregations will work fine. Databases don't "fill in" data, but you can definitely write whatever SQL you need to create averages and buckets to smooth out results.
From reading your website, it seems you're talking about the "last value recorded" as of a certain time, which doesn't seem to be a common query but is totally possible. KDB+ has "asof" joins and others can handle it with window functions using last_value().
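An "asof" join is simple enough to sketch in a few lines of Python (this is the general idea, not KDB+'s implementation; the trade/quote names are just illustrative):

```python
# For each left-side row, pick the most recent right-side row at or
# before its timestamp -- the classic "asof" join.
import bisect

def asof_join(left, right):
    """left: [(t, value)], right: sorted [(t, value)];
    pairs each left row with the last right value as of its time."""
    rt = [t for t, _ in right]
    joined = []
    for t, lv in left:
        i = bisect.bisect_right(rt, t) - 1
        joined.append((t, lv, right[i][1] if i >= 0 else None))
    return joined

trades = [(5, 'BUY'), (12, 'SELL')]
quotes = [(1, 100.0), (10, 101.5)]
print(asof_join(trades, quotes))
# → [(5, 'BUY', 100.0), (12, 'SELL', 101.5)]
```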
We run queries on a table containing 2.7+ trillion rows of data that has no set pattern and infinite cardinality, and results return within seconds. Window functions and joins work without issue. Have you actually tried using a columnstore?
Relational databases represent a column/row-oriented architecture. Data historians are a specialized, non-relational, time-oriented architecture. Using time as a key in a relational index implies that only ordering is important, but that is not the case. Distance between points in time is extremely important because time operates on a continuous 1d line and data points are represented at varying distances between each other on that line. Data historians are architected to both preserve this temporal relationship and take advantage of this by eliminating duplicate data, employing temporal compression techniques to be able to store millions of readings per second for years' worth of data.
> From reading your website
My website doesn't have much to do with this because Sentenai isn't a time series database system. I did, however, spend most of my time in research working on temporal data systems, and have been fortunate to collaborate with or learn from researchers who have spent decades solving the unique problems that temporal data presents. What you might consider uncommon for your use cases is extremely common in manufacturing, defense and other areas.
There's a decades-old industry around database systems that handle time natively. And while many support SQL as a lingua franca, and some are column stores, they're not relational by any means as they either extend SQL to support time, or limit non-temporal joins to ensure performance. StreamBase, Kdb, Aurora and many other specialized architectures exist because one size does not fit all. Michael Stonebraker, whose work has included StreamBase, Vertica, Tamr, Postgres, Aurora, and many others, famously published this paper about the very problem: https://cs.brown.edu/~ugur/fits_all.pdf .
I appreciate your links to further reading, and I'm trying to read the Aurora paper right now, but after reading the abstract and the intro (I'm in progress right now), I can't find a case that is uniquely fit/perfect for data historians... I know this is already asking a lot, but would you mind giving me one go-to use case that really made you think "this is what purpose-built, data historian-style databases are good for"?
Every issue mentioned in the abstract/intro (which are meant to motivate the paper) seems like it can be solved as an add-on to existing application databases (albeit with their most recent developments/capabilities in mind). The very description of HADP vs DAHP systems seems silly, because it's just a question of write load, and that's fundamentally only solved with batching and efficient IO, or if you give up durability, it doesn't seem inherent to the data model. There's also assertions like:
> Moreover, performance is typically poor because middleware must poll for data values that triggers and alerters depend on
But like, postgres though: you're free to define a better/more efficient LISTEN/NOTIFY based trigger mechanism; for example, you can put highly optimized code right in the DB... Thinking of some of the cases called out in the paper, here's what I think in my head:
- Change tracking vs only-current-value -> just record changes/events, as far as tables getting super big, partitioning helps this (timescaledb does this)
- Backfilling @ request time -> a postgres extension could do this
- Alerting -> postgres does have customizable functions/procedures as well as LISTEN/NOTIFY. If the paper is right (?) that TRIGGERs don't scale, then this might be the most reasonable point.
- Approximate query answering is possible with postgres with stuff like HyperLogLog, but the paper is certainly right in that it is not implemented by default.
Maybe I'm mistaking the extensibility of postgres for the redundancy of the paradigm, akin to thinking something like "lisp is multi-paradigm so why would I use Haskell for its enhanced inference/safety".
I'm still reading the paper so maybe by the end of it it will dawn on me.
So Aurora isn't a historian, but is a complex event processing system. It's an entirely different beast that solves very specific problems around high-speed queries that could theoretically require scanning through all data stored historically for queries.
I'm not a huge fan of historians (I've spent too much of my career working with them), but I can definitely tell you where they make sense. The scenario is this:
Imagine you have a large facility with thousands of machines, each with a programmable logic controller for controls and monitoring. These machines create lots of data and so often employ data-reduction semantics, reducing data to on-change rather than sampling sensors at thousands of hertz. A single machine may have dozens or hundreds of variables to track. These tags might be hierarchical: Machine 1, subsystem 5, variable b. If you say there are 100,000 total tags to track in the facility, and they're on average sampled at 10 Hz, you need a system capable of writing a million durable timestamped values per second. Now that's child's play for, say, Google, but if you're a manufacturer, you can't afford to spend massive amounts of money on cloud systems, and usually want to do this all on a single server on the factory floor because you need realtime monitoring that can display the current value in time for every single tag. ( https://www.ws-corp.com/LiveEditor/images/SLIDES/10/3.jpg ). Ideally, in a single-node scenario, you want compression. It's not uncommon to store 100 billion timestamped values per day and keep them for a year or more for audit purposes if something goes wrong. Today, for the sake of predictive maintenance, data retention policies of up to 10 years are becoming more common.
So what would you sacrifice to be able to do efficient realtime monitoring and ingestion of millions of data points per second? You can't use queueing semantics to protect an RDBMS because logging can't take more than a 10th of a second per point. If you think about the use case, what you'd sacrifice is transactional queries and row-level joins, because you just don't need them. At the same time, this data is really sparse when you look at it from a table's perspective, so you'll want something like a column store to underly the data storage.
So what we do is throw out transactional guarantees, choose a storage system that is good at compression (roll-ups in some historians will store a formula approximating the data instead of raw data itself over a window), and prioritize speed of point retrieval for most recent "hot data" by caching it in-memory.
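One of the simplest data-reduction tricks in this family is deadband (exception) reporting: only store a reading when it moves more than a tolerance away from the last stored value. Real historians use fancier schemes (swinging-door and the like), but a minimal sketch conveys the idea (all names and values here are made up):

```python
# Deadband compression: keep a sample only when it differs from the
# last *stored* value by more than the tolerance.
def deadband_compress(samples, tolerance):
    stored = []
    for t, v in samples:
        if not stored or abs(v - stored[-1][1]) > tolerance:
            stored.append((t, v))
    return stored

raw = [(0, 20.0), (1, 20.01), (2, 20.02), (3, 21.5), (4, 21.51)]
print(deadband_compress(raw, 0.5))
# → [(0, 20.0), (3, 21.5)]
```

Combined with the forward-fill semantics above-discussed, the dropped samples are recoverable to within the tolerance, which is how you store millions of values per second on a single box.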
You can of course extend Postgres to achieve many of these things, but having done it myself, in practice it's sub-optimal in the exact same way that using bubble sort for all your programmatic sorting needs is sub-optimal.
One thing you might want to keep in mind is that many of the people involved in Aurora are the authors of Postgres. They're not arguing you can't do things in Postgres, they're arguing that in practice the RDBMS's guarantees are theoretically incompatible with high performance in the area of Complex Event Processing, because alignment between different simple events (recorded as rows in a database) can drift so far that memory requirements become prohibitive if you don't use a stream-processing architecture.
Also keep in mind that Aurora is from 2002 and many of the ideas have been implemented elsewhere over time. The great thing about Postgres is that it's perfect scaffolding on which you can build other stuff.
Most data/operational historians are separate programs on top of databases, so is that what you're actually talking about? Your papers seem to suggest that.
Stonebraker is talking about OLTP vs OLAP. I agree that they are very different scenarios.
Over the past 10 years I've had the misfortune of working with most major historians in the field (Oracle's the biggest one I never encountered) and not a single one implemented process data storage on top of a relational database. Some used relational databases to store asset metadata, so maybe that's what you're thinking of?
If you read all the way to the end Stonebraker is actually advocating for specialized architectures like, array databases (SciDB) and stream processing engines like StreamBase, which at the time was just gaining GUI-based query creation capabilities because it was difficult to teach its non-relational concepts to SQL users.
You don't have "missing rows". For time series A you have a datapoint at 12:01 AM, a datapoint at 12:02 AM, and another datapoint two weeks later at 5:04 PM. For time series B the times are different. You need some notion of whatever state the physical system was in at any given time.
Yes, I understand this as the "last value recorded" concept in my comment. KDB+ supports this with "asof" joins. Others can just do it by scanning a wider time frame or the entire table.
Null gaps in a columnstore can be skipped over basically instantaneously and usually are just zone map/index lookups. Again I question how common this query is and whether it's really worth limiting yourself to a special TSDB because of it.
> Others can just do it by scanning a wider time frame or the entire table.
"Scanning the entire table" for every request to have the last value recorded is rarely a practical option.
> KDB+ supports this with "asof" joins.
> [...]
> Again I question how common this query is and whether it's really worth limiting yourself to a special TSDB because of it.
KDB literally markets itself as a time series database. What's the point you're making again?
I think TimescaleDB lacks an "asof" function for now, but it makes up for it by having the full power of PostgreSQL for other stuff. Regardless, Time Series databases like KDB and TimescaleDB are useful.
EDIT:
it looks like TimescaleDB recommends using
ORDER BY time DESC LIMIT 1
to get the most recent value for any particular set of sources that you're SELECTing over, which would use indices and be reasonably fast.
> KDB literally markets itself as a time series database
kdb+ markets itself as OLAP/OLTP as well.
A lot of these guys market themselves as a "time series database" because kdb+ does, and they want to be compared with kdb+ by people who haven't used kdb+ (but might be considering it).
Distributed relational column-oriented databases are best at large data volumes and OLAP queries. KDB+ is one of those, even though they call it a TSDB in marketing terminology because of its FinTech customer base.
TimescaleDB is not a TSDB, it's an extension to add automatic partitioning to PostgreSQL tables. Timescale helps Postgres get more performance, but it does not give you the full capabilities of a real distributed column-oriented system. If you must use PostgreSQL though then it's a good compromise.
The query you posted does not match the discussion about the last value at a specific instant in time, only the last value ever recorded in the table for that key.
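The difference is easy to show with sqlite3 standing in for the database (table and column names made up): "last value ever" is an ORDER BY/LIMIT, while "last value as of instant t" needs an extra time predicate.

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE readings (t INTEGER, v REAL)')
db.executemany('INSERT INTO readings VALUES (?, ?)',
               [(10, 1.0), (20, 2.0), (30, 3.0)])

# Last value ever recorded for the key:
last_ever = db.execute(
    'SELECT v FROM readings ORDER BY t DESC LIMIT 1').fetchone()[0]

# Last value as of a specific instant (t = 25):
last_asof_25 = db.execute(
    'SELECT v FROM readings WHERE t <= ? ORDER BY t DESC LIMIT 1',
    (25,)).fetchone()[0]

print(last_ever, last_asof_25)
# → 3.0 2.0
```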
> Distributed relational column-oriented databases are the best at large OLAP data volumes and queries. KDB+ is one of those, even though they call it a TSDB in marketing terminology because of its FinTech customer base.
You're mistaken about Kdb's relational features. Kdb was designed as a time series processing engine using arrays (columns). Column storage doesn't have anything to do with whether a database is relational or not, and Kdb wasn't originally any more relational than the language Erlang is.
I never said relational is related to column-storage.
kdb+ has SQL semantics and relational queries, but it's a combination of the q language integrated into a database so sure, it's a superset of a relational database. Perhaps we disagree on what relational means.
My point was that the current relational features of Kdb didn't exist originally (they were grafted on later) so it's not "marketed" as a TSDB, but it is in fact a TSDB marketed as a relational DB.
The definition of relational is very precise, whether you use the domain calculus, relational calculus or relational algebra. Wiki has a good summary of what must be natively supported by a database system to be relational: https://en.wikipedia.org/wiki/Relational_algebra
If you don't implement this at the transaction log level, but implement it via emulation at the output level, you can't make full relational guarantees, so these operations are fundamental to database design.
Why does it matter what it was originally? We're talking about what the product is today, not 20 years ago.
kdb+ supports a superset of SQL and relational algebra, so it's a relational database. How it's implemented doesn't matter if it provides the functionality, which it can.
I was drawing a comparison between the Kdb runtime and the Erlang runtime, because the OP seems to be conflating the ability to emulate relational features at the application level with first-class support for relational semantics as in a relational database. Support for relational semantics can be emulated via programming languages with sophisticated runtimes like Erlang, but I wouldn't classify any database supporting a Turing complete language with a sophisticated runtime as a relational database.
In our case, "last value" isn't good enough. We do interpolation, and use compression algorithms for which interpolation minimizes reconstruction error.
There are some unique challenges to storing time series data that are different than those of relational databases. Namely, read/write asymmetry, data safety, data aggregation, and analysis of large data sets.
All modern columnstores can handle vast ingest rates and query speeds. It's all down to sharding, zone maps and sparse indexing, fast algorithms that operate on compressed data, and storage throughput. These are well-solved problems at this point.
Your blog post doesn't mention a single columnstore database though. KDB+, Clickhouse, MemSQL, or any of the GPU-powered variations will happily beat any TSDB out there.
They can't handle high cardinality. Imagine having millions of columns in the column-oriented database (70% of those columns are updated every second). Imagine that you have to add new columns all the time.
The main misconception about TSDBs is that it's just data with a timestamp. TSDBs have a multi-dimensional data model; time is only one of the dimensions.
'metric_name text' is actually a tag-value list. Many TSDBs allow you to match data by tag. Each tag should be represented by a column in your example.
Single table design will be prone to high read/write amplification due to data alignment. Usually, you need to read many series at once, so your query will turn into a full table scan. Or it will read a lot of unneeded data which happens to be located near the data you need. Writes will be slow since your key starts with the metric name. Imagine that you have 1M series and each series gets a new data point every second. In your schema it will result in 1M random writes.
Cardinality of the table will go through the roof, BTW. Every data point will add a key. Good luck dealing with this.
> 'metric_name text' is actually a tag-value list. Many TSDBs allow you to match data by tag. Each tag should be represented by a column in your example.
For the life of me, I can't figure out why this would be a good idea. I feel like I must not understand what you're saying:
If I've got a million disks that I want to draw usage graphs for, why would I put each one in a separate column?
What's the business use-case you're imagining?
> Usually, you need to read many series at once so your query will turn into full table scan.
Why do I need a full table scan if I'm going to draw some graphs?
I've got something like 4000 pixels across my screen; I could supersample by 100x and still be pulling down less data than the average nodejs/webpack app.
> Imagine that you have 1M series and each series gets a new data point every second. In your schema it will result in 1M random writes.
No that's definitely not what manigandham is suggesting. One million disks each reporting their usage means a million rows in two columns (disk name/sym, and volume) would be written (relatively) linearly.
A modern TSDB is expected to support tags. This means that every series will have a unique set of tag-value pairs associated with it. E.g. 'host=Foo OS=CentOS arch=amd64 ssdmodel=Intel545 ...'. And in the query you can pinpoint relevant series by these tags, so the tags should be searchable. For instance, I may want to see how a specific SSD model performs on a specific OS or specific app. If the set of tags is stored as JSON in one field, such queries wouldn't work efficiently.
About that 1M writes thing. You have two options. 1) Organize data by metric name first, or 2) by timestamp. In case of 2) the updates will be linear but reads will have huge amplification. In case of 1) updates will be random, but reads will be fast.
You are talking about regular relational databases. I'm talking about distributed column-oriented databases. Big difference.
You can store tags and other data in JSON/ARRAY columns. The primary key is used for automatically sharding and sorting.
Groups of rows are sorted, split into columns, compressed, and stored as partitions with metadata. This means you can 'scan' the entire table in milliseconds using metadata and then only open the partitions, and the columns inside, that you actually need for your query. There are no random writes either, it's all constant sequential I/O with optional background optimization. And because of compression, storing the same key millions of times has no real overhead.
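The metadata-pruning idea is simple enough to sketch (partition layout and names here are hypothetical): each partition carries a min/max for the sort key, so a range query only opens the partitions whose ranges overlap it.

```python
# Zone-map-style pruning: per-partition min/max metadata lets a range
# query skip most partitions without touching their data.
partitions = [
    {'min_ts': 0,    'max_ts': 999,  'rows': '...on disk...'},
    {'min_ts': 1000, 'max_ts': 1999, 'rows': '...on disk...'},
    {'min_ts': 2000, 'max_ts': 2999, 'rows': '...on disk...'},
]

def partitions_to_open(parts, lo, hi):
    """Return indices of partitions whose [min_ts, max_ts] overlaps [lo, hi]."""
    return [i for i, p in enumerate(parts)
            if p['max_ts'] >= lo and p['min_ts'] <= hi]

print(partitions_to_open(partitions, 1500, 2100))
# → [1, 2]
```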
As stated several times before, we deal with this everyday on trillion row tables inserting 100s of billions of rows daily. Queries run in seconds. We do just fine.
We have a lot of data stored in Postgres JSON fields at my work. Around three months ago, we were trying to optimize some queries by adding sub-key indexing to the JSON field. We tried multiple times, but Postgres seemed to keep using sequential scan on the records, rather than the JSON index. So, we just decided to normalize the data and use proper foreign key fields for query performance.
Compared to what? Economically viable is very vague and relative. Columnar storage can easily reach 90% compression levels, is faster to read, and vectorized processing beats per-row/record iteration, so there's a reason it's the best for OLAP currently.
Why not benchmark IronDB against Clickhouse and post the results?
Both of you commenters have your own TSDBs which seems to be coloring all of your posts.
I'm going to leave this conversation as unproductive unless you care to benchmark your products against modern column-stores, although I think it's telling that there are never such benchmarks available.
Granted it's an exceptional case, but the query loads we saw at Datadog were poorly served by off-the-shelf solutions.
Maybe things are different now, but I doubt it. You can spend a fortune to get good performance, or you can deal with slow performance, or you can invest a lot of engineering effort and get both, but there's not a ready-to-use solution that will magically replace an entire engineering team for real scale.
But almost no one ever hits that scale, so maybe it's better to adopt this line as a rule of thumb anyway...
I think Clickhouse would do well but I've seen other metrics/observability vendors (like Honeycomb) also build their own systems given the scale and cost factors.
Isn't Datadog on AWS? If you have very specific needs and can build a vertical infrastructure stack then it makes perfect sense to build your own.
I think the challenge is that there are multiple competing needs which are in tension. Data isn't uniform, there's a large write load that's almost never queried, recent data is accessed way more often than older data, flexible tagging means (org, metric) queries produce potentially millions of points (imagine disk usage across every node for every disk), but indexing tags can be very costly, and it's difficult to predict what someone is going to want to query.
I agree that hyper-focus on those needs can distort the picture though. You don't actually have to solve them most of the time, and a relatively poorly optimized solution goes a lot further than people realize. Simply adding caching, for example, solves almost all these issues.
A modern, distributed, relational, column-oriented database will often stress emphatically in the documentation that using timestamps as primary keys is an anti-pattern that's likely to lead to hot tablets: https://cloud.google.com/spanner/docs/schema-design#choosing...
Yes, that's why my comment said use metric name and timestamp.
Spanner isn't a column-oriented database, but they all support multiple columns as the primary/sort/shard/distribution key. Use the name as the first column, and timestamp as the last column, for scalable distribution.
It really depends. Capabilities like time-series-specific compression, automatic rollups, complex aggregations and/or ranking, stable storage in S3, clustering, and replication vary a lot and I think that's why we see so many TSDBs out there. I maintain a list of TSDBs[0] and it started as an evaluation of what was already available for my previous employer to use. We didn't find one that fit our exact use case, so we ended up building our own on top of MySQL.
All the features you mentioned are already part of distributed column-oriented databases. S3 storage is orthogonal and unrelated to time-series data. It's usually not included in shared-nothing local storage architecture of databases but you can definitely mount S3 storage in a variety of ways.
Many options on your list are not TSDBs, like Aerospike, Elasticsearch, Cassandra, Kudu, GridGain/Ignite. EventQL and Riak are obsolete. Apache Apex is a stream processing framework. Many of the others are just extensions to Prometheus built-in mini-storage or offer time-series indexing on top of existing databases.
> All the features you mentioned are already part of distributed column-oriented databases.
I disagree, but even if that was the case, not all of them perform well. For example, we could've used Cassandra for our use case at my previous employer but the lack of push-down aggregations (at the time, not sure if they're supported now) would've been terrible for our top-K aggregate queries.
Cassandra is not a distributed relational column-oriented database, so yes, it will be bad at OLAP queries.
Cassandra is a "wide-column" or "column-family" database, which is unfortunately confusing industry jargon but better referred to as an advanced/nested key-value store. It comes from the original Dynamo whitepaper, along with similar systems like HBase, BigTable, DynamoDB, Azure Table Storage, etc. They can sometimes handle time-series queries with good data modeling because of fast prefix scans but the lack of a real query language makes them a bad choice for analytics scenarios.
Fwiw, more recent versions of Druid have a no-rollup mode that does ingestion row-for-row. It ended up being useful for cases where you _do_ care about every row, maybe because you want to retrieve individual rows or maybe because you don't want to define your rollups at ingestion time. And in that mode, Druid behaves like the other DBs you mention.
Some of those we’ve looked at before and decided not to go with because of unknown observability, high operational requirements, or cost. But yeah, no real problems with data models or queries.
I think Druid has come the closest to the most ideal system for the requirements I’ve had to deal with, but haven’t used it yet.
Timescale, for all their wonderful marketing, is just an automatic sharding extension for PostgreSQL. You can accomplish the same yourself using native partitioning, or pg_partman, or Citus.
Partitions are a basic building block for scaling performance and storage so it helps when you have lots of data, but Postgres w/Timescale does not have column-oriented storage and is still single-node only so it comes nowhere near the capabilities of cutting-edge columnstores like Clickhouse, KDB+, MemSQL, Kinetica, etc.
> Timescale, for all their wonderful marketing, is just an automatic sharding extension for Postgres database. You can accomplish the same yourself using native partitioning, or pg_partman, or Citus or any number of other tools.
Put another way...
"Postgres, for all their wonderful marketing, is just an automatic data organization system for <underlying filesystem>. You can accomplish the same yourself using open, read, write, or any number of other syscalls."
You're doing the whole "large simplification" thing again. Yes, you can do everything yourself. No, you don't want to do that. Postgres by itself is not great for time-series data. Time series databases are useful, as your reply even showed, except for the part where you seem to assume any software that doesn't do something entirely novel is simply a quick abstraction that you could just whip up yourself.
Column stores have advantages over row stores, but they also have disadvantages. Your statement that it "comes nowhere [close to] the capabilities of cutting-edge column stores [...]" could just as easily be reversed as well.
Timescale adds automatic partitioning to Postgres, a single-node rowstore relational database. This will naturally give you better performance for larger data (whether time-series or not).
This will not approach the performance and scalability of a fully distributed relational column-oriented database like Clickhouse or MemSQL, because automatic partitioning is just one of many techniques they use for fast performance. There is nothing a special TSDB, or TSDB extension, can do that these databases cannot already do faster, while providing rich SQL and joins.
Considering that TimescaleDB has only been available for a relatively short period of time, I would love to see a source for that statement. It sounds like a really fun article to read.
Since TimescaleDB is creating a new partition for each chunk of time, it should be able to maintain its ingestion rate consistently for as long as you have storage to store that data. Perhaps it won't keep up with distributed, eventually consistent databases, but such databases generally have very limited analytical power, and if you're using them for anything but time series data, that whole "eventual consistency" thing requires a lot of careful thought.
Timescale is an extension to add automatic partitioning to PostgreSQL to give you some scalability and performance benefits. It is nowhere near the performance potential of a real distributed column-oriented database, which are strongly consistent, have rich SQL support, and even support transactions.
Google is your friend: google "clickhouse vertica" etc. The comment about limited analytical power is especially fun.
Cloudflare is ingesting 11 million rows per second into CH.
I think time will change the foundation of a lot of databases, once someone gets it right, and I’m not sure time-series is really it.
We’re currently spending billions trying to build bitemporal public data in Europe, and it’s no easy feat so far.
Basically what we need is to be able to register future data that doesn't come into play until it's supposed to, as well as keep a live history that you can look back through to view a data set from any given date, and make changes to some past data as if you were there at that date.
You can obviously do so with code, and a lot of the old SAP systems actually support this, but the first DB that handles this well will get to run every single public system.
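The requirement above is essentially bitemporal data: every fact carries both a valid-from time (when it applies in the real world) and a recorded-at time (when it entered the system). A minimal sketch, with all names and sample values made up:

```python
# Bitemporal "as of" query: reconstruct what the data set looked like at
# valid time valid_t, according only to what was known by recorded_t.
def as_of(facts, valid_t, recorded_t):
    """facts: [(key, value, valid_from, recorded_at)]; return per key the
    most recently-valid value that was already registered by recorded_t."""
    best = {}
    for key, value, valid_from, recorded_at in facts:
        if valid_from <= valid_t and recorded_at <= recorded_t:
            prev = best.get(key)
            if prev is None or valid_from >= prev[0]:
                best[key] = (valid_from, value)
    return {k: v for k, (_, v) in best.items()}

facts = [
    ('tax_rate', 0.20, 2020, 2019),  # registered in 2019, effective 2020
    ('tax_rate', 0.25, 2023, 2022),  # future change registered in advance
]
print(as_of(facts, valid_t=2022, recorded_t=2022))
# → {'tax_rate': 0.2}
print(as_of(facts, valid_t=2023, recorded_t=2022))
# → {'tax_rate': 0.25}
```

Retroactive corrections fall out of the same model: you append a new fact with a past valid_from and today's recorded_at, and old reports stay reproducible by querying with the old recorded_t.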
The PI data archive (it's the database part of the pi system) actually has a lot of these features, including the future data stuff (probably the biggest feature in the past few years). It's made by OSIsoft which is a big player in time series databases for industrial settings. (fyi I work on this product)
This works for a small/medium-size company but certainly does not scale for bigger companies. There are several problems that you're going to run into at scale.
I don't see why that's so special. Other data can have incrementing IDs or some other value. In fact all data can be considered to have a timestamp, at the very minimum being when it was inserted into the database, so it's a rather vague definition overall.
I think it's different because it's a dimension where the usefulness of data decreases as the data ages. TimescaleDB, for example, does optimizations (they call it chunks or something) based on this fact.
Timescale is just an automatic partitioning extension for Postgres. You can also do it with the native partitioning feature, or pg_partman, or Citus, or other tools.
Partitioning the table is the optimization, so that you skip over data when querying and manage it in smaller parts, but Timescale doesn't do anything about older data and neither do most databases.
I've had great success using an event timestamp (milliseconds since 1970) as a column, with an index on it. And then when you query you can use BETWEEN. If you write your own ORM you can make it automatically calculate the time range according to some defaults - minute, second, etc. Works great.
If you are using spatial data, you can also use two columns like this for longitude and latitude.
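Here's roughly what that looks like, with sqlite3 standing in for the database (table and column names are just for illustration):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE events (ts_ms INTEGER, value REAL)')
db.execute('CREATE INDEX idx_events_ts ON events (ts_ms)')
db.executemany('INSERT INTO events VALUES (?, ?)',
               [(1_000, 1.0), (61_000, 2.0), (121_000, 3.0)])

# Range query over the indexed millisecond-since-epoch timestamps:
rows = db.execute(
    'SELECT ts_ms, value FROM events WHERE ts_ms BETWEEN ? AND ? ORDER BY ts_ms',
    (0, 61_000)).fetchall()
print(rows)
# → [(1000, 1.0), (61000, 2.0)]
```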