> Hopefully our story will make you think twice before trying to build your own TSDB in house using open-source solutions, or, if you're really crazy, building a TSDB from scratch. Building and maintaining a TSDB is a full-time job; we have dedicated expert engineers who are constantly improving and maintaining our TSDB, and no doubt we will iterate on the architecture again as we hit an even higher magnitude of scale down the line.
> Given our experience in this complex space, I would sincerely recommend you don't try this at home, and if you have the money you should definitely outsource it to the experts who do this as a full-time job, whether it's Outlyer or another managed TSDB solution out there. As with so many things in computing, it's harder than it looks!
When I see:
"You Can Lose a Few Datapoints Here and There"
I see that these guys are barking up the wrong tree.
1. We used single thread per network card. (Yes, we architected clusters/failovers, etc... but not once was it required because of data rates)
2. The server could handle a fully saturated Gigabit network at <50% CPU (per core)
3. Data was NEVER thrown away (but we had allowances in our API to let the client reading the data drop updates and get sub-second aggregates instead -- e.g. OHLC or summation; see the sketch after this list)
4. Data was stored in basically flat file systems.
5. Our calculation engine ran 'downstream' toward the client ends, or on the client end, away from data collection. If needed (i.e. when the calcs were expensive to run), these could feed back into the server for long-term storage.
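To make "sub-second aggregates" concrete, here's a rough SQL sketch of a per-second OHLC rollup over a hypothetical ticks(symbol, ts, price) table (our system wasn't SQL-based; this is purely illustrative):

    SELECT symbol,
           date_trunc('second', ts) AS sec,
           (array_agg(price ORDER BY ts))[1]      AS open,
           max(price)                             AS high,
           min(price)                             AS low,
           (array_agg(price ORDER BY ts DESC))[1] AS close
    FROM ticks
    GROUP BY symbol, sec;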
This was mid-2000. I'm sure this is not rocket science for modern-day timeseries guys.
Hardware capture almost never drops packets and timestamps with GPS sync.
You can then take those capture files and manipulate them however you want into normalized market data.
Market data has the notable feature of being segmented by trading day, so the combination of symbol-venue-date is an appropriately small unit of data to run aggregations of any kind over or to distribute over a cluster.
So for market data at least, there's not much to "rolling your own" time series DB in Python or what-have-you.
Processing that firehose in real time for trading is a different matter though, and how you build that depends heavily on your latency requirements.
It blows my mind that businesses are willing to outsource metrics. When I worked at Amazon it was trivial to estimate the next quarter's results from the app metrics. Naturally this meant we were subject to trading restrictions.
If a monitoring company ever starts applying Google/Facebook style ethics with regards to exploiting the data their customers give them, they will be in an incredibly powerful position.
It makes sense at various scales compared to hiring, training, maintaining infrastructure, handling incidents, etc. related to your own metrics solution.
> "You Can Lose a Few Datapoints Here and There"
is not the one I'm going with...
Isn't this the mantra of all of these types of articles?
OK, yes, it usually makes sense. Especially in the case where you are like these guys and experienced enough in some relevant area to know just how difficult it can be. These are perfectly good reasons from technical, business, and project planning perspectives.
Isn't there a TLDR where they mention when it does make sense to build your own TSDB? Presumably in some case where you have a team of serious, high-grade experts who know exactly what they are doing; have requirements that cannot be met by any of the other offerings out there; and where the whole thing has been specced out and deemed reasonable?
Clickhouse would also be a good option for doing aggregating queries that TSDBs are mostly used for.
One of my wishlist items in the data space is a Managed Clickhouse offering. :-)
All our development was around query engine using plain JDBC/SQL to talk to Hbase via Phoenix. Scaling is as simple as adding a node in the cluster.
On a separate note we didn't use joins even though they are supported in Phoenix, data was completely denormalized into one big table.
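For a flavor of what that looks like, a hedged sketch (all names hypothetical): Phoenix maps a SQL table onto HBase, uses UPSERT rather than INSERT, and the composite row key turns per-series reads into a range scan.

    CREATE TABLE metrics (
      host   VARCHAR   NOT NULL,
      metric VARCHAR   NOT NULL,
      ts     TIMESTAMP NOT NULL,
      val    DOUBLE,
      CONSTRAINT pk PRIMARY KEY (host, metric, ts)
    );

    UPSERT INTO metrics
    VALUES ('web-1', 'cpu.idle', TO_TIMESTAMP('2019-01-01 00:00:00'), 72.5);

    -- The row key (host, metric, ts) makes this a prefix range scan.
    SELECT ts, val
    FROM metrics
    WHERE host = 'web-1' AND metric = 'cpu.idle'
    ORDER BY ts DESC
    LIMIT 10;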
Disclosure: industry user, now OSIsoft employee
It's in the same league as timeseries data in financial markets, which I have more direct experience of. Financial timeseries data probably has more metadata (trade/order flags etc) but in terms of raw data it's similar.
> it's not uncommon for some of our customers to send us millions of metrics every minute
What kind of customers/services generate millions of points a minute?
Of course, our metrics were all handled in house. From talking to the teams that handled the metrics pipeline, the vendors were great for smaller companies, but there was no off-the-shelf solution for a company that large with that volume. But I did very little with that myself, other than look into the fact that Spring Boot published way too many default metrics. Who needs P50, P70, P75, P80, P85, P90 - P99 on all web requests?! Just set a default that is small and worthwhile and let the developers adjust as needed.
If you're pumping out a million metrics per minute, almost none of those are ever going to actually be used to generate meaningful insight.
In most cases, time spent maintaining terabytes of rapidly aging time series data would be better spent elsewhere.
A particularly good high-frequency trader might be interested in terabytes of minutiae when they're trying to sort out what caused yesterday's spike and crash of ticker XYZ.
Systems and sales analysts that are looking at web store front ends (and back ends, if there are issues) would be interested in large volumes of data, specifically corner cases (users who don't follow a statistically significant path), when trying to sort out a UI/UX redesign.
Traffic and transit analysts might want terabytes of data (especially with date and weather indicators) when considering what kind of freeway interchange to add to a growing area.
I suppose I could go on...
(Meraki actually did implement its own time-series database, and after I left published a paper describing its design and implementation: https://meraki.cisco.com/lib/pdf/trust/lt-paper.pdf. Good quote on the motivation: "As discussed in Section 2.3.3, customers have a nearly insatiable demand for high-resolution historical data, even though they mostly query data from the recent past.")
The example from the article was "one team at one of our customers decided to dump 30 million metrics on us, send all of their mobile product metrics into Outlyer"
There is virtually an unlimited number of applications that could generate 16k events per second (million per minute).
The result was a Niagara of data: tens of terabytes a day.
When I left, they were in the early stages of adopting Hadoop because ordinary parse/analyze was starting to fall behind.
Across their organisations it can be much more, Fastly has reported 2.2M/s (https://promcon.io/2018-munich/slides/monitoring-at-scale-mi...) for example.
Time-series is just data with time as a primary component. It comes in all shapes and volumes, but if you have a lot of data and are running heavy OLAP queries, then we already have an entire class of capable databases for that.
Use any modern distributed relational column-oriented database, set primary key to metric id + timestamp, and you'll be able to scale easily with full SQL and joins. You can keep your other business data there too, along with JSON, geospatial, window functions, and all the other rich analytical queries available with relational databases.
We have trillion row tables that work great. No special "TSDB" needed.
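For instance, a minimal sketch in Clickhouse's dialect (all names hypothetical), with the metric id and timestamp as the sort key as described above:

    CREATE TABLE metrics (
      metric_id UInt64,
      ts        DateTime,
      value     Float64
    ) ENGINE = MergeTree()
    PARTITION BY toYYYYMM(ts)
    ORDER BY (metric_id, ts);

    -- Typical rollup: per-minute averages for one metric over the last day.
    SELECT toStartOfMinute(ts) AS minute, avg(value)
    FROM metrics
    WHERE metric_id = 42 AND ts >= now() - INTERVAL 1 DAY
    GROUP BY minute
    ORDER BY minute;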
As an example, in the industry I work in, you may have no readings for days or weeks, and then hundreds of readings from the same sensor. Why? Many systems in industrial environments send new readings only "on-change", and assume the underlying data storage architecture will forward fill to in-between times. This is why the practically ancient time series architecture of data historians still dominates in these use cases.
In fact, for many time series functions you actually have to throw away the notion of relational joins to be able to efficiently perform time-preserving joins. Window functions only work in basic use cases with relatively small amounts of data where you're aggregating.
There are of course trade-offs to the approach Vertica takes -- look at StreamBase for a very different take on the problem, another Stonebraker project.
And of course historians represent yet another take, optimized for point-in-time queries that are native and don't need the processing extensions Vertica uses.
So what if there are missing rows? This doesn't affect the database and any aggregations will work fine. Databases don't "fill-in" data, but you can definitely write whatever SQL you need to create averages and buckets to smooth out results.
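For instance, a minimal Postgres-flavored sketch (hypothetical readings(sensor_id, ts, val) table) that buckets one sensor into one-minute averages and generates the empty buckets, so gaps show up as NULLs rather than missing rows:

    SELECT g.bucket, avg(r.val) AS avg_val
    FROM generate_series(timestamp '2019-01-01',
                         timestamp '2019-01-02',
                         interval '1 minute') AS g(bucket)
    LEFT JOIN readings r
           ON r.sensor_id = 42
          AND r.ts >= g.bucket
          AND r.ts <  g.bucket + interval '1 minute'
    GROUP BY g.bucket
    ORDER BY g.bucket;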
From reading your website, it seems you're talking about the "last value recorded" as of a certain time, which doesn't seem to be a common query but is totally possible. KDB+ has "asof" joins and others can handle it with window functions using last_value().
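A sketch of the window-function version, again over a hypothetical readings table: the last value recorded for each sensor as of a cutoff time.

    SELECT DISTINCT sensor_id,
           last_value(val) OVER (
             PARTITION BY sensor_id
             ORDER BY ts
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
           ) AS val_asof
    FROM readings
    WHERE ts <= timestamp '2019-01-01 12:00:00';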
We run queries on a table containing 2.7+ trillion rows of data that has no set pattern and infinite cardinality, and results return within seconds. Window functions and joins work without issue. Have you actually tried using a columnstore?
> From reading your website
My website doesn't have much to do with this because Sentenai isn't a time series database system. I did, however, spend most of my time in research working on temporal data systems, and have been fortunate to collaborate with or learn from researchers who have spent decades solving the unique problems that temporal data presents. What you might consider uncommon for your use cases is extremely common in manufacturing, defense and other areas.
There's a decades-old industry around database systems that handle time natively. And while many support SQL as a lingua franca, and some are column stores, they're not relational by any means as they either extend SQL to support time, or limit non-temporal joins to ensure performance. StreamBase, Kdb, Aurora and many other specialized architectures exist because one size does not fit all. Michael Stonebraker, whose work has included StreamBase, Vertica, Tamr, Postgres, Aurora, and many others, famously published this paper about the very problem: https://cs.brown.edu/~ugur/fits_all.pdf .
Further reading that might be illuminating:
3. https://books.google.com/books?id=BK6oCAAAQBAJ&pg=PA9&source=gbs_toc_r&cad=4#v=onepage&q&f=false (excerpt)
Every issue mentioned in the abstract/intro (which are meant to motivate the paper) seems like it could be solved as an add-on to existing application databases (albeit with their most recent developments/capabilities in mind). The very distinction between HADP and DAHP systems seems silly, because it's just a question of write load, and that's fundamentally only solved with batching and efficient IO (or by giving up durability); it doesn't seem inherent to the data model. There are also assertions like:
> Moreover, performance is typically poor because middleware must poll for data values that triggers and alerters depend on
But take postgres: you're free to define a better/more efficient LISTEN/NOTIFY-based trigger mechanism, for example, and you can run highly optimized code right in the DB... Thinking of some of the cases called out in the paper, here's what I think in my head:
- Change tracking vs only-current-value -> just record changes/events; as for tables getting super big, partitioning helps with this (timescaledb does this)
- Backfilling @ request time -> a postgres extension could do this
- Alerting -> postgres does have customizable functions/procedures as well as LISTEN/NOTIFY (see the sketch after this list). If the paper is right (?) about TRIGGERs not scaling, then this might be its most reasonable point.
- Approximate query answering is possible with postgres with stuff like HyperLogLog, but the paper is certainly right in that it is not implemented by default.
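A minimal sketch of that alerting path, assuming a hypothetical readings table: a trigger pushes each new datapoint to listeners via pg_notify, and clients subscribe with LISTEN.

    CREATE OR REPLACE FUNCTION notify_reading() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('readings', NEW.sensor_id::text || ',' || NEW.val::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER reading_notify
    AFTER INSERT ON readings
    FOR EACH ROW EXECUTE PROCEDURE notify_reading();

    -- A consumer then runs: LISTEN readings;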
Maybe I'm mistaking the extensibility of postgres for the redundancy of the paradigm, akin to thinking something like "lisp is multi-paradigm so why would I use Haskell for its enhanced inference/safety".
I'm still reading the paper so maybe by the end of it it will dawn on me.
I'm not a huge fan of historians (I've spent too much of my career working with them), but I can definitely tell you where they make sense. The scenario is this:
Imagine you have a large facility with thousands of machines, each with a programmable logic controller for controls and monitoring. These machines create lots of data, so they often employ data-reduction semantics, recording on-change rather than sampling sensors at thousands of hertz. A single machine may have dozens or hundreds of variables to track, and these tags might be hierarchical: machine 1, subsystem 5, variable b. If you say there are 100,000 total tags to track in the facility, sampled at 10 Hz on average, you need a system capable of writing a million durable timestamped values per second. Now that's child's play for, say, Google, but if you're a manufacturer, you can't afford to spend massive amounts of money on cloud systems, and you usually want to do this all on a single server on the factory floor because you need realtime monitoring that can display the current value of every single tag ( https://www.ws-corp.com/LiveEditor/images/SLIDES/10/3.jpg ). Ideally, in a single-node scenario, you want compression. It's not uncommon to store 100 billion timestamped values per day and keep them for a year or more for audit purposes if something goes wrong. Today, for the sake of predictive maintenance, data retention policies of up to 10 years are becoming more common.
So what would you sacrifice to be able to do efficient realtime monitoring and ingestion of millions of data points per second? You can't use queueing semantics to protect an RDBMS because logging can't take more than a 10th of a second per point. If you think about the use case, what you'd sacrifice is transactional queries and row-level joins, because you just don't need them. At the same time, this data is really sparse when you look at it from a table's perspective, so you'll want something like a column store to underly the data storage.
So what we do is throw out transactional guarantees, choose a storage system that is good at compression (roll-ups in some historians will store a formula approximating the data instead of raw data itself over a window), and prioritize speed of point retrieval for most recent "hot data" by caching it in-memory.
You can of course extend Postgres to achieve many of these things, but having done it myself, in practice it's sub-optimal in the exact same way that using bubble sort for all your programmatic sorting needs is sub-optimal.
One thing you might want to keep in mind is that many of the people involved in Aurora are the authors of Postgres. They're not arguing you can't do things in Postgres, they're arguing that in practice the RDBMS's guarantees are theoretically incompatible with high performance in the area of Complex Event Processing, because alignment between different simple events (recorded as rows in a database) can drift so far that memory requirements become prohibitive if you don't use a stream-processing architecture.
Also keep in mind that Aurora is from 2002 and many of the ideas have been implemented elsewhere over time. The great thing about Postgres is that it's perfect scaffolding on which you can build other stuff.
Stonebraker is talking about OLTP vs OLAP. I agree that they are very different scenarios.
If you read all the way to the end Stonebraker is actually advocating for specialized architectures like, array databases (SciDB) and stream processing engines like StreamBase, which at the time was just gaining GUI-based query creation capabilities because it was difficult to teach its non-relational concepts to SQL users.
Null gaps in a columnstore can be skipped over basically instantaneously and usually are just zone map/index lookups. Again I question how common this query is and whether it's really worth limiting yourself to a special TSDB because of it.
"Scanning the entire table" for every request to have the last value recorded is rarely a practical option.
> KDB+ supports this with "asof" joins.
> Again I question how common this query is and whether it's really worth limiting yourself to a special TSDB because of it.
KDB literally markets itself as a time series database. What's the point you're making again?
I think TimescaleDB lacks an "asof" function for now, but it makes up for it by having the full power of PostgreSQL for other stuff. Regardless, Time Series databases like KDB and TimescaleDB are useful.
it looks like TimescaleDB recommends using
ORDER BY time DESC LIMIT 1
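Spelled out in full (a sketch; table and column names are hypothetical):

    SELECT time, value
    FROM conditions
    WHERE device_id = 'dev-42'
    ORDER BY time DESC
    LIMIT 1;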
kdb+ also markets itself as OLAP/OLTP as well.
A lot of these guys market themselves as a "time series database" because kdb+ does, and they want to be compared with kdb+ by people who haven't used kdb+ (but might be considering it).
TimescaleDB is not a TSDB, it's an extension to add automatic partitioning to PostgreSQL tables. Timescale helps Postgres get more performance, but it does not give you the full capabilities of a real distributed column-oriented system. If you must use PostgreSQL though then it's a good compromise.
The query you posted does not match the discussion about the last value at a specific instant in time, only the last value ever recorded in the table for that key.
You're mistaken about Kdb's relational features. Kdb was designed as a time series processing engine using arrays (columns). Column storage doesn't have anything to do with whether a database is relational or not, and Kdb wasn't originally any more relational than the language Erlang is.
kdb+ has SQL semantics and relational queries, but it's a combination of the q language integrated into a database so sure, it's a superset of a relational database. Perhaps we disagree on what relational means.
The definition of relational is very precise, whether you use the domain calculus, relational calculus or relational algebra. Wiki has a good summary of what must be natively supported by a database system to be relational: https://en.wikipedia.org/wiki/Relational_algebra
If you don't implement this at the transaction log level, but implement it via emulation at the output level, you can't make full relational guarantees, so these operations are fundamental to database design.
kdb+ supports a superset of SQL and relational algebra, so it's a relational database. How it's implemented doesn't matter if it provides the functionality, which it can.
To wit: SQL wasn't originally any more relational than the language Erlang is.
Codd, E. F., "A Relational Model of Data for Large Shared Data Banks," Communications of the ACM, Vol. 13, No. 6, June 1970, pp. 377-387
I'm not sure what value such a statement brings the world.
I wrote in depth about these problems and how different TSDBs solve them here.
Your blog post doesn't mention a single columnstore database though. KDB+, Clickhouse, MemSQL, or any of the GPU-powered variations will happily beat any TSDB out there.
The main misconception about TSDBs is that they're just data with a timestamp. TSDBs have a multi-dimensional data model; time is only one of the dimensions.
CREATE TABLE metrics (metric_name text, ts timestamp, properties json, PRIMARY KEY (metric_name, ts))
A single-table design will be prone to high read/write amplification due to data alignment. Usually you need to read many series at once, so your query will turn into a full table scan, or it will read a lot of unneeded data that happens to be located near the data you need. Writes will be slow since your key starts with the metric name. Imagine that you have 1M series and each series gets a new data point every second. In your schema that will result in 1M random writes.
Cardinality of the table will go through the roof, BTW. Every data point adds a new key. Good luck dealing with that.
For the life of me, I can't figure out why this would be a good idea. I feel like I must not understand what you're saying:
If I've got a million disks that I want to draw usage graphs for, why would I put each one in a separate column?
What's the business use-case you're imagining?
> Usually, you need to read many series at once so your query will turn into full table scan.
Why do I need a full table scan if I'm going to draw some graphs?
I've got something like 4000 pixels across my screen; I could supersample by 100x and still be pulling down less data than the average nodejs/webpack app.
> Imagine that you have 1M series and each series gets new data point every second. In your scema it will result in 1M random writes.
No that's definitely not what manigandham is suggesting. One million disks each reporting their usage means a million rows in two columns (disk name/sym, and volume) would be written (relatively) linearly.
About that 1M writes thing. You have two options. 1) Organize data by metric name first, or 2) by timestamp. In case of 2) the updates will be linear but reads will have huge amplification. In case of 1) updates will be random, but reads will be fast.
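Spelled out as standard-SQL sketches (names hypothetical):

    -- Option 1: series-first key. One series reads sequentially,
    -- but a million concurrent series scatter writes across the keyspace.
    CREATE TABLE metrics_by_series (
      metric_name text,
      ts          timestamp,
      val         double precision,
      PRIMARY KEY (metric_name, ts)
    );

    -- Option 2: time-first key. Writes append in time order,
    -- but reading one series touches every series in the time range.
    CREATE TABLE metrics_by_time (
      metric_name text,
      ts          timestamp,
      val         double precision,
      PRIMARY KEY (ts, metric_name)
    );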
You can store tags and other data in JSON/ARRAY columns. The primary key is used for automatically sharding and sorting.
Groups of rows are sorted, split into columns, compressed, and stored as partitions with metadata. This means you can 'scan' the entire table in milliseconds using metadata and then only open the partitions, and the columns inside, that you actually need for your query. There are no random writes either, it's all constant sequential I/O with optional background optimization. And because of compression, storing the same key millions of times has no real overhead.
As stated several times before, we deal with this everyday on trillion row tables inserting 100s of billions of rows daily. Queries run in seconds. We do just fine.
Why not benchmark IronDB against Clickhouse and post the results?
I'm going to leave this conversation as unproductive unless you care to benchmark your products against modern column-stores, although I think it's telling that there are never such benchmarks available.
Maybe things are different now, but I doubt it. You can spend a fortune to get good performance, or you can deal with slow performance, or you can invest a lot of engineering effort and get both, but there's not a ready-to-use solution that will magically replace an entire engineering team for real scale.
But almost no one ever hits that scale, so maybe it's better to adopt this line as a rule of thumb anyway...
Isn't Datadog on AWS? If you have very specific needs and can build a vertical infrastructure stack then it makes perfect sense to build your own.
I think the challenge is that there are multiple competing needs in tension. Data isn't uniform, there's a large write load that's almost never queried, recent data is accessed far more often than older data, flexible tagging means (org, metric) queries can produce millions of points (imagine disk usage across every node for every disk), indexing tags can be very costly, and it's difficult to predict what someone is going to want to query.
I agree that hyper-focus on those needs can distort the picture though. You don't actually have to solve them most of the time, and a relatively poorly optimized solution goes a lot further than people realize. Simply adding caching, for example, solves almost all these issues.
Anyway I mostly agreed with your opening comment.
Spanner isn't a column-oriented database, but they all support multiple columns as the primary/sort/shard/distribution key. Use the name as the first column, and timestamp as the last column, for scalable distribution.
Many options on your list are not TSDBs, like Aerospike, Elasticsearch, Cassandra, Kudu, GridGain/Ignite. EventQL and Riak are obsolete. Apache Apex is a stream processing framework. Many of the others are just extensions to Prometheus built-in mini-storage or offer time-series indexing on top of existing databases.
I disagree, but even if that was the case, not all of them perform well. For example, we could've used Cassandra for our use case at my previous employer but the lack of push-down aggregations (at the time, not sure if they're supported now) would've been terrible for our top-K aggregate queries.
Cassandra is not a distributed relational column-oriented database, so yes, it will be bad at OLAP queries.
Cassandra is a "wide-column" or "column-family" database, which is unfortunately confusing industry jargon but better referred to as an advanced/nested key-value store. It comes from the original Dynamo whitepaper, along with similar systems like HBase, BigTable, DynamoDB, Azure Table Storage, etc. They can sometimes handle time-series queries with good data modeling because of fast prefix scans but the lack of a real query language makes them a bad choice for analytics scenarios.
Two capabilities that are important in my work are roll-ups (reducing resolution of data) and fast bulk deletes of old data.
If you just want rollups and don't care about every row, then look at Druid (or imply.io for a startup making it easier).
All these systems can delete old data very quickly, as they just delete entire compressed partition files (see the sketch below).
(I am a Druid committer.)
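For example, in Clickhouse's dialect (a sketch, assuming a table partitioned by month): dropping a month of old data removes whole compressed partition files, so it's near-instant regardless of row count.

    ALTER TABLE metrics DROP PARTITION 201801;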
I think Druid has come the closest to the most ideal system for the requirements I’ve had to deal with, but haven’t used it yet.
Thanks, by the way! This helps a lot.
I think that is a large simplification. I recommend this (relatively) short article: https://blog.timescale.com/timescaledb-vs-6a696248104e
The first graph is particularly salient.
Partitions are a basic building block for scaling performance and storage so it helps when you have lots of data, but Postgres w/Timescale does not have column-oriented storage and is still single-node only so it comes nowhere near the capabilities of cutting-edge columnstores like Clickhouse, KDB+, MemSQL, Kinetica, etc.
Put another way...
"Postgres, for all their wonderful marketing, is just an automatic data organization system for <underlying filesystem>. You can accomplish the same yourself using open, read, write, or any number of other syscalls."
You're doing the whole "large simplification" thing again. Yes, you can do everything yourself. No, you don't want to do that. Postgres by itself is not great for time-series data. Time series databases are useful, as your reply even showed, except for the part where you seem to assume any software that doesn't do something entirely novel is simply a quick abstraction that you could just whip up yourself.
Column stores have advantages over row stores, but they also have disadvantages. Your statement that it "comes nowhere [close to] the capabilities of cutting-edge column stores [...]" could just as easily be reversed as well.
Timescale adds automatic partitioning to Postgres, a single-node rowstore relational database. This will naturally give you better performance for larger data (whether time-series or not).
This will not approach the performance and scalability of a fully distributed relational column-oriented database like Clickhouse or MemSQL, because automatic partitioning is just one of many techniques they use for fast performance. There is nothing a special TSDB, or TSDB extension, can do that these databases cannot already do faster, while providing rich SQL and joins.
Since TimescaleDB is creating a new partition for each chunk of time, it should be able to maintain its ingestion rate consistently for as long as you have storage to store that data. Perhaps it won't keep up with distributed, eventually consistent databases, but such databases generally have very limited analytical power, and if you're using them for anything but time series data, that whole "eventual consistency" thing requires a lot of careful thought.
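For reference, enabling that chunking is a one-liner in TimescaleDB (a sketch; names hypothetical):

    CREATE TABLE conditions (
      time      timestamptz NOT NULL,
      device_id text,
      value     double precision
    );
    SELECT create_hypertable('conditions', 'time');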
We’re currently spending billions trying to build bitemporal public data in Europe, and it’s no easy feat so far.
Basically, what we need is to be able to register future data that doesn't come into play until it's supposed to, as well as keep a live history that you can scroll through to view a data set as of any given date, and make changes to some past data as if you were there on that date.
You can obviously do so with code, and a lot of the old SAP systems actually support this, but the first DB that handles this well will get to run every single public system.
Doesn't it have the special feature that data comes in along a linearly increasing time dimension?
Partitioning the table is the optimization, so that you skip over data when querying and manage it in smaller parts, but Timescale doesn't do anything about older data and neither do most databases.
That, and a good PR operation.
If you are using spatial data, you can also use two columns like this for longitude and latitude.
They manage to do time-series storage on a pretty large scale (over 5PB/day for their largest customer).
We use it exactly for this, works great.
Without going through the comments one by one (the main ones claim this is a solved problem, or that there are already solutions out there that do all of this), I would just say those comments remind me exactly of the type of conversations I had years ago with my team. We all thought it would be much easier, or that there would be something off the shelf that could do everything, and after several years of firefighting, the reality was that the problem looks much simpler than it really is, by a long mile.
Now that we've been doing this for a few years, and spoken directly with creators of many other TSDBs, we take a very skeptical view of all claims made about any database. They all sound amazing when you first read about them, maybe even work great in testing, till you hit scale and then you find all the limitations. If there was a perfect TSDB out there, everyone would be using it and there wouldn't be a new one announced on a weekly basis!
I think the one comment on query loads being different sums things up. I've no doubt all the other options thrown out there work well for data historians, but for monitoring tools with loads of concurrent users, loading dashboards with tens or hundreds of queries each, and alerting systems polling every few seconds in parallel, the query load can get very high quickly. Making those queries fast while still writing metrics in at scale is a hard problem, and I don't think any individual TSDB has really solved it properly, which is why we ended up building our own distributed architecture ourselves.
They're a great way to store data efficiently, accessing specific data if you know the time range you are looking for is very fast and simple, and you can roll your own in a few dozen lines of C if that's what you want to do. If that's all you need, why not?
As a thought exercise, for the most trivial solution, you could create a single append-only flat file. This may work well for writes, but what happens when you want to read the datapoints for only a single series in time order? That would require an expensive scan over the whole file. An improvement could be to create a file per series, but this becomes problematic when writing many small datapoints across each different file. The problem worsens in the case of a dynamic containerised infrastructure which produces large numbers of unique timeseries over very short intervals, which was the catalyst for the development of Prometheus TSDB v2, as the prior version stored a file per timeseries.
As the post states, there is a balance between the read and write pattern - achieving that with a few lines of C for a general purpose case is a difficult task, if not impossible.
I have a hard time believing that "a dynamic containerised infrastructure which produces large numbers of unique timeseries over very short intervals" is the superset of all time-series use cases, but perhaps it is so.