As for the hosting option: sadly, AWS doesn't currently offer Timescale as part of RDS. There are two options: Azure now offers Timescale as part of their hosted Postgres, or you go with aiven.io, who can host Postgres with TimescaleDB on all the cloud providers (AWS, GCP, Azure, DO, ?) as a service, including replicas and backups.
Overall, I’m very happy to see the Postgres ecosystem growing.
Fully-managed TimescaleDB, including community and enterprise capabilities, high availability, etc., available on AWS, GCP, and Azure.
There are other options as well (Azure, DigitalOcean, Alibaba, Aiven) but they only offer the OSS version of TimescaleDB.
More here (including a pricing calculator): https://www.timescale.com/cloud
Options are as follows:
- Fully-Managed TimescaleDB Enterprise on Timescale Cloud
- TimescaleDB Open Source hosted on Public Clouds:
  - Azure PostgreSQL
  - DO Managed-Postgres (mentioned above)
  - Alibaba Cloud
- Not hosted, but managed service options:
  - ClusterControl from our friends at Severalnines
How do you think TimescaleDB would handle that size and velocity of data?
ClickHouse would probably fit your needs better. It can write millions of rows per second, it can scan billions of rows per second on a single node, and it scales to multiple nodes.
Also I'd recommend taking a look at other open-source TSDBs with cluster support:
- M3DB 
- Cortex 
- VictoriaMetrics 
These TSDBs speak PromQL instead of SQL. PromQL is a query language specially optimized for typical time-series queries.
For on-prem deployments I'm using TimescaleDB on a single node with up to 100 million events, and for SaaS I'm using it on Azure, and the performance is little short of amazing!
Something I particularly like is the almost instantaneous deletes (because of the "chunking" model) - perfect for data retention jobs that delete old data.
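For reference, those retention jobs end up being essentially one statement. A sketch, assuming a hypertable named 'conditions' (the exact drop_chunks signature has varied between TimescaleDB versions):

    -- Drops every chunk whose data is entirely older than 90 days.
    -- Whole chunk tables are unlinked, so there is no row-by-row
    -- delete and no vacuuming debt afterwards.
    SELECT drop_chunks('conditions', older_than => INTERVAL '90 days');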
Then, out of nowhere, some 'interesting' stuff happens. It'll all be in that one chunk, which will get hammered during reads.
Like, imagine all the telemetry data and video that was taken during a single moon landing. Most of the data made into a time series is from the days in transit; 99% of it will be "uninteresting." But the moment Neil Armstrong puts his feet on the lunar surface, and the moments leading up to and following that event, will be the "hot chunk."
Advice: Take Zipfian distributions into account for data access.
(Disclosure: I work at ScyllaDB, which scales horizontally and vertically, and we work under various open-source time-series databases like KairosDB and OpenNMS' Newts. Not trying to knock them, but hopefully to save people from worlds of hurt found out the hard way.)
Thanks for the advice. FWIW, though, TimescaleDB supports multi-dimensional partitioning, so a specific "hot" time interval is actually typically split across many chunks, and thus server instances. We are also working on native chunk replication, which allows serving copies of the same chunk out of different server instances.
Apart from these things to mitigate the hot-partition problem, it's usually a good thing to be able to serve the same data to many requests from a warm cache, compared to having many random reads that thrash the cache.
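To make the multi-dimensional partitioning concrete, here's a minimal sketch; the hypertable and column names are made up:

    -- Partition by time, then hash-partition by device, so a "hot"
    -- time interval is split across several chunks (and, in the
    -- multi-node case, across several servers).
    SELECT create_hypertable('measurements', 'time');
    SELECT add_dimension('measurements', 'device_id', number_partitions => 4);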
We're using postgres presently for some IoT, B2B applications, and the timeseries tables are a half dozen orders of magnitude larger than the other tables in our application. Certain database operations, like updates, take a very long time because of this. I've wondered if by splitting the timeseries tables onto their own server I could handle updates independently, with the main app gracefully handling the timeseries DB being offline for some period of time.
It's more than just about downtime though. If through poor querying or other issues the timeseries db is overloaded the customer impact of the slow down would be limited.
We commonly see hypertables (time-series tables) deployed alongside relational tables, often because there exists a relation between them: the relational metadata provides information about the user, sensor, server, or security instrument that is referenced by id/name in the hypertable.
So joins between these time-series and relational tables are quite common, and together they serve the applications one often builds on top of the data.
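A typical shape for such a join, as a sketch with invented names ('sensors' is a plain relational table, 'readings' a hypertable):

    -- Average reading per location over the last day, joining
    -- time-series rows against relational sensor metadata.
    SELECT s.location, avg(r.value)
    FROM readings r
    JOIN sensors s ON s.id = r.sensor_id
    WHERE r.time > now() - INTERVAL '1 day'
    GROUP BY s.location;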
Now, TimescaleDB can be installed on a PG server that is also handling tables that have nothing to do with its workload, in which case one does get performance interference between the two workloads. We generally wouldn't recommend this for most production deployments, but the decision here is always a tradeoff between resource isolation and cost.
Now, if this time/space partitioning alone isn't sufficient (i.e., demand for a single device/userid/etc at a specific time overcomes the read capacity of K nodes), having time-series data being primarily insert heavy (or even immutable) also gives us a lot of flexibility about how we replicate (as a sibling comment also suggested). And what really helps is that, by design, the architecture we built tracks fine-grained chunk information (rather than just coarse-grained hash-partitions), which can enable dynamic replication of individual chunks. More on this to come.
"Writes" is a catch-all term usually used to describe either updates or inserts. If you are inserting new data and a single chunk is hot because you are inserting a lot of data into it, then replicating won't help. You can imagine a scenario where a single device goes haywire and starts sending you a ton of data points.
If you are only performing reads on your dataset, then replicating will only improve performance.
If you have a chunk that's hot because there are a lot of reads going to it, yes, increasing replication will help because you are decreasing the amount of work you have to do per replica.
If you have a chunk that's hot because a lot of writes are going to it, increasing replication will make things worse as you are doing just as much work per replica as you were before, but you're now doing it on more replicas.
Does that make sense?
1) The naive approach is to assign all writes to a chunk randomly. This makes reads a lot more expensive as now a read for a particular key (e.g. device) will have to touch every chunk.
2) If you know a particular key is hot, you can spread writes for that particular key across random chunks. You need some extra bookkeeping to keep track of which keys you are doing this for.
3) Splitting hot chunks into smaller chunks. You will wind up with chunks of varying sizes, but each chunk will now have roughly equal write volume.
One more approach I would like to add is rate-limiting. If the reads or writes for a particular key cross some threshold, you can drop any additional operations. Of course, this is only fine if you are OK with operations on hot keys often failing.
* vminsert (stateless)
* vmselect (stateless)
* vmstorage (stateful)
(see Disk Usage benchmark graph at )
That hurts not only storage but also performance, as the query bottleneck becomes disk IO; check out this benchmark we conducted with TimescaleDB v1.2.2:
Good job on going multi-node in v2! Can't wait to benchmark it with VM cluster version :)
Recognizing this, the engineering team has been hard at work bringing native compression to TimescaleDB, which is also in private beta right now.
Huge wins, but more details & performance numbers in a future blog post =)
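For a rough idea of what enabling it looks like (a sketch with hypothetical table/column names; since the feature is in beta, the exact API may differ from what eventually ships):

    -- Mark the hypertable as compressible, segmenting compressed
    -- batches by device and ordering rows by time within each batch.
    ALTER TABLE measurements SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'device_id',
      timescaledb.compress_orderby   = 'time DESC'
    );
    -- Compress chunks once all their data is older than a week.
    SELECT add_compression_policy('measurements', INTERVAL '7 days');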
I have almost 8 tables with over 60 million rows and I'm very happy with the performance, considering I have a t2.medium instance (2 CPUs with 4GB of RAM). Like everyone else mentioned, having an AWS-hosted option will be awesome!
Is there a way to optimize storage? I have set the chunk interval to 1 day, with about 2 million rows of writes per day.
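(For reference, that chunk interval is set with something like the following; 'metrics' stands in for my actual table name:)

    -- Applies to chunks created after this call; existing chunks
    -- keep their old interval.
    SELECT set_chunk_time_interval('metrics', INTERVAL '1 day');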
My many thanks to the engineering team.
Presumably the custom operators you’ve implemented in your query plan can push projection and predicates down to a more efficient architecture without so many indirect function calls / branches / etc, but once you get up past that, aren’t you back in iterator land?
An example would be TPC-H Q1, which is a little weak on the group by cardinality, but is good for testing raw aggregation performance.
We're actually currently focused on query optimization for our multi-node product, but we don't have any numbers we're currently ready to share.
Also, while you can make columnar data, SQL lacks a rich enough language to take advantage of it. Your advanced queries seem like they aren't very good at exploiting the layout and need to be specially written into the DB (you cannot easily write your own high-performance queries). I've never seen a decent LEAD/LAG query perform well, and they are too simplistic. I think you are fighting a losing war if you are trying to optimize SQL down to good array-based access.
A good TSDB isn't just changing the storage layer. Performance is also heavily influenced by how queries can be expressed.
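For concreteness, the kind of LEAD/LAG query I mean is something like this (schema invented), which a row-oriented engine struggles to turn into tight array scans:

    -- Per-device delta between consecutive readings.
    SELECT time, device_id,
           value - LAG(value) OVER (PARTITION BY device_id ORDER BY time) AS delta
    FROM readings;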
Individual time intervals are also spread across the cluster. So if you are collecting data from a lot of, say, sensors, servers, or financial instruments, then reads/writes for the same time interval are then parallelized across servers.
My point (and the others', I think) was that you often have a very hot segment (yesterday's data is only used at night, for example), and you can have a hot device (e.g., the top 10 symbols). The partitioning doesn't help a lot there unless you can spread the time around and rejoin (Netezza used to do something similar and it wasn't very good at it). Do you ever rebalance the partitions? Getting your top 10 symbols accidentally stuck on the same partition would be painful, especially without a way to control it.
Splaying the record column-wise helps with this, but I'm not sure if you are doing that.
So this architecture fully allows various striping or distribution options across time and space, even though the default might collocate chunks belonging to the same device on the same machine (at least since the last elasticity event).
Wish they would tone down the hype in the blog posts though; a shard, a chunk, and a partition are all the same thing. How you define the splits is completely arbitrary, and every database uses its own algorithm, including multiple levels.
I understand conceptually that this is all about splitting data, but I think if you look at most scalable databases that use sharding, it's really meant as a partitioning of the primary keyspace over servers, and then you just globally map this sharding through client libraries, some transparent proxy, or some map that every node maintains, because O(map) = O(# servers). Examples: Cassandra, DynamoDB, scale-out memcached, Vitess, ZippyDB/RocksDB, etc.
We are instead tracking per-chunk state in catalogs to give us this level of flexibility, and allowing the movement/migration of individual chunks on a much finer-grained basis. This is both for placement/management across the cluster but also for data management on single nodes, e.g., for data retention policies, tiering, lazy indexing, etc.
I realize this isn’t a hard-and-fast rule, and exceptions always exist. But one reason we try to call this out is we’re often asked why we don’t just use a standard hash-based partitioning tool/system as a black box, which wouldn’t give us this level of fine-grained visibility & control that we find highly useful for time-series data management.
[Timescale co-founder & post co-author]
I get it, you guys are using the primary keys for data => chunk and a second level for chunk => server/node. Other databases do this as well to abstract physical and logical partition placement.
Anyways, nice to see the SQL interface and AN/DN role implementation. Easier and more usable overall compared to some other solutions like Citus.
I don't know if anybody else observed this, but the article is so damn intuitive; it literally covered almost all the things. Oftentimes when such articles are published I have to google deeper to get a sense of their practical use.
I have one query: how efficient is the deletion (from disk) of chunks in the new distributed model?
The upside of deleting entire tables (chunks) like this is that you don't pay the same PostgreSQL vacuuming cost normally associated with row-by-row deletes.
"Hypertables support all standard PostgreSQL constraint types, with the exception of foreign key constraints on other tables that reference values in a hypertable"
Naively I'd assume this could cause a two-colouring of your schema: one part that can use referential integrity, and another with hypertables that can't, which feels like a pretty big trade-off.
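To spell out the restriction with a sketch (table names invented): foreign keys from the hypertable out to plain tables are fine; it's foreign keys pointing into the hypertable that aren't supported:

    CREATE TABLE devices (id int PRIMARY KEY, name text);

    CREATE TABLE readings (
      time      timestamptz NOT NULL,
      device_id int REFERENCES devices (id),  -- allowed: hypertable -> plain table
      value     double precision
    );
    SELECT create_hypertable('readings', 'time');

    -- Not supported: some other table declaring
    --   ... REFERENCES readings (...)
    -- i.e., a foreign key into the hypertable.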
Next order of business: Making mud pies.
PostgreSQL is geared towards transactional work. With time series, you basically just append data occasionally, and do analytics. PostgreSQL is terrible for analytics - its architecture is all wrong. 2 or 3 orders of magnitude slower than the state of the art if not more.
In my experience, being able to do advanced ad-hoc SQL queries is priceless for analytics. Timescale helps in scaling time series use-cases that used to scale badly in plain PostgreSQL.
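As one example of the kind of ad-hoc query I mean (a sketch with invented names; time_bucket is Timescale's time-bucketing function):

    -- 5-minute average per device over the last day.
    SELECT time_bucket('5 minutes', time) AS bucket,
           device_id,
           avg(value)
    FROM readings
    WHERE time > now() - INTERVAL '1 day'
    GROUP BY bucket, device_id
    ORDER BY bucket;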
Postgres is getting pluggable storage engines in the next version (and already has foreign data wrappers) so that can at least lead to a better storage design.
For FOSS, have a look at MonetDB. For research-oriented systems, look for publications on HyPer or VectorWise/Actian Vector (VectorH in the cluster version). Other commercial offerings are Vertica (formerly C-Store) and SAP HANA.
PostgreSQL is not even something anyone compares against in analytics...
But why don’t you just try out TimescaleDB and see for yourself?
On the linked-to article I only see references to irrelevant transactional DBMSes...
One possible exception is InfluxDB - I'm not familiar enough with it.
Anyway, try running TSBS on columnar DBMSes like Actian VectorH, Vertica, SAP HANA, etc. ClickHouse may also be relevant; it doesn't support every possible schema, but it may be enough to run TSBS.
We usually implement ones that we hear about a lot from customers, and so far those haven't come up a ton. We'll keep it in mind though as we look to keep adding new ones.
Performance comparisons to the candidates you named would be very interesting to see.
Also, it is important to keep this site attractive to people that have different opinions and experiences - do not do that trump thing! Thanks!
Of course, for each claim replicable facts are needed.
Is it better to do aggregations with the DB or through some MapReduce method (Google Dataflow?) and write that to a DB?
Also one of our investors is Two Sigma so this is an area of interest to us.
If you're open to it, I'd love to learn more about your specific use case. Want to chat sometime? ajay (at) timescale.com
I have no experience with this combination myself, so don't want to speculate about performance, etc., but when reading the docs it really seems like an afterthought.
Have you thought to release any benchmarks against KDB+?
"1.3 Kdb+ On Demand Software Performance. End User shall not distribute or otherwise make available to any third party any report regarding the performance of the Kdb+ On Demand Software, Kdb+ On Demand Software benchmarks or any information from such a report unless End User receives the express, prior written consent of Kx to disseminate such report or information."
There's definitely a huge opportunity to displace KDB+ as the mainstay for timeseries in capital markets. It is a premium product, but it obviously comes with a cost - both for the product and its operators.
Assuming all use cases can be catered for, if one needs an extra N machines if using TimescaleDB to cater for the same workloads, it might nullify any savings. If only there were a way to understand that without breaking their EULA...
"On Demand" is IIRC the cloud kdb+, which is thus much less predictable and easy to misrepresent.
32-bit version: "(c) 32 Bit Kdb+ Software Evaluations. User shall not distribute or otherwise make available to any third party any report regarding the performance of the 32 Bit Kdb+ Software, 32 Bit Kdb+ Software benchmarks or any information from such a report unless User receives the express prior written consent of Kx to disseminate such report or information."
But more broadly, feedback from finance/capital markets suggests that the choice of kdb's proprietary Q query language vs. standard SQL is top-of-mind: standard SQL expands access and insights to time-series data from a small set of highly-specialized engineers to any of their developers / analysts / tools.
I have used kdb in the past, and it is friendly in the Unix sense (picky about who it makes friends with) - first time users often write queries that use slow scalar loops.
Regardless, thanks; I'll be looking closely at TimescaleDB.
Timeseries = data with a primary key that includes time, potentially with other time properties. For example, metrics are commonly associated with a value at some time.
Think automotive monitoring (external or internal), algorithmic trading, retail monitoring, aviation, etc.
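Concretely, a minimal schema in that spirit might look like this (names entirely hypothetical):

    CREATE TABLE trades (
      time   timestamptz NOT NULL,   -- time is part of the primary key
      symbol text        NOT NULL,
      price  numeric,
      volume bigint,
      PRIMARY KEY (symbol, time)
    );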
Outflux (snapshot migration): https://www.outfluxdata.com/
Telegraf (streaming migration): https://blog.timescale.com/blog/introducing-the-postgresql-t...
Both of these tools will perform automatic schema generation in TimescaleDB, which greatly simplifies the migration.