It is a PostgreSQL extension that you install on top of a normal PostgreSQL server, so you give up nothing relative to plain Postgres.
Timescale works by creating a 'hypertable', which is an aggregate of many smaller 'chunk' tables. These chunks are automatically split by date or by an incrementing id. For queries that constrain on an id or a date range, this means only a few chunks have to be scanned, instead of the entire contents of the 'hypertable'. [1]
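For illustration, here's roughly what that looks like in practice (the 'conditions' table and its columns are made up for this sketch; create_hypertable is the actual TimescaleDB call):

    -- A regular PostgreSQL table, declared as usual
    CREATE TABLE conditions (
      time        TIMESTAMPTZ NOT NULL,
      device_id   TEXT,
      temperature DOUBLE PRECISION
    );

    -- Convert it into a hypertable partitioned on the time column;
    -- TimescaleDB now creates chunks and routes rows automatically
    SELECT create_hypertable('conditions', 'time');

    -- This query only has to scan the chunks covering the last day
    SELECT * FROM conditions WHERE time > now() - INTERVAL '1 day';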
Timescale also offers other features like compression, which can save you up to ~96% of disk space while also improving query performance in some cases. [2][3]
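As a rough sketch of how that's enabled (continuing the hypothetical 'conditions' table from above; the exact policy functions vary a bit between TimescaleDB versions):

    -- Turn on native compression, segmenting by device for better ratios
    ALTER TABLE conditions SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'device_id'
    );

    -- Have a background job compress chunks older than 7 days
    SELECT add_compression_policy('conditions', INTERVAL '7 days');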
It also has something they call 'continuous aggregates' [4], which are similar to PostgreSQL's materialized views but do not require manual refreshing; instead, they update periodically through an automatic background job. A feature that builds on this, called 'real-time aggregates', combines the data in a continuous aggregate with the raw data in the underlying tables that has yet to be materialized.
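A minimal sketch of a continuous aggregate over the same hypothetical 'conditions' table (time_bucket is TimescaleDB's time-grouping function):

    -- Hourly per-device averages, maintained by a background job
    CREATE MATERIALIZED VIEW conditions_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           device_id,
           avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY bucket, device_id;

    -- Schedule the automatic refresh (no manual REFRESH needed)
    SELECT add_continuous_aggregate_policy('conditions_hourly',
      start_offset      => INTERVAL '3 hours',
      end_offset        => INTERVAL '1 hour',
      schedule_interval => INTERVAL '1 hour');

With real-time aggregation enabled (the timescaledb.materialized_only = false view option), querying conditions_hourly transparently combines the materialized buckets with the not-yet-materialized raw rows.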
There are a lot more things besides that, but I think that's a decent overview of the major features it brings to the table. From a dev perspective, these all make the data and the database easier to work with (especially for time-series data). There is an API reference [5] that covers the other commands Timescale adds, if you want to see what else it can help you do.
There are two main things most developers will benefit from. First, how we manage the automatic partitioning of your incoming data (hypertables), something that is non-trivial to do yourself even though other tools exist for it. Because we do it with a time-based focus, we can be really efficient and smart about it.
Second, we've improved the query planner in PostgreSQL around the parts that relate to querying time-based, partitioned data, and provided special time-based functions. These improvements help you efficiently run the queries that time-series applications most often need. A quick example is something like "LAST()", which retrieves the most recent value within a given time range. There are ways in SQL to do something similar (LATERAL JOINs or CTEs, for instance), but they're usually slower and bulkier to maintain. When dealing with time-series data, getting the most recent value for an object is what you're doing most often.
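For instance (using the same hypothetical table; last() is the TimescaleDB aggregate, and the LATERAL version below is roughly equivalent plain SQL):

    -- Most recent temperature per device, via TimescaleDB's last()
    SELECT device_id, last(temperature, time) AS latest_temp
    FROM conditions
    GROUP BY device_id;

    -- The plain-SQL equivalent is bulkier to write and maintain
    SELECT d.device_id, c.temperature AS latest_temp
    FROM (SELECT DISTINCT device_id FROM conditions) d,
    LATERAL (SELECT temperature FROM conditions
             WHERE device_id = d.device_id
             ORDER BY time DESC LIMIT 1) c;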
When you add those two foundational features, everything else that @drpebcak mentioned becomes an amazing value-add that you just can't get elsewhere.
Back in 2015, I'd architected and deployed a system for a AAA game that handled 24B events/day at launch without breaking a sweat, and supported 200ms round-trip ingestion-to-aggregation SLAs with no windowing (the protocol and ingestion layer did most of the heavy lifting: sequential-ordering _guarantees_ on events, even across load balancing and connection migration, meant no need for windowed batch ordering)... but the scenario for which it was designed was cut, and we ended up using it for just 15m slices. :eyeroll:
Still, it was used by a dozen-plus games, including a few more AAA titles; it's still in use today, and portions of the tech have been cannibalized into other products. I still get the occasional inquiry about memory fencing or memory boundaries on Console X for the 5-15μs event-generation API (improperly aligned memory could cause interlocked-increment corruption!).
Annnyways:
I had an opportunity to chat with one of the founders at Snowflake in 2017? 2018? for a few hours. I tried to convey how critical I felt true-realtime time-series engines would be moving forward, and the reception was rather lukewarm. If they had been as excited as I was, it'd have been one of the few opportunities that could have pulled me away from my dream job.
I still feel the world will need this architecture as we move towards more ML/AI-driven decision making, and that the company which can get traction here will be in a pivotal position moving forward.
Sometimes I wonder about feeling pressured to shift into Data & Applied Science to stay at that org (there just didn't seem to be vertical opportunities in the dev track). I excel in this job too, and I love what I work on... but dang sometimes I feel that the architect career path had even bigger impact potential. It was a fun couple decades. :P
[1] https://docs.timescale.com/latest/using-timescaledb/hypertab...
[2] https://docs.timescale.com/latest/using-timescaledb/compress...
[3] https://blog.timescale.com/blog/building-columnar-compressio...
[4] https://docs.timescale.com/latest/using-timescaledb/continuo...
[5] https://docs.timescale.com/latest/api