
Hypertables and Distributed Hypertables can be used to store any kind of data, but they work best when there is a monotonically increasing partitioning key (e.g. time), a high ingest load, and few data modifications (preferably batched).
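Roughly, it looks like this (the table name and columns are made up for illustration):

    -- Plain Postgres table for time-series readings
    CREATE TABLE conditions (
      time        TIMESTAMPTZ      NOT NULL,
      device_id   TEXT             NOT NULL,
      temperature DOUBLE PRECISION
    );

    -- Turn it into a hypertable, chunked on the monotonically
    -- increasing time column
    SELECT create_hypertable('conditions', 'time');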

The beauty of TimescaleDB being built on Postgres is that you can have your regular Postgres tables (OLTP schema) and time-series data (Hypertables) live side by side. Use 1 language (1 mindset) to query them, join them, and work with them as you see fit. With Distributed Hypertables (what the post is about) you can now partition your data to live across multiple servers, and still use your 1 mindset to query all that data.
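A sketch of both, reusing the made-up conditions table from above plus a hypothetical devices lookup table (the distributed call is from the multi-node docs; treat it as an independent example and check the syntax for your version):

    -- Distributed variant: spread chunks across data nodes,
    -- partitioned by time plus device_id as a space dimension
    SELECT create_distributed_hypertable('conditions', 'time', 'device_id');

    -- Join the hypertable with an ordinary Postgres table in one query
    SELECT d.location, avg(c.temperature) AS avg_temp
    FROM conditions c
    JOIN devices d ON d.device_id = c.device_id
    WHERE c.time > now() - INTERVAL '1 day'
    GROUP BY d.location;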

edit: With the preferred workload you get the most out of TimescaleDB's advanced features like compression, continuous aggregates, and data retention policies. You can use continuous aggregates to build complex auto-updating materialized views that are used automatically even when you query the raw tables (https://docs.timescale.com/latest/using-timescaledb/continuo...)
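A rough sketch of a continuous aggregate over the same made-up table (this is the 1.x CREATE VIEW syntax; later versions use CREATE MATERIALIZED VIEW instead):

    -- Hourly per-device averages, incrementally maintained
    CREATE VIEW conditions_hourly
    WITH (timescaledb.continuous) AS
    SELECT device_id,
           time_bucket('1 hour', time) AS bucket,
           avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY device_id, bucket;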

This sounds like the perfect fit for a write-only event log table we stored in Postgres at a previous employer. I pushed to move it to BigQuery, but this sounds like it would have been fine.

There is a more cost-effective alternative to BigQuery for storing and analyzing large amounts of logs: LogHouse [1], which is built on ClickHouse.

[1] https://github.com/flant/loghouse


Here is a community post on storing logs in TimescaleDB:

https://www.komu.engineer/blogs/timescaledb/timescaledb-for-...


Continuous aggregates look like a killer feature.

Thanks! You might also find this related feature, real-time aggregation, really powerful.

We just released it last month: https://blog.timescale.com/blog/achieving-the-best-of-both-w...

"With real-time aggregation, when you query a continuous aggregate view, rather than just getting the pre-computed aggregate from the materialized table, the query will transparently combine this pre-computed aggregate with raw data from the hypertable that’s yet to be materialized. And, by combining raw and materialized data in this way, you get accurate and up-to-date results, while still enjoying the speedups that come from pre-computing a large portion of the result."
