
How does the multi-node version work with data compression compared to the single-node version?

I like how on a single node I can use data compression and get a 95% storage saving.

In the current version, you can execute `compress_chunk` on each of the data nodes and enjoy those same savings (and it will work transparently with queries, as before).
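
To be clear, compression first has to be enabled on the hypertable before `compress_chunk` can be run. On a single node that setup looks roughly like this (the table and segmentby column here are just illustrative):

   ALTER TABLE conditions SET (
     timescaledb.compress,
     timescaledb.compress_segmentby = 'device_id'
   );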

In subsequent releases, we'll add full support for compression, e.g., you'll just create a compression policy on the access node and be off and running.


Sounds great. So I just manually execute this `compress_chunk` command once on each data node and then I have compression enabled forever on those nodes?

Not yet; I should have been clearer:

`compress_chunk` operates on a single chunk; the way to express "compress all chunks older than 1 week" is:

   SELECT compress_chunk(i) FROM show_chunks('conditions', older_than => INTERVAL '1 week') i;
https://docs.timescale.com/latest/using-timescaledb/compress...

So you'd need to set up a cron job that runs that script every night or something... at least until we release compression policy support.
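
For example, a rough sketch of that nightly job, run on each data node (the file name, database name, and schedule are made up):

   -- compress_old_chunks.sql, invoked nightly from cron, e.g.:
   --   0 3 * * *  psql -d mydb -f /path/to/compress_old_chunks.sql
   SELECT compress_chunk(i)
     FROM show_chunks('conditions', older_than => INTERVAL '1 week') i;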



