
Obviously I'm biased, but in our testing of TimescaleDB we've found it to do well with large amounts of data. We're still working on benchmarks that we hope to present in the coming weeks/months, but we've been able to sustain high insert rates even when the database contains billions of rows and metrics. And in our preliminary comparisons to Cassandra, TimescaleDB's query latency was much lower.

Happy to try to address any concerns you may have.



Billions total? Per month? Per day? Per hour? Per minute?

I hate the word "large" in these contexts.


Our internal benchmarks show sustained insert performance of 100K+ rows/second (where each row contains 10 metrics -- some would call this 1M metrics/second), even with 10 billion rows already in the database, all on a single commodity instance with only 16GB of RAM. We will publish these results soon, along with scripts to reproduce them.

But I agree that "large" often means different things to different people.
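
To give a rough idea of the shape of that workload (this is just an illustrative sketch, not our actual benchmark harness -- the table name, column names, connection string, and batch size below are made up), a Python script like this would batch-insert rows of 10 metrics into a TimescaleDB hypertable:

    # Illustrative sketch only: hypothetical schema and batch size,
    # not the published benchmark scripts.
    import datetime
    import random
    import psycopg2
    from psycopg2.extras import execute_values

    conn = psycopg2.connect("dbname=benchmark")   # assumed connection string
    conn.autocommit = True
    cur = conn.cursor()

    # One row = timestamp + device id + 10 metric values.
    metrics = ", ".join("m%d DOUBLE PRECISION" % i for i in range(10))
    cur.execute("CREATE TABLE conditions "
                "(time TIMESTAMPTZ NOT NULL, device_id INT, %s)" % metrics)
    # create_hypertable() turns the plain table into a TimescaleDB
    # hypertable, partitioned on the time column.
    cur.execute("SELECT create_hypertable('conditions', 'time')")

    cols = "time, device_id, " + ", ".join("m%d" % i for i in range(10))
    insert_sql = "INSERT INTO conditions (" + cols + ") VALUES %s"

    BATCH = 10000   # rows per round trip; batching matters for insert rate
    while True:
        now = datetime.datetime.utcnow()
        rows = [(now, dev) + tuple(random.random() for _ in range(10))
                for dev in range(BATCH)]
        execute_values(cur, insert_sql, rows, page_size=BATCH)

The main point of the sketch is the batching: sending one row per statement would be dominated by network round trips, so any honest insert benchmark groups many rows per INSERT.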



