
It usually comes with an increase in active series and churn rate. Of course, you can scale Prometheus horizontally by adding more replicas and by sharding scrape targets. But at some point you'd like to achieve the following:

1. Global query view. The ability to get metrics from all Prometheus instances with one request, or simply not having to think about which Prometheus holds the data you're looking for.

2. Resource usage management. No matter how hard you try, scrape targets can't be sharded perfectly, so some Prometheus instances will end up using more resources than others. This can backfire in weird ways down the road and reduce the stability of the whole system.


What makes you think that about the docs? Of course, they were written by developers, not tech writers. But still, what do you think could be improved?

We use the same approach in the time series database I'm working on. While file creation and fsync aren't atomic, the rename [1] syscall is. So we create a temporary file, write the data, call fsync, and if all is good, atomically rename it so it becomes visible to other readers (rough sketch after the links below). I gave a talk about this [2] a few months ago.

[1] https://man7.org/linux/man-pages/man2/rename.2.html

[2] https://www.youtube.com/watch?v=1gkfmzTdPPI
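
Roughly, the pattern looks like this in Go (a minimal sketch, not the actual VictoriaMetrics code; names and error handling are simplified):

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // writeFileAtomic writes data to path so readers either see the old
    // content or the complete new content, never a partially written file.
    func writeFileAtomic(path string, data []byte) error {
        dir := filepath.Dir(path)

        // Create the temp file in the same directory, so the final rename
        // stays on one filesystem (rename(2) is only atomic in that case).
        tmp, err := os.CreateTemp(dir, ".tmp-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op once the rename succeeds

        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        // Flush the data to stable storage before publishing the name.
        if err := tmp.Sync(); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        // Atomic switch: readers see either the old file or the new one.
        // For full crash safety you'd also fsync the parent directory.
        return os.Rename(tmp.Name(), path)
    }

    func main() {
        if err := writeFileAtomic("snapshot.json", []byte(`{"ok":true}`)); err != nil {
            log.Fatal(err)
        }
    }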


You can't atomically allocate a unique identifier (e.g. the next value of a counter) with rename; it'll silently overwrite the destination. That's what link(2) is for.
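
Fair point. For completeness, a hedged Go sketch of that link(2) pattern (the file names and layout here are made up for illustration): link fails with EEXIST when the destination already exists, so whichever process links first atomically owns that number.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // claimNext publishes tmpPath under the first free "segment-N" name.
    // link(2) fails if the destination already exists, so whoever links
    // first atomically wins that number; rename(2) would just overwrite it.
    func claimNext(dir, tmpPath string, start uint64) (uint64, error) {
        for n := start; ; n++ {
            dst := filepath.Join(dir, fmt.Sprintf("segment-%08d", n))
            err := os.Link(tmpPath, dst)
            if err == nil {
                // We own number n; drop the temporary name.
                return n, os.Remove(tmpPath)
            }
            if !os.IsExist(err) {
                return 0, err
            }
            // Number n was taken by a concurrent writer; try the next one.
        }
    }

    func main() {
        n, err := claimNext(".", "staged.dat", 0)
        fmt.Println(n, err)
    }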


ClickHouse recently got support for a TimeSeries table engine [1]. It is marked as experimental, so yes, early stage. The engine is quite interesting: data can be ingested via the Prometheus remote write protocol and read back via the Prometheus remote read protocol. Reading back is the weakest part, though, because remote read requires sending raw blocks of data back to Prometheus, which then unpacks them and does the filtering and transformations on its own. So this doesn't let you leverage the true power of ClickHouse: query performance.

Yes, you can use SQL to read metrics directly from ClickHouse tables. However, many people prefer the simplicity of PromQL to the flexibility of SQL. So until ClickHouse gets native PromQL support, I'd still call it early stage.

[1] https://clickhouse.com/docs/en/engines/table-engines/special...


Here you go: https://victoriametrics.com/blog/mimir-benchmark/ It is from Sep 2022, though; it would be great to get newer results.


Storing telemetry efficiently is only part of what monitoring is supposed to do. The other part is querying: ad-hoc queries, dashboards, alerting queries executed every 15s or so. For querying to be fast, there has to be an efficient index (or multiple indexes, depending on the query). Since you referred to ClickHouse as an efficient columnar storage, please see what makes it different from a time series database: https://altinity.com/wp-content/uploads/2021/11/How-ClickHou...


And yet people use ClickHouse quite effectively for this very problem; see the comment here: https://news.ycombinator.com/item?id=39549218

There are also time-series databases out there that are OK with high cardinality: https://questdb.io/blog/2021/06/16/high-cardinality-time-ser...


> And yet people use ClickHouse quite effectively for this very problem

There is no doubt that ClickHouse is a super-fast database, and no one is stopping you from using it for this very problem. My point is that specialized time series databases will outperform ClickHouse at it.

> There are also time-series databases out there that are OK with high cardinality

But doesn't this blog's "tolerance to cardinality" come down to QuestDB indexing only one of the columns in the data generated by that benchmark?

TSDBs like Prometheus, VictoriaMetrics or InfluxDB will filter by any of the labels with equal speed, because that's how their index works. Their users don't need to think about the schema or about which column should be present in the filter.

But in ClickHouse and, apparently, in QuestDB, you need to specify a column or a list of columns for indexing (the fewer columns, the better). If the user's query doesn't filter on an indexed column, query performance will be poor (full scan). There's a toy sketch of the label-index idea below.

See how exactly this happened in another benchmarketing blog post from QuestDB: https://telegra.ph/No-QuestDB-is-not-Faster-than-ClickHouse-...
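
To make the difference concrete, here is a toy Go sketch of the label-based inverted index idea (not how any of these TSDBs is actually implemented): every label=value pair gets its own posting list, so filtering on any label is the same lookup plus an intersection.

    package main

    import "fmt"

    type seriesID int

    // invertedIndex maps every "label=value" pair to the set of series that
    // contain it, so a filter on any label is the same cheap lookup.
    type invertedIndex map[string]map[seriesID]struct{}

    func (ix invertedIndex) add(id seriesID, labels map[string]string) {
        for k, v := range labels {
            key := k + "=" + v
            if ix[key] == nil {
                ix[key] = map[seriesID]struct{}{}
            }
            ix[key][id] = struct{}{}
        }
    }

    // lookup intersects the posting lists of all requested label filters.
    func (ix invertedIndex) lookup(filters map[string]string) []seriesID {
        var result map[seriesID]struct{}
        for k, v := range filters {
            postings, ok := ix[k+"="+v]
            if !ok {
                return nil // nothing matches this label value
            }
            if result == nil {
                result = postings
                continue
            }
            next := map[seriesID]struct{}{}
            for id := range postings {
                if _, found := result[id]; found {
                    next[id] = struct{}{}
                }
            }
            result = next
        }
        ids := make([]seriesID, 0, len(result))
        for id := range result {
            ids = append(ids, id)
        }
        return ids
    }

    func main() {
        ix := invertedIndex{}
        ix.add(1, map[string]string{"job": "api", "instance": "host-a"})
        ix.add(2, map[string]string{"job": "api", "instance": "host-b"})
        // Filtering by "instance" is as cheap as filtering by "job".
        fmt.Println(ix.lookup(map[string]string{"instance": "host-b"})) // [2]
    }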


I agree that specialised DBs outperform a general-purpose OLAP database. The question is what "outperform" means. In this area queries don't need to be ultra-fast, they need to be reasonably fast to be comfortable to work with, so missing indexes for some attributes would likely be okay. Looking at https://clickhouse.com/blog/storing-log-data-in-clickhouse-f..., they added just bloom filters for columns. Which makes sense: it is not a full-blown index, but it will likely yield reasonable results (a toy sketch of the idea below). But this is all theoretical; I haven't built such a solution myself (we're working on one now for in-house observability), so I'm probably missing something that can only be discovered in practice.
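
For the curious, a toy Go sketch of why a per-block bloom filter gives "reasonable" results without being a full index (purely illustrative, not ClickHouse's implementation): it can only answer "definitely not here" or "maybe here", which is enough to skip most data blocks that can't match a query.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // bloom is a toy Bloom filter: "definitely absent" or "maybe present".
    type bloom struct {
        bits []bool
        k    int // number of hash functions
    }

    func newBloom(m, k int) *bloom { return &bloom{bits: make([]bool, m), k: k} }

    func (b *bloom) positions(s string) []int {
        pos := make([]int, b.k)
        for i := 0; i < b.k; i++ {
            h := fnv.New64a()
            fmt.Fprintf(h, "%d:%s", i, s) // cheap way to derive k hashes
            pos[i] = int(h.Sum64() % uint64(len(b.bits)))
        }
        return pos
    }

    func (b *bloom) add(s string) {
        for _, p := range b.positions(s) {
            b.bits[p] = true
        }
    }

    // mayContain returns false only if s was definitely never added.
    func (b *bloom) mayContain(s string) bool {
        for _, p := range b.positions(s) {
            if !b.bits[p] {
                return false
            }
        }
        return true
    }

    func main() {
        // One filter per data block: a query for trace_id=abc123 only
        // reads blocks whose filter answers "maybe".
        blk := newBloom(1024, 3)
        blk.add("trace_id=abc123")
        fmt.Println(blk.mayContain("trace_id=abc123")) // true
        fmt.Println(blk.mayContain("trace_id=zzz999")) // false (almost certainly)
    }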

Btw, we use VictoriaMetrics now at work. It works well and queries are fast, but we're forced to always think about cardinality, otherwise either performance or cost gets hurt. This is okay for a predefined set of metrics & labels, but it doesn't allow deep explorations.


In QuestDB, only SYMBOL columns can be indexed. However, queries can sometimes run faster without indexes. This is because, under the hood, QuestDB runs very close to the hardware and only lifts the relevant time partitions and columns for a given query, so table scans between given timestamps are very efficient. This can be faster than using indexes when the scan is performed with SIMD and other hardware-friendly optimizations.

When cardinality is very high, indexes make more sense.


> Would it be possible to do this in Postgres as well?

Of course! The question is only in your requirements. Keeping a simple counter with limited cardinality should work just great. But nowadays monitoring is much more demanding than that: for monitoring k8s clusters, the average ingestion rate varies from 100K to 2Mil metric samples per second. I'm not sure it would be the right decision, resource-wise, to use Postgres for storing this.

So when requirements are high, and they are for real-time infrastructure and application monitoring, it is better to consider something like ClickHouse (for people familiar with Postgres) or VictoriaMetrics (for people familiar with Prometheus).


> mimir because of scale & self-host options

Have you looked at VictoriaMetrics [0] before opting for Mimir?

[0] https://victoriametrics.com/blog/mimir-benchmark/


> Prometheus only handle aggregated data, though.

That's not true. You're referring to the pull-based approach to metrics collection. It has its tradeoffs (like fixed-interval scraping), but it has a lot of benefits too (like higher reliability); see the minimal pull-model example at the end of this comment. Check the following link [0] from the VictoriaMetrics docs; VictoriaMetrics supports both push and pull approaches. Prometheus also gained push support this year, though.

However, the main difference between Prometheus-like systems (Thanos, Mimir, VictoriaMetrics) and more traditional time series DBs like InfluxDB or TimescaleDB is that the former are designed to reflect a system's state, while the latter are designed to reflect a system's events. That's the main difference in paradigm, data model, and query language. There is a reason why PromQL is so easy in 99% of cases, and so complex and annoying when users want to express what they're used to from traditional databases.

I'm saying this because I went through creating a Grafana datasource for ClickHouse [1], and I felt how complicated it is to express even the most straightforward PromQL query in SQL, and vice versa.

If you'd like to learn more about the differences between common queries for plotting time series in PromQL and SQL, see my talk here [2].

[0] https://docs.victoriametrics.com/keyConcepts.html#write-data

[1] https://grafana.com/grafana/plugins/vertamedia-clickhouse-da...

[2] https://youtu.be/_zORxrgLtec?t=835
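
As an aside, this is roughly what the pull model looks like from the application side, using the prometheus/client_golang library (a minimal sketch; the metric name and port are made up): the app only exposes its current state on /metrics, and the scraper decides when and how often to collect it.

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // The application only exposes its current state; Prometheus (or vmagent)
    // decides when to scrape it, so a misbehaving target can't flood storage.
    var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
        Name: "myapp_http_requests_total",
        Help: "Total number of handled HTTP requests.",
    })

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            requestsTotal.Inc()
            w.Write([]byte("hello"))
        })
        // Scraped at a fixed interval by the monitoring system.
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":8080", nil)
    }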


It's only a matter of time until they hire younger engineers who are familiar with modern monitoring systems and eager to apply their knowledge in practice.


What is it that you think prometheus offers over other solutions? It is more likely that the younger engineer is going to learn that companies don't care about what is popular on HN.


> What is it that you think prometheus offers over other solutions?

I like Prometheus and think it is a great piece of software. But even without going into details, Prometheus is baked into Kubernetes monitoring [0]. That's the first monitoring system young engineers will meet when learning k8s. Besides, k8s and Prometheus are both CNCF projects, which means they will be promoted in synergy with each other.

> It is more likely that the younger engineer is going to learn that companies don't care about what is popular on HN.

This is not what I think younger engineers do :)

[0] https://kubernetes.io/docs/tasks/debug/debug-cluster/resourc...

