Full disclosure: I work at StarTree, which is powered by Apache Pinot.
ClickHouse's ClickBench is a good general tool. However, it's not the be-all and end-all of performance benchmarking and testing. Its results may or may not predict how your specific use case will perform once you get to production.
It is definitely a stab at an objective benchmark suite for the real-time analytics space. But just as YCSB served as a good general performance test, a subset of users eventually wanted something specific to Cassandra and Cassandra-like databases (DSE, ScyllaDB, etc.), and so cassandra-stress emerged. We have to consider that certain databases may need testing suites that really capture their particular capabilities.
ClickHouse itself publishes a list of limitations that everyone should keep in mind when running ClickBench:
https://github.com/ClickHouse/ClickBench/#limitations
CelerData (based on StarRocks) also wrote this up:
https://celerdata.com/blog/what-you-should-know-before-using...
Plus, I want to direct people to the discussion generated when ClickBench was first posted to HN:
https://news.ycombinator.com/item?id=32084571
As user AdamProut commented back at the time:
> It looks like the queries are all single table queries with group-bys and aggregates over a reasonably small data set (10s of GB)?
> I'm sure some real workloads look like this, but I don't think it's a very good test case to show the strengths/weaknesses of an analytical databases query processor or query optimizer (no joins, unions, window functions, complex query shapes ?).
> For example, if there were any queries with some complex joins Clickhouse would likely not do very well right now given its immature query optimizer (Clickhouse blogs always recommend denormalizing data into tables with many columns to avoid joins).
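To make that contrast concrete: ClickBench's queries all run against a single denormalized hits table. Below is roughly the shape of one of its queries next to the kind of join-heavy query the suite omits. The first mirrors the suite's actual single-table pattern; the second is purely hypothetical (the orders/users tables and columns are invented for illustration):

    -- ClickBench-style: single-table group-by/aggregate over hits
    SELECT RegionID, COUNT(*) AS c
    FROM hits
    GROUP BY RegionID
    ORDER BY c DESC
    LIMIT 10;

    -- The shape ClickBench omits: a multi-table join
    -- (orders/users schema is hypothetical, for illustration only)
    SELECT u.Country, SUM(o.Amount) AS revenue
    FROM orders AS o
    JOIN users AS u ON u.UserID = o.UserID
    GROUP BY u.Country
    ORDER BY revenue DESC
    LIMIT 10;

A query processor that is never exercised on the second shape (join ordering, shuffle strategies, and so on) can sit atop the ClickBench leaderboard while still struggling on normalized schemas.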
So, again, ClickBench is a good (even great) beginning. As an industry, we should not let it be seen as the end. I'd be interested in the community's opinions on what we should be doing better, and how.