1. As already mentioned, if metadata (data about the time series) is already in PostgreSQL, it is convenient to stay in the same database engine: you can join metadata and time-series data in a single query, with no need to integrate the two sources in the application layer.
2. Also related to the first item: you already know the PostgreSQL API. ClickHouse has a different management API that you would have to learn, whereas if you know PostgreSQL you only need to pick up TimescaleDB's time-series-specific API on top of it.
3. ClickHouse doesn't support updating and deleting existing data the way relational databases do; its mutations are asynchronous, heavyweight operations.
In the end, the decision still depends on your needs.
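The first point above can be sketched as a single SQL query. The `sensors` metadata table, the `measurements` time-series table, and their columns are hypothetical names; with TimescaleDB, `measurements` would be a hypertable, but the join itself is plain SQL:

```sql
-- Join per-sensor metadata with raw time-series data in one query,
-- instead of stitching two data sources together in application code.
-- time_bucket() is TimescaleDB's bucketing function.
SELECT s.location,
       time_bucket('15 minutes', m.time) AS bucket,
       avg(m.temperature)                AS avg_temp
FROM measurements m
JOIN sensors s ON s.sensor_id = m.sensor_id
WHERE s.location = 'warehouse-3'
  AND m.time > now() - interval '1 day'
GROUP BY s.location, bucket
ORDER BY bucket;
```

With ClickHouse holding the time-series data, the same result would require either replicating the metadata into ClickHouse or merging the two result sets in the application.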
Originally Timescale wasn't much more than automatic partitioning, but with the new compression and scale-out features, along with the automatic aggregations and other utilities, it actually delivers pretty good overall performance. It still won't match the raw speed of ClickHouse, but in exchange you get all the functionality of Postgres (extensions, full SQL support, JSON, etc.) and can avoid big ETL jobs.
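As a rough sketch of those features (the table and column names are made up; the function names follow TimescaleDB's documented 2.x API):

```sql
-- Automatic partitioning: turn a regular table into a hypertable.
SELECT create_hypertable('conditions', 'time');

-- Native compression: segment compressed chunks by device,
-- and compress chunks older than a week automatically.
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('conditions', INTERVAL '7 days');

-- Automatic aggregation: a continuously maintained hourly rollup.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;
```

All of this lives inside ordinary Postgres, which is why the ETL pipeline to a separate analytics store can often be skipped.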
Another PG extension is Citus, which does scale-out automatic sharding with distributed nodes but is more generalized than Timescale, handling non-timeseries use cases as well. Microsoft offers Citus on Azure.
If you need to efficiently store trillions of rows and run real-time OLAP queries over billions of rows, then ClickHouse is the better choice, since it requires 10x-100x less compute resources (mostly CPU, disk IO, and storage space) than PostgreSQL for such workloads.
If you need to efficiently store and query large amounts of time series data, take a look at VictoriaMetrics. It is built on ideas from ClickHouse but is optimized solely for time series workloads. It has performance comparable to ClickHouse while being easier to set up and manage. And it supports MetricsQL, a query language that is much easier to use than SQL when dealing with time series data. MetricsQL is based on PromQL from Prometheus.
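To illustrate the difference, here is a rolling one-hour average in MetricsQL next to roughly equivalent SQL; the metric name, table, and columns are hypothetical:

```sql
-- MetricsQL / PromQL (one line, per-series averaging is implicit):
--   avg_over_time(temperature[1h])
--
-- Rough SQL equivalent over a hypothetical samples table:
SELECT sensor_id,
       avg(value) AS avg_temp
FROM samples
WHERE metric = 'temperature'
  AND time > now() - interval '1 hour'
GROUP BY sensor_id;
```

The MetricsQL form also handles label matching and range stepping for you, which is where the ergonomics advantage shows up in practice.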
I'm super excited about this news, but TSDB please work on allowing us to put data over 1 year old on separate slow-disk servers, so we can keep the hot stuff on the NVMe servers. Once you get this sorted it will be the perfect fit for us.
ClickHouse recently added multi-volume storage for exactly the use case you describe.  It's a great feature.
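A minimal sketch of how that looks in ClickHouse: assuming the server's storage configuration defines a policy named `tiered` with a `hot` volume on NVMe and a `cold` volume on HDD (the policy, volume, table, and column names here are all assumptions), a table-level TTL moves parts older than a year to the cold volume automatically:

```sql
-- Hot data stays on the NVMe volume; parts older than one year
-- are moved to the HDD-backed 'cold' volume by the TTL rule.
CREATE TABLE metrics (
    time   DateTime,
    sensor UInt32,
    value  Float64
)
ENGINE = MergeTree
ORDER BY (sensor, time)
TTL time + INTERVAL 1 YEAR TO VOLUME 'cold'
SETTINGS storage_policy = 'tiered';
```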