If you need to efficiently store trillions of rows and run real-time OLAP queries over billions of rows, then ClickHouse is the better choice, since it requires 10x-100x less compute resources (mostly CPU, disk IO and storage space) than PostgreSQL for such workloads.
If you need to efficiently store and query large amounts of time series data, then take a look at VictoriaMetrics. It is built on ideas from ClickHouse, but it is optimized solely for time series workloads. It has comparable performance to ClickHouse, while being easier to set up and manage. It also supports MetricsQL - a query language that is much easier to use than SQL when dealing with time series data. MetricsQL is based on PromQL from Prometheus.
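To give a feel for the difference, here is a typical time series question - per-service request rate over the last 5 minutes - written in PromQL-compatible syntax versus roughly equivalent SQL. The metric and table names are made up for illustration:

```
# MetricsQL / PromQL: per-service request rate over the last 5 minutes
# (http_requests_total is a hypothetical counter metric)
sum(rate(http_requests_total[5m])) by (service)
```

A rough SQL equivalent over a hypothetical raw-samples table would be something like:

```sql
-- Approximation only: a correct counter rate also has to handle
-- counter resets and per-series deltas, which PromQL does for you.
SELECT service,
       (max(value) - min(value)) / 300 AS requests_per_second
FROM http_requests_total_samples
WHERE ts >= now() - INTERVAL 5 MINUTE
GROUP BY service;
```

The SQL version gets longer and subtler as soon as counter resets, staleness and per-series joins enter the picture, which is the gap MetricsQL aims to close.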
I'm super excited about this news, but TSDB folks, please work on letting us put data over 1 year old on separate slow-disk servers, so we can keep the hot stuff on the NVMe servers. Once you get this sorted, it will be the perfect fit for us.
ClickHouse recently added multi-volume storage for exactly the use case you describe.  It's a great feature.
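For reference, a minimal sketch of what that tiered setup looks like in ClickHouse (the disk paths, policy name and table are assumptions for illustration): declare a hot NVMe volume and a cold HDD volume in the server config, then let a TTL clause move old parts to the cold volume.

```xml
<!-- e.g. /etc/clickhouse-server/config.d/storage.xml
     (older releases use <yandex> as the root tag instead of <clickhouse>) -->
<clickhouse>
  <storage_configuration>
    <disks>
      <nvme><path>/mnt/nvme/clickhouse/</path></nvme>
      <hdd><path>/mnt/hdd/clickhouse/</path></hdd>
    </disks>
    <policies>
      <tiered>
        <volumes>
          <hot><disk>nvme</disk></hot>
          <cold><disk>hdd</disk></cold>
        </volumes>
      </tiered>
    </policies>
  </storage_configuration>
</clickhouse>
```

```sql
-- Hypothetical table: parts with data older than a year
-- are moved from the hot volume to the cold one in the background.
CREATE TABLE events
(
    ts DateTime,
    payload String
)
ENGINE = MergeTree
ORDER BY ts
TTL ts + INTERVAL 1 YEAR TO VOLUME 'cold'
SETTINGS storage_policy = 'tiered';
```

Note this tiers data across disks within one server rather than across separate servers, but it covers the hot-on-NVMe / cold-on-slow-disk split described above.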