Imply is a startup with a modern cloud/on-prem distribution of Druid with a built-in visualization and querying tool: https://imply.io/
Would anyone who has used it like to chime in on whether it fit their requirements for time-series data processing? Thanks!
PipelineDB is a Postgres build (and soon to be an extension) that runs real-time continuous "queries" on incoming data as materialized views. This is similar to Kafka's KSQL or Spark Streaming: it effectively takes in a constant stream of data and re-runs a query incrementally to give you up-to-date analytics with windowing.
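If it helps, a continuous view in PipelineDB is declared in plain SQL over a stream, roughly like this (the stream and column names here are made up for illustration, and the exact DDL differs a bit between the standalone build and the extension):

    CREATE STREAM page_views (url text, latency_ms int, ts timestamptz);

    -- Incrementally maintained rollup: only the aggregate state is stored,
    -- not the raw events flowing through the stream.
    CREATE CONTINUOUS VIEW latency_per_minute AS
      SELECT date_trunc('minute', ts) AS minute,
             url,
             count(*) AS views,
             avg(latency_ms) AS avg_latency
      FROM page_views
      GROUP BY minute, url;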
Neither product is a full-scale data warehouse solution. TimescaleDB is currently limited to time-based partitioning, is single-node only, and still uses standard row-based tables. PipelineDB discards raw incoming data and only keeps the materialized views, so you must know your queries beforehand. Both give you the advantage of full SQL and the rest of the Postgres ecosystem, so you can easily join and analyze against other operational data without lots of ETL.
Another worthy mention is CitusDB, an auto-sharding extension to Postgres, mainly focused on scaling horizontally for OLTP scenarios.
To be clear, TimescaleDB supports partitioning by other dimensions, as long as one dimension is time-based. That is, one partition dimension must be time-based, but you can add additional dimensions as well (e.g. device_id).
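For anyone curious, that looks roughly like this (the schema is just an example):

    CREATE TABLE conditions (
      time        timestamptz NOT NULL,
      device_id   text        NOT NULL,
      temperature double precision
    );

    -- One time dimension is required; a second "space" dimension on
    -- device_id spreads the data across (here) 4 partitions per time slice.
    SELECT create_hypertable('conditions', 'time', 'device_id', 4);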
This approach dramatically limits disk IO and long-term storage requirements, and enables super high performance in most cases on modest hardware.
PipelineDB has been used in production for nearly four years now and is used by Fortune 100 companies.
My hunch is that it's possible, insofar as some additional computation for the final aggregate query is done on the coordinator in Citus.
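If I understand the Citus model correctly, the flow would look something like this (table and column names are invented for illustration): per-shard partial aggregates run on the workers, and the coordinator merges them into the final result.

    CREATE TABLE events (
      device_id text        NOT NULL,
      ts        timestamptz NOT NULL,
      value     double precision
    );

    -- Shard the raw events across workers by device_id.
    SELECT create_distributed_table('events', 'device_id');

    -- Workers compute partial counts/sums per group; the coordinator
    -- combines the partials into the final aggregates.
    SELECT device_id,
           date_trunc('hour', ts) AS hour,
           count(*)               AS events,
           avg(value)             AS avg_value
    FROM events
    GROUP BY device_id, hour
    ORDER BY hour;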
PipelineDB looks interesting, but we also need to keep the underlying raw data, and running multiple clusters requires a more complex pipeline.
One thing I will mention here is that we do have plans to add support for persistent streams after version 1.0.0 is released. We've learned a lot over the years about how our users/customers interact with streams in production and persistent streams will be built atop that foundation of understanding.
Please feel free to comment on that issue with your use case, requirements, etc. and we'll see what we can do!
Pairing the two would only be missing an in-memory and on-disk column-oriented 'analytics' tier (perhaps as a separate instance that's continuously fed from the first two?).
I think I've seen that PostgreSQL also has a columnar store extension; wondering if that would help close the remaining gap for the analytics instance.
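The extension I'm thinking of is probably cstore_fdw (from Citus Data). Assuming that's the one, the analytics tier might look roughly like this sketch (the table names are made up; newer cstore_fdw versions support INSERT ... SELECT, older ones only bulk COPY loads):

    CREATE EXTENSION cstore_fdw;
    CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

    -- Column-oriented, compressed archive table for analytics queries.
    CREATE FOREIGN TABLE metrics_archive (
      time      timestamptz,
      device_id text,
      value     double precision
    ) SERVER cstore_server
      OPTIONS (compression 'pglz');

    -- Continuously fed from the row-based operational tier, e.g. by a
    -- periodic job moving data older than a week.
    INSERT INTO metrics_archive
    SELECT time, device_id, value
    FROM metrics
    WHERE time < now() - interval '7 days';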