I'm doing this with ClickHouse querying Parquet files on S3 from an EC2 instance in the same region as the S3 bucket (yes, DuckDB is pretty similar). S3 time to first byte within AWS is about 50 ms, and I get close to saturating a big EC2 instance's 100 Gbps link doing reads. For OLTP-type queries fetching under 1 MB you'll see ~4 round trips plus transfer time of the compressed data, so 150-200 ms latency.
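For context, this is roughly the shape of query I mean — a minimal sketch using ClickHouse's s3() table function. Bucket, path, and columns are placeholders:

    -- Hypothetical point lookup over Parquet on S3 (bucket/schema made up).
    -- Credentials assumed to come from the instance role or environment.
    SELECT user_id, event_time, payload
    FROM s3(
        'https://my-bucket.s3.us-east-1.amazonaws.com/events/date=2024-01-01/*.parquet',
        'Parquet'
    )
    WHERE user_id = 42
    LIMIT 100;
    -- Even for a sub-1 MB result, expect a few S3 round trips
    -- (footer, metadata, then row groups) before data starts flowing.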
Are you using the S3 local cache? Do you have heavy writes? Which S3 disk type, if any, are you using (s3, s3_plain, s3_plain_rewritable)? Or are you just using the s3 table functions?
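(If it helps to check, something like this shows what's actually configured — a sketch against ClickHouse's system tables, assuming a reasonably recent version:)

    -- List configured disks and their types (s3, s3_plain, cache, ...)
    SELECT name, type, path FROM system.disks;
    -- And which storage policies/volumes map onto those disks
    SELECT policy_name, volume_name, disks FROM system.storage_policies;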
ClickHouse is amazing, but I still struggle to get it working efficiently on S3, especially for writes.
My workload is 100% read, querying zstd-compressed Parquet on S3 Standard. Neither ClickHouse nor DuckDB has a great S3 driver, which is why smart people like https://www.boilingdata.com/ wrote their own. I compared a handful of queries and found DuckDB makes a lot of small round trips, while ClickHouse takes the opposite approach and just reads the entire Parquet file.
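A rough way to see the difference yourself: run the same predicate through both engines and watch the request counts (paths are placeholders, and ClickHouse profile-event names vary by version, so treat this as a sketch):

    -- DuckDB: needs the httpfs extension for S3 reads
    INSTALL httpfs;
    LOAD httpfs;
    SELECT count(*)
    FROM read_parquet('s3://my-bucket/events/part-0.parquet')
    WHERE user_id = 42;

    -- ClickHouse: same predicate through the s3() table function
    SELECT count()
    FROM s3('https://my-bucket.s3.us-east-1.amazonaws.com/events/part-0.parquet', 'Parquet')
    WHERE user_id = 42;
    -- Then inspect the server's accumulated S3 request counters
    SELECT event, value FROM system.events WHERE event LIKE 'S3%';

S3 access logs on the bucket work too if you'd rather count GETs from the other side.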