I like pipelining as an HTTP/1.1 feature. I have successfully used TCP/TLS clients like netcat and openssl to retrieve text/html in bulk for many years, and the servers have always worked well for me.
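Roughly, pipelining over a raw socket amounts to something like this minimal sketch (placeholder host, no error handling, just the idea of writing both requests before reading any response):

    import socket

    HOST = "example.com"  # placeholder host, not from the comment above

    # Two requests written back-to-back on one connection; the second asks
    # the server to close the connection when it is done.
    requests = (
        f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
        f"GET /robots.txt HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    ).encode()

    with socket.create_connection((HOST, 80)) as s:
        s.sendall(requests)          # both requests go out before reading anything
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:             # server closed the connection
                break
            chunks.append(data)

    print(b"".join(chunks).decode(errors="replace"))

The responses come back in order on the same connection, which is why this works so well with simple tools like netcat.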
However, I never liked chunked encoding as a feature. In theory it sounds reasonable, but in practice it is a hassle. As TE: chunked became more widespread, I eventually had to write a filter to process it. It is not perfect, but it seems to work.
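A minimal sketch of that kind of filter (not the actual one, just the general shape: read a chunked body from stdin, write the de-chunked bytes to stdout):

    import sys

    def dechunk(stream):
        """Yield the raw data from a chunked HTTP body read off a binary stream."""
        while True:
            size_line = stream.readline()
            if not size_line:
                break
            # The chunk size is hex; ignore any ";extension" after it.
            size = int(size_line.split(b";")[0].strip() or b"0", 16)
            if size == 0:
                break                # final zero-length chunk ends the body
            yield stream.read(size)
            stream.readline()        # consume the CRLF trailing each chunk
        # Any trailer headers after the last chunk are simply ignored here.

    if __name__ == "__main__":
        for block in dechunk(sys.stdin.buffer):
            sys.stdout.buffer.write(block)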
Not surprised chunked encoding can cause problems on the server side. NetBSD has an httpd that tries to be standards-compliant but still has not implemented chunked requests.
HTTP/2 only has chunked encoding (it isn't called that, but that's what it is). Chunked is much, much nicer than the alternative, because sometimes you don't know the length a priori.
Chunked is also unnecessary sometimes; for me, that happens to be most of the time. Sometimes I can avoid it by specifying HTTP/1.0, but not all servers respect that.
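The HTTP/1.0 trick looks roughly like this (placeholder host; as noted, some servers ignore the version and chunk anyway):

    import socket

    HOST = "example.com"   # placeholder

    # An HTTP/1.0 request: the server shouldn't use Transfer-Encoding: chunked,
    # and the end of the body is signalled by the server closing the connection.
    req = f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode()

    with socket.create_connection((HOST, 80)) as s:
        s.sendall(req)
        resp = b""
        while (data := s.recv(4096)):
            resp += data

    # Print just the response headers to check whether chunking was avoided.
    print(resp.split(b"\r\n\r\n", 1)[0].decode(errors="replace"))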