I have never seen it removed from a server. It is an option that some website operators may disable, but most sites leave it enabled, or the httpd they use enables it by default.
Ok, sure, but if the clients don't implement it... does it matter that some servers do?
EDIT: Well, maybe it does matter, e.g. if it creates request smuggling vulnerabilities in the presence of reverse proxies that don't interpret the spec the same way as the origin servers.
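To make that concrete, the textbook CL.TE case is a single request carrying both length headers (host and payload made up here, lines ending in CRLF):

```
POST / HTTP/1.1
Host: example.com
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED
```

A front end that trusts Content-Length forwards all 13 body bytes as one request; a back end that trusts Transfer-Encoding sees the body end at the zero-size chunk and treats the leftover "SMUGGLED" as the start of the next request on the reused connection. RFC 7230 says Transfer-Encoding overrides Content-Length when both are present, but not every implementation agrees, and that mismatch is exactly what makes it exploitable.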
I like pipelining as a 1.1 feature. I have successfully used TCP/TLS clients like netcat and openssl to retrieve text/html in bulk for many years. The servers always worked great for me.
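For anyone curious what that looks like on the wire, here is a rough Python equivalent of the netcat approach (host and paths are placeholders, and it assumes the server keeps the connection open and answers pipelined requests in order):

```python
# Rough sketch of pipelining: both requests are written before anything is
# read back; the last request asks the server to close the connection so
# EOF marks the end of the second response.
import socket

HOST = "example.com"  # placeholder host

requests = (
    b"GET /one.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /two.html HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(requests)
    parts = []
    while True:
        data = sock.recv(65536)
        if not data:
            break
        parts.append(data)

# Both responses come back on the same connection, in request order.
print(b"".join(parts).decode("latin-1", errors="replace"))
```

The same request bytes, piped into something like openssl s_client, cover the TLS case.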
However, I never liked chunked encoding as a feature. In theory it sounds reasonable, but in practice it is a hassle. As TE chunked became more widespread, I eventually had to write a filter to process it. It is not perfect, but it seems to work.
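Not my actual filter, but the job it does is roughly this sketch: read a chunked body on stdin (headers already stripped off) and write the decoded payload to stdout.

```python
# Minimal chunked-decoding filter: stdin carries only the chunked body,
# stdout gets the decoded payload. Trailer headers are read and discarded.
import sys

def read_line(stream):
    """Read one CRLF-terminated line, returned without the CRLF."""
    line = bytearray()
    while True:
        byte = stream.read(1)
        if not byte:
            return bytes(line)
        if byte == b"\n" and line.endswith(b"\r"):
            return bytes(line[:-1])
        line += byte

def dechunk(src, dst):
    while True:
        size_line = read_line(src)
        if not size_line:
            return  # truncated or malformed input
        size = int(size_line.split(b";")[0], 16)  # ignore chunk extensions
        if size == 0:
            while read_line(src):  # skip optional trailers up to the blank line
                pass
            return
        dst.write(src.read(size))
        src.read(2)  # the CRLF that terminates each chunk's data

if __name__ == "__main__":
    dechunk(sys.stdin.buffer, sys.stdout.buffer)
```

Usage is just `some_http_client | python3 dechunk.py > page.html` (dechunk.py being whatever you save the sketch as), with the caveat that the response headers have to be stripped first.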
Not surprised chunked encoding can cause problems on the server side. NetBSD has an httpd that tries to be standards-compliant but still has not implemented chunked requests.
HTTP/2 effectively only has chunked encoding (it isn't called that, but a body split across DATA frames is the same idea). Chunked is much, much nicer than not having it, because sometimes you don't know the length a priori.
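The unknown-length case is easy to picture with a tiny sketch: each piece gets wrapped as it is produced and a zero-size chunk marks the end, so no Content-Length is ever needed (the generator here is made up for illustration):

```python
# Frame pieces of unknown total length as an HTTP/1.1 chunked body:
# <hex size>CRLF<data>CRLF per piece, then a zero-size chunk to finish.
def chunked(pieces):
    for piece in pieces:
        if piece:  # an empty piece would otherwise terminate the stream early
            yield b"%x\r\n%s\r\n" % (len(piece), piece)
    yield b"0\r\n\r\n"

def rows():
    # Stand-in for a body generated on the fly, total size unknown up front.
    for i in range(3):
        yield ("row %d\n" % i).encode()

print(b"".join(chunked(rows())))
# b'6\r\nrow 0\n\r\n6\r\nrow 1\n\r\n6\r\nrow 2\n\r\n0\r\n\r\n'
```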
Chunked is also unnecessary sometimes; for me, that happens to be most of the time. Sometimes I can avoid it by requesting HTTP/1.0 (a compliant server won't send a chunked response to an HTTP/1.0 client), but not all servers respect that.
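A rough sketch of that trick (example.com is a placeholder): send the request as HTTP/1.0 and read until the server closes, since without chunked the body is delimited by Content-Length or by the connection closing.

```python
# Request as HTTP/1.0 so a compliant server avoids chunked encoding; the
# body then ends either at Content-Length bytes or when the server closes.
import socket

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    response = b""
    while True:
        data = sock.recv(65536)
        if not data:
            break  # connection close marks the end of the response
        response += data

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("latin-1", errors="replace"))
```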