
Which servers / sites support HTTP/1.1 pipelining? This seems somewhat hard to lookup.

https://en.wikipedia.org/wiki/HTTP_pipelining says that proxies and web browsers mostly don't support it. It claims that it's easy for servers to support, but provides no further details. It also mentions that curl removed pipelining support.

https://forum.nginx.org/read.php?2,269248,269249#msg-269249: Some person says that Nginx doesn't support pipelining. Other people agree. I can't find anything to contradict that.

https://serverfault.com/questions/266184/does-apache-webserv...: Someone says that Apache doesn't support pipelining. Again, I can't find anything to contradict that.

https://stackoverflow.com/questions/17299489/iis-and-http-pi...: This person says IIS doesn't support pipelining. Again, I can't find contradictory evidence.

Twisted used to support pipelining, but removed it: https://twistedmatrix.com/trac/ticket/8320

So, who exactly supports HTTP/1.1 pipelining?

It is true that if you fire multiple requests off to an HTTP/1.1 host without waiting for a response, you'll probably get responses back. The thing is that most hosts will process those requests one at a time. This is not pipelining - this is just processing requests one at a time as they come in. So, you're saving the latency required to get the request to the server - but not getting any benefit from parallel processing since the servers process the requests serially. With HTTP/2, however, at least some servers will actually process those requests in parallel - potentially with better performance.
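That serial behaviour is easy to observe with Python's stdlib http.server, which happily accepts pipelined requests on one connection but handles them one at a time (a minimal local sketch, not any particular production server):

```python
import socket, threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class EchoPath(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"              # enables keep-alive
    def do_GET(self):
        body = self.path.encode()              # respond with the request path
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):              # keep the demo quiet
        pass

srv = ThreadingHTTPServer(("127.0.0.1", 0), EchoPath)
threading.Thread(target=srv.serve_forever, daemon=True).start()

with socket.create_connection(srv.server_address) as s:
    # Both requests go out before anything is read back - that is pipelining.
    s.sendall(b"GET /a HTTP/1.1\r\nHost: x\r\n\r\n"
              b"GET /b HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")
    data = b""
    while chunk := s.recv(4096):
        data += chunk
srv.shutdown()
print(data.count(b"HTTP/1.1 200"))   # two responses, served serially, in order
```

Both responses come back on the single connection, but the server read and handled the second request only after finishing the first.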



This isn't a conspiracy - the problem with HTTP/1.1 pipelining is that responses have to go out in the same order as the requests that generated them. So, if a slow request comes in that generates a small amount of data, followed by 10 fast requests that generate a lot of data, the server has to hold all those big, fast responses in memory until the slow one finishes. That's bad for utilization - and it makes it easier to DoS a server. With HTTP/2, the server can respond immediately to whichever request finishes first - which is much better for utilization of server resources.


From a server's point of view there is very little difference between connection reuse and pipelining. The latter just means more data might already be in the input socket when the first request is handled - but that data will not be used yet; it will sit around in socket buffers or some TLS buffer.

And I actually guess the person who mentioned pipelining just meant connection reuse.


I guess the difference is that the server is still free to return the "Connection: close" header on the first response and simply not read and/or process the other requests?

The client would then be expected to close the socket, possibly causing a TCP reset (if the server hadn't read all the pipelined data off the socket).


"And I actually guess..."

Your guess is incorrect. I mean sending multiple requests over a single TCP connection. Usually 100 or more.

When performing information retrieval, e.g., fetching a series of pages from the same host, I do not want out of order responses. I want the responses returned in order. I want the HTTP headers, too, as record separators and so I can be sure all requests were filled.
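Splitting such an in-order response stream back into records, using the headers as separators, is straightforward; here is a minimal Python sketch (hypothetical split_responses helper; it assumes every response carries Content-Length and ignores chunked transfer coding):

```python
def split_responses(data: bytes):
    """Split a pipelined HTTP/1.1 response stream into (headers, body) pairs.
    Assumption: every response has a Content-Length header (no chunking)."""
    records = []
    while data:
        head, _, rest = data.partition(b"\r\n\r\n")
        length = 0
        for line in head.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":", 1)[1])   # int() accepts bytes
        records.append((head, rest[:length]))
        data = rest[length:]
    return records

# Two concatenated responses, exactly as they would arrive on one socket:
stream = (b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
          b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n")
print(len(split_responses(stream)))   # 2 records, still in request order
```

Because responses arrive in request order, the i-th record can be matched to the i-th request, and a missing record means a request went unfilled.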

This sort of pipelining is not useful for websites that want to source myriad resources into their pages from external sources to serve ads, perform tracking and all that commercially-oriented stuff that is necessary for companies like Google to survive. HTTP/2 is useful for commercially-oriented use of the web. Surveillance and ads. I'm not interested in using the web that way.

HTTP/1.1 pipelining is useful for information retrieval, i.e., retrieving hundreds of resources from the same host without opening up hundreds of connections. That sort of bulk information retrieval is not compatible with online advertising and tracking. Thus, Google and other companies supporting HTTP/2 have no interest in it. It benefits users, not advertisers.

I fully expect some nasty comments from "tech" workers whenever I bring up this topic. I am speaking from a user perspective, not a "developer" perspective.

In the early days of the web, opening up hundreds of connections at once would be poor etiquette (toward the server operator). Today, many so-called "engineers" do not know any other way. Not only do they do it to servers (e.g., asynchronous requests), they do it to clients, causing a user's browser to make hundreds of requests to different servers for a single web page. Looking at a Network panel in Developer Tools in a popular browser when accessing an "average" web page reveals the sheer insanity of today's so-called "web development". The other commenter clearly has never even used HTTP/1.1 pipelining, at least not consciously, and yet he wants to offer his opinion on it. Nice.

I use HTTP/1.1 pipelining every day. For example, I use it to retrieve bulk DNS data from DoH servers. The future of HTTP/2 is uncertain. HTTP/1.1 is not going away anytime soon.

I can test every website that is currently submitted to HN for pipelining support. I would bet the majority allow pipelining. If I wanted to retrieve a large number of pages from any of them, I could use pipelining to do it.
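A quick probe along those lines can be sketched in Python (hypothetical supports_pipelining helper; exercised here against a local stdlib server, since real sites will vary):

```python
import socket, threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def supports_pipelining(host, port, n=3, path="/"):
    """Send n pipelined GETs on one connection; True if n responses return."""
    reqs = b""
    for i in range(n):
        reqs += b"GET " + path.encode() + b" HTTP/1.1\r\n"
        reqs += b"Host: " + host.encode() + b"\r\n"
        if i == n - 1:
            reqs += b"Connection: close\r\n"   # let the server end the stream
        reqs += b"\r\n"
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(reqs)                        # all requests up front
        data = b""
        while chunk := s.recv(65536):
            data += chunk
    return data.count(b"HTTP/1.1 ") >= n       # count status lines

# Local stand-in server; a real probe would target the site under test.
class Ok(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass

srv = ThreadingHTTPServer(("127.0.0.1", 0), Ok)
threading.Thread(target=srv.serve_forever, daemon=True).start()
result = supports_pipelining("127.0.0.1", srv.server_address[1])
srv.shutdown()
print(result)
```

A server that drops the connection after the first response, or answers fewer than n times, would fail the check.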

I get no benefits from HTTP/2 because I mainly use the web for non-commercial purposes. For that use, I do not use a popular browser. I do not wait for websites to "load" whilst they open dozens or even hundreds of connections to other servers. I retrieve HTML from the command line using netcat and haproxy. It's fast. I use a text-only browser to read HTML.

When using the web in the way I do, without seeing any ads, without all the automatically triggered requests to external servers, performance is not an issue. When using the web the way Google wants people to use it, chock full of ads, then performance is an issue and something like HTTP/2 makes sense. Thus, how one uses the web matters. One size does not fit all.


There are a couple of Go servers that implement it to make their benchmarks look better.


"This is not pipelining - this is just processing requests one at a time as they come in."

That's incorrect. This is pipelining as it is defined in RFC 2616 and carried forward in RFC 7230:

https://tools.ietf.org/html/rfc7230#section-6.3.2

It is not latency we are trying to save with HTTP/1.1 pipelining, it is server resources, namely the number of simultaneously open connections. (See Section 6.4)

RFC 2616 does not require parallel processing. It's optional.

Personally, I do not care about parallel processing. I want the responses returned in order. I get satisfactory performance from FIFO. That's because I only request resources from the domain I type into the computer.

A website that allows ads to be served from a variety of domains that the user never typed might have a problem with performance. However, that is the web developer's problem, not the user's. Online ads are optional. There is nothing in RFC 2616 that requires online advertising. The web works great without ads, and that is how I use it.

HTTP/2 is designed to serve the goals of companies that assist with online advertising. Google and others. Automatically triggering requests for ads from third party domains through webpages is a particular type of web usage, promoted by "tech" companies to support their online advertising "business model", but it is not the only type of web usage. It has performance problems. Go figure.

There is nothing to indicate that anyone outside of the "tech" industry is interested in this type of web use. How many users intentionally request ads? None. No user ever types in the domain of an ad server.

Requesting many small resources from the same domain, i.e., the domain the user types into the computer, using pipelined requests generally does not suffer from performance problems. It is fast and efficient for the types of web use that are not requesting ads from third party domains. Not to mention it is far more energy efficient.


"Which servers / sites support HTTP/1.1 pipelining?"

There are probably millions. IME, over 20 years of using pipelining, it is quite rare to find ones that don't. Here is a simple example.

Download the k-tree transcript archive from stackexchange.com

1116 HTTP requests, 1 TCP connection

26MB download

        stunnel -fd 0 << eof
        debug=debug 
        pid=$HOME/1.pid
        foreground=no
        [ x ]
        accept=127.0.0.77:80
        client=yes
        connect=198.252.206.29:443
        options=NO_TICKET
        options=NO_RENEGOTIATION
        renegotiation=no
        sni=
        sslVersion=TLSv1.3
        eof
        export Connection=keep-alive; 
        sh -c "$(sh -c "$(seq -f "seq -f 'seq -f "https://chat.stackexchange.com/transcript/90748/%g/%%g/%%%%g" 1 31' 1 12" 2019 2021)")" \
        |a.out \
        |nc -vvn 127.77 80 > 1.htm
        cd;kill $(cat 1.pid)

        cat > 1.l         

        int yy_get_next_buffer();
        int fileno(FILE *);
        int setenv (const char *, const char *, int);
        int dprintf(int, const char *__restrict, ...);
        #define httpMethod "GET"
        #define httpVersion "1.1"
        #define Host ""
        #define jmp BEGIN
        #define Y(x,y) fprintf(stdout,x,y)
        int count,path,ka;
        int httpheaders(){
          setenv("httpMethod",httpMethod,0);Y("%s ",getenv("httpMethod"));
          Y("%s HTTP/",getenv("Path"));
          setenv("httpVersion",httpVersion,0);Y("%s\r\n",getenv("httpVersion"));
          if(0==setenv("Host","",0))Y("Host: %s\r\n",getenv("Host"));
          if(getenv("Connection"))Y("Connection: %s\r\n",getenv("Connection"));
          fputs("\r\n",stdout);
          return 0;}
    %option nounput noinput
    %s xa xb xc
    xa "http://"|"https://"
    xb [-A-Za-z0-9.:]*
    xc [^#'|<> \r\n]*
    %%
    ^{xa} count++;setenv("Scheme",yytext,0);jmp xa;
    <xa>{xb} setenv("Host",yytext,1);if(!getenv("Host"))setenv("Host",Host,0);jmp xb;
    <xb>\n path=0;setenv("Path","/",0);httpheaders();jmp 0;
    <xb>{xc} path=1;setenv("Path",yytext,1);httpheaders();jmp 0;
    .|\n
    %% 
       int main(){yylex();exit(0);}
       int yywrap(){if(count>1){
         fputs("GET /robots.txt HTTP/1.1\r\n",stdout); 
         Y("Host: %s\r\n",getenv("Host")); 
         fputs("Connection: close\r\n",stdout);
         fputs("\r\n",stdout);};
         exit(0);}

      ^D

     flex 1.l
     cc -std=c89 -Wall -pedantic -static -pipe lex.yy.c



