
HTTP/2 Adoption Stats - dedalus
http://isthewebhttp2yet.com/measurements/structure.html
======
bjacobel
Looks like isthewebhttp2yet.com isn't HTTP/2 yet.

    
    
      $> curl --head http://isthewebhttp2yet.com
      HTTP/1.1 200 OK
      x-amz-id-2: GXE1k2FX9Wmpzdjer4hI7lmA7LK/Znlo6dN6/qUdOVSjb95g+VaDGL6EB6QODo+cv3QNkvTZcIc=
      x-amz-request-id: 51C2129F4B496C03
      Date: Sat, 23 Apr 2016 16:35:29 GMT
      Last-Modified: Tue, 05 Apr 2016 15:12:25 GMT
      ETag: "543317ac1e885c2039b13925f35afdd3"
      Content-Type: text/html
      Content-Length: 9793
      Server: AmazonS3
    

Not to imply this is their fault - I wish S3 (and Cloudfront) would support
HTTP/2 but that day is probably a long ways off. However, I've used Caddy[1]
for hosting static files with HTTP2 and free TLS via Let's Encrypt and had a
really great experience with it - might be something to look into.

[1]: [https://caddyserver.com](https://caddyserver.com)

~~~
imperalix
Looks like HTTP/2 support for CloudFront is coming soon[1].

[1]: [https://forums.aws.amazon.com/message.jspa?messageID=708630#...](https://forums.aws.amazon.com/message.jspa?messageID=708630#708630)

~~~
bjacobel
Whoa, that's awesome. I'm familiar with that forum thread (haha) but I didn't
realize there'd been a response on it from an AWS employee just in the last
couple of weeks.

------
tete
Honest question: After reading PHK's (creator of Varnish) "rant" on HTTP/2[1]
what are some good reasons for using this protocol?

Yes, there is multiplexing over a single TCP connection, but isn't normal
keep-alive good enough in most cases? And doesn't HTTP/2 make good caching
harder?

The fact that HTTP/2 takes more computing resources to parse, given that it's
binary, seems counterproductive.

Am I too negative? Am I missing something?

I barely know anything about HTTP/2, but it's hard to find good answers to
these questions. I'm hoping someone can help with that, so I can get an idea
of whether a migration to HTTP/2 is really worthwhile - or in which cases it
actually is.

[1]
[https://queue.acm.org/detail.cfm?id=2716278](https://queue.acm.org/detail.cfm?id=2716278)

~~~
saturncoleus
A couple of reasons. Look at these as things HTTP/2 does well, even if maybe
not best:

Head of line blocking is mostly solved. You can interleave sending big
messages with small ones. You can send control messages (like "Hey, I'm
shutting down soon, don't send any more traffic this way") along with other
messages. The alternative would be using multiple connections, or
reimplementing your own version on top of HTTP/2.
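
The interleaving idea can be sketched in a few lines. This is a toy model of
frame-based multiplexing, not real HTTP/2 framing; the names and the frame
size are invented for illustration:

```python
# Toy model of HTTP/2-style frame multiplexing: two logical streams
# share one ordered byte pipe by splitting payloads into small frames
# tagged with a stream id. Names and frame size are illustrative.
FRAME_SIZE = 4

def frames(stream_id, payload, size=FRAME_SIZE):
    """Split one stream's payload into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + size])
            for i in range(0, len(payload), size)]

def interleave(*streams):
    """Round-robin frames from several streams onto one connection."""
    queues = [list(s) for s in streams]
    wire = []
    while any(queues):
        for q in queues:
            if q:
                wire.append(q.pop(0))
    return wire

def reassemble(wire):
    """Receiver groups frames back into per-stream payloads."""
    out = {}
    for stream_id, chunk in wire:
        out[stream_id] = out.get(stream_id, b"") + chunk
    return out

wire = interleave(frames(1, b"a big response body"), frames(3, b"ok"))
# The small message (stream 3) completes early instead of waiting
# behind the big one, which is the head-of-line-blocking win.
assert reassemble(wire) == {1: b"a big response body", 3: b"ok"}
```

Per-stream frame order is preserved, so each side can reassemble
independently, and a small control message never waits behind a large body.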

The above is much more useful in the presence of streaming. H2 has first class
support for bidirectional streaming. It is now feasible to do a stock ticker,
or chat room, or whatever over a normal H2 connection, and not have a whole
extra protocol or browser work-arounds. Web sockets work, and hanging GET
requests work, but they are an extra burden. It would be great if the standard
protocol supported it out of the box.

TCP keep-alive is not good enough, especially in the presence of proxies. TCP
keep-alive only goes over the first hop. It is possible to work around this,
but wouldn't it be nice if this was part of the spec? Also, for what it's
worth, TCP keep-alive only works over TCP. In the case of not using TCP (like
Unix sockets), what do you send to check round trip time? What about over
shared memory? Other transports?
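
By contrast, an application-level liveness check (like H2's PING frame) rides
on whatever transport you have. TCP keep-alive itself is a kernel feature you
enable per socket; here's a sketch using Python's standard library (the
tuning options shown are Linux-specific, which is part of the portability
problem):

```python
import socket

# Enable TCP keep-alive on a socket. SO_KEEPALIVE is portable; the
# tuning knobs (TCP_KEEPIDLE / TCP_KEEPINTVL / TCP_KEEPCNT) are
# Linux-specific, which is exactly the portability problem: none of
# this exists for Unix sockets, shared memory, or other transports.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before reset
assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
sock.close()
```

And even then, those probes only reach the first hop: a proxy in the middle
answers them itself, telling you nothing about the far end.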

H2 header support is pretty useful too. Sending repetitive headers (like user
agent, referrer, auth tokens) is wasteful in HTTP/1.1. Huffman encoding lets
you win back the size of base64-encoded strings pretty easily, so the penalty
for having to use only safe characters in your headers is small.
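
A back-of-envelope illustration of that overhead, with invented but
realistically sized header values (this is a cartoon of HPACK's dynamic
table, not a real encoder):

```python
# Back-of-envelope: the same headers resent on every HTTP/1.1 request
# vs. (a cartoon of) HPACK, where an unchanged header compresses to
# roughly one table-index byte after the first request. Values are
# invented but realistically sized.
headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
    "referer": "https://example.com/some/long/page/path",
    "authorization": "Bearer " + "x" * 200,  # a fat base64-ish token
    "accept": "text/html,application/xhtml+xml",
}
per_request = sum(len(k) + len(": ") + len(v) + len("\r\n")
                  for k, v in headers.items())

requests = 100
http1_bytes = per_request * requests                       # resent verbatim
hpack_bytes = per_request + (requests - 1) * len(headers)  # ~1 byte/header after warm-up

assert hpack_bytes < http1_bytes / 10  # an order-of-magnitude saving
```

The exact ratio depends on the headers, but for anything carrying a fat auth
token the saving is dramatic.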

Some people have mentioned that this protocol was designed for making
advertising faster. While this is possibly true, Google is planning on using
HTTP/2 as its new intra/inter-Datacenter RPC transport (See gRPC). The
protocol is good enough to support browsers, mobile, and servers without
having to transliterate between protocols.

~~~
thwarted
_The alternative would be using multiple connections, or reimplementing your
own version on top of HTTP/2._

HTTP/2 is re-implementing multiple connections on top of a single TCP stream.
It's turtles all the way down.

 _The above is much more useful in the presence of streaming. H2 has first
class support for bidirectional streaming. It is now feasible to do a stock
ticker, or chat room, or whatever over a normal H2 connection, and not have a
whole extra protocol or browser work-arounds. Web sockets work, and hanging
GET requests work, but they are an extra burden._

This is in contrast to HTTP/1.1. However, we already have support for
bidirectional streaming in the form of TCP. Stock tickers, chat rooms, or
whatever have always worked over TCP. Web sockets and hanging GET requests
are an extra burden because of the real problem: browsers don't support, and
networks/firewalls are not configured to allow, end-to-end blind TCP
connections, so the concentration is on HTTP.

 _TCP keep-alive only works over TCP. In the case of not using TCP (like Unix
sockets)_

The comparison to UNIX domain sockets not supporting keepalive is a mild red
herring. TCP keepalive exists to address the limitations of a distributed
system (recovering from disconnections and lost packets), limitations which
don't exist in the UNIX domain sockets model because UNIX domain sockets are
centrally managed.

 _Google is planning on using HTTP/2 as its new intra/inter-Datacenter RPC
transport (See gRPC). The protocol is good enough to support browsers, mobile,
and servers without having to transliterate between protocols._

This portability across different networks and form factors is definitely a
strong rationale for HTTP/2.

When it comes down to it, the _true_ advantage of HTTP/2 is that it is closer
to Layer 7 than it is to Layers 3 or 4. This addresses the social and
political disadvantages of trying to use things that are lower in the stack.
It's non-trivial to get support for anything new that isn't in Layer 7. The
RFC process is arduous, and a lot of vested interests end up attempting to
co-opt a standard for their own aims, making adoption of a new Layer 4
protocol nearly impossible, not least because you need to get operating
system and network hardware vendors to support the changes, and then users
(read: companies) to understand them.

With changes closer to Layer 7, only the application changes, so a single
vendor can implement and deploy something to, effectively, the entire
internet, without having to mess with getting everyone else on board. Only
once you're looking at portability/interoperability does it become worth
going through a standardization process. This is exactly what happened with
SPDY and HTTP/2, and it's made adoption much easier and faster than other
things that have required mass scale (I'm thinking of things like IPv6 and
SPF/DomainKeys/DMARC here, which require a much larger group on board in
order to see any progress on deployment).

~~~
stephen_g
Are you saying HTTP/2 're-implementing multiple connections on top of a single
TCP stream' is a bad thing?

There is an entirely valid reason to do so: TCP connections have to go
through slow start to avoid congestion, so TCP will always be inefficient for
sending single, small files.

HTTP/2 doing this is great, because it means you can do away with the hacks
that were used with HTTP/1.1 to get around connection limits, and you can do
away with the hacks like putting all your JS in a single file to get around
slow-start (by reducing the number of files sent), which is bad for caching
(if users have your site cached and you change one JS file, their browsers
have to download the whole concatenated collection of files again).
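
Toy numbers make the caching point concrete (file sizes are invented):

```python
# Toy model of the cache-invalidation cost: one file changes out of
# ten. A concatenated bundle invalidates entirely; with per-file
# caching (viable under HTTP/2, where extra requests are cheap) only
# the changed file is re-downloaded. Sizes are invented.
files = {f"module{i}.js": 10_000 for i in range(10)}  # bytes each
changed = {"module3.js"}

bundle_redownload = sum(files.values())  # the whole bundle is stale
per_file_redownload = sum(size for name, size in files.items()
                          if name in changed)

assert bundle_redownload == 100_000  # 100 KB re-fetched
assert per_file_redownload == 10_000  # vs. 10 KB
```

The bundle forces a 10x larger re-download for a one-file change; the gap
only grows with more files.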

I'm all for trying to keep things simple, but I think much of the complexity
in HTTP/2 is solving valid, real problems that will improve the speed of the
web, which is a good thing.

~~~
thwarted
_Are you saying HTTP/2 're-implementing multiple connections on top of a
single TCP stream' is a bad thing?_

Not necessarily; I'm saying that it is another layer of multiplexing on top
of a layer of single connections, which is on top of multiplexing, which is
on top of a single connection, etc. I'm saying that the main reason HTTP/2
exists is because of the limits we had to put in place in HTTP/1.1, like
connection limits, and the uphill battle it is to get standards changed,
like how the TCP window size works or how to deal with slow start.

Eventually, we'll come full circle when someone uncovers an optimization
that occurs when you make multiple TCP connections to a server, all running
HTTP/2 over them.

------
sandstrom
I wish AWS would adopt HTTP/2 quickly (for Elastic Load Balancer). Hopefully
AWS IPv6 adoption isn't any indication of how fast they'll get to HTTP/2.

~~~
meddlepal
This is one of the really frustrating things about AWS... you have no sense of
their product roadmap. My guess is that the biggest customers, Netflix for
example, will be the drivers of HTTP 2.x support for ELB.

On that note though, can you pump HTTP 2.0 traffic through a TCP listener? I
haven't tried.

~~~
Confiks
> On that note though, can you pump HTTP 2.0 traffic through a TCP listener? I
> haven't tried.

Yes, we're using that right now with Proxy Protocol to see the original
requester. When using a TCP listener, you're of course free to use any TCP
protocol of your choosing, including HTTP/2.

A consequence is that you have to terminate TLS at the webserver instead of at
the load balancer.
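
For reference, the Proxy Protocol v1 header that gets prepended is a single
human-readable line the backend must strip before parsing the real protocol.
A minimal sketch of reading it (v1 text form only, validation omitted):

```python
def parse_proxy_v1(data: bytes):
    """Parse a PROXY protocol v1 header, the text line a TCP load
    balancer can prepend to the stream so the backend still sees the
    original requester. Returns (src_ip, src_port, remaining_bytes).
    Sketch only: no v2 binary format, almost no validation."""
    header, sep, rest = data.partition(b"\r\n")
    parts = header.decode("ascii").split(" ")
    if not sep or parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    _, _proto, src_ip, _dst_ip, src_port, _dst_port = parts
    return src_ip, int(src_port), rest

src, port, rest = parse_proxy_v1(
    b"PROXY TCP4 203.0.113.7 10.0.0.5 56324 443\r\nGET / HTTP/1.1\r\n")
assert (src, port) == ("203.0.113.7", 56324)
assert rest.startswith(b"GET")
```

Everything after the CRLF is the untouched client stream, so it works the
same whether that stream is HTTP/1.1, HTTP/2, or anything else.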

------
rspeer
This link points to the "structure" graphs instead of the "adoption" graphs.
It shows some unnecessarily difficult-to-interpret graphs about the structure
of pages served with and without HTTP/2.

The fact that the graphs are going dramatically up and to the right is a
consequence of the fact that they are cumulative graphs. Keep in mind that
_none of these axes represent time_.

I believe the link was supposed to be
[http://isthewebhttp2yet.com/measurements/adoption.html](http://isthewebhttp2yet.com/measurements/adoption.html)
.

------
homero
CloudFlare is the only reason these technologies get quick adoption, kudos

------
yeaaaaaaaaah
My website will not be adopting HTTP/2.

~~~
ancarda
Why not?

~~~
darkhorn
Because HTTP/2 doesn't support client-side certificates. For this reason, his
bank's web site won't be served over HTTP/2.

~~~
euyyn
But HTTP/2 does support client certificates.

