Yes – you won't get things like server push but the core feature of allowing the browser to queue a large number of requests and receive responses out of order is a big win even if you do nothing else.
chadaustin already mentioned nginx, which seems to be the most popular choice.
… and Apache Traffic Server just shipped support in v5.3.0, which might be of interest if you want to set up a generic front-end layer for a ton of backend services:
One thing I haven't looked into is whether any of these will try to use server push for content that is referenced by <link rel=preload> in an HTML page. That could be very useful for render-blocking CSS/JS.
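For what it's worth, here's a rough sketch (my own illustration, not from any real server) of how a front end could derive push candidates from those preload hints; the regex and the Link header usage are assumptions:

    import re

    # Hypothetical: scan an HTML body for <link rel="preload"> tags and emit
    # Link headers that an HTTP/2-aware front end might treat as push hints.
    PRELOAD_RE = re.compile(r'<link\s+rel="preload"\s+href="([^"]+)"', re.IGNORECASE)

    def push_hints(html):
        return ["Link: <%s>; rel=preload" % href for href in PRELOAD_RE.findall(html)]

    print(push_hints('<link rel="preload" href="/app.css" as="style">'))
    # ['Link: </app.css>; rel=preload']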
Do these HTTP/2 server implementations downgrade to HTTP/0.x/1.x if the client supports only an older version? Will there be v2-only servers in the near future?
If the answers are in the slides, forgive me - on the iPad, the Google slides software breaks the back button and is too annoying to read beyond a few slides.
Debugging and implementing older protocols seem to be easier, as they were text based.
> Do these HTTP/2 server implementations downgrade to HTTP/0.x/1.x if the client supports only an older version?
Many do, yes.
> Will there be v2-only servers in the near future?
Yes.
> Debugging and implementing older protocols seem to be easier, as they were text based.
Implementing text protocols seems to be easier, but writing an implementation that can handle the wide variety of both compliant and slightly non-compliant traffic is an exercise in frustration.
This is not helped by the fact that people see the text protocol and think that it's easy to implement, so they go and write their own HTTP/1.1 server and leave it on the internet. Their server is probably not quite spec-compliant, so everyone else is left trying to interop with it.
Binary protocols are hard to debug by eye, but they aren't hard to write parsers for.
> Binary protocols are hard to debug by eye, but they aren't hard to write parsers for.
Implementing a text-based protocol (SMTP, POP3, HTTP/0.x/1.x) for a client application is certainly easier and requires less documentation. Knowing the clusterfk of the binary Office document formats, the newer text-based ones are far easier to parse (be it XML or plain text doesn't matter). Be it binary or text based, one has to write a parser anyway. Only with a text-based protocol can one also use regexes or string matching, which is quite useful for non-production development/testing.
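As a trivial example of that regex trick (throwaway test code only, with made-up wire bytes):

    import re

    # Quick-and-dirty parsing of an HTTP/1.x status line with a regex:
    # fine for test tooling, and exactly the kind of thing a binary
    # framing layer doesn't let you do by eye.
    status_line = b"HTTP/1.1 200 OK\r\n"
    m = re.match(rb"HTTP/1\.[01] (\d{3}) (.*)\r\n", status_line)
    print(m.group(1), m.group(2))  # b'200' b'OK'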
I read about "prioritisation" of data as a hint for the server, and less caching of data on the client. With the recurring "net neutrality" debates, let's hope this protocol cannot be misused/used to prioritise certain packets for parties who pay extra. I am not into these debates, but it would certainly be a disadvantage for startups compared to established parties. Given the many problems with SSL (Heartbleed, broken/outdated certs, hijacked cert vendors), an HTTP/2 without SSL would be a nice fallback scenario - wildcard certs are still a bit expensive for new startups, especially if one has to replace (= costs) the certs every few months due to security concerns.
I'm going to strongly disagree with this. The problem with Office docs was due to lack of documentation, not because they were binary.
When you're parsing text, all kinds of crazy stuff can happen. You need to resize buffers as you read data, you need to know the escaping rules of each field, you need to know about line continuations, you have to know the text encoding, and many many other things.
Binary data in the abstract form has three elements: tag/type (may be inferred from position), data length (may be inferred from tag) and optional data itself. There are various ways to compose that information, but that's it otherwise. Binary protocols may be harder to read for people (you can just use wireshark dissectors though), but writing a correct and bug free de/encoder for one is massively simpler than for a text one.
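To make that concrete, here's a minimal sketch of a tag/length/value decoder in Python; the wire layout (1-byte tag, 2-byte big-endian length) is made up for illustration:

    import struct

    def decode_tlv(buf):
        records, offset = [], 0
        while offset < len(buf):
            # 1-byte tag, 2-byte big-endian length, then exactly `length` bytes
            # of value. No escaping, no encodings, no line continuations.
            tag, length = struct.unpack_from(">BH", buf, offset)
            offset += 3
            records.append((tag, buf[offset:offset + length]))
            offset += length
        return records

    print(decode_tlv(b"\x01\x00\x05hello\x02\x00\x03\xde\xad\xbe"))
    # [(1, b'hello'), (2, b'\xde\xad\xbe')]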
By design, every text protocol will require more documentation than binary, because you need to include information about data escaping and encoding.
If you want to see this in practice, implement a client for something which does support both options. I recommend memcache.
Parsing Office docs is parsing XML. We have lots of tools for parsing XML, and escaping, line continuation, text encoding etc. are all well-defined and don't need to be reimplemented specifically to support Office.
Whether parsing binary is easier or harder than parsing text depends almost completely on the grammar of the language being parsed; and let's not forget, text is, of course, a type of binary format.
If I have to do ad-hoc parsing or generation, I prefer a text format, because I have lots of tools that understand text. If I need to do production-quality work, I prefer a binary format, because I need to be complete. But if I'm integrating multiple heterogeneous systems, I want a format that is trivial to inspect and test; that may mean a well-specified text format, like JSON or XML.
I'm fairly sanguine about HTTP/2 because it's at a lower level. If I were in the business of writing HTTP clients or servers on a regular basis (rather than using existing libraries), I'd be more concerned. I only do a telnet HTTP/1.0 session every 4 months or so.
Regarding missing docs, I meant the original .doc. That was all undocumented, proprietary binary.
But again, I have to disagree about parsing text ever being easier than binary. Basically for the same protocol, passing the same data and implemented in a sane way, the text protocol is the same as binary + variable length metadata + data escaping + value conversion + text encoding of metadata. I'm happy to challenge anyone with the following: it's not possible to create a simpler text protocol than a well designed binary protocol. (looking only at encoding/decoding, not debugging side)
Where by simpler I mean: less likely to get exploited, less ambiguous, and shorter to document (when you concatenate the docs of all the encoding formats you depend on, like JSON or XML).
> it's not possible to create a simpler text protocol than a well designed binary protocol
Huh? That's irrelevant, surely? It doesn't speak to your assertion. The best binary formats don't necessarily need "parsing" at all; it could be a simple matter of mapping into memory and adjusting offsets, like an OS loader. I don't think there's any debate that binary formats can be designed so that they are far easier to load than text. We're not talking about the design of protocols here (in this subthread). We're talking about writing parsers.
Parsing an obscure binary format is harder than parsing a simple text format.
I find the response confusing. First, I was responding to "Be it binary or text based, one has to write a parser anyway. Only with a text-based protocol can one also use regexes or string matching, which is quite useful for non-production development/testing.", which we both seem to agree is false. Well-designed binary doesn't even need parsers sometimes. If you add stream multiplexing, regexes won't help you anyway.
I'm not sure where obscure binary formats come in. HTTP/2 had a choice of slightly complicated binary or more complicated text. Office had lots of programmers, even more money, and simply didn't care. It's a completely different situation from HTTP/2.
So finally: why the obscure binary format? HTTP/2's choice is really between good text and good binary.
> Knowing the clusterfk of the binary Office document formats
The Office "binary format" is simply the objects as they exist in-memory serialized to the disk. Doing it this way was a design decision made when machines were a lot more resource constrained, and fair enough, perhaps it could be revisited, but it was made for sensible reasons by people smarter than you.
So the spdy directive also does http/2? I seem to remember looking for HTTP/2 support in NGINX and finding that they only support it in the commercial version?
No, but FF and Chrome, which are the main browsers currently supporting HTTP/2, also support SPDY/3.1.
I don't know where you saw that nginx commercial has HTTP/2 (it's not mentioned on the nginx feature matrix[1]), but their announcement[2] stated that both open and commercial versions would be getting HTTP/2 support by end of year.
SPDY is an experimental protocol that does much of what HTTP/2 does (you can think of it as the beta version of HTTP/2). It will be phased out starting next year in favor of the real protocol, but people use it now with nginx.
Thank you, I have been looking for this exact explanation. I've been wondering why everyone keeps recommending SPDY when this whole thing is about HTTP/2.
I never took the time to try WebSockets, so please forgive me if this question doesn't make sense, but does HTTP/2 supersede WebSockets? I'm under the impression that HTTP/2 covers all the WebSockets use cases; is this a correct observation?
There's not much you can do with a websocket that you can't do with a Server Sent Event stream plus AJAX requests. HTTP/2 just makes that design (relatively) performant by shoving all those together into one multiplexed connection.
Websockets are still important if you need something like real-time input-event streaming (e.g. for an online game); a properly-structured binary protocol sent over a websocket will be much lower-overhead than the equivalent HTTP/2 frames.
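For anyone who hasn't used it, a minimal sketch of the SSE-plus-AJAX side mentioned above, as a bare WSGI app (the endpoint and payloads are my own illustration; any real framework would do):

    import time

    def sse_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/event-stream"),
                                  ("Cache-Control", "no-cache")])
        def ticks():
            for i in range(5):
                # SSE wire format: one or more "data:" lines, then a blank line.
                yield ("data: tick %d\n\n" % i).encode("ascii")
                time.sleep(1)
        return ticks()

    # The browser side is just new EventSource("/ticks") for the downstream,
    # plus ordinary AJAX for upstream; HTTP/2 multiplexes both over one connection.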
If you use WebSockets mainly because you want to multiplex many requests/responses over a single TCP channel, then HTTP/2 may be a preferable substitute for WebSockets.
If you use WebSockets for "realtime push", then HTTP/2's server push feature could potentially be used as an alternative (though I've not heard of anyone actually doing this yet).
If you use WebSockets because you actually want a bidirectional message-oriented transport protocol, well then you'll keep using WebSockets. :)
> If you use WebSockets for "realtime push", then HTTP/2's server push feature could potentially be used as an alternative.
One thing to keep in mind with HTTP/2 server push is that a server can only send a push in response to a client request. So this isn't a drop-in "real-time push" mechanism. To implement the equivalent of real-time push, the client and server would likely need to keep a stream within the connection in the "open" state so the server can continue to send data frames on it.
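If anyone wants to see roughly what that looks like in code, here's a sketch using the Python hyper-h2 library (my own illustration, not from the talk; socket handling and most details omitted):

    import h2.config
    import h2.connection
    import h2.events

    conn = h2.connection.H2Connection(
        config=h2.config.H2Configuration(client_side=False))
    conn.initiate_connection()

    def handle(event):
        if isinstance(event, h2.events.RequestReceived):
            # A push is only legal here, promised against this client-initiated
            # stream; the server cannot spontaneously push out of the blue.
            promised = conn.get_next_available_stream_id()
            conn.push_stream(event.stream_id, promised,
                             [(":method", "GET"), (":authority", "example.com"),
                              (":scheme", "https"), (":path", "/style.css")])
            conn.send_headers(promised, [(":status", "200")])
            conn.send_data(promised, b"body { color: red }", end_stream=True)
            # For "realtime push" you'd instead answer on event.stream_id and
            # simply never end the stream, sending DATA frames as events occur.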
> One thing to keep in mind with HTTP/2 server push is that a server can only send a push in response to a client request.
This was the difference that I was not aware of, thanks. So HTTP/2 server push is just opportunistic, while WebSockets are real-time push with a persistent connection.
From what I could find online, there are no plans to include websockets as part of HTTP/2 (according to this: https://webtide.com/http2-last-call/ ). HTTP/2 is meant to supersede all websocket use cases.
Hmm, do the HTTP/2 authors say that it supersedes WebSockets? The article you linked to says that there may have been a missed opportunity regarding consolidation of framing protocols, but that alone doesn't imply that WebSockets are obsolete. I'd assume HTTP and WebSockets would continue to exist as separate protocols, as they always have.
HTTP/2 does not allow communication from server to client. Server push is another matter, not really controlled by the server or the client. So you'll end up polling the server. This might be a bit more effective than HTTP/1 polling, but websockets will be better.
Sure it does. HTTP/2 streams are bidirectional. The websocket flow can be perfectly emulated in HTTP/2: make a websockety request, get a 200 HTTP response, then both sides keep the stream open and send data through it.
I can't actually tell if that's sarcasm anymore or not. Next slide is a new URL, new page. Based on initial testing, the behavior seems fine. Same behavior as one would have for something like http://example.com/my-presentation/1.htm, 2.htm, 3.htm, etc. and navigating them with hyperlinks on each slide that link to neighboring slides.
It's not like the Back/Forward browser buttons are overridden to behave as previous/next for the slide presentation.
Remember those horrible embedded Flash presentations that you couldn't directly link to a particular slide within the blob? Yeah, that "breaks the Web". Back/Forward is supposed to go back to the previous page the user was on (which is a "slide" in this case).
I would agree except that they also use scrolling to change slides. It feels a little weird to scroll down 3 ticks of the mousewheel, then have 3 clicks of the back button do the reverse action.
Scrolling is a separate matter. I didn't even bother to scroll the first time. I just pressed the next/previous buttons on the slide navigation bar.
In the scrolling case, I still don't see how it's "overriding the browser buttons", but rather having a JavaScript that advances to the next page on scroll.
In the scrolling scenario, my actual back and forward browser buttons behaved as expected — just for the pages (slides) I visited. No more, no less.
I opened it on my iPad; it added a browser history entry for each slide, as the OP mentioned and contrary to what my parent said. Pressing the back button 32 times isn't fun. It's just bad UX design.
There should be a "share" button that spits out the URL with the hash of the current slide - similar to YouTube, where you can click the "share" button below the video to link to a specific time in the video, e.g. https://youtu.be/I26EwcssMbY?t=1m37s
That's actually Google Docs slides in presentation mode. I'm... not a fan.
If you find yourself having to interact with these a lot, which I do, you might find it useful to click on the cogwheel icon at the bottom of the screen and select 'Open in editor'. That gives you a completely different and IMO much easier to read view of the same document:
I think it is assumed you will open a presentation in a separate tab and not need to 'return' anywhere.
You could argue that every link should open in a new tab by default and only replace the contents of the current tab in special cases, instead of the other way around. In the past 20 years we've been 'trained' to have different expectations, because of the design of the first websites, browsers and the resource limitations of those days. However, the designs and abilities of all of those have changed and it may be worthwhile to reconsider the current default.
Interesting read, but as far as I can tell this article mostly makes a strong case that using HTTP/2 at all is an "anti-pattern" for most projects[1] today, for at least three reasons.
Firstly, the presentation seems to start by arguing that round-trip latency has much more of an impact on perceived performance than bandwidth, but then argues for several techniques whose principal advantage is saving small amounts of bandwidth. So how much improvement will these new techniques really offer over current best practices for "an average web site"?
Secondly, the presentation seems to argue that simplifying front-end development processes by avoiding things like resource concatenation is a big advantage of HTTP/2, yet despite repeatedly emphasizing the need for the server to provide just the right responses to make HTTP/2 work well, it almost completely ignores the inevitable challenges of actually configuring and maintaining a server to take advantage of all of these new techniques in a real, production environment, with rapidly evolving site structure and content, numerous contributors, etc.
Essentially, this seems to be advocacy for dumping tried, tested, universal "workarounds" for the limitations of HTTP/1.1 in favour of new techniques that work well with HTTP/2 and only HTTP/2, but as an industry we have relatively little experience in what actually works well or doesn't with HTTP/2 and we have relatively few tools and relatively little infrastructure available that support it right now. And crucially, making the shift is not by any means a neutral activity; it is actively and severely harmful to several of the most important tried-and-tested techniques we've used up to now.
Finally, there is the simple matter of trust or, if you want to be kinder, future-proofing. The presentation notes that Google are deprecating SPDY from early 2016. That is the supposed HTTP replacement that was the New Shiny... yesterday, I think, or maybe it was the day before. When arguing for fundamental and irreversible changes in the basic development process and infrastructure set-up, you lose all credibility when your so-called standards fall out of favour faster than a GUI or DB library from Microsoft, and when your own browser frequently breaks due to questionable caching and related behaviour.
It's certainly true that HTTP/1.1 isn't perfect and there are practical ways it could be improved, but I don't think this presentation makes a strong case for adopting HTTP/2 as the way forward.
[1] YMMV if you actually do work for Google/Facebook/Amazon, and you really do have practically unlimited resources available to maintain both your sites and your servers, and you really are making/losing significant amounts of money with every byte/millisecond difference.
I think you are missing a lot of context regarding what SPDY is, what it was, and its relation to HTTP/2[1]. In short, SPDY was an experiment designed to test some proposed strategies for future HTTP protocol development, which it did. It was adopted as a starting point for HTTP/2. After HTTP/2 was released, Google announced it would eventually be phasing out SPDY. This is a good thing.
> It's certainly true that HTTP/1.1 isn't perfect and there are practical ways it could be improved, but I don't think this presentation makes a strong case for adopting HTTP/2 as the way forward.
Well, I read it more as "If you are planning to use HTTP/2, here are ways in which you might want to alter how you plan out your service", not "You should use HTTP/2, now here's the stuff you need to go change so you can do so." Alternatively, you can see it as "here are the performance hacks you no longer have to do with HTTP/2, because it has good mechanisms for this built in." In any case, I didn't get a strong vibe about how we need to abandon HTTP/1.1.
> In short, SPDY was an experiment designed to test some proposed strategies for future HTTP protocol development, which it did.
That may always have been the intent, but I'm not sure that was clear to those outside the Google bubble.
For example, there were numerous articles and blog posts and conference talks around the time that SPDY became a serious proposition with a similar tone to the HTTP/2 commentary we are seeing today. No doubt there were exceptions, but much of that advocacy didn't exactly come with "big red box on the first slide" warnings that SPDY was an experiment, and readers/viewers should only use it in production if they had the resources available to update things again in the relatively near future.
I do appreciate that the big news for HTTP/2 is that it has reached a more formal level of standardisation, but in the modern Web industry when browsers do whatever they want every six weeks anyway, that isn't worth as much as it used to be.
Even today, there are few production-grade tools around that actually support HTTP/2 in anything close to its final form, including a lot of the projects that do support SPDY. Phasing out browser-side support for the latter as soon as early 2016 seems like a typically aggressive Google drive to shift the web to a new technology it favours, at the expense of forcing everyone to adjust/upgrade their existing working deployments. I'm very uncomfortable about how readily they are willing to do that in their browser these days, but I don't think the permanent-beta mentality has any place in server/network infrastructure at all.
> Finally, there is the simple matter of trust or, if you want to be kinder, future-proofing. The presentation notes that Google are deprecating SPDY from early 2016. That is the supposed HTTP replacement that was the New Shiny... yesterday, I think, or maybe it was the day before.
SPDY was never meant to be "the supposed HTTP replacement". It was clear from the start that it was an experiment, designed to research alternative protocols and inform the design of some future hypothetical HTTP/2.0 - which it did, since they actually used SPDY as its base.
I almost didn't reply because it sounds like you have a bit of an axe to grind against HTTP/2, but the concerns you state are overblown. All I really gathered from the presentation was that spriting and concatenation negate the caching advantages HTTP/2 could provide and are unnecessary. It doesn't make HTTP/2 worse than HTTP/1.1. As for protocol turnover, arguably adopting SPDY was premature, but everyone knew it was when they did it. It'd be the same for someone who chooses to adopt QUIC now. One point of standardizing HTTP/2 was to let the more conservative among us start to use it now.
Definitely. I actually gave them props for their SPDY work, including the name: matching a common word makes it easier for lay people to remember. Many organizations and academics made alternatives to common protocols without the user base or influence to make them real. That Google was in a position to make things better, built a practical improvement, deployed it, and inspired the update to HTTP is a great credit to them. I wish more companies in similar positions would follow suit.
And, hopefully, Google's experimentations in other protocols and domains will challenge those controlling the status quo to adapt to the times as well. Not holding my breath but it would be nice.
> All I really gathered from the presentation was that spriting and concatenation negate the caching advantages HTTP/2 could provide and are unnecessary. It doesn't make HTTP/2 worse than HTTP/1.1.
But changing an existing production site so it no longer uses techniques like spriting and concatenation will damage performance if anything degrades the communications to HTTP/1.1.
> One point of standardizing HTTP/2 was to let the more conservative among us start to use it now.
I'm not sure anyone who is switching to HTTP/2 today could reasonably be called "conservative". There are few production-level servers available yet, and some of the big potential performance wins due to the multiplexing, prioritisation and server push aspects -- the things that in theory could render those spriting and concatenation techniques obsolete -- are barely even mentioned right now outside of the standards documentation and related announcements. I just spent a few minutes searching to see how you'd actually configure a web server to implement customised server push for maximum performance, and I literally didn't find a single explanation, nor for that matter a single server even advertising the capability.
It seems like he made fair points about HTTP/2. The presentation urges change as a topic, but fails to provide compelling arguments to do so. Nowhere in there did he imply it was worse, nor did he "overblow" anything... multiple slides imply this is highly questionable advice dependent on current tech. Slides 41-42 say "check for yourself or do your own way that's best for you based on your server for which we don't know enough". Most of the time it works as well as HTTP/1.1 - so great. I'll keep doing that.
SPDY and HTTP/2 are pure wins without any real effort. Slap the spdy directive into nginx, a 5 character change, and presto, site goes faster. On sites I've worked on, getting a 10-30% advantage is common. That's an awesome ROI for 5 chars. I'm very happy that Google and others had the need and capability to replace HTTP/1.1. They did the hard work, now we all won.
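For anyone wondering, the five-character change being described is just adding the protocol to the listen directive (assuming an nginx build with the SPDY module compiled in):

    listen 443 ssl spdy;   # adding " spdy" to an existing ssl listener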
> SPDY and HTTP/2 are pure wins without any real effort. Slap the spdy directive into nginx, a 5 character change, and presto, site goes faster. On sites I've worked on, getting a 10-30% advantage is common.
I'm happy for you that you got those kinds of results just from the switch to SPDY. None of the experiments I've seen did that well, but of course the benefits for these new protocols will depend on the specifics for each individual site.
Just to be clear, my main concern here isn't so much the protocol itself as the effect it has on all the infrastructure that is built around it and the implications for compatibility and long-term stability.
For example, there have been calls for various kinds of compression/encryption to be required. Then CRIME happened, and it turned out that using gzip compression within an encrypted stream wasn't such a good idea after all. What if the new standards had baked in that kind of security flaw?
There is also a whole industry of network monitoring tools used for intrusion protection, virus scanning, fault diagnostics, and many other useful applications. How many of those tools will work with these new protocols?
What about cache/proxy tools? Just as there are still few options for production-level servers that cope with HTTP/2, there is also a whole range of intermediary tools that serve useful purposes but don't currently have HTTP/2 counterparts.
What's your point, though? Staying on 1.1 would guarantee we continue to lack tools to use HTTP/2. The MITM proxies, which seem to all be written by incompetent software vendors, can strip out the HTTP/2 indicators and force their users back to 1. So if that's their goal, it's easy enough to achieve it.
For everyone else, it's optional. 2 has mandatory TLS in browsers (that's how the browser vendors are making it), so random proxies and so on just remain ignorant to it.
From an implementor's POV, 2 is easier to implement than 1. The new feature complexity is balanced by having sane parsing rules.
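To illustrate the "sane parsing rules" point: every HTTP/2 frame starts with a fixed 9-byte header (RFC 7540, section 4.1), so the framing layer is a few lines. A quick sketch:

    import struct

    def parse_frame_header(buf):
        # 24-bit length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id
        hi, lo, ftype, flags, stream_id = struct.unpack(">BHBBI", buf[:9])
        return ((hi << 16) | lo, ftype, flags, stream_id & 0x7FFFFFFF)

    print(parse_frame_header(b"\x00\x00\x08\x00\x01\x00\x00\x00\x01payload!"))
    # (8, 0, 1, 1): an 8-byte DATA frame with END_STREAM set, on stream 1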
And remember: we (the Internet) have been using SPDY for a couple of years now. That's a very long beta period and gave everyone plenty of time to play around, raise objections, write software, etc.
> The MITM proxies, which seem to all be written by incompetent software vendors,
I'm not sure not spending time and money supporting a proprietary protocol created by an organisation with a reputation for dropping anything it doesn't like any more at short notice qualifies anyone as "incompetent".
> can strip out the HTTP/2 indicators and force their users back to 1.
At which point anyone whose site was designed to be friendly to HTTP/2 in the ways described in the presentation will perform dramatically worse than it does under HTTP/1.1 if nothing is changed.
> 2 has mandatory TLS in browsers (that's how the browser vendors are making it), so random proxies and so on just remain ignorant to it.
And you don't see a problem with that?
> We (the Internet) have been using SPDY for a couple of years now.
Which Internet? A few high profile sites have been using it, but who else? Several major browsers haven't even supported SPDY until quite recently. Most don't support HTTP/2 fully yet. And in my search earlier, literally no-one was even talking about how to use some of the new capabilities to best effect server-side, never mind demonstrating server-side software that can actually do it.
> That's a very long beta period and gave everyone plenty of time to play around, raise objections, write software, etc.
You must be joking.
The kind of network monitoring and security devices I was talking about have 5-7 figure price tags in US dollars, sales cycles measured in months, purchasing cycles measured in years, and working lifetimes potentially measured in decades. Two years to understand, integrate, fully test, advertise, provide for evaluation, and ultimately sell support for a new protocol is nothing in this industry.
Among the projects I have recently worked on, I think three different major web servers were used, a couple of different reverse proxies for load balancing, and assorted other proxies for caching and the like. As far as I'm aware, exactly none of them fully supports HTTP/2 as of today. And just to be absolutely clear, I'm talking about a collection of software that runs probably 90% of all web sites that exist today, if not more. Perhaps you would argue that the developers of all of these tools are also incompetent, in which case I refer the honourable gentleman to the answer I gave a few moments ago.
It's true that there are some organisations that don't rely on mainstream networking hardware and standard software stacks any more. They really do literally design and build their own network infrastructure and write a lot of their own software stack instead of buying from the big brands. However, it's also true that I could probably count the number of such organisations in the entire world on my fingers. For everyone else, these issues do matter.
Edit: Here's a handy survey of the adoption of HTTP/2 by various significant tools as of a couple of months ago. It seems broadly in line with my own experience.
SPDY is used on about 4% of all websites, according to [1]. My assertion that MITM proxy vendors are incompetent has nothing to do with their SPDY support; it just takes into account their overall track record.
I'm not sure what your point is when you say you couldn't find many products with HTTP/2 support now. So what? They will come to market over time. Meanwhile, SPDY has more support, and is available in free software like nginx.
I don't see a problem with TLS being required, and I don't really care for proxies being in the middle. Again, people who want this behavior can opt in by installing a cert and being MITM'd, and if a vendor can't get HTTP/2 support, they can easily strip it until they do (and we'd hope this caching proxy is close to the user, so the perf penalty for downgrading is less).
And, for a site, supporting users on crappy proxies that force a downgrade is little different than supporting older browsers that don't have SPDY or HTTP2. So they'll make a choice. What's the big deal? Who is being hurt? How could this possibly work any other way? At some point, no matter the procedure, the final version of HttpVNext would have been completed. And vendors not paying attention would be in the exact same place.
Anyone who for whatever reason is using HTTP/1.1 will have a much worse experience if sites start making the kinds of changes recommended in this presentation to work better with HTTP/2 today.
I can see how that might help if you're using an unreliable network with significant packet loss, such as a mobile connection, but how does it reduce the number of round-trips? You're still sending the same number of requests and presumably getting equivalent responses, so at HTTP level it makes no difference. Are you talking about something at a lower level, perhaps the TCP level mechanics for retransmission of dropped packets?
So you're talking about TCP slow start here? In that case, yes, I agree that in theory compressing the headers is potentially useful, though for a well-organised site I wouldn't expect it to make much difference in practice.
You do have to try quite hard (or be careless with things like cookies) to get the size of a single request above the MTU for typical Internet usage. Chances are that in practice you are going to be requesting an HTML resource first, then probably prioritising some CSS, then JS, and then other resources like image or multimedia content. And for the responses in each case, the size will usually be dominated by the payload rather than the headers anyway. So it seems likely that in such an environment you'll be more limited initially by the response time for the HTML and the main CSS resources anyway, or by general packet loss on an unreliable network.
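Rough numbers, to put that in perspective (an initcwnd of 10 segments and a ~1460-byte MSS are assumptions, typical for current Linux defaults and a 1500-byte MTU):

    initcwnd_segments = 10
    mss = 1460
    first_window = initcwnd_segments * mss    # ~14,600 bytes before slow start bites
    typical_request = 700                     # assumed: headers plus a modest cookie
    print(first_window // typical_request)    # ~20 such requests fit in the first window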
By the time you've got those initial resources, parsed them, and started requesting the supporting content, you've probably warmed up the TCP connection enough for slow start not to make much difference any more. Beyond that point, effects like multiplexing, prioritisation and server push seem to have more potential for increasing real world performance by reducing or eliminating certain round-trips at the HTTP layer. (This assumes that the browser and server do actually take full advantage of these new options; there seems very little discussion so far about how we might achieve this in general.)
* Server push functionality / server initiated streams
* Current implementations mean HTTP/2 works over TLS _only_ - neither Firefox nor Chrome currently supports unencrypted connections. I presume this also keeps things simpler with things like proxies in the middle.
* TLS implementations must now support SNI as well, so HTTP/2 is basically a forcing function for supporting SNI, which is awesome.
As far as I understand, none for production use. I see that nginx implements SPDY (HTTP/2 is based on SPDY), but not yet HTTP/2. They promise to bring HTTP/2 by the end of the year though - http://nginx.com/blog/how-nginx-plans-to-support-http2/ .
Undertow and Jetty are both Java options that support HTTP/2. I've been playing with both and it's still very rough around the edges (have to mess with bootpath to get ALPN to work), but it does work. F5 are supporting it now too.
Basically it caches requests per referrer so the next time a request is made, it can guess what subsequent requests will be made. Documentation is still sparse and a lot of it is carryover from their SPDY features.
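A rough sketch of that heuristic (purely illustrative; no claim this matches the actual implementation):

    from collections import defaultdict

    # Remember which resources tend to be requested with a given page as the
    # referrer, then push those the next time that page is requested.
    push_map = defaultdict(set)

    def record(request_path, referrer):
        if referrer:
            push_map[referrer].add(request_path)

    def push_candidates(page_path):
        return push_map.get(page_path, set())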
According to the presentation, 9% of all HTTP on FF36 (for those who didn't disable this data collection) is HTTP/2. Surely there are people here using HTTP/2 in production: How are you serving HTTP/2 traffic?