I'm not ever supporting HTTP/2. For something "monumental" enough to be called the whole second revision of HTTP, what have we really gained? A Google-backed "server push" mechanism and some minor efficiency additions? Add to that the fact that SPDY was pushed through as HTTP/2 because nothing else was ready.
Please.
Downvoters: although I don't usually do this, I'd ask you to enter into a discussion with me instead of just hitting the down arrow. Do you honestly think my comment deserves to be silenced?
This attitude is exactly how you make sure that nothing ever changes or improves. It is "the perfect is the enemy of the good" exemplified. HTTP/2 is a huge improvement over HTTP/1.1 in many very important ways. True, it's not perfect, but guess what? 2 is not the last version number out there. We can switch to HTTP/2 now and fix the rest of the problems in HTTP/3.
Moreover, it seems like we are collectively getting better at upgrading technologies: IPv6 adoption has finally gained some momentum, and HTTP/2 is actually happening. With lessons learned from the HTTP/1.1 => HTTP/2 transition, HTTP/3 could happen in five years instead of another fifteen.
> This attitude is exactly how you make sure that nothing ever changes or improves.
On the contrary, we are in desperate need of such attitudes in software. We need everyone to stop jumping on every new thing that comes with silly promises. We need to start choosing quality over quantity. We need substantial, well-researched improvements.
I think you're confusing quantity with the end result. The quantity is about experimentation. The quality comes as the winning products are refined over time; the low-quality products never gain mass traction and are discarded. That's exactly how it should work. These things are complementary, not mutually exclusive.
That process is how innovation happens quickly. It's also how you frequently discover new things you weren't looking for, which is how a lot of innovation happens (by accident). Rapid iteration is in nearly all cases vastly superior to turtle-speed iteration.
I appreciate your optimism, but do realize that there isn't really a massive improvement unless you're Google. I don't see this as worthy of the "/2" suffix; Google might like it because it allows them to make their tech the standard, but other than that it's unnecessary marketing.
HTTP has never been the bottleneck. I think IPv6 is excellent and a needed, massive improvement, especially since IPv4 is no longer tenable. HTTP/1.1, however, still works quite well and in some circumstances keeps a larger feature set. HTTP/2 is less insane than it could be because it wasn't designed by committee at the W3C or the IETF or some other hugely bureaucratic group; that doesn't mean it's better, either.
I can't wait for HTTP/3! Hopefully this time they won't rush it.
Check out https://http2.golang.org/gophertiles and tell me that HTTP isn't the bottleneck, especially on high-latency connections. This is going to make the web so much faster.
On some small sites I've worked on, switching to SPDY shaved about 20-30% off our load times. And all we had to do was add "spdy" to our nginx.conf. That's basically the definition of a win.
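For anyone curious, the change is roughly this, a sketch of the relevant vhost block with placeholder cert paths (note that nginx 1.9.5 and later replace the spdy flag with http2):

    server {
        listen 443 ssl spdy;    # was: listen 443 ssl;
        # on nginx >= 1.9.5 the equivalent is: listen 443 ssl http2;

        ssl_certificate     /path/to/cert.pem;   # placeholder
        ssl_certificate_key /path/to/key.pem;    # placeholder

        # ...the rest of the vhost is unchanged
    }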
If this doesn't convince you to support HTTP/2, then nothing will: in the benchmark at https://www.httpvshttps.com/, HTTP/1.1 is 5x-15x slower! These insane perf gains are possible only thanks to HTTP/2, specifically its support for multiplexing. Please read the spec and understand the technical implications before criticizing.
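And if you'd rather see multiplexing for yourself than take a benchmark's word for it, here's a rough sketch in Go (the URL is just the Go HTTP/2 demo server linked elsewhere in this thread; recent Go clients negotiate HTTP/2 over TLS automatically): fifty concurrent requests ride a single shared connection instead of queueing behind the six-or-so connections per host that browsers allow.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        // The Go HTTP/2 demo server; any server that speaks h2 over TLS will do.
        const url = "https://http2.golang.org/gophertiles"

        // One warm-up request so the HTTP/2 connection is established and the
        // concurrent batch below gets multiplexed over it.
        if resp, err := http.Get(url); err == nil {
            resp.Body.Close()
        }

        var wg sync.WaitGroup
        for i := 0; i < 50; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                resp, err := http.Get(url) // the default client negotiates HTTP/2 over TLS
                if err != nil {
                    fmt.Println("request", n, "failed:", err)
                    return
                }
                defer resp.Body.Close()
                // resp.Proto reports "HTTP/2.0" when the request was multiplexed
                // over the shared connection.
                fmt.Println("request", n, resp.Proto, resp.Status)
            }(i)
        }
        wg.Wait()
    }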
"This study is very flawed. Talking to a proxy by SPDY doesn't
magically make the connection between that proxy and the original site
use the SPDY protocol, everything was still going through HTTP at some
point for the majority of these sites. Further, the exclusion of 3rd
party content fails to consider how much of this would be 1st party in a
think-SPDY-first architecture, where you know you'll reduce round
trips, so putting this content on your own domain all together would be
better, anyway."
In other words, the guy benchmarked SPDY _slowed down by HTTP connections behind it_!
Sure. It illustrates how any benchmark can be flawed because it's tailored to the point it's trying to make. The author of that article thought this scenario was "more realistic." What seems more realistic to him is not what seems realistic to other people.
And thus, benchmarks are unhelpful.
I care about feature sets and major improvements, not minor down-to-the-wire fixes. If this were called HTTP/1.2 or something I'd be less critical, but there are so many issues and flaws left unfixed, with unhelpful bikeshedding occurring over perceived "performance".
No, they are helpful! Especially real-world benchmarks. Sure, you can cook up utterly flawed benchmarks (like the one you pointed to), but that doesn't mean all benchmarks are unhelpful. A good engineer knows which benchmarks matter and which don't. You don't seem to be able to do that.
> If this were called HTTP/1.2 ...
The mere fact that you brought this up (and no amount of backpedalling after my comment will change that) makes your criticism look even stupider. You should judge the spec on its technical content, not on whatever arbitrary version number was assigned to it. Talk about a bike-shed argument (http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality)
The linked benchmark is flawed, as dlubarov noted above. I wanted to note that it is easy to write a flawed benchmark. And in this case, benchmarks are unhelpful, because my major lament is not efficiency or the lack thereof; it is the lack of any new features or of any consideration for the other pain points that exist on the Web today.
> doesn't mean all benchmarks are unhelpful
Didn't mean to imply that, although I can see how it could be read that way. Rest assured, I only believe benchmarks are unhelpful here. Oftentimes a benchmark is the best way to quantify usability, such as DoYouEvenBench (http://www.petehunt.net/react/tastejs/benchmark.html)
> backpedalling
I won't backpedal. This is important, actually, because the name it's given lends it some intrinsic hype. Let's say you're Google, and you're pushing a web standard that benefits you more than anyone else. What's more likely to be adopted, "HTTP/1.2" or "HTTP/2"? It is important, in my opinion.
> The linked benchmark is flawed, as dlubarov noted above.
No, I already replied to him. You and he should spend some time looking at the Chrome network console while visiting some of the top 500 sites. It is very common for sites to be exactly like that: tons of small requests for small resources.
Okay. Fair enough. I concede that efficiency is important and that SPDY / HTTP/2 can improve upon it. But I don't believe this is worth the hype, because the exposed feature set is otherwise tiny. Efficiency is cool, yes, but I'm personally waiting until HTTP/3 fixes the other things that are wrong with the web before I implement anything. I think the amount of effort that goes into this is not worth the result. Why include tons of small resources on your page if they're not necessary? Why revamp a protocol entirely if all you have to do is stop including tons of small resources?
> Okay. Fair enough. I concede that efficiency is important and that SPDY / HTTP/2 can improve upon it.
Great, I appreciate you recognize this.
> Why include tons of small resources on your page if they're not necessary?
But it _is_ necessary. In every single one of the examples I gave in my reply to dlubarov, it is necessary:
- There are 100+ small images, icons, etc., and all of them are displayed on the nytimes.com homepage.
- All of the thumbnail pictures of eBay items on a listing are displayed to the user.
- The 50+ map tiles downloaded when browsing Google Maps are all necessary.
- Etc
You seem to fail to realize that in 2015, not every web page can be dumbed down to a blob of static HTML and no more than 2-3 images. The modern web is complex. We needed a protocol that can serve it efficiently.
Not really a fair benchmark. It's making tons of requests with tiny payloads, so that most browsers will hit a connection limit and requests will be queued up.
Heavily optimized pages like google.com use data URLs or spritesheets for small images, and inline small CSS/JavaScript.
On the bright side, reducing the need to minimize request count will make our lives as developers a bit easier :-)
I don't know how you're assessing those pages, but bear in mind that
- Counting images can be misleading, since well-optimized sites use spritesheets or data URIs.
- If you're using something like Chrome's dev console to view requests, a lot of them are non-essential requests which are intentionally made after the page is functional.
- HTTP connection caps are per host. The benchmark is making hundreds of requests to one host, whereas a real page might make a dozen requests to the main server, a dozen to some CDN for static files, and a dozen to miscellaneous third parties.
- The benchmark is simulating an uncached experience; with a realistic blend of cached/uncached, HTTP 1 vs 2 performance would be much more comparable.
HTTP/2 is an improvement but if people expect a "5-15X" difference, they're in for a big disappointment.
> These insane perf gains are possible only thanks to HTTP/2, specifically thanks to its support for multiplexing.
What's sad about this is that if you load this site with pipelining enabled you get the same speed benefits as with HTTP/2 or SPDY, but Google would never know this, since they never tested SPDY against pipelining.
> ENHANCE_YOUR_CALM (0xb):
> Please read the spec and understand the technical implications before criticizing.
Please understand that -- technically -- this protocol is an embarrassment to the profession and to those involved in designing it.
HTTP pipelining is busted for a variety of reasons. Support exists in most browsers but it's disabled by default because it makes things worse, on balance.
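For anyone unfamiliar, pipelining just means writing several requests before reading any responses; the catch is that HTTP/1.1 responses must come back in request order, so one slow response stalls everything queued behind it, and a proxy that mishandles the queue can break the whole connection. A rough hand-rolled sketch in Go (the host is a placeholder; Go's standard client doesn't pipeline, so this goes over a raw TCP connection):

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Placeholder host; any plain HTTP/1.1 server will do for the demo.
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Pipelining: send both requests before reading any response.
        fmt.Fprint(conn, "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        fmt.Fprint(conn, "GET /robots.txt HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

        // Responses must arrive in the same order the requests were sent, so a
        // slow first response stalls the second (head-of-line blocking).
        r := bufio.NewReader(conn)
        for i := 0; i < 2; i++ {
            resp, err := http.ReadResponse(r, nil)
            if err != nil {
                panic(err)
            }
            fmt.Println("response", i, resp.Status)
            io.Copy(io.Discard, resp.Body) // drain so the next response can be parsed
            resp.Body.Close()
        }
    }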
At the time SPDY came out, Opera and Android Browser had pipelining on by default, and Firefox was about to turn it on by default as well. They held off only because of the promise of SPDY, not because pipelining "is busted". Pipelining works fine in almost all cases.
And if you only enable pipelining to known-good servers over a non-MITM SSL connection -- exactly like SPDY does -- then there is absolutely no problem with it and it performs similarly to SPDY. But I have no doubt you will continue spreading the party line from your employer, who couldn't be bothered to even test this.
Firefox wasn't about to turn it on by default (yes, there was work being done to see how workable it was, but there was no decision to ship it); it's well known that pipelining causes all kinds of bizarre breakage with badly behaved servers and proxies (and the latter are where the implementations are especially bad).
Opera had enough problems with its pretty crazy-complex heuristics for when to enable pipelining; it would have been nice for those heuristics to be published, but that never happened. Determining what a known-good server is over SSL isn't that easy.
> Determining what a known-good server is over SSL isn't that easy.
Just the opposite. Both Firefox's and Chrome's discussions of pipelining claim that "unknown" MITM software is why they didn't turn on pipelining. Nobody knows what this software is (it could be malware). But whatever this mystery software is, it can't look inside SSL, so pipelining over SSL was just as doable as inventing SPDY.
If Google hadn't pushed SPDY, pipelining would have happened, and the unknown bad software would have been fixed or blacklisted. Android's Browser used pipelining for years until Google replaced it with SPDY. Mobile Safari has been using pipelining since 2013 (probably why it wins the mobile page-load-time benchmarks). Pipelining works.
Yes, some endpoints could be buggy; for instance, IIS 4 (on Windows NT 4) was blacklisted in Firefox. Introducing a new, more complicated protocol just because of 10-year-old, outdated software is not a great way to solve problems.
The endpoints can be (and often are, in absolute terms) buggy. TLS stops bad proxies from breaking stuff, but it doesn't stop endpoints from breaking stuff.
My "actual" complaint is that it's not enough to be a major version and that it's a system that only benefits large corporations with data to pre-push, with no other benefits.
Snark aside, it's a standardized way of allowing different architectural patterns that can benefit use cases we haven't even seen yet. Yes, those architecture patterns currently benefit large corporations, but they're not being implemented at the expense of anything else. HTTP is a remarkably complete and flexible protocol.
What other benefits were you expecting to see that aren't already part of HTTP/1.1?
* no more easy debugging on the wire
* another TCP-like implementation inside the HTTP protocol
* tons of binary data rather than text (see the sketch after this list)
* a whole slew of features that we don't really need but that please some corporate sponsor because their feature made it in
* continuing, damaging and absurd lack of DNS and IPv6 considerations
* most notably the omission of any discussion of endpoint resolution
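To make the binary-framing complaint concrete: instead of reading "GET / HTTP/1.1" off the wire, every HTTP/2 message is chopped into frames with a 9-byte binary header (a 24-bit payload length, an 8-bit type, 8-bit flags, and a reserved bit plus a 31-bit stream id), so you need tooling just to eyeball a trivial exchange. A rough decoder sketch in Go (the example bytes are made up):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // The 9-octet HTTP/2 frame header: 24-bit payload length, 8-bit type,
    // 8-bit flags, then a reserved bit plus a 31-bit stream identifier.
    type frameHeader struct {
        Length   uint32
        Type     uint8
        Flags    uint8
        StreamID uint32
    }

    func readFrameHeader(r io.Reader) (frameHeader, error) {
        var buf [9]byte
        if _, err := io.ReadFull(r, buf[:]); err != nil {
            return frameHeader{}, err
        }
        return frameHeader{
            Length:   uint32(buf[0])<<16 | uint32(buf[1])<<8 | uint32(buf[2]),
            Type:     buf[3],
            Flags:    buf[4],
            StreamID: binary.BigEndian.Uint32(buf[5:9]) & 0x7fffffff, // drop the reserved bit
        }, nil
    }

    func main() {
        // Made-up example bytes: a HEADERS frame (type 0x1) with the END_HEADERS
        // flag (0x4) on stream 1, announcing a 16-byte payload (not included here).
        wire := []byte{0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01}
        hdr, err := readFrameHeader(bytes.NewReader(wire))
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", hdr) // {Length:16 Type:1 Flags:4 StreamID:1}
    }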
Fixing anything related to DNS, DNSSEC, IPv6, or anything else would have made this closer to "HTTP/2."
And as I said in another thread: yes. Calling it HTTP/1.2 would actually have made me a little happier. This isn't the next new, big thing. This is a minor improvement, if not a minor regression.
You do realise that the overwhelming majority (99%) of HTTP traffic is transferred to or from large companies like Google and Facebook, right? If it benefits their clients, then it benefits most of the web. HTTP/2 is particularly beneficial in the developing world, where latencies are higher. The world is bigger than you.
Also WTF does HTTP have to do with DNS, DNSSEC, and IPv6? Talk about layering violations...
Also, I think a total wire-protocol change warrants a major version number increase, not that it matters at all.
I don't think anyone's selling it as the next new, big thing. It's just a version increment on HTTP; the one that includes DNS/DNSSEC/IPv6 changes can be called HTTP/3000 for all I care. You don't have to use these features if you don't like them; they may make sites from companies like Google harder to reverse engineer, but HTTP is currently used for a lot more than text data. You just seem to conflate "corporations want it" with "bad".
And honestly, IPv6 is probably the biggest "big corporate" feature out there. Any big company providing access to more than 16 million devices (and yes, they do exist) has a very urgent need since the 10.0.0.0/8 network only contains ~16 million addresses.
At the end of the day, it's only a standard. As proven by SPDY, "big corporations" like Google are going to implement whatever the heck they want to, then ask for it to be included in the standard. I'm all for a system that makes it easier for companies to get their technologies standardized as part of an open standard - they're spending the investment dollars, but we all benefit from the capability.
Wait, it being a binary protocol is a good thing. No longer will we have proxies mangling Upgrade handshakes and such.
Header compression, server push and proper multiplexing (which avoids all the problems with pipelining) are all features most applications will benefit from.
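Server push in particular is close to a one-liner in newer stacks. Here's a rough sketch with Go's net/http (the routes, file contents, and cert paths are made up; on an HTTP/2 connection the ResponseWriter also implements http.Pusher):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // On an HTTP/2 connection the ResponseWriter also implements
            // http.Pusher, so we can push the stylesheet before the HTML.
            if pusher, ok := w.(http.Pusher); ok {
                if err := pusher.Push("/style.css", nil); err != nil {
                    log.Println("push failed:", err)
                }
            }
            w.Header().Set("Content-Type", "text/html")
            w.Write([]byte(`<link rel="stylesheet" href="/style.css"><p>hello</p>`))
        })
        http.HandleFunc("/style.css", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "text/css")
            w.Write([]byte("p { color: teal }"))
        })

        // TLS is required for HTTP/2 in browsers; cert paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }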
This is pretty much nonsense. There are a bunch of useful improvements in HTTP/2. It's not perfect, but I'd rather see incremental improvements of this sort than, e.g., the ridiculously drawn-out adoption we've seen with IPv6.