The future looks more like this, as the default, with no special effort required: https://http2.golang.org/gophertiles
May nobody else have to suffer through writing an interoperable HTTP/1.1 parser!
Yes, now it'll be much easier than parsing plain text. Now they just have to write a TLS stack (several key exchange algorithms, block ciphers, stream ciphers, and data integrity algorithms); then implement the new HPACK compression; then finally a new parser for the HTTP/2 headers themselves.
Now instead of taking maybe one day to write an HTTP/1.1 server, it'll only take a single engineer several years to write an HTTP/2 server (and one mistake will undermine all of its attempts at security.)
If you are going to say, "well use someone else's TLS/HPACK/etc library!", then I'll say the same, "use someone else's HTTP/1.1 header parsing library!"
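For a sense of scale, here is a toy sketch (Python, deliberately naive) of the core of that "one day" HTTP/1.1 parse. It ignores obs-fold continuation lines, duplicate headers, chunked encoding, and every other corner that makes true interoperability hard, which is rather the point of the argument:

```python
def parse_request_head(raw: bytes):
    """Toy HTTP/1.1 request-head parser: request line plus headers.

    Deliberately naive: no obs-fold continuation lines, no duplicate
    header merging, no chunked bodies, no validation; exactly the
    corners that make a *truly* interoperable parser hard.
    """
    head, _, _body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    method, target, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, target, version, headers

req = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n"
method, target, version, headers = parse_request_head(req)
```

That really is an afternoon's work; the HTTP/2 equivalent starts after you have TLS, HPACK, and binary framing in hand.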
HTTP/2 may turn out to be great for a lot of things. But making things easier/simpler to program is certainly not one of them. This is a massive step back in terms of simplicity.
Then, separately: writing an interoperable HTTP/1.1 implementation is hard because the protocol was designed and adopted ad hoc, in a time of relatively immature browsers. I would expect HTTP/2 to increase standardisation in the same way newer HTML/CSS specs have relative to the late nineties. That doesn't mean the initial implementation won't be more difficult, but it's done once every 20 years (per vendor).
Your argument is invalid in my opinion. HTTP/1.1 is not simple to implement to any decent level of completeness and correctness, and HTTP/2 does fix a fair few things.
Anyway, there are already plenty of good tools for debugging HTTP/2 streams (Wireshark filters, etc.), and there's only going to be plenty more as time goes by.
Honestly? No. I wrote an HTTP server that runs my site just fine. It also functions as a proxy server so that I can run Apache+PHP (for phpBB) on the same port 80. (The reason I don't just use Apache is that I generate my pages via C++, because I like that language a hell of a lot more than PHP.) I also have had the HTTP client download files from the internet for various projects (my favorite was to work around a licensing restriction.)
I get around 100,000 hits a month, and have not had any problems. If you think issues will arise when I start reaching into Facebook levels of popularity ... I'm sure they will. But, I'll never get there, so to me it doesn't really matter.
So for my use case, HTTP/2 is unbelievably more challenging and costly to support. Especially as I have about seven subdomains, and nobody's giving out free wildcard SSL certs.
I also didn't even say the added complexity is a bad thing+. Modern Windows/Linux/BSD are infinitely more complex than MS-DOS was, too. I was just pointing out that the OP's elation was misguided. (+ though to be fair, I do believe things should be kept as simple as is reasonable.)
Also, I strongly challenge this notion that you have to be 100% standards-compliant with the entire RFC to run an HTTP/1 server successfully. Because not even Apache is remotely close to that. The mantra is: be liberal in what you accept, conservative in what you send. And everyone follows that. As a result, no major projects out there are spitting out headers split into 300 lines using arcane edge cases of the RFCs.
HTTP/1.1 contains tons of optional features. Practically no two implementations support the same set.
HTTP/2 is all 100% mandatory. Any compliant HTTP/2 implementation will support an EXACT set of known features.
That's a nice idea in theory, but what makes you think anyone is going to adhere to that?
Developers have always, and will always, do whatever they want when they implement your standards.
It was bad enough that when I worked on a binary delta patching format, I made sure there were absolutely zero possible undefined values, because I knew someone might try and use them to add new functionality in.
For something as complex as HTTP, I can guarantee you people will ignore parts of the spec they don't care about. And you can yell at them and say it's not a valid/legal HTTP/2 implementation, but they won't care. They'll keep on doing what they're doing.
Sure, if you're out-of-spec, all bets are off. It's just that with HTTP/1.1, even 100% in-spec implementations are a pretty wide target zone.
But over the next couple of years, won't people come up with new ideas and add them as optional extensions? How is that handled?
I suspect some of these optional extensions will be really useful in special cases such as support for LZMA/LZHAM compression in addition to just gzip.
Compare with HTTP/1.1 where for instance the entire content negotiation mechanism is optional and clients need to be able to deal with it not being available.
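The compression point is at least easy to sanity-check with Python's standard library. On a synthetic, web-ish payload (made up purely for illustration), LZMA typically beats DEFLATE (what gzip uses) by a healthy margin, at the cost of CPU time:

```python
import lzma
import zlib

# Synthetic, web-ish payload: repetitive markup with some variation.
html = "".join(
    f'<div id="item-{i}" class="row">value {i}</div>\n' for i in range(2000)
).encode()

gz = len(zlib.compress(html, 9))  # DEFLATE, the algorithm behind gzip
xz = len(lzma.compress(html))

# Expect xz < gz on a payload like this: LZMA's larger window and
# range coder usually win, in exchange for more CPU.
```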
So down the line, it will be pretty much exactly like HTTP 1.0 and 1.1 then.
Good to hear someone thought this thoroughly through before creating a mega-complex protocol unimplementable by most industry-grade engineers, which will also need to be debugged and maintained for all internet-eternity.
I have seen this before with OSI, when MCI decided that part of the X.400 standard was optional. And not to mention ICL, who thought that starting counting from 0 was a good idea when the standard said counting MUST START from 1. (And you wonder why the UK doesn't have a mainframe maker any more.)
People are good with dealing with a small number of simple things that can be stacked together. Throw in a human-readable data stream, and you're set to understand and use a stack of simple programs.
People are not good at dealing with a single monstrous object of unfathomable proportions; they will try to break it down into things they understand. If the thing is too complex, with too many inputs, too many outputs, and too many states, that's a recipe for confusion. This is why overly complicated things always fail in the face of simple things.
One could argue that FTP/SFTP was just as good at transferring bytes over a network, but HTTP/1.0 won because it was simpler.
HTTP/2 was written to tickle the egos of its developers, following the principle - it is hard to write, it is hard to read. And its downfall is going to come from this problem.
I've written parsers and generators for plenty of binary protocols. It's actually really not that bad; you just need slightly different tooling. Yes, if somebody else hasn't written that tooling it takes a bit longer because you have to do it yourself, but you save a lot of time because binary is far easier to parse than text. And guess what: people have already written plenty of tooling for HTTP/2... And HTTP/2 is fairly straightforward as protocols go. (You wouldn't believe the crazy proprietary control protocols around the place; trust me, HTTP/2 is not at all bad.)
The 'downfall' of HTTP/2 is also a real long shot - most people are already browsing in browsers that support it, and for many web site owners, using it is literally adding two lines to an nginx configuration file...
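For what it's worth, the "two lines" in question would look something like this (a sketch, assuming nginx 1.9.5+ built with the http2 module, TLS already set up, and placeholder certificate paths):

```nginx
server {
    # "http2" on the listen directive is essentially the whole upgrade,
    # given TLS is already configured.
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
}
```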
Not true! Text parsing is a pain in the ass; give me a well-documented binary protocol any day. On the upside, binary protocols tend to force good documentation. HTTP/1.1 is far from simple; every browser supports a slightly different implementation and the server is expected to serve to all of them. But a binary protocol is not any more difficult than a text-based protocol for someone with a decent knowledge of CS. If you don't have a decent knowledge of CS, you probably shouldn't be writing code at the protocol level.
Besides, who in their right mind outputs directly to ANY protocol these days? Unless you're building a web server, you should be doing it through an abstraction layer because it's a proper architecture practice. Once abstraction layers are built for all of the major languages (which I'm willing to bet has already happened) it will become a non-issue.
Since we're not viewing headers manually on 80x25 terminals anymore, we could do away with multi-line header values. That alone would drop off most of the complexity. (Being perfectly honest, even though it's part of the standard, you don't have to parse them now, anyway. I don't, and I've never had anyone complain to me about the site not working. Nothing mainstream sends them for the important fields.)
Add a Server-Push header (filename+ETag), and we could eliminate most extraneous 304 Not Modified requests. Have browsers actually acknowledge Connection: keep-alive instead of opening tons of parallel requests. And leave this as the "hobbyist level, can't afford wildcard SSL certs" option, and I think it'd be quite beneficial.
If browsers want to warn that it's not encrypted, fine. So long as they don't go into ridiculous hysteria levels like they do now with self-signed certs.
One immediate potential downside is Apache. It completely ignores the protocol version in the request. If you ask for "GET / HTTP/1.2", or even "GET / HackerNewsTP/3.141e", it will happily reply with "HTTP/1.1 200 OK".
As a result, the negotiation would be trickier than with HTTP/2.
But like you said, it could be done in a way that it's 100% backward-compatible with existing 1.1 software, so long as their responses are also in a compatible, simplified format (and most already are.)
I don't believe they can do anything apart from what happens now. Imagine someone manages to redirect your traffic. You were talking to some website which used known certificate, but this time you got a self-signed one. The browser has two options essentially:
- continue the connection - in this case you just handed over your session cookie, the person on the other side can act as you on that website
- go into "ridiculous hysteria levels" and tell you that the cert presented by the server is not trusted - so do what browsers do right now
There's really no situation where the first option should be allowed. How option 2 is implemented is the interesting detail.
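For what it's worth, option 2 is already the default posture of any sane TLS client library, not just browsers. A quick sketch with Python's standard library:

```python
import ssl

# A default client context already implements "option 2": it requires a
# certificate chaining to a trusted CA and checks the hostname. A
# self-signed cert presented mid-session fails the handshake with
# SSLCertVerificationError rather than silently continuing.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```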
To the people who keep insisting everything needs encryption: No it doesn't. Fuck off!
You don't see me forcing PGP on your email, do you? No? Fine, then let us non-weirdos keep using plain HTTP where we want it, where we have determined that it is a good fit for our needs.
Besides, this is purely a theoretical concern, because a browser which drops support for plain HTTP won't have any user base as soon as people discover that 95% of the internet is broken when using that browser.
Not PGP (as in, not end-to-end encryption). But hopefully in most cases you are forced to encrypt your email (SMTP/TLS), the servers forwarding your email are likely using encryption (SMTP/TLS between servers), and you're pulling the email over an encrypted channel (IMAPS). Alternatively, your mail submission/collection goes over HTTPS to the email provider.
And yes, I will insist on everyone using encryption in mail, web, everything. Because once you actually want to use it for some reason, you don't want it to be completely different from all your other traffic, basically screaming "hey, I'm trying to hide some data here, because all my other connections are in plaintext".
Fortunately we're at the stage where everyone is actually forced to use encryption for a lot of their traffic.
I presume you don't care about encryption when you send emails, and yet if you're using a big name your emails will be encrypted without you even knowing it.
That's why I keep wanting to put "automatic" encryption everywhere: I would rather have your browser demote plain HTTP to being as insecure as a TLS connection with RC4-MD5, and display a well-encrypted connection with a higher "indicator" than those, even if the certificate is self-signed (though not as high as a trusted connection).
In practice that would mean "this connection is PROBABLY secure. If you really care about what you're about to do, STOP NOW. If you don't care just go on".
"Automatic" PGP (or really E2E encryption) would be awesome, but there is still far too much manual work for it to happen. Maybe one day we'll be there.
Also a bonus: no more "Referer" (sic)
What are they about? Essentially, optimizing your HTTP responses for the ways in which actual web browsers make HTTP requests. Any web performance analysis tool worth using (like YSlow and PageSpeed – both of which Steve Souders was involved in btw) recommended the practices outlined in those books.
So, no. I don't think a new book with updated practices says anything about the protocol. The optimization tips from this book will simply become widespread common knowledge the same way they did in the past.
That book is also a phenomenal resource on the performance of all things web-related, so you should check it out regardless of any concerns you have about HTTP/2.
I'm betting there will come a polyfill to make HTTP/2 servers able to deliver content to HTTP/1.1-but-HTML5 web browsers in an HTTP/2-idiomatic way—perhaps, for example, delivering the originally-requested page over HTTP/1.1 but having everything else delivered in HTTP2-ish chunks over a websocket.
Or maybe I'm overly optimistic!
And you're telling me all of those will have an updated HTTP-stack within 2 years?
You're not being "overly optimistic", you're being tragically unrealistic.
Like every other published internet-standard, HTTP/1.0 and HTTP/1.1 will be here until the end of the internet. Sadly the clusterfuck that is HTTP/2.0 will too now.
The people talking about HTTP/3.0 already really seem to have missed this bit. (They're talking about 3.0 because HTTP/2.0 didn't really solve the problems we have with HTTP/1.1, but never mind that; Google steam-rolled this one through and we want to be trendy.)
The question is now: How many HTTP-stacks do you want to support? Is 2 OK? 3? 4? When do you say enough is enough?
That's what I'm talking about.
It isn't. And it needs to be treated differently.
Exactly! There are more important problems to solve, page load time isn't one of them. My list includes:
* Better authentication
* More secure caching
* Improved ability to download large files
* Better methods to find alternate downloads locations
* Making each request contain less information about the sender
* Improved Metadata
I brain-dump a bit here: https://github.com/jimktrains/http_ng
And people wonder why I don't like it.
Google, FB, Twitter, Akamai have adapted their http daemons for it.
nginx has SPDY support so HTTP/2 should be forthcoming but I wonder if this will be the death of Apache - they didn't seem to be able to update the SPDY plugin to 2.4
    This specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection. It also introduces unsolicited push of representations from servers to clients.

    This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.
All these changes seem good without being a large change, just an improved user experience. (The Introduction section is also good: "HTTP/2 addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection". I'd quote more, but why not click through the link at the top of this comment.) Basically just some compression of headers, plus the funky stuff to keep connections alive for server push, prioritizing important requests, etc., all without changing semantics much. Great.
Cookies solve real use case problems. Unless we all start building and experiencing and improving the alternatives, progress won't be made.
That said, good luck on getting rid of cookies altogether.
I tried searching on the net, but it doesn't seem to give any concrete/valid results.
Can you give me any pointers?
No, they are not.
Every time I use Tor, I appreciate your viewpoint. But trying to pretend that most developers are better off spending their time maintaining a separate renderer for a few edge case users is not really reflective of reality.
I've worked on more "web apps" than I can count and the reality is, only 2 of them were legitimate use cases for a pure JS solution. And we gain nothing from it. I've just spent all morning debugging an issue with a complicated angular directive (and not for the first time) that would have been a few lines of jquery a couple of years ago. Probably because a bored dev wanted to play with a new toy.
As you imply, we were writing sophisticated web apps long before AJAX was popularised and those apps were way more reliable and predictable than what we have now, and they worked in Lynx if you wanted.
>that would have been a few lines of jquery a couple of years ago
Also, I don't see the advantage of storing session/auth tokens in localStorage over cookies. Both are stored in plain text, and either can be read if somehow obtained. Using localStorage also means writing your own client-side session management.
I don't see the advantage of session tokens in URLs either. Cookies are already included in the headers of every HTTP request, so your application doesn't have to send session trackers itself. I think both are functionally the same, and tokens in URLs just don't look good!
And public/private key-based signing system is still not there yet, unless we simplify some UX issues about having private/public keys for every user, we are not getting there.
So, it looks like, to me, there is no really effective alternative for doing sessions apart from cookies (even in HTTP/2)?!
Cookies aren't auth tokens anyway, just session trackers.
Unfortunately it's not fantastically simple to move to a new device (particularly not a mobile device where client certs are even harder to install)
Please, no. The internet works because it's compatible, and installing a local client for everything just prevents use of a service.
One key thing which should make this work is that server push should follow the same origin checks as most other recent web standards:
“All pushed resources are subject to the same-origin policy. As a result, the server cannot push arbitrary third-party content to the client; the server must be authoritative for the provided content.”
Assuming that survives contact with the actual implementations, you should be able to avoid latency-sensitive content going through the CDN while still being able to push out e.g. stylesheets & referenced fonts/images.
At the simplest level this might be just the CSS and JS in the <head>, but obviously, as different UAs behave differently, there's scope for much more granular optimisation.
Naively, it looks to me that server push will mostly be an improvement for small websites that do not use a CDN, but I can't see how it can coexist with a CDN.
Or it would require new syntax, where the HTML tells the browser to connect to the CDN with a particular URL containing a token, and to download that URL first; the token would tell the CDN which assets that page will need, and the CDN would then use server push to send those static assets.
Alternatively the CDN would become a proxy for the underlying html page, which would still be generated by the application server. That would probably be simpler.
They can use features like ESI to assemble the final page on the edge from static and dynamic parts, or they can just act as a proxy to the origin with the dynamic page generated there.
Even when the CDN is just acting as a proxy back to the origin there can be performance advantages e.g. lower latency TCP and TLS negotiation between edge to client, and permanent connection between edge and origin i.e. single TCP negotiation for all clients, larger congestion windows leading to higher throughput.
In short CDNs aren't just for static content!
How accurate it is, I'm not sure.
While there has always been some degree of disagreement regarding technological matters, I think we're really seeing a lot more of it these days, especially when it comes to projects that are open source, or standards that are supposedly open. HTTP/2 is a good example. But we've also got GNOME 3, systemd, how systemd has been included in various Linux distros, many of the recent changes to Firefox, and so forth.
Not only is this disagreement more prevalent, it's also much harsher than what we've seen in the past. Instead of seeing compromise, we're seeing marginalization. We're repeatedly seeing a small number of people force their preferences upon increasingly larger masses of unwilling victims. We're seeing consensus being claimed, but this is only an illusion that barely masks the resentment that is building.
What we're seeing goes beyond mere competition between factions with differing situations. We're seeing any sort of competition, or even just dissent, being highly discouraged, suppressed, or even prevented wherever possible. Those whose needs aren't being met end up backed into a corner and shunned, rather than any effort being put into cooperating with them, with helping them, or even just with considering their views.
This isn't a healthy situation for the community to be in, especially when it comes to projects that allegedly pride themselves on openness. We've already seen this kind of polarization severely harm the GNOME 3 project. We're seeing things get pretty bad within the Debian project. And the HTTP/2 situation hasn't been very encouraging, either.
There are a few high-profile critics, e.g. PHK, who puts his points as an eloquent rant, which of course we as a community tend to love. But the reality is that HTTP/2 is going to get rolled out by companies who've tested it and seen the benefits, i.e. not just Google.
I don't have any way to dispute this, but I don't think it's easy to provide evidence for it either. I feel that there may simply be more individuals involved in these kinds of discussions these days.
Obviously at some point you have to stop discussing something and start building it. That's not to say that discussion isn't important or shouldn't be encouraged (quite the contrary), but I find it very difficult to make generalizations about where the line should be drawn.
No, see, that's where we are having problems: there is a perfectly valid answer of "It's good enough, or simple enough, that we will leave it as-is barring a really big leap".
Assuming that you've got to build something to replace the status quo is still itself an assumption. Saying, in effect, "Hey, we've got to build something" naturally disallows a reasonable engineering conservatism.
Software gets better not as you add things, but as you remove them--people keep forgetting this.
Could you please elaborate on this point?
Things tend to be particularly bad when it comes to systemd, though. It isn't unusual to see censorship occur, either in the form of unjustifiable downmodding, comment deletion, or even the banning of participants, depending on the venue. We also are seeing it become increasingly difficult for Debian users, for example, to opt out of using systemd, or to easily switch to an alternative init system.
Instead of people with different preferences or interests working together, or even working independently, we're more often seeing one group of people squelch the ability of competing groups to participate, or even to have a choice. When discussion is stifled and choice is taken away, the outcome will likely never be positive.
But comparing it with systemd is completely unreasonable. When a huge group of people complained about HTTP/2, it became an optional standard, and HTTP/1.1 officially remains a web standard capable of satisfying a large number of use-cases. No choice is being taken away; at worst it's a bad standard being pushed at server maintainers.
This goes both ways though, doesn't it? Much of the arguing about HTTP/2 tended to be Johnny-come-latelies who, if we're being honest, seemed to just want to toss some refuse in the gears. Microsoft, in particular, watched as Google proposed SPDY, and then iterated and shared their findings, and then, right as consensus (or as close to consensus as possible) started to be reached, Microsoft tried to upset the cart. In that case wouldn't Microsoft, and the naysayers, be the ones trying to force their preferences? The delay of HTTP/2, or of basic improvements to these technologies, not only causes hassles for developers (image sprites, resource concatenation, many domains, and on and on), it marginalizes the web.
It is going to be pretty rare when any initiative sees complete unanimous agreement, especially given that many of the parties have ulterior motives and agendas that aren't always clear.
The first wheels were probably logs under rocks. Then axles got developed, then spokes, then tyres etc.
Everything from the gyroscope to the LHC can attribute its beginnings to the humble wheel.
Reinvention is, if not always good, always admirable.
I think you need to expand a little more on why you feel this is a "known flaw".
A static compression table with proxy authentication fields in it, as if the network between your browser and LAN proxy is somehow the bottleneck, is reasonable?
This is a massively over-engineered protocol, one where nobody knows how the tables were constructed or from what data, and nobody can quantify what the benefits of its features will really be. For instance, how much is saved by using a Huffman encoding instead of a simple LZ encoding, or simple store/recall, or not compressing at all? Nobody knows! Google did some magic research in private and decided on the most complicated compression method, so therefore HTTP/2 must use it.
This HTTP/2 process is insane.
It's not all about compression ratios. From the HPACK spec (https://http2.github.io/http2-spec/compression.html#rfc.sect...):
    SPDY [SPDY] initially addressed this redundancy by compressing header fields using the DEFLATE [DEFLATE] format, which proved very effective at efficiently representing the redundant header fields. However, that approach exposed a security risk as demonstrated by the CRIME attack (see [CRIME]).

    This specification defines HPACK, a new compressor for header fields which eliminates redundant header fields, limits vulnerability to known security attacks, and which has a bounded memory requirement for use in constrained environments. Potential security concerns for HPACK are described in Section 7.
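The CRIME issue is easy to demonstrate with plain zlib: compressed length acts as an oracle for whether attacker-supplied bytes match a secret elsewhere in the same compression context. A rough sketch (the "secret" cookie value here is made up):

```python
import zlib

# Why DEFLATE-based header compression (as in SPDY) leaked secrets:
# when attacker-controlled bytes match a secret header in the same
# compression context, DEFLATE back-references them and the output
# shrinks, so the compressed size reveals whether a guess was right.
secret = b"cookie: session=hunter2\r\n"
base = b"GET / HTTP/1.1\r\nhost: example.com\r\n" + secret

right_guess = b"cookie: session=hunter2"  # matches the secret
wrong_guess = b"cookie: session=qwerty9"  # same length, no match

right = len(zlib.compress(base + right_guess))
wrong = len(zlib.compress(base + wrong_guess))
# right < wrong: the correct guess compresses better.
```

HPACK's static/dynamic table design deliberately avoids this kind of cross-field length oracle, at the cost of the complexity being complained about above.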
Good luck with that. Google has been incredibly adept at using the leverage of their search engine to make webmasters adhere to best practices. ("mobile friendly" e-mails lately, anyone?) Once HTTP/2 lands as a default in Chrome, Apache, Nginx, and Netty, of course they'll push HTTP/2-friendly sites higher in search results.
He goes into almost zero detail, and where he does, it's things like "likely to increase CO2 consumption"?
I put this in another comment in the above:
* Better authentication
* More secure caching
* Improved ability to download large files
* Better methods to find alternate downloads locations
* Making each request contain less information about the sender
* Improved Metadata
It seems like it was built for the big players to eke out 5% more performance. How about the average website? What is in there to help standardize authentication? What is in there to help protect privacy?
In the end it looks more like HTTP 1.2, with header compression being the only new feature. The rest of what makes up HTTP/2 is basically a new transport-layer protocol implemented at the application level.
Keeping it backwards compatible with HTTP 1.1 as far as semantics means it will actually get real adoption, very easily, as you can seamlessly enable it via middleware without changing app code anywhere.
I don't know what you mean to "standardize auth", but seeing what a clusterfuck OAuth2 turned into, it'd probably guarantee HTTP2 wouldn't ship for a long time, then ship a mess.
To protect privacy, major browsers plan to only support HTTP2 over TLS. That should be a major incentive for more websites to force TLS. Pretty clever, sorta, even if we might have technical objections to requiring TLS for no "real" reason.
> help protect privacy
I think this is sort of answered by the headline - "HTTP/2 is done".
If they had succumbed to the second-system effect and added every bell and whistle that every HTTP user would ever want, there would be an endless period of bikeshedding and it would be years before you could even call the spec "done".
They seem to have taken a more conservative approach - changing enough that HTTP 1.2 wasn't accurate, but also not going all-out and trying to define a post-Snowden quantum-computing-based KSTP (kitchen sink transport protocol).
I think 5% is a very conservative figure. HTTP has massive overhead for lots of small files (look at the http joke); even with browser caching it still requires multiple round trips (empty TCP frames!), and compression has a negative impact on small files (the overhead makes them bigger).
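The small-file point is easy to verify: below a certain size, the gzip container overhead (10-byte header plus 8-byte trailer) outweighs any savings, so "compressing" actually grows the payload:

```python
import gzip

tiny = b'{"ok":true}'
packed = gzip.compress(tiny)

# The gzip container (10-byte header + 8-byte trailer) plus DEFLATE
# framing makes the "compressed" form larger than the original.
```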
The problem is that to use this effectively (sending a page plus any CHANGED assets down a single request) requires quite a bit of work on not only the webserver, but also the application.
Another plus with reusing connections, is not having to re-authenticate every request.
It's actually much bigger for the average site, which doesn't have Google's engineering team supporting hundreds of thousands of edge servers expensively pushed as close to the client as possible, and massive investment in front-end optimization. Think about how much complexity you can avoid taking on without needing to do things like sharding, spriting, JS bundling, etc.
Header compression exists merely to enable that as an implementation detail.