HTTP/2 Is Done (mnot.net)
437 points by stephenjudkins on Feb 18, 2015 | 136 comments



It's time to begin the long process of unwinding all the hacks that we've built to make HTTP/1.1 fast. No more concatenation of static assets, no more domain sharding.

The future looks more like this, as the default, with no special effort required: https://http2.golang.org/gophertiles

May nobody else have to suffer through writing an interoperable HTTP/1.1 parser!


> May nobody else have to suffer through writing an interoperable HTTP/1.1 parser!

Yes, now it'll be much easier than parsing plain text. Now they just have to write a TLS stack (several key-exchange algorithms, block ciphers, stream ciphers, and data-integrity algorithms), then implement the new HPACK compression, then finally a new parser for the HTTP/2 headers themselves.

Now instead of taking maybe one day to write an HTTP/1.1 server, it'll only take a single engineer several years to write an HTTP/2 server (and one mistake will undermine all of its attempts at security).

If you are going to say, "well use someone else's TLS/HPACK/etc library!", then I'll say the same, "use someone else's HTTP/1.1 header parsing library!"

HTTP/2 may turn out to be great for a lot of things. But making things easier/simpler to program is certainly not one of them. This is a massive step back in terms of simplicity.


I was under the impression that the parent was making the simplicity argument for people on the other side of things - people writing frameworks and websites. Using someone else's HTTP/1.1 stack doesn't solve those problems.

Then, separately, writing interoperable HTTP/1.1 is hard because it was designed/taken up ad hoc in a time of relatively immature browsers. I would expect HTTP/2 to increase standardisation in the same way newer HTML/CSS specs have relative to the late nineties. That doesn't mean that the initial implementation will not be more difficult, but it's done once every 20 years (per vendor).


Have you tried writing anything more than a very simple HTTP/1.1 parser/server? It's actually not as easy as it seems at first - edge cases everywhere, different user agents doing subtly different things, etc. etc.

Your argument is invalid in my opinion. HTTP/1.1 is not simple to implement to any decent level of completeness and correctness, and HTTP/2 does fix a fair few things.

Anyway, there are already plenty of good tools for debugging HTTP/2 streams (Wireshark filters, etc.), and there's only going to be plenty more as time goes by.


> Have you tried writing anything more than a very simple HTTP/1.1 parser/server?

Honestly? No. I wrote an HTTP server that runs my site just fine. It also functions as a proxy server so that I can run Apache+PHP (for phpBB) on the same port 80. (The reason I don't just use Apache is that I generate my pages via C++, because I like that language a hell of a lot more than PHP.) I also have had the HTTP client download files from the internet for various projects (my favorite was to work around a licensing restriction.)

I get around 100,000 hits a month, and have not had any problems. If you think issues will arise when I start reaching Facebook levels of popularity ... I'm sure they will. But, I'll never get there, so to me it doesn't really matter.

So for my use case, HTTP/2 is unbelievably more challenging and costly to support. Especially as I have about seven subdomains, and nobody's giving out free wildcard SSL certs.

I also didn't even say the added complexity is a bad thing+. Modern Windows/Linux/BSD are infinitely more complex than MS-DOS was, too. I was just pointing out that the OP's elation was misguided. (+ though to be fair, I do believe things should be kept as simple as is reasonable.)

Also, I strongly challenge this notion that you have to be 100% standards-compliant with the entire RFC to run an HTTP/1 server successfully. Because not even Apache is remotely close to that. The mantra is: be liberal in what you accept, conservative in what you send. And everyone follows that. As a result, no major projects out there are spitting out headers split into 300 lines using arcane edge cases of the RFCs.


HTTP/1.1 is only complicated if you want to support all of the optional features.


So won't HTTP/2 have these edge cases also?


No. Most of the edge cases arise from trying to parse an underspecified plaintext protocol. Everything in HTTP/2 is length-prefixed and unambiguously specified. That makes it dramatically easier to write a compliant parser or client.
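
For a sense of what "length-prefixed" buys you, here's a rough sketch (not any real library's API, just the 9-octet frame header from the spec: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier):

  // Rough sketch of reading one HTTP/2 frame header.
  interface FrameHeader {
    length: number;   // payload length in octets
    type: number;     // e.g. 0x0 DATA, 0x1 HEADERS
    flags: number;
    streamId: number;
  }

  function readFrameHeader(buf: Uint8Array, offset = 0): FrameHeader {
    const view = new DataView(buf.buffer, buf.byteOffset + offset, 9);
    return {
      length: (view.getUint8(0) << 16) | (view.getUint8(1) << 8) | view.getUint8(2),
      type: view.getUint8(3),
      flags: view.getUint8(4),
      streamId: view.getUint32(5) & 0x7fffffff, // high bit is reserved
    };
  }

Compare that with scanning for CRLFs, obsolete line folding, chunked framing, and the rest of the HTTP/1.1 edge cases.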


Essentially, no.

HTTP/1.1 contains tons of optional features. Practically no two implementations support the same set.

HTTP/2 is all 100% mandatory. Any compliant HTTP/2 implementation will support an EXACT set of known features.


Won't that create the same problems as XML and XHTML where full compliance is/was mandatory -- and the reality turned out to be different?


Probably not: most, if not all, non-compliant XML/XHTML is either written by hand or by very bad generation tools. In the case of HTTP/2 it's a protocol that needs to be implemented by browsers and web servers, and there are only so many of those. In the case of XML/XHTML, any person throwing up a website or sending a document does the generation with a different set of tools (or by hand).


I've seen tons of computer-generated terrible XML. It's in use in so many custom APIs I can't even describe it to you. Poor escaping, nesting, &c.


> HTTP/2 is all 100% mandatory

That's a nice idea in theory, but what makes you think anyone is going to adhere to that?

Developers have always, and will always, do whatever they want when they implement your standards.

It was bad enough that when I worked on a binary delta patching format, I made sure there were absolutely zero possible undefined values, because I knew someone might try to use them to add new functionality.

For something as complex as HTTP, I can guarantee you people will ignore parts of the spec they don't care about. And you can yell at them and say it's not a valid/legal HTTP/2 implementation, but they won't care. They'll keep on doing what they're doing.


It's still a much better situation than HTTP/1.1. At least an HTTP/2 compliant implementation has an exact definition, that it either is, or isn't. An HTTP/1.1 implementation has a vast number of corner cases and optional bits.

Sure, if you're out-of-spec, all bets are off. It's just that with HTTP/1.1 even 100% in-spec implementations are a pretty wide target zone.


> HTTP/2 is all 100% mandatory. Any compliant HTTP/2 implementation will support an EXACT set of known features.

But over the next couple of years, won't people come up with new ideas and add them as optional extensions? How is that handled?

I suspect some of these optional extensions will be really useful in special cases such as support for LZMA/LZHAM compression in addition to just gzip.


There is support for extensions, but they're, well, extensions. The only thing the protocol specifies is that a compliant implementation must pass through unchanged any block it doesn't understand.

Compare with HTTP/1.1 where for instance the entire content negotiation mechanism is optional and clients need to be able to deal with it not being available.


> There is support for extensions, but they're, well, extensions.

So down the line, it will be pretty much exactly like HTTP/1.0 and 1.1 then.

Good to hear someone thought this through thoroughly before creating a mega-complex protocol unimplementable by most industry-grade engineers, which will also need to be debugged and maintained for all internet eternity.


Well, in that case Apache, nginx, Microsoft, and the IETF need to keep control of the standard; anyone that tries to add the HTTP/2 equivalent of <blink> gets taken out and shot.

I have seen this before with OSI, when MCI decided that part of the X.400 standard was optional. And not to mention ICL, who thought that starting counting from 0 was a good idea when the standard said you MUST START from 1 (and you wonder why the UK doesn't have a mainframe maker any more).


And the lack of simplicity is why it will fail.

People are good at dealing with a small number of simple things that can be stacked together. Throw in a human-readable data stream, and you're set to understand and use a stack of simple programs.

People are not good at dealing with a single monstrous object of unfathomable proportions; they will try to break it down into things they understand. If the thing is too complex, with too many inputs, too many outputs and too many states, this is a recipe for confusion. This is why overly complicated things always fail in the face of simple things.

One could argue that FTP/SFTP was just as good at transferring bytes over the network, but HTTP/1.0 won because it was simpler.

HTTP/2 was written to tickle the egos of its developers, following the principle: it was hard to write, so it should be hard to read. And its downfall is going to come from this problem.


I'm really confused by comments like these. Are you somebody who implements low-level network protocols?

I've written parsers and generators for plenty of binary protocols. It's actually really not that bad - you just need slightly different tooling. Yes, if somebody else hasn't written those it takes a bit longer because you have to do that yourself, but you save a lot of time because it's far easier to parse than text. And guess what - people have written plenty of tooling for HTTP/2 already... And HTTP/2 is fairly straightforward as protocols go (you wouldn't believe the crazy proprietary control protocols around the place - trust me, HTTP/2 is not at all bad).

The 'downfall' of HTTP/2 is also a real long shot - most people are already browsing in browsers that support it, and for many web site owners, using it is literally adding two lines to an nginx configuration file...


> People are good at dealing with a small number of simple things that can be stacked together. Throw in a human-readable data stream, and you're set to understand and use a stack of simple programs.

Not true! Text parsing is a pain in the ass; give me a well-documented binary protocol any day. On the upside, binary protocols tend to force good documentation. HTTP/1.1 is far from simple; every browser supports a slightly different implementation and the server is expected to serve to all of them. But a binary protocol is not any more difficult than a text-based protocol for someone with a decent knowledge of CS. If you don't have a decent knowledge of CS, you probably shouldn't be writing code at the protocol level.

Besides, who in their right mind outputs directly to ANY protocol these days? Unless you're building a web server, you should be doing it through an abstraction layer because it's a proper architecture practice. Once abstraction layers are built for all of the major languages (which I'm willing to bet has already happened) it will become a non-issue.


I haven't looked more into that, but wouldn't it also be viable now to start HTTP/1.2 with e.g. a more restrictive header grammar, restricting all existing features to what's actually used, at least on the server side? Clients with 1.1 support would keep working, but future clients would be simplified.


That would be absolutely lovely.

Since we're not viewing headers manually on 80x25 terminals anymore, we could do away with multi-line header values. That alone would drop off most of the complexity. (Being perfectly honest, even though it's part of the standard, you don't have to parse them now, anyway. I don't, and I've never had anyone complain to me about the site not working. Nothing mainstream sends them for the important fields.)

Add a Server-Push header (filename+ETag), and we could eliminate most extraneous 304 Not Modified requests. Have browsers actually acknowledge Connection: keep-alive instead of opening tons of parallel requests. And leave this as the "hobbyist level, can't afford wildcard SSL certs" option, and I think it'd be quite beneficial.

If browsers want to warn that it's not encrypted, fine. So long as they don't go into ridiculous hysteria levels like they do now with self-signed certs.

One immediate potential downside is Apache. It completely ignores the protocol version in the request. If you ask for "GET / HTTP/1.2", or even "GET / HackerNewsTP/3.141e", it will happily reply with "HTTP/1.1 200 OK".

As a result, the negotiation would be trickier than with HTTP/2.

But like you said, it could be done in a way that it's 100% backward-compatible with existing 1.1 software, so long as their responses are also in a compatible, simplified format (and most already are.)


> If browsers want to warn that it's not encrypted, fine. So long as they don't go into ridiculous hysteria levels like they do now with self-signed certs.

I don't believe they can do anything apart from what happens now. Imagine someone manages to redirect your traffic. You were talking to some website which used a known certificate, but this time you got a self-signed one. The browser has two options essentially:

- continue the connection - in this case you just handed over your session cookie, the person on the other side can act as you on that website

- go into "ridiculous hysteria levels" and tell you that the cert presented by the server is not trusted - so do what browsers do right now

There's really no situation where the first option should be allowed. How option 2 is implemented is the interesting detail.


I am with you up to letting people use unencrypted HTTP. I assume people in this camp use telnet instead of ssh because it is simpler. No, browsers should drop support for unencrypted HTTP soon after Let's Encrypt goes live.


> browsers should drop support for unencrypted HTTP soon after Let's Encrypt goes live.

To the people who keep insisting everything needs encryption: No it doesn't. Fuck off!

You don't see me forcing PGP on your email, do you? No? Fine, then let us non-weirdos keep using plain HTTP where we want it, where we have determined that it is a good fit for our needs.

Besides, this is purely a theoretical concern, because a browser which drops support for plain HTTP won't have any user base as soon as people discover that 95% of the internet will be broken when using that browser.


> You don't see me forcing PGP on your email, do you?

Not PGP (as in, not end-to-end encryption). But hopefully in most cases you are forced to encrypt your email (SMTP/TLS), servers forwarding your email are likely using encryption (SMTP/TLS between servers), and you're pulling the email over an encrypted channel (IMAPS). Alternatively your mail submission/collection goes over HTTPS to the email provider.

And yes, I will insist on everyone using encryption in mail, web, everything. Because once you actually want to use it for some reason, you don't want it to be completely different from all your other traffic, basically screaming "hey, I'm trying to hide some data here, because all my other connections are in plaintext".

Fortunately we're at the stage where everyone is actually forced to use encryption for a lot of their traffic.


So, is the problem encryption or manual encryption?

I presume you don't care about encryption when you send emails, and yet if you're using a big name your emails will be encrypted without you even knowing it.

That's why I keep wanting to put "automatic" encryption everywhere, and would rather have your browser demote plain HTTP to be as insecure as a TLS connection with RC4-MD5, and display a good secure connection with a higher "indicator" than those, even if the certificate is self-signed (yet not as high as a trusted communication).

In practice that would mean "this connection is PROBABLY secure. If you really care about what you're about to do, STOP NOW. If you don't care just go on".

"Automatic" PGP (or really E2E encryption) would be awesome, but there is still far too much manual work for it to happen. Maybe one day we'll be there.


I assume you use telnet instead of ssh too. Once you do a little research and spend a little time figuring out what can actually be done (and is done constantly) to the unencrypted HTTP (anything from user tracking, to ad injections, to identity theft), you will realize just how wrong you are. Yes, HTTP needs to die. Sorry it's taking you a while to see it.


HTTP/2 is easier to parse than HTTP/1.1 because there are fewer edge cases.

Also a bonus: no more "Referer" (sic)


If anyone wants to learn more about optimizing for HTTP/2, unwinding HTTP/1.1 hacks, and strategies to optimize for both versions at the same time, Ilya Grigorik's "High Performance Browser Networking" is an excellent resource:

http://chimera.labs.oreilly.com/books/1230000000545/ch13.htm...


Couldn't agree more; it's also one of the best-written tech books I've ever read.


Thanks. That book is a great resource.


If it requires a book to optimize for HTTP/2, doesn't that counter the point made by your comment's parent? It's supposed to be simple.


One can write a book about literally anything. But besides that, it already "required" a book – at least two of them in fact, both published before Google even announced SPDY: "High Performance Web Sites" [1] in 2007 and its sequel "Even Faster Web Sites" [2] in 2009, both by Steve Souders.

What are they about? Essentially, optimizing your HTTP responses for the ways in which actual web browsers make HTTP requests. Any web performance analysis tool worth using (like YSlow and PageSpeed – both of which Steve Souders was involved in btw) recommended the practices outlined in those books.

So, no. I don't think a new book with updated practices says anything about the protocol. The optimization tips from this book will simply become widespread common knowledge the same way they did in the past.

[1] http://shop.oreilly.com/product/9780596529307.do [2] http://shop.oreilly.com/product/9780596522315.do


Simple to use doesn't mean simple to create. Even a simple and small code base doesn't mean the thought process preceding the actual programming was simple.


It requires a small section in a book to tell you how to undo all the tricks you've had to learn in the past 10 years to make an HTTP/1.1 website fast. Most of that is irrelevant and detrimental with HTTP/2, which is kind of the point. You'll get the benefits of those optimizations without having to do anything special to get them.

That book is also a phenomenal resource on the performance of all things web-related, so you should check it out regardless of any concerns you have about HTTP/2.


The real problem, I think, will be moving to "idiomatic" HTTP/2-centric design (lots of little resources, relying on parallel chunked delivery and server-suggested retrieval) while still keeping HTTP/1.1 clients fast.

I'm betting there will be a polyfill to make HTTP/2 servers able to deliver content to HTTP/1.1-but-HTML5 web browsers in an HTTP/2-idiomatic way—perhaps, for example, delivering the originally requested page over HTTP/1.1 but having everything else delivered in HTTP/2-ish chunks over a WebSocket.


Now that all of the browsers have moved to an auto update model and the only ones that haven't are on mobiles that have short lifespans, that's really only going to be important to the big players for about a year or two.

Or maybe I'm overly optimistic!


Think of all the software written for all devices, every single embedded thingie with a web client reporting in or polling data everywhere: every ATM, every POS device, every IoT device made yet and every one to be made in the future. Every little gadget with a network stack ever made.

And you're telling me all of those will have an updated HTTP-stack within 2 years?

You're not being "overly optimistic", you're being tragically unrealistic.

Like every other published internet standard, HTTP/1.0 and HTTP/1.1 will be here until the end of the internet. Sadly, now so will the clusterfuck that is HTTP/2.0.

The people talking about HTTP/3.0 already really seem to have missed this bit. (They're talking about 3.0 because HTTP/2.0 didn't really solve the problems we have with HTTP/1.1, but never mind that, Google steam-rolled this one through and we want to be trendy.)

The question is now: How many HTTP-stacks do you want to support? Is 2 OK? 3? 4? When do you say enough is enough?


You're missing the point. For all of them it doesn't matter if my consumer website serves multiple assets.

That's what I'm talking about.


Our point is that some people are treating this internet protocol, with a lifetime of decades, like it was this week's update of Chrome.

It isn't. And it needs to be treated differently.


> The people talking about HTTP/3.0 already really seem to have missed this bit. (They're talking about 3.0 because HTTP/2.0 didn't really solve the problems we have with HTTP/1.1, but never mind that, Google steam-rolled this one through and we want to be trendy.)

Exactly! There are more important problems to solve; page load time isn't one of them. My list includes:

* Better authentication

* More secure caching

* Improved ability to download large files

* Better methods to find alternate download locations

* Making each request contain less information about the sender

* Improved Metadata

I brain-dump a bit here: https://github.com/jimktrains/http_ng

EDIT: Formatting


I'm afraid you are - while things have improved for the reasons you outlined, corporate IT is still going to be a major party pooper.


If at any point you need a reference guide of all the HTTP/1.1 hacks that we {could,should,would} change for HTTP/2, I found the following post really useful: http://ma.ttias.be/architecting-websites-http2-era/


nginx doesn't support HTTP/2, and neither does Apache. So maybe it's time to make some effort to implement it on servers first. Maybe make some donations to the Apache and nginx dev teams to speed up this process.


nginx fully supports SPDY. HTTP/2 is largely based on SPDY so I expect that nginx will support HTTP/2 very soon.


This is my real worry. At least nginx supports SPDY, so it has a base to evolve from; Apache support appeared to be in a really bad place last time I checked.


The two most widely deployed servers on the Internet don't have HTTP/2 support, so their experience implementing and using it was never factored into the protocol.

And people wonder why I don't like it.


MS seem to be doing a good job of building it into IIS, Traffic Server has it, as does H2O, nghttp2 and others.

Google, FB, Twitter, Akamai have adapted their http daemons for it.

nginx has SPDY support, so HTTP/2 should be forthcoming, but I wonder if this will be the death of Apache - they didn't seem to be able to update the SPDY plugin to 2.4.


Yes, it's a pity; Apache and mod_spdy are really a pain currently.


Try hacking together a rudimentary one over a socket opened by AT commands. Embedded programming including cellular modems for the win, Bob.


I read Daniel Stenberg's (he is a maintainer of curl, I think?) "http2 explained" pdf the other day, and it's by far the best comprehensive explanation of http2 that I have seen. Well worth a read if you're curious what's coming with http2. http://daniel.haxx.se/http2/



I know I'm apparently not meant to be, but I'm genuinely keen to start using HTTP/2. If you've been following some of the things being done in HTML recently (rel=subresource, rel=dns-prefetch), I think it's starting to become a little obvious that for most people HTTP is the bottleneck. HTTP/2 seems to be a good, solid step forwards. If it's not perfect, well, it doesn't have to be; 2 isn't the last number.


Agreed. I can't wait to start using it. Speed is a very important feature.


This seems nice, pretty conservative.

https://tools.ietf.org/html/draft-ietf-httpbis-http2-17

"

Abstract

   This specification describes an optimized expression of the semantics
   of the Hypertext Transfer Protocol (HTTP).  HTTP/2 enables a more
   efficient use of network resources and a reduced perception of
   latency by introducing header field compression and allowing multiple
   concurrent exchanges on the same connection.  It also introduces
   unsolicited push of representations from servers to clients.

   This specification is an alternative to, but does not obsolete, the
   HTTP/1.1 message syntax.  HTTP's existing semantics remain unchanged.
"

All these changes seem good without a large change, just an improved user experience. (The Introduction section is also good - "HTTP/2 addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection"; I'd quote more, but why not click through the link at the top of this comment. Basically just some compression of headers, none of the funky stuff to keep connections alive for server push, prioritizing important requests, etc., all without changing semantics much - great.)


Did that anti-encryption backdoor get put in in the end or not? News reporting on it went quiet a while back...


Does anyone have any information about HTTP/2 development in Apache? Searching the bug list I don't immediately see anything and the only thing I can find from a Google search is a mailing list entry with someone asking about http/2 development and being told that it isn't really being worked on [1]

[1] http://mail-archives.apache.org/mod_mbox/httpd-dev/201408.mb...


I wish that, instead of a protocol improvement focused solely on network resources, the next version would also include improvements for users, such as encryption by default and doing away with cookies.


It's unfortunate that TLS ended up being optional in HTTP/2, but if the big browsers only support HTTP/2 over TLS (as FF/Chrome have said they will do) then we might see very little non-TLS deployment.


You can begin today to do away with cookies on your own sites and services. Start implementing richer clients and leveraging OpenID Connect and OAuth 2.

Cookies solve real use case problems. Unless we all start building and experiencing and improving the alternatives, progress won't be made.

That said, good luck on getting rid of cookies altogether.


Excuse my ignorance, but how can I do session management without using cookies?

I tried searching on the net, but it doesn't seem to give any concrete/valid results.

Can you give me any pointers?

Edit: I do use OAuth 2.0 on my services and use Mozilla Persona to manage user logins, but I am not clear on how I can keep sessions between requests if I don't use cookies.


You can carry the session ID in the URL. This also has the benefit of eliminating XSRF. The downside is that you have a horrendous URL, if that type of thing bothers you, and you can't have a "remember me" checkbox in your login.
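
A rough sketch of the idea (the "sid" parameter name and the values are made up):

  // Sketch only: rewrite every link and form action to carry the session ID.
  function withSession(href: string, sid: string): string {
    const url = new URL(href, "https://example.com/");
    url.searchParams.set("sid", sid);
    return url.toString();
  }

  const profileLink = withSession("/profile", "3f2a9c0d");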


This approach has some massive downsides - the session ID is sent via Referer to outbound links, URLs are logged all over the place (including browser histories), it's easy for people to publicly share it without thinking which then ends up in Google as well...


Partying like it's LITERALLY 1999...


That's a horrible suggestion, it's not 1999 anymore…


HTML5 has local storage, so you can put auth tokens in there and only send them when you need them, versus on every request.
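
Roughly like this (sketch only; the endpoint and storage key are invented for illustration):

  // Store the token once, after login.
  function storeToken(token: string): void {
    localStorage.setItem("authToken", token);
  }

  // Attach it only to requests that actually need it, instead of having it
  // ride along as a cookie on every request.
  async function fetchOrders(): Promise<Response> {
    const token = localStorage.getItem("authToken");
    return fetch("/api/orders", {
      headers: token ? { Authorization: "Bearer " + token } : {},
    });
  }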


So rather than just using cookies effectively, you can make your application absolutely dependent on both JavaScript and XHR?


"Applications" on the web are inherently dependant on JavaScript and most often XHR too, but I do agree that using Local Storage has little to no advantage over Cookies.


> "Applications" on the web are inherently dependant on JavaScript

No, they are not.


Then we clearly have different definitions of "application". For me, a web application runs in the browser, rather than merely exposing an API over HTTP that can be used by HTML from a browser.


You have a very narrow definition of an application then.


Yeah, him and every other user that includes basic features like "when I select the first step in a sequence of steps, the UI immediately responds instead of waiting several hundred ms to fetch an entirely new set of markup".

Every time I use Tor, I appreciate your viewpoint. But trying to pretend that most developers are better off spending their time maintaining a separate renderer for a few edge case users is not really reflective of reality.


I didn't say JavaScript can't be used to ENHANCE an application, I'm saying it isn't necessary and apps should work without it.


Absolutely. I'm sick and tired of idiot developers fucking up the web with their bullshit "apps" which are slow, crash all the time, and break everything.

I've worked on more "web apps" than I can count and the reality is, only 2 of them were legitimate use cases for a pure JS solution. And we gain nothing from it. I've just spent all morning debugging an issue with a complicated angular directive (and not for the first time) that would have been a few lines of jquery a couple of years ago. Probably because a bored dev wanted to play with a new toy.

As you imply, we were writing sophisticated web apps long before AJAX was popularised and those apps were way more reliable and predictable than what we have now, and they worked in Lynx if you wanted.


Even in 1999, using JavaScript to make far more usable UIs was common. While I'm not a fan of these bloated apps that need a couple megs of stuff just to render some basic looking page, let's not pretend that requiring user action and a full round trip to render even the smallest change was some golden era.

>that would have been a few lines of jquery a couple of years ago

Irony?


Please elaborate.


I agree with you completely. Cookies are an older technology and are well supported in all kinds of browsers. Plus they do the client side of session management for you (sending the tokens on every request). localStorage is a newer technology and might not be feasible in all situations. Plus JS+XHR are also not available to all kinds of users (people using Tor, NoScript, etc.).

Also, I don't see the advantage of storing session/auth tokens in localStorage over cookies. Both are stored in plain text, and can be read if somehow obtained. Also, using localStorage means writing your own client-side implementation of session management.

I also don't see the advantage of using session tokens in URLs. Cookies are included as part of the headers of the HTTP request anyway, so you don't have to have your application send session trackers. I think both are functionally the same, and tokens in URLs just do not look good!

And a public/private key-based signing system is still not there yet; unless we simplify some UX issues about having private/public keys for every user, we are not getting there.

So it looks, to me, like there is no really effective alternative for doing sessions apart from cookies (even in HTTP/2)?!


Maybe you can get around not using cookies for an AJAX application that keeps a constant connection open. But for the other 99% of the web, you still need cookies to get the statefulness required for any kind of authentication or tracking.

Cookies aren't auth tokens anyway, just session trackers.


Theoretically, we could move to a private key based system, where your browser encrypts/signs with a private key for each site, but there's neither the will to do it, nor the means to make it simple for the room temperature IQs. Shame, as the privacy and security benefits would be amazing.


This could be done today with TLS Client Certificates. There is already browser support (through either <KeyGen /> or the MS alternative, which is an API rather than an element, I believe) for creating a private/public key pair, and sending the public key to the server.

Unfortunately it's not fantastically simple to move to a new device (particularly not a mobile device where client certs are even harder to install)


>richer clients

Please, no. The internet works because it's compatible, and installing a local client for everything just prevents use of a service.


Stupid question: would you rather have server push serving static content from the application server, or a CDN for the static assets? If a CDN, how can server push be leveraged when the assets are not related to each other (and the server can't tell in which order they will be requested)?


That's really a very smart comment – no matter what happens with HTTP/2, things like geographic distance and failover are still going to matter and CDNs will still be important.

One key thing which should make this work is that server push should follow the same origin checks as most other recent web standards:

“All pushed resources are subject to the same-origin policy. As a result, the server cannot push arbitrary third-party content to the client; the server must be authoritative for the provided content.”

(http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...)

Assuming that survives contact with the actual implementations, you should be able to avoid latency-sensitive content going through the CDN while still being able to push out e.g. stylesheets & referenced fonts/images.


Where sites serve the base page through a CDN, the CDN has the potential to start making intelligent decisions on what should be pushed.

At the simplest level this might be just the CSS and JS in the <head>, but obviously, as different UAs behave differently, there's scope for more granular optimisations.


That's the easy, but relatively rare scenario. Today most content is dynamic.

Naively, it looks to me that server push will mostly be an improvement for small websites that do not use a CDN, but I can't see how it can coexist with a CDN.

Or it would require a new syntax, where the HTML tells the browser to start connecting to the CDN with a particular URL, which contains a token and should be downloaded first; that would tell the CDN that a particular list of assets will be needed for that page, and then the CDN would use server push to send those static assets.

Alternatively the CDN would become a proxy for the underlying html page, which would still be generated by the application server. That would probably be simpler.


CDNs aren't limited to just static content; it's quite common for large dynamic sites to deliver their base pages through CDNs.

They can use features like ESI to assemble the final page on the edge from static and dynamic parts, or they can just act as a proxy to the origin with the dynamic page generated there.

Even when the CDN is just acting as a proxy back to the origin, there can be performance advantages, e.g. lower-latency TCP and TLS negotiation between edge and client, and a permanent connection between edge and origin, i.e. a single TCP negotiation for all clients and larger congestion windows leading to higher throughput.

In short CDNs aren't just for static content!


Question: I originally heard HTTP/2 would force TLS and have it baked into the protocol. Is this still the case? If so, is this going to be strictly enforced? I think it's a really terrible idea to meld a protocol and a transport together. Or am I misunderstanding how it works?


This says yes for Chrome and FF but no such requirement for cURL or IE: http://daniel.haxx.se/http2/

How accurate it is, I'm not sure.


I'd really like to know what this means in the context of MeteorJS - particularly how the HTTP Push feature will affect MeteorJS in the long run. Does it make MeteorJS redundant?


[placeholder for commentary about how HTTP/2 is a bad protocol because it's binary and everything could have been fixed in a text protocol, followed by ad nauseam repetition of all the same old arguments]


The fact that there's so much disagreement and discontent surrounding this should concern everyone involved. Trade-offs are being made that may benefit some people and organizations, but these trade-offs are also causing significant problems for others.

While there has always been some degree of disagreement regarding technological matters, I think we're really seeing a lot more of it these days, especially when it comes to projects that are open source, or standards that are supposedly open. HTTP/2 is a good example. But we've also got GNOME 3, systemd, how systemd has been included in various Linux distros, many of the recent changes to Firefox, and so forth.

Not only is this disagreement more prevalent, it's also much harsher than what we've seen in the past. Instead of seeing compromise, we're seeing marginalization. We're repeatedly seeing a small number of people force their preferences upon increasingly larger masses of unwilling victims. We're seeing consensus being claimed, but this is only an illusion that barely masks the resentment that is building.

What we're seeing goes beyond mere competition between factions with differing situations. We're seeing any sort of competition, or even just dissent, being highly discouraged, suppressed, or even prevented wherever possible. Those whose needs aren't being met end up backed into a corner and shunned, rather than any effort being put into cooperating with them, with helping them, or even just with considering their views.

This isn't a healthy situation for the community to be in, especially when it comes to projects that allegedly pride themselves on openness. We've already seen this kind of polarization severely harm the GNOME 3 project. We're seeing things get pretty bad within the Debian project. And the HTTP/2 situation hasn't been very encouraging, either.


Is there "so much disagreement and discontent"?

There are a few high-profile critics, e.g. PHK, who puts his points as an eloquent rant, which of course we as a community tend to love, but the reality is that HTTP/2 is going to get rolled out by companies who've tested it and seen the benefits, i.e. not just Google.


> While there has always been some degree of disagreement regarding technological matters, I think we're really seeing a lot more of it these days

I don't have any way to dispute this, but I don't think it's easy to provide evidence for it either. I feel that there may simply be more individuals involved in these kinds of discussions these days.

Obviously at some point you have to stop discussing something and start building it. That's not to say that discussion isn't important or shouldn't be encouraged (quite the contrary), but I find it very difficult to make generalizations about where the line should be drawn.


Obviously at some point you have to stop discussing something and start building it.

No, see, that's where we are having problems: there is a perfectly valid answer of "It's good enough, or simple enough, that we will leave it as-is barring a really big leap".

Assuming that you've got to build something to replace the status quo is still itself an assumption. Saying, in effect, "Hey, we've got to build something" naturally disallows a reasonable engineering conservatism.

Software gets better not as you add things, but as you remove them--people keep forgetting this.


"We're seeing any sort of competition, or even just dissent, being highly discouraged, suppressed, or even prevented wherever possible."

Could you please elaborate on this point?


jgrahamc's comment that I replied to is a mild example of this. If people are repeatedly raising the same concerns whenever HTTP/2 is discussed, then there are clearly issues with it that aren't being sufficiently dealt with. Writing off their problems as merely being "all the same old arguments", and discouraging discussion of them, doesn't exactly help solve these problems.

Things tend to be particularly bad when it comes to systemd, though. It isn't unusual to see censorship occur, either in the form of unjustifiable downmodding, comment deletion, or even the banning of participants, depending on the venue. We also are seeing it become increasingly difficult for Debian users, for example, to opt out of using systemd, or to easily switch to an alternative init system.

Instead of people with different preferences or interests working together, or even working independently, we're more often seeing one group of people quench the ability of the competing groups to participate or to even have choice. When discussion is stifled, and choice is taken away, the outcome will likely never be positive.


Count me in the naysayer group. I don't like HTTP/2.

But comparing it with systemd is completely unreasonable. When a huge group of people complained about HTTP/2, it became an optional standard, and HTTP/1.1 is officially the only web standard capable of satisfying a large number of use cases. No choice is being taken away; it's just a bad standard that is being pushed at server maintainers.


We're repeatedly seeing a small number of people force their preferences upon increasingly larger masses of unwilling victims

This goes both ways though, doesn't it? Much of the arguing about HTTP/2 tended to come from Johnny-come-latelies who, if we're being honest, seemed to just want to toss some refuse in the gears. Microsoft, in particular, watched as Google proposed SPDY, and then iterated and shared their findings, and then right as consensus (or as close to consensus as possible) started to be reached, Microsoft tried to upset the cart. In that case wouldn't Microsoft, and the naysayers, be the ones trying to force their preferences? The delay of HTTP/2, or basic improvements to these technologies, not only causes hassles for developers (image sprites, resource concatenation, many domains, and on and on), it marginalizes the web.

It is going to be pretty rare when any initiative sees complete unanimous agreement, especially given that many of the parties have ulterior motives and agendas that aren't always clear.


You just saved me like ~10 minutes. Thanks bro.


Another year, another wheel reinvented.


I know you're being somewhat facetious, but have you considered how much the wheel has actually been reinvented?

The first wheels were probably logs under rocks. Then axles got developed, then spokes, then tyres etc.

Everything from the gyroscope to the LHC can attribute its beginnings to the humble wheel.

Reinvention is, if not always good, always admirable.


Up to a point, I agree. Beyond that point it becomes churn and reinvention for the sake of itself.


HTTP/2 is a bad protocol; that much is clear by now. Luckily most of us won't have to deal with it, because it will be deployed merely as an optimization, with a new generation of reverse-proxy servers, like H2O. https://github.com/h2o/h2o


Care to at least explain why you think it's a bad protocol?


Poul-Henning Kamp (the author of Varnish) explained it best. http://queue.acm.org/detail.cfm?id=2716278


Do you have another explanation which hasn't been torn apart by Hacker News commentators already? As someone who would like to avoid going to the trouble of implementing HTTP/2, I'm not being sarcastic - I truly want to know why HTTP/2 is so bad, and I want to read it from someone who backs up their words with cold, hard facts.


Have a look at the W3 HTTP WG mailing list. PHK and others voiced all of their objections there in detail over a long period, and you can also read the rest of the WG's responses: https://www.w3.org/Search/Mail/Public/advanced_search?keywor... (all of PHK's posts to the list, in chronological order).


This, for example, happened in the HTTP working group earlier today; the whole thing was rushed and the known flaws just keep adding up. https://lists.w3.org/Archives/Public/ietf-http-wg/2015JanMar...


I followed the thread a little further, and the response seemed reasonable: experiments were run on this idea, it didn't seem to help, and nobody (including phk) has offered any further evidence to the contrary since then.

I think you need to expand a little more on why you feel this is a "known flaw".


A 60-page document to explain how to compress text name/value pairs that are mostly unchanged between requests is "reasonable"?

A static compression table with proxy authentication fields in it, as if the network between your browser and LAN proxy is somehow the bottleneck, is reasonable?

This is a most over-engineered protocol, one where nobody knows how the tables were constructed or from what data, and nobody can quantify what the benefits of its features will really be. For instance, how much is saved by using a Huffman encoding instead of a simple LZ encoding, or simple store/recall, or not compressing at all? Nobody knows! Google did some magic research in private and decided on the most complicated compression method, so therefore HTTP/2 must use it.

This HTTP/2 process is insane.


> For instance, how much is saved by using a Huffman encoding instead of a simple LZ encoding, or simple store/recall, or not compressing at all? Nobody knows!

It's not all about compression ratios. From the HPACK spec (https://http2.github.io/http2-spec/compression.html#rfc.sect...):

  SPDY [SPDY] initially addressed this redundancy by 
  compressing header fields using the DEFLATE [DEFLATE]
  format, which proved very effective at efficiently 
  representing the redundant header fields. However, that
  approach exposed a security risk as demonstrated by
  the CRIME attack(see [CRIME]).

  This specification defines HPACK, a new compressor for
  header fields which eliminates redundant header fields, 
  limits vulnerability to known security attacks, and which
  has a bounded memory requirement for use in constrained 
  environments. Potential security concerns for HPACK are
  described in Section 7.


The Huffman code is optional, and the spec says not to use it on sensitive fields (with no mention of what those are). You could use a separate LZ on each header field, compressing really long headers without letting information leak between headers and the bodies. You think they tested that? Or did the people who didn't know about CRIME in the first place just react? Where are the numbers showing that a static Huffman code, created from some unknown dataset at one point in time and impossible to extend because there's only a single "compress or huffman" bit in the protocol, is needed? Like I said, insane.


"As someone who would like to avoid going to the trouble to implement HTTP2 (...)"

Good luck with that. Google has been incredibly adept at using the leverage of their search engine to make webmasters adhere to best practices. ("mobile friendly" e-mails lately, anyone?) Once HTTP/2 lands as a default in Chrome, Apache, Nginx, and Netty, of course they'll push HTTP/2-friendly sites higher in search results.


I think you meant to say "Google's definition of best practices".


Except that, as HN has mostly ripped him apart for, his argument is very weak at best.

He goes into zero detail, and where he does, it says things like "likely to increase CO2 consumption"?

Seriously?



Because it doesn't solve any problems except page loading speed. There are other things that people care about, and the added complexity of implementing Layer 4 in Layer 7 makes it even more of a monstrosity.

I put this in another comment above:

* Better authentication

* More secure caching

* Improved ability to download large files

* Better methods to find alternate download locations

* Making each request contain less information about the sender

* Improved Metadata

I brain-dump a bit here: https://github.com/jimktrains/http_ng


How is it that sensible comments like this keep getting downvoted?


I just wish people would actually discuss their opposition to any statement I made rather than just downvoting, but so it is.


Well he doesn't need to, by his own proclamation there's no arguing! /s


It is not a bad protocol for what is in there; it is bad for what is not.

It seems like it was built for the big players to eke out 5% more performance. How about the average website? What is in there to help standardize authentication? What is in there to help protect privacy?

In the end it looks more like HTTP/1.2, with header compression being the only new feature. The rest of what makes up HTTP/2 is basically implementing a new transport-layer protocol at the application level.


On tests on a rather average SPA site I worked on, adding the letters "spdy" to the nginx config produced double-digit percentage performance benefits.

Keeping it backwards compatible with HTTP/1.1 as far as semantics go means it will actually get real adoption, very easily, as you can seamlessly enable it via middleware without changing app code anywhere.

I don't know what you mean by "standardize auth", but seeing what a clusterfuck OAuth2 turned into, it'd probably guarantee HTTP/2 wouldn't ship for a long time, then ship a mess.

To protect privacy, major browsers plan to only support HTTP2 over TLS. That should be a major incentive for more websites to force TLS. Pretty clever, sorta, even if we might have technical objections to requiring TLS for no "real" reason.


Double-digit performance gains for a bunch of little files, probably. It is less than 5% when compared against concatenated CSS and JavaScript, image sprites, and domain sharding. It's great that HTTP/2 saves a build step, but it is hardly going to make the web a lot faster.


> help standardize authentication

> help protect privacy

I think this is sort of answered by the headline - "HTTP/2 is done".

If they had succumbed to the second-system effect and added every bell and whistle that every HTTP user would ever want, there would be an endless period of bikeshedding and it would be years before you could even call the spec "done".

They seem to have taken a more conservative approach - changing enough that HTTP 1.2 wasn't accurate, but also not going all-out and trying to define a post-Snowden quantum-computing-based KSTP (kitchen sink transport protocol).


Or they could have solved actual problems people have, like the lack of privacy features and the current problems with HTTP authentication.


I agree, and have no problem with a binary protocol (you always have to go down to the byte layer if you want high-performance networking or storage). But they could have made it much easier; just look at the spec on http://json.org (BSON isn't that bad either). I sometimes wonder if RFCs are designed to be human readable...

I think 5% is a very conservative figure. HTTP has massive overhead for lots of small files (look at the http joke); even with browser caching it still requires multiple round trips (empty TCP frames!), and compression has a negative impact on small files (overhead & bigger files).

The problem is that to use this effectively (sending a page, plus any CHANGED assets, down a single request) requires quite a bit of work on not only the web server but also the application.

Another plus with reusing connections is not having to re-authenticate every request.


"Standardize authentication"?


Well Basic Auth exists now, but it is unusable for most sites due to several well-documented shortcomings. Surely something could be done to improve upon it.


The most important shortcoming of basic auth is the idea of building it into the protocol to begin with. Session authentication is superior to basic auth.


Like kicking it up to the application layer?


You can add new authentication methods by defining the format of the authorization/authentication headers. OAuth 2 does it. The only thing you need is buy-in from application authors.
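
For example, an OAuth 2 bearer token just rides in the existing Authorization header (the host and token below are placeholders):

  GET /resource HTTP/1.1
  Host: api.example.com
  Authorization: Bearer some-opaque-token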


This is mostly true, but browsers treat Basic Auth specially. To use Authorization: Bearer headers you have to use JavaScript and perhaps localStorage. When using Basic Auth the browser caches your credentials and allows you to be authenticated without cookies and without JavaScript code. The only way you can use OAuth header authorization today is with JavaScript apps; Basic Auth works with normal server-side apps.


> How about the average website?

It's actually much bigger for the average site, which doesn't have Google's engineering team supporting hundreds of thousands of edge servers expensively pushed as close to the client as possible, and massive investment in front-end optimization. Think about how much complexity you can avoid taking on without needing to do things like sharding, spriting, JS bundling, etc.


Arbitrary levels of prioritized multiplexing within a functioning congestion-control context is the killer feature of HTTP/2.

Header compression exists merely to enable that as an implementation detail.



