Making the Web Faster with HTTP 2 Protocol (phpclasses.org)
35 points by rvavruch 1909 days ago | 19 comments

With all the buzz already surrounding SPDY and the number of existing implementations out there... what are the chances that HTTP-2.0 will simply never get the traction it needs for real applications?

I mean, SPDY is here and almost every server can already handle it (where "can" != "is configured to by default"). If Google pulls features from HTTP 2.0 into SPDY 1.x over the next couple of months, what would be the benefit of anyone implementing HTTP 2.0?

It is my understanding that HTTP 2.0 will very likely be SPDY, maybe with some added extensions, but the basis will be SPDY.

A clear case where a working and apparently perfectly backwards-compatible vendor-specific implementation worked so well that it might actually become an official standard.

Now if only we could have SSL with name based virtual hosts or much wider use of IPv6 so that SPDY will actually be useful for a wide range of server administrators.

Now if only we could have SSL with name based virtual hosts

We can: it's called SNI (http://en.wikipedia.org/wiki/Server_Name_Indication). Unfortunately browser support isn't quite there; recent versions of the major desktop browsers do support it, but e.g. the stock Android browser does not (because Apache HttpClient does not).
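To make the mechanism concrete, here's a small sketch (not from the thread, using Python's stdlib ssl module) of how a client sends SNI during the TLS handshake, which is what lets one IP serve certificates for many hostnames:

```python
# Sketch: the server_hostname argument is what puts the name into the
# ClientHello's SNI extension; an SNI-aware server uses it to pick the
# right certificate before any HTTP bytes are exchanged.
import socket
import ssl

def fetch_cert_subject(hostname, port=443):
    """Connect via TLS and return the server certificate's subject."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]
```

The key point is that the name travels in cleartext before the certificate is chosen, so no per-name IP or port is needed.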

We wouldn't need SSL with name-based virtual hosts if web browsers could use SRV records (and thus connect to different ports, so the server would know which cert to cough up without requiring the name.)
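The SRV scheme described above would look something like this in a zone file (hostnames and ports here are purely illustrative):

```
; _service._proto.name      class type priority weight port target
_https._tcp.example.com.    IN    SRV  10       5      8443 shared.example.net.
_https._tcp.example.org.    IN    SRV  10       5      8444 shared.example.net.
```

Each name maps to its own port on the shared server, so the port alone would tell the server which certificate to present.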

This doesn't really scale. Every SSL-protected vhost then needs its own port on the server. In current practice there are probably plenty of free ports, but it just seems like a poor choice overall.

(Regardless, as I mentioned above, name-based vhosts work just fine with HTTPS, using SNI. Much easier to fill in the gaps in browser support for SNI than to get everyone to adopt SRV records.)

HTTP 2.0 has been in the works for a long time. It's picked up again recently, after the HTTP-NG working group disbanded in 1998. The working group will take some features that SPDY has shown to be useful and stable and integrate them into the spec. They are also looking at parts of Microsoft's "Speed and Mobility" protocol.

I understood the article to say that SPDY would essentially become HTTP 2.0, just standardized and not under Google's control (in theory). Which makes a lot of sense for server and browser implementors as well as for Google. I guess they wouldn't want to be the next Microsoft, not on purpose at least.

SPDY is overrated and only has traction because Google decided to make it. So far the data shows that the severely limited gains it provides, it provides at the cost of significant complexity.

I sure as hell could appreciate a new HTTP standard that was brought to the field by having professionals (maybe even the IETF!) work together and reason about things publicly, not by having Google hijack the process, dump shit out, say "that's what we have implemented; if you want to compete with us, you will have to take this package", and then after the fact release some docs and claim it's open.

You know, SPDY just smells bad wherever you look. I'd like the new HTTP protocol to be something you can trust.

> So far the data shows that the severely limited gains it provides, it provides at the cost of significant complexity.


HTTP: Stateless, simple, plain-text, open, extendable. One request, one resource. Etc.

SPDY: Not stateless, not plain-text, and it includes complecting factors like prioritization and multiplexing. I say that is several orders of magnitude of increased complexity and uncertainty baked into a protocol.

Result: Much less than an order of magnitude in improved response-time.

I say SPDY comes at such a cost that it cannot be considered worth it. Especially when you consider it comes at the added cost of handing over control of one of the internet's main protocols to a single company to be developed behind closed doors, as Google is already doing with SPDY.

I'm flabbergasted that people aren't reacting to this. Had Microsoft been doing anything like this, people would be calling their senators to start investigations. But with Google it is evidently all cool.

You'd think people would remember the cost of handing over the internet to one single company, when the results of last time we did that is still plaguing us today (IE).

As for source, this one should do: http://www.theverge.com/2012/5/3/2995881/google-spdy-speed-t...

> I sure as hell could appreciate a new HTTP-standard, which was brought to the field by having professionals (maybe even IETF!) work together and reason about things publically

This is a good point, for instance the developers of Spdy don't actually compare it to HTTP pipelining. They don't measure the effect of 'head of line blocking', just assuming it is a major problem. They don't consider that the protocol performs worse than HTTP over satellite and similar links. Neither the design choices nor specific details have been vetted or backed up by real deliberations.

There are tons of ways a committee of experts could improve Spdy, but it looks like Google is just going to show up with their draft RFC and demand a rubber stamp.

But this is just typical Google. Even when they do work with committees they show up and for instance say 'add big integers to JavaScript or else' (the 'or else' being Dart).

Spdy is already being further developed behind closed doors, like a closed standard, owned by Google.

This should really be enough of a reason not to trust them with the task of developing this protocol.



- doesn't include DNS lookup times

- doesn't include packet loss

- doesn't compare to pipelining

The first two mean the claimed speedup isn't measuring the whole real time. The third is just amazing... even the stock Android browser uses pipelining. Basically pipelining gives the same benefits as Spdy and they really, really don't want to admit this.
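For reference, pipelining needs no new framing at all: the client just writes several requests back-to-back on one connection before reading any response. A rough sketch of what that looks like on the wire (helper name is hypothetical):

```python
# Build the raw bytes of several pipelined HTTP/1.1 requests. In real
# use you would send these on one socket and then read the responses
# in order, which is exactly where head-of-line blocking can appear.
def pipelined_requests(host, paths):
    return b"".join(
        "GET {} HTTP/1.1\r\nHost: {}\r\n\r\n".format(p, host).encode()
        for p in paths
    )

wire = pipelined_requests("example.com", ["/style.css", "/app.js"])
```

Responses must come back in request order, which is the blocking behavior SPDY's multiplexing is meant to avoid, so a fair benchmark would quantify how often that ordering actually hurts.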


This is what I'm talking about... every one of Microsoft's changes he identifies as b-h is a positive change. That he can't see it I guess is a testament to being young and inexperienced.

"Another aspect of concern is the implementation of server push of resources that were already cached by the browsers. Mobile applications may also not want to retrieve some resources that the server may assume they want to download. So the criticism is that preemptive server push may end up being an undesirable thing."

Why is it "preemptive"? It seems more like a nonpreemptive push, right?

> If the user closes the browser tab and no other pages from the same site are opened, the browser may send an explicit request to end the connection, so it does not keep tying the server.

Does this mean keeping a background tab open uses a remote server's resources indefinitely? How can I, as the server dev, prevent an unintentional DDoS?

Haven't looked at the SPDY spec[1] too closely, but I think each side of the SPDY (or underlying TCP) connection would be able to idle-disconnect after a timeout or during a high-load situation. (i.e. to prevent idle connections from consuming ports/file descriptors)

So in the case you quoted, the server would also be able to explicitly tell the browser to start a new connection later. (It's not just a browser-to-server signal.)

Generally, most HTTP 1.1 (keepalive-aware) servers have a default timeout for those "persistent" connections[2][3] so this isn't actually a new problem specific to SPDY.

(Aside: simply leaving an idle TCP connection open for later re-use doesn't necessarily imply that idle users will "DDoS" a server. Depending on the server software and OS, the cost per socket is low enough that many idle connections aren't actually a problem until you hit port and file descriptor limits, which, again, is already well handled in plenty of other HTTP/TCP applications by using timeouts.)

[1]: http://www.chromium.org/spdy/spdy-protocol [2]: http://wiki.nginx.org/HttpCoreModule#keepalive_timeout [3]: https://httpd.apache.org/docs/2.2/mod/core.html#keepalivetim...
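For example, the keepalive timeouts in [2] and [3] are one-line settings (the 30-second values here are illustrative, not recommendations):

```
# nginx: close keepalive connections idle for more than 30 seconds
keepalive_timeout 30s;

# Apache httpd: the equivalent directives
KeepAlive On
KeepAliveTimeout 30
```

A SPDY-capable server would presumably apply the same kind of idle timeout to its long-lived connections.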

I don't currently have any SPDY experience, so if there is a "best practice" for this I'm unaware of it. With that said, I would take a two-pronged approach. On the server side you could set a timeout on that user's session to reclaim resources after a period you deem reasonable, regardless of whether you've received an explicit "I'm done" from the user's browser. I would also hope servers implementing SPDY allow a way to explicitly end a connection; if so, I would close the connection when that user's session expires.
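I don't know of a standard hook for this, but the server-side prong could be as simple as a periodic sweep over last-activity timestamps (a sketch; names and the timeout value are hypothetical):

```python
# Reap connections with no recent activity, without waiting for an
# explicit "I'm done" signal from the browser.
import time

IDLE_TIMEOUT = 120.0  # seconds; tune to your traffic and resources

def reap_idle(last_activity, now=None):
    """last_activity: dict mapping connection id -> last-activity
    timestamp. Returns the ids whose connections should be closed."""
    now = time.time() if now is None else now
    return [cid for cid, ts in last_activity.items()
            if now - ts > IDLE_TIMEOUT]
```

A background thread or event-loop timer would call this every few seconds and close (or send a go-away signal on) the returned connections.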

Maybe close the connection when it hasn't been used for a while?

I was so close to not opening this because the source is a website with the name 'PHP' in it... but I'm glad I did, nice article.
