> Three companies that like Google's approach -- Automattic, MaxCDN, and CloudFlare -- are funding Nginx developers to update its SPDY support to version 3.1, CNET has learned. Under the deal, SPDY 3.1 should arrive in Nginx 1.5 in January, a source familiar with the partnership said.
The developer who announced this works for Nginx the company, so I would assume (emphasis on the assume!) that Nginx Inc. was paid to undertake this.
I don't think this is a problem given that the patch is being openly pushed into the "public" source immediately. It'd also be extremely challenging to pay "the community" for a patch like this (how do you divvy up the funds? per SLOC?), although technically you could always directly pay a single contributor or two.
Directly engaging Nginx Inc seems to be the surest way both to get something delivered and to avoid a lot of PR risk, if you ask me.
Normally if a company wants to take that route, they would hire one of the developers as an employee, and the employee's job would be to work on that open source project.
Good initiative! Just a heads up that server push does not seem to be implemented.
Server push is one of the spec items that got much-wanted clarifications in SPDY/3, and it would be awesome to see nginx support it. Apache mod_spdy, Jetty, and Netty do have it.
Server push makes it possible to get super low latencies for full page loads by pushing out all the files the server thinks the browser might need for a page, without waiting for the requests.
Server push is not a magic bullet: it can take more bandwidth compared to a regular visit from a browser that already has the required resources (images, CSS, JS, etc.) cached. But it's still better than inlining in HTML, because the resources from server push can be cached.
If you care more about page loading speed (especially for first-time visitors) and less about actual bandwidth consumed, SPDY with server push can be great. Hint: deciding whether a visitor might benefit from "a little push" as a fresh visitor could be done with a cookie existence check or something similar.
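A minimal sketch of that cookie heuristic (the cookie name and function names are made up, not from any real server): push only when the request carries no marker cookie, on the assumption that a cookie-less request is a first-time visitor with a cold cache.

```python
# Sketch: decide whether to server-push assets for a request based on
# whether a hypothetical "seen" marker cookie is present. First-time
# visitors have no cookie and a cold cache, so pushing likely helps them;
# returning visitors probably have the assets cached already.

def should_push(cookies):
    """Push only for visitors without our marker cookie."""
    return "seen" not in cookies

def handle_request(cookies):
    """Return (push?, response cookies to set)."""
    push = should_push(cookies)
    # Mark the visitor so subsequent requests skip the push.
    return push, {"seen": "1"}

# First visit: no cookie, so we push and set the marker.
push, set_cookies = handle_request({})
# Second visit: the cookie comes back, so we skip the push.
push_again, _ = handle_request(set_cookies)
```

In a real deployment the same check would live in the server config or request handler; the point is just that one bit of client state is enough to avoid re-pushing to warm caches.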
Last year I spent about a week benchmarking SPDY. Server push didn't show any improvement in page load time, even for first-time visitors. The whole benchmark was scripted with Chrome running in a dummy X server, with the round-trip time artificially constrained to ~100ms.
Even if it were to show an improvement in some situations (different round-trip times or network speeds), it would be in the microseconds to a couple of milliseconds range, with a large variance.
Given that it degrades performance for second-time visitors, I would recommend not enabling it without further benchmarking.
We have a lot to learn about how to use server push effectively. That said, let's analyze some actual use cases:
a) Page A currently inlines a half dozen small assets on every page. These inlined resources inflate every page and are delivered at the same priority as the HTML. By contrast, a "dumb push" server delivers these same assets as individual resources via push. Net benefit? Basically the same performance, since inlining is a form of application-layer push. However, a smart server can at least prioritize and multiplex the pushed bytes in a smarter way... Now, let's make the server just a tiny bit smarter: if it's the same TCP connection and the server has already pushed the resource, don't push it on the next request. Now we're also saving bytes... <insert own smarter strategy here>.
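The "tiny bit smarter" dedupe above can be sketched in a few lines (class and method names are illustrative, not any real server's API): track what has already been pushed on this connection and skip it next time.

```python
# Sketch of per-connection push deduplication: a server that remembers
# which resources it has already pushed over this TCP connection and
# only pushes each one once, saving bytes on subsequent requests.

class PushConnection:
    def __init__(self):
        self.already_pushed = set()

    def assets_to_push(self, assets):
        """Return only the assets not yet pushed on this connection."""
        fresh = [a for a in assets if a not in self.already_pushed]
        self.already_pushed.update(fresh)
        return fresh

conn = PushConnection()
first = conn.assets_to_push(["/app.css", "/app.js"])     # both are new
second = conn.assets_to_push(["/app.css", "/logo.png"])  # CSS is skipped
```

Note this state is per connection, so a reconnecting client starts fresh; combining it with a cookie check would cover that case too.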
b) Page B has two CSS files and one JS file in the critical rendering path. Without push you either inline all three (ouch), or you round-trip to the client, have it parse the HTML to discover these resources, and come back to the server... With push, the server can avoid the extra RTT and push those assets directly to the client -- a huge performance win. How does the server know? Well, either you made it smart, or you use some adaptive strategy like looking at past referrers and building a map of "when a client requests X, they also come back for Y and Z" -- this is already available in Jetty.
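That adaptive referrer strategy can be sketched like so (names are made up; Jetty's actual implementation differs): record which resources clients fetch right after a page, keyed by the Referer, and push that learned set proactively on later requests for the page.

```python
from collections import defaultdict

# Sketch of a referrer-based push map: observe "client that loaded page
# X came back for resource Y" pairs, then push the accumulated set of
# resources whenever X is requested again.

class PushMap:
    def __init__(self):
        self.learned = defaultdict(set)

    def observe(self, referrer, resource):
        """A request for `resource` arrived with Referer `referrer`."""
        self.learned[referrer].add(resource)

    def push_set(self, page):
        """Resources worth pushing when `page` is requested."""
        return sorted(self.learned[page])

pm = PushMap()
pm.observe("/index.html", "/main.css")
pm.observe("/index.html", "/app.js")
candidates = pm.push_set("/index.html")
```

A production version would also want to age out stale entries and cap the set size, otherwise one redesign leaves you pushing dead assets forever.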
The fears of extra bytes are also somewhat exaggerated (they're valid, but exaggerated). The client, if it desires, can disable push. The client can also use flow control to throttle how much data can be transferred in the first push window (both Firefox and Chrome do this already). Lastly, the client can always decline and/or RST a pushed resource.
These ideas sound fine on paper, but for first-time visitors (on a warmed-up browser), server push didn't show any improvement in page load times in my benchmarks. My question is: why enable server push at all?
I am no longer with the company for whom I performed the benchmarks, which is why I can't publish them. Perhaps there are other benchmarks out there that show server push is more performant. If so, I would be happy to see them.
I don't know the details of your benchmark methodology or setup (server / client), so not sure I can offer a meaningful response... short of: let's not confuse "my benchmark failed" with "the future doesn't work". Anything from a poorly implemented server (broken multiplexing, flow control, prioritization, etc), to bugs in past versions of Chrome...
I cloned from http://hg.nginx.org/nginx to try to reproduce with your configure arguments, but I actually error out elsewhere:
src/http/ngx_http_request.c: In function ‘ngx_http_set_virtual_server’:
src/http/ngx_http_request.c:1955:32: error: ‘cscf’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
cc1: all warnings being treated as errors
which I'm not interested in following up on. My `ngx_string.o` did build successfully, though. Using gcc 4.7.2 on Debian 7.
Are you using a new CPU with an old(er) gcc version? Try specifying -march explicitly instead of native.
Doh. I recently moved this VM from a C2D to an i5 machine. And this is on Ubuntu 12.04 which is old enough not to know about this newer CPU. Still, it should not crap out like that...
I haven't had issues, but I'm only using -O2. I also have to do "-march=native -mno-avx -mno-aes" because I'm on a Xen instance that doesn't allow the avx/aes instructions.
Firefox 14? If that's correct, does your company understand it's vulnerable to numerous security vulnerabilities by being so far behind (especially ones that could remotely compromise your entire system)?
Seems coincidental. Just highlights that we need HTTP/2, and the sooner the better... Inventing your own app-layer multiplexing on top of HTTP (via base64 chunks - ugh), is not an interesting problem to be solving in 2014.
So HTTP/1.1 lasted 20 years without significant changes, but SPDY has been around for only a year or so and is already on version 3, with extra point releases for shits and giggles.
If you're going to create a protocol and want it to be used internet-wide, can you at least plan up front and keep things stable once released?
Glad I don't have to maintain and deploy any code related to this. Jesus christ.
A version number is just an arbitrary point in time, in the case of SPDY, with a certain feature set. Just because SPDY/2 doesn't support everything SPDY/3 supports doesn't mean it's unstable.
This is why I like date-based version numbering, such as Ubuntu 13.10 or C++11.
How long did HTTP 0.9 last? How about 1.0? I guess you think that early http was one of those non-stable not-suitable-for-internet-wide release protocols.
http://news.cnet.com/8301-1023_3-57616193-93/nginx-upgrade-f...