Nginx Patch: SPDY/3.1 protocol implementation (nginx.org)
124 points by jdorfman on Jan 27, 2014 | 28 comments



With help from some companies:

> Three companies that like Google's approach -- Automattic, MaxCDN, and CloudFlare -- are funding Nginx developers to update its SPDY support to version 3.1, CNET has learned. Under the deal, SPDY 3.1 should arrive in Nginx 1.5 in January, a source familiar with the partnership said.

http://news.cnet.com/8301-1023_3-57616193-93/nginx-upgrade-f...


Just out of curiosity, are they paying Nginx Inc or Nginx contributors?


The developer who announced this works for Nginx the company, so I would assume (emphasis on the assume!) that Nginx Inc. was paid to undertake this.

I don't think this is a problem given that the patch is being openly pushed into the "public" source immediately. It'd also be extremely challenging to pay "the community" for a patch like this (how do you divvy up the funds? per SLOC?), although technically you could always directly pay a single contributor or two.

Directly engaging Nginx Inc seems to be the surest way to get something delivered, and it also avoids a lot of PR risk, if you ask me.


Normally if a company wants to take that route, they would hire one of the developers as an employee, and the employee's job would be to work on that open source project.


Since ~99.5% of contributions to nginx are made by Nginx Inc employees, I don't think there were really two options.


Good initiative! Just a heads up that server push does not seem to be implemented.

Server push is one of the spec items that got much-wanted clarifications in SPDY/3, and it would be awesome to see nginx support it. Apache mod_spdy, Jetty, and Netty do have it.

Server push makes it possible to get very low latencies for full page loads by pushing out all the files the server thinks the browser might need for a page, without waiting for the requests.

Server push is not a magic bullet: it can take more bandwidth compared to a regular visit from a browser that already has the required resources (images, CSS, JS, etc.) cached. It is still better than inlining in the HTML, though, because resources delivered via server push can be cached.

If you care more about page loading speed (especially for first-time visitors) and less about actual bandwidth consumed, SPDY with server push can be great. Hint: deciding whether a visitor might benefit from "a little push" as a fresh visitor could be done with a cookie-existence check or something similar (a sketch of that idea follows below).
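To make that hint concrete, here is a minimal sketch of the cookie-gating idea. Nothing below comes from nginx or the SPDY spec: server push carried over into HTTP/2, so the sketch uses Go's net/http push API (http.Pusher, available since Go 1.8), and the cookie name, asset paths, and certificate files are hypothetical placeholders.

    package main

    import (
        "log"
        "net/http"
    )

    // pushAssets lists the critical resources a cache-cold visitor will need anyway.
    // The paths are illustrative only.
    var pushAssets = []string{"/css/site.css", "/js/app.js"}

    func pageHandler(w http.ResponseWriter, r *http.Request) {
        // "Fresh visitor" heuristic: no "seen" cookie suggests a cold cache,
        // so pushing the critical assets is likely a win.
        _, err := r.Cookie("seen")
        firstVisit := err == http.ErrNoCookie

        if pusher, ok := w.(http.Pusher); ok && firstVisit {
            for _, asset := range pushAssets {
                // Ignore push errors; a client can always decline or reset a
                // pushed stream and simply request the asset the normal way.
                _ = pusher.Push(asset, nil)
            }
        }

        // Mark the visitor so repeat page loads (likely warm cache) skip the push.
        http.SetCookie(w, &http.Cookie{Name: "seen", Value: "1", Path: "/"})
        w.Write([]byte("<html>...page referencing the assets above...</html>"))
    }

    func main() {
        http.HandleFunc("/", pageHandler)
        // Push needs HTTP/2, which in net/http requires TLS; cert.pem and
        // key.pem are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }

The point is only the gating rule: push to visitors that look cache-cold, skip everyone else.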


Last year I spent about a week benchmarking SPDY. Server push didn't show any improvement in page load time, even for first-time visitors. The whole benchmark was scripted with Chrome running in a dummy X server, and the round-trip time was artificially constrained to about 100ms.

Even if it were to show an improvement in some situations (different round-trip times or network speeds), it would be in the range of microseconds to a couple of milliseconds, with a large variance.

Given that it degrades performance for second-time visitors, I would recommend not enabling it without further benchmarking.


We have a lot to learn about how to use server push effectively. That said, let's analyze some actual use cases:

a) Page A currently inlines a half dozen small assets on every page. These inlined resources inflate every page and are delivered at the same priority as the HTML. By contrast, a "dumb push" server delivers these same assets as individual resources via push. Net benefit? Basically the same performance, since inlining is a form of application-layer push. However, a smart server can at least prioritize and multiplex the pushed bytes in a smarter way... Now, let's make the server just a tiny bit smarter: if it's the same TCP connection and the server has already pushed the resource, don't push it on the next request. Now we're also saving bytes... <insert own smarter strategy here>.

b) Page B has two CSS files and one JS file in the critical rendering path. Without push you either inline all three (ouch), or you round-trip to the client and make it parse the HTML to discover these resources and come back to the server... With push, the server can avoid the extra RTT and push those assets directly to the client -- a huge performance win. How does the server know? Well, either you made it smart, or you use some adaptive strategy like looking at past referrers and building a map of "when a client requests X, they also come back for Y and Z" - this is already available in Jetty.
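The adaptive strategy in b) - learning "when a client requests X, they also come back for Y and Z" from referrers - can be sketched in a few dozen lines. To be clear, this is not Jetty's or nginx's implementation; it is an illustration that again uses Go's HTTP/2 push API (http.Pusher), and the Referer-based learning rule, the file layout, and every name in it are assumptions.

    package main

    import (
        "log"
        "net/http"
        "net/url"
        "strings"
        "sync"
    )

    // pushMap learns, per page, which sub-resources clients come back for,
    // keyed by the Referer header of those follow-up requests.
    type pushMap struct {
        mu   sync.Mutex
        deps map[string]map[string]struct{} // page path -> set of sub-resource paths
    }

    func newPushMap() *pushMap {
        return &pushMap{deps: make(map[string]map[string]struct{})}
    }

    // learn records "requests for page are followed by requests for resource".
    func (m *pushMap) learn(page, resource string) {
        m.mu.Lock()
        defer m.mu.Unlock()
        if m.deps[page] == nil {
            m.deps[page] = make(map[string]struct{})
        }
        m.deps[page][resource] = struct{}{}
    }

    // resourcesFor returns everything previously observed to follow a page.
    func (m *pushMap) resourcesFor(page string) []string {
        m.mu.Lock()
        defer m.mu.Unlock()
        var out []string
        for res := range m.deps[page] {
            out = append(out, res)
        }
        return out
    }

    func isSubResource(p string) bool {
        return strings.HasSuffix(p, ".css") || strings.HasSuffix(p, ".js") ||
            strings.HasSuffix(p, ".png")
    }

    // refererPath extracts the path component of a Referer URL.
    func refererPath(ref string) string {
        if u, err := url.Parse(ref); err == nil {
            return u.Path
        }
        return ""
    }

    func main() {
        pm := newPushMap()

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if isSubResource(r.URL.Path) {
                // A CSS/JS/image request: remember which page referred to it.
                if ref := r.Header.Get("Referer"); ref != "" {
                    if page := refererPath(ref); page != "" {
                        pm.learn(page, r.URL.Path)
                    }
                }
                // Naive path mapping; fine for a sketch.
                http.ServeFile(w, r, "static"+r.URL.Path)
                return
            }
            // A page request: push whatever we have learned usually follows it.
            if pusher, ok := w.(http.Pusher); ok {
                for _, res := range pm.resourcesFor(r.URL.Path) {
                    _ = pusher.Push(res, nil)
                }
            }
            http.ServeFile(w, r, "pages"+r.URL.Path)
        })

        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }

The first request for a page pushes nothing; once the map has seen the follow-up requests, later visitors get the dependencies pushed before they ask.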

The fears of extra bytes are also somewhat exaggerated (they're valid, but exaggerated). The client, if it desires, can disable push. The client can also use flow control to throttle how much data can be transferred in the first push window (both Firefox and Chrome do this already). Lastly, the client can always decline and/or RST a pushed resource.

Some additional resources:
- http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...
- http://www.igvita.com/2013/06/12/innovating-with-http-2.0-se...


These ideas sound fine on paper, but for first-time visitors (on a warmed-up browser), server push didn't show any improvement in page load times in my benchmarks. My question is: why enable server push at all?

I am no longer with the company for which I performed the benchmarks, which is why I can't publish them. Perhaps there are other benchmarks out there that show server push performing better; if so, I would be happy to see them.


I don't know the details of your benchmark methodology or setup (server / client), so I'm not sure I can offer a meaningful response... short of: let's not confuse "my benchmark failed" with "the future doesn't work". Anything from a poorly implemented server (broken multiplexing, flow control, prioritization, etc.) to bugs in past versions of Chrome could account for it...

The Jetty guys had a couple of nice demos they showed off at various conferences. Here's one: http://www.youtube.com/watch?v=4Ai_rrhM8gA


I've been having issues compiling the last two versions of nginx. Is anyone else seeing this?

  make -f objs/Makefile
  make[1]: Entering directory `/home/newman314/src/nginx-1.5.9'
  cc -c -pipe  -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -march=native -Ofast -fomit-frame-pointer -fstack-protector -D_FORTIFY_SOURCE=2 -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \
		-o objs/src/core/ngx_string.o \
		src/core/ngx_string.c
  {standard input}: Assembler messages:
  {standard input}:1253: Error: no such instruction: `vfmadd312sd .LC5(%rip),%xmm1,%xmm0'
  make[1]: *** [objs/src/core/ngx_string.o] Error 1
Configure flags used: ./configure --with-http_ssl_module --with-http_spdy_module --with-http_gzip_static_module --with-cc-opt='-march=native -Ofast -fomit-frame-pointer -fstack-protector -D_FORTIFY_SOURCE=2'


I cloned from http://hg.nginx.org/nginx to try and reproduce with your configure arguments, but I actually error out elsewhere:

    src/http/ngx_http_request.c: In function ‘ngx_http_set_virtual_server’:
    src/http/ngx_http_request.c:1955:32: error: ‘cscf’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
    cc1: all warnings being treated as errors
which I'm not interested in following up on. My `ngx_string.o` did build successfully, though. This is with gcc 4.7.2 on Debian 7.

Are you using a new CPU with an old(er) gcc version? Try specifying -march explicitly instead of native.
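For example, reusing the configure line from the parent comment (the -march value here is just a guess for an older GCC; pick whatever matches the actual CPU, or drop the flag entirely as a test):

  ./configure --with-http_ssl_module --with-http_spdy_module --with-http_gzip_static_module \
    --with-cc-opt='-march=corei7 -Ofast -fomit-frame-pointer -fstack-protector -D_FORTIFY_SOURCE=2'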


D'oh. I recently moved this VM from a C2D to an i5 machine, and this is on Ubuntu 12.04, which is old enough not to know about the newer CPU. Still, it shouldn't crap out like that...


Just as a follow-up, I tried this with no -march flag and it compiled just fine.

This is with gcc 4.6.3.


This error seems to be caused by the -Werror compiler flag.


I haven't had issues, but I'm only using -O2. I also have to do "-march=native -mno-avx -mno-aes" because I'm on a Xen instance that doesn't allow the avx/aes instructions.


That's great. I have been waiting for this since Firefox and Chrome stated that SPDY will no longer be supported in their next versions.

I hope that nginx will maintain SPDY/2 and serve version 3 to compatible clients. My company just upgraded to Firefox 14...


Firefox 14? If that's correct, does your company understand it's exposed to numerous security vulnerabilities by being so far behind (especially ones that could remotely compromise your entire system)?

If they need to better manage Firefox updates, I recommend they look at switching to Firefox ESR (Extended Support Release). See https://www.mozilla.org/en-US/firefox/organizations/ for more information.


> since Firefox and Chrome stated that spdy will no longer be supported on next versions

SPDY 2, that is. SPDY 3 will still be supported.


Is this connected with Dropbox testing SPDY [1], or are they on the HN front page by chance? Just curious...

[1] https://news.ycombinator.com/item?id=7132202


Seems coincidental. It just highlights that we need HTTP/2, and the sooner the better... Inventing your own app-layer multiplexing on top of HTTP (via base64 chunks - ugh) is not an interesting problem to be solving in 2014.


Is QUIC [0] going to be supported anytime soon?

[0] http://en.wikipedia.org/wiki/QUIC


Version 3.1? OK.

So HTTP 1.1 lasted for 20 years without significant changes, but SPDY, which has been around for only a year or so, is already on version 3, with extra point releases for shits and giggles.

If you're going to create a protocol and want it to be used internet-wide, can you at least plan up front and keep things stable once it's released?

Glad I don't have to maintain and deploy any code related to this. Jesus christ.


A version number is just an arbitrary point in time with, in the case of SPDY, a certain feature set. Just because SPDY/2 doesn't support everything SPDY/3 supports doesn't mean it's unstable.

This is why I like date-based version numbering, such as Ubuntu 13.10 or C++11.


But date-based numbering provides no information other than the date the code was released. Semver, on the other hand, conveys much more.


SPDY version numbers will keep clocking up; HTTP/2.0 version bumps are likely to be far less frequent.


It's still a draft.

All those changes to, say, 802.11n, WebSockets or OAuth2? Drafts too.


How long did HTTP 0.9 last? How about 1.0? I guess you think that early HTTP was one of those non-stable, not-suitable-for-internet-wide-release protocols.




