

Nginx Patch: SPDY/3.1 protocol implementation - jdorfman
http://mailman.nginx.org/pipermail/nginx-devel/2014-January/004890.html

======
dmix
With help from some companies:

> Three companies that like Google's approach -- Automattic, MaxCDN, and
> CloudFlare -- are funding Nginx developers to update its SPDY support to
> version 3.1, CNET has learned. Under the deal, SPDY 3.1 should arrive in
> Nginx 1.5 in January, a source familiar with the partnership said.

[http://news.cnet.com/8301-1023_3-57616193-93/nginx-upgrade-funded-by-fans-of-googles-spdy-web-protocol/](http://news.cnet.com/8301-1023_3-57616193-93/nginx-upgrade-funded-by-fans-of-googles-spdy-web-protocol/)

~~~
yeukhon
Just out of curiosity, are they paying Nginx Inc or Nginx contributors?

~~~
elithrar
The developer who announced this works for Nginx the company, so I would
assume (emphasis on the assume!) that Nginx Inc. was paid to undertake this.

I don't think this is a problem given that the patch is being openly pushed
into the "public" source immediately. It'd also be extremely challenging to
pay "the community" for a patch like this (how do you divvy up the funds? per
SLOC?), although technically you could always directly pay a single
contributor or two.

Directly engaging Nginx Inc seems to be both the surest way to get something
delivered _and_ avoid a lot of PR risk, if you ask me.

~~~
cmelbye
Normally if a company wants to take that route, they would hire one of the
developers as an employee, and the employee's job would be to work on that
open source project.

------
AhtiK
Good initiative! Just a heads up that server push does not seem to be
implemented.

Server push is one of the spec items that got much-wanted clarifications in
SPDY/3, and it would be awesome to see nginx support it. Apache mod_spdy,
Jetty, and Netty already have it.

Server push makes it easy to achieve very low latency for full page loads by
pushing out all the files the server thinks the browser might need for a page,
without waiting for the requests.

Server push is not a magic bullet: it can consume more bandwidth than a
regular visit from a browser that already has the required resources (images,
CSS, JS, etc.) cached. But it is still better than inlining in the HTML,
because resources delivered via server push can be cached.

If you care more about page loading speed (especially for first-time visitors)
and less about actual bandwidth consumed, SPDY with server push can be great.
Hint: deciding whether a fresh visitor might benefit from "a little push"
could be done with a cookie-existence check or something similar.
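The cookie hint above can be sketched as a tiny decision function. This is a hypothetical illustration; the cookie name and the dict-shaped request cookies are assumptions for the sketch, not any nginx API:

```python
def should_push(request_cookies, marker="returning_visitor"):
    """Push assets only when the marker cookie is absent, i.e. the
    visitor is likely new and has a cold cache. (Illustrative only;
    the cookie name is an arbitrary choice.)"""
    return marker not in request_cookies


# A fresh visitor carries no marker cookie, so we push.
print(should_push({}))                          # True
# A returning visitor likely has the assets cached, so we skip the push.
print(should_push({"returning_visitor": "1"}))  # False
```

On the first response the server would also set the marker cookie, so the next visit skips the push and relies on the browser cache instead.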

~~~
hrjet
Last year, I spent about a week benchmarking SPDY. Server push didn't show
any improvement in page load time, even for first-time visitors. The whole
benchmark was scripted with Chrome running in a dummy X server. The
round-trip time was artificially constrained to about 100 ms.

Even if it were to show an improvement in some situations (different
round-trip times or network speeds), it would be in the range of microseconds
to a couple of milliseconds, with large variance.

Given that it degrades performance for second-time visitors, I would recommend
not enabling it without further benchmarking.

~~~
igrigorik
We have a lot to learn about how to use server push effectively. That said,
let's analyze some actual use cases:

a) Page A currently inlines half a dozen small assets on every page. These
inlined resources inflate every page and are delivered at the same priority as
the HTML. By contrast, a "dumb push" server delivers these same assets as
individual resources via push. Net benefit? Basically the same performance,
since inlining is a form of application-layer push. However, a smart server
can at least prioritize and multiplex the pushed bytes in a smarter way... Now,
let's make the server just a _tiny_ bit smarter: if it's the same TCP
connection and the server has already pushed the resource, don't push it on
the next request. Now we're also saving bytes... <insert own smarter strategy here>.
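The "don't push twice on the same connection" idea can be sketched with a per-connection set. This is a hypothetical sketch of the bookkeeping, not nginx or any real server's internals:

```python
class ConnectionPushState:
    """Track which resources were already pushed on one TCP
    connection, so repeat page views over the same connection
    don't receive (and waste bytes on) duplicate pushes."""

    def __init__(self):
        self._pushed = set()

    def maybe_push(self, resource):
        """Return True (and record the resource) only the first
        time it is pushed on this connection."""
        if resource in self._pushed:
            return False
        self._pushed.add(resource)
        return True


conn = ConnectionPushState()
print(conn.maybe_push("/app.css"))  # True: first push on this connection
print(conn.maybe_push("/app.css"))  # False: already pushed, save the bytes
```

The state lives and dies with the connection, so it never grows beyond the handful of assets a connection actually saw.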

b) Page B has two CSS files and one JS file in the critical rendering path.
Without push you either inline all three (ouch), or you round-trip to the
client, have it parse the HTML to discover these resources, and come back to
the server... With push, the server can avoid the extra RTT and push those
assets directly to the client -- this is a _huge_ performance win. How does
the server know? Well, either you made it smart... or you use some adaptive
strategy like looking at past referrers and building a map of "when a client
requests X, they also come back for Y and Z" -- this is already available in
Jetty.
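The adaptive strategy described here, learning from past traffic which sub-resources follow each page, might look like the sketch below. This is a rough illustration of the idea, not Jetty's actual implementation; the class and method names are invented:

```python
from collections import defaultdict


class ReferrerPushMap:
    """Learn, from observed requests, which sub-resources typically
    follow each page (keyed by the Referer header), then use that map
    to pick push candidates for future requests of the same page."""

    def __init__(self):
        self._followers = defaultdict(set)

    def observe(self, referer, resource):
        """Record that `resource` was requested with `referer` as its page."""
        self._followers[referer].add(resource)

    def push_candidates(self, page):
        """Resources worth pushing when `page` is requested again."""
        return sorted(self._followers[page])


pmap = ReferrerPushMap()
pmap.observe("/index.html", "/style.css")
pmap.observe("/index.html", "/app.js")
print(pmap.push_candidates("/index.html"))  # ['/app.js', '/style.css']
```

A production version would also expire stale entries and cap the candidate list, but the core is just this referrer-keyed map.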

The fears of extra bytes are also somewhat exaggerated (they're valid, but
exaggerated). The client, if it desires, can disable push. The client can
also use flow control to throttle how much data can be transferred in the
first push window (both Firefox and Chrome already do this). Lastly, the
client can always decline and/or RST a pushed resource.

Some additional resources:

- [http://chimera.labs.oreilly.com/books/1230000000545/ch12.html#HTTP2_PUSH](http://chimera.labs.oreilly.com/books/1230000000545/ch12.html#HTTP2_PUSH)
- [http://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/](http://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/)

~~~
hrjet
These ideas sound fine on paper, but for first-time visitors (on a warmed-up
browser), server push didn't show any improvement in page load times in my
benchmarks. My question is: why enable server push at all?

I am no longer with the company for which I performed the benchmarks, which is
why I can't publish them. Perhaps there are other benchmarks out there that
show server push performing better. If so, I would be happy to see them.

~~~
igrigorik
I don't know the details of your benchmark methodology or setup (server /
client), so I'm not sure I can offer a meaningful response... short of: let's
not confuse "my benchmark failed" with "the feature doesn't work". The cause
could be anything from a poorly implemented server (broken multiplexing, flow
control, prioritization, etc.) to bugs in past versions of Chrome...

The Jetty guys had a couple of nice demos they showed off at various
conferences. Here's one:
[http://www.youtube.com/watch?v=4Ai_rrhM8gA](http://www.youtube.com/watch?v=4Ai_rrhM8gA)

------
newman314
I've been having issues compiling the last 2 versions of nginx. Anyone else
seeing this?

    
    
      make -f objs/Makefile
      make[1]: Entering directory `/home/newman314/src/nginx-1.5.9'
      cc -c -pipe  -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -march=native -Ofast -fomit-frame-pointer -fstack-protector -D_FORTIFY_SOURCE=2 -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \
    		-o objs/src/core/ngx_string.o \
    		src/core/ngx_string.c
      {standard input}: Assembler messages:
      {standard input}:1253: Error: no such instruction: `vfmadd312sd .LC5(%rip),%xmm1,%xmm0'
      make[1]: *** [objs/src/core/ngx_string.o] Error 1
    

Configure flags used: `./configure --with-http_ssl_module --with-http_spdy_module --with-http_gzip_static_module --with-cc-opt='-march=native -Ofast -fomit-frame-pointer -fstack-protector -D_FORTIFY_SOURCE=2'`

~~~
mappu
I cloned from [http://hg.nginx.org/nginx](http://hg.nginx.org/nginx) to try
and reproduce with your configure arguments, but I actually error out
elsewhere:

    
    
        src/http/ngx_http_request.c: In function ‘ngx_http_set_virtual_server’:
        src/http/ngx_http_request.c:1955:32: error: ‘cscf’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
        cc1: all warnings being treated as errors
    

which I'm not interested in following up. My `ngx_string.o` did build
successfully, though. This is with gcc 4.7.2 on Debian 7.

Are you using a new CPU with an old(er) gcc version? Try specifying -march
explicitly instead of native.

~~~
newman314
Doh. I recently moved this VM from a C2D to an i5 machine, and this is on
Ubuntu 12.04, which is old enough not to know about this newer CPU. Still, it
should not crap out like that...

~~~
newman314
Just as a follow up, I tried this with no -march flag and it compiled just
fine.

This is with gcc 4.6.3.

------
xfalcox
That's great. I have been waiting for this since Firefox and Chrome stated
that SPDY/2 will no longer be supported in their next versions.

I hope that nginx will maintain SPDY/2 support and serve version 3 to
compatible clients. My company just upgraded to Firefox 14...

~~~
reedloden
Firefox 14? If that's correct, does your company understand that it's exposed
to numerous security vulnerabilities by being so far behind (especially ones
that could remotely compromise your entire system)?

If they need to better manage Firefox updates, I recommend they look at
switching to Firefox ESR (Extended Support Release). See
[https://www.mozilla.org/en-US/firefox/organizations/](https://www.mozilla.org/en-US/firefox/organizations/)
for more information.

------
ecesena
Is this connected with Dropbox testing SPDY [1], or they're on HN front page
by chance? Just curious...

[1]
[https://news.ycombinator.com/item?id=7132202](https://news.ycombinator.com/item?id=7132202)

~~~
igrigorik
Seems coincidental. Just highlights that we need HTTP/2, and the sooner the
better... Inventing your own app-layer multiplexing on top of HTTP (via base64
chunks - ugh), is not an interesting problem to be solving in 2014.

------
dylz
is QUIC[0] going to be supported anytime soon?

[0] [http://en.wikipedia.org/wiki/QUIC](http://en.wikipedia.org/wiki/QUIC)

------
josteink
Version 3.1? OK.

So HTTP/1.1 lasted for 20 years without significant changes, but SPDY, which
has been around for only a year or so, is already on version 3, with extra
point releases for shits and giggles.

If you're going to create a protocol and want it to be used internet-wide, can
you at least plan up front and keep things stable once released?

Glad I don't have to maintain and deploy any code related to this. Jesus
Christ.

~~~
stingraycharles
A version number is just an arbitrary point in time; in SPDY's case, one
associated with a certain feature set. Just because SPDY/2 doesn't support
everything SPDY/3 supports doesn't mean it's unstable.

This is why I like date-based version numbering, such as Ubuntu 13.10 or
C++11.

~~~
StavrosK
But date-based numbering provides no information other than the date the code
was released. Semver, on the other hand, conveys much more.

