

Nginx port of mod_pagespeed - dpaluy
https://github.com/pagespeed/ngx_pagespeed

======
igrigorik
Just a heads up.. ngx_pagespeed is a work in progress. As in, the skeleton is
there and there is a filter or two running as a proof of concept, but there is
still a lot of work to be done. Contributions would be most welcome.. :)

~~~
taligent
What help can we provide?

Apart from the whole "making it work" part, is there something in particular
that is difficult?

~~~
igrigorik
Mostly just hooking up all the APIs - aka, "making it work" - and making it
work in the context of nginx. As a reference point, the Apache implementation
took a while to get right just because understanding all of the gotchas of
Apache's worker model and surrounding APIs took time - half a dozen false
starts, etc. Now it's solid.

If someone has a good understanding of nginx internals, then any input,
guidance, etc. would be awesome, as it would short-circuit a lot of that extra
work. We have a design doc in progress; I'll work on making it public and will
share it in the project readme. In the meantime, if you have any suggestions,
open an issue on the repo and let us know!

~~~
zhuzhaoyuan
Cool!

We have a good understanding of nginx internals - the Tengine web server we
maintain is an example (<http://tengine.taobao.org/>,
<http://github.com/taobao/tengine>). We'd also like to ask how we can
contribute, and we look forward to your design doc :)

------
ashray
This looks pretty cool. For what it's worth - I gave Google's PageSpeed
Service a shot a month or so ago and ran from it like the wind. It's just that
web pages are extremely complex beasts and it's really hard to find a
one-size-fits-all solution.

I want my scripts cached a certain way for a certain purpose, and my
deployment systems take care of all that. Given that situation, the only thing
PageSpeed was adding for me was serving pages through Google's CDN. However,
the upstream connection from Google to my server was pretty poor, and in most
cases it ended up ADDING latency! :(

So for now, I'll give automated 'page-speed' tools a pass and simply try to
do each step that YSlow suggests manually.

~~~
alanctgardner2
I'm not sure if I misunderstand your CDN remark, but your web server shouldn't
be connecting to the Google CDN at any point. Your server provides the list of
resources to the client browser, which is responsible for connecting to remote
hosts to fetch those resources. If your server is near you, you will almost
always see better latency for locally hosted content. But for users in Asia,
being able to download something like jquery from a more local Google server
helps significantly. Plus, Google CDN content can be cached, so users don't
have to download it again for each site they visit.
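As a concrete illustration of the shared-cache point, this is roughly what
pulling jQuery from Google's Hosted Libraries looks like (the version number
here is just an example):

```html
<!-- Fetch jQuery from Google's public CDN instead of hosting it locally.
     If the browser has already fetched this exact URL for another site,
     it can serve the file straight from its cache. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
```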

I guess the point is, is your site for your personal use, or the world at
large?

~~~
ashray
Google has been testing a 'hosted' version of mod_pagespeed over the last few
months. It's currently invite-only, but what you do is add a CNAME record to
your domain and point it to Google's caching servers.
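For illustration, the DNS side of that setup would look roughly like this in
a BIND-style zone file - note the CNAME target below is a placeholder, since
the real hostname is assigned by Google when you sign up:

```dns
; hypothetical zone-file entry; the CNAME target is a placeholder,
; not the actual hostname PageSpeed Service assigns
www   IN   CNAME   your-assigned-target.example.net.
```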

So then what happens is that Google does a fetch from your server, and
optimizes the response using mod_pagespeed techniques (including throwing your
static assets onto their CDN and other funky stuff..).

Basically, all your CSS/JS/page-content/images get fetched through Google's
CDN, but this functionality relies on one critical part of the framework
functioning really well - Google's connection to your origin server.

This is where it's at: <https://developers.google.com/speed/pagespeed/service>

Also, yeah my site is for the world at large and that's why this is an issue.
I can definitely see how it's useful for personal websites and small projects.

~~~
alanctgardner2
OK, that makes more sense. Actually, reading the site, I don't see why anyone
would opt for that service over something like CloudFront plus some manual
optimization, unless it was a very small project.

~~~
igrigorik
If your upstream bandwidth to Google is poor, chances are your upstream
bandwidth is poor for _most other users_ too. In other words, I would consider
this an issue in its own right, and one definitely worth investigating.

CDNs are popular for static content, but there are a number of very good
reasons to proxy dynamic content through your CDN as well. The most obvious
advantage is that you can terminate the TCP connection much closer to the
user, and then leverage the optimized backhaul network to fetch from
origin. This alone can be a huge win, especially if you have SSL in the mix,
which requires 2-3 RTTs before it can even send the request.
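To make the RTT arithmetic concrete, here's a back-of-envelope model - all
round-trip times are made-up illustrative numbers, and it assumes 1 RTT for
the TCP handshake plus 2 for the TLS handshake:

```python
# Hypothetical round-trip times in milliseconds (illustrative, not measured).
RTT_CLIENT_ORIGIN = 150  # client far from the origin server
RTT_CLIENT_EDGE = 20     # client near a CDN edge node
RTT_EDGE_ORIGIN = 130    # edge to origin over the CDN's backhaul

def fetch_time(handshake_rtt, request_rtt, tls_rtts=2):
    # 1 RTT for TCP + tls_rtts for TLS on the handshake path,
    # then 1 round trip for the HTTP request/response itself.
    return (1 + tls_rtts) * handshake_rtt + request_rtt

# TLS terminated at the origin: every round trip crosses the long path.
direct = fetch_time(RTT_CLIENT_ORIGIN, RTT_CLIENT_ORIGIN)

# TLS terminated at the edge: handshakes stay on the short client-edge
# path; the edge forwards the request to origin over a warm connection.
via_edge = fetch_time(RTT_CLIENT_EDGE, RTT_CLIENT_EDGE + RTT_EDGE_ORIGIN)

print(direct)    # 600 ms
print(via_edge)  # 210 ms
```

Even though the bytes still travel the same distance for the request itself,
moving the handshakes onto the short client-edge path cuts total latency
dramatically in this sketch.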

------
eberfreitas
I was just wondering if nginx would have something similar to the Apache
module. Glad to see this here. Unfortunately I don't have the skills to help,
but if there is another way to support this, please let me know!

------
mogop
WoW! That escalated quickly

