The following cacheable resources have a short freshness lifetime. Specify an expiration at least one week in the future for the following resources:
http://www.google-analytics.com/ga.js (1 day)
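The fix the tool is asking for is a far-future Expires/Cache-Control header on your own static assets (you can't change Google's headers on ga.js). A minimal sketch, assuming nginx is serving the static files:

```nginx
# Give static assets a 30-day freshness lifetime, comfortably past
# the one-week minimum Page Speed asks for. Paths and extensions
# are illustrative; adjust for your own site.
location ~* \.(js|css|png|jpg|gif|ico)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```

If you do this, remember to version your filenames (e.g. app.v2.js) so clients pick up changes before the cache expires.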
I actually don't mind HN taking a while to load. It's not as if we have to fight for every new user by aggressively optimizing page load time. Besides, the recent changes to the HN backend have already improved the average speed a lot.
So I've written my own script, which minified seven JS files (84.8 KB total) down to one 26.7 KB file. Add gzip on top and it gets very small.
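To see how much gzip adds on top of minification, here's a small sketch. The bundle here is a hypothetical stand-in (repeated minified-style JS), not the commenter's actual files, but the effect is the same: minified JS still gzips very well.

```python
import gzip

# Hypothetical minified bundle standing in for a concatenated JS file;
# real minified code is similarly repetitive and compresses well.
minified_js = b"function f(a,b){return a+b};var x=f(1,2);" * 500

compressed = gzip.compress(minified_js, compresslevel=9)
print(len(minified_js), len(compressed))

# gzip typically shaves off a large further fraction of minified JS
assert len(compressed) < len(minified_js)
```

In practice you'd let the web server do this (gzip_static in nginx, mod_deflate in Apache) rather than compressing by hand.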
"The page Google got an overall Page Speed Score of 100 (out of 100)." ... so they eat their own dog food, or was the homepage the pinnacle of excellence for the building this tool?
I wonder what would happen to my 15% conversion rates if my shopping cart wasn't so crappy... I'm superstitious about switching though because of a fear that it might mess up our organic search traffic.
That's because patio11 is telling you how to keep your site alive under load, while that page is telling you how to decrease your page load times. They're contrary goals in this case. Keep-Alive makes things load faster, but it puts a cap on how many clients can connect to your server before it curls up and dies. It would be ideal if Apache would let you set a high-water limit for Keep-Alive connections after which it turns the feature off, but I don't know any way to do that. You can set how long they're kept alive, and you can set how many requests are allowed per Keep-Alive session, but not how many sessions are kept alive.
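For reference, here are the two knobs Apache does expose. This is a hedged sketch of the standard httpd.conf directives; as the comment says, there is no directive that caps the number of concurrent Keep-Alive sessions.

```apache
# What Apache lets you tune for persistent connections:
KeepAlive On
KeepAliveTimeout 2          # seconds an idle connection is held open;
                            # keep this low under heavy load
MaxKeepAliveRequests 100    # requests allowed per persistent connection
# There is no "MaxKeepAliveSessions"-style high-water limit.
```

Lowering KeepAliveTimeout is the usual compromise: clients still reuse the connection for a page's assets, but idle connections are reaped quickly.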
This is only true for servers that use a separate thread or process for each request... It doesn't apply to event-driven servers (nginx, etc.).
I'd even say that keep-alive is always your friend, and the longer you can keep a connection open the better... Of course there are always OS-level limits (open file descriptors, etc.), so you should keep an LRU queue of idle keep-alive connections to make sure you won't run out of resources...
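The LRU idea above can be sketched in a few lines. This is an illustrative toy, not a real server: connections are placeholder strings, and the pool name and limit are made up for the example.

```python
from collections import OrderedDict

class IdleConnectionPool:
    """Cap idle keep-alive connections; evict the least recently used."""

    def __init__(self, max_idle):
        self.max_idle = max_idle
        self._idle = OrderedDict()  # conn_id -> connection, oldest first

    def add(self, conn_id, conn):
        """Register an idle connection; return the evicted id, if any."""
        self._idle[conn_id] = conn
        self._idle.move_to_end(conn_id)  # mark as most recently used
        if len(self._idle) > self.max_idle:
            evicted_id, _evicted = self._idle.popitem(last=False)
            # a real server would close the evicted socket here
            return evicted_id
        return None

pool = IdleConnectionPool(max_idle=2)
pool.add("a", "conn-a")
pool.add("b", "conn-b")
print(pool.add("c", "conn-c"))  # evicts "a", the least recently used
```

Event-driven servers do essentially this internally: the file-descriptor limit becomes the cap, and the oldest idle connection is the first to go.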
The best way to handle keepalives is to proxy everything through nginx, which can handle a huge number of connections with very little memory. Turn keepalives on in nginx, turn them off on your app server.
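A minimal sketch of that setup, assuming the app server listens on 127.0.0.1:8000 (the upstream name and port are illustrative):

```nginx
upstream app {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    keepalive_timeout 65;    # keep client-side connections alive

    location / {
        # By default, proxied requests to the upstream are short-lived
        # (Connection: close), so the app server sees no keepalives.
        proxy_pass http://app;
    }
}
```

nginx's event-driven model makes each idle client connection cheap, while the thread- or process-per-request app server only ever handles one request per connection.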
It's not mine but gautaml's, and it uses the standalone Chrome extension, which performs the same test. But when you use the web service it shows different results.
My bad. I wasn't paying attention to the usernames.
Are you sure they perform the same tests though? Why would Google build a Page Speed Chrome extension if the exact same algorithms were already built in? I suspect they are working towards the same goal but are approaching it differently.
Minifying the following JavaScript resources could reduce their size by 1.1KiB (0% reduction). Minifying http://ajax.googleapis.com/.../jquery-ui.min.js could save 641B (0% reduction). Minifying http://ajax.googleapis.com/.../jquery.min.js could save 516B (0% reduction).