Here's a slightly modified MPS config that gets you to 100 without any manual work:
mod_pagespeed can automate all those complicated steps.
(Disclaimer: I work on mod_pagespeed and ngx_pagespeed.)
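For context, the Apache side of such a setup looks roughly like the sketch below. The filter selection is illustrative of the kinds of optimizations being discussed, not the actual config mentioned above:

```apache
# pagespeed.conf (illustrative sketch)
ModPagespeed on
# combine and inline small CSS/JS so the page needs fewer blocking requests
ModPagespeedEnableFilters combine_css,inline_css
ModPagespeedEnableFilters combine_javascript,inline_javascript
# sprite background images and inline small ones as data: URIs
ModPagespeedEnableFilters sprite_images,inline_images
```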
In the interest of sharing...this is what has worked for us:
1. We don't have any references to external CSS and JS files. This means there is almost no reason for the browser to stop painting the page as soon as it starts receiving the document, or even fractions of it, so the user never gets impatient because their device seems to be doing nothing. I know this sounds like a huge management hassle in terms of changing things, but we got over that by using a server-side helper library that spits out the HTML of all the components based on certain input parameters. That way the HTML of the user interface is even more modular and centralized than the CSS etc. Besides, it also makes building new pages a craft project of sticking different user-interface blocks together.
2. All user-interface images for buttons, icons, and so on are in image sprites. So an average page is just 2-3 requests. Also, sprites, once cached, load beautifully. In fact, it also makes for a much more pleasing page load, as opposed to some parts of the page coming in and others a little bit later.
3. All repeating background images are made 1px thick, saved as optimized JPEGs, and then made part of the CSS with base64 encoding.
4. Of course, we also cache very aggressively, and reduce and combine DB calls, etc.
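As a sketch of point 2, sprite positioning is just a matter of emitting background offsets into the shared sheet. The icon names, offsets, and sheet path below are hypothetical:

```python
# Sketch: generate CSS rules for icons packed into one sprite sheet.
# The layout here is made up for illustration.
ICONS = {"home": (0, 0), "search": (32, 0), "cart": (64, 0)}  # x, y offsets in px

def sprite_rule(name: str, x: int, y: int, size: int = 32) -> str:
    """Emit one CSS rule pointing at a region of the shared sprite sheet."""
    return (f".icon-{name} {{ background: url(/img/sprite.png) no-repeat "
            f"-{x}px -{y}px; width: {size}px; height: {size}px; }}")

for name, (x, y) in ICONS.items():
    print(sprite_rule(name, x, y))
```

Every icon then costs zero extra requests once the single sheet is cached.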
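And a sketch of point 3: turning an image strip into an inline CSS declaration takes only a few lines. The bytes used in the example are a placeholder, not a real 1px JPEG strip:

```python
import base64

def css_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Return a CSS background declaration embedding the image inline."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"background-image: url(data:{mime};base64,{b64});"

# Placeholder bytes standing in for a real optimized 1px-wide JPEG strip:
print(css_data_uri(b"\xff\xd8\xff\xe0"))
```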
Having done all this, we find that our pages load... well, more or less instantly, at least with respect to what matters to an average human. But this instant loading only holds on a network where the initial connection time is not relevant. On 2G and even 3G, all the optimizations in the world can't save you from the fact that the device takes a LONG time just to connect, as Ilya Grigorik has mentioned in his presentation.
This obviously doesn't work if you're serving up multiple megabytes of CSS+JS, but I suspect they're an order of magnitude or two short of that.
Now, you are right, this means that from page to page, those 3-4 bytes might be the same and re-downloaded. However, we are focused on mobile phones, and some of our user base is on crappy mobile browsers (Nokia phones that are still popular here) with bad caching. So the trade-off is between 3-4 bytes of repeated CSS versus a whole new request. And 3-4 bytes of CSS is not even noticeable. A request most definitely is.
In that instance, I can see an advantage to inlining it, but it sure doesn't make for attractive code. Priorities, though, right?
In my (admittedly somewhat limited) agency experience, performance is rarely an afterthought to developers. Rather, it can be difficult to convince stakeholders that it's worth the extra time to ensure that each of these avenues is thoroughly explored. They'd rather spend their retainer implementing new feature X than making what they've already purchased perform faster.
We do what we can in terms of adding additional time to estimates to account for these things, but it's always a balance. At the end of the day, someone still needs to pay for my time.
Any ideas on the shortfalls of the ranking, areas where it's not accurate, etc?
HTTP/2, when it comes (2014?), will negate the need to inline/concatenate CSS or cook up image sprites, because it solves the problem at the proper level: the network layer.
If you load large amounts of render blocking JS/CSS in the <head>, the browser must wait for it to finish downloading before it can render content in <body> to the screen.
To deliver a fast experience, keep the amount of JS/CSS needed to render the initial view to a minimum. Ideally, that'd be no JS, and just the CSS needed to style the content in the initial view. Then, once the initial view has rendered, load the JS and additional CSS needed for the rest of the app.
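A minimal sketch of that pattern, with hypothetical paths: inline the critical CSS, then pull in the rest once the initial view has rendered:

```html
<head>
  <!-- critical CSS inlined so the first paint needs no extra request -->
  <style>/* styles for above-the-fold content only */</style>
</head>
<body>
  ...
  <script>
    // after the initial render, fetch the stylesheet for the rest of the app
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/rest-of-app.css';  // hypothetical path
      document.head.appendChild(link);
    });
  </script>
</body>
```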
Unfortunately the question then becomes, how long until all the major browsers support it? And once that happens, how long until the crowd not using the updated browsers is small enough that we can safely ignore them?
I suppose the auto-updating browser trend should help a lot with the latter issue.
So (pardon my ignorance, I googled a bit but didn't find very conclusive results), why isn't everyone using HTTP2/SPDY now? Because of IE < 11 and Safari?
Edit: Or maybe because of the SSL requirement?
Edited again: For clarity.