The Right Stuff: Breaking the PageSpeed Barrier with Bootstrap (appneta.com)
118 points by trjordan 1489 days ago | 33 comments



While they do point out Ilya's comment in an update, it should be more central:

    Here's a slightly modified MPS config that gets you to
    100 without any manual work:

      ModPagespeed on
      ModPagespeedRewriteLevel CoreFilters
      ModPagespeedEnableFilters prioritize_critical_css
      ModPagespeedEnableFilters defer_javascript
      ModPagespeedEnableFilters sprite_images
      ModPagespeedEnableFilters convert_png_to_jpeg
      ModPagespeedEnableFilters convert_jpeg_to_webp
      ModPagespeedEnableFilters collapse_whitespace
      ModPagespeedEnableFilters remove_comments
-- https://github.com/danriti/bootstrap-pagespeed/issues/4

mod_pagespeed can automate all those complicated steps.
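
If you're on nginx instead, the equivalent ngx_pagespeed config should look roughly like this (untested sketch; FileCachePath is required, and the path is just an example):

  pagespeed on;
  pagespeed FileCachePath /var/ngx_pagespeed_cache;
  pagespeed RewriteLevel CoreFilters;
  pagespeed EnableFilters prioritize_critical_css;
  pagespeed EnableFilters defer_javascript;
  pagespeed EnableFilters sprite_images;
  pagespeed EnableFilters convert_png_to_jpeg;
  pagespeed EnableFilters convert_jpeg_to_webp;
  pagespeed EnableFilters collapse_whitespace;
  pagespeed EnableFilters remove_comments;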

(Disclaimer: I work on mod_pagespeed and ngx_pagespeed.)


I've had huuuuge problems with defer_javascript in the past when using certain libraries. If it works it's an amazing help, but I'd bug-test carefully after implementing it if you use that filter!


The defer_javascript filter is amazing on some pages and completely breaks others. It's frustrating because it has so much potential, but it's too dangerous for us to turn on by default. If it looks good on your site in testing, though, it's probably fine.

The basic problem is that JavaScript expects to run at a certain time and at a certain point in the page, so we have to do all sorts of crazy things in the background to make that appear to be the case when it isn't. This fixes many pages (document.write works), but there are enough ways for JavaScript to be introspective that we can't catch everything.
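
For instance, here's a hypothetical sketch of the sort of introspective pattern that breaks:

  <script>
    // This widget finds its own <script> tag by assuming it is the last
    // one in the document at the moment it executes...
    var scripts = document.getElementsByTagName('script');
    var me = scripts[scripts.length - 1];
    // ...and injects itself right there. Defer the execution, and "me"
    // points at some other script entirely.
    me.insertAdjacentHTML('afterend', '<div>widget renders here</div>');
  </script>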


Does mod_pagespeed work with Apache 2.4 now?


Yup. Both the beta and stable versions work with 2.4.


Thank you, I can't wait to try this...


I updated the article to move the inline update to the `2. Enable mod_pagespeed` section as requested =)


The final table no longer makes sense now. Might it make more sense to extract your update into a new article and link to it from the top?


We have worked pretty hard to push the speed at which our mobile webpages load.

In the interest of sharing...this is what has worked for us:

1. We don't have any references to external CSS and JS files. This means there is almost no reason for the browser to stop painting the webpage as soon as it starts receiving the document, or even fractions of it, so the user never gets impatient because it seems their device is doing nothing. I know this seems like a huge management hassle in terms of changing things, but we got over that by using a server-side helper library that spits out the HTML of all the components based on certain input parameters. That way the HTML of the user interface is even more modular and centralized than the CSS etc. Besides, it also makes building new pages a craft project of sticking different user-interface blocks together.

2. All user-interface images for buttons, icons and the like are in image sprites, so an average page is just 2-3 requests. Also, once cached, sprites load beautifully. In fact, it makes for a much more pleasing page load, as opposed to some parts of the page coming in and others a little bit later.

3. All repeating background images are made 1px thick, saved as optimized JPEGs, and then made part of the CSS with base64 encoding (see the sketch after this list).

4. Of course, we cache very aggressively, and we reduce and combine DB calls, etc.
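
For illustration, a hypothetical CSS sketch of points 2 and 3; the class names and sprite path are made up, and the base64 payload is elided:

  /* Sprite: one request serves every icon; each rule crops a region. */
  .icon        { background-image: url(/img/ui-sprite.png); }
  .icon-search { background-position: 0 0;     width: 16px; height: 16px; }
  .icon-close  { background-position: -16px 0; width: 16px; height: 16px; }

  /* 1px-wide repeating background, base64-inlined so it costs no request. */
  .toolbar { background: url(data:image/jpeg;base64,...) repeat-x; }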

Having done all this, we find that our pages load...well, more or less instantly, at least with respect to what matters to an average human. But this instant loading only holds on a network where the initial connection time is not relevant. On 2G and even 3G, all the optimizations in the world can't save you from the fact that the device takes a LONG time just to connect, as Ilya Grigorik has mentioned in his presentation.


Wait, you inline your JS and CSS? Doesn't that mean that the user has to re-download it with every page? I get the desire to have as few connections going as possible, but just be sensible about caching. First page load downloads the CSS + JS, subsequent page loads just serve it from cache.
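
A minimal sketch of "sensible caching", assuming Apache with mod_headers enabled and versioned asset filenames (which make a year-long lifetime safe):

  # Hypothetical: assets are named like app.v42.css, so a new version
  # gets a new URL and old copies can be cached essentially forever.
  <FilesMatch "\.(css|js)$">
    Header set Cache-Control "public, max-age=31536000"
  </FilesMatch>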


If the majority of your visitors only ever load one page then only the cold-cache scenario matters, and there's plenty of sites where that's the case. Even if your visitors do load multiple pages, if the cold cache load time is fast enough, does it matter that the subsequent pages aren't even faster?

This obviously doesn't work if you're serving up multiple megabytes of CSS+JS, but I suspect they're an order of magnitude or two short of that.


We don't exactly inline the CSS. (Actually, to be perfectly honest, sometimes we do, when that seems like the more efficient thing to do and there is no benefit of modularity to be gained from the CSS.) However, in general we actually construct the HEAD and CSS through the server library as well, using some logic and parameters.

Now, you are right, this means that from page to page, those 3-4 bytes might be the same and re-downloaded. However, we are focused on mobile phones, and some of our user base is on crappy mobile browsers (Nokia phones that are still popular here) with a bad caching system. So the trade-off is between 3-4 bytes of repeated CSS versus a whole new request. And 3-4 bytes of CSS is not even noticeable. A request most definitely is.


How much CSS can you fit into 4 bytes?


  a {}


Only if it's not gzipped :P That would bulk it up to more like 25 bytes.


Maybe they meant kbytes?


Lol. Sorry KB! My bad!


I agree with this. By inlining all the JS and CSS, you lose the entire benefit of the browser cache, making each HTTP response for a real page a lot larger.


Unless you're using snippets or partial views where you can load only what's needed (in terms of JS and CSS) on a given page.

In that instance, I can see an advantage for inlining it, but it sure doesn't make for attractive code. Priorities though - right?


Even then, unless your partial views are barely ever repeated in different pages it still isn't worth it. And if they're barely ever repeated then there isn't much point in them being partial views.


Love all of this. I'm always looking for well-tuned sites like yours to showcase. Can you share a URL?


I just saw Google recommending inlining small fragments of CSS/JS in the newer PageSpeed guidelines. This has definitely piqued my interest, and I wondered whether people were doing it, mainly because I'm never sure which side of the speed vs. proper semantics debate I want to be on.


Semantically there is no difference between adding network requests for assets or inlining them, whether through data URIs or inline script/style tags. It's a fast vs slow debate. :)


You can have both speed and proper semantics (I assume you mean clean separation of concerns) by having your build script do the inlining.
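
For example, a hypothetical Node sketch of such a build step (file names invented; a real build tool would parse the HTML instead of using a regex):

  // inline-css.js: replace each stylesheet <link> with an inline <style>.
  const fs = require('fs');

  let html = fs.readFileSync('src/index.html', 'utf8');
  html = html.replace(
    /<link[^>]*href="\/([^"]+\.css)"[^>]*>/g,
    (match, href) => '<style>' + fs.readFileSync('src/' + href, 'utf8') + '</style>'
  );
  fs.writeFileSync('index.built.html', html);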


"After Ilya’s talk ended, I started to think more about why performance always seems to be an afterthought with developers."

In my (admittedly somewhat limited) agency experience, performance is rarely an afterthought to developers. Rather, it can be difficult to convince stakeholders that it's worth the extra time to ensure that each of these avenues is thoroughly explored. They'd rather spend their retainer implementing new feature X than making what they've already purchased perform faster.

We do what we can in terms of adding additional time to estimates to account for these things, but it's always a balance. At the end of the day, someone still needs to pay for my time.


The blog post loads perceptibly fast, but when I ran it through the PageSpeed tool, it scored very poorly:

http://i.imgur.com/IBpar1y.png

Any ideas on the shortcomings of the ranking, areas where it's not accurate, etc.?


Food for thought: all this wisdom is really an attempt to prioritize how bits hit your browser and is very much due to legacy issues. These are hacks, albeit clever and measurably better.

HTTP2, when it comes (2014?), will negate the need to inline/concatenate CSS or cook up image sprites, because it solves the problem at the proper level: the network layer.


SPDY addresses some shortcomings in HTTP, but you will always have to be mindful of the amount of render-blocking JavaScript and CSS loaded on your page, regardless of network protocol.

If you load large amounts of render-blocking JS/CSS in the <head>, the browser must wait for it to finish downloading before it can render content in <body> to the screen.

To deliver a fast experience, keep the amount of JS/CSS needed to render the initial view to a minimum. Ideally, that'd be no JS, and just the CSS needed to style the content in the initial view. Then, once the initial view has rendered, load the JS and additional CSS needed for the rest of the app.
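
A minimal sketch of that pattern (the asset paths are hypothetical):

  <head>
    <!-- Inline only the CSS needed to paint the initial view. -->
    <style>/* critical above-the-fold rules */</style>
  </head>
  <body>
    <p>Content renders without waiting on any blocking request.</p>
    <script>
      // After first render, pull in the rest of the CSS and the JS.
      window.addEventListener('load', function () {
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = '/css/rest-of-app.css';
        document.head.appendChild(link);
        var script = document.createElement('script');
        script.src = '/js/app.js';
        document.body.appendChild(script);
      });
    </script>
  </body>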


Looking forward to that!

Unfortunately the question then becomes, how long until all the major browsers support it? And once that happens, how long until the crowd not using the updated browsers is small enough that we can safely ignore them?

I suppose the auto-updating browser trend should help a lot with the latter issue.


SPDY (and thus HTTP2) is implemented and shipping in Chrome, Firefox, Internet Explorer 11 (except on Win7), and Opera. It's all ready: mod_spdy and the nginx SPDY module are solid and available.
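
Assuming nginx is built with the SPDY module, enabling it is roughly a one-line change (certificate paths here are placeholders):

  server {
    listen 443 ssl spdy;  # SPDY rides on TLS, hence the SSL requirement
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
  }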


Wow, okay. I knew some of the bigger sites supported SPDY, but I didn't realize that the browsers outside of Chrome were already supporting SPDY (and HTTP2) as well.

So (pardon my ignorance, I googled a bit but didn't find very conclusive results), why isn't everyone using HTTP2/SPDY now? Because of IE < 11 and Safari?

Edit: Or maybe because of the SSL requirement? Edited again: For clarity.


Probably because the spec is not finalized yet. I bet once SPDY is essentially renamed to HTTP2, it will see wide adoption.


Don't even video games use image sprites for (non-network) performance benefits?



