

The Right Stuff: Breaking the PageSpeed Barrier with Bootstrap - trjordan
http://www.appneta.com/blog/bootstrap-pagespeed/

======
cbr
While they do point out Ilya's comment in an update, it should be more
central:

        Here's a slightly modified MPS config that gets you to
        100 without any manual work:
    
          ModPagespeed on
          ModPagespeedRewriteLevel CoreFilters
          ModPagespeedEnableFilters prioritize_critical_css
          ModPagespeedEnableFilters defer_javascript
          ModPagespeedEnableFilters sprite_images
          ModPagespeedEnableFilters convert_png_to_jpeg
          ModPagespeedEnableFilters convert_jpeg_to_webp
          ModPagespeedEnableFilters collapse_whitespace
          ModPagespeedEnableFilters remove_comments

\-- [https://github.com/danriti/bootstrap-pagespeed/issues/4](https://github.com/danriti/bootstrap-pagespeed/issues/4)

mod_pagespeed can automate all those complicated steps.

(Disclaimer: I work on mod_pagespeed and ngx_pagespeed.)

~~~
druiid
I've had huuuuge problems with defer_javascript when used with certain
libraries. If it works it's an amazing help, but bug-test carefully if you
enable that filter!

~~~
cbr
The defer_javascript filter is amazing on some pages and completely breaks
others. It's frustrating because it has so much potential, but it's too
dangerous for us to turn on by default. If it looks good on your site in
testing, though, it's probably fine.

The basic problem is that javascript expects to run at a certain time and at a
certain point in the page, so we have to do all sorts of crazy things in the
background to make that appear to be the case when it isn't. This fixes many
pages (document.write works, for example), but there are enough ways for
javascript to be introspective that we can't catch everything.
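
For example, here's the kind of introspective pattern that breaks (a made-up
sketch, not from any particular library):

      // An external script tries to find its own <script> tag so it
      // can compute a base path for loading more resources. This only
      // works if it runs synchronously, while it is still the last
      // script element in the document.
      var scripts = document.getElementsByTagName('script');
      var me = scripts[scripts.length - 1];
      var base = me.src.replace(/[^\/]*$/, '');
      // document.write itself is emulated and keeps working, but with
      // defer_javascript the whole page has already been parsed, so
      // 'me' is the wrong element and 'base' is the wrong path.
      document.write('<script src="' + base + 'plugin.js"><\/script>');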

------
zoj_bad
We have worked pretty hard to push the speed at which our mobile webpages
load.

In the interest of sharing...this is what has worked for us:

1\. We don't have any references to external CSS and JS files. This means
there is almost no reason for the browser to stop painting the page as soon as
it starts receiving the document, or even fractions of it, so the user never
gets impatient because it seems like their device is doing nothing. I know
this sounds like a huge management hassle in terms of changing things, but we
got over that by using a server-side helper library that spits out the HTML of
all the components based on certain input parameters (see the first sketch
after this list). That way the HTML of the user interface is even more modular
and centralized than the CSS, and making new pages becomes a craft project of
sticking different user-interface blocks together.

2\. All user-interface images for buttons and icons are in image sprites, so
an average page is just 2-3 requests, and sprites, once cached, load
beautifully. It also makes for a much more pleasing page load than having some
parts of the page come in and others a little later.

3\. All repeating background images are made 1px thick, saved as optimized
JPEGs, and embedded in the CSS with base64 encoding (see the CSS sketch after
this list).

4\. Of course, we cache very aggressively, and we reduce and combine DB calls.
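
To illustrate point 1, here's a hypothetical sketch of what such a server-side
helper library looks like (the names are made up, not our actual code):

      // Each UI component is a function that returns its HTML, so the
      // markup stays centralized and pages are assembled from blocks.
      function button(label, href) {
        return '<a class="btn" href="' + href + '">' + label + '</a>';
      }

      function searchBox(placeholder) {
        return '<form action="/search">' +
               '<input name="q" placeholder="' + placeholder + '">' +
               '<button type="submit">Go</button></form>';
      }

      // A page is just blocks stuck together, with no external CSS or
      // JS references, so the browser can paint as the HTML streams in.
      var page = '<body>' + searchBox('Search...') + button('Home', '/') +
                 '</body>';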
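
And for points 2 and 3, the CSS patterns look roughly like this (illustrative
values; the data URI is truncated):

      /* Point 2: one sprite sheet, one request. Each icon is a small
         window onto the sheet, selected with background-position. */
      .icon        { background: url('/img/sprite.png') no-repeat;
                     width: 16px; height: 16px; }
      .icon-search { background-position: 0 0; }
      .icon-cart   { background-position: -16px 0; }

      /* Point 3: a 1px-wide repeating background, saved as a JPEG and
         base64-encoded into the stylesheet, costing no extra request. */
      .header { background: url('data:image/jpeg;base64,/9j/4AAQ...') repeat-x; }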

Having done all this, we find that our pages load more or less instantly, at
least with respect to what matters to an average human. But this instant
loading only happens on a network where the initial connection time is not
significant. On 2G, and even 3G, all the optimizations in the world can't save
you from the fact that the device takes a LONG time just to connect, as Ilya
Grigorik mentioned in his presentation.

~~~
untog
Wait, you _inline_ your JS and CSS? Doesn't that mean the user has to re-
download it with every page? I get the desire to have as few connections going
as possible, but just be sensible about caching: the first page load downloads
the CSS + JS, and subsequent page loads serve it straight from cache.
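
For instance, with Apache's mod_expires you can put far-future cache headers
on static assets (a sketch, assuming the module is enabled):

      <IfModule mod_expires.c>
        ExpiresActive On
        # Cache stylesheets and scripts for a year; bust with new URLs.
        ExpiresByType text/css "access plus 1 year"
        ExpiresByType application/javascript "access plus 1 year"
      </IfModule>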

~~~
zoj_bad
We don't exactly inline the CSS. (Actually, to be perfectly honest, sometimes
we do, when that seems like the more efficient thing to do and there is no
benefit of modularity to be gained from the CSS.) In general, though, we
construct the HEAD and CSS through the server library as well, using some
logic and parameters.

Now, you are right, this means that from page to page those 3-4 bytes might be
the same and get re-downloaded. However, we are focused on mobile phones, and
some of our user base is on crappy mobile phone browsers (Nokia phones that
are still popular here) with bad caching. So the trade-off is between 3-4
bytes of repeated CSS and a whole new request. And 3-4 bytes of CSS is not
even noticeable. A request most definitely is.

~~~
Domenic_S
How much CSS can you fit into 4 bytes?

~~~
mh-

      a {}

~~~
nfm
Only if it's not gzipped :P That would bulk it up to more like 25 bytes.

------
kaishiro
"After Ilya’s talk ended, I started to think more about why performance always
seems to be an afterthought with developers."

In my (admittedly somewhat limited) agency experience, performance is rarely
an afterthought for developers. Rather, it can be difficult to convince
stakeholders that it's worth the extra time to ensure that each of these
avenues is thoroughly explored. They'd rather spend their retainer
implementing new feature X than making what they've already purchased perform
faster.

We do what we can in terms of adding additional time to estimates to account
for these things, but it's always a balance. At the end of the day, someone
still needs to pay for my time.

------
mikeyouse
The blog post loads perceptibly fast, but when I ran it through the PageSpeed
tool, it scored very poorly:

[http://i.imgur.com/IBpar1y.png](http://i.imgur.com/IBpar1y.png)

Any ideas on why it ranks so low, or areas where the ranking isn't accurate?

------
recuter
Food for thought: all this wisdom is really an attempt to prioritize how bits
hit your browser, and it exists mostly because of legacy issues. These are
hacks, albeit clever and measurably effective ones.

HTTP2, when it comes (2014?), will negate the need to inline/concatenate CSS
or cook up image sprites, because it solves the problem at the proper level:
the network layer.

~~~
mmorris
Looking forward to that!

Unfortunately the question then becomes, how long until all the major browsers
support it? And once that happens, how long until the crowd not using the
updated browsers is small enough that we can safely ignore them?

I suppose the auto-updating browser trend should help a lot with the latter
issue.

~~~
paulirish
SPDY (and thus HTTP2) is implemented and shipping in Chrome, Firefox, Internet
Explorer 11 (except on Win7), and Opera. It's all ready: mod_spdy and the
nginx SPDY module are solid and available.
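
For reference, on nginx it's just an extra parameter on an existing SSL
listener (a sketch; the paths are placeholders, and it assumes nginx was built
with the SPDY module):

      server {
          listen 443 ssl spdy;
          server_name example.com;
          ssl_certificate     /etc/nginx/cert.pem;
          ssl_certificate_key /etc/nginx/key.pem;
      }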

~~~
mmorris
Wow, okay. I knew some of the bigger sites supported SPDY, but I didn't
realize that browsers other than Chrome already supported SPDY (and HTTP2) as
well.

So (pardon my ignorance; I googled a bit but didn't find very conclusive
results), why isn't everyone using HTTP2/SPDY now? Because of IE < 11 and
Safari?

Edit: Or maybe because of the SSL requirement? Edited again: For clarity.

~~~
recuter
Probably because the spec is not finalized yet. I bet once SPDY is essentially
renamed to HTTP2, it will see wide adoption.

