We use data: URIs all over the place, typically storing icons and other small graphics as background-image sources in CSS. Over the broadband connection I’m using right now (roughly 5Mb/s download speeds) and with a cleared cache, our home page seems to be downloading and rendering in about 1.5s in all of the major browsers, and the lion’s share of that is downloading the HTML file first and then downloading various standalone images that we haven’t yet optimised at the end. Over a 3G mobile connection of dubious quality, in the browsers I can readily test with, there’s a lot of extra lag up-front, but little difference in the middle section where we download and render the CSS/data: images. These results are both fairly consistent with what we’ve seen throughout development and testing, across a decent range of test scenarios over an extended period.
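For anyone unfamiliar with the technique, here's a minimal sketch of generating such a CSS rule on the command line (assumes GNU coreutils base64; `icon.png` here is a placeholder file created just for the demo — use a real small graphic in practice):

```shell
# Build a CSS rule with a small image inlined as a data: URI.
# -w0 disables base64's default line wrapping.
printf 'PNG' > icon.png   # stand-in bytes for the demo only
b64=$(base64 -w0 icon.png)
printf '.icon { background-image: url("data:image/png;base64,%s"); }\n' "$b64"
# prints: .icon { background-image: url("data:image/png;base64,UE5H"); }
```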
So while I’m not sure from these articles where the difference might lie, I don’t see how to reconcile the results we’ve seen with the idea that switching to data: URIs somehow slows things down by 6x or 10x as reported in the linked article series. I wonder whether the people doing these experiments weren’t measuring what they thought they were measuring, or perhaps hit an awkward use case that some or all major browsers don’t optimise well, rather than a general problem with using data: URIs. The latter is certainly plausible: there are cases, with SVG for example, where some browsers seem absurdly slow while everything works with the kind of performance you’d expect in others.
Meanwhile, data: URIs continue to have some concrete practical advantages for mobile, perhaps most obviously that they tend to circumvent mobile Internet providers “helpfully” compressing your graphics on the fly so they look terrible on your visitor’s 300dpi smartphone/tablet display, which is a particular problem with image sprites if their compression starts bleeding one image into the next. For reasons like this, we’ve found that in practice our decisions about how to send graphics on web pages are rarely dominated by speed considerations anyway — though they surely would be if using data: URIs really slowed down our pages by a factor of 6-10x!
• separate image files: after the cost of the initial many separate loads (which pipelining/SPDY might minimize), once the cached condition is reached, might having them as individual artifacts again show some wins (no translation/trimming; shorter, simpler CSS)?
• data URIs inside CSS rather than HTML: on the off chance that this leads to better post-data-decode image caching than re-parsing the cached HTML
However, if the CSS file and sprite reside on the same server, and HTTP/1.1 with keepalives is used, chances are that the sprite will come down a preheated TCP connection (thus also forgoing an SSL handshake, if fetching via https). The article doesn't actually mention how client and server are set up, or what the base network latency between them is.
It would be nice to see the test results with and without keepalives to see how this influences the results, as well as what the median network latency between client and server was during these tests.
$ dd if=/dev/urandom bs=1024 count=64 | base64 | gzip | wc -c
64+0 records in
64+0 records out
65536 bytes (66 kB) copied, 0.0127386 s, 5.1 MB/s
$ wget http://en.wikipedia.org/wiki/ASCII
$ (cat ASCII;dd if=/dev/urandom bs=8000 count=1 | base64) | gzip | wc -c
$ (cat ASCII;dd if=/dev/urandom bs=9000 count=1 | base64) | gzip | wc -c
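To make the size overheads in the experiments above explicit, the raw, base64 and base64+gzip byte counts for a 64 KiB random blob can be compared directly (a sketch assuming GNU coreutils; base64 inflates data by 4/3, and gzip claws part of that back because base64 text uses only 64 symbols, so under 8 bits of entropy per output byte):

```shell
# Raw vs base64 vs base64+gzip sizes for 64 KiB of random data
dd if=/dev/urandom of=blob bs=1024 count=64 2>/dev/null
wc -c < blob                        # 65536 raw bytes
base64 -w0 blob | wc -c             # ~87385: the 4/3 base64 expansion
base64 -w0 blob | gzip -c | wc -c   # smaller than the base64, larger than the raw
```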
* media queries, which are evaluated; if the CSS is not applicable, it isn't downloaded
* disabled stylesheets, which are ignored
I imagine that CSS parsers aren't particularly designed for any kind of parallel operation, whereas grabbing images (and decoding them) is largely done in parallel (up to the max number of connections). So while you're parsing the CSS you can be fetching the images, offsetting the connection cost.
In the CSS case, the device needs to get the (slightly larger) CSS, un-gzip it (with more complex tables), Base64 decode - and then decode the image as before. I wouldn't be surprised if this is a completely sequential activity with the rest of the CSS parsing.
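That sequential chain can be replayed on the command line (an illustration only, assuming GNU coreutils; the point is that each stage consumes the previous stage's complete output, unlike separately fetched image files that decode in parallel):

```shell
# Simulate the chain: gzipped payload containing base64 -> gunzip -> base64 decode
printf 'hello' | base64 -w0 | gzip -c > payload.gz
gunzip -c payload.gz | base64 -d    # prints: hello
```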
There's a lot of commonality between the images, http://g-ecx.images-amazon.com/images/G/01/common/sprites/sp...
Even with Base64 encoding, much of that commonality would remain and thus I'd think that if you had all these sprites as SVG in a single file, as Data URIs, gzip would do a very good job indeed on the CSS file.
In a PNG sprite I wouldn't be so sure.
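That intuition is easy to check: gzip deflates near-identical runs of base64 text dramatically, whereas base64 of an already-compressed PNG is close to incompressible. A minimal sketch assuming GNU coreutils, with a made-up inline `<svg>` string standing in for one icon:

```shell
icon='<svg xmlns="http://www.w3.org/2000/svg"><rect width="8" height="8"/></svg>'
# Ten near-identical icons base64-encoded back to back, as a CSS file would hold them:
for i in 1 2 3 4 5 6 7 8 9 10; do printf '%s' "$icon" | base64 -w0; done > icons.b64
wc -c < icons.b64           # total base64 text size
gzip -c icons.b64 | wc -c   # far smaller: gzip exploits the repetition
```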
What is the basis or source of the idea that base64-encoded images inlined into the CSS would be faster?
iOS may now lag behind Android in handsets shipped -- but it's still a dominant player in web visits on mobile!
Unfortunately that means that for RUM tests I needed to have results that were insensitive to differences of a few ms.
Here's a petition asking Apple to include the navigation timing API in a future iOS release:
The decision was pretty easy to make after that discovery.