
Want to pack JS and CSS really well? Convert it to a PNG - shawndumas
http://ajaxian.com/archives/want-to-pack-js-and-css-really-well-convert-it-to-a-png-and-unpack-it-via-canvas
======
kls
This is what I love about JavaScript right now. The air of innovation is
everywhere. It really is an exciting language to be developing in right now.
It reminds me of the early days of Java.

------
X-Cubed
It would be interesting to see how it copes with proxy servers (like the ones
Opera uses) that downscale images to reduce traffic on mobile networks. It
would be a good idea to checksum the code to make sure that it's what you're
expecting to receive.

------
cantino
That's a cool hack, if impractical.

------
RodgerTheGreat
And so the "pure CSS images" trend has come full circle.

~~~
Raphael_Amiard
This means you could do a pure css image and put it in a PNG ... !

Mind bending.

------
ramine
Wonder if it'd make sense to make a lib/gem out of it.

~~~
quippd
Definitely not.

"And before anyone else points out how gzip is superior, this is in no way
meant as a realistic alternative."

------
jdavid
This has 3 purposes:

      1. show off some neat trick
      2. hide code
         a. hide code on a 3rd-party site
         b. hide code in an image that actually looks like an image (steganography),
            thus hiding parts of the code within several images.
            For a really good time, hide some unique data on
            one machine that acts as a key
      3. save on file system space

I think gzip uses the same compression algorithm as PNG files, so reusing it
this way is silly; the browser supports gzip anyway.

I saw this as part of a 10K Apart competition, and maybe entrants were trying
to make up the difference between the actual size of the bits used and the
file system size. Although this is interesting, I think it defeats the point
of the 10k limit, which was there to show what one could do without server-
side scripts, and I think a build script .... counts.

~~~
sr3d
I did write to the 10kapart to ask about using this technique before working
on the app. Their answer was:

"As long as it works in the required browsers and it's 100% client-side you're
welcome to use it in your app."

So the build script ... doesn't count. As long as everything works in the
browser, it's good. The end justifies the means :)

------
donohoe
Been there; it's really cool but impractical in many respects, with no real
performance gain. It's also a bitch if you update code on a regular basis.

~~~
sr3d
It was definitely not convenient during development. When I was testing for
IE9, every change in the code needed to be encoded into the PNG image. But
having an automated build script really helped speed things up.
YUICompressor caught all the major bugs during the build process, so as long
as my un-minified/unencoded scripts worked, I was pretty sure that the PNG
version would work as well.

------
sethg
I am surprised that an algorithm designed to compress a graphic image would be
at all useful for compressing bytes of text.

~~~
wrs
For a lossy algorithm based on human vision characteristics (e.g., JPEG) it
would be quite surprising. However, PNG is lossless and is based on DEFLATE, a
general-purpose compression algorithm. So actually this makes perfect sense.
(See
[http://en.wikipedia.org/wiki/Portable_Network_Graphics#Compr...](http://en.wikipedia.org/wiki/Portable_Network_Graphics#Compression))

~~~
studer
PNG uses a predictor tuned to image data, though, so it's a bit trickier than
that. But it's good enough for this purpose.

------
woodall
This post follows along the same lines, though it's a bit more blackhat [1].
He also did another write-up a while back about using images to store XSS
data. It works if the image is loaded into JavaScript; otherwise it will not.

[http://ha.ckers.org/blog/20070604/passing-malicious-php-thro...](http://ha.ckers.org/blog/20070604/passing-malicious-php-through-getimagesize/)

------
al_james
Cool hack. Of course, running GZIP on your server renders it all useless.

~~~
noodle
original article:

> And before anyone else points out how gzip is superior, this is in no way
> meant as a realistic alternative.

~~~
sjs
What's the use case then?

~~~
noodle
there has to be a use case?

~~~
sjs
Of course not! But if there's no use case then this is getting an awful lot of
buzz for no real reason other than Ajaxian running a story on it, considering
that it's not that new. [http://blog.nihilogic.dk/2008/05/compression-using-
canvas-an...](http://blog.nihilogic.dk/2008/05/compression-using-canvas-and-
png.html)

It's been on reddit or hacker news before, this year iirc.

~~~
noodle
As the Ajaxian article stated, I _believe_ people are trying to use it to
squeeze the most out of the size-limited JavaScript contests that are out
there. But I've not seen an entry that does yet (haven't looked that hard,
though).

~~~
bmelton
I think js10k restricts entries to content readable as UTF-8, a requirement
which, I believe, a PNG (compressed or uncompressed) does not meet.

------
JangoSteve
Using the same process (combining data into a png => sending to browser =>
reading out specific chunks from the png), could you also have a script that
combines all the images in your stylesheet into a CSS sprite, sends it to the
browser, and then crops out each image on the client-side?

Then you wouldn't have to mess with all the trickery required to lay out a CSS
sprite and position it properly in your stylesheet. Of course, the compromise
would be the performance hit the first time the browser loads your site and
has to crop the images out.
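In the browser the cropping would go through canvas (`drawImage` or `getImageData`); the arithmetic itself boils down to copying rows out of a flat RGBA buffer. A hypothetical sketch (the `cropRGBA` helper and the tiny sprite are made up for illustration):

```javascript
// Crop a rectangle out of a flat RGBA pixel buffer, the way you would
// after calling ctx.getImageData() on a sprite sheet in the browser.
function cropRGBA(pixels, spriteWidth, x, y, w, h) {
  const out = new Uint8ClampedArray(w * h * 4);
  for (let row = 0; row < h; row++) {
    // Source offset of this row within the sprite, 4 bytes per pixel.
    const srcStart = ((y + row) * spriteWidth + x) * 4;
    out.set(pixels.subarray(srcStart, srcStart + w * 4), row * w * 4);
  }
  return out;
}

// Tiny 4x1 "sprite": four pixels with distinct red channels.
const sprite = new Uint8ClampedArray([
  10, 0, 0, 255,  20, 0, 0, 255,  30, 0, 0, 255,  40, 0, 0, 255,
]);

// Crop the middle two pixels.
const icon = cropRGBA(sprite, 4, 1, 0, 2, 1);
console.log(Array.from(icon)); // [20, 0, 0, 255, 30, 0, 0, 255]
```

Each cropped buffer could then be put back into an off-screen canvas and exported with `toDataURL` to feed an `img` src.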

~~~
tlrobinson
Or just do what we do in Cappuccino, and use Base64 encoded data URIs (or
MHTML in most versions of IE):
[http://cappuccino.org/discuss/2009/11/11/just-one-file-with-...](http://cappuccino.org/discuss/2009/11/11/just-one-file-with-cappuccino-0-8/)

Base64 bloats the size a little, but that's recovered as long as you enable
gzip transfer encoding.

~~~
JangoSteve
Interesting. What are your experiences with handling the disadvantages of
using data URIs? [1] The biggest disadvantage that I see is:

 _Data URIs are not separately cached from their containing documents (e.g.
CSS or HTML files) so data is downloaded every time the containing documents
are redownloaded. Content must be re-encoded and re-embedded every time a
change is made._

Based on this, it seems like this would still work well for images specified
in your stylesheet, but not so much for images that are specified directly in
your HTML, because it would bloat HTML file size, and unless the user's
browser caches the HTML file itself, they'll have to re-download the large
HTML file over and over again.

Also, according to Wikipedia, base64 encoding makes the file size 33% larger
than the binary file, and compressing it (as with gzip) only shaves off 2-3%
of the base64 file size. Is this accurate? If so, it seems gzip encoding the
transfer doesn't really recover the file size bloat you mentioned.

[1] <http://en.wikipedia.org/wiki/Data_URI_scheme#Disadvantages>

EDIT: Now that I think about it, if you base64 encode all the images in your
stylesheet, won't that make your website look like ass for as long as it takes
to download that stylesheet? Usually, this isn't a big deal because a text
stylesheet gets downloaded pretty quickly, but now it will be larger in file
size than the total sum of all the images that were specified in the
stylesheet.

~~~
tlrobinson
Well, our approach works great for rich JavaScript web applications like those
built with Cappuccino, not so much for websites. We don't have image URLs
specified in stylesheets at all, but rather we use JavaScript to build up the
DOM and set the src of all img tags (or background-image style properties, or
canvas drawing, etc). That gives us the opportunity to replace the URLs with
these data/MHTML URIs looked up through a map (URL -> data/MHTML) that gets
populated when you load the sprite.

Regarding the "Flash of Unstyled Content" effect
(<http://en.wikipedia.org/wiki/Flash_of_unstyled_content>), I believe this has
been fixed by modern browsers, as long as the stylesheet is loaded in the head
section, since the browser won't display anything until those resources have
been loaded (not 100% sure about this; it doesn't affect the kinds of apps I'm
talking about anyway).

Regarding Base64 + Gzipping, no, that's not accurate:

 _"Base64-encoded data URIs are 1/3 larger in size than their binary
equivalent. This overhead is reduced to 2-3% if the HTTP server compresses the
response using HTTP's Content-Encoding header."_

This is saying the overhead is reduced from 33% _to_ 2-3% over the raw image
sizes, which is insignificant relative to the performance gained by
significantly reducing the number of HTTP requests.

Your last point doesn't apply to web applications where, for example, a
progress bar can be displayed before showing any of the application UI.

~~~
JangoSteve
Oh cool. Yes, I must have misread that compression stat.

 _We don't have image URLs specified in stylesheets at all, but rather we use
JavaScript to build up the DOM and set the src of all img tags (or background-
image style properties, or canvas drawing, etc)._

That's really interesting. Since that's not explained in the article you had
linked to, I'm assuming Cappuccino did this before switching to data URIs?

Thanks for taking the time to explain this approach more in depth.

~~~
boucher
Yeah, you should read a bit about Cappuccino, it makes drastically different
decisions about how an application should be built on top of the browser, but
as a result it gets to do complex things like this image spriting extremely
simply.

