CSS Sprites are Stupid - Let's Use Archives Instead (Firefox Demo) (kaioa.com)
58 points by mixmax on July 7, 2009 | hide | past | favorite | 37 comments



If you're looking for a way to bundle resource requests for arbitrary file types over a single HTTP connection, there is already a feature in HTTP 1.1 that allows you to do this: it's called HTTP pipelining [http://en.wikipedia.org/wiki/HTTP_pipelining]. Unfortunately this feature is off by default in Firefox, but you can turn it on via about:config.

Other browsers don't yet support it, but they don't support archives yet either. As it stands, CSS Sprites are really the only cross-browser compatible option we have, and we're going to have to make do with them.

I like your idea, but it appears to me that this is better implemented on the HTTP layer.
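To make the mechanism concrete, here's a minimal sketch of what pipelining looks like on the wire: several requests written back-to-back on one connection before any response is read. The host and paths are placeholders.

```python
# HTTP/1.1 pipelining sketch: multiple GET requests sent back-to-back
# on a single TCP connection, before reading any response.

def pipelined_requests(host, paths):
    """Build the raw bytes a pipelining client would write on one socket."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"\r\n"
        )
    return "".join(requests).encode("ascii")

# Three image fetches, one connection, one write:
payload = pipelined_requests("example.com", ["/a.png", "/b.png", "/c.png"])
```

The server is then expected to answer the requests in order on the same connection.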


Pipelining reuses a single TCP connection, but still makes multiple HTTP requests. This means that you still send and receive a bunch of mostly-duplicated request/response headers, and still have the latency of multiple request/response round trips.

Archives/sprites/etc. are better in some ways than pipelined requests. Consider the case where the client is making a conditional GET request and the content hasn't changed on the server. If the content is bundled into a single resource, then you just have a single request and a single HTTP 304 response with no body. But if the content is served as multiple resources, then you have the exact same conversation repeated once for each resource. For a site with 100 images, this will use 100x the bandwidth and take nearly 100 times as long.
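The per-resource revalidation traffic described above can be sketched as follows; the paths and ETag values are hypothetical.

```python
# Shape of the conditional-GET conversation repeated once per resource.

def conditional_get(path, etag):
    """Request headers for revalidating one cached resource (hypothetical ETag)."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: example.com\r\n"
        f"If-None-Match: {etag}\r\n"
        f"\r\n"
    )

# An unchanged resource earns a bodyless 304 in reply:
not_modified = "HTTP/1.1 304 Not Modified\r\n\r\n"

# 100 separate images -> 100 of these exchanges; a single bundle needs one.
revalidations = [conditional_get(f"/img{i}.png", f'"v{i}"') for i in range(100)]
```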


If you're gzipping/deflating your HTTP connections, one hundred 304 response headers are not going to be your bandwidth bottleneck. Even if you aren't, a 304 response is at most around 200 bytes; 200 x 100 is 20 kilobytes of response headers, not exactly earth-shattering.

Since you brought up the caching issue: when you change any resource in the bundle, you have to redownload the whole bundle. The bigger the bundle, the more likely it is that one of the resources inside will need to be updated over time. With pipelining you only have to retransmit the modified resources. And that's not even considering the higher likelihood that larger objects get thrown out of the cache.

Also, most of the overhead in establishing a connection over a non-negligible-latency internet link is the TCP handshake, not the time spent transmitting a few packets of headers.
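The back-of-envelope above works out as follows; all figures are illustrative, not measured.

```python
# Header bytes vs. connection setup, using the rough numbers above.

RESPONSES = 100
BYTES_PER_304 = 200                       # "at most like 200" per response
header_bytes = RESPONSES * BYTES_PER_304  # 20,000 bytes of headers total

RTT_S = 0.1         # assume a 100 ms round trip
SETUP_RTTS = 1      # the TCP handshake costs roughly one extra RTT
handshake_s = SETUP_RTTS * RTT_S  # per new connection, regardless of payload
```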


Note that HTTP content-encoding does not compress headers, so gzip compression will have no effect on responses with status 304 (or on request headers).

Your other points are good ones, which is why I said that bundling beats pipelining in some ways. Obviously you should measure your real-world use cases and performance before deciding.


The size of the response headers is not the issue. It's the latency of going and getting each one, and remember that web browsers typically limit concurrency to any one domain. You're not going to fire off a big batch of requests in parallel and get the responses a second later; you're going to have three or so worker processes chewing through 30 requests each, in serial.
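That serial model can be sketched with a toy latency calculation; the connection count and RTT are illustrative assumptions.

```python
import math

# Toy model: a few connections per host, each serving its queue of
# requests one round trip at a time (no pipelining).

def fetch_time(requests, connections, rtt_s):
    """Seconds spent on request/response round trips alone."""
    rounds = math.ceil(requests / connections)
    return rounds * rtt_s

# Three workers, 30 requests each, 100 ms RTT -> 30 serial rounds, ~3 s
serial_s = fetch_time(90, 3, 0.1)
```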

The situation becomes even worse with TLS, where the handshake is even more of a time sink. Too many requests on an https page are absolute death for performance! This technique could really help there.

Pipelining would certainly help, though, if and when it finally "arrives".


Does anyone have any further information on this? It's interesting that while most modern web servers handle this, most web clients don't, or disable it by default.

If this offers nothing other than a performance increase, where is the issue?


Pipelining doesn't work.


"most modern web servers handle pipelining without any problem. Exceptions include IIS 4 and reportedly 5."

Color me surprised.


IIS 4 came out with the NT 4.0 option pack. IIS 5 came with Windows 2000. At 10 years old, I wouldn't call them modern.


Here's some info about including image data as part of the URI: http://en.wikipedia.org/wiki/Data_URI_scheme.

Assuming the HTML is compressed, wouldn't this also achieve the same "poor-man's pipelining" result, i.e. fewer server connections? It seems to have broader browser support than the JAR thing.

Edit: I found this too... the MHTML method linked in this article is also interesting: http://danielmclaren.net/2008/03/embedding-base64-image-data...
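Building a data URI is straightforward; here's a minimal sketch, with a few PNG signature bytes standing in for a real image file's contents.

```python
import base64

# Minimal data-URI construction, as in the linked articles.

def to_data_uri(data, mime="image/png"):
    """Base64-encode raw bytes into a data: URI."""
    b64 = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{b64}"

# In practice: to_data_uri(open("img1.png", "rb").read())
uri = to_data_uri(b"\x89PNG\r\n\x1a\n")
# Then embed directly: <img src="data:image/png;base64,..."/>
```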


Data URIs can't be cached, so they can only help the latency of a cold page load; they make every page after that slower (if you put the data URI in your HTML instead of the CSS).


You could also put the data URIs into a JavaScript file that plugs them into the document/CSS/wherever once it's loaded. The script would cache.


What would be nice is a script that extracts the image URLs from the stylesheet(s), builds the sprite image from the images used, then creates another CSS file with the required offsets for the classes.

I have been putting off spending some time creating a script like this, mostly because I am pretty sure there is already one out there that does it.
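The two mechanical pieces of such a script can be sketched as below. Image sizes are supplied by hand here; a real tool would read them from the files and also composite the actual sprite image (e.g. with an imaging library).

```python
import re

# 1) Pull url(...) references out of a stylesheet.
def image_urls(css):
    return re.findall(r"url\(['\"]?([^'\")]+)['\"]?\)", css)

# 2) Compute offsets when the images are stacked vertically in one sprite.
def sprite_offsets(sizes):
    """Map filename -> (x, y) offset within the sprite sheet."""
    offsets, y = {}, 0
    for name, (w, h) in sizes.items():
        offsets[name] = (0, y)
        y += h
    return offsets

css = ".a{background:url('icons/a.png')} .b{background:url(icons/b.png)}"
urls = image_urls(css)
offsets = sprite_offsets({"a.png": (16, 16), "b.png": (16, 32)})
# The generated rule for b.png would then be: background-position: 0 -16px;
```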


Google Web Toolkit does this; I'm sure other web development frameworks do as well. A quick search for a standalone utility revealed: http://spritegen.website-performance.org/ .


CSS sprites are stupid. So stupid, in fact, that I refuse to use them. It wouldn't be such a big deal, I suppose, if IE6 supported proper transparency. But as it stands, I still cannot, in 2009, safely use a 24-bit PNG for my CSS sprites.

If anything like this is ever standardized, which seems unlikely, and implemented in all the major browsers, which seems even more unlikely, MAYBE in 30 years we can use it in production.

But as far as fantasies go, it's one of the cooler ones!


What do sprites have to do with transparency? I thought they were just about displaying a part of the image with offset/size?


Surely you mean 32-bit RGBA PNG?


I have no idea why I would "surely" mean that. Is "32-bit RGBA PNG" a synonym for "24-bit PNG with Alpha Transparency"? If so, I suppose I could have been more explicit, though I've never once seen such images referred to as that.


A 24-bit PNG would have no transparency, being three 8-bit channels for RGB. Alpha transparency is a channel too!

To be fair, the PNG spec is really bad about referring to alpha as if it were something entirely different from a color sample. I have spent way too much time thinking about PNG minutiae for a project at work...
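The channel distinction shows up directly in the file format: the IHDR color type is 2 for plain RGB (24 bits per pixel) and 6 for truecolor plus alpha (32 bits). Here's a minimal sketch that writes a one-pixel RGBA PNG using only the standard library.

```python
import struct
import zlib

def chunk(ctype, data):
    """A PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def rgba_png(r, g, b, a):
    """A 1x1 PNG with four 8-bit samples per pixel (color type 6)."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0)  # 1x1, depth 8, type 6
    raw = b"\x00" + bytes([r, g, b, a])                  # filter byte + pixel
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

png = rgba_png(255, 0, 0, 128)  # a half-transparent red pixel
```

A "24-bit PNG" would use color type 2 and three samples per pixel instead.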


Doesn't yahoo.com use sprites? Not trying to be difficult, is the transparency issue the reason you don't like them?

edit: I suppose I could have read the article first


That's even hackier than using a sprite.


You could have a webserver generate the jar dynamically. In that case, everything could be in there.



    The same with an absolute URL:

    <img src="jar:http://kaioa.com/b/0907/test.jar!/img1.png" alt="img1" width="32" height="32"/>
my security sense is tingling...


If you are thinking of local directory traversal (src="jar:http://kaioa.com/b/0907/test.jar!../../somefile) then I think you are underestimating browser coders. Introducing that security issue would require a dedicated effort.
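The check in question is simple enough to sketch: normalize the jar entry name so that ".." components can never escape the archive root. This is an illustration of the idea, not any browser's actual code.

```python
import posixpath

def safe_entry(name):
    """Resolve a jar entry name so traversal components are collapsed away."""
    norm = posixpath.normpath("/" + name)  # anchor at root, fold . and ..
    return norm.lstrip("/")                # entry path relative to archive root

print(safe_entry("img1.png"))        # img1.png
print(safe_entry("../../somefile"))  # somefile -- cannot climb above the root
```

A stricter implementation might simply reject any entry name containing "..".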


No, I was thinking that in certain scenarios where it might be possible to control the name of the file being shown, using jar:http:// could trigger a remote request and possibly expose some information.


CSS Sprites aren't stupid, they're just clunky. Single browser technology, however, is positively retarded. Next?


Not going to agree quite so harshly, but a technique based on forgotten functionality in one browser is rather gimmicky.


How about using data urls for embedding images in the CSS?

OK, so IE6 and IE7 don't support it... never experienced that before.


IE8 supposedly does, but then again, I've heard that there's a limit to how large an image you can embed in a data url in some browsers.


Cool hack.

In the future, when network speed is not an issue, we should be able to send the whole web app (or page) in one packet.

One request, one response.


In the future, as network speed becomes less of an issue, web apps will increase in size and complexity to compensate.


> when network speed is not an issue, we should be able to send the whole web app (or page) in one packet

It is not the case that a faster network implies larger packets.


In the limit it does, because a faster network means more interrupts per second when the packet size is fixed. Thus, Ethernet jumbograms. Also, there can be problems with bit-rate-independent framing; for example, 802.11b loses a lot of its 11Mbps to the packet preamble, since the preamble is the same number of microseconds for compatibility with the older 802.11 standards. (I assume G solved this problem.)

I suspect you could gzip a working CP/M system into a 9000-byte Ethernet jumbogram.
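The preamble point works out roughly as follows; the 192 microsecond figure is the approximate duration of the 802.11b long preamble plus PLCP header, sent at a fixed low rate regardless of the data rate. All numbers are back-of-envelope.

```python
# Fraction of airtime lost to the fixed-duration 802.11b preamble.

PREAMBLE_S = 192e-6   # long preamble + PLCP header, roughly
RATE_BPS = 11e6       # 802.11b data rate

def preamble_overhead(frame_bytes):
    """Fraction of total airtime spent on the fixed preamble."""
    data_s = frame_bytes * 8 / RATE_BPS
    return PREAMBLE_S / (PREAMBLE_S + data_s)

tiny = preamble_overhead(100)    # small frames: mostly preamble
full = preamble_overhead(1500)   # full frames: still roughly 15% preamble
```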


Many home routers and network devices I've seen have an option for short or long preamble.


Sprites are easy to use though and are well-supported.




