If you're looking for a way of bundling resource requests for arbitrary file types into a single HTTP request, there is already a feature in HTTP/1.1 that allows you to do this; it's called HTTP pipelining [http://en.wikipedia.org/wiki/HTTP_pipelining]. Unfortunately this feature is off by default in Firefox, but you can turn it on via about:config.
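For reference, the relevant about:config preferences (as far as I remember; the exact max-requests value is up to you) are roughly:

    network.http.pipelining                true
    network.http.proxy.pipelining          true
    network.http.pipelining.maxrequests    8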
Other browsers don't yet support it, but they don't support archives yet either. As it stands, CSS Sprites are really the only cross-browser compatible option we have, and we're going to have to make do with them.
I like your idea, but it appears to me that this is better implemented at the HTTP layer.
Pipelining reuses a single TCP connection, but still makes multiple HTTP requests. This means that you still send and receive a bunch of mostly-duplicated request/response headers, and still have the latency of multiple request/response round trips.
Archives/sprites/etc. are better in some ways than pipelined requests. Consider the case where the client is making a conditional GET request and the content hasn't changed on the server. If the content is bundled into a single resource, then you just have a single request and a single HTTP 304 response with no body. But if the content is served as multiple resources, then you have the exact same conversation repeated once for each resource. For a site with 100 images, this will use 100x the bandwidth and take nearly 100 times as long.
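To make that concrete, here's a rough sketch of what the per-resource conversation looks like when done naively in Python (the host, paths, and date are placeholders); each iteration is a full request/response round trip, serialized on one connection:

    # Sketch: one conditional GET per resource, serialized on a single connection.
    # Host, paths, and the If-Modified-Since date are placeholders.
    import http.client

    paths = ["/img/%d.png" % i for i in range(100)]
    conn = http.client.HTTPConnection("example.com")
    for path in paths:
        conn.request("GET", path,
                     headers={"If-Modified-Since": "Sat, 01 Aug 2009 00:00:00 GMT"})
        resp = conn.getresponse()
        resp.read()  # drain the (empty) body so the connection can be reused
        # resp.status is 304 if unchanged: no body, but a full round trip anyway
    conn.close()

Bundled into a single archive, that same freshness check is one request and one 304.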
If you're gzipping/deflating your HTTP connections, a hundred 304 response headers are not going to be your bandwidth bottleneck. Even if you aren't, a 304 response is maybe 200 bytes; 200 x 100 is about 20 kilobytes of response headers, which is not exactly earth-shattering.
Since you brought up the caching issue: when you change any resource in the bundle, clients have to redownload the whole bundle. The bigger the bundle, the more likely it is that one of the resources inside will need updating over time. With pipelining you only have to retransmit the modified resources. And that's not even considering that larger objects are more likely to get evicted from the cache.
Also, most of the overhead in establishing a connection over an internet link with non-negligible latency is in the TCP handshake, not in the time spent transmitting a few packets of headers.
Note that HTTP content-encoding does not compress headers, so gzip compression will have no effect on responses with status 304 (or on request headers).
Your other points are good ones, which is why I said that bundling beats pipelining in some ways. Obviously you should measure your real-world use cases and performance before deciding.
The size of the response headers is not the issue. It's the latency of going and getting each one, and remember that web browsers typically limit concurrent connections to any one domain. You're not going to be firing off a big batch of requests in parallel and getting the responses a second later; you're going to have three or so connections each chewing through 30 requests in serial.
The situation becomes even worse with TLS, in which the handshake is even more of a time sink. Too many requests on an https page is absolute death for performance! This technique could really help that.
Pipelining would certainly help, though, if and when it finally "arrives".
Does anyone have any further information on this? It's interesting that while most modern web servers handle this, most web clients either don't support it or disable it by default.
If this offers nothing other than a performance increase, where is the issue?
Assuming the HTML is compressed, wouldn't this also achieve the same "poor-man's pipelining" result, i.e. fewer server connections? It seems to have broader browser support than the JAR thing.
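Something like this is what I have in mind; a quick sketch in Python (the file name and selector are just examples):

    # Sketch: inline an image into CSS as a base64 data URI.
    # File name and selector are made up for the example.
    import base64

    with open("logo.png", "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")

    print(".logo { background-image: url(data:image/png;base64,%s); }" % encoded)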
Data URIs can't be cached as separate resources, so they only help the latency of a cold page load; they make every page after that slower (if you put the data URI in your HTML instead of the CSS).
What would be nice is a script that extracts the image URLs from the stylesheet(s), creates the sprite image from the referenced images, then writes out another CSS file with the required offsets for the classes.
I have been putting off spending some time creating a script like this, mostly because I am pretty sure there is already one out there that does it.
Google Web Toolkit does this; I'm sure other web development frameworks do as well. A quick search for a standalone utility revealed: http://spritegen.website-performance.org/
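Rolling your own isn't much work either, at least for the simple vertical-strip case; a rough sketch with Pillow (file names and class names made up, no CSS parsing):

    # Sketch: stack images into one vertical sprite and emit the CSS offsets.
    # Requires Pillow; file names and class names are made up for the example.
    from PIL import Image

    files = ["icon-home.png", "icon-search.png", "icon-user.png"]
    images = [Image.open(f) for f in files]

    width = max(img.size[0] for img in images)
    height = sum(img.size[1] for img in images)

    sheet = Image.new("RGBA", (width, height))
    rules, y = [], 0
    for f, img in zip(files, images):
        sheet.paste(img, (0, y))
        name = f.rsplit(".", 1)[0]
        rules.append(".%s { background: url(sprite.png) 0 -%dpx no-repeat; }" % (name, y))
        y += img.size[1]

    sheet.save("sprite.png")
    print("\n".join(rules))

The missing piece compared to what was described above is parsing the URLs out of the stylesheet instead of hard-coding them.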
CSS sprites are stupid. So stupid, in fact, that I refuse to use them. It wouldn't be such a big deal, I suppose, if IE6 supported proper transparency. But as it stands, I still cannot, in 2009, safely use a 24-bit PNG for my CSS sprites.
If anything like this is ever standardized, which seems unlikely, and implemented in all the major browsers, which seems even more unlikely, MAYBE in 30 years we can use it in production.
But as far as fantasies go, it's one of the cooler ones!
I have no idea why I would "surely" mean that. Is "32-bit RGBA PNG" a synonym for "24-bit PNG with Alpha Transparency"? If so, I suppose I could have been more explicit, though I've never once seen such images referred to as that.
A 24-bit PNG would have no transparency, being three 8-bit channels for RGB. Alpha transparency is a channel too!
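If you're ever unsure what a given file actually is, Pillow will tell you the channel layout directly (just a quick check; the file name is an example):

    # Quick check of a PNG's channel layout with Pillow.
    from PIL import Image

    print(Image.open("sprite.png").mode)  # 'RGB' = 24-bit, no alpha; 'RGBA' = 32-bit with alpha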
To be fair, the PNG spec is really bad about referring to alpha as if it were something entirely different from a color sample. I have spent way too much time thinking about PNG minutiae for a project at work...
If you are thinking of local directory traversal (src="jar:http://kaioa.com/b/0907/test.jar!../../somefile") then I think you are underestimating browser coders. Introducing that security issue would require a dedicated effort.
No, I was thinking that in certain scenarios where it might be possible to control the name of the file being shown, using jar:http:// could trigger a remote request and possibly expose some information.
In the limit it does, because a faster network means more interrupts per second when the packet size is fixed. Thus, Ethernet jumbograms. Also, there can be problems with bit-rate-independent framing; for example, 802.11b loses a lot of its 11 Mbps to the packet preamble, since the preamble takes the same number of microseconds for compatibility with the older 802.11 standards. (I assume 802.11g solved this problem.)
I suspect you could gzip a working CP/M system into a 9000-byte Ethernet jumbogram.