Base64 on the web is such an unbelievably terrible solution to such a trivial problem that it beggars belief. The idea that parsing CSS somehow blocks on decompressing string-delimited, massively inefficient, nonlinearly encoded blobs of potentially incompressible data is insane.

We've taken the most obvious latency path a normal user sees and somehow decided that mess was better than sending an ar file.

(Not that this wasn't inevitable as soon as someone decided CSS should be a text format... sigh)
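To put a rough number on the inefficiency (a back-of-the-envelope sketch in Node, nothing any browser actually does): base64 emits 4 ASCII characters for every 3 input bytes, so an inlined image is roughly a third larger before it ever hits the wire.

    // Back-of-the-envelope: base64 turns every 3 bytes into 4 ASCII characters.
    const payload = Buffer.alloc(30_000);            // stand-in for a 30 KB image
    console.log(payload.toString("base64").length);  // 40000 -- a ~33% size penalty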




Why should parsing CSS block on decompressing base64, unless the CSS itself is a base64-encoded data: URI?

If you're talking about data: background images in CSS, then all the CSS parser has to do is find the end of the url(). It doesn't have to do any base64 decoding or anything like that until the rule actually matches.
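A minimal sketch of what I mean, in TypeScript rather than whatever a real engine uses internally (real tokenizers also handle quotes, escapes and whitespace; this just shows the shape of it):

    // Hypothetical sketch: skip over a url(...) token without decoding its contents.
    function skipUrlToken(css: string, start: number): number {
      // `start` points just past "url("; scan to the closing ")".
      const end = css.indexOf(")", start);
      if (end === -1) throw new Error("unterminated url()");
      // The data: payload between start and end stays an opaque string -- no
      // base64 decoding happens here; that only matters once the image is needed.
      return end + 1;
    }

    // Truncated, made-up payload, purely for illustration.
    const sheet = ".logo { background: url(data:image/png;base64,iVBORw0K...) }";
    const open = sheet.indexOf("url(") + 4;
    console.log(sheet.slice(open, skipUrlToken(sheet, open) - 1)); // the raw data: URI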


You have to gz-decompress it, which is a harder job, and the only reason you're doing that (at least for images) is to undo the inefficiency you just added.
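Roughly what I mean, sketched in Node (the exact numbers will vary with the input; this is just illustrative): an already-compressed image gains nothing from gzip on its own, base64 inflates it by a third, and gzip then spends CPU clawing most of that inflation back.

    // Illustrative sketch: gzip here mostly exists to undo the base64 expansion.
    import { gzipSync } from "node:zlib";
    import { randomBytes } from "node:crypto";

    const image = randomBytes(30_000);           // stand-in for incompressible image data
    const b64 = Buffer.from(image.toString("base64"));

    console.log(gzipSync(image).length);         // ~30 KB: nothing to gain, it's already "compressed"
    console.log(b64.length);                     // 40000: the base64-inflated version
    console.log(gzipSync(b64).length);           // ~30-31 KB: paid for with decompression work on load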


Ah, gz-decompress the stylesheet itself, ok.

For what it's worth, in my profiles of page loads, at least in Firefox, I haven't seen gz-decompression of stylesheets show up in any noticeable way, but I can believe it could be a problem if you have a lot of data: images in the sheet...



