Internet Explorer also has some weird filter properties that almost -- but not quite -- do what we need. If someone can figure it out, it would be significantly better than using FlashCanvas.
For this example I was able to quickly trim the jpg size from 35k to ~24k in about 60 seconds using Photoshop.
So your test doesn't tell us anything - maybe you just used a lower quality level.
(Although I agree with the basic premise, post-editing the image isn't such a good idea. Instead, use a lossless JPEG editor that can selectively cut out portions of the JPEG.)
And get a blocky crapbucket as a result. By making unused/un-displayed sections of the JPEG (or PNG or whatever) file easier to compress (which still requires some basic insight as to how the file's compression is performed), you lower the file's size at constant perceived quality. That would be the point.
Since we're nitpicking, I might as well point out that you can't "cut out" portions of a jpeg either.
If you do want to cut out the alpha region, don't trim right up to the image; trim only in 16-pixel blocks. A sharp edge inside a macroblock increases the storage requirements.
As far as I know no one has written a tool for it yet, but it's also possible to selectively remove/replace 16x16 pixel blocks inside the JPEG losslessly.
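For illustration, rounding a crop rectangle up to the next 16-pixel macroblock boundary is just ceiling division (the helper name here is my own):

```python
def pad_to_macroblock(width: int, height: int, block: int = 16) -> tuple:
    """Round dimensions up to the next multiple of the JPEG macroblock size."""
    pad = lambda n: ((n + block - 1) // block) * block
    return pad(width), pad(height)

# e.g. a 100x50 crop becomes 112x64, keeping edges on macroblock boundaries
```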
Anyway, here's the JPEG FAQ explaining why using JPEG for alpha wouldn't be a good idea even if it were supported: http://www.faqs.org/faqs/jpeg-faq/part1/section-12.html
The only real solution is to combine lossy JPEG storage of the image with lossless storage of a transparency mask using some other algorithm.
Which is exactly what the OP has done.
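As a rough sketch of that approach (using Pillow, with in-memory stand-ins for the two downloaded files and a hypothetical output filename), recombining the lossy color data with the lossless mask is a single `putalpha` call:

```python
from PIL import Image

# Stand-ins for the two downloaded files: a flattened JPEG-style RGB image
# and a grayscale PNG mask of the same dimensions.
color = Image.new("RGB", (64, 64), (200, 30, 30))
mask = Image.new("L", (64, 64), 0)
mask.paste(255, (16, 16, 48, 48))  # opaque center, fully transparent border

# Recombine: lossy color channels + lossless alpha channel.
rgba = color.convert("RGBA")
rgba.putalpha(mask)
rgba.save("recombined.png")  # hypothetical output path
```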
> In one case we got a 573KB 24-bit PNG down to a
> 49KB JPEG with a 4KB PNG alpha-mask!
On DSL the extra request probably doesn't hurt (assume a 150ms ping, taken as the time to initiate an HTTP connection, and 10Mbps of bandwidth, i.e. 1.25MB/s): the single image takes 150ms + 458ms of download for a total of 608ms, whereas the pair takes 2x150ms + 39ms + 3ms for a total of 342ms.
Now consider 3G: 3.6Mbps (460KB/s; a pretty basic HSDPA), where pings of around 600ms are common. The single image takes 600ms + 1246ms for a total of 1846ms, whereas the pair takes 2x600ms + 107ms + 9ms for a total of 1316ms. Still lower, but we're getting closer. 7.2Mbps HSDPA bumps the bandwidth to 920KB/s and doesn't change the ping significantly. Now the single image only takes 623ms to download for a total of 1223ms, while the pair takes 58ms of download for a total of 1258ms, so the single request finally comes out ahead.
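The arithmetic above is just one connection-setup ping per file plus bytes-over-bandwidth; a quick sketch of that model, using the round numbers from the comment (function name is my own):

```python
def total_ms(ping_ms: float, bw_kb_s: float, sizes_kb: list) -> float:
    """One connection-setup ping per file, plus transfer time, in ms."""
    return sum(ping_ms + size / bw_kb_s * 1000 for size in sizes_kb)

single, pair = [573], [49, 4]

dsl     = (total_ms(150, 1250, single), total_ms(150, 1250, pair))  # pair wins
hsdpa36 = (total_ms(600, 460, single),  total_ms(600, 460, pair))   # pair still wins
hsdpa72 = (total_ms(600, 920, single),  total_ms(600, 920, pair))   # single wins
```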
And cell data usage will only grow further.
Now of course, a 573KB full-color PNG is going to be hell on a mobile browser and its cache, but still, HTTP request cost is coming back with a vengeance as HSPA makes high amounts of bandwidth available.
(Somehow I missed that line in the article by focusing on the code.)
It might have better chances of adoption than a completely new format like WebP.
We'll just have another decade of complaining that IE doesn't support alpha, but we've been there and survived :)
JPEG has support for arbitrary metadata, mostly used today to put EXIF data in the files. IIRC, there's no reason we couldn't store a base64'd (or even raw binary) PNG file in there.
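Concretely, a custom APPn segment can be spliced in right after the SOI marker; a minimal sketch (the "PNGA" tag is made up, and a real tool would split payloads over the 65,533-byte per-segment limit):

```python
import struct

def embed_png_in_jpeg(jpeg_bytes: bytes, png_bytes: bytes) -> bytes:
    """Splice a PNG blob into a JPEG as an APP15 segment, right after SOI.

    Sketch only: the "PNGA" tag is made up, and a real tool would split
    payloads larger than 65,533 bytes across multiple segments.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    payload = b"PNGA\x00" + png_bytes
    assert len(payload) + 2 <= 0xFFFF, "payload too large for one APP segment"
    # Segment = marker (FF EF = APP15) + 2-byte big-endian length + payload;
    # the length field counts itself but not the marker.
    segment = b"\xff\xef" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```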
If I wasn't getting ready for a 2 month trip, I'd spend an hour or two whipping this up. Feel free to take this info and whip something up yourself!
You've also taught me another valuable lesson: I created exactly this (with a slightly different masking method) about a year ago, but I never got around to releasing it. Thank you for reminding me that real artists ship.
edit: The strength of PNGs is not photography, but images with large blocks of solid color: screenshots, etc. That's why the example in the article didn't suffer.
"The right tool for the job" and all that jazz.
Read the unofficial FAQ here: http://news.ycombinator.com/item?id=1755533 (read the main article)