
Rule of thumb: progressive JPEG costs ~3x more resources (CPU, memory) to decode than baseline JPEG.

With baseline JPEG, you can blit the decoded blocks to the screen directly and forget about them.

For progressive, you have to buffer the whole image (in DCT coefficient form!) and redo the IDCT / color-space conversion on every pass.
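
Roughly, with libjpeg the two paths look like this (untested sketch; the "blit row here" comments stand in for whatever the caller does with the pixels):

  #include <stdio.h>
  #include <jpeglib.h>

  int decode_and_blit(const char *path) {
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    /* Multi-scan (progressive) files are decoded in buffered-image mode. */
    cinfo.buffered_image = jpeg_has_multiple_scans(&cinfo);
    jpeg_start_decompress(&cinfo);

    JSAMPARRAY row = (*cinfo.mem->alloc_sarray)((j_common_ptr) &cinfo,
        JPOOL_IMAGE, cinfo.output_width * cinfo.output_components, 1);

    if (!cinfo.buffered_image) {
      /* Baseline: single pass; each row can be blitted and forgotten. */
      while (cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, row, 1);          /* blit row here */
    } else {
      /* Progressive: the whole image stays buffered as DCT coefficients,
         and IDCT + color conversion are repeated for every displayed pass. */
      while (!jpeg_input_complete(&cinfo)) {
        jpeg_start_output(&cinfo, cinfo.input_scan_number);
        while (cinfo.output_scanline < cinfo.output_height)
          jpeg_read_scanlines(&cinfo, row, 1);        /* blit row here */
        jpeg_finish_output(&cinfo);
      }
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return 1;
  }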




To minimize the additional computation, progressive JPEGs typically use a limited number of scans, for example 5.

By default, JPEG XL's progressive mode uses only two scans: the 8x8 DC first, then the rest of the transforms in a second pass, sent as 256x256 tiles in a priority order chosen at encode time. This choice lets JPEG XL do only one round of DCTs even in the progressive case. The 8x8 DC is interpolated up to full resolution using cheaper methods.

Because of these design choices, every JPEG XL image is guaranteed to be at least minimally progressive in the same way, i.e., 8x8 DC first. Having that guarantee makes it more rewarding for system designers to invest in extracting user-experience benefits from the feature.
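
To make the "interpolated using cheaper methods" part concrete: the DC pass is effectively a 1/8-scale image, so a preview can be produced by upsampling it 8x with any inexpensive filter. A toy sketch with plain bilinear (a generic stand-in, not libjxl's actual DC upsampling kernel):

  #include <stdint.h>

  /* Upsample a 1/8-scale grayscale DC image (dw x dh) to full size (ow x oh)
     with bilinear interpolation. */
  void upsample_dc_8x(const uint8_t *dc, int dw, int dh,
                      uint8_t *out, int ow, int oh) {
    for (int y = 0; y < oh; y++) {
      float fy = (y + 0.5f) / 8.0f - 0.5f;      /* map back to DC coordinates */
      int y0 = fy < 0 ? 0 : (int) fy;
      int y1 = y0 + 1 < dh ? y0 + 1 : dh - 1;
      float wy = fy - y0; if (wy < 0) wy = 0;
      for (int x = 0; x < ow; x++) {
        float fx = (x + 0.5f) / 8.0f - 0.5f;
        int x0 = fx < 0 ? 0 : (int) fx;
        int x1 = x0 + 1 < dw ? x0 + 1 : dw - 1;
        float wx = fx - x0; if (wx < 0) wx = 0;
        float top = dc[y0 * dw + x0] * (1 - wx) + dc[y0 * dw + x1] * wx;
        float bot = dc[y1 * dw + x0] * (1 - wx) + dc[y1 * dw + x1] * wx;
        out[y * ow + x] = (uint8_t) (top * (1 - wy) + bot * wy + 0.5f);
      }
    }
  }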


But, provided you don't need the intermediate results, you can rearrange the data back into sequential (baseline) order and then render it the simple way, for those times when CPU power is more constrained than the network. Total CPU use ends up being pretty much the same (the rearrangement step is rather cheap compared to the IDCTs).
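
With libjpeg, for instance, you don't even need to physically reorder the bytes to get this effect: buffered-image mode lets you absorb every scan first (entropy decoding only) and then do a single output pass, i.e. one IDCT + color conversion per block, just like a baseline decode. Sketch (setup and teardown as in the snippet upthread):

  cinfo.buffered_image = TRUE;
  jpeg_start_decompress(&cinfo);

  /* Entropy-decode all scans into the coefficient buffer; no IDCT or
     color conversion happens here. */
  while (jpeg_consume_input(&cinfo) != JPEG_REACHED_EOI)
    ;

  /* One output pass over the fully refined coefficients. */
  jpeg_start_output(&cinfo, cinfo.input_scan_number);
  while (cinfo.output_scanline < cinfo.output_height)
    jpeg_read_scanlines(&cinfo, row, 1);              /* blit row here */
  jpeg_finish_output(&cinfo);
  jpeg_finish_decompress(&cinfo);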


If you rearrange, you have to wait for the bitstream to finish downloading, and thus lose the 'display something early on' feature.


No... Once everything but the final pass of the image has been delivered, you have enough data to start rendering: the final scan's data arrives in block order, so blocks can be completed and blitted as it streams in. And since the final pass contains most of the bytes, you still get a pretty decent progressive render too.
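
With libjpeg this is also expressible in buffered-image mode: the output pass is never allowed to get ahead of the input, so you can begin the single output pass once the decoder enters the final scan, and rows come out as that scan streams in. Rough sketch (setup/teardown as in the earlier snippet, but with a suspending data source fed from the network; total_scans and wait_for_more_bytes() are hypothetical placeholders, since you need to know where the final scan starts):

  int total_scans = 10;            /* assumed known, e.g. from the encoder's scan script */
  boolean started_output = FALSE;

  while (!jpeg_input_complete(&cinfo)) {
    int state = jpeg_consume_input(&cinfo);           /* absorb newly arrived bytes */
    if (state == JPEG_SUSPENDED)
      wait_for_more_bytes();                          /* hypothetical network wait */
    if (!started_output && cinfo.input_scan_number == total_scans) {
      /* Everything except the final scan is buffered; begin the one output pass. */
      jpeg_start_output(&cinfo, cinfo.input_scan_number);
      started_output = TRUE;
    }
    if (started_output) {
      while (cinfo.output_scanline < cinfo.output_height) {
        if (jpeg_read_scanlines(&cinfo, row, 1) == 0)
          break;                                      /* output caught up with input */
        /* blit row here */
      }
    }
  }

  /* Input is complete; emit any remaining rows and finish up. */
  while (cinfo.output_scanline < cinfo.output_height)
    jpeg_read_scanlines(&cinfo, row, 1);              /* blit row here */
  jpeg_finish_output(&cinfo);
  jpeg_finish_decompress(&cinfo);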



