Scalable Bitmaps (2013) (ericportis.com)
26 points by natcombs on Aug 23, 2020 | 4 comments



As far as image encoders go, do we really need two parameters, bitrate and resolution, to describe compression? Why not have bitrate alone, and let you decompress an image to an arbitrary resolution because the encoding isn't tied to any particular one? For example, you could have a representation f(w, x, y), where x and y are the (floating-point) pixel coordinates you want to decompress and w is the compressed image representation. You could of course optimise this so you don't need one full evaluation of f() per output pixel. With some neural-network-based codec of the future this should be possible, and I hope image (and video/audio) codecs get there at some point.
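The idea above resembles an implicit neural representation: a fixed-size weight vector w is the "file", and decoding means evaluating a small network at whatever coordinates you want. A minimal sketch with numpy, using an assumed two-layer network with sinusoidal activations (SIREN-style); here w is random rather than trained, so it only demonstrates the resolution-independent interface, not actual compression:

```python
import numpy as np

def decode(w, x, y, hidden=16):
    """Evaluate a tiny coordinate MLP f(w, x, y) at continuous coordinates.

    w is the flattened weight vector (the "compressed representation");
    `hidden` is an assumed layer width. Purely illustrative: a real neural
    codec would train w so the function reproduces a specific image.
    """
    # Unpack w into two layers: (2 -> hidden) and (hidden -> 1).
    w1 = w[: 2 * hidden].reshape(2, hidden)
    b1 = w[2 * hidden : 3 * hidden]
    w2 = w[3 * hidden : 4 * hidden].reshape(hidden, 1)
    b2 = w[4 * hidden]
    coords = np.stack([np.asarray(x, float), np.asarray(y, float)], axis=-1)
    h = np.sin(coords @ w1 + b1)       # sinusoidal hidden layer
    return (h @ w2).squeeze(-1) + b2   # one scalar intensity per coordinate

rng = np.random.default_rng(0)
w = rng.normal(size=4 * 16 + 1)        # fixed-size code, independent of resolution

# Decode the same w at two different resolutions:
lo = decode(w, *np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)))
hi = decode(w, *np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)))
print(lo.shape, hi.shape)  # (8, 8) (64, 64)
```

The point is that the size of w is fixed while the output resolution is a free choice at decode time, which is exactly the decoupling of bitrate from resolution the comment describes.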


To avoid that complexity I sometimes use a single image for all devices: high resolution, with a relatively high level of JPEG compression (i.e. low bitrate, low quality).

Modern GUI frameworks and browsers use high-quality GPU-based scalers, so downscaling an image on a low-DPI device is almost free and averages away the JPEG artifacts. On high-DPI screens the pixel density hides those artifacts as well, unless you use a magnifier, whether glass or software.
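Why downscaling hides compression artifacts can be shown with a toy model: treat the artifacts as per-pixel noise on a clean image, and model the scaler as 2×2 area averaging (an assumption; real GPU scalers use fancier filters). Averaging four roughly independent samples halves the noise standard deviation:

```python
import numpy as np

# Toy setup: a flat "clean" image plus noise standing in for JPEG
# blocking artifacts; 2x2 area-averaging models the downscale a GUI
# framework performs when fitting a high-res image to a low-DPI device.
rng = np.random.default_rng(1)
clean = np.full((256, 256), 128.0)
artifacts = rng.normal(scale=8.0, size=clean.shape)
noisy = clean + artifacts

# 2x downscale: average each 2x2 block.
small = noisy.reshape(128, 2, 128, 2).mean(axis=(1, 3))

# Deviation from the clean value drops by about half after averaging.
print(noisy.std(), small.std())
```

This is why oversized, heavily compressed JPEGs can look fine once scaled down: the scaler doubles as a cheap denoiser.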


Maybe we should also consider using, or creating, a progressive-enhancement image format. FLIF, for example, can be downloaded partially to get part of the fidelity, and unlike other interlacing schemes, quality increases roughly linearly with the bytes received: https://flif.info/


This reminds me of mipmaps in game programming. You include the texture at its original size, then at 2 times smaller, 4 times smaller, 8 times smaller, and so on.

The graphics card chooses the two closest sizes when it has to draw the texture and interpolates between them.

It only adds about a third to the memory of the original image: each level has a quarter of the pixels of the one above, so the whole chain sums to 4/3 of the base size.
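The memory overhead of a full mipmap chain follows from a geometric series: each level has 1/4 the pixels of the one above, and 1 + 1/4 + 1/16 + … converges to 4/3. A quick sketch for an assumed 1024×1024 texture:

```python
def mip_pixels(size):
    """Total pixel count of a full mipmap chain for a square texture,
    halving each dimension down to 1x1."""
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 1024 * 1024
print(mip_pixels(1024) / base)  # ~1.333: only a third more memory
```

The same series explains why hardware can afford to generate and keep every level resident: the tail of the chain is almost free.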




