
Scalable Bitmaps (2013) - natcombs
https://ericportis.com/posts/2013/scalables/
======
0-_-0
As far as image encoders go, do we really need two compression parameters,
_bitrate_ and _resolution_? Why not have just bitrate, where you can decompress
an image to an arbitrary resolution because the encoding is not tied to any
particular resolution? For example, you could have a representation f(w,x,y)
where _x_ and _y_ are the (floating-point) pixel coordinates you want to
decompress, and _w_ is the compressed image representation. You can of course
optimise this so you don't need one evaluation of f() per pixel you want to get
out of _w_. With some neural-network-based codec of the future this should be
possible, and I hope image (and video/audio) codecs get there at some point.
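A toy sketch of the idea, with a handful of cosine coefficients standing in for the compressed representation _w_ (the basis and the coefficient values are illustrative, not any real codec):

```python
import math

def decode_pixel(w, x, y):
    # Evaluate the continuous image f(w, x, y) at floating-point
    # coordinates (x, y) in [0, 1]^2. Here w maps frequency pairs
    # (u, v) to cosine coefficients -- a stand-in for a learned code.
    return sum(c * math.cos(math.pi * u * x) * math.cos(math.pi * v * y)
               for (u, v), c in w.items())

def decode_image(w, width, height):
    # Sample f on a regular grid of any requested resolution.
    return [[decode_pixel(w, (i + 0.5) / width, (j + 0.5) / height)
             for i in range(width)]
            for j in range(height)]

# The same compressed representation decodes at any resolution.
w = {(0, 0): 0.5, (1, 0): 0.25, (0, 1): 0.25, (2, 3): 0.1}
thumb = decode_image(w, 8, 8)      # 8x8 preview
full = decode_image(w, 256, 256)   # 256x256 render from the same w
```

A neural codec would replace the fixed cosine basis with a learned function of (w, x, y), but the interface is the same: one bitrate knob, resolution chosen at decode time.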

------
Const-me
To avoid that complexity I sometimes use a single image for all devices, with
high resolution and relatively high level (=low bitrate, low quality) of jpeg
compression.

Modern GUI frameworks and browsers use high-quality GPU-based scalers, so
downscaling an image on a low-DPI device is almost free and gets rid of the
JPEG artifacts. On high-DPI screens the pixel density hides these artifacts as
well, unless you use a magnifier, whether a physical glass or a software one.
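The "downscaling hides the artifacts" effect can be sketched with a plain area-average filter; the ±8 noise below is a stand-in for JPEG ringing, not real JPEG output:

```python
import random

def downscale(img, factor):
    # Area-average downscale: each output pixel is the mean of a
    # factor x factor block. Averaging suppresses high-frequency
    # noise such as JPEG block/ringing artifacts.
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

# Toy input: a flat gray image plus +/-8 high-frequency "artifact" noise.
random.seed(0)
noisy = [[128 + random.choice([-8, 8]) for _ in range(16)] for _ in range(16)]
small = downscale(noisy, 4)  # 4x4 output; the noise averages toward 128
```

GPU scalers use better kernels than a box filter (bilinear, Lanczos), but the averaging principle is the same.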

------
quarantine
Maybe we should also consider using/making a progressive-enhancement image
format. FLIF, for example, can be downloaded partially to get part of the
fidelity. (Which, unlike other interlacing schemes, increases roughly linearly
with the bytes received.)
[https://flif.info/](https://flif.info/)
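Fetching just a prefix of such a file is an ordinary HTTP Range request; a minimal sketch (the URL is hypothetical, and the server must support byte ranges):

```python
from urllib.request import Request, urlopen

def fetch_prefix(url, num_bytes):
    # Fetch only the first num_bytes of a resource via an HTTP Range
    # request. With a progressive format like FLIF, this prefix already
    # decodes to a lower-fidelity version of the whole image.
    req = Request(url, headers={"Range": f"bytes=0-{num_bytes - 1}"})
    with urlopen(req) as resp:
        return resp.read()

# e.g. fetch_prefix("https://example.com/photo.flif", 16 * 1024)
# would give a rough preview from the first 16 KiB.
```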

------
ajuc
This reminds me of mipmaps in game programming. You include the texture at its
original size, then 2 times smaller, 4 times smaller, 8 times smaller, etc.

The graphics card chooses the two closest sizes when it has to draw the texture
and interpolates between them.

It only requires about 1/3 more memory than the original image, since each
level has a quarter of the pixels of the previous one (1 + 1/4 + 1/16 + … = 4/3).
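For a square power-of-two texture the chain's total size is a geometric series summing to 4/3 of the base level; a quick check:

```python
def mip_chain_pixels(width, height):
    # Pixel counts for each mip level: full size, then halved per
    # axis (clamped at 1) down to the final 1x1 level.
    levels = []
    while True:
        levels.append(width * height)
        if width == 1 and height == 1:
            break
        width, height = max(1, width // 2), max(1, height // 2)
    return levels

levels = mip_chain_pixels(1024, 1024)       # 11 levels: 1024^2 down to 1x1
overhead = sum(levels) / levels[0]          # ~1.333: 1 + 1/4 + 1/16 + ...
```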

