As far as image encoders go, do we really need two parameters to describe compression: bitrate and resolution? Why not just have bitrate, and let you decompress an image to an arbitrary resolution, because the encoding isn't tied to any particular one? For example, you could have a representation f(w, x, y), where x and y are the (floating-point) pixel coordinates you want to decompress and w is the compressed image representation. You can of course optimise this so you don't have to run f() once per output pixel. With some neural-network-based codec of the future this should be possible, and I hope image (and video/audio) codecs get there at some point.
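To make the f(w, x, y) idea concrete, here is a minimal sketch of a coordinate-based (implicit) representation: a tiny MLP whose weights play the role of w and which can be sampled at any resolution. The weights here are random, purely to show the interface; a real codec would fit w to a specific image and entropy-code it, and would batch the evaluation rather than calling f once per pixel as this naive version does.

```python
import numpy as np

def decode_pixel(w, x, y):
    """Evaluate a tiny MLP at normalized coordinates (x, y) in [0, 1].

    `w` stands in for the "compressed image"; here it is just a dict of
    random weight matrices, so the output is noise, not a real picture.
    """
    h = np.array([x, y])
    for W, b in zip(w["weights"][:-1], w["biases"][:-1]):
        h = np.sin(W @ h + b)            # sinusoidal hidden activations
    W, b = w["weights"][-1], w["biases"][-1]
    return W @ h + b                     # RGB output (unclamped)

def decode_image(w, width, height):
    """Decompress to an arbitrary resolution by sampling the function."""
    img = np.zeros((height, width, 3))
    for j in range(height):
        for i in range(width):
            img[j, i] = decode_pixel(w, i / (width - 1), j / (height - 1))
    return img

# Hypothetical "compressed" representation, just to show the shapes involved.
rng = np.random.default_rng(0)
sizes = [2, 64, 64, 3]
w = {
    "weights": [rng.normal(size=(o, i)) for i, o in zip(sizes, sizes[1:])],
    "biases":  [rng.normal(size=o) for o in sizes[1:]],
}

thumb = decode_image(w, 32, 32)      # same w, any output resolution
larger = decode_image(w, 128, 128)   # no second asset needed
```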
To avoid that complexity I sometimes use a single image for all devices: high resolution, with fairly aggressive JPEG compression (i.e. low bitrate, low quality).
Modern GUI frameworks and browsers use high-quality GPU-based scalers, so downscaling the image on a low-DPI device is almost free and hides the JPEG artifacts. On high-DPI screens the pixel density hides those artifacts as well, unless you're using a magnifying glass or a software magnifier.
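For what it's worth, producing that single asset is a one-liner with Pillow; the filenames, master resolution, and quality setting below are just placeholders for illustration, not recommendations:

```python
from PIL import Image

# One asset for all devices: encode at the largest resolution you expect
# to display, but with an aggressive JPEG quality setting.
img = Image.open("photo_original.png")          # hypothetical source file
img = img.resize((3000, 2000), Image.LANCZOS)   # master resolution
img.save(
    "photo_single_asset.jpg",
    "JPEG",
    quality=35,        # low quality = low bitrate; scaling hides artifacts
    optimize=True,     # extra Huffman optimization pass
    progressive=True,  # renders incrementally on slow connections
)
```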
Maybe we should also consider using or making a progressive-enhancement image format. FLIF, for example, can be downloaded partially to get part of the fidelity (which, unlike other interlacing schemes, increases roughly linearly with the bytes received). https://flif.info/
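One way to exploit that property over plain HTTP is a Range request: fetch only a prefix of the file, decode it as a low-fidelity preview, and pull more bytes later. A sketch with the standard library, using a placeholder URL:

```python
import urllib.request

def fetch_prefix(url, n_bytes):
    """Download only the first n_bytes of a file via an HTTP Range request.

    With a progressively coded format like FLIF, a truncated file is still
    decodable, just at lower fidelity; fetching more bytes refines it.
    """
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{n_bytes - 1}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# e.g. pull the first 20 kB as a preview, then the rest on demand
preview = fetch_prefix("https://example.com/photo.flif", 20_000)
```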