
Gamma error in picture scaling - rubatuga
http://www.ericbrasseur.org/gamma.html
======
pfranz
I'm not an expert, but I've dealt with my share of this working in film. Color
problems in image processing are everywhere. Pre-multiplication and Alpha
channels are another huge one. A lot of professionals don't even understand it
and will brute-force compensate for problems instead of understanding the
problem. Here's a posting from this week [1] from professionals who care about
getting this stuff right, trying to convert a TIFF to JPEG without surprising
color shifts.

A summary is that you need to do almost all image modifications in a linear
color space. So you first need to back out any color tweaks made (for
aesthetic or technical reasons), modify the image, then re-apply any tweaks.
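A minimal sketch of that decode-modify-re-encode round trip, using the exact piecewise sRGB curve (pure Python, one channel at a time; a real pipeline would vectorize this, and would also have to honor whatever color profile the file is actually tagged with):

```python
def srgb_to_linear(c8):
    """Decode one 8-bit sRGB channel value to linear light in [0, 1]."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    """Encode linear light in [0, 1] back to an 8-bit sRGB value."""
    lin = min(max(lin, 0.0), 1.0)
    c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)

def average_pixels(a, b):
    """Average two 8-bit values the right way: in linear light."""
    return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
```

Averaging pure black and pure white this way gives 188, not the naive 128 - which is exactly the kind of brightness shift the article's test images expose.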

Unfortunately, image files often aren't tagged with their color space and
might use a specific color space out of convention--but they could use
something else. Also, because it's much harder to do the right thing, a lot of
software just applies modifications while ignoring color space. Think about
how much slower, and how much more memory-hungry, drawing a web page full of
thumbnails would be if you added color space transformations. For a while, a
lot of browsers shipped support but left it off by default (imagine designing
your web page to look right in a broken color space, only for it to suddenly
become much slower and look wrong). But a lot of software is slowly catching up.
Unfortunately, legacy software (like Photoshop) and legacy file formats might
never change.

[1] http://lists.openimageio.org/pipermail/oiio-dev-openimageio.org/2019-January/001390.html

~~~
mark-r
You could do a rough approximation of gamma correction with squares and square
roots, which wouldn't add to the processing time like a full sRGB conversion
would.
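That rough approximation - treating gamma as exactly 2.0 so the transfer functions collapse into a squaring and a square root - might look like this sketch:

```python
def approx_linear_average(a, b):
    """Gamma-corrected average of two 8-bit values, approximating the
    sRGB curve with a plain gamma of 2.0: square, average, square-root."""
    return round(((a * a + b * b) / 2) ** 0.5)
```

approx_linear_average(0, 255) gives 180 - not the exact sRGB answer of about 188, but far closer than the naive (0 + 255) / 2 = 128, and it needs no transcendental functions at all.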

~~~
burfog
Going from 8-bit sRGB channels to 32-bit IEEE float channels is a simple
lookup in a table with 256 values.

Going the other way is only slightly harder. Scale to the range 0 ... 4095,
properly round the float to an integer, verify the range or mask with 0xfff to
protect against NaN and Inf, and then do a lookup in a table with 4096
values.
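A sketch of both tables in Python (the real implementations are in C; note that the NaN/Inf guard has to be explicit here, since the 0xfff masking trick is a C integer-conversion idiom):

```python
DECODE = []          # 8-bit sRGB -> linear float, 256 entries
for i in range(256):
    c = i / 255.0
    DECODE.append(c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4)

ENCODE = []          # quantized linear -> 8-bit sRGB, 4096 entries
for i in range(4096):
    lin = i / 4095.0
    c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    ENCODE.append(round(c * 255.0))

def float_to_srgb(lin):
    """Scale to 0..4095, round, clamp the range, then one table lookup."""
    if lin != lin:            # NaN guard
        return 0
    if lin <= 0.0:
        return 0
    if lin >= 1.0:            # also catches +Inf
        return 255
    return ENCODE[int(round(lin * 4095.0))]
```

4096 encode entries are enough that every 8-bit value survives the round trip exactly: the worst-case quantization error, 12.92 * 255 * 0.5 / 4095 ≈ 0.4 of a code value, stays under the rounding threshold.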

Tux Paint contains GPL code for it. It's been there for more than a decade.
All you GIMP and Photoshop users having trouble with this should have been
using Tux Paint. :-)

~~~
mark-r
Lookup tables are cache killers on modern CPUs. Arithmetic is often faster.

~~~
burfog
It's 5120 bytes for both of them, the access pattern is not at all random
(because some colors are far more popular than others), and the real cache
killer is the image itself: you're dealing with several megabytes there.

~~~
mark-r
Oh, the lookup tables are cached just fine. It's just that they crowd out
something else.

~~~
burfog
That "something else" is already crowded out by running through a multi-
megabyte image.

At least the instruction cache is barely affected, which would not be the case
for a math library function.

~~~
AstralStorm
You're not running through a multi-MB image but a small block of it. Not that
a 5-6 KiB LUT matters, but the extra conditional multiply ops, or the power or
logarithm function evaluations, do. The latter need a few MACs for a good
approximation (at least 12-bit precision, preferably 16-bit - something like a
5th-order polynomial); the former need masks, which is worse for SIMD.

------
sometime
The same phenomenon also applies to color blending. In a modern context this
phenomenon is especially visible in UI elements that blur the background: Up
until recently many UIs did not get the blurring right, resulting in greyish
dark spots between different colors. I think CSS blurring in most browsers
still gets it wrong.

[https://www.youtube.com/watch?v=LKnqECcg6Gw](https://www.youtube.com/watch?v=LKnqECcg6Gw)
(Computer Color is Broken, MinutePhysics)

~~~
zokier
Well, color blending is a whole other can of worms; there are no easy answers
there. Mixing RGB values, even in linear space, does not always yield
"correct" results. More problematic, I'm not sure there is even a well-defined
correct result for generalized color mixing/blending.

~~~
mark-r
Even something as simple as a proper gradient between two colors is
surprisingly difficult. I think I finally cracked that one though:
[https://stackoverflow.com/a/49321304/5987](https://stackoverflow.com/a/49321304/5987)

Color mixing is easy if you're mixing light: just add together the linear
intensities. Mixing pigments is a different order of complexity, because
pigments are both transmissive and reflective, and there are other effects
like dot gain and spectral lines to consider.
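As a sketch of the light-mixing case (using a plain power-law gamma of 2.2 for brevity rather than the exact piecewise sRGB curve, and making no attempt at the extra brightness correction discussed in the linked answer):

```python
def lerp_rgb_linear(c0, c1, t, gamma=2.2):
    """Interpolate two 8-bit RGB colors at position t in linear light."""
    return tuple(
        round(255.0 * ((a / 255.0) ** gamma * (1.0 - t)
                       + (b / 255.0) ** gamma * t) ** (1.0 / gamma))
        for a, b in zip(c0, c1))
```

The midpoint of red and green comes out as (186, 186, 0), keeping its perceived brightness, where naive per-channel averaging gives a muddy (128, 128, 0).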

------
mark-r
This is over 10 years old, and I just tested 3 browsers - NONE of them got it
right!

For image editing at home I use my own software to do resizing, using
Lanczos-5 in linear color space.
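The core of such a resizer, reduced to the simplest possible case - a 2x box filter standing in for the Lanczos-5 kernel, a single gray channel, and a plain 2.2 gamma - might look like:

```python
def halve_linear(pix, w, h, gamma=2.2):
    """Downscale a grayscale image (flat row-major list of 8-bit values)
    by 2x, averaging each 2x2 block in linear light."""
    out = []
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            block = (pix[y * w + x],       pix[y * w + x + 1],
                     pix[(y + 1) * w + x], pix[(y + 1) * w + x + 1])
            lin = sum((v / 255.0) ** gamma for v in block) / 4.0
            out.append(round(255.0 * lin ** (1.0 / gamma)))
    return out
```

A black/white checker block collapses to 186 rather than the naive 128 - exactly the failure the article's test image makes visible.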

~~~
datenwolf
Browsers get a lot of things wrong. Some time ago I hacked together this little test:

[https://datenwolf.net/rndrchk/](https://datenwolf.net/rndrchk/)

 _EDIT: If everything is done correctly, the text in the upper div will remain
unreadable._

If browsers were doing gamma correct scaling, the perceived brightness of the
checker pattern would not change.

Even more problematic, when zooming, the interpolation and blending aren't
gamma correct either. With mere nearest-neighbor interpolation this would
remain hidden.

This becomes really annoying on HiDPI screens. For some reason the "px" CSS
unit has been "redefined" as an "average-ish" angular distance when viewed on
a 96DPI screen at an arm's length or so. And for higher resolutions px is
scaled accordingly. Whoever came up with that nonsense created a lot of issues
down the road. _sigh_

 _EDIT: Also Opera (before it switched to WebKit) did some really funky
business at the edges, when upscaling images. Interpolation would always
clamp-to-edge instead of respecting the tiling mode._

~~~
zamadatix
Regardless of whether they had included the device-relative px in the spec or
not, CSS doesn't define rendering, it defines layout. I.e. it's intentional
that CSS doesn't let you define things in terms of physical device pixels.
Technically you could hack it for real-world devices with a ton of media
queries, but that's not their intended usage (hence needing a ton).

If you're intending to render something in the browser (calculating DOM
elements, canvas, or anything else) they expect you to handle it in your
rendering code (JS/WASM).

~~~
datenwolf
The whole intention of my little test hack in the first place was to show
that, without being able to define _layout_ with device-native unit precision,
the end result becomes unpredictable.

The idea was to apply a checkerboard background image to both the body and the
div, but translate the checkerboard by 1px in the div. By their very nature,
pixel-based images are defined in pixels, and the most naive way to display
them is a 1:1 mapping of image pixels to CSS px units. This is how things
started out; only later did features like page zooming, responsive scaling,
HiDPI and so on come along. Of course that means images must be scaled. But
this scaling must be well defined, so that the output differs only in
resolution, not in layout or visual outcome.

And right now browsers fail to do this. Two years ago, when I came up with
that test, Opera and Android WebView even failed to properly translate the
position of the div background; it looked as if, somewhere in the scaling, the
translation was coerced onto the device-native pixel grid.

A checkerboard pattern can be understood as a Haar wavelet, or in spatial
frequency terms as a sum of shifted sinc functions,
Σₖ Σₗ sin(x−k)/(x−k) · sin(y−l+π)/(y−l+π). When applied to the pixel sampling
grid, a 2×2 checkerboard pattern sits right at the Nyquist limit. Upscaling it
with an ideal filtering kernel (sin(x)/x = sinc(x) = Lanczos) yields a single
frequency (a sinusoid).

Adding two such signals at exactly π phase difference gives perfect
destructive interference; add some phase offset and it becomes constructive.
This little detour into signal theory should make it clear that you have to
take great care when scaling and positioning stuff in a layout. Scale
corresponds to frequency, position corresponds to phase. And it should be
noted that in a visual signal, phase carries the bulk of the information, so
image transformations should preserve phase where possible.

That it also shows that interpolation and blending are broken, too (i.e. don't
respect gamma) is a secondary outcome.
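The interference argument can be checked numerically with a 1-D sketch (ideal Whittaker-Shannon reconstruction with the sinc kernel; a toy, not a browser's actual resampler):

```python
import math

def sinc_interp(samples, t):
    """Ideal reconstruction of a sampled signal at continuous position t."""
    total = 0.0
    for n, s in enumerate(samples):
        u = math.pi * (t - n)
        total += s if u == 0.0 else s * math.sin(u) / u
    return total

# A row sampled at the Nyquist limit (a 1-D slice of a checkerboard)...
row = [1.0, -1.0] * 8
# ...and the same row shifted by one sample, i.e. pi out of phase:
shifted = row[1:] + [row[0]]
```

Evaluated between the samples, row reconstructs to a sinusoid, and row + shifted cancel to zero - which is why a one-pixel translation of the checkerboard is such a sensitive probe of a browser's resampling.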

~~~
zamadatix
If by "only later" you mean the original CSS1 spec in '96, then yeah. The web
has been built on relative units long enough that anyone who has ever hit
"print" didn't have to worry about images being 1/6th the size of the rest of
the page.

I don't see how broken interpolation/blending is a secondary outcome. That it
only works at "native" resolution is a result of interpolation being broken,
not the other way around. If it were fixed, you wouldn't be looking to use
device pixels (again, except for something like canvas rendering in a web
image editor).

------
kazinator
I'm playing with the Dalai Lama picture in Gimp and seeing some interesting
results.

If you decompose the image into LAB, then the individual channels exhibit this
scaling issue.

But the kicker here is that when you're working with these decomposed
channels, they are visualized in the GUI as grayscale.

So never mind the fact that they represent color; that is irrelevant. Color
seems to be a "gray herring" here.

What's fascinating is that the eyes are not fooled; optical blurring and
scaling are somehow resilient.

So it really is just about the intensity mapping, underscoring how this is a
gamma issue.

The data is contrived so that a linear treatment of the intensities will
average out to around zero.

This reminds me of say, the wrongness of mixing logarithmically compressed
audio samples just by adding them together.
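The audio analogue can be sketched the same way - expand the companded samples to linear, mix, re-compress (the continuous μ-law formula here, not the 8-bit G.711 tables):

```python
import math

MU = 255.0

def mu_expand(y):
    """Decode a mu-law value in [-1, 1] to linear amplitude."""
    return math.copysign(((1.0 + MU) ** abs(y) - 1.0) / MU, y)

def mu_compress(x):
    """Encode a linear amplitude in [-1, 1] to mu-law."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mix(y1, y2):
    """Mix two mu-law samples correctly: expand, add, re-compress."""
    return mu_compress(mu_expand(y1) + mu_expand(y2))
```

Adding the compressed values directly overstates quiet sums badly, for exactly the same reason naive pixel averaging goes wrong: the curve between the stored value and the physical quantity is nowhere near linear.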

------
AstralStorm
Ah yes, linear light scaling. Implemented in Kodi and MPV since times
immemorial... in Imagemagick too. Like 10 years.

It also improves scaling quality for most resampling algorithms.

------
kazinator
What if an image is so contrived that the _correctly linearized_ data averages
into a flat, gray signal?

Won't the "correct" scaler succumb to the problem?

The answer is no. Here is why: such a contrived image will actually look gray,
and so when it scales to a gray square, that is arguably correct.

We can see this readily with the Dalai Lama image: if we apply even a modest
gamma curve to bring up its midtones, it turns more or less gray.

------
nayuki
Similar idea: [https://www.nayuki.io/page/gamma-aware-image-dithering](https://www.nayuki.io/page/gamma-aware-image-dithering)

