Tangent: would it be correct to conclude that if I am trying to convert a high quality image into one with a limited palette that uses dithering (basically, think of gifs here), it would be best to determine the palette in a perceptual color space, and then proceed to handle the image dithering to that palette using a linear color space?
I've been meaning to explore whether the ideas in 'Efficient palette-based decomposition and recoloring of images via RGBXY-space geometry' could be used for creating better paletted PNGs and GIFs. Learning more about color spaces is quite relevant, I would say.
Thank you for confirming my suspicions, and for the link!
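The split suggested above (pick the palette perceptually, diffuse quantization error in linear light) could be sketched roughly like this; the palette here is assumed to have been chosen beforehand, e.g. by clustering in Oklab or CIELAB, and only the error-diffusion half is shown:

```python
import numpy as np

def srgb_to_linear(c):
    """sRGB decoding (per channel, values in [0, 1])."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def dither_to_palette(img_srgb, palette_srgb):
    """Floyd-Steinberg error diffusion performed in linear light.

    img_srgb: H x W x 3 array of sRGB values in [0, 1].
    palette_srgb: P x 3 array of palette entries (assumed already chosen,
    ideally in a perceptual space). Returns an H x W array of palette indices.
    """
    img = srgb_to_linear(img_srgb).copy()
    pal = srgb_to_linear(palette_srgb)
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            # nearest palette entry by linear-RGB distance
            idx = int(np.argmin(((pal - old) ** 2).sum(axis=1)))
            out[y, x] = idx
            err = old - pal[idx]
            # diffuse the error to unvisited neighbours (7/16, 3/16, 5/16, 1/16)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

One open question this sketch glosses over: the nearest-palette match is done with linear-RGB distance, and arguably that matching step could itself be perceptual while only the error accumulation stays linear.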
What if, instead of cramming the non-Euclidean space of perceived colors into a Euclidean space and interpreting straight lines there as gradients, we used geodesic curves in the original non-Euclidean metric? Are there efforts to explore this area?
This is known as "the super-importance of hue", FWIW.
> What if [...] we used geodesic curves in the original non-Euclidean metric?
I came across a couple of examples of non-Euclidean colour spaces recently.
'Hyperbolic geometry for colour metrics':
'H2SI - A new perceptual colour space' (uses a four dimensional normalized Hilbert space, whatever that is):
Does anyone know of good open data that could be used to generate a metric on colourspace?
To understand the comparison it is critical to state what absolute luminance was used for media white in the ICtCp gradient. 80? 100? 203? 10,000 (surely not)? The SDR-to-HDR mapping chosen has a big effect on the results for these HDR colorspaces using PQ or PQ-like transfer functions.
That mapping also has a big effect on the comparative transfer function graph. I worry that the curve for ST2084 starts and ends at the same place the others do. Does that mean you mapped media white to full-scale, absolute maximum small-area peak white?
I would have liked to see the arguments in favor of a single transfer function vs. a piecewise curve-plus-linear (or curve-plus-curve, in the case of HLG) expanded a bit more, beyond computational efficiency and GPU-friendliness, and more in the direction of correctness and fitness for purpose.
> For this reason I have designed a new perceptual color space, designed to be simple to use, while doing a good job at predicting perceived lightness, chroma and hue.
What does it mean for a color space to predict lightness, chroma, and hue?
Another useful function of a color space is to predict the threshold of when two similar colors are just barely perceptibly different.
Not all the colourspaces are perceptually uniform nor try to be. For example, CIE XYZ is photometrically linear and appropriately so!
Y = relative luminance, XYZ are tristimulus coordinates, and x, y or u', v' are chromaticity coordinates. These are all physical rather than perceptual quantities.
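To make the physical-vs-perceptual distinction above concrete: relative luminance Y is a plain weighted sum of linear RGB (a photometric quantity), while CIE L* is a nonlinear compression of Y that tries to track perceived lightness. A minimal sketch, using the Rec. 709 luminance weights and the standard CIELAB lightness formula:

```python
def relative_luminance(r, g, b):
    """Relative luminance Y from *linear* sRGB components (Rec. 709 weights).
    Physical quantity: a straight weighted sum, no perceptual modelling."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def cielab_lightness(y):
    """CIE L* from relative luminance Y, with white point Y_n = 1.
    Perceptual quantity: a cube-root-like compression of Y."""
    d = 6 / 29
    f = y ** (1 / 3) if y > d ** 3 else y / (3 * d ** 2) + 4 / 29
    return 116 * f - 16
```

Note how an 18% grey card (Y = 0.18) comes out near L* = 50, i.e. perceptually "middle" even though it reflects far less than half the light.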
We don’t know how any given person will see and interpret light, so it’s all about guessing what will look right to most people.
But maybe I get it. Is it that since a lab colorspace has lightness and opponent color coordinates, a lab colorspace, for example, "predicts lightness well" when a small increase in the L coordinate leads to a small increase in the perceived lightness of the resultant color?
Take the blue–white gradient: that’s about saying “what happens if we take a certain blue, and lighten it?” Expressed otherwise: “given ‘blue’, what do people think ‘light blue’ is?” It’s predicting light blue, based on the shape baked into the colour model.
sRGB blending says "blue channel's already full, let's increase the red and green channels". That's tolerable, but not great. Most people will look at it and be very subtly discontent, though they probably can't quite put their finger on what's wrong with it. It's just a little too indigo.
CIELAB does a terrible job of this particular prediction. People will tend to agree that its guess at light blue is actually a light purple.
IPT, Oklab and ICtCp have been modelled to know that, in order to lighten a blue, you actually have to add a little more green than red (increase the green channel more than the red channel), because that’s just how our eyes and brains work, how we perceive colours.
Or the blue–yellow gradient: it’s asking “what colour is half-way between blue and yellow?” sRGB thinks a mid-grey is, and CIELAB thinks mauve is: these two are quite clearly wrong. The interesting ones are that IPT and ICtCp think a sort of teal is, while Oklab thinks it’s a bit paler and not quite so green. These are the predictions that have been designed into the colour space. Who’s more correct? I dunno, actually. They’re both pleasant to look at.
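The blue–yellow comparison above is easy to try directly. A sketch of taking the gradient midpoint in Oklab instead of linear sRGB, using the forward and inverse transforms published in the Oklab announcement post (coefficients copied from there; inputs are linear sRGB):

```python
import math

def linear_srgb_to_oklab(r, g, b):
    """Linear sRGB -> Oklab (coefficients from the Oklab post)."""
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_, m_, s_ = (math.copysign(abs(v) ** (1 / 3), v) for v in (l, m, s))
    return (0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
            1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
            0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_)

def oklab_to_linear_srgb(L, a, b):
    """Oklab -> linear sRGB (inverse transform from the same post)."""
    l_ = L + 0.3963377774 * a + 0.2158037573 * b
    m_ = L - 0.1055613458 * a - 0.0638541728 * b
    s_ = L - 0.0894841775 * a - 1.2914855480 * b
    l, m, s = l_ ** 3, m_ ** 3, s_ ** 3
    return (4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
            -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
            -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s)

def midpoint_oklab(c1, c2):
    """Halfway point of a gradient interpolated in Oklab, returned as linear sRGB."""
    o1 = linear_srgb_to_oklab(*c1)
    o2 = linear_srgb_to_oklab(*c2)
    return oklab_to_linear_srgb(*[(u + v) / 2 for u, v in zip(o1, o2)])
```

Averaging blue (0, 0, 1) and yellow (1, 1, 0) channel-wise in linear sRGB gives exactly mid-grey (0.5, 0.5, 0.5); `midpoint_oklab` lands somewhere else, which is the "designed-in prediction" being discussed. (The result can fall slightly outside the sRGB gamut, so a real gradient tool would clamp or gamut-map it.)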
I have trouble understanding the precise meaning of this sentence. Can it be formalized in standard terms from differential geometry? For example, that the color space has negative curvature? (As opposed to the surface of a sphere, where a circle of radius r has circumference shorter than 2πr, so it has positive curvature.)
You're right that it can be formalised in terms of curvature. But we can't say that it has negative curvature because the curvature is nonconstant. It in fact has positive curvature in some regions and negative in others.
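For a concrete formalisation of the circle-circumference intuition: the standard Bertrand–Diguet–Puiseux expansion relates the circumference C(r) of a geodesic circle to the Gaussian curvature K at its center,

```latex
C(r) = 2\pi r \left( 1 - \frac{K r^2}{6} + O(r^4) \right),
\qquad
K = \lim_{r \to 0} \frac{3}{\pi} \cdot \frac{2\pi r - C(r)}{r^3},
```

so "circles shorter than 2πr" is exactly K > 0, and the sign of K can indeed vary from region to region of the space.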
From that perspective, I definitely support offering designers either an IPT or Oklab based gradient tool. With the lightness predictions, it's likely that Oklab will do a slightly better job on blend modes, but clearly they're both good color spaces, certainly a major improvement over CIELAB which is vastly more popular.