An interactive review of the Oklab perceptual color space (raphlinus.github.io)
120 points by raphlinus 8 months ago | hide | past | favorite | 33 comments



> All that said, there are definitely cases where you do not want to use a perceptual space. Generally for image filtering, antialiasing, and alpha compositing, you want to use a linear space (though there are subtleties here). And there are even some cases where you want to use a device space, as the device gamut is usually a nice cube there, while it has quite a complex shape in other color spaces.

Tangent: would it be correct to conclude that if I am trying to convert a high-quality image into one with a limited palette that uses dithering (basically, think of GIFs here), it would be best to determine the palette in a perceptual color space, and then dither the image to that palette in a linear color space?
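FWIW, the split can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the cube-root "perceptual" transform is a crude placeholder for a real space like CIELAB or Oklab, and a real quantizer would build the palette with something like median-cut rather than take it as given.

```python
# Sketch: match against the palette in a (crude) perceptual space,
# but diffuse quantization error in linear RGB (Floyd-Steinberg).

def srgb_to_linear(c):
    # sRGB EOTF, c in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def perceptual(rgb_lin):
    # stand-in for a real perceptual space: cube root roughly
    # approximates the lightness response
    return tuple(c ** (1.0 / 3.0) for c in rgb_lin)

def nearest(palette_lin, rgb_lin):
    # pick the palette entry closest in the perceptual space
    p = perceptual(tuple(max(0.0, min(1.0, c)) for c in rgb_lin))
    return min(palette_lin,
               key=lambda q: sum((a - b) ** 2
                                 for a, b in zip(perceptual(q), p)))

def dither(pixels, w, h, palette_srgb):
    # pixels: row-major list of sRGB triples in [0, 1]
    lin = [tuple(map(srgb_to_linear, px)) for px in pixels]
    pal = [tuple(map(srgb_to_linear, px)) for px in palette_srgb]
    out = [None] * (w * h)
    err = [(0.0, 0.0, 0.0)] * (w * h)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            old = tuple(c + e for c, e in zip(lin[i], err[i]))
            new = nearest(pal, old)
            out[i] = new
            e = tuple(o - n for o, n in zip(old, new))
            # Floyd-Steinberg error diffusion, in linear light
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    j = ny * w + nx
                    err[j] = tuple(a + b * wgt for a, b in zip(err[j], e))
    return out
```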


Yes. It turns out there is quite a bit of literature on palette generation and quantization ([1] comes to mind, which uses CIELAB as you describe), but limited palettes are mostly a retro curiosity these days.

[1]: https://engineering.purdue.edu/~bouman/publications/pdf/ei94...


Or a tinkering curiosity that is fun to play with, in my case :)

I've been meaning to explore if the ideas in Efficient palette-based decomposition and recoloring of images via RGBXY-space geometry[0] could be used for creating better paletted PNGs and GIFs. Learning more about color spaces is quite relevant, I would say.

Thank you for confirming my suspicions, and for the link!

[0] https://cragl.cs.gmu.edu/fastlayers/


That sounds right to me.


> As it turns out, such a thing is no more possible than flattening an orange peel, because color perception is inherently non-Euclidean. To put it another way, the ratio of perceptually distinct steps around a hue circle to those directly across through gray is greater than would be expected as a circle in an ordinary Euclidean space.

What if, instead of cramming the non-Euclidean space of perceived colors into a Euclidean space and interpreting straight lines there as gradients, we used geodesic curves in the original non-Euclidean metric? Are there efforts to explore this area?


>> the ratio of perceptually distinct steps around a hue circle to those directly across through gray is greater than would be expected as a circle in an ordinary Euclidean space

This is known as "the super-importance of hue", FWIW.

> What if [...] we used geodesic curves in the original non-Euclidean metric?

I came across a couple of examples of non-Euclidian colour spaces recently.

'Hyperbolic geometry for colour metrics':

https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-22-10-1...

'H2SI - A new perceptual colour space' (uses a four-dimensional normalized Hilbert space, whatever that is):

https://www.researchgate.net/publication/259265578_H2SI_-_A_...


I think the issue is that you've described something qualitatively without giving the needed quantitative specification of the non-Euclidean space. An implicit surface with some parameterization? A family of geodesics, selecting a curve with a parameter per curve? How do artists specify where in the color space they are? How is your coordinate mapping defined?


This is a good question, but it's not obvious that geodesics will lead to the most visually pleasing gradients. In particular, because of hue super-importance, a geodesic between two highly chromatic colors of different hues will bend toward gray, which might not be as representative of a "gradient" intent as linear motion through a nice space such as Oklab or IPT.


You can just use the standard RGB (or any other) coordinate system, but then use data on perceptual differences to find geodesic gradients.


A parametrization and local metric at every point.


I was interested in doing this as a project, but I couldn't find any good data on perceptual differences to work from. MacAdam's famous paper only collected data for one brightness level.

Does anyone know of good open data that could be used to generate a metric on colourspace?


Firstly, really nicely done. Very clear, and the interactive gradient explorer was excellent for quickly exploring the spaces.

To understand the comparison it is critical to state what absolute luminance was used for media white in the ICtCp gradient. 80? 100? 203? 10,000 (surely not)? The SDR-to-HDR mapping chosen has a big effect on the results for these HDR colorspaces using PQ or PQ-like transfer functions.

That mapping also has a big effect on the comparative transfer function graph. I worry that the curve for ST2084 starts and ends at the same place the others do. Does that mean you mapped media white to full-scale, absolute maximum small-area peak white?

I would have liked to see the arguments in favor of a single transfer function vs. a piecewise curve-plus-linear (or curve-plus-curve, in the case of HLG) expanded a bit beyond computational efficiency and GPU-friendliness, more in the direction of correctness and fitness for purpose.


The words "predict" and "prediction" are used here and in the original article on Oklab in a way that I'm not familiar with.

For example:

> For this reason I have designed a new perceptual color space, designed to be simple to use, while doing a good job at predicting perceived lightness, chroma and hue.

What does it mean for a color space to predict lightness, chroma, and hue?


It's a good point, I wasn't precise in my language. A good way to understand it is that if two different colors have the same lightness (or hue, or saturation) in some color space, that "predicts" that they will be perceived to have the same lightness as well. It's impossible to get these predictions exactly right (in part because actual perception depends on so many variables), but some can be better than others.

Another useful function of a color space is to predict the threshold of when two similar colors are just barely perceptibly different.
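To make that threshold idea concrete, the simplest such predictor is the CIE76 color difference, which is just Euclidean distance in CIELAB (later formulas like CIEDE2000 refine it considerably; this is only the textbook baseline):

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 color difference: plain Euclidean distance in CIELAB.
    # A commonly cited rule of thumb puts the just-noticeable
    # difference somewhere around delta E ~ 2.3.
    return math.dist(lab1, lab2)
```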


> Another useful function of a “perceptually uniform” colorspace.

Not all the colourspaces are perceptually uniform nor try to be. For example, CIE XYZ is photometrically linear and appropriately so!


I'm a bit lost in the discussion here - is your first line a correction of the GP and you used the > mark 'wrong', or did the GP update his post and is the quote not in there any more?


The first. I should have added "perceptually uniform," kelsolaar is correct.


XYZ does not provide correlates of hue, lightness, brightness, chroma, colorfulness, saturation, blueness–yellowness, redness–greenness, or the like.

Y = relative luminance, XYZ are tristimulus coordinates, and x, y or u', v' are chromaticity coordinates. These are all physical rather than perceptual quantities.
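For a concrete example of the physical (not perceptual) nature of Y: relative luminance is a fixed weighted sum of the *linear* RGB components. For sRGB/Rec.709 primaries the weights are:

```python
def relative_luminance(r_lin, g_lin, b_lin):
    # Y (relative luminance) from linear-light sRGB components,
    # using the Rec.709/sRGB primaries' luminance weights.
    # Note: these apply to linear values, not gamma-encoded ones.
    return 0.2126 * r_lin + 0.7152 * g_lin + 0.0722 * b_lin
```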


That makes sense, thank you!


It’s the normal sense of the words, because it’s about perception rather than any hard ground truth. Hence “predicting perceived lightness, chroma and hue”.

We don’t know how any given person will see and interpret light, so it’s all about guessing what will look right to most people.


It's a bit strange to me, because how does a colorspace "predict" anything?

But maybe I get it. Is it that since a lab colorspace has lightness and opponent color coordinates, a lab colorspace, for example, "predicts lightness well" when a small increase in the L coordinate leads to a small increase in the perceived lightness of the resultant color?


The gradients offer good examples.

Take the blue–white gradient: that’s about saying “what happens if we take a certain blue, and lighten it?” Expressed otherwise: “given ‘blue’, what do people think ‘light blue’ is?” It’s predicting light blue, based on the shape baked into the colour model.

sRGB blending says “blue channel’s already full, let’s increase the red and green channels”. That’s tolerable, but not great. Most people will look at it and be very subtly discontent, though they probably can’t quite place their finger on what’s wrong with it. It’s just a little too indigo.

CIELAB does a terrible job of this particular prediction. People will tend to agree that its guess at light blue is actually a light purple.

IPT, Oklab and ICtCp have been modelled to know that, in order to lighten a blue, you actually have to add a little more green than red (increase the green channel more than the red channel), because that’s just how our eyes and brains work, how we perceive colours.

Or the blue–yellow gradient: it’s asking “what colour is half-way between blue and yellow?” sRGB thinks a mid-grey is, and CIELAB thinks mauve is: these two are quite clearly wrong. The interesting ones are that IPT and ICtCp think a sort of teal is, while Oklab thinks it’s a bit paler and not quite so green. These are the predictions that have been designed into the colour space. Who’s more correct? I dunno, actually. They’re both pleasant to look at.
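For anyone who wants to reproduce these gradients, the whole pipeline is small enough to sketch in Python. The matrices are the published Oklab ones (from Björn Ottosson's original post); the structure — decode sRGB to linear, convert to the perceptual space, lerp, convert back — is the general recipe, not anything specific to this article:

```python
import math

def srgb_to_linear(c):
    # sRGB EOTF: gamma-encoded [0,1] -> linear light
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # inverse EOTF, clamped to the sRGB gamut
    c = max(0.0, min(1.0, c))
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def linear_srgb_to_oklab(r, g, b):
    # matrices as published for Oklab
    l = 0.4122214708*r + 0.5363325363*g + 0.0514459929*b
    m = 0.2119034982*r + 0.6806995451*g + 0.1073969566*b
    s = 0.0883024619*r + 0.2817188376*g + 0.6299787005*b
    # signed cube root (plain ** (1/3) goes complex for negatives)
    l_, m_, s_ = (math.copysign(abs(v) ** (1/3), v) for v in (l, m, s))
    return (0.2104542553*l_ + 0.7936177850*m_ - 0.0040720468*s_,
            1.9779984951*l_ - 2.4285922050*m_ + 0.4505937099*s_,
            0.0259040371*l_ + 0.7827717662*m_ - 0.8086757660*s_)

def oklab_to_linear_srgb(L, a, b):
    l_ = L + 0.3963377774*a + 0.2158037573*b
    m_ = L - 0.1055613458*a - 0.0638541728*b
    s_ = L - 0.0894841775*a - 1.2914855480*b
    l, m, s = l_**3, m_**3, s_**3
    return ( 4.0767416621*l - 3.3077115913*m + 0.2309699292*s,
            -1.2684380046*l + 2.6097574011*m - 0.3413193965*s,
            -0.0041960863*l - 0.7034186147*m + 1.7076147010*s)

def lerp_oklab(c1_srgb, c2_srgb, t):
    # gradient sample: interpolate linearly in Oklab, convert back
    lab1 = linear_srgb_to_oklab(*map(srgb_to_linear, c1_srgb))
    lab2 = linear_srgb_to_oklab(*map(srgb_to_linear, c2_srgb))
    lab = tuple(a + (b - a) * t for a, b in zip(lab1, lab2))
    return tuple(map(linear_to_srgb, oklab_to_linear_srgb(*lab)))
```

Swapping `linear_srgb_to_oklab`/`oklab_to_linear_srgb` for another space's conversions is exactly how the article's gradient explorer compares them.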


This is a great explanation! Thank you, it makes perfect sense now.


CIELAB colours seem wrong in many gradients.


The article suggests that it’s a feature of the way we perceive colour: if you get a spinning top with half blue and half white and spin it so the colours blend together (i.e. taking the average of the spectrum from each colour), it will shift towards purple as well as lighter. To get a gradient that seems to go from blue to white without going via purple, a colour space needs to counteract this aspect of our vision.


I was noticing that as well. They seem quite off in a surprising way


Yeah the blue looks purple


> the ratio of perceptually distinct steps around a hue circle to those directly across through gray is greater than would be expected as a circle in an ordinary Euclidean space.

I have trouble understanding the precise meaning of this sentence. Can it be formalized in standard terms from differential geometry? For example, that the color space has negative curvature? (As opposed to the surface of a sphere, where circles of radius r have circumference shorter than 2πr, so it has positive curvature.)


> Can it be formalized in standard terms from differential geometry? For example, than the color space has negative curvature?

You're right that it can be formalised in terms of curvature. But we can't say that it has negative curvature because the curvature is nonconstant. It in fact has positive curvature in some regions and negative in others.
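To make the circle test concrete (this is a standard differential-geometry fact, not specific to colour): on a sphere of radius $R$, a geodesic circle of radius $r$ has circumference

```latex
C(r) = 2\pi R \sin\!\left(\frac{r}{R}\right) < 2\pi r,
```

so fewer distinguishable steps fit around it than Euclidean geometry predicts. Hue super-importance is the opposite situation, with $C(r) > 2\pi r$ around gray, i.e. locally negative curvature there, which is consistent with the curvature changing sign elsewhere.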


"Objectively" the Oklab color space might be better but from the results I got by playing with the gradient generator I still prefer the IPT space. The main difference that swayed me toward it was that it appears to better handle the middle regions of gradients between two very saturated colors, where the result of Oklab looks a bit too washed out.


I think it's perfectly fine to subjectively prefer one result over another. I like thinking of results in color science as tools to help designers, rather than some objective truth. There's a lot of scientism.

From that perspective, I definitely support offering designers either an IPT or Oklab based gradient tool. With the lightness predictions, it's likely that Oklab will do a slightly better job on blend modes, but clearly they're both good color spaces, certainly a major improvement over CIELAB, which is vastly more popular.


If you are trying to make a gradient between two colors, there is no reason you must interpolate linearly between them. (Indeed, for many purposes you should not.)


and if you are making a gradient with more than two stops (which is super common), then piecewise-linear interpolation will give obvious artifacts where the slope changes; much better to use a smooth curve such as a Catmull-Rom spline to avoid abrupt changes of slope at the intermediate stops
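A minimal sketch of that idea, per colour channel (uniform Catmull-Rom with clamped endpoints; real gradient tools would handle non-uniform stop positions too):

```python
def catmull_rom(p0, p1, p2, p3, t):
    # one Catmull-Rom segment between p1 and p2, t in [0, 1];
    # the spline passes through every stop with continuous slope
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2*p0 - 5*p1 + 4*p2 - p3) * t**2 +
                  (-p0 + 3*p1 - 3*p2 + p3) * t**3)

def smooth_gradient(stops, t):
    # stops: evenly spaced values of one colour channel; t in [0, 1]
    n = len(stops) - 1
    i = min(int(t * n), n - 1)   # which segment t falls in
    u = t * n - i                # position within that segment
    p = lambda k: stops[max(0, min(n, k))]  # clamp at the endpoints
    return catmull_rom(p(i - 1), p(i), p(i + 1), p(i + 2), u)
```

Running this per channel in a perceptual space like Oklab combines both fixes: a good space and a slope-continuous curve through the stops.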



