(1) Hue is not a good dimension for encoding magnitude information, i.e. rainbow color maps are bad.
(2) The mechanisms in human vision responsible for high spatial frequency information processing are luminance channels. If the data to be represented have high spatial frequency, use a colormap which has a strong luminance variation across the data range.
(3) For interval and ratio data, both luminance- and saturation-varying colormaps should produce the effect of having equal steps in data value correspond to equal perceptual steps, but the first will be most effective for high spatial frequency data variations and the second will be most effective for low spatial frequency variations.
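A minimal sketch of points (2)/(3): since equal steps in CIE L* are (roughly) equal perceptual steps, a luminance-varying gray ramp can be built by stepping L* evenly and converting each step to an sRGB gray level. The constants assume D65 and the standard sRGB transfer curve; this is illustrative, not a drop-in colormap tool.

```python
def lstar_to_srgb_gray(L):
    """Convert CIE L* (0-100) to an sRGB gray level (0-255)."""
    # L* -> relative luminance Y (inverse of the CIE lightness function)
    fy = (L + 16) / 116
    Y = fy ** 3 if L > 8 else L / 903.3
    # linear luminance -> gamma-encoded sRGB component
    c = 12.92 * Y if Y <= 0.0031308 else 1.055 * Y ** (1 / 2.4) - 0.055
    v = round(255 * c)
    return (v, v, v)

# Eleven equal perceptual steps of lightness:
ramp = [lstar_to_srgb_gray(L) for L in range(0, 101, 10)]
```

Note how the gray levels are not evenly spaced in 0-255: equal *perceptual* steps require unequal steps in the gamma-encoded values.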
A related problem is, once you have a uniform color space, how do you color code a list of things such that:
A) Colors are maximally distinguishable from each other, i.e. maximally separated on the color wheel.
B) Colors are stable as items are added and removed, to avoid confusion caused by the color for an item shifting as you update the display.
The two criteria are at odds, unfortunately. The compromise I came up with used PHI (the golden ratio), since it's essentially a radial distribution problem, which PHI excels at. Something like:
(PHI * idx * 360) % 360
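That formula might be sketched like so (whether you use PHI itself or its fractional part makes no difference mod 360); the function and names here are illustrative, not from the original comment:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.618

def hue_for(idx):
    """Stable hue in degrees for the item at position idx.

    Each item's hue depends only on its own index, so adding or
    removing items never shifts existing colors, and consecutive
    indices land far apart on the hue circle.
    """
    return (PHI * idx * 360) % 360
```

Because the multiples of PHI never cluster, the first N items stay reasonably well separated on the wheel for any N, while each item's color is fixed forever.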
Now one variation is, instead of a straight line, you draw a spiral along it. And you've got yourself a CUBEHELIX color scale (https://www.mrao.cam.ac.uk/~dag/CUBEHELIX/). CUBEHELIX works inside an RGB cube rather than a CIE LUV/LAB mesh, but the principle could be applied to other color spaces, too.
One thing to watch out for when working with these non-linear color spaces, which have a weird shape in 3D, is how to deal with colors outside of the gamut. Simple clamping only works in RGB; in CIE LUV/LAB it's not that simple, because for any given L the range of U/V (or A/B) varies. And often clamping is not the most perceptually accurate solution.
The perceptual color problem is an interesting "bandwidth" limit on available hash buckets.
As to why phi behaves this way, maybe a mathematician can explain it better, but one thing I read is that in some sense it's the "most irrational" number there is (its continued fraction is all 1s, so it's the hardest number to approximate with fractions), which is what gives it this property.
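A quick way to see the effect (not a proof): place n points on a circle at multiples of an irrational and look at the smallest gap. The fractional part of PHI keeps points well separated, while pi's fractional part, which is closely approximated by 1/7, collapses two points almost on top of each other by n = 8. The function below is a toy demonstration, not from the original comment.

```python
import math

def min_circular_gap(alpha, n):
    """Smallest gap between n points placed at i*alpha mod 1 on a circle."""
    pts = sorted((i * alpha) % 1.0 for i in range(n))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1.0 - pts[-1] + pts[0])  # wrap-around gap
    return min(gaps)

phi_frac = (math.sqrt(5) - 1) / 2  # fractional part of PHI, ~0.618
pi_frac = math.pi - 3              # fractional part of pi, ~0.1416 (close to 1/7)
```

With 8 points, phi's smallest gap is about 0.09 of the circle, while pi's is under 0.01 because the 8th point lands nearly back at the start.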
(Don't do anything safety critical with this analysis. I just spent a little time looking into it and probably got something wrong.)
Here's a great talk about this, in the context of replacing the Jet colormap for matplotlib.
CIELAB is a very simple model, which has been very successful but also has various deficiencies. In particular it specifies a broken method of white point adaptation (this can be swapped out), and involves a nasty blue-purple hue shift at constant CIELAB hue which can have big impacts on images put through a gamut mapping algorithm in CIELAB space, e.g. for printing.
HSLuv seems to generate brown/gray intermediate colours when interpolating. CIELAB interpolates through clearer hues.
In my opinion that looks nicer on charts such as heat maps, etc, but it might not be a good choice as a general tool.
LCH is a hue-based version of LAB, which is perceptually uniform (when changing one dimension at a time) and easy to read, but unfortunately the gamut of what's displayable on a screen, when viewed in this space, is weird and hard to work with programmatically. Specifically, the higher C values are only available for the upper-middle L values, and are hue-dependent.
You could "fix" this by stretching any one of the values to fit in gamut: pick two and use a percent of the max in-gamut value for the third. It looks like this space is LH+C%. But that makes the chroma (saturation) no longer uniform. You could also do HC+L%, which makes luminance (brightness) no longer uniform, and would do a weird thing where L=0 is only black when C=0. You could also do LC+H%, which would have weird discontinuities in hue and be generally funky.
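As a rough illustration of the LH+C% idea, here's a sketch that binary-searches the maximum in-gamut chroma for a given L and h, then takes chroma as a percentage of it. It assumes D65 CIELAB, uses linear sRGB for the gamut test (gamma encoding doesn't change what's in gamut), and assumes the in-gamut chroma range at fixed L and h is a single interval, which holds well enough in practice. All function names are mine, not an established API.

```python
import math

def lab_to_linear_srgb(L, a, b):
    """CIELAB (D65) -> linear sRGB. Sufficient for a gamut test;
    a display conversion would add gamma encoding on top."""
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def finv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16 / 116) / 7.787
    X, Y, Z = 0.95047 * finv(fx), 1.0 * finv(fy), 1.08883 * finv(fz)
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b2 = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return (r, g, b2)

def in_srgb_gamut(rgb, eps=1e-6):
    return all(-eps <= c <= 1 + eps for c in rgb)

def max_chroma(L, h_deg, upper=150.0):
    """Binary-search the largest in-gamut chroma at this (L, h)."""
    lo, hi = 0.0, upper
    for _ in range(40):
        mid = (lo + hi) / 2
        a = mid * math.cos(math.radians(h_deg))
        b = mid * math.sin(math.radians(h_deg))
        if in_srgb_gamut(lab_to_linear_srgb(L, a, b)):
            lo = mid
        else:
            hi = mid
    return lo

def lch_pct(L, c_pct, h_deg):
    """The 'LH+C%' idea: chroma given as a percent of the max
    in-gamut chroma at that L and h."""
    C = c_pct / 100.0 * max_chroma(L, h_deg)
    a = C * math.cos(math.radians(h_deg))
    b = C * math.sin(math.radians(h_deg))
    return lab_to_linear_srgb(L, a, b)
```

This makes every (L, C%, H) triple displayable, at the cost the comment describes: the same C% corresponds to very different absolute chromas at different L and h.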
In the green-blue gradient, the endpoints feel more saturated than the intermediary values. The blue end (especially one before the end) seems visually darker than the green end.
In the red-cyan gradient, the result seems to be a lot of muddy browns, which doesn't seem to be a great improvement overall. Beyond that, the magenta at the leftmost block sticks out, and the two rightmost blocks are almost indistinguishable.
I'll admit that I don't have a calibrated monitor or anything, so some of the problem might be on my end. But I'd argue that any method intended for general public consumption should accommodate slight variations in displays; my monitor shouldn't be that far off.
What I never understood, however, is that even with the most advanced models the color difference is apparently never simply the Euclidean distance. This is fine if you just want to calculate deltaE, but not so helpful if you want to do diagrams.
Is there an up-to-date color model that is both perceptually uniform and a Euclidean space?
A quick google scholar search turned up https://www.osapublishing.org/oe/abstract.cfm?uri=oe-22-10-1... but there are plenty of other sources discussing similar topics.
Care to share?
EDIT - I see video in the right column which I assume is an input channel. But the shader shows an error on line 283:
'tanh' : no matching overloaded function found
You can think of it as a constant-brightness version of the HSV option. AFAIK it's the only one of these clocks with HSLuv, and the only one that matches the color gamut to a unit of time, so you know where you are in the hour (or day) at a glance.
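The time-to-color mapping itself is straightforward. Here's a sketch using stdlib HSV as a stand-in, since HSLuv isn't in the standard library; the real clock described above would run the hue through an HSLuv conversion (e.g. the hsluv package) so brightness stays constant around the circle.

```python
import colorsys

def minute_to_rgb(minute):
    """Map the minute of the hour onto the full hue circle, so the
    color alone tells you how far into the hour you are."""
    hue = (minute % 60) / 60.0  # 0 min -> hue 0, 30 min -> opposite hue
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return tuple(round(255 * c) for c in (r, g, b))
```

Minute 0 and minute 60 map to the same color, so the display wraps cleanly at the top of each hour.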
Does anybody else find it disturbing that a gradient from red to cyan goes through the entire rainbow?