Perceptually uniform color spaces (programmingdesignsystems.com)
268 points by beardicus on Oct 16, 2017 | 51 comments



IBM did research back in the 90s on perceptually-based colormaps and how best to represent various types of data within the color dimensions of luminance, saturation, and hue [1]. For example, they found that,

(1) Hue was not a good dimension for encoding magnitude information, i.e. rainbow color maps are bad.

(2) The mechanisms in human vision responsible for high spatial frequency information processing are luminance channels. If the data to be represented have high spatial frequency, use a colormap which has a strong luminance variation across the data range.

(3) For interval and ratio data, both luminance- and saturation-varying colormaps should produce the effect of having equal steps in data value correspond to equal perceptual steps, but the first will be most effective for high spatial frequency data variations and the second will be most effective for low spatial frequency variations.
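
A minimal sketch of point (2), using nothing beyond sRGB: encode a normalized value purely in luminance (a plain gray ramp) instead of cycling through hues. The function name and endpoints here are just illustrative; a real colormap would interpolate in a perceptual space.

    // Encode a value in [0, 1] purely as luminance: dark for low values,
    // light for high values. Suits high-spatial-frequency data per (2).
    function grayRamp(t: number): string {
        const v = Math.round(255 * Math.min(1, Math.max(0, t)));
        return `rgb(${v}, ${v}, ${v})`;
    }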

===

[1] https://www.research.ibm.com/people/l/lloydt/color/color.HTM

or as pdf:

https://github.com/frankMilde/interesting-reads/blob/master/...


I've grappled with this problem multiple times in data-vis programming.

A related problem is, once you have a uniform color space, how do you color code a list of things such that:

A) Colors are maximally distinguishable from each other, i.e. maximally separated on the color wheel.

B) Colors are stable as items are added and removed, to avoid confusion caused by the color for an item shifting as you update the display.

The two criteria are at odds, unfortunately. The compromise I came up with used PHI, since it's essentially a radial distribution problem, which PHI excels at. Something like:

    (PHI * idx * 360) % 360
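
A minimal sketch of that idea in code (hsl() is used here only for illustration; a perceptually uniform space such as HSLuv would be a better target):

    // Assign each item a hue by its index using the golden ratio.
    // Successive hues stay well spread no matter how many items exist,
    // and an item's color never shifts as other items come and go.
    const PHI = (1 + Math.sqrt(5)) / 2;

    function hueForIndex(idx: number): number {
        return (PHI * idx * 360) % 360;
    }

    function colorForIndex(idx: number): string {
        // Fixed saturation and lightness; only the hue varies.
        return `hsl(${hueForIndex(idx).toFixed(1)}, 70%, 50%)`;
    }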


You plot the visible gamut of your favourite color space (RGB, CIE LUV, CIE LAB, preferably one that is perceptually uniform) in 3D space. You get either a cube (RGB) or something that resembles a skewed cube (CIE LUV/LAB). Then take the corners which are farthest apart and place a line between them, or place a line between any other two points which are sufficiently far apart from each other. You sample however many points you need along that line and that is your color scale.

Now one variation is, instead of a straight line, you draw a spiral along it. And you've got yourself a CUBEHELIX color scale (https://www.mrao.cam.ac.uk/~dag/CUBEHELIX/). CUBEHELIX works inside an RGB cube and not a CIE LUV/LAB mesh, but the principle could be applied to other color spaces, too.
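
A rough sketch of the straight-line version, staying in the RGB cube for simplicity (as noted above, a perceptually uniform space would be the better choice; the default endpoints here are just the black and white corners):

    // Sample n colors evenly along a line between two points of the RGB cube.
    function lineScale(
        n: number,
        from: [number, number, number] = [0, 0, 0],      // black corner
        to: [number, number, number] = [255, 255, 255],  // white corner
    ): string[] {
        return Array.from({ length: n }, (_, i) => {
            const t = n === 1 ? 0 : i / (n - 1);
            const c = from.map((f, k) => Math.round(f + (to[k] - f) * t));
            return `rgb(${c[0]}, ${c[1]}, ${c[2]})`;
        });
    }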

One thing to watch out for when working with these non-linear color spaces, which have a weird shape in 3D space, is how to deal with colors outside of the gamut. Simple clamping only works in RGB; in CIE LUV/LAB it's not that simple, because for any given L the range of UV/AB varies. And oftentimes clamping is not the most (perceptually) accurate solution.


The problem I encountered is that once you subset your color space to include only colors within ranges of, e.g., lightness and saturation, the transformed (perceptually uniform) color space is some weird, non-convex shape. Packing spheres into that transformed space is difficult.


Add to that the requirement of being distinguishable even for colorblind people. Whenever I choose colors for a chart or something, I try to find colors that work even with red/green blindness; but usually I give up and just use different textures or marker shapes and hope that they look distinct enough even without color.


The best way to make colors distinguishable (for the color blind and everyone else) is to use lightness contrast.


To switch domains slightly: it's essentially a hash bucketing problem in a different guise at that point. The color space and the number of visually distinguishable steps define your hash bucket count (kind of an equivalent to a memory limit), and maximally distributing your hashes across the available buckets remains roughly the same problem; you want a relatively well-distributed, stable hash for basically the same reasons (color locality versus memory locality).

The perceptual color problem is an interesting "bandwidth" limit on available hash buckets.


Ridiculousfish, of Hex Fiend and fishshell fame, also has this nifty trick up his sleeve: http://ridiculousfish.com/blog/posts/colors.html


PHI as in ~1.618? What is radial distribution, and why is phi particularly good for it?


Yes. It's basically the idea that as you plot points around the circle using this formula, at every iteration you'll have close-to-even spacing between points, no matter how many points you've plotted, without redistributing already-plotted points. It doesn't matter if you plot three points or three million, they'll be fairly evenly distributed.

As to why phi behaves this way, maybe a mathematician can explain it better, but one thing I read is that, in some sense, it's the "most" irrational number there is, so it has this property.


Suppose you're generating positions mod 1, so f(i) = fmod(phi * i, 1.0) is the position for index i. The distance between f(i) and f(j) is basically fmod(phi * (i-j), 1.0), with the complication that mod wraps around... Anyway, if you do the correct wrap-around distance, the worst case comes down to how well phi can be approximated by rationals with denominator i-j, and the distance of (i-j) * phi from the nearest integer is apparently just about 1/((i-j) * sqrt(5)) (phi is what makes Hurwitz's theorem's bound tight-ish). In particular, with n points on a circle, the values will all be at least about 360/(n * sqrt(5)) degrees from each other. Of course, 360/n would be ideal, but it is not so bad being about 0.44x the best case.

(Don't do anything safety critical with this analysis. I just spent a little time looking into it and probably got something wrong.)
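
A quick numeric check of that claim, as a sketch only: place n points with the golden-angle rule and compare the smallest pairwise wrap-around gap against the 360/(n * sqrt(5)) estimate.

    const PHI = (1 + Math.sqrt(5)) / 2;

    // Wrap-around distance between two angles, in degrees.
    function circDist(a: number, b: number): number {
        const d = Math.abs(a - b) % 360;
        return Math.min(d, 360 - d);
    }

    // Smallest gap between any two of the first n golden-angle points.
    function minGap(n: number): number {
        const angles = Array.from({ length: n }, (_, i) => (PHI * i * 360) % 360);
        let smallest = 360;
        for (let i = 0; i < n; i++) {
            for (let j = i + 1; j < n; j++) {
                smallest = Math.min(smallest, circDist(angles[i], angles[j]));
            }
        }
        return smallest;
    }

    for (const n of [3, 10, 100]) {
        console.log(n, minGap(n).toFixed(2), (360 / (n * Math.sqrt(5))).toFixed(2));
    }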


This is the same solution plants use for distributing branches around their stem. (They repeat, but the separation between branches is closely related to phi.) I don't have a proof for why phi is good for this (although see kmill's comment), but the fact that all plants use it suggests it is in fact a good solution.


That's actually where I got the idea!


https://www.youtube.com/watch?v=xAoljeRJ3lU

here's a great talk about this, in the context of replacing the Jet colormap for matplotlib.


This was way more interesting than I could have imagined! I really like how they present the idea of a color map as a high-bandwidth interface between the computer and the brain. They then proceed to figure out how to optimize this interface given a number of tricky constraints of human color perception. Their explanation of color perception using basis functions was a great way to understand how human vision works.


I agree, the presentation is very good. Definitely worth a look if you’re interested in applying some of this theory in a practical way.


yeah, i learned more about color theory in those 5 minutes than i had previously known.


For some history, the Munsell color system was the first attempt to measure color in a perceptually uniform way, with a 3D space on hue, value, and chroma.

https://en.wikipedia.org/wiki/Munsell_color_system


CIELAB is basically an attempt to approximate the Munsell renotations with a simple and invertible formula (the Munsell renotations are a lookup table), suitable for implementation on hardware from the 1960s.


CIELAB gives nicer looking gradient with clear colors without brown or gray tones, while keeping the luminosity more or less the same.


I’m not sure what you are responding to. “Gives nicer looking gradient” than what?

CIELAB is a very simple model, which has been very successful but also has various deficiencies. In particular it specifies a broken method of white point adaptation (this can be swapped out), and involves a nasty blue-purple hue shift at constant CIELAB hue which can have big impacts on images put through a gamut mapping algorithm in CIELAB space, e.g. for printing.


Than HSLuv.

HSLuv seems to generate brown/gray intermediate colours when interpolating. CIELAB interpolates through clearer hues.

In my opinion that looks nicer on charts such as heat maps, etc, but it might not be a good choice as a general tool.


Once I had the task of coloring a graph with colors that seem equally far apart and equally luminous. I got stuck for a long time until I discovered the Munsell scheme.


That resulting color space isn't perceptually uniform anymore.

LCH is a hue-based version of LAB, which is perceptually uniform (when changing one dimension at a time) and easy to read, but unfortunately the gamut of what's displayable on a screen, when viewed in this space, is weird and hard to work with programmatically. Specifically, the higher C values are only available for the upper-middle L values, and are hue-dependent.

You could "fix" this by stretching any one of the values to fit in gamut - pick two and use a percent of the max in-gamut value for the third. It looks like this space is LH+C%. But that makes the chroma (saturation) no longer uniform. You could also do HC+L%, which makes luminance (brightness) no longer uniform, and would do a weird thing where L=0 is only black when C=0. You could also do LC+H%, which would have weird discontinuities in hue and be generally funky.
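
One common workaround for out-of-gamut colors, as a sketch (the LCH-to-sRGB converter is assumed to come from a color library and is passed in as a parameter here): keep L and H fixed and binary-search the largest chroma that still lands inside sRGB.

    type Rgb = [number, number, number];                       // sRGB components in [0, 1]
    type LchToRgb = (l: number, c: number, h: number) => Rgb;  // supplied by a color library

    function inGamut(rgb: Rgb): boolean {
        return rgb.every((v) => v >= 0 && v <= 1);
    }

    // Reduce chroma toward 0 until the color is displayable, keeping L and H fixed.
    function clampChroma(lchToRgb: LchToRgb, l: number, c: number, h: number): [number, number, number] {
        if (inGamut(lchToRgb(l, c, h))) return [l, c, h];
        let lo = 0;
        let hi = c;
        for (let i = 0; i < 30; i++) {  // ~30 halvings gives plenty of precision
            const mid = (lo + hi) / 2;
            if (inGamut(lchToRgb(l, mid, h))) lo = mid;
            else hi = mid;
        }
        return [l, lo, h];
    }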


Author here! Thanks for the comment. Just for some background info: This book is written for people who want to begin using code as a creative tool, and many readers just learned the basics of JavaScript. Color theory quickly gets extremely hardcore, so the emphasis here is to present an alternative to the standard color spaces that is easy to use in JS. I think that HSLuv solves that problem, and it has been very helpful for my students when I introduced it in undergrad classes. There are many other relevant perceptually uniform color spaces, but none of them are as easy to understand as HSLuv for students who already know sRGB HSL.


If you use D3 for JavaScript visualization it's very easy to get this right. D3 has supported interpolation in CIE Lab (and HCL) space for a long time now, making it easy to create visualizations with perceptual uniformity. Many of its built-in color scales are also perceptually uniform.

https://bl.ocks.org/mbostock/3014589 https://github.com/d3/d3-scale/blob/master/README.md#sequent...
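
For example, a minimal sketch (imports assume the d3-interpolate, d3-scale, and d3-scale-chromatic modules):

    import { interpolateLab } from "d3-interpolate";
    import { scaleSequential } from "d3-scale";
    import { interpolateViridis } from "d3-scale-chromatic";

    // Interpolate between two colors in CIE Lab rather than raw RGB.
    const ramp = interpolateLab("steelblue", "brown");
    console.log(ramp(0.5));  // the midpoint color, Lab-interpolated

    // Or use a built-in perceptually uniform scale for a numeric domain.
    const heat = scaleSequential(interpolateViridis).domain([0, 100]);
    console.log(heat(42));   // color for the value 42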


Dolby's ICtCp colorspace seems to address some of these problems. It claims to be constant luminance with hue linearity, per the following white paper:

https://www.dolby.com/us/en/technologies/dolby-vision/ictcp-...


While the idea is sound, the example results are not exactly convincing:

In the green-blue gradient, the endpoints feel more saturated than the intermediate values. The blue end (especially the one before the end) seems visually darker than the green end.

In the red-cyan gradient, the result seems to be a lot of muddy browns, which doesn't seem to be a great improvement overall. Beyond that, the magenta at the leftmost block sticks out, and the two rightmost blocks are almost indistinguishable.

I'll admit that I don't have a calibrated monitor or anything, so some of the problem might be on my end. But I'd argue that any method intended for general public consumption should accommodate slight variations in displays; my monitor shouldn't be that much off.


Unfortunately the link in the article to HSLuv is down, but it seems it is based on CIELUV. CIELUV is what people thought of as perceptually uniform in 1976, a time when computers were not widespread. We have much better color models now.

What I never understood, however, is that even with the most advanced models the color difference is apparently never simply the Euclidean distance. This is fine if you just want to calculate deltaE, but not so helpful if you want to do diagrams.

Is there an up-to-date color model that is perceptually uniform and a Euclidean space?


No, it’s not possible to have uniform color differences at all scales and in all directions in a 3-dimensional Euclidean space. If you go around a hue circle in little steps and then sum up all the perceived color differences you end up with more than π times the sum of differences you get if you go across the circle through gray.


Is that implying that you could have uniform model in higher-dimensional space? Afaik there isn't really any particular reason why we should be limited to three dimensions.


Sure, you could make your 3-dimensional color space a manifold with negative curvature embedded in a higher-dimensional Euclidean space.

A quick google scholar search turned up https://www.osapublishing.org/oe/abstract.cfm?uri=oe-22-10-1... but there are plenty of other sources discussing similar topics.


My math is a bit rusty but isn’t a manifold locally Euclidean? The GP is talking about how we can’t have Euclidean difference.


A sphere and a hyperboloid are both locally Euclidean (as is color space). The interesting stuff starts to happen as you move around a bit further in the space.


I think it's a bad link. I found it within the URL: http://www.hsluv.org


Actually, the link is just wrongly formatted, the HSLuv project is online: http://www.hsluv.org/


> We have much better color models now.

Care to share?


I would recommend IPT or CIECAM02.


I made two shaders to visualise HSLuv which I like: https://www.shadertoy.com/view/ldSBRK https://www.shadertoy.com/view/ldBBzK


Second one is broken, FYI.


What is your OS and browser?


I get a black box on Firefox 57 beta


The video has to load


I have audio and the status bar says it's playing at 60 FPS, but no video ever shows up. How big a video are we talking?

EDIT - I see video in the right column which I assume is an input channel. But the shader shows an error on line 283:

    'tanh' : no matching overloaded function found


An XSLT pie chart generation implementation that can create an arbitrary number of wedges, each coloured with the same perceived brightness.

https://stackoverflow.com/a/25481023/59087


CAM02-LCD, CAM02-SCD, and CAM02-UCS colourspaces by Luo, Cui and Li (2006), based on the CIECAM02 colour appearance model, should perform better than the proposed colourspace based on CIELUV, because CIECAM02 accounts for viewing conditions. The IPT colourspace by Fairchild is also an excellent colourspace when hue uniformity is required. The Wikipedia page on CAMs has some decent information: https://en.wikipedia.org/wiki/Color_appearance_model


I love the idea of perceptually uniform color spaces, but I’ve found that in practice, they just make your UI look dull and murky. Something is lost without bright reds, greens, and (sometimes) yellows.


I wrote a library for Java/Processing that resonates with this article:

https://github.com/neilpanchal/Chroma


I added a HUSL (previous name of HSLuv) option to my color-changing clock a while back

http://lumma.org/code/js/colortime

You can think of it as a constant-brightness version of the HSV option. AFAIK it's the only one of these clocks with HSLuv, and the only one that matches the color gamut to a unit of time, so you know where you are in the hour (or day) at a glance.
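
The mapping itself is simple; a sketch of the idea (not the clock's actual code), assuming an hsluvToHex(hue, saturation, lightness) helper from an HSLuv implementation, passed in as a parameter:

    type HsluvToHex = (h: number, s: number, l: number) => string;  // from an HSLuv library

    // Map the position within the hour to an HSLuv hue, keeping saturation
    // and lightness fixed so the perceived brightness stays constant.
    function colorForNow(hsluvToHex: HsluvToHex, date: Date = new Date()): string {
        const minuteOfHour = date.getMinutes() + date.getSeconds() / 60;
        const hue = (minuteOfHour / 60) * 360;  // one trip around the hue circle per hour
        return hsluvToHex(hue, 100, 60);
    }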


That first gradient doesn't look right to me - sampling the color values I don't see an obvious pattern in some of the color changes.

Does anybody else find it disturbing that a gradient from red to cyan goes through the entire rainbow?


Anyone know if there's anything like this for R? Or is colorRamp in CIElab good enough?



