And you can see what this looks like when used to visualize sorting algorithms at large scale here:
I've also published an Open Source project that might be useful to anyone who wants to play with these ideas:
There are some limitations to the process as described, most obviously the fact that I'm using the RGB color space, which is not the best match for human color perception. RGB is convenient because it is a regular cube and therefore easy to map onto the Hilbert curve, but with a little work the process could be generalized.
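To make the Hilbert-curve idea concrete, here's a minimal sketch (my own, not the published project's code) that maps a curve index to a 3D point using Skilling's transpose algorithm, then takes evenly spaced indices along the curve as an RGB palette:

```python
def hilbert_point(h, bits, dims):
    """Map a Hilbert-curve index to a point via Skilling's algorithm."""
    # Spread the index's bits across the coordinates ("transposed" form).
    x = [0] * dims
    for i in range(bits * dims):
        bit = (h >> (bits * dims - 1 - i)) & 1
        x[i % dims] = (x[i % dims] << 1) | bit
    # Gray-decode.
    t = x[dims - 1] >> 1
    for i in range(dims - 1, 0, -1):
        x[i] ^= x[i - 1]
    x[0] ^= t
    # Undo the excess work done by the Gray code.
    q = 2
    while q != (2 << (bits - 1)):
        p = q - 1
        for i in range(dims - 1, -1, -1):
            if x[i] & q:
                t = x[0]              # invert the low bits of x[0]
                x[0] ^= p
            else:                     # exchange low bits of x[i] and x[0]
                t = (x[0] ^ x[i]) & p
                x[0] ^= t
                x[i] ^= t
        q <<= 1
    return x

def hilbert_palette(n, bits=8):
    """n colors evenly spaced along the Hilbert curve through the RGB cube."""
    total = 1 << (3 * bits)  # number of lattice points on the curve
    return [tuple(hilbert_point(i * total // n, bits, 3)) for i in range(n)]
```

Because consecutive curve indices map to adjacent lattice points, stepping far apart along the curve tends to give colors that are spread throughout the cube.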
However, it's not a fast algorithm. The more colours you choose, the slower each iteration gets, since you have to calculate the distances from a growing set of colours. Also, after ~10, the distinctness seems to drop off pretty quickly. Lastly, assuming you're seeding with the same colours, you'll always get the same result, so it's really best to do the calculations once and then cache them.
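That growing cost is easy to see in a sketch of this kind of greedy, farthest-point picker (the candidate pool size, seed, and plain Euclidean RGB distance here are my own arbitrary choices):

```python
import random

def distinct_palette(n, candidates=4096, seed=0):
    """Greedily pick n RGB colors, each one maximizing its distance
    to the nearest already-chosen color (farthest-point sampling)."""
    rng = random.Random(seed)
    pool = [(rng.randrange(256), rng.randrange(256), rng.randrange(256))
            for _ in range(candidates)]

    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    chosen = [pool[0]]
    # Each candidate's squared distance to its nearest chosen color.
    nearest = [d2(c, chosen[0]) for c in pool]
    while len(chosen) < n:
        i = max(range(len(pool)), key=nearest.__getitem__)
        chosen.append(pool[i])
        # Every new color means another pass over the whole pool,
        # which is why each iteration gets slower.
        nearest = [min(nd, d2(c, pool[i])) for nd, c in zip(nearest, pool)]
    return chosen
```

With a fixed seed the output is fully deterministic, which is exactly why computing once and caching is the sensible approach.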
CIELAB space was designed with the goal that any pair of colors separated by about 1 unit (i.e. ΔE = 1) would have a similar degree of apparent difference. However, apparent color difference is a highly non-linear property, and so by the time ΔE gets past 5 or 10, a specific color difference in one direction or one part of the color space will look substantially different from a same-distance color difference in another part of the space.
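For reference, the original ΔE (CIE76) really is just Euclidean distance in CIELAB, which is what makes the non-linearity point easy to illustrate (the two example pairs below are mine):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: straight Euclidean distance in CIELAB.
    Only trustworthy as a perceptual measure for small differences."""
    return math.dist(lab1, lab2)

# Two pairs with identical Delta-E of 20 in different parts of the space;
# as described above, they need not *look* equally different.
pair_a = ((50.0, 0.0, 0.0), (50.0, 20.0, 0.0))    # gray -> reddish
pair_b = ((50.0, 0.0, -80.0), (50.0, 0.0, -60.0))  # deep blue -> less deep blue
```

Later formulas (CIE94, CIEDE2000) add correction terms precisely because equal CIE76 distances are not equally visible.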
Often, the reason you want to carefully pick a color scheme is to make sure that two areas of a graphic are easily visually distinguishable along an edge. Because human vision mainly uses a monochromatic color signal for distinguishing shape, texture, and the finest details, the most important difference to make between a pair or set of colors is to separate them in the lightness (in CIELAB, that's L) dimension. If you have sufficient lightness contrast, then any particular choices of a and b (or hue and chroma) will work fine. If you don’t have sufficient lightness contrast, changing the other color components won’t help much: even if you maximally separate them in the color space, they’ll just appear to mush together in a clashing way along edges.
[edit: removed asterisks from L*, etc. because HN thinks they should be for italicizing]
1. Once you've seen a colored element in the graphic, you want to quickly match it to its corresponding element in a legend (or vice versa). Having distinct hue and chroma is more important for this, because lightness is so vulnerable to your brain's tendency to 'correct' it based on contextual cues.
2. Hues should be aesthetically pleasing.
3. Lightness tends to impart connotations to a visualization: class X looks more important than the other classes if it is substantially darker or lighter. The same goes for chroma. So one wants to bound lightness and chroma to within a specific range when using them for class labeling.
Another deep complication is color impossibility, which is another way of saying that for any reasonably perceptually uniform color space like CIELAB, there is no straightforward parametrization of the space that is both linear and avoids impossible colors.
My gut instinct is that space-filling curves could play a greater role in parameterizing the valid portions of such color spaces. Desideratum 3 is suggestive: we want some parameter that 'wiggles' within the allowed range of lightness and chroma to achieve both good distinction between nearby parameterized colors and good global uniformity of color and chroma.
This wiggling could also 'absorb' the 'nooks and crannies' that color impossibility produces.
Still a research project, though.
There are Brewer palettes geared for sequential data, where you further need to express proportionality along a scale, and for "diverging" data, which has a natural zero point between two extremes and calls for presenting a spectrum. In all these cases it's important that one category doesn't appear "heavier" than another, and that subjective notions like "about twice as intense" reflect the underlying data.
If you don't know how many colors you will need at the time of first picking, you can use the golden ratio method to pick them.
There is also uniform perceptual LAB: http://www.brucelindbloom.com/index.html?UPLab.html which is a slight tweak to LAB to reduce the 'blue -> purple' effect that happens when you tweak chroma.
UPLab looks cool, thanks for sharing.
Explain your "slightly rotated" comment?
Also, to be completely precise, it should be noted that the lightness dimension of UPLab was left the same as CIELAB’s lightness, rather than using Munsell value. Only a and b were adjusted to reflect Munsell hue/chroma.
(there is no proof that humans perceive phi as a more pleasing proportional ratio than others, and it's not really that ubiquitous in Nature either)
(I realize this doesn't really answer the question, which is an interesting one. Just suggesting it may be solving the wrong problem.)
Yet that's not really an answer for how to optimize for color blindness. That I don't know, and I would be interested in an answer.
If high contrast between colours is desired, you could take some further precautions in some of the suggested algorithms, e.g. if you're choosing certain hues as a basis of your palette, then making sure that none of the pairs are problematic. If your algorithm checks the distance between colours to generate high-contrast pairs, then you could add in a check for problematic pairs as part of that step.
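One hedged sketch of that extra check, using approximate dichromat simulation matrices from Machado et al. (2009) (the entries are quoted from memory and the distance threshold is an arbitrary choice of mine, so verify both before relying on this):

```python
# Approximate red-green dichromacy simulation matrices (Machado et al. 2009),
# intended to be applied to linear RGB triples in [0, 1].
PROTANOPIA = ((0.152286, 1.052583, -0.204868),
              (0.114503, 0.786281, 0.099216),
              (-0.003882, -0.048116, 1.051998))
DEUTERANOPIA = ((0.367322, 0.860646, -0.227968),
                (0.280085, 0.672501, 0.047413),
                (-0.011820, 0.042940, 0.968881))

def simulate(rgb, matrix):
    """Apply a color-vision-deficiency simulation matrix to an RGB triple."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

def problematic_pair(c1, c2, threshold=0.25):
    """Flag a pair whose simulated colors land too close together
    under either form of red-green dichromacy."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return any(dist(simulate(c1, m), simulate(c2, m)) < threshold
               for m in (PROTANOPIA, DEUTERANOPIA))
```

A distance-based palette generator could simply reject a candidate color whenever `problematic_pair` fires against any color already chosen.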
How do you know that? How do you know you are looking at equivalent distributions of lighter and darker magentas?
I have some uncommon kind of color blindness: I have problems with blue and green. I don't even know which colors I can't see; I'd love to know the procedures for finding that out.
I didn't understand that. Thanks.
Generating colors is not hard; the hard part, as evidenced by this article, is getting them to look decent and getting them to appear distinct.