
Color Spaces - tylerchr
https://ciechanow.ski/color-spaces/
======
jacobolus
Like many resources about color, this one has an output-medium-centric
viewpoint, followed by an early-20th-century-experiment-design-centric one,
presumably because that was where the author started and what he read about.
But however
perfect the exposition might otherwise be, that is the wrong way to start a
general discussion of the subject, and the result is inevitably misleading and
confusing for non-experts.

> _it is a pragmatic approach_

Having tried to talk face-to-face to many people from different backgrounds
about color, in practice this doesn’t work. People are starting with too many
misconceptions and gaps in foundational knowledge.

If you want to understand color you have to start with at least the basics
about the physiology of human vision, along with some basic optics. It is all
but impossible to properly understand what is going on with RGB displays,
understand what “white point” means, etc. without that background.

One of the resources recommended at the end, Bruce MacEvoy’s site
[http://handprint.com/LS/CVS/color.html](http://handprint.com/LS/CVS/color.html)
is a much better starting place, albeit pretty long for someone just
interested in the basics.

Or there are many books about human color vision and color reproduction, some
friendlier for laypeople and others highly detailed and technical. To
understand recent color models Mark Fairchild’s book _Color Appearance Models_
is pretty good, and the first half is a pretty accessible general overview. Or
for more about color reproduction try Billmeyer and Saltzman’s _Principles of
Color Technology_ (recent editions rewritten by Roy Berns). For more about
colorimetry try Hunt & Pointer’s book _Measuring Color_. Or for something
historical, try Kuehni’s _Color Space and Its Divisions_.

~~~
mncharity
> People are starting with too many misconceptions and gaps in foundational
> knowledge. [...] start with at least the basics about the physiology of
> human vision

A question I ask first-tier physical-sciences graduate students is "A five-
year old comes up to you and asks, 'I'm told the Sun is a big hot ball.
Awesome! What color is the ball? I really care about the color of balls - I'm
5.' What color is the Sun?". Among the pervasively incorrect answers I get is
a recurrent "It doesn't have a color; it's lots of different colors; it's
rainbow color."

It reminds me of a failure mode in learning high-school stoichiometry:
students not thinking of atoms as real physical objects. And why should
letters be conserved?

~~~
kyberias
It's white, right?

~~~
mncharity
Yes. I've found it an uncommon answer, even with first-tier physical sciences
and astronomy graduate students. US cultural convention is yellow, and
apparently red in Japan. Lots of orange answers. Astronomy misconception is
yellow (though the white point used for naming is blueish), plus assorted "and
orange"-like noise. Non-astronomy physical science grads are... creative, with
misconceptions of light and color a common mode. A nicely disruptive follow-up
question is "And what color is sunlight?", setting up a "white" (dominating
answer) vs whatever-they-just-said-for-Sun conflict - juxtaposing two rote-
memorized bits that they've perhaps not rubbed together before.

~~~
burfog
If you are standing on the Moon, the Sun is white.

If you are standing on planet Earth, the Sun is yellow.

As viewed on Earth, sunlight directly from the Sun is also yellow... but
normally we include the blue light from the blue sky, which indirectly comes
from the Sun. Scattering of blue light by our atmosphere is what makes the sky
blue, and by subtraction makes the Sun yellow.

This is why normal shadows are a bit blue. They are lit only by the blue sky.

Things out in "sunlight" are lit by both the yellow directly from the Sun and
by the blue indirect light coming from all directions in the sky. This adds up
to being white light.

~~~
jacobolus
This is also a somewhat confused take. The human visual system adapts to
arbitrary light sources within a pretty wide range. Whatever the brightest
light source is will typically look “white”. Which is why either an
incandescent lamp with 2400K CCT or a fluorescent lamp with 5500K CCT will
appear to be “white” after adaptation.

The midday sun is so much brighter than everything else in the field of view
that to the extent it can be given a color label (again, it will
superstimulate all of your cones to the point that “color” loses meaning and
permanently damage your vision; please don’t look at the sun) that “white” is
really the only reasonable choice. If you pointed a long tube at the sun to
keep out light from the rest of the sky and used the sunlight to light a
surface that diffusely reflected say 30% of incident light irrespective of
wavelength, then that surface would appear white.
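
The adaptation mechanism is well enough understood to sketch in code. Below is a minimal von Kries-style chromatic adaptation using the standard Bradford matrix and the published D65 and illuminant-A white points (textbook values, not from this thread; the function names are mine):

```python
# Von Kries-style chromatic adaptation via the Bradford transform:
# a standard model of "whatever the light source is ends up looking white".

# Bradford matrix: XYZ -> cone-like responses (standard published values)
M = [[ 0.8951,  0.2664, -0.1614],
     [-0.7502,  1.7135,  0.0367],
     [ 0.0389, -0.0685,  1.0296]]

M_INV = [[ 0.9869929, -0.1470543,  0.1599627],
         [ 0.4323053,  0.5183603,  0.0492912],
         [-0.0085287,  0.0400428,  0.9684867]]

D65 = (0.95047, 1.00000, 1.08883)   # daylight white point (XYZ)
A   = (1.09850, 1.00000, 0.35585)   # incandescent (2856 K) white point

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def adapt(xyz, src_white, dst_white):
    """Scale cone-like responses by the ratio of the two white points."""
    s = mat_vec(M, src_white)
    d = mat_vec(M, dst_white)
    c = mat_vec(M, xyz)
    c_adapted = tuple(c[i] * d[i] / s[i] for i in range(3))
    return mat_vec(M_INV, c_adapted)

# A surface that is white under daylight maps onto the illuminant-A white
# point after adaptation -- both simply read as "white" to the adapted eye.
print(adapt(D65, D65, A))  # ~ (1.0985, 1.0, 0.35585)
```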

~~~
tropo
If you're going to count adaptation, then a car's brake light is white, as is
the light (normally considered blue) on a cop car. It's cheating to count
adaptation. Assume you just look at it up in the blue sky, using untinted eye
protection. The Sun is yellow.

~~~
jacobolus
No, a car’s brake lamps and police beacons are not within the range that would
appear white, under any type of adaptation. You could stare at a brake lamp
for 30 minutes and it would still look red. Even a low-pressure-sodium street
lamp is going to keep looking orange after you’ve been standing under one for
a long time. These are all more or less monochromatic light sources.

> _Assume you just look at [the sun] up in the blue sky, using untinted eye
> protection._

Under ordinary circumstances if you look at the midday sun through a neutral
density filter it will look white, though if your filter blocks out a high
enough proportion of the light coming through it that might change. But in
general, any light source will change apparent color if you put it through
strong neutral density filter. Color doesn’t map 1:1 with normalized spectral
power distribution.

Be careful to make sure that your filter also blocks solar radiation outside
the visible spectrum. You can seriously damage your eyes looking at the sun
through the wrong neutral density filter.

~~~
tropo
With adaptation, low-pressure-sodium street lamps look white to me. Key to
this is having no other source of light as an alternative reference.

So if you are going to count adaptation, then all light sources are white.
That makes the whole question of color completely pointless.

The same goes for not reducing the brightness. Burning out your eyes makes the
question of color completely pointless. Here is a filter that would work: a
small hole in a rapidly-spinning disk. Look right at the sky with the Sun, but
with a duty cycle that reduces the light down to levels similar to typical
indoor lighting. Another method, not quite as good due to field of view and
imperfectly white screens, is the pinhole camera box commonly used to view
eclipses.

~~~
NeedMoreTea
No amount of adaptation brings low pressure sodium to white. Several hours
doesn't achieve it. Half the cars under it will simply appear black, the rest
yellow. You're left with little to no space for colour perception.

~~~
burfog
That is a different problem, called _illuminant metameric failure_. See here:
[https://en.wikipedia.org/wiki/Metamerism_(color)](https://en.wikipedia.org/wiki/Metamerism_\(color\))

Illuminant metameric failure can happen with relatively normal LED and
fluorescent lighting, particularly with a low color rendering index. You can
get an orange object to appear black under the "white" lighting of RGB LEDs.
Different orange objects would be differently affected depending on exactly
what frequencies/wavelengths get absorbed. The objects could appear red,
orange (good!), dark yellow, green, or black.

Under low pressure sodium, the eyes do adapt. Some cars are white, some cars
are grey, and some cars are black. There is no yellow.

~~~
NeedMoreTea
Then I have a problem with my eyes. ¯\\_(ツ)_/¯

Eyes adapt, but mine get nowhere near the point of perceiving white items as
looking anything but yellow under sodium. _Always_ yellow near orange. 25
years of driving and dog walking in a very yellow world before LED started
arriving. With all yellow snow when we still got snow. There was no white,
until headlights caught something or dawn.

FWIW eye tests put my colour vision as normal.

------
cmurf
It's quite a decent article. I prefer the term "color image encoding" for sRGB
(IEC 61966-2-1), ITU-R BT.2020, or DCI-P3, because it encapsulates the
primaries (red, green, and blue in relation to CIE XYZ), the tone response
curve (often defined with a gamma function; the sRGB curve approximates a
gamma-2.2 curve but, as the article points out, isn't quite the same), the
precision or bit depth, the reference viewing condition, and the reference
medium.
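
The sRGB-curve-vs-gamma-2.2 distinction is easy to check numerically. A quick sketch using the standard IEC 61966-2-1 formulas (my own illustration, not from the article):

```python
# The sRGB tone response curve vs. a pure 2.2 gamma: close, but not equal.
# Both formulas below are the standard published definitions.

def srgb_to_linear(v):
    """Decode an sRGB-encoded value (0..1) to linear light."""
    if v <= 0.04045:
        return v / 12.92          # linear segment near black
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(v):
    return v ** 2.2

for v in (0.01, 0.1, 0.5, 0.9):
    print(v, srgb_to_linear(v), gamma22_to_linear(v))

# e.g. at v = 0.01 the sRGB curve is in its linear segment
# (0.01 / 12.92 ~ 0.000774) while a pure 2.2 gamma gives ~ 0.0000398,
# roughly a 19x difference -- the curves diverge most near black.
```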

Very often, ignoring the differences in reference viewing condition or medium
gets people into trouble when they do transforms between these encodings and
think the environment or medium doesn't have to change. For example, there are
four variants of DCI-P3, with the same primaries but different white points and
different tone response curves, to account for their different reference
viewing conditions. Are they all the same color space? Errr, maybe, yes? Are
they all the same color image encoding? Definitely not. Same for ITU-R
BT.709 vs sRGB; same for Adobe RGB (1998) vs opRGB. As the reference media
differ, so will the dynamic range.

CIE XYZ, based on the 1931 standard observer, continues to be put through its
paces. There's also the 1964 standard observer, derived from it plus
supplemental observations and research. Dozens of color appearance models have
appeared, and they remain an active area of research. Some try to account for
various features of human vision, including the effects exposed by optical
illusions (illusions make them look like a kind of trick or failure, but in
fact they're features) such as simultaneous contrast and the Bezold effect,
distinguishing between saturation, chroma, and colorfulness, and many other
aspects of appearance. Recently there's some understanding that the idea of a
single standard observer is probably wrong, and how to go about categorizing
and handling multiple standard observers (i.e. normal color vision) is yet
another area of active research.

For anyone interested in color science, or with substantial math and/or
computing skill and looking for a unique real-world application, I highly
recommend IS&T's annual Color Imaging Conference. imaging.org

------
chewxy
Out of curiosity, what is stopping screen makers from giving displays a gamut
covering the entirety of the colour space defined by the CIE chromaticity
diagram?

Is there some fundamental limitation of liquid crystals? I understand, for
example, that CRTs require a red phosphor made of yttrium and europium, which
somewhat limits the emitted frequencies. What about LEDs and LCDs? What are
the limitations on making a wider gamut?

Do note my knowledge of this side of things is close to nil (the specific
example of yttrium and europium as a red phosphor was a story told to me by a
physicist friend, and it stuck)

~~~
seandougall
If you take a look at the xyY diagram:

> [https://ciechanow.ski/images/color_gamut_srgb.svg](https://ciechanow.ski/images/color_gamut_srgb.svg)

the gamut is defined by the triangle connecting the RGB primaries. If you
could add a primary at a more pure green, you'd get a bigger triangle and a
wider gamut, but still not the entirety of the chromaticity space. If you
start adding other components that lie outside that triangle, you start
expanding the polygon, but there will always be parts of the diagram it
doesn't cover.

In order to encompass the entire chromaticity space, you'd have to have a
large (technically infinite) number of components. You can never have a
component outside the diagram, because the curved hull is the pure
chromaticity of light at each wavelength; light outside the curve doesn't
exist, and light below the straight "line of purples" is invisible (UV/IR).
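
The triangle geometry is simple enough to test directly. Here's a sketch of a point-in-triangle gamut check using the standard published sRGB primary chromaticities (the function names are mine):

```python
# Is a chromaticity (x, y) inside the sRGB gamut triangle?
# Primary coordinates are the standard sRGB (x, y) values.
SRGB_PRIMARIES = [(0.64, 0.33),   # red
                  (0.30, 0.60),   # green
                  (0.15, 0.06)]   # blue

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_gamut(p, tri=SRGB_PRIMARIES):
    """Point-in-triangle test: the point is inside iff the cross
    products against all three edges share the same sign."""
    r, g, b = tri
    signs = [cross(r, g, p), cross(g, b, p), cross(b, r, p)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(in_gamut((0.3127, 0.3290)))  # D65 white point: True
print(in_gamut((0.08, 0.83)))      # near the spectral locus (~520 nm): False
```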

Theoretically you could get much more coverage by adding more components as
new types of emitters are developed. This is happening in high-end theatrical
lighting, where you can cram a ton of different LEDs into a fixture, but it's
not (yet) practical to cram them all into an on-screen pixel.

Things get even more mind-bending when you start illuminating objects with
light instead of looking at the light directly, because then there's a benefit
to adding components even if they don't expand the gamut. Once you have four
or more components, you can make many colors an infinite number of ways
(metamers), and they might look absolutely indistinguishable when lighting a
white wall, but they'll make skin tones look completely different if you made
them by combining just RGB versus, say, amber and blue. That's not really
related to your question, but I've just started delving into this area in my
day job, and it's exciting and trippy. :)
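
To make the metamer point concrete: with four primaries the 3x4 mixing matrix has a null space, so different drive levels can hit the same tristimulus value. A toy sketch with made-up primary XYZ vectors (chosen so the null direction is obvious, not measured from any real fixture):

```python
# Toy metamerism demo: with four primaries, different drive weights can
# produce the *same* XYZ tristimulus value. Primary XYZ vectors are made
# up for illustration; the "amber" is deliberately the average of red
# and green, so the mixing matrix has an obvious null direction.
P1 = (0.6, 0.3, 0.0)                                  # "red"
P2 = (0.2, 0.7, 0.1)                                  # "green"
P3 = (0.2, 0.1, 0.9)                                  # "blue"
P4 = tuple(0.5 * (a + b) for a, b in zip(P1, P2))     # "amber"

def mix(weights):
    prims = (P1, P2, P3, P4)
    return tuple(sum(w * p[i] for w, p in zip(weights, prims))
                 for i in range(3))

# Two different drive vectors, shifted along the null direction
# (0.5, 0.5, 0, -1): trading some amber for equal parts red + green.
w1 = (0.3, 0.3, 0.5, 0.4)
w2 = (0.4, 0.4, 0.5, 0.2)
print(mix(w1), mix(w2))   # same XYZ from different drives: metamers
```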

~~~
jacobolus
> _it's not (yet) practical to cram them all into an on-screen pixel_

At the pixel density (>500,000 subpixels per square inch) of current flagship
smartphones, I think it would actually work just fine to have 6 or 9 (or
whatever) different primaries, make a grid of little pixels in some scattered
but regular arrangement, and then use clever digital signal processing to
figure out how to convert an RGB image into a RR'GG'BB' (or whatever) image. I
would expect it to look visually identical at typical viewing distances for
existing images, while potentially allowing someone with low-level hardware
access to make a wider gamut / choose their preferred metamer / etc.

I would expect it to be entirely achievable using current technology and not
inordinately more expensive than current displays.

But the engineering effort and additional complexity might not provide enough
benefit for display vendors to invest in that, vs. just continuing to optimize
their current display concepts. Or they might not even be considering radical
changes in strategy.

------
pzone
The CIE XYZ color space is fundamental to all modern color space processing.
Very cool that it was defined in 1931 and has held up more-or-less unchanged
to today. Be sure to click and drag on the 3d cubes!
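
For a sense of what "fundamental" means in practice: converting an sRGB pixel to XYZ is just a linearization followed by one 3x3 matrix multiply. A sketch using the standard sRGB/D65 matrix (textbook values, not from the article):

```python
# sRGB (0..255) -> CIE XYZ: linearize each channel, then one 3x3 matrix.
# The matrix is the standard sRGB/D65 one.
M = [[0.4124, 0.3576, 0.1805],
     [0.2126, 0.7152, 0.0722],
     [0.0193, 0.1192, 0.9505]]

def linearize(u8):
    """Decode one 8-bit sRGB channel to linear light (0..1)."""
    v = u8 / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    lin = [linearize(c) for c in (r, g, b)]
    return tuple(sum(M[i][j] * lin[j] for j in range(3)) for i in range(3))

# Full white lands on the D65 white point, with Y (luminance) = 1.0.
print(srgb_to_xyz(255, 255, 255))  # ~ (0.9505, 1.0000, 1.0890)
```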

~~~
zokier
> Very cool that it was defined in 1931 and has held up more-or-less unchanged
> to today

I wonder if the color mixing experiments have been recreated since the 30s,
and if there have been any observable shifts in the curves

~~~
jacobolus
There have been a bunch of additional experiments. But under well-lit
conditions with typical light sources, for a typical viewer with normal color
vision looking directly at an image / colored object, the 1931 functions are
close enough that switching isn’t worth the transition cost.

The best recent data can be found at
[http://www.cvrl.org/](http://www.cvrl.org/)

------
edwintorok
There are also color appearance models; the latest is CAM16-UCS[1], which is
simpler than the older CIECAM02[2] model, though it's harder to find papers
explaining it. The Jab[2] representation is more perceptually uniform than
Lab.

[1]
[https://colour.readthedocs.io/en/develop/_modules/colour/app...](https://colour.readthedocs.io/en/develop/_modules/colour/appearance/cam16.html)
[2]
[https://en.wikipedia.org/wiki/CIECAM02](https://en.wikipedia.org/wiki/CIECAM02)

------
kuon
For web design, I use HSLuv[1] which is very handy.

[1]: [http://www.hsluv.org/](http://www.hsluv.org/)

~~~
Exuma
Can you explain in laymans terms to another web designer why that is
important? Like what problem does it solve?

~~~
kuon
For example, say you want to dynamically generate a background and text color
based on a hue. With HSLuv you can simply use `hsluv(120, 60%, 10%)` for the
background color and `hsluv(120, 60%, 90%)` for the foreground text color, and
you will have text on background with an 80% perceived luminance difference.
You can replace the `120` in my example with any hue in the `0-360` range and
have consistent results.

The idea is that the color space is perceptually linear in saturation and
luminance, regardless of hue.

You can see it clearly if you compare something like
[http://www.hslpicker.com](http://www.hslpicker.com) with
[http://www.hsluv.org/](http://www.hsluv.org/): if you drag the hue slider on
the HSL version, the luminosity varies; it does not on the HSLuv version.
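
The hue-dependent luminosity of plain HSL is easy to demonstrate with Python's stdlib `colorsys` and the standard Rec. 709 luminance weights. A sketch of the problem HSLuv fixes (it doesn't use the hsluv library itself):

```python
import colorsys

# Plain HSL holds "lightness" constant while actual luminance swings
# wildly with hue -- exactly what HSLuv is designed to fix.

def relative_luminance(r, g, b):
    """CIE relative luminance of an sRGB-encoded triple (0..1 channels)."""
    def lin(v):  # sRGB decoding to linear light
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

for hue_deg in (0, 60, 120, 240):        # red, yellow, green, blue
    # Note colorsys takes (h, l, s); HSL lightness fixed at 50%.
    r, g, b = colorsys.hls_to_rgb(hue_deg / 360, 0.5, 1.0)
    print(hue_deg, round(relative_luminance(r, g, b), 3))
# Same HSL lightness, but blue comes out far darker than yellow or green.
```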

~~~
Exuma
Ahh, very interesting. That is cool.

------
mroche
This is a very well written article, and boils down the complexity of color
science into very digestible bites. As someone who has to deal with color
spaces daily (VFX/CG), having something like this when I started would have
been incredibly helpful.

------
lettergram
I've actually used color spaces in a lot of my work, and it can be _highly_
effective to alter the color space to improve performance of computer vision
applications (even the ML kind):

[https://austingwalters.com/edge-detection-in-computer-vision/](https://austingwalters.com/edge-detection-in-computer-vision/)

[https://austingwalters.com/chromatags/](https://austingwalters.com/chromatags/)

------
teddyh
See also the venerable Color FAQ, a link to which is conspicuously absent:

[http://www.poynton.com/ColorFAQ.html](http://www.poynton.com/ColorFAQ.html)

------
Straw
No mention of CIELAB? Unfortunate; it's a really neat way to get perceptual
uniformity, and it has intuitive coordinates.

------
platz
gimp needs a LAB mode

~~~
pmoriarty
It has one.

~~~
platz
I see. It just operates very differently; I was hoping to view the result of
applying LAB curves in real time on the original image, instead of a multi-
step process of decomposing and re-composing each time to view the results.

I guess I can just use two windows, though.

