> a new not-sRGB-anymore color space that happens to have a compatible white point and RGB primaries.
Do you know if this new color space has an official name?
I work with it basically every day via the three.js renderer internals and mostly see it referred to as simply "linear", but when more specificity is needed I usually see "linear sRGB". I think that's common even if it's not correct.
However, as HDR color spaces take over from sRGB, I expect it'll become more important to have a correct name for this color space. I've searched for one before but haven't been able to come up with anything better than "linear sRGB".
“Linear” is in practice the official name. The only additional specificity you can have is to give the scalar conversion factor from your linear colors into physical units like lumens or candela or watts per solid angle or surface area. What linear means is that you’re working in a scaled set of physical units. The scale factor is the only thing missing.
You might not ever need the physical scale factor; it doesn’t generally matter. You probably didn’t set your light or color intensities in three.js using physical units in the first place, and even if you did, you might simply be scaling the values until things look right. We usually scale the inputs and outputs manually anyway, so the absolute scale of the intermediate linear color space in CG is frequently irrelevant.
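To make that concrete, a minimal sketch: the "extra specificity" is literally one multiplier. The 203 cd/m² figure below is just an illustrative assumption (a common HDR reference-white choice), not anything three.js defines:

```javascript
// Hypothetical calibration, for illustration only: a single scalar maps
// unitless linear color values onto physical luminance in nits (cd/m^2).
// NITS_PER_UNIT = 203 is an assumed choice, not a three.js constant.
const NITS_PER_UNIT = 203;

function linearToNits(v) {
  return v * NITS_PER_UNIT; // linear 1.0 -> 203 cd/m^2 under this assumption
}

function nitsToLinear(nits) {
  return nits / NITS_PER_UNIT;
}
```

Everything else about the space (white point, primaries) is unchanged; that scalar is the only calibration being left unstated when people just say "linear".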
HDR is mostly referring to whether your values are conceptually [0..1] or [0..infinity]. Sometimes people are also referring to how many bits get used per color channel. (But note that the R in HDR refers to ‘range’, not ‘resolution’.) 8 bit color values that represent [0..1] are unambiguously LDR, while 32 bits per channel color values that represent [0..inf] are unambiguously HDR. If you use 16 bit colors and your range is [0..3] to allow for headroom and glare effects, it’s not really considered HDR, but it’s not exactly LDR either. It mostly only makes sense to work with HDR values in a linear color space, but it’s not strictly or by definition necessary.
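A tiny illustration of the range-vs-resolution point, using plain typed arrays (nothing three.js-specific):

```javascript
// 8-bit storage can only represent [0..1] (scaled to 0..255): any
// above-white value clips, so it is unambiguously LDR.
const ldr = new Uint8ClampedArray(1);
ldr[0] = Math.round(1.5 * 255); // clamps to 255, i.e. decodes back as 1.0

// Float storage has no such ceiling: 1.5 ("brighter than white") survives.
const hdr = new Float32Array(1);
hdr[0] = 1.5;
```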
“Linear sRGB” gets used sometimes, and what it means is that a color was converted from sRGB into linear. So the space is actually just linear. It’s sometimes useful to know that a color came from sRGB, because there’s less worry about going back into sRGB, and it helps you understand something about the scale and brightness of your linear values. If you have linear values and don’t know where they came from, you have no point of reference, and no idea whether a value of 0.1 means invisible black, blinding white, or just middle gray. But unless I scale the linear colors, I do know a linear sRGB value of 0.1 is a visible dark gray, because I know the range is approximately [0..1], since it came from sRGB.
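For reference, a sketch of the standard per-channel sRGB decode and encode (the piecewise curve from the sRGB spec; three.js applies essentially this when converting between its sRGB and linear color spaces):

```javascript
// sRGB-encoded channel in [0..1] -> linear, per the piecewise sRGB curve:
// a linear segment near black, then a 2.4 exponent above the knee.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// The inverse: linear -> sRGB-encoded.
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}
```

Note how the encoding compresses: a linear 0.1 encodes to roughly sRGB 0.35, which is part of why linear 0.1 reads as a clearly visible dark gray rather than near-black.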
> You probably didn’t set your light or color intensities in three.js using physical units in the first place,
Nod... unless you're using a color space, like Jzazbz, with an absolute (not relative) transfer function. Application context then motivates the nit pick. So a Jzazbz HDR range of [0..10k nits] might get a daylight 1k nits picked for diffuse (non-highlight) white. Thus I appreciate it when color libraries provide a distinct Absolute XYZ D65 data type.
This is a good point! Jzazbz is a good example of the (somewhat rare) application of perceptual response functions in high dynamic range. I’d be curious to hear what reasons you have to use Jzazbz. The color space seems best at scenarios where you need to compute HDR color differences. Most people don’t need that, but some advanced users like you do. Worth noting in this context that Jzazbz is non-linear, and even though it’s HDR, it still requires conversion to a linear space in order to blend colors physically.
I want to elaborate just slightly on what I meant. In CG, as in real photography, the photographer almost always sets some combination of exposure, aperture, and/or white balance manually, regardless of how well they’re controlling inputs and color spaces. As a simple example, I can always dim the lights to half power, and then either expose for twice as long or open my shutter by one f-stop. When I dim the lights by half and double the exposure, nothing about the image changes. Even if you know the physical luminosity of your lights, the overall process (usually) still cancels scale factors and in the end reduces to an arbitrary scaling that essentially sets the output white and output black to the brightest and darkest colors I want to show, respectively.
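The cancellation is easy to see in a toy model (this linear `record()` function is a made-up stand-in for the whole capture pipeline, not anything from a real API):

```javascript
// Toy capture model: recorded linear value is proportional to
// light intensity times exposure time (everything else held fixed).
function record(lightIntensity, exposureTime) {
  return lightIntensity * exposureTime;
}

// Dim the lights by half, double the exposure (one f-stop): the product
// is unchanged, so the image is identical even though the "physical"
// inputs differ.
const original = record(1000, 1 / 60);
const compensated = record(500, 2 / 60);
```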
With real film, there are additional arbitrary scaling values like the film development process, and lighting conditions when viewing. With digital, the display’s scaling is frequently unknown, and thus arbitrary; you might use 1k nits for your daylight diffuse, but that’s not what your monitor actually shows you.
There are legit use-cases for wanting absolute physical values in the intermediate stages, and using a “linear” color space for blending is, in a way, emulating exactly that desire but still with an arbitrary scale because the input brightness and/or output brightness are being tuned by hand.
> I’d be curious to hear what reasons you have to use Jzazbz
I needed a color space for teaching, so I'm using it to kludge wide-gamut hue linearity onto CAM16UCS. I wish I knew of something better to do.
Backstory is, I've long struggled to find a community of interest and discussion around the idea of transformatively improving science education content by collaboratively applying vastly greater-than-usual domain expertise. I thought to troll with a worked example, and chose "color". Even though "I'm building a hard thing solo, because I really don't want to build such things solo?!?"... sigh. Color is taught pervasively in early primary, but with such poor content and outcomes that foundational confusion persists even among first-tier physical-sciences graduate students. So: might color be taught more clearly by emphasizing spectra, in early primary? With web interactives? It seemed a question worth exploring.
So I needed a perceptual color space optimized for learning color. A rather different objective than usually drives color space design. Computability doesn't matter, as small lookup tables suffice. But the noticeable features from eyeballing the space should be "yes, that's color perception", not "oh no, that's an artifact of the model - try to ignore that aspect". The usual art/science education "one part careful, nine+ parts misleading bogosity" just doesn't end well. CAM16UCS is generally good at avoiding "that... doesn't look plausible". Better than other UCSs I skimmed. But... hue gets curvy near its edges, especially badly in blue. Jzazbz regrettably fails the "space looks plausible" test, but is wide-gamut and has nice hue linearity. So I'm using it to kludge greater linearity onto CAM16UCS, between the sRGB gamut and the visible boundary. Which helps in understandably coupling the space to spectra.
Getting from daylight to banana to spectra and apparent spectra and color space provides a lot of opportunity to get things wrong. As does interactive draw-a-spectra. Swapping in Jzazbz for end-to-end HDR physical units provides the warm fuzzy of a sanity check.
Thus three.js intensities in nits, and an opportunity for Sunday punning.
It's just a silly niche use case, rather than advanced use, but modulo burnout, I'll be back at it tomorrow. Also... there's an order-of-magnitude-luminance "why don't you see a sky full of stars" interactive somewhere on my infinite todo list, so getting perceptual luminance right had additional motivation. Though that will have to handle mesopic and scotopic perception... sigh. Appreciated your comments.
Hey, that’s really cool and not silly or niche at all. I actually think it’s extra advanced to not just understand color so deeply but to try to teach it at that level. Pretty hard core in my book; color is a surprisingly hard and deep subject.
I had a few good teachers along the way who taught me a ton about color and computer graphics. Peter Shirley is probably the top of my personal list, and he’s also very interested in improving science education, so maybe worth checking out his CG books. In his rendering class we bought a MacBeth color checker and had to take a photograph of it set inside a scene that we physically built out of whatever materials and light sources we wanted, and then we had to write our own renderers to match the photo. You really learn a lot about color trying to do that! :)
Pete and I have since designed a simple opponent-process color space aimed at ease of use that is perceptually uniform ‘enough’, but we haven’t published it yet. That could be a fun way to teach though: have students design their own color space! There also might be some interesting work going on in the area of digital color picker interfaces that could be useful for teaching. I’m forgetting who does this stuff, so can’t cough up any links at the moment, but I’ve definitely seen a couple of neat presentations recently that combine the science of perception and color spaces along with good design sensibilities to make interfaces that are demonstrably better for artists than the crappy RGB & HSV & other pickers we usually see today.
Oh yeah. I was (hmm, am) hoping to dig out some limited cluster of concepts that gels as "oh, that's fun, and I'd have very much not thought you could teach that, or that way". Eg xkcd color name regions in a voxelized color space as a "learn color names" K-1 worksheet. Trying for a "light" (spectra) vs "color" (human perception) distinction, limiting scope on perception to early-primary color topics.
Science education rarely does deep, integrated, or usable, so it's rather an open question what might be possible. And without a vision of that, there's little incentive for funding or exploring other than incremental improvement.
Even aside from creation pragmatics. "A big onboarding welcome to all our new science textbook author staff! Worry not that you are fresh liberal arts graduates, with no science background at all, for we've A Scientist on call!"(paraphrased)
> a simple opponent-process color space
Ah, neat. That could be fun.
Given my focus on spectra, I was also tempted by IGPGTG's "build on Gaussian spectra", but pruned.
> have students design their own color space!
Oh, there's an intriguing idea. I'd thought as far as "if I can't find a color space without misleading large artifacts, I'll have to offer a diverse swappable several, if students are to have any hope of distinguishing data from noise". But a direct manipulation "create your own color space"... hmmm. Perhaps evaluated with visual metrics? Like gradients for linearity. Or mixing for Euclidean-ness and hue order - reorder a hue circle, and get weirdness in the in-betweens? Neat approach - not "this is just how it is" but "hands on, you'll find this a sweet spot". Hmm, if say, "Pink is the most important color! So its variants should be the focus of my space!"... what might that look like? A "pink-set" of color names? Eg, not "green" but "anti-magenta"? Not "white" but "palest pink"? :) I so miss pre-covid brainstorming like this at MIT. :/
> a couple of neat presentations recently that combine the science of perception and color spaces along with good design sensibilities to make interfaces that are demonstrably better for artists
Very neat. If a link surfaces, I'd be interested. There's some work on simulating pigment spectra and interaction for "physically realistic" painting, and I wondered about their UI. And whether an MVP web app with such might serve as a hands-on antidote to the "subtractive rules are peers to additive" misconception.
> take a photograph of it set inside a scene that we physically built
I wondered how controlled lighting (filter gel on box, discrete LEDs, and tablets getting the narrow spectra of quantum-dot displays) might be leveraged. Eg, "sketch what you think this page/thing/scene will look like if lit like X, then try it and describe it, and take a picture"?
And sketched, sort of, a point renderer - a block UI for playing with spectra. Eg, a Sun block, realistically pictured with red-tinted rim on white, with a spectra and white color, passed through an atmosphere block set at angle X, with this transmittance/reflectance/fluorescence strip, yielding this spectra and color, lighting this point on a multispectral image of fruit, with/yielding ditto. Seen by this camera to get RGB, shown by this display to get this spectra. Banana vs banana pixel. Filters, bounced sources, etc. And then perhaps challenges of "assemble blocks to get a spectra like this". Or light this multispectral MacBeth chart or scene to match this snapshot. Or bantering, "your freehand or musically-keyboarded spectra looks like this strawberry ice cream with blue M&Ms". A hands-on way to address the many misconceptions around lighting and color.
And then there's video shaders, filtering or fragmenting by color space and names, for interacting with color in the environment. And... the minor challenge of finding a coherent MV"P" in all of this. :)
When you take sRGB and apply an inverse gamma curve to it, you end up in linear-light space. Is that what you're referring to?
Which inverse gamma curve to use has no single answer; it depends on how you intend to map that linear light onto... whatever it is you're doing. The obvious choice is the nominal gamma of 2.2. That may or may not make sense depending on your display technology and what you're doing.
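To make the choice concrete, here's a sketch comparing a pure 2.2 power law against sRGB's piecewise curve (linear segment below the 0.04045 knee, a 2.4 exponent above it). The two track closely through the midtones but diverge sharply near black:

```javascript
// Pure power-law decode at the nominal gamma of 2.2.
function gamma22ToLinear(c) {
  return Math.pow(c, 2.2);
}

// Piecewise decode from the sRGB spec.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Midtones nearly agree; near black the piecewise curve is much lighter.
const midDiff = Math.abs(gamma22ToLinear(0.5) - srgbToLinear(0.5)); // tiny
const darkRatio = srgbToLinear(0.02) / gamma22ToLinear(0.02);       // large
```

Which one is "right" depends on whether you're matching the sRGB standard's encoding math or a particular display's actual response.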