People who care about color should be pushing to change the default color representation to a linear format with 16 bits per channel, rather than making marginal changes at the edges of 8-bit-per-channel gamma-compressed representations.
That has to start with the addition of 16-bit floating-point format support to various hardware. It's really a crying shame that so little hardware supports 16-bit floating point. In addition to imaging, it would be useful for audio and deep learning too.
Part of it is probably the kind of content you care about: "artificial-looking" UI such as typical desktop apps cares least (imho) about gamut. But if they don't do subtle gradients, then you won't notice banding either. On the other hand, for photographic content, you really notice those new colors you just can't represent in sRGB. I would really regret having to give that up, and would tolerate even visibly annoying banding every day if it means having pictures that actually look like real life, and not like muddy, lifeless copies.
But to be clear: I very rarely notice any banding whatsoever, and when I do it certainly seems to be due to the source, not the processing. Perhaps it's an OS/driver thing, or perhaps it's because most pictures contain sufficient noise to act as poor man's dithering, but banding just doesn't seem to be a (meaningful) issue. I mean: it's visible if you have large-area subtle gradients, but not to a huge degree, and, well... don't do that?
Most things I see are either entirely flat (no banding) or photographic (noisy enough that you won't easily see banding).
I agree that a deeper color space is about time, although I suspect that going 16-bit floating point is an overreaction for most scenarios. Floating point isn't free, and neither are all those extra bits. With a decently high gamma, you can probably get away with just 10 bits per channel, which would conveniently keep a pixel within 32 bits for efficient packed processing. And in the odd case that you really want to spend more than 10 bits on color detail, even for HDR you don't need floating point: due to gamma correction, 2 extra bits means 20-50 times more light, more than enough for any kind of HDR that's likely to be displayable any time soon (and really, even if we could make peaks of 30,000 nits, does that sound like something you want to look at?)
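To put numbers on that claim, here's a quick back-of-envelope check (a Python sketch, assuming a pure power-law encoding where linear light scales as code_value ** gamma):

    # Rough check of the "2 extra bits = 20-50x more light" claim, assuming
    # a pure power-law encoding: linear_light ~ code_value ** gamma.
    for gamma in (2.2, 2.4, 2.6, 2.8):
        headroom = 4 ** gamma  # 2 extra bits = 4x the code range
        print(f"gamma {gamma}: 2 extra bits buy ~{headroom:.0f}x more peak light")
    # gamma 2.2 -> ~21x, gamma 2.8 -> ~48x, i.e. roughly 20-50x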
For HDR, a PQ (perceptual quantisation) encoding curve is already standardised by SMPTE - page 8 in these slides goes through the process of how it was worked out, right from human visual system basics: https://www.smpte.org/sites/default/files/2014-05-06-EOTF-Mi...
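The PQ curve is also simple enough to sketch directly. A minimal Python version of the ST 2084 EOTF (constants are the published ones; a full-scale signal maps to the 10,000-nit peak):

    M1 = 2610 / 16384
    M2 = 2523 / 4096 * 128
    C1 = 3424 / 4096
    C2 = 2413 / 4096 * 32
    C3 = 2392 / 4096 * 32

    def pq_eotf(signal):
        # Map a PQ-encoded signal (0..1) to absolute luminance in nits.
        p = signal ** (1 / M2)
        return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

    # The spacing is perceptual, not linear: half the code range is ~92 nits,
    # not 5,000 -- the top half of the signal carries all the HDR headroom.
    print(pq_eotf(0.5), pq_eotf(1.0))   # ~92.2, 10000.0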
Given the abundance of log or gamma encodings for display imagery you might wonder why true linear 16-bit float is so common in CG production - why not log encode those 16 bits and get loads smoother gradients? Maybe the answer is that during production those linear files often also encode non-image data like vertex positions and normals, and perceptually "good" quantisation of those could lead to unexpected precision problems...
The aforementioned DCI-P3 even has a higher gamma value of 2.6. Currently, almost all design composition is done in gamma-compressed space, and the incorrect anti-aliasing and blending will be even worse on those devices.
Another thing is that most displays aren't even calibrated properly, to say nothing of the technical characteristics of the screens themselves.
Text blending in linear space is perceived as "too thin" and inconsistent because font weights are chosen for sRGB blending.
With light fonts at small sizes, sRGB blending also shows apparent weight changes depending on the background.
But with bold fonts the weight stays consistent, and the shapes are perfectly smooth with linear blending.
And with more colors, sRGB blending is a failure.
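For anyone who wants to see the effect being described, here's a minimal sketch (plain Python, standard sRGB transfer function) of compositing a 50%-coverage black glyph edge over a white background in both spaces:

    def srgb_to_linear(v):
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(v):
        return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

    fg, bg, a = 0.0, 1.0, 0.5        # black glyph, white page, 50% coverage

    naive = fg * a + bg * (1 - a)    # blending the gamma-encoded values
    linear = linear_to_srgb(srgb_to_linear(fg) * a + srgb_to_linear(bg) * (1 - a))

    # naive -> 0.500, linear -> ~0.735. The naive sRGB blend makes edge pixels
    # emit much less light, so anti-aliased strokes look darker and heavier --
    # and fonts whose weights were tuned against that look "too thin" when
    # blended linearly.
    print(round(naive, 3), round(linear, 3))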
Unless they're just being mapped directly, which would probably make everything automatically look much more vivid... and that would help sell those displays too.
When gradients are visible, more or less the only thing you can do is introduce artificial monochromatic noise into the image to hide the perception of a stepped gradient.
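A toy version of that trick (a sketch; the quantizer is deliberately coarse so the banding is obvious):

    import random

    LEVELS = 16   # deliberately few output levels, to make the steps visible

    def quantize(v):
        return max(0, min(LEVELS - 1, round(v * (LEVELS - 1))))

    ramp = [i / 255 for i in range(256)]     # a smooth 0..1 gradient
    banded = [quantize(v) for v in ramp]     # long runs of equal values = bands
    dithered = [quantize(v + random.uniform(-0.5, 0.5) / (LEVELS - 1))
                for v in ramp]               # noise breaks the runs apart

    # The eye averages the dithered grain back into a smooth-looking gradient,
    # while the undithered version shows 16 flat steps.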
I would like to see an industry-wide push for a consumer-grade (at least) 10-bit signal chain from graphics cards to monitors, with high dynamic range. That would have more impact on image quality than the crap being pushed now, like 4K and VR.
HDR B4 4K, chaps!
I can clearly see the logo and I'm using a Samsung LCD from 2003. The pixel values are significantly different; the background is 255,0,0 and the logo is 241,0,0. The rest of my hardware is 2009-vintage. Likewise, the two shoe examples look very slightly (not a whole lot, but it's noticeable) different.
> On an sRGB display, you can’t see the logo, because all the red values above 241 in Display P3 are beyond the highest red in sRGB, so the 241 red and the 255 red end up as the same color.
Seriously? 241 and 255 look very different to me, and if I remember correctly, would have always looked different in all the 24-bit colour monitors I've used.
Edit: experimentation with my monitor a little more shows that I can just barely discern 252,0,0 from 255,0,0, but 253 and 254 look identical to 255. This probably depends on my colour perception too. Either way, I still doubt my monitor is "wide-gamut", so what's going on here?
> Remember the red square with the faint WebKit logo? That was generated by creating an image in the Display P3 color space, filling it with 100% red, rgb(255, 0, 0), and then painting the logo in a slightly different red, rgb(241, 0, 0). On an sRGB display, you can’t see the logo, because all the red values above 241 in Display P3 are beyond the highest red in sRGB, so the 241 red and the 255 red end up as the same color.
"Debugging" color settings is tricky, because it can be hard what goes wrong along the chain, and what you end up observing may be counterintuitive.
EDIT: I observe the same behaviour as you on my early 2013 13" Retina MBP. Firefox seems to ignore the color profile on the red image while every other app doesn't (including downloading the image and using Preview). Other images, notably the Iceland one, show that my screen does have a wider gamut in some color areas, maybe just not in the red.
Correct; this was mentioned several times at WWDC.
A quick google finds this: http://cameratico.com/guides/web-browser-color-management-gu... which suggests that even on macOS it used to be at least as good as Safari, although apparently by default it made the (idiotic) assumption that untagged images should be displayed in the native gamut. They should be displayed as sRGB, because untagged almost always means "the author didn't think about it, and had an sRGB display".
Not sure what's going on with that image in the WebKit demo, but I'll note that clipping out-of-gamut colors is by no means the obvious solution to out-of-gamut colors. That's what rendering intents are all about; well-known ones include absolute, relative, and perceptual intent. Perhaps Firefox is assuming a perceptual intent, which here would cause desaturation to retain out-of-gamut detail.
Edit: according to the MozillaZine KB, the default rendering intent Firefox uses is perceptual, which explains why out-of-gamut detail is retained: http://kb.mozillazine.org/Gfx.color_management.rendering_int...
What they are trying to demonstrate only works if the app is interpreting the pixel values in P3 and then converting them to sRGB. A graphics stack that includes a CMS (color management system) will always do this in preparation for display on an sRGB device, which is probably very close to your display's gamut.
The macOS graphics stack incorporates a CMS (ColorSync) that is turned on by default. Windows and Linux have great CMS support as well, but depending on the OS it may require installing an extra package, or enabling a CMS in the display properties.
WebKit is intentionally squashing rgb(241,0,0) into rgb(255,0,0), whether the display shows them as distinct colors or not. That might improve consistency across displays, in that it'll make every display lower quality, but it doesn't actually improve quality. It might be nice to know that the display you render on has better color fidelity, but why not show all the colors present in the image rather than squashing them together? Then, for instance, if you have a display that has better color reproduction than sRGB but not as good as P3, you'll get the full color reproduction your display can manage.
In fact, I'd say that if you displayed that example image simultaneously on a non-colour-managed sRGB monitor (logo visible), colour-managed sRGB monitor (logo invisible), and (colour-managed?) P3 monitor (logo visible), no average user is going to want the middle option.
That's not a reasonable assumption to make about an arbitrary image that doesn't contain a color profile. For instance, consider a pixel-art image drawn entirely on a computer and saved as a PNG with no color profile.
If an image contains a color profile, it's somewhat more reasonable to attempt to honor that color profile (though it's still odd not to at least attempt to take advantage of higher-gamut-but-not-identified-as-such displays by using the full 8-bit RGB range, on the assumption that the display may or may not render it accurately). And it seems reasonable to support media queries for image color profiles as well. But to quote the article:
> If an image doesn’t have a tagged profile, WebKit assumes that it is sRGB.
That's the behavior I'm arguing is completely wrong: if an image with no color profile contains the colors rgb(241, 0, 0) and rgb(255, 0, 0), the browser should render those as distinct colors, not squash them together.
(It’s okay though. This is something which developers without training in human color perception / color reproduction are commonly wrong about. It comes from lack of training and lack of experience. The proper thing to do is not a priori intuitively obvious to the layman, in the same way optimizing algorithms for CPU cache locality might not be obvious to a color scientist.)
If you take an image which was authored to be sRGB, and show it on a wide-gamut display, stretching all the colors out to fill the gamut, you will supersaturate all the colors and totally distort all of the lightness, hue, and chroma relationships in the image, and it will look terrible.
Likewise, if you take an image which was authored for P3 (or whatever), and squish it down to fit on an sRGB gamut, everything will end up undersaturated, again distorting all the lightness, hue, and chroma relationships. Again, it will look terrible.
There are fancier ways to do gamut mapping than pure clipping, but there is a lot of subtlety involved (Ján Morovič wrote a several-hundred-page monograph about this, if you want more details, https://amzn.com/0470030321), the best one to use varies from image to image, depends a bit on viewing conditions and other context, and the nicer methods are quite computationally expensive.
The HTML/CSS/etc. specs declare that untagged images and other untagged colors are to be treated as sRGB. This is the only reasonable assumption in a world where sRGB has been the primary standard for 20 years.
Any 2016 browser / operating system / image viewer which treats an untagged image as anything other than sRGB is spec-non-compliant and functionally broken. (Sadly, this includes most browsers on most platforms.)
Given the blatant loss of information that occurs if you do that, as evidenced by the WebKit logo example, I'd argue the spec is broken. Pixels of different values should be different when displayed, because that's what matters to the majority of users.
Yes, but «distorts the hue, lightness, and chroma relationships between colors as perceived by a large statistical sample of humans with typical color vision» (or perhaps easier, «... as computed using the CIE standard observer and a well-defined color appearance model») is not subjective.
You shouldn’t be “arguing” with the spec here or presuming to speak for “users” until you have studied up on color science, done concrete work with color reproduction, or at least compared both ways for yourself on a few dozen images.
Again, this is the naïve approach which is too often taken, but, contra your intuition, the results are dramatically worse if you do it that way. In an immediately obvious, not subtle way. What most “matters to the majority of users” is that their images look like what they expect. If you totally distort all the color relationships, the images end up looking completely different than intended, and “most users” will be dissatisfied with the output, blaming either the creator of the image or the software for being incompetent but without knowing quite what the problem is.
There are ways to avoid hard clipping, using a more sophisticated gamut mapping algorithm (I recommend you read Morovič’s book), but you can’t just treat one color space as if it were another.
In practice, hard clipping of out-of-gamut colors (assuming you do it along lines of constant hue/lightness in CIELAB space, or similar) ends up working reasonably well. You get some artifacts, but the vast majority of images stay away from the edges of the gamut.
(Photoshop and other Adobe apps have a pretty bad gamut clipping method, which is to map between color spaces without clipping, and then independently hard clip each color component. This sometimes causes severe hue shifts. Alas, even the industry leading software is often implemented by people who aren’t trained in color science, and don’t consider the edge cases. Or wrote their implementation 20 years ago and haven’t bothered to update it to keep up with computing improvements. It’s still much better than just pretending one color space is another though. I’m not sure precisely what Apple’s color management stack does.)
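To illustrate the difference, here's a simplified sketch in linear RGB (real implementations work in CIELAB or similar, as mentioned above; the luma weights and the sample value are just illustrative assumptions):

    def clip_per_channel(rgb):
        # Naive approach: clamp each channel independently. This changes the
        # ratios between channels, which is exactly the hue shift described.
        return tuple(min(max(c, 0.0), 1.0) for c in rgb)

    def clip_toward_gray(rgb):
        # Desaturate toward a gray of matching luma until everything fits.
        # Scaling all channels toward one point preserves their ratios.
        y = min(max(0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2], 0.0), 1.0)
        t = 1.0
        for c in rgb:
            if c > 1.0:
                t = min(t, (1.0 - y) / (c - y))
            elif c < 0.0:
                t = min(t, -y / (c - y))
        return tuple(y + t * (c - y) for c in rgb)

    color = (1.4, 0.2, -0.1)            # a hypothetical out-of-gamut value
    print(clip_per_channel(color))      # (1.0, 0.2, 0.0)    -- hue shifted
    print(clip_toward_gray(color))      # ~(1.0, 0.30, 0.12) -- hue kept, chroma lost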
Most people care whether the colours look good to them, not how close they are to some standard they probably don't know about. Almost everyone buying a monitor in a store is going to base their decision largely on the brightness and vividness of the colours.
> What most “matters to the majority of users” is that their images look like what they expect.
...and that is subjective.
Regardless, if 241,0,0 looks identical to 255,0,0 on a monitor with 24-bit colour, that's just not right at all.
I feel a bit like a broken record here. There are ways of mapping one color space down to a smaller one without hard clipping to the gamut boundary. But these can be computationally expensive and difficult to design properly.
However, assuming that a better gamut mapping method is unavailable, hard clipping out of gamut colors in practice works much better than just pretending two color spaces are the same. When you do the latter, images end up looking terrible.
The effect is immediately obvious to anyone with standard color vision, and as such is no more “subjective” than anything when we’re dealing with perception. In some sense all perception is subjective if you want to get philosophical. But in a practical sense, not really.
If you have a copy of Photoshop, you can try this for yourself. Collect a number of photographs or other images encoded in a large color space like P3. Then convert these to sRGB in two ways, (a) using the “assign profile” menu option and (b) using the “convert to profile” menu option.
Your proposal is to do the former. In practice, the results are entirely unacceptable. They look bad. That is, if you collect a group of humans with typical color vision and present both options, they will pick option (b) for almost all images, and for most images the right choice will be very obvious to everyone.
Say there is a background colour in sRGB and an image in P3, and both should display the same colour visually. Changing the visual display of the P3 image just because it could include non-sRGB colours will make them not match.
does have a color profile. By the way, it also uses 15-bit color channels.
Here is the same image after pngcrush removed the color profile PNG chunks:
In Safari on my non-high-gamut Mac, the logo is clearly visible. (Edit: To be specific, on the default "Color LCD" profile I get 252,13,27 and 238,12,25 as native equivalents of the sRGB 255,0,0 and 241,0,0. Presumably there is some slight clamping around the edges of the profile, as well as some inherent precision loss if WebKit is performing the conversion on 24-bit colors, but it's not anywhere near as crazy as making 241,0,0 and 255,0,0 look the same.)
> does have a color profile.
I didn't comment on the handling of that image. My comment related to the bit of the article I quoted, which states that WebKit treats images without a color profile as sRGB.
"RGB samples represent calibrated colour information if the colour space is indicated (by gAMA and cHRM, or sRGB, or iCCP) or uncalibrated device-dependent colour if not."
So WebKit shouldn't try to transform the colors to sRGB but should simply display them according to whatever display is present.
Having said that, sRGB was designed to be consistent, not high quality. It even has a "black point" defined, but nobody respects that, since it would impose a maximum contrast ratio on all images represented in sRGB.
The reason for this, for anyone not familiar with colours and pixels, is that the Apple P3 red primary is a different red from sRGB's. That means that if you do not provide additional metadata, the value will be dumped directly to whatever type of display you are using, making it display the colours using your display's lights.
As other posters have noted, if you view the colour with no colour management system in place, you are instructing the machine to display 241 intensity as is, which means you are seeing (very likely) the sRGB lights in your display set to 241 and 255 intensity; a very easy difference to spot. This is not the intention of his example.
In a colour managed system, that 241 value is transformed into an absolute colour model, and from there it is mapped to the smaller gamut of sRGB. That is, the Apple P3 RGB lights are converted to meaningful sRGB RGB light representations.
So why does an Apple P3 value end up being the "same" sRGB value, despite being different colours? It amounts to a mapping issue.
The entire range of intensity values at 8 bits per channel for the Apple P3 red channel is an identical colour (chromaticity) at all intensity levels. The same goes for sRGB; intensity does not change the colour. When we examine any single P3 intensity, we can map that colour to a "closest possible" sRGB triplet. No matter what we do, the P3 red is different from the sRGB red light, and the sRGB red light can never represent the fully saturated, different red of the Apple P3, no matter the encoded intensity value.
Given that the P3 primary for red is quite different, we end up with a value that, after transformation and clipping to the sRGB gamut, collides when mapped to sRGB; several different colours might end up mapped to the same sRGB colour due to quantization. In the case of 241, it happens to map to 255 under the rendering intent he selected, because one or both of the complementary channel values end up negative and are clipped in the smaller sRGB gamut. Every other value is also being mapped to sRGB, and there are going to be mapping collisions along the entire intensity range because, again, no matter what we do, the sRGB red lights can never represent the red lights of Apple's P3.
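A rough sketch of that pipeline in Python reproduces the collision (the matrices are the commonly published, rounded D65 ones for Display P3 and sRGB; Display P3 shares sRGB's transfer curve):

    def decode(v8):                      # sRGB/P3 transfer decode, 8-bit in
        v = v8 / 255
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def encode(v):                       # linear -> encoded 8-bit, hard clip
        v = min(max(v, 0.0), 1.0)
        v = v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
        return round(v * 255)

    P3_TO_XYZ = [(0.4866, 0.2657, 0.1982),
                 (0.2290, 0.6917, 0.0793),
                 (0.0000, 0.0451, 1.0439)]
    XYZ_TO_SRGB = [( 3.2406, -1.5372, -0.4986),
                   (-0.9689,  1.8758,  0.0415),
                   ( 0.0557, -0.2040,  1.0570)]

    def mul(m, v):
        return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

    def p3_to_srgb(r8, g8, b8):
        xyz = mul(P3_TO_XYZ, tuple(decode(c) for c in (r8, g8, b8)))
        return tuple(encode(c) for c in mul(XYZ_TO_SRGB, xyz))

    print(p3_to_srgb(255, 0, 0))  # (255, 0, 0)
    print(p3_to_srgb(241, 0, 0))  # also (255, 0, 0): the logo disappears

Both P3 reds land outside the sRGB gamut (linear red above 1, green and blue slightly negative), so after clipping they collapse onto the same sRGB pixel.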
An excellent reference on ICC colour management is here for anyone interested, including covering rendering intents and other issues: http://www.cambridgeincolour.com/tutorials/color-space-conve...
So any discussion of consistent color on the web ignores the reality that the viewing device is likely not sRGB at all. And a problem just as big, sometimes worse, is that viewing conditions are highly variable.
Beyond just the topic at hand, this is a wonderful example of showing people how much more the web can do and why it is important that we keep pushing for progress.
Well, Chrome is still broken for some of these images on Linux. Not surprising though, it also screws up resizing images:
I wish they would fix these things before worrying about wide gamut.
Basically, if the Result doesn't match up with Expected, the browser is doing it wrong. Ideally browsers should handle YUV colorspaces like so (a sketch of this logic follows the list):
H.264 video - look for colorspace tagging in the video by default and use it if available, otherwise fall back to guessing based on resolution. SD video (up to 1024x576) should be converted with Rec.601, HD video (width >1024 or height >576) with Rec.709.
VP8 video - VP8 is defined as Rec.601 only, so always use it.
Theora - Same as VP8.
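Here's a sketch of that fallback logic (full-range math for brevity; real video is usually limited-range, which adds an offset and a scale, and the sample values are arbitrary):

    # Each standard is defined by its luma coefficients Kr and Kb.
    REC_601 = (0.299, 0.114)
    REC_709 = (0.2126, 0.0722)

    def pick_matrix(width, height, tagged=None):
        if tagged is not None:            # trust in-stream tagging first
            return tagged
        # Otherwise fall back to the resolution heuristic described above.
        return REC_709 if width > 1024 or height > 576 else REC_601

    def ycbcr_to_rgb(y, cb, cr, coeffs):
        # y in 0..1, cb/cr in -0.5..0.5 (full range).
        kr, kb = coeffs
        kg = 1.0 - kr - kb
        r = y + 2 * (1 - kr) * cr
        b = y + 2 * (1 - kb) * cb
        g = (y - kr * r - kb * b) / kg
        return r, g, b

    # The same sample decodes to different RGB under the two matrices:
    print(ycbcr_to_rgb(0.5, -0.2, 0.1, pick_matrix(720, 576)))    # Rec.601
    print(ycbcr_to_rgb(0.5, -0.2, 0.1, pick_matrix(1920, 1080)))  # Rec.709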
How browsers actually fare today (tested on Windows 10):
IE11/Edge - Always assumes Rec.601 for H.264 video. Doesn't support VP8/Theora.
Chrome - No colorspace tagging support for H.264. Converts HD video with Rec.709, SD video with Rec.601. 1024x576 is treated as HD already. VP8 is always converted with Rec.601 as it should. Theora gets Rec.601 in SD but incorrectly uses Rec.709 in HD.
Firefox - No colorspace tagging support for H.264. HD uses Rec.709, SD uses Rec.601, 1024x576 treated as HD like in Chrome. VP8 and Theora both always use Rec.601 as they should.
The unfortunate conclusion from this is that color accuracy is pretty much a crapshoot when dealing with HD video on the web. The only way to guarantee accurate results right now would be to convert your video to Rec.601 (if you're mastering HD video chances are you're using Rec.709 by default), serve VP8 video by default and have a Rec.601 H.264 fallback for IE/Edge (I haven't tested how Flash video playback handles this matter so you might also need a Rec.709 H.264 fallback for that).
FYI I tested Safari 9.1.1 (11601.6.17) and it failed on a whole bunch of the tests, including, frighteningly, the untagged HD 709 H264 :(
When Digital Color Meter (color-sampling tool on Mac) is set to "display native values," Chrome's values are consistent with CSS. When DCM is set to "display in sRGB," Safari's values are consistent with the CSS.
edit: Not shown in screenshot, but Firefox is the same as Chrome.
I can't tell if Firefox has a bug for this exact issue, but they do have some open color-management related issues.
It has a non-default hidden option where you could previously set it to work correctly, but that option stopped working some time ago (at least on OS X).
Chrome has had this bug filed for 7 years, and they keep ignoring it (although it's a very popular bug; people complain about it all the time). It drives me insane.
Basically I am forced to run Safari, although I'd very much prefer to use an open source browser.
Edge & Firefox support color management; though for best results in firefox you'll want to toggle http://kb.mozillazine.org/Gfx.color_management.mode to 1 - the default is 2, which is Not Good. Edge works out of the box.