That's not a reasonable assumption to make about an arbitrary image that doesn't contain a color profile. For instance, consider a pixel-art image drawn entirely on a computer and saved as a PNG with no color profile.
If an image contains a color profile, it's somewhat more reasonable to attempt to interpret that color profile (though it's still odd not to at least try to take advantage of higher-gamut-but-not-identified-as-such displays by using the full 8-bit RGB range, given that the display may or may not render anything beyond sRGB). And it seems reasonable to support media queries for image color profiles as well. But to quote the article:
> If an image doesn’t have a tagged profile, WebKit assumes that it is sRGB.
That's the behavior I'm arguing is completely wrong: if an image with no color profile contains the colors rgb(241, 0, 0) and rgb(255, 0, 0), the browser should render those as distinct colors, not squash them together.
(It’s okay though. This is something developers without training in human color perception or color reproduction are commonly wrong about; the proper thing to do is not a priori intuitively obvious to the layman, in the same way optimizing algorithms for CPU cache locality might not be obvious to a color scientist.)
If you take an image which was authored to be sRGB, and show it on a wide-gamut display, stretching all the colors out to fill the gamut, you will supersaturate all the colors and totally distort all of the lightness, hue, and chroma relationships in the image, and it will look terrible.
Likewise, if you take an image which was authored for P3 (or whatever), and squish it down to fit on an sRGB gamut, everything will end up undersaturated, again distorting all the lightness, hue, and chroma relationships. Again, it will look terrible.
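The supersaturation is easy to quantify. As an illustrative sketch (using the standard D65 linear-RGB-to-XYZ matrices, not anyone's actual rendering pipeline), feeding the same RGB triplet to sRGB primaries versus Display P3 primaries lands on visibly different chromaticities:

```python
# Why "stretching" sRGB pixel values across a wider gamut distorts colors:
# the same RGB triplet, reinterpreted against Display P3 primaries, lands on
# a different, more saturated chromaticity. Standard D65 matrices; toy example.

M_SRGB = [[0.41239, 0.35758, 0.18048],   # sRGB linear RGB -> XYZ (D65)
          [0.21264, 0.71517, 0.07219],
          [0.01933, 0.11919, 0.95053]]
M_P3   = [[0.48657, 0.26567, 0.19822],   # Display P3 linear RGB -> XYZ (D65)
          [0.22897, 0.69174, 0.07929],
          [0.00000, 0.04511, 1.04394]]

def to_xyz(M, rgb):
    return [sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3)]

def chromaticity(xyz):
    s = sum(xyz)
    return (xyz[0] / s, xyz[1] / s)      # CIE xy chromaticity

red = [1.0, 0.0, 0.0]                    # same numbers, two interpretations
print(chromaticity(to_xyz(M_SRGB, red)))  # sRGB red: ~(0.640, 0.330)
print(chromaticity(to_xyz(M_P3, red)))    # P3 red:   ~(0.680, 0.320)
```

The P3 reading is a noticeably deeper, more saturated red than the sRGB reading, even though the stored pixel value is identical, which is exactly the distortion you get for every color in the image when you stretch without converting.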
There are fancier ways to do gamut mapping than pure clipping, but there is a lot of subtlety involved (Ján Morovič wrote a several-hundred-page monograph about this, if you want more details, https://amzn.com/0470030321), the best one to use varies from image to image, depends a bit on viewing conditions and other context, and the nicer methods are quite computationally expensive.
The HTML/CSS/etc. specs declare that untagged images and other untagged colors are to be treated as sRGB. This is the only reasonable assumption in a world where sRGB has been the primary standard for 20 years.
Any 2016 browser / operating system / image viewer which treats an untagged image as anything other than sRGB is spec-non-compliant and functionally broken. (Sadly, this includes most browsers on most platforms.)
Given the blatant loss of information that occurs if you do that, as evidenced by the WebKit logo example, I'd argue the spec is broken. Pixels of different values should be different when displayed, because that's what matters to the majority of users.
Yes, but «distorts the hue, lightness, and chroma relationships between colors as perceived by a large statistical sample of humans with typical color vision» (or perhaps easier, «... as computed using the CIE standard observer and a well-defined color appearance model») is not subjective.
You shouldn’t be “arguing” with the spec here or presuming to speak for “users” until you have studied up on color science, done concrete work with color reproduction, or at least compared both ways for yourself on a few dozen images.
Again, this is the naïve approach which is too often taken, but, contra your intuition, the results are dramatically worse if you do it that way. In an immediately obvious, not subtle way. What most “matters to the majority of users” is that their images look like what they expect. If you totally distort all the color relationships, the images end up looking completely different than intended, and “most users” will be dissatisfied with the output, blaming either the creator of the image or the software for being incompetent but without knowing quite what the problem is.
There are ways to avoid hard clipping, using a more sophisticated gamut mapping algorithm (I recommend you read Morovič’s book), but you can’t just treat one color space as if it were another.
In practice, hard clipping of out-of-gamut colors (assuming you do it along lines of constant hue/lightness in CIELAB space, or similar) ends up working reasonably well. You get some artifacts, but the vast majority of images stay away from the edges of the gamut.
(Photoshop and other Adobe apps have a pretty bad gamut clipping method, which is to map between color spaces without clipping, and then independently hard clip each color component. This sometimes causes severe hue shifts. Alas, even the industry leading software is often implemented by people who aren’t trained in color science, and don’t consider the edge cases. Or wrote their implementation 20 years ago and haven’t bothered to update it to keep up with computing improvements. It’s still much better than just pretending one color space is another though. I’m not sure precisely what Apple’s color management stack does.)
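To make the contrast concrete, here is a toy sketch of constant-hue/lightness clipping as described above: reduce CIELAB chroma at fixed L* and hue until the color fits in sRGB. This works in linear light with the standard D65 matrices and skips the transfer functions and ICC machinery a real implementation would need; it's an illustration of the idea, not production color code.

```python
import math

M_P3 = [[0.48657, 0.26567, 0.19822],          # Display P3 linear RGB -> XYZ (D65)
        [0.22897, 0.69174, 0.07929],
        [0.00000, 0.04511, 1.04394]]
M_SRGB_INV = [[ 3.24045, -1.53714, -0.49853], # XYZ -> sRGB linear RGB
              [-0.96927,  1.87601,  0.04156],
              [ 0.05564, -0.20403,  1.05723]]
WHITE = (0.95047, 1.0, 1.08883)               # D65 white point

def mul(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def xyz_to_lab(xyz):
    def f(t):
        return t ** (1/3) if t > (6/29) ** 3 else t / (3 * (6/29) ** 2) + 4/29
    fx, fy, fz = (f(c / n) for c, n in zip(xyz, WHITE))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def lab_to_xyz(lab):
    L, a, b = lab
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6/29 else 3 * (6/29) ** 2 * (t - 4/29)
    return tuple(finv(f) * n for f, n in zip((fx, fy, fz), WHITE))

def in_gamut(rgb, eps=1e-6):
    return all(-eps <= c <= 1 + eps for c in rgb)

def clip_per_channel(rgb):
    # Naive clip: each channel independently -- this is what causes hue shifts.
    return [min(max(c, 0.0), 1.0) for c in rgb]

def clip_constant_hue(xyz):
    # Reduce CIELAB chroma at fixed lightness L* and hue angle h until
    # the color fits in the sRGB gamut (binary search on chroma).
    L, a, b = xyz_to_lab(xyz)
    C, h = math.hypot(a, b), math.atan2(b, a)
    lo, hi = 0.0, C
    for _ in range(40):
        mid = (lo + hi) / 2
        cand = mul(M_SRGB_INV,
                   lab_to_xyz((L, mid * math.cos(h), mid * math.sin(h))))
        lo, hi = (mid, hi) if in_gamut(cand) else (lo, mid)
    return clip_per_channel(
        mul(M_SRGB_INV, lab_to_xyz((L, lo * math.cos(h), lo * math.sin(h)))))

p3_red_xyz = mul(M_P3, [1.0, 0.0, 0.0])       # P3 pure red, outside sRGB
naive = mul(M_SRGB_INV, p3_red_xyz)           # e.g. ~[1.22, -0.04, -0.02]
print(clip_per_channel(naive))                # per-channel clip: hue can shift
print(clip_constant_hue(p3_red_xyz))          # chroma reduced, hue/lightness kept
```

Even this simple version preserves the hue and lightness relationships that per-channel clipping destroys; the fancier gamut-mapping methods in the literature refine where and how the chroma compression is applied.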
Most people care whether the colours look good to them, not how close they are to some standard they probably don't know about. Almost everyone buying a monitor in a store is going to base their decision strongly on the brightness and vividness of the colours.
What most “matters to the majority of users” is that their images look like what they expect.
...and that is subjective.
Regardless, if 241,0,0 looks identical to 255,0,0 on a monitor with 24-bit colour, that's just not right at all.
I feel a bit like a broken record here. There are ways of mapping one color space down to a smaller one without hard clipping to the gamut boundary. But these can be computationally expensive and difficult to design properly.
However, assuming that a better gamut mapping method is unavailable, hard clipping out of gamut colors in practice works much better than just pretending two color spaces are the same. When you do the latter, images end up looking terrible.
The effect is immediately obvious to anyone with standard color vision, and as such is no more “subjective” than any other perceptual judgment. In some sense all perception is subjective, if you want to get philosophical. But in a practical sense, not really.
If you have a copy of Photoshop, you can try this for yourself. Collect a number of photographs or other images encoded in a large color space like P3. Then convert these to sRGB in two ways, (a) using the “assign profile” menu option and (b) using the “convert to profile” menu option.
Your proposal is to do the former. In practice, the results are entirely unacceptable. They look bad. That is, if you collect a group of humans with typical color vision and present both options, they will pick option (b) for almost all images, and for most images the right choice will be very obvious to everyone.
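The difference between the two menu options can be sketched in a few lines (a toy linear-light model with the standard D65 matrices; Photoshop's actual machinery is ICC-profile-based and handles transfer curves, rendering intents, etc.):

```python
# "Assign profile" keeps the numbers and changes their meaning;
# "convert to profile" changes the numbers to keep the color.
# Toy linear-light model with standard D65 matrices; illustration only.

M_P3 = [[0.48657, 0.26567, 0.19822],          # Display P3 linear RGB -> XYZ
        [0.22897, 0.69174, 0.07929],
        [0.00000, 0.04511, 1.04394]]
M_SRGB_INV = [[ 3.24045, -1.53714, -0.49853], # XYZ -> sRGB linear RGB
              [-0.96927,  1.87601,  0.04156],
              [ 0.05564, -0.20403,  1.05723]]

def assign_p3_as_srgb(rgb):
    # (a) assign: same numbers, now (wrongly) read as sRGB -- appearance shifts
    return list(rgb)

def convert_p3_to_srgb(rgb):
    # (b) convert: go through XYZ so the appearance is preserved,
    # then hard-clip whatever falls outside the sRGB gamut
    xyz = [sum(M_P3[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    lin = [sum(M_SRGB_INV[i][j] * xyz[j] for j in range(3)) for i in range(3)]
    return [min(max(c, 0.0), 1.0) for c in lin]

p3_red = [1.0, 0.0, 0.0]
print(assign_p3_as_srgb(p3_red))   # [1.0, 0.0, 0.0] -- but it now means a duller red
print(convert_p3_to_srgb(p3_red))  # clips to sRGB red; in-gamut colors pass through
```

Note that with (b), only the genuinely out-of-gamut colors get clipped; everything inside the sRGB gamut keeps its appearance. With (a), every single pixel in the image shifts.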
Say there is a background colour specified in sRGB and an image in P3, and the two are meant to display as the same colour. Changing how the P3 image is rendered just because it could contain non-sRGB colours will make them no longer match.
does have a color profile. By the way, it also uses 15-bit color channels.
Here is the same image after pngcrush removed the color profile PNG chunks:
In Safari on my non-high-gamut Mac, the logo is clearly visible. (Edit: To be specific, on the default "Color LCD" profile I get 252,13,27 and 238,12,25 as native equivalents of the sRGB 255,0,0 and 241,0,0. Presumably there is some slight clamping around the edges of the profile, as well as some inherent precision loss if WebKit is performing the conversion on 24-bit colors, but it's not anywhere near as crazy as making 241,0,0 and 255,0,0 look the same.)
> does have a color profile.
I didn't comment on the handling of that image. My comment related to the bit of the article I quoted, which states that WebKit treats images without a color profile as sRGB.