Hacker News

AI can solve it better than normal code can: you have to train on how humans see color. (This has been a long-standing ambition of mine, but sadly there's not yet much work or infrastructure in the field of "graphics AI." There's barely even one software rasterizer. https://github.com/vahidk/tf.rasterizer)

The key would be to spend an absurd amount of time carefully cataloguing good colors – by hand – and training it to extrapolate from that information. That's basically how the Munsell color system was created. https://en.wikipedia.org/wiki/Munsell_color_system It's the realization that color space isn't spherical, it's not a cube, it's not any shape at all except "how humans happened to evolve." Even the term "max chromaticity" is just a reflection of the weird way our brains happen to work, not an intrinsic property of color.

In that context, a universal curve fitting algorithm might be handy. Which of course is all that AI is.
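To make the curve-fitting idea concrete, here's a toy sketch. The data points are invented stand-ins for a hand-catalogued survey (all numbers here are hypothetical), and the power-law fit is a deliberately crude stand-in for whatever model you'd actually train:

```python
import numpy as np

# Hypothetical hand-catalogued data: physical luminance Y (0-1) vs.
# an averaged human "lightness" rating on a 0-100 scale. These values
# are made up for illustration; real data would come from surveys.
Y = np.array([0.01, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 1.00])
rating = np.array([9, 27, 38, 52, 69, 81, 91, 100])

# Fit a simple power law, rating ~ a * Y**b, by linear regression in
# log-log space. A neural net would just be a fancier version of this
# "universal curve fitting" step.
b, log_a = np.polyfit(np.log(Y), np.log(rating), 1)
a = np.exp(log_a)

def predict(y):
    """Interpolated perceived lightness for a luminance y in (0, 1]."""
    return a * y ** b
```

The point isn't the power law (human lightness perception isn't one); it's that once you have hand-labeled data, the model is just interpolation between human judgments.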




> The key would be to spend an absurd amount of time carefully cataloguing good colors – by hand – and training it to extrapolate from that information.

It's a bit more nuanced than that. Not only do you need to catalog good colors (and color combinations), you need to have it done by a large number of people, since different people perceive color differently and have different aesthetic preferences. This is something I've been working on in the limited context of color cycles for data visualization and plotting [1][2]. Based on my preliminary analysis, these data are quite noisy.

[1] https://colorcyclesurvey.mpetroff.net/ [2] https://mpetroff.net/2020/01/color-cycle-survey-update/


Not to mention people are viewing the colors on different screens, with possibly different TV/monitor modes enabled, which will alter the colors. I have two monitors where the same colors can look noticeably different. Unfortunately it's probably not feasible to account for this.


You're absolutely right, and that's an exciting area: viewing conditions matter!

Well, different monitors aren't really different viewing conditions, but it's a similar idea.

When people can't see colors too well, they turn up the brightness. But that changes the problem entirely.

Even something like whether a window has curtains or not will completely change whether you can perceive a certain "vibrance." Lots of the conversation in parallel replies has probably suffered due to such confusions.


HSLuv is a sphere: https://www.hsluv.org/

They took the CIELUV/LCH/HCL solid and compressed it into a sphere, similar to how HSL sits on top of RGB. The L* (perceptual luminance) value is consistent across hues; it can also replace the 100/200/300 scale for design systems.
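For reference, the L* in question isn't arbitrary: CIE 1976 lightness is a fixed function of relative luminance Y, which HSLuv inherits for its L axis. A minimal sketch of that standard formula, with the white point normalized to Y_n = 1:

```python
def cie_lightness(Y):
    """CIE 1976 L* from relative luminance Y in [0, 1] (white Y_n = 1).

    Piecewise: a cube root above a small threshold, and a linear ramp
    below it; the two pieces meet continuously at t = (6/29)**3.
    """
    t = Y  # Y / Y_n with Y_n = 1
    if t > (6 / 29) ** 3:
        return 116 * t ** (1 / 3) - 16
    return (29 / 3) ** 3 * t
```

For example, 18% grey (Y = 0.184) comes out near L* 50, and Y = 1 gives exactly L* 100 – which is why a ramp of constant L* steps looks roughly even while a ramp of constant Y steps does not.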

But even the CIE model has flaws: chromaticity is inherently linked with brightness/luminance, and no color model can change that. A more saturated/chromatic color will always be perceived as brighter than a less chromatic one at the same luminance. And in order to create a palette you need normalization, which will always weed out individual color-defining peaks.


I gave up trying to reason about color models when I realized the CIE model wasn't determined rigorously and empirically. A full critique is beyond the scope of 3AM, but it's kind of fascinating to go back to the roots of color models and read about how precisely they came up with them. Remember to turn on your skepticism when you do.

"X is a sphere" has to be reconciled with "What are the chances that our visual system would evolve into a perfect sphere with no flaws?"

For some reason, people are so determined to turn complicated phenomena into a pure and simple form. Even astronomy wanted to believe that orbits were circles, since circles are clearly more perfect. But nature isn't perfect; it simply exists.


Certain aspects of nature can be modeled extremely well with simple formulae.

Orbits are damn close to perfect ellipses.

Likewise, human perception of color luminance could be represented by a simple model where cross-human perceptual variation results in a damn close to perfect sphere.

Just like the actual orbital parameters for a given body are described by a few constants derived from observation, the actual human perceptual parameters (such as the constants in the CIE model) are likewise derived through observation.


Being close to something doesn't make it that thing. The orbits are close to perfect ellipses, but Mercury's orbit isn't a perfect ellipse – it required Einstein's modification to explain the discrepancy.

Human vision isn't even remotely in the same ballpark as "close to an ellipse". That idea is a very powerful, very persistent illusion, and as far as I'm concerned it will be productive to break it whenever possible.


For me, their (CIE's) color opponents are already wrong; I prefer NCS: https://en.wikipedia.org/wiki/Natural_Color_System


Are there any NCS-based web tools for generating such a palette? The "official" tool is... bad.


Not to my knowledge. Their official tool is for interior designers, I think, and their system is copyrighted. I'm using a custom "color wheel" that's inspired by NCS opponents and CIE. I might release it as an open-source web app, though. It's actually part of a bigger design system for design systems I'm working on, but I'll see if I can break it off into its own thing somehow.


That would be awesome, thank you.


Interesting. Can you provide a source about normalization? I thought the CIELab and CIELuv spaces use some form of average-human perception correction factor. At a given luminosity, perceived brightness should be the same regardless of position along the orthogonal chromaticity axes for an average human.

Matplotlib’s colormaps were generated similarly: https://m.youtube.com/watch?v=xAoljeRJ3lU


Compare the colors: http://davidjohnstone.net/pages/lch-lab-colour-gradient-pick...

Doesn't the one with a chromaticity of 54 appear much brighter to you than the one with a chromaticity of 30?


Squint your eyes and they’re the same to me.

https://classicalatelierathome.com/squint-your-eyes

Yeah, it does appear “brighter” otherwise, but I postulate that it is not possible to come up with two colors of the same chromaticity but different “brightness” (your definition).

Also, I hope the colors are not being clipped by the sRGB or P3 gamut on your PC or phone.


It's clearly and intentionally more vibrant, which can be a synonym for "brighter", but no, it does not seem particularly more luminous.

Squinting very hard may help you see this. Another way is to overlap the two colors with interleaving stripes, and see how adjusting the chrominance differs perceptually from adjusting the luminance.


But that's just theory; in practice users don't squint their eyes when they use your UI, meaning that perceived brightness does matter in practice, regardless of how it comes about.


Doesn’t perceived brightness depend on surrounding colors and many other variables, such as motion, incident illumination, etc.?

Those things cannot be part of a specification such as CIE or NCS.


My point was that luminance and chromaticity fulfill a very similar function in UI design, besides size or shadows: they bring elements forward and make them stand out. An item with low chromaticity therefore isn't on the same Z level as an item with high chromaticity, if you think of the foreground and background as a 3D space that you can use to guide the users' eyes. It's the function for the user experience that counts, at least for me.


Thanks for the discussion, I learned about the NCS system and you seem to know a lot about this.


For psychological impact use of "brightness", yes.

For judging contrast against e.g. text for legibility in a design?
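For the legibility case, one common yardstick (not mentioned in this thread, but standard practice) is the WCAG contrast ratio, which is computed from relative luminance alone and ignores chroma entirely. A sketch of the standard WCAG 2.x formulas:

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer curve for one 0-255 channel."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Relative luminance Y using the sRGB / Rec. 709 weights."""
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(fg, bg):
    """WCAG 2.x contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))  # 21.0
```

So a highly chromatic but equal-luminance accent color can "pop" perceptually while scoring no better on this metric – which is exactly the psychological-vs-measured-brightness split being discussed.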



