I think it's best to keep the palette separate from the code (e.g. store palettes in an app like Sip), and not insist on the replaceability of colors across a range. E.g. instead of blue-500 use functional names like btn-color or brand-color and alias the colors that way in code.
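A minimal sketch of that aliasing in plain JavaScript (the palette names and hex values here are made up for illustration — the idea is that code only ever touches the functional names, so swapping the brand color is a one-line change):

```javascript
// Hypothetical raw palette, copied by hand from a tool like Sip.
// This object is the only place literal color values appear.
const palette = {
  deepBlue: "#1e40af",
  warmGray: "#78716c",
};

// Functional aliases — the rest of the codebase imports these,
// never the raw palette entries.
const theme = {
  brandColor: palette.deepBlue,
  btnColor: palette.deepBlue,
  mutedText: palette.warmGray,
};
```

The same pattern works with CSS custom properties (`--btn-color: var(--brand-color)`), but the point is the indirection, not the mechanism.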
The key would be to spend an absurd amount of time carefully cataloguing good colors – by hand – and training it to extrapolate from that information. That's basically how the Munsell color system was created: https://en.wikipedia.org/wiki/Munsell_color_system It's the realization that color space isn't spherical, it's not a cube, it's not any shape at all except "how humans happened to evolve." Even the term "max chromaticity" is just a reflection of the weird way our brains happen to work, not an intrinsic property of color.
In that context, a universal curve fitting algorithm might be handy. Which of course is all that AI is.
It's a bit more nuanced than that. Not only do you need to catalog good colors (and color combinations), you need to have it done by a large number of people, since different people perceive color differently and have different aesthetic preferences. This is something I've been working on in the limited context of color cycles for data visualization and plotting. Based on my preliminary analysis, these data are quite noisy.
Well, different monitors aren't really different viewing conditions, but it's a similar idea.
When people can't see colors too well, they turn up the brightness. But that changes the problem entirely.
Even something like whether a window has curtains or not will completely change whether you can perceive a certain "vibrance." Lots of the conversation in parallel replies has probably suffered due to such confusions.
They took the CIELUV/LCH/HCL solid and compressed it into a sphere, similar to HSL on top of RGB. The L* (perceptual luminance) value is consistent across hues, so it can also replace the 100/200/300 scale for design systems.
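A rough sketch of what "L* replaces the shade scale" could mean in practice — every hue at a given step shares the same perceptual lightness. The endpoint values (97 and 20) are my own illustrative guesses, not from any spec:

```javascript
// Hypothetical: map Tailwind-style shade steps (50..900) onto a shared
// perceptual lightness (L*) ramp. The same L* per step, regardless of hue.
// Linear interpolation between illustrative endpoints.
function shadeToLightness(step) {
  const minStep = 50, maxStep = 900; // shade step range
  const lMax = 97, lMin = 20;        // L* at lightest / darkest step
  const t = (step - minStep) / (maxStep - minStep);
  return lMax + t * (lMin - lMax);
}
```

Feeding the resulting L* into `lch(L C H)` with a fixed hue would then generate the whole column of shades for that hue.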
But even the CIE model has flaws: chromaticity is inherently linked with brightness/luminance, and no color model can change that. A more saturated/chromatic color will always be perceived as brighter than a less chromatic one at the same luminance. And in order to create a palette you need normalization, which will always weed out the peaks that define individual colors.
"X is a sphere" has to be reconciled with "What are the chances that our visual system would evolve into a perfect sphere with no flaws?"
For some reason, people are so determined to turn complicated phenomena into a pure and simple form. Even astronomy wanted to believe that orbits were circles, since circles are clearly more perfect. But nature isn't perfect; it simply exists.
Orbits are damn close to perfect ellipses.
Likewise, human perception of color luminance could be represented by a simple model where cross-human perceptual variation results in a damn close to perfect sphere.
Just like the actual orbital parameters for a given body are described by a few constants derived from observation, the actual human perceptual parameters (such as the constants in the CIE model) are likewise derived through observation.
Human vision isn't even remotely in the same ballpark as "close to an ellipse". That idea is a very powerful, very persistent illusion, and as far as I'm concerned it will be productive to break it whenever possible.
Matplotlib’s colormaps were generated similarly: https://m.youtube.com/watch?v=xAoljeRJ3lU
Doesn't the one with 54 chromaticity appear much brighter to you than the one with a chromaticity of 30?
Yeah, it does appear "brighter," but I postulate that it is not possible to come up with two colors of the same chromaticity but with different "brightness" (by your definition).
Also, I hope the colors are not being cut off because of sRGB or P3 gamut on your PC or phone.
Squinting very hard may help you see this. Another way is to overlap the two colors with interleaving stripes, and see how adjusting the chrominance differs perceptually from adjusting the luminance.
Those things cannot be part of a specification such as CIE or NCS.
For judging contrast against e.g. text for legibility in a design?
What's the point of color variables, if they just describe the color they are... We already have color names for that.
It's the kind of thing professional designers do intentionally, but set up in a system that makes it easier to get a "designed" look without much effort.
You can also create your own custom colors with whatever names you want, whether color names or something like "primary" as the primary color of your theme. For each you can set shades as well.
The Tailwind UI updated color palette at https://www.npmjs.com/package/@tailwindcss/ui has true greys with no color hue.
And you don't need Tailwind UI to use the updated colors (I asked Adam once). I think the plan is the updated color palette will eventually be rolled into a future Tailwind CSS update, if not already.
"During early access, the components in Tailwind UI depend on some extensions we've added to the default Tailwind CSS config (like extra spacing values, updated colors, additional shadows, etc.)
"These extensions will make their way into Tailwind itself in the future...."
The bottom of Tailwind UI documentation gives more info on the color palette changes, including the new grey:
> It uses two neural networks to predict the full palette. The first, model.js, predicts all the shades vertically from 50-900 given a certain color as input. The second, nextModel.js, predicts all the colors horizontally given a certain shade as input.
It's not really a neural network either.
It will always return the same output for the same input. There are no training datasets etc.
It's literally just an algorithm that returns an output, then produces the next output based on the previous one.
I would have expected some dataset of colors to train a model on and then use that to generate values but there really is no actual AI.
People seem to confuse a clever algorithm with AI.
There's nothing inherently intelligent about what it does. It's all math.
As with any other AI algorithm. In the end, it's all curve fitting...
This is an already-trained NN; that's why there is no training dataset and the model does not train...
I would expect an AI to be some higher level agent (according to the agent model) that modifies its own code while it does its work.
ML training is done independently from the work, and if you don't do it again (manually), the model won't change itself.
"AI" may seem cheesy, but you have to remember that in the future, lots of "AI programming" is just going to become "programming." This is clearly an effective, simple way of solving the problem, and doesn't require any particularly clever algorithm.
Also, what the hell is this file? Haha. https://raw.githubusercontent.com/dmarman/dmarman.github.io/...
I may be wrong, but it seems to be the weights of a trained NN.
This is literally just the definition of a 1 layer neural network.
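For reference, a single-layer forward pass is just a weighted sum pushed through a nonlinearity. With frozen weights it's a pure function — same input, same output, no training at inference time (the weights below are made up; the real ones would come from a file like the one linked above):

```javascript
// Logistic activation: squashes any real number into (0, 1).
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// Made-up weights standing in for a trained model's stored parameters.
const weights = [0.4, -0.2, 0.7];
const bias = 0.1;

// One-layer forward pass: dot(input, weights) + bias, then sigmoid.
function forward(input) {
  const sum = input.reduce((acc, x, i) => acc + x * weights[i], bias);
  return sigmoid(sum);
}
```

So "no training dataset in the repo" and "deterministic output" are exactly what you'd expect from shipping an already-trained model, not evidence against it being a neural network.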
You don't want your model to merely mimic your training data. Otherwise, it would perform poorly on new data.