Show HN: AI color palette generator for Tailwindcss (tailwind.ink)
209 points by manceraio on Sept 7, 2020 | 52 comments



These kinds of palettes always struggle with brighter yellows like #ffe400 because of the normalization across hues needed to make a full palette in the first place, which inevitably prevents some colors from reaching their full potential (which is often at their peak chromaticity). No AI can solve this problem, because that's simply how color works.

I think it's best to keep the palette separate from the code (e.g. store palettes in an app like Sip), and not insist on the replaceability of colors across a range. E.g. instead of blue-500 use functional names like btn-color or brand-color and alias the colors that way in code.
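
For what it's worth, that aliasing can live directly in tailwind.config.js. A minimal sketch (the names and hex values here are placeholders, not anything from the article):

  // tailwind.config.js -- illustrative functional color aliases
  module.exports = {
    theme: {
      extend: {
        colors: {
          brand: '#ffe400',      // keep the peak-chroma yellow at full strength
          'btn-bg': '#1d4ed8',
          'btn-fg': '#ffffff',
        },
      },
    },
  };
  // Markup then uses class="bg-btn-bg text-btn-fg" instead of "bg-blue-700 text-white".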


AI can solve it better than normal code can: you have to train on how humans see color. (This has been a long-standing ambition of mine, but sadly there's not yet much work or infrastructure in the field of "graphics AI." There's barely one software rasterizer. https://github.com/vahidk/tf.rasterizer)

The key would be to spend an absurd amount of time carefully cataloguing good colors – by hand – and training it to extrapolate from that information. That's basically how the Munsell color system was created. https://en.wikipedia.org/wiki/Munsell_color_system It's the realization that color space isn't spherical, it's not a cube, it's not any shape at all except "how humans happened to evolve." Even the term "max chromaticity" is just a reflection of the weird way our brains happen to work, not an intrinsic property of color.

In that context, a universal curve fitting algorithm might be handy. Which of course is all that AI is.
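
To make "curve fitting" concrete, here's a toy sketch in plain JavaScript (the data points are made up; the point is just that "learning" is nothing more than minimizing error):

  // Toy curve fit: gradient descent on y = a*x + b over hand-labeled points.
  const data = [[0, 1.1], [1, 2.9], [2, 5.2], [3, 6.8]]; // [x, y] pairs (made up)
  let a = 0, b = 0;
  const lr = 0.05;
  for (let step = 0; step < 2000; step++) {
    let gradA = 0, gradB = 0;
    for (const [x, y] of data) {
      const err = a * x + b - y; // prediction minus target
      gradA += 2 * err * x;
      gradB += 2 * err;
    }
    a -= lr * gradA / data.length;
    b -= lr * gradB / data.length;
  }
  console.log({ a, b }); // converges to roughly a ≈ 1.9, b ≈ 1.1

A neural net trained on hand-catalogued colors is the same loop, just with a much more flexible curve.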


> The key would be to spend an absurd amount of time carefully cataloguing good colors – by hand – and training it to extrapolate from that information.

It's a bit more nuanced than that. Not only do you need to catalog good colors (and color combinations), you need to have it done by a large number of people, since different people perceive color differently and have different aesthetic preferences. This is something I've been working on in the limited context of color cycles for data visualization and plotting [1][2]. Based on my preliminary analysis, these data are quite noisy.

[1] https://colorcyclesurvey.mpetroff.net/ [2] https://mpetroff.net/2020/01/color-cycle-survey-update/


Not to mention people are viewing the colors on different screens, possibly with different TV/monitor modes enabled that alter the colors. I have two monitors where colors can look noticeably different. Unfortunately it's probably not feasible to account for this.


You're absolutely right, and that's an exciting area: viewing conditions matter!

Well, different monitors aren't really different viewing conditions, but it's a similar idea.

When people can't see colors too well, they turn up the brightness. But that changes the problem entirely.

Even something like whether a window has curtains or not will completely change whether you can perceive a certain "vibrance." Lots of the conversation in parallel replies has probably suffered due to such confusions.


HSLuv is a sphere: https://www.hsluv.org/

They took the CIELUV/LCH/HCL solid and compressed it into a sphere, similar to HSL on top of RGB. The L* (perceptual lightness) value is consistent across hues, and it can also replace the 100/200/300 scale in design systems.
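
A rough sketch of that idea, deriving a 100-900 scale by stepping L at fixed hue/saturation. This assumes the hsluv npm package's older 0.x API where hsluvToHex takes an [h, s, l] tuple; the exact call may differ in your version, and the step values are my own guess:

  const { hsluvToHex } = require('hsluv'); // assumed 0.x API

  function shadeScale(hue, sat) {
    const scale = {};
    [100, 200, 300, 400, 500, 600, 700, 800, 900].forEach((name, i) => {
      const lightness = 95 - i * 10; // 95, 85, ..., 15: consistent L per step
      scale[name] = hsluvToHex([hue, sat, lightness]);
    });
    return scale;
  }

  console.log(shadeScale(250, 90)); // e.g. a blue ramp with predictable lightness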

But even the CIE model has flaws: chromaticity is inherently linked with brightness/luminance, and no color model can change that. A more saturated/chromatic color will always be perceived as brighter than a less chromatic one at the same luminance (the Helmholtz-Kohlrausch effect). And in order to create a palette you need normalization, which will always weed out the peaks that define individual colors.


I gave up trying to reason about color models when I realized the CIE model wasn't determined rigorously and empirically. A full critique is beyond the scope of 3AM, but, it's kind of fascinating to go back to the roots of color models and read about how precisely they came up with them. Remember to turn on your skepticism when you do.

"X is a sphere" has to be reconciled with "What are the chances that our visual system would evolve into a perfect sphere with no flaws?"

For some reason, people are so determined to turn complicated phenomena into a pure and simple form. Even astronomy wanted to believe that orbits were circles, since circles are clearly more perfect. But nature isn't perfect; it simply exists.


Certain aspects of nature can be modeled extremely well with simple formulae.

Orbits are damn close to perfect ellipses.

Likewise, human perception of color luminance could be represented by a simple model where cross-human perceptual variation results in a damn close to perfect sphere.

Just like the actual orbital parameters for a given body are described by a few constants derived from observation, the actual human perceptual parameters (such as the constants in the CIE model) are likewise derived through observation.


Being close to something doesn't make it that thing. Orbits are close to perfect ellipses, but Mercury's orbit isn't a perfect ellipse; it took Einstein's modification to explain the discrepancy.

Human vision isn't even remotely in the same ballpark as "close to an ellipse". That idea is a very powerful, very persistent illusion, and as far as I'm concerned it will be productive to break it whenever possible.


For me, their (CIE's) color opponents are already wrong; I prefer NCS: https://en.wikipedia.org/wiki/Natural_Color_System


Are there any NCS-based web tools for generating such a palette? The "official" tool is... bad.


Not to my knowledge. Their official tool is for interior designers, I think, and their system is copyrighted. I'm using a custom "color wheel" that's inspired by NCS opponents and CIE. I might release it open source as a web app, though. It's actually part of a bigger design system for design systems I'm working on, but I'll see if I can break it off into its own thing somehow.


That would be awesome, thank you.


Interesting. Can you provide a source about normalization? I thought the CIELAB and CIELUV spaces use some form of average-human perceptual correction. At a given luminosity, perceived brightness should be the same for an average human regardless of position along the orthogonal chromaticity axes.

Matplotlib’s colormaps were generated similarly: https://m.youtube.com/watch?v=xAoljeRJ3lU


Compare the colors: http://davidjohnstone.net/pages/lch-lab-colour-gradient-pick...

Doesn't the one with a chromaticity of 54 appear much brighter to you than the one with a chromaticity of 30?


Squint your eyes and they're the same to me.

https://classicalatelierathome.com/squint-your-eyes

Yeah, it does appear "brighter" otherwise, but I'd postulate that it's not possible to come up with two colors of the same chromaticity but with different "brightness" (by your definition).

Also, I hope the colors are not being cut off because of sRGB or P3 gamut on your PC or phone.


It's clearly and intentionally more vibrant, which can be a synonym for "brighter", but no, it does not seem particularly more luminous.

Squinting very hard may help you see this. Another way is to overlap the two colors as interleaved stripes and see how adjusting the chrominance differs perceptually from adjusting the luminance.
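
If you want to try the stripe version quickly, a throwaway browser-console snippet (the two hex values are placeholders for whatever pair you're comparing):

  // Interleave two colors as thin stripes; step back or squint and see
  // whether they fuse into a uniform field (same luminance) or shimmer.
  const colorA = '#6a9fb5', colorB = '#7d8ca3'; // placeholder colors
  const box = document.createElement('div');
  box.style.cssText = 'width:300px;height:120px;';
  box.style.background = `repeating-linear-gradient(90deg, ` +
    `${colorA} 0, ${colorA} 2px, ${colorB} 2px, ${colorB} 4px)`;
  document.body.appendChild(box);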


But that's just theory. In practice, users don't squint when they use your UI, which means perceived brightness does matter, regardless of how it comes about.


Doesn't perceived brightness depend on surrounding colors and many other variables, such as motion, incident illumination, etc.?

Those things cannot be part of a specification such as CIE or NCS.


My point was that luminance and chromaticity fulfill a very similar function in UI design, alongside size or shadows: they bring elements forward and make them stand out. An item with low chromaticity therefore isn't on the same Z level as an item with high chromaticity, if you think of the foreground and background as a 3D space you can use to guide users' eyes. It's the function for the user experience that counts, at least for me.


Thanks for the discussion, I learned about the NCS system and you seem to know a lot about this.


For the psychological-impact sense of "brightness", yes.

But for judging contrast against, e.g., text for legibility in a design?


Fully agree with this; I think having color variables like blue-500 or orange-100 is completely counterproductive.

What's the point of color variables if they just describe the color they are? We already have color names for that.


The numbers describe the brightness; they can be useful if they're linked to UI use cases and if you're OK with limiting yourself to a prebuilt color palette. It's just that functional names (btn-fg, btn-bg, etc.) are better IMO because they separate concerns. You could link them to a grayscale when prototyping and then later swap in the real colors in the stylesheet. It's also slower if you have to think about picking colors while coding.


The intention is that colors of the same number have a similar luminance, so you can easily adjust contrast and hue independently while designing. These schemes also keep a relatively consistent saturation profile across the palette.

It's the kind of thing professional designers do intentionally, but set up in a system that makes it easier to get a "designed" look without much effort.


I thought this immediately as well, but Tailwind does allow you to use generic names too: https://tailwindcss.com/docs/customizing-colors#naming-your-...


This does pick some pleasing palettes, but I'm not clear on how they are utilized in a page. A quick scan of the Tailwind CSS docs shows a bit of discussion about naming colors and picking a primary and secondary color, but I don't see much discussion about the other colors in their palettes. Can anyone explain?


Tailwind comes with a default color palette. Each color has 9 shades (100, 200, ... 900), and the colors can be used for various elements, such as backgrounds, borders, fonts, etc. For example, you can set a border to the 500 shade of indigo using the class "border-indigo-500".

https://tailwindcss.com/docs/customizing-colors#default-colo...

You can also create your own custom colors with whatever names you want, whether color names or something like "primary" as the primary color of your theme. For each you can set shades as well.
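
For example, a custom color with its own shades in tailwind.config.js might look like this (hex values are placeholders):

  module.exports = {
    theme: {
      extend: {
        colors: {
          primary: {
            100: '#e0e7ff',
            500: '#6366f1',
            900: '#312e81',
          },
        },
      },
    },
  };
  // Classes like "border-primary-500" or "bg-primary-100" then work just like
  // "border-indigo-500" does with the default palette.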


This is a lovely app but it really needs to generate grays as well. You just can't randomly drop in a color palette without changing the default TW grays. (they are pretty blue)


> You just can't randomly drop in a color palette without changing the default TW grays. (they are pretty blue)

The Tailwind UI updated color palette at https://www.npmjs.com/package/@tailwindcss/ui has true greys with no color hue.

And you don't need Tailwind UI to use the updated colors (I asked Adam once). I think the plan is the updated color palette will eventually be rolled into a future Tailwind CSS update, if not already.


All the colors were updated for Tailwind UI, and they are supposed to be incorporated into Tailwind CSS at some point:

"During early access, the components in Tailwind UI depend on some extensions we've added to the default Tailwind CSS config (like extra spacing values, updated, colors, additional shadows, etc.)

"These extensions will make their way into Tailwind itself in the future...."

The bottom of Tailwind UI documentation gives more info on the color palette changes, including the new grey:

https://tailwindui.com/documentation#how-tailwindcss-ui-exte...


Yes, I also noticed this. The model has some flaws. I could compensate by adding some extra data points, but I wanted to train with the original palette. With the release of the new Tailwind palette I will have more data to train with, and maybe it will get solved.


You could also try training on other palettes, but it does look really good! Adding grays would be key.


So what exactly is AI about it? It just looks like a common algorithm that matches colors together and not some AI that has been trained to find matching color palettes.


It uses two neural networks trained with the original tailwind palette. Here is the GitHub page with more info:

https://github.com/dmarman/dmarman.github.io


According to the README [0] it's using neural networks:

> It uses two neural networks to predict the full palette. The first, model.js, predicts all the shades vertically from 50-900 given a certain color as input. The second, nextModel.js, predicts all the colors horizontally given a certain shade as input.

[0]: https://github.com/dmarman/dmarman.github.io


Yeah but if you look at the code it doesn't really predict anything on trained data etc.

It's not really a neural network either.

It will always return the same output for the same input. There are no training datasets etc.

It's literally just an algorithm that returns an output and returns the next output based on the previous output.

I would have expected some dataset of colors to train a model on and then use that to generate values but there really is no actual AI.

People seem to confuse a clever algorithm with AI.

There's nothing inherently intelligent about what it does. It's all math.


Well, it's doing inference with an already-trained neural network. Inference will always return the same result. A training dataset is not needed once a model is trained. Of course, "AI" sounds better than "NN" and is shorter than "neural network". If you want to check the code for the "real" NN used to train the model, just follow the open issue on GitHub...


> There's nothing inherently intelligent about what it does. It's all math.

Same as any other AI algorithm. In the end, it's all curve fitting...

This is an already-trained NN; that's why there's no training dataset and why the model doesn't train...


Is this maybe the difference between AI and ML?

I would expect an AI to be some higher level agent (according to the agent model) that modifies its own code while it does its work.

ML is done independently from the work and if you don't do it again (manually), the model won't change itself.


It's effective, for one. I was surprised how good it looks. Color matching is a lot harder than it seems when you're generating palettes.

"AI" may seem cheesy, but you have to remember that in the future, lots of "AI programming" is just going to become "programming." This is clearly an effective, simple way of solving the problem, and requires no clever or common algorithms.

Also, what the hell is this file? Haha. https://raw.githubusercontent.com/dmarman/dmarman.github.io/...

This is awesome. The trained model was turned into a single line of JavaScript, which then gets evaluated by multiplying the color against all the weights. That wasn't what I was expecting, and I'm totally stealing this technique.


> Also, what the hell is this file? Haha. https://raw.githubusercontent.com/dmarman/dmarman.github.io/....

I may be wrong, but it seems to be the weights of a trained NN.


Yep, this line of JS was rendered using Brain.js:

https://github.com/BrainJS/brain.js#standalone-function
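
For anyone curious what that looks like end to end, a toy version (the training pairs below are made up, not the actual tailwind.ink data):

  const brain = require('brain.js');

  const net = new brain.NeuralNetwork({ hiddenLayers: [4] });
  net.train([
    { input: [0.9, 0.9, 0.1], output: [0.95, 0.95, 0.6] }, // fake rgb -> rgb pairs
    { input: [0.1, 0.2, 0.8], output: [0.4, 0.5, 0.95] },
  ]);

  const run = net.toFunction();      // standalone function with the trained
  console.log(run.toString());       // weights baked in -- paste the string into
  console.log(run([0.5, 0.5, 0.5])); // a file and ship it without brain.js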


> gets evaluated by multiplying the color against all the weights. That wasn't what I was expecting

This is literally just the definition of a one-layer neural network.
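
Spelled out, the whole evaluation is just this (the weights below are arbitrary placeholders, not the ones in that file):

  const sigmoid = (x) => 1 / (1 + Math.exp(-x));

  // One dense layer: for each output unit, a weighted sum of the inputs
  // plus a bias, squashed by an activation.
  function denseLayer(input, weights, biases) {
    return weights.map((row, i) =>
      sigmoid(row.reduce((sum, w, j) => sum + w * input[j], biases[i]))
    );
  }

  const rgb = [0.2, 0.5, 0.9];                    // normalized input color
  const W = [[0.4, -0.1, 0.7], [0.2, 0.9, -0.3]]; // 2 outputs x 3 inputs
  const b = [0.05, -0.1];
  console.log(denseLayer(rgb, W, b));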


The use of the term AI may generate funding ;-)


I've added it to my list of color palette generation tools; sharing in case it helps someone else: https://github.com/sw-yx/spark-joy/blob/master/README.md#col...


Great resource, thank you for curating this list!


welcome! honestly been thinking about making it into a nicer site with previews and commentary and faving and stuff. just gotta find the time (heh, as with all side projects)


Thanks!


Why doesn't the generated palette include the original color? Makes me lose faith.


Because the model doesn't overfit, and that is good.

You don't want your model to mimic your training data. Otherwise, it would perform poorly with new data.


This works really well, I'll be using it on my upcoming projects :)



