

http://colormind.io ("The AI powered color palette generator" - the image upload option is pretty nifty)

http://khroma.co/train/ ("The AI color tool for designers")

AI? Wouldn't they just automate regular colour theory?

(I made colormind.io) Color theory isn't an exact science; you could make random palettes from the various color rules, but they don't look good (imo).

as an experiment, try this:

go on https://color.adobe.com and click on one of the color rules (it will give you a random palette based on the rule)

now compare with a palette from https://coolors.co/ or https://color.adobe.com/explore/ (user-uploaded)

if color theory + regression fully solved this problem, none of these color sites would need to exist.
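The "color rules" mentioned above are easy to automate, which is part of the point being made: mechanically rotating hues produces valid but often bland palettes. A minimal sketch of that kind of rule-based generator (the offsets and the fixed saturation/lightness are my own illustrative choices, not how adobe or colormind actually do it):

```python
import colorsys
import random

def rule_palette(rule="triadic"):
    """Generate a palette from a basic color-theory rule by rotating
    the hue of a random base color around the color wheel."""
    base_h = random.random()  # random base hue in [0, 1)
    offsets = {
        "complementary": [0.0, 0.5],          # opposite hues
        "triadic": [0.0, 1 / 3, 2 / 3],       # three evenly spaced hues
        "analogous": [0.0, 1 / 12, -1 / 12],  # neighboring hues
    }[rule]
    palette = []
    for off in offsets:
        h = (base_h + off) % 1.0
        # Fixed lightness/saturation for simplicity; a real tool
        # would vary these too, which is where rules start to fail.
        r, g, b = colorsys.hls_to_rgb(h, 0.5, 0.7)
        palette.append("#%02x%02x%02x" % (round(r * 255),
                                          round(g * 255),
                                          round(b * 255)))
    return palette
```

Every palette this produces is "correct" by the rule, yet most won't look as good as a curated one, which is the gap the comment is pointing at.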


I really liked colormind.io, and made a little toy Vue site when I was looking for a job (I grabbed your colors and pushed them to state so my website colors changed). See here: http://q8z8p.net/#/color. I just wanted to thank you because I think that was helpful in getting my first job!

awesome, that's exactly why I put up the api!
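For anyone else wanting to try the API in a toy site like the one above: colormind.io documents a simple POST endpoint that returns a palette as a list of RGB triples. A small sketch of consuming that response and turning it into CSS hex colors (the endpoint and payload shape are as described on the site; the sample response here is made up for illustration):

```python
import json

def palette_to_hex(api_response):
    """Convert a colormind-style response ({"result": [[r, g, b], ...]})
    into CSS hex strings ready to push into app state."""
    return ["#%02x%02x%02x" % tuple(rgb) for rgb in api_response["result"]]

# Live fetch would look roughly like this (requires the requests package):
# resp = requests.post("http://colormind.io/api/",
#                      data=json.dumps({"model": "default"}))
# hex_colors = palette_to_hex(resp.json())

# Hardcoded sample response for illustration:
sample = {"result": [[44, 43, 44], [115, 66, 34], [217, 123, 76],
                     [225, 183, 151], [190, 210, 212]]}
print(palette_to_hex(sample))
# ['#2c2b2c', '#734222', '#d97b4c', '#e1b797', '#bed2d4']
```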

Thanks for the reply. I dove a bit more into it after making this comment and read some of the pages on your site discussing the process and its practical application. Really interesting stuff, and I appreciate the effort going into explaining it all. I guess I underestimated the depth of the problem space. Even while writing my comment I got to thinking about how colour theory would account for subjectivity, and for outlying palettes that work well but don't have obvious relationships between vibrant colours.

That got me wondering if the process going on in colormind could be turned into a new theory of colour. Is there a way to boil the process down into a deterministic one, or do you feel like the neural network is accomplishing something that couldn't be refined into a "rule" or guideline for use in a regular theory?

it's hard to say because GANs are still black boxes. There's a lot of research into explainable neural networks that gives some insight into how a network arrived at a particular conclusion, so I think it should be possible in the future.

Remember that most of the time, "AI" is a codeword for linear regression.
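The quip holds up in the sense that plain least squares really is a few lines of arithmetic; a sketch of ordinary least squares for a single feature, written from the textbook closed form (no claim that any particular "AI" product works this way):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b via the closed-form
    estimates: a = cov(x, y) / var(x), b = mean(y) - a * mean(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Perfectly linear data recovers the slope and intercept exactly.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```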

There are two sides to this coin as well: I'm baffled that what was previously called linear algebra is now called "AI", but it has also emphasised to me how much I need to get better at linear algebra...

I came to the same conclusion, and actually picked up a stats book recently. The hype might be unbearable at times, but the underlying knowledge is also very valuable.
