Hi
I am looking for some feedback on our new project "Sketch Colourisation". The envisioned UI and objectives are --
* An artist should have greater control over how to colour a sketch. While a text-to-image model lacks this fine-grained control, a per-pixel colourisation pipeline makes sketch colourisation a laborious process with a high entry barrier.
* What if an artist only draws a mask for a local region and specifies the colour palette for that region? Then a neural network figures out how to colour the overall sketch -- while maintaining those local colour palettes. (A minimal sketch of this input format follows the list below.)
[I would really like feedback on whether the above UI (i.e., mask and local colour palette) makes sense to users/designers. As researchers, we often have the wrong idea of what end-users actually want.]
* On the exact implementation of the above concept, we designed a training-free neural network framework -- and also made sure it runs on an Nvidia 4090. In other words, I try to avoid any expensive training or inference, which would defeat the purpose of being useful to people (not just some research labs).
* Note, I am not so bothered about the exact implementation (or whether it is "novel") -- as long as it is useful.
* A shameless advertisement: the codebase (https://github.com/CHAITron/sketchdeco-code.git) is MIT-licensed. It is nowhere near being useful to people yet -- but I would really like to pursue this direction, and your feedback/criticism will be immensely helpful.
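To make the envisioned input concrete, here is a minimal Python sketch of the mask + palette contract. All names here (`ColourRegion`, `colourise`) are hypothetical illustrations, not the actual SketchDeco API, and the flat-fill body is a naive stand-in for what the network would do:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ColourRegion:
    """One local constraint: a binary mask plus a palette for that region."""
    mask: np.ndarray                      # (H, W) bool array, True inside the region
    palette: list[tuple[int, int, int]]   # RGB colours the region should draw from


def colourise(sketch: np.ndarray, regions: list[ColourRegion]) -> np.ndarray:
    """Naive stand-in for the real colouriser (hypothetical, not SketchDeco):
    flat-fill each masked region with the first palette colour and keep the
    line art on top. A neural network would instead propagate harmonious
    colours over the whole sketch while respecting each region's palette."""
    h, w = sketch.shape
    out = np.full((h, w, 3), 255, dtype=np.uint8)  # start from a white canvas
    for region in regions:
        out[region.mask] = region.palette[0]       # flat fill per region
    lines = sketch < 128                           # dark pixels = line art
    out[lines] = 0                                 # re-draw the lines in black
    return out
```

The point of the sketch is only the interface: the artist supplies a greyscale sketch plus a list of (mask, palette) pairs, and gets back a fully coloured image.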
Thanks
Regarding the interface: Petalica [0] gets it right, although mask + palette also makes sense.
A fully local model with support for different styles and configurable respect for the line art (defaulting to 100% respect) would definitely be a game changer. At their current level, the AI models are slightly lacking, so the time investment feels too high compared to the quality of the results.
[0] https://petalica.com/index_en.html
[1] https://github.com/lllyasviel/style2paints/issues/235