
It is absolutely true that Glazed images will be scooped up by a fresh web scrape and used as training data for a new model. But there isn't any evidence that this provides an actual defense: their paper only studies the fine-tuning scenario. It seems to me that if you train a text-to-image system from scratch on Glazed images, Glaze has lost its upper hand. You'd essentially be performing adversarial training, but against a fixed adversary!
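A toy sketch of why a fixed adversary is weak (this is my own illustration, not anything from the Glaze paper; the linear model and the single shared perturbation are stand-ins for the real image setting): if the same cloak is baked into every training example, a model trained from scratch just fits the cloaked distribution and the perturbation gets absorbed into its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: d-dimensional "images", a ground-truth linear
# labeling, and a single fixed perturbation (the "cloak") applied to
# every training sample -- i.e. an adversary that never adapts.
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

delta = 0.5 * rng.normal(size=d)   # fixed adversarial perturbation
X_cloaked = X + delta              # every sample carries the same cloak

# Train "from scratch" on the cloaked data (least squares with bias).
A = np.c_[X_cloaked, np.ones(n)]
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# The learner simply absorbs the shift: its bias soaks up delta @ w,
# and training loss lands near the label-noise floor.
mse = float(np.mean((A @ w_hat - y) ** 2))
print(round(mse, 4))
```

Here the bias term alone cancels the cloak, so training converges as if the perturbation weren't there; in real adversarial training the adversary is regenerated against the current model each step, which is exactly what a static Glaze cloak can't do.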

At the very least, I'd want to see some actual experiments on training from scratch before telling artists that Glaze will protect them in that scenario. And I'm very skeptical that it would.




I hadn’t really thought about that. If it doesn’t work against people training the base models, and those are inevitably going to be trained on a wider and wider set of internet-available imagery, it seems like this is even more futile.



