
This is true for explicitly adversarial actors. I can imagine a serious-business misinformation group investing the money required to train up a generator designed to defeat automatic detection.

For everyone else, though, the people making the common tools everyone uses explicitly want their images to be easy to automatically ID as fake. And users largely prefer it that way too.




As a user, I don't really care one way or the other. I'm not trying to trick anyone. I also don't care if they can readily tell that my image was AI-generated. I don't post much media, though. If I did, I'd probably use it for a game. And if it's a game, does it matter whether I touched it up in Photoshop vs. Stable Diffusion? There's going to be some processing either way to get the art asset I need.

The toolmakers might have a mild incentive to tag images generated with their tool, either to keep them from being passed off too easily as 'fakes' or just for analytics, so they can see how their output spreads.

But regardless, it's a fool's errand, like the other poster said. Anyone who is serious about tricking the mass media will strip out the watermark or use a different tool.
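
To make the "strip it out" point concrete, here's a minimal sketch, assuming a naive watermark embedded in the least-significant bits of the blue channel. Everything here is hypothetical for illustration (the tag string, file names, and function names are made up; Pillow is assumed installed). Real schemes are more elaborate, but the asymmetry is the same: removal is a blind one-pass operation that doesn't even need to know what was embedded.

    # Hypothetical sketch: embed a short tag in the blue channel's
    # least-significant bits, then destroy it by zeroing those bits.
    from PIL import Image

    TAG = "GEN:demo-tool"  # made-up generator tag, for illustration only

    def embed_tag(path_in: str, path_out: str, tag: str = TAG) -> None:
        img = Image.open(path_in).convert("RGB")
        # Turn the tag into a bit string, one bit per pixel.
        bits = "".join(f"{byte:08b}" for byte in tag.encode("ascii"))
        px = img.load()
        w, h = img.size
        assert len(bits) <= w * h, "image too small for tag"
        for i, bit in enumerate(bits):
            x, y = i % w, i // w
            r, g, b = px[x, y]
            px[x, y] = (r, g, (b & ~1) | int(bit))  # rewrite blue LSB
        img.save(path_out, "PNG")  # lossless, so the bits survive

    def strip_tag(path_in: str, path_out: str) -> None:
        # Zero every blue LSB: one pass erases the watermark entirely,
        # with no visible change and no knowledge of the embedded tag.
        img = Image.open(path_in).convert("RGB")
        px = img.load()
        w, h = img.size
        for y in range(h):
            for x in range(w):
                r, g, b = px[x, y]
                px[x, y] = (r, g, b & ~1)
        img.save(path_out, "PNG")

Even without targeting the scheme at all, a lossy re-encode to JPEG would wipe this kind of mark as a side effect, which is why only the cooperative, non-adversarial case really benefits from tagging.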


It's 2024. Everyone is an adversarial actor.



