
I'm having trouble following. Can someone explain this in easier terms? What scenario does this address and how does this work exactly?



scenario: you want to resell 50,000 iStock photos, but they are watermarked, making the free high-resolution versions valueless.

solution: you use a denoising filter to reconstruct the watermarked pixels with plausible original values. Profit!
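
(for the curious, a minimal sketch of that “solution”, using OpenCV’s classical inpainting as a stand-in for a learned denoiser; the file names and the pre-made watermark mask are hypothetical:)

```python
import cv2

img = cv2.imread("watermarked.jpg")                            # the watermarked photo
mask = cv2.imread("watermark_mask.png", cv2.IMREAD_GRAYSCALE)  # white where the watermark sits

# Fill the masked pixels in from their surroundings, i.e. "plausible original
# values". A learned denoiser/inpainter does the same job, just much better.
clean = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("clean.jpg", clean)
```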

this: instead of simple, obvious watermarks, you can embed visually unobtrusive fake noise that deliberately confuses denoising neural networks.

They claim “We find that we can target multiple different models simultaneously with our technique”, i.e. it is reasonably generic.

how? Eh, that’s complicated. Look up “adversarial examples”; there’s a fairly high-level overview here: https://towardsdatascience.com/how-to-systematically-fool-an...
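
(very roughly, the underlying trick; a hedged PGD-style sketch in PyTorch, not the linked work’s actual method. `denoiser` stands in for some pretrained image-to-image denoising model, and eps/alpha/steps are illustrative numbers:)

```python
import torch
import torch.nn.functional as F

def adversarial_watermark(denoiser, image, eps=4/255, alpha=1/255, steps=40):
    """Return image plus a faint perturbation that wrecks the denoiser's output.

    `image` is a (1, 3, H, W) float tensor in [0, 1]; `denoiser` is any
    differentiable image-to-image model (hypothetical here).
    """
    denoiser.eval()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        reconstruction = denoiser(image + delta)
        # Gradient *ascent* on the reconstruction error: the further the
        # "cleaned" output lands from the real image, the better.
        loss = F.mse_loss(reconstruction, image)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                 # PGD step
            delta.clamp_(-eps, eps)                            # keep the noise visually faint
            delta.copy_((image + delta).clamp(0, 1) - image)   # stay a valid image
        delta.grad.zero_()
    return (image + delta).detach()
```

Targeting several models at once, per the claim above, presumably boils down to summing this loss over an ensemble of denoisers before taking each step.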


The OP's title is incorrect. This doesn't detect deepfakes; it lets people watermark their images in a way that is hard to remove with conventional ML approaches.



