Hi all
As I scanned the daily news, full of ever more impressive image generations from the new kid on the block, along with the usual doom-and-gloom posts about the AI future (fake images, fake videos, etc.), I couldn't help but wonder why we still don't use reliable means to verify an image's authenticity.
A quick Google search turned up papers as old as 1998 [1] proposing the use of public keys for exactly this purpose.
Picture this: you see some shocking image of Elon Musk on a date with a humanoid robot (I actually saw this photo yesterday). But now you have a tool to submit that photo for verification. Sources like Getty, AP, and CNN would make their public keys available so anyone could cross-check the authenticity of their images, much as we do today with PGP/GPG.
Perhaps a whole new image format could even be developed to facilitate this (or to require such keys). And there would be no gatekeeping: everyone could have their own private/public key pair, just as we can now. Famous photographers would publish their public keys on their websites so that people could verify their work.
If AI-generated images are such a problem (and will become a bigger one), why is this not being done?
[1] - https://ieeexplore.ieee.org/document/723526
So even if you can get people to actually verify images (and bear in mind how hard it is to get people to manage keys and use something like GPG), not to mention the technical issues with image reproduction, this would only protect you in cases where someone claims an image was created by a specific individual and it wasn't. But that's not the problem in the vast majority of instances.