Ask HN: Why don't we already use image verification and authentication?
4 points by nunodonato 37 days ago | 8 comments
Hi all

As I scanned the daily news, with more amazing image generations from the new kid on the block and the usual doom-and-gloom comments and posts about the AI future (fake images, fake videos, etc.), I couldn't help but wonder why we still don't use reliable means to verify an image's authenticity.

A quick Google search revealed papers as old as 1998[1], where a proposal to use public keys for that purpose was presented.

Picture this: you see some shocking image of Elon Musk on a date with a humanoid robot (I actually saw this photo yesterday). But now you have a tool to submit that photo for verification. Sources like Getty, AP, CNN, etc. could have their public keys available for anyone to cross-check the authenticity of their images, much like we do today with PGP/GPG.

Perhaps a whole new image format could even be developed to facilitate this (or to require such keys). And there would be no gatekeeping: everyone can have their own private/public keypair, just like we already do now. Famous photographers would publish their public keys on their websites so that people can use them to verify, as sketched below.
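
To make this concrete, here's a rough sketch of what signing and verifying could look like with a plain keypair, using Python's cryptography package. The file name and the detached signature are just assumptions for illustration; a purpose-built format could embed the signature and the signer's key fingerprint inside the file instead.

    # Rough sketch: sign an image's raw bytes, then verify them with
    # the matching public key ("photo.jpg" is a hypothetical file).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The photographer or agency generates a keypair once and
    # publishes the public half.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Sign the raw image bytes; the signature travels with the file.
    image_bytes = open("photo.jpg", "rb").read()
    signature = private_key.sign(image_bytes)

    # Anyone holding the public key can check that the bytes are
    # exactly what was signed; verify() raises InvalidSignature otherwise.
    try:
        public_key.verify(signature, image_bytes)
        print("Signature valid: image is unmodified since signing")
    except InvalidSignature:
        print("Signature invalid: image was altered or key doesn't match")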

If AI-generated images are such a problem (and will become a bigger one), why is this not being done?

[1] - https://ieeexplore.ieee.org/document/723526




There are lots of problems with this approach, but one of the bigger ones is that if everyone has their own keys, then everyone can create signed images that have been manipulated.

So even if you can get people to actually verify images (and bear in mind how hard it is to get people to manage keys and use something like GPG), not to mention the technical issues with image reproduction, this would only give you protection in cases where someone claims an image was created by a specific individual and it wasn't. But that's not the problem in the vast majority of instances.


Isn't it? When I see an AI-generated image of something that could cause public outrage (or any other sort of virality), the first thing on my mind is: what's the source? And is it authentic?

If the image comes from John Doe, I probably couldn't care less about it. But if it is from a reputable source, then I definitely want to make sure it is authentic.

Keys are hard to manage and use only because they never reached the mainstream. A solution like this would come with easy-to-use tools to verify such images, perhaps even online tools. I don't expect my mom to know anything about private/public key pairs, but she should be able to run a simple verification on an image.


> Keys are hard to manage and use only because they never reached the mainstream.

I think they never reached the mainstream because there is a fundamentally difficult problem underneath that hasn't been solved.

Identity. There are so many similarly named people and organizations that a signature doesn't mean much, unless you understand the identity, and identity is fundamentally hard to understand.

On top of that, images have a very long lifetime, and signatures become difficult to verify over time. After a key is rotated, the old key may no longer be published, and it can be difficult to verify when a signature was made. Timestamping/signing services can help with that, but when those services rotate keys or disappear, it becomes difficult to validate their old signatures too.


> from a reputable source

And who/what defines that? The problem with your concept is not technical; it is social and systemic. If someone controls the definition of who is reputable, that is centralized control, which is both gatekeeping and susceptible to corruption. If no one controls it, it is personal opinion, not verification. Either way, it is susceptible to manipulation.

It is better to treat all of this the same as any other misinformation: by thinking about what we see. Education and critical thought go a long way, and we need to encourage people to apply them when consuming media.


We do have certificate authorities; something similar could be used here. Nothing stops you from signing your own keys, but such a signature is only worth the trust behind it. The thing is, if I'm famous and my website hosts my public key, anyone can use it to check, as in the sketch below.
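
As a sketch of that cross-check (the URL and file names are made up, and I'm assuming the published key is an Ed25519 key in PEM form):

    # Sketch: fetch a public key published on the author's website and
    # verify an image against it, much like checking a PGP signature.
    import urllib.request
    from cryptography.hazmat.primitives.serialization import load_pem_public_key
    from cryptography.exceptions import InvalidSignature

    # Hypothetical URL; in practice, the photographer's own site.
    pem = urllib.request.urlopen("https://example.com/pubkey.pem").read()
    public_key = load_pem_public_key(pem)

    image_bytes = open("shocking_photo.jpg", "rb").read()
    signature = open("shocking_photo.jpg.sig", "rb").read()  # detached signature

    try:
        public_key.verify(signature, image_bytes)  # Ed25519: no extra args
        print("Image authenticated against the published key")
    except InvalidSignature:
        print("Not signed by this key, or the image was modified")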


You're not testing what you think you're testing. We can verify an image and still have problems with the story beneath it. I don't think this should be half-solved; it would give false credence to real images used in lies. Or the reverse: you discredit the fake image of Musk you saw yesterday, but tomorrow he does it for real.

You'd need to give political campaigns' keys some trust for the images you expect from them. But what if they start signing images you wouldn't expect from them? Then you have 'verified' fake news.


There’s actually a web standard to authenticate images: C2PA, also known as Content Credentials.

Several cameras on the market already implement it, and even OpenAI attaches these credentials to the images it generates.

The limitation of signing an image is that any modification breaks authentication, but zero-knowledge proofs can be used to “preserve authentication” across permitted edits. The current signature and ZKP schemes have some practical limitations for large images, but nothing that won't be fixed soon, if I had to guess.
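
That brittleness is easy to demonstrate: with a plain signature over the raw bytes, even a one-byte change (a resize, a crop, a re-encode) fails verification. A toy illustration in Python:

    # Toy demo: a plain signature breaks on any byte-level change.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    key = Ed25519PrivateKey.generate()
    original = b"\xff\xd8\xff\xe0" + b"...stand-in for JPEG bytes..."
    signature = key.sign(original)

    # Flip one byte, as any resize, crop, or re-encode effectively would.
    tampered = bytearray(original)
    tampered[10] ^= 0x01

    try:
        key.public_key().verify(signature, bytes(tampered))
    except InvalidSignature:
        print("One changed byte and verification fails")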

I recommend this presentation: https://youtu.be/EKoY8ysGblk?si=3-lLrzCP7263sY_J


What problem is solved, or what revenue opportunity is created, by image verification and authentication?

A fake image of Elon doing whatever would generate billions of clicks, ad impressions, etc., which is all that matters.

If you want to get only genuine images from AP, CNN, etc., just go to those websites (using HTTPS).



