
Can't this be solved by having cameras cryptographically hash images concurrently with their generation?

Even easier with video: Hash each frame in a way that's dependent on the previous frame. A hashchain, if you will.
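The hash-chain idea can be sketched in a few lines. This is a minimal illustration, assuming frames arrive as raw bytes and using SHA-256; the seed and frame values are made up for the example.

```python
import hashlib

def hash_chain(frames, seed=b"camera-serial-0001"):
    """Return one digest per frame, each committing to the previous digest.

    Because every link includes the prior hash, dropping, reordering, or
    replacing a frame changes that link and every link after it.
    """
    prev = hashlib.sha256(seed).digest()
    chain = []
    for frame in frames:
        prev = hashlib.sha256(prev + frame).digest()
        chain.append(prev.hex())
    return chain

original = hash_chain([b"frame-0", b"frame-1", b"frame-2"])
tampered = hash_chain([b"frame-0", b"FORGED!", b"frame-2"])
# Links match up to the edit, then diverge from that point onward.
```

Note this only proves integrity of the sequence; to prove the footage came from a particular camera, the chain head would still need to be signed with a device key.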

Totally impossible with text, of course. AI text can be, in theory, genuinely undetectable.




> Can't this be solved by having cameras cryptographically hash images concurrently with their generation?

No. This can be only solved by having "integrity", something very few journalists have.


What happens after someone extracts the hashing key from the camera?


Well, then you have an AI-generated picture that's >>definitely real<<. I don't think that problem has a technological solution, only a regulatory one: every generated picture must be marked somehow (e.g. via a watermark invisible to humans but difficult for a program to remove) as AI-generated. If a person or a company fails to do so, massive fines are on their way.


I feel like that horse has already fled the stable.

Stable Diffusion has been kicking around in the wild for a couple of years now, and probably a million people are running it. You can't really criminalize that.

But key generation and storage can be in a hardware module, and tied to a unique device, so it would be quite difficult to hijack -- and, if you did, you'd only get very limited use out of it. So there might be a technological solution. Maybe not a perfect one, but one that's at least plausibly useful.
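A rough sketch of what per-device authentication might look like, assuming a device-unique secret provisioned into the hardware module. The key name and image bytes here are hypothetical, and a real camera would more likely use an asymmetric signature (e.g. ECDSA in a secure element) so verifiers never hold the secret; HMAC stands in for simplicity.

```python
import hmac
import hashlib

# Hypothetical per-device secret; in practice this would never leave
# the hardware security module.
DEVICE_KEY = b"unique-per-device-secret"

def sign_image(image_bytes):
    """Produce an authentication tag binding the image to this device's key."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes, tag):
    """Check the tag in constant time; any pixel change invalidates it."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

tag = sign_image(b"raw sensor data")
```

Extracting the key from one camera would only let an attacker forge images attributed to that one device, whose key could then be revoked, which is the "limited use" point above.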



