I’ve seen a number of attempts to identify deepfakes and other forms of manipulated images using AI. This seems like a fool’s errand, since it becomes a never-ending adversarial arms race between detectors and generators.
Meanwhile, I haven’t seen anyone propose a system I think could actually work. Camera and phone manufacturers could have their devices cryptographically sign each photo or video at the moment of capture. And that’s it. From that starting point, you can build a system on top of it to verify that the image on the site you’re reading is authentic. What am I missing that makes this an invalid approach?
I do understand that this would require manufacturers to implement it, but getting them on board seems achievable. I even think if you get one company like Apple to do this, that's enough traction that the rest of the industry has to follow suit.