I want to know whether an image or video is largely generated by AI, especially when it comes to news. Images and video often imply that they are evidence of something actually happening.
I don't know how this would be achieved. I also don't care. I just want people to be accountable and transparent.
We can’t even define the boundaries of AI. When you take a photo on a mobile phone, the resulting image is a neural-network-manipulated composite of multiple exposures [0]. Anyone using Outlook or Grammarly today is probably already using some form of generative AI when writing emails.
Rules like this would just lead to everything having an “AI generated” label.
People have tried this before by requiring fashion magazines and ads to warn when they photoshop the models. But obviously everything is photoshopped, and the problem becomes how to separate good photoshop (levels, blemish removal?) from bad photoshop (the warp tool?).