We are seeing tons of AI-generated images from DALL-E, Stable Diffusion, Midjourney, etc. flooding the internet.
This will only increase.
Not every such image carries a distinct mark denoting it as AI-generated. They could be mistaken for real photographs or real works of (digital) art by a human -- especially by an algorithm.
I also understand that many of today's cutting-edge models are trained on images scraped from the web. I'm not sure what curation happens, but it can't be foolproof.
Will future AI models that generate "realistic" images feed on this output as training input and produce images that mimic its attributes -- creating some kind of feedback loop that echoes across generations of models?
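The feedback-loop worry can be illustrated with a toy sketch (entirely hypothetical -- a one-dimensional Gaussian stands in for a real image model, and the "keep the most typical outputs" step is an assumed stand-in for generators and curators favoring high-likelihood, average-looking results). Each generation is trained only on the previous generation's outputs, and diversity collapses:

```python
import random
import statistics

def fit(samples):
    # "Train" a model: estimate the mean and stddev of the training set.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n, rng):
    # "Generate" new data: sample from the fitted distribution, then keep
    # only the most "typical" outputs (assumption: generation and curation
    # both favor average-looking, high-likelihood results).
    samples = [rng.gauss(mean, stdev) for _ in range(2 * n)]
    samples.sort(key=lambda x: abs(x - mean))
    return samples[:n]

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # generation 0: "real" data

stdevs = []
for generation in range(8):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    data = generate(mean, stdev, 1000, rng)  # next gen sees only model output

# Spread (stddev) shrinks generation over generation once models
# feed on model output instead of real data.
print([round(s, 2) for s in stdevs])
```

Under these assumptions the shrinkage is steady and compounding; the open question is how much real-world curation and fresh human-made data dilute the effect.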
Has anyone already thought through such issues -- not just with images but with AI-generated text, data, music, etc.? Curious to know the thinking of this group.
Especially with text, there's an arms race to make undetectable AI text for blogspam and similar purposes. It's going to end up like carbon dating: once nuclear weapons were tested in the atmosphere, everything became contaminated and had to be accounted for. https://www.radiocarbon.com/carbon-dating-bomb-carbon.htm
The future will include humans claiming AI art as their own, possibly touched up a bit, and AIs claiming human art as their own.