That assumes AI can only make ever-degrading copies of copies and is completely unable to synthesize new information out of existing information. I don't think that's true anymore (even just at today's capabilities).
It is producing novel information -- pictures or text or otherwise.
Besides, human creativity isn't born of a vacuum either. Most human works are derivatives of some other work and depend on the creator's life experiences, etc. Our entire education system is based on "people teaching other people what they know", at a lower fidelity, copies-of-copies style.
Like I said, it's a rough analogy. The "lossy compression" here is in the implicit world models these networks learn, not necessarily in the generation space (though we certainly see deviations from reality there too). However, the bigger difference between learning from SEO-generated blogspam and human teaching is the lack of feedback mechanisms to correct errors in generated output that then get taken as ground truth.
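For what it's worth, both halves of that point show up in a toy model. A minimal sketch (entirely my own construction, not anyone's actual training pipeline): repeatedly fit a Gaussian to data sampled from the previous generation's fit. With zero fresh real data, estimation noise compounds and the fitted distribution drifts and narrows; mix even a little real data back in each generation (a crude stand-in for a feedback mechanism) and the fit stays anchored.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_generations(real_fraction, n_gens=200, n_samples=100):
    """Fit a 1-D Gaussian each generation, then train the next
    generation on samples from the fit, optionally mixed with
    fresh draws from the real distribution (the 'feedback')."""
    mu, sigma = 0.0, 1.0  # generation 0 == the real distribution N(0, 1)
    for _ in range(n_gens):
        n_real = int(real_fraction * n_samples)
        data = np.concatenate([
            rng.normal(0.0, 1.0, size=n_real),               # fresh real data
            rng.normal(mu, sigma, size=n_samples - n_real),  # model output
        ])
        mu, sigma = data.mean(), data.std()  # refit on the mixture
    return mu, sigma

# Pure copies-of-copies: nothing corrects estimation noise, so it
# compounds, and sigma tends to drift toward zero over generations.
mu0, s0 = run_generations(0.0)
print(f"no feedback  : mu={mu0:+.3f}, sigma={s0:.3f}")

# Even a small feedback channel re-anchors the fit near the truth.
mu1, s1 = run_generations(0.1)
print(f"10% real data: mu={mu1:+.3f}, sigma={s1:.3f}")
```

The exact numbers depend on the seed and sample size; the point is just that without some channel back to ground truth, nothing pulls the drift back.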
> assumes AI ... are completely unable to synthesize new information out of existing information
I love it when we go back to assumptions. This is exactly what I'm assuming. But that doesn't settle the problem: what exactly is "new" information? There's another assumption for you right there.
> Most humans works are derivatives of some other work
Good point. "Most", however, does not equal "all". Collectively, we are not just copying; there is a newness. (Again: what is new?)