The pedantry gets tiring. If the AI can't recreate it exactly, it can recreate a likeness that is compelling enough that the average person would think it was the same. If it can't now, it will as it gets better. That's the point of using the training data.
That's probably a very relevant point. (I'm guessing.) If I ask for an image of a red dragon in the style of $ARTIST, and the algorithm goes off and says "Oh, I've got the perfect one already in my data"--or even "I've got a few like that, I'll just paste them together"--that's a problem.
A trained model holds relationships between the patterns/colours in artwork and their affinity to the other images in the model (ignoring the English tagging data attached to the images for a minute).
To that end, it holds relationships between millions of images and their degrees of similarity (i.e. affinity weightings of the patterns within them) in one big blob (the model).
When you ask for a dragon by $ARTIST it will find within its model an area of data with high affinity to a dragon and to $ARTIST.
What has been glossed over in the discussion here is that there are millions of other bits of related imagery, with lower affinity, from lots of unrelated artwork, which give the generated image its uniqueness.
Because of this you can never recreate the original image 1:1; it's always diluted by the relationships from the huge mass of other training data. E.g. a colour from a dinosaur exhibit in a museum may also be incorporated because it looks like a dragon, along with many other minor traits from millions of other images, selected at random (depending on the seed and other values).
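A toy sketch of that dilution idea, in pure NumPy with made-up numbers. To be clear, real diffusion models don't blend stored images at all; this is just the intuition that an affinity-weighted mix over the whole dataset, plus seeded noise, can't be dominated by any single training image:

    import numpy as np

    rng = np.random.default_rng(seed=42)          # the "seed value"
    n_images, dim = 100_000, 8                    # stand-in for millions of images
    features = rng.normal(size=(n_images, dim))   # pattern/colour features, one row per image

    prompt = rng.normal(size=dim)                 # "a dragon by $ARTIST"

    # Affinity of every training image to the prompt (cosine similarity).
    affinity = features @ prompt / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(prompt)
    )

    # Softmax turns affinities into blend weights: high-affinity images
    # contribute most, but the millions of low-affinity ones still contribute.
    weights = np.exp(affinity) / np.exp(affinity).sum()
    print(weights.max())   # even the single best match gets a tiny share

    # The "generated" features: a diluted blend plus seed-driven noise,
    # never a 1:1 copy of any one training image.
    output = weights @ features + 0.1 * rng.normal(size=dim)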
Another interesting point is that a picture of a smiling dark-haired woman would have high affinity with the Mona Lisa, but when you prompt for the Mona Lisa you may get parts of that picture back, and not the patterns from the Mona Lisa itself*, even though it looks the same. Arguably that (not getting the actual Mona Lisa) is no longer the copyrighted data.
* NB: this is a contrived example, since in SD the real Mona Lisa weightings will outnumber the individual dark-haired woman's many times over. However, the concept might be (more) applicable to minor artists whose work is not popular enough to form a significantly large weighting in the training data.
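If you want to put a number on that kind of affinity, CLIP text embeddings are one way to do it (CLIP-style encoders are what SD's training data was tagged and filtered with). A minimal sketch, assuming the stock openai/clip-vit-base-patch32 checkpoint from Hugging Face and some example captions:

    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    captions = ["the Mona Lisa",
                "a smiling dark-haired woman",
                "a red dragon"]
    inputs = processor(text=captions, return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)   # unit-normalise

    print(emb @ emb.T)   # pairwise cosine similarities

I'd expect the woman/Mona-Lisa pair to score well above the dragon/Mona-Lisa pair, which is the high affinity being described.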
I realize that's not how it works. My point was that they're apparently taking deliberate steps to try to make sure the model trains over a large number of images and doesn't overfit on a small sample given a sufficiently specific "in the style of," etc.
> If the AI can't recreate it exactly, it can recreate a likeness that is compelling enough that the average person would think it was the same
That's the opposite of this image model's goal. Sure, you might find other kinds of research models that are meant to do that, but that's not Stable Diffusion and the like.
Why does this argument apply to an artificial intelligence, but not a human one? A human is not breaking copyright just by being able to recreate a copyrighted work they've studied.
It depends on the degree to which it's literal copying. See e.g. the Obama "Hope" poster. [1] Though that case is muddied by the fact that the artist lied about the source of his inspiration. Had it in fact been an older photo of JFK in a similar pose, there probably wouldn't have been a controversy.