People love to draw the analogy to human listening, and while I agree in principle, an AI is still not human, so the argument isn't necessarily valid.
It could be worth considering: humans are [ostensibly, I claim] intelligent by default. The details of the scenarios obviously differ, but humans can learn to speak just by being around their immediate family; they may read only a few dozen books in their lifetime; they can appreciate music on hearing their very first album.
AI (at least the current rendition of it) is trained on a massive collection of data before we could even start to claim it is intelligent. You can't take an empty neural network, have it listen to a single album, and expect it to get anything worthwhile out of that. It can't read a few dozen books and then be able to do much of anything. It needs a much, much larger data set before we would even think of calling it intelligent.
So I think, to compare a human listening to an album, and a trained AI system listening to an album, yes, those two things might be reasonably analogous. But to even get the AI system to that point, it needed to be built up using a huge quantity of data. Do the copyright holders on that data concur with that training usage? I think, in that respect, there is a difference compared to a human listening to a recording.
Randomly pontificating in a discussion forum here; not offering a well-thought-out plan for everyone to live by!
> So I think, to compare a human listening to an album, and a trained AI system listening to an album, yes, those two things might be reasonably analogous.
This still relies on the implicit assumption that AI and the human mind work the same way. While AI and humans may generate similar-looking results, and may need a similar amount of data fed into them, is that really enough evidence that they work the same way, and should therefore be treated the same way?
Additionally, this analogy ignores the fact that the human mind cannot be replicated, scaled, and automated, while an AI can. Isn't that aspect highly relevant in the case of producing content?
Sure. Maybe that analogy is in fact not even fair. The main point I was getting at is that an AI system has already consumed a huge quantity of data before we could even try to compare it with what a human does.
But even at that point, I would agree the analogy may still have holes in it.