
> If the argument is that human authors should get a cut of AI profits because their works were "stolen" to train the models, this is going to be an increasingly losing argument, because it doesn't have a leg to stand on for private training data.

The argument can be made that LLMs could not have been created without expropriating the original works of all the authors they were trained on, and that argument would in fact be true and have quite sturdy legs as far as I'm concerned.

This isn't some long-forgotten historical episode: it started less than half a decade ago, and I'd be surprised if it isn't still ongoing (your argument about synthetic training data is forward-looking).



>The argument can be made that LLMs could not be created without expropriating the original works of all the authors they were trained on, and that argument would in fact be true and have quite sturdy legs as far as I’m concerned.

That makes as much sense as "American industry was built on the backs of British inventors (back in the day, the US was the 'China' of its era when it came to IP), so Britain should get perpetual (?) royalties from the US economy".


So we're back to the human vs. unthinking machine distinction. American inventors were human. We're going in circles, and this article was hidden on HN anyway.



