Hacker News

Right, the difference is that it's a large company looking at the work, then copying it and reselling it without credit, which basically everyone would understand as bad without the indirection of a model.

Edit: the key words here are “company” and “reselling”



But it’s not copying and reselling — it’s imitation.

Copying is controlled by copyrights. And imitation isn’t controlled by anything.

As for a company: a company is just a group of people acting together.


Imitation has been controlled by patent rights, under which an object did not need to be an exact copy but merely use the essence of an idea to count as a violation. Of course, so far this protection has only been applied to inventions, because they took a lot of time to develop but, once done, could be trivially copied at a glance. Now, in this new landscape, we may find it appropriate to apply a similar regime to art.

Or maybe we won't. It's a choice for society to make, to balance how much we need to protect the incentives to create something new vs protect the ease of copying.


#1, it’s extremely easy to coerce out of these models direct copies that artists could be pursued for infringement over if they drew them, yet the companies reselling said copyrighted artwork face no penalty

#2, yes, it’s a group of people who came together to build an algorithm that learns to extract features from images made by other people in order to generate new images somewhere between those images in a high-dimensional space. They sell these images and give no credit or cash for the images being “interpolated” between. Notice this doesn’t extend to open source; it’s the commercial aspect that represents theft.

The reality is that laws are meant to be interpreted not by their letter but by their spirit. The AI can’t exist without the hard work it’s trained on, and the outputs often resemble the inputs in a manner that approaches copying, so selling those outputs without compensating the artists in the training set should be illegal. It won’t be, but it should.


> it’s extremely easy to coerce direct copies out of models that artists could be pursued for infringement if they drew, but companies reselling said copyrighted artwork face no penalty

The purpose of the model isn't to make exact reproductions. It's like saying you can use the internet for copyright infringement. You can, but it's the user who chooses the use, so is that on AT&T and Microsoft or is it on the users doing the infringement?

> They sell these images and give no credit or cash to the images being “interpolated” between.

A big part of the problem is that machines aren't qualified to be judges.

Suppose the image you request is Gollum, but instead of the One Ring he covets PewDiePie. Obviously this uses a character from the LOTR films by Warner Bros. If you're PewDiePie and you want this image to use in an ad for your channel, you might be in trouble.

But Warner Bros. got into a scandal for paying YouTubers to promote Middle-earth: Shadow of Mordor without disclosing the payments. If you're creating the image to criticize the company's behavior, it's likely fair use.

The service has no way to tell why you want the image, so what is it supposed to do? A law that requires them to deny you in the second case is restricting a right of the public. But it's the same image.

Meanwhile in the first case you don't really need the company generating the image to do anything because Warner Bros. could then go after PewDiePie for using the character in commercial advertising without permission.

> Notice this doesn’t extend to open source, it’s the commercial aspect that represents theft.

It's also not really clear how this works. For example, Stable Diffusion is published. You can run it locally. If you buy a GPU from Nvidia or AMD in order to do that, is that now commercial use? Is the GPU manufacturer in trouble? What if you pay a cloud provider like AWS to use one of their GPUs to do it? You can also pay for the cloud service from Stability AI, the makers of Stable Diffusion. Is it different in that case than the others? How?


>It's like saying you can use the internet for copyright infringement.

I think a comparison that could help elucidate the problem here is to a search engine. Like imagegen, an image search uses infrastructure plus an algorithm to return the closest match(es) to a textual input over some particular space (whether the space of indexed images or the latent space of the model). Immediately, however, there are qualitative differences. A search company, as an entity, doesn’t in any way take credit for the work; it bills itself as, and operates as, a mechanism to connect the user to others’ work, and in the service of this goal it provides the most attribution it’s reasonably able to provide given technical limitations (a URL).

For me this is the difference. Image gen companies, at least all that I’m aware of, position themselves more as a kind of pseudo-artist that you can commission. They provide no means of attribution; rather, they deliberately obfuscate the source material being searched over. Whether you are willing to equate the generation process to a kind of search for legal purposes is really the core disagreement here, and beyond an intuition for it, not something I feel I can prove.

So what’s the solution? What’s a business model I’d find less contentious? If an AI company developed a means to, for example, associate activation patterns with an index of source material (or hell, just provided an effective similarity search between output and training data) as a sort of good-faith attribution scheme, made visible the training set used, and was upfront in marketing about its dependence on the source material, I’d struggle to have the same issues with it. It would be leagues ahead of current companies in the ethical department.

To be clear though, I’m not a lawyer. I can’t say how image gen fits into the current legal scheme, or whether it does or doesn’t. My argument is an ethical one; I think that the unethical behavior of the for-profit imagegen companies should be hampered by legality, through new laws if necessary. I feel like this should answer your other questions as well, but let me know if I missed something.
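A similarity search between an output and the training set, as suggested above, could be as simple as a nearest-neighbor lookup in embedding space: embed the generated image and every training image with the same encoder, then surface the top-k most similar training items as candidate attributions. A minimal sketch in Python with NumPy, using random vectors as stand-ins for real image embeddings (the function name, dimensions, and data here are all illustrative assumptions, not anything a real imagegen company ships):

```python
import numpy as np

def top_k_attributions(output_emb, training_embs, k=3):
    """Return indices of the k training embeddings most similar
    to the generated output's embedding, by cosine similarity."""
    a = output_emb / np.linalg.norm(output_emb)
    b = training_embs / np.linalg.norm(training_embs, axis=1, keepdims=True)
    sims = b @ a  # cosine similarity of each training item vs. the output
    return np.argsort(sims)[::-1][:k]

# Stand-in "embeddings": 1000 training images, 512 dims each.
rng = np.random.default_rng(0)
training = rng.normal(size=(1000, 512))

# Pretend the generated image landed very close to training item 42.
output = training[42] + rng.normal(scale=0.05, size=512)

print(top_k_attributions(output, training, k=3))
```

The returned indices would map back to a visible training-set index, giving the kind of good-faith attribution the comment describes; real systems would need a perceptual encoder (and the hard part is whether latent-space proximity is the right notion of "source" at all).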


I guess I don't understand what you mean when you say companies reselling said copyrighted artwork face no penalty. Why wouldn't they? If I was to make a copy of a Studio Ghibli movie and sell it, I would absolutely face a penalty if I was caught.


A common joke is to type in a description of some corporate IP and have ChatGPT generate it without ever naming it directly. Plenty of people have paid a subscription to do that, generating corporate IP that an artist could be sued over, yet as far as I know OpenAI hasn’t faced any legal consequences for it, just as an example.


You need to copy the work to use it for AI training.


I’d argue that would be fine for non-commercial use; it’s once the AI outputs are sold that the problem arises.


Try "imitating" some Mario Brothers in a commercial context and see how that goes. Good luck.



