
Setting aside the efficacy of this tool, I would be very interested in the legal implications of putting designs in your art that could corrupt ML models.

For instance, if I set traps in my home that hurt an intruder, we are both guilty of crimes (traps are illegal and are never considered self-defense; B&E is illegal).

Would I be responsible for corrupting the AI operator's data if I intentionally include adversarial artifacts to corrupt models, or is that just DRM to legally protect my art from infringement?

edit:

I replied to someone else, but this is probably good context:

DRM is legally allowed to disable or even corrupt the software or media that it is protecting, if it detects misuse.

If an adversarial-AI tool attacks the model, it then becomes a question of whether the model, having now incorporated my protected art, is now "mine" to disable/corrupt, or whether it is in fact out of bounds of DRM.

So for instance, a court could say that the adversarial-AI methods could only actively prevent the training software from incorporating the protected media into a model, but could not corrupt the model itself.




None whatsoever. There is no right to good data for model training, nor does any contractual relationship exist between you and a model builder who scrapes your website.


If you're assuming this is open-and-shut, you're wrong. I asked this specifically as someone who works in security. A court is going to have to decide where the line is between DRM and malware in adversarial-AI tools.


I'm not. Malware is one thing; passive data poisoning is another. Mapmakers have long used such devices to detect and deter unwanted copying. In the US, such 'trap streets' are not protected by copyright, but neither do they generate liability.

https://en.wikipedia.org/wiki/Trap_street


A trap street doesn't damage other data, so it's not even remotely useful as an analogy. It exists to allow detection of copies, not to render the copies unusable.


Sure it does. Suppose the data you want to publish is about the number of streets, or the average street length, or the distribution of street names, or the angles of intersections. Trap streets will corrupt that, even if it's just a tiny bit. Likewise, ghost imagery slipped into desirable imagery only slightly corrupts the model, but like the trap streets, that's the model-maker's problem.

You have a legal right to scrape data and use it as input into a model, but you don't have a right to good data. It's up to you to sanitize it before training your model on it.


Worth trying, but I doubt it unless we establish a right to train.


The way Nightshade works (assuming it does work) is by confusing the features of different tags with each other. To argue that this is illegal would be to argue that mistagging a piece of artwork on a gallery is illegal.
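Roughly, the general technique (a sketch of how I understand this family of tools, not Nightshade's actual code; the feature extractor, epsilon, and step counts here are arbitrary choices of mine) is to optimize a small perturbation so an image's features resemble those of a different concept:

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Any pretrained vision backbone works as a stand-in feature extractor.
    extractor = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    extractor.fc = torch.nn.Identity()  # keep penultimate-layer features
    extractor.eval()

    def poison(image, target_image, eps=8 / 255, steps=200, lr=0.01):
        """Nudge `image` (a dog, say) so its *features* look like those of
        `target_image` (a cat), keeping the pixel change small.
        Both arguments are [1, 3, H, W] tensors with values in [0, 1]."""
        with torch.no_grad():
            target_feat = extractor(target_image)
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            feat = extractor(torch.clamp(image + delta, 0, 1))
            loss = F.mse_loss(feat, target_feat)  # pull features toward "cat"
            opt.zero_grad()
            loss.backward()
            opt.step()
            delta.data.clamp_(-eps, eps)  # bound the visible change
        return torch.clamp(image + delta, 0, 1).detach()

To a person the result still looks like the original image; the mismatch only shows up in what a model learns from it, which is why the mistagging analogy seems apt to me.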

If you upload a picture of a dog to DeviantArt and you label it as a cat, and a model ingests that image and starts to think that cats look like dogs, would anybody claim that you are breaking a law? If you upload bad code to GitHub that has bugs, and an AI model consumes that code and then reproduces the bugs, would anyone argue that uploading badly written code to GitHub is a crime?

What if you uploaded some bad code to GitHub and then wrote a comment at the top of the code explaining what the error was, because you knew that the model would ignore that comment and would still look at the bad code? Would you then be committing a crime by putting that code on GitHub?

Even if it could be proven that your intention was for that code or that mistagged image to be unhelpful to training, it would still be a huge leap to say that either of those activities was criminal -- I would hope that the majority of HN would see that as a dangerous legal road to travel down.


That’s like asking if lying on a forum is illegal


No, it's much closer to (in fact, it is simply) asking if adversarial-AI tools count as DRM or as malware. And a court is going to have to decide whether the model and/or its output counts as separate software, which it is illegal for DRM to intentionally attack.

DRM can, for instance, disable its own parent tool (e.g. a video game) if it detects misuse, but it can't attack the host computer or other software on that computer.

So is the model or its output, having been trained on my art, a byproduct of my art, in which case I have a legal right to 'disable' it, or is it separate software that I don't have a right to corrupt?


> asking if adversarial AI tools count as DRM or as malware

Neither. Nightshade is not DRM or malware; it's "lying" about the contents of an image.

Arguably, Nightshade does not corrupt or disable the model at all. It feeds it bad data that leads the model to form incorrect conclusions or patterns about how to generate images. This is assuming it works, which we'll have to wait and see; I'm not taking that as a given.

But the only "corruption" happening here is that the model is being fed data that it "trusts" without verifying that what the data is "telling" it is correct. It's not disabling the model or crashing it; the model is forming incorrect conclusions and patterns about how to generate the image. If Google Translate asked you to rate its performance on a task, and you gave it a rating different from what you actually thought its performance was, is that DRM? Malware? Have you disabled Google Translate by giving it bad feedback?

I don't think the framing of this as either DRM or malware is correct. This is bad training data. Assuming it works, it works because it's bad training data -- that's why ingesting one or two images doesn't affect models but ingesting a lot of images does, because training a model on bad data leads the model to perform worse if and only if there is enough of that bad data. And so what we're really talking about here is not a question of DRM or malware; it's a question of whether or not artists have a legal obligation to make their data useful for training -- and of course they don't. The implications of saying that they did would be enormous: it would imply that any time you knowingly lied in answering a question that was being fed into an AI training set, you were doing something illegal.
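To make the "enough bad data" point concrete (a toy sketch, nothing to do with Nightshade's internals; the synthetic dataset and classifier are made up purely for illustration), you can watch an ordinary classifier degrade as the fraction of deliberately mistagged training labels grows:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    for flip_rate in [0.0, 0.01, 0.1, 0.3, 0.45]:
        y_bad = y_train.copy()
        n_flip = int(flip_rate * len(y_bad))
        idx = rng.choice(len(y_bad), n_flip, replace=False)
        y_bad[idx] = 1 - y_bad[idx]  # "mistag" a fraction of the training data
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_bad)
        print(f"{flip_rate:.0%} bad labels -> test accuracy {clf.score(X_test, y_test):.3f}")

A few flipped labels barely move the needle; a large fraction does -- the same scale effect described above, and it's still just a data-quality problem for whoever trains on it.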


I see it as no different from mapmakers inventing a nonexistent alley to check who copies their maps verbatim (a "trap street"). Even if this caused, for example, a car crash because of an autonomous driver, the onus, I think, would be on the one that made the car and used the stolen map for navigation, and not on the one that created the original map.

https://en.wikipedia.org/wiki/Trap_street


Japan is considering it, I think? https://news.ycombinator.com/item?id=38615280


How would that situation be remotely related?



