If you're assuming this is open-and-shut, you're wrong. I ask this specifically as someone who works in security. A court is going to have to decide where the line is between DRM and malware in adversarial-AI tools.



I'm not. Malware is one thing; passive data poisoning is another. Mapmakers have long used such devices to detect/deter unwanted copying. In the US such 'trap streets' are not protected by copyright, but neither do they generate liability.

https://en.wikipedia.org/wiki/Trap_street


A trap street doesn't damage other data, so it's not even remotely useful as an analogy. It exists to allow detection of copies, not to corrupt the copies into being unusable.


Sure it does. Suppose the data you want to publish is about the number of streets, or the average street length, or the distribution of street names, or the angles of intersections. Trap streets will corrupt that, even if only a tiny bit. Likewise, ghost imagery slipped into desirable imagery only slightly corrupts the model, but like the trap streets, that's the model-maker's problem.
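A toy illustration of that point (all numbers are hypothetical, just to show the mechanism): a single fabricated map entry shifts any statistic computed over the dataset, if only slightly.

    # Hypothetical numbers: one fictitious "trap street" skews the
    # average street length derived from the map data.
    real_street_lengths_m = [120.0, 340.0, 85.0, 560.0, 210.0]  # genuine streets
    trap_street_length_m = 40.0                                  # fabricated entry

    honest_avg = sum(real_street_lengths_m) / len(real_street_lengths_m)
    poisoned = real_street_lengths_m + [trap_street_length_m]
    poisoned_avg = sum(poisoned) / len(poisoned)

    print(f"true average:     {honest_avg:.1f} m")   # 263.0 m
    print(f"with trap street: {poisoned_avg:.1f} m") # 225.8 m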

You have a legal right to scrape data and use it as input to a model, but you don't have a right to good data. It's up to you to sanitize it before training your model on it.
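To make "sanitize it" concrete, here's a minimal sketch of a pre-training filtering pass. The z-score heuristic and every name in it are my own illustration, not an actual anti-poisoning defense; real pipelines lean on deduplication, perceptual hashing, and far more robust filters.

    import numpy as np

    def filter_outliers(samples: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
        """Drop samples whose per-sample feature means are statistical outliers.

        A crude stand-in for pre-training sanitization; production
        defenses against poisoning are considerably more involved.
        """
        means = samples.reshape(len(samples), -1).mean(axis=1)
        z = (means - means.mean()) / (means.std() + 1e-12)
        return samples[np.abs(z) < z_thresh]

    # Sanitize scraped data before it ever reaches the training loop.
    scraped = np.random.rand(1000, 32, 32, 3)  # stand-in for scraped images
    clean = filter_outliers(scraped)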


Worth trying, but I doubt it unless we establish a right to train.



