
> More specifically, we assume the attacker:

> • can inject a small number of poison data (image/text pairs) to the model’s training dataset

I think those are bad assumptions; labelling is increasingly done by some labelling AI rather than by humans.




Usually CLIP, which is actually how this works: the examples are modified so that CLIP misclassifies them, but they still look passable to a human.
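For anyone curious what that looks like in practice, here is a minimal sketch (not the paper's exact method) of crafting one such poison image: nudge the pixels so CLIP's image embedding aligns with an attacker-chosen caption, while an L-inf budget keeps the change subtle. The checkpoint name, file name, caption, epsilon, and step count are all illustrative assumptions, and the perturbation is applied in CLIP's normalized pixel space for simplicity.

  import torch
  from PIL import Image
  from transformers import CLIPModel, CLIPProcessor

  model_name = "openai/clip-vit-base-patch32"   # assumed checkpoint
  model = CLIPModel.from_pretrained(model_name).eval()
  processor = CLIPProcessor.from_pretrained(model_name)

  image = Image.open("clean.jpg")               # hypothetical clean image
  target_caption = "a photo of a dog"           # label the attacker wants CLIP to assign

  inputs = processor(text=[target_caption], images=image, return_tensors="pt")
  pixel_values = inputs["pixel_values"]
  delta = torch.zeros_like(pixel_values, requires_grad=True)

  eps, step, iters = 0.03, 0.005, 100           # small budget keeps the image visually unchanged

  text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                     attention_mask=inputs["attention_mask"])
  text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

  for _ in range(iters):
      img_emb = model.get_image_features(pixel_values=pixel_values + delta)
      img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
      loss = -(img_emb * text_emb).sum()        # maximize cosine similarity to the target caption
      loss.backward()
      with torch.no_grad():
          delta -= step * delta.grad.sign()     # PGD step toward the target embedding
          delta.clamp_(-eps, eps)               # stay within the imperceptibility budget
          delta.grad.zero_()

  poisoned = pixel_values + delta               # still looks like "clean.jpg", but CLIP pairs it with the dog caption

If an automated labelling pipeline then captions the poisoned image with CLIP, the attacker-chosen text ends up in the training set without any human ever seeing an obviously wrong pair.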



