
Just for the record, the journalist who broke the story on Clearview noted that Clearview AI has specifically demonstrated it isn't fooled by thispersondoesnotexist.com:

https://twitter.com/kashhill/status/1218542846694871040?s=20

You won't be giving them any info on you, but you won't be confounding them either.

I'm not sure what is not working as intended here?

Run facial recognition against a computer-generated face, get no matches. Surely that is the expected and intended result for both parties?

Or is it expected to match against a different face?


They might have already scraped all of the faces on that site. They aren't generated on demand, so you could conceivably scrape the entire database of fake people and tell the algorithm to ignore anything that matches them. Then the algorithm would treat any photo of a person-that-doesn't-exist it finds in the wild the same way it treats an account with no profile pic at all. It might fool a person, or an AI not trained on those pictures.

Another option is to generate your own people that do not exist and use those images. This could work as long as Clearview isn't doing some sort of image analysis to look for telltale signs of AI-generated faces. You could also start photoshopping fake faces onto your real pictures in an effort to blur the line between AI-generated pictures and real ones.
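The ignore-list idea above is just a nearest-neighbor check over face embeddings. Here's a minimal sketch: it assumes you already have a face-embedding model (FaceNet-style, producing fixed-size vectors), so random vectors stand in for real embeddings; the scraping and the embedding model are both assumptions, not Clearview's actual pipeline.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical: in practice these 128-d vectors would come from a
# face-embedding network run over scraped fake faces; random vectors
# stand in here so the sketch is self-contained.
rng = np.random.default_rng(0)
known_fakes = [rng.standard_normal(128) for _ in range(1000)]

def is_known_fake(query_embedding, threshold=0.9):
    """Treat the query as a 'null photo' if it matches any scraped fake."""
    return any(cosine_sim(query_embedding, f) >= threshold
               for f in known_fakes)

# A re-uploaded copy of a scraped fake matches itself exactly,
# while an unrelated face almost certainly falls below the threshold.
assert is_known_fake(known_fakes[42])
assert not is_known_fake(rng.standard_normal(128))
```

A linear scan like this is O(n) per query; at Clearview scale you'd want an approximate nearest-neighbor index instead, but the filtering logic is the same.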

He thought giving them a fake face would gum up their search quality and ability to resolve him. I'm saying it would basically be a null-photo to Clearview.

It's the fact that it returned no matches that indicates it isn't fooled. If it were fooled, it would have associated those faces with the accounts that use them (assuming anyone is using them, which they probably are).

By returning no accounts, it demonstrates their AI isn't using those faces for identification.


In that case, how about the opposite strategy? Take your own picture and a GAN with controllable latent-space features (i.e. a thispersondoesnotexist that lets you adjust age, gender, etc.). Then set the parameters so that you get a picture of yourself. Upload this picture to social media and watch Clearview ignore it, while you still look like you to humans.
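Finding latent parameters that reproduce a given photo is GAN inversion: gradient descent on the latent vector to minimize the distance between the generator's output and your picture. Here's a toy sketch with a linear "generator" standing in for a real (nonlinear) GAN; everything in it is illustrative.

```python
import numpy as np

# Toy stand-in for a GAN generator: a fixed linear map from a small
# latent space to "image" space. A real generator is nonlinear, but
# the inversion loop below has the same shape.
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 8))       # 8-dim latent -> 64-dim "image"
generate = lambda z: W @ z

target = generate(rng.standard_normal(8))   # the photo of "you"

# Invert: gradient descent on || generate(z) - target ||^2 over z.
z = np.zeros(8)
for _ in range(1000):
    grad = 2 * W.T @ (generate(z) - target)
    z -= 0.002 * grad

recovered = generate(z)
assert np.allclose(recovered, target, atol=1e-3)
```

With a real GAN the loss is usually a perceptual distance rather than raw pixels, and the recovered latent can then be nudged slightly to produce near-duplicates that humans still read as the same person.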

I was thinking something similar. Maybe with some randomization to it, so different profiles across (social) media wouldn't link together through Clearview et al. but still all look like you to humans.

True, but at least it prevents them from linking accounts. It is equivalent to no profile picture, which you might not want if you're trying to keep up appearances.

You may be surprised to learn that your face is your least identifying trait online. Your network of friends/followers/likes identifies you far more readily—even if you use a random username.[1]

Managing your privacy is a lot like CPU side channel attacks. It forces you to re-evaluate your fundamental assumptions about what information can be exploited.

[1] http://www.vldb.org/pvldb/vol7/p377-korula.pdf


While reading the comment, I was thinking about overlaying faces with the Laughing Man instead of thispersondoesnotexist.

http://cdn.collider.com/wp-content/uploads/2016/02/ghost-in-...


But, that sounds like it is fooled, in this context.


