They might have already scraped all of the faces on that site. The images aren't generated on demand, so you could conceivably scrape the entire database of fake people and tell the algorithm to ignore anything that matches them. The algorithm would then treat any photo of a person who doesn't exist that it finds in the wild the same way it treats an account with no profile pic. It might fool a person, or an AI not trained on those pictures.

Another option is to generate your own people who don't exist and use those images. This could work as long as Clearview isn't doing some sort of image analysis to look for telltale signs of AI-generated faces. You could also start photoshopping fake faces onto your real pictures in an effort to blur the line between AI-generated pictures and real ones.
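Roughly, the scrape-and-ignore filter from the first idea could look something like this. A minimal Python sketch, assuming an embedding-based matcher (the open-source face_recognition library stands in for whatever Clearview actually runs); the file names and the 0.6 tolerance are made up for illustration:

```python
# Sketch of the "scrape the fakes, then ignore them" idea. Assumes a
# Clearview-style pipeline that compares face embeddings; the library
# choice (face_recognition) and the tolerance are illustrative only.
import face_recognition

# Embeddings precomputed from scraped thispersondoesnotexist.com images.
# The paths are hypothetical stand-ins for the scraped database.
known_fake_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["fake_001.jpg", "fake_002.jpg"]
]

def index_profile_photo(path):
    """Return an embedding to index, or None to treat the photo as a null-photo."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None  # no face detected at all
    encoding = encodings[0]
    # If the face matches any known generated face, skip it, exactly as if
    # the account had no profile picture.
    matches = face_recognition.compare_faces(
        known_fake_encodings, encoding, tolerance=0.6
    )
    if any(matches):
        return None
    return encoding
```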
He thought giving them a fake face would gum up their search quality and ability to resolve him. I'm saying it would basically be a null-photo to Clearview.
It's the fact that it returned no matches that indicates it isn't fooled. If it were fooled, it would have associated those faces with the accounts that use them (assuming anyone is using them, which people probably are).
By returning no accounts, their AI demonstrates it isn't using those faces for identification.
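To make the null-photo point concrete, here's a toy lookup: a face only comes back with accounts attached if the index has associated it with some. Everything here (the embeddings, account names, and threshold) is invented for the example:

```python
# Toy nearest-neighbor lookup: an indexed face returns its accounts,
# an unindexed (e.g. generated) face returns nothing.
import numpy as np

# Hypothetical index: embedding -> accounts the crawler saw using that face.
indexed = {
    "alice": (np.array([0.1, 0.9]), ["twitter.com/alice"]),
}

def lookup(query, threshold=0.3):
    hits = []
    for _name, (emb, accounts) in indexed.items():
        if np.linalg.norm(emb - query) < threshold:
            hits.extend(accounts)
    return hits

print(lookup(np.array([0.12, 0.88])))  # indexed face -> ['twitter.com/alice']
print(lookup(np.array([0.9, 0.1])))    # face never indexed -> []
```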
Run facial recognition against a computer-generated face, get no matches. Surely that's the expected and intended result for both parties?
Or is it expected to match against a different face?