Experts: Spy used AI-generated face to connect with targets (apnews.com)
49 points by NN88 on June 13, 2019 | 25 comments


At first, I thought this was going to be some "A Scanner Darkly"-level stuff done in real time over Skype or something. Turns out it's just using thispersondoesnotexist.com to generate a profile pic for a website. I'd expected bots to have picked up this trick by now.


Admittedly this is ad hominem, but I was instantly turned off by the article upon learning that Hao Li[1] was solicited to comment in his capacity as a USC affiliate.

[1] http://sadeghi.com/dr-iman-sadeghi-v-pinscreen-inc-et-al/


Question: why does it matter that the image was generated using ML? It seems to be a main point of the article, but based on what's discussed there, I really don't see why.

Software that does this automagically exists on GitHub, so any 13-year-old with a laptop less than ten years old could have made it. So the ability to generate this image doesn't matter. A person could have turned one face into a new face in Photoshop or GIMP, or just grabbed a random sample from the internet. So I'm a bit confused as to why it's even mentioned; it seems a pointless detail, and even more so to feature so prominently in the headline.


The reason it matters here is that it increases the likelihood that the profile is fake. If it were a real picture, it might just be someone lying to pad their resume and get a better job. For a spy, though, a generated image has the advantage of being unique, so it can't be traced with a reverse image search.


Perhaps using an AI-generated face does not constitute identity theft? ...just wondering


Reminds me of Robin Sage[1].

[1] https://en.wikipedia.org/wiki/Robin_Sage


That was an interesting little read, thanks!

So in other news: social engineering is still a huge threat to security, because humans are humans. Wheee.


If you're going to write an article whose primary topic is an image, it would be nice to include that image.


https://i.imgur.com/ddaY01b.png What kind of browser do you have?


It's 2019, if your website doesn't support images in Lynx then it doesn't deserve my business!


<angrily typing response in Pine> :P


Hmm, I guess I'm a victim of still being on a Windows Phone in 2019.


Yeah, that’s probably a pretty rare edge case by now.


Does anyone know why AI-based detection algorithms seem to have lagged behind AI-based production of images? Perhaps that's just where the money is, but you would think the DoD and other agencies would be at least equally interested in detection.


Could just be that this is often done using GANs? So if you come up with some better way of telling apart real from fake, it can immediately be used as part of the discriminator, resulting in (hopefully) even better fake images.
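To make that feedback loop concrete, here's a minimal, made-up PyTorch sketch of one GAN training step (toy networks and random stand-in data, nothing from the article): the discriminator is literally part of the generator's loss, so any improvement in detection immediately becomes a better training signal for the fakes.

  import torch
  import torch.nn as nn

  latent_dim, img_dim = 64, 28 * 28

  # Toy generator and discriminator; real models are much larger.
  G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                    nn.Linear(256, img_dim), nn.Tanh())
  D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                    nn.Linear(256, 1))  # real/fake logit

  opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
  opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
  bce = nn.BCEWithLogitsLoss()

  real = torch.rand(32, img_dim)   # stand-in for a batch of real images
  z = torch.randn(32, latent_dim)

  # 1) Train the detector: tell real from fake.
  fake = G(z).detach()
  d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
  opt_d.zero_grad(); d_loss.backward(); opt_d.step()

  # 2) Train the generator *through* the detector: a sharper discriminator
  #    gives the generator a sharper gradient to train against.
  g_loss = bce(D(G(z)), torch.ones(32, 1))
  opt_g.zero_grad(); g_loss.backward(); opt_g.step()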


> So if you come up with some better way of telling apart real from fake, it can immediately be used as part of the discriminator

The generator can only learn from the discriminator if the discriminator is differentiable and has reasonably consistent gradients.


You can use a relaxation for the discrete variables (e.g. relaxing them onto the probability simplex), replacing them with differentiable ones, and then just discretize at the very end of the computation. It's a common trick for variational autoencoders, which are another way to do generative models.
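For a concrete picture of that trick, here's a small PyTorch sketch using the Gumbel-softmax as one possible relaxation onto the simplex (the comment above doesn't name a specific method, so treat this as an illustrative choice):

  import torch
  import torch.nn.functional as F

  # Scores over 10 discrete choices; we want gradients w.r.t. these.
  logits = torch.randn(4, 10, requires_grad=True)

  # Relaxed, differentiable "sample": a point inside the probability simplex.
  soft = F.gumbel_softmax(logits, tau=0.5, hard=False)

  # Only at the very end of the computation do we discretize back to a hard
  # one-hot choice for the final output.
  hard = F.one_hot(soft.argmax(dim=-1), num_classes=10).float()

  # Gradients flow through the relaxed sample, so training works end to end
  # even though the underlying variable is discrete.
  soft.sum().backward()
  print(logits.grad is not None)  # True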


I sort of naively assumed that any binary classification task on images nowadays would be done with a deep NN.

Are there any proposed techniques for detecting fakes that can't be easily differentiated through?


To my understanding, the idea is to stop training as soon as the discriminator has been 'fooled', i.e. its performance at telling fake and real images apart is no better than random guessing. So, in a sense, you always keep making better fake images, but not necessarily better discriminators (unless you botch the training or the losses, obviously).
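A rough sketch of what that stopping criterion could look like in code (assuming a discriminator D that outputs real/fake logits, like the toy one sketched earlier): stop once held-out accuracy is within noise of a coin flip.

  import torch

  @torch.no_grad()
  def discriminator_accuracy(D, real_batch, fake_batch):
      """Fraction of held-out images the discriminator labels correctly."""
      real_correct = (torch.sigmoid(D(real_batch)) > 0.5).float().mean()
      fake_correct = (torch.sigmoid(D(fake_batch)) <= 0.5).float().mean()
      return (0.5 * (real_correct + fake_correct)).item()

  def discriminator_is_fooled(D, real_batch, fake_batch, tol=0.02):
      # "Fooled" = accuracy indistinguishable from random guessing (0.5).
      return abs(discriminator_accuracy(D, real_batch, fake_batch) - 0.5) < tol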


Another reason (besides the points about GANs etc. that have already been mentioned) is that it's easier to fool detection systems by just making up more data: there's a heavy imbalance going on here, and that doesn't even account for issues with real images, such as artefacts, damage, and so on.


There's more money in fakery than truthery.


GANs are detectors too, so at any point in time they will be right on par with the state-of-the-art fakes, but no better.


One thing that comes to mind is the significant cost asymmetry of a false result or error.


It will be really funny if this image turns out to be a real photo.


Why would they reveal this?



