
Stopping deepfake news with an AI algorithm that can tell when a face doesnt fit - rustoo
https://spie.org/news/stopping-deepfake-news-with-an-ai-algorithm-that-can-tell-when-a-face-doesnt-fit
======
yes_man
The problem will increasingly be "whose algorithm do we believe?" The internet
has revealed that people mostly believe what they want to. We have seen that a
large subset of people believe Bill Gates is behind the pandemic. Why would
the masses somehow be more rational in picking the most rigorous and
objective neural network to recognize deepfakes than they are in making sense
of the world in general? In the end we will have multiple competing entities
claiming to have the best deepfake recognition, all with their own agenda.

~~~
nothis
For me, one reason not to freak out about this is Photoshop.

It's been around for decades. _Perfect_ photographic fakes (literally called
"photoshops") are possible. Yet is there a great crisis of fake photographs
taking over the news? Not really. Actual "fake news" barely even bothers with
Photoshop (and those for whom it works don't care about the quality). It's
still fairly easy to get context, and at some point you just have to trust a
news outlet, as you always had to for text-based news.

All we see is a trend toward making it easier for people to subscribe to a
bubble of "news" that fits their world view. The quality of the fakery barely
factors into this.

~~~
Natsu
Maybe, but I've found that a lot of viral images are doctored in one way or
another.

Here are some links to recent examples of doctored photos. I know I've seen at
least the deceptive image of cops "pointing a gun at children" (when they
actually weren't) on the front page of Reddit, so it's not like manipulated
images have no effect:

[https://www.hackerfactor.com/blog/index.php?/archives/884-Pr...](https://www.hackerfactor.com/blog/index.php?/archives/884-Protests,-Propaganda,-and-Photos.html)

[http://hackerfactor.com/blog/index.php?/archives/891-Count-o...](http://hackerfactor.com/blog/index.php?/archives/891-Count-on-it.html)

That said, remember that not all alterations are digital:

[https://www.hackerfactor.com/blog/index.php?/archives/590-Un...](https://www.hackerfactor.com/blog/index.php?/archives/590-Under-the-Influence.html)

~~~
dTal
That blog is frightening and informative - it gives some idea of the sheer
scale of misinformation we have to cope with now.

------
turblety
Now we can use this new AI to train another AI that can defeat it. And so
continues the great cat-and-mouse chase.

I don't think this is a problem that is ever going to be solved. Deep fakes
will become more and more popular, and harder to detect.

~~~
amelius
Antibiotic resistance is also a cat-and-mouse game. It's still useful to keep
playing the game, though.

~~~
sildur
The difference is that all the bacteria can become resistant to the antibiotic
soon after it is created.

~~~
dijksterhuis
As can deepfakes. All it takes is an additional step to optimise wrt the model
that tries to catch it.

~~~
sildur
Yeah, I was dismissing the analogy with antibiotics because it usually takes
quite some time between an antibiotic being created and germs becoming
resistant. With deepfakes, though, the arms race is almost instantaneous: the
moment something appears that can tell deepfakes apart, people can train
deepfakes against it.

~~~
amelius
Perhaps the deepfake checking should be a third-party service? Then you can
only check so many deepfakes per day, limiting these attacks (i.e., you can't
realistically put the checker inside a training loop). Just an idea ...
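
A per-client quota like that can be sketched in a few lines. Everything here
(the class, the limits) is hypothetical, not a real service:

```python
import time

class DetectionQuota:
    """Toy per-client quota for a hypothetical deepfake-checking API.

    Allows at most `limit` checks per `window` seconds per client, which
    makes it impractical to put the checker inside a training loop.
    """

    def __init__(self, limit=10, window=86400):
        self.limit = limit
        self.window = window
        self.calls = {}  # client_id -> list of request timestamps

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        # Keep only requests still inside the sliding window.
        recent = [t for t in self.calls.get(client_id, []) if now - t < self.window]
        if len(recent) >= self.limit:
            self.calls[client_id] = recent
            return False
        recent.append(now)
        self.calls[client_id] = recent
        return True
```

Of course, an attacker could still spread queries across many fake accounts,
so this only raises the cost rather than eliminating the attack.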

------
trott
Looking at the diagram, this appears to be just an LSTM slapped on top of a
CNN. If so, I'm failing to see any novelty in this approach. RNNs on top of
CNNs have been used before, including for deepfake detection. See for example:
[https://arxiv.org/abs/1905.00582](https://arxiv.org/abs/1905.00582)
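
For the unfamiliar, the generic CNN-then-RNN recipe looks roughly like this
toy sketch. These are numpy stand-ins for the CNN and LSTM with random
weights, not anything from the actual paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frame, W):
    """Stand-in for a per-frame CNN: one linear map plus ReLU."""
    return np.maximum(frame.reshape(-1) @ W, 0.0)

def rnn_over_frames(feats, Wh, Wx):
    """Stand-in for the LSTM: a plain tanh RNN over the frame features."""
    h = np.zeros(Wh.shape[0])
    for x in feats:
        h = np.tanh(Wh @ h + Wx @ x)
    return h

def fake_score(video, W, Wh, Wx, w_out):
    """Sigmoid score: 'probability' the clip is a deepfake."""
    feats = [frame_features(f, W) for f in video]
    h = rnn_over_frames(feats, Wh, Wx)
    return 1.0 / (1.0 + np.exp(-w_out @ h))

# A 16-frame, 8x8 grayscale "video" and random (untrained) weights.
video = rng.normal(size=(16, 8, 8))
W = rng.normal(size=(64, 32)) * 0.1
Wh = rng.normal(size=(16, 16)) * 0.1
Wx = rng.normal(size=(16, 32)) * 0.1
w_out = rng.normal(size=16)
score = fake_score(video, W, Wh, Wx, w_out)
```

The point of the recurrent part is to catch temporal inconsistencies between
frames, which purely per-frame detectors miss.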

The recent DeepFake Detection Challenge threw a lot of manpower at the problem
earlier this year, BTW.

------
formerly_proven
Isn't this the idea of getting better results using the adversarial network
approach? So this would be inherently ill-suited to _stopping_ deepfake news,
yeah?

~~~
DarthGhandi
it's great at making them better, yes

~~~
giancarlostoro
If the people trying to stop deepfakes can do it, so can the people trying to
produce them. The best we can do is some way of digitally signing videos so
software can display whether they're authentic, but then they'll just make
fake YouTube-style sites that say it's authentic. The problem will go on and
on... Fake news will always spread for one reason or another.

~~~
the8472
> but then they just make fake youtube style sites that say its authentic.

Even simpler, someone will build an optical setup that sends deepfake pixels
onto a signing image sensor.

------
29athrowaway
Deepfakes are generated by a generative adversarial network.

There are two networks: a generator, and a discriminator.

The generator generates a result, the discriminator evaluates that result.

This AI that detects fake faces could be used to train the discriminator so
that the GAN generates even better results.
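
To make the generator/discriminator loop concrete, here's a toy 1-D GAN
sketch (nothing to do with faces; every number below is illustrative). The
generator learns to produce samples near the real-data mean while the
discriminator learns to tell real from generated:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

a, b = 1.0, 0.0   # generator: g(z) = a*z + b
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(300):
    real = rng.normal(4.0, 1.0, size=64)   # "real" data centered at 4
    z = rng.normal(size=64)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake), i.e. fool the discriminator.
    d_fake = sigmoid(w * (a * z + b) + c)
    b += lr * np.mean((1 - d_fake) * w)
    a += lr * np.mean((1 - d_fake) * w * z)
```

After training, the generator's offset `b` has drifted from 0 toward the real
mean of 4 - exactly the dynamic the parent comment describes: a better
discriminator drags the generator along with it.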

------
chiefalchemist
AI novice here. It would seem to me that the detection algorithm can be
repurposed to make the original less detectable. A recursion that never
ends; an advantage quickly becomes a disadvantage.

Worse, all information - even facts and truth - can be subverted. What
happens when there is no trust? Is this not a road to the New Darker Ages?

~~~
1f60c
That's what I'm thinking. I don't want to diminish the value of this research,
but this cat-and-mouse game is like a GAN[0] with extra steps.

[0]: [https://en.wikipedia.org/wiki/Generative_adversarial_network](https://en.wikipedia.org/wiki/Generative_adversarial_network)

------
dependenttypes
There is a simpler alternative, imo: consider any news story that does not
list all of its sources to be fake news. Treat news as if it were a
scientific paper.

------
neutrallinked
We should also look toward simpler alternatives. (For example, some days back
I read about a profile on a social media platform with a "generated" face.)

1) To prevent "generated" profiles with fake faces, image platforms could
request two profile pictures instead of one - producing two consistent photos
of the same fake face at high resolution is difficult.

2) For deepfake videos, maybe the idea is to fight the "video" part rather
than the "deep fake" part. By that I mean "signed" content: ABC News should
"sign" the videos it publishes, and so should other publishing houses, so
that forged voice or other "faked" content can't be passed off as coming from
them.

------
hairofadog
Can god create a fake so deep that they themselves can’t detect it?

~~~
neutrallinked
DNA will set them apart

------
guscost
Cool tech, but most people will have tuned out what we know as "the news" (or
become jaded to its purpose beyond entertainment) before a tool like this is
necessary.

------
mbrumlow
This will just be like antivirus. People will start to train their algos to
fool whatever algo is currently being used. It will be a never-ending fight.

~~~
3wolf
That's GAN training in a nutshell.

------
peterthehacker
Are deepfake face swaps a real problem yet? I can’t recall any major
controversies in recent history that were caused by a deepfake face swap.

> This technique can be used to create compromising videos of virtually
> anyone, including celebrities, politicians, and corporate public figures.

I’ve read a lot of concerned comments like this but I haven’t read about any
real world examples of a controversy caused by a deepfake.

------
Erlich_Bachman
Hey, we all know that this will not be the end of deepfakes. It will just be
another channel of information to think about. Now we have to care about
whether this algorithm is correct, or whether it has been manipulated by the
other political party to claim that the others' picture is fake...

But it had to be created at some point, it had to exist. This is just an
inevitable next step of the progress.

------
olivermarks
The challenge with algos like this is that they could be used to claim that
events which actually did happen didn't. As an example, credible/convincing
footage of Jeffrey Epstein by a pool in Paraguay last week could be
'identified as a deep fake' and discredited, despite other supporting facts
and information that lend credence to its veracity.

------
vffhfhf
People believe what they want to believe.

Just a basic search and 5 minutes on Wikipedia about a topic will make you
better informed than most people who spout nonsense.

But you've got to have the right mindset.

The human brain hates change. You have to force the bitch to take in new
information, process it, and prepare it for further info on the topic.

------
bsaul
Maybe advanced DRM will be a way forward? Have the camera fingerprint/sign
the video, then have every video editor fingerprint & sign the changes
performed, and send everything to a ledger? It's the only way to make sure a
video actually comes from the real world...
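
A sketch of what that could look like, with a big caveat: a real camera would
sign with an asymmetric key in secure hardware, so the shared-key HMAC here
is only a stand-in, and every name below is made up:

```python
import hashlib
import hmac

# Hypothetical key baked into the camera's secure element.
CAMERA_KEY = b"key-in-secure-hardware"

def sign_footage(video_bytes: bytes, key: bytes = CAMERA_KEY) -> str:
    """'Sign' the SHA-256 digest of the raw footage."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_footage(video_bytes: bytes, signature: str,
                   key: bytes = CAMERA_KEY) -> bool:
    return hmac.compare_digest(sign_footage(video_bytes, key), signature)

def append_edit(ledger: list, edited_bytes: bytes, editor_key: bytes) -> list:
    """Chain each edit to the previous ledger entry (git-style), so the
    full editing history of a clip can be audited."""
    prev = ledger[-1][0] if ledger else ""
    entry_hash = hashlib.sha256(prev.encode() + edited_bytes).hexdigest()
    entry_sig = hmac.new(editor_key, entry_hash.encode(),
                         hashlib.sha256).hexdigest()
    ledger.append((entry_hash, entry_sig))
    return ledger
```

This doesn't solve the analog hole the8472 mentions above (pointing a signing
sensor at a deepfake), but it would at least make post-capture tampering
detectable.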

~~~
ed25519FUUU
We don’t need this for photographs. Why do we need it for video?

------
harryf
Seems like an AI arms race has just begun

~~~
jcims
It's just a slobbering infant trying to roll off the bed at this point.

The battleground of what is real and fake will very quickly move into
'superhuman' realms of sensitivity, leaving us meaty minions as spectators
trying to figure out which AI to trust.

------
varbhat
What if it gives false positives? What if it tags non-deepfake news as
deepfake news?

------
nine_k
Now we need to stop believing our eyes _twice:_ first when we see a natural-
looking picture but know it is a deepfake, and second when we see a natural-
looking picture we believe is not a deepfake, but the computer tells us it is.

------
alkonaut
This seems like it’s “adversarial” to deep fakes. I wonder what that can be
used for.

------
JimiofEden
This seems like test driven development, but applied on a much larger scale.

------
m0zg
If this is differentiable (which it seems like it should be), it could be used
adversarially to create better deepfakes, BTW.
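
As a toy illustration of that point, here's an FGSM-style step against a
stand-in logistic "detector". The real model and its gradients would be far
more complex; everything here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def detector(x, w, b):
    """Toy differentiable 'deepfake score' in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def adversarial_step(x, w, b, eps=0.1):
    """FGSM-style step: nudge the input against the score gradient
    so the detector rates it as less fake."""
    s = detector(x, w, b)
    grad = s * (1.0 - s) * w          # d(score)/d(x) for the logistic model
    return x - eps * np.sign(grad)

w = rng.normal(size=32)               # detector weights
b = 0.0
x = rng.normal(size=32)               # the "deepfake" input
x_adv = adversarial_step(x, w, b)
```

Iterating such steps against a white-box detector is exactly how it could be
folded into a deepfake training loop.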

------
arielbaz
see also "Training a deep learning model for deepfake detection" (
[https://news.ycombinator.com/item?id=22433711](https://news.ycombinator.com/item?id=22433711)
)

------
wu-tsy
Can be used as an algorithm for a new discriminator in the GAN.

------
danieldrehmer
as Zizek would say, "And then you use this as the basis for a GAN that
optimises for faces that do fit, making better deepfakes and so on and so on"

------
codecamper
This is getting stupid. People.. aging, cancer, Alzheimer's still exist. Maybe
time to reprioritize & organize efforts?

~~~
Nasrudith
Those are completely different deep specialties, and they are best funded in
parallel anyway given diminishing returns. Complaining about something that
irrelevant to the capabilities involved is like saying brain surgeons suck at
inventing green energy.

------
ThomPete
this is the antivirus war all over again

------
adfhnionio
I find all this concern a little bit overblown. Yes, it is a problem that we
can make extremely convincing fakes, but we've had fakes that can fool non-
experts for a very long time. The Soviet Union doctored a great many
photographs in a way invisible to me (examples:
[https://en.wikipedia.org/wiki/Censorship_of_images_in_the_So...](https://en.wikipedia.org/wiki/Censorship_of_images_in_the_Soviet_Union)).
Why are we more concerned about this fakery than about airbrushing?

The solution is the same as it's always been: stick to trustworthy sources and
insist that all evidence is traced and corroborated. It remains easy to learn
the truth as long as you make a good faith effort to do so. In the worst case
we can just stop trusting photographs altogether. We got by just fine before
the camera was invented; we can do fine after it becomes obsolete.

~~~
mooneater
Because good fake video is now possible for the first time - until now, good
video was nearly impossible to fake. Because people believe certain leaders
who are willing to use false evidence. Because the populace is not prepared
to be that discerning.

------
Ijumfs
The deepfakes which support the officially blessed narrative will not receive
scrutiny, while authentic videos will be "proved" to be deep fakes by some
black box machination.

But we've known not to trust anything we didn't personally see ourselves for
many decades.

