
High-tech deception of ‘deepfake’ videos - creaghpatr
https://www.apnews.com/21fa207a1254401197fd1e0d7ecd14cb/I-never-said-that!-High-tech-deception-of-%27deepfake%27-videos
======
SmooL
I have to agree that this will be inevitable. Fake videos _will_ be created to
spread misinformation.

I bring you no solutions. Our best programmatic tool for this kind of video
analysis is currently the exact same technology that enables the fakes in the
first place. Furthermore, any neural network that can detect fake videos will
only be used adversarially to train a better deepfake creator!

The only solution I see is the camera somehow cryptographically signing the
video, and letting users easily check whether the signature is valid. However,
I don't think this is feasible, given the amount of cooperation we'd need
between camera makers and video distribution platforms, especially since
helping isn't really in those businesses' direct interest.
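A minimal sketch of what that signing scheme could look like, with an HMAC standing in for the asymmetric signature a real camera would compute in secure hardware (the device key and footage bytes here are entirely made up):

```python
import hashlib
import hmac

# Hypothetical per-device secret. A real camera would hold an asymmetric
# private key in tamper-resistant hardware and publish the matching public
# key; an HMAC is used here only as a stdlib-friendly stand-in.
DEVICE_KEY = b"secret-burned-into-camera-hardware"

def sign_video(video_bytes: bytes) -> str:
    """Return a hex tag binding the video bytes to the device key."""
    return hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_video(video_bytes), tag)

original = b"\x00\x01fake frame data..."
tag = sign_video(original)
assert verify_video(original, tag)             # untouched footage verifies
assert not verify_video(original + b"x", tag)  # any edit invalidates the tag
```

The hard part isn't the crypto, it's exactly the cooperation problem above: platforms would have to preserve the original bytes (no re-encoding) and surface the verification result to viewers.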

The world's going to start getting real surreal.

~~~
zzzcpan
To make it feasible you don't need much: just journalists signing videos and
uploading them somewhere, rather than publishing anything unsigned. That would
make every unsigned video easy to call out, just not with crypto.

~~~
binarymax
That's fine for journalists, but does nothing to prevent fakes captured by the
populace. A good deal of our journalism now comes from on-scene witnesses. What
can we do about fakes posted on social media purporting to be the real capture
of some scandal or event?

~~~
fooey
I think what we'll see is something less like the Jordan Peele version of
Obama based on a public clip, and something more like a faked version of the
Romney 47% video.

Start with something that's naturally unfocused and exists only as a one-off
original, and it would be extremely hard to disprove without comparing it to
other video of the same event.

------
mjw_byrne
Deepfakes are surely no more a "problem" than photoshopped still images?

Everyone knows that a still image can be altered to (convincingly) depict
something that never happened. So, a still image is not considered good
evidence of anything unless it has a verifiable provenance, chain of custody
etc. Presumably that wasn't always the case, but now that photo editing tech
is universally accessible, we all know we can't trust a still image unless its
authenticity can be corroborated somehow.

The deepfake tech merely puts video into the same category of
(un)trustworthiness. The potential for abuse depends on people's trust that
video is always authentic. So the "solution" is simply to make it as widely-
known as possible that video is now easy to fake.

~~~
roywiggins
The volume alone might be a problem. It takes some Photoshop skills to pull
off an acceptable fake. If there's a turnkey website that can spit them out en
masse, that's something different from having to actually hire a human to do
it by hand.

Previously it took at least one CGI artist to convincingly fake video,
probably several. This inherently limits how much of a problem it can be,
because it doesn't scale. AI scales.

I mean, either deep learning AI automation is going to be important and
useful, or it isn't. If replacing all drivers with robots is important, then
automated turnkey fakery has the potential to disrupt something, doesn't it?

~~~
mjw_byrne
I agree, but I think my original point covers this - if video proves to be
very easy to fake at scale (e.g. using your example of a turnkey website) then
that should result in equally large-scale scepticism about video as evidence.
The internet may drown in annoying deepfake-based memes, but as long as
everyone knows it's all fake, there seems little potential for the kind of
serious harm people are worried about (reputational damage, election meddling,
etc.).

~~~
save_ferris
> ...as long as everyone knows it's all fake,

There's a shockingly large number of people who aren't even convinced that
election meddling took place in 2016, or that propaganda was disseminated to
many people on social media, despite unanimity from US intelligence.

If we as a society can't all agree that these things even happened, how can we
hope to properly educate people on the existence/prevalence of fake video? It
seems like common sense to many people, but we can't assume that everyone will
accept these truths based on the political rhetoric we've seen in the last few
years.

~~~
mjw_byrne
US intelligence doesn't enjoy a reputation for trustworthiness, to put it
mildly. But if people can just go online, upload a video and get back a
"hilarious" deepfake, that's an irrefutable demonstration that video-meddling
is easy and widely-available, isn't it? They can literally see for themselves
how accessible the faking tech is.

~~~
skybrian
_Text_ can be convincingly faked at scale. Most meme images are widely reused
photos. We do have Snopes and other fact-checkers, but it's only a partial
solution.

And yet, fake news does seem to be a problem. Checking provenance of facts and
statistics is often skipped, even by journalists, let alone on Twitter.

This suggests that people will have lots of fun resharing deep fakes and won't
care much that they're fake, as long as they express some emotion that they
agree with.

Give people a fake that they want to believe in, and it's going to be very
hard to talk them out of it.

------
craftyguy
I'd be more concerned with US officials using this technology to indict
innocent people who disagree with them. Being able to conjure up a confession
for anything by anyone is extremely powerful, given that the alternatives to
accomplish the same thing today are basically apprehension and
torture/threats.

~~~
creaghpatr
>Being able to conjure up a confession for anything by anyone is extremely
powerful

Being able to dismiss credible video evidence as 'deepfake' is equally
powerful.

In any case, it would be extremely risky to hang a case on a single confession
video that the accused claims is fake. Given the current pattern of
prosecution, the DOJ would never bring that case; it would be embarrassing in
the public eye and probably overturned on appeal, if it even made it to trial.

~~~
prepend
The bigger risk would be PR-type situations, where there is no court and no
prosecution. For example, the Clippers were forcibly sold after the owner,
Donald Sterling [0], was outed as a racist. A few fake videos like that would
have high impact and wouldn't need to go through court.

[0]
[https://en.wikipedia.org/wiki/Donald_Sterling](https://en.wikipedia.org/wiki/Donald_Sterling)

~~~
creaghpatr
I agree, but would-be fakers should be deterred by the airtight defamation
case that would cost them millions. That is, of course, if the plaintiffs can
prove the video is, in fact, fake.

~~~
prepend
I think it’s actually hard to launch such a case. I mean, take Sterling. The
tape was illegally leaked, and was there a defamation suit against the leaker?
California requires dual consent for recording [0], and no criminal or civil
charges have been brought. Not the same situation, but it shows there’s no
repercussion in this type of case.

The issue is that a video will be really hard to prove fake, even if an
adversary didn’t just anonymously leak the material. In a PR situation you
don’t need authenticity, just confusion.

I expect that you could use this pretty effectively for stock scams (e.g., a
video posted on Friday of Bezos announcing an electric car, tanking Tesla).
Or, even more sinister, fake videos of things that may be true (video of Musk
assaulting some woman).

What I’m actually scanning for are small markets with $10-100k upside
potential, like a fake video of Tom Cruise that impacts the Mission:
Impossible opening and can be shorted through mediapredict or other markets
that aren’t SEC regulated.

Not to do it of course, but to see trends in how these videos are developed.

[0] [http://www.dmlp.org/legal-guide/california-recording-
law](http://www.dmlp.org/legal-guide/california-recording-law)

------
evan_
Any "solution" to this that relies on crypto/signing will fail. Propagandists
will just make their fake videos and then record them off of a computer
monitor and claim that they're a whistleblower "leaking" the video from an
internal system they can't otherwise access.

I don't really think reputation has a better chance of solving this either,
people are all-too-happy to believe anything that confirms their personal
beliefs and ignore whatever contradicts them regardless of source.

It's interesting that a lot of the comments here look at how to prove a
certain video is real (and assume that people will accept that any video
without a certain trait is fake) rather than proving directly that a certain
video is fake.

------
avivo
It's not a totally lost cause — I wrote a piece on what we can do about this:
[https://www.washingtonpost.com/news/theworldpost/wp/2018/02/...](https://www.washingtonpost.com/news/theworldpost/wp/2018/02/22/digital-
reality/)

It's a first step, and my thinking has also evolved since—expect more updates
in this space soon.

~~~
majos
From this article:

> AI researchers and platform technologists need similar in-house or third-
> party societal impact review boards to help them evaluate the potential
> unintended consequences of their work.

As a machine learning researcher myself (albeit one who is probably unlikely
to run afoul of one of these boards), this suggestion for research gives me
pause. In the context of deepfakes, are you suggesting that it would have been
more prudent to stop the research that made it possible before it had a chance
to do so? This seems like a good way to ensure that less scrupulous parties
harness destructive technologies before more scrupulous parties (the kind
who'd adopt review boards).

I imagine this debate to some extent has already played out with nuclear
weapons and bioweapons. I'm much less familiar with these two spheres, but as
I recall research continued in both.

~~~
avivo
Lots of details to be worked out, but yes there are models in other fields.
You bring up a real concern and there are ways to address it.

I'm not suggesting stopping research on this. But the way in which that
research is communicated, what its focus is, and the way in which particular
sorts of replication are enabled or discouraged all have an impact.

An example that the HN crowd likely knows better is vulnerability/exploit
disclosure. There are good ways to do it and harmful ways to do it.

You can perhaps think of deepfakes as an exploit for the human
senses/cognition that handle authentication.

------
segmondy
Anyone know where deepfakes moved to? It was great seeing what was going on in
/r/deepfakes but due to the "offensive nature" of some of the videos that
subreddit was killed. Anyone know the best place to watch and observe the
latest happenings in the scene?

~~~
k__
Good question, I was just waiting for the all-Nicolas-Cage version of LotR.

~~~
prepend
It’s a bit of a pain to follow progress without the subreddit, but progress is
still being made: [https://youtu.be/QhxTTshL3b0](https://youtu.be/QhxTTshL3b0)

------
drivingmenuts
When this happens (not if), I expect it will play right into the current
administration's habit of dismissing anything it disagrees with as fake news.
For the administration that follows? Who knows? I only have evidence for right
now.

I do think most of it will come from people lower down the political food
chain (the meme creators, etc) who seek to convince others of their POV out of
desperation and/or maliciousness.

------
zkascak
As a filmmaker I am both in awe and horrified with this.

Some of these techniques can be used to make fixes in motion pictures where it
would be far too expensive to reshoot.

But there is the flip side, where this technology can be used to falsify a
video, potentially fabricating a confession or changing what a person said for
nefarious purposes.

Welcome to the world of what was once thought to be science fiction.

------
gnicholas
Will FB, Twitter, YouTube, and other social networks create tools to flag
deepfakes? It wouldn't solve the problem, but at least it could blunt the
spread.

It could also raise awareness of the issue, so that if people see a video that
seems "too good to be true" on a different platform, they would know that it
might not be real.

~~~
mizzack
Inevitably you'll hear cries of censorship with varying degrees of veracity.

~~~
gnicholas
I would think that there'd be enough deepfakes on both sides that everyone
would recognize that iff there are tools that accurately detect deepfakes, it
would be useful to have them enabled on social networks. They don't need to
censor the videos — just flag them as having deepfake characteristics.

~~~
mizzack
Like when Facebook (briefly) tagged articles from certain sources as being
disreputable and it led to an _increase_ in clicks to those articles?

People will believe what they want to believe. We're also seeing that they
don't like mega corps acting as arbiters of truth. So, I'm not sure what, if
any, effect this moderation/curation effort would have.

~~~
gnicholas
I think what we really need is an embarrassing term for "someone who was
fooled by a deepfake" (like "catfish"), so that there will be a social stigma
that incentivizes people to not be tricked by them.

I don't currently have any better ideas than "getting deepfaked". Suggestions?

------
noja
The only solution to this is signed live lifecasting to a blockchain for every
person on earth. I'm only half kidding.

~~~
rohit2412
How about some sort of steganographic signing with asymmetric cryptography?
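A toy sketch of that idea: embed an authentication tag in the least significant bits of the pixels themselves, so the tag travels with the image. The key, the 256-pixel frame, and the naive LSB scheme are all illustrative (and an HMAC stands in for a real asymmetric signature); an actual design would also need to survive re-encoding, which this doesn't.

```python
import hashlib
import hmac

KEY = b"hypothetical-publisher-key"  # stand-in for a real signing key

def embed_tag(pixels: list[int]) -> list[int]:
    """Clear the LSBs, tag the cover content, then write the 256 tag bits.

    Expects at least 256 pixel values in 0..255.
    """
    cover = [p & ~1 for p in pixels]
    tag = hmac.new(KEY, bytes(cover), hashlib.sha256).digest()
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    return [p | b for p, b in zip(cover, bits)]

def check_tag(pixels: list[int]) -> bool:
    """Re-derive the tag from the cover bits and compare with the stored LSBs."""
    cover = [p & ~1 for p in pixels]
    tag = hmac.new(KEY, bytes(cover), hashlib.sha256).digest()
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    return [p & 1 for p in pixels] == bits

frame = list(range(256))      # pretend frame: 256 grayscale pixels
stamped = embed_tag(frame)
assert check_tag(stamped)     # intact frame authenticates
stamped[0] ^= 2               # flip a non-LSB bit: content changed
assert not check_tag(stamped)
```

The catch is the same as with any watermark: lossy compression on upload scrambles the LSBs, so platforms would have to pass the stamped bytes through untouched.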

------
baxtr
A crucial thing will be to detect these videos as they appear, just like it is
possible to detect Photoshopped material.

------
mgraybosch
You should never trust digital media.

You should be extremely selective about what analog media you trust.

If you weren't there to see it happen yourself, all you've got to go on is
somebody else's perception. Be careful who you trust.

~~~
zkascak
As a filmmaker I feel that completely distrusting a medium just because it is
digital might be a little harsh.

I think that we may want to consider using cryptographic signature methods as
a way to mark materials as coming from the source they claim to be from.
Though this is not a perfect solution.

I still work with analog film, though it is becoming more and more cost-
prohibitive every day. Analog can just as easily be altered. If you looked at
a silver gelatin print made by a talented printmaker, you probably would never
be able to detect the changes made to the image during the printing process.
Is the resulting image a fake, or is it just an artist's vision?

