
What a deepfake video of Mark Zuckerberg reveals about how we're manipulated - colinprince
https://www.cbc.ca/radio/day6/what-a-deepfake-video-of-mark-zuckerberg-reveals-about-how-we-re-manipulated-online-1.5174405
======
lifeisstillgood
So this is the future - we can't trust anything and so we trust nothing but
our pre-existing biases

Bugger that.

This is a tech-created problem; it can be tech-solved.

1\. Photo manipulation leaves traces in the pixels, and this must be even more
true of deepfake video.

2\. Cameras can hash / sign every other frame or similar - we can build a
chain from hardware to image that can be followed - and if an image or video
does not have a CA cert chain it should be as mistrusted as a web site
without TLS

3\. Err - there must be other ways.
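
Point 2 in a toy sketch (Python; HMAC with a shared secret stands in for the real asymmetric, hardware-backed signatures a camera would actually use, and `CAMERA_KEY`, `sign_frames`, and `verify_frames` are all invented names for illustration):

```python
import hashlib
import hmac
import os

# Hypothetical per-camera secret. A real design would use an asymmetric
# keypair burned into a secure element at manufacture, not a shared secret.
CAMERA_KEY = os.urandom(32)

def sign_frames(frames):
    """Chain-hash each frame with the previous signature, so frames
    can't be dropped, reordered, or swapped out individually."""
    prev = b"\x00" * 32
    signatures = []
    for frame in frames:
        digest = hashlib.sha256(prev + frame).digest()
        sig = hmac.new(CAMERA_KEY, digest, hashlib.sha256).digest()
        signatures.append(sig)
        prev = sig
    return signatures

def verify_frames(frames, signatures, key):
    """Re-derive the chain and check every signature in order."""
    prev = b"\x00" * 32
    for frame, sig in zip(frames, signatures):
        digest = hashlib.sha256(prev + frame).digest()
        expected = hmac.new(key, digest, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, sig):
            return False
        prev = sig
    return True

frames = [b"frame-0", b"frame-1", b"frame-2"]
sigs = sign_frames(frames)
print(verify_frames(frames, sigs, CAMERA_KEY))                        # True
print(verify_frames([b"tampered", b"frame-1", b"frame-2"], sigs,
                    CAMERA_KEY))                                      # False
```

Because each signature feeds into the next, editing any one frame breaks every signature after it, which is roughly the property you'd want from a hardware-to-image chain.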

~~~
Cpoll
> 2\. Cameras can hash / sign every other frame or similar - we can build a
> chain from hardware to image that can be followed - and if an image or video
> does not have a CA cert chain it should be as mistrusted as a web site
> without TLS

Why would this not be easily beatable? Hardware hacking aside, why not the
naive approach of filming a monitor playing a deepfake?

~~~
dylan-m
The important part there would be chain of trust. You should be able to see,
given a video, "oh, that was recorded by some guy who lives in a basement; I
can figure out he was never even /near/ Washington" (or "that doesn't have a
signature. What is this, 2019?!") and treat it accordingly. It means you need
to give up your own identity if you want to publicly verify that a video
you've captured is yours, but that's kind of the point. Anonymity is great,
but it isn't a source of trustworthy information. Never has been, never will
be, and now we have the math to say it.

So basically GPG for cameras, but with a hip startup name? Maybe any camera
would ship with its own private key out of the box, and there'd need to be
some kind of key rotation as well as a way to push your public keys …
somewhere(s). I guess an attack here is somebody could lift a private key from
someone else's camera and use that key to sign their fakes, but at least
that's a reasonably tangible sort of attack that people know how to protect
against.
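
A registry-lookup sketch of that chain of trust (Python again, with HMAC standing in for public-key signature verification; `KEY_REGISTRY`, `register_camera`, and `attribute_video` are hypothetical names, not any real system's API):

```python
import hashlib
import hmac

# Hypothetical public registry mapping key fingerprints to owners.
# In practice this would be CA-style infrastructure, not a dict.
KEY_REGISTRY = {}

def register_camera(owner, key):
    """Publish a camera key under its owner's (non-anonymous) identity."""
    fingerprint = hashlib.sha256(key).hexdigest()[:16]
    KEY_REGISTRY[fingerprint] = owner
    return fingerprint

def attribute_video(video, sig, fingerprint, key):
    """Return the registered owner if the signature checks out,
    else None -- the 'no signature / unknown camera' case above."""
    owner = KEY_REGISTRY.get(fingerprint)
    if owner is None:
        return None
    expected = hmac.new(key, video, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None
    return owner

key = b"camera-secret"
fp = register_camera("dylan-m", key)
sig = hmac.new(key, b"video-bytes", hashlib.sha256).digest()
print(attribute_video(b"video-bytes", sig, fp, key))   # dylan-m
print(attribute_video(b"other-bytes", sig, fp, key))   # None
```

The design choice this illustrates: attribution only works because registration ties a key to a public identity, which is exactly the anonymity trade-off described above.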

(And of course we'll never stop people from watching 24/7 news every minute of
the day, so we're all doomed anyway).

~~~
Cpoll
But at most you can prove which camera shot a picture, and its original
owner... Kind of, because there's nothing stopping a photo agency from signing
the pictures after they're taken, in order to anonymize their contributors.

So, if I want to fake a picture and make it a bit more convincing, I only need
to get my hands on a few cameras from pawn shops in Washington. I just can't
say "this is a Reuters photo" or "this is a banned commercial" or whatever.

------
jsonne
2 big impacts I think deepfakes will have.

1\. Internet mobs will be de-clawed, and any random person on the street who
may have had their private information used against them now has instant
plausible deniability. This is a big step towards valuing due process and
personal privacy, as digging into people's personal histories will become far
less valuable than before if it can simply be handwaved away.

2\. This could be a step back from the balkanization of the media. If anything
can be faked, then personal reputation and organizational reputation will
become far more valuable when it comes to the spread of stories.

~~~
Reedx
The problem is that mobs don't care about what's real. They're incentivized to
believe whatever reinforces their existing narrative.

------
segmondy
Most people think deepfakes will be something massive like this. I suspect
it's going to be very subtle, with a few words and phrases changed. Imagine a
politician saying, "I'm for low taxes," and it's changed to "I'm for high
taxes." Just one changed word that flips the meaning and has a big enough impact.

~~~
mlb_hn
Or imagine it at scale. It seems there are two uses of deepfake which have
separate effects:

1) High profile low volume content that trends (article)

2) Low profile high volume content spam (your point)

The second one seems like a much harder problem than the first (but both
matter). Everyone already gets dozens of spam/phishing emails and links a day
and not much has been done about it. I have no idea what the impact of
deepfakes on the latter even looks like.

------
jedberg
The 2020 election is going to be interesting with deepfake tech getting better
and better. Either everyone will be fooled, or everyone will quickly doubt any
video ever presented. Either way, the candidates will need to find a new way
to spread their propaganda (I mean campaign messaging) that people trust, if
they stop trusting video.

~~~
jsonne
When everyone can scream fake news we'll have to take politicians at their
word. While President Trump was the first, he won't be the last to use this
tactic, and it may be a step towards debate turning to policy again rather
than personal background gotchas etc.

~~~
jedberg
> and it may be a step towards debate turning to policy again rather than
> personal background gotchas etc.

That would be a nice change! I think that's also the most optimistic view. A
pessimist would say that the end state is that no one trusts anything and we
get more fractured and partisan.

~~~
jsonne
Realistically it'll probably be a bit of column A and a bit of column B haha

------
wuliwong
I kind of suspect that fake videos will not be much more of an issue than
photoshopped images. I don't recall with certainty, but I believe there was
similar fear surrounding the ease with which photographs could be manipulated
in the digital age. Some combination of growing general awareness of how
easily photos can be altered, along with continued advancements in the
ability to determine the authenticity of photographs, has rendered faked
photographs relatively harmless. In fact, I think the feedback loop has
resulted in there not being a huge market for faking images, at least for
political gain. (I am not referring to photoshopping images of models and
things to just make them look more attractive.) Also, although it is very
easy to doctor an image, in general people have still managed to keep a level
of intuition that allows them not to suspect every image they see of being fake.

------
lurquer
The Rules of Evidence in Federal Court (and the States) make it fairly easy to
introduce photo/video/audio. This needs to change. The rules regarding
admissibility of this type of evidence were drafted when fakes were difficult.

The treatment of emails, however, does not inspire much hope. That is, in most
courts, a printout of an email is treated -- as a practical matter --
similarly to a document signed by the author. Very frustrating. There is
nothing magical about a piece of paper with an email header... you can create
the same in two minutes with a word processor. And don't even get me started
on text messages... try to introduce an unsigned typed scrap of paper and
you'll get laughed out of court; but put the text into something that looks
like a screenshot of a phone and, voila, it's instantly admissible as an
authentic text message.

------
danShumway
I'm going to make a moderate to strong-ish claim that deepfakes will not be a
substantially serious issue for Democracy or reporting. I'm not certain, but I
feel reasonably confident.

\- We've already had sophisticated image editing capabilities for a long time,
and it hasn't substantially changed how newspapers, blogs, and social media
accounts share photos. In particular, I see screencaps of Twitter threads all
the time, even though they're trivial to fake. For whatever reason, people
already seem to be OK with this. Maybe image editing did upset everything and
I'm just too young to remember, but in either case, it doesn't look to me like
the world fell apart.

\- We've had (relatively) sophisticated audio editing capabilities for a
somewhat lesser period of time, and that hasn't changed the landscape of how I
see audio reported on. Reporters still use audio as evidence in articles; we
haven't seen a wide spread of proprietary chains of trust or anything like that.

\- We've seen decent evidence that deepfakes aren't required to currently fake
video content to the point that you can deceive people. Deceptive cutting,
slowdown, etc... seems to be enough. For people who want reasons to distrust a
video, those methods also seem to be adequate. I regularly encounter people
who'll look at body-cam footage and say, "well, you don't know, it doesn't
show the full context." I'm skeptical deepfakes will make this situation worse
-- at most I suspect they'll just further divide us into groups who research
sources and groups who don't.

\- Deepfakes are getting better, and I keep on getting told that they're
eventually going to be perfect, but again, in practice I can't help but
compare it to Photoshop. It turned out that with Photoshop, people's ability
to spot fakes improved at roughly the same rate as the technology, to the
point where I don't think it would be trivial for me today to make a fake
photo of a politician that wouldn't get quickly identified as such on Reddit
or Twitter. I've seen some really impressive AI generated content that blows
me away, but right now I can still tell that it's all fake. I haven't seen
anything that can consistently produce results that fool me beyond the most
straightforward, boring use-cases. And I'm not an expert on this stuff; if
it's good, it _should_ be able to fool me.

In theory, deepfakes are concerning, and I get why people are concerned about
them. The technology will get better, it will be easier and easier for anyone
to make a video saying anything they want. In theory, I agree with all of
that. I just don't see a lot of practical evidence that it's going to be any
better or worse than what we have right now.

If I hadn't grown up in an era where people were making what to me sounded
like similar claims about Photoshop, I would be more worried.

~~~
misterprime
I agree with this sentiment. However, there's another threat from deepfakes
that someone had to point out to me: generating false memories.

Imagine you're a VIP and someone shows you footage of yourself doing something
unacceptable from 30 years ago. There's a good chance that you might believe
that footage and accept that as a true memory, even though it is entirely
fake.

~~~
danShumway
Would that scenario be significantly harder to pull off today with a faked
audio file or photograph?

I'm not necessarily being theoretical about that. Growing up, I used to
Photoshop family Christmas photos when not all of us could get together at the
same time. Near the end of their lives, I vaguely remember it causing my
grandparents to occasionally get confused about who was or wasn't actually at
a gathering.

And it's not like I did a particularly great job of it -- they just didn't
have experience with what the technology was capable of.

I guess what I'm trying to say is, yes, it seems obvious that there will be a
generation of people who are not equipped to deal with the current state of
image/audio/video manipulation, but is that actually a new thing? Hasn't that
kind of always been the case?

------
vernie
Why didn't they use voice cloning?

------
techrich
It's a crap deepfake, and it sounds nothing like him.

~~~
jedberg
And yet, almost everyone will fall for it, because they have no idea what he
sounds like.

