
Twitter wants feedback on its proposed deepfakes policy - mikece
https://arstechnica.com/tech-policy/2019/11/twitter-wants-your-feedback-on-its-proposed-deepfakes-policy/
======
hombre_fatal
The moral panic around deepfakes is hilarious to me.

Especially on a platform like Twitter, where a tweet of a screenshot of a
headline with no source (that possibly has no source at all) will get
thousands of upvotes and angry responses. That is much more alarming, and it
already exists today.

For example, every once in a while on r/PoliticalHumor you'll see a screenshot
of a tweet that Trump didn't even write, yet everyone responding will take it
at face value. Deepfakes are a red herring and a distraction from a ubiquitous
phenomenon we might never solve.

Requiring deepfakes to have a disclaimer to me is like training people that
it's safe to insert their credit card info on a website as long as they see
the https lock icon in the navbar. Instead, people should be trained to be
eternally vigilant and to be skeptical even if there is no "this is fake"
disclaimer.

We're long past screwed.

~~~
hart_russell
Agreed. In general, it feels like modern America lacks critical thinking
skills when it comes to discerning obviously fake information from real.

~~~
account73466
Nothing to do with America; it is worldwide and thousands of years old.

------
tus88
> The issue of what to do about them hit the spotlight in May when a video of
> House Speaker Nancy Pelosi (D-Calif.) that was heavily modified

From what I recall, one video was slightly slowed down, the other was just a
montage of various clips joined together. I am not sure either constitutes
serious modification or doctoring.

~~~
nwalker85
Yeah, calling that video a "deepfake" sounds like someone has no idea what
they are talking about.

------
rococode
On the subject of the dangers of deepfakes, the most recent episode of The
Blacklist addressed deepfakes in a storyline I found quite interesting.

Basically (spoilers ahead), this researcher creates a sentient AI and the AI
promptly decides that sentient AI is a danger to humanity and tries to kill a
few of the top AI researchers. Ok, kinda unrealistic.

The more realistic part? To get one of the AI researchers killed, a deepfake
video is created of that researcher saying something along the lines of "over
the years at X corp I've seen the worst of humanity, too much evil, it's time
to end it all" accompanied by him strapping on a bomb vest. The video is
released and everyone freaks out. He doesn't notice, goes to work, gets
surrounded by cops while holding a small black device (his phone), and the
police shoot and kill him thinking it's a detonator.

I'd always considered deepfakes in the context of making false political
statements which could eventually be disproved. Worst case, a bunch of people
think the wrong thing for a while. This use case of forcing rapid response
without time for validation or refutation is quite a bit scarier and one I
personally hadn't considered before.

~~~
gruez
>To get one of the AI researchers killed, a deepfake video is created of that
researcher saying something along the lines of "over the years at X corp I've
seen the worst of humanity, too much evil, it's time to end it all"
accompanied by him strapping on a bomb vest. The video is released and
everyone freaks out. He doesn't notice, goes to work, gets surrounded by cops
while holding a small black device (his phone), and the police shoot and kill
him thinking it's a detonator.

To be fair, I can imagine this happening in the US without a deepfake video.
Just look at all the instances where the cops are called because of a
"suspicious person" and end up shooting an unarmed civilian. Random example:
[https://en.wikipedia.org/wiki/Shooting_of_Charles_Kinsey](https://en.wikipedia.org/wiki/Shooting_of_Charles_Kinsey)

------
enneff
By attempting to police this aren’t they lending credence to the instances
that they failed to detect? “See it’s [not marked as fake/it’s on twitter]! It
must be real.”

Seems better that we all just adjust to the fact that we can’t trust what we
see (we never could anyway).

~~~
sundvor
That's all well and true for people on HN, but I fully believe that the general
public needs a lot more help.

------
mikece
I like the idea of not necessarily removing content just because an algorithm
or group of people say it's a deepfake. Apply a label and let people make up
their own minds.

Of course if there's no transparency to the process or a known way to contest
being classified as a deepfake, this could lead to other problems. And is a
work of performance art -- an actor who can do a spot-on impression of someone
-- a deepfake if meant as art?

------
xwdv
I worry that the moderation of deep fakes will only lead to deeper and deeper
fakes, until no content on the internet can be trusted, and nothing left
believable.

~~~
bgun
> until no content on the internet can be trusted, and nothing left
> believable.

You seem to be under the impression that this is not already the case. Why is
that?

~~~
a1369209993
> Why is that [not already the case]?

Because the internet/web was created by technically and mathematically
oriented people - programmers, electrical engineers, researchers - who are typically
much less committed to systemic deceit than marketers and propagandists.
Consequently, much of the initial content was posted out of a genuine desire
to inform rather than deceive people.

~~~
freeflight
But the Internet was created literally _decades_ ago.

By now the marketers and propagandists have long taken over, social media was
just one out of their many trojan horses.

"Content" has merely become an excuse to drive ad revenue and interactions up,
to lure users into signing up for accounts, and to harvest as much data about
them as possible.

~~~
a1369209993
They said _no_ content and _nothing_ - there's still stuff (and people) left
over from the early days. If the internet had been designed by marketers and
propagandists from the get-go, that would not be the case, so I think "because
marketers and propagandists were not originally involved (and originally is
sufficiently recent)" is a valid answer to "Why is the internet not (yet)
devoid of anything trustworthy or believable?".

~~~
freeflight
_> there's still stuff (and people) left over from the early days_

Very little stuff, and only a very few people; they've also been busy warning
about this whole situation, even offering solutions [0] that sound rather
fantastical in their probability of being realized.

So while you are technically correct that there's still a tiny bit of
trustworthy content left on the web, it doesn't make that much of a difference
when 70%+ of the web's traffic is systematically routed past it to game the
attention economy and facilitate mass-scale data collection.

[0] [https://www.theguardian.com/technology/2017/mar/11/tim-berners-lee-web-inventor-save-internet](https://www.theguardian.com/technology/2017/mar/11/tim-berners-lee-web-inventor-save-internet)

------
jejones3141
Two things:

First, they must fully specify how they categorize content as misleading.

Second, deceit predates computers: lies of omission, half-truths, misleading
presentations of statistics (e.g. the ubiquitous pie chart of US federal
spending that only shows discretionary spending). If they're setting
themselves up as guardians, they should cover non-digital methods of deception
as well as deepfakes.

------
Miner49er
From the survey:

> Misleading altered media does NOT include photos and videos that are edited
> to remove blemishes or physical imperfections.

~~~
jka
Yep. People use filters and image retouching a lot.

If Twitter wants users to pay attention to items of content which are
particularly misleading, they need to avoid alert fatigue - i.e. these notices
need to be rare and reliable.

~~~
jka
NB: The only long-term solution to this conundrum that I can see is that
people become genuinely confident and comfortable sharing who they are without
self-editing and self-censoring, and that we collectively appreciate and
understand that.

Until then -- and especially in the presence of rewards for narcissism
(iPhones, Instagram, venues designed solely for vanity) -- filtering and
editing your own self-image is at least understandable, and at best rational
or beneficial.

That leads to issues in determining (and putting into writing) the difference
between malign manipulation or fabrication and the behaviour of a reasonable
person.

------
zitterbewegung
What if the tweets themselves are synthetic? Would those be deleted or is that
already covered somewhere?

------
narrator
I'd imagine if any video of that guy who didn't kill himself's alleged clients
leaked somehow, anyone powerful who might be identified in a video would claim
it was a deepfake.

------
new_guy
I run a network of social sites and we've had this functionality for a couple
of years, and the thing is people - especially Americans - just don't care.

They _love_ being outraged, even when what they post is clearly labelled as
fake they ignore it, and people commenting ignore it.

From the site's perspective there's not much more we can do without driving
people away; if you try to police content too much, people will just go to
another site.

But it does get pretty frustrating.

~~~
netsharc
> They love being outraged

I find this too, and I'm fascinated with this aspect of our new online world.
I wonder why and how. I'm guessing outrage is a way of feeling superior to
other people, "How can they be so stupid, I'm so happy I'm smarter than them
and know the truth!" (Trying to think if this would apply to online public
shamers, I guess so.).

I'm guessing the loneliness, insecurities and FOMO created by online social
networks have led to this. Although it was TV before that: there are reports
of Bhutan's society suffering negative effects after the introduction of TV
in 1999 (e.g.
[http://news.bbc.co.uk/2/hi/entertainment/3812275.stm](http://news.bbc.co.uk/2/hi/entertainment/3812275.stm)
)

------
Geee
Maybe it's even more dangerous that a photo or a video can no longer be used
as proof in court. Deepfake technology gives plausible deniability.

------
est
I think deepfakes will force us to think deeper into words and meanings, not
just familiar faces. It's an inevitable invention.

------
Nasrudith
The thing I find most worrying about deepfakes is the moral panic aspect.
Although flagging them as shopped, with an explanation of why they think so,
would be unobjectionable and respectful "good citizenship".

Even the false positives would be good for both laughs and insights if, say,
an art museum exhibit which screws with perspective or scale ends up being
flagged as fake.

~~~
9HZZRfNlpR
There's more to worry about with screenshots made by modifying a bit of DOM
and spreading "deepfakery" like that, but then again no one cares anyway.
Worrying about deepfakes is truly a pseudo-problem.

------
crb002
Every meme photoshop now needs a label?

~~~
KingMachiavelli
From the original twitter blogpost:

> we propose defining synthetic and manipulated media as any photo, audio, or
> video that has been significantly altered or fabricated in a way that
> intends to mislead people or changes its original meaning.

'Meme' is pretty broad, so if it's literally just a screenshot of a faked
tweet then yes, it would fall under this rule based on the broad language they
are using. But I think that's still a good thing, as long as the label/warning
is unobtrusive; at the same time it needs to not be used on every meme/image,
since then it would lose its effect.

I'm very curious how they are going to differentiate between normal image
mashups (memes, etc.) and false information alterations at least without human
oversight.

The blog post has a survey for feedback so I encourage everyone to leave some
since this most likely impacts everyone at least indirectly.

~~~
mc32
What if it’s a photo of a montage or collage of authentic images?

It’s artistic manipulation, but it’s not fakery.

Maybe they’ll have a table with the hashes of all verified images, and
anything not in there gets auto-tagged as unverified/fake.

I think it’s a losing proposition, but let’s see what comes of it. Of course
the downside is this benefits incumbents and harms challengers.

