
Malicious Deep Fake Prohibition Act of 2018 [pdf] - Hard_Space
https://www.govinfo.gov/content/pkg/BILLS-115s3805is/pdf/BILLS-115s3805is.pdf
======
jawns
From the definitions section of the act:

> The term 'deep fake' means an audiovisual record created or altered in a
> manner that the record would falsely appear to a reasonable observer to be
> an authentic record of the actual speech or conduct of an individual.

This limits the scope of the act to prohibiting deep fakes that are not
explicitly labeled as such. If the recording (or, potentially, links to the
recording) contains any type of disclaimer, that should be sufficient to
establish that no reasonable observer would consider it to be authentic.

But I wonder if this provides enough protection to victims in the case of deep
fakes where a person's face is grafted onto sexual content or other content
that might embarrass the person, whether or not a disclaimer exists.

For instance, if someone puts an acquaintance's face into a sex scene and
distributes it online, then even if there's a huge "This is a deep fake"
scrolling disclaimer across the video, that person may still feel as if they
have been defamed.

~~~
herostratus101
I'm not sure your reading of this is warranted at all.

In any event, are you sure it protects you from criminal liability if you
label something as a deep fake, then other people redistribute it without that
label?

~~~
Shish2k
> if you label something as a deep fake, then other people redistribute it
> without that label?

If I print off a copy of a Van Gogh, making it clear that it's a copy, and
then somebody else sells it as an original, who is liable for fraud?

~~~
apetresc
The intent of the law seems different in these cases though. The purpose of
anti-forgery laws is to protect the producers and consumers of art from being
ripped off financially. The purpose of deep-fake prohibitions is to protect
the viewers of the video from being misled about reality, and the subject of
the video from the consequences thereof.

Yes they both involve making visual copies of things but the underlying
dynamics are completely different.

------
beager
With respect to the U.S. Congress, I don't think the language here is precise
enough to be effective without also being overly broad. I'm looking at this,
specifically:

    
    
      ‘‘(2) the term ‘deep fake’ means an audiovisual
      record created or altered in a manner that the
      record would falsely appear to a reasonable observer
      to be an authentic record of the actual speech or
      conduct of an individual; and
    

If that's the case, any sort of creative editing, even just quick cuts, could
fall under this (see: any primetime or cable news, any TV campaign ad, the
quick cuts of Obama where it looks like he's singing Never Gonna Give You Up,
etc). And, not to get on the US politics slant, but a law like this could be
weaponized against political foes—basically, label everything you don't like
as "fake news" and prosecute it under this law.

Additionally, if you look at this and just say "well, we all know what a deep
fake is, so your point is moot," I will say, somewhat at the risk of
contradicting myself, that maybe the language needs to be forward-thinking to
cover whatever the next "deep fake" is.

In my opinion, the sort of clause above would be better written like:

    
    
      The term "Computer-generated audiovisual impersonation" means an
      audiovisual record created or altered by computer generation in
      a manner that the record would falsely appear to a reasonable
      observer to be an authentic record of the actual speech or conduct
      of an individual;

~~~
e40
_If that's the case, any sort of creative editing, even just quick cuts,
could fall under this_

Maybe that wouldn't be so bad. This technique has been used to deceive untold
times.

~~~
kbenson
I agree. A world where edited interviews must air with a disclaimer that they
have been edited to remove certain portions might be extremely beneficial.

~~~
mcbits
Almost every prerecorded interview would carry the disclaimer, rendering it
meaningless. Though it would be fun to see "fake reaction" every time they
splice in footage of the interviewer nodding, smiling, scowling, etc.

~~~
kbenson
Almost every prerecorded interview presented as they _currently are_ would
carry the disclaimer. It might cause interviews to be presented differently
(either ruthlessly trying to keep on topic, or with an easy link to the full
interview), but even if not, having that disclaimer would be useful as an
indicator that forces people to remember that what they are seeing might be
out of context, and to look for that context.

Finally, I think it would give people more tools for going after the purveyors
of misleading content. Either they would need to have a disclaimer, which
could be pointed to for those that accepted the content without reservation,
or they could face repercussions.

I see no problem with forcing people and organizations that purport to be
representing a real situation but are instead presenting a view of that
situation ideally suited to their own narrative to note they are doing so.
Just because we've been conditioned to be tolerant of it in our media does not
mean it's acceptable or needs to continue as it has.

------
jdpedrie
"Deep Fakes" in general seem to be already covered by existing libel laws,
though that has some problems. Libel is generally a civil matter, and is
extremely difficult to prosecute.

There is precedent, even in the US, for criminal libel laws[0]. Perhaps
following that path (combined with continued work on detection and defeat of
the technology) would be preferable. Since libel and defamation are well-
defined and have a long history of jurisprudence, many of the constitutional
and legal issues which would have to be answered in opening a new avenue of
speech restriction could be contained within the existing contexts.

I'd even think leaving it in the civil arena would make sense until it became
a problem worthy of criminalization. Legislating a matter before it arises is
almost never a good idea, from either a prudential or principled standpoint.

[0] [https://www.washingtonpost.com/news/volokh-
conspiracy/wp/201...](https://www.washingtonpost.com/news/volokh-
conspiracy/wp/2014/05/13/impersonating-someone-online-with-intent-to-injure-
his-reputation-is-a-crime-in-new-york/)

~~~
Bartweiss
Aside from the civil versus criminal distinction, it seems significant that
this statute judges harm differently than libel laws. Libel/slander requires
reputational damage to an individual, while this is about facilitating
criminal/tortious conduct in general. That seems to cover some fairly
significant non-libel cases, like knowingly creating a video for someone to
use as an alibi.

I agree that it still seems like premature legislation, though. Even from the
viewpoint of a random programmer, it's both overbroad (this covers any video
editing, not just 'deepfakes' as commonly understood) and incomplete (what
happens when no recognizable people are represented, but an edit is still used
to facilitate criminal action?)

If doctored video becomes as believable as authentic video, it's going to be a
major upheaval, returning us to a world where seeing isn't believing. Probably
not completely; there will be an obvious market for hard-to-fake
authentication, even in forms as simple as registering a video hash with a
trusted source as soon as it's taken. But the departure from a world where a
high-res video of an event is reliable proof is going to be a very big change,
and I seriously doubt any law written today will productively adjust for it.
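The "registering a video hash with a trusted source" idea is simple to sketch. A minimal illustration in Python; the `ledger` list is a placeholder for whatever trusted third party would actually hold the record, and `register_video` is a made-up name:

```python
import hashlib
import time

def register_video(path: str, ledger: list) -> str:
    """Hash a video file and append (digest, timestamp) to a trusted log.

    `ledger` stands in for the trusted source; in practice this would be
    a timestamping service or notary, not an in-memory list.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large video files don't need to
        # fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    ledger.append((digest, time.time()))
    return digest
```

Anyone holding the original file can later recompute the hash and show it was logged at (or before) a given time; the scheme proves a video existed unaltered since registration, not that it was authentic to begin with.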

------
orbifold
Just yesterday it occurred to me that Deep Fakes could be used to create some
of the most potent disinformation campaigns ever. Already now we have had
memes with misattributed quotes and photos edited to associate Clinton with
the devil, satanic rituals and so on. Think of how much more impactful
slightly off quotes and speeches would be.

~~~
Zecar
> Already now we have had memes with misattributed quotes and photos edited to
> associate Clinton with the devil, satanic rituals and so on

Right, like my favorite one is the idea that she took $150 million from Russia
in exchange for a shipment of our uranium. Those funds went to the Clinton
Foundation, a _charity_. Sheesh.

~~~
arcticfox
> Those funds went to the Clinton Foundation, a _charity_. Sheesh

While independent investigations have concluded there was no wrongdoing, the
reason above is really no defense. The Trump Foundation, for example, has
shown how nonprofits can be manipulated to the benefit of the founders.

~~~
travisoneill1
No _proof_ of wrongdoing. They also failed to come up with any plausible
reason for the payment other than wrongdoing.

------
daenz
IANAL, but based on the wording, it seems like it's only unlawful to create w/
intent to distribute, or to knowingly distribute, a deep fake IFF you're
facilitating unlawful conduct. So for example, blackmail? Does libel count?
Does the very act of distributing a deep fake count as libel against the
subject?

Can any lawyers chime in?

------
saint_abroad
> (2) the term ‘deep fake’ means an audiovisual record created or altered in a
> manner that the record would falsely appear to a reasonable observer to be
> an authentic record of the actual speech or conduct of an individual;

I see nothing in this act that restricts the applicability of so broad a
definition of "deep fake" so as not to cover the activities of Hollywood,
specifically the CGI mapping of actors onto body doubles such as that
performed in "The Crow" (1994).

Yet another deeply troubling knee-jerk reaction of an act that promises to
catch "just the bad".

~~~
cwkoss
Conversely, I think that we may soon need regulation around Hollywood's right
to digitally reincarnate dead actors for use in their commercial film
productions.

The idea of Universal CGI'ing George Carlin into saying politically-tinged
speech he would disagree with would be a travesty. And I'm not sure how that
would be prevented, save his estate successfully suing Universal.

------
donatj
I mean deep fakes clearly seem protected by the first amendment, particularly
strongly in the case of parody. That's not to say, however, that they are
clear of libel law.

~~~
hopler
So are they protected or not?

~~~
donatj
Basically my argument is that they're protected from criminal but not civil
charges.

------
pseingatl
The crime is not well-defined. Meanwhile, there's this out without further
explanation: "No person shall be held liable under this section for any
activity protected by the First Amendment to the Constitution of the United
States.’’

~~~
curuinor
Lots of legislation effectively leaves it to courts to hash the actual
specifics out. Courts can only rule on specific examples, so it prevents a
genre of legal stupidity.

------
michaelt
Does anyone know how this act would interact with the first amendment?

It's my understanding that the first amendment is extremely broad - if
donating money can be counted as protected speech, surely 'parody' videos of
politicians would also be?

~~~
SamuelAdams
Page 4, line 8:

    
    
        (2) FIRST AMENDMENT PROTECTION.—No person shall be held
        liable under this section for any activity protected by the
        First Amendment to the Constitution of the United States.’’.

~~~
giornogiovanna
But what activity does this act prohibit, that isn't "protected by the First
Amendment"?

~~~
fiblye
Libel is a crime and some states have defamation laws. It wouldn't be hard to
tack this onto those and--if they really want to push it, which they will if
given the chance--make it a sex crime to get that lifelong punishment.

------
c0nfused
It would seem to ban only international or interstate created deep fakes. This
appears to prop up the locally created deep fake market.

I'm only half kidding, it seems like the immediate defense would be: I made it
at home or in a data center in my home state thus the statute does not apply.

~~~
grigjd3
I suspect this limitation is because the Supreme Court has traditionally held
the federal government to such limitations in many areas.

~~~
the_pwner224
The US Constitution only gives the Federal government the power to regulate
international and interstate commerce (and of course other powers too, but
those aren't relevant here). Article I section 8:
[https://www.archives.gov/founding-docs/constitution-
transcri...](https://www.archives.gov/founding-docs/constitution-
transcript#toc-section-8-)

The Bill of Rights (first 10 amendments to the Constitution) explicitly states
that all powers not explicitly granted to the Federal government are retained
by the states and by the people. So that's probably why this limitation
exists.

Of course, the Federal government has really stretched the meaning of "To
regulate Commerce ... among the several states" and has gained quite a bit of
power out of that line. But it's still a limitation.

------
adolph
I'd bet that this act would paradoxically increase the power of "deep fakes",
in that the problem isn't the existence of "deep fakes" but each person's
reaction to them. The prevailing culture of knee-jerk hot takes already
thrives on selectively edited video and fresh discoveries of forgotten pasts.
If it isn't inoculated into thoughtful skepticism by steady revelations of
various fakes, it will become more susceptible to the few left unprosecuted
under this law.

------
pbhjpbhj
How does one prove the fake is fake?

There's going to be development of "live notarised" data streams, so a camera
feed gets certified, etc..

People are going to use deep-fake tech as an excuse -- someone faked me, I'm
not racist/sexist/fascist/... . How can you show the real is real?

I'm imagining a future where you can choose the actors in the lead roles of
your films; great for narcissists!

~~~
tlynchpin
Heinlein explored this idea of Fair Witness as a profession in his 1961 novel
Stranger in a Strange Land.

[https://lccn.loc.gov/61011702](https://lccn.loc.gov/61011702)

------
echelon
Jeez, yet another annoying new law encroaching on something fun I do as a
hobby. I'm really sick of our lawmakers doing this when our existing legal
code seems sufficient to punish abuse.

Does this mean I should take down my Donald Trump text to speech engine [1]?
Or consult a lawyer?

It yields really poor quality (right now), and I doubt any reasonable person
would consider it to be actual audio from Trump.

Does this prohibit me from improving it? I was about to train an ML model on
my samples and switch to parametric generation.

[1] [http://trumped.com](http://trumped.com)

------
cpr
In other words: deep state sets the stage for forthcoming shocking real videos
of major players.

Yes, yes, downvote into oblivion, but remember this when it happens...

------
randyrand
This concerns 'records', but I wonder if a program that can do a realtime
overlay would sidestep the moral and possible legal issues.

------
rjf72
I'm curious about peoples' views on something.

It's clear that in the future we will be able to create fakes that are
effectively indistinguishable from reality. The audio in this 'Trump speech'
[1] is already remarkable. So there are two ways we can go from here. The
first is to try to maintain faith in multimedia. What you see or hear is
_probably_ real because we try to pass a bunch of laws making it a really bad
thing to try to impersonate people.

The second is to go the route of internet speech today. If somebody claims to
be somebody of note, you generally would not believe them without extensive
proof. And so what they say does not reflect upon the person they claim to be.
If video/audio manipulation tech was allowed without constraint, this would
eventually become the same for general audio/video. Having no trust in what
you see or hear is not a great thing. But at the same time, I think there's a
very good argument to be made for the fact that people are already far too
susceptible to fake information because we, even before the 'real' advent of
deep fake type technology, are still teetering on the precipice between
believable and not.

For instance this [2] famous image of "animal testing" that keeps going viral
on social media every couple of years. It has nothing to do with animal
testing, but people are naive. Deep fakes would throw us well off that
precipice to the point that I think we'd see substantially increased amounts
of scrutiny given to misinformation. The downside here is of course we'd also
see substantially increased amounts of scrutiny given to legitimate
information, though I'm not entirely sure I see that as a negative.

[1] -
[https://www.youtube.com/watch?v=7Gpc_artOYI](https://www.youtube.com/watch?v=7Gpc_artOYI)

[2] - [https://speakingofresearch.com/2014/02/27/fact-into-
fiction-...](https://speakingofresearch.com/2014/02/27/fact-into-fiction-why-
context-matters-with-animal-images/)

------
naringas
sooner or later they have to make a full length feature film using this
technology

~~~
taneq
I thought there already was (some of) one. Starring Nicholas Cage, as
_everyone_.

~~~
gbrown
That's amazing.

------
bitL
It's funny how a very simple DL architecture any beginner could prepare ended
up in a prohibition act. Essentially: train two convolutional autoencoders,
one on the face one wants to replace (extracting a dataset by e.g. tracking
the face with OpenCV in the target video), the second on many pictures of the
face one wants to insert (e.g. scraped from the Internet or known videos);
then glue together the encoder from one and the decoder from the other,
assuming the latent variables match, and suddenly a fake video is done.
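The "glue" step above can be sketched in a few lines. This is a toy, untrained, linear stand-in (real deepfake autoencoders are convolutional and trained per face); all the names and shapes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear "autoencoders", one per face. Training would fit each
# encoder/decoder pair on its own face's images; here the weights are
# random, purely to show the wiring.
dim, latent = 64, 8
enc_a = rng.normal(size=(latent, dim)) * 0.1  # encoder, face A
dec_a = rng.normal(size=(dim, latent)) * 0.1  # decoder, face A
enc_b = rng.normal(size=(latent, dim)) * 0.1  # encoder, face B
dec_b = rng.normal(size=(dim, latent)) * 0.1  # decoder, face B

frame_a = rng.normal(size=dim)  # a flattened frame of face A

# The swap: encode face A with A's encoder, but decode with B's
# decoder, mapping A's pose/expression onto B's face -- assuming,
# as the comment says, the latent spaces line up.
swapped = dec_b @ (enc_a @ frame_a)
```

In practice the two networks are usually trained with a shared encoder precisely so that the latent spaces do line up, which is what makes the decoder swap work at all.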

IMO if you really want to control deep fakes, you should use a blockchain and
track all processing steps, starting from image/video acquisition. I don't
want to give anyone ideas, but they will do it anyway.

~~~
dieblur
This doesn't even begin to make sense. You can't force people to register
every creation on the blockchain, and there can't be a main "only real things
allowed" chain. If we want to shoehorn a blockchain into every possible
problem ever, let's begin with ones that could actually work.

~~~
bitL
The idea is to require equipment manufacturers to embed uniquely identifiable
IDs in all imagery/footage their equipment produces, entice all software
vendors to register every operation in a blockchain and store a reference to
it in metadata, then reject as "fake" any footage whose processing steps can't
all be verified.

How is that any different than tracking production inputs in logistic chains
using blockchain?
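A minimal sketch of what "register all operations" could mean, assuming nothing about any particular blockchain -- just a hash-chained provenance log where each record commits to the previous one (`add_step` and `verify` are hypothetical names):

```python
import hashlib
import json

def add_step(chain: list, operation: str, params: dict) -> dict:
    """Append one processing step to a hash-chained provenance log."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"op": operation, "params": params, "prev": prev_hash}
    # Hash the record (sorted keys for a canonical encoding) and store
    # the digest alongside it; the next record will commit to it.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify(chain: list) -> bool:
    """Recompute every link; True only if the whole history is intact."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("op", "params", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Any retroactive edit to an earlier step breaks every later link, which is the property a public chain would anchor. It does nothing, of course, about footage that simply never enters the log -- which is the objection raised below.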

~~~
xamuel
Nothing can stop me from pointing a camera at my computer monitor and
recording a verified video of a non-verified video.

~~~
bitL
Sure, but in that case your chain starts with your camera producing
"pro"-looking video, which would be totally unlikely (not to mention trivial
tells like monitor/lens distortions etc.). Analyzing the subsequent processing
steps stored in the blockchain would likely show a very low probability of
authentic work, since many required steps would be missing (OK, there is still
some low probability you can fool it somehow, but would you want to waste so
much time/processing power on it?)

~~~
xamuel
This is whack-a-mole. If I can design deepfake software, it isn't much harder
to design it to specifically anticipate the user filming the result. The user
would input their monitor specs, their camera specs, etc., and the software
would produce a weirdly distorted video which looks perfect when filmed with
that camera from that monitor.

Or, people just outright sell doctored cameras where you can intercept the
input feed.

This isn't like adult content filtering, where all that matters is that kids
can't get around it. You have to assume people who know what they're doing are
going to attack your technical solution, and apply ingenuity in doing so. When
the enemy consists of hackers, you can't ward them off with a hack!

