Malicious Deep Fake Prohibition Act of 2018 [pdf] (govinfo.gov)
121 points by Hard_Space 15 days ago | 113 comments



From the definitions section of the act:

> The term 'deep fake' means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.

This limits the scope of the act to prohibiting deep fakes that are not explicitly labeled as such. If the recording (or, potentially, links to the recording) contains any type of disclaimer, that should be sufficient to establish that no reasonable observer would consider it to be authentic.

But I wonder if this provides enough protection to victims in the case of deep fakes where a person's face is grafted onto sexual content or other content that might embarrass the person, whether or not a disclaimer exists.

For instance, if someone puts an acquaintance's face into a sex scene and distributes it online, then even if there's a huge "This is a deep fake" scrolling disclaimer across the video, that person may still feel as if they have been defamed.


> For instance, if someone puts an acquaintance's face into a sex scene and distributes it online, then even if there's a huge "This is a deep fake" scrolling disclaimer across the video, that person may still feel as if they have been defamed.

A lot of people might not like giving up the authoritarian impulse to control the fictional creative output of others when it specifically targets them in a negative and public manner, but unfortunately respecting freedom of speech demands that deep fakes be protected from civil or criminal action. We have to adapt our laws to reality as technology changes it, not the other way around.

However, if a deep fake is used to specifically target and harass an individual on a personal level, i.e. not parody/political speech but actually contacting and engaging with family/friends/associates of the victim in order to cause harm, then afaik that is already covered by existing harassment laws.


> We have to adapt our laws to reality as technology changes it, not the other way around

Social norms, and law as a consequence, always have to strike a balance between one person's freedom of [x] and others' freedom of [x, y].

Just because a technology exists doesn't mean it may be used in any way whatsoever. As an extreme example, take "knife technology": you just should not put that piece of tech into a human body to see what happens... so it's clear where the balance lies in that case. Deep fakes should be subject to the same human dignity vs. free speech discussion as anything else, imo. There also seem to be different norms around this in different countries; it's not always authoritarian.


For what it's worth, I kind of agree, this is a pretty big loophole that essentially makes revenge porn legal. It's gonna be about 2 seconds before you start seeing disgruntled exes of every kind putting up deep fakes of porn encounters. But I tend to file that under "This Is Why We Can't Have Nice Things".

The reality is that when that happens, and we all know it will, the good legislators will be under ENORMOUS pressure to close that loophole, and that loophole will be closed. The same thing happened with regular revenge porn: it was "loophole legal" at first, then the law came down on it like a hammer because the pressure just grew too large. The amount of time behind bars handed out for revenge porn is increasing even as we type.

To be completely honest, I'm fairly certain this act won't pass muster as it's written specifically because the revenge porn loophole is in it. The combination of the anti-revenge porn ladies and the law enforcement people will just make this way too risky a "yes" vote for anyone in a competitive district.


To quote my earlier comment:

> if a deep fake is used to specifically target and harass an individual on a personal level ... then afaik that is already covered by existing harassment laws.


The issue is, a deep fake is clearly a parody. This can't be argued; parody is the purpose of a deep fake at its root.

So you need a more narrow definition. If I do a deep fake of Melania giving Trump a golden shower in Moscow, that goes way beyond parody, even though it is, technically, by definition, parody.

So there is no way to say that is not a personal attack, AND there is no way to say that is not a parody.

That's the loophole that needs to be closed. You can try to argue that showing a hot school teacher or some male cop performing oral on a roomful of men was only parody, but if you get away with it there will be about 20 seconds before the law is changed. All the activists and the law enforcement people will claim that we just can't have video of every ex-girlfriend, cop, school teacher, or boss in the nation involved in whatever sexual activity seems the most degrading.


> If I do a deep fake of Melania giving Trump a golden shower in Moscow, that goes way beyond parody, even though it is, technically, by definition, parody.

By what principle is this the case?

> That's the loophole that needs to be closed.

You're not closing a loophole. You're limiting free speech.


The damage caused by stabbing someone with a knife is not the same as editing a video with someone's face.


Consider what happened to the pizza parlour named in the “Pizzagate” story (despite the named pizza parlour not having a basement), and consider what would’ve happened if someone had made a DeepFake with Clinton‘s face to “prove” the claims.


We've been getting along alright so far with photoshop. This isn't fundamentally different.


Not fundamentally, sure. It’s a matter of degrees, and of skill that will (eventually, not currently) no longer be required.


Sure, but we can cope with that. If we can look at a photo and think to ourselves, “perhaps it was photoshopped?”, why will we be unable to do the same for a video? Would we not be even more likely to do that if faking videos becomes trivial and presumably universally accessible? In particular the specific scenario I responded to seems like fear mongering.


What would have happened?


Law enforcement or FBI etc would likely have issued a statement to say they believe it's fake. It's just like the Birther situation, you can almost never disprove a conspiracy but you can have a trusted body reassure everyone that a specific piece of evidence is fabricated.


And you can give others tools to go after misinformation by making certain aspects illegal, so law enforcement has recourse before it gets too large. Recourse like notifying hosts and law enforcement when purposefully misleading information is posted, as this bill allows.


> that person may still feel as if they have been defamed.

Don’t defamation or personal honour laws cover this scenario already in most countries, as would be the case with obscene caricatures of someone? Do we need new laws specific for deep fakes in that case? What about for videos that very obviously just paste the target’s face on top of another video?


I'm not a lawyer, but the answer may well be "no, this isn't already covered". Especially when courts like to construe things narrowly and hold that anything not specifically mentioned in a law requires relief through the legislature rather than the judicial branch. There will surely be people who claim that this isn't covered by existing defamation laws, and I would not be so sure that a court wouldn't agree with them.


Yeah, though I can see the case for additions that make defamation using deep fakes a more serious crime than less competent caricatures.


It seems quite a constraint on creativity to say 'You can't make something look legit when it isn't'.


It’s not a new concept, and it applies to the difference between a painter copying a Van Gogh (acceptable) and trying to pass that work off as the genuine article (illegal). In short, it’s the line between creative expression and forgery.


As others have pointed out, "making something look legit when it isn't" is a good description of fraud, and films have long had disclaimers in the credits that they are fictional.


Or, you have to at least put a disclaimer on it if it's likely to cause confusion. This isn't new.

> All Characters and events in this show -- even those based on real people -- are entirely fictional. All celebrity voices are impersonated... poorly. The following ...


> an audiovisual record created...in a manner that the record would falsely appear to a reasonable observer to be an authentic record

Presumably this 'created' case is meant to prevent something like Nvidia's face-gen tool being used to create a lookalike video from scratch with the defense "it's not an alteration".

But there's no wording restricting this to digital generation (or alteration). As far as I can tell, this covers anything from a photo with a lookalike to a real but misleadingly-cut sound recording. Am I missing something, or would this have made producing that 2004 faked photo of Jane Fonda and John Kerry a felony offense?


What if someone produces a deep fake with the disclaimer, but then someone copies the video, edits out the disclaimer and then posts that edited video? Is the second person guilty of creating a deep fake or guilty of something else?


I'm not sure your reading of this is warranted at all.

In any event, are you sure it protects you from criminal liability if you label something as a deep fake, then other people redistribute it without that label?


> if you label something as a deep fake, then other people redistribute it without that label?

If I print off a copy of a Van Gogh making it clear that it's a copy, and then somebody else sells it as an original, who is liable for fraud?


The intent of the law seems different in these cases though. The purpose of anti-forgery laws is to protect the producers and consumers of art from being ripped off financially. The purpose of deep-fake prohibitions is to protect the viewers of the video from being misled about reality, and the subject of the video from the consequences thereof.

Yes they both involve making visual copies of things but the underlying dynamics are completely different.


Did you sell the copy to the person who passed it off as an original? If so, you might be in real trouble, especially if it forms a pattern of behavior. If it was a gift and no form of remuneration was expected or received you’re going to be fine. The more nuanced answer is “prosecutorial discretion” because this will be investigated. If you’re an art student who did this once, for $100, you’re probably just going to be told to stop being so naive, not held criminally liable.


Whoever the jury decides.


I suppose the person who removed the label would, theoretically, be the guilty party. One could argue that law enforcement is flawed, but that's true for any law.


> are you sure it protects you from criminal liability if you label something as a deep fake, then other people redistribute it without that label?

No, I'm not sure. I said "potentially," but I don't know.

I think it may be plausible to argue that labeling a recording outside of the recording itself should be one factor in deciding whether a reasonable observer would consider it authentic.

And I think it may also be plausible to argue that doing so protects the labeler, and that (particularly if the license to redistribute requires that the disclaimer be shown in any subsequent distribution channels) only the people who distribute it without the label should be found to have violated the act.


Could someone already sue you for libel if you created a deep fake of them?


I think that case is fine since that is already covered by laws which punish harassment and malicious defamation. It gets more complicated if you account for celebrity.


With respect to the U.S. Congress, I don't think there is language that could make something like this effective without also being overly broad. I'm looking at this, specifically:

  (2) the term 'deep fake' means an audiovisual record created or
  altered in a manner that the record would falsely appear to a
  reasonable observer to be an authentic record of the actual speech
  or conduct of an individual; and
If that's the case, any sort of creative editing, even just quick cuts, could fall under this (see: any primetime or cable news, any TV campaign ad, the quick cuts of Obama where it looks like he's singing Never Gonna Give You Up, etc). And, not to get on the US politics slant, but a law like this could be weaponized against political foes—basically, label everything you don't like as "fake news" and prosecute it under this law.

Additionally, if you look at this and just say "well, we all know what a deep fake is, so your point is moot," I will say, somewhat at the risk of contradicting myself, that maybe the language needs to be forward-thinking to cover whatever the next "deep fake" is.

In my opinion, the sort of clause above would be better written like:

  The term "Computer-generated audiovisual impersonation" means an
  audiovisual record created or altered by computer generation in
  a manner that the record would falsely appear to a reasonable
  observer to be an authentic record of the actual speech or conduct
  of an individual;


> If that's the case, any sort of creative editing, even just quick cuts, could fall under this

Maybe that wouldn't be so bad. This technique has been used to deceive untold times.


A carefully drafted law against circulating that sort of material without a label might be alright. But I very much don't want to see this law, with low specificity and felony penalties, used to criminalize dubious editing. After all, we already have libel laws, so this is largely going to be used where those fail to apply.

If you cut a real interview answer to align with a different question than was asked, is that altered to a false appearance? What if it's done in good faith, to streamline out an interviewer's request for clarification?

If you share real, unedited footage of someone, but end it before they give a caveat to their comment, is that an authentic record of their speech? How about if you cut off a more serious inversion, like "what my opponents want you to think I'd say..." What about footage of a fight that starts too late and misrepresents the aggressor?

Weirdest of all, if you edit an interview favorably, is that a felony? Since this isn't a libel law but a public-interest law, could cutting out a stupid answer or trimming filler words form an inauthentic record of someone's speech?

(Actually, even worse: a slightly broad reading of 1B and 2A suggests that it's even possible to commit this crime without intent. The fake need not be purposefully misleading, and intent to facilitate criminal conduct doesn't necessarily require knowledge that the conduct facilitated would be criminal.)

This law does a pretty good job of saying that it's illegal to computer generate a video of a politician taking a bribe. But even if deceptive editing ought to be illegal, I think accomplishing that with this definition and these penalties would be disastrously unclear.


I agree. A world where edited interviews must air with a disclaimer that they have been edited to remove certain portions might be extremely beneficial.


Almost every prerecorded interview would carry the disclaimer, rendering it meaningless. Though it would be fun to see "fake reaction" every time they splice in footage of the interviewer nodding, smiling, scowling, etc.


Almost every prerecorded interview, presented as interviews currently are, would carry the disclaimer. It might cause interviews to be presented differently (either ruthlessly kept on topic, or accompanied by an easy link to the full interview), but even if not, having that disclaimer would be useful as an indicator that forces people to remember that what they are seeing might be out of context, and to look for that context.

Finally, I think it would give people more tools for going after the purveyors of misleading content. Either they would need to have a disclaimer, which could be pointed to for those that accepted the content without reservation, or they could face repercussions.

I see no problem with forcing people and organizations that purport to be representing a real situation, but are instead presenting a view of that situation ideally suited to their own narrative, to disclose that they are doing so. Just because we've been conditioned to tolerate it in our media does not mean it's acceptable or needs to continue as it has.


I strongly agree.

As far as I can tell, this wording covers any sort of misrepresentation set to film: creative editing, lookalikes, photo composites, etc. What constitutes undue creative editing is worryingly undefined; there are a lot of practices which create 'inauthentic records' for both innocent and malicious ends, like cutting segments out of an interview. And it seems to have seriously strange edge cases, since unlike libel rules it's not defined by harm to the person misrepresented - can you edit an interview favorably and be charged for that?

I suppose the "intent to facilitate criminal/tortious conduct" clause is supposed to bypass all of that, but I'm not convinced it actually does so. §1041.b.2.A requires actual knowledge that a record is false for distributors, but §1041.b.1 does not. Presumably the idea is that creating a deepfake shows knowledge, but the deepfake definition itself doesn't require intent by the creator; it's sufficient that the resulting record be false and seemingly authentic. If "intent to facilitate criminal conduct" is read in the same way as e.g. conspiracy statutes, it could connect intent to the conduct, not the criminality. (As a further bit of weirdness, it would then be legal to create without intent to distribute, and later distribute without knowledge. But if you create while intending to distribute, you're in hot water.)

An extreme example: someone rounds a corner and sees Person A punching Person B. They arrive too late to see that B initiated the fight and A acted in self-defense. They quickly take some video so there's evidence, and after the fight ends send it to B, who requested it as evidence with which to file an assault charge. The videographer has now created, with intent to distribute, what falsely appears to be an authentic record of A's conduct (intent not required). And they reasonably expected that the record would be used to affect the conduct of a state judicial proceeding (§1041.c.1.A). The videographer, without knowledge or intent, is now facing 10 years in prison for creation of a deep fake.


"Deep Fakes" in general seem to be already covered by existing libel laws, though that has some problems. Libel is generally a civil matter, and is extremely difficult to prosecute.

There is precedent, even in the US, for criminal libel laws[0]. Perhaps following that path (combined with continued work on detection and defeat of the technology) would be preferable. Since libel and defamation are well-defined and have a long history of jurisprudence, many of the constitutional and legal issues which would have to be answered in opening a new avenue of speech restriction could be contained within the existing contexts.

I'd even think leaving it in the civil arena would make sense until it became a problem worthy of criminalization. Legislating a matter before it arises is almost never a good idea, from either a prudential or principled standpoint.

[0] https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...


Aside from the civil versus criminal distinction, it seems significant that this statute judges harm differently than libel laws. Libel/slander requires reputational damage to an individual, while this is about facilitating criminal/tortious conduct in general. That seems to cover some fairly significant non-libel cases, like knowingly creating a video for someone to use as an alibi.

I agree that it still seems like premature legislation, though. Even from the viewpoint of a random programmer, it's both overbroad (this covers any video editing, not just 'deepfakes' as commonly understood) and incomplete (what happens when no recognizable people are represented, but an edit is still used to facilitate criminal action?)

If doctored video becomes as believable as authentic video, it's going to be a major upheaval, returning us to a world where seeing isn't believing. Probably not completely; there will be an obvious market for hard-to-fake authentication, even in forms as simple as registering a video hash with a trusted source as soon as it's taken. But the departure from a world where a high-res video of an event is reliable proof is going to be a very big change, and I seriously doubt any law written today will productively adjust for it.


Just yesterday it occurred to me that deep fakes could be used to create some of the most potent disinformation campaigns ever. We have already seen memes with misattributed quotes and photos edited to associate Clinton with the devil, satanic rituals and so on. Think of how much more impactful slightly-off quotes and speeches would be.


Yes, it definitely has the potential to become a major issue. So much so that the DoD is already working on tools to detect deepfakes [1]. Of course this will very likely devolve into an arms race akin to antivirus vendors vs malware developers.

The fundamental problem is going to be that many people seem to actually like being in their comfortable echo chamber. So, if they're presented with a video that reaffirms their hunch that the Clintons run a child trafficking ring from under a pizza store, those people seem to be unlikely to venture out and do the necessary due diligence to verify the information they're being given :(

[1]: https://medium.com/mit-technology-review/the-defense-departm...


> those people seem to be unlikely to venture out and do the necessary due diligence to verify the information they're being given

Especially since people in those circles have for the last year been pushing the idea that the attention on deep fakes is preemptive cover for such a video rumored to be out there.


And .. what if it is preemptive cover? What do we do about that circumstance?

Like, seriously, this is shaping up to be a cyberwar nobody wants to admit. Either we can trust video sources from now on, or we can't - how is it going to be possible for TPTB to continue to divert public attention with video, if we can no longer trust our own eyes?


Hardware-based signing-while-recording, and only consider previously recorded self-verified unsigned recordings legitimate, otherwise unsigned=assumefake.


> only consider previously recorded self-verified unsigned recordings legitimate, otherwise unsigned=assumefake

Can you clarify this part, especially "self-verified unsigned"? I don't think I understand it - presumably you could generate camera-and-video-specific signatures which an edit couldn't reproduce, but it's not obvious to me how you would verify them without access to the original camera.

As far as sign-while-recording, I agree. We'll quickly end up in a world where video of ongoing events is signed as it's taken. The simplest tactic I can think of is submitting a hash of the video file to some public store, which at least authenticates when it was shot. For an event with a well-known time (a speech, public protest, etc), that should suffice to prove the footage isn't faked, and corroboration of multiple videos should prove it's not an outright fabrication (e.g. pre-rendering a whole different speech).

For time-nonspecific events like a video of a suspicious meeting, this wouldn't be enough. A hardware signature might prove that a video was real, but we'd still face a significant change in that unsourced videos floating around the web couldn't be treated as convincing. We're already getting there with photos, of course, but at the moment a clear, high-res video is pretty trustworthy without a source.
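
To make that hash-registration tactic concrete, here's a minimal Python sketch; the "public store" is deliberately left abstract, since any append-only, timestamped log would serve:

    import hashlib, json, sys, time

    def video_sha256(path, chunk_size=1 << 20):
        # Hash the raw file bytes in chunks so large videos
        # don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        record = {
            "sha256": video_sha256(sys.argv[1]),
            "registered_at": int(time.time()),  # what the log would attest to
        }
        # Publishing this record proves the file existed by that time;
        # it says nothing about how the footage was produced.
        print(json.dumps(record))

Since any later edit changes the digest, a registered hash pins the exact footage; but as noted above, it only authenticates when the footage existed, not that it's genuine.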


Whether it is done or not doesn't even matter. It also does not matter whether deep faking videos is made illegal.

All that matters is that it is possible to do so, and supporters of political candidate A will claim with plausible deniability that political candidate A never said "X", while also claiming that political candidate B definitely said "Y", even though this may be false. Soon there will be no way to prove whether or not the videos are actually real, and people will believe whatever fits their preconceived notions.


Deep fakes lack context. There are no witnesses, etc. So they'll be most "effective" at trying to establish something in the past. It'll be harder to produce believable context for the near past (the present minus a day, a month, a year?). So they may be useful for sowing distrust in someone by establishing that they did something untoward in the past, but less likely to incite immediate reaction about something happening now or in the recent past, once people become used to the possibility.


Presumably you'd fake things it would be plausible the politician was trying to keep secret. So you'd expect there to be no witnesses.

For example you'd fake hidden camera footage of politicians beating prostitutes and accepting brown envelopes from russian agents.


People will "remember" things that didn't happen.


People already remember things that didn't happen; and public figures already get maligned by news outlets lying about what they did and where they went. Even when the situations are revealed to be lies.


There was a "middle finger scandal" in 2015, where German journalists found a clip of the man who was to become the Greek finance minister giving a speech, saying "[Greece] should stick the finger to Germany" and making the gesture as well, which of course stoked the typical "outrage culture".

Strangely, a German TV satire team later claimed they had doctored the video to make it look like he was giving the finger: https://www.youtube.com/watch?v=Vx-1LQu6mAE (although I still wonder if they did the opposite and doctored the gesture out of the video; IMO the gesture itself was par for the course for him and perfectly acceptable).


We had a crude version of this a few months ago:

https://www.marketwatch.com/story/video-of-acosta-incident-p...


Just yesterday? The very first examples we saw of this kind of tech were heads of state being puppeted to say things. That and porn are about all it's been used for.


IIRC, the majority of Pizzagate was centered around Clinton's campaign chair John Podesta, not the candidate herself.

John and Tony Podesta collect some very creepy art - google it. Certainly not evidence of a crime, but fairly-public figures closely connected to a presidential campaign collecting art of children bound and in morbid situations raises some eyebrows for sure.


In theory you would need to attend all events in person again, as you would no longer be able to trust anything around you.

We would need to invent something like public/private key authentication that signs every video stream and allows you to trust specific sources, like reporters from the NYT for example.
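
As a rough sketch of what per-source signing could look like, using Ed25519 from the Python "cryptography" package (real systems would also need key distribution and tamper-resistant key storage, which this ignores):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical source key; in practice it would live in secure
    # hardware on the camera, or with the news organization.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    def sign_segment(segment: bytes) -> bytes:
        # Each recorded segment gets its own signature.
        return private_key.sign(segment)

    def verify_segment(segment: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, segment)
            return True
        except InvalidSignature:
            return False

    segment = b"...raw video segment bytes..."
    sig = sign_segment(segment)
    assert verify_segment(segment, sig)              # untouched footage passes
    assert not verify_segment(b"edited bytes", sig)  # any alteration fails

Verification only tells you the bytes came from whoever holds the key, so the trust question moves to how you learn, and revoke, publishers' keys.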


You can watch this video of Obama calling Trump names already: https://www.youtube.com/watch?v=cQ54GDm1eL0 Those times have already begun.


Ok. One generation might be a bit confused, thinking that what they see on video is what actually happened. But as the technology becomes common, later generations will just intuitively understand that video by itself proves nothing and was more likely made for the lulz than recorded as it happened.

Video of Trump saying something already proves nothing, especially not that he ever said it, because now he says he never did. And all that even without deep fake technology.

People care less and less about the history of reality. They are becoming aware that it is to a large extent unknowable, drowned in narratives.


> We have already seen memes with misattributed quotes and photos edited to associate Clinton with the devil, satanic rituals and so on

Right, like my favorite one is the idea that she took $150 million from Russia in exchange for a shipment of our uranium. Those funds went to the Clinton Foundation, a _charity_. Sheesh.


I don't know the details of this Clinton-uranium-$$ story, but I can think of a few questions to ask...

1. If you were going to give somebody an illegal bribe, would you write "bribe" in the For: line on the check? (no, you would hide or disguise your bribe)

2. What plausible, legitimate reasons are there for Russia to donate to the Clinton Foundation?

3. Is charitable giving to anybody, especially in quantities exceeding $25-50 million, normal for Russia?

Cthulhu_ and arcticfox are definitely both right that charities are used by many rich people to avoid taxes and just because it's a charity doesn't mean the owner isn't benefiting from it.

I'd appreciate hearing your response. Not saying it happened one way or the other, just making some observations based on the tiny bit I've heard about this.

Edit:

Regarding #2, a quick search turned up [1]. I don't know anything about the Borgen Project, but at a quick glance the page doesn't look too suspicious. So, assuming that page is correct and that Russia donated similar amounts in 2013 as in the year the uranium incident happened, over 50% of its charitable giving went to the Clinton Foundation. If those assumptions are fair and that actually is the case, it sure sounds suspicious to me.

[1]: https://borgenproject.org/russias-charitable-giving/


Don't all rich people stash their billions in charities to avoid taxes anyway?


> Those funds went to the Clinton Foundation, a _charity_. Sheesh

While independent investigations have concluded there was no wrongdoing, the reason above is really no defense. The Trump Foundation, for example, has shown how nonprofits can be manipulated to the benefit of the founders.


No _proof_ of wrongdoing. They also failed to come up with any plausible reason for the payment other than wrongdoing.


IANAL, but based on the wording, it seems like it's only unlawful to create w/ intent to distribute, or to knowingly distribute, a deep fake IFF you're facilitating unlawful conduct. So for example, blackmail? Does libel count? Does the very act of distributing a deep fake count as libel against the subject?

Can any lawyers chime in?


> (2) the term ‘deep fake’ means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual;

I see nothing in this act that restricts the applicability of so broad a definition of "deep fake" so as not to cover the activities of Hollywood, specifically the CGI mapping of actors onto body doubles such as that performed in "The Crow" (1994).

Yet another deeply troubling knee-jerk reaction of an act that promises to catch "just the bad".


Conversely, I think that we may soon need regulation around Hollywood's right to digitally reincarnate dead actors for use in their commercial film productions.

The idea of Universal CGI'ing George Carlin into saying politically-tinged speech he would disagree with would be a travesty. And I'm not sure how that would be prevented, save his estate successfully suing Universal.


I mean, deep fakes clearly seem protected by the first amendment, particularly strongly in the case of parody. That's not to say, however, that they are clear of libel law.


So are they protected or not?


Basically my argument is that they're protected from criminal but not civil charges.


The crime is not well-defined. Meanwhile, there's this out without further explanation: "No person shall be held liable under this section for any activity protected by the First Amendment to the Constitution of the United States.’’


Lots of legislation effectively leaves it to courts to hash the actual specifics out. Courts can only rule on specific examples, so it prevents a genre of legal stupidity.


Does anyone know how this act would interact with the first amendment?

It's my understanding that the first amendment is extremely broad - if donating money can be counted as protected speech, surely 'parody' videos of politicians would also be?


Page 4, line 8:

    (2) FIRST AMENDMENT PROTECTION.—No person shall be held liable under this section for any activity protected by the First Amendment to the Constitution of the United States.’’.


This seems like a bit of CYA on the part of the drafters for the inevitable court challenge. "We did consider the constitutional issues, we promise!"

Without specifics, that line is meaningless.

edit: This speaks to a larger problem with how our representatives work these days. They're supposed to respect the limits placed on their power and they're supposed to guard the powers they are given jealously. Instead, they trip over each other to run away from their lawful authority, and when they finally decide to "do something", limits are dismissed with a wave of the hand.


But what activity does this act prohibit, that isn't "protected by the First Amendment"?


Libel is a crime in some states, and many states have defamation laws. It wouldn't be hard to tack this onto those and--if they really want to push it, which they will if given the chance--make it a sex crime to get that lifelong punishment.


Parody is a valid defence only if a reasonable person would consider it parody. We have libel laws. They work well enough. I think of this as an extension of libel protection.

Though in the long run, the defence will need to be algorithmic. Not all actors are stateside.


Well, making a deep fake is legal; using it for illegal activities is not - freedom of speech is not freedom from consequences. You're free to make a bomb threat and nobody will silence you if you do, but it's likely the FBI will knock on your door.

It's not about what you say (or in this case produce), it's about your intent.


Not to disagree with your statement about intent, but...

>freedom of speech is not freedom from consequences

...that's not a very convincing form of free speech. What exactly does "free speech" mean then, if not freedom from consequences? It's not as if it's even possible to silence someone before the fact.

"You are free to criticise comrade Stalin! You are also free to go to gulag!"


>freedom of speech is not freedom from consequences

Usually when I see that statement it’s to make the point that freedom of speech (in the first amendment sense) protects you from government consequences, but not societal consequences. The government won’t stop you from saying something offensive, but your country club can kick you out for saying it.

I agree with you that this sense doesn’t really apply here, though.


Merriam-Webster defines freedom of speech as "the legal right to express one's opinions freely".

Creating a deep fake of a celebrity hardly counts as expressing your opinions and if you use it for illegal activities that's far out of scope.


> Creating a deep fake of a celebrity hardly counts as expressing your opinions...

On its own, sure. It may still be one step in the process, however. On their own merits, renting out an audience hall or TV broadcast slot, or operating a 3D graphics program, are not "expressing one's opinions" either. These things are only a means to an end. However, restrictions on such activity would still impact your ability to effectively communicate your opinions to the public.

Use of "deep fakes" (or shallow ones) in a commercial context to trick someone into purchasing goods or services or otherwise agree to a contract under false pretenses is fraud—which in the end is just a form of theft, and deserves to be treated as such. That doesn't violate the 1st Amendment because it's not the speech per se which is being punished, but rather the act of taking someone else's stuff when its ownership was never properly transferred to you through a valid contract. Contracts require "meeting of the minds", which is precluded by fraud.

Other than that narrow scope, the law has no business being involved.


>Other than that narrow scope, the law has no business being involved.

I would disagree there. The personal rights of the people who are being deepfaked (both provider of body acting and the face) are violated unless you get consent from both. These personal rights in my opinion also trump any free speech rights as these people have their own, more important rights to their body and images thereof.


The most stringent proportional response available to punish people for creating these "deep fakes" would simply be for those offended by the practice to do the same in return. That is their right, of course—turnabout is always fair play—and it may help reduce the impact of the original fake video, but it's unlikely to be seen as much of a punishment.

> as these people have their own, more important rights to their body and images thereof

Nonsense. You do own your physical body, of course, and consequently have the right to use it as you please, but there is no right to control images of your body. That would be tantamount to claiming the right to control the contents of others' minds.

Note that your right to use your body does not imply that you have the right to use others' bodies or other property as you please just because your own body happens to be involved. Owners have the right to veto any action which affects their use of their property; in other words, you have the right to do whatever you want as long as it involves only your own property, but need others' consent if your actions would impact their use of their property.

Regarding the image: (a) the image (the content, as opposed to the physical media) is not property; (b) even if it were, it would not be your property; (c) even if it were your property, someone else's use of the image would not have any impact on your own ability to use it, so you wouldn't have the right to veto that use.

Rights sometimes overlap, but they never conflict and are certainly never "trumped" by other rights.


> but there is no right to control images of your body. That would be tantamount to claiming the right to control the contents of others' minds.

There is; at least where I live, you have rights over the image of your face and body (personality rights, strictly speaking, rather than copyright). That gives you the right to tell others how and when that image may be used.

And you would certainly have the right to veto certain uses of your image.

Rights conflict, and some rights can trump others. For example, the police may place others' right not to be hurt over your right to bear arms in the US. If you spoke loudly enough with a megaphone into someone's ear, their right not to be bodily harmed would conflict with your right to free speech and would likely trump it.

Another example would be religious rights; they frequently trump other rights and laws, and some laws and rights trump your freedom of religion.

Rights aren't equal and black and white; they have an order of importance that depends on the situation and their importance to society.


This actually would be the first step towards criminal libel.

Traditionally, malicious cartoons would be a civil matter. This is just an animated misrepresentation.


> Creating a deep fake of a celebrity hardly counts as expressing your opinions

I agree that deep fakes seem an odd and unexpected fit for a dictionary editor's word choice made under the obvious constraint of brevity. But I don't get the relevance.

To be clear: I don't see what it has to do with the relationship between producing/distributing a deep fake and a) the explicit rights enumerated in the 1st Amendment, b) the implicit rights reserved for citizens by the 1st Amendment being written in the scope of the U.S. Constitution, and c) 200+ years of legal precedent which clarifies the scope of free speech protections of the 1st Amendment.


Would you say that this parody video [1], which takes several speeches from then-British prime minister David Cameron and cuts between them, sometimes mid-word, to construct a song with lyrics like "I am disgusted by the poor, and my chums matter more", is political speech? Would you say it's fake, and of a celebrity?

[1] https://www.youtube.com/watch?v=0YBumQHPAeU


I would say it's not political speech but it's a parody containing a figure of public interest.


It would seem to ban only deep fakes created internationally or across state lines. This appears to prop up the locally created deep fake market.

I'm only half kidding, it seems like the immediate defense would be: I made it at home or in a data center in my home state thus the statute does not apply.


I suspect this limitation exists because the Supreme Court has traditionally held the federal government to such limitations in many areas.


The US Constitution only gives the Federal government the power to regulate international and interstate commerce (and of course other powers too, but those aren't relevant here). Article I section 8: https://www.archives.gov/founding-docs/constitution-transcri...

The Bill of Rights (first 10 amendments to the Constitution) explicitly states that all powers not explicitly granted to the Federal government are retained by the states and by the people. So that's probably why this limitation exists.

Of course, the Federal government has really stretched the meaning of "To regulate Commerce ... among the several states" and has gained quite a bit of power out of that line. But it's still a limitation.


I'd bet that this act would paradoxically increase the power of "deep fakes" in that the problem isn't the existence of "deep fakes" but each person's reaction to them. The prevailing culture of knee-jerk hot-takes already thrives on selectively edited video and fresh discoveries of forgotten pasts. If it isn't inoculated into thoughtful skepticism by steady revelations of various fakes it will become more susceptible to the few left un-prosecuted under this law.


How does one prove the fake is fake?

There's going to be development of "live notarised" data streams, where a camera feed gets certified, etc.

People are going to use deep-fake tech as an excuse -- someone faked me, I'm not racist/sexist/fascist/... . How can you show the real is real?

I'm imagining a future where you can choose the actors in the lead roles of your films; great for narcissists!


Heinlein explored this idea of Fair Witness as a profession in his 1961 novel Stranger in a Strange Land.

https://lccn.loc.gov/61011702


Jeez, yet another annoying new law encroaching on something fun I do as a hobby. I'm really sick of our lawmakers doing this when our existing legal code seems sufficient to punish abuse.

Does this mean I should take down my Donald Trump text to speech engine [1]? Or consult a lawyer?

It yields really poor quality (right now), and I doubt any reasonable person would consider it to be actual audio from Trump.

Does this prohibit me from improving it? I was about to train an ML model on my samples and switch to parametric generation.

[1] http://trumped.com


In other words: deep state sets the stage for forthcoming shocking real videos of major players.

Yes, yes, downvote into oblivion, but remember this when it happens...


This concerns 'records', but I wonder if a program that does a realtime overlay would sidestep the moral and possible legal issues.


I'm curious about peoples' views on something.

It's clear that in the future we will be able to create fakes that are effectively indistinguishable from reality. The audio in this 'Trump speech' [1] is already remarkable. So there are two ways we can go from here. The first is to try to maintain faith in multimedia. What you see or hear is probably real because we try to pass a bunch of laws making it a really bad thing to try to impersonate people.

The second is to go the route of internet speech today. If somebody claims to be somebody of note, you generally would not believe them without extensive proof. And so what they say does not reflect upon the person they claim to be. If video/audio manipulation tech was allowed without constraint, this would eventually become the same for general audio/video. Having no trust in what you see or hear is not a great thing. But at the same time, I think there's a very good argument to be made for the fact that people are already far too susceptible to fake information because we, even before the 'real' advent of deep fake type technology, are still teetering on the precipice between believable and not.

For instance this [2] famous image of "animal testing" that keeps going viral on social media every couple of years. It has nothing to do with animal testing, but people are naive. Deep fakes would throw us well off that precipice to the point that I think we'd see substantially increased amounts of scrutiny given to misinformation. The downside here is of course we'd also see substantially increased amounts of scrutiny given to legitimate information, though I'm not entirely sure I see that as a negative.

[1] - https://www.youtube.com/watch?v=7Gpc_artOYI

[2] - https://speakingofresearch.com/2014/02/27/fact-into-fiction-...


Sooner or later they have to make a full-length feature film using this technology.


I thought there already was (some of) one. Starring Nicolas Cage, as everyone.


That's amazing.


Forrest Gump comes to mind, and this was what, 20 years ago?


An *entire* full-length movie.


It's funny how a very simple DL architecture that any beginner could prepare ended up in a prohibition act. Essentially: train two convolutional autoencoders, one on the face you want to replace (extracting a dataset by, e.g., tracking the face with OpenCV in the target video), the other on many pictures of the face you want to insert (e.g., scraped from the Internet or from known videos); then glue together the encoder from one and the decoder from the other, assuming the latent variables match, and suddenly a fake video is done.
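
A toy PyTorch sketch of that pipeline, with illustrative dimensions and the training loops omitted (note that real deepfake tools usually share a single encoder across both faces, precisely so the latent variables line up):

    import torch
    import torch.nn as nn

    class FaceAE(nn.Module):
        # Toy convolutional autoencoder for 64x64 RGB face crops.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    ae_target = FaceAE()  # train on face crops tracked in the target video
    ae_source = FaceAE()  # train on scraped photos of the face to insert
    # (training loops omitted; plain MSE reconstruction loss suffices)

    # The swap: encode a target frame, decode it with the source decoder.
    frame = torch.rand(1, 3, 64, 64)  # stand-in for one tracked face crop
    swapped = ae_source.decoder(ae_target.encoder(frame))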

IMO if you really want to control deep fakes, you should use a blockchain and track all processing steps, starting from image/video acquisition. I don't want to give anyone ideas, but they will do it anyway.


This doesn't even begin to make sense. You can't force people to register every creation on the blockchain, and there can't be a main "only real things allowed" chain. If we want to shoehorn a blockchain into every possible problem ever, let's begin with ones where it could actually work.


The idea is to require equipment manufacturers to embed uniquely identifiable IDs in all imagery/footage their equipment produces, entice all software vendors to register every operation in a blockchain and store a reference to it in metadata, and then reject as "fake" any footage whose processing steps can't all be verified.

How is that any different from tracking production inputs in logistics chains using a blockchain?
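
In spirit it's the same append-only log; here's a minimal sketch of a per-clip, hash-chained edit history (all field names and step labels are hypothetical):

    import hashlib, json, time

    def link(prev_hash: str, operation: str, output_sha256: str) -> str:
        # Each processing step commits to the previous one, ledger-style,
        # so any unregistered edit breaks every later link.
        payload = json.dumps(
            {"prev": prev_hash, "op": operation,
             "out": output_sha256, "t": int(time.time())},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    GENESIS = "0" * 64
    h0 = link(GENESIS, "capture:device-id=XYZ", "aaa...")  # camera firmware
    h1 = link(h0, "trim:00:00-00:30", "bbb...")            # editing software
    h2 = link(h1, "transcode:h264->vp9", "ccc...")         # publishing step
    # A verifier replays the chain against the published log and rejects
    # footage whose steps don't all check out.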


Nothing can stop me from pointing a camera at my computer monitor and recording a verified video of a non-verified video.


Sure, but in that case your chain starts with your camera producing "pro"-looking video, which would be totally unlikely (not to mention trivial stuff like noticing monitor/lens distortions, etc.). Analyzing the subsequent processing steps stored in the blockchain would likely show a very low probability of authentic work, due to many required steps being missing. (OK, there is still a certain low probability you can fool it somehow, but would you want to waste so much time/processing power on it?)


This is whack-a-mole. If I can design deepfake software, it isn't much harder to design it to specifically anticipate the user filming the result. The user would input their monitor specs, their camera specs, etc., and the software would produce a weirdly distorted video which looks perfect when filmed with that camera from that monitor.

Or, people just outright sell doctored cameras where you can intercept the input feed.

This isn't like adult content filtering, where all that matters is that kids can't get around it. You have to assume people who know what they're doing are going to attack your technical solution, and apply ingenuity in doing so. When the enemy consists of hackers, you can't ward them off with a hack!


I believe that the usual response to this is that the devices include some kind of unmodifiable time and location stamping, although that argument spirals out of control pretty quickly.



