> The term 'deep fake' means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.
This limits the scope of the act to prohibiting deep fakes that are not explicitly labeled as such. If the recording (or, potentially, links to the recording) contains any type of disclaimer, that should be sufficient to establish that no reasonable observer would consider it to be authentic.
But I wonder if this provides enough protection to victims in the case of deep fakes where a person's face is grafted onto sexual content or other content that might embarrass the person, whether or not a disclaimer exists.
For instance, if someone puts an acquaintance's face into a sex scene and distributes it online, then even if there's a huge "This is a deep fake" scrolling disclaimer across the video, that person may still feel as if they have been defamed.
A lot of people might not like relinquishing the authoritarian impulse to control the fictional creative output of others when it specifically targets them in a negative and public manner, but unfortunately respecting freedom of speech demands that deep fakes be protected from civil or legal action. We have to adapt our laws to reality as technology changes it, not the other way around.
However, if a deep fake is used to specifically target and harass an individual on a personal level, i.e. not parody/political speech but actually contacting and engaging with family/friends/associates of the victim in order to cause harm, then afaik that is already covered by existing harassment laws.
Social norms, and law as a consequence, always have to strike a balance between one person's freedom of [x] and others' personal freedom of [x, y].
Just because a technology exists doesn't mean it should be used in any way imaginable. As an extreme example, take "knife technology": you just should not put that piece of tech into a human body to see what happens... so it's clear where the balance lies in that case. Deep fakes should be treated like any other human-dignity-vs-free-speech discussion, imo. There also seem to be different norms in different countries regarding this; it's not always authoritarian.
The reality is that when that happens, and we all know it will, the good legislators will be under ENORMOUS pressure to close that loophole, and that loophole will be closed. Same thing happened with regular revenge porn, it was "loophole legal" at first, then the law came down on it like a hammer because the pressure just grew too large. The amount of time behind bars that they give out for revenge porn is increasing even as we type.
To be completely honest, I'm fairly certain this act won't pass muster as it's written specifically because the revenge porn loophole is in it. The combination of the anti-revenge porn ladies and the law enforcement people will just make this way too risky a "yes" vote for anyone in a competitive district.
> if a deep fake is used to specifically target and harass an individual on a personal level ... then afaik that is already covered by existing harassment laws.
So you need a more narrow definition. If I do a deep fake of Melania giving Trump a golden shower in Moscow, that goes way beyond parody, even though it is, technically, by definition, parody.
So there is no way to say that is not a personal attack, AND there is no way to say that is not a parody.
That's the loophole that needs to be closed. You can try to argue that showing a hot school teacher or some male cop performing oral on a roomful of men was only parody, but if you get away with it there will be about 20 seconds before the law is changed. All the activists and the law enforcement people will claim that we just can't have video of every ex-girlfriend, cop, school teacher, or boss in the nation involved in whatever sexual activity seems the most degrading.
By what principle is this the case?
> That's the loophole that needs to be closed.
You're not closing a loophole. You're limiting free speech.
Don’t defamation or personal honour laws cover this scenario already in most countries, as would be the case with obscene caricatures of someone? Do we need new laws specific for deep fakes in that case? What about for videos that very obviously just paste the target’s face on top of another video?
> All Characters and events in this show -- even those based on real people -- are entirely fictional. All celebrity voices are impersonated... poorly. The following ...
Presumably this 'created' case is meant to prevent something like Nvidia's face-gen tool being used to create a lookalike video from scratch with the defense "it's not an alteration".
But there's no wording restricting this to digital generation (or alteration). As far as I can tell, this covers anything from a photo with a lookalike to a real but misleadingly-cut sound recording. Am I missing something, or would this have made producing that 2004 faked photo of Jane Fonda and John Kerry a felony offense?
In any event, are you sure it protects you from criminal liability if you label something as a deep fake, then other people redistribute it without that label?
If I print off a copy of a Van Gogh making it clear that it's a copy, and then somebody else sells it as an original, who is liable for fraud?
Yes they both involve making visual copies of things but the underlying dynamics are completely different.
No, I'm not sure. I said "potentially," but I don't know.
I think it may be plausible to argue that labeling a recording outside of the recording itself should be one factor in deciding whether a reasonable observer would consider it authentic.
And I think it may also be plausible to argue that doing so protects the labeler, and that (particularly if the license to redistribute requires that the disclaimer be shown in any subsequent distribution channels) only the people who distribute it without the label should be found to have violated the act.
> (2) the term ‘deep fake’ means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual; and
Additionally, if you look at this and just say "well, we all know what a deep fake is, so your point is moot," I will say, somewhat at the risk of contradicting myself, that maybe the language needs to be forward-thinking to cover whatever the next "deep fake" is.
In my opinion, the sort of clause above would be better written like:
The term "Computer-generated audiovisual impersonation" means an
audiovisual record created or altered by computer generation in
a manner that the record would falsely appear to a reasonable
observer to be an authentic record of the actual speech or conduct
of an individual;
Maybe that wouldn't be so bad. This technique has been used to deceive untold times.
If you cut a real interview answer to align with a different question than was asked, is that altered to a false appearance? What if it's done in good faith, to streamline out an interviewer's request for clarification?
If you share real, unedited footage of someone, but end it before they give a caveat to their comment, is that an authentic record of their speech? How about if you cut off a more serious inversion, like "what my opponents want you to think I'd say..." What about footage of a fight that starts too late and misrepresents the aggressor?
Weirdest of all, if you edit an interview favorably, is that a felony? Since this isn't a libel law but a public-interest law, could cutting out a stupid answer or trimming filler words form an inauthentic record of someone's speech?
(Actually, even worse: a slightly broad reading of 1B and 2A suggests that it's even possible to commit this crime without intent. The fake need not be purposefully misleading, and intent to facilitate criminal conduct doesn't necessarily require knowledge that the conduct facilitated would be criminal.)
This law does a pretty good job of saying that it's illegal to computer generate a video of a politician taking a bribe. But even if deceptive editing ought to be illegal, I think accomplishing that with this definition and these penalties would be disastrously unclear.
Finally, I think it would give people more tools for going after the purveyors of misleading content. Either they would need to have a disclaimer, which could be pointed to for those that accepted the content without reservation, or they could face repercussions.
I see no problem with forcing people and organizations that purport to be representing a real situation but are instead presenting a view of that situation ideally suited to their own narrative to note they are doing so. Just because we've been conditioned to be tolerant of it in our media does not mean it's acceptable or needs to continue as it has.
As far as I can tell, this wording covers any sort of misrepresentation set to film: creative editing, lookalikes, photo composites, etc. What constitutes undue creative editing is worryingly undefined; there are a lot of practices which create 'inauthentic records' for both innocent and malicious ends, like cutting segments out of an interview. And it seems to have seriously strange edge cases, since unlike libel rules it's not defined by harm to the person misrepresented - can you edit an interview favorably and be charged for that?
I suppose the "intent to facilitate criminal/tortious conduct" clause is supposed to bypass all of that, but I'm not convinced it actually does so. §1041.b.2.A requires actual knowledge that a record is false for distributors, but §1041.b.1 does not. Presumably the idea is that creating a deepfake shows knowledge, but the deepfake definition itself doesn't require intent by the creator; it's sufficient that the resulting record be false and seemingly authentic. If "intent to facilitate criminal conduct" is read in the same way as e.g. conspiracy statutes, it could connect intent to the conduct, not the criminality. (As a further bit of weirdness, it would then be legal to create without intent to distribute, and later distribute without knowledge. But if you create while intending to distribute, you're in hot water.)
An extreme example: someone rounds a corner and sees Person A punching Person B. They arrive too late to see that B initiated the fight and A acted in self defense. They quickly take some video so there's evidence, and after the fight ends send it to B, who requested it as evidence with which to file an assault charge. The videographer has now created with intent to distribute what falsely appears to be an authentic record of A's conduct (intent not required). And they reasonably expected that the record would be used to affect the conduct of a state judicial proceeding (§1041.c.1.A). The videographer, without knowledge or intent, is now facing 10 years in prison for creation of a deep fake.
There is precedent, even in the US, for criminal libel laws. Perhaps following that path (combined with continued work on detection and defeat of the technology) would be preferable. Since libel and defamation are well-defined and have a long history of jurisprudence, many of the constitutional and legal issues which would have to be answered in opening a new avenue of speech restriction could be contained within the existing contexts.
I'd even think leaving it in the civil arena would make sense until it became a problem worthy of criminalization. Legislating a matter before it arises is almost never a good idea, from either a prudential or principled standpoint.
I agree that it still seems like premature legislation, though. Even from the viewpoint of a random programmer, it's both overbroad (this covers any video editing, not just 'deepfakes' as commonly understood) and incomplete (what happens when no recognizable people are represented, but an edit is still used to facilitate criminal action?)
If doctored video becomes as believable as authentic video, it's going to be a major upheaval, returning us to a world where seeing isn't believing. Probably not completely; there will be an obvious market for hard-to-fake authentication, even in forms as simple as registering a video hash with a trusted source as soon as it's taken. But the departure from a world where a high-res video of an event is reliable proof is going to be a very big change, and I seriously doubt any law written today will productively adjust for it.
The fundamental problem is going to be that many people seem to actually like being in their comfortable echo chamber. So, if they're presented with a video that reaffirms their hunch that the Clintons run a child trafficking ring from under a pizza store, those people seem to be unlikely to venture out and do the necessary due diligence to verify the information they're being given :(
Especially since people in those circles have for the last year been pushing the idea that the attention on deep fakes is preemptive cover for such a video rumored to be out there.
Like, seriously, this is shaping up to be a cyberwar nobody wants to admit. Either we can trust video sources from now on, or we can't - how is it going to be possible for TPTB to continue to divert public attention with video, if we can no longer trust our own eyes?
Can you clarify this part, especially "self-verified unsigned"? I don't think I understand it - presumably you could generate camera-and-video-specific signatures which an edit couldn't reproduce, but it's not obvious to me how you would verify them without access to the original camera.
As far as sign-while-recording, I agree. We'll quickly end up in a world where video of ongoing events is signed as it's taken. The simplest tactic I can think of is submitting a hash of the video file to some public store, which at least authenticates when it was shot. For an event with a well-known time (a speech, public protest, etc), that should suffice to prove the footage isn't faked, and corroboration of multiple videos should prove it's not an outright fabrication (e.g. pre-rendering a whole different speech).
For time-nonspecific events like a video of a suspicious meeting, this wouldn't be enough. A hardware signature might prove that a video was real, but we'd still face a significant change in that unsourced videos floating around the web couldn't be treated as convincing. We're already getting there with photos, of course, but at the moment a clear, high-res video is pretty trustworthy without a source.
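The hash-registration idea can be sketched in a few lines. This is a minimal illustration, not a real notary protocol: `hash_video` and `make_timestamp_record` are hypothetical names, and a real public store would attach its own trusted timestamp rather than trusting the submitter's clock.

```python
import hashlib
import time

def hash_video(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a video file, streaming to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def make_timestamp_record(path: str) -> dict:
    """The record you'd submit to a public store right after filming.

    Anyone can later recompute the hash of a circulating copy and check
    that it matches a record registered near the claimed time of the event.
    """
    return {
        "sha256": hash_video(path),
        "submitted_at": int(time.time()),  # a real store would use its own clock
    }
```

Note that this only proves the file existed at registration time; it says nothing about whether the content was fabricated beforehand, which is exactly the time-nonspecific weakness above.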
All that matters is that it is possible to do so, and supporters of political candidate A will claim, with plausible deniability, that political candidate A never said "X". While also claiming that political candidate B definitely said "Y", even though this may be false. Soon there will be no way to prove whether or not the videos are actually real, and people will believe whatever fits their preconceived notions.
For example, you'd fake hidden-camera footage of politicians beating prostitutes and accepting brown envelopes from Russian agents.
Strangely, a German TV satire team has claimed they doctored the video to make it look like he's giving the finger: https://www.youtube.com/watch?v=Vx-1LQu6mAE (although I still wonder if they did the opposite and doctored the gesture out of the video; IMO the gesture itself was par for the course for him and perfectly acceptable).
John and Tony Podesta collect some very creepy art - google it. Certainly not evidence of a crime, but fairly-public figures closely connected to a presidential campaign collecting art of children bound and in morbid situations raises some eyebrows for sure.
We would need to invent something like public/private key auth which signs every videostream and which allows you to trust specific sources like reporters from NYT for example.
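A minimal sketch of that idea, assuming a chained-signature scheme so frames can't be dropped or reordered undetected. For brevity this uses Python's stdlib `hmac` (a shared-secret MAC) as a stand-in for a real public-key signature such as Ed25519, which is what you'd actually want so viewers can verify with only the publisher's public key; all function names are hypothetical.

```python
import hmac
import hashlib

def sign_frame(secret_key: bytes, frame: bytes, prev_sig: bytes) -> bytes:
    """Sign a frame chained to the previous signature, so frames
    can't be dropped, reordered, or replaced without detection."""
    return hmac.new(secret_key, prev_sig + frame, hashlib.sha256).digest()

def sign_stream(secret_key: bytes, frames: list) -> list:
    """Produce one signature per frame, each depending on all prior frames."""
    sigs = []
    prev = b"\x00" * 32  # genesis value for the first frame
    for frame in frames:
        prev = sign_frame(secret_key, frame, prev)
        sigs.append(prev)
    return sigs

def verify_stream(secret_key: bytes, frames: list, sigs: list) -> bool:
    """Recompute the chain and compare against the published signatures."""
    return sigs == sign_stream(secret_key, frames)
```

With a real asymmetric scheme, an outlet like the NYT would publish the verification key and keep the signing key inside the camera or editing pipeline.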
Video of Trump saying something already proves nothing, especially not that he ever said it, because now he simply says he never did. And all that even without deep fake technology.
People care less and less about the history of reality. They are becoming aware that it's to a large extent unknowable, drowned in narratives.
Right, like my favorite one is the idea that she took $150 million from Russia in exchange for a shipment of our uranium. Those funds went to the Clinton Foundation a _charity_. Sheesh.
1. If you were going to give somebody an illegal bribe, would you write "bribe" in the For: line on the check? (no, you would hide or disguise your bribe)
2. What plausible, legitimate reasons are there for Russia to donate to the Clinton Foundation?
3. Is charitable giving to anybody, especially in quantities exceeding $25-50 million, normal for Russia?
Cthulhu_ and arcticfox are definitely both right that charities are used by many rich people to avoid taxes, and just because it's a charity doesn't mean the owner isn't benefiting from it.
I'd appreciate hearing your response. Not saying it happened one way or the other, just making some observations based on the tiny bit I've heard about this.
Regarding #2, a quick search turned up . I don't know anything about the Borgen Project, but a quick glance at the page doesn't look too suspicious. So, assuming that page is correct and that Russia donated similar quantities in 2013 as they did the same year the uranium incident happened, they donated over 50% to the Clinton Foundation. If those assumptions are fair and that actually is the case, it sure sounds suspicious to me.
While independent investigations have concluded there was no wrongdoing, the reason above is really no defense. The Trump Foundation, for example, has shown how nonprofits can be manipulated to the benefit of the founders.
Can any lawyers chime in?
I see nothing of this act that restricts the applicability of so broad a definition of "deep-fake" so as not to cover the activities of Hollywood, specifically the CGI mapping of actors onto body doubles such as that performed in "The Crow" (1994).
Yet another deeply troubling knee-jerk reaction of an act that promises to catch "just the bad".
The idea of Universal CGI'ing George Carlin into saying politically-tinged speech he would disagree with would be a travesty. And I'm not sure how that would be prevented, save his estate successfully suing Universal.
It's my understanding that the first amendment is extremely broad - if donating money can be counted as protected speech, surely 'parody' videos of politicians would also be?
(2) FIRST AMENDMENT PROTECTION.—No person shall be held liable under this section for any activity protected by the First Amendment to the Constitution of the United States.’’.
Without specifics, that line is meaningless.
edit: This speaks to a larger problem with how our representatives work these days. They're supposed to respect the limits placed on their power and they're supposed to guard the powers they are given jealously. Instead, they trip over each other to run away from their lawful authority, and when they finally decide to "do something", limits are dismissed with a wave of the hand.
Though in the long run, the defence will need to be algorithmic. Not all actors are stateside.
It's not about what you say (or in this case produce), it's about your intent.
>freedom of speech is not freedom of consequences
...that's not a very convincing form of free speech. What exactly does "free speech" mean then, if not freedom from consequences? It's not as if it's even possible to silence someone before the fact.
"You are free to criticise comrade Stalin! You are also free to go to gulag!"
Usually when I see that statement it’s to make the point that freedom of speech (in the first amendment sense) protects you from government consequences, but not societal consequences. The government won’t stop you from saying something offensive, but your country club can kick you out for saying it.
I agree with you that this sense doesn’t really apply here, though.
Creating a deep fake of a celebrity hardly counts as expressing your opinions and if you use it for illegal activities that's far out of scope.
On its own, sure. It may still be one step in the process, however. On their own merits, renting out an audience hall or TV broadcast slot, or operating a 3D graphics program, are not "expressing one's opinions" either. These things are only a means to an end. However, restrictions on such activity would still impact your ability to effectively communicate your opinions to the public.
Use of "deep fakes" (or shallow ones) in a commercial context to trick someone into purchasing goods or services or otherwise agree to a contract under false pretenses is fraud—which in the end is just a form of theft, and deserves to be treated as such. That doesn't violate the 1st Amendment because it's not the speech per se which is being punished, but rather the act of taking someone else's stuff when its ownership was never properly transferred to you through a valid contract. Contracts require "meeting of the minds", which is precluded by fraud.
Other than that narrow scope, the law has no business being involved.
I would disagree there. The personal rights of the people being deepfaked (both the provider of the body acting and the owner of the face) are violated unless you get consent from both. These personal rights in my opinion also trump any free speech rights, as these people have their own, more important rights to their bodies and images thereof.
> as these people have their own, more important rights to their body and images thereof
Nonsense. You do own your physical body, of course, and consequently have the right to use it as you please, but there is no right to control images of your body. That would be tantamount to claiming the right to control the contents of others' minds.
Note that your right to use your body does not imply that you have the right to use others' bodies or other property as you please just because your own body happens to be involved. Owners have the right to veto any action which affects their use of their property; in other words, you have the right to do whatever you want as long as it involves only your own property, but need others' consent if your actions would impact their use of their property.
Regarding the image: (a) the image (the content, as opposed to the physical media) is not property; (b) even if it were, it would not be your property; (c) even if it were your property, someone else's use of the image would not have any impact on your own ability to use it, so you wouldn't have the right to veto that use.
Rights sometimes overlap, but they never conflict and are certainly never "trumped" by other rights.
There is, it's called copyright. At least where I live you own copyright on the image of your face and body. That gives you the right to tell others how and when that image may be used.
And you would certainly have the right to veto certain uses of your image.
Rights conflict and some rights can trump others. For example, the police may place the rights of others not being hurt over your right to bear arms in the US. If you spoke loud enough with a megaphone into someone's ear, their right to not be bodily harmed would conflict with your right to free speech and likely would trump it.
Another example would be religious rights; they frequently trump other rights and laws and some laws and rights trump your freedom of religion.
Rights aren't equal and black and white, they have an order of importance that depends on the situation and their importance to society.
As with traditional malicious cartoons, this would be a civil matter. It's just an animated misrepresentation.
I agree that deep fakes seem an odd and unexpected fit for a dictionary editor's word choice made under the obvious constraint of brevity. But I don't get the relevance.
To be clear: I don't see what it has to do with the relationship between producing/distributing a deep fake and a) the explicit rights enumerated in the 1st Amendment, b) the implicit rights reserved for citizens by the 1st Amendment being written in the scope of the U.S. Constitution, and c) 200+ years of legal precedent which clarifies the scope of free speech protections of the 1st Amendment.
I'm only half kidding, it seems like the immediate defense would be: I made it at home or in a data center in my home state thus the statute does not apply.
The Bill of Rights (first 10 amendments to the Constitution) explicitly states that all powers not explicitly granted to the Federal government are retained by the states and by the people. So that's probably why this limitation exists.
Of course, the Federal government has really stretched the meaning of "To regulate Commerce ... among the several states" and has gained quite a bit of power out of that line. But it's still a limitation.
There's going to be development of "live notarised" data streams, so a camera feed gets certified, etc..
People are going to use deep-fake tech as an excuse -- someone faked me, I'm not racist/sexist/fascist/... . How can you show the real is real?
I'm imagining a future where you can choose the actors in the lead roles of your films; great for narcissists!
Does this mean I should take down my Donald Trump text-to-speech engine? Or consult a lawyer?
It yields really poor quality (right now), and I doubt any reasonable person would consider it to be actual audio from Trump.
Does this prohibit me from improving it? I was about to train an ML model on my samples and switch to parametric generation.
Yes, yes, downvote into oblivion, but remember this when it happens...
It's clear that in the future we will be able to create fakes that are effectively indistinguishable from reality. The audio in this 'Trump speech' is already remarkable. So there are two ways we can go from here. The first is to try to maintain faith in multimedia. What you see or hear is probably real because we try to pass a bunch of laws making it a really bad thing to try to impersonate people.
The second is to go the route of internet speech today. If somebody claims to be somebody of note, you generally would not believe them without extensive proof. And so what they say does not reflect upon the person they claim to be. If video/audio manipulation tech was allowed without constraint, this would eventually become the same for general audio/video. Having no trust in what you see or hear is not a great thing. But at the same time, I think there's a very good argument to be made for the fact that people are already far too susceptible to fake information because we, even before the 'real' advent of deep fake type technology, are still teetering on the precipice between believable and not.
For instance this famous image of "animal testing" that keeps going viral on social media every couple of years. It has nothing to do with animal testing, but people are naive. Deep fakes would throw us well off that precipice to the point that I think we'd see substantially increased amounts of scrutiny given to misinformation. The downside here is of course we'd also see substantially increased amounts of scrutiny given to legitimate information, though I'm not entirely sure I see that as a negative.
 - https://www.youtube.com/watch?v=7Gpc_artOYI
 - https://speakingofresearch.com/2014/02/27/fact-into-fiction-...
IMO if you really want to control deep fakes, you should use a blockchain and track all processing steps, starting from image/video acquisition. I don't want to give anyone ideas, but they will do it anyway.
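Blockchain aside, the core of this proposal is a tamper-evident hash chain over the processing history. A minimal sketch, assuming each step is a small JSON-serializable description; the function names and step fields are made up for illustration, and a real system would also sign each link and anchor the chain somewhere public.

```python
import hashlib
import json

def step_hash(prev_hash: str, step: dict) -> str:
    """Hash a processing step together with the previous link's hash."""
    payload = json.dumps({"prev": prev_hash, "step": step}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_provenance(steps: list) -> list:
    """Chain every step (acquisition, crop, encode, ...) into linked records."""
    chain = []
    prev = "0" * 64  # genesis hash
    for step in steps:
        prev = step_hash(prev, step)
        chain.append({"step": step, "hash": prev})
    return chain

def verify_provenance(chain: list) -> bool:
    """Recompute each link; any edit to an earlier step breaks all later hashes."""
    prev = "0" * 64
    for link in chain:
        if step_hash(prev, link["step"]) != link["hash"]:
            return False
        prev = link["hash"]
    return True
```

Any edit to an earlier step invalidates every later hash, so a doctored provenance trail is detectable; of course, nothing stops someone from feeding doctored pixels in at the very first step.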
How is that any different than tracking production inputs in logistic chains using blockchain?
Or, people just outright sell doctored cameras where you can intercept the input feed.
This isn't like adult content filtering, where all that matters is that kids can't get around it. You have to assume people who know what they're doing are going to attack your technical solution, and apply ingenuity in doing so. When the enemy consists of hackers, you can't ward them off with a hack!