South Park creators have new political satire series with AI-generated deepfakes (theregister.com)
813 points by LaSombra 6 months ago | 302 comments

I _think_ that the Michael Caine section is Peter Serafinowicz doing a Michael Caine impression (which has then been deepfaked), but it could be Lyrebird or similar! The fact that I really can't make up my mind which it is, and the weird sense of doubt that creates, means to me that this has hit the mark perfectly.

I've been thinking about this video a lot since I first saw it. It's genuinely very unsettling.

He also did Michael Caine in his show; it's extremely spot on.


I think Rob Brydon's is the best.

This is the first thing I thought about when I saw Sassy Michael Caine. It's the only thing I remember from The Trip.

At about 3:18 into the video you can see what appears to be Serafinowicz playing Sassy as the camera moves behind him. It makes sense that faking would be hard from strange oblique angles, so it looks like they just skipped it.


NYT did an article about it [0], which has some details on the casting. Peter Serafinowicz is indeed doing his Michael Caine impersonation, and also does Trump. Other characters are done by Trey Parker and various family members.

0: https://www.nytimes.com/2020/10/29/arts/television/sassy-jus...

I thought the punchline was going to be a WaveNet-style audio deepfake for that segment; I had no idea it was a person doing it. It explains why they repeatedly said it was impossible for a _human_ to do a perfect impersonation.

That's where my mind went as well. There are some audio artifacts right around the moment he says that which almost hint at the audio being generated. I really enjoyed the ambiguity.

In my experience, WaveNet audio isn't that good; you can definitely hear the difference. That sounded very natural. I'd be surprised if it wasn't an impression.

That's exactly what Michael Caine would say!

Wouldn't be surprised. While searching for him on YouTube I even stumbled on Peter Serafinowicz's appearance on Stephen Colbert, where they talk about him doing 'bad lip reading' videos of Trump in different personalities, one of them being 'Sassy Trump', so he might have had something to do with this show coming about in general.

I don't think he's doing the voice for Sassy Justice, though. That sounds more like Trey Parker to me. But they probably brought him in to do Michael Caine.


I think you need to listen more closely. There are little differences between Peter Serafinowicz as Fred Sassy and Trey Parker as Al Gore: tiny little mistakes in the voice. Watch it again, but this time watch with your ears.

Funny, I kind of got the opposite message. The Tom Cruise bits were poking fun at the fact that all of these deepfakes are every bit as obvious as the puppet. And they didn't even try deepfaking the voices because it doesn't work at all. Just listen to the latest state of the art voice mimicking samples. They're terrible! It says a lot that the only actually convincing fake in the entire video was a human impersonator's voice.

That said, this stuff will improve over time. But before you label it the end of the world you really have to think about how much has always been possible with impersonators and makeup/prosthetics.

The podcast Twenty Thousand Hertz did an episode on deepfake voices [1], which is my only real exposure to the audio side of this. I certainly wouldn't say it didn't work or was terrible.

[1]: https://www.20k.org/episodes/deepfakedallas

I guess it's a matter of opinion, but it's obvious to me that the quality in that episode and in the state-of-the-art models is very, very far from the quality of that Michael Caine impersonation. If they had deepfaked Michael Caine's voice, nobody would have been fooled.

I would have to agree that impersonations still come off as far more convincing at the moment, and some are really good.


Peter Serafinowicz is a co-creator of the show and I think Sassy Trump was originally his idea, I remember seeing him do it on The Late Show.

This is the funniest thing Matt and Trey have done in years. I don't know if they still write the show, but it hasn't been funny to me in a couple years. This clip had me rolling. This feels inspired and I hope it creates more awareness in the general public.

Trey Parker had a similar reaction [0]:

But when Parker got to see himself digitally altered to look like Al Gore, he said, “It was the first time I had laughed at myself in a long time.”

Parker added: “I always hate watching myself. Even with ‘South Park,’ I have a perfect image of what it’s going to look like in my head all the time. But on this, there were moments where we felt like kids in our basement again.”

To Parker and Stone, the experience also reminded them of “The Spirit of Christmas,” their 1995 homemade short film that became a viral sensation in a more primitive age of the internet and paved the way for “South Park.”

[0]: https://www.nytimes.com/2020/10/29/arts/television/sassy-jus...

I couldn’t agree more. It managed to feel like good old-fashioned, low-budget internet fun from YouTube yesteryear. Putting aside Matt and Trey, it’s one of the few things that’s made me genuinely burst out laughing in years, period.

It’s made me stop and think about why that is, and the best I can come up with is that very little is surprising anymore. When you get past 30, everything feels like something you’ve seen before. Deepfake comedy is so cutting edge that it can’t help but feel fresh.

Or maybe I’m just a boring old man. The last thing that made me laugh this hard was this Reddit submission, and again, it succeeds because I didn’t see it coming: https://i.reddit.com/r/ContagiousLaughter/comments/gzdja1/on...

Trey Parker is credited as the writer for the vast majority of episodes of South Park, especially since Season 4. Most of their episodes are topical these days, I recommend checking out the recent Pandemic Special.

Matt, Trey, and a guy named Robert Lopez wrote the musical The Book of Mormon, which was well received and won Tony Awards (Trey studied musical theatre in college, which explains why the South Park movie was a musical).

Trey's daughter was cast as Jared Kushner in Sassy Justice, which was hilarious, and the lead in Sassy Justice is played by Peter Serafinowicz [0], who is a British comedian!

I hope they make more of these, not just because I'm a fan, but because it massively demonstrates the power of deepfakes!

[0] https://www.youtube.com/watch?v=6-7NDP8V-6A

I saw the pandemic special and, I don't know, it just didn't seem funny to me. I still watch Mr. Hankey's Christmas Classics every year when I decorate the tree, and I still love what they did up until around season 12. After that it gets hit-or-miss and after a few more seasons just didn't seem worth watching anymore. I'd chalk it up to getting older and not smoking pot anymore, but I still love their old stuff.

The older episodes are my favourite as well. Ding dong m'kay!

That Robert Lopez also wrote the awesome Avenue Q musical, and co-wrote the Frozen songs with his wife. He is also the youngest person to achieve an EGOT (https://en.wikipedia.org/wiki/List_of_people_who_have_won_Ac...).

Until very recently, I thought they wrote all the episodes without outside involvement, but then I came across a clip where Bill Hader talks about being part of a "South Park retreat" where they bring people in to brainstorm episodes.


Bill Hader was on the writing team; he worked in the office during the seasons. There is a documentary, 6 Days to Air, that shows what it's like there. The writers' room is hilarious.

Others have probably mentioned this, but my first thought is how this will bring deep fakes to people's attention, and make them realize that nothing they see on the internet is necessarily real.


Just post under your own account; why create a new one for this comment?

Presumably they're aware they're being a ridiculous troll and don't want to tarnish their main account.

People are going crazy over deepfakes but we've had Photoshop for a long time and the world hasn't ended. We find ways to verify the content of photos through other means, such as their provenance.

I dunno, it seems pretty significant to me. It used to be that we only had to be skeptical of blurry newspaper photos, then of detailed photos, then of videos; now highly detailed video and audio can't be trusted either. Don't you think it's significant that we're getting to a point where only "seeing with your own eyes" is believing? All our systems for spreading information beyond our senses start to fail when we can't trust anything outside our perceived experience (or things brought to us by trusted friends).

Blurring around the face edges and mismatched lighting still give them away. Time will tell if the machines can overcome a trained eye.

I think that is a bit of selection bias. Outside of what I've seen in papers, Ctrl Shift Face [0] has some of the best "in practice" deep fakes I've seen. The video in the main post is meant to be a joke. But these still aren't state of the art, and really are just "some dude" doing this on his own, not a production studio. What I think is different here is that photoshop got so good that your average person can create convincingly fake content, and deep fakes are, at the end of the day, part of that same toolset (Photoshop even has neural filters now...). The question is more about ease of use: "requires a team of highly trained special effects artists" vs. "some dude in his basement and a shiny computer."

And a big part is that you have prior knowledge. I'll be honest, I didn't realize it was Trump at first. Nor did a friend I sent the video to, who didn't have the prior knowledge that all the characters were fake. It took him a good minute. That's a meaningful difference.

[0] https://www.youtube.com/watch?v=H3pV-_iyT4U

Small correction, but the video in the main post was indeed created by a production studio of deep fake artists.

From the NY Times article:

> The “Sassy Justice” creators said they had spent “millions” of dollars to make the video, including the initial investments to produce the halted movie and set up the Deep Voodoo studio, though they declined to specify the exact cost. “It’s probably the single most expensive YouTube video ever made,” Parker said.

Sorry for the confusion, I was saying that Ctrl Shift Face is "just some dude." Talented, but as far as I'm aware, just one person.

The perception that things are untrustworthy (and that worse is coming down the pipeline) is what will break larger society. If people are expecting deepfakes to arrive, we'll start seeing the knock-on effects of that erosion in trust waaaaaay before even a fraction of the risks are real. (imho of course :)

Easy to get around this by shooting the deepfake in high quality, then crushing the final footage down to lower quality before sharing it. That would make those kinds of artifacts much harder to identify.
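A toy illustration of why crushing quality hides artifacts (a pure sketch, not real video encoding; the pixel values are invented): coarse re-quantization stands in for aggressive lossy compression, and two frames that differ only by small per-pixel artifacts become indistinguishable after crushing.

```python
def crush(frame, step=32):
    # Coarse re-quantization stands in for aggressive lossy compression:
    # fine-grained pixel differences are rounded away.
    return [(v // step) * step for v in frame]

original  = [120, 121, 119, 200, 201]   # hypothetical clean pixels
deepfaked = [122, 118, 121, 198, 203]   # same pixels with small artifacts

print(crush(original) == crush(deepfaked))  # True: the artifacts vanish
```

Real codecs are far more sophisticated, but the principle is the same: detail a detector relies on is exactly the detail compression throws away first.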

Looking at the video here, the two best by far are the Michael Caine and Julie Andrews deepfakes, and in both cases the voice is doing most of the heavy lifting. Deepfake audio scares me somewhat more than video in terms of political and legal chaos: if we some day get to the point of near-identical audio mimicry, it will be much easier to trick someone with a "secret" audio recording than with a video recording.

Audio deepfakes are getting better but they have a surprisingly long way to go still.

Here's an exploration of a deepfaked Jay-Z reading/rapping the Navy Seal copypasta: https://www.youtube.com/watch?v=UZzYoOdIXoQ

Not sure there’s any deepfakery on the Michael Caine audio. Peter Serafinowicz (the other collaborator and originator of ‘sassy Trump’) is well known for his Michael Caine impersonation.

>Not sure there’s any deepfakery on the Michael Caine audio

Most likely not, and the parent comment seems to agree with you on that. I think they were just trying to point out, in general, that the arrival of commonplace audio deepfakes might be way more disruptive than video deepfakes, despite a lot of people (including myself) who used to counter-intuitively think that video deepfakes would be more disruptive.

Fair point, and I agree audio is potentially more disruptive - especially if it can get to good real-time performance.

Except that (in my experience as a photographer) a significant chunk of the general public not only don't understand Photoshop, but will actively disbelieve you about the extent, purpose, and outcome of manipulations.

Also, it's one thing to say the world hasn't ended, but that potentially downplays the widespread effects that commercial use of Photoshop has had on body and self-image, creating and interacting with arguably culture-bound psychological issues such as anorexia, bulimia, unnecessary surgery, self-harm, and suicide. Or, to take examples from non-Anglo cultures: eyelid surgery, skin whitening, nose surgery, etc.

It's true that the world hasn't ended, but that's a thought-terminating cliché. There's a lot of evidence that it has created, and is creating, significant harm.

It helps that a lot of the photoshops going around the internet are either super-sloppy or depict things that are obviously ridiculous. I actually can't think of any time somebody tried to alter a photo in a way that would change the meaning in a believable way and distributed it in such a way that they were trying to convince people it was real. Have I missed something?

Wind the clock back to 2008, when the problem came to the attention of the wider public: there was lots of press about Iranian missile tests, which turned out to be faked.


(and of course, you certainly have missed the fakes that didn't get noticed!)

There's a lot of points in there to unpack, and it's a different set of beliefs and claims from the idea that it's not doing any harm.

Focusing first on 'changing the meaning in a believable way'. Believability is a subtlety bordering on a truism. If it's believable, we would often deem it to have "not sufficiently changed the meaning". If it "sufficiently changes the meaning", does that make it no longer believable?

Second, it's arguable that the whole point of being a good photographer/photo-editor is to make an altered image look "believable" or "impactful" without triggering the brain's uncanny valley or rejection response. That's fundamentally why we photo-edit.

For a start, there are actually lots of explicit examples from history:

- The Lincoln/Calhoun composite

- Stalin editing opponents out of photos during the purges

- John Kerry + Jane Fonda in the US election

- George Bush holding a book upside down while reading to a schoolchild

- etc.

Those examples are just the obvious things that a general man on the street would accept as photoshopped, and the ones that I know are 'shopped. There's also a whole bunch of photography techniques doing the rounds worldwide: telephoto lenses shot into crowds to make people look more numerous and closer together (to spark social-distancing outrage), framing and cropping like that of Trump's inauguration to make it look more crowded than it really was, and arguably the "Hunter Biden finger-lake tattoo child-molesting" stuff currently doing the rounds on the internet's seedier sites, though I don't know how or whether those are photoshopped. And then you get the photographer's dilemma: the lay person's belief that effects done "in camera" are "real" but effects done "out of camera" are "'shopped", a distinction which is often effectively meaningless.

Then we have the general "mass photoshopping" phenomenon, where practically every commercial image one sees has been explicitly edited to create some effect, giving the idea that this subverted reality is just 'normal' and not in any way actively subversive: skin that doesn't look like actual skin; eye, hair, and teeth standards and colours that are practically biologically impossible; general body-hair removal; slimming, shaving, and exaggerating the respective female body parts; slimming and expanding male body parts.

And practically everyone's doing it: take Kamala Harris's officially supplied photo: https://upload.wikimedia.org/wikipedia/commons/d/d9/Kamala_H...

You can zoom in on the eyebrow and hairline to see such poor use of the blur tool that they barely even bother hiding it any more.

Now, as per my previous point, you can argue "oh yeah, sure, all our pictures and images are messed with and touched up, but it's not harmful and it's not really changing the photo!"

To that I repeat my initial observation: no, it does indeed seem to have a harmful effect in terms of body-loathing, a drive to self-hatred and consumption, and general reality distortion. It can be observed in the way it expresses itself differently across societies and ethnicities: peculiar culture-bound mental illnesses and aesthetic phenomena appear as each is exposed to different photo-editing ideals and cultural mores.

And then there's the second point: these deepfakes will actually be the next level. If we see, and know, that there are problems with our photoshopped media while the common man holds folk-beliefs about the objectivity of photographic media, imagine what happens once that disconnect begins to apply to video and sound as well. It's probably going to get worse. Much, much worse.

And remember, with doubtful photos, we have the belief in the authority of sound + video recordings to fall back on. Once we remove the authority of sound + video, we no longer have any effective authority to determine the factual nature of the media we're viewing. The deepfakes aren't generally quite there yet, but in 5 - 10 years, they'll be better. And they only have to get to a baseline where a significant number of people are fooled by them, or at least create enough of a reasonable doubt to allow them to dismiss or accept positions they disagree/agree with, before it becomes a real problem.

Something to consider is that we already had video when Photoshop arrived, a medium able to claim at least somewhat more authenticity than still images. Is there another medium able to take up that baton of authenticity now? I don't think so.

Verifiable content is definitely not the norm through history. I’d love to see an academic analysis of whether misinformation really went down by much (if at all) when photography became a thing. Or when camera phones and social media came along.

Yesterday there was a fake Trump tweet with a zillion upvotes on Reddit, and that’s just text. Text is trivial to fake, so maybe the chain of trust we see in text is what we’ll see with everything else moving forward. “An anonymous source within the White House provided this footage,” being countered with, “you can’t trust anonymous sources!” It will come down to who you trust, just like it always has.

Democracy and the existence of social media are not the norm throughout history either. Voters used to depend on the news media, which, whatever its faults, has some serious institutions built in for fact-checking. Today any idiot can post something that goes viral.

Have you seen how bad people are at verifying even the most basic things?

We as a culture are not able to verify content of deceptively cut videos.

The problem isn't existence, though. The accessibility and convenience of this type of technology are the issues here.

People are making a big deal about these "automobiles", but we've had horses for ages.

This is SO GREAT!

I often feel deep existential dread about deepfakes and the lag between when they are viable (now) and when everyone stops trusting video that doesn't have some kind of rock-solid signed/encrypted/testified provenance (years from now or never?).

What better way to educate the public about deepfake technology than with over-the top satire?

I couldn't agree with you more. It probably is the best way to popularize what deep fakes can really do.

I also agree on video provenance/signing. Perhaps signed video will be something devices do by default in the future. If there are multiple devices recording, we can probably find a way to cross-check. Actually, that probably also exists. ;)
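A minimal sketch of that kind of capture-time signing, using an HMAC with a per-device secret as a stand-in for the asymmetric key a real camera would use (all names and the key here are hypothetical):

```python
import hashlib
import hmac

DEVICE_KEY = b"secret-provisioned-into-the-camera"  # hypothetical device key

def sign_footage(data: bytes) -> str:
    # The device tags the footage at capture time.
    digest = hashlib.sha256(data).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_footage(data: bytes, tag: str) -> bool:
    # Anyone holding the key can check the footage wasn't altered.
    return hmac.compare_digest(sign_footage(data), tag)

footage = b"raw frames..."
tag = sign_footage(footage)
print(verify_footage(footage, tag))            # True
print(verify_footage(footage + b"edit", tag))  # False
```

A real deployment would use a public-key signature (e.g. Ed25519) so verifiers never hold the secret, which is also what makes cross-checking between independent devices possible.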

Multi-camera deep fakes are already a thing. I just saw some patches for injecting the same hardware codec fingerprints into the synthesized video, so that one stream appears to come from a Samsung and the other from an iPhone 11.

> I often feel deep existential dread about deepfakes and the lag between when they are viable (now) and when everyone stops trusting video that doesn't have some kind of rock-solid signed/encrypted/testified provenance (years from now or never?).

Maybe we'll have to go back to sworn witness statements. Worked for thousands of years.

Sadly we are well aware of how inaccurate and untrustworthy eye-witness accounts can be.

But at least there is a human being behind each statement. You can question, you can examine their motives, you can challenge. A video is just a video. An anonymous accuser. You can't ask a video any questions.

Slightly OT, but I finally realized this was probably the long-game behind Facebook et al.'s real-identity-only policy.

Sure it has the side (main!) benefit of empowering targeted advertising.

But fundamentally creating traceability between content/comment and a singular identity enables elimination of the most egregious abuses (mass-disinformation/ganging).

I'm as 90s internet-is-for-anonymous as anyone, but I have to admit traceability has merit in the larger social ecosystems.

We can create decentralized identity systems, where you decide which information should be public, while also providing accountability.


Rather than letting megacorps control identities, we should be building open source tools that allow developers to include decentralized identities within their platforms, preferably systems that interoperate with Ethereum.

Much of the worst content is published by people under their own names and news bylines. And then happily retweeted by people under their own names.

It gets worse when businesses and adverts are included in the mix. Who is Bob's News Agency, really? Can you click on every advert to find who paid for it? No.

Absolutely, but the difference is those people have fingerprints.

If someone is repeatedly toxic (which should be algorithmically identifiable), you can take steps to balance that.

You cannot do the same if 50% of your userbase lacks a stable / historical identity. At least without attempting to recreate identity on the basis of metadata (IP, patterns, etc).

A video is data that exists with forensic context.

What we need are institutions that are trusted to verify this data, like journalism. Except it is being allowed to be perverted for profit or ideology.

What does a journalist know about verifying video data?

And isn't a journalist going to bring their own side of the story to it? Hardly independent.

>What does a journalist know about verifying video data?

I don't think the parent comment was using journalists as a specific group of people who should be the ones in charge of verifying video data. I took it more as "we need a dedicated group of people for verifying video data, similarly to how we had journalists verifying all other sorts of data for their articles since the dawn of journalism".

It goes without saying, but the parent comment was also clearly alluding to "verification by journalists" in the traditional sense of proper investigative journalism, not "journalism" based on pulling random tweets without context from random no-names.

I'm sure the New York Times, with its resources, is able to hire some help. It's honestly incomprehensible to me that anyone would think otherwise.

A journalist that has a reputation for truth would hopefully bring that integrity to the process. Every party will have some kind of perspective or bias, we should pick the ones who are incentivized towards truth vs lying for profit/sensation.

Eyewitnesses are very accurate when it comes to broad questions, like whether you heard a loud noise on a given day.

Where they fail, especially compared to physical evidence and video, is on the details: what the noise sounded like, what the person looked like, what they were wearing, and so on.

As inaccurate and untrustworthy as deepfakes are making video?

If not worse.

1. Physical evidence can at least be analyzed by a third party, and

2. Misleading deepfakes aren't created accidentally by honest people.

I would say it didn't work for thousands of years; it was just the best we had.

Think how many instances of someone's word vs the police there have been. It did not 'work' very well. Mobile phone video has changed some things for the better.

Agreed, video has always been a deeply manipulative medium: editing, shooting angles, music. The simple fact that a voice-over tells you something while you watch the video (and what you watch is real) makes the voice-over feel like the truth too.

It will introduce a healthy dose of skepticism to that media.

We might as well ask Kuleshov about this, the father of the Kuleshov effect, whose experiments ran between 1910 and 1920...

In the now-famous film below, Kuleshov intercut a shot of the expressionless face of an actor with various other shots (a bowl of soup, a girl in a coffin, a woman on a divan).

- https://www.youtube.com/watch?v=_gGl3LJ7vHc

When the clip was shown to an audience, they believed that the expression on the protagonist's face was different each time he appeared, depending on whether he was "looking at" the bowl of soup, a girl in the coffin, or the woman on the divan, showing an expression of hunger, grief, or desire, respectively.

Of course, the footage of actor Ivan Mosjoukine was actually the same shot each time.

The audience even went on to rave about the actor's "performance": "the heavy pensiveness of his mood over the forgotten soup, [they] were touched and moved by the deep sorrow with which he looked on the dead child, and noted the lust with which he observed the woman".

The point is that, given the audiovisual medium, a certain degree of manipulation, or "intent" on the part of those creating the work is always to be expected ...

... and, also, that audiences bring their own baggage when they view something.

I agree 100%. Deepfake hysteria is absurd.

I'm the author of https://vo.codes, and my goal has been to make deep fakes as accessible as possible so people become familiar with them.

You know what people use vo.codes for? Memes. That's it.

A number of journalists have decided the technology is entirely confusing and deceitful. While there are new risks posed by deep fakes, I believe the potential for good far outweighs the bad. Today I see it being used mostly for artistic purposes and for humor. The real threat is social media and the soapbox amplification and attention algorithm, not deep fakes.

Deep fakes enable creatives to make this kind of stuff (most of these are made with vocodes).

VTubers are using it to give themselves new voices, which is amazing.
As the technology improves, it will become disruptive and enable average people to join the ranks of Hollywood actors, directors, music producers, and vocalists. The future isn't A-list celebrities and grammy winners, it's people like you and me.

The people stoking the anti-deepfake fire are the media. They claim that this is the end of trust and authenticity, despite the fact that classical methods of trickery and deceit are already way more common. Where are their doomsday articles about catfishing, phishing, and social engineering? There aren't any, because it isn't exciting.

This technology doesn't really move the needle for deception if you can slow a video of Nancy Pelosi to half speed and claim that she's drunk.

Deepfakes are just the next wave of photoshop. People don't use photoshop to steal elections and win court cases. They use it to make memes. It's the same deal with deepfakes.

The technology is going to improve. And I'm sure the hysteria will increase in volume too.

Real time voice conversion and video generation is going to be cited as "scary", but it'll mostly be a hit with gamers and vtubers. It's not going to be a "Mission Impossible"-style espionage tool. It's going to be used in good humor and to good effect.

These tools are going to bring about a new media renaissance. They'll let the small players compete with the giants. That's not scary - it's exciting.

That's also what I'm working on for my startup: memes today, hollywood / old media disruption tomorrow.

I've already got insane growth (millions of requests a week, and our videos have hundreds of thousands of YouTube views). I'm wondering when to pull the trigger and start hiring people. (My users are already working with me to build more!)

I'm super excited by this field, and you should be too. The media is screaming fire, but I'm running at it with full speed. I see the magic and the amazing opportunity.

The problem I'm most afraid of isn't people actually using deepfakes for nefarious purposes (although I think you are not nearly concerned enough about that... It would be very easy for Russia or China to alter local elections by making a grainy video of a candidate appearing to smoke crack a la Rob Ford, for example). I'm afraid of people internalizing the idea that you can't trust video at all. I agree with your criticism of the media on this, but from a slightly different angle... They are spreading that exact idea. True or not, it's a dangerous idea.

Imagine a smaller, less-developed religious country with a viral video of its religious leader having sex with another man.

It can be used to destabilize small countries rapidly and provoke an ethnic or religious conflict, in conjunction with other tactics and a prepared military effort.

Imagine if ISIS did this to Assad just before their main offensive? Or if Hezbollah did it to the Prime Minister of Lebanon before a renewed military offensive? Or if China did it to the Chief Executive of Hong Kong just before sending in troops?

It has a lot of potential when used together with other tactics.

"You know what people use vo.codes for? Memes. That's it."

Not really. Browsing the dark corners of 4chan and Reddit, you'll often find people posting pictures of real people and asking if someone can put them into a deepfake.

> It is difficult to get a man to understand something, when his salary depends on his not understanding it.

― Upton Sinclair

I'm not sure if this is directed at me or the media.

If it's at me, I could just as well be building anything else. I find this technology fascinating, and I see an almost magical future ahead where we can tweak sensory input and play it like an instrument.

It's the closest we've come to building our dreams. The possibility of The Matrix made real, and bent to our own desires.

This stuff is going to sink Hollywood and replace it with an improvement at least an order of magnitude more imaginative.

So it's not that I'm letting personal interest or profit motives cloud my judgment. I think this is truly revolutionary, and I don't understand why others don't see the same glittering and fantastical future.

They're too afraid of the demons to build the cool thing.

> A number of journalists have decided the technology is entirely confusing and deceitful.

It's not the journalists I'm worried about. It's the advertisers.

> Deepfakes are just the next wave of photoshop. People don't use photoshop to steal elections and win court cases. They use it to make memes. It's the same deal with deepfakes.

You say that now. By 2024 we'll be getting served political ads depicting "Person who looks like my cousin" in a riot, "Child that looks so much like mine" being shoved into the backdoor of a pizza parlor, or "Sad sack that looks like me" standing in an unemployment line. Fairly certain we all signed away the permission to use our likenesses in the various TOS.

It's also going to be a whole new vein for bullying, e.g. "Goofus hates Gallant. Most kids hate Gallant. Goofus posts low-grade deepfakes of Gallant dying and committing acts of self-harm. The bodies and the hair don't match at all since the source GIFs are from movies and tv, but it's definitely Gallant's face. Goofus gets a short term dopamine burst from his fake internet points as his peers pluslike and cross-post. One day Gallant decides, 'maybe they're right'"

I know it sounds like panicky, theatric, Black Mirror script stuff, but there are no missing pieces to keep either of these from being a button click away. It just might not be quite cheap enough yet.

As the technology improves, it will become disruptive and enable average people to join the ranks of Hollywood actors, directors, music producers, and vocalists. The future isn't A-list celebrities and grammy winners, it's people like you and me.

Just my two cents, but acting is not as simple as wearing someone's face. If deepfakes make me look like an A-list actor, that alone will not get me a lead role in a big budget movie. I would still require acting skills.

Do you know how many skilled, under-appreciated actors there are in the world? The economics of production create limited headroom at the top. You'd be shocked how many talented actors, directors, and writers go undiscovered and under-utilized.

Screenwriters will be able to make entire movies using literally only their voice. I tell a story to a computer, it will "imagine" the entire thing.

And after that, the computer will make the stories and we will watch, at which point the Drake Equation takes over.

More or less bingo. It might be a few decades, but this is where we're headed. And I'd rank this as more certain and profitable than self-driving cars.

I don't think Disney will keep pace. This trend will cannibalize their IP and this level of tech competence isn't in their DNA.

"Hey Google, make me a 90 minute movie about X where Y also happens but put in some plot twists"

> People don't use photoshop to steal elections and win court cases.

People definitely attempt to do both of these things.

While we definitely can't (and certainly shouldn't) stop this tech from existing, there is still an important civic and academic need to address deep-fakes in conversations about media literacy. People should understand how to analyze the reputability of media regardless of the type. It doesn't matter if they're watching a video or reading a book.

As a corollary - how many people already spread inaccurate or misleading video clips, sound bites, etc without this technology, and how many already refuse to believe real ones produced by the 'lying news media?' I think the hysteria is completely misplaced, and the extremely polarized media landscape is itself to blame.

Sorry, but if the "new media renaissance" is SpongeBob singing WAP, count me out. I don't think it's exciting, I think it's lame.

But you're right, there's always money to be made in making the Internet an even dumber place than it is already. Have fun!

That's such a narrow-minded, salt vinegar take.

How many leaps away are we from 10 year olds making their own Star Wars movies? Not many, I posit. And I think that many of them can and will do better than George Lucas.

This technology is going to give so many more people the ability to create. As we begin to automate the tedious jobs and industries, it's important we have something fulfilling and engaging for people to move to. The creative field is rewarding and leads to self-growth and entrepreneurism.

The future is going to be a Cambrian explosion of creativity and expression. Look at YouTube, TikTok, and Patreon. Imagine what more tooling will do for these folks. Brains are teeming with ideas and imagination, but they often don't have the resources to breathe life into things imagined - with this next round of tech, we're going to change that.

Conversely, the concentration of wealth and production value at the top (entities like SpongeBob and Cardi B) will erode once everyone has the ability to generate character designs, animation, music, lyrics. More money will pump into the system, and it'll spread more evenly.

This is the Internet / Smart Phone revolution all over again.

We don't have the same idea of the meaning behind the words "create" or "creativity".

TikTok is exactly the kind of thing I'm talking about -- you just want to create another vector for even more soulless, time-wasting popularity contests.

What you're describing to me sounds like a technological way for adult children to play brand-promoting dress-up inside the already-worn-out shell of pop culture, a way to infinitely recombine the old without actually creating anything new or interesting.

Call it a narrow-minded take if you like. I'm sure you'll be laughing all the way to the bank, so who cares what I think? It'll be a hit on 4chan.

That's such a sad outlook. We'll see the next Scorsese and Miyazaki using these tools and techniques.

Look, I would love to be wrong about this. The potential for deepfakes to, e.g., disrupt financial markets worries me much more than the upside of some cool videos on the internet. It's nothing personal. If you build something genuinely cool, I promise I will be happy for you!

>years from now or never?

I really don't think this is true. I hear stories about deepfakes on NPR pretty regularly. It would be very strange if a subject that has managed to gain some mainstream media traction was never internalized by a large percentage of the general public.

NPR listeners != large percentage of the general public.

Also, people believe what they see, on an instinctual level. Hell, people believe what they hear, mostly. It will take a lot of education and time to untrain that.

I try to avoid consuming too much news, so I don't have much exposure to other news sources (and I only probably get an hour or two of NPR programming per week). Given the level of coverage I've heard on NPR, I would be pretty shocked if the subject hasn't received some coverage by most major media outlets.

"I often feel deep existential dread about deepfakes and the lag between when they are viable (now)"

But they should still be spottable as fake, if examined.

Related: an MIT project to build an AI to spot deepfakes (and also ways for humans to spot them):


Digital signatures do exist.

> What better way to educate the public about deepfake technology than with over-the top satire?

There is a sizable group of people who do not know that The Colbert Report was satire. The problem with satire is that there will always be a group of people who think it's real; it's like a corollary to Poe's Law.

Source: Research: Conservatives believe Colbert isn't joking https://www.cnet.com/news/research-conservatives-believe-col...

Not a compelling problem imo. You could say the same thing about irony, sarcasm, etc. but nobody calls it a problem.

Yup. The same goes in the other direction. The Babylon Bee has been fact-checked a bunch of times because some liberals think it is fake news, not satire. Whether it's a few, some, or many is debatable, but it seems like a general rule that people don't perceive satire as satire when it doesn't match their preferred consumer choice of gaslighting mainstream media. It's hardly specific to liberals or conservatives.

Is there some kind of private/public key signing technique for videos, where the camera creating a video signs it with some key, providing a signature that can be verified for authenticity? That way you would at least be able to verify that the video was created at a certain date on a certain camera.

You can sign the data, but it won't help.

Basically, the ability to fake any video means that other data is needed to prove that the person/events in the video was in fact the person/events that that video is claimed to depict.

Other proof, other measurements of the same events, etc.
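To make that concrete, here is a minimal, hypothetical sketch of the limitation being described. The names and keys are mine, and HMAC stands in for the asymmetric signature a real camera's secure element would use (Python's standard library has no asymmetric signing); the point survives either way: a valid signature proves the bytes came from that camera unmodified, not that the scene they depict is genuine.

```python
import hashlib
import hmac
import os

# Hypothetical camera secret key (a real design would use a per-device
# asymmetric key pair in a secure element).
camera_key = os.urandom(32)

def sign_video(video_bytes: bytes) -> bytes:
    # The camera tags the raw footage it produces.
    return hmac.new(camera_key, video_bytes, hashlib.sha256).digest()

def verify_video(video_bytes: bytes, tag: bytes) -> bool:
    # Anyone with the verification key can check the footage is unmodified.
    return hmac.compare_digest(sign_video(video_bytes), tag)

real_footage = b"frames of an actual event"
tag = sign_video(real_footage)
assert verify_video(real_footage, tag)          # untouched footage verifies
assert not verify_video(b"edited frames", tag)  # post-hoc edits are caught

# The catch: point the same trusted camera at a screen playing a deepfake,
# and the signature over the fake content is just as valid.
staged = b"frames of a screen showing a deepfake"
assert verify_video(staged, sign_video(staged))
```

So signing authenticates the *recording device and the bytes*, which is why the comment above argues you still need other evidence that the depicted events are the claimed events.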

Deep fakes mean people will know they can't trust their eyes, they need to think critically. That's been true for a while, but now they'll know it.

Which I presume is why Parker/Stone are doing this now, other than the obvious (make money). They're going to force people to recognize this is possible and how perfect the fake can be.

As a side note regarding the money: My understanding is they had formed a studio to make a deep faked movie some time ~2019? I think? But covid rolled around and the studio dissolved. They took advantage of the contacts and assets they had initially made to produce this youtube video. They think it might be the most expensive youtube video ever made.

Maybe if we start the habit of signing one's own appearances, this will someday make unsigned video considered unreliable.

But I guess this is wishful thinking given that we don't even generally sign written comments yet.

There's the Trusted News Initiative [0] and Project Origin [1,2]. Too little, and not enough support imo.

[0] https://www.bbc.co.uk/mediacentre/latestnews/2020/trusted-ne...

[1] https://www.bbc.co.uk/blogs/aboutthebbc/entries/46f5eb33-b7b...

[2] https://www.originproject.info/

So I can cryptographically prove that at a particular date at a particular time with a particular camera that I authentically pointed my camera at a screen projecting faked content?

I think this only works if you're signing something currently much harder to fake, like a full light-field or something, and even then it won't stay hard to fake forever.

You could have the camera record the lidar as well and cryptographically sign that too. Assuming it is all done in-camera and the lidar is high resolution enough, it should be possible to make a video recording completely tamper-proof, at least at the "talking head" level.

I think Apple could start providing this for iPhone videos if they decided to, as the hardware is already all there: the lidar sensor as well as the Secure Enclave, etc.

I think this idea of adding additional layers is on the right track because it confounds the deepfake generation problem by adding a curse of dimensionality. You can't just fake 2D. You need to fake 3D which is much harder.

I'm curious what the equivalent of "3D" versus "2D" would be for deepfaked voices.

There is already the color channel dimension in addition to the spatial dimensions and time dimension. Adding depth doesn't make things much harder, especially because there is so much correlation between depth and color and time.

I know they don't currently do this, but if GPS satellites cryptographically signed their timing signals, could that potentially be usable to establish that, in addition to the particular camera capturing the images at a particular time, that it also did so in a particular-ish place ?

It's possible to record and replay signals from gps satellites with different delays, so maybe that could be used to spoof the location, but if the camera also has an internal clock, perhaps it could detect if there was too much discrepancy there? But I don't know how much that could constrain the location. Also gps only has limited precision, especially indoors? Uh, maybe using cell towers instead of, or in addition to, gps would be better? Hm.

I mean presumably you could just have the subject or subjects of the video sign the content. A full unedited interview for example could be signed by the interviewer and interviewee.

Edit: This would of course be predicated on PKI being safe and easy for most people to use which decades has proven otherwise though.

I think this solves the wrong problem. Most of these deepfakes have obvious places where you can tell they are renders. I think it's more important to have tooling that can tell when a video has been altered.

I'm more concerned with the manipulation of grainy low-res video. Police body cameras are an incredible tool for police departments to fight misinformation (well, only when people watch the whole thing). Is editing these types of videos as obvious as with high-resolution video, or are they more easily manipulable?

> I think it's more important to have tooling that can tell when a video has been altered.

It's probably going to be a cat and mouse game. Think about how far green screen technology has come. Watch a movie from the 80s or 90s today and the green screening is super obvious to our 2020-trained eyes. Imagine how someone from the 80s or 90s would perceive the green screening of 2020?

I imagine there will always be a cutting edge of deepfakes that will travel halfway around the world before the truth comes out and won't spread to all the people that believed the fake.

I can't help but to think that as a society, we're entering the "schizophrenic" phase of popular reality.

Read these Quora answers on what it's like to have schizophrenia and ask yourself: how would a society collectively afflicted in the same way behave?


It's going to be weird to see society have to operate in a state of constant disbelief of so many things that were previously accepted as fact.

It's a good question; crypto can be used for this sort of thing. It does preclude editing, unless you also establish a cryptographic trust chain where each successive editor also has a trusted key.

Probably never going to work with cameraphone journalism, but if we're talking about newsroom & press conference footage you could make it happen.

Except for editing, which, as you say, should be solvable somehow (encrypt every frame?), I think the biggest problem is the equivalent of the "analog hole".

How do you know if a signed recording is from the actual event, or from a camera being pointed at a screen in a pitch black room recording the playback of some malicious video trying to show said event?

Trust. If the root signatory is CNN, you decide whether you trust CNN not to do that.

Not really any different from other encryption. Your bank could be defrauding you on the back end despite the verified https session. But you trust they aren't.

Why do we need to sign the video then? Isn't it enough for me to ensure that I have a secure TLS connection to https://cnn.com? Maybe for external platforms? But then the challenge is just proving that https://www.youtube.com/user/CNN/ is controlled by CNN. Much simpler than involving the cameras.

The hope would be that if video clips were verifiable, they would be widely expected to be verified, such that when someone links to "old clip of Senator says outlandish thing", you habitually check to see if it's verified- no matter who or what is re-sharing it.

Why not? Can't there be "TLS for cameras"?

I've done some thinking about how to prove that a video was filmed at a particular time. I've written parts of the design but haven't written any code yet. The idea is to have a group of public Internet servers that create a hash chain mesh. Every second, each server produces a new block, containing the SHA3-512 hash (digest) of the previous block, hashes of many other servers' previous block, and hashes of user documents.

To show that a video was created after a particular time, a person can include a hash from any public server in the video. The easiest way is to have somebody hold a phone in view of the camera, running an app that shows a QR code of a recent hash, updating in realtime. They could also read the hash digits. And the phone app could play the hash as audio tones.

Afterward, anyone who has the video can use software to extract the hash, query public servers for the hash's position in the immutable historical hash mesh, and get the associated timestamp. The existence of the hash in the video proves that the video was created (or edited) after the timestamp.

To show that a video was created before a particular time, use software to send the file's hash to a public server for inclusion in the mesh. A good recording app can publish repeated hashes of the file as it is recorded and gets longer. When the user stops recording, the app adds an "end of recording" mark to the file and publishes a final hash.

Afterward, anyone with good enough software can determine the time interval between the hash embedded in the video and the published hash. This is the interval when the video was produced. A malicious actor would have that much time to edit the video. With a good enough network, this could be 2 seconds, insufficient time for a human to make any editing decisions.

Hashes apply only to the original file. Anyone distributing a file with hashes must provide the original file that was used to create the hashes. Low-res resampled files can't be verified. Shortened or spliced files can't be verified. Only the alleged original file from the camera can be verified.

To fake a video, malicious software would need to insert a hash QR code into the video, process the video so it looks like it was recorded from a camera, and publish the hashes, all with a few seconds of delay. Fakers could also use a physical camera to re-record a video on a monitor. Either technique should be detectable. Fakery software will get better and may produce undetectable fakes. I hope that people will invent new physical anti-fake techniques to thwart fakery software.

I'm thinking of calling the system "Livestamp". It would work for any type of file or recording, not just videos.
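The core of the design above can be sketched in a few lines. This is my own illustrative toy, not the parent's actual code: a single server emits blocks that chain the previous block's SHA3-512 digest with submitted document hashes, the camera films a recently emitted hash ("created after"), and the finished file's hash is submitted for inclusion ("created before").

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha3_512(data).hexdigest()

# Toy single-server version of the hash chain mesh described above.
class HashChainServer:
    def __init__(self):
        self.blocks = [{"ts": 0, "hash": h(b"genesis"), "docs": []}]
        self.pending = []

    def submit(self, doc_hash: str):
        # Queue a file hash for the next block ("created before" proof).
        self.pending.append(doc_hash)

    def tick(self, ts: int):
        # Each second, chain the previous block's hash with pending docs.
        prev = self.blocks[-1]
        new_hash = h((prev["hash"] + "".join(self.pending)).encode())
        self.blocks.append({"ts": ts, "hash": new_hash, "docs": self.pending})
        self.pending = []
        return new_hash  # what the QR-code app would display to the camera

server = HashChainServer()
qr_hash = server.tick(ts=100)              # hash filmed during recording

video = ("frames..." + qr_hash).encode()   # the recording embeds the hash
server.submit(h(video))                    # publish the finished file's hash
server.tick(ts=102)

# Embedded hash proves "after ts=100"; published file hash proves
# "before ts=102" -- a two-second production window, as described above.
assert qr_hash in video.decode()
assert h(video) in server.blocks[-1]["docs"]
```

The real design would of course use many servers cross-hashing each other so no single operator can rewrite history, but the bracketing logic is the same.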

Such a system exists, it’s called OpenTimestamps

Thanks for letting me know. It seems to rely on bitcoin's block timestamps which are accurate within tens of minutes or hours.


I believe Canon has special cameras that police use. It's been a while, but they use some encoding scheme, I think.

Canon's image signing scheme was half-assed and was comprehensively broken, they didn't even store their signing keys on a secure element:


>In Canon's second version of its ODD system, the HMAC code is 256 bits. The code is the same for all cameras of the same model. Knowing the HMAC code for one particular model allows the ODD to be forged for any camera within that model range, Sklyarov wrote

Ack...not so good then :/

The toughest issue here, IMO, is being able to sign and authenticate a document without proprietary DRM. Been thinking about this a lot.

DRM is indeed what it is; you need to verify a signature somewhere to ensure the content hasn't been tampered with or altered, which requires an internet connection to verify or download.

This seems absolutely inevitable.

More practically, PGP.

I think the biggest casualty of deepfakes is going to be ordinary citizens. Record a video of police brutality or crime footage and it can all be shrugged off as a deepfake. TV stations and journalists can afford 'signed videos'.

Deepfakes typically put a face onto an actor (or other footage). Faking an entire scene with multiple people is just called CGI/animation.

So you could easily take existing footage and change who it appears to be. But,

1) Doing that to frame or clear someone will be pretty niche.

2) The original, correct footage might still be findable.

3) With access to the video files (which is going to be required in any legal situation), I'm sure there will be (as already exist for photos) algorithms that detect the editing the deepfake did to the video file.

Deepfaking doesn’t have to be used on its own. A dedicated entity can shoot entirely new original scenes (unique footage), deepfake actors’ faces to look like the people they’re targeting, edit it all together, and possibly add CG VFX as a final layer to make the resulting video look very real[0].

Of course, knowing all that is not really necessary for someone to shrug off truthful incriminating footage as “fake news”.

[0] One caveat is that deepfaking works better with higher quality footage. To produce a complicated scene with altered actor faces while maintaining realistic “phone video” look, it would make sense to film with good lighting and high-quality gear into log or raw format, and then imitate the look in post-production after deepfaking is applied.

Thus, one method of detecting such fakes could be by checking for traces of VFX, artificial noise, signature lens properties, signature behavior of phone video recording “magic” (such as noise reduction and stabilization), etc. Enough of producer’s dedication could make that tricky, but IMO it could be easier than applying automated deepfake detection straight up—it’d be buried early enough in post-production workflow, with a lot of noise introduced by subsequent “phone look” VFX.

>Deepfaking doesn’t have to be used on its own. A dedicated entity can shoot entirely new original scenes (unique footage), deepfake actors’ faces to look like the people they’re targeting, edit it all together, and possibly add CG VFX as a final layer to make the resulting video look very real

This was treated (along with the ubiquity of CCTV in London) by the 2019 BBC show The Capture[0].

They glossed over a bunch of technical issues, but got the idea across pretty well.

Of course, they made the Americans look like the worse bad guys (no one comes out looking good, even the nominal "victim.") but that's to be expected, given the darkness the show attempts to purvey.

It's not great, but it absolutely provides context for the eventual advent of reasonable quality deep-fakes and the potential for abuse of the technology.

I'd expect that we'll see lots more of this sort of story telling, especially since any sort of video production (not including raw video footage) is never seen the way things actually are. As such, the "faking" (minus the deep-fakes with faces, etc.) is the key to the exercise.

Go watch the taping/filming of any television or movie and that will become immediately clear.

[0] https://en.wikipedia.org/wiki/The_Capture_(TV_series)

Edit: Added the missing link.

The type of person who will claim that has already been claiming 'false flag' or 'fake news' anyways.

If someone has confirmation bias, they're still going to find ways to call it fake with or without deep fake tech.

Exactly, we've had a wave of "fake news" hysteria surrounding mostly accurate and true reporting.

People don't need deep fake videos to believe lies, they do it with little to no supporting evidence anyways.

It will be much more difficult to do deepfakes of that sort. Doing a good deepfake requires lots of training data. We might be able to get away with less training data as the tech progresses, but that is one area where it will probably be most difficult to make progress.

True. I've already noticed people who aren't cynical, become cynical after seeing some deepfake videos. This, coupled with confirmation bias will not help real victims who are regular citizens.

People weren't walking around with cameras all the time 10-15 years ago and the world functioned still.

This is thinking too small. While the West has a lot of focus on its leaders and the population is fairly tech savvy, in the developing world a deepfake could start a revolution before calmer heads can disseminate the truth.

I could easily see a deep fake viral video go around in a sub-Saharan country, with a leader claiming he is gay or something else socially unforgivable in that country, and starting an internal ethnic conflict that cannot be undone.

Seems like it would be pretty easy to add a way to sign a video using keys stored in a phone's enclave, surely Apple and Google are capable of doing so.

I also don't think that politicians will suffer the most. Some people already know that social websites are full of fake videos, pictures, and quotes from politicians. Those who don't know that already can be tricked far more economically: for example, take a picture of a politician, place some outrageous quote next to him/her in Comic Sans, and some will believe that, too.

Maybe in the court of public opinion, but I doubt it would matter much in actual court. Chain of custody for video evidence has always mattered there.

Funny how the image caption is "Parker and Stone at a pre-corona premiere", when the photo is like 20 years old, while the pandemic only started this year.

Yep. I decided to check if they really still look anything like that. Shouldn't have done it.

I got a tickle out of that as well.

I understand the worries behind deepfakes, but I'm going to be a bit facetious here: hasn't producing the most realistic, lifelike images always been the goal of computer graphics? Wouldn't deepfakes be the ultimate culmination of lifelike CGI, so lifelike it's indistinguishable from the real thing?

Like for most of my life, this is the goal I keep hearing about. Now it's here, suddenly it's too real.

I dunno just seems a bit funny to me. I've never been one of those graphics people myself, but it seems like a case of getting what you asked for, but being upset because it's different to how you expected it was going to be.

I don't think anyone is disputing the idea that technology in this arena can/will/should advance, but rather that this specific set of technologies come with consequences that are, to many of us, absolutely terrifying.

I also don't think we truly understood how susceptible so many people are to made up nonsense until recent years, so this particular kind of made up nonsense suddenly looks like a serious threat to social stability and cohesion.

>I also don't think we truly understood how susceptible so many people are to made up nonsense until recent years

That's not true, that's been the basis behind marketing and advertising since it existed. It's been fairly well known for a long time people easily believe nonsense.

Just a few more months of these weight loss pills and I'm going to look like David Hasselhoff.

Deepfakes are photoshop for video and audio. Alarmists are going to opine on the many ways it will cause the sky to fall, but life will continue.

Life will continue is a pretty low bar...

After the sky falls, or as it has continued after the invention of photoshop? I'm referring to the latter.

Who knows, maybe out of this more attention will be paid to the message, rather than the person. Cults of personality have always been problematic. And, technically, this makes it possible to strip away the personality and dissolve the cult.

The problem is that, in a representative democracy, voters are deciding on the person, rather than the message. If you have systems that increase the noise floor in the link between the two, you end up with an electorate that's less well-equipped to make coherent decisions.

It seems that voters are deciding mostly on charisma and cult of personality, rather than the message.

It would likely be better if that decision were made based on character (the person's pattern of thought) rather than charisma (attractiveness or charm).

If it is possible to curb charisma/charm/cults of personality with realistic fakes, overall the effect might be advantageous.

The question isn't "what's wrong with the technology," but rather "what harm can it do in the context of our societal development?"

The key difference that you're missing is "consent".

Regardless of the actual content or quality of the show, I hope this reaches a wide audience. All people should know about the technical capabilities of deepfakes.

Is this a method to inoculate the viewer against deepfakes? I wonder if part of their goal is specifically to make people more aware.

Matt and Trey usually have an interesting side project on the go; where South Park pays the bills, shows like The Book of Mormon and Sassy Justice are their creative expression.

At first I thought the sub-headings were summaries of sketches from the show (which I thought was an odd thing to do) so imagine my surprise that the article ends with announcing a new release of PyTorch.

The article consists of 5 different small articles.

Does anybody know where the new show by Trey Parker and Matt Stone called Sassy Justice is airing? Is it a free Youtube series?

The homepage is sassyjustice.com. Currently it's embedded from YouTube.

This feels like the start of something very bad - they are going to be upfront about the deepfakes... others are not.

It's going to get to the point where you can't even say in a courtroom, "well, we have video of him doing this". The fact that deepfakes exist will erode confidence even in those things that are true. At the same time it will add additional fake situations to the conversation.

Worst part is that even the AI-powered countermeasures are eventually going to fail. The moment a computer knows what hint gives away a deepfake as not real, a computer can solve to not present that giveaway. The "good guys" and "bad guys" will iterate with each other until it is perfected.

I’ve said it here before and I’ll say it again:

Audio and video evidence isn't admissible because it's audio or video (and may be inadmissible nonetheless). It's admissible because someone testifies under oath that they have personal knowledge of its provenance. The burden is on the party introducing the evidence to show that it's reliable, and the question of whether or not it is indeed reliable is a factual one for a judge or jury to answer. It's not assumed to be "true" or to accurately reflect reality just because it's a purported photo, video, or audio recording.

This works in a courtroom. This won't work for a politician who retweets a faked video that then incites real violence. And that, at least for now, is where we truly need to be concerned.

> This works in a courtroom.

Does it though?

Suppose there is a theft at a company. The police go to the company's security team and get the surveillance footage. Presumably admissible.

If it was an inside job, the surveillance footage could be a deepfake showing someone else committing the crime. Or maybe it's real surveillance footage. Without some way to distinguish the two, how do you know?

The interaction of our current legal system with SotA deepfakes seems terrifying.

To rip from current headlines: https://www.justice.gov/usao-wdpa/pr/erie-man-charged-arson-...

(Leaving aside the merits of the case / opinions, and just using it as an example)

".. the [Facebook Live & coffee shop] videos depict a male – with distinctive hair to the middle of his back wearing a white mask, white shirt, light blue jean jacket, black pants with a red and white striped pattern down the side and red shoes - setting a fire inside of Ember + Forge."

"A review of additional Facebook public video footage from the area of State Street near City Hall in Erie on the evening of May 30, 2020, shows the same individual without the mask but wearing identical clothing and shoes. The subject’s face is fully visible in this video footage."

And that's a federal arson charge (min 5 years, max 20 years prison).

And a witness might lie. How will we manage?

Frequently by incarcerating innocent people.

The onus of responsibility is still on the person doing the violence. Contrary to the fearmongering, I'd say deepfakes will just make people not believe things they read on the Internet (again), especially as the technology becomes more widespread.

Where the ultimate onus of responsibility falls doesn't matter much when you've got a bunch of Rohingya hanging from trees because some politician retweeted a fake video.

If you follow it through, that line of argument makes no sense. If the video weren't fake, would that somehow make the murder of Rohingya acceptable? Of course not.

Ultimately you end up back at the beginning: holding people, and not pieces of information, accountable for their actions. That's an issue with civil society, not social media.

Deepfakes have the potential to exacerbate existing issues in civil society. If your husband has just been murdered because of a riot instigated by a deepfake, it's not a great comfort to know that it's not really the "fault" of the deepfake but instead the "fault" of a broken civil society.

Plus, deepfakes have the potential to create much more noxious videos than real life. Real-life videos depict real people, flaws and complexities included; deepfakes will be constructed to depict whatever representation of the target is most likely to generate the inchoate rage of the mob. It's rare that the former happens to be exactly the latter.

Well, the world does not operate based upon what is comforting. I thought that was obvious even before COVID-19, but people keep missing this.

Why not blame the person who confirmed your husband's death, at that point, if accepting the comforting illusion of someone easy to punish is what we are doing? Without them you would still have some remote hope of survival!

The first step of solving a problem is recognizing the actual problem - what we find comforting is only a distraction for rationalization purposes. Better to recognize that microscopic organisms caused the crop failure instead of burning an old woman as a witch.

Burning a woman as a witch doesn't build any kind of incentive structure to prevent future crop failures. Punishing politicians who incite violence creates the obvious incentive for politicians not to incite violence, which results in less violence. This applies both to deepfakes and to calls to violence in general.

Politicians don't have a particular right to remain in office despite inciting violence, even if it gives them the sads if they face repercussions.

  If the video weren't fake, would that somehow make the murder of Rohingya acceptable?
Your answer misses the point. What is at stake is evident: one can now provoke riots and deaths from thin air (and bits).

Genocides happen because of propaganda. The planners and financiers and propagandists don't tend to kill with their own hands. But they command massive acts of violence.

Just one example: A radio station (RTLM) in Rwanda laid the groundwork of genocide by dehumanizing Tutsis, among other things by referring to them as "cockroaches" repeatedly. The station was deemed instrumental in the resulting murders of at least half a million people.

It's not fear mongering to look at the facts of history and see that propaganda is a primary way that violence scales. Deep fakes are another tool which will certainly be used towards these same ends. What propagandist would not want to use such a tool?

> It's not fear mongering to look at the facts of history and see that propaganda is a primary way that violence scales.

Propaganda is a primary way that human action and organization, whether violent or not, for good or ill, scales.

So people will believe what they choose to believe. A populist's wet dream. Truly nothing to be afraid of!

We already have plenty of politicians who misquote or lie to sow division and therefore violence. What's new here?

The argument would be that we have social antibodies to typical politician lies, but we've not developed the same antibodies to deepfakes. Since political deepfakes are inevitable IMO, the question is how can we accelerate the development of these needed social antibodies.

It doesn't work right now due to the partisan divide in America. Spreading misinformation to help your party's cause isn't currently looked down on - so, honestly, having deepfakes out there might not hurt much and might at least help reduce the weight that blatantly doctored videos carry with the general populace.

I appreciate you saying this, I hadn't known that. I think therefore it may be OK in the court of law but what about in the court of public opinion? I could see videos like this easily eroding our trust in the judicial system even further.

It's probably not gonna happen within a single generation, but as has always happened with new technology throughout the centuries, people born into this new normal will know how not to be fooled.

Will they be able to? Widespread cynicism and distrust feels more likely.

But the jury are humans. Should we be confident that a jury of 12 (if in the US) can internalize that even really convincing video should not be trusted? Especially when this form of evidence has been trustworthy for a decent chunk of time?

If no one can testify about the video’s provenance, then it simply won’t be admitted into evidence. If someone commits perjury in order to get the video admitted, the jurors will be the least of the problems.

People commit perjury all the time

Well, others would be doing it anyway. Deepfakes are something you can do at home, today (if you're willing to spend some time and money on power to train them).

By being so open about it Stone and Parker will just increase awareness of how easy it is to do, so people will know how realistic these things can look and not blindly believe what they see.

So, again, what these guys are doing is a true public service.

I think it'll be similar to the situation with pictures. We've had the ability to 'photoshop' people's faces in and out of images for a long time now.

I think the first question asked of a video is going to be "is this a deep fake?" just like we currently ask "is this picture photoshopped?".

It doesn't mean that we no longer use pictures, or that we don't alter pictures, just that we are more critical of them.

That said, there is just more realism in a series of moving pictures, but I don't see why the situation for a series of fake pictures has to be wildly different from the situation of a single fake picture.

The issue is cost. Proving that an image is or isn't photoshopped isn't all that difficult/expensive. Anyone well versed in digital imagery can use basic tools to expose 99.99% of photoshopped images. But moving pictures are different. Due to compression artifacts and the overall lower quality of each frame there is much more room for opinion. An answer one way or another may be possible but it will be a much greater and more expensive battle of experts.

If anything it's the opposite and it's much harder to fake videos without leaving traces.

In just the last couple of hours, someone tweeted a picture of Biden from June 2019 not wearing a face mask, claiming it's recent and that he's being irresponsible by not wearing one. It has tens of thousands of retweets; no deepfake necessary.

Actually, I'm a bit more optimistic about this.

One thing that worries me is that everything is becoming immutable, saved forever, and digitally signed. Hard to claim you didn't post something if your account is super secure. You can't go anywhere without cameras taking your picture. And if someone steals your bitcoin or your smart contract has a mistake, your money is gone. You can't argue with an algorithm. And the internet never forgets. (Well, unless it is inconvenient for you, then Murphy's law applies ;-))

There are many such phenomena but I'd say they are all related: They mean you have less wiggle room for mistakes, or social deviance. If you are in a situation where you have to break a law, or if you are just having an affair, chances are someone or something is going to see you. There is no anonymity in the crowd anymore, on the contrary.

But this deep fake technology, if it really evolves to be undetectable, can be liberating, if it erodes the trust in pictures. At least the social control mechanisms based on cameras are going to stop working.

We were living in a very specific phase of human history, where we learned how to produce pictures from real scenes, but haven't learned yet to easily fake them. I just think this is going to come to an end, and we'll have to adapt. (I hope we'll adapt socially, and not just by cramming DRM into our technology like we tend to do; but that is a different topic.)

Indeed. We went from a world where nothing was recorded outside of very fallible memory, to one where everything is recorded, to perhaps one where even though everything is recorded, there’s enough deniability via deep fakes that it doesn’t matter as much that things were recorded.

Everything is not immutable, or even becoming immutable. Content is merely what endpoints serve, and unless extraordinary care is taken, like publishing hashes, the response from those endpoints can change over time. Lots of content has changed, for instance on Wikipedia. In theory you could wade through the wiki history; I don't know whether that is protected by hashing or not. Lots of content gets shoved down the memory hole.
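To be fair, the "publishing hashes" option is cheap. A minimal sketch of content pinning (the snapshot bytes here are just a stand-in):

```python
import hashlib

# Archive a snapshot of some content and publish its digest somewhere
# independent of the endpoint (a tweet, a newspaper ad, a blockchain, ...).
snapshot = b"<html>article text as archived on 2020-10-30</html>"
digest = hashlib.sha256(snapshot).hexdigest()

# Later, anyone who re-fetches the content can check it hasn't silently changed:
refetched = b"<html>article text as archived on 2020-10-30</html>"
assert hashlib.sha256(refetched).hexdigest() == digest

# Any memory-holed edit, however small, produces a different digest:
edited = b"<html>article text, quietly revised</html>"
assert hashlib.sha256(edited).hexdigest() != digest
```

Of course this only proves the content changed, not which version was true in the first place.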

> The moment a computer knows what gives away a hint that it is a deepfakes and not real, a computer can solve to not present that give away. The “good guys” and “bad guys” will iterate with each other until it is perfected.

You just described a GAN (generative adversarial network)!

But you can say "we have emails where he admits to doing this" in a court room, even though emails can be trivially faked. You just have to provide information about how you got the emails and how you know they're authentic. Is there a reason to expect video will be more problematic as it becomes easier to fake?
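"Trivially faked" is almost an understatement for email: the message format itself carries no proof of origin, and Python's stdlib will happily construct a message claiming any sender (addresses below are made up). That's exactly why courts lean on provenance testimony, server logs, and the like rather than the message body alone:

```python
from email.message import EmailMessage

# Nothing in the message format stops you from writing any From header you like.
msg = EmailMessage()
msg["From"] = "ceo@example.com"
msg["To"] = "press@example.com"
msg["Subject"] = "I admit everything"
msg.set_content("Totally genuine confession.")

print(msg["From"])  # the header says whatever the author typed
```

Authenticating it takes external evidence: who controlled the sending server, DKIM signatures, chain of custody. Video is heading to the same place.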

Chain of custody is a good argument for protections within a court room but court of public opinion is where the deep fakes will really run wild.

So there are two problems: people not trusting true videos, and people trusting fake videos. I think the second problem is probably worse, and this satire will at least help with that.

Maybe... photoshop detection hasn't gone down that path, as far as I know. I would imagine hiding the giveaways in video is an order of magnitude harder?

True, but photoshop is usually done by hand. An AI-generated deepfake could maybe eventually be trained so that it outputs videos that are indistinguishable from the real thing.

The detection side also gets the benefits of AI though. I can't see the generation side getting that far ahead.

From what I understand though, generators always have an advantage because the generator is allowed to "see" the discriminator's gradients during training. [0]

>The model for the discriminator is usually more complex than the generator (more filters and more layers) and a good discriminator gives quality information. In many GAN applications, we may run into bottlenecks where increasing generator capacity shows no quality improvement. Until we identify the bottlenecks and resolve them, increasing generator capacity does not seem to be a priority for many practitioners. [1]

Put another way: GAN training ends when the discriminator can no longer meaningfully distinguish real from fake. By definition then, the best generator will have no useful discriminator that can distinguish its output from real data. (conversely, if you did have such a discriminator, you could use it to train a better generator)

[0] https://developers.google.com/machine-learning/gan/generator

[1] https://towardsdatascience.com/gan-ways-to-improve-gan-perfo...
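To make that concrete, here's a toy numpy sketch (my own construction, not from either link): a two-parameter generator chasing a logistic discriminator on 1-D data. The key detail is the generator update, which multiplies by the discriminator's weight `w` -- the generator literally trains on gradients that flow through the discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Real data: samples from N(4, 1). Generator G(z) = a*z + c must learn to
# mimic it; discriminator D(x) = sigmoid(w*x + b) tries to tell them apart.
a, c = 1.0, 0.0   # generator parameters
w, b = 0.1, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    # Discriminator step: ascend E[log D(real)] + E[log(1 - D(fake))].
    xr = rng.normal(4.0, 1.0, 64)
    xf = a * rng.normal(0.0, 1.0, 64) + c
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    b += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: ascend E[log D(fake)] (non-saturating loss). Note the
    # factor `w` -- the generator's gradient flows *through* the discriminator,
    # which is the "sees the discriminator's gradients" advantage.
    z = rng.normal(0.0, 1.0, 64)
    df = sigmoid(w * (a * z + c) + b)
    a += lr * np.mean((1 - df) * w * z)
    c += lr * np.mean((1 - df) * w)

# The generator's offset c should have drifted from 0 toward the real mean (~4).
print(round(float(c), 1))
```

Real deepfake GANs use deep convnets on both sides, but the gradient flow is the same shape, and training stalls exactly when the discriminator stops being informative.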

> I can't see the generation side getting that far ahead.

How far ahead do they need to be?

Suppose that it's cat and mouse, at least initially. Every six months someone comes up with a new way to detect the best known deepfakes, then six months after that there is a new way to evade that means of detection as well.

Someone drops a deepfake five weeks before an election.

The synthetic media apocalypse is even worse than these examples. The ability of liberal democracy to exist at all is highly in doubt. We need serious leadership attacking this problem head-on. Instead, I'm pretty sure our current administration (in the US, but also elsewhere) sees it only as a welcome propaganda tool.

Wouldn't surprise me if society gets completely blindsided. As corona has shown, the people who know lack power, and the people with power don't give a shit.

> It’s going to get to the point where you can’t even say in a court room “well we have video of him doing this”. The fact that deepfakes will exist will erode confidence in even those things that are true. At the same time it will add additional fake situations to the conversation.

You cannot overestimate the global political ramifications of this. If you think the amount of video manipulation was bad this cycle, in the 2020 presidential election, imagine a few years from now, when videos are being put out with impunity all over social media by political operatives and people with nefarious agendas.

This sort of thing absolutely terrifies me since you can start to twist reality to whatever you want and influence people in ways we never thought possible. I really feel like the genie is out of the bottle and this has the potential to become a very dangerous tool for people with bad intentions.

The other way round. Any shocking video trending on twitter will start with a -100 credibility as people will think "just another deepfake".

You are assuming people will have never heard of these deepfakes. Sure there are still a few grannies today who have never heard of photoshop. But your average twitter user?

I think that's overoptimistic. A shocking video trending on Twitter that reinforces some population's existing beliefs and biases about the subject will find a ready audience of believers.

Every piece of controversial video content will have a group loudly screaming "TRUTH!" at the same time as another group screaming "FAKE!", and it will just amplify the social polarization we're already experiencing.

It's worse than that, politicians will be able to credibly claim that anything that's not advantageous to them is fake. We'll end up in a world where a tin pot dictator like Trump can go out and say that anything he said or did in the past was not real.

Would signing the image in hardware alleviate some of the problem? It would create a high cost to faking.
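A rough sketch of the idea: the capture device holds a key and tags each frame at capture time, so any later edit invalidates the tag. Real proposals use asymmetric signatures with manufacturer-certified per-device keys; the HMAC below is just a stdlib-only stand-in, and all names are made up:

```python
import hashlib
import hmac

# Hypothetical per-device secret, provisioned into the camera's secure element.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_frame(frame_bytes: bytes) -> bytes:
    # Camera firmware attaches this tag to each frame as it is captured.
    digest = hashlib.sha256(frame_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
    # A verifier with access to the key recomputes and compares the tag.
    return hmac.compare_digest(sign_frame(frame_bytes), tag)

frame = b"raw sensor data for one frame"  # placeholder bytes
tag = sign_frame(frame)
assert verify_frame(frame, tag)                    # untouched frame verifies
assert not verify_frame(frame + b"edit", tag)      # any tampering fails
```

It raises the cost of faking, but doesn't solve the analog hole: you can always point a "trusted" camera at a screen playing a deepfake.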

Maybe we should just go back to analog video?

I recommend watching Adam Curtis' documentary called Hypernormalisation to get a sense of how badly this can get abused by political operatives. Especially this clip: https://youtu.be/Y5ubluwNkqg


There are a lot of valid and important ideas in Adam Curtis (and many, many others) documentaries people could benefit from exposing themselves to (Century of Self is another one that I believe is a must watch for citizens of a democracy).

A weird thing about this thread, and others like it, is that there seems to be this broadly shared implicit premise within the thread context that the feared ill effects of new technologies like this have not already been prevalent in our societies and information ecosystems for decades.

In a thread about bias, fake news, propaganda, etc, people seem to have no problem realizing and acknowledging that we already have a very serious problem (often only visible in one's personal outgroup, but that's better than nothing) - but when the specific topic of conversation is a new technology, the majority of the comments seem to be written as if we don't really have any significant issues currently. It seems as if there's some sort of a phenomenon whereby the logical methodology for evaluation of the situation changes according to the topic, as opposed to there being a consistent methodology that at all times has an explicit awareness of the ever-present bigger picture.

Here's [1] an 8 minute video on Presidential debates. It fairly well demonstrates how this aspect of our political system is largely pure theatre... and yet intelligent people often speak (again, depending on the specific(!) topic of discussion) as if this charade were a highly legitimate process, within a larger political (and journalistic) process that is also highly legitimate.

The way I view the ecosystem is that the vast majority of things are to a very large extent ~fake (in whole or in part). Cranking up the absurdity to 11 in classic South Park style, making a complete mockery of both the politicians as well as those who can't consistently(!) conceptualize the true nature of our system, seems like an excellent response to a situation that has been sorely in need of some good old fashioned satirical mocking for decades. Western society & politics lost the right to be taken seriously ages ago - admitting to ourselves that there's a problem seems to me like a prerequisite first step in fixing it.

[1] Winning the Presidency: Debating


I'm much more concerned about the inverse, not people getting fooled by fake content, but believing that anything that doesn't align with their views is fake. We can end up in a post truth society where someone like Trump, Duterte, Bolsonaro or Orban can claim that any footage that is not politically advantageous to them is fake and created by an evil opposition to hurt their cause.


  Trump access hollywood tape -> fake Soros conspiracy to save the pedophiles

We'll have a lot more people believing conspiracies like the moon landing being fake and end up with a lot of 9/11, JFK, holocaust, etc deniers.

> I'm much more concerned about the inverse, not people getting fooled by fake content, but believing that anything that doesn't align with their views is fake. We can end up in a post truth society where someone like Trump, Duterte, Bolsonaro or Orban can claim that any footage that is not politically advantageous to them is fake and created by an evil opposition to hurt their cause.

I believe the optimum approach is to be concerned with all risks, and weigh the magnitude of each in a state of careful self-monitoring of one's potential biases (and ideally, have your conclusions reviewed by others, preferably from a diversity of ideologies and perspectives, in an attempt to minimize the well known effects of groupthink). Noteworthy to me is that a significant number of people (if not the majority, depending on which community you are in) are easily able to see the epistemic errors in their outgroup's thinking, but have more difficulty doing the same within their ingroup.

For example, in your comment it seems that you have noticed shortcomings when it comes to politicians of one general ideology, but I wonder if you are of the belief that this phenomenon does not occur across all ideologies?

Who cares? Photographic evidence has barely existed for 100 years, and it’s been frequently doctored the whole time.

> This feels like the start of something very bad - they are going to be upfront about the deepfakes... others are not.

Well, something really bad is definitely ahead with fake videos (as if real videos, with significantly meaning-changing omissions made with the plain old cutting process weren't bad enough) and people with high visibility creating awareness by doing it in the open is the closest thing to a defense that we have. It's a hopelessly weak defense but better than nothing.

It’s not bad, it’s inevitable and it’s nice we have some light shed on it. Stupid people will no doubt try to “ban deepfakes”, but we are going to have to learn to live with them.

Maybe people will learn to no longer believe partisan information sources, and organisations whose reputation is based solely on the veracity of their content will predominate.

The current media environment demonstrates that inaccurate reporting in favor of the audience biases is actually rewarded.


Tucker Carlson spent the month of October pushing Russian disinfo on Hunter Biden, all for it to culminate in "losing" his documents, re-finding them and then suddenly, inexplicably backing down.


Can you post your source on this story being Russian disinformation please.

> “well we have video of him doing this”. The fact that deepfakes will exist will erode confidence in even those things that are true

That's good. Courts should not place high confidence in any single piece of evidence. It can all be manufactured, and could be long before now. It's just (too) hard to manufacture it, avoid detection, and get the larger number of people required to "go along" with it when you have to manufacture multiple corroborating pieces.

It’s been exactly like that with newspapers and eyewitness accounts for hundreds of years. We survived.

> you can’t even say in a court room “well we have video of him doing this”.

Good. We should dispense with retributive justice, and replace it with restorative, transformative systems which are a noop on the wrongly accused.

How is that different from photoshop? Today no one trusts a photo, as even a teenager can do a decent fake on a smartphone. Now the same for audio and video. Doesn't change our world.

What? There are obvious examples where faked video can have a serious effect. What kind of solipsistic nightmare are we slipping into?

Because people still have some faith in videos. Deepfakes will erode that very quickly.

I fear they won't. Some crowds only need a spark to explode. They won't wait for counter-evidence. They just need a pretext.

It's never perfected, just look to the animal kingdom for examples. Mimicry and camouflage is a never ending evolution.

We'll know our disinformation program is complete when everything the American public believes is false.

Yes, deepfakes are going to kill plausible deniability in audio and video.

> deepfakes are going to kill plausible deniability in audio and video

Huh? Don't deepfakes do the exact opposite - make plausible deniability in audio and video a much stronger argument?

Before if you denied a video of you was genuine that would not be plausible. Now it would be plausible.

Dark days ahead for Hollywood. How long until Silicon Valley eats that industry?

I can imagine lifelike movies that render characters to the users preference in real-time.

Talk about “representation”!

I can imagine as well dialects, language, etc being rendered in real time to adapt to what the user prefers. You and another person could watch the same film and talk about the same story but have totally different experiences on what the characters looked like, talked like, and even said within some parameters.

Making a movie with humans will be a prestige event, like riding horses today or driving an ICE in the future. Human productions won't be able to compete with rendered film at mass scale due to cost. Rendered films could be built and distributed cheaply and cost a fraction of a real movie to watch.

They would make their living by licensing the likenesses of actors. Because you might have a great rendered movie, but most people will want a great rendered movie with Tom Cruise and Samuel L. Jackson, not with no-name actors.

Until you make a completely artificial persona that gets famous.

Really, that would be a good thing for "character actors", since the job would depend less on their image and more on what they contribute creatively; they'd be collaborators instead of "living props".

What would be interesting is the techniques used. Are they like animators or roleplayers, focused on a single character, giving them emotional touches through added details, quirks, and improvised line changes? Or are they "greenscreened", such that what they actually look like is utterly irrelevant to their job?

Wow, that deep fake of Tom Cruise is just uncanny...

I'd be very interested in a show where they have deepfakes of politicians reading FOIA obtained emails that they wrote. It would be a positive use for a technology that has some pretty negative use cases =)

Impressive. A show that puts those figures in their real-world roles and surroundings, but satirizing everything they do will raise hell indeed. Something like House of Cards, but with satire and known figures.

Well, personally I think deep fakes are a good thing.

The reality is that 'we' were too trusting of what is presented on screens as true. It's the main tool used to manage and govern us.

So, I welcome distrust on what we see on screens - that trust was always misplaced, and all about manipulation rather than information.

Deepfakes have the potential to turn the world upside down and cause utter chaos. I don't know how we are going to deal with it as a society; I think we are not equipped for it. In fact, I fully expected Trump to use one for the October surprise, but I guess he didn't go there, for whatever reason.

In the early days, you could spot one by the lack of blinking because the model was trained on open-eyed images. Not sure if that's still the case, I'd be surprised if it was.
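The standard blink heuristic was an "eye aspect ratio" computed over six landmarks per eye (the landmarks themselves come from a face landmark detector such as dlib); when a face never closes its eyes, the EAR never dips below the blink threshold. A toy sketch with made-up landmark coordinates:

```python
import math

def ear(eye):
    """Eye aspect ratio from six (x, y) landmarks p1..p6 around the eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Drops sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2 * math.dist(p1, p4))

# Illustrative landmark positions, not real detector output:
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
shut_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

print(ear(open_eye))  # well above a typical ~0.2 blink threshold
print(ear(shut_eye))  # well below it
```

A detector would run this per frame and flag videos where the ratio never crosses the threshold. Once that giveaway was published, training sets simply started including closed-eye frames, which is the cat-and-mouse loop in miniature.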

Any time I see a "this is peak technology" comment, I'm always reminded of the PC gaming magazine cover showing the first Unreal game's graphics ("Yes, that's an actual PC screenshot!").

It looks awful now, but in the nineties, it blew us all away.


People didn't think it looked like reality. Back in that time, marketing used a lot of illustration to promote the game that had little connection to the in-game visualizations. So that cover is more saying, "This wasn't drawn by an artist".

These do blink so it's no longer an issue. It's still not perfect because other than blinking the eyes have no expression to them, they're mostly still which is very unnerving once you start paying attention to that (it's also how you can tell a fake from a genuine smile). But we're at the point where unless you're looking for it, it's easy to believe.

The catholic church survived the printing press. We'll be fine. Maybe not all of us will live to see it but society and its institutions will go on.

> In fact I fully expected trump to use it for the October surprise, but I guess he didn’t go there for whatever reason.

Because he's not the cartoon supervillain you think he is?

He has to be a supervillain, or else how could he have defeated the perfect team that ran the perfect campaign for the perfect candidate in 2016?

People have had the opportunity to impersonate anyone they'd like over the radio for over 100 years and yet, here we still stand.

If an obvious lie can consistently fool 80% of the players in Among Us (details below), then I can imagine deepfakes will fool just as many people, despite how obvious they are. By obvious I mean it's a lie almost 100% of the time and people fall for it.


I played maybe a hundred games of Among Us (they can be very short). The game is about one or more imposters trying to murder the rest of the crew, while staying discreet and getting people alone so you don't get voted off. When a body is found, a meeting happens and you can lie (text chat).

One problem is you don't want to accuse someone when you're an imposter, because you immediately become suspicious. Most games will tell you whether you voted off an imposter or not, so they'll know you're lying right away once the game tells them they voted off a non-imposter. Most of the time you want to accuse no one, play dumb, and act like everyone else who saw nothing.

I lost count of the times a guy doesn't accuse anyone for 20+ seconds, gets accused, then claims the guy who found the body is the imposter and lists all these suspicious things he did (why didn't you say it right away?!). Like 90+% of the time, the guy being accused is the imposter who waited so he could feel out the situation. It's extremely obvious, but maybe 70% of the time literally every player except me and the guy reporting the body is fooled. Which is far too many players at far too high a fool rate. It's so painful because it's so obvious; 90+% of the time in that scenario, the reporting guy is telling the truth.

Create short deepfakes that can go viral and be so ridiculous that they're both entertaining and obviously fake. Obama advocating for a border wall. Al Gore lobbying for the oil industry. Steve Jobs advising against staring at a screen all day.

> Obama advocating for a border wall

Would be used as evidence of how untrustworthy the Dems are.

This video of two opponents endorsing each other cropped up last year.


Biden advocating for single payer Medicare for all...the possibilities are endless!

I think you're right. It sounds like their plans fell through:


He has Hunter's laptop, no reason to lie.

I hold a similar view. In my opinion, deepfakes will help accelerate the end of the democracy experiment, and you know what? Good riddance.

Only people that know you on, at least, a last name basis, should have political power over you.

If you think the end of democracy will mean the end of people having power over you who don't know your name, I have some really bad news.

The governments will get it right this time. They just need absolutely authority to ensure it goes smoothly for everyone.

The problem has always been accountability. If we can devise a system that includes accountability in the authority we could be on to something.

Democracy has accountability by virtue of everyone having a vote. It’s a small power that everyone can use to hold their leaders to account. But it still allows people to hold arbitrary power over each other. If you can convince enough people, you can apply your morals and beliefs on others. For instance, lots of discrimination is a function of democracy and codifying oppression of certain people.

We sit here furiously debating who can use a bathroom, who gets preferential treatment, who you have to interact with and a million other things. “Both sides” are bent on forcing people to behave a certain way and they use the power of the vote everyone has to accumulate power and make things “how they ought to be”.

I think this could be done democratically but everyone has different interests. How do we find a common, singular interest and then optimize around that?

Or maybe democracy is the “best bad system” and we just have to make do. I do believe with the hyper connected world we have today, cryptography, and resource abundance that we could transcend the modern system and discover liberation from each other to be ourselves and pursue truly enriching lives at a mass scale within local communities. And this means a different thing to different people. But just about everything in our modern system would need to be disposed of and recast.

I think it's all fucked until we can get humans out of the loop entirely. The sooner we can develop benevolent AI overlords the better.

At least under this scheme I'll have an easier time telling my friends from my enemies.

That view makes sense if the only alternatives are democracy, extreme localism, and benign anarchy. A brief historical survey should suffice to demonstrate that there are, in fact, other less pleasant alternatives.

True, but maybe deepfakes can help challenge or prevent the "alternatives"? They are fairly cheap to make. They seem like they could be a great tool for challenging authoritarianism.
