Deepfakes: MIT brings Nixon's Apollo disaster speech to life (wbur.org)



I find it amusing that Congress has suddenly become so interested in the dangers of deep fakes. When they started going on and on about the dangers, I thought, “why is this dude so worried?”

Then of course the Epstein stuff becomes news[1] months later and the cynical part of me can’t help but think they’re looking for a defense if they ever partied at his island or ranch.

1. https://nypost.com/2019/11/18/jeffrey-epstein-accuser-claims...


Deepfakes are readily understandable as a problem for politicians. Politicians are experts in public messaging; this technology has the ability to alter previously-immutable messages.


I think the bigger problem is it provides deniability and removes accountability. The "I never said that" kind.


Could blockchain give us a secure audit trail for digital video clips?


The blockchain could provide a timestamp, but it's hard to see how that would help.

Every type of video authentication system I've ever heard proposed can be defeated by simply pointing a 'trusted' video camera at a screen playing a fake video.


I was chatting with someone about making deep-fake-resistant video. An interesting solution that came up was creating artifacts that would be hard to simulate: e.g. surround the speaker with a swirling vortex of confetti, where words have a minimal but detectable effect on the air currents.

Video could be 'authenticated' post facto by tracking particles to validate both continuity and plausible velocity deltas against a generated model of the air currents. It wouldn't eliminate the possibility of deepfakes, but it would drastically increase the cost of production.

Similarly, I predict that cameras will increase in resolution past the point of utility for human vision: terapixel camera video would be extremely expensive to fake without leaving tiny artifacts.


In the real world you could have an event captured by five different, unaffiliated cameras. If all of these match up, that seems like something a judge should consider.


I suppose you could have 'trusted devices' where, upon taking pictures/videos, a hash of the file is committed to a blockchain. Although that's a privacy nightmare.

This obviously wouldn't solve the 'no fake videos' problem, but it would at least mean 'this video isn't fake' for a select subset. There could be tiers of information in the future where this audited media is weighted higher.


Committing a timestamping hash is not a privacy nightmare, as you don't need to reveal the evidence until you need to use it to prove what happened.


"Every type of video authentication system I've ever heard proposed can be defeated by simply pointing a 'trusted' video camera at a screen playing a fake video."

That's a good point. But I'm imagining situations where the camera itself is a reasonably secure device. For example, a city CCTV camera or a police body worn camera.

If a video stream can be securely (cryptographically) matched to a particular device, with an unforgeable (blockchain?) timestamp, it would increase confidence in video evidence presented in court, for example.


Of course not. There’s no plausible mechanism by which it could.


Well blockchain can solve any problem, and everyone working with it is totally reputable and competent, so what could possibly go wrong?


cf. XKCD -- "Bury it in the desert. Wear gloves."


kind of... you can create a hash of the video, and store the hash on a blockchain.

Then when viewing the video in the future, you can check its hash and compare it to the one stored on chain to be sure the video wasn't tampered with.
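A minimal sketch of that flow in Python, with the on-chain storage stubbed out as a plain dict and the filename made up (a real system would publish the digest in a blockchain transaction):

    import hashlib

    def sha256_of_file(path):
        """Stream the file so large videos never have to fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # "Commit": publish only the digest; the video itself stays private.
    ledger = {}  # stand-in for posting a transaction to a blockchain
    ledger["clip-001"] = sha256_of_file("speech.mp4")  # hypothetical file

    # "Verify" later: re-hash the copy you were handed and compare.
    def is_untampered(path, record_id):
        return sha256_of_file(path) == ledger[record_id]

Of course, this only proves the bytes haven't changed since the commit; as others note, it says nothing about whether they were genuine to begin with.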


But then you need to establish that the hash was posted on the chain by a trusted source, too.


And ensure that they weren't hacked, etc. Then you need another immutable blockchain to keep track of the bad posts on the first immutable blockchain.


"simple" would suffice.


We've had the ability to make convincing fake videos for a long time. There are whole industries devoted to the task.


Yeah but it definitely never was this accessible.


Is there any benefit to deepfake technology besides maybe entertainment?


Why are deepfakes now a huge problem, or any worse than photoshop? This technology has existed for decades.


Video used to be the holy grail for 'that's real'. There's a realpolitik side to some American politics: if the traditional international order gets reshuffled, they'd be happy to prove international misbehaviour with video footage and then use that as evidence for escalation. Also, in the justice system, video footage is seen as good as gold.

The credibility of still footage has dropped before. I don't know where that leaves us for authentic media...


Also, don’t courts have pretty strict chain of custody rules for anything submitted as “evidence”?


Of all the places where you could find "epstein news" you choose to link the New York Post.


The New York Post has actually been one of the most consistent places for breaking Epstein news. Most major news outlets have employees implicated with Epstein and have reported (or not) accordingly.

It's quite telling when the corruption and scandal is so large that only "fringe" outlets are able to report on it. Also cf. Ronan Farrow and Weinstein.


I would honestly love to link to a trusted MSM report, but these outlets have been burying Epstein stories[1]. It's awful.

1. https://www.newsweek.com/abc-jeffrey-epstein-story-amy-robac...


I'm not criticizing the New York Post for being too indie. I'm criticizing them for being profoundly untrustworthy.


Can you provide examples?


They have a string of controversies like most news organizations, but I think they may be referring to the fact that it is owned by News Corp, which is run by Rupert Murdoch, notorious for outlets like Fox News.


The New York Post story you linked is just a write-up of original reporting done by CBS.

https://www.cbsnews.com/news/jeffrey-epstein-accuser-maria-f...


That article is from November 18th versus the 5th?

Besides, for credibility they should have been writing about the obvious problems with Epstein; everyone but fringe outlets was basically silent over the last decade(s), and in some cases complicit (ABC News).


Both articles are from November 18th. The New York Post one is from four hours later and just summarizes the CBS interview. Your New York Post article links to the CBS one and leads with a picture of the woman telling her story on CBS.


Except it "didn't air because we could not obtain sufficient corroborating evidence to meet ABC's editorial standards about her allegations." This should be an article about journalistic integrity, and ensuring we don't accuse people of things without corroborating evidence. Just because more evidence came out after that point, does not mean it was wrong to require more evidence at the time. I cannot see how this is anything other than a good thing.


The Miami Herald, for the big story last spring that led to a lot of this coming back up.


Same. The cynical part of me was looking to see if this project came out of MIT Media Lab (it didn't, just MIT)


The President of MIT (Rafael Reif), while not directly implicated in fraternizing with Epstein, did work to actively cover up the Joi Ito scandal and has essentially side-stepped all responsibility. There are still students protesting and calling for his resignation.


Deep fakes are yet another prime example of how the writing is on the wall (i.e. it's clear that this technology can/will be used for nefarious purposes - esp. by our enemies) and yet mitigations - through regulation and enhanced standards - won't be pursued until long after the horses have left the barn.

I don't know how/what form this should take, but an older analogy might be how colour photocopier manufacturers embed microdots into each reproduction so counterfeit bills can be traced to the equipment that produced them.


The sorts of bad actors or 'enemies' who'd deploy deep fakes for advantage aren't typically discouraged by 'regulations'.

On the other hand, there is a reasonable amount of active research on both detecting current faking-techniques, and methods of adding cryptographic attestation from point-of-recording.

I don't believe "detection" can win in the end, as widespread detection-technology can generally be used to tune better fabrications.

So ultimately we'll have to rely on: "do we trust the specific chain-of-people-and-sensors-and-relays that brought this evidence to our purview?" And various kinds of constantly-applied cryptographic signing & timestamping can help with that, though interpreting the challenging cases will require a lot of abstract expertise. (So again, for most people, it may reduce to: "who do you choose to trust?")


Yes, we should not underestimate plain old social solutions. I remember when you used to answer the phone every time because caller ID didn't exist. Who answers their phone now? This just mainstreams the exceptional validation workflows that exist in finance for establishing identity and preventing phishing and other forms of identity attack.

We just won't be able to trust anyone is actually saying anything except after confirmation via non-repudiable channels.


I think this is a regional thing.

I'm from a rural community in Canada, and of course we answer the phone.

Just floors me to visit my mother-in-law in Texas; when we visited eight years ago, she'd answer the phone; now she doesn't answer her cell or her home phone unless it makes a distinctive ring. I'd hate to need to get ahold of her using someone else's phone.


Obviously I'm not your mother-in-law, but I also don't answer my phone unless it shows it's someone I know because 95% of calls I get are spam. However, I will check my voicemail. So far every legitimate caller has left one to my knowledge.


That is pretty strong selection bias however...


I don't think detection would really help even if we had it. Even now, most fake-news stories are obviously fake. They have no subtlety; they don't fool anybody paying even the faintest shred of attention.

They'll be used on people who are already perfectly happy to believe fake printed quotes from obviously unreliable sources. When challenged, they'll refer to conventional partisan media citing those sources as proof. Ordinarily showing video would increase their certainty, but they're already absolutely certain. It'll just be a more entertaining way of delivering it to them.

Detection, no matter how definitive, can't deter that. They're already perfectly happy with their trusted sources.


I wonder how long it will be before someone stuffs a crypto module into the same package as a PDM microphone to provide a digitally signed audio stream.


This comment kinda scares me, because it is exactly that panicky reaction that makes all these stupid regulations come to life, and you are reminding me that it's not really "just stupid politicians", but in fact scared, easy-to-manipulate crowds behind them.


What if we put the regulations and enhanced standards on to the people who we are most worried will abuse our trust? We'll never be able to stop a malicious actor from Deepfaking our leaders and spreading disinformation, but at the very least we should have something in place to make sure our leaders aren't the malicious actors using Deepfakes and react accordingly with punishments when they are.


So you think it would be better if politicians have a knee jerk emotional reaction like with 3d printed guns and start shutting down research and making it a felony to publish deep fake videos?


Another recent example where regulations were (relatively) quick to address some new tech/gadget: drones.

It was recognized by law makers that drones could be used for nefarious purposes and needed some form of regulation.

I'm not saying that drone registration thwarts illegal use, but at least something was done by regulators.


>I'm not saying that drone registration thwarts illegal use, but at least something was done by regulators.

Drone regulation has done precisely nothing to thwart illegal use, while imposing substantial cost and inconvenience on legitimate actors. The laws don't stop me from buying a heavy-lift drone from China, nor do they stop me from doing something nefarious with it. At best, the regulations have slightly reduced the risk of inadvertent airprox incidents, but they have been utterly useless in addressing the sorts of risks they were supposed to combat.


> at least something was done

That's an example of the politician's fallacy. Doing something can be much worse than doing nothing. Did drone registration actually improve anything?


That drones could be used for nefarious purposes was kind of obvious after years of deadly US drone strikes. That didn't seem to tip off regulators, however. Not until anyone could buy drones capable of carrying heavy cameras or other equipment anywhere.

Either way these regulations don't help much when people just don't care, as can be seen with all the drones flying too close to airports.


Pretty sure nobody was going to buy surplus/refurb Predators and fly them around the neighborhood.

When the recreational/commercial models got range and mass enough to get in the way of aircraft, that's when the first round of "whoa!" kicked in.

Still going to take more work to figure out reasonable controls.


People have been faking videos for as long as video has existed. Deepfakes is just a scary new word for what people would call "CGI" or "VFX".


> Deepfakes is just a scary new word for what people would call "CGI" or "VFX".

"Ryzen 3950x" is just a scary new word for "z80" or "6502".

Deepfakes don't provide any fundamentally new capability, but they do reduce the cost and time required to create a convincing fake video by multiple orders of magnitude. That completely changes the threat model, from "our propaganda rival occasionally releases a fake video that we have to debunk" to "our propaganda rival is producing thousands of fake videos every day and we can't even keep track of them".


> Deepfakes don't provide any fundamentally new capability

I'm not sure this is entirely true, although your larger point is quite good.

People have been faking video since the dawn of video, yes. But for several decades, a clear, high-res video of a specific person has been pretty much inviolate. Faking a steady closeup of a president or famous actress would have been unthinkable - no VFX hoax ever convincingly impersonated Nixon. As a result, bad actors had to resort to deceptive editing, low-quality "covert" footage, or very rarely an exceptional look-alike. Even Hollywood had to rely on old footage and rewrites when an actor died.

Today, we're right on the edge of changing that. MIT can revive the President, but not perfectly, and only to the standards of a '70s news video. Lucasfilm can revive Carrie Fisher, but not well enough to fool an alert viewer, and not in a natural/unedited setting. Within a few years, we might hit a level that tricks even the most acute viewer and is only caught by forensic analysis. We've been there with photographs for years, but it's new ground for video.


> no VFX hoax ever convincingly impersonated Nixon

Because nobody really cares about Nixon. People fake UFO videos because they're clickbait. You can hit and run and make $10,000 on YouTube revenue before someone debunks you.

The reason political campaigns aren't using VFX to make their opponents look bad is because someone would eventually figure it out and the backlash would be immense. Additionally, it's not even necessary. People do fine planting conspiracy theories and taking advantage of people's disinterest in fact checking to say whatever they want, all without investing any effort in video production.

I predict that deepfakes will be nothing but a series of amusing "I can't believe they thought they could get away with that" stories.


The point is that nobody talks about making altered photographs illegal, or having federal regulations around altering photographs, or anything remotely as ridiculous. But because it's video everybody is losing their minds.


To me, Tarkin felt much more natural than Leia in that film. Maybe people with more "official" facial expressions are in danger sooner.


We’ve had photographic deep fakes for decades now. How does this fundamentally change anything more than Photoshop already did?


We've also had video since before convincing photo edits were available.

So people learned that we shouldn't trust a photo of Martians, or a sound clip of the President confessing to murder, but should hold out for video. When footage of Rob Ford doing crack cocaine shows up, we believe it's real. And when an investigative reporter wants a verifiable record of something, they resort to a high-quality video.

Even if video deepfakes aren't any more realistic than image or audio fakes have been, they're important because there's no fallback. With perfect-accuracy photo and audio fakes, we'd have to be more skeptical. But with perfect-accuracy photo, audio, and video fakes, we'd suddenly be back to the pre-photography era when there was essentially no way for an untrusted source to convince you that something had happened. Deepfakes aren't flawless, we're not at that point, but the specifics of video are less important than the impact of having some medium which can't be convincingly faked.


Firstly, Ford was already known for that behavior; the bar of legitimacy was low. The famous early Nazi fakes, looping rows of tanks and troops to suggest greater numbers, were indeed effective, and nothing new. We have been in this space for nearly a century already.


"Video can be used to mislead" is nothing new, but I'll still argue that we're watching a very specific new door open. From the first days of high-fidelity video to the recent past, there has never been a time when someone could fake this type of content.

Faking a clear, unbroken closeup of a recognizable person has never been a serious option before this. Even when Hollywood had likeness rights and massive budgets, it resorted to rewrites and old B-roll footage when actors died mid-shoot. When people wanted to slander politicians, even state actors with Cold War budgets, they edited photos, faked audio recordings, or staged candid, hard-to-see footage. In the modern era of CGI and digital video editing, bad actors still resorted to misleading edits and out-of-context clips. This, though, is 20 seconds of clear, closeup video of an incredibly famous face. And the fake was good enough that even placed in an art exhibition centered on something that never happened, some viewers thought it was real footage.

I don't share the popular paranoia that deepfakes of politicians are going to start upending democracy; there have always been plenty of people fervently convinced by manipulative video, altered pictures, or simple "somebody said so" lies. What's new is the increasing difficulty of proving truth in the most trusted contexts. Validating even a very narrow claim like "at some point in the past, for some reason, the President said this exact sentence in front of a video camera" is increasingly difficult.


It won’t be that difficult, if it’s a digital video that’s cryptographically signed with the White House’s private key.
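The verification side of that is mundane; here's a minimal sketch using Ed25519 via the Python `cryptography` package (the key handling and filename are hypothetical):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The publisher signs the raw video bytes once with a guarded private key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    video = open("address.mp4", "rb").read()  # hypothetical file
    signature = private_key.sign(video)

    # Anyone holding the published public key can check the clip is unaltered.
    try:
        public_key.verify(signature, video)
        print("bytes match what the keyholder signed")
    except InvalidSignature:
        print("altered, or never signed by this key")

The crypto is the easy part; distributing the public key, and the fact that any re-encoding breaks the signature, are the hard parts.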


Most people believe whatever they see and hear. This turns out to be a hell of a business model.

The business model works on "most" people, but not all of them. You can't fight a Fox News with an Air America. It's been tried, more than once. When deepfake material comes to light at the expense of a Democratic president, rest assured, most people aren't going to bother demanding the White House's public key.

All they'll know is what they heard from the talking heads on their news channel of choice: "Well, I don't know, Sean, but a lot of people are saying it's real."


So why don’t we have the same issue with Photoshopped images? It’s because everybody knows Photoshop exists and is skeptical about photographs. The more and more deep fakes get published—first as tech demos and then increasingly as jokes—the more people are going to adjust to how this works. Some people are going to be credulous but that’s just the human condition.


Yep, this is my reaction too. People are conflating two fundamentally different problems. One is having the ability to know what's real, the other is actually knowing.

Ability is what interests me most. Faking this sort of content - a continuous, closeup shot of the President saying something - has basically never been possible before. If a total stranger had showed you this video in 1969, it would have been clearly authentic. Not clearly truthful, you wouldn't trust that the landing had really failed as opposed to e.g. Nixon recording speeches for both outcomes. But you could be pretty sure it wasn't an imposter or an edit or anything except the real Nixon on camera.

Deepfakes challenge that ability, but I don't think they'll destroy it. Official sources will sign video. Untrusted sources will use verified timestamping and other methods to prove specific claims about their footage.

Actual knowledge, on the other hand, can't be destroyed by deepfakes because we already don't have it. There are a hundred ways to lie to people - with photoshopped images, deceptive editing, or just lying and having people trust that the evidence exists somewhere. People who aren't easily fooled today will become skeptical about video footage too. People who are easily fooled are already fooled, have been fooled since before photography straight through to the present, and will just stay that way.

"Banning political deepfakes" or anything similar is not just a case of closing the barn door after the horses have left, but without ever getting the horses into the barn.


I seriously think the best reaction to deepfake technology is the same as Photoshop: let a bunch of sarcastic teenagers pirate the software for it and use it to make jokes. Wise people will get wise to it quickly enough, and the gullible will always be with us.


> So why don’t we have the same issue with Photoshopped images?

We do. Even today, very little of what you see in media is entirely real. The alterations are hardly ever as overt as Trotskyites being airbrushed out of Party publicity photos, but that doesn't mean they aren't being done.

Maybe an African-American person ends up looking a little blacker than they really are if they're being accused of a crime, or a little whiter if they're running for office. Maybe they just sound a little more or less "ethnic" depending on whether the press wants them in office or in jail. Or a video is sped up and slowed down at just the right moments to turn a journalist's uncomfortable question into an unprofessional partisan attack.

IMHO we can expect more of these cognitive shenanigans as the technology improves. Manipulation will be performed not just on the subjects in question, but on the contexts in which they appear.


"We've had computers for decades now. How does this fundamentally change anything more than the IBM 360 did?" -- investor being asked for funding in 1978 by a couple of hippies in Cupertino


Do you actually have a response or are you just being contemptuous and dismissive to amuse yourself at my expense?


Well, if you put it that way... :-P

It's well understood that quantity -- and availability to the masses -- has a quality all its own.

I do believe we're facing an arms race, in which purveyors of synthesized bullshit will likely fight the defenders of truth to a draw at best, and more likely win. No disrespect intended but I don't agree that your Photoshop analogy is valid.


Photoshop doesn't have a "put this person's face on this other person's shoulders"-button that anyone can use.

Creating convincing deepfake videos is still a lot of work, but a couple of years ago altering one's face in live streams also took much more effort than pressing a button in a free mobile app.


What regulations / standards would actually stop these things from getting out though?


The only answer here is DRM-esque file signing. Any attempt at detecting fakes will only work while the tech is young. Of course this requires a web of trust and verification techniques to be applied, since videos are often re-encoded.


Ok, I've signed my cellphone footage of a crime being committed. Now what? How does that prove my footage was taken where I said, when I said?


I was thinking more along the lines of hardware manufacturers implementing a secure enclave style of signing files as they're recorded. The sig could be tagged with location/date if available, but at least the video would be verifiable. The web of trust would have to exist for reprocessors and media outlets who like to edit footage.
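A rough sketch of what such a device-signed record could look like, with Ed25519 standing in for the enclave key (all field names are assumptions):

    import hashlib
    import json
    import time

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # in reality, fused into the enclave

    def sign_recording(video_bytes, lat=None, lon=None):
        """Bind the footage hash to capture metadata and sign the whole record."""
        record = {
            "sha256": hashlib.sha256(video_bytes).hexdigest(),
            "captured_at": time.time(),
            "location": [lat, lon],  # only if a GPS fix is available
        }
        blob = json.dumps(record, sort_keys=True).encode()
        return record, device_key.sign(blob)

    def verify_recording(public_key, video_bytes, record, signature):
        """Raises InvalidSignature on a forged record; then re-check the hash."""
        public_key.verify(signature, json.dumps(record, sort_keys=True).encode())
        return hashlib.sha256(video_bytes).hexdigest() == record["sha256"]

Signing the metadata together with the hash is what lets the signature vouch for the location/date claims, not just the pixels.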


>at least the video would be verifiable

Nope, it'll just give well-executed fakes a veneer of legitimacy. A state-level actor will have easy access to the signing keys. Non-state-level actors could potentially extract them from a device or bribe an engineer at a third-tier Chinese OEM. A reasonably competent hardware hacker could desolder the CCD from a signed device and feed whatever video signal they like into it.


You mean for literally every piece of video taken everywhere...


This comment is so egregiously out of touch I hesitate to even respond at all, let alone to do so civilly, however in the interest of discussion I'm going to try to.

Which "enemy" do you envision would use deep fakes for a nefarious purpose, yet will stop short once they realize there are - gasp! - regulations around them? And as far as I'm aware, at least in the United States, there are no regulations requiring the use of microdots in printers or copiers, they're done of the manufacturers' own accord to aid law enforcement.

The idea that we should have ill-informed, knee-jerk regulations in reaction to every mildly upsetting technological fad is antithetical to everything the vast majority of technologists believe in and stand for.


I get the point you're making, but I think you're reacting a bit harshly. We don't need one regulation that solves all our problems. We can have 'defense in depth'.

Current laws that put microdot marking in color copiers will not deter operations with huge counterfeiting resources, like state-backed ones. But they do prevent the local meth junkie down the road from making a bunch of fake 20s to get his next fix.

At the same time, the paper we use for currency is also highly regulated. It's not impossible to get something similar, but it's not easy (and not cheap).

The point of all these layers is to prevent casual and common crimes. By doing that, you can spend your resources on the larger operations.


It just reminds me of my 85-year-old grandfather, who still in 2019 maintains that photocopiers should be banned for private ownership because people can make illegal copies of documents on them.


You said you'd try to respond civilly, but your comment is openly hostile and this takes away from your valid core point.


As far as technical responses, there are at least four different goals to pursue regarding fakes:

1. Origin tracing. This is the microdot example: it's not meant to catch deepfakes, but link nefarious instances to their source. But the existing technical options here seem to be a mix of privacy-eroding (printers are dumb compared to phones/computers, and you'd have to prevent or ban sharing non-marked media) and ineffective (bulk color copying requires physical access to a device that's hard to make or modify; deepfakes can be constructed on a server in another country, or have their identifiers scrubbed after the fact).

2. Reactive, general verification. This is just an arms race between fakers and observers, like catching art forgers. Existing fakes have tells like a modified 'halo' around faces. Right now we only see manual checks, but major content hosts like YouTube could flag "suspected deepfake" like they do "music copyright strike". (Or hopefully better than that...) But it depends on staying ahead of fakers, and once the tells are too subtle to simply watch for, it will only work when hosts or viewers choose to validate content.

3. Proactive, general verification. This corresponds to hard-to-implement, easy-to-verify security features like UV watermarks on money or prescriptions. But those rely on controlled supply, and decades of DRM failures tell us that digital fakes are much easier to make and safer to pass than physical ones. I don't expect this to expand beyond closed groups like news orgs giving out auto-watermarking cameras.

4. Specific authentication: not eradicating fakes but proving certain videos are legitimate. This is the most plausible, interesting category. We can't even prevent manipulative photo edits today, so we're unlikely to prevent manipulative deepfakes, but today we can prove specific aspects of specific images are legitimate. People will still believe fakes and lack proof of some real events, but this prevents a more fundamental transition to a "post-truth" era; we'll still have known-good records of key events.

We've had low-tech authentication since the dawn of photography - think of hostage photos taken with a daily newspaper to prove "this image is newer than this day". As photo editing emerged, steganography developed to catch out altered elements. Cryptographic signing took that further, allowing us to prove that an image is unaltered from a specific keyholder. We even have the reverse of the old newspaper photo; publishing a hash or encrypted file lets us date an image back to a specific time without having to actually release it.

Proving that a file is authentic to the world, not just an owner, is trickier. But we already have some steps: deepfakes take time, so any livestreamed video is not being edited that way after the fact. Authenticating something time-specific like a Presidential speech would only require combining that rapid turnaround with proof that the video wasn't prepared in advance; until on-the-wire editing becomes convincing a newspaper in the background would suffice. Quite likely we'll see more complex arrangements eventually, like trusted hosts that issue random values and demand their use in rapid responses.
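That last arrangement is essentially a challenge-response freshness proof. A toy sketch under those assumptions (names invented): a trusted host issues a random nonce, the responder binds it into the footage (say, by reading it aloud on camera; simplified here to hashing it with the video bytes), and the host accepts only answers that return faster than a convincing fake could plausibly be made.

    import hashlib
    import os
    import time

    def issue_challenge():
        """Trusted host: a random nonce plus the time it was issued."""
        return os.urandom(16).hex(), time.time()

    def respond(nonce, video_bytes):
        """Responder: bind the nonce to the freshly captured footage."""
        return hashlib.sha256(nonce.encode() + video_bytes).hexdigest()

    def accept(nonce, issued_at, video_bytes, answer, max_seconds=60.0):
        """Host: correct binding, returned before a convincing fake is plausible."""
        fresh = time.time() - issued_at <= max_seconds
        bound = answer == hashlib.sha256(nonce.encode() + video_bytes).hexdigest()
        return fresh and bound

The security rests entirely on the time budget: once on-the-wire editing gets fast enough, the window has to shrink accordingly.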

None of this is going to stop people from believing fakes, but nothing ever has. What's more significant is whether we maintain the ability to create records which can be verified and trusted.


Sadly, they do not really need deepfake technology to spread fake news.


"can/will" - deception of this magnitude has already been seen: Adnan Hajj.


Naive question: won't any deepfaked video leave some pretty readily detectable artifacts in the underlying video file?


If they are visible to the model, then these artifacts could also be generated.


I mean the model producing artifacts that wouldn't be found in the genuine article.


that's an issue with electing presidents and powerful people that are over 70 years old lol


I'm curious to know why you feel like regulation and standards are the solution here. It seems fairly clear that the propaganda metagame heavily favors disinformation and is only getting better at creating disinformation products. If we accept that premise, then it seems like the real problem is that we're still trying to preserve the idea that the internet is a tool for learning. Certainly parts of the internet are, but if the bloggers vs. professional press are anything to go by then it seems like the most practical solution is to establish lists of trusted entities and the contexts in which they're trusted. The managing of lists is regulatory in a sense but the trust side of the equation seems more social than bureaucratic, people need to be much less trustful of the internet.


We should be less worried about overt propaganda like DeepFakes and more worried about the assumptions baked into the media of all imperialist nations, the misdirection, and the selection of "what" to report on. States have been lying to us since time immemorial, and they don't need fancy video evidence to do it.

But this is a cool demo, and since the Genie is out of the bottle, this can be a great tool. You could use it to force those who speak in public the most to be accountable for the things that they don't say, or equivocate on, by making videos of them saying it, and forcing them to go on the record denying the video, in contradiction to their established (unspoken) position.


> States have been lying to us since time immemorial, and they don't need fancy video evidence to do it.

True, but now they'll be able to drive any narrative with as much A/V 'evidence' as they want. Lies by omission or one sided reporting are dangerous in their own right, but challenging that is different from challenging fake evidence.

It's hard to say "It wasn't me" if you're on tape/film. You'd have to have experts argue over validity but the public doesn't have the attention span or trust to follow that. Deepfakes have the potential to be extremely damning/damaging to public image/reputation in a way that biased reporting never did.


What is a non-“imperialist” nation, and what makes you think their media is any better?

The rest of your comment I agree with, but attributing it to “the media of imperialist nations” instead of just being a property of media organisations in general, is wrong.


I mean states which are ruled democratically rather than by elites. This means that the implicit bias in a democratic media is that of serving the people and exposing truth, rather than covering for corpo-fascists and state-sponsored terrorism. There's such a thing as genuinely good reporting and integrity in journalism (even in authoritarian states); it just needs to also be free of coercion and aware of the frame of reference it exists in.


I was fourteen once too, kid.

I know the world looks simple when you’re out there forming your very first political opinions, but it’s really not. The world isn’t some simple struggle between good guys and bad guys, it’s an incomprehensibly complicated overlap between lots of people acting in response to various incentives that they themselves don’t understand.

I recommend keeping your mind open and your mouth shut as you learn a bit more about how the real world works; hopefully before you’re old enough to vote.


Personal attacks will get you banned again. Would you please review the site guidelines and stick to the rules?

https://news.ycombinator.com/newsguidelines.html


It still looks wrong. There's still some of the floating/stuck-on head feeling about it. Or is that just because we already know it's fake from the headline? (... and the way that the flag moves is totally wrong .../s).


You’d have to be paying real close attention to notice that, which you are primed to do because you’ve been told it’s a deep fake. You’re probably also more of an expert than most.

But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?


> But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

The technology to fool someone who has the TV on in the background has existed forever. You could do it with a stunt double. The problem is fooling everyone such that "this video is faked" doesn't become a bigger story than the fake video in the first place.


Some people would demand fake videos supporting their point of view be taken seriously.

Also, it isn't difficult to imagine a future where people are criticized for correctly identifying deep fakes.


Exactly. We've seen this in responses to (for example) the Rolling Stone fake sexual assault story. Many people said "the problem is real even if this story may not be."

So people may push fake stories to make what they believe to be a justified point.


> Also, it isn't difficult to imagine a future where people are criticized for correctly identifying deep fakes.

I didn't understand your scenario. Can you expand on that?


Saying "the Nixon video is fake" in a community of Nixon truthers will get you verbally attacked? It seems extremely likely.


It's already a common troll[1] to just post "FAKE" in anything where the evidence is difficult/impossible to verify. There's a whole subreddit, /r/NothingEverHappens, devoted to making fun of this troll, and adding politics/culture war to the mix will just make the backlash to people who call "FAKE" that much worse.

[1] "Troll" as in "derailing tactic used by people who don't know or care if it's really fake or not" simply to be extra-clear. In politics, throwing out chaff and using derailing tactics to make it effectively impossible to have certain discussions in open fora can be a good way to prevent inconvenient ideas from being spread, in addition to the usual tactics around sowing uncertainty and confusion among the enemy.


>But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

I'm not convinced deepfakes will be that big of a problem because people already believe things that aren't real (and don't believe things that are real) without the help of technology.


> people already believe things that aren't real

That's the problem: these people can now parade proof when there isn't any, perhaps even rally to garner a majority and dilute out actual facts.

> (and don't believe things that are real)

For instance, refusing to acknowledge deep-fakes despite being labeled as such...?


The point I was going for is that "proof" already doesn't work on lots of people, that's the real issue. Information literacy is an important skill that much of the populace lacks.


But now people will also have more reason to disbelieve things that are true.

Imagine if Trump could claim that the "Grab em by the $#@%" tape was a fake, or if Biden could do the same about the tape of him bragging about getting the Ukrainian prosecutor fired.



Real life is not exactly preparing people for spotting fakes these days. The bizarre is real, the plausible will likely get a pass.


> But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

This won't change much compared to now. There are videos of Trump saying something, then in an interview he denies he ever said that. People believe that he never said those things and real proof is dismissed. What I'm saying is, people believe lies even when there's no proof and when there's video proving that what was said is a lie.


I don't agree. If I didn't know that was a deepfake, there is no way I would guess that it is. It's more than good enough to convince me.


I'm with you. If this article had been titled "unbroadcast footage of Nixon speech found in archive," I would've bought that headline 100%.

This is too good.

This is going to cause problems. Many, many problems.

We already have issues with what is real and what isn't, when you can still trust video for the most part. I am not excited to see what this does to society.

It's one of those - just because we can, does that mean we should - sort of scenarios.


Absolutely agreed. I'd really love to talk to someone who works on stuff like this, from all those companies mentioned in the article, and ask them why they continue to advance the state of this art. Do they think it is inevitable? Are they just integrating many separate features used to create entertainment, like CGI/dubbed movies?

We’re opening Pandora’s box like a kid on Christmas morning.


That's true, but you have to also realize that Nixon looks wrong on TV in general. Maybe they did too good of a deep-fake. ;)


If you're not old enough to know what Nixon was really like, then you don't have that comparison. You wouldn't suspect it was a fake without being told.

However, there are some clear giveaways that may or may not stick around, which prime at least me to tell a deepfake from the real thing: his movements do not line up with the way muscles work in the human body. Moving in a single direction works, but the way he bounces back would cause a lot of neck strain, so no one would do that.


Yeah the acting is off. The mouth doesn't seem to carry the same gravitas and tension that his eyes do.

Also, there's a line across his cheek in the closeup, and the bottom-left corner of the mouth fades into a blurry mess once or twice. The deepfake process generates blurry results where the blending happens.


Don't forget the other side of the coin:

The President really does give a speech or make a comment, but 50% of the electorate is sure it's a deepfake.


No one says that deepfakes are production-ready today... but given the speed of progress, they're going to be in FAR less time than it takes for our social and political systems to mount a response, so there are going to be some fucked-up consequences.


I agree, I could tell. The shadowing was wrong; he looks like a bobblehead. That being said, it was too close for my comfort... it won't take long before they can fool anyone.


Both the head-motion and the voice seemed a little jumpy/choppy to me, reminiscent of Max Headroom.

Do authentic TV recordings of Nixon have that same quality?


The disparity between his speech and head movement makes it look like he's suffering from Parkinson's disease.


Honestly, even knowing it was fake, I had a hard time convincing myself it was fake. My only clue was that the way his chin intersected with the collar looked a little off.

But I’ve seen things look off like that in real life under certain lighting conditions so I would’ve waved it away.


The wider shot definitely had an uncanny valley feeling for me. His head movement seemed unnatural.

The tight shot, however, was really good. I'm not sure I'd think it was fake if I wasn't primed for it.


Part of me feels that the widespread proliferation of deep fakes can potentially be positive. Seeing will no longer be believing and society will be forced to look at things with a more critical and skeptical eye or with a higher level of diligence. Value should also shift back to more legitimate sources. The transition will certainly suck though.


Make-A-Wish Foundation can really benefit from this technology by sending these "heartfelt" deepfake videos of famous celebrities and idols to dying kids, hoping to cheer them up. Is it an ethical thing to do? You be the judge!


This is fantastically evil. I think you win the dangerous idea of the day award.

It reminds me of the market for answering machine messages, but here you will have a large catalog of celebrities and you can create a personalized video message of them talking about you or someone you know.

A con artist could also buy that to pretend they know a certain person.

So the website would work like this: you pick a celebrity, you pick a theme/context/environment, like Skype call, handheld phone video, at home webcam, etc. and then you pick the message.

You can have actors with a similar build to the target celeb acting out specific scenes, to make the scenes more unique but still reusable. Payment options differ based on the exclusivity of the acted scene.


What if that's what the startup Cameo is actually offering? https://www.cameo.com/

:-)


In a thread full of awful futures, I find this one to be the most depressing.


Hallmark shares surge on the introduction of the Custom Celebrity Holiday Greetings Digital Card Series.


More likely it will further reinforce echo chambers, so that people will truly only see what they want to believe and nothing more. Anything they disagree with, or that is inconvenient to their worldview, is a "deep fake."


What’s the difference from today’s “fake news” outrage over absolutely real news?


The difference is that now people will have video proof that their belief is true, and will know any counter-evidence is fake.

Today some people can be persuaded by evidence to abandon a faulty belief. That number seems destined to dwindle in a world where evidence is increasingly malleable.


You're spot on. It's going to be hard.

The complete breakdown of privacy described in 'The Light of Other Days' really changed my worldview. I am and will remain a proponent of personal privacy, but the world isn't going to respect that. We need to seriously start considering taking back some of our power. Example: UK CCTV footage should be public domain. Knowledge of who accesses that footage, public domain too. Then add public education about the dangers of performing activities in public. That someone might use the footage to steal your credit card shouldn't be a reason to hide this data. That someone could stalk another individual through the network isn't an excuse to hand the data over to the government.


>Seeing will no longer be believing and society will be forced to look at things with a more critical and skeptical eye or with a higher level of diligence.

That's not what will happen. What will happen is that people will consider video which corroborates their biases to be legitimate, and video which contradicts their biases to be fraudulent. Bear in mind that skepticism is already on the rise - the web is rife with it, to the point that any news source whose reporting isn't sufficiently paranoid, cynical or piss-taking is dismissed out of hand as likely propaganda, yet this widespread mistrust in just about everything hasn't resulted in a rise in critical thinking or due diligence, rather the opposite, because it's easy for people to live in self-perpetuating alternate universes with multiple positive feedback loops provided through the web reinforcing their filter bubbles.

>Value should also shift back to more legitimate sources.

It might, if anyone believed that legitimate sources existed anymore outside of 4chan, Reddit and Comedy Central. Unfortunately it seems as if we as a society have decided to abandon the premise that objective truth exists, as the world around us is fed to us more and more as abstractions by untrustworthy arbiters. Deepfakes aren't going to help solve the problem of who can be trusted to "legitimize" truth in a "post-truth" world.


That’s awfully optimistic. Even when seeing shouldn't be believing, people still will.


That's only the case if making the fakes becomes a very easy and common thing to do. I think more likely the case will be that certain already-powerful groups will have more ability to make these fakes and thus actually make the balance of information influence even worse.


There are already very good porn videos with celebrities, probably not as good as this, but it's just a matter of time. The barriers to entry are really low; you don't need the already-powerful groups.


More likely everything even legitimate things will be 'poo pooed' as fake. Believing in nothing is as dangerous as believing everything. It's both a form of credulity.


> Seeing will no longer be believing and society will be forced to look at things with a more critical and skeptical eye or with a higher level of diligence. Value should also shift back to more legitimate sources.

Not to drag the thread in a political direction, but that's what I thought when Trump won the Presidency. Rather than a boost to our cognitive immunity, though, we're only seeing stronger polarization.

> The transition will certainly suck though.

Exactly, only there's no sign of an end to the 'transition.' Today, the common person is more empowered than ever before to ignore what they don't want to hear.


I find this as interesting as I do worrying. Given our tenuous grasp of shared facts and basis for (political) reality these days, making videos like this just seems irresponsible.

We don't need any more fuel for the fire: "They faked this Nixon Video, so they could have faked <X> too!"


Well, in this case, they would be correct.

Finding a reliable way to exchange trusted information is a central problem of our time.


There's already an outbreak of "fakes" by the simple technique of misleading editing:

https://www.independent.co.uk/news/uk/politics/corbyn-ira-vi...

https://www.mirror.co.uk/news/politics/tories-release-anothe...


And it doesn't even have to be sophisticated editing. Slowing down a video of a person speaking is enough to fool a lot of viewers into thinking that person is drunk:

https://www.washingtonpost.com/technology/2019/05/23/faked-p...


Is there any reason it should be a worse problem for video than it's been for print over the last 20 years?


"I saw it with my own eyes" is inherently more convincing than "someone told me [in writing] how it is". It will be naturally harder for humans to reject false recordings as deceitful.


And one wonders how muddied the waters will become, actual videos may be discarded on accusation of being deepfakes!


I imagine that future society will develop some system of "truth certification"; networks distributing "seals of approval / authenticity".

It may become a very different world psychologically - just like the medieval people would be psychologically, largely incomprehensible to us.


The speculation on deepfakes in politics reminds me of a particular period in Russian history known as the Time of Troubles, when numerous people (called False Dmitris) claimed to be the heir to the Russian throne. I believe a similar scenario happened a few times in the Ottoman Empire, but I'm having difficulty finding it.

https://en.wikipedia.org/wiki/False_Dmitry


I hadn't thought about how deep fakes could help us explore alternate history before. That's a chilling clip.


The audio is so realistic. I could have been fooled easily.


And how do you know this video isn't real history and all the other Apollo videos aren't deep fakes? Answer seems obvious now, but once the generation that was alive during Apollo dies off, this is a likely conspiracy candidate that will take root in some skeptical future generation.

We are increasingly living in an age of misinformation where it is becoming more difficult to tell what is true and what is not without significant effort.

I'm sure the CCP will use this tech extensively to shape history/propaganda however they want.


Moon landing conspiracies have already formed a very robust community on YouTube.


I am more curious how they did the audio than the video. From experience, it's not nearly as easy to clone someone's voice as you might think.

It might be that they just found a good voice actor. That's what most deepfake videos do now. But maybe someday it will be possible to press a button and hear a beautiful result.


The audio is also generated. We used speech2speech voice conversion for this, so it is indeed more involved than TTS, for instance, but also more expressive and controllable. Here's another example: https://youtu.be/t5yw5cR79VA


Possibly also generated. From the top of HN last week: "AI Clones Your Voice After Listening for 5 Seconds" https://news.ycombinator.com/item?id=21525878


OK, forget the implications of low-cost and convincing fake video footage for nefarious purposes. "Fake videos"—e.g., CGI—have been around forever! I don't suddenly think Star Wars is real because I saw a video of it.

At the end of the day, what cool tech for legitimate purposes! Hollywood VFX, training video or PR customization, etc. Nixon giving this speech is a cool look into "what-if", without the moral burden of someone trying to convince me that an alternate reality is the real one.


The point is that society currently takes video at face value (pardon the pun)

A clever hacker could replace the face on security footage of someone committing a crime with yours, put that file back onto the security-cam DVR, and remove any metadata indicating the file was ever modified. Let the police retrieve the footage from the DVR. Do you think a good lawyer could get you off the charges for committing that crime if you didn't have an alibi?

Evidence standards around video will need to change soon, and society isn't ready for this shift currently.


This is way too perfect. If I didn't know it was a deepfake, I would think it's real. Fake Richard Nixon looks and sounds genuinely upset.


As the technology to do this becomes exponentially cheaper, integrity (in all its ramifications) becomes exponentially more valuable.

- - - -

Sooner or later, nanotech will mature and all of this will escape the digital realm (largely photons and electrons) into "real" life (IRL) (protons and neutrons) and we'll have to deal with that.


We need ways to sign or embed signatures in videos that prove their authenticity. Not sure how such a thing would work, though...

I guess, in the world of blockchain, we could at least guarantee its origin.


Yup. Let's prevent anonymous video recording and dissemination...

I think that for any instinctive reaction like that you should ask: would authoritarian governments want this effect? If the answer is yes, you should think this through.


The common mistake with signing schemes is assuming that a signature actually means anything on its own. There is no need for it.

That would only make sense if you want to mark something as official and have authoritative sourcing. It would have a place in chains of command, for orders that were actually signed off on, but not as evidence that the content is true.

An officially signed video of the President punching Adolf Hitler in the groin, high-fiving George Washington, and then flying off into space could be cryptographically valid although obviously fake. It would only say that the President actually approved it, not that it was real.


This looks quite a bit more believable than the fake (dubbed) Nixon speeches in "For All Mankind".

Deepfakes could have legitimate applications in film & television production.



How long before all the moon landing hoax people start pointing at this and saying, "see, I told you so."?


There is literally a mirror on the moon you can get a reflection from if you know where to look. Moon landing hoaxers don't need evidence to believe what they believe.


I mean, it's a little more difficult than that. You need an expensive high powered laser only available at universities and precision calibration and timing equipment to fire and detect the return pulse.

It's not like you can just pull out a hobby telescope and be like "oh look, a tiny mirror next to an american flag"


> "Deepfakes can be used for many of the things we already know," [co-director of the Nixon film Francesca] Panetta says, "but also to create kind of alternative histories or have the potential to kind of rewrite history as well."

Movies, from Hollywood and elsewhere, have been convincing people of rewritten histories since their inception. How well known is the story of the real Spartacus compared to the Hollywood Spartacus? How many people's vision of the antebellum US South came from Gone With The Wind rather than from historical documents and interviews with former slaves and slave-owners? Birth of a Nation inspired the revival of a terrorist organization that lasted decades; Triumph of the Will painted the Nazis' rise as a matter of noble heroism triumphing over cowardice. Last week I argued with some epistemologically incompetent person who wanted me to watch an anti-vaccine movie about Gardasil, apparently unaware that YouTube videos are not really a publication venue used by medical researchers.

So, what should we do about it? Well, the Soviets had an answer: since movies were so powerful, people would be carefully vetted before they got access to the equipment needed to make them, and if someone made a movie with harmful contents anyway, they would go to GULAG. Is that the solution we want?


At the end of the day, our only defense against deepfakes is going to be to disregard the supposed identity of the speaker and only judge the content itself. Easier said than done.


Sounds like a good time to create a honeypot deepfakes service. Could charge a handsome fee for it, too, then authenticate deepfakes funded by political parties.


Somewhat fitting that they did it for the mother of all deepfake rumours - the Kubrick “never landed on the moon” conspiracy theory.


The Running Man movie is coming to life.


Welcome to the end of truth and fact.


Obligatory xkcd: https://xkcd.com/1484/


Eat your heart out, Bill Safire.


I was kinda hoping they would have done this version of the speech -> https://lifestyle.clickhole.com/this-speech-was-written-for-...



