Then of course the Epstein stuff becomes news months later and the cynical part of me can’t help but think they’re looking for a defense if they ever partied at his island or ranch.
Every type of video authentication system I've ever heard proposed can be defeated by simply pointing a 'trusted' video camera at a screen playing a fake video.
Video could be 'authenticated' by post-facto tracking of particles to validate both continuity and plausible velocity deltas based on a generated model of air currents. It wouldn't eliminate the possibility of deepfakes, but it would drastically increase the cost of production.
Similarly, I predict that cameras will increase in resolution past the point of utility for human vision: terapixel camera video would be extremely expensive to fake without leaving tiny artifacts.
This obviously wouldn't solve the 'no fake videos' problem, but it would at least mean 'this video isn't fake' for a select subset. There could be tiers of information in the future where this audited media is weighted higher.
That's a good point. But I'm imagining situations where the camera itself is a reasonably secure device. For example, a city CCTV camera or a police body worn camera.
If a video stream can be securely (cryptographically) matched to a particular device, with an unforgeable (blockchain?) timestamp, it would increase confidence in video evidence presented in court, for example.
Then when viewing the video in the future, you can check its hash and compare it to the one stored on-chain to be sure the video wasn't tampered with.
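That check can be sketched in a few lines of Python (a minimal illustration with hypothetical names; `on_chain_hash` stands in for whatever digest was anchored at recording time):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a potentially large video file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def matches_chain(path: str, on_chain_hash: str) -> bool:
    """True iff the file is bit-identical to what was anchored."""
    return file_sha256(path) == on_chain_hash
```

Note this only proves the bytes haven't changed since the hash was anchored; it says nothing about whether the original recording was genuine (the camera-pointed-at-a-screen attack mentioned upthread).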
The credibility of still footage has dropped before. I don't know where that leaves us for authentic media...
It's quite telling when the corruption and scandal is so large that only "fringe" outlets are able to report on it. Cf. Ronan Farrow and Weinstein.
Besides, for credibility they should have been writing about the obvious problems with Epstein; everyone but fringe outlets was basically silent over the last decade(s), and in some cases complicit (ABC News).
I don't know how/what form this should take, but an older analogy might be how colour photocopier manufacturers embed microdots into each reproduction so counterfeit bills can be traced to the equipment that produced them.
On the other hand, there is a reasonable amount of active research on both detecting current faking-techniques, and methods of adding cryptographic attestation from point-of-recording.
I don't believe "detection" can win in the end, as widespread detection-technology can generally be used to tune better fabrications.
So ultimately we'll have to rely on: "do we trust the specific chain-of-people-and-sensors-and-relays that brought this evidence to our purview?" And various kinds of constantly-applied cryptographic signing & timestamping can help with that, though interpreting the challenging cases will require a lot of abstract expertise. (So again, for most people, it may reduce to: "who do you choose to trust?")
We just won't be able to trust anyone is actually saying anything except after confirmation via non-repudiable channels.
I'm from a rural community in Canada, and of course we answer the phone.
Just floors me to visit my mother-in-law in Texas; when we visited eight years ago, she'd answer the phone; now she doesn't answer her cell or her home phone unless it makes a distinctive ring. I'd hate to need to get ahold of her using someone else's phone.
They'll be used on people who are already perfectly happy to believe fake printed quotes from obviously unreliable sources. When challenged, they'll refer to conventional partisan media citing those sources as proof. Ordinarily showing video would increase their certainty, but they're already absolutely certain. It'll just be a more entertaining way of delivering it to them.
Detection, no matter how definitive, can't deter that. They're already perfectly happy with their trusted sources.
It was recognized by lawmakers that drones could be used for nefarious purposes and needed some form of regulation.
I'm not saying that drone registration thwarts illegal use, but at least something was done by regulators.
Drone regulation has done precisely nothing to thwart illegal use, while imposing substantial cost and inconvenience on legitimate actors. The laws don't stop me from buying a heavy-lift drone from China, nor do they stop me from doing something nefarious with it. At best, the regulations have slightly reduced the risk of inadvertent airprox incidents, but they have been utterly useless in addressing the sorts of risks they were supposed to combat.
That's an example of the politician's fallacy. Doing something can be much worse than doing nothing. Did drone registration actually improve anything?
Either way these regulations don't help much when people just don't care, as can be seen with all the drones flying too close to airports.
When the recreational/commercial models got range and mass enough to get in the way of aircraft, that's when the first round of "whoa!" kicked in.
Still going to take more work to figure out reasonable controls.
"Ryzen 3950x" is just a scary new word for "z80" or "6502".
Deepfakes don't provide any fundamentally new capability, but they do reduce the cost and time required to create a convincing fake video by multiple orders of magnitude. That completely changes the threat model, from "our propaganda rival occasionally releases a fake video that we have to debunk" to "our propaganda rival is producing thousands of fake videos every day and we can't even keep track of them".
I'm not sure this is entirely true, although your larger point is quite good.
People have been faking video since the dawn of video, yes. But for several decades, a clear, high-res video of a specific person has been pretty much inviolate. Faking a steady closeup of a president or famous actress would have been unthinkable - no VFX hoax ever convincingly impersonated Nixon. As a result, bad actors had to resort to deceptive editing, low-quality "covert" footage, or very rarely an exceptional look-alike. Even Hollywood had to rely on old footage and rewrites when an actor died.
Today, we're right on the edge of changing that. MIT can revive the President, but not perfectly, and only to the standards of a 70s news video. Lucasfilm can revive Carrie Fisher, but not well enough to fool an alert viewer, and not in a natural/unedited setting. Within a few years, we might hit a level that tricks even the most acute viewer, and is only caught by forensic analysis. We've been there with photographs for years, but it's new ground for video.
Because nobody really cares about Nixon. People fake UFO videos because they're clickbait. You can hit and run and make $10,000 on YouTube revenue before someone debunks you.
The reason political campaigns aren't using VFX to make their opponents look bad is because someone would eventually figure it out and the backlash would be immense. Additionally, it's not even necessary. People do fine planting conspiracy theories and taking advantage of people's disinterest in fact checking to say whatever they want, all without investing any effort in video production.
I predict that deepfakes will be nothing but a series of amusing "I can't believe they thought they could get away with that" stories.
So people learned that we shouldn't trust a photo of Martians, or a sound clip of the President confessing to murder, but should hold out for video. When footage of Rob Ford doing crack cocaine shows up, we believe it's real. And when an investigative reporter wants a verifiable record of something, they resort to a high-quality video.
Even if video deepfakes aren't any more realistic than image or audio fakes have been, they're important because there's no fallback. With perfect-accuracy photo and audio fakes, we'd have to be more skeptical. But with perfect-accuracy photo, audio, and video fakes, we'd suddenly be back to the pre-photography era when there was essentially no way for an untrusted source to convince you that something had happened. Deepfakes aren't flawless, we're not at that point, but the specifics of video are less important than the impact of having some medium which can't be convincingly faked.
Faking a clear, unbroken closeup of a recognizable person has never been a serious option before this. Even when Hollywood had likeness rights and massive budgets, it resorted to rewrites and old B-roll footage when actors died mid-shoot. When people wanted to slander politicians, even state actors with Cold War budgets, they edited photos, faked audio recordings, or staged candid, hard-to-see footage. In the modern era of CGI and digital video editing, bad actors still resorted to misleading edits and out-of-context clips. This, though, is 20 seconds of clear, closeup video of an incredibly famous face. And the fake was good enough that even placed in an art exhibition centered on something that never happened, some viewers thought it was real footage.
I don't share the popular paranoia that deepfakes of politicians are going to start upending democracy; there have always been plenty of people fervently convinced by manipulative video, altered pictures, or simple "somebody said so" lies. What's new is the increasing difficulty of proving truth in the most trusted contexts. Validating even a very narrow claim like "at some point in the past, for some reason, the President said this exact sentence in front of a video camera" is increasingly difficult.
The business model works on "most" people, but not all of them. You can't fight a Fox News with an Air America. It's been tried, more than once. When deepfake material comes to light at the expense of a Democratic president, rest assured, most people aren't going to bother demanding the White House's public key.
All they'll know is what they heard from the talking heads on their news channel of choice: "Well, I don't know, Sean, but a lot of people are saying it's real."
Ability is what interests me most. Faking this sort of content - a continuous, closeup shot of the President saying something - has basically never been possible before. If a total stranger had shown you this video in 1969, it would have been clearly authentic. Not clearly truthful, you wouldn't trust that the landing had really failed as opposed to e.g. Nixon recording speeches for both outcomes. But you could be pretty sure it wasn't an imposter or an edit or anything except the real Nixon on camera.
Deepfakes challenge that ability, but I don't think they'll destroy it. Official sources will sign video. Untrusted sources will use verified timestamping and other methods to prove specific claims about their footage.
Actual knowledge, on the other hand, can't be destroyed by deepfakes because we already don't have it. There are a hundred ways to lie to people - with photoshopped images, deceptive editing, or just lying and having people trust that the evidence exists somewhere. People who aren't easily fooled today will become skeptical about video footage too. People who are easily fooled are already fooled, have been fooled since before photography straight through to the present, and will just stay that way.
"Banning political deepfakes" or anything similar is not just a case of closing the barn door after the horses have left, but without ever getting the horses into the barn.
We do. Even today, very little of what you see in media is entirely real. The alterations are hardly ever as overt as Trotskyites being airbrushed out of Party publicity photos, but that doesn't mean they aren't being done.
Maybe an African-American person ends up looking a little blacker than they really are if they're being accused of a crime, or a little whiter if they're running for office. Maybe they just sound a little more or less "ethnic" depending on whether the press wants them in office or in jail. Or a video is sped up and slowed down at just the right moments to turn a journalist's uncomfortable question into an unprofessional partisan attack.
IMHO we can expect more of these cognitive shenanigans as the technology improves. Manipulation will be performed not just on the subjects in question, but on the contexts in which they appear.
It's well understood that quantity -- and availability to the masses -- has a quality all its own.
I do believe we're facing an arms race, in which purveyors of synthesized bullshit will likely fight the defenders of truth to a draw at best, and more likely win. No disrespect intended but I don't agree that your Photoshop analogy is valid.
Creating convincing deepfake videos is still a lot of work, but a couple of years ago, altering one's face in live streams also took much more effort than pressing a button in a free mobile app.
Nope, it'll just give well-executed fakes a veneer of legitimacy. A state-level actor will have easy access to the signing keys. Non-state-level actors could potentially extract them from a device or bribe an engineer at a third-tier Chinese OEM. A reasonably competent hardware hacker could desolder the CCD from a signed device and feed whatever video signal they like into it.
Which "enemy" do you envision would use deep fakes for a nefarious purpose, yet will stop short once they realize there are - gasp! - regulations around them? And as far as I'm aware, at least in the United States, there are no regulations requiring the use of microdots in printers or copiers, they're done of the manufacturers' own accord to aid law enforcement.
The idea that we should have ill-informed, knee-jerk regulations in reaction to every mildly upsetting technological fad is antithetical to everything the vast majority of technologists believe in and stand for.
Current laws that put microdot detection in color scanners will not deter counterfeiters with huge resources, like state-backed operations. But they do prevent the local meth junkie down the road from making a bunch of fake $20s to get his next fix.
At the same time, the paper that we use is also highly regulated. It's not impossible to get something similar, but it's not easy (and not cheap).
The point of all these layers is to prevent the casual and common crimes. By doing that, you can spend your resources on the larger operations.
1. Origin tracing. This is the microdot example: it's not meant to catch deepfakes, but link nefarious instances to their source. But the existing technical options here seem to be a mix of privacy-eroding (printers are dumb compared to phones/computers, and you'd have to prevent or ban sharing non-marked media) and ineffective (bulk color copying requires physical access to a device that's hard to make or modify; deepfakes can be constructed on a server in another country, or have their identifiers scrubbed after the fact).
2. Reactive, general verification. This is just an arms race between fakers and observers, like catching art forgers. Existing fakes have tells like a modified 'halo' around faces. Right now we only see manual checks, but major content hosts like YouTube could flag "suspected deepfake" like they do "music copyright strike". (Or hopefully better than that...) But it depends on staying ahead of fakers, and once the tells are too subtle to simply watch for it will only work when hosts or viewers choose to validate content.
3. Proactive, general verification. This corresponds to hard-to-implement, easy-to-verify security features like UV watermarks on money or prescriptions. But those rely on controlled supply, and decades of DRM failures tell us that digital fakes are much easier to make and safer to pass than physical ones. I don't expect this to expand beyond closed groups like news orgs giving out auto-watermarking cameras.
4. Specific authentication: not eradicating fakes but proving certain videos are legitimate. This is the most plausible, interesting category. We can't even prevent manipulative photo edits today, so we're unlikely to prevent manipulative deepfakes, but today we can prove specific aspects of specific images are legitimate. People will still believe fakes and lack proof of some real events, but this prevents a more fundamental transition to a "post-truth" era; we'll still have known-good records of key events.
We've had low-tech authentication since the dawn of photography - think of hostage photos taken with a daily newspaper to prove "this image is newer than this day". As photo editing emerged, steganography developed to catch out altered elements. Cryptographic signing took that further, allowing us to prove that an image is unaltered from a specific keyholder. We even have the reverse of the old newspaper photo; publishing a hash or encrypted file lets us date an image back to a specific time without having to actually release it.
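The "publish a hash now, reveal the image later" idea amounts to a simple commitment scheme, sketched below (a toy illustration with hypothetical names; the random nonce keeps the digest from being brute-forced when the content is guessable):

```python
import hashlib
import os

def commit(media: bytes) -> tuple[str, bytes]:
    """Publish the digest immediately; keep the media and nonce private."""
    nonce = os.urandom(32)
    digest = hashlib.sha256(nonce + media).hexdigest()
    return digest, nonce

def reveal_is_valid(media: bytes, nonce: bytes, published_digest: str) -> bool:
    """Later, anyone can confirm the media existed when the digest was published."""
    return hashlib.sha256(nonce + media).hexdigest() == published_digest
```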
Proving that a file is authentic to the world, not just an owner, is trickier. But we already have some steps: deepfakes take time, so any livestreamed video is not being edited that way after the fact. Authenticating something time-specific like a Presidential speech would only require combining that rapid turnaround with proof that the video wasn't prepared in advance; until on-the-wire editing becomes convincing a newspaper in the background would suffice. Quite likely we'll see more complex arrangements eventually, like trusted hosts that issue random values and demand their use in rapid responses.
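That last arrangement, a trusted host issuing random values and demanding their use in rapid responses, might look roughly like this (a sketch under stated assumptions: `WINDOW_S` is an assumed turnaround too short for convincing fabrication, and hashing the challenge together with the footage stands in for physically showing it on camera):

```python
import hashlib
import os
import time

WINDOW_S = 30  # assumed: too little time to fabricate a convincing response

def issue_challenge() -> tuple[bytes, float]:
    """Trusted host publishes a fresh random value and records when."""
    return os.urandom(16), time.time()

def bind(footage: bytes, challenge: bytes) -> str:
    """Prover binds the footage to the challenge."""
    return hashlib.sha256(challenge + footage).hexdigest()

def verify_fresh(footage: bytes, challenge: bytes, issued_at: float,
                 binding: str, received_at: float) -> bool:
    """Accept only a correct binding that arrived inside the window."""
    in_time = (received_at - issued_at) <= WINDOW_S
    correct = bind(footage, challenge) == binding
    return in_time and correct
```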
None of this is going to stop people from believing fakes, but nothing ever has. What's more significant is whether we maintain the ability to create records which can be verified and trusted.
But this is a cool demo, and since the Genie is out of the bottle, this can be a great tool. You could use it to force those who speak in public the most to be accountable for the things that they don't say, or equivocate on, by making videos of them saying it, and forcing them to go on the record denying the video, in contradiction to their established (unspoken) position.
True, but now they'll be able to drive any narrative with as much A/V 'evidence' as they want. Lies by omission or one sided reporting are dangerous in their own right, but challenging that is different from challenging fake evidence.
It's hard to say "It wasn't me" if you're on tape/film. You'd have to have experts argue over validity but the public doesn't have the attention span or trust to follow that. Deepfakes have the potential to be extremely damning/damaging to public image/reputation in a way that biased reporting never did.
The rest of your comment I agree with, but attributing it to “the media of imperialist nations” instead of just being a property of media organisations in general, is wrong.
I know the world looks simple when you’re out there forming your very first political opinions, but it’s really not. The world isn’t some simple struggle between good guys and bad guys, it’s an incomprehensibly complicated overlap between lots of people acting in response to various incentives that they themselves don’t understand.
I recommend keeping your mind open and your mouth shut as you learn a bit more about how the real world works; hopefully before you’re old enough to vote.
But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?
The technology to fool someone who has the TV on in the background has existed forever. You could do it with a stunt double. The problem is fooling everyone such that "this video is faked" doesn't become a bigger story than the fake video in the first place.
Also, it isn't difficult to imagine a future where people are criticized for correctly identifying deep fakes.
So people may push fake stories to make what they believe to be a justified point.
 "Troll" as in "derailing tactic used by people who don't know or care if it's really fake or not" simply to be extra-clear. In politics, throwing out chaff and using derailing tactics to make it effectively impossible to have certain discussions in open fora can be a good way to prevent inconvenient ideas from being spread, in addition to the usual tactics around sowing uncertainty and confusion among the enemy.
I'm not convinced deepfakes will be that big of a problem because people already believe things that aren't real (and don't believe things that are real) without the help of technology.
That's the problem: These people can now parade proof when there isn't any, perhaps even rally to garner majority and dilute out actual facts.
> (and don't believe things that are real)
For instance, refusing to acknowledge deep-fakes despite being labeled as such...?
Imagine if Trump could claim that the "Grab em by the $#@%" tape was a fake, or if Biden could do the same about the tape of him bragging about getting the Ukrainian prosecutor fired.
This won't change much compared to now. There are videos of Trump saying something, then in an interview he denies he ever said that. People believe that he never said those things and real proof is dismissed. What I'm saying is, people believe lies even when there's no proof and when there's video proving that what was said is a lie.
This is too good.
This is going to cause problems. Many, many problems.
We already have issues with what is real and what isn't, when you can still trust video for the most part. I am not excited to see what this does to society.
It's one of those - just because we can, does that mean we should - sort of scenarios.
We’re opening Pandora’s box like a kid on Christmas morning.
However, there are some clear giveaways that may or may not stick around, which primed at least me to spot a deepfake: his movements don't line up with the way muscles work in the human body. In a single direction it works, but the way he bounces back would cause a lot of neck strain; no one would move like that.
Also there's a line across his cheek in the closeup, and the bottom-left corner of the mouth fades into a blurry mess once or twice. Deepfakes generate blurry results where the blending happens.
The President really does give a speech or make a comment, but 50% of the electorate is sure it's a deepfake.
Do authentic TV recordings of Nixon have that same quality?
But I’ve seen things look off like that in real life under certain lighting conditions so I would’ve waved it away.
The tight shot, however, was really good. I'm not sure I'd think it was fake if I wasn't primed for it.
It reminds me of the market for answering machine messages, but here you will have a large catalog of celebrities and you can create a personalized video message of them talking about you or someone you know.
A con artist could also buy that to pretend they know a certain person.
So the website would work like this: you pick a celebrity, you pick a theme/context/environment, like Skype call, handheld phone video, at home webcam, etc. and then you pick the message.
You can have actors with a similar build to the target celeb acting out specific scenes, to make the scenes more unique but still reusable. Payment options differ based on the exclusivity of the acted scene.
Today some people can be persuaded by evidence to abandon a faulty belief. That number seems destined to dwindle in a world where evidence is increasingly malleable.
The complete breakdown in privacy described in 'The Light of Other Days' really changed my worldview. I am, and will remain, a proponent of personal privacy, but the world isn't going to respect that. We need to seriously start considering taking back some of our power. Example: UK CCTV footage; that should be public domain. The identities of those who access the footage should be public domain too. Then include public education about the dangers of performing activities in public. That someone might use the footage to steal your credit card shouldn't be a reason to hide this data. That someone could stalk another individual through the network isn't an excuse to hand the data over to the government.
That's not what will happen. What will happen is that people will consider video which corroborates their biases to be legitimate, and video which contradicts their biases to be fraudulent. Bear in mind that skepticism is already on the rise - the web is rife with it, to the point that any news source whose reporting isn't sufficiently paranoid, cynical or piss-taking is dismissed out of hand as likely propaganda, yet this widespread mistrust in just about everything hasn't resulted in a rise in critical thinking or due diligence, rather the opposite, because it's easy for people to live in self-perpetuating alternate universes with multiple positive feedback loops provided through the web reinforcing their filter bubbles.
>Value should also shift back to more legitimate sources.
It might, if anyone believed that legitimate sources existed anymore outside of 4chan, Reddit and Comedy Central. Unfortunately it seems as if we as a society have decided to abandon the premise that objective truth exists, as the world around us is fed to us more and more as abstractions by untrustworthy arbiters. Deepfakes aren't going to help solve the problem of who can be trusted to "legitimize" truth in a "post-truth" world.
Not to drag the thread in a political direction, but that's what I thought when Trump won the Presidency. Rather than a boost to our cognitive immunity, though, we're only seeing stronger polarization.
The transition will certainly suck though.
Exactly, only there's no sign of an end to the 'transition.' Today, the common person is more empowered than ever before to ignore what they don't want to hear.
We don't need any more fuel for the fire: "They faked this Nixon Video, so they could have faked <X> too!"
Finding a reliable way to exchange trusted information is a central problem of our time.
It may become a very different world psychologically, just as medieval people would be largely incomprehensible to us, psychologically.
We are increasingly living in an age of misinformation where it is becoming more difficult to tell what is true and what is not without significant effort.
I'm sure the CCP will use this tech extensively to shape history and propaganda however they want.
It might be that they just found a good voice actor. That's what most deepfake videos do now. But maybe someday it will be possible to press a button and hear a beautiful result.
At the end of the day, what cool tech for legitimate purposes! Hollywood VFX, training video or PR customization, etc. Nixon giving this speech is a cool look into "what-if", without the moral burden of someone trying to convince me that an alternate reality is the real one.
A clever hacker could replace the face on security footage of someone committing a crime with yours, put the file back onto the security-cam DVR, and remove any metadata indicating the file was ever modified. Then let the police retrieve the footage from the DVR. Do you think a good lawyer could get you off the charges for committing that crime if you didn't have an alibi?
Evidence standards around video will need to change soon, and society isn't ready for this shift currently.
- - - -
Sooner or later, nanotech will mature and all of this will escape the digital realm (largely photons and electrons) into "real" life (IRL) (protons and neutrons) and we'll have to deal with that.
I guess, in the world of blockchain, we could at least guarantee its origin.
I think that for any instinctive reaction like that, you should ask: would authoritarian governments want this effect? If the answer is yes, you should think this through.
That would only make sense if you want to mark something as official with authoritative sourcing. It would have a place in chains of command, for actually-signed-off orders, but it's not evidence that the content is true.
An officially signed video of the President punching Adolf Hitler in the groin, high-fiving George Washington, and then flying off into space could be cryptographically valid although obviously fake. It would only say that the President actually approved it, not that it was real.
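That distinction is easy to demonstrate. In the sketch below (an HMAC and a made-up key standing in for a real asymmetric signature scheme like Ed25519), a signature verifies regardless of whether the depicted events occurred; it only ties the bytes to the keyholder:

```python
import hashlib
import hmac

SIGNER_KEY = b"stand-in for the official signing key"  # hypothetical

def sign(video: bytes) -> str:
    """Attests that the keyholder released these bytes; nothing more."""
    return hmac.new(SIGNER_KEY, video, hashlib.sha256).hexdigest()

def signature_ok(video: bytes, tag: str) -> bool:
    """Detects tampering after signing, not fabrication before it."""
    return hmac.compare_digest(sign(video), tag)
```

An obviously fake clip signed with the key verifies exactly as well as a real one.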
Deepfakes could have legitimate applications in film & television production.
It's not like you can just pull out a hobby telescope and be like "oh look, a tiny mirror next to an american flag"
Movies, from Hollywood and elsewhere, have been convincing people of rewritten histories since their inception. How well known is the story of the real Spartacus compared to the Hollywood Spartacus? How many people's vision of the antebellum US South came from Gone With The Wind rather than from historical documents and interviews with former slaves and slave-owners? Birth of a Nation inspired the revival of a terrorist organization that lasted decades; Triumph of the Will painted the Nazis' rise as a matter of noble heroism triumphing over cowardice. Last week I argued with some epistemologically incompetent person who wanted me to watch an anti-vaccine movie about Gardasil, apparently unaware that YouTube videos are not really a publication venue used by medical researchers.
So, what should we do about it? Well, the Soviets had an answer: since movies were so powerful, people would be carefully vetted before they got access to the equipment needed to make them, and if someone made a movie with harmful contents anyway, they would go to GULAG. Is that the solution we want?