The Weekly World News served (serves?) up that sort of nonsense for years, but for the most part the media didn't go around falling for hoaxes, whatever the source. Probably a few deepfakes will spread like wildfire on social media and then the public will start wising up to the technology.
As it is, there are apparently more than a million people who actually believe QAnon is real, despite the fact that nobody is willing to put their name to the source.
The basic premise - some 4chan shitposter happens to have Q-clearance - is entirely plausible. The fact that anyone can post anything as QAnon also gives plausible deniability for any claim that turns out false, while any claim vague enough to turn out true is taken as evidence.
Conspiracy thinking is extremely widespread, and many conspiracy theories are widely believed, such as theories about the JFK assassination, the faked moon landing, or "vaccines cause autism".
It's entirely possible that a Secret Service agent accidentally shot Kennedy, and there are people on the record as saying so.
"We don't know" is not really a conspiracy theory. "We don't know, therefore X must have done it" is a conspiracy theory. Conspiracy thinking is the tendency to make these connections.
Is QAnon actually a highly placed government official (or someone with access to information about high-level functions and plans)? Given that all of their big predictions so far (arrests, etc.) have been false, I think we can pretty easily conclude no.
Are you someone who cannot resist coming up with a counter-argument no matter how thin? You might want to ponder how much value that adds to discourse in general.
Adding or removing cancer.
Image manipulation of medical diagnostics/data. Imagine a hospital hack where 50% of cancer patients' images are wiped clean and the unsuspecting other 50% have fake cancer injected in. The ransomware on this stuff is going to be absolutely insane to resolve and/or insure against.
It seems that such a move would be used to tip a power struggle within a government. If you have a government where this is already an issue, then the problem is more likely the government itself than the action created by the hack. Furthermore, such a large rise in cancer cases would be heavily researched, and the cause would be discovered, prompting hospital care facilities to start issuing hash codes with images to prevent manipulation, if that is not already being done.
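The hash-code countermeasure is simple to sketch: record a digest for each image at acquisition time, store the manifest somewhere the attacker can't reach, and re-check on read. A minimal sketch with stdlib `hashlib`; the filenames and byte strings are made up:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash the raw image bytes; any single-bit change alters the digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical scan archive: filename -> raw image bytes.
archive = {
    "patient_001.dcm": b"\x00\x01 original pixel data",
    "patient_002.dcm": b"\x00\x02 original pixel data",
}

# At acquisition time, record a manifest of digests. Ideally this is
# stored off-site or signed, so an attacker who alters the images
# can't also rewrite the manifest.
manifest = {name: sha256_hex(data) for name, data in archive.items()}

# Later: simulate an attacker injecting fake findings into one image.
archive["patient_002.dcm"] = b"\x00\x02 tampered pixel data"

tampered = [name for name, data in archive.items()
            if sha256_hex(data) != manifest[name]]
print(tampered)  # ['patient_002.dcm']
```

The weak point is exactly the manifest: if it lives on the same compromised system as the images, the attacker just regenerates it after tampering.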
As for politics and fact-checking, personally I just don’t care. If you do, then sign your damn video with a public key already and check whether a video matches it. That would require a new sort of signature that survives resize/rebalance/etc., but nothing unrealistic. As a bonus, video players could show the source’s certificate fields right below the video.
Signing "by Washington Post" to prevent fake videos with fake attribution wouldn't be "just signing videos". That requires a source-of-trust infrastructure like the CAs, and a display infrastructure like the green lock in the browser bar. Otherwise people will just read "by Waѕhington Post" and mistake it for "by Washington Post".
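The spoofing risk is concrete: the two "Washington Post" strings above differ by a single code point that renders near-identically. A quick demonstration with the stdlib `unicodedata` module (the `looks_ascii` check is a deliberately crude illustration, not a real confusables detector):

```python
import unicodedata

real  = "Washington Post"
spoof = "Wa\u0455hington Post"   # third letter is Cyrillic, not Latin

print(real == spoof)             # False: same glyphs on screen, different code points

# Show which character is the impostor.
for ch in spoof:
    if ord(ch) > 127:
        print(hex(ord(ch)), unicodedata.name(ch))
# 0x455 CYRILLIC SMALL LETTER DZE

def looks_ascii(s: str) -> bool:
    """Crude homoglyph check: flag any non-ASCII code point in a name."""
    return all(ord(c) < 128 for c in s)

print(looks_ascii(real), looks_ascii(spoof))  # True False
```

Real-world defenses (e.g. in browsers for IDN domains) use Unicode confusables tables rather than an ASCII-only rule, since plenty of legitimate names contain non-ASCII characters.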
I mean, the entire “problem” arises not because of deepfakes, but because that video thing still lives in a century that allowed “hacking” sites and taking over bank accounts via plain-text “secret” questions, or CVC codes clearly printed on the back of the card with no additional confirmation required to withdraw your money. That window should have been closed regardless of deepfakes.
If anything I think collation of data is the way to prove it along with physical evidence or lack of it. If the Washington Monument takes off with thrusters on video and it is still there we know it was faked.
If there is a scorched crater where the monument once was and the Washington Monument can be seen in orbit then it suggests someone engaged in a seriously reckless and expensive prank with a national monument.
The classic example of a tech fix is a hostage taking photos with the day's newspaper; it's a way to specify an oldest-possible date of a photograph. Similarly, you can prove that a video is not newer than some specific time by publicizing a hash of it. After that, format conversions, rehosts, etc. are all believable as long as they can be traced back to the same-hash original. And until deepfakes reach realtime speed, any specified-time event like a politician's speech can be authenticated just by posting a hash immediately after it ends.
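The hash-commitment idea above fits in a few lines. A sketch using SHA-256; the "publish" step in reality would be posting the digest somewhere timestamped and hard to retract (a tweet, a newspaper ad, a public ledger):

```python
import hashlib

def commit(video_bytes: bytes) -> str:
    """Publish this digest immediately after the event ends."""
    return hashlib.sha256(video_bytes).hexdigest()

def verify(published_digest: str, candidate_bytes: bytes) -> bool:
    """Anyone can later confirm a copy matches the committed original."""
    return hashlib.sha256(candidate_bytes).hexdigest() == published_digest

original = b"raw footage of the speech"   # stand-in for the real file bytes
digest = commit(original)

print(verify(digest, original))                 # True: the untouched original
print(verify(digest, original + b" one edit"))  # False: any alteration breaks it
```

As the comment notes, this only authenticates the exact original bytes; a transcoded or rehosted copy has to be traced back to a file that still matches the published digest.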
That isn't a full solution to authenticity; a video of some unexpected event could always have been prepared in advance. And trusted-third-source verification is tricky; I'd mostly believe a Periscope or ACLU Mobile Justice video is being shot live, but uploading a pre-faked stream to the service is hardly impossible. Even there, though, I expect we'll see technological solutions. Apps could issue steganography (or other directions, like when to set keyframes) to the client as they record a video, so that pre-altered video won't update the way it's meant to. And I'm sure a lot of other options exist also.
It's going to be an interesting time, but I think we'll see quite a few clever attempts to preserve both anonymity and source-independent trust.
I think we're going to see a lot of things like this, soon. Signing for ownership, definitely, but also lots of other tech tricks. We already have livestreaming, which prevents fakes in the moment. If you upload a video as soon as you take it, you can put a Google or Facebook vouched timestamp on the content. If you don't want to rely on their reputation, you can share a hash of the video content before a deep-faker would have had time to edit the content.
Timeless content like a video of police violence is tougher to prove, and fakes will be very hard to disprove, but the idea that convincing-looking fakes obviously destroy trust is a failure of imagination.
Surely the imaging companies are on to this by now, right? They should be signing the raw data as it comes off the sensor. Tesla does this for the input and the output, on-chip, if I recall correctly.
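A minimal sketch of sensor-level signing. A real camera would use an asymmetric on-chip key pair so verifiers never hold the signing secret; HMAC is used here only because it's in the standard library, and the device key is invented for illustration:

```python
import hashlib
import hmac

# Hypothetical per-device secret burned into the sensor at manufacture.
# (A production design would keep an asymmetric private key on-chip and
# distribute only the public key; this symmetric version is a stand-in.)
DEVICE_KEY = b"example-sensor-key"

def sign_frame(raw_frame: bytes) -> bytes:
    """Tag the raw readout as it comes off the sensor."""
    return hmac.new(DEVICE_KEY, raw_frame, hashlib.sha256).digest()

def verify_frame(raw_frame: bytes, tag: bytes) -> bool:
    """Constant-time check that the bytes are the untouched readout."""
    return hmac.compare_digest(sign_frame(raw_frame), tag)

frame = b"raw sensor readout"
tag = sign_frame(frame)

print(verify_frame(frame, tag))             # True: untouched readout
print(verify_frame(frame + b"edit", tag))   # False: any post-processing breaks it
```

Note the consequence raised in the next comment: any edit, including legitimate color grading, invalidates the tag, so the signed original would have to be archived alongside the published edit.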
So all post-production work ceases? Or do we have a situation where the original is published alongside the media-edition where they've applied a few filters and adjusted the colour space?
I'm not saying it's a bad idea, just interested in how it might work.
Presumably this format is going to be of interest to journalists and courts, at the very least, though courts are most likely going to be most interested in the simplest case - the provable original.
Interesting idea :)
So basically storing a lot of encrypted video from private areas, and trusting the camera makers to not install a backdoor.
I guess the only thing I don't like about this is that a camera maker could be bribed to create a deepfake to frame someone. Video evidence would be admissible in court only from trusted encrypted-camera makers, until one or two scandals bring a maker's trust down to zero.
Boomers would smash that share button so hard CA would experience another earthquake...
You don't need deepfakes to fool millennials on Facebook. You could just take statistics that could support your narrative and claim they do. They'll smash that share button so hard...
I've gotten really good at catching misleading statistics. In the areas where I have expertise I have a lot of practice with noticing reverse causation, un-normalized data, p-hacking, manipulated time windows, and so on. Every so often, I see someone cite evidence for a claim I strongly disbelieve that's really convincing, that looks clear and direct and immune to basically all the usual tricks. And it can take me an embarrassingly long time to remember that I should go and check the damn source. Because even in a world of fancy statistical tricks, there's nothing stopping people from making claims that simply don't appear in their source, or outright falsifying the claim. There's nothing quite as irritating as somebody claiming a "twenty year high" in something and citing a paper about a "twenty year low".
And I always wonder just how hopelessly screwed up my - and everyone else's - knowledge base is. If a "vaccines cause autism" claim gives me so much whiplash I check the source and find it's fake, how many plausible claims did I not run down? If I read a Vox article and barely catch a statistical mistake that invalidates the entire claim, how many stories have completely-invalidating flaws I don't catch?
Deepfakes might be important as a way to disrupt some of our strongest forms of proof. Something like an uncut high-fidelity video of a politician saying something atrocious is basically undeniable today. That can help convince people biased against believing the story, or let untrusted sources provide trustworthy contributions. But deepfakes are going to be a mere ripple on the total amount of bullshit; that's been high for decades, riding on totally disprovable claims.
Based on just a few tweets she appears to believe ancient aliens existed, satan worshippers hide their symbols everywhere, said satan worshippers occupy most positions of power and sacrifice children, "everything is energy", Nikola Tesla invented free limitless energy, the elites consume adrenochrome extracted from tortured children, all electricity comes from tall structures collecting lightning strikes, ancient cultures were able to utilize said "atmospheric energy", and humans are sprayed with aerosols released from aeroplanes.
That was just from the tweets of the past week. It's truly mind boggling to gaze into such a fractal of conspiracy theories.
that's... not wrong, but I guess for the wrong reasons.
In a way I already like deep fakes since they force people not to take anything they see on the net too seriously.
Essentially the problem is very similar to detecting artifacts in video and audio. Reverse steganography, if you prefer.
Run the output of the generator through some noise, lens distortion, video compression, downscale it to a 480p surveillance style video, and the discriminator is at a huge disadvantage.
Unfortunately I am not a crypto expert, but maybe a codec could check for signature pixels in every frame during playback and warn in realtime if no key signature is detected for any of the loaded frames, assuming the user has a matching verification key.
Either an attacker would maliciously blank or randomise the LSB which would show the entire video as unsigned, or the tampered portions would show up if and when the signature pixel chain is broken. I guess the issue would be around securely distributing verification keys that can verify both the signing identity and verify the correct sequence? But then that would put the ability to recreate the sequence in the hands of the attackers?
Oh and I guess variable bitrates could cause issues.
Damn this stuff is harder than I thought.
There's probably a billion ways this wouldn't work and a thousand existing solutions?
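For what it's worth, the signature-pixels idea can be toy-modeled: MAC each frame with its LSBs cleared, then embed the MAC bits into those LSBs. The key, frame size, and 32-bit truncation below are all made up, and as the thread notes, lossy compression would destroy the LSB channel in practice:

```python
import hashlib
import hmac

KEY = b"hypothetical-signing-key"  # assumed shared with the verifier
N_BITS = 32                        # how many MAC bits to embed per frame

def mac_bits(pixels):
    """MAC over the frame with all LSBs cleared, truncated to N_BITS bits."""
    cleared = bytes(p & 0xFE for p in pixels)
    digest = hmac.new(KEY, cleared, hashlib.sha256).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N_BITS)]

def sign_frame(pixels):
    """Embed the MAC bits into the LSBs of the first N_BITS pixels."""
    bits = mac_bits(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return out

def frame_ok(pixels):
    """Recompute the MAC and compare it to the embedded LSB bits."""
    bits = mac_bits(pixels)
    return all((pixels[i] & 1) == b for i, b in enumerate(bits))

frame = list(range(64))       # toy 64-pixel "frame"
signed = sign_frame(frame)
print(frame_ok(signed))       # True

tampered = list(signed)
tampered[40] ^= 0x10          # alter a non-LSB bit somewhere in the frame
print(frame_ok(tampered))     # False: MAC of the altered frame no longer matches
```

This also illustrates the failure modes raised above: blanking every LSB makes the whole frame verify as unsigned rather than tampered, and anyone holding the (symmetric) key can re-sign a forged frame, which is why key distribution is the hard part.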
Personally I find that perspective to be incredibly naive.
Roughly equivalent in effectiveness to "we should spread fentanyl everywhere; because it's so easy to OD, people will be afraid and stop using drugs." Nope, people will just start overdosing way more frequently.
Add to this that it doesn't solve the problem of deceptive editing, a la the Covington kid. The real problem here is still that people are by and large sheep who care not to read past a headline. If this sounds rude, perhaps it stems from my frustration with the issue. See case number one, mass shootings. As tragic as it is, you remain more likely to be struck and killed by lightning than shot in such an event. Most deaths involving guns are suicides. We still get several days of air-time for each, while all the while we have real problems: homeless populations, poor education, starving children in Africa. Focus on those first, and go by order of what affects your nation most.
By that point, just like with chess, humans are way behind. They can't tell anything for sure.
So no matter WHAT we do, we are going to be in a world where we can't trust video or audio evidence of anything. And when we can achieve deepfakes in realtime, you won't be able to trust that the person you're conversing with is really your friend.
At that point, people will voluntarily create cryptographic timestamps with trusted equipment, or with combinations of trusted devices (e.g. a phone + a beacon + a wifi hotspot), and then voluntarily share that (as a zero-knowledge proof) with whoever needs to know.
But if bots ever reach conversational level, and we have realtime deepfakes, it's game over essentially for trusting any interaction online with anyone.
1. Scientific evidence -- that's just words some other ape like yourself wrote down that you now believe.
2. Personal experience -- those are just memories of experiences you think you had, but you can't really be sure, you just ask your brain and it responds, once that stops working you never "had the experience".
3. Photographs and video -- well those are just captured images, and any image or data can be changed. There is no "sacred data" that can not be manipulated.
4. Trusted institutions: just more flawed apes that you hope are more trustworthy than those they are instructing, controlling, and / or in charge of.
5. Rules / Laws: pieces of writing on paper that we all choose to adhere to. History shows that people can begin deciding not to follow them at any time, they have no power in and of themselves, only people have power, and usually because they are willing to wield violence (e.g. police, military, resistance, militia).
In the end all you have is the certainty that you are indeed experiencing something, but living with the ego-shattering realization that nothing else is certain or under your control is a very large red pill to swallow.
So nothing changes and you have to get additional sources. Just like you should already.
People's perception of truth is already skewed. Imagine presenting a photoshopped picture to someone 10, 20, or 30 years ago.
There is no way out of this as the tech is out there.
Photography as evidence (what's apparently called the "truth claim" of photographs) is perhaps 100 years old; the Kodak Brownie provided mass access to quick-capture images with too much detail to easily fake, and the Leica enabled systematic photojournalism. Near-universal access to cameras is only ~30 years old, with cheap digital cameras and then cellphones making photo evidence near-mandatory for bold claims. Audio recording as evidence is ~70 years old, with portable tape recorders producing on-demand records of speech with hard-to-fake fidelity. Video recording as evidence also starts ~70 years ago, but became a systematic form of documentation only ~40 years ago with camcorders, and phones only reached widespread high-fidelity access about 20 years ago.
Meanwhile, photographic digital fakes have been plausible for about 20 years (less for human faces). Audio fakes are similar - extremely convincing fakery is newer (10 years?), but the baseline level of proof was never as high. And now video is less than a decade from compromise.
I don't think the simple loss of evidence will be quite as serious a crisis as some people expect. The era before "hard evidence" is still on the edge of living memory in the US, and areas with less access to expensive technology never entirely developed that expectation. We'll adjust back to weighting source more and fidelity less, though that comes at a price in centralized control. But we're in for a very rough adjustment period where easily-faked content is taken as clear truth.
Beyond that, there's still a major open question about the psychological impact of these fakes. We might adjust back to a society that doesn't rely on photo proof, but there's no clear precedent for how people will be affected by untrusted but gut-level convincing content. Stories of actors being harassed over unlikeable roles suggest that we may yet be pretty impacted by visualizations we know aren't accurate.
Because this is a long-term operation designed to make you question what your own eyes see. There will be a day when an extremely damaging video of a politician/celebrity comes out and our trusted media sources will say "we've determined this was merely a deep fake" and thereby dismiss it out of hand.
Even if it's a tempest in a teapot right now (I think this is a questionable assertion in 2019) it seems pretty safe to bet that it will be a big problem within 5 years. Maybe less.
Just planting the seed is enough.
An episode of The Secret History of the Future, From Zero to Selfie has a snippet interview with Hany Farid of Dartmouth, talking about the fake photo of John Kerry and Jane Fonda, in context of the 2004 US Presidential election.
"After the election, after Kerry lost, I remember listening to a radio news story, and they were interviewing somebody who voted for Kerry's opponent. The reporter asked, 'Why didn't you vote for Kerry?' and he said, 'I couldn't get that image of Kerry and Fonda out of my head.' And the reporter said, 'Well, you know that was a fake image,' and this is true. The guy said, 'I know, but I couldn't get the image out of my head.'"
near the 22:40 mark.
It should be obvious that the deep fakes that don't look wrong will be (wrongly) classified as "not deep fakes", so the premise of "deepfakes still look fake" is completely flawed. "deepfakes that look fake still look fake" isn't very useful.
The deepfakes that don't look fake are "real". And they already pass as such. And unless you have the original source material, you cannot tell.
So you agree that there are deep fakes people can't tell are deep fakes, but now you're saying that's OK because no one has used them in an impactful way yet?
I remember seeing examples from conflicts where video from another event was passed off by the mainstream media to spin the story. It was only called out because people found the real source and posted it, debunking what was shown.
I expect more or less the same process with deep fakes. Find the source material and then show how it was adulterated to produce the fake.
I'd be far more terrified if there were an AI capable of generating genuinely catchy/hilarious/original meme images. The "original" part is the hardest part of the equation, because even if you managed the "catchy/hilarious" part (which is already almost impossible), the novelty of the results would wear off quickly if the machine isn't capable of true originality.
To me the odd part is that these are just generally individuals tinkering with the technology using source material that wasn't intended to support a fake.