I'm pretty sure I know people who have been convinced by meme quotes. A headshot of a politician they don't like, with text overlaid, which they never said. People are outraged! And never bother to inspect the source.
Anything that makes it easier to lie about what someone said or did, or makes it harder to disprove... They're all politically weaponized, already.
Look at the "drunk Pelosi" video.
30 years ago, that might have been the case with doctored images; now practically everyone has a personal computer on which they can install Photoshop or a similar tool.
Technology is evolving but not in a vacuum, society's reactions also evolve in response. Today, many people interpret video to be "evidence" but those same people can interpret a photo to be a "claim" or perhaps some form of lower-confidence indication. Before photo manipulation was commonly known, I think photos were in a similar place as video - more trusted. Based on history, it's reasonable to expect video may follow a similar trajectory as photos to becoming less trusted in situations where it matters.
So, what happens when media types which were previously more trusted as evidence become less trusted? The same things that happened with print, audio and photos. Viewers will evaluate external cues such as the reputation of the publisher and corroborating evidence. The leading indicators that we should suspect deception will likely be similar. For example, how divergent the behavior depicted is from expectation, how contentious the surrounding context is and the existence of parties with an interest in creating such a deception.
This effect already happens with manipulation of intent through tricky video editing, for example deleting the rest of a reply to a question or even swapping in an alternate question. In the last decade I'd say the typical person is far more aware this is possible.
So, in the near term there may be some successful deception, but in the long term I expect the potential value of creating such deceptions will diminish and we'll arrive at a new "normal" much like the one we have now. The biggest long-term impact may be false claims of "doctored video!" from those who were actually caught on video doing something they didn't want seen by others. But as we already see now, those predisposed to believe whatever is shown is false will search for indications it's doctored. Those predisposed to believe whatever is shown is true will search for indications it's just more confirmation of what they already suspected. Either way, the existing reputation of the person shown, the distribution source, and the pre-existing knowledge of viewers will likely be more determinative than the media itself.
It's hard to predict actual effects. I don't think anyone could have foreseen that the primary use of still-image editing for manipulation is not the perfect crime of an elaborate fake, but a barrage of provocatively simple memes that don't even pretend to care about believability. The act of sharing is the message.
I really wonder which type of meme has the most influence on average: straightforward, outright lies like the one you've noted, or the more subtle, subversive social-commentary style. I'm a big fan of the latter; I think they're very interesting and underappreciated.
For example, this one - nothing more than a simple screenshot of Twitter, but to me this seems very persuasive: http://magaimg.net/img/80rb.jpg
Pointing out hypocritical grandstanding: https://i.redd.it/74n4exoy2g131.jpg
Media bias: https://i.redd.it/gtrq4xgemtx21.jpg
If Biden runs, it will be interesting to see how much airplay this meme gets: https://i.imgur.com/tqzGS6E.mp4
Laughing at the silliness of popular narratives: https://i.redd.it/a9wrex3ijc231.jpg
Just for a laugh: https://i.redd.it/i5bvml5m92131.jpg
Laughing at logical inconsistency: https://i.redd.it/rp9ydf0s88y21.jpg
An interesting way of looking at Brexit: https://i.redd.it/cs6o72p3dp031.png
Historical hypocrisy on border control: https://i.redd.it/983sm9wjley21.png
All of these are from t_d, so obviously one-sided. I'm sure a similarly impressive collection from the other perspective could easily be assembled, and it's not that uncommon to encounter otherwise intelligent people who have obviously had their beliefs shaped by these memes.
I'm now just about ready to unplug the whole thing and launch it at the sun.
To your point, there are people who are persuaded by assertions.
I personally find it gets even more entrenched when people believe they have seen the evidence with their own eyes. When they see a doctored photo, clip out of context, etc.
I'm reminded of the gun / water context photo:
Oh ya I love that picture, was trying to find it not that long ago with no luck. It does a brilliant job communicating how powerful propaganda can be.
If you're thinking such evil people should be in jail, don't worry, they are! They're the ones in South American prisons, using burners or stolen phones.
Edit: that was from Argentina, another one in Chile:
That can be very useful when you only need to establish trust once, but that's not really the problem described here. The secret word is probably more useful in being so simple; however, it still has to be established beforehand.
Videos are being altered by machines. They’re being optimized for natural looking results. It’s harder to notice small mistakes when frames are going by at 24FPS vs poring over a static image for 30 seconds until you finally notice the one region with mismatched shadows or odd clipping.
So don’t delete the fake video. Put a big red exclamation mark next to it that says “this video has been substantially manipulated. Contents may not be genuine.”
Also: while viewers and producers both deserve the same respect, producers can forfeit theirs by consistently failing to respect their viewers. A consistent pattern of intentional deception should earn a shadow ban.
I guess you could, if you think that epidemics regarding male/female body image, body dysmorphia, self-harm, anxiety, and the use of celebrities to sell products aren't connected to it, but I've found exactly the opposite.
I love photography and I am utterly unable to talk to non-photographers or convince them about what happens in the production of most images they see in most forms of commercial media.
It goes something like this:
"Hey ACowAdonis, how much of that photo do you think was retouched?"
looks at photo
"All of it".
"All of it? What do you mean?"
"I mean all of it."
"But that's Reese Witherspoon! (or insert popular celebrity here)"
"Yep, and you can see how her eyes have been adjusted, her skin's been adjusted, they've changed the shape of her arm, taken a few pounds off the midsection, increased the boob size, changed the colour of her hair... and I'm pretty sure that's not her hand".
"Nah, you crazy..."
"You want crazy...pretty much every photo in every fashion magazine and every media item involving that celebrity has been adjusted to a similar extent"
"Nah mate, you're having me on. You're nuts."
I see this technology as no different.
Other dummy accounts will take the deepfake and start making a narrative around it, sending chain emails to their real world contacts.
Real world contacts will start passing around deepfake chain mail they were sent.
Some of these emails will take the deepfake as true, some will talk of it as being a funny parody, but "funny because it's true" anyway.
Major news organizations can now address the issue as news because people are passing it around. Maybe it will be something like: "Well Bob, I think the X have a real image problem on their hands, whether the video is true or not..." "You don't mean to say you think it's true!?" "I didn't say that, Bob. I'm frankly not qualified to judge and I haven't done any research. What I'm worried about here is that there is a perception that it is true, or that even if it is not exactly true in this particular instance, it might be true. That is what I mean by a real image problem."
And their fanbases will listen because it's easier than accepting their idol could be a bad actor.
Yeah, the trust is not coming back, regardless of fake videos.
Society survived before videos and photos, when all information was easily edited. I think we'll be fine. Maybe we're just in a brief decade or two in which we became complacent, believing all videos were real without taking any of the care we used to take with grainy films of alien autopsies or spoken testimony from people who claimed to have seen Bigfoot.
Oh boy are you in for a rude awakening. It doesn't matter what smart people believe if enough stupid people are convinced of something else. You're using examples from times of very low information distribution to form expectations about the opposite condition, which is already problematic, and ignoring all nonsensical superstition that used to be the norm.
When I was growing up I remember a minor local mania over a supposed miracle at a religious shrine which became a summer sensation. People were chartering tour buses to say prayers and hoping to witness a miracle themselves. Right now in the US we have a community of people who have been enthusiastically chanting at political rallies about locking their opponents up for the last 3 years, without any apparent care for evidence or factual basis. Obviously political rallies are known for their hyperbole, but at some point you have to feed the beast.
I think writing computer programs designed to spot these deepfake videos would be very helpful as the volume of doctored videos increases, but this isn't some disruptive technology (at least for people trying to deceive others).
Next up: the person you’re sitting across from at dinner.
Writers have been able to write nonsense for a long time... and photo manipulation we've gotten quite used to. All we do is add video to the category of things that might be lies, and so need independent verification.
Skepticism is good and healthy, and verification in the age of Google isn't that hard.
You can trust that if the NY Times or CBS publishes a video, they verified its authenticity, or else will be publishing a big retraction within a few days that will also make the news because it's so rare.
Whether your uncle sends you a random photo or a video of a politician that seems too exaggerated or weird or unbelievable... you assume it might be manipulated... as you already do now. Making Nancy Pelosi seem drunk didn't take a deepfake, just slowing it down.
It's not any kind of big change. Just applying the same skepticism we already automatically apply to so many other things.
This may be, or may become, false due to political motivation to seriously damage the "other side of the aisle".
> and verification in the age of Google isn't that hard.
It’s hard because publications often parrot each other. You walk away confident of your “verified” truth due to echo chamber effect, which might be worse than not verifying at all.
> You can trust that if the NY Times or CBS publishes a video...
I can’t. Again, you don’t need to make factual mistakes to push an agenda.
There's no way deepfake videos won't make the propaganda situation worse, at least for a while.
It will be interesting to see if such tools are developed for video as well.
I think people vastly underestimate how much editing and framing change the perceived truth of what happened. It is more subtle than manipulating the contents of video, but I think it can be in many ways more effective as most of this stuff bypasses your cognition and is not straight up lying.
It feels the same as in written news changing the quote vs. changing text around the quote.
I think we would be better off looking at video as if it were a picture drawn or text written by someone: an artistic rendition of the events.
Together with a friend I was one of the geocaching and confluencing pioneers in Germany.
Some large papers and TV stations reported on this "phenomenon" and wanted to make an article/documentary about us. Every one of them came with a story in their head which we had to fill with our pictures and quotes. No one was interested in "reality". For a news clip we had to shoot situations several times; I remember leaving a house 5 times until the shot was done. Up until then I had thought news was unstaged.
I think what we really take issue with is something related but different. And that is people voluntarily believing things that confirm their biases, while ignoring or even denying things that challenge them. Pew did a nice piece on that here. These are just their poll questions, which are quite interesting. For instance, "spending on social security, medicare, and medicaid make up the largest portion of the US federal budget" - 41% of Americans incorrectly labeled that as an opinion. And of those that labeled it an opinion, 82% further incorrectly labeled it as false. Going the other direction, "increasing the federal minimum wage to $15 an hour is essential for the health of the US economy" - 26% of Americans incorrectly classified this as a factual statement, and of those 83% claimed it was accurate. The same is of course true on both sides of the aisle, with the questions affirming as such.
People have a tendency of believing what they want to be true, while challenging (often quite aggressively) everything that goes against that, or even simply denying it. This can create the perception of a naive people being misled by malicious actors, but I think reality is that people tend to pick the views that they want to be true often for entirely subjective reasons that cannot be clearly qualified, and then work to find evidence to support that.
 - https://www.washingtonian.com/wp-content/uploads/2017/01/gar...
 - https://news.gallup.com/poll/195542/americans-trust-mass-med...
 - https://www.journalism.org/2018/06/18/distinguishing-between...
A simple morph cut in a John Pilger interview of Assange made a sizeable portion of nutjobs believe Assange has been long dead. Don't think this kind of behaviour can't eventually extend to the mainstream.
I don't understand your point, you wouldn't sign a doctored video unless you wanted to do so. It's entirely possible to apply the principles of PGP to video.
>If it's only your camera that can sign the video, not you directly, don't be fooled that it will be possible to protect the private keys in the camera from extraction.
I doubt this would be the solution on which the world settles.
When you publish a video of you speaking at a public event, you sign it. When someone else publishes a doctored video of you, you do not sign it.
This alone doesn't protect you in the case that someone speaks and then later intentionally doctors and signs the video in order to change what they said. In this case trusted third parties (e.g. news organizations) could sign videos as well. A set of signatures taken together can provide trust.
You wouldn't think a letter from your mom is from your mom unless your mom signed it.
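A minimal sketch of that verify-before-trust idea, using Python's stdlib HMAC as a symmetric stand-in for a real signature (a production scheme would use an asymmetric algorithm such as Ed25519, so that anyone with the public key can verify but nobody can forge; the key and byte strings here are hypothetical):

```python
import hashlib
import hmac

def sign(key: bytes, video: bytes) -> str:
    # HMAC-SHA256 as a stand-in for an asymmetric signature.
    return hmac.new(key, video, hashlib.sha256).hexdigest()

def verify(key: bytes, video: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, video), signature)

key = b"publisher-secret"            # hypothetical publisher key
original = b"...raw video bytes..."  # stand-in for the published file
sig = sign(key, original)

assert verify(key, original, sig)             # untouched video passes
assert not verify(key, original + b"x", sig)  # any alteration fails
```

The point is just that verification is cheap and mechanical once a key is trusted; the hard, social part is deciding whose keys to trust.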
What would signing video prove? Who signs it? Who controls the signing keys? How would blockchain, in any form, help here, in any capacity whatsoever?
The asset isn't a single entity in one place; it can be distributed via IPFS or whatever. The earliest known version of a thing is the canonical one for practical purposes. In this view, blockchain isn't producing coins for hoarding, but tags for people to locate things on the public graph.
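The "earliest known version is canonical" idea can be sketched with plain content addressing (a hypothetical in-memory registry, not IPFS's actual API; any edit to the bytes yields a new address, and re-uploads can't displace the first-seen record):

```python
import hashlib

registry = {}  # content address -> first-seen timestamp

def address(data: bytes) -> str:
    # Content address: hash of the bytes themselves.
    return hashlib.sha256(data).hexdigest()

def publish(data: bytes, timestamp: int) -> str:
    key = address(data)
    registry.setdefault(key, timestamp)  # later copies can't displace it
    return key

original = publish(b"raw interview footage", timestamp=100)
doctored = publish(b"doctored interview footage", timestamp=200)

assert original != doctored            # any edit yields a new address
publish(b"raw interview footage", 300) # re-upload of the original...
assert registry[original] == 100       # ...keeps the earliest timestamp
```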
Perhaps the malicious use cases are more obvious due to how trustworthy a video can appear.
Think of the impact of this on dubbing movies between languages. This seems like an incredible tool.
Of course, we can’t just forget about deepfakes and such, but this particular use case kind of excites me.
I like your idea, it reminds me of something I stumbled upon reading about the 60Hz hum of AC electricity.
"...this side effect has resulted in its use as a forensic tool. When a recording is made that captures audio near an AC appliance or socket, the hum is also incidentally recorded. The peaks of the hum repeat every AC cycle (every 20 ms for 50 Hz AC, or every 16.67 ms for 60 Hz AC). Any edit of the audio that is not a multiple of the time between the peaks will distort the regularity, introducing a phase shift. A continuous wavelet transform analysis will show discontinuities that may tell if the audio has been cut."
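The check described above can be sketched with a synthetic hum. This is an idealized 50 Hz sine with assumed sample rate; real ENF analysis works on noisy recordings with wavelet transforms and reference data from the grid, but the principle is the same: a cut that isn't a whole number of cycles breaks the regular spacing of the hum's peaks.

```python
import math

SR = 8000         # sample rate (Hz), assumed for this sketch
F = 50            # mains frequency (Hz)
PERIOD = SR // F  # 160 samples per hum cycle

def hum(n):
    """n samples of an idealized 50 Hz mains hum."""
    return [math.sin(2 * math.pi * F * t / SR) for t in range(n)]

def crossing_intervals(x):
    """Sample counts between successive upward zero crossings."""
    ups = [i for i in range(1, len(x)) if x[i - 1] < 0 <= x[i]]
    return [b - a for a, b in zip(ups, ups[1:])]

clean = hum(SR)  # one second of recording
# Delete 100 samples: not a whole number of 160-sample cycles,
# so the hum's phase jumps at the splice point.
spliced = clean[:3000] + clean[3100:]

# Clean hum: every interval is one period (within a sample of jitter)...
assert all(abs(d - PERIOD) <= 1 for d in crossing_intervals(clean))
# ...but the splice leaves intervals that are visibly wrong.
assert any(abs(d - PERIOD) > 1 for d in crossing_intervals(spliced))
```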
The politician says it was doctored.
The poster says it is unedited.
How do you verify who is telling the truth?
Granted, derived content will fail validation, but it will motivate tracking down the original, until validation can be performed. Maybe you can take pictures of fake imagery printed onto large high-def paper, but at least you eliminate one stage in the process...
Honestly, we should not trust digital content these days.
PGP involves a private key, and if you have the private key you can "forge" any message. If you put the key in hardware, it can be read by an adversary with access to a powerful microscope.
The adversary would have to extract the key within 10 seconds without damaging the security envelope of the device, which can't be powered down. If such a camera is powered down, a replacement camera would need to be manufactured, sent, and installed (again by citizens selected through sortition) at the place of the malfunctioning / perturbed camera. If the cameras cover each other (say, cameras along both sides of a street such that each camera sees 2 or more other cameras), the perturber can be tracked, both where he came from and where he went to...
The first type of camera is the one we have today; the second type would be more expensive, need to stay connected to the group consensus protocol, and need to stay powered, so journalists would be lugging extra batteries, and the camera would have 2 battery ports for switchover...
Wonder if that is possible. One can always convert a picture to a 2-D array of RGB values, so the signature can't live only in the video (or image) file container. So it has to be a watermark of a kind. If the algorithm is known, it's interesting whether the signature can be made unforgeable. If the algorithm isn't known, then other issues (like with DeCSS) can appear.
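As a toy illustration of putting a mark into the pixel values themselves, here is naive least-significant-bit steganography over a flat list of 0-255 channel values. It is trivially removable and won't survive re-encoding, so it shows the idea rather than a secure scheme; the "image" and fingerprint are made up for the sketch.

```python
import hashlib

def bits(data: bytes):
    # Bits of each byte, least significant first.
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def embed(pixels, mark: bytes):
    # Overwrite the LSB of the first len(mark)*8 channel values.
    out = list(pixels)
    for i, b in enumerate(bits(mark)):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n: int) -> bytes:
    # Reassemble n bytes from the LSBs, in the same order.
    return bytes(
        sum((pixels[i * 8 + j] & 1) << j for j in range(8))
        for i in range(n)
    )

image = [200, 13, 55, 254] * 100               # fake 400-channel "image"
tag = hashlib.sha256(b"frame 1").digest()[:8]  # 64-bit fingerprint

marked = embed(image, tag)
assert extract(marked, len(tag)) == tag
# Editing a marked pixel corrupts the recovered fingerprint.
tampered = marked.copy()
tampered[0] ^= 1
assert extract(tampered, len(tag)) != tag
```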
The whole chain of video production should be signed, in order to trace filming and editing together, with all intermediary signatures from each stage of the production process contained in a final signature.
That could be a business opportunity, or at least (maybe preferably) an interesting open-source project.
There are steganography techniques which may apply here, I wonder: adding a digital fingerprint in some way directly into the media itself.
A place where I don't think it will be used much is actual facing-the-camera-talking-head content. Something we have learned from YouTubers is that audiences don't care if there are discontinuous cuts during a monologue. YouTubers don't try to pretend they did it all in one take, and will happily edit their video as if editing text. The cuts are obvious in both the audio and video. And still it works.
Example: "That was a short trip" vs "That was a reaaaaaalllly long trip".
Language is so much more than words. When you deliver the variant message, your whole facial expression might change. So much would get lost if that doesn't carry over. Your facial expression and tone in that context also completely changes the meaning from you enjoyed the long trip to not enjoying it, but how can a machine know which one to pick.
Reminds me of the H.P. Lovecraft story "The Call of Cthulhu", page 1, paragraph 1:
> The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
It's good enough that it will fool some people on Facebook/Twitter, but it's pretty far from being able to stand up to any scrutiny.
Think of the scenario where someone edits a few seconds of a 30-minute interview. They make the interviewee go from saying they hate drugs to saying they love drugs. Even if you weren’t expecting that claim from that person, would you go back and recheck their mouth movement, to be certain if it was edited or not? Unlikely. Even if you would, I’d wager most wouldn’t, including most of us that could detect it.
For truly scary things, like falsifying evidence, I think it will be a while before this gets past expert analysis, or even a group of people on Reddit trying to prove it wrong.
In the long term, video will simply be treated like photos are now. With disbelief.
That was done in The Expanse novels, I wonder if that is where people get the ideas to create things like this from.
> Can we also determine if the video was actually edited or forged?
Detecting fake videos is actively being researched and there are working methods available; governments are very much aware of the danger.
Smaller economies are going to get screwed by this, though, as they won't have the resources to fight yet another technological battle.
Also, rule34 implications.