We're not too far away from the manufacture of literal "fake news" out of whole cloth. I'm not sure this bodes well for the idea of an informed electorate.
The ability to fake such things means that we have to return to trusting our fellow human and making wise choices about who can be trusted.
Fingerprints and DNA samples will remain harder to fake to varying degrees, but one must assume that they too will eventually fall as paragons of unassailable guilt (or innocence, but usually guilt).
I recently read about a case with apparently solid DNA evidence under the victim's fingernails. The supposed perpetrator had no real memory of the crime, but it turned out the police had him in custody at the time; he had eaten in the same restaurant days earlier. I can't find the article quickly, but it was recent.
DNA is not the be-all and end-all of an investigation.
Fakable media needs to be decentralized, period.
Edit: not parent commenter, btw, just my personal two cents
One, people will be less swayed by emotionally charged imagery, and that's a good thing. That means less #FOMO and less rage at the perception that everyone is having fun but you. People can feel normal, just being normal again, because of the soon-to-be-wide-spread understanding that any and all photography, video or audio can be fabricated.
Two, media is manipulative regardless of who holds the keys to the castle as the most-trusted information source. It's not a conflict of fake versus real; it's a conflict of one group of large personalities versus another set of polarizing characters, with external players fanning the flames. That one group can shout down another isn't great, because even if there is One True Narrative, neither side presents it to us. Most of the smarter folks among us are already aware of this and implicitly digest all information in that context.
It’s quite a mental load to question every source of data and have one’s guard up. Unless our own wiring evolves I can’t see the general population changing our data processing habits.
This sounds like the kind of "trust problem" I've heard will supposedly be solved by practical blockchain technology. I'm wondering this aloud, since I'm not sure how that could really be accomplished. As someone below suggested, some means of cryptographically signing "authentic" or ungenerated/unmodified video.
Come to think of it, have major scandals been invoked by photos that were proven or likely to be manipulated since the invention of that technology? Does the public place less stock in the authenticity of photos by default these days?
Back in the day - a fax wasn't considered a "signed legal document", but it _was_ considered "proof of existence of a signed legal document".
Perhaps the response to this is to not "trust" video or images as standalone legal "evidence", but to allow a person to testify "yes, that's an unmodified depiction of an event I witnessed", with that person then being held liable to legal penalties for perjury if that's disproved.
There's precedent (here in Australia): traffic cameras put a hash of the image and the detected infringement details into the file used for enforcement action, and as evidence if you elect to have the matter heard in court, at which time they need an "expert witness" from the manufacturer to state that the image cannot have been manipulated, as "proved" by the hash. (There was a hilarious case in Sydney a couple of decades back where a defence lawyer got smart enough advice to point out that the fixed speed camera his client was accused by had a design flaw: the hash was calculated from the photograph _before_ the date/time/detected speed were overlaid on it, so the hash could never be verified, because the annotations destroyed the information it was computed over. They had to let the alleged offender off, and then scrambled to create a twisted legal interpretation to ensure that all the other drivers fined by the same model of speed camera couldn't dispute their past fines...)
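The design flaw is easy to demonstrate: if the hash is computed over the raw frame before the annotations are burned in, the published (annotated) image can never be checked against the stored hash. A minimal sketch, with placeholder bytes standing in for real image data:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Camera captures the raw frame and hashes it immediately.
raw_frame = b"...raw sensor data for the alleged speeding car..."
stored_hash = sha256(raw_frame)

# The enforcement file then overlays date/time/speed onto the image,
# changing the very bytes the hash was computed over.
annotated_frame = raw_frame + b" | 2001-06-01 14:32 | 87 km/h"

# Verifying the published (annotated) image against the stored hash fails:
assert sha256(annotated_frame) != stored_hash

# Hashing AFTER annotation would have made the evidence checkable:
final_hash = sha256(annotated_frame)
assert sha256(annotated_frame) == final_hash
```

The fix is simply to hash the exact bytes that will be presented as evidence, and nothing earlier in the pipeline.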
Ordinary photo/video evidence can no longer be trusted as either support or impeachment of an eyewitness' report.
I know for a fact that there was an attack of some sort - this is not in dispute by any party. I also know that Assad had been steadily regaining complete control of his nation - again something that's not really in dispute. I also know that Assad and his forces are capable of highly effective conventional attacks with minimal international issue. And finally I know, and I know that he knows, that chemical weapons would likely spur international outrage and risk dragging outside players (such as the US) into his conflict, which could turn everything upside down. The conclusion I draw is controversial, but makes infinitely more sense than what my trusted third parties tell me I should believe.
And maybe most importantly, I also accept that I could be wrong. There may be evidence or information that would qualify as first principles that I'm not aware of. It's also possible that somebody acted in a completely irrational and self-destructive way for no apparent benefit. And as new information comes to light, I'm happy to change my views. I find that people who rely on third parties are generally not all that well informed on what they're talking about, as pundits tend to focus on pathos over logos. And I think this ignorance is what drives people to double down on their views as a sort of defense mechanism; this is probably playing a major role in our antagonism towards each other on controversial topics.
You're making the same mistake that a lot of economic analysis makes and assuming that Assad (and people, in general) will always act in their own best interest without bias or emotion clouding their judgement. This is the same reasoning that Stalin probably applied to his treaty with Hitler before WWII. Hitler invaded Russia anyways, even though bringing Russia into the war was a huge blunder for Germany.
People don't always make the right - or even rational - decision. Trying to infer the truth based on what a "rational actor" would do is utter nonsense.
Operation Barbarossa is a great example (I really enjoy WW2 history!). You seemingly think this was an arbitrary act of a madman. In reality it was a rational and calculated decision. The Red Army was massing on Germany's border, and it was later revealed that Red Army generals had already been requesting permission to begin their attack, but Stalin was waiting for a more strategic opportunity. In particular he wanted England and Germany to go to war so that his army could march over the remains. England, in the west, wanted exactly the same thing for the USSR and Germany. America was also looking more and more like it might take a direct role in the war.
Germany basically had two substantial powers on either side of it, waiting to pounce. It had even offered peace to England, who rejected the offer, in an effort to avoid fighting a war on both sides. And both sides were also massively ramping up their forces. It was during this era that millions went without under Stalin as he dedicated massive and unprecedented resources to a military buildup. And England, as mentioned, was also receiving increasing support from America. In other words, his enemies were getting stronger faster than he was. His invasion was what he saw as the best of many bad options.
When there is an explanation that follows logically and clearly for something, and another explanation that simply relies extensively on illogical events and turning 'bad guys' into caricatures, I tend to go with logic. Could I be wrong? Like I said, absolutely. And I'm fine with this because history shows time and again that logic tends to be far more accurate than a one-sided perspective on events.
I think everyone knows this, security just makes things harder, and that might be enough.
The last election already had algorithmically generated artificial news designed to sway people (note: I'm talking FB articles with headlines like "$POLITICIAN just insulted $NICHE_AUDIENCE. STOP THEM." that linked to articles that were scraped/spun from other sources). They relied upon people not actually reading and just hitting like/forward/heart whatever and spreading the top level message.
We are not going to be able to convince ourselves, never mind other people, that video (which has been the gold standard for "truth" for nearly a century) isn't actually real.
Case in point: https://www.youtube.com/watch?v=cQ54GDm1eL0 (which was included in the article) has comments from people deeply confused by it on YT.
And I'm not just talking about image fakes. You can have MCTS find the best arguments for ludicrous statements, and other paths to justify fake things that look just like real arguments.
Effective detection methods may end up being closely guarded secrets.
Pretty much nothing except other neural networks are differentiable, unless you put effort into designing them to be.
Everyone has some gpg-like setup.
If we see a statement or video or audio by a person accompanied by a public (graphic or audible) key that checks out...ok, done, verified. If we don't, then we can justifiably disregard the statement or video or audio.
A. Video/audio/text of/by person, signed, we have knowledge that this video bears their signature, they endorse it
B1. Unsigned because it's not them
B2. Unsigned because they don't want it to be associated with them
I think this is still a better state of affairs because we can verify positive endorsements as genuine or not.
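The A/B1/B2 decision logic above is simple to state in code. A minimal sketch: HMAC is used here as a stdlib stand-in for a real digital signature (a gpg-like setup would use an asymmetric scheme such as Ed25519, so verifiers never hold the secret key); all keys and messages are hypothetical.

```python
import hashlib
import hmac

def sign(key, message):
    # Stand-in for an asymmetric signature: tag the message with the key.
    return hmac.new(key, message, hashlib.sha256).digest()

def classify(key, message, tag):
    # Case A: signed and valid -- the person endorses this media.
    if tag is not None and hmac.compare_digest(sign(key, message), tag):
        return "A: endorsed"
    # Cases B1/B2: unsigned or invalid -- whether it's a fake (B1) or a
    # disavowal (B2), the verifier's action is the same: disregard it.
    return "B: unsigned, disregard"

key = b"person's signing key"
video = b"video-of-statement bytes"

assert classify(key, video, sign(key, video)) == "A: endorsed"
assert classify(key, video, None) == "B: unsigned, disregard"
# A signature for one video does not transfer to a doctored one:
assert classify(key, b"deepfake bytes", sign(key, video)) == "B: unsigned, disregard"
```

Note that the scheme proves positive endorsements only; as the comment says, it can't distinguish B1 from B2.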
You're right, we couldn't verify stuff they refused to sign, but!
Imagine the state coerces people to carry small devices with radio transmitters that constantly broadcast a signed key, once per second, ad infinitum; that way no one could plausibly deny that it was them (unless someone plants the person's device on an imposter).
If there could be some repercussions (e.g. police brutality, gang violence, whistleblowing, etc.) I'm going to go above and beyond to make it not traceable to me. And you're here asking the government to make it a law to make all video outputs traceable to the creator?
Perhaps you should apply to the NSA or Palantir.
It's easy to imagine this idea becoming a part of video (be it VOD or broadcast) encoding standards in the future, once this becomes a critical thing.
In terms of UX: one could imagine a simple indication on TVs: an RGB led that shines in different colors depending on verified authenticity levels.
I don't see how this video signing system will help verify a cell phone video taken by a random bystander.
The image and signature could be added as authentication layers and then a third layer added to allow post-production (crop/levels/etc). Right click in your browser to 'see original image' with a little padlock in the corner. Ideally this would include GPS info to reduce likelihood that someone would just project a desired image on a screen and photograph that.
It will however make sure that you're watching actual Obama 2, and not some computer simulation of him doing some disgusting thing.
If you employ a set of digital cameras, say for security cameras or other things, each has a private key which it uses to sign whatever it records. Obtaining said key would require physical access to the device. Then, when it matters, your organization has a set of public keys which it uses to verify that the picture/video is legitimate.
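That fleet-of-cameras setup amounts to a key registry plus a verification check. A minimal sketch, with hypothetical camera IDs; HMAC is used as a stdlib stand-in, whereas a real deployment would give each camera an asymmetric keypair so the organization's registry holds only public keys:

```python
import hashlib
import hmac

# Each camera holds its own key (extractable only with physical access);
# the organization keeps a registry to verify what the camera signed.
CAMERA_KEYS = {
    "lobby-cam-01": b"key-installed-at-manufacture-01",
    "dock-cam-07":  b"key-installed-at-manufacture-07",
}

def sign_recording(camera_id, recording):
    # Runs ON the camera, as it records.
    return hmac.new(CAMERA_KEYS[camera_id], recording, hashlib.sha256).digest()

def is_legitimate(camera_id, recording, tag):
    # Runs at the organization, when the footage matters.
    key = CAMERA_KEYS.get(camera_id, b"")
    expected = hmac.new(key, recording, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

clip = b"raw video bytes from the lobby"
tag = sign_recording("lobby-cam-01", clip)

assert is_legitimate("lobby-cam-01", clip, tag)             # genuine footage
assert not is_legitimate("lobby-cam-01", clip + b"!", tag)  # tampered footage
assert not is_legitimate("dock-cam-07", clip, tag)          # wrong camera
```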
The self-answering question.
I could imagine an entire series (or part of a series) about "deepfakes".
I hope the name sticks, as the technology inevitably becomes more common.
Whatever method is used would have to come up with a reference photo (or "model"), i.e. what the photo "should" be. And that tech can then be used to make even more convincing deepfakes and other AI trickery. Hurray! Progress! :)
Depressingly true, but keep in mind that words < pictures < video when it comes to provoking an emotional reaction.
>Also, if the US military did spot a deep fake, do they have the trust for people to believe them?
Depends on how their deepfake detection method works. If it's open source, or at least publicly available and verifiably functional, then who wouldn't believe them? No one thinks their GPS is lying to them.
OTOH, when time is a factor (as in movies - temporal succession of images), perhaps simple curve fitting may never be quite perfect? Perhaps you need anticipation, and counterfactual thinking, cause and effect, and all that?
Also, like in video games, maybe even some understanding of real world physics may be required for a perfect fake.
I think we'll end up in a world where the only way to prove what the past really was is to hash it and stick the hash on a blockchain somewhere. One can then see PoW as a constant tax paid to maintain a literal link to a past in which deepfakery did not exist.
You could then monetise it by charging a nominal fee for the path from any given bit of data to a timestamped bitcoin transaction.
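The "path from any given bit of data to a timestamped transaction" is essentially a Merkle proof: you batch many document hashes into a tree, commit only the root on-chain (one transaction for the whole batch), and sell each customer the sibling hashes needed to walk from their document up to that root. A minimal sketch (document contents hypothetical):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root_and_proofs(leaves):
    """Build a Merkle tree over the leaves; return the root and, for each
    leaf, its proof: a list of (sibling_hash, sibling_is_right) steps."""
    level = [h(x) for x in leaves]
    proofs = [[] for _ in leaves]
    pos = list(range(len(leaves)))  # each leaf's position in the current level
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        for j, p in enumerate(pos):
            sib = p ^ 1
            proofs[j].append((level[sib], sib > p))
            pos[j] = p // 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], proofs

def verify(document, proof, root):
    node = h(document)
    for sibling, sib_is_right in proof:
        node = h(node + sibling) if sib_is_right else h(sibling + node)
    return node == root

docs = [b"doc-a", b"doc-b", b"doc-c", b"doc-d"]
root, proofs = merkle_root_and_proofs(docs)  # only `root` goes on-chain
assert verify(b"doc-b", proofs[1], root)     # the customer's paid-for path
assert not verify(b"doc-x", proofs[1], root) # forged data fails the path
```

Each proof is only O(log n) hashes, so one on-chain transaction can timestamp an arbitrarily large batch.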
The disadvantage is the cost to do this involves "get access to all data on the web", but perhaps you could partner with the web archive or something to do it?
Now that making copies is as easy as owning a camera or a USB drive, disinformation is the new way to change history.
LOL. Blockchain to the rescue! Except. Which fork is the right one? The one with the most hash power? How can you be sure that one is right? Which ethereum fork is the "correct" fork? Which bitcoin fork is the "correct" fork?
Blockchains don't help for shit with proving something existed. They are fully mutable data structures; it just requires more effort than a simple UPDATE statement.
Real physical, historical evidence.
Why are fakes supposedly such a big problem? Their identity might be fake, but that doesn't make their ideas any less attackable and that's what it should be about: Ideas, not identities.
If an idea is good I couldn't care less who had it, I only care about the idea itself.
Yet all the public discourse focuses on "fakes spreading wrong ideas", even the recent Facebook EU hearing had that as a major topic, with Zuckerberg constantly reiterating that catching fake profiles, who could influence elections, is one of Facebook's top priorities (which I don't doubt).
But barely anybody seems to make the effort to think this through to its conclusion. Once you do, you realize that we are heading in exactly the direction China is heading: no anonymity, and social media becoming the de facto replacement for government institutions. Is that really the world we want to live in?
Sure, ideas can stand on their own, but we have to have facts that we can know so that we can use those facts to judge the ideas. If we can fake images and videos, we can create fake facts, and that is certainly dangerous.
Looks like the BAA came out in 2015: https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-BAA-15-58/listin...
But what's to stop someone from reverse engineering the camera and using the signing key to sign something else?
e.g. a press conference covered by multiple mobile phone camera videos of the speaker from different audience perspectives. Probably the models will eventually catch up and allow this, too, to be faked.
Here's a video of your officer giving orders. Should you obey them? (Note that "no" as an automatic answer is just as bad as "yes". The only correct answer is "yes if authentic, no if not". But how do you know if it's authentic?)
The Western Intelligence Community has incredible power and very little oversight. There's a long way to fall if public opinion suddenly swings against them because someone publicized a deepfake of CIA, DGSE, MI6, and AIVD agents ritually sacrificing a child.