‘Deepfakes’ Trigger a Race to Fight Manipulated Photos and Videos (wsj.com)



Seems to me that Photoshop fakes are well handled by a sort of social whitelist. It doesn't matter what's on the picture if you can't show us who took it.

The Weekly World News served (serves?) up that sort of nonsense for years, but for the most part the media didn't go around falling for hoaxes, whatever the source. Probably a few deepfakes will spread like wildfire on social media and then the public will start wising up to the technology.


How small of a percentage of people need to fall for it to tip an election? How many people need to doubt real evidence as being possibly manipulated to change a jury's conviction?

As it is, there are apparently more than a million people who actually believe QAnon is real, despite the fact there is nobody willing to put their name to the source.


A million people 'believing' QAnon sounds like faulty extrapolation from Lizardman's Constant.

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and...


I doubt that. Lizardpeople are absurd, even by the standards of your average conspiracy theorist. QAnon isn't.

The basic premise - some 4chan shitposter happens to have Q-clearance - is entirely plausible. The fact that anyone can post anything as QAnon also gives plausible deniability for any claim that turns out false, while any claim vague enough to turn out true is taken as evidence.

Conspiracy thinking is extremely widespread and many conspiracy theories are widely believed, such as the JFK assassination, the fake moon landing, or "vaccines cause autism".


What is the non-conspiracy explanation for the magic bullet involved in the JFK assassination? I only know the conspiracy theory that claims that the CIA accidentally shot JFK and then covered their tracks. And then the other conspiracy theory, where that wasn't an accident.


> What is the non-conspiracy explanation for the magic bullet involved in the JFK assassination? I only know the conspiracy theory that claims that the CIA accidentally shot JFK and then covered their tracks.

It's entirely possible that a Secret Service agent accidentally shot Kennedy, and there are people on the record saying so.

"We don't know" is not really a conspiracy theory. "We don't know, therefore X must have done it" is a conspiracy theory. Conspiracy thinking is the tendency to make these connections.


Last time I looked at this, the magic bullet was the official stance. When did the narrative change?


At this point it's no longer conspiracy theories, it's just conspiracy shower thoughts.


Well, QAnon is real, no? It's not a spirit writing those posts.


Sure, if we reduce 'real' to just mean it's not created ex nihilo out of the aether, but everything is 'real' under that definition... ultimately everything is still attributable to a human decision to kick off a process.

Is QAnon actually a highly placed government official (or someone with access to information about high-level functions and plans)? Given that all their big predictions so far (arrests etc.) have been false, I think we can pretty easily conclude no.


To understand what makes something real, we first have to understand what makes something fake.


QAnon purports to be an important person with access to inside information. It is beyond silly that you interpreted my statement to mean that there isn't a physical person typing those posts.

Are you someone who cannot resist coming up with a counter-argument no matter how thin? You might want to ponder how much value that adds to discourse in general.


Here is an example of what keeps me up at night: https://youtu.be/_mkRAArj-x0

Adding or removing cancer.

Image manipulation of medical diagnostics/data. Imagine a hospital hack where 50% of cancer patients' images are wiped clean and the unsuspecting other 50% have fake cancer injected in. The ransomware on this stuff is going to be absolutely insane to resolve and/or insure against.


It's not as if political dissidents haven't been executed perfectly well without this tool, but what you're saying is that now it's going to get worse. I'm of a pretty strongly held belief that everyone the top echelons really want dead is already dead. People with the money to reliably make this happen have always had low-tech solutions available, though possibly you're worried that now anyone can do it?

It seems that such a move would be used to tip a fight for power within a government. If you have a government where this is already an issue, then the problem is more likely the government than the action enabled by the hack. Furthermore, such a large rise in cancer cases would be heavily researched, and the cause would be discovered, prompting hospital care facilities to start issuing hash codes with images to prevent manipulation, if that is not already being done.


Wow! That is eye opening. Thanks for the share on that one.


I actually like this tech. It opens infinite possibilities for using a real(-istic) image in a made-up situation (think games and entertainment). Get ready for a new wave of “dynamic models”, who will not only post images and videos with new clothes and makeup on their whatever-gram, but also perform movement scenes that can be used to model them virtually, for money of course. People will like it, because even if it isn’t them making an easy living, at least they get a good show that doesn’t depend on the performer’s preferences, or can make one themselves by movement translation. It is also a chance to make a new fortune out of it.

As for politics and fact-checking, personally I just don’t care. If you do, then sign your damn video with a public key already and check whether a video matches it. That would require a new sort of signature that isn’t lost on resize/rebalance/etc., but nothing unrealistic. As a bonus, video players could show the source’s certificate fields right below the video.
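The signing half of that is a few lines today; it's only the resize-surviving signature that needs inventing. A minimal sketch with the pyca/cryptography library (the filename is made up), signing the exact bytes of the released file:

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # stays with the publisher
    public_key = private_key.public_key()       # shipped in the "certificate"

    # Publisher signs a digest of the released file.
    video = open("speech.mp4", "rb").read()
    signature = private_key.sign(hashlib.sha256(video).digest())

    # A player verifies before showing the certificate fields below the video.
    try:
        public_key.verify(signature, hashlib.sha256(video).digest())
        print("matches the publisher's key")
    except InvalidSignature:
        print("modified, re-encoded, or unsigned")

Any rescale or re-encode changes the bytes and the check fails, which is exactly why the perceptual kind of signature would be needed.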


Would signing actually help? People who trust "Tru Woke Media TV" won't change their minds just because TWM TV signs their stuff.

Signed "by Washington Post" to prevent fake videos with fake attribution wouldn't be "just signing videos". That requires a source of trust infrastructure like the CAs and a display infrastructure the your green lock in the browserbar. Otherwise people will just ignore "by Waѕhington Post" and mistake it for "by Washington Post"


Browsers do that for sites, and I don’t see how sites are different from videos in this regard. Both are information with an origin. CAs already exist; only video players have to keep up. People who trust anything they see, hear or read cannot benefit from that, but why is this a concern? Politics and the cheap press have fooled them since forever. The only problem I see here is that the same mass of uneducated people could be fooled like never before, but the same could (and will) be done in the opposite direction to overwhelm them and counter the wrong reasoning.

I mean, the entire “problem” arises not because of deepfakes, but because that video thing still lives in a century that allowed “hacking” sites and owning bank accounts via plain-text “secret” questions, or CVC codes printed in the clear on the back of the card with no additional confirmation required to withdraw your money. That window should have been closed regardless of deepfakes.


Signing videos is the worst of both worlds in my opinion - it provides gatekeepers and prevents anonymity while doing nothing to stop the fakes from getting signed later.

If anything, I think collation of data, along with physical evidence or the lack of it, is the way to prove it. If the Washington Monument takes off with thrusters on video and it is still there, we know it was faked.

If there is a scorched crater where the monument once was and the Washington Monument can be seen in orbit then it suggests someone engaged in a seriously reckless and expensive prank with a national monument.


I think signing is a particularly weak approach, but we're going to see other tech advances to handle this problem. And I think you're right about the biggest problem: trusted sources were fine before photo evidence and will be fine without signing, but what we're going to miss is the ability for arbitrary people to provide trustworthy accounts of events.

The classic example of a tech fix is a hostage taking photos with the day's newspaper; it's a way to specify an oldest-possible date of a photograph. Similarly, you can prove that a video is not newer than some specific time by publicizing a hash of it. After that, format conversions, rehosts, etc. are all believable as long as they can be traced back to the same-hash original. And until deepfakes reach realtime speed, any specified-time event like a politician's speech can be authenticated just by posting a hash immediately after it ends.
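The commitment itself is a one-liner; publishing it somewhere timestamped and hard to retract is the real trick. A sketch (the filename is made up):

    import hashlib

    # Post this digest somewhere public and timestamped right after recording
    # (a tweet, a newspaper ad, a notary, whatever is hard to retract).
    digest = hashlib.sha256(open("speech.mp4", "rb").read()).hexdigest()
    print(digest)

Format conversions do change the hash, so the chain of custody has to lead back to the exact same-hash original bytes.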

That isn't a full solution to authenticity; a video of some unexpected event could always have been prepared in advance. And trusted-third-party verification is tricky; I'd mostly believe a Periscope or ACLU Mobile Justice video is being shot live, but uploading a pre-faked stream to the service is hardly impossible. Even there, though, I expect we'll see technological solutions. Apps could issue steganographic directions (or others, like when to set keyframes) to the client as it records a video, so that pre-altered video won't respond the way it's meant to. And I'm sure a lot of other options exist as well.

It's going to be an interesting time, but I think we'll see quite a few clever attempts to preserve both anonymity and source-independent trust.


> As for politics and fact-checking, personally I just don’t care. If you do, then sign your damn video with a public key already and check if a video matches it.

I think we're going to see a lot of things like this, soon. Signing for ownership, definitely, but also lots of other tech tricks. We already have livestreaming, which prevents fakes in the moment. If you upload a video as soon as you take it, you can put a Google or Facebook vouched timestamp on the content. If you don't want to rely on their reputation, you can share a hash of the video content before a deep-faker would have had time to edit the content.

Timeless content like a video of police violence is tougher to prove, and fakes will be very hard to disprove, but the idea that convincing-looking fakes obviously destroy trust is a failure of imagination.


> sign your damn video with a public key already

Surely the imaging companies are on to this by now, right? They should be signing the raw data as it comes off the sensor. Tesla does this for the input and the output, on-chip if I recall.


As long as you can point a camera at a printout, you can get a camera-signed photoshop.


> Surely the imaging companies are on to this by now, right? They should be signing the raw data as it comes off the sensor. Tesla does this for the input and the output, on-chip if I recall.

So does all post-production work cease? Or do we have a situation where the original is published alongside the media edition, where they've applied a few filters and adjusted the colour space?

I'm not saying it's a bad idea, just interested in how it might work.


Right, you string the signatures together: "This image was generated by Tom Fipples on a Nikon D610, working for the Washington Post, and edited by Jane Wizzlepans, working for AP, with X copy of Photoshop." Certs include Nikon, Washington Post, and Adobe. Signatures include Tom's and Jane's. And the whole chain has to be in sequence, just like intermediate certs in all the other parts of your browser.
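A toy version of the chaining, with pyca/cryptography's Ed25519 (names and filenames carried over from the made-up example above): each link signs the new bytes plus the previous link, so the chain only verifies in sequence.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()  # burned into the Nikon
    editor_key = Ed25519PrivateKey.generate()  # held by Jane's Photoshop seat

    raw = open("dc_rally.nef", "rb").read()
    link1 = {"actor": "Tom Fipples / Nikon D610",
             "sig": camera_key.sign(raw).hex()}

    edited = open("dc_rally_print.jpg", "rb").read()
    # Sign the edited bytes *plus* the previous link, binding the sequence.
    payload = edited + json.dumps(link1, sort_keys=True).encode()
    link2 = {"actor": "Jane Wizzlepans / Adobe", "prev": link1,
             "sig": editor_key.sign(payload).hex()}

A real system would wrap each public key in a cert from Nikon/Adobe/the Post, but the sequencing logic is just this.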


OK, so we're effectively (to my mind) looking at something like git with signing on every commit, and an image file being its own self-contained repo, so you can play back through the versions to the 'root' that comes from a camera.

Presumably this format is going to be of interest to journalists and courts, at the very least, though courts are most likely going to be most interested in the simplest case - the provable original.

Interesting idea :)


How about encrypting video with a public key, on the hardware? Then we can have cameras all over the place and finally solve issues like rape, etc., by figuring out what happened. We just need the plaintiff or multiple jury members to agree in writing to subpoena the video and decrypt it. You can use Shamir secret sharing to split the ability to obtain the key to a video.

So basically storing a lot of encrypted video from private areas, and trusting the camera makers to not install a backdoor.

I guess the only thing I don't like about this is that a camera maker could be bribed to create a deepfake to frame someone. Video evidence would be admissible in court only from trusted encrypted-camera makers, until one or two scandals bring a given maker's trust to zero.
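The key-splitting piece is already off the shelf. A sketch with PyCryptodome's Shamir module (the 3-of-5 policy here is made up; a real one would be whatever the court requires):

    from Crypto.Protocol.SecretSharing import Shamir
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)        # camera encrypts footage under this key
    shares = Shamir.split(3, 5, key)  # 5 shares; any 3 reconstruct the key

    # Plaintiff plus two jurors hand in their shares:
    recovered = Shamir.combine(shares[:3])
    assert recovered == key

The trust problem stays where you put it, though: whoever controls the camera's encryption hardware can still lie about what the sensor saw.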


Or you just don't resize it at all.


Check out Ctrl Shift Face on YouTube


You don't need deepfakes to fool boomers on facebook. You could create an image with rolling fields, an American flag in the background, and the caption "Share if you think veterans shouldn't be euthanized when they catch a cold"...

Boomers would smash that share button so hard CA would experience another earthquake.


> You don't need deepfakes to fool boomers on facebook.

You don't need deepfakes to fool millennials on Facebook. You could just take statistics that could support your narrative and claim they do. They'll smash that share button so hard...


Thanks for this - treating easily-debunked fakery as a boomer problem is absurd. Entire sites like Mic exist to convert shoddy n=8 studies or narrow statistical observations into grand narratives for 20-somethings to believe uncritically.

I've gotten really good at catching misleading statistics. In the areas where I have expertise I have a lot of practice noticing reverse causation, un-normalized data, p-hacking, manipulated time windows, and so on. Every so often, I see someone cite really convincing evidence for a claim I strongly disbelieve, evidence that looks clear and direct and immune to basically all the usual tricks. And it can take me an embarrassingly long time to remember that I should go and check the damn source. Because even in a world of fancy statistical tricks, there's nothing stopping people from making claims that simply don't appear in their source, or outright falsifying the claim. There's nothing quite as irritating as somebody claiming a "twenty year high" in something and citing a paper about a "twenty year low".

And I always wonder just how hopelessly screwed up my - and everyone else's - knowledge base is. If a "vaccines cause autism" claim gives me so much whiplash I check the source and find it's fake, how many plausible claims did I not run down? If I read a Vox article and barely catch a statistical mistake that invalidates the entire claim, how many stories have completely-invalidating flaws I don't catch?

Deepfakes might be important as a way to disrupt some of our strongest forms of proof. Something like an uncut high-fidelity video of a politician saying something atrocious is basically undeniable today. That can help convince people biased against believing the story, or let untrusted sources provide trustworthy contributions. But deepfakes are going to be a mere ripple on the total amount of bullshit; that's been high for decades, riding on totally disprovable claims.


Yea, no kidding. Here's a fine example: https://twitter.com/mumonamission5

Based on just a few tweets she appears to believe ancient aliens existed, satan worshippers hide their symbols everywhere, said satan worshippers occupy most positions of power and sacrifice children, "everything is energy", Nikola Tesla invented free limitless energy, the elites consume adrenochrome extracted from tortured children, all electricity comes from tall structures collecting lightning strikes, ancient cultures were able to utilize said "atmospheric energy", and humans are sprayed with aerosols released from aeroplanes.

That was just from the tweets of the past week. It's truly mind boggling to gaze into such a fractal of conspiracy theories.


To be fair, most of the things you listed can be "learned" on the Discovery or History Channel.


> "everything is energy"

that's... not wrong, but I guess for the wrong reasons.


Well that was a fun rabbit hole to go down


Picking that person as the average boomer is as fair as reading the FBI crime statistics.


Deep fakes can easily be caught if you have the original material to compare them to. Finding that automatically shouldn't be that hard, if it is available.
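The automatic lookup is roughly a nearest-neighbour search over perceptual hashes. A sketch with the imagehash library, assuming you hold a corpus of known originals (paths made up):

    import imagehash
    from PIL import Image

    # Index of known originals.
    corpus = {imagehash.phash(Image.open(p)): p
              for p in ["original_a.png", "original_b.png"]}

    suspect = imagehash.phash(Image.open("suspect_frame.png"))
    # Subtracting two perceptual hashes gives the Hamming distance in bits;
    # a small distance flags a likely derivative.
    distance, source = min((suspect - h, p) for h, p in corpus.items())
    print(distance, source)

The catch, as replies below note, is that a fake built from unreleased or wholly generated material has no original to match.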

In a way I already like deep fakes since they force people not to take anything they see on the net too seriously.


After a weekend of two mass shootings as a result of Internet radicalization, it's clear that it's too late to ask people not to take the internet seriously.


True, but I believe the recent surge in shootings to be a direct response to censorship ambitions and media coverage these offenders expect to receive.


This won't work if someone with malicious intentions fakes a video without releasing the source.


Even less likely to succeed if they wholesale fake it from nothing.

Essentially the problem is very similar to detecting artifacts in video and audio. Reverse steganography, if you prefer.


It's not similar because it's based on adversarial networks. Any improvements in detection methods can be incorporated to improve the fakes.


Not only this, but the discriminator component of the adversarial net requires the original output from the generative component.

Run the output of the generator through some noise, lens distortion, video compression, downscale it to a 480p surveillance style video, and the discriminator is at a huge disadvantage.
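For concreteness, a sketch of that degradation pass with OpenCV (JPEG re-encoding stands in for video compression; the parameters are arbitrary):

    import cv2
    import numpy as np

    def degrade(frame):
        """Downscale to 480p, soften, add sensor-style noise, recompress."""
        h, w = frame.shape[:2]
        frame = cv2.resize(frame, (w * 480 // h, 480),
                           interpolation=cv2.INTER_AREA)
        frame = cv2.GaussianBlur(frame, (3, 3), 0)
        noise = np.random.normal(0, 5, frame.shape).astype(np.float32)
        frame = np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)
        _, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 40])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)

Every stage destroys exactly the high-frequency detail a detector would key on.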


This is an oversimplification. I can have ways of detecting artifacts that are not based on a differentiable loss function, so no, not true.


Interesting train of thought. A potentially basic idea to combat this could be a cryptographic scheme that signs each frame: set a specific value in one or more pseudo-random pixels of each frame, forcing the least significant bit to a known 1-or-0 pattern tied to the signing identity of the original creator. An attacker without the private key couldn't successfully re-sign the video, since they'd have no way of knowing the original sequence, and without the signing key they couldn't regenerate the original sequence or recreate it from the tampered frames.

Unfortunately I am not a crypto expert, but maybe a codec could check for the signature pixels in every frame during playback and warn in realtime if no valid signature is detected in any of the loaded frames, given a matching verification key.

Either an attacker would maliciously blank or randomise the LSBs, which would show the entire video as unsigned, or the tampered portions would show up wherever the signature pixel chain is broken. I guess the issue would be securely distributing verification keys that can verify both the signing identity and the correct sequence? But then wouldn't that put the ability to recreate the sequence in the hands of the attackers?
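For what it's worth, the embedding half is easy to sketch. Here is a symmetric (HMAC-keyed) toy in numpy, which runs straight into the key-distribution problem above: anyone who can verify can also forge, so a real design would derive the pattern from a public-key signature instead. All names are made up, and any lossy re-encode wipes the pattern out:

    import hashlib, hmac
    import numpy as np

    def _pattern(key, frame_index, h, w, n_pixels):
        # Keyed, per-frame choice of pixel positions and expected LSB values.
        seed = hmac.new(key, frame_index.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        rng = np.random.default_rng(int.from_bytes(seed[:8], "big"))
        ys = rng.integers(0, h, n_pixels)
        xs = rng.integers(0, w, n_pixels)
        bits = rng.integers(0, 2, n_pixels).astype(np.uint8)
        return ys, xs, bits

    def embed_signature(frame, key, frame_index, n_pixels=64):
        ys, xs, bits = _pattern(key, frame_index, *frame.shape[:2], n_pixels)
        out = frame.copy()
        out[ys, xs, 0] = (out[ys, xs, 0] & 0xFE) | bits  # force blue-channel LSBs
        return out

    def verify_signature(frame, key, frame_index, n_pixels=64):
        ys, xs, bits = _pattern(key, frame_index, *frame.shape[:2], n_pixels)
        return bool(np.all((frame[ys, xs, 0] & 1) == bits))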

Oh and I guess variable bitrates could cause issues.

Damn this stuff is harder than I thought.

There's probably a billion ways this wouldn't work and a thousand existing solutions?


> In a way I already like deep fakes since they force people not to take anything they see on the net too seriously.

Personally I find that perspective to be incredibly naive.

Roughly equivalent in effectiveness to "we should spread fentanyl everywhere. Because it's so easy to OD, people will be afraid and stop using drugs." Nope, people will just start overdosing way more frequently.


It was just expressed cynicism. I just dislike the proposed alternative of having a truth provider that evaluates information for people to consume. That will end up as a joke.


And how does easily being able to catch them not feed back into creating better deepfakes via something like a GAN?


To be honest, this is just part of a larger problem. Even if we have "trusted" media companies, that will just increase centralization in the space by making it harder for indie outlets (who, in the age of the internet, can break significant stories well before the larger media corps). Much of the material used by such corps these days comes from videos posted on the internet anyway. Grainy smartphone video posted to the internet has been behind a large number of stories over the past few years.

Add to this that it doesn't solve the problem of deceptive editing, a la the Covington kid. The real problem here is still that people are by and large sheep who care not to read past a headline. If this sounds rude, perhaps it stems from my frustration with the issue. See case number one, mass shootings. As tragic as they are, you remain more likely to be struck and killed by lightning than shot in such an event. Most deaths involving guns are suicides. We still get several days of air-time for each, while all the while we have real problems: homeless populations, poor education, starving children in Africa. Focus on those first, and go by order of what affects your nation most.


Think about it for a second. Deepfake generation and detection are just like a two-player game, and computers have long since overtaken humans at those - Go, Chess, and so on. MCTS is a pretty good approach, but there can be others. That kind of self-play iterates until the detector can no longer tell whether something is or isn't a deepfake.

By that point, just like with Chess, humans are way behind. They can't tell anything for sure.

So no matter WHAT we do, we are going to be in a world where we can't trust video or audio evidence of anything. And when we can achieve deepfakes in realtime, you won't be able to trust that the person you're conversing with is really your friend.

At that point, people will voluntarily create cryptographic timestamps with trusted equipment, or several trusted combinations (e.g. a phone + a beacon + wifi hotspot etc.) and then voluntarily share that (zero-knowledge proof) with whoever needs to know.

But if bots ever reach conversational level, and we have realtime deepfakes, it's game over essentially for trusting any interaction online with anyone.


When global pedophile networks and human trafficking rings start getting uncovered, introducing the topic of deepfakes into the public discourse becomes a convenient way to discredit compromising footage of high-profile individuals who participated in them and are actual three-letter-agency assets due to blackmail.


This is interesting because it speaks to one of the existentially troubling aspects of being human. That is the fact that there is no evidence that we can really trust for any external reason. This goes to the point that Descartes made many years ago:

1. Scientific evidence -- that's just words some other ape like yourself wrote down that you now believe.

2. Personal experience -- those are just memories of experiences you think you had, but you can't really be sure; you just ask your brain and it responds, and once that stops working you never "had the experience".

3. Photographs and video -- well those are just captured images, and any image or data can be changed. There is no "sacred data" that can not be manipulated.

4. Trusted institutions: just more flawed apes that you hope are more trustworthy than those they are instructing, controlling, and/or in charge of.

5. Rules / Laws: pieces of writing on paper that we all choose to adhere to. History shows that people can begin deciding not to follow them at any time, they have no power in and of themselves, only people have power, and usually because they are willing to wield violence (e.g. police, military, resistance, militia).

In the end all you have is the certainty that you are indeed experiencing something, but living with the ego-shattering realization that nothing else is certain or under your control is a very large red pill to swallow.


I'm terrified of deepfakes because they will at some point completely devalue photographic evidence. On one hand, it will make it easier to manufacture evidence and frame people. On the other hand, it will make it easier to get away with stuff despite real photographic evidence by simply blaming it on deepfakes. This is already happening with the whole fake news phenomenon. People's perception of truth will be even more skewed.


> On one hand, it will make it easier to manufacture evidence and frame people. On the other hand, it will make it easier to get away with stuff despite real photographic evidence by simply blaming it on deepfakes.

So nothing changes and you have to get additional sources. Just like you should already.

People's perception of truth is already skewed. Imagine presenting a photoshopped picture to someone 10, 20, 30 years ago.

There is no way out of this as the tech is out there.


You mean s/photographic/video/ evidence, right? Because I'd hope everyone has had very limited baseline trust in photographic evidence for the past two decades. Photoshop isn't new.


It's fascinating how brief the era of "hard evidence" promises to be. We're accustomed to having nearly-unfakeable, easily disseminated documentation of key events, but it's a very recent advance.

Photography as evidence (what's apparently called the "truth claim" of photographs) is perhaps 100 years old; the Kodak Brownie provided mass access to quick-capture images with too much detail to easily fake, and the Leica enabled systematic photojournalism. Near-universal access to cameras is only ~30 years old, with cheap digital cameras and then cellphones making photo evidence near-mandatory for bold claims. Audio recording as evidence is ~70 years old, with portable tape recorders producing on-demand records of speech with hard-to-fake fidelity. Video recording as evidence also starts ~70 years ago, but became a systematic form of documentation only ~40 years ago with camcorders, and phones only reached widespread high-fidelity access about 20 years ago.

Meanwhile, photographic digital fakes have been plausible for about 20 years (less for human faces). Audio fakes are similar - extremely convincing fakery is newer (10 years?), but the baseline level of proof was never as high. And now video is less than a decade from compromise.

I don't think the simple loss of evidence will be quite as serious a crisis as some people expect. The era before "hard evidence" is still on the edge of living memory in the US, and areas with less access to expensive technology never entirely developed that expectation. We'll adjust back to weighting source more and fidelity less, though that comes at a price in centralized control. But we're in for a very rough adjustment period where easily-faked content is taken as clear truth.

Beyond that, there's still a major open question about the psychological impact of these fakes. We might adjust back to a society that doesn't rely on photo proof, but there's no clear precedent for how people will be affected by untrusted but gut-level convincing content. Stories of actors being harassed over unlikeable roles suggest that we may yet be pretty impacted by visualizations we know aren't accurate.


Fear media triggers a race to fight manipulated photos and videos. “OMG we’re under ATTACK”


I've seen probably hundreds of moral panic stories about DEEP FAKES (which sound so scary!), and have yet to see a malicious use. It's more of an ML party trick than anything to fear, but the "I fucking love science" crowd eats it up.


> I've seen probably hundreds of moral panic stories about DEEP FAKES (which sound so scary!), and have yet to see a malicious use.

Because this is a long-term operation designed to make you question what your own eyes see. There will be a day when an extremely damaging video of a politician/celebrity comes out and our trusted media sources will say "we've determined this was merely a deep fake" and thereby dismiss it out of hand.


I have a feeling that the next 15 months will be an interesting time for this.


While perfecting their own manipulation technologies.


No they don't. Deepfakes still look fake; until that changes, deepfakes are a tempest in a teapot.


So are you saying they won't ever look realistic "enough"? Are you saying that even the average grandma or child has enough media and computer literacy to recognize a deepfake? Even if the fake is of someone they've never met? Is it harmless for a child or mother to see a deepfake of a family member doing something horrible?

Even if it's a tempest in a teapot right now (I think this is a questionable assertion in 2019) it seems pretty safe to bet that it will be a big problem within 5 years. Maybe less.


Also it gets worse because it also undermines the credibility of factually correct news. Once a fake isn’t easily distinguishable from reality any more, even media literate people will start to question or even dismiss any piece of information the moment it feels at all contrary to their thinking.


Very good deep fake, still pretty damn fake to me.

https://www.youtube.com/watch?v=3dBiNGufIJw


The funny/sad thing here is you are setting yourself up for failure. In the future you will end up considering a faked image real because you will reason there was no way you wouldn't recognize a faked image/video.


Our brain is exceptionally good at filling in the blanks. It doesn't have to be perfect to make people believe.


I think people are willing to suspend their disbelief, if the message aligns with their beliefs. A couple of minutes of scrolling on facebook reveals a ton of obvious fake stuff, and people lapping it all up.

Just planting the seed is enough.


You are exactly right.

An episode of The Secret History of the Future, "From Zero to Selfie," has a snippet of an interview with Hany Farid of Dartmouth, talking about the fake photo of John Kerry and Jane Fonda in the context of the 2004 US presidential election.

"After the election, after Kerry lost, I remember listening to a radio station news story, and they were interviewing somebody who voted for Kerry's opponent, and he said 'Why didn't you vote for Kerry?' and he said 'I couldn't get that image of Kerry and Fonda out of my head. And the reporter said, 'Well, you know that was a fake image,' and this is true. The guy said, 'I know, but I couldn't get the image out of my head.'"

https://podcasts.google.com/?feed=aHR0cHM6Ly9mZWVkcy5tZWdhcG... near the 22:40 mark.


deepfakes still look fake

It should be obvious that the deep fakes that don't look wrong will be (wrongly) classified as "not deep fakes", so the premise of "deepfakes still look fake" is completely flawed. "deepfakes that look fake still look fake" isn't very useful.


Better term: Survivorship Bias.

The deepfakes that don't look fake are "real". And they already pass as such. And unless you have the original source material, you cannot tell.


Let's see examples of what you are speaking about? There are no magical deep fakes that are undecipherable and yet have had any sort of impact.



It is a live video enhancement filter, but is that based on deepfake techniques?


Most here are using the term "deepfake" as a generic term describing the use of software to create a believable fake visage of a human face. The discussion is about the ramifications of the widespread use of this capability, which has nothing to do with the particular software that was used. In any case, the provided link is strong evidence that currently widely available software is enough to fool a lot of people that one person is another person, and we can confidently predict that software will only get better at this.


There are no magical deep fakes that are undecipherable and yet have had any sort of impact.

So you agree that there are deep fakes people can't tell are deep fakes, but now you're saying that's OK because no one has used them in an impactful way yet?


Most people can't even see that the Onion is satire... Half-baked fakes are already far more difficult to spot.


I have yet to see concrete examples of any of this from family members, friends, or relatives. People believe what they want to believe, so I am sure plenty of people are "tricked" because they see some half-assed adulterated content trying to pass itself off as the truth that they want to believe is real. In the end that's a question of belief, not a question of ability.

I remember seeing examples from conflicts where video from another event was passed off by the mainstream media to spin the story. It was not corrected until it was called out, because people found the real source and posted it, debunking what was shown.

I expect more or less the same process with deep fakes. Find the source material and then show how it was adulterated to produce the fake.


Something even reaching the mainstream media is already too far down the line; tools to nip this in the bud before it gets there would be quite valuable in my opinion.


You should check out the 'atetheonion' subreddit


Most people can tell that the Onion is fake, but given that there are millions of people out there and even the smartest of us have our off days, you're going to see people falling for it all the time. It's as much a numbers game as people being gullible.


Agree, everyone thought the "grab her by the..." soundbite was going to sink Trump, but it didn't really do anything.

I'd be far more terrified if there were an AI capable of generating genuinely catchy/hilarious/original meme images. The "original" part is the hardest part of the equation, because even if you managed the "catchy/hilarious" part (which is already almost impossible), the novelty of the results would wear off quickly if the machine isn't capable of true originality.


I dunno... the Deep Video Portraits work looks pretty convincing to me.

https://www.youtube.com/watch?v=qc5P2bvfl44


Ctrl Shift Face on YouTube has some really good stuff. There are definitely artifacts in this vid but the seamless transitions are pretty crazy - https://www.youtube.com/watch?v=bPhUhypV27w

To me the odd part is that these are just generally individuals tinkering with the technology using source material that wasn't intended to support a fake.


So do filters on selfie apps, and makeup looks fake in plenty of places too. But it seems to be tricking lots of us just fine?


of all the deepfakes we can identify, they are all obviously fakes. of all the deepfakes we don't identify, we don't count them in our identification. therefore we catch 100% of deepfakes.




