Fake celebrity porn is blowing up on Reddit, thanks to artificial intelligence (theverge.com)
226 points by LearnerHerzog on Jan 24, 2018 | 178 comments



> We assume, too, that face swapping is the end game, but it’s clearly just the beginning.

Isn't the end game an endless stream of personalized content for everyone? Wherein the entire corpus of human-created media becomes a training set for our fantasies.

It is interesting how entertainment is again pushing the boundary of technology. Soon enough, this push to make face-editing tools for porn more accessible will allow anyone to:

1) Replace their ex-husband's face in their old family videos with their new husband's face.

2) Create a viral video of Donald Trump murdering someone.

3) Be the star of their favourite movie, porn or otherwise. (What's the effect this would have on people's memories, when they actively see themselves doing everything James Bond does, for instance? Shooting people, being generally powerful, and "getting the girl"?)


Speaking of the effect this would have on people's memories, there's also the potential to use these tools to gaslight [1] someone.

An abuser could make images where a person was at an event they were never at, or with a person they never met.

> "You've totally met Steve before, here's a photo of you with him, how do you not remember?"

An abuser could tear down someone's reality more effectively than ever before. If they were having an affair with someone they just met, they could claim to be old school friends catching up by inserting them into an old photo.

Obviously, it's not all bad. There is the potential for this to be used for good as well, but I'm a pessimist.

[1] https://en.wikipedia.org/wiki/Gaslighting


I mean, there's presumably a very short window for that before photo evidence becomes unconvincing to people. The first time a Senator gets "exposed" for some misdeed but proves the evidence is fake, "there's a photo of this" loses its punch. The surrealism of seeing a fake recreation of oneself might have some impact, but we handled ultra-realistic paintings alright.

It does touch on an interesting point, though: we've had roughly 100 years in which photo and audio recreations of events constitute "hard evidence" beyond our ability to fully falsify. It appears that within the next ~20 years we'll lose that reliability - footage of a politician making a dirty deal or a businessman engaging in conspiracy will become deniable not just as a misleading edit, but as outright fabrication.

What do we do at that point? Do smartphone videos get automatically hashed and uploaded to a blockchain somewhere, so that we can prove when the video came into being? Do we return to an 1850s sense of news, where claims effectively cease to be falsifiable except via personal experience? Are we ready for any of this?
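
For what it's worth, the hashing-and-timestamping half of that is already trivial. A minimal sketch in Python, with "clip.mp4" as a placeholder filename and any append-only log (a blockchain, an RFC 3161 timestamping service) standing in as the anchor; publishing the pair proves only that the video existed no later than the anchor time, not that its content is genuine:

    import hashlib, json, time

    def hash_file(path, chunk_size=1 << 20):
        """SHA-256 the file in chunks so large videos never sit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # The "proof" is just (hash, time), published somewhere append-only.
    proof = {"sha256": hash_file("clip.mp4"), "unix_time": int(time.time())}
    print(json.dumps(proof))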


In today's political climate, even with incontrovertible evidence, all you need to do is shout out at the top of your lungs: "FAKE NEWS", and it doesn't matter any more.


Giving in to those who shout like that is exactly their goal.

We, as technologists, should come up with solutions that allow such shouts to become meaningless.


If you make shouts meaningless, you make screams for help meaningless.


I meant shouts of deliberate misrepresentation.


As national political news to a cynical, uninterested audience, fine.

But that's thinking too small: what happens when every recording of corporate misdeeds, every photo of a cheating spouse, even a child's baby pictures, lose their solidity as evidence? What happens to anonymous online conversations when "pics or it didn't happen" becomes "it didn't happen"?


The other side of that, detection of fraud, also benefits from current work in ML/AI.

Adversarial algos sound quite interesting. It might mean that we'd have to look at the evidence of fraud in a whole new way though, because the thing that gives away the fake might not be obvious.


> I mean, there's presumably a very short window for that before photo evidence becomes unconvincing to people.

Conversely, the people who want to believe false things (and under the right circumstances that's most of us) will be easier to convince. What is truth anyway?


That's not really true. Maybe fifty years, since cameras reached a quality where touch-ups would be noticed, compared to a photorealistic painting.


My guess is that signatures will be forged via NN before pictures.


I agree, but I wonder how much this matters? Are signatures still counted as a hard form of identity verification all that often?


Don't shoot the messenger, but here are a couple more techno-morality scenarios:

* A browser extension that detects if the social media profile you're viewing face-matches any revenge-porn that's out there, and serves it to you

* A phone app that undresses people, or [woman < AGE], or whoever, in real-time via AI-guided compositing. Will this be considered as offensive as putting a mirror on your shoe?

* Digital VR girl/boy-friends à la the movie "Her", except with the face, body, and voice of anyone you choose

Suddenly all these things seem very close at hand.


I think it'd probably be safer putting a _lower_ limit on the age for your undressing app.


Yeah, because the people that are going to be virtually undressing minors are going to be stopped by DRM. I mean seriously the world is about to get a lot weirder. These tools aren't even hard to use currently with little to no programming experience. Sure it's tough to get good results but that's going to change quick. How we adjust as a planet and society is going to be a real growth experience ... or utter chaos. Maybe both?


I think we’re going to come to consider these things normal and mundane.

Anyone can see you fake naked at any time. Meh who cares.

Anyone can put you in any random video. Meh who cares.

Anyone can ... meh

We used to think it scandalous/offensive for someone to take photos of us. Now it’s just part of being outside. We don’t even think about the fact that everyone walks around with a camera.


If it gets good enough that it becomes difficult to identify real security cam footage vs edited footage (which could be easier than you think considering the low quality/resolution of security cameras), there could be serious legal consequences.

What if a suspect's lawyer plays the same CCTV video and shows the arresting officer committing the crime and says, "See. Anyone could have made this evidence." You'd then have to prove chain of custody and it can get incredibly hairy .. but only if you're rich and famous enough to hire those lawyers and make that argument.


We're going to need to digitally sign everything at the time of production to prove they're not forgeries, then.


We already have the capability to do special effects and superposition in real time. My phone can do it with simple shapes in snapchat already, including with proper perspective and depth scaling.

Imagine what will be possible in even 5 years with good hardware. Deep fakes in real time, digitally signed.


I'd imagine that the ability to fake and tell apart fakes will scale with computational resources, and so we will also have progressively stronger signatures that cost more computational power to generate.

This is basically already the premise of PoW -- it's hard to fake out the network, and the chain of hashes shows you exactly how much computational work was put into demonstrating veracity.

This doesn't remove the ability to fake things, but it imposes a price. If you really want to show something is real, dump a bunch of computation into computing hashes.
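
A toy illustration of that pricing mechanism (in the spirit of Hashcash, not any real PoW network): finding a nonce whose hash clears a difficulty target is expensive, while checking the claim is nearly free.

    import hashlib

    def pow_stamp(data: bytes, difficulty_bits: int = 20) -> int:
        """Brute-force a nonce until sha256(data || nonce) falls below the target."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # expensive to find...
            nonce += 1

    def check_stamp(data: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))  # ...cheap to verify

    nonce = pow_stamp(b"sha256-of-the-video-goes-here")
    assert check_stamp(b"sha256-of-the-video-goes-here", nonce)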


Exactly. I think we're going to stop using video as evidence of anything.


Only if you are in the first world, and even then only if you are in certain subgroups.

I know of, and have read about, people committing suicide or being killed because their honor was harmed.

This is not meh for many billions of people.


> This is not meh for many billions of people.

I agree. It’s not meh for many people.

I think that the more common it becomes, the more social mores are going to change to accommodate it. Once upon a time, dressing like people do today was scandalous; now it’s not.

You know, like wearing a straw hat too late in the year leading to riots. https://en.m.wikipedia.org/wiki/Straw_Hat_Riot


Well, there was an app named "NameTag" back in 2014 that promised to find pictures of a potential match online [1], and some Russian guy was matching pictures of strangers on the Moscow metro against publicly available pictures (I cannot find the article anymore, unfortunately)

[1] http://www.ibtimes.com/nametag-facial-recognition-app-checks...


You should contact the Black Mirror writers and give them your ideas.


Things are going to get very weird in porn when you don’t have to convince a human to actually do it. I have to assume that early adopters will also be people with predilections which are unserved, or illegal. If people worry about their kids seeing disturbing porn now, imagine when it’s AI generated, photorealistic rape, snuff, child porn. Illegal or not, if it’s purely virtual, law enforcement is going to focus on the subset of crimes which involve actual human victims.


> If people worry about their kids seeing disturbing porn now, imagine when it’s AI generated, photorealistic rape, snuff, child porn.

There was a time when it was quite easy to find (without even trying for that specific content) photorealistic rape, snuff, bestiality, and child porn on the public web, without any AI involved.

> Illegal or not, if it’s purely virtual, law enforcement is going to focus on the subset of crimes which involve actual human victims.

Actual prosecutions for virtual (generally not photorealistic) child porn in various jurisdictions demonstrate that this is not a hard and fast rule.


Animations or fiction of obscene content are not illegal in the US and Japan. They are illegal in the UK (a man was sentenced over Simpsons porn) and many other countries.

Now with added realism, these lines could become blurry and we could see some of these issues brought up again.


> Animations or fiction of obscene content are not illegal in the US

Citation please. There is nowhere near enough precedent to draw such a conclusion in the US. The defendants in these cases often end up pleading guilty.

US v Handley, US v Red Rose Stories, etc.


Hmm .. seems things have changed quite a bit since I last read up on this. It seems to vary by state:

https://en.wikipedia.org/wiki/Legal_status_of_drawn_pornogra...


But the legal reasoning right now is that children are harmed in the making of it, and that the victims, dead or not, suffered through its production and continue to suffer through its distribution. I'm sure there'll be some landmark cases soon enough.


I think this progression will do no more than force an existing moral question into the open. What is the moral quality of a thought?

Personally I believe that even unspoken thoughts can have a strong moral dimension for the individual, though of course I see no legal dimension.

One aspect of this will be whether our indulgence of our own negative fantasies weakens our capability to act rightly when presented with a real-world moral choice, and whether that makes us culpable... or more culpable if we make a wrong choice.


> One aspect of this will be whether our indulgence of our own negative fantasies weakens our capability to act rightly when presented with a real-world moral choice, and whether that makes us culpable... or more culpable if we make a wrong choice.

It'll be really interesting to see more data come out about this. This concept is pretty much at the core of the video game violence debate which is still somewhat ongoing.


It depends. If it’s executive decision-making many hours after playing an immersive game, people figure out what’s fantasy and what is not.

I suspect that if it’s 2 seconds after pulling off a VR headset, after being in a photorealistic world which had no forced errors, then people would be very confused.


> imagine when it’s AI generated, photorealistic rape, snuff, child porn. Illegal or not, if it’s purely virtual, law enforcement is going to focus on the subset of crimes which involve actual human victims.

In the US, all of that is already illegal. If you put yourself in a position where what you possess is indistinguishable from the real thing, the courts err on the side of the potential victim.

Law enforcement's priorities are not going to change; they don't distinguish between what's virtual or not. If it looks like CP, you can't point to a producer with valid 2257 documentation and it isn't obviously a cartoon, you're cooked.


This will change.

The 2257 law follows the legal reasoning that as a porn producer, you have the burden of proving your innocence. You have to show the proof that the person in your image or video is a real, live _adult_ person, and if you cannot, it is assumed that the person is a real, live _child_ person.

A landmark case will come along where a jury will decide that this new technology introduces reasonable doubt into this thinking. When this happens, the _government_ will then have the burden of proving that the person in the video or image is a real, live _child_ person.


How does "possession" work when everything's in the cloud and instantly accessible to anyone?


If it's in your Dropbox, it's presumed to be yours.

If you upload it to reddit, congratulations, you're no longer a consumer. You're a distributor. New charges apply.

If you just browse, you're sort of safe, but you better hope your browser isn't caching anything to disk. Forensic reconstruction still constitutes possession. But nobody just browses.


You're not "sort of safe" if you browse. The US has ISP reporting laws, as do Australia, South Africa, France, and others:

http://chartsbin.com/view/q4y

It's a weird situation because in the UK, they simply block the content (no freedom of speech). In the US, we have freedom of speech so ISPs can't block anything. But they do have to report if you visit a site that contains illegal material or transmit it in plain text through their services.

Child pornography is a strict liability crime too, so intent doesn't matter. Say you download something from /r/gonewild and the girl is 16, but she looks 20 to you. Too bad. You're in violation of the law and can be put on a sex offender list.

Many people probably have illegal content without even realizing it. That's another reason why encrypting all your devices is so important.


It's going to get very legally interesting when someone puts some child porn into the Ethereum blockchain.


>Illegal or not, if it’s purely virtual, law enforcement is going to focus on the subset of crimes which involve actual human victims.

I'm not convinced. https://en.wikipedia.org/wiki/United_States_v._Handley


Will they really? There are already several thought crimes.


If we really had the Holodeck from Star Trek TNG, one of the first five uses would most likely be porn and/or a brothel.


> Isn't the end game an endless stream of personalized content for everyone?

We can keep going. Why would that be desirable? It hits the right chemical buttons in the brain. Drag it out far enough and we're really aiming at being blissed out brains in jars being fed shots of endorphins at the right intervals.


I think I'm partial to a variant of wireheading that sort of linearly shifts our perception of pain and pleasure. Getting an arm hacked off is like a bad headache, normal undesirable things are like a minor ache, normal day-to-day is like a fun night out, and orgasm is like ... I don't know, heroin I guess.


I don't know if the human brain could handle that. If you have too many fun nights out, they start to wear. If you take too many drugs, they start to lose the magic, they just become normal.

You need the highs with the lows.


I suppose if we got to that point, you could replace that portion of the brain with cybernetics, save the state before taking any drugs, then every so often flash it back to the beginning?


If you haven't read Peter F Hamilton's Commonwealth saga, you might appreciate it. Memory editing plus effective immortality is an interesting concept.


Best expressed by the philosophy of Butters:

https://vignette.wikia.nocookie.net/southpark/images/b/b3/A_...



> ex-husband's face in their old family videos with their new husband's face

Calm down Charlie Brooker.


Shut up Nathan Barley.


The technology for all of these exists; it's just a matter of motivation.

see: https://www.youtube.com/watch?v=ttGUiwfTYvg


People have been able to put celebrity faces on porn photos for decades.

I don't see how this is significantly different.

We could create a viral picture of Trump killing someone now.


Number 3 is really exciting to me. Think about every movie making you the star of the action. That'd be insane!


Play any blockbuster FPS from recent decades to get a taste of this "future".


I really think it'd be more fun to watch it in some ways. Like in FPS I'm typically "anonymous guy" or "random name given by the creators". If it was me watching myself in some random movie I think it'd be pretty awesome.


Technology is degrading the value of photo and video evidence (and probably audio too) asymptotically toward that of famously unreliable testimony from memory. Criminality becomes less risky and/or innocence becomes less protective. Law becomes less effective. A bad result, to the extent that the law isn't an ass.

On the plus side artistic tools that help materialize internal life become more effective. We can interact with our dreams and fantasies more readily, to potentially therapeutic benefit.

It's hard to say whether this trend holds more danger or promise.


I wouldn't be that pessimistic just yet. It's still possible that new advances in authentication technology might counteract some of these trends.


What do you mean "authentication technology"? Tamper detection? The ability to see that a tool was used? It may slow things down, but this is an arms race.


> What do you mean "authentication technology"?

Cryptographic signatures. I.e., every frame in a video is signed with a 512-bit key that states authoritatively which camera was the source of the video and when it was taken. In order to change any pixels in the video, you'd invalidate the signature and need to re-sign it. An attacker would be unable to do this unless they had physical access to the original camera.
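
A minimal sketch of the signing step, using Ed25519 from Python's cryptography package as a stand-in for whatever in-camera scheme and key size a real device would use (the serial number and timestamp below are made-up metadata):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In the proposed scheme, this key never leaves the camera's secure hardware.
    camera_key = Ed25519PrivateKey.generate()

    frame = b"...raw frame bytes..."
    metadata = b"camera=SN12345;time=2018-01-24T12:00:00Z"
    signature = camera_key.sign(frame + metadata)  # done in-camera, per frame

    # Anyone holding the camera's public key can check the frame; changing a
    # single pixel (or the claimed time) makes verification fail.
    try:
        camera_key.public_key().verify(signature, frame + metadata)
        print("frame is authentic relative to this camera's key")
    except InvalidSignature:
        print("frame was altered or signed by a different key")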

But it'll be at least 3 decades before this technology is commercialized, people see the demand for it, and the majority of cameras in the world are replaced by it. Even if this new tech is on the market within a decade (simple tech, but no demand or ecosystem yet), as long as 90% of existing, installed cameras lack the feature, fake videos will still get created with them. Only once ~80% of videos are authenticated, and a significant portion of the remaining 20% are fakes, will people be able to dismiss non-authenticated video. Up until then it's fakes non-stop.

We've got some ugly decades coming up.


Requiring proof of authenticity is a niche use case; it doesn't seem realistic that this alone will be sufficient to ensure that most cameras in the world gain extreme security to prevent tampering by users, rogue employees in manufacturing companies, etc., etc. Also, we have never been able to make perfectly secure software - an exploit that allows a key to be extracted from the camera is likely, and camera buyers won't care.

A key is just data. Even if the key can't be extracted from common cameras, there's no magic that can prevent someone from making a device with a known key (or falsely registering, with the magic worldwide-trustworthy central registry that the USA, China, and Russia all trust but somehow can't influence to get keys for manufacturing fake news, that a particular camera has been made), one that can be used to sign data outside the camera context to make it appear that it was "securely" seen by a legitimate camera.


You just have to make it hard enough that the likelihood of it being tampered with is small enough to be acceptable for a court.

If you're up against the NSA then pretty much any tech evidence would be invalid. But if you're having a small claim court case, then it seems reasonable that authentication technology can keep up with faking technology.


What's to keep you from adding the key to a camera after falsely generating the media? Or using the key of a known camera to generate false footage?


So spitballing, my assumption is that it'd end up looking something like SSL certificate chains today. In the situation you mentioned:

1. The attacker wouldn't have access to the original camera's private key.

2. The attacker creates a fake video, creates a private key, and signs the video with that key.

3. The attacker tries to install the key into a camera.

Step 3 has to be made impossible. That means a camera becomes a trusted entity, and only allowed parties (e.g. the camera manufacturer) have the authority to insert a private key into it. After provisioning a camera with a new private key, the key would then need to be signed with the manufacturer's private key, with that signature also stored in the camera. This shows the manufacturer is responsible for the contents of the video. If someone tries to change the camera's private key, it would no longer match what the manufacturer signed.

Which, yes, then means we need to have authorized camera manufacturers and a process for certificate revocation and all of that.

The exact opposite of "free and open" for recording devices - and the only way we aren't flooded with fake videos seriously impacting society. We have to want this if we want to still have a concept of video evidence in either the justice or social spheres.
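
For illustration, a sketch of that two-level chain (Ed25519 again standing in; a real scheme would use proper certificates, revocation, and secure hardware rather than keys held in ordinary memory):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # At the factory: the manufacturer endorses this camera's public key.
    maker_key = Ed25519PrivateKey.generate()
    camera_key = Ed25519PrivateKey.generate()
    camera_pub = camera_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    endorsement = maker_key.sign(camera_pub)  # stored in the camera alongside the key

    # In the field: the camera signs the video it records.
    video = b"...encoded video bytes..."
    video_sig = camera_key.sign(video)

    # A verifier trusts only the manufacturer's public key, like a root CA.
    def verify_chain(maker_pub, camera_pub, endorsement, video, video_sig):
        maker_pub.verify(endorsement, camera_pub)  # link 1: key is factory-endorsed
        Ed25519PublicKey.from_public_bytes(camera_pub).verify(video_sig, video)  # link 2

    verify_chain(maker_key.public_key(), camera_pub, endorsement, video, video_sig)
    print("video traces back to a factory-endorsed camera key")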


This isn't a completely technical problem, there's not going to be an air-tight technical solution.

I think, at this point, anything more than the camera having a secure device-generated key that it uses to sign its output and produce watermarks is overthinking it. That will add a nearly insurmountable barrier to many classes of forgery, namely one where a forger claims to have a photo taken with your camera.

You also have to remember a significant class of photo-forgery would involve images that are claimed to originate from a camera the attacker controls. For instance, forged video of a robbery from a security camera. That could, in some cases, be defeated by observing that the images have authentication information they shouldn't have (such as traces of watermarks from other cameras).

At the end of the day, forgeries will be spotted by observing that they have subtle errors that don't add up. That's how it's always been done.


Require that videos submitted as evidence use keypairs which were provably registered or published prior to the event.

Don't have a registered key? Great, but that's no longer admissible.

Still, problems would exist with letting the camera read the private key for signing, but not allowing an attacker (forger) to access the private key and thus sign their modified footage.

Just a thought.


Courts will use the best evidence available for a particular case, including all kinds of things that obviously can be less than perfectly reliable. Physical photographs have been doctored (or staged) for more than a century, and that doesn't mean that photos as such are inadmissible as evidence; so the fact that the process for taking video wasn't especially tamper-proof won't by itself be sufficient to disqualify it.

If a video is available and doesn't contain a pre-published key then the courts will still want to use it as useful evidence instead of simply ignoring it - sure, the opponent might have a slightly easier time attempting to contest that evidence as possibly doctored (just as they can do now), but courts will still make a judgement for each individual case.


Atoms are harder to move around than bits, and the rapid development in our ability to generate convincing video means that it will be hard to predict what sort of evidence we can generate now that will be convincing in the future.


How easy is it to put a different key into an iPhone? How easy is it to get a private key within an iPhone?


Timestamped signatures + publishing the public key w/a timestamp (blockchain?).


I think this is the answer. But then who's going to maintain a worldwide database of all the hashes of all the photos and videos made, and who would we trust to do it?


I suspect the recent Kodak blockchain adoption might be connected with this use case. Not specifically, but with respect to getting into the technology and acquiring IP and expertise.


Canon had a cryptographic add-on module for image verification for their 1D series way back in the 2000s but it was cracked.

I don't know if they subsequently updated it (it was called OSK), but the idea has been around for some time. It was considered important for submission of evidence, and several news agencies also insisted on it.


I think in general, as creating fakes becomes more and more prevalent, society is going to have to view crypto as more and more a necessity instead of as a niche thing.


What if after I'm done creating my fake picture I carefully take a picture of it with an authenticated camera?


With a bit of setup, you could remove the lens (while keeping it electrically connected for EXIF purposes) and then project whatever image you want onto the sensor.


DSLRs already do this, but it's been cracked in the past (Canon and Nikon at least).


Wouldn't this limit video post-processing (e.g. color correction, etc.) ?


As long as you keep the original this should be fine. If anyone needs proof just show them.


Both. It is an arms race, but it isn't hopeless.

Think of it this way: we have fantastic technology for forging paper documents, but we still trust them. For instance, someone forged a memo to make a former president look bad close to an election, and they were tripped up because they weren't thinking about fonts [1].

Getting a forgery right is hard, and requires a lot of attention to detail. Technology will improve to add even more details that require careful attention, and maybe some cryptographic assurances. Maybe it won't be a world where anyone can spot a forgery, but there will still be experts who can.

[1] https://en.wikipedia.org/wiki/Killian_documents_controversy



Fascinating story, although it wasn't just the font, it was trying to pass off default output from 2004 Word as a 1973 typewriter. It seems all they did was run it through a copier a few times. So it was font, paragraph style, and superscript.

If you're gonna frame a presidential candidate you should try harder.


People still use ink signatures for things. :/


How about this:

Every recording device sends a stream of hashes to be cryptographically timestamped in realtime as they record. Now the time window to create fake evidence becomes incredibly small. You have to arrange the fake to be created at the same time or earlier than you will claim that the event happened. You have to know what you need to fake in advance, too, which isn't the case for all fraud.

Evidence will be corroborated from multiple places. If I'm in a public place, I can bet the scene is being recorded from multiple places, and I will not be able to predict who will be in that place in advance. Fake evidence can be caught out by finding discrepancies. If the majority of recording device owners whose evidence corroborates can be trusted, then the identification of the fake one can be made with reasonable certainty.
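
A sketch of that realtime stream of hashes: each link commits to everything before it, so splicing a fake frame into the past would mean recomputing, and somehow re-timestamping, every link that followed.

    import hashlib

    def extend_chain(prev_link: bytes, frame: bytes) -> bytes:
        """Each link commits to the previous link plus the new frame."""
        return hashlib.sha256(prev_link + hashlib.sha256(frame).digest()).digest()

    link = b"\x00" * 32  # genesis value
    for frame in (b"frame-0", b"frame-1", b"frame-2"):  # stand-ins for real frames
        link = extend_chain(link, frame)
        # Each intermediate `link` would be sent out for timestamping as it
        # is produced, pinning the whole prefix of the recording in time.
    print(link.hex())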


> new advances in authentication technology

I'm not so optimistic. Other people here are focusing on breaking the authentication, and that's something to worry about. I'm worried about the effect that even _known_ false content will have on people. Seeing a perfectly realistic video of $POLITICIAN doing something repulsive will affect your emotional perception of $POLITICIAN even if your rational mind reads and understands some kind of "unauthenticated" label attached to the video.


I think the solution to that is a social one: generate so many authentic-looking but obviously false videos that no one will believe them without authentication. I think these immunization-fakes could reach even the most information-poor voter: just make them funny memes, etc. Plus, if there are enough of them, I'd think the emotional balance between rivals could even out.


In the meantime, perhaps privacy-conscious individuals can wear smart burkas with vocal modulation when in public.


> and probably audio too

I've grown to completely distrust audio recording ever since I found out about VoCo from Adobe[0]. I wonder how long until there's an open source alternative.

[0] https://www.youtube.com/watch?v=I3l4XLZ59iw


In the photo/picture department this is nothing new; it has existed for ages.

http://iliketowastemytime.com/sites/default/files/imagecache...


Hopefully courts can stay abreast of the trend, although I doubt it. The court of public opinion, though?! I wonder if an angry mob, incited by something upsetting about the target "caught on film," will commit murder.


In the future envisioned in the 90s manga Ghost in the Shell, there are several discussions about this. For example, a random person posting video evidence of a crime is useless, since it's impossible to determine if the video is real or a complete fabrication. Only video from widely trusted sources like major news outlets has any weight behind it.


There was a good Radiolab episode about this:

http://www.radiolab.org/story/breaking-news/


Will evidence only be acceptable from devices that cryptographically sign their content close to the CCD (or whatever technology turns analog to digital)?


Video recordings are modified (white balance, stabilisation, etc.) and then compressed, so any signature of the raw CCD stream is useless (except for internal camera security).


There was a pretty good discussion yesterday in /r/cyberpunk about the possible consequences of this: https://www.reddit.com/r/Cyberpunk/comments/7sexm6/deepfakes...

> The subreddit /r/Deepfakes became very active very fast and new deepfakes are submitted every day with varying degrees of realism. The most scary part is that ANYONE can be deepfaked, not just celebrities. Provided you have the right hardware (because neural networks demand beefy video cards for training) you could train a model of your friend and paste her face onto a porn video and boom. All you have to do is download a browser extension that downloads all photos from someone's instagram and work from there. Nobody is safe from this.

> I think this here is as cyberpunk as it gets. The technology is 4 months old and has already yielded extremely realistic results. Think of what we will have one year from now. Something like this matches both the high tech and the low life aspect of the cyberpunk genre.

EDIT: Pasted the wrong link.


Revenge porn doesn't even need to be real anymore to terrorize someone. A celeb has plausible deniability when it comes to stuff like this. But an average person whose career is on the line?


Now they can just say, "oh, I've been deepfaked" and move on. In many ways this will be psychologically liberating for many victims of revenge porn.


That assumes a really high level of technology literacy from the people the fake is being used to deceive. That simply isn't going to be true, probably ever. It's already easy to convince people of transparent falsehoods; plausible faked video evidence will make it worse.

It's the "Give me six lines written by the most honest man in the world, and I will find enough in them to hang him" problem except now it's "give me six photos of the most honest man and I will convince everyone he loves to abandon him".

There are notable examples of harassment mobs forming and never dissipating off of really scant "evidence", things like accusing school shooting parents of being "crisis actors" or long harassment campaigns based on flimsy claims that a woman slept around. It's disgusting.

School bullying leading to suicides is already bad enough, what about when teens are sending forged porn of each other around? Or to classmates' parents to get them in trouble?


> It's already easy to convince people of transparent falsehoods; plausible faked video evidence will make it worse

It will for a while, then it'll revert to the point that video “evidence” is—except to experts validating validation tools that most people don't understand—exactly equal to rumor in public estimation, because everyone has been exposed to so much fakery.


> That assumes a really high level of technology literacy from the people the fake is being used to deceive.

Counterpoint: "photoshopped" is a dictionary word.


Honestly this is still photoshopping. It's just automated photoshopping.


It only took 8 years for people to become Bitcoin experts.


What year is that going to happen?


It was a somewhat tongue in cheek comment.


You've clearly never been a victim of sexual harassment if you think people can just handwave these sorts of things away.


Is it better than the status quo, where you just say “:| yep, that’s me, I’m a victim of revenge porn”?


if "deepfake" gets common, it could probably be similar to people's facebook account getting hacked and mass-messaging porn. Nowadays you just say "oops i've been hacked" and move on even if such message was sent to your boss. Same probably can happen


When I look at the future, I think back to these videos.

Hells Club:

Part 1: https://www.youtube.com/watch?v=QajyNRnyPMs

Part 2: https://www.youtube.com/watch?v=wfYlTtA7-ks

Where I would like this to go is either being able to take scenes from different films and create mashups like this.

Or perhaps getting a whole bunch of extras narrating lines and acting in front of basic sets with green screens, then pasting on the faces of recognizable actors and using something like Lyrebird for the voices, where actors have sold the rights to their faces, voices, and personalities for cheap.

Now you have a $100m movie for the cost of $100k.

A similar premise to the film The Congress.

-----

I really think in about 5 years, when the software is there and dedicated IaaS to train the models is commonly available, we'll start to see some really cool stuff.


10 years out, who are the 'recognizable actors'? Do they just keep recycling the same ones?


Let's just say that the decision for who to play James Bond will be even more interesting.



Hell, it could be someone different in your version versus mine.


So I clicked the link in the article (for science, so you don’t have to) and I’m blown away. People are doing this on home computer rigs? I thought I was going to find some really crappy paste jobs, but instead I found myself having to completely second-guess what I was seeing. Some of the videos of course suffer from odd minor defects that give away their inauthenticity, but others were flat out as real as anything else I’ve ever seen.

Now I’m concerned about the implications of this. We already know any image can be faked, and almost any video, but we also laugh at people who say the moon landings were faked. Given this, though, how could anyone believe video evidence of, say, the president with Stormy Daniels, which is a matter of unfortunate import with real consequences?

How hard would it be to fake an international incident from multiple vantage points?


I've taken some machine learning classes and I've played with TensorFlow a bit. None of this was a big surprise to me.

But the overall implications are deeply troubling. I am tempted to say we're not entering a "post-truth" era, but more like a "post-reason" era. It's almost like rational thought has painted itself into a corner. These videos are almost like a mathematical "proof" too complex to be independently verified, or too complex for a human (and those are happening too).

If reason is hitting a ceiling, I'm not sure what else we could use to steer a coherent society through whatever murky waters might lay ahead.


You go from rule-based thinking to feedback-based interaction. Part of the picture is described by the old saw about international politics: "No permanent allies, only permanent interests."

I do believe we are entering what I call a "trans-rational" era. There are stable strange attractors beyond the Age of Reason. Our technology is forcing us to confront the questions of who we are and what we want to do with ourselves.


I've been thinking about this for a long while. I think this is good.

With face recognition, old pictures you might have posted online are very easy to find. Some ex-boyfriend shared a naked picture of you? You are screwed.

Now, you can simply say that it is a deepfake. Everyone will have naked pictures of "themselves" online, even if they are fake.


I agree - taking this even further, why would you consume these pictures in the first place, if you can just deepfake them?


Non-NSFW sample: https://www.reddit.com/r/deepfakes/comments/7sjkw5/ilm_fixed...

> Top is original footage from Rogue One with a strange CGI Carrie Fisher. Movie budget: $200m

> Bottom is a 20 minute fake that could have been done in essentially the same way with a visually similar actress. My budget: $0 and some Fleetwood Mac tunes


Can you explain what's going on here?

Specifically, I don't understand this sentence:

> Bottom is a 20 minute fake that could have been done in essentially the same way with a visually similar actress.


Accuracy depends on some sort of match between host face and desired face. In this example, the host face was the cgi face which is obviously already very close to the desired face. If filming from scratch and skipping cgi, you need to pick a good host face.



A quote that struck me from previous discussion on the topic:

@ekimekim 44 days ago

"We've already seen this with images and Photoshop. Society and their heuristics of belief will adjust as these new capabilities become widespread.

What's more troubling is that as media becomes falsifiable, solid evidence of...well, anything, becomes hard to have.

The ultimate loser there is the truth, sadly."


The inevitable outcome is that no recorded media will be taken at face value unless there is immense proof in some way of its veracity.

In the short term, this will probably lead to all kinds of terrible things (kids getting bullied through computer generated imagery of them, people being fired for videos of them saying things they never said, jealous spurned lovers attempting to break apart marriages with fake videos, etc.)

In the long term, it might actually be a good thing - instilling a strong sense of caution for anything that claims to be recorded from the real world.


How do you hold people accountable for bad behavior (e.g. police behaving poorly when making an arrest or traffic stop) when the video evidence is viewed as "easily faked"? I mean, eyewitness testimony has been proven many times to be faulty, but presumably someone with an axe to grind (and some AWS credits?) could make fake videos of all sorts of bad things _from multiple perspectives_.

Scary indeed.


Well in the police conduct case, if they're wearing a body camera, you can still have a properly-documented chain-of-custody backing up the photo evidence.

We've had good technology for the forgery of printed documents for a long time, and the world hasn't ended.


Cameras that digitally sign each frame?


All that does is link that frame to [a key in] the particular camera. It doesn't verify that the frame was "real" in some way. It defends whoever controls [the key in] that camera from others secretly tampering with that frame afterwards, but it doesn't defend others from tampering by whoever controls [the key in] that camera.

Central signature verification or blockchain could verify a timestamp - i.e. prove that the frame was taken (or maliciously created) before a particular time. That's about it.


> The inevitable outcome is that no recorded media will be taken at face value unless there is immense proof in some way of its veracity.

Exactly the opposite happened with print media, where any old bollocks is believed if it aligns with the reader's viewpoint.


This is... scary. The potential applications of this technology extend far beyond porn. How long until intelligence agencies are using this sort of technology to sabotage political opponents?


Low resolution security footage is about 1 year from extinction.


Who's putting up the capital to replace millions of low res cameras?


Or to build the infrastructure needed to support anything else?


How is this new? Kerry was photoshopped into controversial photos in 2004. Now it is video, yes, but fraudulent photos have convinced plenty of people already.


Very true. More famously, there were photos of Hitler engaging in debauchery produced by the Brits in WW2, which is why I believe such an act has precedent. But producing videos like this, where, if done well, you literally struggle to tell real from fake, is very different from some Polaroids or Photoshops. It's new due to the relative ease with which highly sophisticated fakes can be produced.


This has been done since the 1940s (and probably earlier)

http://www.famouspictures.org/churchill-and-the-tommy-gun/


> How long until intelligence agencies are using this sort of technology to sabotage political opponents?

https://en.wikipedia.org/wiki/Nikolai_Yezhov#Execution


An interesting angle here is the arms race between manipulation and forensics. In image forensics, clever people are using clever techniques to keep us tethered to some notion of authenticity in digital media. Like this guy: http://www.cs.dartmouth.edu/farid/downloads/publications/wif...

These emerging video manipulation tools open new frontiers for related forensics research. In 10 years, when we see a video of someone doing something horrible, these people are perhaps our only hope of knowing whether or not what we're seeing ever happened.


This seems like the beginning of a giant problem. First we had Adobe able to replicate a user's voice after listening to it for 20 to 40 minutes; now this.

I guess vein scanning is going to happen sooner or later for personal verification.


Interesting.

It seems /r/deepfakes (NSFW) is where the content is at (I assume the article doesn’t link to that, but haven’t checked).


Maybe one day everyone will be using cameras with a combination of digital signatures and watermarking technologies.

That said, not being able to definitively classify videos as "faked" or "original" could help people suffering from revenge porn (or political manipulation).


This here seems to be the only possible way forward, other than throwing out all concept of video evidence.

All video streams (and frames) are signed / watermarked with the source that recorded them. Video editing now becomes impossible without showing a change in ownership.

Unfortunately it will be many decades until we can successfully replace the majority of cameras with these new ones, which society doesn't yet have reason to understand the need for.


Strong authentication might protect video in news and court contexts. But I don't see how it'd help for celebrity porn, revenge porn, and other contexts where there is no authenticated original. Unless all the originals were authenticated. But that'd require mandatory authentication on all recording devices, which is rather frightening from a privacy perspective.


> But that'd require mandatory authentication on all recording devices, which is rather frightening from a privacy perspective.

Yes inherently scary, but is there a solution to still allow a user to maintain control as to when they give up their privacy?

Ex. a 3rd party escrow service that will authenticate a video belongs to a user without revealing who the user is? Or some other key sharing scheme that allows users to decide when they want to take ownership of a whistle-blower video or some other govt corruption video in a repressive country?

I'm not a public key crypto expert, but I can't imagine this is the first time user-controlled authentication has been investigated. The requirement would be that everything is always signed, but (through key chaining, sharing...) a user gets to decide if they reveal their association to it.


Perhaps future image sensors used in cameras will be able to capture DNA samples, or maybe just an infrared fingerprint of the subject which would be lost in the final photo, but could be used to sign it against tampering.


do you want a blockchain for that, let me see, I have a folder here somewhere...


Between VR and the possibility to train up person-specific porn generators, porn is only going to become more and more of a super stimulus that is too hard to resist. Best to try and quit fully now.


John C. Wright, in his book The Golden Age, has worked out a pretty complete picture of the ability to mold one's perception of reality to one's liking.


It's interesting that the issue of ensuring authenticity might also swing into censorship too: if politicians only issued certificates for images of themselves doing good things, they can then dismiss those of them doing bad things as fake because the creator couldn't produce the cert.


You cannot allow the politician to be the same person that issues certs because then it creates a market for a politician to buy doctored images and certify them. A neutral unbiased third party must do the certs.


This technology makes me afraid because it deconstructs many core values I have.

You can do plenty of things with it. Create fake porn, among other things. But is it ethical? Is it ethical to create fake porn with a celebrity's face? With your ex-girlfriend's? What if you keep it for your own personal use?

But then where are the limits? What about a "teen"? What about fake child pornography, made from pictures of children found on the internet, that lets someone relieve themselves without causing any harm to others?

I would at first think it's okay if you keep it for your own private use, but then that becomes scary, because we've been "trained" to think that child porn is not okay.

Don't know why this has been downvoted?


If it makes you feel any better (it won't), this kind of thing has been possible for years.

The only difference is that now instead of using photoshop to make fake images, we can automate the process, and use it to make video. This isn't a paradigm shift, it's a change in pace.


A change in pace can create a revolution. That's exactly what the industrial revolution was.


Whether a tool is ethical is not the same question as whether various activities done with the tool are ethical.


Honest question: what's the problem if you keep it to yourself?


Another honest question, do you think "keeping things to yourself" really means you affect only yourself?


[flagged]


Whoah.... I was not trying to imply mystical connections between private thoughts/behaviors and some global universal results...

I was just trying to suggest that our thoughts, words, and actions, even in private, are continually shaping us as people over all.

Simple example: I can often tell when my friends are depressed, even when they are trying to keep it private.

Also, I'm not sure why you used this as a chance to take a crack at feminists. tsk.


> Also, I'm not sure why you used this as a chance to take a crack at feminists. tsk.

Well, it used to be that the loudest public voices asserting that what people did in private somehow affected the outside world were people like Pat Robertson. In 2018, it's people like Anita Sarkeesian. So I think it's apt to make the historical link there. If some other people come along in the future, I'll make the link then as well.


This is great for celebrities... From now on, if they get their phones hacked and get their private videos stolen and shared publicly, they can (plausibly) claim that it's not real.


And this is why we need a verified identity system on the Internet.

With such a system in place, you will know who created the video, as their verified identity is attached to it. If there's no verified identity attached to it, the video won't hold any weight. The same goes for everything done on the iNet: you can use it anonymously, where what you say or do doesn't hold much or any weight, or you can comment, post, etc. using your verified ID, which does.

This is just one solution that could help with all the fakery on the iNet and the mayhem it brings, and will continue to bring, but worse.


Do you mean digital signatures?


More like a drivers license or govt issued ID card for Internet usage.

You can use the Internet anonymously as you always have, but if you want to get your point across and ensure the veracity of whatever you're posting, you'll use your verified Internet ID.

I'm thinking of the case of someone impersonating a public figure: where the originator of the video is not the public figure, it's immediately labeled fake news. Also, if it's revenge porn and you upload it without your identity attached, it's immediately labeled fake.


You could always just digitally sign everything you put out. You can do that today. I don't see how you're proposing anything new or effective.

Besides, what you're proposing is stupid for other reasons. If I have a video of a police officer murdering someone, you're just going to say it's fake news unless that officer himself posts it? That's straight bullshit. "Government Sanctioned Truth" bullshit. The truth about a person or agency is not determined solely by what they approve of.


A govt-mandated Internet identity system that you invoke when you want to ensure the veracity of what you filmed and posted. If not invoked, then it's labeled fake, just created for fun or shits n giggles. It holds zero weight, and everyone on the Internet knows/understands this system like the back of their hand.

Again I am thinking of two use cases re: this solution... revenge porn and celebrity & political figure fake videos.


This doesn't solve either of those problems. Nobody is going to ensure revenge porn veracity. They will be posted online, labeled "fake" and absolutely nothing will change.

And a political figure can already just deny that the video is real. You're not preventing it from being posted, so it will be posted, and people aren't going to believe it? Simply because it wasn't approved by that politician? Again, it makes no difference. People will believe it's real because of course that politician won't approve of an unfavorable message.

Again, things don't become true just because someone approves of them. And things don't become fake just because nobody involved approves of them. Labels do not mean anything unless you can guarantee their accuracy. You're really not thinking this through effectively. And again, digital signatures already allow all of this to happen.

You're basically asking for an official worldwide propaganda platform, and it will be no more trustworthy than the existing propaganda platforms.


> This doesn't solve either of those problems. Nobody is going to ensure revenge porn veracity. They will be posted online, labeled "fake" and absolutely nothing will change.

That's what I am getting at... things change 180 degrees on an Internet where no one believes anything that is not posted by someone with their verified identity, thus making revenge porn pointless. You even say they will label it fake, thus no one gives a crap about some fake b.s. and it's totally and immediately disregarded.


>things change 180 degrees on the Internet where no believes anything that is not posted by someone with their verified identity

No. People will not care what identity you used to post things. Verified or unverified makes no difference unless the verification is thorough and performed by a third party. If it is self-done, then it is no different from how things are currently.

>You even say they will label it fake thus no one giving a crap about some fake b.s. and it's totally and immediately disregarded.

No. Non sequitur. You label it fake, and people will just disbelieve your label. They do not necessarily disbelieve the content.

At this point, I don't know what to tell you. Create a startup and put your money into it. Place a big bet. You can lead a horse to water, but he drowns himself.


I'm talking about an Internet that does not exist yet, one where there's a system in place where every iNet user can spot a fake video vs. a real one in a heartbeat, or fake news vs. real.

Do you not think, based on this Face2Face technology and all the fake news, that this evolution needs to happen?


You're talking about something so obviously flawed I will not address it anymore.


After over 100 years of having a massive professional industry dedicated to creating fake sequences of images for fun and profit, another decrease in the capital needed to create films doesn't seem such a revolutionary thing.

Just like people watched that first film of the train moving towards the camera and freaked out, while today that seems quaint, humans as pattern-matchers extraordinaires will find a way to discern the fakes.


Finding the origin of content is already a challenge these days. A random idea I have is creating a blockchain where content creators can register their creations to prove their origin. Also, camera manufacturers could get involved and build in hardware that signs every picture and video captured by the camera, which can then be registered on the blockchain.


Kodakcoin?


Well, definitely Black Mirror, but you can now virtually recreate a loved one who has passed.


...and the “nothing” is claiming all of Phantastica.


"DNA or it didn't happen"


You can mail-order custom DNA.



