AI-assisted fake porn is being used by people on Reddit for self-completion (linustechtips.com)
239 points by DyslexicAtheist on Dec 31, 2017 | 122 comments



This is what's being accomplished by one amateur with a little bit of hardware. Imagine what will happen when you have professional services (legal or otherwise) making altered videos of whoever or whatever you want.

Want court evidence of your wife cheating? Fake grainy hidden camera video. Want to discredit a politician? Here he is picking up a male escort and using racial slurs. These sorts of things could be used to start wars.

Sure, deeper analysis might prove that this video isn't real, but that won't stop the immediate public reaction and response. The knee-jerk reaction to what the public sees is the end-goal. I'll bet good money that this technology plays a major role in future elections.


The problem I see with tech like this is that it muddies the water for everyone. I find it very strange that technologies like the internet appear to have created a world with more ambiguity and less transparency than we've had before - mostly because people can now find support for any kind of belief online. Some people are either willing to learn or have learned how to get information from reputation-based systems, and some will just go rogue and create their own reality. I hope democracy survives this shift, because not only will we have to discuss the facts; now we will also have to discuss what the facts are.


We've always had to discuss what the facts are - the extreme bias between newspapers and TV channels picking and choosing facts to provide to their users, even on the same news stories - news source A chooses not to provide background info, news source B chooses to provide background info from one primary source only, news source C chooses to provide background info from another source, news source D chooses to provide a general overview but with a slight bent towards assuming that its readers hold values X, Y and Z, etc etc.

You can go back and read stories from the 1950s which have these issues and more. There has never been a time where all individuals involved in a democracy have understood all the facts. Popular politics has always been driven by newspapers and special interest groups.

In my country, implementation of a certain policy has very clear correlation with a marked increase in suicide rates - the slow rollout of this specific policy across areas of the country provides an easy way to show this. Certain newspapers refuse to report on this - it doesn't match their ideology - after all, when pointed out, people holding this ideology will often point out that suicide is one's own decision and it's not the Government's problem.


> We've always had to discuss what the facts are - the extreme bias between newspapers

That's an assertion, but others say otherwise. According to the video below, Americans have argued about values in the past, but agreed on the facts.

E.g. global warming. People agreed it was happening, but differed in what and whether to do about it. Today, we dispute its very factuality.

https://youtu.be/XirnEfkdQJM


That video of the past was faked.

And the experts you cite that say it is real are in on it.

It's the Establishment trying to suppress the truth. That's what they say today. With more technology, the underdog with AI can beat the Establishment in a few weeks. And there will not be any way to distinguish what's true, any more than you can distinguish a chess line played by AlphaZero from any other really good chess line.


Which policy is causing suicides?

Seems like you'd mention it unless it is a politically charged issue, perhaps related to gender relations or immigration or something.


It's related to a benefits reform designed to cut costs by giving people less and making them jump through more hoops to get it, without providing any additional support.


In the UK there have been several benefit reforms.

One of these is a simplification of a bunch of benefits into something called "Universal Credit". It's a good idea, with some nice points. It's being implemented terribly, and is causing significant harm.

When someone makes a claim there is a minimum wait of six weeks before they get any money. In England a landlord can apply to the courts to repossess a rented home if the tenant misses eight weeks of rent. So the rollout of UC is causing some people to be evicted from their homes.

The old benefits were paid fortnightly. UC is paid monthly. People have to budget very carefully to last all month. Poor people tend to be bad at budgeting, and being poor makes it harder to budget.

There is a very strict, punitive sanctions regime in place. You'll see people being given sanctions because they attended a medical appointment or had to look after a child.

People are expected to spend 35 hours a week searching for work, and they have to provide evidence that they've done so. That could be OK, but it means a claimant will send very many applications to unsuitable jobs, rather than spend a couple of hours polishing an application for a more suitable job.

There's a lot more to sanctions and suspension: the rules are very strict; there are very many of them; and the DWP and JC+ don't know what their own rules are and sometimes give incorrect advice. So sometimes a claimant will ask DWP or JC+ what to do, will follow that instruction, and will end up being sanctioned because it was the wrong thing to do.

Benefits are paid by the Secretary of State for the Department of Work and Pensions. That person is busy being a government minister, so they employ "decision makers" who look at the various acts of parliament, statutory instruments, and case law and apply those to the details of the claim. The quality of decision making is particularly poor at the moment.

Two judges (who are involved in appeals) have said this. Here's a useful article: https://linustechtips.com/main/topic/872276-ai-assisted-fake...

Note here they're not just talking about UC, but also about PIP and ESA. (PIP and ESA are disability benefits. ESA is an out of work benefit.)

To get PIP or ESA the claimant will need an independent medical. This is provided by a nurse, a physiotherapist, or an occupational therapist. These people are employed by private companies. Those companies have said that over 30% of the assessments they do are not acceptable. https://twitter.com/CommonsWorkPen/status/942773136390590465

Imagine that - imagine if fully one third of everything you did was not acceptable.

Finally, because the claim is about suicide: https://www.disabilitynewsservice.com/shocking-nhs-stats-sho...

We have to be a bit careful with these figures. (We'd expect the population claiming benefits for a disability to include more people who've had suicidal thoughts).


You just set your credibility on fire with the phrase "the extreme bias between newspapers and TV channels picking and choosing facts". What you are describing is literally the opposite of how objective journalism is performed. That you believe this is how professional newsrooms function indicates you haven't worked in the industry.


That you and vertex-four disagree not only on the facts, but on the nature of facts indicates that perhaps his point has some merit. ;-)

FWIW, I've been that primary source for some major news stories, and the final story usually bears just a passing resemblance to reality as I experienced it. Assuming good faith on the part of the reporter, I have to assume that reality as I experienced it bears only a passing resemblance to reality as other primary sources experience it, which is congruent with other known facts like the difficulty of communication, the existence of cognitive biases, and the field of psychology. Objectivity is a myth; it's oftentimes a useful myth, but it's worth remembering that the reporter has his own biases and preconceptions as well, and as the person choosing what to report upon, these will necessarily make it into the final piece. The audience has their own biases as well, so they won't be reading exactly the same piece that the reporter wrote.


"That you and vertex-four disagree not only on the facts, but on the nature of facts indicates that perhaps his point has some merit."

Unlikely. One of us has spent a large chunk of their professional career working in and around newsrooms.


The fact is that facts lie: they don't always capture the big picture, and they certainly don't capture the importance of an event in and of itself.

With millions of events happening each and every day the simple act of choice is incredibly meaningful.

Your own belief in the objectivity of modern newsrooms proves my point.

Clearly both Al-Jazeera and Fox News do fact checking... yet one would be left with a very different view of the world depending on who did the reporting.


It certainly proves nothing of the kind, and your use of the phrase "belief" is disingenuous. It implies an element of faith that is entirely lacking from any of my assertions. My statements are made based on direct observation of 35 independent newsrooms and direct access to the AP wire over a period of three and a half years. Fox is an outlier in the industry.


The fact that you resort to the argument from authority means you are the opposite of an authority on objective truth.

If anything, a newsroom is the one place where getting objectivity wrong has no consequences. In a typical company, if you are getting the truth about something wrong (e.g. what people want, what people are willing to pay, where the market will move), the lack of profit will refute you.

In a newsroom however objective truth is a hygiene issue like a cook washing his hands. They should. But if they don't it has very few consequences.


I read newspapers, and I often find that important context is missing from every article on a given story, unfortunately - and different context is missing per paper, to match the biases of that paper.


I wouldn't say it's so much a shift as an arms race between hoaxers and investigators.

In the olden days you certainly had fewer ways to fake footage (no image editing or digital forms of manipulation), but at the same time it was also much easier to trick people into believing falsehoods. I mean, look at the older stuff on the Museum of Hoaxes. It'd be shot down in minutes in 2017, since it's obvious the people involved did very little work to back up their claims. You could literally walk into a town where a famous guy's son went missing ten years ago and pretend you're him, with no one any the wiser for months - despite not looking anything like the missing guy and knowing very little about his life. [0]

https://en.wikipedia.org/wiki/Tichborne_case

Or pretend to be a missing Russian princess/foreign visitor/royal with no one being any the wiser. Or lie in newspapers about how astronomers found a society of advanced humanoids on the Moon. Either way, while there were few ways to falsify information to the same quality as can be done now, that was made up for by how easy it was to trick people in a world without the internet, a decent press/journalists, or easily hidden cameras.

As history went on, obviously the number of ways you could falsify evidence increased, as did the convincingness of the trickery. However, at the same time it also became much easier to debunk false claims, so the quality of the fakers had to get better if they wanted to stand a chance of fooling anyone.

Hence the arms race. The better technology gets at letting people forge evidence, the better it also gets at letting people debunk fakes.


Just as food for thought, being able to lie or con people is also not necessarily a bad thing: Giorgio Perlasca was able to save more than 5000 Jews from deportation by posing as the Spanish consul in Nazi-occupied Budapest.

https://en.m.wikipedia.org/wiki/Giorgio_Perlasca


That too. Lying isn't necessarily evil, and the fact it's possible has averted a lot of tragedies as well as caused others.


I think this is a fallacy. The olden days were way worse. To put it in perspective: the idea that carrots improve vision is propaganda created by the British, yet see how many people believed it. Humans inventing and accepting fake beliefs to comfort themselves was never a new thing. Governments gaslighting their own citizens is Communist Party 101. People always want to believe that yesterday was better and blame their problems on the new.


> The idea that carrots improve vision is propaganda created by the British, yet see how many people believed it.

Not exactly propaganda, more misdirection; maskirovka, as our Russian comrades call it.


> I find it very strange that technologies like the internet appear to have created a world with more ambiguity and less transparency than we've had before

Any technology created by humans will be a mirror of human shortcomings. Techno-utopians believe that technology will make humanity somehow better. It won't. We'll just use it to do the same terrible shit we've been doing since the dawn of time.


This is another means by which advanced comms and info capabilities undermine social trust.

https://www.reddit.com/r/dredmorbius/comments/6jqakv/communi...


All of this has been possible for the last century.

For instance, at least in Russia you just get an actor that looks like the politician, film them doing some bad stuff and blackmail them. Pretty simple.

We have had doctored photos/videos for as long as the mediums existed.

If anything it will saturate and you won't be able to tell what is real or not. Most people will default to "fake" and have to rely on the trustworthiness of the source. Just like now. We have Photoshop and there isn't mass chaos.


> Most people will default to "fake" and have to rely on the trustworthiness of the source.

Until faked video of the trusted source him- or herself appears.


I think we'll probably have to move to a model where any security footage or images that are used in court must be digitally signed at the hardware level, with a verification process similar to how CAs work now. Of course, this will still be tamperable, but tampering would have to be much more premeditated and sophisticated.


The privacy implications of that would be scary. It seems on par with North Korea's Red Star watermarking, which has been dubbed a "dictator's wet dream".

https://qz.com/583311/north-koreas-dissident-tracking-comput...


That is absolutely never going to be workable, because you can just film another video being played back.


I mean, there are all kinds of ways to bound uncertainty. You’re right, you could film another camera, but that could be addressed separately. Digital signing would only prove that a video came directly from a particular camera’s sensors. To assert the veracity of a video, you’d then have to prove that particular camera was at the scene, and the light coming into the lens was authentic. Most court footage is incidental, not intentional, so the sort of doctoring you describe is unlikely in a majority of cases. Where it is likely, different measures could be taken to bound uncertainty.

Separately, generating false images or videos for libel purposes would become more difficult; if a camera that has never been owned by a person is used to produce a signed video that is pornographic in nature, the authenticity of that video is immediately suspect. It would take a much more sophisticated attack to produce a mathematically convincing fake video.
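
For what it's worth, the cryptographic core of that scheme is simple; the hard parts are key custody and the filming-a-screen problem raised above. A minimal sketch in Python with the cryptography package, where the camera key and function names are hypothetical illustrations, not any real camera's API:

    # Camera-side signing and court-side verification, assuming a per-camera
    # ECDSA key pair whose public half is published by the manufacturer,
    # CA-style. In a real device the private key would live in
    # tamper-resistant hardware.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    camera_key = ec.generate_private_key(ec.SECP256R1())
    camera_pub = camera_key.public_key()

    def sign_footage(video_bytes: bytes) -> bytes:
        # Sign the raw sensor output as the camera writes it out.
        return camera_key.sign(video_bytes, ec.ECDSA(hashes.SHA256()))

    def verify_footage(video_bytes: bytes, signature: bytes) -> bool:
        # Check the file against the camera's published public key.
        try:
            camera_pub.verify(signature, video_bytes, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False

    footage = b"...raw sensor frames..."
    sig = sign_footage(footage)
    assert verify_footage(footage, sig)             # untouched footage passes
    assert not verify_footage(footage + b"x", sig)  # any edit breaks it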


If you are worried about somebody actively faking footage, and are willing to go to such drastic DRM measures to prevent it, you need to explain how it is actually going to solve the problem, not just handwave.


I can imagine watermarking at the light sensor level. It would need some very special provisions for compression, though.


Problem is, this way journalists won't have anonymous sources anymore and you will be able to track any citizen filming power abuses.


Well, but then you'd have to trust the light sensor manufacturer to provide secure keys. Remember when the NSA hacked into Gemalto and had a chance to steal its SIM card keys?

However, that's still interesting; isn't it possible that each light sensor has certain watermark-like characteristics arising from the manufacturing process?


Yes, fingerprinting of digital cameras has been a thing for at least a decade [https://www.eurekalert.org/pub_releases/2006-04/bu-bur041806...]
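
For the curious, the core trick in that research (photo-response non-uniformity) can be caricatured in a few lines: the noise left over after denoising an image is a stable per-sensor pattern, and residuals from the same camera correlate. A toy sketch with numpy/scipy - real methods use wavelet denoising and careful calibration, so this only shows the shape of the idea:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        # Approximate the sensor noise as image minus a smoothed copy.
        return img - gaussian_filter(img, sigma=2)

    def fingerprint(frames):
        # Average residuals over many frames so scene content cancels out.
        return np.mean([noise_residual(f) for f in frames], axis=0)

    def same_camera_score(img, fp):
        # Normalized correlation between a residual and a fingerprint;
        # values near 1 suggest the same physical sensor.
        r = noise_residual(img).ravel()
        f = fp.ravel()
        r, f = r - r.mean(), f - f.mean()
        return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f)))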


While the technique does make fraudulent videos easier to make, I don't see this in and of itself as that fatal to public trust. Faking content to hurt someone has been around forever. We then learned not to trust gossip, not to trust uncorroborated sources, etc.

I hope it might actually help people be more skeptical of what they're reading and seeing. And hopefully it will help news media be more skeptical of their sources as well. If that doesn't happen then this will be a disaster.


> We then learned not to trust gossip, not to trust uncorroborated sources, etc.

Somewhat ironically, I feel a citation is needed here.


Makes me think of the phrase "pics or it didn't happen".

Granted it was a lot more common before photoshop-style software was everywhere.


Video by itself will shortly be a medium that no one believes as ground truth, just as the written word or a drawing is now. We should be able to construct better cameras with tamper-proof footage. But I'm not sure how, since with real-time editing we cannot trust even trivial things like secure timestamping. Secure video blockchain, anyone?
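
To make "secure video blockchain" concrete: at its most bare-bones it could just mean hashing each recorded segment together with the previous link, so deleting, reordering, or editing any segment breaks every link after it. A sketch in Python (names illustrative) - note this gives tamper evidence only, not proof that the frames depict reality:

    import hashlib, json, time

    def add_link(chain, segment: bytes) -> None:
        # Each link commits to the segment, a timestamp, and the prior link.
        record = {"ts": time.time(),
                  "segment_sha256": hashlib.sha256(segment).hexdigest(),
                  "prev": chain[-1]["link"] if chain else "0" * 64}
        record["link"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    def verify(chain, segments) -> bool:
        # Recompute every hash; any edit or reordering invalidates the rest.
        prev = "0" * 64
        for record, seg in zip(chain, segments):
            body = {k: record[k] for k in ("ts", "segment_sha256", "prev")}
            if (record["prev"] != prev
                    or record["segment_sha256"] != hashlib.sha256(seg).hexdigest()
                    or record["link"] != hashlib.sha256(
                        json.dumps(body, sort_keys=True).encode()).hexdigest()):
                return False
            prev = record["link"]
        return True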


> We then learned not to trust gossip, not to trust uncorroborated sources, etc.

If any more proof were needed (beyond Facebook, Fox, ads, government propaganda...) that this never happened, I think the Trump debacle should be final evidence that it is fantasy.


You overestimate people


This just gives more power to "authority". If subjects can't believe their own eyes anymore, who will tell them what to think? NBC would be happy to take the job...


How long will it take for this approach to be used to splice people into child pornography? The court of public opinion might sentence the victim to maiming or death.


Such things have been known to happen. Emmett Till. The Dutch de Witt brothers in 1672 are a notorious example. False testimony, witnesses, and evidence are all things.

http://tywkiwdbi.blogspot.com/2010/09/corpses-of-de-witt-bro...


The end game I see is that people won't trust photos or videos anymore. That politician picking up an escort and using racial slurs? Real video. But he'll say "oh, that's just AI, I would never do that" and now he has the plausible deniability.

The end result is more freedom and privacy because people won't trust "hard evidence" anymore. I think this might be a good thing.


As I have said for years, our current systems rely on the inefficiency of an attacker.

Progress in technology ruins that illusion.

Want to play some heads up poker or chess with someone? How sure are you they aren't getting instructions from a program through some means?

Don't have anything to hide? How about an AI to dig up plausible sounding parallel construction stories to convince a jury?

And in a few decades we're talking about flirting, humor, writing stories, producing any sort of content.

There will be a time when a lot of these false positives will still be accepted, until people just stop trusting EVERYTHING.


The propaganda potential will be interesting as well. My favorite recent example so far: https://www.youtube.com/watch?v=LYEY4x0_RDY

With a few obvious exceptions -- like one particular shot of Ivanka's face that crosses the line from realism all the way past comedy into horror -- some of these replacements are almost good enough to start fooling people. Voice synthesis is approaching the 'solved problem' point as well (e.g., https://www.youtube.com/watch?v=YfU_sWHT8mo ). Not there yet, but getting there.


Most obvious use-case to my mind is stock (or crypto) price manipulation. I wonder what would happen to the ETH price if Vitalik Buterin suddenly "announced" he has seen the light and is quitting ETH to work on DOGE :)


This is why we all should be skeptical of the media ( traditional and social ) as public skepticism is the only thing preventing mass hysteria. If the public were a tad bit more skeptical and didn't believe everything they read in news or social media, we would have avoided the wars of the past 15 years.

But every criticism of traditional media here gets flagged. Which is ridiculous for a "hacker" news site, when you think about it and the hacker ethos.


These days you don't even seem to need any evidence. An allegation seems to be enough to get you dethroned.


You can't update the state of the technology and assume that everything else will remain unchanged...


Deeper analysis by whom? Who cares about deeper analysis anymore?


> Want to discredit a politician? Here he is picking up a male escort and using racial slurs.

Worked so well in a recent election...

> These sorts of things could be used to start wars.

Wars have been started on much flimsier pretexts, unless you are talking about getting the other side so irate that they fire first.


And that's why mandating that everyone use their real, verified identity everywhere on the web is needed. You can still comment and post whatever anonymously, but it won't hold any weight!


That doesn't really make sense in this context; if someone is pretending to be you using your own face and voice, using real identities is kind of the point, isn't it?


Ummm, but if it's not you, the uploader, through a verified ID system (blockchain maybe), is held liable immediately. Also, by the time such a system is put in place, the public will be even more suspicious of believing what they see on the net as truth. Such is already happening with those 40 and below, in terms of finally waking up to all the fakery on the web. Just saw on FB a video where a girl crosses her eyes and they get stuck. The majority of commenters below 40 noted it's fake; those above 40 commented otherwise.

Either we institute an ID system or the web becomes a joke no one believes. The latter, I guess, wouldn't be a bad thing.


Well I disagree.

There are many ways to disprove such video, and if the technology exists, we already know that it exists, so it will make any video, real or not, much less relevant.

The tale of new technology bringing war and mayhem might make sense, but in reality it doesn't hold up. You just wish it would happen, because you might want some fiction that says "what if?", but it's just a wish. It's also why I dislike Black Mirror as a TV show. Dystopian science fiction is boring most of the time.


What a misplaced comment. Perhaps you would like to add the many ways the video could be disproven. Instead, you chose to ramble on about personal preference in sci-fi entertainment.



What's the big deal? Photoshop has already taught us not to trust photos, so we'll just have to learn to stop trusting videos now.

https://singularityhub.com/2016/05/13/new-digital-face-manip...


The deepfakes subreddit is remarkable - people without any prior background in machine learning or programming are asking tons of questions and learning all about deep learning. I've always thought applications-first was the best way to teach complicated material, this might be great evidence of that.


Porn has always been a factor in driving technology forward.

People who weren’t motivated before to learn machine learning or programming suddenly are.

This is no different!


Finally, people willing to publicly sexually humiliate others are being empowered to code. I hope the girls who sit across from them in IRL school are able to catch some of these creeps capturing their faces.


I can't wait to see what creepy thing they do next!


On the flipside, this gives anyone basic plausible deniability.

No baby, that homemade porn video on my phone wasn't me! I just used a program to stick my face on something I found on the internet.


Plausible deniability doesn't really work in real life though. Very few people are going to give you, or anyone accused of an indiscretion, the benefit of the doubt on the basis that a video might have been faked with AI. That is much less plausible than someone cheating and getting caught.

If you think anyone is going to accept that a video is fake, even if it is, then you're woefully naive.


That's true right now because video fakes are new and rare. Photoshop fakes have been around for years, and now when there's an apparent nude photo of someone famous, people generally assume it's fake unless there's a particular reason to believe it's real, such as a publicized hack.

On the other hand, video of an intimate partner may have enough behavioral cues so fakes aren't much of an issue, unless the attacker has started with video surveillance of the target.


Doubt could be created if you kept a video of yourself with Arnold Schwarzenegger's body in the same place as that video.


As someone with a generally cynical/pessimistic outlook I too, like blunte, want to see something positive in this.

Hopefully, since it can't be stopped, it becomes so routinely used against so many people that everyone stops caring any more if it's real or fake.

It's sad that our attitudes (if not behaviour) towards sex are still so fucked up and judgemental thanks to thousands of years of religious/Victorian/puritanical brainwashing and guilt-tripping.


The Western sexual revolution can be said to have happened little more than 50 years ago. Culture takes a lot of time to change.


There’s an episode of Downton Abbey where it goes the other way.


Yes, exactly.

> It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass.

If it's trivial to fake a porn video with anyone's face on it, then the burden of proof is on the harasser to demonstrate how this one's real.

Or you can take said video and put some famous people's face on it as a response.

Also, the source video probably existed before and was more or less publicly available, which is proof that the version with your face is the fake one.

(And if you're facing an enemy that has enough resources to shoot a specially-made porn video just to smite you, you have bigger problems to worry about).


> (And if you're facing an enemy that has enough resources to shoot a specially-made porn video just to smite you, you have bigger problems to worry about).

While there may be a barrier to applying the technology after the fact, shooting a homemade video is something that can be accomplished by anyone with a modern smart phone.

In fact, if you want your fake porn to look authentic (for purposes of blackmail or otherwise), this is how you would do it. Then it could be claimed that it was self shot and leaked to the internet via a cloud “hack” or similar.


In the scenario posited, it would depend a lot on a) the character and reputation of the denier and b) the gullibility of the person being lied to.

Of course, people who would both make such a video and also lie to their SO about it tend to seek out gullible easy marks who buy their BS. But sometimes such people get wise to it at some point.


> On the flipside, this gives anyone basic plausible deniability.

It's enough to say publicly "X molested me 20 years ago" to destroy a person. No proof or witness needed.


I've worked in the visual effects industry for many years and this is incredibly interesting. In vfx, to create a face replacement we would have to create a 3D model of a head and then texture it with photos/scans of the desired face. Then we would 3D-track this cgi head onto an actor in the shot and animate the face either by hand or using motion capture data from the stand-in actor's face. Then you'd light and render the cgi head and composite it into the shot.

This video shows the process: https://youtu.be/rsPq2qp_Z-E

And here's an article about it: https://www.fxguide.com/quicktakes/di4d-in-la/

It's pretty incredible that there's a completely different way to do a head swap. Of course, the vfx method gives the artist complete control over what the face/head does.


Outside of porn, I'm wondering if this could be used in other ways, e.g. government propaganda. Putting leaders in situations that make them look good or putting their opponents in compromising situations.


It not only can be, but it was: https://www.youtube.com/watch?v=ApG1XdI-Dd4 - there were a few months when JA didn't give any sign of life, and people on /r/wikileaks started asking if he was still alive; then this video was posted. It's fake.


No, that video is real. Those are morph cuts.

https://www.youtube.com/watch?v=J6wPUtKg-Ac


That's fascinating!

There was also a livestream in which he supposedly gave a "proof of life" by reading out a bitcoin block hash, among other things.

Do you think that one was faked too?


It could also be used as a convincing tool to dismiss any proof, even legit ones, as fabricated. More or less the equivalent of "Your honor, that picture (of a cop shooting an unarmed civilian) is clearly photoshopped!".

I'm excited about the technology though concerned about possible abuses.


In the U.S, cops are allowed to shoot unarmed civilians, entirely at their discretion. We consider it a privilege of the position, bestowed in exchange for the risk they face. Video doesn’t affect the verdicts even when it is admitted.


>I'm wondering if this could be used in other ways, e.g. government propaganda.

Here you go: (SFW)

https://imgur.com/gallery/BEVmQXo


Absolutely. I'd be surprised if the NSA and GCHQ, who we know already try to use porn and sexual scandals to discredit politicians and other "targets", aren't already using this or at least deep into researching it for their operations.

https://www.washington.edu/news/2017/07/11/lip-syncing-obama...

http://www.bbc.com/news/technology-25118156

https://www.belfasttelegraph.co.uk/news/uk/gchq-using-online...


Yeah, those diabolical Western intelligence agencies and their kompromat.


Yes. It's a historical fact that the FBI systematically slandered and harassed peaceful activists. I have no doubt that they (and other agencies) would do it again given the right circumstances; there's a high likelihood that they're doing it right now.

https://en.wikipedia.org/wiki/COINTELPRO


My well-intentioned friend, I don't need a Wikipedia article to learn about COINTELPRO nor could anyone wish more than I for a more open and democratic America. If you knew a bit more about me, we'd both have a good laugh that you posted that Wikipedia link for my "benefit".

My comment was not intended to absolve America. Parse it more carefully and you will see its meaning.

Because, yes, America's record is imperfect. At times, unsavory things have been done for the "right" reasons, and at other times for the wrong by those abusing power. Even when well-intentioned, championing certain ideals in a hostile world can be a messy business. We can debate the morality around those trade-offs but, in any case, we should all wish for America to live up to its ideals.

Yet, there's a tipping point at which you realize that there are some who are not interested in highlighting America's missteps for the purpose of perfecting it, but instead for the purpose of diminishing it and elevating other actors. Snowden and Greenwald come to mind. How is it that they so consistently focus on America's misdeeds while promoting the narrative of actors with far more dubious track records, wherein dissidents are not just smeared, but actively "disappeared" and more?

Today, America is being destroyed from within and without. It goes beyond a failing to live up to its ideals to a direct attack on the ideals themselves. So, I would simply urge caution. America is not perfect, but the alternative on-offer is far worse.


Extrapolating this means companies like Facebook, Apple, Google, and Amazon should, in the near future, have the technology and the data to be able to recreate almost anyone's voice and likeness in audio and video. Also, I guess fake news will be much more of a problem.


Maybe performers will start to license out their face for use in adult videos. You don't have to participate in the porn itself, but you'll get a tiny royalty from each view. It's an interesting way to monetize your appearance without having to be physically present. I can totally see people coming up with performer "mod packs".


Quick preview: u/deepfakes, https://www.gfycat.com/UnsungTotalAmericancrow (NSFW).

Very cool, it has a few problems with merging the face but the expression is very convincing. Obviously, it doesn't work very well for very different faces (https://imgur.com/gallery/OpD3RXC SFW).


This one is much better https://www.reddit.com/user/tensorfakes/posts/ (NSFW porn of course)


It's funny you pick this as an example; the very first thing I see is the cut line on her eyebrows, like some hideous plastic surgery gone wrong. Skin tone doesn't really match either. Perhaps this stuff is less visible if you're distracted by the rest of the scene.


Yeah, but merging the new face with the old picture is not the purpose of this tool. Rather, consider how the new face has a realistic expression, and how indeed the skin tone doesn't match perfectly.


The girl/guy sex vid... has her face been superimposed? If so, is that a celebrity's face?


Yes, yes, Scarlett Johansson.


There goes my afternoon.


The user in question is here: https://www.reddit.com/user/deepfakes/submitted/

You can see the examples of their work at the bottom of that list. It's very, very obviously not the actual actresses. It's impressive, but not "holy shit, could that really be..?" impressive - there are artefacts scattered all over the faces.


Who cares about the pornography aspect? Of significantly more concern is our societal reliance on video evidence in general. Ultimately there's no software solution here; adversarial networks will be capable of generating high quality simulations that even they cannot distinguish between.

I'm also struggling to think of any viable hardware solution that doesn't involve in camera black boxes with asymmetric encryption - which is a hack at best.


I wonder if this will lead to a trend of people going to see events live, more often. Given that seeing something occur live is going to end up as the only "real way" to trust something to be true. Scary to think that reality has become questionable in itself.


Exactly my thoughts, too. As other commenters have pointed out, we no longer trust photos as a source of truth, and need video instead. Maybe "verified" live streams are going to be the next trusted source...


Until AR catches on at least.


What does self-completion mean?



Pop goes the weasel, friendo


I just read the post and there are points where it flat out lies about what a source said.

From the article-

> There are studies that demonstrate that porn viewing desensitizes ones genitals and reduces white matter in the brain. https://www.wired.com/2014/06/is-it-really-true-that-watchin...

Then from the article they linked to in order to justify that claim it says the opposite-

> It's just as likely that men with less grey matter in their striatum are more attracted to porn, as opposed to porn causing that brain profile. The researchers know this. "It's not clear ... whether watching porn leads to brain changes or whether people born with certain brain types watch more porn,” Kühn told The Daily Telegraph


> I just read the post and there are points where it flat out lies about what a source said.

Thank you. I was just about to comment the same. But I suppose accurately quoting the Wired article didn't support their opinion, so they decided to make up their own conclusion in the Wired article and ignore the one actually written in the article.


For the New Year's day, Hacker News gives you porn links.


You should check out Gandhi doing Margaret Thatcher; it has something poetic.


I can think of a few non-porn applications for this.

It also seems like it wouldn't be a million miles from this to being able to generate the 3D armature of a person in a video, kind of like a Kinect in reverse (i.e. video file goes in, character armature comes out). Does anyone know of anything doing this? I want to be able to do motion capture from flat videos with AI building the armature.
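
Pose-estimation research is close to this already (CMU's OpenPose, for example). As a rough sketch of the video-in, joints-out shape, here's what it looks like with a library of that kind - MediaPipe in this case, with a hypothetical input file; turning per-frame landmarks into a rigged armature would take more work on top:

    import cv2
    import mediapipe as mp

    # The pose model yields 33 body landmarks per frame - a crude skeleton.
    pose = mp.solutions.pose.Pose(static_image_mode=False)

    cap = cv2.VideoCapture("input.mp4")  # hypothetical input file
    skeleton_frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            # Each landmark is a normalized (x, y) plus a rough depth z.
            skeleton_frames.append([(lm.x, lm.y, lm.z)
                                    for lm in result.pose_landmarks.landmark])
    cap.release()
    pose.close()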


The story here isn't porn. It's that we can no longer trust video evidence - of anything - in isolation.


Video itself is not enough information. We need a cryptographic way to capture and archive corroborating signals at the time video is recorded. Perhaps there is a verifiable means of determining the geographic location, background radiation and orientation of the cameras taking the video which would be nigh impossible to spoof.


the larger issue is that if it's this easy to create fake photorealism, video evidence is no longer valid.

not from the government.

not from amateur videographers.

not even from a video you took yourself-- think of it from everyone else's perspective.

but people believe their eyes.

we are witnessing the capstone of the national security state's propaganda strategy for the next few decades... when in doubt, construct a false reality.


celebfakes is the next Reddit hand grenade: the site has 'involuntary porn' as a category for reporting posts, yet simultaneously has a popular subreddit dedicated to involuntary porn.


Since Reddit admins have displayed no natural morality, they will wait until more press gets launched with bad PR for the site. Then they will selectively respond based on what their loudest commenters say.

It seems like they could establish some basic moral rules they prefer and then objectively apply them. But they choose not to for some reason.


For what it's worth, I vastly prefer my admins amoral. Morality is diverse enough that for almost any moral framework, they'd be doing things I'd strongly disagree with.


Consistency has never been one of Reddit's management strengths. They seem to have embraced the reactionary model of moderating the site.


Same goes for YouTube: during the "ElsaGate" scandal, one of the banned channels was in the top 100 site-wide. I.e., YouTube knew, but they didn't care until their PR took a hit.


Citation needed? What subreddits? If you mean the tiny deepfakes subreddit I don't think anything there is involuntary.

In case it isn't clear, this isn't leaked videos or celebrities doing actual porn. It's synthetic renders similar to their faces, based on publicly available material. The celebrities aren't actually involved in any way. It isn't even really photos of them. It's an abstract model.

But surely you understood that? And in that case I don't get what you're saying.


Celebfakes (the first word in the comment you're replying to) is the subreddit.

Crtasm makes the rest of the point, which neither of us should need to explain.


Please could you provide a photo of your face?


There are photos of my face on my self-hosted website linked in my profile. There aren't hundreds of photos though because I'm not a famous public figure who trades privacy for money. Bit of a difference. Good luck regardless and send me the results.


If you're not voluntarily allowing your face to appear in a video, it is by definition involuntary.


That's true. But it's not involuntary porn of them because it's not really them. It's easy to see how it is different than a leaked video of an actual person if you think about it.

Unlike in some authoritarian countries, where people are jailed for using the public images of public figures, in most of the world it's considered fair use or acceptable under parody. Especially if there's no money involved.

In any case it's not even the images themselves being redistributed. It's only features derived from public source material used to create derivative works.


WTF did I just read?



