Reddit bans ‘SFW’ deepfake community (unite.ai)
94 points by Hard_Space on June 13, 2022 | 128 comments

The sub was banned because users were asking for stuff. They didn't ban the users, they banned the subreddit.

If they enforced the rules consistently then by that logic I could get any subreddit closed down by asking for things that are against the rules.

Stated reason != real reason. A common method reddit uses is to first ban all the mods, reject any requests from other users to take over the sub, then ban it for being unmoderated. Attempts to recreate the community will result in another ban, since you are recreating a banned community. If they want a sub gone, they will generate a reason to remove it.

Exactly this...

I created a music sub on Reddit 3 years ago and ran it for about a year and a half, then made one cross-post to a relevant subreddit and got permanently banned with no explanation and no due process. Reddit still has the sub up with all of my (custom) content and comments. My sub was orphaned right when it was picking up steam. I decided to simply build out my own site for posts after that point. IG, TikTok, Facebook, etc. are all operating in the same monopolistic ways with content now.

Honestly, I think reddit was started as a crowdsourcing test that was really meant to build the platform's notoriety, and once it grew in dominance, they simply ditched the independent creators in order to run up their own profit and IPO.

I also think this is why reddit forces most people away from using links to external sites like YouTube, because they want all the traffic on their own site.

By uploading video and images to reddit's CDN, you have no idea of the amount of times items are being used. Tons of content is literally ripped off YouTube and TikTok and then re-hosted on Reddit's CDN and that leads to lower views for the original creators of the content. It's really corporate piracy that preys upon small creators. One day there may be a class action lawsuit, but I doubt it will result in anything fair for the lost time and damage done to content creators.

I don't really see why they need a reason, I thought they could just delete the sub?

Community trust and goodwill is the primary reason to not go around deleting subs without precedent established via TOS or past actions.

By that logic, I have seen hardcore misandry on many of the feminist forums. Many even talk about killing their partners or harming their body parts routinely. Even moderators respond positively to those comments sometimes. But hey, it doesn’t fit their narrative and there would be a shitstorm if they banned such forums. “Reddit bans popular feminist forum, paving a new way for misogyny”

My point is they only ban the ones that don’t fit their narrative or their political views, and they know they can get away with it.

The only thing reddit cares about is negative PR. If you generate enough negative press about that sub you can get it banned.

I feel like this is where our litigation-based society has gotten us.

Laws don't decide things anymore, outrage PR + a potential/real wave of lawsuits do.

If you think about it, that doesn't seem too bad until you realize a lot of the negative PR is manufactured by only taking part of the story and using it out of context. Sometimes it even approaches propaganda or conspiracy.

I thought /r/FemaleDatingStrategy was banned. It's still up though.

It was quarantined but not banned.

What's wrong with it?

The women there see men as porn addicted, revolting, abusive pigs. I think I'm understating it a bit. But you get the gist. It's the female incel.

It could basically be renamed "/r/HowToGoldDig" and still be accurate

And what's wrong with that? And why is that against the rules?

Of course, they're a private business. They don't have to have a reason to delete a subreddit, just the desire. The community can leave as well and form their own equivalent forum. You have the right to speak your mind, but no one can be forced to listen to it on their own website.

The problem I have with this logic is that many of these sites have become so big that they are effective monopolies. Alternative sites are sparsely populated and hard to have your voice heard on. It's a problem when the biggest forums for discussion on the internet take their cues on free speech from North Korea.

There are also incel subreddits and openly misogynist subs on reddit. It is just not true that they somehow don't exist.

The few that got closed regularly went out of their way to brigade other subs. But the rest of them are happily flourishing there.

"My point is they only ban the ones that don’t fit their narrative or their political views and they know they can get away with."

What evidence do you have for this accusation?

TwoXChromosomes or blackpeopletwitter are unapologetically bigoted subs which stay untouched.

Blackpeopletwitter is not bigoted, and definitely not unapologetically so.

It may come across that way to you, but black people talking about their actual experiences caused by non-black people, in a way that makes a non-black person uncomfortable, does not mean it’s bigoted.

Things might have changed since I was forced to unsub due to not being able to comment, because I refused to prove to a few subreddit mods that I am indeed a person of color.

Around that time, many if not all of the threads ended up being closed due to the dramatic arguments that formed when users posted political/racial commentary to the sub, which is to be expected. I believe the real problem was when users who portrayed themselves as POC felt their voices were drowned out by the downvotes and negative replies. Understandable.

So they decided to lock the sub down for anyone except those who accept a segregated bubble and are willing to prove their "blackness" in order to post an opinion.

That lockdown happened for a brief period of time, and was not something the BPT community wanted in the first place.

Requiring users to send pictures of themselves in order to tag them black / not black and restricting posters in specific threads to black people or nonblacks that pass a test is absolutely 100% bigoted. It may be bigotry some are open to accept, but it's bigotry nonetheless.

And even ignoring that, Reddit would never allow the reverse to happen.

How do you know?

And what about r/conservative?

Each sub has a wildly different number of posts, moderation results, reporting of violations, and moderation response times. Unless there is clear evidence of intent, the claim that Reddit makes decisions based on political bias is wrong.

> If they enforced the rules consistently then by that logic I could get any subreddit closed down by asking for things that are against the rules.

Some subs do get brigaded by users intentionally posting content that violates reddit TOS. The same happens on Discord. Some are trolls, others are intentionally malicious so they can report things. If you have an inactive mod team in either community, yeah your sub will go poof.

> brigaded by users

Or Reddit employees.

Do you have evidence of this?

Only if that subreddit also allows such requests to stay up.

Reddit closes subreddits for lack of active moderation all the time.

That's not what the GP is getting at. When Reddit management wants to ban a subreddit, they cite all sorts of transgressions and lapses in moderation policy that can easily and routinely be seen on other subreddits - often large ones.

I see all sorts of thinly veiled calls for violence on political subreddits, and even on seemingly mundane ones like r/Texas whenever Ted Cruz comes up. Apparently, calls to get out the guillotine are highly contextual.

To be fair, the stuff going across r/conservative is quite a bit worse than r/politics. Which I expected, of course, because r/politics is a generic discussion subreddit and not intentionally one-sided. But Reddit leaves both alone.

> To be fair, the stuff going across r/conservative is quite a bit worse than r/politics. Which I expected, of course, because r/politics is a generic discussion subreddit and not intentionally one-sided. But Reddit leaves both alone.

In my experience r/politics is the most biased and lopsided major subreddit on the entire platform. People criticize r/conservative but I genuinely do not see the same level of wanton hyperbole, demonization of the other side, and calls for revolution in r/conservative that I routinely see in the comments of major posts from r/politics.

Eh, I think moderating the larger political sub is a much harder job. Idk how much time you have spent on r/conservative, but the comments can get pretty aggressive there too; I just imagine there are orders of magnitude fewer of them, so the mods generally find and remove the ones that call for trans people to be forcibly hospitalized or harmed, for example, or that say such and such a politician should be shot. You can still find comments saying parents who have gay friends should have their children removed from their custody or whatever, though; they have to be slightly more oblique than what stays up much longer on r/politics.

Try looking at r/conservative right now. It's fucking batty mate.

I check occasionally; most of the posts there are right-wing nuts. Posts 100% without merit will get thousands of upvotes and lots of comments from what are obviously Russian/Chinese/whoever bots.

I sense that your comment is going to attract a lot of attention. I want to point out that r/conservative is worse than r/politics in a way that both conservatives and liberals would agree is bad.

Perhaps 80% of r/conservative threads can only be commented on by flaired users. Certainly all the threads one would actually want to contribute to at any rate. How do you get a flair? You have to post conservative-sounding things elsewhere or in the non-flair-only threads. Only once you've established that you're not going to deviate from the groupthink will you be allowed to participate in the community.

Conversely, r/politics is a free-for-all. Granted, there is clear downvote brigading on alternative opinions but they do not get taken down ever. HN is going to read your "intentionally one-sided" quip and not understand how correct you truly are.

> Granted, there is clear downvote brigading on alternative opinions but they do not get taken down ever.

So /r/politics is much better because instead of taking down "alternative opinions" they just downvote them? I am not convinced. Downvoting alternative opinions not only effectively hides them from view for most users (behind a "this comment has too many downvotes" barrier), but the users who do stumble upon those massively downvoted "alternative opinion" comments will get a nice dopamine rush from seeing their own opinions further reinforced, because the downvotes must mean this opinion is wrong.

Both are insidious and both behaviors massively contribute to group think bubbles. I don't think the pattern of behavior of one is massively better and more healthy than the other.

'Sort by Controversial' solves the sort of problem you're describing. I know a lot of users who sort it that way by default now.

> So /r/politics is much better because instead of taking down "alternative opinions" they just downvote it? I am not convinced.

Better than moderators doing it. Post something that deviates from what the moderators of r/conservative consider acceptable conservatism and they just ban you. If the community legitimately downvotes you, then at least you're facing garden variety groupthink instead of "my way or the highway" of a couple mods enforcing what is acceptable discussion.

This is a pattern. Trump, at his coyly named "Truth Social", bans more people than Twitter ever did. And not for violent threats, either, but for simply disagreeing with his narrative.

Since many might not know: You can go to r/conservative any day of the week, and see the most batshit insane mind melting garbage you could possibly ask for. Eg, right now the top posts are:

1. Cheering the liberal tears that must surely ensue, as 5th graders are taught how to shoot guns in class.

2. Fabricated Biden attack. Just, invented, like as if it was a funny thing to do, or as if there isn't real stuff to criticise him on.

3, 4. Pictures of tweets from Republicans, both blaming Biden/Dems for systemic issues (problems caused and maintained by both sides).

5. Outrage that Sunday morning shows aren't talking about the man on serious meds, who called Emergency Services saying he was going to kill Kavanaugh (for trying to loosen gun control regulations).

... Fucking. Crazy. That's not politics, it's not discussion; it's mental assault on morons. And you are not allowed to post or comment there without affirming your conservatism.

They are constantly raising the bar on what "lack of active moderation" means. Also as the site gets bigger moderating even a small niche subreddit can become a part time job and Reddit does pay moderators of some subs, so it's being used to editorialize more now than ever.

> They are constantly raising the bar on what "lack of active moderation" means.

If they are I haven’t seen it.

Plenty of subs out there with zero moderation… sadly.

Reddit historically only cares about a sub's moderation when it generates enough negative attention to become a problem for Reddit (the company, as opposed to Reddit the “community”).

They’ve typically been quite happy to let quite toxic and otherwise rule-breaking subreddits exist so long as they don’t cause (too many) problems.

They're constantly raising the bar on what "lack of active moderation" means as it pertains to certain specific subreddits very selectively.

The Magic: The Gathering subreddit has to moderate out the word "proxy" or risk getting banned. Seems plausible that there are just a lot of specific rules you gotta follow to be a subreddit.

Why is that? Was there an announcement?

I can track down the actual post if you want. I don't recall there being a specific announcement, but the logic goes something like this: most people who use "proxy" are referring to fake Magic cards printed to look like real ones, and therefore talking about proxies, how to make copies, etc. will lead to people cheating or stealing.

For what it's worth, I used the term "proxy" for a decade almost exclusively to mean a cheap paper copy of a Magic card (or even just writing the word on a sheet of paper and sticking it in a sleeve over a land), and I find this restriction wrong-headed.

I don't know of any social media site that has been able to "consistently" enforce moderation at scale. When you have millions of humans they will find every edge case and workaround to any specific legalese you come up with.

Courts struggle with it too: https://www.youtube.com/watch?v=qhWCk2f2alI

Smaller sites like HN do better compared to twitter or reddit, because it's easier. But ultimately it all depends on what kind of communities the company wants to nurture.

While it would be nice if social media companies actually told the truth about moderation rather than pretending there are some content-neutral principles they're applying, I also think a lot of people would be better off if they accepted that social media is what it is, and moved on if they don't like it.

None of them can moderate because they don't want to sacrifice profit... That's the explanation. They are also too large, but moderation and proper community support were once requirements for communities and now they're gone... It's a really selfish and lazy way of just collecting profit, and it also disrupts the ability of vital information, honest people, and key commercial entities to function on the platform.

This behavior has been occurring on Reddit for years, and social media companies also work hard to suppress dissent and complaints from public view... It's really creating the idea that hostile, anti-competitive, and abusive behavior towards user bases is the new norm... Especially when they are paying most users nothing to contribute, it creates a grim future for social media overall.

It's just far better to create a web site and pour that effort and time into it instead of working in an abusive and exclusionary community like that.

> If they enforced the rules consistently then by that logic I could get any subreddit closed down by asking for things that are against the rules.

No, because the rules specifically mention soliciting. As far as I can tell, the only other time this is mentioned is for "soliciting or facilitating illegal or prohibited transactions".

>If they enforced the rules consistently then by that logic I could get any subreddit closed down by asking for things that are against the rules.

They just change the rules as needed. Look at /r/blackpeopletwitter, they have a rule that you can only post in "country club threads" if you have sent a picture of your skin color in to the mods to verify you aren't white, since they don't want white people posting in there.

When people pointed out the obvious hypocrisy of Reddit endorsing racial discrimination by one of its largest subs, they just re-wrote the rules to make it kosher.


I’ve seen folks try that.

Didn’t work.

Another “politically oriented sub” had users who got pissed that they were banned for spamming a sub that I frequented.

They made new accounts (they did that a lot…) and posted a couple normal posts in our sub and then again spammed but this time pretending to be organizing a brigade vs … their sub.

Admins didn’t act when they complained because it was all pretty obvious and we moderated the brigade posts.

In my experience it’s not just “hey there are a few posts” that triggers these kinds of actions.

> I’ve seen folks try that.

Okay, but this goes back to the condition:

> If they enforced the rules consistently

Which reddit does not - and you are right:

> it’s not just “hey there are a few posts” that triggers these kinds of actions.

There is an agenda, and then motivated reasoning to find justification for the agenda.

What I described wasn’t an “agenda” it was just recognizing that the local moderators were acting and the content wasn’t actually from the local users.

That’s just common sense.

> I could get any subreddit closed down by asking for things that are against the rules

Pretty sure that's how every subreddit that's ever been banned got banned.

ah, the heckler's veto. I had only heard of that happening on college campuses recently. Never expected it to happen on someplace as big and diverse as Reddit

>Even though many such posts skirt perilously close to copyright-driven bans, the continuing survival of ‘IP-appropriated’ non-porn deepfake video posts by major YouTube deepfakers such as Ctrl-Shift-Face suggest a broad studio and rights-holder tolerance towards such activity, at least for the time being.

I would not be surprised if this isn't so much tolerance as much as it is ignorance. Copyright enforcement is hilariously expensive, and the only reason why any litigation even happens is that larger outfits are also hilariously petty. The mantra of copyright maximalism is that if anyone is even remotely touching "your work", you storm in and demand whatever money you can purely for the sake of keeping people off your "property".

Trust me, once they run out of nominally-SFW-but-actually-NSFW communities to ban, you'll see movie studios cotton onto what they consider to be theft[0] and start prosecuting something that doesn't really harm them in any way.

[0] Assume, for the sake of this discussion, that if we as a people decide someone gets a government monopoly on something, then depriving them of their monopoly is stealing from them.

Yes, I hate this logic too.

One day, a nude photo will be considered not much different than a face photo and that day a lot of things in the world will be much easier

The book The Light of Other Days has a good take on that: everyone can see everything everywhere with wormhole cameras, which "effectively destroy all secrecy and privacy". So prudishness in the generations that grow up with wormcams is non-existent (not dissimilar to kids and mobile phones...).


Whee! Non-invasive anal-pro...., err colonoscopies.

I'm imagining a day when telephone scammers call up a loved one with AI of your voice.

It's already a well-established type of scam in Eastern Europe. Scammers call old people late in the evening. A Distressed Female Voice is in pain, non-specific, calling out "mama, I'm in pain, help, I'm hurt", then the call cuts away to an Authoritative Male Voice who tells the old person their daughter was in an accident, needs help, needs money, etc.

People ask "was that photoshopped?" all the time.

Kids will be video calling each other as SpongeBob and Richard Nixon soon.

polote is right. In a decade, anyone will be able to make a nude of anyone else with their phone with just one photo. Or any other funny or stupid video or image.

The tech is real and is coming fast. We can either legislate it or stop worrying and get used to it. I prefer the latter approach.

This is going to be the biggest artistic renaissance of all time. Everyone has something to say, but most lack the years of practiced skill to say it. Not anymore.

What do we do when someone creates a deep fake of Putin saying he will nuke Ukraine if they do not surrender within 3 days?

You could easily fake a document and post it on Twitter. You could easily fake a photo of Putin pointing a gun at Ukrainian children. What's so different about a video?

The same thing we already do with photoshops and video edits.

(Upvote or downvote.)

Do you think that banning a subreddit will stop that from happening?

If Putin wants to threaten someone, it's on him to make the other party believe the threat. Hopefully Putin has more trustworthy methods of communication than posting videos on Twitter, like having his ambassador deliver the message in person, or at least confirm it.

When this doesn't happen, the recipient can be assured that the threat was either pure PR or fake. In either case it can be ignored.

Nudity itself is not the concern; impersonation/fraud is.

We're in the middle of a moral panic over trans people at the moment; I think there's a significant community of people who want the world to be much less sexually tolerant.

, I say to the police officer as he shoves me into the back of his car and tells me to put my pants back on

Well, until then, nothing stops you from generating extra income for the ladies from the senior home by turning them into Victoria's Secret models recording mic-licking videos to monetize.

I thought nuclear weapons were an outlier, but it seems like humanity is going to have to figure out a more general way to handle technology that exists but is more harmful than not.

Are nuclear weapons more harmful than not? They haven't harmed many people lately, and have been a factor in preventing large-scale wars.

They are a factor in preventing help to a country that is the victim of genocide right now. If Russia did not have nuclear weapons and did not threaten to use them, there might be more activity to stop the genocide in the occupied Ukrainian territories.

True, but on the other hand the USSR might have gone to war with Nato at large back in the day.

What evidence is there that deepfakes are harmful and not just distasteful?

Generating offense and disgust is not necessarily inherently harmful.

Please tell me that you don't think deepfakes are only ever going to be distasteful. Once this becomes more commonplace, you're going to see it applied to more personal connections, e.g. deepfaking pornography of friends. While already dark, it doesn't take long for this to essentially become inventing blackmail material. Once the quality is high enough, there's a ton of potential for deepfakes to be extremely damaging to one's life. If you don't have the social status to prove to a community that it's a fake, you can be significantly impacted.

I would expect the opposite, that once deepfakes become commonplace the ability to blackmail anyone with pornographic material of them vanishes. After all they can claim it's a deepfake, and everyone will find that plausible.

Photos went through the same thing: they used to be labor-intensive to fake and were generally accepted at face value; now that everyone can easily manipulate them with Photoshop, people are much more suspicious. That's how we got to people using video as proof instead of simple images. Now faces in videos will lose this trust, and people will find more trustworthy signals.

> Now faces in videos will lose this trust


> people will find more trustworthy signals.

I would really like to believe this but the world is clearly experiencing a massive crisis of trust because we have failed to find reliable trustworthy signals now that broadcast Internet communication makes it so easy to create and spread nonsense.

> the world is clearly experiencing a massive crisis of trust because we have failed to find reliable trustworthy signals now that broadcast Internet communication makes it so easy to create and spread nonsense.


I think you could tell a couple of stories:

- "It used to be that media was used to broadcast journalism to the public, but now the internet allows lies to be spread faster than they can be caught".

- "It used to be that what the public found out was gatekept by a particular set of people, but now the internet allows the public unfiltered access to the truth".

I think it's closer to:

The public used to have a limited set of narratives they had access to because creating and broadcasting them was expensive. That naturally limited the set of narratives available to those who had some level of power and wealth. The upside, if you feel that any level of power or wealth is ever earned, is that those broadcasting have presumably done at least something of value to society to partially earn the right to broadcast. The downside is that much of it may be unearned, or the value they provided may have only been to the elite few at the expense of the many.

Today, narratives are so cheap to produce and broadcast that the public can choose whichever one suits their whim or preconceived notions.

It's not entirely clear to me which of these scenarios is better. They both clearly have deep structural flaws.

The trouble is, I think it's painfully clear that the incentives from advertising and social media have led online journalism to a place where speed takes priority over quality.

Trust doesn't really work like that in practice though, photos are still typically assumed to be genuine to say nothing of the context in which they're presented. The war in UA has a host of examples, genuine videos and images presented with various (made up) contexts to show anything the poster wants them to show. Add in modification (cropping, video cuts, as well as photoshop) and it gets even more complicated. Unless deepfaked porn is trivial to make by anyone, easily shared, wildly popular, and almost always perfect, I really don't think that "it's a deepfake" will work as an excuse. And that assumes that you'd be able to reach people to tell them that.

People believe what mass media tells them about Ukraine. That is, it's not that they're believing random pictures, it's that they're believing things that CNN (or, if they're predisposed to be more sympathetic to Russia, Russia-sympathetic media outlets) tells them are true, even if CNN has no good reason to think that.

If we wait until society, by default, assumes that photos are fake, then it is way too late. Being able to trust what you see is fundamental to many basic functions of society.

This is not how humans work. We can see from 2020's politics that people believe all kinds of crazy stuff simply because a youtuber said so, to the point of committing acts of terrorism. https://www.npr.org/sections/thetwo-way/2017/06/22/533941689...

A low-trust society is a significantly less productive and safe place to be. We don't want to go back there.

I'll be the optimistic one (famous last words) and say that this apocalyptic scenario won't happen.

I am old enough to remember the times when "photoshopped" became a verb, and yet I don't remember a "Photoshop epidemic" where people blackmailed other people with production-quality fake pictures. I mean sure, it happened, but not at a major scale.

If the past is any indication we will indeed have deepfakes of personal connections, but their quality will rank from "bad" to "okay", and that will be it.

I had to deal with two separate fake Facebook accounts that used photoshopped images of me during HS (>10 years ago). The fake accounts both had way, way more friends than me, posted more frequently, and were utterly humiliating and embarrassing. Throughout high school I had a bad reputation that preceded me; it made making friends extremely difficult, and on top of a few other confounding factors made my HS experience so bad I basically gave up. I never found out who was behind the accounts.

I remember at the time this was a pretty common thing, in high schools, so perhaps you just managed to avoid it.

Eh, as the technology becomes better and more widespread, the likely result is that people take compromising pictures less seriously, because everyone knows they're more likely to be deepfakes than real. It will be ubiquitous. People will be deepfaking each other constantly for any number of reasons—jokes, porn, pranks, etc.

All the consternation about deepfakes ignores this higher-order effect.

Your opinion is ignorant in the most literal sense of the word.

Throughout human history we have used the effort and cost of communicating in a given medium as an indicator of the legitimacy of what is contained therein. At various points the bar to entry for disseminating information using said mediums has dropped precipitously, causing relative chaos as we acclimated to the new reality. Oftentimes institutions or groups that had power, and were using the difficulty of disseminating information to maintain that power, were left with less when the dust settled.

Right now we're in the middle of one of those precipitous drops. From realistic special effects and green screens to celebrity porn, modern computer tech is enabling high-production-value content with incredibly minimal expenditure compared to what would have been required in the past. That's gonna shake things up. Maybe it'll be a nothing-burger like photography. Maybe we'll have a century of religious wars like the printing press.

Society will be fine in the long term.

http://deepnude.cc we are already there.

Not really.

First of all, and most relevant to the discussion, that website handles single pictures. They don't do videos, which means I can pedantically classify the result as a "photoshop" instead of a "deepfake".

As for the results themselves: I uploaded a couple pictures of me. I can't see the full result because I am not a paying user, but even in the blurry preview I can see that it's a fake. One of them even fused my clothes and my skin, which I guess could be a turn-on for those who are into Lovecraftian horrors.

I won't deny that it's technically interesting, but I wouldn't start a moral panic over its results.

Yeah, but you are also not a horny 12 year old. It's bad enough.

you people... just suck the romance out of everything.

Whatever happened to mystery, and desire, and letting your imagination run wild trying to fill in the blanks?

It's like the computer is being used to narrow our horizons, not broaden them.

> Once the quality is high enough, there's a ton of potential for deepfakes to be extremely damaging to one's life.

Alternatively, once the quality becomes good enough, "it was a deepfake" will be an entirely reasonable defense. Not just in actual courts (which is what I fear we will be heading towards, that even horrible crimes such as raping children will go unpunished or underpunished in countries where possession even of drawn/animated material is a crime), but also in the "court of public opinion".

Today you could do this with a copy of Photoshop, but in practice this isn't really a concern for most people. What makes you think it'll be different with deepfakes?

Because you’ll need even fewer skills than it takes to do a passable photoshop?

People will believe the fakes, or not, based on what they want to believe. Once the quality is good enough, you can argue that anything is true and the other side can say it's a deepfake.

Well, here's just the most recent and egregious example.

> The video, which shows a rendering of the Ukrainian president appearing to tell his soldiers to lay down their arms and surrender the fight against Russia, is a so-called deepfake


There are many earlier (questionably "deep fake") examples: https://www.washingtonpost.com/technology/2020/08/03/nancy-p...

> The video, which shows a rendering of the Ukrainian president appearing to tell his soldiers to lay down their arms and surrender the fight against Russia, is a so-called deepfake

Did the Ukrainian soldiers do this following the deepfake video? As far as I understand it, most of them are still fighting the Russians. What damage or harm occurred here?

I am pretty sure modern military command and control does not rely very heavily on "the video looked like it was the president saying it".

One could scrape Facebook or some other social media site for faces, deepfake them onto porn, and then blackmail[0] the users into paying money to avoid the release of "porn starring them".

We know from the economics of scams that any social engineering attack that is cheap enough to do and can be made plausibly urgent enough to get people to not question it can be hugely profitable. Deepfake blackmail has the potential to check off both of these boxes:

1. Porn adds plausible urgency by defaming the victim. People will really, really do anything to avoid being associated with the porn business, as can be seen by how many copyright trolls specifically troll with porn as opposed to other copyrighted works.

2. It is getting cheaper to train deepfake models and public social media photography provides a large repository of images that could be used to generate face swaps.

Right now I haven't heard of this kind of blackmail actually happening yet. This is primarily because deepfakes still require manual labor to cut out large numbers of faces, and the training process takes a while.

However, these are both surmountable problems.

Ransomware has been a technically possible form of malware for some time, but the problem was getting money back to the thief in a way that was difficult to trace. The kinds of scams that target technically illiterate users use money mules, but that relies on the unwillingness of the criminal justice system to prosecute the kinds of crimes that hurt the weakest in society. Bitcoin made ransomware a lot less risky of a crime, to the point where a lot of criminals targeted large businesses that would ordinarily be protected by the law.

The only thing keeping it from being profitable to deepfake all of Facebook into a bunch of porn is that it would be expensive to do so. However, that's purely a technical problem, and the thing we know about technology is that it almost always gets cheaper over time.

[0] Tom Scott was actually going to do a speculative fiction video about this exact hypothetical... before the whole deepfake porn thing took off and spooked him out of it.

> One could scrape Facebook or some other social media site for faces, deepfake them onto porn, and then blackmail[0] the users into paying money to avoid the release of "porn starring them".

How about doing something about

* blackmailers

* people giving in to blackmail

* people trusting random videos on the internet

rather than trying to put the genie back into the bottle?

But offense and disgust is used as a justification for countless restrictions in society.

- Women can't go topless in most places.

- Porn has age limits

- In some locations profanity is restricted in public

And these are rules from the government that clearly violate the first amendment but are accepted because of the level of offense and disgust they provoke.

Are you saying that legislation that prevents children from accessing porn, or legislation that prevents children from creating and distributing pornography of themselves, violates the first amendment?

More of a post-truth world

remember the Maine!

I mean, I'm no rationalism absolutist, but deepfake technology is in essence sophisticated lying/fraud. Kinda hard for me to see how it's not harmful. Sure, I could see how it could be used creatively, but outside of a few edge cases, deepfakes are a weapons technology. It's easy to use for bad ends: mis/disinformation, smear campaigns, ultimately leading to an erosion of public trust in the information people use to stay informed. I think an informed populace is a good thing, yeah?

Meanwhile the number of ethical uses is relatively quite small. Maybe there is a novel use of this tech that will actually be useful, but I can't see how being able to easily impersonate someone else will be good. Maybe someone here has some ideas.

>> I'm no rationalism absolutist, but deepfake technology is in essence sophisticated lying/fraud.

Thank you for clarifying that. Even in the entertainment world (resurrecting dead actors) it's still a fake (a.k.a. fraud) just that we don't depend on it in a way with real consequences. Deepfakes are inherently fraud/deception.

>> What evidence is there that deepfakes are harmful and not just distasteful?

Further down the thread there is an example of a political deepfake used to spread disinformation. I don't really care about distaste, it's authenticity (the truth) that is under attack by these. On the upside, they can offer a bit of entertainment.

It's not even a problem of regulation since the bad guys will always have deepfakes. The cat is out of the bag already.

The Amish figured that one out a long time ago. Everybody else is a beta-tester.

Important note: the largest "SFW" deepfake community, r/SFWdeepfakes, was not banned; rather, a smaller competing subreddit was, one which doesn't have a recent archive on the popular archive sites.

This is really interesting. I saw this post on hckrnews and instantly had a flashback to that moment my account was banned from reddit.

I had actually created the SFWdeepfakes subreddit, because I thought those edits of Nicolas Cage on everything were absolutely hilarious.

Guess what? Suspended for "posting involuntary pornography". Tried to get my account back, but reddit was "experiencing higher than usual support volume" and never got back to me. Oh well.

If you'd asked me a few years ago, I'd have said deepfakes aren't a problem - that lack of media literacy is the problem. We've had photoshop and staged photos and impersonators and satirical news for years, after all. And although a tiny fraction of the population falls for hoaxes like bigfoot and astrology and homeopathy, 99% of people manage to sift through this stuff.

What I learned from the pandemic, though, is that people are much worse at telling fact from fiction than I expected.

It's given me a greater understanding of the people who are very worried about deepfakes - if you think it's more like 10%-30% of the population that are very easily fooled, new technology for fakery is an ominous thing to see.

Those are rookie numbers. I think the % of people who can be easily fooled is almost 100%. There are fewer and fewer people who live their lives doubting what they see unless they’re already attached to the opposite of that idea.

Most people are not trained to detect deep fakes and certainly most of us here still would fall for a deepfake unless we’re specifically looking for it.

The key here is to pay less attention to the “evidence” and instead ask where it came from. If there isn’t a clear story by trustworthy people about how it came to be, you will need to at least suspect it’s fake.

Unfortunately that means that random Internet commenters aren’t going to be trusted.

Hollywood-level production has been able to do deep fakes for like 100 years. Fake videos are nothing new.

The main difference is that there might be cheap tailored fakes for noname people like me for small scams.

You’re probably replying to the wrong person. I’m just saying that everyone can fall for fake stuff.

It's not just the people who are easily fooled, even the people who aren't fooled end up harmed. First we learned not to trust images in the days of Photoshop:



Now with deep fakes, we learn not to trust video:


But the end result is also that we no longer trust true information either. We have become cynical, detached, disorganized. Each of us ends up somewhat arbitrarily deciding what to believe out of the sea of indistinguishable bits firehosed at us by the Internet; we no longer have any shared experience or consensus to build a functioning society on.

Propaganda does not require you to believe it. It's equally effective if you end up believing nothing.

Distrust of any information from an institutional source is a good rule to live by imo and would make the world a better place

Distrust of any information from an institutional source would put all of us back to being hunter gatherers and virtually all of us would immediately die from starvation or poisoning from eating the wrong plants.

OK, and where did they go?

think about the children!

Everybody interested in this topic should read Yishan's (Reddit CEO in the early 2010s) thread on the story around /r/Jailbait and content moderation.

edit: https://threadreaderapp.com/thread/1519239024379973632.html

Do you have a link to his thread?

Can't believe I made a comment to post a url, and forgot to post the url :0


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact