If they enforced the rules consistently then by that logic I could get any subreddit closed down by asking for things that are against the rules.
I created a music sub on Reddit 3 years ago and ran it for about a year and a half. Then I made one cross-post to a relevant subreddit and got permanently banned with no explanation and no due process. Reddit still has the sub up with all of my (custom) content and comments. My sub was orphaned right when it was picking up steam. I decided to simply build out my own site for posts after that point. IG, TikTok, Facebook, etc., are all operating in the same monopolistic ways with content now.
Honestly, I think Reddit was started as a crowdsourcing test that was really meant to build the platform's notoriety, and once it grew in dominance, they simply ditched the independent creators in order to chase their own profits and run up to an IPO.
I also think this is why reddit forces most people away from using links to external sites like YouTube, because they want all the traffic on their own site.
By uploading video and images to Reddit's CDN, you have no idea how many times items are being used. Tons of content is literally ripped off YouTube and TikTok and then re-hosted on Reddit's CDN, and that leads to lower views for the original creators of the content. It's really corporate piracy that preys upon small creators. One day there may be a class action lawsuit, but I doubt it will result in anything fair for the lost time and damage done to content creators.
My point is they only ban the ones that don’t fit their narrative or their political views, and they know they can get away with it.
Laws don't decide things anymore, outrage PR + a potential/real wave of lawsuits do.
If you think about it, that doesn't seem too bad until you realize a lot of the negative PR is manufactured by taking only part of the story and using it out of context. Sometimes it even approaches propaganda or conspiracy.
The few that got closed regularly went out of their way to brigade other subs. But the rest of them are happily flourishing there.
What evidence do you have for this accusation?
It may come across that way to you, but black people talking about their actual experiences caused by non-black people in a way that makes a non-black uncomfortable does not mean it’s bigoted.
Around that time, many if not all of the threads ended up being closed due to the dramatic arguments that formed when users posted political/racial commentary to the sub, which is to be expected. I believe the real problem was that users who identified as POC felt their voices were drowned out by the downvotes and negative replies. Understandable.
So they decided to lock the sub down to anyone except those who accept a segregated bubble and are willing to prove their "blackness" in order to post an opinion.
And even ignoring that, Reddit would never allow the reverse to happen.
Each sub has a wildly different number of posts, moderation results, reports of violations, and moderation response times. Unless there is clear evidence of intent, the claim that Reddit makes decisions based on political bias is wrong.
Some subs do get brigaded by users intentionally posting content that violates reddit TOS. The same happens on Discord. Some are trolls, others are intentionally malicious so they can report things. If you have an inactive mod team in either community, yeah your sub will go poof.
Or Reddit employees.
Reddit closes subreddits for lack of active moderation all the time.
I see all sorts of thinly veiled calls for violence on political subreddits, and even on seemingly mundane ones like r/Texas whenever Ted Cruz comes up. Apparently, calls to get out the guillotine are highly contextual.
In my experience r/politics is the most biased and lopsided major subreddit on the entire platform. People criticize r/conservative but I genuinely do not see the same level of wanton hyperbole, demonization of the other side, and calls for revolution in r/conservative that I routinely see in the comments of major posts from r/politics.
Perhaps 80% of r/conservative threads can only be commented on by flaired users. Certainly all the threads one would actually want to contribute to at any rate. How do you get a flair? You have to post conservative-sounding things elsewhere or in the non-flair-only threads. Only once you've established that you're not going to deviate from the groupthink will you be allowed to participate in the community.
Conversely, r/politics is a free-for-all. Granted, there is clear downvote brigading of alternative opinions, but those opinions never get taken down. HN is going to read your "intentionally one-sided" quip and not understand how correct you truly are.
So /r/politics is much better because instead of taking down "alternative opinions" they just downvote them? I am not convinced. Downvoting alternative opinions not only effectively hides them from view for most users (behind a "this comment has too many downvotes" barrier), but the users who do stumble upon those massively downvoted "alternative opinion" comments get a nice dopamine rush from seeing their own opinions further reinforced, because the downvotes must mean the opinion is wrong.
Both are insidious and both behaviors massively contribute to group think bubbles. I don't think the pattern of behavior of one is massively better and more healthy than the other.
Better than moderators doing it. Post something that deviates from what the moderators of r/conservative consider acceptable conservatism and they just ban you. If the community legitimately downvotes you, then at least you're facing garden variety groupthink instead of "my way or the highway" of a couple mods enforcing what is acceptable discussion.
This is a pattern. Trump, at his coyly named "Truth Social" bans more people than Twitter ever did. And not for violent threats, either, but for simply disagreeing with his narrative.
1. Cheering the liberal tears that must surely ensue, as 5th graders are taught how to shoot guns in class.
2. Fabricated Biden attack. Just, invented, like as if it was a funny thing to do, or as if there isn't real stuff to criticise him on.
3, 4. Pictures of tweets from Republicans, both blaming Biden/Dems for systemic issues (problems caused and maintained by both sides).
5. Outrage that Sunday morning shows aren't talking about the man on serious meds, who called Emergency Services saying he was going to kill Kavanaugh (for trying to loosen gun control regulations).
... Fucking. Crazy. That's not politics, it's not discussion; it's mental assault on morons. And you are not allowed to post or comment there without affirming your conservatism.
If they are I haven’t seen it.
Plenty of subs out there with zero moderation… sadly.
They’ve typically been quite happy to let quite toxic and otherwise rule-breaking subreddits exist so long as they don’t cause (too many) problems.
For what it's worth, I used the term proxy for almost a decade exclusively to mean a cheap paper copy of a Magic card (or even just writing the word on a sheet of paper and sticking it in a sleeve over a land), and I find this restriction wrong-headed.
Courts struggle with it too: https://www.youtube.com/watch?v=qhWCk2f2alI
Smaller sites like HN do better compared to twitter or reddit, because it's easier. But ultimately it all depends on what kind of communities the company wants to nurture.
While it would be nice if social media companies actually told the truth about moderation rather than pretending there are some content-neutral principles they're applying, I also think a lot of people would be better off if they accepted that social media is what it is and moved on if they don't like it.
This behavior has been occurring on Reddit for years, and social media companies also work hard to suppress dissent and complaints from public view... It's really establishing the idea that hostile, anti-competitive, and abusive behavior towards user bases is the new norm... Especially when most users contribute without being paid, it creates a grim future for social media overall.
It's just far better to create a web site and pour that effort and time into it instead of working in an abusive and exclusionary community like that.
No, because the rules specifically mention soliciting. As far as I can tell, the only other time this is mentioned is for "soliciting or facilitating illegal or prohibited transactions".
They just change the rules as needed. Look at /r/blackpeopletwitter, they have a rule that you can only post in "country club threads" if you have sent a picture of your skin color in to the mods to verify you aren't white, since they don't want white people posting in there.
When people pointed out the obvious hypocrisy of Reddit endorsing racial discrimination by one of its largest subs, they just re-wrote the rules to make it kosher.
Another “politically oriented sub” had users who got pissed that they were banned for spamming a sub that I frequented.
They made new accounts (they did that a lot…) and posted a couple normal posts in our sub and then again spammed but this time pretending to be organizing a brigade vs … their sub.
Admins didn’t act when they complained because it was all pretty obvious and we moderated the brigade posts.
In my experience it’s not just “hey there are a few posts” that triggers these kinds of actions.
Okay, but this goes back to the condition:
> If they enforced the rules consistently
Which reddit does not - and you are right:
> it’s not just “hey there are a few posts” that triggers these kinds of actions.
There is an agenda, and then motivated reasoning to find justification for the agenda.
That’s just common sense.
Pretty sure that's how every subreddit that's ever been banned got banned.
I would not be surprised if this isn't so much tolerance as much as it is ignorance. Copyright enforcement is hilariously expensive, and the only reason why any litigation even happens is that larger outfits are also hilariously petty. The mantra of copyright maximalism is that if anyone is even remotely touching "your work", you storm in and demand whatever money you can purely for the sake of keeping people off your "property".
Trust me, once they run out of nominally-SFW-but-actually-NSFW communities to ban, you'll see movie studios cotton onto what they consider to be theft and start prosecuting something that doesn't really harm them in any way.
 Assume, for the sake of this discussion, that if we as a people decide someone gets a government monopoly on something, then depriving them of their monopoly is stealing from them.
Yes, I hate this logic too.
Kids will be video calling each other as SpongeBob and Richard Nixon soon.
polote is right. In a decade, anyone will be able to make a nude of anyone else with their phone with just one photo. Or any other funny or stupid video or image.
The tech is real and is coming fast. We can either legislate it or stop worrying and get used to it. I prefer the latter approach.
This is going to be the biggest artistic renaissance of all time. Everyone has something to say, but most lack the years of practiced skill to say it. Not anymore.
(Upvote or downvote.)
When this doesn't happen, the recipient can be assured that the threat was either pure PR or fake. In either case it can be ignored.
Generating offense and disgust is not necessarily inherently harmful.
Photos went through the same thing: they used to be labor-intensive to fake and were generally accepted at face value; now that everyone can easily manipulate them with Photoshop, people are much more suspicious. That's how we got to people using video as proof instead of simple images. Now faces in videos will lose this trust, and people will find more trustworthy signals.
> people will find more trustworthy signals.
I would really like to believe this but the world is clearly experiencing a massive crisis of trust because we have failed to find reliable trustworthy signals now that broadcast Internet communication makes it so easy to create and spread nonsense.
I think you could tell a couple of stories:
- "It used to be that media was used to broadcast journalism to the public, but now the internet allows lies to be spread faster than they can be caught".
- "It used to be that what the public found out was gatekept by a particular set of people, but now the internet allows the public unfiltered access to the truth".
The public used to have a limited set of narratives they had access to because creating and broadcasting them was expensive. That naturally limited the set of narratives available to those who had some level of power and wealth. The upside, if you feel that any level of power or wealth is ever earned, is that those broadcasting have presumably done at least something of value to society to partially earn the right to broadcast. The downside is that much of it may be unearned, or the value they provided may have only been to the elite few at the expense of the many.
Today, narratives are so cheap to produce and broadcast that the public can choose whichever one suits their whim or preconceived notions.
It's not entirely clear to me which of these scenarios is better. They both clearly have deep structural flaws.
A low-trust society is a significantly less productive and safe place to be. We don't want to go back there.
I am old enough to remember the times when "photoshopped" became a verb, and yet I don't remember a "Photoshop epidemic" where people blackmailed other people with production-quality fake pictures. I mean sure, it happened, but not at a major scale.
If the past is any indication we will indeed have deepfakes of personal connections, but their quality will range from "bad" to "okay", and that will be it.
I remember at the time this was a pretty common thing, in high schools, so perhaps you just managed to avoid it.
All the consternation about deepfakes ignores this higher-order effect.
Throughout human history we have used the effort and cost of communicating in a given medium as an indicator of the legitimacy of the information contained therein. At various points the bar to entry for disseminating information in those mediums has dropped precipitously, causing relative chaos as we acclimated to the new reality. Oftentimes institutions or groups that had power, and were using the difficulty of disseminating information to maintain that power, were left with less when the dust settled.
Right now we're in the middle of one of those precipitous drops. From realistic special effects and green screens to celebrity porn, modern computer tech is enabling high-production-value content with incredibly minimal expenditure compared to what would have been required in the past. That's gonna shake things up. Maybe it'll be a nothing-burger like photography. Maybe we'll have a century of religious wars like the printing press.
Society will be fine in the long term.
First of all, and most relevant to the discussion, that website handles single pictures. They don't do videos, which means I can pedantically classify the result as a "photoshop" instead of a "deepfake".
As for the results themselves: I uploaded a couple pictures of me. I can't see the full result because I am not a paying user, but even in the blurry preview I can see that it's a fake. One of them even fused my clothes and my skin, which I guess could be a turn-on for those who are into Lovecraftian horrors.
I won't deny that it's technically interesting, but I wouldn't start a moral panic over its results.
Whatever happened to mystery, and desire, and letting your imagination run wild trying to fill in the blanks?
It's like the computer is being used to narrow our horizons, not broaden them.
Alternatively, once the quality becomes good enough, "it was a deepfake" will be an entirely reasonable defense. Not just in actual courts (which is what I fear we will be heading towards, that even horrible crimes such as raping children will go unpunished or underpunished in countries where possession even of drawn/animated material is a crime), but also in the "court of public opinion".
> The video, which shows a rendering of the Ukrainian president appearing to tell his soldiers to lay down their arms and surrender the fight against Russia, is a so-called deepfake
There are many earlier (questionably "deep fake") examples: https://www.washingtonpost.com/technology/2020/08/03/nancy-p...
Did the Ukrainian soldiers do this following the deepfake video? As far as I understand it, most of them are still fighting the Russians. What damage or harm occurred here?
I am pretty sure modern military command and control does not rely very heavily on "the video looked like it was the president saying it".
We know from the economics of scams that any social engineering attack that is cheap enough to do and can be made plausibly urgent enough to get people to not question it can be hugely profitable. Deepfake blackmail has the potential to check off both of these boxes:
1. Porn adds plausible urgency by defaming the victim. People will really, really do anything to avoid being associated with the porn business, as can be seen by how many copyright trolls specifically troll with porn as opposed to other copyrighted works.
2. It is getting cheaper to train deepfake models and public social media photography provides a large repository of images that could be used to generate face swaps.
Right now I haven't heard of this kind of blackmail actually happening yet. This is primarily because deepfakes still require manual labor to cut out large numbers of faces, and the training process takes a while.
However, these are both surmountable problems.
Ransomware has been a technically possible form of malware for some time, but the problem was getting money back to the thief in a way that was difficult to trace. The kinds of scams that target technically illiterate users use money mules, but that relies on the unwillingness of the criminal justice system to prosecute the kinds of crimes that hurt the weakest in society. Bitcoin made ransomware a lot less risky of a crime, to the point where a lot of criminals targeted large businesses that would ordinarily be protected by the law.
The only thing keeping it from being profitable to deepfake all of Facebook into a bunch of porn is that it would be expensive to do so. However, that's purely a technical problem, and the thing we know about technology is that it almost always gets cheaper over time.
 Tom Scott was actually going to do a speculative fiction video about this exact hypothetical... before the whole deepfake porn thing took off and spooked him out of it.
How about doing something about
* people giving in to blackmail
* people trusting random videos on the internet
rather than trying to put the genie back into the bottle?
- Women can't go topless in most places.
- Porn has age limits.
- In some locations profanity is restricted in public.
And these are rules from the government that clearly violate the First Amendment but are accepted because of the level of offense and disgust they provoke.
Meanwhile, the number of ethical uses is relatively small. Maybe there is a novel use of this tech that will actually be useful, but I can't see how being able to easily impersonate someone else will be good. Maybe someone here has some ideas.
Thank you for clarifying that. Even in the entertainment world (resurrecting dead actors), it's still a fake (a.k.a. fraud); it's just that we don't depend on it in a way with real consequences. Deepfakes are inherently fraud/deception.
Further down the thread there is an example of a political deepfake used to spread disinformation. I don't really care about distaste; it's authenticity (the truth) that is under attack by these. On the upside, they can offer a bit of entertainment.
It's not even a problem of regulation since the bad guys will always have deepfakes. The cat is out of the bag already.
I had actually created the SFWdeepfakes subreddit, because I thought those edits of Nicolas Cage on everything were absolutely hilarious.
Guess what? Suspended for "posting involuntary pornography". Tried to get my account back, but reddit was "experiencing higher than usual support volume" and never got back to me. Oh well.
What I learned from the pandemic, though, is that people are much worse at telling fact from fiction than I expected.
It's given me a greater understanding of the people who are very worried about deepfakes - if you think it's more like 10%-30% of the population that are very easily fooled, new technology for fakery is an ominous thing to see.
Most people are not trained to detect deep fakes and certainly most of us here still would fall for a deepfake unless we’re specifically looking for it.
Unfortunately that means that random Internet commenters aren’t going to be trusted.
The main difference is that there might be cheap tailored fakes for noname people like me for small scams.
Now with deep fakes, we learn not to trust video:
But the end result is also that we no longer trust true information either. We have become cynical, detached, disorganized. Each of us ends up somewhat arbitrarily deciding what to believe out of the sea of indistinguishable bits firehosed at us by the Internet; we no longer have any shared experience or consensus on which to build a functioning society.
Propaganda does not require you to believe it. It's equally effective if you end up believing nothing.