They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of (since some users take any possible opportunity to mention them to the admins and are met with silence or half measures).
They even came up with the concept of quarantined subreddits. I honestly can't think of a single legitimate reason for them to exist. Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain instead of just outright banning it (unless you want to support it discreetly)?
To be clear, I'm not accusing reddit admins of shadily supporting horrible communities. I'm just saying that, from the perspective of a user who sees them fight the battle, they're doing it so badly that it almost seems they don't want to win, and I wonder what's happening behind the scenes for that to be the case.
So, using a combination of email blasts, instant messaging, and more, these groups focus down a site or individual. This can involve simple brigade-type tactics against posted comments and stories, complete with abuse of the site's reporting process to flag users and content they disagree with. Sometimes it involves pressuring those higher up with public shaming for permitting something they may not even be aware of, using the common innuendo that if they don't take action, they must agree with all the hateful aspects (if any) of the target.
When any social site becomes large enough to receive mentions on other sites and media it immediately becomes a target of groups seeking to control the message and attack those who do not wholly support that message. Even minor disagreements can be sufficient to attract ire.
The moral majority is an authoritarian regime, and it will have a more damning effect than any feared religious right that people constantly brought up as an internet or social bogeyman.
The very fact that they now describe ideas they disagree with as making them physically uncomfortable, to the point of declaring them violence, should scare anyone. This is code for saying that no limits are required to remove the threat.
1984 isn't government oppressing people, it is people oppressing people
* btw - only on reddit can you be banned from subreddits you never visited for activity on another sub (including, apparently, just voting)
For some people free speech is like riding the train: when you reach your stop, you get off.
Perhaps this is a way to sidestep claims of censorship. Deleting subs can embolden some community themes and messages by allowing them to play victim of censorship and bolster claims of conspiracy via attacks from left/right/government/lizard people. So let them carry on while alerting people of what lies ahead.
Keeping this group of people happy, while at the same time trying to maintain a somewhat “clean” site, is a delicate balancing act. I am certain that quarantined subreddits are a result of this.
Several such subreddits are (IMO rightfully) banned or quarantined.
The principal example that comes to mind is TD, which seems more obnoxious and tasteless than some sort of evil.
Subs that truly are beyond the pale by wide consensus seem to get dropped immediately and without objection.
As to not wanting to win, well, yes, dropping everything controversial means losing a lot of page-hits, and as a business, they're likely wary.
Reddit just says: “This content has been restricted in your country in response to a legal request. Geoblocked in Canada due to General legal request”
Is there any way to push Reddit to disclose who sent the request?
I’d love to follow up with that agency, but we have a lot of police services in Canada, if it was one.
Reddit stopped sending copies of these requests to Lumen (formerly Chilling Effects) around 5 years ago.
(NSFW I guess, but not porn)
Though the subreddit was already forced to enact a “no sourcing” rule in the past.
In what world is refusing to ban something from your online platform equivalent to "supporting it" (discreetly)? Reddit is a business, in the business of getting ad money for providing an open discussion forum. They are free to decide what they host and what they don't, but simply hosting something (which in their case means doing business with the people who post that content) doesn't mean they support it. Tolerating something is not the same as supporting it.
It's like saying that Amazon _supports_ pornography because AWS allows pornographic content to be hosted.
That would be the company which owns reddit, through the admins. Is it fair? No, it isn't, but reddit has no requirement to be fair (although it would be nice if it were).
That presumes some sort of equal playing field and a meritocracy of ideas.
That's clearly not the case. The reality is that good-faith content takes more effort than bad-faith content.
The 2016 US Election in particular showed us what happens when a platform takes a "hands off" approach - platforms become absolutely saturated with false, hateful information... much of it produced by large and even state-level organizations.
It's not just evident when it comes to political content.
Craigslist. Amazon. Etc. None of these platforms work with a completely laissez-faire approach on behalf of the platform owners. To think otherwise is utterly naive.
Platforms need to curate. There's no other way. Yes, this means we will need to place trust in those platforms and monitor their curation. Platforms will, in turn, have an obligation to be transparent w.r.t. how they curate.
– John Stuart Mill, On Liberty (https://www.utilitarianism.com/ol/two.html)
Mill seems to agree with you.
And yet, presumably you were able to employ critical thinking, see past all this, and vote the "right" way despite an alleged deluge of hateful misinformation, as were 48%, a plurality, of your fellow voters.
Where is the evidence that the level of misinformation is any higher now than in the past, or that censorship (whether by public or private entities) is suddenly a desirable thing?
History has shown that we didn't need to censor and oppress Communist thought in the US during the Cold War because of its alleged threat, even though Communist thought was being supported by foreign actors. In the end, communism discredited itself just fine, and in the meantime, our censorship made us intellectually and morally poorer. Why should things be any different with the new far right?
> Where is the evidence that the level of misinformation is any higher now than in the past?
> History has shown that we didn't need to censor and oppress Communist thought in the US during the Cold War because of its alleged threat, even though Communist thought was being supported by foreign actors.
1. "Communists" had no way to create and disseminate information on such a massive scale, in a manner that is nearly indistinguishable from good-faith actors. There's no analog for that in the Cold War's "Red Scare".
2. During the Red Scare, the possibility of Russia infiltrating and influencing the US to any real degree was laughable. It's not laughable now. It demonstrably happened and is happening.
3. "Censorship" is when a person is not allowed to express their views. That is not what is being proposed here. This is about refusing to hand a megaphone to bad actors.
Reality does not conform to your ideals of an egalitarian marketplace of ideas. Content farms with modest funding and modest staffs can outproduce and out-influence millions of regular voices. Relying upon the populace to suddenly become savvy is not realistic. Either we need to turn this into a full-scale information war of competing content farms and state-scale misinformation campaigns, or platform providers need to combat things at that level.
> And yet, presumably you were able to employ critical thinking, see past all this, and vote the "right" way despite an alleged deluge of hateful misinformation, as were 48%, a plurality, of your fellow voters.
I also have an IQ in the 90-99th percentile range, a degree in computer science and have been immersed in online culture since before web browsers existed.
The vast majority of the human race does not have those sorts of advantages. Realize that the HN demographic is not representative of the entire human race.
> 3. "Censorship" is when a person is not allowed to express their views. That is not what is being proposed here. This is about refusing to hand a megaphone to bad actors.
Censorship, historically, was preventing objectionable books from being published. In many forms of censorship, there was nothing theoretically preventing an author of objectionable books from handwriting them and distributing them privately, so long as they didn't draw too much attention to themselves. The historical censors could have made an identical argument to yours: they were merely depriving the objectionable view of a "megaphone" (the publisher), not eliminating the view entirely.
But given what you have asserted above about the importance of breadth of reach for views, it would seem to me that preventing a view from being widely disseminated in any practical way to people who would otherwise freely choose to read or hear the view is the very essence of censorship.
But more broadly, my real problem with your viewpoint is perfectly exemplified here:
> I also have an IQ in the 90-99th percentile range...
> Relying upon the populace to suddenly become savvy is not realistic.
Firstly, with deep respect, this is incredibly arrogant. But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.
But if you are right, don't you see that this undermines the bedrock assumptions and principles of democracy? The masses must be led, and shown the "right" information and protected from the "wrong" information? By whom? And what is to prevent those elite and enlightened few from acting in their own interests rather than those of the ignorant mob?
I see within your argument, which may not be factually wrong, a powerful argument in favor of authoritarianism and oligarchy. Any argument that leads to such conclusions is worth a fair amount of scrutiny, no?
Even if it is a fiction that all voters are equally intelligent and informed, sometimes we have found that certain fictions are very important prerequisites for creating a desirable society. For example, the fiction that "all men are created equal". They aren't. But we use that fiction in very important ways to create a more just and equal society. Another is the fiction of free will. Free will does not exist. But still, we treat people as if they had free will, because the alternative is disempowering and decouples people from any responsibility for their actions.
So, I think "the demos makes more-or-less informed choices that are more-or-less in its own self-interest" is another of those necessary fictions, necessary to prevent us from regressing to feudalism or worse.
> Firstly, with deep respect, this is incredibly arrogant. But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.
But it's important to examine our own privileges.
Intelligence (while admittedly not well-represented by a single number like IQ) is more or less a lottery we win or lose at birth. Bragging about intelligence is like bragging about being tall. It's not something earned; it wouldn't make sense for anybody to ever brag about it.
Critical thinking is a skill that is a serious luxury. It relies on some combination of intelligence, time to hone the skill, access to education, access to information, and/or a serious autodidactic streak. A lot of people live lives where one or all of those is missing, often through no fault of their own.
Other things help me to separate online misinformation from information as well. I was in high school / college as the web came of age, at a time when I had the time and inclination to dive into it from the beginning.
I was born into a stable middle class household in which we could afford a computer and internet access. More privileges many do not enjoy.
All those factors were crucial. Take one away and I may well have struggled to comprehend the difference between real and fake news.
My sister is a prime example. Many of the same advantages as me, but born earlier. Missed the internet revolution. Never been comfortable with technology. Constantly falls for fake news. Is that her fault? Well, yes, ultimately. We must all be responsible for ourselves.
But it's also essentially "a well-financed misinformation industry" versus "a middle-aged woman who is technologically illiterate" and wow, that is NOT a fair fight by any measure. If we are basing the future of our society on that fight having a favorable outcome, we are really fooling ourselves.
> But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.
The "enemy" is well-funded organizations who have serious time and money -- and sometimes state level backing -- and are devoted to pushing a mix of legitimate and fake news that takes serious effort and knowledge to separate from the real thing(s).
> I see within your argument, which may not be factually wrong, a powerful argument in favor of authoritarianism and oligarchy.
There are no easy answers.
A purely laissez-faire free for all absolutely does not work. We see it time and time again. Bad-faith information always utterly drowns out good-faith information, like spam emails or search results drowning out the legitimate stuff.
Solid, best-effort reporting takes time and sometimes the results are boring. Willful misinformation is orders of magnitude easier to produce.
Education and critical thinking are utterly essential. We should invest heavily in these areas. Democracy relies on them, utterly. But it is also fantasy to think everybody will become enlightened information-processors, and even those of us who are competent at it need help wading through oceans of garbage. At some point, we do need trusted providers of curation. While not ideal, it is realistic, unlike hoping that most members of society evolve into galaxy-brained autodidacts.
This is how any developed society functions. We can't all evaluate drugs and therapies, so we need the FDA. You could replace it with one or more private-sector ventures, perhaps, but at some point you would need to rely on something. We can't all make clothes or food so we trust others. We trust others to design our roads and keep them safe. If I had unlimited time and brain cells it might be nice to do these things myself, but it is not even a little bit realistic.
The same people who deem them worth quarantining, presumably.
We could debate the benefits and drawbacks of moderation, but by the time you have quarantines in place, the ship of not considering yourself a moderator has long since sailed, which is why quarantining doesn't make sense to me as a measure.
> Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
Quarantines offer greater flexibility in monitoring and management, with fair warning to members.
A lot of its success is built on content which is not as acceptable as one might desire. See what happened to Tumblr when they cracked down.
Once you assign usernames and attach any type of value counter to an account, you've effectively thrown true anonymity out the window, since you've essentially created a pseudonymous marketplace.
People may think "How pseudonymous do you need to be?" In today's world? Very. Especially given the propensity for deplatforming and reputation based social warfare.
I think the point is that there's a distinct disconnect between your profile on Reddit (and any other sites you share the username with) and your physical person, at least up until the point where you (ideally) make the connection public yourself if desired.
It's anonymous insofar as the "person" on reddit is not easily attributable to the physical person themselves.
There is one. It was used a couple comments up.
Pseudonymous. It's an actual word, not just made up.
My point stands, though - despite the misuse of the word anonymous, there's a real difference between how facebook treats identity and how reddit does. And it massively affects how communities in both interact.
Also no surprise, the account was active for only ~5 days.
I think their main challenge is they have a cultural/temperamental resistance to making decisions on a case-by-case basis. Any time there is a problem it seems like they try to overgeneralize and come up with a sort of rule or heuristic that can apply in every case and situation. But since the "toxicity" of a community is usually more subtle than that they end up always being way too reactive.
That's easy - advertising.
The same reason YouTube cleaned up its act. And Gfycat. And a dozen other sites who found themselves at the wrong end of their advertisers' ire.
Since we all know we're talking about The_Donald, I'll just mention them specifically. Alexa traffic data considers Reddit the #6 website in the US -- they are huge, and their major actions will be noticed. If they ban their top pro-Trump community, Fox News will be talking about it later that day. Reddit execs will be answering questions in front of the US Senate a week later. Phrases like "election interference" will be thrown around. Nobody wants that.
So instead they do it subtly: quarantine The_Donald, which means they don't show up in site-wide lists, can't be accessed through the mobile app, and can't be indexed by search engines. This stems their growth. Later, start replacing their moderators. Eventually you've taken it over and shut it down, and they leave. Now if Tucker Carlson wants to tell his audience about it, he has to explain to a bunch of 60-somethings what a "forum moderator" is. Not happening.
Honestly, I think what they did worked out best for everyone. Reddit is happy to be rid of them, and they're happy with their new website, TheDonald.win. Everybody wins.
There is a pinned mod message which leads you to the new site, but that's entirely outside of reddit.
They’re profiting off anger, so “old” ones being angry has no downsides.
There are no ads in quarantined subs, and you can't buy any reddit awards to gift to others.
Take a look: https://voat.co/
If someone wants to provide an alternative to reddit but they don’t want to moderate/censor content that makes people uncomfortable, Reddit becomes the lesser evil.
Reddit is clearly a leftwing site now in ways that it wasn't when it started (when libertarian was the dominant ethos). All their "common word" reddits, such as news, politics, technology, and science, are purely leftist echo chambers by now. The few rightwing corners left are already isolated and ready to make the move. In fact they may even decide they should jump ship. What better time for /r/t_d to go somewhere else than now, when people's interest is intensely focused on elections and they can easily rally all their users to follow them. Wherever they go, more people of other persuasions will follow, because people love telling other people that they are wrong on the internet.
The question is whether those competitors can handle the traffic and whether they give moderators the features that reddit does.
I suspect this is the reason. I observed the opposite: a certain type of content disappearing after a separate community was created for it.
You don't get why an online forum might have an interest in ensuring every other post isn't a dick pic?
If you can understand that (and maybe you don't), then you DO understand their interest in moderating their own website, whether or not you agree with their moderation standards in practice. A dick pic may be obvious to you... but so long as the picture depicts someone over 18, it is legal and free speech.
> They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of...
Again, isn't it their right to determine in which forums/threads dick pics may in fact be appropriate and in which they aren't and should be moderated/removed?
The confusing part is not that they're attempting to clean the site - it's their ineffective, half-assed ways of doing it.
> Your account has been suspended for breaking the rules. Your account has been suspended for 3 day(s).
>> You recently upvoted a post or comment that was determined to be against our policies. Abusive content is not acceptable on Reddit nor is engaging with it. Please be thoughtful about the content that you interact with.
Interestingly nothing in the Reddit policies seems to mention voting or engaging with content: https://www.redditinc.com/policies/content-policy
I think there should indeed be penalties for upvoting e.g. death threats.
If Reddit's goal is to encourage broader self-censorship of wrong-think, the message makes perfect sense. If Reddit informed a user that they broke Rule A, a rational user would learn a specific lesson. They would learn that they can avoid punishment by not interacting with content that breaks (or might break) Rule A.
By punishing without an explicit explanation, a rational user must conclude that, to avoid punishment, they cannot risk interacting with content that breaks (or might break) any of the rules on the website.
> Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
Until now that has mostly been warnings for posting in "bad" subs. Now after most of those are gone I guess it's time for the next phase.
> I’d like to share an update about our thinking around quarantined communities.
> When we expanded our quarantine policy, we created an appeals process for sanctioned communities. One of the goals was to “force subscribers to reconsider their behavior and incentivize moderators to make changes.” While the policy attempted to hold moderators more accountable for enforcing healthier rules and norms, it didn’t address the role that each member plays in the health of their community.
> Today, we’re making an update to address this gap: Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
This seems strange to me. If this content is violating reddit's policies then why don't they ban them? It seems awkward to have this middle ground of "semi-banned" content where users get suspensions if they interact with it too much. Why create these invisible land-mines for users? I imagine it's because banning this content outright would make Reddit admins appear even more biased, or because the content doesn't actually violate any policies but puts off advertisers.
There isn't a short answer to what's probably an eternal conflict, but I do know a lot of people in tech who might benefit from reflecting on the question of, "are we the baddies?"
So HN is gone, too, then.
Any site can have any rules they want. But they shouldn't expect to be able to manipulate and control discussion and face no backlash from users.
> Without becoming a target for filling up with illegal shit to overwhelm the moderators and cause legal trouble for the owners?
'Trolling', 'flaming' and 'swearing' are not illegal, and honestly, most complaints about 'trolling' I see are just people who disagree with what they're reading.
You're saying they should face backlash from government regulators not from users.
If there's a big user backlash, those users are already free to move to whatever site they want, or make their own.
Re: your comment about trolling, etc, not being illegal: yes, exactly. So if a site wanted to enforce some sort of "family friendly" policy, you could simply bomb them with truly illegal shit instead and get them in trouble with the regulators for not being able to keep up as they would be liable for that, since they aren't a "anything legal goes" platform.
I never said that. But it's probably a good idea to have some sort of oversight for a massive platform that censors and manipulates discussion of millions of people. You may think their policies are totally justified and correct (they aren't), but the specific opinions they choose to promote may change to something you are against.
> If there's a big user backlash, those users are already free to move to whatever site they want, or make their own.
Yes, they are also free to shit on the website and point out its flaws.
> So if a site wanted to enforce some sort of "family friendly" policy, you could simply bomb them with truly illegal shit instead and get them in trouble with the regulators for not being able to keep up as they would be liable for that, since they aren't a "anything legal goes" platform.
Trolling is not illegal; it's not even a specific thing. It's a very ambiguous term used to describe 'comments that make me feel bad'. Trolling and posting illegal shit are completely different. Sure, ban illegal stuff - you kind of have to - but don't use 'trolling' as an excuse to ban stuff you disagree with.
Do you honestly believe that a discussion forum site that wanted a family friendly policy shouldn't be able to have any liability protections? That if someone started flooding them with child porn, say, at a rate they couldn't keep up with, they should have to shut down because they'd otherwise be liable for all that content due to their having a content moderation policy that forbid certain types of legal speech (say, swearing)?
And what does child pornography have to do with swearing?
> Do you honestly believe that a discussion forum site that wanted a family friendly policy shouldn't be able to have any liability protections?
Again, I never said that.
The pitch being made in this subthread was "I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act."
So you lose liability protection if you censor lawful content.
So if you want to ban swearing, you become liable for your users' content. Which makes you a WIDE OPEN target for bad actors who could post whatever illegal content they want faster than you could moderate it.
I'm trying to illustrate why I think `cwhiz took an absurd position, and am not using the term "trolling" in general now. I've given a specific example of "legal content some communities might want to prohibit" as well as "illegal content that bad actors could use to get the owner of that site in trouble in this proposed world."
If your laws permit speech you don't like, either fix your laws or fix yourself.
Think through the mechanics of how this oversight would work.
Someone will have to decide "is this site sufficiently neutral in how they moderate UGC?" So let's spin up a government department for that - maybe call it part of the FCC, maybe make it a new one, who knows.
Now let years, maybe even decades go by. Now let's say some site you like gets essentially taken out - since massive sites aren't going to be able to do manual moderation of everything - by a regulator who leans to the opposite side of the political spectrum you do.
Now you don't have much recourse, since you aren't free to go off and try again - the regulator would just get that site too.
I have a hard time seeing how this is better.
And this is a weird stance for conservatives to be preaching, since the previous similar thing, the fairness doctrine, was no favorite thing of theirs.
This is one of those things where you can't say "only the government can solve this problem" unless the only speech you want to see protected is fairly-mainstream government-endorsed speech.
There are many ways this can work, and it's not necessarily simple. I don't have a specific proposal.
> Now let years, maybe even decades go by. Now let's say some site you like gets essentially taken out - since massive sites aren't going to be able to do manual moderation of everything - by a regulator who leans to the opposite side of the political spectrum you do.
1) Manual moderation of massive sites is already going away for the most part.
2) Sites like reddit have swarms of moderators and the ability to report, and legal notices to take down specific content can also be sent.
3) We have laws about 'illegal content' these websites are already complying on a massive scale.
4) My suggestion was an oversight of excessive moderation and manipulation of opinion, not what your example suggests, which is the opposite. You literally don't have to moderate anything to not be charged with 'excessive censorship' or whatever.
> Now you don't have much resource, since you aren't free to go off and try again - the regulator would just get that site too.
We have a justice system for a reason.
> I have a hard time seeing how this is better.
We can all come up with 100 different imaginary scenarios where some very generic suggestion may fail. I don't see the point. Do you think there should be no laws about content? No libel laws? No copyright enforcement? Where do you draw the line?
> And this is a weird stance for conservatives to be preaching, since the previous similar thing, the fairness doctrine, was no favorite thing of theirs.
I'm not a conservative, but what I find more jarring is the people who want sites like facebook forcibly censored and moderated politically, while reddit can do whatever it pleases as long as it aligns with their political ideology.
> This is one of those things where you can't say "only the government can solve this problem" unless the only speech you want to see protected is fairly-mainstream government-endorsed speech.
Again, I did not suggest that there should be laws about what content is allowed. I actually think we already have too many such laws - I would personally do away with DMCA and other such bullshit. What we should have is a check on mass-scale censorship and manipulation of opinion by corporations.
I was talking about the proposal from `cwhiz, in which the government would revoke protections under existing current law. I think there are holes in that suggestion that are miles wide, that will extend to pretty much any other proposal that depends on the government deciding if a site is over- or under-moderated.
Without a specific proposal to talk about, or specific opinions on "should a site have to relinquish editorial control or rule-setting ability to enjoy liability protections from stuff done by users" (I say "no"), I don't see much more interesting stuff to debate. Those are real, active questions and proposals being made; other hypotheticals could go on forever but are less relevant to now.
But if you aren't interested in those specific points, I don't know why you're participating in a thread that was specifically about the questions around that current law.
If you would like to learn more, and see exactly what happened when Section 230 was not in effect, check out this case:
Stratton Oakmont, Inc. and Daniel Porush v. Prodigy Services Company, "John Doe", and "Mary Doe".
Stratton Oakmont (yep, the one that was made by Jordan Belfort, the Wolf of Wall Street), won that case against Prodigy and, yes, John Doe and Mary Doe. You, the user, are John Doe or Mary Doe. Section 230 protects you.
I'd be in favor of a law that required uncensored free speech on all forums but that also allowed users to opt-in to moderation. Even if 99% of users opted in to moderation it would still leave an escape valve for totally uncensored free thought.
Honestly, I don't see what the best alternative is. Probably the segregation of comment platforms and sites. If Section 230 were replaced with something effective in regard to speech protection, or conversely to the right of platforms to curate comments, it would still only apply to the US, and that would be a significant factor pushing comments elsewhere.
Legal realism prevails with this stupid law, and it always will. You can moderate / review / editorialize content, including political content, while simultaneously enjoying Section 230 protections.
This isn't a practical idea. People aren't always aware of their own biases and critics are also inclined to see bias even when there is none. There is no practical way to determine bias levels.
> Reddit is very happy to claim they are unbiased, neutral, hate free.
Are they? Can you point to an example of such a claim?
> But some preferred subreddits will ban people for having the wrong gender or wrong race
Subreddits are run by users.
> They also tolerate hate speech, incitement to violence, or even advocacy for genocide, if it matches their ideology.
Reddit would say they don't tolerate those things, you say they do, how do we determine who is right?
No, they are run day to day by users, but ultimately are at the discretion of the site owners. Case in point, this article we are discussing.
> Reddit would say they don't tolerate those things, you say they do, how do we determine who is right?
The behavior of the site owners? Reddit site owners treat different subreddits differently and different viewpoints differently, and there are differences along single characteristics (gender, race, etc.) because of the people and topics generated by those groups. Pre-censoring is not uncommon, which has led up to this wrongthink penalty.
If you're challenging the history of reddit, you aren't informed enough to be discussing this and it's not worth rehashing.
The site owners don't ban users based on race and gender which is the claim I responded to.
> Reddit site owners treat different subreddits differently and different viewpoints differently
Well yes, that is how standards work, "different views" are treated differently based on how they stack up with respect to the standard. I understand that you're arguing that reddit applies the rules selectively, but reddit would argue that they apply them fairly, so how do you suggest this kind of conflict be resolved at scale? Who should decide if reddit is being fair or not?
> If you're challenging the history of reddit, you aren't informed enough to be discussing this and it's not worth rehashing.
If you want to make an argument then make it, but condescending declarations of a monopoly on the informed position does not do your case any favors.
I'm not making a case. I'm describing the state of the reddit culture. Whether you believe it or not, is incidental.
> The site owners don't ban users based on race and gender which is the claim I responded to.
Your statements were broader than that. Revisionism aside, you can get banned for voting on a post, and there are lots of rules (written and unwritten by each community) that can get you banned from the subreddit or reddit as a whole. Making statements about YOUR gender (or a number of other attributes) can also get you banned. Here, I'll do a google search for you (since you can't be bothered) for subreddit bans, to start you off:
You aren't informed enough to be discussing this and it's not worth rehashing.
That much is clear.
> I'm describing the state of the reddit culture. Whether you believe it or not, is incidental.
Very convincing. Oh, I know, you don't care about that in the least.
> Your statements were broader than that. Revisionism aside...
No, they weren't. You stated that the subreddits are ultimately controlled by the site owners, and I explained that the site owners don't ban people based on race and gender. Simple. Revisionism indeed.
> Making statements about YOUR gender (or a number of other attributes) can also get you banned. Here, I'll do a google search for you
The results of the google search do not support the claim you're making; not even a single result does. Beyond that, mods can ban users for whatever reason they want, but the site owners do not ban people based on their race and gender. That claim is totally false.
> You aren't informed enough to be discussing this and it's not worth rehashing.
lol, a laughable assertion coming from someone who can't even cherry-pick google search results to support their argument. Stop wasting my time.
I'm not cherry-picking; I'm making an observation that documents the history. It's an ongoing, well-covered topic which you can research without difficulty. Denial doesn't help either of us and causes me to question your sincerity. Good luck with whatever.
The answer is going to be a moving target. You're gonna have some government bureaucrats having to make rulings on these regulations.
And many of the proponents of this seem to fail to imagine a world where Donald Trump is ever not president. Imagine the power this would give the government against someone who plays fast and loose with 'facts'. "We've decided that the statements in this post were intentionally misleading, therefore they are a political statement, and the owners of the site promoted them in a way that they didn't do to this other post, therefore the owners are making a political statement and this is not a neutral site, which makes it liable for these other comments from other users that we're interpreting as violent threats."
This is unsustainable though, there is just too much content to filter. There is also the fact that lacking such protections the site will be targeted by bad-faith actors who intend to harm the site by posting lots of legally problematic content.
Section 230 did not create a distinction between a platform and a publisher that required censorship rules to be applied fairly or evenly.
I'm a big fan of the idea that internet providers, DNS registrars, etc. need to be neutral to content in the way common carriers are, but I'm not convinced yet that platforms should have similar obligations. It seems that the appropriate response to schismatic disagreement with the "SysOps of the BBS" is to build another BBS as the "telephone network" is open to all.
Someone else mentioned this as well, but some very specific monopoly/necessity cases could change this delineation. However, I could only see that being necessary in a much different environment than we have today (that is, situations without feasible common carrier alternatives; _maybe_ google search would be an example today).
I won't speak to your particular motives because I don't know you from anyone else, but I sometimes wonder if a core driver behind wanting to mess with section 230 is because people feel entitled to an audience at low/no cost to them, at the expense of platform owners, to blather on about whatever they please. To this end, I say "nope, you're not entitled... we have an open internet, drive your own damn traffic".
You mentioned Google Search, I'd add Youtube for video content, Twitch for streaming, FB for social network etc. Sure, you can broadcast video content without Youtube, there's vimeo or you could host it yourself, but it's really not a feasible approach. Much like you don't really have to rely on the local utility company to provide you with electricity and water, you could totally run a generator in your back yard and have trucks bring in fresh and move out waste water ... but it's not a realistic approach.
> To this end, I say "nope, you're not entitled... we have an open internet, drive your own damn traffic".
I'd respond that we don't have an open internet, but more importantly: you don't have to be a platform, nobody is stopping Reddit from being a publisher.
The reason why they don't want to legally be a publisher is simply that their low-cost approach doesn't work if they have more responsibility for what they publish. They very much want to be able to pick and choose what is published, but they don't want to be considered a publisher.
But Reddit is effectively just a big message board. Such interpretation would force every small internet forum to either become a [0-9]chan or a closed space for invited members only. There is lots of middle ground where the owners/the community polices some speech and you can reasonably, realistically just go somewhere else.
The entire point of 230 was to allow companies like Reddit to censor lawful content while still claiming protections. It was never designed at any point to create neutral platforms.
Lack of moderation was already a defense before 230 was created:
> CompuServe stated they would not attempt to regulate what users posted on their services, while Prodigy had employed a team of moderators to validate content. Both faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault as, by its stance of allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, Stratton Oakmont, Inc. v. Prodigy Services Co. found that as Prodigy had taken an editorial role with regard to customer content, it was a publisher and legally responsible for libel committed by customers.
Section 230 was not designed to foster neutral platforms, it was designed to allow platforms/communities to moderate as they saw fit without opening themselves up to legal liability. To re-phrase that in modern terms, section 230 is one of the strongest protections we have today for community's Right to Filter.
When people say that they like Section 230 but it's gone off the rails, what they really mean is that they think Section 230 was always a bad idea. Or perhaps more charitably that they haven't really researched how Section 230 works or what its history was.
The whole point of Section 230 was so sites could do that. Before 230, there has been court rulings that said if you moderate the content you are liable for the content. 230 was enacted specifically to reverse that.
I don't think people espousing this view understand the scope of what sites need to tolerate to comply. Would reddit be a better place if they let people say, "ni\\\\s belong in shackles" and, "death to Jews"? Because that's legal speech.
As far as Reddit and Facebook go, understand that freedom of speech also means freedom from compelled speech. It's not just about being able to say what you want, it's about being able to not say the things you don't want to say. Making Section 230 contingent on hosting all legal content compels people so say a lot of things they don't want to say.
The internet is open. If you disagree with Reddit, Twitter, Facebook, etc, there are alternatives. Of course when the hateful all congregate, the farce of their positions becomes too clear. They need the legitimacy of Twitter or Facebook to ply their nonsense.
As an aside, it's interesting that those far right sites are some of the most fervently moderated, "snowflake" sites. Even The_Donald on reddit was hysterical in its suppression of even a simple question on the truthfulness of some fake claim. Instant comment deletion and account ban. Gab's extreme assertions of their moderation policy are uproarious when compared to its creation (you know - "Twitter silenced right wing voices!")
Looking at it now, it feels like the site in general is really mean-spirited.
Also, upvoting means you're just seeing whatever the majority of voters want you to see. It works on a smaller community like HN because comments get tens of upvotes at most. It's not a site for good discourse.
However there's communities where there is interesting discussion still and often if you want an opinion of a product or an issue you might find a thread on it from a real user.
But generally they've done nothing to stop the nurturing of racist/sexist/bullying content over the years.
Now you can argue about free speech being important for an internet platform.
But Reddit trying to change the community attitudes of their site now seems a little late.
Also, there is a real sense that some users believe Reddit is the shepherd of society and that they're fighting at its forefront, which I think stems from the millions of apparent subscribers to subreddits.
But it's just a fancy internet forum.
Like this linked post is on a subreddit called Watch Reddit Die.
Apparently something about free speech being stifled in Reddit.
People need to realise they shouldn't waste their life on a website they don't like.
The internet hasn't been fully consolidated by Facebook and Google just yet, they can find or make a different community.
I feel like reddit has always had recurring and overlapping cycles of mean spiritedness. For instance, during the summer holiday period it becomes notoriously harder to have a reasonable exchange of ideas there (don't ask me why; I'm not quite sure). This year, since the beginning of the COVID19 isolation period in March, we've been in a period of very high mean spiritedness. It could be a consequence of users' high stress levels.
Reddit does still have several excellent communities. I find that there's one thing in common between them: They have real, dedicated, human moderators enforcing strict, rational policies. Also: There are some weird cliques of power moderators on reddit who control dozens if not hundreds of subreddits. A good subreddit is usually not run by those cliques. Not to discount the hard work that some grunt moderators put into trying to run the large, clique-controlled communities, but the commonalities are hard to ignore.
On upvotes: Reddit should have clear policies enforced equitably and transparently at the company level, by human staff, and downvotes shouldn't be presented as the opposite of an upvote. I don't see a complete alternative system that can replace the upvote, though. They do provide different sorting options; what else could they do there?
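On sorting specifically, reddit's "best" comment sort is a useful reference point: it ranks comments by the lower bound of the Wilson score confidence interval on the upvote ratio, rather than by raw score, which partly decouples ranking from sheer vote volume. A minimal sketch (function and variable names are mine):

```python
import math

def wilson_lower_bound(ups: int, downs: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote ratio.

    Ranks a comment by how confident we are that it is good, so a 5/5
    comment does not outrank a 95/100 one purely on its perfect ratio.
    """
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n  # observed upvote ratio
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# A comment at 95 up / 5 down outranks one at 5 up / 0 down under this scheme,
# even though the latter has a perfect ratio.
```

The design point is that small samples get pulled toward the bottom of the confidence interval, so brand-new comments with a handful of votes aren't ranked as if their ratio were settled.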
On alternatives: People attempt to leave on a regular basis. The problem is that there's a much higher incentive for the people who are ostracized not only by reddit as a platform but by the associated community to leave than for people who are, for the most part, comfortable, even if the administration acts a little weird on occasion. All alternatives are quickly colonized by people that the majority of the reddit community (if there is such a thing) would rather not be immersed in: White supremacists, conspiracy theorists, fanatic nationalists, etc. For the more moderate people to be willing to move, they want to feel surrounded by a majority of people like them, not a minority.
It's been clear over the last few years that the popular subreddits are the ones that have the most dramatic content.
Smaller subreddit do tend to have better discourse though until eventually, they get large enough to become popular and start to become the same echo chambers.
Second, I'm curious what % of revenue comes from user-purchased post and comment awards like gold. I've noticed those in use quite a bit in some more inflammatory subs like /r/politics. I wonder how they vet payments from users to avoid providing a way for hostile actors to buy visibility while flying under the radar, i.e., avoiding conflicts of interest from accidentally monetizing trolls.
I've done some searching and it seems that reddit gold is a pretty small piece of reddit's revenue. Campaigns start at $50,000 with Reddit Ads.
Reddit made close to $1 million in revenue from Reddit gold in 2016.
I'm deeply interested in learning what % of revenue that is for them and how quickly it is growing.
As for advertising on Reddit, I view it as a fool's errand on the part of advertisers given the extensive popularity of PiHoles and adblocking software among the userbase.
That said, my concern is that it creates an environment for misaligned incentives. If say, hostile foreign governments have their troll farms buying awards to increase visibility of divisive and inflammatory posts in key threads, that furthers their agenda and creates a revenue stream for Reddit that is fueled by creating an environment where that flourishes.
I'm NOT saying Reddit is actively trying to do this as I think they'd be horrified to find this happening, but they have a fine line to walk here from a product and business standpoint.
Re: your other point, I do digital advertising for a living. I can assure you that the vast majority of users on Reddit are not using PiHoles. While many may be using ad blockers, those largely don't work for native ads in the mobile app.
I think you underestimate the size of this place. It’s basically a busy Subreddit.
To think that the reddit administrators would go so far as to ban people for even interacting with content that THEY deem problematic.
Reddit is dead and no one I know uses it anymore.
- In case anyone is interested, here is the censored post that I was referring to: https://old.reddit.com/r/AskReddit/comments/h9ctho/serious_a...
And that doesn't even relate to this topic as what you're referring to is on a subreddit basis and is determined by a set of volunteer mods not reddit admins.
Not true. The post body says [removed]. If the user deleted it, it would say [deleted].
Edit: another user determined it was both removed and deleted.
Also, the user posted a comment complaining about the removal.
How do we know that? The body just says [removed] and the user is marked as [deleted].
If the admins can alter the content of your posts, then surely they can delete / remove content and have it show whatever they choose.
It's not dead, and I say thankfully because it not being dead retains its state as a containment mechanism of what is easily the worst group of internet users in internet history.
I fear for the day Reddit falls and its userbase spills out looking for another avenue to be comforted with groupthink, newspeak and feel goods for the Brands We Love.
People are in different stages of life and seek solace in other people having similar experiences. Thus, the sexually frustrated congregate, the depressed congregate, drug enthusiasts (and abusers) congregate, disgruntled ex-<insert organization> congregate, you name it, and angry/sad/anxious people with a shared condition will congregate in search of validation/confidence.
But the opposite is also true - people with a shared hobby/passion/vision congregate as well and create some really fun and uplifting communities.
As the subs where you can speak with some degree of openness grow ever smaller, it's not worth the time anymore.
And honestly I WISH nobody used reddit anymore, maybe it could roll back 10 years when it was frankly more fun, but it sure seems like more people than ever are using it.
How do we feel about pre-emptive moderation? Is it warranted? Is it warranted every time? Only certain topics?
More precisely I would imagine they don't care about facilitating/refereeing culture wars inside their subs, cause lord only knows how much work that would be.
That wasn't what I was asking, however.
>> It was a combination of things, so I can't say for sure what could have prevented this. More understanding parents definitely would have helped. Other perspectives online, as well. Detransitioners frequently get shut down on the internet, as their experiences are often labeled "transphobia." Which is utter bullshit, in my opinion. Perhaps if I had read a similar story as my own those years back, I would have had second thoughts.
Ironically, the AskReddit mods contributed to this problem by removing the post.
The mod then gloated about banning dozens of people over this, saying that "LGBTQIA+ people run the subreddit" and that "the lack of discrimination (...) for being cis or being white or being straight or being a man overshadows any discrimination they would receive for their adhd".
I don't think that's great moderation given the subject of the subreddit but what can you even do about it that would not be time consuming and pointless?
My model for moderation would be opt-in. I want to be able to see everything by default but be able to see what moderators have flagged as inappropriate. I could then grow to trust some or all of those moderators and just hide everything they flag by default. But I'd always be able to go and quickly evaluate whether I still trust those moderators to be doing the right thing and not just hiding things they disagree with.
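That opt-in scheme is straightforward to prototype: content stays visible by default, moderators only attach flags, and each user chooses which moderators' flags actually hide things for them. A minimal sketch, with all class and method names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class OptInModeration:
    """Sketch of the opt-in model described above: everything is visible
    by default; moderators only attach flags, and each user decides which
    moderators' flags actually hide content for them."""
    flags: dict = field(default_factory=dict)    # post_id -> set of mods who flagged it
    trusted: dict = field(default_factory=dict)  # user -> set of mods that user trusts

    def flag(self, mod: str, post_id: str) -> None:
        self.flags.setdefault(post_id, set()).add(mod)

    def trust(self, user: str, mod: str) -> None:
        self.trusted.setdefault(user, set()).add(mod)

    def visible(self, user: str, post_id: str) -> bool:
        # Hidden only if flagged by a moderator this user has opted into.
        return not (self.flags.get(post_id, set()) & self.trusted.get(user, set()))

    def audit(self, user: str) -> list:
        # The escape valve: a user can always review what was hidden from them
        # and revoke trust in a moderator who hides things they disagree with.
        return [p for p in self.flags if not self.visible(user, p)]
```

For example, after `m.flag("modA", "post1")` the post stays visible to everyone; it disappears for a user only once they run `m.trust(user, "modA")`, and `m.audit(user)` lets them review (and second-guess) what that trust is costing them.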
These are just folks who either showed up early or demonstrated that they thought about the issue the same way as the person who started the subreddit did.
Here's an exercise - check out /r/unpopularopinion today. Full of perfectly boring normal opinions like "MTV should go back to showing music videos". All of the content just seems completely fake. There's not an unpopular opinion in sight.
Now go the Wayback Machine and take a look at that same sub from say, three to five years ago (which feels like a short time to me these days). Plenty of deeply unpopular, controversial, thought-provoking opinions. You don't have to like them, but they were real.
This isn't a bad thing. It's not like Reddit has monopoly powers. Some people want to belong to communities that have agreed-upon values.
Brief recap on its current state of affairs: https://www.youtube.com/watch?v=wqx4wdn4Odc (18 minutes and change)
Terms that have no distinctive power have little value, so while I'd certainly consider them "users", I wouldn't consider them part of the community, much like a tourist staying in a village for a weekend isn't a part of the village's community.
This is quickly becoming a debate over definitions, but I think reddit thinks of itself both as having a community and as having individual communities. For example, anybody who has had a post from their niche community get to the front page understands that there's a larger community they're a part of, different from their smaller one.
To me, community implies some amount of connections between the members. I've never put a number on it, but basically something like "if you randomly pick X people from the group, at least Y of them will know each other". That would limit both the size and also do a reasonable test for community membership: if nobody in the community knows you, you're not a part of the community.
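That informal "pick X, expect Y to know each other" definition can be turned into a concrete check. A rough sketch, with parameter choices and names entirely mine (`knows` maps each member to the set of members they know):

```python
import random

def is_member(user, knows, community) -> bool:
    # "If nobody in the community knows you, you're not part of it."
    return any(user in knows.get(m, set()) for m in community if m != user)

def is_community(members, knows, x=10, y=2, trials=200, rng=None) -> bool:
    """Monte Carlo version of the test above: repeatedly sample x members
    and count how many are known by someone else in the sample; the group
    passes if a majority of trials find at least y such members."""
    rng = rng or random.Random(0)  # fixed seed keeps the check reproducible
    members = list(members)
    if len(members) < x:
        return True  # groups smaller than the sample trivially pass
    hits = 0
    for _ in range(trials):
        sample = rng.sample(members, x)
        in_sample = set(sample)
        known = sum(1 for m in sample if knows.get(m, set()) & (in_sample - {m}))
        hits += known >= y
    return hits / trials >= 0.5
```

A 20-person clique where everyone knows everyone passes easily, while 20 mutual strangers fail, which matches the intuition: a subreddit's drive-by voters would drag `known` toward zero even as the subscriber count grows.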
Whilst there isn't anything inherently incorrect or morally wrong about detransitioning in itself, when it is brought up it can often lead a person to an incorrect idea: essentially applying the anecdotal evidence of a Reddit thread to all situations.
Thus, moderators of subreddits (and administrators, although I think that whilst it is bad when they do so, it isn't as common as many would believe) feel conflicted on the matter. Should they allow the discussion to continue, almost guaranteeing that somebody is going to form misconceptions from the thread? Or should they shut it down, stifling free speech but at the same time reducing the level of overall animosity towards minorities in the community, and preventing people from forming misconceptions?
Ultimately, I think that it should always really come down on the side of free speech, as long as they aren't being blatantly hostile. However, in my opinion, if we really want to have a positive "marketplace of ideas", as some describe it, we really need to educate people from a young age on logical reasoning, cognitive biases and argumentative fallacies the same way we would teach history, geography or the arts. Whilst I think that's unlikely to happen in our current political climate, given that keeping people uneducated is a very effective way to make them vote for those who make the most compelling surface-level arguments, it seems to be the only good solution for the conflict laid out above.
This almost inevitably happens when a site gets popular enough. There are only so many intelligent people actively posting on the internet, and it's impossible to have a site with 100,000 users (let alone the >400,000,000 of Reddit) with a high fraction of them providing good content.
Certainly the heavy-handed censorship of the Reddit administration is a disincentive for people who are interested in topics at the edges of the overton window (i.e. most of the interesting people online).
See the following comment
The second paragraph has four sources that cite some transexuals rejecting the label of transgender, so language-policing others based on your personal preferences is quite inappropriate here.
The bottom line is that transexual is a subset of transgender, which is an umbrella term. If you use "transexual" to refer to a person who you do not know identifies as transexual, it can be seen as hate speech in many places.
This is just absurd.
> and no one I know uses it anymore.
You say this as if it is relevant to your point.
I sometimes browse by /r/all/gilded, which gives you a weird, weird, WEIRD cross section of the Internet. One time I found a heavily gilded post memorializing a recently deceased contributor to /r/cripplingalcoholism -- which appears to be an important but saddening community for people with a serious medical problem -- then followed a few links on the profiles of the deceased and their associates until, a few minutes later, I ended up at /r/opiaterollcall.
Wow oh wow, what a subreddit that was: https://web.archive.org/web/20130904195509/www.reddit.com/r/...
That forum died pretty soon after I found it (and got national press coverage at https://www.nytimes.com/2017/07/20/us/opioid-reddit.html ) but wow, oh wow, there was a lot going on there which is now essentially lost to readers of history, because existing archives don't go deep into posts and comments.
Yesterday, Reddit CEO Steve Huffman (or “Spez,” as Redditors affectionately know him) published the platform’s annual transparency report, highlighting how the platform took down more than 200,000 pieces of offending content over 2019, along with nearly 56,000 accounts and nearly 22,000 subreddits—the site’s communities, most of which are moderated by volunteer users. He also flagged a “company update” regarding the way Reddit thinks about “quarantined” communities...
That policy, in a Reddit post by Huffman:
When we expanded our quarantine policy, we created an appeals process for sanctioned communities. One of the goals was to “force subscribers to reconsider their behavior and incentivize moderators to make changes.” While the policy attempted to hold moderators more accountable for enforcing healthier rules and norms, it didn’t address the role that each member plays in the health of their community.
Today, we’re making an update to address this gap: Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
According to that statement, bans (apparently temporary) are for voting only in already quarantined subreddits.
WatchRedditDie's discussion is not especially nuanced, considered, or balanced.
There's enough fuzz in the vote counts that no one would notice, and you wouldn't have any controversy on your hands.
Those users' votes don't matter; scaring the userbase into narrowing their overton windows of acceptable reddit voting is what matters.
More broadly Reddit wants to be seen as taking action. The incentive structure facing that company rewards loudly making a slight improvement to the site over quietly making a huge improvement.
It is amazing how people will redefine hating Trump supporters, men, white people, "the 1%", western civilization etc as somehow 'not hateful'. "My hate isn't hate, I just dislike things that are bad." Right.
You're not familiar with, e.g. tankies, I take it?
The problem with reddit isn't technical, it's human. A large group of people will ruin literally anything.
and I don't especially want the admins of any one site to be able to erase my entire existence off the net on a whim.
single/token login from known net authority agents is common. I'd rather embrace that kind of thing than the idea that any one site should be our de facto forum and communications standard.
I was reminded of this recently when a personal favorite BB forum disappeared, taking years and years of niche automotive data with it. Data that was rare from the get-go, and irreplaceable now.
I'd rather decentralize my 'data-stores'. Reddit will disappear one day, and it's up to them to be a good actor towards the archival groups, providing data bandwidth at a time when investors have become disinterested and paying the light bill is near impossible.
I wouldn't count on that.
Keep archives, repeat information elsewhere, and most importantly: don't embrace adding and relying on centralized structure where doing so inherently adds weakness and fallibility.
Reddit’s content is already dumped and cloned in near-real-time.
That'd fix most of Reddit's issues and then some.
A normal forum will have different sections to discuss everything even mildly related to the overarching topic, and a number of areas to talk about things not related to the topic. But Reddit doesn't have those features, so subs have to resort to these half measures or allow multiple daily posts about the same 5 topics.
The last I heard they still weren’t profitable.
It is far better than it was over a decade ago.
As an aggregator, it is nearly perfectly aligned with my interests (except for the year they spent heavily promoting Jimmy Fallon).
I read CNN multiple times a day.
As a news site, it is nearly perfectly aligned with my interests.
Where did I lose you?