I don't really get reddit's efforts to clean up their site.
They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of (since some users take any possible opportunity to mention them to the admins and are met with silence or half measures).
They even came up with the concept of quarantined subreddits. I honestly can't think of a single honest reason for them existing. Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain, instead of just outright banning it (unless you want to support it discreetly)?
To be clear, I'm not accusing reddit admins of shadily supporting horrible communities. I'm just saying that, from the perspective of a user that sees them fight the battle, they're doing it so badly that it almost seems they don't want to win, and I wonder what's happening behind the scenes for that to happen.
What is going on at Reddit is that cancel culture has figured out how to take down content it does not find acceptable and, worse, how to persecute those whose content it has taken down or disagrees with.
So, using a combination of email blasts, instant messaging, and more, these groups zero in on a site or individual. This can involve simple brigade-type tactics against posted comments and stories, complete with abuse of the site's reporting process to flag users and content they disagree with. Sometimes it involves publicly shaming those higher up for permitting something they may not even be aware of, with the common innuendo that if they don't take action, they must agree with whatever hateful aspects (if any) the target has.
When any social site becomes large enough to receive mentions on other sites and media it immediately becomes a target of groups seeking to control the message and attack those who do not wholly support that message. Even minor disagreements can be sufficient to attract ire.
The moral majority is an authoritarian regime, and it will have a more damaging effect than the feared religious right that people constantly brought up as an internet or social bogeyman.
The very fact that they now describe ideas they don't agree with as making them physically uncomfortable, to the point that they declare it violence, should scare anyone. This is code for saying no limits are required to remove the threat.
1984 isn't government oppressing people, it is people oppressing people
* btw - only on reddit can you be banned from subreddits you've never visited, for activity on another sub (including just voting, apparently)
> Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain, instead of just outright banning it (unless you want to support it discreetly)?
Perhaps this is a way to sidestep claims of censorship. Deleting subs can embolden some community themes and messages by allowing them to play victim of censorship and bolster claims of conspiracy via attacks from left/right/government/lizard people. So let them carry on while alerting people of what lies ahead.
It is naive to think these bad users currently stay confined to their hateful communities. They use the rest of the site too. There is even an argument to be made that the rest of the site could be improved by banning these hateful communities, decreasing the value these users get from the site, and thereby decreasing the likelihood that these users will continue to use the rest of Reddit.
The sibling comment to yours links to the study of users from /r/fatpeoplehate and what happened to them. Short version: most of them stayed, and behaved better in better moderated subreddits. The implied conclusion is that a very significant part of noxious community behaviour is situational.
This is most definitely the case. The outrage inside Reddit is massive when there are signs of censorship, even if it’s just a very vocal minority: they set the tone.
Keeping this group of people happy, while at the same time trying to maintain a somewhat “clean” site, is a delicate balancing act. I am certain that quarantined subreddits are a result of this.
My take on quarantined subreddits is that it’s just a tool to remove certain communities from the mobile app so they don’t get in trouble with Apple / Google (mostly Apple, I’m sure).
I think there's something to this, because on at least the iOS app, a quarantined subreddit basically looks like it doesn't exist at all.
And also how to deal with communities that should absolutely not be spilling over into the rest of Reddit. Reddit, I think rightfully, quarantines a lot of ED related subreddits.
My impression is the reason is that there really isn't widespread agreement on the harmfulness of such communities, so they're sort of trying to split the middle. And perhaps also to advocate for seeing them as awful, per corporate sentiment.
The principal example that comes to mind is TD, which seems more obnoxious and tasteless than some sort of evil.
Subs that truly are beyond the pale by wide consensus seem to get dropped immediately and without objection.
As to not wanting to win, well, yes, dropping everything controversial means losing a lot of page-hits, and as a business, they're likely wary.
> They even came up with the concept of quarantined subreddits. I honestly can't think of a single honest reason for them existing. Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain, instead of just outright banning it (unless you want to support it discreetly)?
In what world is refusing to ban something from your online platform equivalent to "supporting it" (discreetly)? Reddit is a business, the business of getting ad money for providing an open discussion forum. They are free to decide what they host and what they don't, but simply hosting something (which in their case means doing business with people that post that content) doesn't mean they support it. Supporting something is not the opposite of not supporting it.
It's like saying that Amazon _supports_ pornography because AWS allows pornographic content to be hosted.
The missing component of your thought process is who, exactly, deems subreddits not worthy of existence. Many people view pornography as an extremely damaging problem and addiction; would you want all 18+ subreddits removed because somebody “can’t think of a single honest reason for them existing”?
> who, exactly, deems subreddits not worthy of existence.
That would be the company which owns reddit, through the admins. Is it fair? No, it isn't, but reddit has no requirement to be fair (although it would be nice if it was).
Ten years ago I would have said, "No censorship. Turn on the firehose of information, and let me choose what's 'acceptable' and what's not."
No more.
That presumes some sort of equal playing field and a meritocracy of ideas.
That's clearly not the case. The reality is that good-faith content takes more effort than bad-faith content.
The 2016 US Election in particular showed us what happens when a platform takes a "hands off" approach - platforms become absolutely saturated with false, hateful information... much of it produced by large and even state-level organizations.
It's not just evident when it comes to political content.
Craigslist. Amazon. Etc. None of these platforms work with a completely laissez-faire approach on behalf of the platform owners. To think otherwise is utterly naive.
Platforms need to curate. There's no other way. Yes, this means we will need to place trust in those platforms and monitor their curation. Platforms will, in turn, have an obligation to be transparent w.r.t. how they curate.
"the dictum that truth always triumphs over persecution, is one of those pleasant falsehoods which men repeat after one another till they pass into commonplaces, but which all experience refutes. History teems with instances of truth put down by persecution. If not suppressed forever, it may be thrown back for centuries."
> The 2016 US Election in particular showed us what happens when a platform takes a "hands off" approach - the platform is absolutely saturated with false, hateful information...
And yet, presumably you were able to employ critical thinking, see past all this, and vote the "right" way despite an alleged deluge of hateful misinformation, as were 48%, a plurality, of your fellow voters.
Where is the evidence that the level of misinformation is any higher now than in the past, or that censorship (whether by public or private entities) is suddenly a desirable thing?
History has shown that we didn't need to censor and oppress Communist thought in the US during the Cold War because of its alleged threat, and even because Communist thought was being supported by foreign actors. In the upshot, communism discredited itself just fine, and in the meantime, our censorship made us intellectually and morally poorer. Why should things be any different with the new far right?
> Where is the evidence that the level of misinformation is any higher now than in the past...
See the above overview for a start.
> History has shown that we didn't need to censor and oppress Communist thought in the US during the Cold War because of its alleged threat, and even because Communist thought was being supported by foreign actors.
The similarities to today's situation are easy to see, but they are dwarfed by the differences.
1. "Communists" had no way to create and disseminate information on such a massive scale, in a manner that is nearly indistinguishable from good-faith actors. There's no analog for that in the Cold War's "Red Scare".
2. During the Red Scare, the possibility of Russia infiltrating and influencing the US to any real degree was laughable. It's not laughable now. It demonstrably happened and is happening.
3. "Censorship" is when a person is not allowed to express their views. That is not what is being proposed here. This is about refusing to hand a megaphone to bad actors.
Reality does not conform to your ideals of an egalitarian marketplace of ideas. Content farms with modest funding and modest staffs can outproduce and out-influence millions of regular voices. Relying upon the populace to suddenly become savvy is not realistic. Either we need to turn this into a full-scale information war of competing content farms and state-scale misinformation campaigns, or platform providers need to combat things at that level.
> And yet, presumably you were able to employ critical thinking, see past all this, and vote the "right" way despite an alleged deluge of hateful misinformation, as were 48%, a plurality, of your fellow voters.
Sure, yeah.
I also have an IQ in the 90-99th percentile range, a degree in computer science and have been immersed in online culture since before web browsers existed.
The vast majority of the human race does not have those sorts of advantages. Realize that the HN demographic is not representative of the entire human race.
Generally, your points are well taken. The only quibble I have on a factual basis is here:
> 3. "Censorship" is when a person is not allowed to express their views. That is not what is being proposed here. This is about refusing to hand a megaphone to bad actors.
Censorship, historically, was preventing objectionable books from being published. In many forms of censorship, there was nothing theoretically preventing an author of objectionable books from handwriting them and distributing them privately, so long as they didn't draw too much attention to themselves. The historical censors could have made an identical argument to yours: they were merely depriving the objectionable view of a "megaphone" (the publisher), not eliminating the view entirely.
But given what you have asserted above about the importance of breadth of reach for views, it would seem to me that preventing a view from being widely disseminated in any practical way to people who would otherwise freely choose to read or hear the view is the very essence of censorship.
But more broadly, my real problem with your viewpoint is perfectly exemplified here:
> I also have an IQ in the 90-99th percentile range...
> Relying upon the populace to suddenly become savvy is not realistic.
Firstly, with deep respect, this is incredibly arrogant. But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.
But if you are right, don't you see that this undermines the bedrock assumptions and principles of democracy? The masses must be led, and shown the "right" information and protected from the "wrong" information? By whom? And what is to prevent those elite and enlightened few from acting in their own interests rather than those of the ignorant mob?
I see within your argument, which may not be factually wrong, a powerful argument in favor of authoritarianism and oligarchy. Any argument that leads to such conclusions is worth a fair amount of scrutiny, no?
Even if it is a fiction that all voters are equally intelligent and informed, sometimes we have found that certain fictions are very important prerequisites for creating a desirable society. For example, the fiction that "all men are created equal". They aren't. But we use that fiction in very important ways to create a more just and equal society. Another is the fiction of free will. Free will does not exist. But still, we treat people as if they had free will, because the alternative is disempowering and decouples people from any responsibility for their actions.
So, I think "the demos makes more-or-less informed choices that are more-or-less in its own self-interest" is another of those necessary fictions, necessary to prevent us from regressing to feudalism or worse.
> Firstly, with deep respect, this is incredibly arrogant. But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.
I meant it purely in a humble way, and in the spirit of recognizing my own privileges. Not as some kind of value judgement. I am nothing special by HN standards, that's for darn sure. I'm probably below average here.
But it's important to examine our own privileges.
Intelligence (while admittedly not well-represented by a single number like IQ) is more or less a lottery we win or lose at birth. Bragging about intelligence is like bragging about being tall. It's not something earned; it wouldn't make sense for anybody to ever brag about it.
Critical thinking is a skill that is a serious luxury. It relies on some combination of intelligence, time to hone the skill, access to education, access to information, and/or a serious autodidactic streak. A lot of people live lives where one or all of those is missing, often through no fault of their own.
Other things help me to separate online misinformation from information as well. I was in high school / college as the web came of age, at a time when I had the time and inclination to dive into it from the beginning.
I was born into a stable middle class household in which we could afford a computer and internet access. More privileges many do not enjoy.
All those factors were crucial. Take one away and I may well have struggled to comprehend the difference between real and fake news.
My sister is a prime example. Many of the same advantages as me, but born earlier. Missed the internet revolution. Never been comfortable with technology. Constantly falls for fake news. Is that her fault? Well, yes, ultimately. We must all be responsible for ourselves.
But it's also essentially "a well-financed misinformation industry" versus "a middle-aged woman who is technologically illiterate" and wow, that is NOT a fair fight by any measure. If we are basing the future of our society on that fight having a favorable outcome, we are really fooling ourselves.
> But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.
It's not even easy for those with the intelligence and inclination. The "enemy" here is not a crazy person, dressed in rags, wandering down the street muttering that the moon is made of cheese.
The "enemy" is well-funded organizations who have serious time and money -- and sometimes state level backing -- and are devoted to pushing a mix of legitimate and fake news that takes serious effort and knowledge to separate from the real thing(s).
> I see within your argument, which may not be factually wrong, a powerful argument in favor of authoritarianism and oligarchy
I certainly don't want that. If anything I'd say my arguments point towards technocracy where we would favor experts over pure mob rule. That certainly presents its own problems though.
There are no easy answers.
A purely laissez-faire free for all absolutely does not work. We see it time and time again. Bad-faith information always utterly drowns out good-faith information, like spam emails or search results drowning out the legitimate stuff.
Solid, best-effort reporting takes time and sometimes the results are boring. Willful misinformation is orders of magnitude easier to produce.
Education and critical thinking are utterly essential. We should invest heavily in these areas. Democracy relies on them, utterly. But it is also fantasy to think everybody will become enlightened information-processors, and even those of us who are competent at it need help wading through oceans of garbage. At some point, we do need trusted providers of curation. While not ideal, it is realistic, unlike hoping that most members of society evolve into galaxy-brained autodidacts.
This is how any developed society functions. We can't all evaluate drugs and therapies, so we need the FDA. You could replace it with one or more private-sector ventures, perhaps, but at some point you would need to rely on something. We can't all make clothes or food so we trust others. We trust others to design our roads and keep them safe. If I had unlimited time and brain cells it might be nice to do these things myself, but it is not even a little bit realistic.
> The missing component of your thought process is who, exactly, deems subreddits not worthy of existence.
The same people that deem them worth quarantining, presumably.
We could have a debate about the benefits and drawbacks of moderation, but by the time you have quarantines in place, the ship of not considering yourself a moderator has long sailed, which is why it doesn't make sense to me as a measure.
The bans, as announced in Reddit's annual Transparency Report this past February, apply specifically to actions in quarantined subreddits:
> Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
Reddit isn't even remotely anonymous. Pseudonymous at best. Anonymous forums are completely incompatible with user identity centric moderation mechanisms (Karma, Gold, rep etc...)
Once you assign usernames and attach any type of value counter to an account, you've effectively dumped true anonymity out the window due to essentially creating a pseudonymic marketplace.
People may think "How pseudonymous do you need to be?" In today's world? Very. Especially given the propensity for deplatforming and reputation based social warfare.
Anonymous relative to Facebook and all the other social media sites. Sure, having user accounts is not completely anonymous, but it is good enough for 99% of people.
Not having a real name policy isn't the same as being anonymous. Back in the day my irc nick and my WoW username were far more well known than I, the human, will ever be.
Anonymous is probably the wrong term, but I'm not sure there's a better one.
I think the point is that there's a distinct disconnect between your profile on Reddit (and any other sites you share the username with) and your physical person, at least up until the point where you (ideally) make the connection public yourself if desired.
It's anonymous insofar as the "person" on reddit is not easily attributable to the physical person themselves.
My point stands, though - despite the misusage of the word anonymous there's a real difference between how facebook treats identity and how reddit does. And it massively affects how communities in both interact.
You can make a new reddit account in 20 seconds if you don't want people to tie the comment you're about to make to your main account which is already a step removed from you. If you want Throwaway8675309 to develop a reputation, that's an option you have, not a requirement.
>They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of (since some users take any possible opportunity to mention them to the admins and are met with silence or half measures).
I think their main challenge is they have a cultural/temperamental resistance to making decisions on a case-by-case basis. Any time there is a problem it seems like they try to overgeneralize and come up with a sort of rule or heuristic that can apply in every case and situation. But since the "toxicity" of a community is usually more subtle than that they end up always being way too reactive.
> Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain
Since we all know we're talking about The_Donald, I'll just mention them specifically. Alexa traffic data considers Reddit the #6 website in the US -- they are huge, and their major actions will be noticed. If they ban their top pro-Trump community, Fox News will be talking about it later that day. Reddit execs will be answering questions in front of the US Senate a week later. Phrases like "election interference" will be thrown around. Nobody wants that.
So instead they do it subtly: quarantine The_Donald, which means they don't show up in site-wide lists, can't be accessed through the mobile app, and can't be indexed by search engines. This stems their growth. Later, start replacing their moderators. Eventually you've taken it over and shut it down, and they leave. Now if Tucker Carlson wants to tell his audience about it, he has to explain to a bunch of 60-somethings what a "forum moderator" is. Not happening.
Honestly, I think what they did worked out best for everyone. Reddit is happy to be rid of them, and they're happy with their new website, TheDonald.win. Everybody wins.
Probably they profit from some horrible communities and lose money on others, with no clear distinction between the two, so they started to quarantine the less profitable ones to stigmatize them before banning them.
They’re profiting off anger so “old” ones being angry has no downsides.
I think quarantined subreddits play the same role as "NSFW" tagging on posts: They're not banned, but it improves the user experience to not see them by accident.
This is correct, but the toxic, racist, alt-right, pro-fascism posts covering the front pages of sites like voat and ruqqus make me and many others wholly uninterested in these sites.
If someone wants to provide an alternative to reddit but they don’t want to moderate/censor content that makes people uncomfortable, Reddit becomes the lesser evil.
I never visit reddit's frontpage; I visit the individual subreddits that I frequent instead. In fact, I wonder why the frontpage exists. Voat would do themselves a service if they removed the frontpage.
These sites rely on network effect, which become prominent after a certain threshold of users join the site. If they reach critical mass, they will suck in the rest of the extreme-right, then the republicans then slowly everyone, because people like controversy and will seek to go where they can have it.
Reddit is clearly a left-wing site now in ways that it wasn't when it started (when libertarianism was the dominant ethos). All their "common word" subreddits, such as news, politics, technology, and science, are purely leftist echo chambers by now. The few right-wing corners left are already isolated and ready to make the move. In fact, they may even decide they should jump ship. What better time for /r/t_d to go somewhere else than now, when people's interest is intensely focused on elections and they can easily rally all their users to follow them? Wherever they go, more people of other persuasions will follow, because people love telling other people that they are wrong on the internet.
The question is whether those other competitors can handle the traffic and whether they give moderators the features that reddit does.
I think it was set up (or at least supported for this long) mainly due to thedonald subreddit. The admins were too weak-willed to outright ban that subreddit and deal with the PR and political fallout, but they wanted to prevent those posts from filtering up to the main site. Now that all the Trump people have left reddit I would guess that the concept of “quarantine” is no longer needed and goes away soon, in favor of simply banning subreddits. But we can all rest easy knowing that we can still watch people die or all sorts of weird porn - reddit has really cleaned up their act.
If you let a bad community be bad in their dark corner, they won't spread as easily to the rest of the site. You can easily see that the accounts are toxic by seeing that they participate in garbage communities. And it's a great honeypot for authorities, be it for political astroturfing and nationstate stuff, or for borderline-illegal content.
>I don't really get reddit's efforts to clean their site.
You don't get why an online forum might have an interest in ensuring every other post isn't a dick pic?
If you can understand that (and maybe you don't), then you DO understand their interest in moderating their own website, whether or not you agree with their moderation standards in practice. A dick pic may be obvious to you... but so long as the picture depicts someone >18yo, it is legal and free speech.
>They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of...
Again, isn't it their right to determine in which forums/threads dick pics may in fact be appropriate, and in which they aren't and should be moderated/removed?
> Your account has been suspended from for breaking the rules. Your account has been suspended for 3 day(s).
>> You recently upvoted a post or comment that was determined to be against our policies. Abusive content is not acceptable on Reddit, nor is engaging with it. Please be thoughtful about the content that you interact with.
I got one of these warnings, and I've narrowed down what I engaged with to a Hard Times article. Strange times when reddit finds satire too spicy for their site.
Whether they intended it this way or not, users will now be worried they're upvoting the "wrong" thing and will be less likely to vote unpopular opinions and political views, just in case.
I imagine this is reactive, since they can't proactively moderate their content. When they do get around to moderating (like checking reported comments), they can then penalize people who promoted the unacceptable content.
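Mechanically, a reactive system like that could be quite simple. Here's a minimal sketch of what a "penalize the upvoters after the fact" flow might look like; this is purely illustrative, with invented names and thresholds, and not Reddit's actual implementation:

```python
# Hypothetical sketch: once a post is reviewed and removed for breaking policy,
# look up who upvoted it and escalate consequences per user.
# All names and thresholds here are invented for illustration.

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class EnforcementState:
    strikes: defaultdict = field(default_factory=lambda: defaultdict(int))

    def record_violation(self, upvoter_ids):
        """Called after moderators confirm a post broke policy."""
        actions = {}
        for user_id in upvoter_ids:
            self.strikes[user_id] += 1
            actions[user_id] = self._action_for(self.strikes[user_id])
        return actions

    @staticmethod
    def _action_for(strike_count):
        # Mirrors the announcement's "warnings, followed by further
        # consequences like a temporary or permanent suspension".
        if strike_count == 1:
            return "warning"
        if strike_count <= 3:
            return "temporary_suspension"
        return "permanent_suspension"

state = EnforcementState()
print(state.record_violation(["alice", "bob"]))  # both get a warning
print(state.record_violation(["alice"]))         # alice escalates to a temporary suspension
```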
I think there should indeed be penalties for upvoting e.g. death threats.
They are trying to train their user base to moderate content correctly. If it's against the site's policy, users should downvote the content and report it.
This actually strikes me as an intelligent (albeit Orwellian) approach to rules enforcement.
If Reddit's goal is to encourage broader self-censorship of wrong-think, the message makes perfect sense. If Reddit informed a user that they broke Rule A, a rational user would learn a specific lesson. They would learn that they can avoid punishment by not interacting with content that breaks (or might break) Rule A.
By punishing without an explicit explanation, a rational user must conclude that, to avoid punishment, they cannot risk interacting with content that breaks (or might break) any of the rules on the website.
> Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
Until now that has mostly been warnings for posting in "bad" subs. Now after most of those are gone I guess it's time for the next phase.
> I’d like to share an update about our thinking around quarantined communities.
> When we expanded our quarantine policy, we created an appeals process for sanctioned communities. One of the goals was to “force subscribers to reconsider their behavior and incentivize moderators to make changes.” While the policy attempted to hold moderators more accountable for enforcing healthier rules and norms, it didn’t address the role that each member plays in the health of their community.
> Today, we’re making an update to address this gap: Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
This seems strange to me. If this content is violating reddit's policies then why don't they ban them? It seems awkward to have this middle ground of "semi-banned" content where users get suspensions if they interact with it too much. Why create these invisible land-mines for users? I imagine it's because banning this content outright would make Reddit admins appear even more biased, or because the content doesn't actually violate any policies but puts off advertisers.
There seems to be a divide between people who think discourse is a progressive art of the exchange of ideas, and those who use language as a pretext for political struggle, where the role of ideas is to act as provocations that yield a person's "true" alignment and identity. The latter view means engaging with ideas compromises your ethical purity, whereas the former means people can express abhorrent ideas without physical consequences.
There isn't a short answer to what's probably an eternal conflict, but I do know a lot of people in tech who might benefit from reflecting on the question of, "are we the baddies?"
The dichotomy seems to be reflected in the growing divide between rational, scientific inquiry and modern progressive liberal arts, which are rooted in critical theory.
I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act. If they want to police lawful content that is fine but then they are no longer an open platform and they should forfeit their Section 230 protections. I dislike the ability for online platforms to have their cake of protection against lawsuits and also eat their cake of policing lawful content they don't like.
So no site should be allowed to have moderation policies that include things like "no trolling," "no flaming," "no swearing," or anything else like that for things that don't cross into "illegal" abusive speech? Without becoming a target for being filled up with illegal shit to overwhelm the moderators and cause legal trouble for the owners?
> So no site should be allowed to have moderation policies that include things like "no trolling," "no flaming," "no swearing," or anything else like that for things that don't cross into "illegal" abusive speech?
Any site can have any rules they want. But they shouldn't expect to be able to manipulate and control discussion and face no backlash from users.
> Without becoming a target for filling up with illegal shit to overwhelm the moderators and cause legal trouble for the owners?
'Trolling', 'flaming' and 'swearing' are not illegal, and honestly, most complaints about 'trolling' I see are just people who disagree with what they're reading.
> But they shouldn't expect to be able to manipulate and control discussion and face no backlash from users.
You're saying they should face backlash from government regulators not from users.
If there's a big user backlash, those users are already free to move to whatever site they want, or make their own.
---
Re: your comment about trolling, etc, not being illegal: yes, exactly. So if a site wanted to enforce some sort of "family friendly" policy, you could simply bomb them with truly illegal shit instead and get them in trouble with the regulators for not being able to keep up, as they would be liable for that, since they aren't an "anything legal goes" platform.
> You're saying they should face backlash from government regulators not from users
I never said that. But it's probably a good idea to have some sort of oversight for a massive platform that censors and manipulates discussion of millions of people. You may think their policies are totally justified and correct (they aren't), but the specific opinions they choose to promote may change to something you are against.
> If there's a big user backlash, those users are already free to move to whatever site they want, or make their own.
Yes, they are also free to shit on the website and point out its flaws.
> So if a site wanted to enforce some sort of "family friendly" policy, you could simply bomb them with truly illegal shit instead and get them in trouble with the regulators for not being able to keep up, as they would be liable for that, since they aren't an "anything legal goes" platform.
Trolling is not illegal, and it's not even a specific thing. It's a very ambiguous term used to describe 'comments that make me feel bad'. Trolling and posting illegal shit are completely different; sure, ban illegal stuff - you kind of have to - but don't use 'trolling' as an excuse to ban stuff you disagree with.
Separate reply for the other side of this thread. I honestly don't know where you're getting it with your comments re: trolling.
Do you honestly believe that a discussion forum site that wanted a family friendly policy shouldn't be able to have any liability protections? That if someone started flooding them with child porn, say, at a rate they couldn't keep up with, they should have to shut down because they'd otherwise be liable for all that content due to their having a content moderation policy that forbid certain types of legal speech (say, swearing)?
Dude, I honestly don't think you understand what 'trolling' means. You already agreed that trolling is not illegal, why do you keep trying to use examples of illegal content to demonstrate why 'trolling' should be banned?
And what does child pornography have to do with swearing?
> Do you honestly believe that a discussion forum site that wanted a family friendly policy shouldn't be able to have any liability protections?
You keep bringing up trolling as a term in general, I tried to make it specific.
The pitch being made in this subthread was "I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act."
So you lose liability protection if you censor lawful content.
So if you want to ban swearing, you become liable for your users' content. Which makes you a WIDE OPEN target for bad actors who could post whatever illegal content they want faster than you could moderate it.
I'm trying to illustrate why I think `cwhiz took an absurd position, and am not using the term "trolling" in general now. I've given a specific example of "legal content some communities might want to prohibit" as well as "illegal content that bad actors could use to get the owner of that site in trouble in this proposed world."
Okay, to make it clear - I don't think sites should lose 230 protections for having a moderation policy. I do think that the excessive manipulation and censorship of dissenting opinions on sites such as reddit is a bad thing however.
> But it's probably a good idea to have some sort of oversight for a massive platform that censors and manipulates discussion of millions of people. You may think their policies are totally justified and correct (they aren't), but the specific opinions they choose to promote may change to something you are against.
Think through the mechanics of how this oversight would work.
Someone will have to decide "is this site sufficiently neutral in how they moderate UGC." So let's spin up a government department for that; maybe call it part of the FCC, maybe make it a new one, who knows.
Now let years, maybe even decades go by. Now let's say some site you like gets essentially taken out - since massive sites aren't going to be able to do manual moderation of everything - by a regulator who leans to the opposite side of the political spectrum you do.
Now you don't have much recourse, since you aren't free to go off and try again - the regulator would just get that site too.
I have a hard time seeing how this is better.
And this is a weird stance for conservatives to be preaching, since the previous similar thing, the fairness doctrine, was no favorite thing of theirs.
This is one of those things where you can't say "only the government can solve this problem" unless the only speech you want to see protected is fairly-mainstream government-endorsed speech.
> Think through the mechanics of how this oversight would work.
There are many ways this can work, and it's not necessarily simple. I don't have a specific proposal.
> Now let years, maybe even decades go by. Now let's say some site you like gets essentially taken out - since massive sites aren't going to be able to do manual moderation of everything - by a regulator who leans to the opposite side of the political spectrum you do.
1) Manual moderation of massive sites is already going away for the most part.
2) Sites like reddit have swarms of moderators and the ability to report content, and legal notices can also be sent to take down specific content.
3) We have laws about 'illegal content' with which these websites are already complying on a massive scale.
4) My suggestion was an oversight of excessive moderation and manipulation of opinion, not what your example suggests, which is the opposite. You literally don't have to moderate anything to not be charged with 'excessive censorship' or whatever.
> Now you don't have much recourse, since you aren't free to go off and try again - the regulator would just get that site too.
We have a justice system for a reason.
> I have a hard time seeing how this is better.
We can all come up with 100 different imaginary scenarios where some very generic suggestion may fail. I don't see the point. Do you think there should be no laws about content? No libel laws? No copyright enforcement? Where do you draw the line?
> And this is a weird stance for conservatives to be preaching, since the previous similar thing, the fairness doctrine, was no favourite thing of theirs.
I'm not a conservative, but what I find more jarring is the people who want sites like facebook forced to be censored and moderated politically, while reddit can do whatever it pleases as long as it aligns with their political ideology.
> This is one of those things where you can't say "only the government can solve this problem" unless the only speech you want to see protected is fairly-mainstream government-endorsed speech.
Again, I did not suggest that there should be laws about what content is allowed. I actually think we already have too many such laws - I would personally do away with DMCA and other such bullshit. What we should have is a check on mass-scale censorship and manipulation of opinion by corporations.
> There are many ways this can work, and it's not necessarily simple. I don't have a specific proposal.
I was talking about the proposal from `cwhiz, in which the government would revoke protections under existing current law. I think there are holes in that suggestion that are miles wide, that will extend to pretty much any other proposal that depends on the government deciding if a site is over- or under-moderated.
Without a specific proposal to talk about, or specific opinions on "should a site have to relinquish editorial control or rule-setting ability to enjoy liability protections from stuff done by users" (I say "no"), I don't see much more interesting stuff to debate. Those are real, active questions and proposals being made; other hypotheticals could go on forever but are less relevant to now.
But if you aren't interested in those specific points, I don't know why you're participating in a thread that was specifically about the questions around that current law.
So, you're arguing with me about something I never even said, while ignoring my point about excessive censorship and manipulation of public opinion? 'Revoke 230' or 'keep 230' are not the only possibilities in this universe.
That's actually not the point of Section 230. The point is that if you remove Section 230 and then you accuse someone of trolling, flaming, or swearing, that could be considered libelous.
If you would like to learn more, and see exactly what happened when Section 230 was not in effect, check out this case:
Stratton Oakmont, Inc. and Daniel Porush,
v.
Prodigy Services Company, "John Doe", and "Mary Doe".
Stratton Oakmont (yep, the one that was made by Jordan Belfort, the Wolf of Wall Street), won that case against Prodigy and, yes, John Doe and Mary Doe. You, the user, are John Doe or Mary Doe. Section 230 protects you.
What's not the point of section 230? I literally didn't say anything about section 230 or suggest that sites that moderate should have those protections revoked.
You are right, my apologies. You were responding to another person who was having a discussion about Section 230. I thought you were adding on to their thoughts about Section 230. If your point is entirely that users can backlash against a platform, then your point is correct.
I'll also add that the excessive censorship and manipulation of opinion on massive sites such as reddit, facebook, youtube, etc. is somewhat disturbing. People tend to think that only governments can censor excessively, but the fact is most public discussion in the West now happens on a few massive private platforms, and these platforms have a lot of power over public opinion.
Free speech is not so daft a concept. Where some restrictions are necessary, the principle of “viewpoint neutrality” still works to prevent censorship. This is the approach that First Amendment jurisprudence takes. So, where the government allows pro-abortion expression, it must also allow anti-abortion expression. But it can still set volume limits and control crowd movements and so forth, as long as it does not discriminate on the basis of viewpoint. And this applies even where its more general obligation to allow all expression does not hold.
But Section 230 doesn't account for it... I honestly don't think it's practicable as much as it pains me. You rely on some goodwill from the site, and if you open it to lawsuits you will get none.
It's actually very insidious the way HN (and Reddit) control what we see though. There is almost no transparency, accountability, or oversight. Moderators make mistakes and we never even know it. We don't know what we don't know. The culture of silicon valley and hacker culture are being manipulated by a small number of employees at a proprietary investment firm. This is very far from ideal even if you do trust the intentions of the current employees.
I'd be in favor of a law that required uncensored free speech on all forums but that also allowed users to opt-in to moderation. Even if 99% of users opted in to moderation it would still leave an escape valve for totally uncensored free thought.
I think you are, sadly, completely correct. And that is why Section 230 is, IMO, unworkable as it was conceived, as it provides exceptions on a justification that is completely false.
Honestly, I don't see what the best alternative is. Probably the segregation of comment platforms and sites. If Section 230 were substituted with something effective with regard to speech protection, or conversely the right of platforms to curate comments, it would still only apply to the US, and it would be a significant incentive to move the comments elsewhere.
There should be a users/penetration threshold above which enforcing a cultural code of conduct waives immunity. GDPR uses a similar threshold, and unless you're seriously arguing that "trolling", "flaming", and "swearing" are being moderated successfully on e.g. Twitter, it seems practical.
Without section 230 protection it would be impossible for the site to stay open, so in effect you're saying sites that moderate user content should be shut down. Does that seem like an accurate interpretation of your position?
Section 230 is another classic legalist versus legal realist versus "whatever absolves me" trap that people fall into on this forum.
Legal realism prevails with this stupid law; it will always prevail. You can moderate / review / editorialize content, including political content, while simultaneously enjoying Section 230 protections.
Ok. In my view that's unreasonable. Website operators should have the freedom to moderate their sites without fear of being shut down. Moderation is part of the product, they should have the freedom to create online communities that cater to specific audiences, including with respect to political partisanship if that is where their values lie. It seems unreasonable that all the users that enjoy the site as is should lose access to it because of some users who disagree with the site's moderation policies.
This isn't a practical idea. People aren't always aware of their own biases and critics are also inclined to see bias even when there is none. There is no practical way to determine bias levels.
> Reddit is very happy to claim they are unbiased, neutral, hate free.
Are they? Can you point to an example of such a claim?
> But some preferred subreddits will ban people for having the wrong gender or wrong race
Subreddits are run by users.
> They also tolerate hate speech, incitement to violence or even advocacy for genocide , if it matches their ideology.
Reddit would say they don't tolerate those things, you say they do, how do we determine who is right?
No, they are run day to day by users, but ultimately are at the discretion of the site owners. Case in point, this article we are discussing.
> Reddit would say they don't tolerate those things, you say they do, how do we determine who is right?
The behavior of the site owners? Reddit's site owners treat different subreddits differently and different viewpoints differently, although there are single-characteristic differences (gender, race, etc.) because of the people and topics those groups generate. Pre-censoring is not uncommon, which has led up to this wrongthink penalty.
If you're challenging the history of reddit, you aren't informed enough to be discussing this and it's not worth rehashing.
> but ultimately are at the discretion of the site owners
The site owners don't ban users based on race and gender which is the claim I responded to.
> Reddit site owners treat different subreddits differently and different viewpoints differently
Well yes, that is how standards work, "different views" are treated differently based on how they stack up with respect to the standard. I understand that you're arguing that reddit applies the rules selectively, but reddit would argue that they apply them fairly, so how do you suggest this kind of conflict be resolved at scale? Who should decide if reddit is being fair or not?
> If you're challenging the history of reddit, you aren't informed enough to be discussing this and it's not worth rehashing.
If you want to make an argument then make it, but condescending declarations of a monopoly on the informed position does not do your case any favors.
I'm not making a case. I'm describing the state of the reddit culture. Whether you believe it or not is incidental.
> The site owners don't ban users based on race and gender which is the claim I responded to.
Your statements were broader than that. Revisionism aside, you can get banned for voting on a post, and there are lots of rules (written and unwritten by each community) that can get you banned from the subreddit or reddit as a whole. Making statements about YOUR gender (or a number of other attributes) can also get you banned. Here, I'll do a google search for you (since you can't be bothered) for subreddit bans, to start you off:
https://www.google.com/search?client=firefox-b-1-d&q=subredd...
You aren't informed enough to be discussing this and it's not worth rehashing.
> I'm describing the state of the reddit culture. Whether you believe it or not, is incidental.
Very convincing. Oh, I know, you don't care about that in the least.
> Your statements were broader than that. Revisionism aside...
No, they weren't. You stated that the subreddits are ultimately controlled by the site owners, and I explained that the site owners don't ban people based on race and gender. Simple. Revisionism indeed.
> Making statements about YOUR gender (or a number of other attributes) can also get you banned. Here, I'll do a google search for you
The results of the google search do not support the claim you're making, like, not even a single result. Beyond that, mods can ban users for whatever reason they want, but the site owners do not ban people based on their race and gender; that claim is totally false.
> You aren't informed enough to be discussing this and it's not worth rehashing.
lol, a laughable assertion coming from someone who can't even cherry-pick google search results to support their argument. Stop wasting my time.
> cherry-pick google search results to support their argument.
Not cherry-picking; making an observation that documents the history. It's an ongoing, and well covered, topic which you can research without difficulty. Denial doesn't help either of us and causes me to question your sincerity. Good luck with whatever.
So no rebuttal to anything I actually wrote? Are you sure you're not the troll, because I'm just trying to have a discussion and what you've replied with is bad-faith snark.
What you're saying is that because they've decided to take a political stance the current law should be changed to impose new consequences on them. As the law stands today there are no consequences.
The core here is that answering the question of "did the site decide to take an unreasonable political stance?" vs "did the user decide to take an unreasonable political stance?" is not as black and white as you would like.
The answer is going to be a moving target. You're gonna have some government bureaucrats having to make rulings on these regulations.
And many of the proponents of this seem to fail to imagine a world where Donald Trump is ever not president. Imagine the power this would give the government against someone who plays fast and loose with 'facts'. "We've decided that the statements in this post were intentionally misleading, therefore they are a political statement, and the owners of the site promoted them in a way that they didn't do to this other post, therefore the owners are making a political statement and this is not a neutral site, which makes it liable for these other comments from other users that we're interpreting as violent threats."
Sure it will be possible to stay open: just double down removing offending content and defend from the lawsuits. Large companies routinely have multiple ongoing lawsuits.
> just double down removing offending content and defend from the lawsuits
This is unsustainable though, there is just too much content to filter. There is also the fact that lacking such protections the site will be targeted by bad-faith actors who intend to harm the site by posting lots of legally problematic content.
In essence, section 230 says "as a platform, you're not held liable for censorship of content you find objectionable, AND you don't take ownership of the speech you leave up."
Section 230 did not create a distinction between a platform and a publisher that required censorship rules to be applied fairly or evenly.
That was literally the purpose of Section 230. If you want them to not be able to do that, you want Section 230 to be repealed. You don't have to take my word for it, either, since one of the authors has been explicit about it. (The discussion in his tweets is largely framed around politics, but there is nothing that would make the argument apply only to political speech rather than speech in general.)
Unfortunately, I don't think you understand the extent to which you, as an individual user, would be under obligation if Section 230 wasn't in force. Section 230 protects you, the "user", as much as it protects the platform or "provider of interactive computer services". It is very likely that without the protections of Section 230 you, the "user", would have to decide whether my comments on your Reddit post, Youtube video, Facebook post, or even this post, do not bring you under undue liability, and thus would have to police those comments. If you are prepared to do so, then go ahead and remove Section 230 protections. I have a legal background and I am certainly not prepared to decide if all content related to my content is lawful content.
The aim of 230 was explicitly to allow doing so. And how would you even imagine this working? Have a bunch of conlaw students judge posts against complex tests?
What model makes sense and why should it be treated differently than other forums with moderation?
I'm a big fan of the idea that internet providers, DNS registrars, etc. need to be neutral to content in the way common carriers are, but I'm not convinced yet that platforms should have similar obligations. It seems that the appropriate response to schismatic disagreement with the "SysOps of the BBS" is to build another BBS as the "telephone network" is open to all.
At some point, that gets moved up a few levels though, if the BBS allows for groups that get moderated by other people, is the BBS still "the moderator" or is it those people that set up the group? And if you don't agree with their decisions, should you be able to launch a rival group on the BBS (it being a platform and all)?
I simply disagree that it moves up. We have had a clear delineation between what is "restrictive private property" and what is "open access private property". That separation is clearly delineated at the IP network + related necessities. This model has worked well, and I don't see how it is breaking now.
Someone else mentioned as well, but some very specific monopoly/necessity cases could change this delineation, however I could only see this being necessary with a much different environment than we have today (that is, situations without feasible common carrier alternatives, _maybe_ google search would be an example today).
I won't speak to your particular motives because I don't know you from anyone else, but I sometimes wonder if a core driver behind wanting to mess with section 230 is because people feel entitled to an audience at low/no cost to them, at the expense of platform owners, to blather on about whatever they please. To this end, I say "nope, you're not entitled... we have an open internet, drive your own damn traffic".
I think it's reasonable to argue that we have a bunch of local monopolies today, only that "local" on the internet is interest-based-local, not geographically local.
You mentioned Google Search; I'd add Youtube for video content, Twitch for streaming, FB for social networking, etc. Sure, you can broadcast video content without Youtube, there's vimeo or you could host it yourself, but it's really not a feasible approach. Much like you don't really have to rely on the local utility company to provide you with electricity and water, you could totally run a generator in your back yard and have trucks bring in fresh water and haul out waste water... but it's not a realistic approach.
> To this end, I say "nope, you're not entitled... we have an open internet, drive your own damn traffic".
I'd respond that we don't have an open internet, but more importantly: you don't have to be a platform, nobody is stopping Reddit from being a publisher.
The reason why they don't want to legally be a publisher is simply that their low-cost approach doesn't work if they have more responsibility for what they publish. They very much want to be able to pick and choose what is published, but they don't want to be considered a publisher.
I mean, I see merit in the position that monopolistic corporate entities, which you are forced to interact with if you want to participate in modern society, have government-like power over people and should be held to similar standards in terms of civil liberties.
But Reddit is effectively just a big message board. Such interpretation would force every small internet forum to either become a [0-9]chan or a closed space for invited members only. There is lots of middle ground where the owners/the community polices some speech and you can reasonably, realistically just go somewhere else.
> I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act
The entire point of 230 was to allow companies like Reddit to censor lawful content while still claiming protections. It was never designed at any point to create neutral platforms.
Lack of moderation was already a defense before 230 was created:
> CompuServe stated they would not attempt to regulate what users posted on their services, while Prodigy had employed a team of moderators to validate content. Both faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault as, by its stance of allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, Stratton Oakmont, Inc. v. Prodigy Services Co. found that as Prodigy had taken an editorial role with regard to customer content, it was a publisher and legally responsible for libel committed by customers. [0]
Section 230 was not designed to foster neutral platforms, it was designed to allow platforms/communities to moderate as they saw fit without opening themselves up to legal liability. To re-phrase that in modern terms, section 230 is one of the strongest protections we have today for community's Right to Filter[1].
When people say that they like Section 230 but it's gone off the rails, what they really mean is that they think Section 230 was always a bad idea. Or perhaps more charitably that they haven't really researched how Section 230 works or what its history was.
> I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act.
The whole point of Section 230 was so sites could do that. Before 230, there were court rulings that said if you moderate the content, you are liable for the content. 230 was enacted specifically to reverse that.
The problem is that "lawful content" includes things like explicit support for Nazism and the KKK (not 2020 "Trump is a Nazi" kind of allegations of Nazism, but actual Nazis). Even things like abstract calls for violence are protected under the First Amendment: https://en.m.wikipedia.org/wiki/Brandenburg_v._Ohio
I don't think people espousing this view understand the scope of what sites need to tolerate to comply. Would reddit be a better place if they let people say, "ni\\\\s belong in shackles" and, "death to Jews"? Because that's legal speech.
How might a democracy suffer if those who control the common venues for voters' political dialogue are given a monopoly over setting the bounds of acceptable speech?
I think it's an extreme stretch to say that Reddit has a monopoly. Or even Reddit + Facebook + YouTube. My response would be to tell people to rediscover personal websites, mailing lists, and IRC. I'd be more concerned if ISPs, DNS providers, and payment processors start policing users' politics (which they have, recently; see the Carl Benjamin AKA "Sargon of Akkad" situation, where payment processors threatened to cease business with Patreon unless they kicked him off the site). Being kicked off of Facebook or Reddit doesn't prevent someone from publishing their content. I've always told people: if you want to reliably get your message across, host your own website. It's not hard to pick up some basic HTML and CSS. It's only once the ability of people to publish their own messages is threatened that I get concerned.
As far as Reddit and Facebook go, understand that freedom of speech also means freedom from compelled speech. It's not just about being able to say what you want, it's about being able to not say the things you don't want to say. Making Section 230 contingent on hosting all legal content compels people to say a lot of things they don't want to say.
This is an absurd argument that the far right is plying as if they have some constitutional right to push their hateful noise on every site.
The internet is open. If you disagree with Reddit, Twitter, Facebook, etc, there are alternatives. Of course when the hateful all congregate, the farce of their positions becomes too clear. They need the legitimacy of Twitter or Facebook to ply their nonsense.
As an aside, it's interesting that those far right sites are some of the most fervently moderated, "snowflake" sites. Even The_Donald on reddit was hysterical in its suppression of even a simple question on the truthfulness of some fake claim. Instant comment deletion and account ban. Gab's extreme assertions of its moderation policy are uproarious when compared to the reason for its creation (you know - "Twitter silenced right wing voices!")
I don't think Hacker News should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act. If they want to police lawful content that is fine but then they are no longer an open platform and they should forfeit their Section 230 protections. I dislike the ability for online platforms to have their cake of protection against lawsuits and also eat their cake of policing lawful content they don't like.
I used Reddit a lot back in 2013ish but went off of it.
Looking at it now, it feels like that in general the site is really mean spirited.
Also, upvoting means that you're just seeing whatever the majority of voters want you to see. It works on a smaller community like HN because comments get tens of upvotes at most. Reddit's not a site for good discourse.
However there's communities where there is interesting discussion still and often if you want an opinion of a product or an issue you might find a thread on it from a real user.
But generally they've done nothing to stop the nurturing of racist/sexist/bullying content over the years.
Now you can argue about free speech being important for an internet platform.
But Reddit trying to change the community attitudes of their site now seems a little late.
Also there is a real sense that some users believe that Reddit is the shepherd of society and that they're fighting at its forefront. Which I think stems from the millions of apparent subscribers to subreddits.
But it's just a fancy internet forum.
Like this linked post is on a subreddit called Watch Reddit Die.
Apparently something about free speech being stifled in Reddit.
People need to realise they shouldn't waste their life on a website they don't like.
The internet hasn't been fully consolidated by Facebook and Google just yet, they can find or make a different community.
I have been a redditor for a decade and have a few remarks based on my experience.
I feel like reddit has always had recurring and overlapping cycles of mean spiritedness. For instance, during the summer holiday period it becomes notoriously harder to have a reasonable exchange of ideas there (don't ask me why; I'm not quite sure). This year, since the beginning of the COVID19 isolation period in March, we've been in a period of very high mean spiritedness. It could be a consequence of users' high stress levels.
Reddit does still have several excellent communities. I find that there's one thing in common between them: They have real, dedicated, human moderators enforcing strict, rational policies. Also: There are some weird cliques of power moderators on reddit who control dozens if not hundreds of subreddits. A good subreddit is usually not run by those cliques. Not to discount the hard work that some grunt moderators put into trying to run the large, clique-controlled communities, but the commonalities are hard to ignore.
On upvotes: Reddit should have clear policies enforced equitably and transparently at the company level, by human staff, and downvotes shouldn't be presented as the opposite of an upvote. I don't see a complete alternative system that could replace the upvote, though. They do provide different sorting options; what else could they do there?
On alternatives: People attempt to leave on a regular basis. The problem is that there's a much higher incentive for the people who are ostracized not only by reddit as a platform but by the associated community to leave than for people who are, for the most part, comfortable, even if the administration acts a little weird on occasion. All alternatives are quickly colonized by people that the majority of the reddit community (if there is such a thing) would rather not be immersed in: White supremacists, conspiracy theorists, fanatic nationalists, etc. For the more moderate people to be willing to move, they want to feel surrounded by a majority of people like them, not a minority.
The phenomenon you are describing is called "Summer Reddit", and that term comes about yearly because schoolchildren are home for the summer, with the average age of Redditors participating during this time lowering from 24 to 16. It also heavily correlates with an increase in edgy commenting and shitposting.
Reddit's issue is the issue of so many ad supported websites. Clicks sell ads and controversy draws more clicks than anything else.
It's been clear over the last few years that the popular subreddits are the ones that have the most dramatic content.
Smaller subreddits do tend to have better discourse, though, until eventually they get large enough to become popular and turn into the same echo chambers.
First, big brands don't want controversy displayed next to their ads. This is likely behind some moves Reddit has made in the past to clean things up.
Second, I'm curious what % of revenue comes from user-purchased post and comment awards like gold. I've noticed those in use quite a bit in some more inflammatory subs like /r/politics. I wonder how they vet payments from users to avoid providing a way for hostile actors to buy visibility while flying under the radar, i.e. avoiding conflicts of interest from accidentally monetizing trolls.
Big brands don't want to be advertised on r/watchpeopledie but probably have no issue with being advertised on r/relationshipadvice which is almost entirely a sub about controversy and thus drives clicks and revenue.
I've done some searching and it seems that reddit gold is a pretty small piece of reddit's revenue. Campaigns start at $50,000 with Reddit Ads.
That's pretty old data. And I realize I said Gold when what I meant was Reddit Coins, ie. the currency that lets people gild posts with various things like flames, and other colors.
I'm deeply interested in learning what % of revenue that is for them and how quickly it is growing.
A recent heavily awarded post was broken down by cost per award. The money spent on the awards that post received was over $24,000, and that was just in the new award types, not the older medal/coin types. Adding the latter in pushed the total to just under $36k for that single post.
As for advertising on Reddit, I view it as a fool's errand on the part of advertisers given the extensive popularity of PiHoles and adblocking software among the userbase.
Do you have a link to that breakdown? I'd love to dig in further. I suspect that there's a lot of Coins in circulation from Reddit seeding the system. The marginal cost to them is roughly $0. Without being able to confirm if users paid the list price for those awards, I'm not sure we can confidently say Reddit banked ~$36k revenue for that post.
That said, my concern is that it creates an environment for misaligned incentives. If say, hostile foreign governments have their troll farms buying awards to increase visibility of divisive and inflammatory posts in key threads, that furthers their agenda and creates a revenue stream for Reddit that is fueled by creating an environment where that flourishes.
I'm NOT saying Reddit is actively trying to do this as I think they'd be horrified to find this happening, but they have a fine line to walk here from a product and business standpoint.
Re: your other point, I do digital advertising for a living. I can assure you that the vast majority of users on Reddit are not using PiHoles. While many may be using ad blockers, those largely don't work for native ads in the mobile app.
Reddit has been compromised by actors who do not have free speech in their best interest. The other day there was an illuminating AskReddit post about transsexuals who decided to transition back and their experiences. That post got promptly removed by moderators, even though there was no hateful content within it at all. They effectively silenced the transsexual population due to an ever so slight conflict of ideology.
To think that the reddit administrators would go so far as to ban people for even interacting with content that THEY deem problematic.
The post you are referring to was deleted by the user who made the post not the mods....
And that doesn't even relate to this topic as what you're referring to is on a subreddit basis and is determined by a set of volunteer mods not reddit admins.
One of the problems I have is that as a moderator on some subreddits I have issues if I don't use the new reddit site. I prefer the old site much more but find it annoying to have to flip flop back and forth so I have settled on the crapdesign.
I just tested it and any post deleted by the user has [deleted] for both the post body and username. Posts removed by mods will have [removed] for the post body but will still leave the username untouched. A post needs to be both deleted and removed for it to appear like that specific example with [removed] for the body and [deleted] for the username.
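If it helps, the display logic that implies can be sketched in a few lines (the function and field names are just an illustration, not Reddit's actual code):

    # Sketch of how the [deleted]/[removed] labels seem to combine; names are illustrative only.
    def displayed_fields(user_deleted, mod_removed):
        body = "[removed]" if mod_removed else ("[deleted]" if user_deleted else "(original body)")
        username = "[deleted]" if user_deleted else "(original username)"
        return body, username

    # Only the combination of both actions matches that specific example:
    assert displayed_fields(user_deleted=True, mod_removed=True) == ("[removed]", "[deleted]")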
As another user has pointed out, it's both. The post was removed by moderators first, then the user deleted it some time after that. This information is only visible in the old design, not in the redesign.
Unfortunately, if the user deleted it, you can't really see the order of events. I don't believe you can see which action came first. I know that as a mod on reddit I have removed posts that were already deleted by the user.
>Reddit is dead and no one I know uses it anymore.
It's not dead, and I say thankfully, because it not being dead retains its status as a containment mechanism for what is easily the worst group of internet users in internet history.
I fear for the day Reddit falls and its userbase spills out looking for another avenue to be comforted with groupthink, newspeak and feel goods for the Brands We Love.
Reddit is a collection of communities. I like several of them, and think they are generally good and uplifting, /r/pixelart for instance. Some communities have some pretty bad people in them, but it is impossible to create a social media site at scale without like-minded bad users creating and/or flocking to bad communities.
People are in different stages of life and seek solace in other people having similar experiences. Thus, the sexually frustrated congregate, the depressed congregate, drug enthusiasts (and abusers) congregate, disgruntled ex-<insert organization> congregate, you name it, and angry/sad/anxious people with a shared condition will congregate in search of validation/confidence.
But the opposite is also true - people with a shared hobby/passion/vision congregate as well and create some really fun and uplifting communities.
Yep, reddit is increasing its conformity and intolerance to dissent at a very quick pace, maybe because it's an election year? But in the past such forays have been one way only.
As the subs where you can speak with some degree of openness are ever smaller, it's not worth the time anymore.
Unfortunately this style of moderation has already spilled out. Even on HN, people are insulted for not agreeing with the groupthink, usually as an attempt to bully their point of view off of the internet. Now these people are becoming mods and have real power to suppress counter-narratives. This is a systematic problem of people trying to suppress views outside of their own.
I imagine the mods have a fairly finely tuned sense of which threads will turn into a shitstorm and don't give a fuck about pre-emptively killing them.
And honestly I WISH nobody used reddit anymore, maybe it could roll back 10 years when it was frankly more fun, but it sure seems like more people than ever are using it.
> I imagine the mods have a fairly finely tuned sense of which threads will turn into a shitstorm and don't give a fuck about pre-emptively killing them.
How do we feel about pre-emptive moderation? Is it warranted? Is it warranted every time? Only certain topics?
Ultimately the mods are volunteers, they get to do what they want?
More precisely I would imagine they don't care about facilitating/refereeing culture wars inside their subs, cause lord only knows how much work that would be.
How does this method avoid the moderators' implicit bias? Unless we get actual sitting judges, how can these people be neutral and not impose their bias on what sees the light of day?
I 100% support Reddit killing any thread in a popular subreddit about anyone trans or the trans community preemptively because every single thread turns into an absolute dumpster fire. Reddit on the whole shows a very different side whenever those kinds of threads show up.
Many of the oldest users (me included) have just moved from the more general subreddits, where those shitstorms happen, into more and more specialized subreddits. But I miss the old reddit too.
> Do you think there was a way for you avoid this path? Like more understanding parents, other friends, another therapist? Or was it unavoidable?
>> It was a combination of things, so I can't say for sure what could have prevented this. More understanding parents definitely would have helped. Other perspectives online, as well. Detransitioners frequently get shut down on the internet, as their experiences are often labeled "transphobia." Which is utter bullshit, in my opinion. Perhaps if I had read a similar story as my own those years back, I would have had second thoughts.
Strictly speaking, that's not ironic at all; it's quite in line with what's routinely experienced by people trying to provide accurate and nuanced information about gender dysphoria, gender-dysphoric identity etc., and how to best help folks who may be affected by it.
Not exactly the same thing but the other day I got banned from r/ADHD for a (moderate from my POV) comment about pronouns.
The mod then gloated about banning dozens of people over this, and that "LGBTQIA+ people run the subreddit", and that "the lack of discrimination (...) for being cis or being white or being straight or being a man overshadows any discrimination they would receive for their adhd"
I don't think that's great moderation given the subject of the subreddit but what can you even do about it that would not be time consuming and pointless?
The moderation is the worst thing about reddit. Reddit mods are some of the most corrupt people I've encountered but there's no way of telling how much the accepted groupthink is being carefully shaped by the mods.
My model for moderation would be opt-in. I want to be able to see everything by default but be able to see what moderators have flagged as inappropriate. I could then grow to trust some or all of those moderators and just hide everything they flag by default. But I'd always be able to go and quickly evaluate whether I still trust those moderators to be doing the right thing and not just hiding things they disagree with.
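Something like this minimal sketch, where every name is hypothetical and nothing here is anything reddit actually exposes:

    # Opt-in moderation sketch: readers choose which moderators' flag lists to honor.
    def split_posts(posts, flags_by_moderator, trusted_moderators):
        hidden_ids = set()
        for mod in trusted_moderators:
            hidden_ids |= flags_by_moderator.get(mod, set())
        visible = [p for p in posts if p["id"] not in hidden_ids]
        hidden = [p for p in posts if p["id"] in hidden_ids]
        # Hidden posts stay one click away, so you can audit the moderators you trust.
        return visible, hidden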
I think it's unfortunate that reddit chose the term moderator, because it implies an impartiality that you're not going to get from what is effectively the "owner" of a community.
These are just folks who either showed up early, or demonstrated that they thought about the issue the same way as the person who started the subreddit.
Agree. There's just nothing even remotely interesting or real being said on Reddit anymore. I'd honestly rather do anything else. I'd rather stare at a wall with my own thoughts than spend time on Reddit.
Here's an exercise - check out /r/unpopularopinion today. Full of perfectly boring normal opinions like "MTV should go back to showing music videos". All of the content just seems completely fake. There's not an unpopular opinion in sight.
Now go to the Wayback Machine and take a look at that same sub from, say, three to five years ago (which feels like a short time to me these days). Plenty of deeply unpopular, controversial, thought-provoking opinions. You don't have to like them, but they were real.
That's probably true, but people who just look at popular posts and aren't engaged in anything aren't really part of any community either, unless we expand community to literally mean anybody that's visiting the site, at which point it becomes somewhat meaningless because it's too homogeneous again.
Terms that have no distinctive power have little value, so while I'd certainly consider them "users", I wouldn't consider them part of the community, much like a tourist staying in a village for a weekend isn't a part of the village's community.
I don't go to my city council meetings, but am still a member of our local community.
This is quickly becoming a debate over definitions, but I think reddit thinks of itself as both having a community and also as having individual communities. For example, anybody who has had a post in their niche community get to the front page understands that there's a larger community they're a part of that is different from their smaller one.
Sure, much like you can get your 15 minutes of fame and get on national TV. I wouldn't call "the whole country" one community with smaller child communities though.
To me, community implies some amount of connections between the members. I've never put a number on it, but basically something like "if you randomly pick X people from the group, at least Y of them will know each other". That would limit both the size and also do a reasonable test for community membership: if nobody in the community knows you, you're not a part of the community.
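As a toy sketch, with all thresholds made up on the spot:

    import random

    # "Pick X people at random; at least Y pairs of them should know each other."
    def looks_like_a_community(members, knows, x=10, y=3, trials=200):
        hits = 0
        for _ in range(trials):
            sample = random.sample(members, min(x, len(members)))
            pairs = sum(knows(a, b) for i, a in enumerate(sample) for b in sample[i + 1:])
            hits += pairs >= y
        return hits / trials >= 0.5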
I think the core of a lot of these issues is the distinction between universal findings, which almost always are in favour of "progressive" worldviews, and anecdotes, which aren't necessarily. The example of trans people detransitioning is a good one here - the general finding that transgender people are very happy with their decision to transition (regardless of what that may mean for them) conflicts with individuals and their anecdotes regarding their own (or their acquaintances') transitions.
Whilst there isn't anything inherently incorrect or morally wrong about detransitioning in itself, often when brought up it can lead a person to an idea which is incorrect - essentially applying the anecdotal evidence of a Reddit thread to all situations.
Thus, moderators of subreddits (and administrators, although I think that whilst it is bad when they do so, it isn't as common as many would believe) feel conflicted on the matter. Should they allow the discussion to continue, almost guaranteeing that somebody is going to form misconceptions from the thread? Or should they shut it down, stifling free speech but at the same time reducing the level of overall animosity towards minorities in the community, and preventing people from forming misconceptions?
Ultimately, I think that it should always really come down on the side of free speech, as long as they aren't being blatantly hostile. However, in my opinion, if we really want to have a positive "marketplace of ideas", as some describe it, we really need to educate people from a young age on logical reasoning, cognitive biases and argumentative fallacies the same way we would teach history, geography or the arts. Whilst I think that's unlikely to happen in our current political climate, given that keeping people uneducated is a very effective way to make them vote for those who make the most compelling surface-level arguments, it seems to be the only good solution for the conflict laid out above.
I'm curious, can you point me towards some examples of the data you suggest? I personally haven't seen anything of the kind, although perhaps newer data may suggest slightly lower rates of overall satisfaction given that increasing awareness of something to an otherwise unaware population is always going to increase false negatives. Still, the figures I see are always approximately in the 95%+ positive range.
I don't think "reddit is dead", but I do think the relatively intelligent and interesting userbase that was there ~10 years ago has been almost entirely displaced by clickbait and propaganda targeted at the now mostly middlingly-intelligent read-only userbase.
This almost inevitably happens when a site gets popular enough. There are only so many intelligent people actively posting on the internet, and it's impossible to have a site with 100,000 users (let alone the >400,000,000 of Reddit) with a high fraction of them providing good content.
Certainly the heavy-handed censorship of the Reddit administration is a disincentive for people who are interested in topics at the edges of the overton window (i.e. most of the interesting people online).
The second paragraph has four sources that cite some transexuals rejecting the label of transgender, so language-policing others based on your personal preferences is quite inappropriate here.
That page correctly notes that 'transsexual' "originates in the medical and psychological communities", much like Wiki itself does. While 'transgender' may be naturally broader, there's nothing hateful whatsoever about using that term; it's quite well-established.
Considering how much of Internet culture is currently being formed on reddit it's amazing how opaque the site is to history. The archival is low-quality and missing a lot.
I sometimes browse by /r/all/gilded, which gives you a weird, weird, WEIRD cross section of the Internet. One time I found a heavily gilded post memorializing a recently deceased contributor to /r/cripplingalcoholism -- which appears to be an important but saddening community for people with a serious medical problem -- then followed a few links on the profile of the deceased and their associates until, a few minutes later, I ended up at /r/opiaterollcall.
That forum died pretty soon after I found it (and got national press coverage at https://www.nytimes.com/2017/07/20/us/opioid-reddit.html ) but wow, oh wow, there was a lot going on there which is now essentially lost to readers of history, because existing archives don't go deep into posts and comments.
Gizmodo addressed this in February, though just as Covid-19 news coverage began overwhelming everything:
Yesterday, Reddit CEO Steve Huffman (or “Spez,” as Redditors affectionately know him) published the platform’s annual transparency report, highlighting how the platform took down more than 200,000 pieces of offending content over 2019, along with nearly 56,000 accounts and nearly 22,000 subreddits—the site’s communities, most of which are moderated by volunteer users. He also flagged a “company update” regarding the way Reddit thinks about “quarantined” communities...
When we expanded our quarantine policy, we created an appeals process for sanctioned communities. One of the goals was to “force subscribers to reconsider their behavior and incentivize moderators to make changes.” While the policy attempted to hold moderators more accountable for enforcing healthier rules and norms, it didn’t address the role that each member plays in the health of their community.
Today, we’re making an update to address this gap: Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
>I don't understand why you would even publicize this. Just stop counting those users' votes, if that's what you want to do?
Those users' votes don't matter; scaring the userbase into narrowing their overton windows of acceptable reddit voting is what matters.
More broadly Reddit wants to be seen as taking action. The incentive structure facing that company rewards loudly making a slight improvement to the site over quietly making a huge improvement.
The point is that the users who are upvoting this kind of content will stay on the site if they think their votes matter. Reddit clearly would rather they leave entirely, and this is a good way to encourage racists/misogynists/incels/whoever to find another platform.
It happens for far left content as well. I don’t think it’s limited to hateful content. At least, that’s what I’ve read from the chapotraphouse subreddit. Example being upvoting some far left stuff in r/politics.
'Far-left content' and 'hateful content' are not mutually exclusive. Quite the opposite, really.
It is amazing how people will redefine hating Trump supporters, men, white people, "the 1%", western civilization etc as somehow 'not hateful'. "My hate isn't hate, I just dislike things that are bad." Right.
Tankie feels like it became a near useless term. Some well known liberal paper (maybe NYT?) published some article that insinuated that maybe US cold war era propaganda on the USSR wasn't necessarily always true, and someone linked it to me referring to it as a tankie article.
Can't happen soon enough. The days where forums were the defacto place for discussion were just better. The communities would self-police and alternative opinions weren't suppressed nearly as easily. Reddit is a meme slideshow, not a place for discussion.
The long tail of reddit is the same as forums. I'm subscribed to a bunch of niche subreddits for products I own, projects, etc. and it's just like forums, except with a measure of consistency and features that I appreciate. These communities are self-policed the same as any other forum.
The problem with reddit isn't technical, it's human. A large group of people will ruin literally anything.
You just have to be in the proper small subreddits. Reddit is where I get the highest quality of discussion (especially since Covid-19 as small subreddits stayed small but HN went downwards)
I don't want to register zillions of forums and manage my passwords. I contribute to hundreds of subreddits. If reddit dies I'll find a replacement like reddit and move on.
>I don't want to register zillions of forums and manage my passwords.
and I don't especially want the admins of any one site to be able to erase my entire existence off the net on a whim.
single/token login from known net authority agents is common. I'd rather embrace that kind of thing than the idea that any one site should be our de facto forum and communications standard.
I was reminded of this recently when a personal favorite BB forum disappeared, taking years and years of niche automotive data with it. Data that was rare from the get-go, and irreplaceable now.
I'd rather decentralize my 'data-stores'. Reddit will disappear one day, and it's up to them to be a good actor towards the archival group, providing data bandwidth during a time when investors have become disinterested, and paying the light bill is near impossible.
I wouldn't count on that.
Keep archives, repeat information elsewhere, and most importantly: don't embrace adding and relying on centralized structure where doing so inherently adds weakness and fallibility.
I don't think any "irreplacable" data or conversation should be on reddit. If I lose my conversations about funny cats, or factorio, or why I love a programming language, I'm not gonna lose any sleep over it. I agree with you for critical stuff, I don't think they belong to reddit. But please understand that for consumers like me, reddit is an incredibly convenient social media. I can login now and spend hours talking about hundreds of different things. There is no way in hell I'll be able to register to so many different forums and not get frustrated.
This is why an ActivityPub based, federated forum system might be the future here. Have it so you can join the network once, and carry the same username to any other site on the network without anything resembling a signup process. Perhaps you could even have a choice of aggregators which then displayed content from all those sites in one place, or let you view a combined karma score or post count or whatever too.
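Roughly, each person would carry a single actor document around the network, something like this (the URLs are placeholders, not a real instance):

    # Minimal ActivityPub-style actor; any federated forum can address its inbox.
    actor = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Person",
        "id": "https://example.social/users/alice",
        "preferredUsername": "alice",
        "inbox": "https://example.social/users/alice/inbox",
        "outbox": "https://example.social/users/alice/outbox",
    }
    # A forum instance federating over ActivityPub could accept posts from this
    # identity and deliver replies to its inbox, with no per-site signup at all.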
And most websites implement that? I like r/factorio on reddit. Factorio's official forums require me to register with a username, email and password. I simply don't want to do that, and will not do that. I'd much rather not engage with factorio content than register on a forum for each of the hundreds of things I'm interested in.
The biggest hit against reddit is that every large sub has to resort to weekly discussion posts/tagging every post so people can filter, simply because of the way subreddits are forced to be laid out.
A normal forum will have different sections to discuss everything even mildly related to the overarching topic, and a number of areas to talk about things not related to the topic. But Reddit doesn't have those features, so subs have to resort to these half measures or allow multiple daily posts about the same 5 topics.
A solution is creating more niche subs. As an example, the main Star Trek sub is very relaxed, so you get a variety of posts there and the comments vary in quality from very informed ones to just memes, hate, and entitlement. So there are sister subreddits; I like r/DaystromInstitute because it has a narrow scope and the moderation is strong: no lazy comments allowed. There are subreddits for each Star Trek series (even the old ones get hate and low-effort meme comments), so you can discuss your favorite show and keep out the hate and meme comments. IMO this is the best approach. I've seen it in gaming too, because fans will not agree which version of the game is good and which one is trash, so they make smaller subreddits to talk in peace. Though you need someone to kickstart things and enforce moderation; then the community grows a bit and it can self-moderate.
It's a matter of scale. As size grows, rules become harder to enforce and the amount of comments that would derail conversations sky-rocket. Living under the same rules globally, esp. across ideologies and world-views, is a disaster for everyone.
I see people writing this for years now. I guess at some point in history it will become true but I don't see a steep decline that would justify it atm.
Digg did not lose to reddit by trying to improve their platform by banning hateful content and low-quality users, nor did reddit win by being a haven for such content.
Reddit, once a haven away from large corporate interests, has continually succumbed to the mainstream narrative of what's appropriate.
Sure, there are communities that are horrid, but then there are those that are on the line. And inch by inch that line moves from moral goodness, to corporate interest.
Reddit is no different than Twitter/YouTube/Facebook, only they haven’t covered the same amount of ground.
Given another few years, if that, we’ll see the same censorship practices on Reddit, with the same magnitude, as youtube, facebook, and twitter.
This is essentially account entrapment. Show content they know is in bad faith, then punish them. Or, once a post is deemed in bad faith, retroactively punish them for not having the correct opinion.
It's interesting. I was just looking at a user account that had a gazillion post karma, but -10 comment karma. It also created a few subreddits. It's so obviously a content farmer.
People moved away from Digg because they implemented an algorithm to suggest news, and de-emphasised user submitted stories. Now, essentially the same thing is happening with reddit via these farmers. It's not as obvious and might not be sanctioned, but at the end of the day, a few select bots are also pushing content to the front.
I’ve read many books in my time and I liked 1984 both for its entertainment value and as a warning about government power and limiting free speech. What is wrong with 1984? Is there a similar but better book I should read?
What’s wrong with 1984 is people sign up for a private service, post something against the ToS, get their account banned by a volunteer user-moderator, then call it “punishment” and cite 1984 as if “piss off, troll” from JimBob moderator of /r/WhippetFanciers is exactly the same as living in North Korea and being forced to speak NewSpeak on threat of having caged rats eat your face in Room 101.
My long time reddit account (same name as my hn account) was shadowbanned 2 weeks ago. I sent them an appeal and they responded really quick (it was sunday night I believe) saying that my account "got caught by Reddit's spam filter" and unbanned the account. A few days later my account was banned again. I sent an appeal and this time they haven't responded in over a week. Is this possibly related?
Probably not, but my reddit account of 10+ years has been shadowbanned too, with no explanation. As a Digg immigrant, I can't wait to migrate away to the next reddit.
I'm a daily redditor. On Saturday I logged into reddit and found "Your account has been permanently suspended from Reddit." across the top banner. The mod message says I was permanently suspended for repeated violations of the content policy.
The last time I posted ANYTHING on reddit was 21 days ago when I commented on a Schwarzenegger post and said I'd love for him to be president. I appealed it and haven't heard anything.
First, Reddit tries to be a "community hub", like Forumer in the old days, where anyone can set up an "autonomous" messageboard (subreddit) to support discussion among a particular subculture. Consider e.g. /r/smashbros.
Second, it tries to be a "community-in-itself", complete with its own demonym "redditor", and a shared set of values, etiquette, and jokes/memes.
These goals are fundamentally at odds. A community of competing factions -- some of which openly hate each other -- is hardly a community. A board for Super Smash Brothers players which must hew to the diktat of reddit at large is not really a community of Smashers. When a norm on Reddit at large is violated by a subreddit, the general population is incensed. When a rule is handed down from the admins to the moderators of a subcommunity, the members of that community feel they are being manipulated by outsiders -- even if they might have decided to enforce that rule on their own, if they had been left to their own devices.
"Spring forward… into Reddit’s 2019 transparency report"
Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.
/r/reddit.com is still used for admin communications; if you want to contact the site admins, you are encouraged to use the contact the moderators link on /r/reddit.com.
Amidst all this, accounts that randomly spam the site with collections of articles and the like run free. They don't participate in discussion (maybe a handful of actual comments... with more links); they just push a particular point of view and inundate every sub they can find with article after article....
A simple query "hey is this user just spamming submissions" could deal with some of them.
If you search new, you're presented with repost after repost from these spammers, sometimes reposting their own content, just from another news source.
If you're part of a location based sub that is in the news, it's just an AVALANCHE of distantly cause related submitters spamming it up.
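Even a naive version of that "is this user just spamming submissions" query would catch a lot of these accounts. A sketch, with thresholds picked out of thin air:

    # Flag accounts whose activity is almost entirely link submissions with few comments.
    def looks_like_a_link_spammer(num_submissions, num_comments,
                                  min_activity=50, max_comment_share=0.05):
        total = num_submissions + num_comments
        if total < min_activity:
            return False
        return num_comments / total <= max_comment_share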
That’s the most obviously problem I see with section 230 freedom of speech maximalists arguments. Who’s right is it to curtail the freedom of someone to announce their love of a some sexual performance enhancing supplement in every thread and comment section on the internet?
Here is what people like me mean when we say we don't like Section 230. And we mean it whether or not Section 230 is actually responsible.
- Advertiser uses our trademark on Google Ads? Google makes money and has editorial control. But we can't sue them.
- Somebody posts something libelous about us on Twitter? Twitter makes serious money and has editorial control. But we can't sue them.
- Fakes on eBay? Can't stop them. eBay not liable.
If Twitter is going to have a "fact checking" department, then they also need a "retractions" department. And if retractions aren't forthcoming when reasonable, then we need a cause of action against them.
> Abusive content is not acceptable on Reddit nor is engaging with it.
This is such a fail for Reddit. If the content isn't allowed on the site, it's on Reddit to remove it. The existence of the content on the site implies Reddit's approval of the content, so engaging with that content is a wholly innocent act.
Please explain by what mechanism you would expect abusive content to be unengagable before being recognized (and treated) as such. Preferably one that does not involve Reddit having to travel back in time.
I don't have a way to programmatically detect abusive content, and for users the content's presence on the site implies it's legitimate. Just remove it when it's found, and shadowban the people who posted it. It isn't rocket surgery.
An obvious concern with this would be silencing content which isn't violent or morally wrong, just not wanted on a large, commercially viable website. This is obviously polarizing, but something I'll be watching is what happens to /r/chapotraphouse.
CTH has some posts that violated a rule but most don't; the admins' claims about the quarantining were never well communicated and were intentionally vague, with the sub's mods posting screenshots of the lack of communication.
If people get suspended for just participating there without any clear violations of calling for violence/brigading/etc then that's not really a great sign for transparency.
>CTH has some posts that violated a rule but most don't
I'm not familiar with the CTH situation, or super familiar with CTH in general besides knowing them to be pretty far left on the political spectrum.
But based solely on this line, if their mods are not making enough of an effort in Reddit's eyes to clean up and discourage people from making these rule-breaking posts, then the quarantine makes sense. Even if only 1% or 0.1% of the posts are violating the rules, if the subreddit isn't policing that small minority of posts, what is Reddit supposed to do? Ignore it because most posts aren't breaking rules?
I don't follow them enough to personally have an opinion on if they warranted that treatment or not, but did follow enough to know it really couldn't have been most posts. So really can't argue with you on the validity of the banning.
but... if that fraction of bad posts leads to mass suspensions on CTH, then I'd say it is a creepy/not-good direction for Reddit.
I pulled the fractions out of thin air, so I have no idea what they actually are.
But you have to do something about people refusing to enforce site policy, even if it's only on a small portion of the content.
For an extreme example (and to be clear I am not saying anything like this happened at CTH), if 99 posts out of 100 on subreddit fall within the rules, but that 1 post that doesn't is promoting killing everyone of a certain race, and the subreddit moderators refuse to do anything about that one post, then they 100% need to have something happen. And that doesn't change if it's 999 out of 1000, etc., either.
Now, it's unlikely that anything on CTH was at that level of toxic and hateful, but the same overall concept applies - they have to take care of posts that are against policy, even if they're a small minority. If not, it seems like getting quarantined or even removed completely is a reasonable course of action.
The thing with reddit's rules is they aren't applied in a consistent way. You can go to /r/justiceserved (which is basically a violence porn sub) and talk about violence all you want. Look at https://www.reddit.com/r/JusticeServed/comments/hdvsy2/dayli... as an example, look at the people there that are saying they think it's a good thing that a guy was shot (and killed) for attempted robbery. This was the top post on /r/rising for that sub when I made this comment.
You can go to /r/politics and say that we should go to war with Iran, or say that it would be great if Putin died and be fine.
/r/chapotraphouse has probably the same amount of calls for (or glorification of) violence as /r/politics (and less than /r/justiceserved), but for some reason reddit has it out for them. Also, how is /r/cth supposed to fix their rulebreaking if the admins aren't very clear on what content is breaking the rules?
Yeah, Justice Served definitely looks like it needs to be looked at.
I think there's two separate issues here:
1) There's probably not enough manpower to be able to do this evenly and consistently across all of Reddit. They're not ever going to be able to do everything all at once. If efforts are ongoing, I don't think it's fair to penalize Reddit for not having taken care of everything that needed to be taken care of all at the same time. If we get to the point where Reddit says "ok we're done we've taken care of all the bad communities and are scaling back down the moderation team" and we see that it still is uneven, then I think we're at the point where it's a problem.
2) The mega subs are tough to deal with in general. Politics is absolutely massive, and from a practical standpoint as one of the core subreddits, is going to be handled differently than a non-core one. Reddit is ultimately a business, and even if from a moral absolutist standpoint it should be treated exactly the same as any other subreddit, realistically it's not going to be.
I do feel like CTH might have been targeted a bit specifically because so much of this has been focused on far-right subreddits, and there was some desire to "balance" it out.
What seems to be happening with CTH looks like a positive sign. They're not really less toxic than the others getting the boot, and the admins kicking abusive buddies while they're cleaning up everyone else says they have a (somewhat) functional internal process. Less room for toxicity in the world!
>but it’s also a cesspool or horrible people sharing horrible ideas
This is not coincidental. The same thing happened with Voat, the same thing is happening with Ruqqus.
I'm sure some not-awful stuff will end up as collateral damage here, but Reddit is attempting to target the communities of people that are promoting and engaging in some pretty heinous activities and lines of thought. When those are the people leaving to make a new platform where they won't be censored, that's the content that takes up the mindshare on those websites.
The presence of content that is extremely inadmissible for Big Tech is a good sign in my book. It's very easy to select the type of content you actually want to see.
> it’s also a cesspool or horrible people sharing horrible ideas.
I tend to follow Voltaire's ‘I disapprove of what you say, but I will defend to the death your right to say it.' I agree with your assessment of the quality of ideas thrown around there, but I'd prefer a world where the internet allowed sites like that to exist, rather than one that's increasingly owned & distributed by giant corporate conglomerates with significant oversight into the discussion of its members, lest you get assigned to "re-education camps" or some BS like that.
And let's not pretend that Reddit doesn't want to be one of those giant conglomerates...
There's nothing wrong with Reddit not wanting to facilitate and provide a platform for that sort of content, though.
It's a massively popular site with mainstream appeal. They don't want to let that popularity and audience be taken advantage of to spread ideas that are hateful and likely to incite violence.
Quarantined subreddits are dumb, but it's a sign that they're afraid of the backlash of outright banning them - they are already getting accused of heavy-handed censorship.
If people want to talk about that stuff, then yeah, go to gab and voat and ruqqus or 8chan or other sites that allow for it. There's no shortage of places on the internet to talk about anything you want. Reddit doesn't have to be one of them.
To the extent that a site like Reddit tries to police content and decide what's "hateful" content versus just mildly offensive, etc, the platform becomes less valuable to me.
These policy changes have driven me off of Reddit. Not because I was part of any quarantined "communities" but just because I see where things are going.
Reddit used to actually believe in the ideal of free speech, full stop. That stopped being true quite some time ago.
> I hope most people on HN don't agree with the train of thought that websites have no duty to their fellow man to not normalize hate speech.
I suspect that most agree with your statement, but to speak for myself, I am very skeptical of labelling something as hate speech. It starts out with banning "legitimate" hate speech ("ethnic group X is secretly controlling everything and must be purged") and quickly morphs into something like "you said all lives matter which is associated with opposition to black lives matter which over the long-run is part of a system of oppression and therefore we're going to ban you". If it's not clear, these are arbitrary examples that I just made up but I think the way Reddit is going it's quite clear that we're moving towards that.
So, I might be anthropomorphizing a website and drawing too many parallels to my own experience, but having shifted my own stances here over the past 7-8 years, I can't say I blame Reddit.
I used to be completely against de-platforming people. And I still think sometimes it goes too far - people should be exposed to diverse ideas, and disagreeing with someone doesn't instantly make you a nazi. I won't mention names here, but there are a lot of people that I've vehemently disagreed with on major issues, but still followed along with 'sunlight is the best disinfectant.' I thought that they were ultimately exposing themselves as bigots and hatemongers when going on TV shows, writing op eds, showing up at college campuses. That they would end up taking care of themselves.
But then I realized that I was seeing more and more people quote them, quote their rhetoric. And I'm not even talking about stuff along the lines of 'Black Lives Matter' vs 'Blue/All Lives Matter' type stuff - people being blatantly anti-semitic, people glorifying pedophilia because "it happened to them and they turned out OK", all sorts of very cut and dry "bad things."
When we give these large public platforms to people peddling this, we're directly increasing the spread of the rhetoric. Reddit doesn't want to be involved in spreading this sort of rhetoric, even in its milder forms, because people get radicalized slowly - you start with the softer sound bites, the ones where they have a lot more wiggle room to claim you don't mean this other thing. And as people start to agree with that idea, you move in with one that pushes the boundaries a bit farther. Keep repeating this process, and someone ends up embedded in the extreme forms of hate.
Reddit wants to cut that off at the start and not play a part in spreading these ideologies and helping radicalize people into them. They've realized that sunlight isn't enough to disinfect things that are fundamentally rotten, especially when these subreddits are largely monocultures without outside influence coming in. If they don't want to be a party to the radicalization of people into buying in on extremist hateful ideologies, they have to do this, and they can't stand by doing nothing in the name of "free speech"
On most of the internet, you don't win arguments by being right. You don't convince people by providing a well research, well sourced, informative argument. You win by beating people into submission. You win by having the better sound bites, the pithier quotes, the most upvotes and most people agreeing with you. And since these subreddits are so self selecting, when you wander in as an outsider, you only see the "winners" - the dissenting voices got downvoted, the bad arguments that appeal to the population of the reddit get upvoted.
Is this a lot of responsibility to put in the hands of people? Yes. Can it lead to the same sort of problem I'm describing here? Yes. Is there any other choice? Not that I can see. You can stand by and let something that you know is harmful and dangerous to human lives happen on your watch because you're afraid you might someday end up doing the same thing, and know that there will be bad results. Or you can risk it, try to take steps to prevent yourself from falling into the same trap, and move forward with something that is necessary with knowledge that if you are not careful you can become part of the problem.
>people get radicalized slowly - you start with the softer sound bites, the ones where they have a lot more wiggle room to claim you don't mean this other thing. And as people start to agree with that idea, you move in with one that pushes the boundaries a bit farther. Keep repeating this process, and someone ends up embedded in the extreme forms of hate.
Why does this happen in one direction but not the other? A radical sees something slightly less hardcore. Is influenced. Sees more non-hateful content. Slowly moves away from radicalism and towards the center. Why doesn't that happen equally as the other way around? Should be a balance there, right?
Really, your whole point is essentially stating that these ideas that you hate are more convincing than the ideas you believe in. You don't think you can consistently win the war of ideas via argumentation so, since you obviously are right and your ideas are best, that justifies using power to simply suppress other people's thoughts and words.
All of this rests on your 100% certainty of being correct. Once you admit any level of uncertainty, which you must, your whole approach collapses.
Why do you get to do it to them, but they don't get to do it to you? It's because you think you're 100% right. Of course they think the same about themselves, though, so you're claiming you have some special moral position in the universe that they don't. You stand above them; you have the right to pass judgment on them, but they don't have the right to pass judgment on you.
Nothing justifies that. The only real differential is that you have the power to silence them and they don't have the power to silence you, and so you're going to use that power and screw equal rights.
>something that you know is harmful and dangerous to human lives
You keep saying these things as though they're absolute facts. They're not facts. That's your opinion. Others' opinion is the opposite - to them, all of your beliefs are harmful and dangerous to human lives. And you've walled yourself in so you don't understand the reasoning, and end up thinking it's just inchoate, meaningless 'hate' that you're opposed to. You're ignorant of ideas outside your filter bubble, and use the strawman images of opposition you've invented to justify enforcing that filter bubble on others using power.
Quarantined subreddits are analogous to downweighting a particular Facebook post in the FB algorithm: both are a form of "soft" moderation stopping short of outright censorship.
Banning users who vote for Reddit's arbitrarily-defined objectionable content is a bridge even further than mere censorship. Particularly because the users are voting for the content before they know Reddit has defined it as objectionable, by definition, because once Reddit has made that determination, the original post is removed.
> There's nothing wrong with Reddit not wanting to facilitate and provide a platform for that sort of content, though.
The counterarguments to this have been presented so many times on HN.
The most fundamental problem is that there is a motte-and-bailey going on with these defenses of censorship: in theory, Reddit and others are censoring far-right hateful domestic terrorists. In reality, they are also censoring qualified medical professionals from opining on COVID in a manner contrary to the WHO, or people who are not actual climate denialists, but who merely question the precision of particular climate models.
Exactly as any defender of free speech would have predicted, these mechanisms are enacted on the pretext of defending against a bogeyman, but are actually used in practice to impose ideological uniformity and suppress legitimate dissent.
> I'd prefer a world where the internet allowed sites like that to exist
So these alt sites spring up all the time (8chan, 4chan, ruqqus, voat). No one has taken those sites off the net. And the public is free to vote with their time, clicks, and money on whether they support these sites.
This is what happens. The content banned by reddit's policies is hate speech, far-right politics, and targeted hate. People who leave reddit for somewhere else do so because... they want to do those things and can't. It's no wonder that any forum they then choose becomes dominated by those ideas; that's the entire point.
Lots of left-wing political content receives these inbox scolds too; the policy seems to cover hate speech, harassment, and also political ideologies outside the advertiser comfort zone.
I like the ability to see the discourse from all sides of the spectrum (reddit, voat, worldstar, etc.). I'd rather not have the platform dictate what they "think" is going to offend me.
Yes that's right. They won't ban you for having non-terrible opinions, but they'll downvote you into non-existence. I tried participating in voat for a couple of days before leaving forever.
It continuously baffles me that some people don't learn from unmoderated online communities. They become dens of bigotry, doxing, attack mobs, racism, misogyny. Saying your site is driven only by "free speech" is the cowardly reply that refuses to stand up to these things.
That's a difficult discussion. These sites offer so much more than what they're always reduced to. I would even argue that they're sometimes nicer places than Reddit or HN. And even with those icky subjects, they're not always as black and white as they're portrayed. Even if you don't believe that they're right about them, they're evidently more open to debating them.
Exactly. I honestly enjoy 4chan for a multitude of reasons, and have had many quality discussions on the site over the past many years. There's always garbage to sift through, but I like the "free" nature of those types of platforms.
Interesting approach. Assuming no bots, doesn't that distort the actual sentiment among the population? How can you measure what people are thinking if dissenting voices are banned?
Every time we have a conversation about why Facebook, Twitter, and Reddit shouldn't be allowed to "censor" content, I ask what would become of Hacker News if it were not permitted to moderate spam or threads that do not follow the HN guidelines.
I think it would quickly devolve into 4Chan. Almost all of the current community would decamp, and the trolls that infest other sites would take over.
For structural reasons this comparison makes no sense.
HN has one homepage and one sub-community. Reddit has an effectively infinite number of subreddits. The same goes for Facebook.
Even with no moderation, in no case would anyone be exposed to content that wasn't posted and allowed in a community that they deliberately subscribed to.
Reddit polices what kinds of communities are allowed. HN is one kind of community.
It makes no sense to you, but for years, users on certain subreddits were exposed to users and their views from other subreddits thanks to brigading.
Likewise, Reddit and Facebook both have to deal with being judged by all the forums they "host." You may not think it makes sense to criticize Reddit as a whole for hosting white supremacist forums and violent misogyny forums, but the rest of society seems to think that one bad apple forum spoils the entire Reddit and Facebook barrel.
If you're looking for a comparison, I don't have to read any thread I don't want to on HN, just as I don't have to join any subreddit I don't want to on Reddit.
But I know they're there, and that affects my perception of what HN or Reddit or Facebook is.
If it was just about hiding content, we could argue that HN shouldn't censor anything, but if we let people tag threads and comments with labels like "white supremacy" or "SJW raving," then everyone can filter what they don't want to see, so what's the problem?
>users on certain subreddits were exposed to users and their views from other subreddits thanks to brigading.
Brigading has rightly been against the rules of Reddit forever; it's a hostile action that is rare, quickly punished, and not significant.
Also, critically, it applies the same way to completely separate websites as well. Redditors could brigade HN or vice versa.
On Reddit, nobody gets exposed to content they didn't subscribe to.
>I don't have to read any thread I don't want to on HN
True, though you still have to see it, so in the interests of time at the least it's good to have a filter.
>we could argue that HN shouldn't censor anything, but if we let people tag threads and comments with labels like "white supremacy" or "SJW raving," then everyone can filter what they don't want to see, so what's the problem?
That's a great idea! We could do a curator system where anyone can sign up to be a curator to tag things with their own personal tags. Then you can just find a curator who matches your wishes and we're golden.
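Purely as a sketch of that curator idea (all names hypothetical, not an existing Reddit feature): a registry of per-curator tags that each reader filters against, so readers only inherit the judgments of curators they have chosen to trust:

```python
from collections import defaultdict


class CuratorRegistry:
    def __init__(self):
        # curator -> item_id -> set of labels that curator applied
        self.tags = defaultdict(lambda: defaultdict(set))

    def tag(self, curator: str, item_id: str, label: str) -> None:
        self.tags[curator][item_id].add(label)

    def labels_for(self, item_id: str, subscribed_curators: list[str]) -> set[str]:
        """Union of labels from only the curators this reader follows."""
        out: set[str] = set()
        for curator in subscribed_curators:
            out |= self.tags[curator].get(item_id, set())
        return out


def visible(item_id: str, registry: CuratorRegistry,
            my_curators: list[str], my_blocked_labels: set[str]) -> bool:
    """Hide an item only if a trusted curator tagged it with a label I block."""
    return not (registry.labels_for(item_id, my_curators) & my_blocked_labels)


registry = CuratorRegistry()
registry.tag("curator_a", "post42", "white supremacy")
print(visible("post42", registry, ["curator_a"], {"white supremacy"}))  # False
print(visible("post42", registry, ["curator_b"], {"white supremacy"}))  # True
```

The point of the design is that nothing is removed globally: each reader decides whose labels to honor and which labels to filter on.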
Let’s compare this to a mall. If 50% of the stores sell confederate merchandise, Nazi memorabilia, MAGA hats, and souvenir photographs of Black people getting lynched and/or Jews being gassed, can we ask Black people, Jews, and generally people who are not ideological racists or libertarian enablers to just walk past what they “disagree with” and spend money elsewhere in the mall?
Not in this or any other timeline. Even if we cover up the shops, knowing that your money in a normal store helps the mall stay in business to rent space to a racist store makes this mall anathema to a huge segment of society.
---
YOU may be comfortable shopping there, and YOU might construct all sorts of rationalizations for being complicit in racism, but YOU are being unrealistic if you think that everybody else is going to be all, “racism is terrible, but hey, it’s a mall, what’s the big deal?”
Not everybody is going to go along with rationalizing away being complicit in violent racist ideologies. And thus it is that companies like Facebook and Reddit end up being judged by the worst things they support.
If you think that’s terrible and want to trot out “logic” that “proves” they aren’t complicit in racism, you’re going to have to accept that plenty of people can’t be persuaded to do nothing while their country slips into Nazis II.
I feel like this is completely different. Reddit has subreddits and subreddit moderators who can moderate based on their own community guidelines. There is no need for global moderation.
"Devolve into"? This website is just the "sounds educated" arm of imageboard users.
HN is entirely filled with right-wing to far-right sheltered techbros. Half the accounts here that are constantly complaining about "free speech" and "consequences to their actions" are really one bad day away from calling everyone they disagree with various slurs, and the other half are so insulated from any sort of actual injustice in the world that they sit atop of their smug towers of indifference, cawing at the peasants below who have things such as "opinions" (that disagree with theirs, of course) or "motivations".
You are not wrong, but don't forget Sturgeon's Revelation: "90% of everything is crud."
What makes something valuable is the 10% of the thing that is not crud, and whatever measures are put in place to make sure that the 90% crud doesn't poison the 10% that's worthwhile.
To me, that's having clear guidelines that are consistently applied through moderation.
You're not wrong about this. There is a very vocal right-wing population of commenters here on HN that show their true colors in more polarizing submissions.
Nice. I wonder how many comments that go against Reddit’s policies I’ve upvoted unintentionally over the years on the mobile app.
I up/downvote posts/comments by accident at least a couple of times a day when trying to scroll on my phone or just hitting the screen to wake it up after being distracted.
This seems like a good way to suppress bots that upvote posts inciting violence, severe verbal abuse, or jingoism. However, I am afraid of what happens if they start doing this for discussion of COVID-19.
Seems a little extreme, but the overall logic seems sound from a moderation perspective. If you have users that consistently upvote content that violates the TOS inside of quarantined subreddits, it's probably a sign that they're trouble-makers with respect to the site guidelines.
On the one hand, I can understand them not wanting to be a platform for content they do not like (find offensive, believe legally dangerous, &c). On the other hand, presence of porn and nazis is an indicator of a healthy community — banning these things leads to decline in other areas (see Tumblr). You can’t have partial freedom of speech.
At least the porn is still there.
The problem with ‘social’ sites is that they allow cross-contamination between the seedy bits, and r/woodworking and the like.
The open web — with separately hosted forums — is a lot better, but less convenient. I hope to see a solution sometime, but the odds are not good.
At this point reddit is arguably a social danger, in that it is made to look like organic content, while in truth everything is heavily censored and opinions are implicitly and explicitly approved in an absolute top down manner.
Opaque vote display and ranking algorithms, keyword-based shadowbanning, ideological mods and admins - combined with an organic facade, you have a platform to manufacture consensus and push propaganda onto millions of unsuspecting users, young and old alike.
Why do SO MANY tech bros pretending to be libertarians and liberty-loving freedom fans think that free speech applies to private property?
And why have they let themselves be tricked into spewing the Section-230 nonsense that conservative hate-mongers are using to threaten online service providers?
Section 230 protection is irrelevant to service providers policing their site. It is very relevant to service providers who are being told by conservative hate-mongering politicians “hey we’ll strip your protections unless you allow our hateful content even though the protections are unrelated to our content and comments”.
You’re carrying water for malicious actors who don’t care about you.
By your own logic, not mine (I think it’s bullshit), if you downvote this comment to the point that it is hidden or automodded, then you are infringing on my “FREEZE PEECH RAIGHTS” because you are “removing” constitutionally-protected freeze peach from the public sphere.
No one complained when folks voting on an unpopular forum were kicked out late last year. Everyone laughed and said "good riddance" then. Now the chickens have come home to roost.
The adage that "the fascists will come for you next" remains true.
For years reddit stood by and did nothing while certain groups exploited and spammed the front page with their hate speech; I'm glad to see they're finally doing something about it.
The idea that the "vote" is not just counted but judged is new, it seems. It is the next step in the trend of hyper-precise tracking of people's thoughts and actions, and of judging them in a transparent and opinionated global social sphere with extreme consequences.
Of course "voting" should be something you are accountable for. If you are pushing nonsense and banned content up, you become a part of the problem.
Even on HN there have been many anti-science nonsense stories on the front-page where the site would benefit from letting people indicate that they want anyone who enabled it to have no weighting in the curation of their own personal view. This may be a sort of filter bubble, but it's an anti-idiot filter bubble. I don't need to have my world clouded by the noise of conspiracy fanatics and idiots.
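A rough sketch of that per-reader weighting idea, purely illustrative and not an existing HN or Reddit feature: when computing a score for one reader's view, simply drop votes from accounts that reader has chosen to discount:

```python
def personalized_score(votes: dict[str, int], muted_voters: set[str]) -> int:
    """votes maps voter -> +1 or -1; votes from muted accounts contribute nothing."""
    return sum(v for voter, v in votes.items() if voter not in muted_voters)


votes = {"alice": 1, "bob": 1, "mallory": 1}
print(personalized_score(votes, muted_voters=set()))        # 3: everyone counted
print(personalized_score(votes, muted_voters={"mallory"}))  # 2: mallory's vote ignored
```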
So, if people don't vote the way the Reddit admins want, the Reddit admins will replace the voters. That is definitely a way to get only the things the Reddit admins want upvoted, upvoted. But it seems to me to be the opposite of free discourse.
I can sympathise with the admins, because a) there are a lot of terrible, toxic people b) there are a lot of bot armies. The latter is probably easiest to combat, but it is still a tough nut to crack. The former is even worse: I imagine that the most engaged users are quite often the users one wishes were least engaged.
It's a tough problem, but Orwellian punishment of wrongthink seems the wrong direction.
How so? Calling for someone to be murdered is understandably against their policy. As is doxxing someone. The user accounts that upvote the call for murder or upvote the doxxing details KNOW that the post is against their policy and decided to participate anyway.
I hate how these conversations always devolve to "well they're just trying to control what everyone says!" as if it's black and white and not shades of gray. No, they're trying to prevent assholes from spilling over into people's real lives, because the alternative is becoming 4chan. Where all that "upvoted" content results in ACTUAL PEOPLE BEING KILLED.
I hope most people on HN don't agree with the train of thought that websites have no duty to their fellow man to not normalize hate speech. "Where does it stop" - it stops at not doxxing people and not calling for people to be murdered, per their TOS.
There's no upvoting on 4chan, and bumping a thread (or, on reddit, upvoting) does not and cannot kill anyone. The people making the threads end up killing people (very rarely), not the randoms who reply (or upvote).
You don't even know what the account upvoted, and upvoting isn't hate speech (or calling for someone to be murdered). Do you have a bunch of unknown information on this or are you just defending reddit's continued slide to dystopian horror for no reason?
The problem is that there's no reason to believe that this policy will be limited strictly to obvious garbage. Unpopular, but not garbage views will inevitably be hit too. Maybe that's worth taking out the garbage, but collateral damage is still a negative.
It's amusing how much HN mocks the police for their stance that everything that enables privacy could enable child exploitation imagery while at the same time insisting that anything that resembles moderation will end in dystopian thought policing.
Everything that enables privacy absolutely could enable child exploitation imagery. But privacy as a whole is too fucking valuable to give up to make prosecuting some crimes easier. In the US, that's the whole point of the fourth amendment -- warrants with probable cause and specificity requirements are the proper escape hatch, not bulk data sifting. Not all crime can or should be caught at any cost whatsoever.
Moderation need not end in dystopian thought policing. Reddit has had moderation for quite some time. Indeed voting was there from the beginning, and is a primitive crowdsourced form of moderation, with many obvious flaws, including groupthink.
Forum moderation's whole point is to constrain what can be said. It can be used for good or ill. Extending it to bans from agreeing with wrongthink is a step beyond the status quo. Maybe it'll balance out to be good on net, maybe bad, but don't pretend it's not novel or that it's obviously good on balance, and no one should be concerned.
Because they own the website and can do what they please with it? And "their" definition of garbage is banning things like "pics of dead girls" and "fat people hate". You're welcome to your own definition on whatever billion-user website you happen to administer.
For example, let's compare a similar service. Twitter is currently censoring some of the President of the US's tweets. They were not doing that a few months ago. So the line of what Twitter finds acceptable isn't black and white, and, like society, it shifts over time. Thus, people are concerned that merely interacting with something is enough to be labelled anathema, because of the shades of gray.
They're not censoring them at all - they're adding more context below them that some of the content is misleading. You can still read every insane word in its original context.
You're forming a motte and bailey argument here... inside another motte and bailey argument.
Motte #1: doxxing, calls to murder specific individuals
Bailey #1: 'hate speech'.
And your bailey itself is a motte and bailey!
Motte #2: speech that incites violence or hate against a given group
Bailey #2: "All lives matter" and "It's ok to be white" banned; "#killallmen" and "all white people are racist" OK.
It's always like this. I'd love a pro-censorship advocate to actually advocate what they want instead of endlessly cloaking it in these sorts of smokescreens and sneer labels. If 'hate speech' ever defended all people equally it wouldn't get so much pushback as a concept, but it was always a special protection for groups favored by those in power. Address that.
And people downvote you as a disagreement button on this site instead of qualifying their opinion in words.
But you’re not wrong in your opinion and allowing downvotes without requiring a response encourages groupthink as well.
But where’s the line? Do we as society mandate freedom of speech on all private platforms? Are downvotes on HN protected speech?
You’re right that it’s a tough problem. We are at an inflection point right now. The boundaries of speech freedoms for the next century are being defined right now.
Even more concerning than using downvote as a disagree button is flagging as a disagree button. It's happened to me many times where my good-faith comments have been flagged due to advocating a minority position (specifically, that the COVID-19 lockdown is harmful, just to give context) and I have/had to e-mail Dang to get a human to look at the flagged comments and determine if they were actually violating a guideline.
Freedom of speech is only about whether the government can use what you say to persecute you (other than admitting to a crime to an officer of the law).
Therefore, I don't think we can say that posts and replies on the websites hosted by private companies can be considered protected speech.
That said, treating upvotes and downvotes in the same way as the text in a post or reply is a pretty bad idea IMHO.
> during an era of rampant 4chan-troll white supremacism
I find the narrative on this one not merely dubious but Orwellian. Not only is free speech under extraordinary attack; as we have seen, freedom of association is deemed "in the public health interest" only for the "right" causes.
"White supremacism" -- just like "Russian trolls" -- is a canard used to justify actions which are themselves fascist.
Eh, but there are a couple of reasons in my mind why this might be reasonable.
First, Reddit isn't a 1st Amendment platform; they make the site, they make the rules. Like it or not, that's how it is today.
Second, you have to stop toxicity somewhere; there are a few universal norms we can generally agree upon, like prohibiting abuse of animals and underage content, and as long as the slope isn't too slippery beyond that, I think this is an acceptable form of moderation.
For those of you who aren’t familiar, the “watchredditdie” subreddit is a “safe space” for alt-right users to whine about not being able to brigade, astroturf, and otherwise manipulate reddit to their own political ends the way they did in the run-up to the 2016 election.
This isn't a ban, it's a 3-day suspension. And we don't know what content was upvoted, or why it was against Reddit policy. We do know, however, that Reddit indeed has policies about acceptable content, and most of us seem to agree they're fairly reasonable. So I don't understand the use of the scare quotes in the title.
Why exactly should an upvote, which is publicly visible approval, be different than a comment? I mean... they're both content posted to the site. It's not unreasonable that they be subject to the same rules.
Is the outrage here that votes are supposed to be "anonymous" and this feels like an unmasking of something people felt more comfortable doing "in private"? Is that something we want to protect per se?
This seems like more outrage than meat, honestly. I'd really like to know what the comment was before deciding.
Well, for one, it makes every user individually responsible for identifying content that breaks the rules, and this is very hard given Reddit's history of arbitrarily enforcing the rules.
The title wasn't meant as a scare. The quote from Reddit is "against our policies", which was changed to "against their policies"; I believe that's fair. And they already announced that these bans will be permanent in the announcement a while back (see the other comment for the link).