Hacker News
Reddit started banning accounts that voted for content “against their policies” (reddit.com)
420 points by s9w 14 days ago | 367 comments



I don't really get reddit's efforts to clean their site.

They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of (since some users take any possible opportunity to mention them to the admins and are met with silence or half measures).

They even came up with the concept of quarantined subreddits. I honestly can't think of a single honest reason for them existing. Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain, instead of just outright banning it (unless you want to support it discreetly)?

To be clear, I'm not accusing reddit admins of shadily supporting horrible communities. I'm just saying that, from the perspective of a user that sees them fight the battle, they're doing it so badly that it almost seems they don't want to win, and I wonder what's happening behind the scenes for that to happen.


What is going on at Reddit is that cancel culture has figured out how to take down content it does not find acceptable and, worse, how to persecute those whose content it has taken down or disagrees with.

So, using a combination of email blasts, instant messaging, and more, these groups focus down a site or individual. This can involve simple brigading tactics against posted comments and stories, complete with abuse of the site's reporting process to flag users and content they disagree with. Sometimes it involves pressuring those higher up through public shaming for permitting something they may not even be aware of, with the common innuendo that if they don't take action, they agree with all the hateful aspects (if any) of the target.

When any social site becomes large enough to receive mentions on other sites and media it immediately becomes a target of groups seeking to control the message and attack those who do not wholly support that message. Even minor disagreements can be sufficient to attract ire.

The moral majority is an authoritarian regime, and it will have a more damning effect than any feared religious right that people constantly brought up as an internet or social bogeyman.

The very fact that they now describe ideas they disagree with as making them physically uncomfortable, to the point of declaring them violence, should scare anyone. It is code for saying that no limits are required to remove the threat.

1984 isn't about a government oppressing people; it is about people oppressing people.

* btw - only on reddit can you be banned from subreddits you never visited, for activity on another sub (including, apparently, just voting)


I think they just want to have their cake and eat it too, so to speak, like everyone else.

For some people free speech is like riding the train: when you reach your stop, you get off.


> Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain, instead of just outright banning it (unless you want to support it discreetly)?

Perhaps this is a way to sidestep claims of censorship. Deleting subs can embolden some community themes and messages by allowing them to play the victim of censorship and bolster claims of conspiracy about attacks from the left/right/government/lizard people. So let them carry on while alerting people to what lies ahead.


It also keeps people from flooding into the rest of Reddit. When /r/fatpeoplehate was banned, their users flooded and spammed most other subreddits.


For what it's worth, researchers have actually studied this specific case (r/fatpeoplehate) and concluded the exact opposite: http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf


Interesting. That's a very fancy study, but the second set of graphs shows nothing, while of course the text says that everything went just fine.


It is naive to think these bad users currently stay confined to their hateful communities. They use the rest of the site too. There is even an argument to be made that the rest of the site could be improved by banning these hateful communities, decreasing the value these users get from the site, and thereby decreasing the likelihood that these users will continue to use the rest of Reddit.


The sibling comment to yours links to the study of users from /r/fatpeoplehate and what happened to them. Short version: most of them stayed, and behaved better in better moderated subreddits. The implied conclusion is that a very significant part of noxious community behaviour is situational.


This is most definitely the case. The outrage inside Reddit is massive when there are signs of censorship, even if it’s just a very vocal minority: they set the tone.

Keeping this group of people happy, while at the same time trying to maintain a somewhat "clean" site, is a delicate balancing act. I am certain that quarantined subreddits are a result of this.


My take on quarantined subreddits is that it’s just a tool to remove certain communities from the mobile app so they don’t get in trouble with Apple / Google (mostly Apple, I’m sure).


I think there's something to this, because on at least the iOS app, a quarantined subreddit basically looks like it doesn't exist at all.


...and also how to avoid the media talking about it.


And also how to deal with communities that absolutely should not be spilling over into the rest of Reddit. Reddit, I think rightfully, quarantines a lot of ED-related subreddits.


What does ED stand for?


Eating disorder, I’m sure. There are pro-eating disorder forums around on the internet just like there are pro-suicide forums.


Eating disorder, as noted. "Pro-ana" and "thinspiration" are terms that appear, as examples.

https://en.wikipedia.org/wiki/Pro-ana

Several such subreddits are (IMO rightfully) banned or quarantined.


eating disorder


My impression is that the reason is that there really isn't widespread agreement on the harmfulness of such communities, so they're sort of trying to split the difference. And perhaps also to advocate for seeing them as awful, per corporate sentiment.

The principal example that comes to mind is TD, which seems more obnoxious and tasteless than some sort of evil.

Subs that truly are beyond the pale by wide consensus seem to get dropped immediately and without objection.

As to not wanting to win, well, yes, dropping everything controversial means losing a lot of page-hits, and as a business, they're likely wary.


There is a marijuana related subreddit that is geo-banned from Canadian IPs.

Reddit just says: "This content has been restricted in your country in response to a legal request. Geoblocked in Canada due to General legal request."

Is there any way to push Reddit to disclose who sent the request?

I’d love to followup with that agency, but we have a lot of police services in Canada, if it was one.


You seem to have forgotten to name the subreddit.

Reddit stopped copying Lumen (Chilling Effects) around 5 years ago.


It’s http://old.reddit.com/r/canadianmoms

(NSFW I guess, but not porn)


Thanks, you're right — can access from the US, can't from Canada, with no explanation.


From Canadian IPs? It is legal here now, so that is odd.


Only legal if you buy it through the state-approved sellers or grow your own.

Though the subreddit (was forced) to enact a “no sourcing” rule already in the past.


> They even came up with the concept of quarantined subreddits. I honestly can't think of a single honest reason for them existing. Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain, instead of just outright banning it (unless you want to support it discreetly)?

In what world is refusing to ban something from your online platform equivalent to "supporting it" (discreetly)? Reddit is a business: the business of getting ad money for providing an open discussion forum. They are free to decide what they host and what they don't, but simply hosting something (which in their case means doing business with the people who post that content) doesn't mean they support it. Declining to ban something is not the same as supporting it.

It's like saying that Amazon _supports_ pornography because AWS allows hosting pornographic content.


The missing component of your thought process is who, exactly, deems subreddits not worthy of existence. Many people view pornography as an extremely damaging problem and addiction; would you want all 18+ subreddits removed because somebody "can't think of a single honest reason for them existing"?


> who, exactly, deems subreddits not worthy of existence.

That would be the company which owns reddit, through the admins. Is it fair? No, it isn't, but reddit has no requirement to be fair (although it would be nice if it was).


Ten years ago I would have said, "No censorship. Turn on the firehose of information, and let me choose what's 'acceptable' and what's not."

No more.

That presumes some sort of equal playing field and a meritocracy of ideas.

That's clearly not the case. The reality is that good-faith content takes more effort than bad-faith content.

The 2016 US Election in particular showed us what happens when a platform takes a "hands off" approach - platforms become absolutely saturated with false, hateful information... much of it produced by large and even state-level organizations.

It's not just evident when it comes to political content.

Craigslist. Amazon. Etc. None of these platforms works with a completely laissez-faire approach on the part of the platform owners. To think otherwise is utterly naive.

Platforms need to curate. There's no other way. Yes, this means we will need to place trust in those platforms and monitor their curation. Platforms will, in turn, have an obligation to be transparent w.r.t. how they curate.


"the dictum that truth always triumphs over persecution, is one of those pleasant falsehoods which men repeat after one another till they pass into commonplaces, but which all experience refutes. History teems with instances of truth put down by persecution. If not suppressed forever, it may be thrown back for centuries."

– John Stuart Mill, On Liberty (https://www.utilitarianism.com/ol/two.html)

Mill seems to agree with you.


> The 2016 US Election in particular showed us what happens when a platform takes a "hands off" approach - the platform is absolutely saturated with false, hateful information...

And yet, presumably you were able to employ critical thinking, see past all this, and vote the "right" way despite an alleged deluge of hateful misinformation, as were 48% (a plurality) of your fellow voters.

Where is the evidence that the level of misinformation is any higher now than in the past, or that censorship (whether by public or private entities) is suddenly a desirable thing?

History has shown that we didn't need to censor and oppress Communist thought in the US during the Cold War because of its alleged threat, and even because Communist thought was being supported by foreign actors. In the upshot, communism discredited itself just fine, and in the meantime, our censorship made us intellectually and morally poorer. Why should things be any different with the new far right?


This conversation will definitely be easier to follow if you get up to speed first: https://en.wikipedia.org/wiki/Russian_interference_in_the_20...

    Where is the evidence that the level of misinformation is any higher now than in the past,
See the above overview for a start.

    History has shown that we didn't need to censor and oppress Communist 
    thought in the US during the Cold War because of its alleged threat, 
    and even because Communist thought was being supported by foreign actors
The similarities to today's situation are easy to see, but they are dwarfed by the differences.

1. "Communists" had no way to create and disseminate information on such a massive scale, in a manner that is nearly indistinguishable from good-faith actors. There's no analog for that in the Cold War's "Red Scare".

2. During the Red Scare, the possibility of Russia infiltrating and influencing the US to any real degree was laughable. It's not laughable now. It demonstrably happened and is happening.

3. "Censorship" is when a person is not allowed to express their views. That is not what is being proposed here. This is about refusing to hand a megaphone to bad actors.

Reality does not conform to your ideals of an egalitarian marketplace of ideas. Content farms with modest funding and modest staffs can outproduce and out-influence millions of regular voices. Relying upon the populace to suddenly become savvy is not realistic. Either we need to turn this into a full-scale information war of competing content farms and state-scale misinformation campaigns, or platform providers need to combat things at that level.

    And yet, presumably you were able to employ critical thinking, see past 
    all this, and vote the "right" way despite an alleged deluge of hateful 
    misinformation, as were 48%, a majority, of your fellow voters.
Sure, yeah.

I also have an IQ in the 90-99th percentile range, a degree in computer science and have been immersed in online culture since before web browsers existed.

The vast majority of the human race does not have those sorts of advantages. Realize that the HN demographic is not representative of the entire human race.


Generally, your points are well taken. The only quibble I have on a factual basis is here:

> 3. "Censorship" is when a person is not allowed to express their views. That is not what is being proposed here. This is about refusing to hand a megaphone to bad actors.

Censorship, historically, was preventing objectionable books from being published. In many forms of censorship, there was nothing theoretically preventing an author of objectionable books from handwriting them and distributing them privately, so long as they didn't draw too much attention to themselves. The historical censors could have made an identical argument to yours: they were merely depriving the objectionable view of a "megaphone" (the publisher), not eliminating the view entirely.

But given what you have asserted above about the importance of breadth of reach for views, it would seem to me that preventing a view from being widely disseminated in any practical way to people who would otherwise freely choose to read or hear the view is the very essence of censorship.

But more broadly, my real problem with your viewpoint is perfectly exemplified here:

> I also have an IQ in the 90-99th percentile range...

> Relying upon the populace to suddenly become savvy is not realistic.

Firstly, with deep respect, this is incredibly arrogant. But nevertheless, you may be right about this, that only the enlightened and educated few can discern truth from falsity and make informed choices.

But if you are right, don't you see that this undermines the bedrock assumptions and principles of democracy? The masses must be led, and shown the "right" information and protected from the "wrong" information? By whom? And what is to prevent those elite and enlightened few from acting in their own interests rather than those of the ignorant mob?

I see within your argument, which may not be factually wrong, a powerful argument in favor of authoritarianism and oligarchy. Any argument that leads to such conclusions is worth a fair amount of scrutiny, no?

Even if it is a fiction that all voters are equally intelligent and informed, sometimes we have found that certain fictions are very important prerequisites for creating a desirable society. For example, the fiction that "all men are created equal". They aren't. But we use that fiction in very important ways to create a more just and equal society. Another is the fiction of free will. Free will does not exist. But still, we treat people as if they had free will, because the alternative is disempowering and decouples people from any responsibility for their actions.

So, I think "the demos makes more-or-less informed choices that are more-or-less in its own self-interest" is another of those necessary fictions, necessary to prevent us from regressing to feudalism or worse.


    Firstly, with deep respect, this is incredibly arrogant. 
    But nevertheless, you may be right about this, that only 
    the enlightened and educated few can discern truth from 
    falsity and make informed choices.
I meant it purely in a humble way, and in the spirit of recognizing my own privileges. Not as some kind of value judgement. I am nothing special by HN standards, that's for darn sure. I'm probably below average here.

But it's important to examine our own privileges.

Intelligence (while admittedly not well-represented by a single number like IQ) is more or less a lottery we win or lose at birth. Bragging about intelligence is like bragging about being tall. It's not something earned; it wouldn't make sense for anybody to ever brag about it.

Critical thinking is a skill that is a serious luxury. It relies on some combination of intelligence, time to hone the skill, access to education, access to information, and/or a serious autodidactic streak. A lot of people live lives where one or all of those is missing, often through no fault of their own.

Other things help me to separate online misinformation from information as well. I was in high school / college as the web came of age, at a time when I had the time and inclination to dive into it from the beginning.

I was born into a stable middle class household in which we could afford a computer and internet access. More privileges many do not enjoy.

All those factors were crucial. Take one away and I may well have struggled to comprehend the difference between real and fake news.

My sister is a prime example. Many of the same advantages as me, but born earlier. Missed the internet revolution. Never been comfortable with technology. Constantly falls for fake news. Is that her fault? Well, yes, ultimately. We must all be responsible for ourselves.

But it's also essentially "a well-financed misinformation industry" versus "a middle-aged woman who is technologically illiterate" and wow, that is NOT a fair fight by any measure. If we are basing the future of our society on that fight having a favorable outcome, we are really fooling ourselves.

    But nevertheless, you may be right about this, that only the 
    enlightened and educated few can discern truth from falsity 
    and make informed choices.
It's not even easy for those with the intelligence and inclination. The "enemy" here is not a crazy person, dressed in rags, wandering down the street muttering that the moon is made of cheese.

The "enemy" is well-funded organizations who have serious time and money -- and sometimes state level backing -- and are devoted to pushing a mix of legitimate and fake news that takes serious effort and knowledge to separate from the real thing(s).

    I see within your argument, which may not be factually wrong, 
    a powerful argument in favor of authoritarianism and oligarchy
I certainly don't want that. If anything I'd say my arguments point towards technocracy where we would favor experts over pure mob rule. That certainly presents its own problems though.

There are no easy answers.

A purely laissez-faire free for all absolutely does not work. We see it time and time again. Bad-faith information always utterly drowns out good-faith information, like spam emails or search results drowning out the legitimate stuff.

Solid, best-effort reporting takes time and sometimes the results are boring. Willful misinformation is orders of magnitude easier to produce.

Education and critical thinking are utterly essential. We should invest heavily in these areas. Democracy relies on them, utterly. But it is also fantasy to think everybody will become enlightened information-processors, and even those of us who are competent at it need help wading through oceans of garbage. At some point, we do need trusted providers of curation. While not ideal, it is realistic, unlike hoping that most members of society evolve into galaxy-brained autodidacts.

This is how any developed society functions. We can't all evaluate drugs and therapies, so we need the FDA. You could replace it with one or more private-sector ventures, perhaps, but at some point you would need to rely on something. We can't all make clothes or food so we trust others. We trust others to design our roads and keep them safe. If I had unlimited time and brain cells it might be nice to do these things myself, but it is not even a little bit realistic.


> The missing component of your thought process is who, exactly, deems subreddits not worthy of existence.

The same people that deem them worth quarantining, presumably.

We could debate the benefits and drawbacks of moderation, but by the time you have quarantines in place, the ship of not considering yourself a moderator has long sailed, which is why it doesn't make sense to me as a measure.


The bans, as announced in Reddit's annual Transparency Report this past February, apply specifically to actions in quarantined subreddits:

> Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.

https://old.reddit.com/r/announcements/comments/f8y9nx/sprin...

Quarantines offer greater flexibility in monitoring and management, with fair warning to members.


Reddit is an anonymous website, in a unique way that Facebook, Twitter, and even Hacker News (simply because of the small community) are not.

A lot of its success is built on content which is not as acceptable as one might desire. See what happened to Tumblr when they cracked down.


Reddit isn't even remotely anonymous. Pseudonymous at best. Anonymous forums are completely incompatible with user identity centric moderation mechanisms (Karma, Gold, rep etc...)

Once you assign usernames and attach any type of value counter to an account, you've effectively dumped true anonymity out the window by essentially creating a pseudonymous marketplace.

People may think "How pseudonymous do you need to be?" In today's world? Very. Especially given the propensity for deplatforming and reputation based social warfare.


Anonymous relative to Facebook and all the other social media sites. Sure, having user accounts is not completely anonymous, but it is good enough for 99% of people.


Not having a real name policy isn't the same as being anonymous. Back in the day my irc nick and my WoW username were far more well known than I, the human, will ever be.


Anonymous is probably the wrong term, but I'm not sure there's a better one.

I think the point is that there's a distinct disconnect between your profile on Reddit (and any other sites you share the username with) and your physical person, at least up until the point where you (ideally) make the connection public yourself if desired.

It's anonymous insofar as the "person" on reddit is not easily attributable to the physical person themselves.


> Anonymous is probably the wrong term, but I'm not sure there's a better one.

There is one. It was used a couple comments up.

Pseudonymous. It's an actual word, not just made up.


Ah yep, I glossed over that somehow.

My point stands, though: despite the misuse of the word anonymous, there's a real difference between how Facebook treats identity and how reddit does. And it massively affects how communities on both interact.


You can make a new reddit account in 20 seconds if you don't want people to tie the comment you're about to make to your main account which is already a step removed from you. If you want Throwaway8675309 to develop a reputation, that's an option you have, not a requirement.


No surprise, but Throwaway8675309 actually exists on reddit.

https://www.reddit.com/user/Throwaway8675309

Also no surprise, the account was active for only ~5 days.


>They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of (since some users take any possible opportunity to mention them to the admins and are met with silence or half measures).

I think their main challenge is they have a cultural/temperamental resistance to making decisions on a case-by-case basis. Any time there is a problem it seems like they try to overgeneralize and come up with a sort of rule or heuristic that can apply in every case and situation. But since the "toxicity" of a community is usually more subtle than that they end up always being way too reactive.


> I don't really get reddit's efforts to clean their site.

That's easy - advertising.

The same reason YouTube cleaned up its act. And Gfycat. And a dozen other sites that found themselves at the wrong end of their advertisers' ire.


> Once you've deemed a community so harmful that you don't want people accidentally stumbling into it, why go out of your way to allow it to exist behind a curtain

Since we all know we're talking about The_Donald, I'll just mention them specifically. Alexa traffic data considers Reddit the #6 website in the US -- they are huge, and their major actions will be noticed. If they ban their top pro-Trump community, Fox News will be talking about it later that day. Reddit execs will be answering questions in front of the US Senate a week later. Phrases like "election interference" will be thrown around. Nobody wants that.

So instead they do it subtly: quarantine The_Donald, which means they don't show up in site-wide lists, can't be accessed through the mobile app, and can't be indexed by search engines. This stems their growth. Later, start replacing their moderators. Eventually you've taken it over and shut it down, and they leave. Now if Tucker Carlson wants to tell his audience about it, he has to explain to a bunch of 60-somethings what a "forum moderator" is. Not happening.

Honestly, I think what they did worked out best for everyone. Reddit is happy to be rid of them, and they're happy with their new website, TheDonald.win. Everybody wins.


But the sub has effectively not "existed" for months; there are no new posts:

https://www.reddit.com/r/The_Donald/new/

There is a pinned mod message which leads you to the new site but that's entirely out of reddit


Probably they profit from some horrible communities and lose on others, and there is no clear distinction between the two, so they started to quarantine the less profitable ones to stigmatize them before banning them.

They're profiting off anger, so the "old" ones being angry has no downsides.


>Probably they profit from horrible communities

There are no ads in quarantined subs and you can't buy any reddit awards to gift to others


I mean, T_D pays and WPD takes; if T_D is not quarantined but WPD is, it almost makes sense that WPD goes but T_D stays.


Still drives traffic to the rest of the site.


Because competitors exist


Not really. And forcing all of your unwanted/toxic subreddits onto Reddit clones only prevents those clones from ever gaining traction.

Take a look: https://voat.co/


Subreddits can be unwanted by Reddit, but otherwise non-toxic.


This is correct, but the toxic, racist, alt-right, pro-fascism posts covering the front pages of sites like voat and ruqqus make me and many others wholly uninterested in these sites.

If someone wants to provide an alternative to reddit but they don’t want to moderate/censor content that makes people uncomfortable, Reddit becomes the lesser evil.


Doesn't that mean that Reddit is doing a half decent job? If all that stuff was on Reddit (haha if...) wouldn't it turn you off of the site?


Right, but also it means that competition will always be at a severe handicap.


I never visit reddit's frontpage; I visit the individual subreddits that I frequent instead. In fact I wonder why the frontpage exists. Voat would do themselves a service if they removed the frontpage.


These sites rely on network effects, which become prominent after a certain threshold of users joins the site. If they reach critical mass, they will suck in the rest of the extreme right, then the Republicans, then slowly everyone, because people like controversy and will seek out places where they can have it.

Reddit is clearly a left-wing site now in ways that it wasn't when it started (when libertarian was the dominant ethos). All their "common word" subreddits, such as news, politics, technology, and science, are purely leftist echo chambers by now. The few right-wing corners left are already isolated and ready to make the move; in fact, they may even decide they should jump ship. What better time for /r/t_d to go somewhere else than now, when people's interest is intensely focused on the elections and they can easily rally all their users to follow them? Wherever they go, more people of other persuasions will follow, because people love telling other people that they are wrong on the internet.

The question is whether those competitors can handle the traffic and whether they give moderators the features that reddit does.


T_d already split into an independent website four months ago.


You mean 4chan/8ch?


I think it was set up (or at least supported for this long) mainly due to thedonald subreddit. The admins were too weak-willed to outright ban that subreddit and deal with the PR and political fallout, but they wanted to prevent those posts from filtering up to the main site. Now that all the Trump people have left reddit I would guess that the concept of “quarantine” is no longer needed and goes away soon, in favor of simply banning subreddits. But we can all rest easy knowing that we can still watch people die or all sorts of weird porn - reddit has really cleaned up their act.


I think quarantined subreddits play the same role as "NSFW" tagging on posts: They're not banned, but it improves the user experience to not see them by accident.


Yeah, but it's always the hate subreddits that make the headlines when they're quarantined.


If you let a bad community be bad in their dark corner, they won't spread as easily to the rest of the site. You can easily see that the accounts are toxic by seeing that they participate in garbage communities. And it's a great honeypot for authorities, be it for political astroturfing and nationstate stuff, or for borderline-illegal content.


>If you let a bad community be bad in their dark corner, they won't spread as easily to the rest of the site.

I suspect this is the reason. I observed the opposite: a certain type of content disappearing after a separate community was created for it.


>I don't really get reddit's efforts to clean their site.

You don't get why an online forum might have an interest in ensuring every other post isn't a dick pic?

If you can understand that (and maybe you don't), then you DO understand their interest in moderating their own website, whether or not you agree with their moderation standards in practice. A dick pic may be obvious to you... but so long as the picture depicts someone >18yo, it is legal and free speech.

>They sometimes come up with measures like this, while routinely ignoring the development of horrible communities that they're aware of...

Again, isn't it their right to determine in which forums/threads dick pics may in fact be appropriate, and in which they aren't and should be moderated/removed?


> You don't get why a online forum might have an interest in ensuring every other post isn't a dick pic?

The confusing part is not that they're attempting to clean the site - it's their ineffective, half-assed ways of doing it.


Reddit admin message:

> Your account has been suspended from for breaking the rules. Your account has been suspended for 3 day(s).

>> You recently upvoted a post or comment that was determined to be against our policies. Abusive content is not acceptable on Reddit nor is engaging with it. Please be thoughtful about the content that you interact with.

Interestingly nothing in the Reddit policies seems to mention voting or engaging with content: https://www.redditinc.com/policies/content-policy


I got one of these warnings, and I've narrowed down what I engaged with to a Hard Times article. Strange times when reddit finds satire too spicy for their site.


This is super stupid. If they're so against the content why allow it to exist in the first place?


Whether they intended it this way or not, users will now be worried they're upvoting the "wrong" thing and will be less likely to vote unpopular opinions and political views, just in case.


Wrongthink is becoming a reality.


Arguably in 2020, you should be very careful about expressing any opinion that is in any way unpopular. Or that even could be construed as such.


I feel like people have forgotten what free speech is.


I grew up in a very fundamentalist (Christian) area, so the lack of it feels very familiar.


I imagine this is reactive, since they can't proactively moderate their content. When they do get around to moderating (like checking reported comments), they can then penalize people who promoted the unacceptable content.

I think there should indeed be penalties for upvoting e.g. death threats.


Leaving the content up allows Reddit to track users' engagement and categorize/map their viewpoints.


They are trying to train their user base to moderate content correctly. If it's against the site's policy, users should down-vote the content and report it.


That message is so stupid. What's there to learn from the punishment if you don't know what you did wrong?


This actually strikes me as an intelligent (albeit Orwellian) approach to rules enforcement.

If Reddit's goal is to encourage broader self-censorship of wrong-think, the message makes perfect sense. If Reddit informed a user that they broke Rule A, a rational user would learn a specific lesson. They would learn that they can avoid punishment by not interacting with content that breaks (or might break) Rule A.

By punishing without an explicit explanation, a rational user must conclude that, to avoid punishment, they cannot risk interacting with content that breaks (or might break) any of the rules on the website.


You learn that you are at the mercy of the platform owner. And that if you withhold enough details of the punishment, you can play the fool with the courts.


You can make as many accounts as you want, no need to use an email address.


This actually changed in the last few years.


No it didn't, just click next when it asks for an email address.


As well, punishing someone for engagement without malice, with content you later determined to violate policy, is a bit obtuse.


They officially announced this three months ago: https://www.reddit.com/r/announcements/comments/f8y9nx/sprin...

> Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.

Until now that has mostly been warnings for posting in "bad" subs. Now after most of those are gone I guess it's time for the next phase.


Full message:

> I’d like to share an update about our thinking around quarantined communities.

> When we expanded our quarantine policy, we created an appeals process for sanctioned communities. One of the goals was to “force subscribers to reconsider their behavior and incentivize moderators to make changes.” While the policy attempted to hold moderators more accountable for enforcing healthier rules and norms, it didn’t address the role that each member plays in the health of their community.

> Today, we’re making an update to address this gap: Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.

This seems strange to me. If this content is violating reddit's policies then why don't they ban them? It seems awkward to have this middle ground of "semi-banned" content where users get suspensions if they interact with it too much. Why create these invisible land-mines for users? I imagine it's because banning this content outright would make Reddit admins appear even more biased, or because the content doesn't actually violate any policies but puts off advertisers.


One of the easiest reasons is the order of these things: post being made, post being upvoted, post being recognized as a ToS violation.


I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act. If they want to police lawful content that is fine but then they are no longer an open platform and they should forfeit their Section 230 protections. I dislike the ability for online platforms to have their cake of protection against lawsuits and also eat their cake of policing lawful content they don't like.


So no site should be allowed to have moderation policies that include things like "no trolling," "no flaming," "no swearing," or anything else like that for things that don't cross into "illegal" abusive speech? Without becoming a target for filling up with illegal shit to overwhelm the moderators and cause legal trouble for the owners?

So HN is gone, too, then.


> So no site should be allowed to have moderation policies that include things like "no trolling," "no flaming," "no swearing," or anything else like that for things that don't cross into "illegal" abusive speech?

Any site can have any rules they want. But they shouldn't expect to be able to manipulate and control discussion and face no backlash from users.

> Without becoming a target for filling up with illegal shit to overwhelm the moderators and cause legal trouble for the owners?

'Trolling', 'flaming' and 'swearing' are not illegal, and honestly, most complaints about 'trolling' I see are just people who disagree with what they're reading.


> But they shouldn't expect to be able to manipulate and control discussion and face no backlash from users.

You're saying they should face backlash from government regulators not from users.

If there's a big user backlash, those users are already free to move to whatever site they want, or make their own.

---

Re: your comment about trolling, etc., not being illegal: yes, exactly. So if a site wanted to enforce some sort of "family friendly" policy, you could simply bomb them with truly illegal shit instead and get them in trouble with the regulators for not being able to keep up, as they would be liable for that, since they aren't an "anything legal goes" platform.


> You're saying they should face backlash from government regulators not from users

I never said that. But it's probably a good idea to have some sort of oversight for a massive platform that censors and manipulates discussion of millions of people. You may think their policies are totally justified and correct (they aren't), but the specific opinions they choose to promote may change to something you are against.

> If there's a big user backlash, those users are already free to move to whatever site they want, or make their own.

Yes, they are also free to shit on the website and point out its flaws.

> So if a site wanted to enforce some sort of "family friendly" policy, you could simply bomb them with truly illegal shit instead and get them in trouble with the regulators for not being able to keep up, as they would be liable for that, since they aren't an "anything legal goes" platform.

Trolling is not illegal; it's not even a specific thing. It's a very ambiguous term used to describe 'comments that make me feel bad'. Trolling and posting illegal shit are completely different. Sure, ban illegal stuff - you kind of have to - but don't use 'trolling' as an excuse to ban stuff you disagree with.


Separate reply for the other side of this thread. I honestly don't know where you're getting it with your comments re: trolling.

Do you honestly believe that a discussion forum site that wanted a family friendly policy shouldn't be able to have any liability protections? That if someone started flooding them with child porn, say, at a rate they couldn't keep up with, they should have to shut down because they'd otherwise be liable for all that content due to their having a content moderation policy that forbid certain types of legal speech (say, swearing)?


Dude, I honestly don't think you understand what 'trolling' means. You already agreed that trolling is not illegal, why do you keep trying to use examples of illegal content to demonstrate why 'trolling' should be banned?

And what does child pornography have to do with swearing?

> Do you honestly believe that a discussion forum site that wanted a family friendly policy shouldn't be able to have any liability protections?

Again, I never said that.


You keep bringing up trolling as a term in general, I tried to make it specific.

The pitch being made in this subthread was "I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act."

So you lose liability protection if you censor lawful content.

So if you want to ban swearing, you become liable for your user's content. Which makes you a WIDE OPEN target for bad actors who could post whatever illegal content they want faster than you could moderate it.

I'm trying to illustrate why I think `cwhiz took an absurd position, and am not using the term "trolling" in general now. I've given a specific example of "legal content some communities might want to prohibit" as well as "illegal content that bad actors could use to get the owner of that site in trouble in this proposed world."


Okay, to make it clear - I don't think sites should lose 230 protections for having a moderation policy. I do think that the excessive manipulation and censorship of dissenting opinions on sites such as reddit is a bad thing however.


I do. If you moderate your site, you endorse its content and should be held responsible for it (i.e be treated like other publishers).

If your laws permit speech you don't like, either fix your laws or fix yourself.


> But it's probably a good idea to have some sort of oversight for a massive platform that censors and manipulates discussion of millions of people. You may think their policies are totally justified and correct (they aren't), but the specific opinions they choose to promote may change to something you are against.

Think through the mechanics of how this oversight would work.

Someone will have to decide "is this site sufficiently neutral in how they moderate UGC?" So let's spin up a government department for that; maybe call it part of the FCC, maybe make it a new one, who knows.

Now let years, maybe even decades, go by. Now let's say some site you like gets essentially taken out - since massive sites aren't going to be able to do manual moderation of everything - by a regulator who leans to the opposite side of the political spectrum from you.

Now you don't have much recourse, since you aren't free to go off and try again - the regulator would just get that site too.

I have a hard time seeing how this is better.

And this is a weird stance for conservatives to be preaching, since the previous similar thing, the fairness doctrine, was no favorite thing of theirs.

This is one of those things where you can't say "only the government can solve this problem" unless the only speech you want to see protected is fairly-mainstream government-endorsed speech.


> Think through the mechanics of how this oversight would work.

There are many ways this can work, and it's not necessarily simple. I don't have a specific proposal.

> Now let years, maybe even decades go by. Now let's say some site you like gets essentially taken out - since massive sites aren't going to be able to do manual moderation of everything - by a regulator who leans to the opposite side of the political spectrum you do.

1) Manual moderation of massive sites is already going away for the most part.

2) Sites like reddit have swarms of moderators and the ability to report, and legal notices to take down specific content can also be sent.

3) We have laws about 'illegal content' with which these websites are already complying on a massive scale.

4) My suggestion was oversight of excessive moderation and manipulation of opinion, not what your example suggests, which is the opposite. You literally don't have to moderate anything to not be charged with 'excessive censorship' or whatever.

> Now you don't have much resource, since you aren't free to go off and try again - the regulator would just get that site too.

We have a justice system for a reason.

> I have a hard time seeing how this is better.

We can all come up with 100 different imaginary scenarios where some very generic suggestion may fail. I don't see the point. Do you think there should be no laws about content? No libel laws? No copyright enforcement? Where do you draw the line?

> And this is a weird stance for conservatives to be preaching, since the previous similar thing, the fairness doctrine, was no favorite thing of theirs.

I'm not a conservative, but what I find more jarring is the people who want sites like facebook forced to be censored and moderated politically, while reddit can do whatever it pleases as long is it aligns with their political ideology.

> This is one of those things where you can't say "only the government can solve this problem" unless the only speech you want to see protected is fairly-mainstream government-endorsed speech.

Again, I did not suggest that there should be laws about what content is allowed. I actually think we already have too many such laws - I would personally do away with DMCA and other such bullshit. What we should have is a check on mass-scale censorship and manipulation of opinion by corporations.


> There are many ways this can work, and it's not necessarily simple. I don't have a specific proposal.

I was talking about the proposal from `cwhiz, in which the government would revoke protections under existing current law. I think there are holes in that suggestion that are miles wide, that will extend to pretty much any other proposal that depends on the government deciding if a site is over- or under-moderated.

Without a specific proposal to talk about, or specific opinions on "should a site have to relinquish editorial control or rule-setting ability to enjoy liability protections from stuff done by users" (I say "no"), I don't see much more interesting stuff to debate. Those are real, active questions and proposals being made; other hypotheticals could go on forever but are less relevant to now.

But if you aren't interested in those specific points, I don't know why you're participating in a thread that was specifically about the questions around that current law.


So, you're arguing with me about something I never even said, while ignoring my point about excessive censorship and manipulation of public opinion? 'Revoke 230' or 'keep 230' are not the only possibilities in this universe.


That's actually not the point of Section 230. The point is that if you remove Section 230 and then you accuse someone of trolling, flaming, or swearing, that could be considered libelous.

If you would like to learn more, and see exactly what happened when Section 230 was not in effect, check out this case:

Stratton Oakmont, Inc. and Daniel Porush, v. Prodigy Services Company, "John Doe", and "Mary Doe".

Stratton Oakmont (yep, the one that was made by Jordan Belfort, the Wolf of Wall Street), won that case against Prodigy and, yes, John Doe and Mary Doe. You, the user, are John Doe or Mary Doe. Section 230 protects you.


What's not the point of section 230? I literally didn't say anything about section 230 or suggest that sites that moderate should have those protections revoked.


You are right, my apologies. You were responding to another person who was having a discussion about Section 230. I thought you were adding on to their thoughts about Section 230. If your point is entirely that users can backlash against a platform, then your point is correct.


I'll also add that the excessive censorship and manipulation of opinion on massive sites such as reddit, facebook, youtube, etc. is somewhat disturbing. People tend to think that only governments can censor excessively, but the fact is most public discussion in the West now happens on a few massive private platforms, and these platforms have a lot of power over public opinion.


Free speech is not so daft a concept. Where some restrictions are necessary, the principle of “viewpoint neutrality” still works to prevent censorship. This is the approach that First Amendment jurisprudence takes. So, where the government allows pro-abortion expression, it must also allow anti-abortion expression. But it can still set volume limits and control crowd movements and so forth, as long as it does not discriminate on the basis of viewpoint. And this applies even where its more general obligation to allow all expression does not hold.


But Section 230 doesn't account for it... I honestly don't think it's practicable as much as it pains me. You rely on some goodwill from the site, and if you open it to lawsuits you will get none.

It's actually very insidious the way HN (and Reddit) control what we see though. There is almost no transparency, accountability, or oversight. Moderators make mistakes and we never even know it. We don't know what we don't know. The culture of silicon valley and hacker culture are being manipulated by a small number of employees at a proprietary investment firm. This is very far from ideal even if you do trust the intentions of the current employees.

I'd be in favor of a law that required uncensored free speech on all forums but that also allowed users to opt-in to moderation. Even if 99% of users opted in to moderation it would still leave an escape valve for totally uncensored free thought.


I think you are, sadly, completely correct. And that is why Section 230 is IMO unworkable as it was conceived, as it provides exceptions on a justification that is completely false.

Honestly I don't see what the best alternative is. Probably the segregation of comment platforms and sites. If Section 230 were to be substituted with something effective in regard to speech protection, or conversely the right of platforms to curate comments, then still this would only apply to the US, and it would be a significant factor in moving the comments elsewhere.


There should be a users/penetration threshold above which enforcing a cultural code of conduct waives immunity. GDPR uses a similar threshold, and unless you're seriously arguing that "trolling", "flaming", and "swearing" are being moderated successfully on e.g. Twitter, it seems practical.


Without section 230 protection it would be impossible for the site to stay open, so in effect you're saying sites that moderate user content should be shut down. Does that seem like an accurate interpretation of your position?


Section 230 is another classic legalist versus legal realist versus "whatever absolves me" trap that people fall into on this forum.

Legal realism prevails with this stupid law; it will always prevail. You can moderate / review / editorialize content, including political content, while simultaneously enjoying Section 230 protections.


The people who argue this point are arguing for a change in the law.


I'm fine with that. They decide to take a political stance, they shall deal with the legal consequence of their actions.


Ok. In my view that's unreasonable. Website operators should have the freedom to moderate their sites without fear of being shut down. Moderation is part of the product, they should have the freedom to create online communities that cater to specific audiences, including with respect to political partisanship if that is where their values lie. It seems unreasonable that all the users that enjoy the site as is should lose access to it because of some users who disagree with the site's moderation policies.


removed


> they should also disclose their bias

This isn't a practical idea. People aren't always aware of their own biases and critics are also inclined to see bias even when there is none. There is no practical way to determine bias levels.

> Reddit is very happy to claim they are unbiased, neutral, hate free.

Are they? Can you point to an example of such a claim?

> But some preferred subreddits will ban people for having the wrong gender or wrong race

Subreddits are run by users.

> They also tolerate hate speech, incitement to violence or even advocacy for genocide , if it matches their ideology.

Reddit would say they don't tolerate those things, you say they do, how do we determine who is right?


> Subreddits are run by users.

No, they are run day to day by users, but ultimately are at the discretion of the site owners. Case in point, this article we are discussing.

> Reddit would say they don't tolerate those things, you say they do, how do we determine who is right?

The behavior of the site owners? Reddit site owners treat different subreddits differently and different viewpoints differently, although there are single-characteristic differences (gender, race, etc.) because of the people and topics generated from those groups. Pre-censoring is not uncommon, which has led up to this wrongthink penalty.

If you're challenging the history of reddit, you aren't informed enough to be discussing this and it's not worth rehashing.


> but ultimately are at the discretion of the site owners

The site owners don't ban users based on race and gender which is the claim I responded to.

> Reddit site owners treat different subreddits differently and different viewpoints differently

Well yes, that is how standards work, "different views" are treated differently based on how they stack up with respect to the standard. I understand that you're arguing that reddit applies the rules selectively, but reddit would argue that they apply them fairly, so how do you suggest this kind of conflict be resolved at scale? Who should decide if reddit is being fair or not?

> If you're challenging the history of reddit, you aren't informed enough to be discussing this and it's not worth rehashing.

If you want to make an argument then make it, but condescending declarations of a monopoly on the informed position does not do your case any favors.


> does not do your case any favors

I'm not making a case. I'm describing the state of the reddit culture. Whether you believe it or not, is incidental.

> The site owners don't ban users based on race and gender which is the claim I responded to.

Your statements were broader than that. Revisionism aside, you can get banned for voting on a post, and there are lots of rules (written and unwritten by each community) that can get you banned from the subreddit or reddit as a whole. Making statements about YOUR gender (or a number of other attributes) can also get you banned. Here, I'll do a google search for you (since you can't be bothered) for subreddit bans, to start you off: https://www.google.com/search?client=firefox-b-1-d&q=subredd...

You aren't informed enough to be discussing this and it's not worth rehashing.


> I'm not making a case.

That much is clear.

> I'm describing the state of the reddit culture. Whether you believe it or not, is incidental.

Very convincing. Oh, I know, you don't care about that in the least.

> Your statements were broader than that. Revisionism aside...

No, they weren't. You stated that the subreddits are ultimately controlled by the site owners, and I explained that the site owners don't ban people based on race and gender. Simple. Revisionism indeed.

> Making statements about YOUR gender (or a number of other attributes) can also get you banned. Here, I'll do a google search for you

The results of the google search do not support the claim you're making - like, not even a single result. Beyond that, mods can ban users for whatever reason they want, but the site owners do not ban people based on their race and gender; that claim is totally false.

> You aren't informed enough to be discussing this and it's not worth rehashing.

lol, a laughable assertion coming from someone who can't even cherry-pick google search results to support their argument. Stop wasting my time.


> cherry-pick google search results to support their argument.

Not cherry-picking - making an observation that documents the history. It's an ongoing, and well covered, topic which you can demonstrably research without difficulty. Denial doesn't help either of us and causes me to question your sincerity. Good luck with whatever.


[flagged]


So no rebuttal to anything I actually wrote? Are you sure you're not the troll, because I'm just trying to have a discussion and what you've replied with is bad-faith snark.


What you're saying is that because they've decided to take a political stance the current law should be changed to impose new consequences on them. As the law stands today there are no consequences.


The core here is that answering the question of "did the site decide to take an unreasonable political stance?" vs "did the user decide to take an unreasonable political stance?" is not as black and white as you would like.

The answer is going to be a moving target. You're gonna have some government bureaucrats having to make rulings on these regulations.

And many of the proponents of this seem to fail to imagine a world where Donald Trump is ever not president. Imagine the power this would give the government against someone who plays fast and loose with 'facts'. "We've decided that the statements in this post were intentionally misleading, therefore they are a political statement, and the owners of the site promoted them in a way that they didn't do to this other post, therefore the owners are making a political statement and this is not a neutral site, which makes it liable for these other comments from other users that we're interpreting as violent threats."


Sure it will be possible to stay open: just double down removing offending content and defend from the lawsuits. Large companies routinely have multiple ongoing lawsuits.


> just double down removing offending content and defend from the lawsuits

This is unsustainable though, there is just too much content to filter. There is also the fact that lacking such protections the site will be targeted by bad-faith actors who intend to harm the site by posting lots of legally problematic content.


This is not a practical way for anyone to run a business.


What lawsuits? They are well within their right to remove whatever they want. Maybe not everyone agrees that's how it should be, but that's how it is.


In essence, section 230 says: "as a platform, you're not held liable for censorship of content you find objectionable, AND you don't take ownership of the speech you leave up."

Section 230 did not create a distinction between a platform and a publisher that required censorship rules to be applied fairly or evenly.

https://www.law.cornell.edu/uscode/text/47/230


That was literally the purpose of Section 230. If you want them to not be able to do that, you want Section 230 to be repealed. You don't have to take my word for it, either, since one of the authors has been explicit about it. (The discussion in his tweets is largely framed around politics, but there is nothing that would make the argument only apply to them vs speech in general)

https://twitter.com/RonWyden/status/1266052221072019456


Unfortunately, I don't think you understand the extent to which you, as an individual user, would be under obligation if Section 230 wasn't in force. Section 230 protects you, the "user", as much as it protects the platform or "provider of interactive computer services". It is very likely that without the protections of Section 230 you, the "user", would have to decide whether my comments on your Reddit post, Youtube video, Facebook post, or even this post, expose you to undue liability, and thus would have to police those comments. If you are prepared to do so, then go ahead and remove Section 230 protections. I have a legal background and I am certainly not prepared to decide if all content related to my content is lawful content.


The aim of 230 was explicitly to allow doing so. And how would you even imagine this working? Have a bunch of conlaw students judge posts against complex tests?


What model makes sense and why should it be treated differently than other forums with moderation?

I'm a big fan of the idea that internet providers, DNS registrars, etc. need to be neutral to content in the way common carriers are, but I'm not convinced yet that platforms should have similar obligations. It seems that the appropriate response to schismatic disagreement with the "SysOps of the BBS" is to build another BBS as the "telephone network" is open to all.


At some point, that gets moved up a few levels though, if the BBS allows for groups that get moderated by other people, is the BBS still "the moderator" or is it those people that set up the group? And if you don't agree with their decisions, should you be able to launch a rival group on the BBS (it being a platform and all)?


I simply disagree that it moves up. We have had a clear delineation between what is "restrictive private property" and what is "open access private property". That separation is clearly delineated at the IP network + related necessities. This model has worked well, and I don't see how it is breaking now.

Someone else mentioned as well, but some very specific monopoly/necessity cases could change this delineation, however I could only see this being necessary with a much different environment than we have today (that is, situations without feasible common carrier alternatives, _maybe_ google search would be an example today).

I won't speak to your particular motives because I don't know you from anyone else, but I sometimes wonder if a core driver behind wanting to mess with section 230 is because people feel entitled to an audience at low/no cost to them, at the expense of platform owners, to blather on about whatever they please. To this end, I say "nope, you're not entitled... we have an open internet, drive your own damn traffic".


I think it's reasonable to argue that we have a bunch of local monopolies today, only that "local" on the internet is interest-based-local, not geographically local.

You mentioned Google Search, I'd add Youtube for video content, Twitch for streaming, FB for social network etc. Sure, you can broadcast video content without Youtube, there's vimeo or you could host it yourself, but it's really not a feasible approach. Much like you don't really have to rely on the local utility company to provide you with electricity and water, you could totally run a generator in your back yard and have trucks bring in fresh and move out waste water ... but it's not a realistic approach.

> To this end, I say "nope, you're not entitled... we have an open internet, drive your own damn traffic".

I'd respond that we don't have an open internet, but more importantly: you don't have to be a platform, nobody is stopping Reddit from being a publisher.

The reason why they don't want to legally be a publisher is simply that their low-cost approach doesn't work if they have more responsibility for what they publish. They very much want to be able to pick and choose what is published, but they don't want to be considered a publisher.


I strongly disagree on all points, but I think it is a useful perspective overall as a comparison of what the future could or should look like.


What you "think" is literally entirely against the reason Section 230 exists.


I mean, I see merit in the position that monopolistic corporate entities, which you are forced to interact with if you want to participate in modern society, have government-like power over people and should be held to similar standards in terms of civil liberties.

But Reddit is effectively just a big message board. Such interpretation would force every small internet forum to either become a [0-9]chan or a closed space for invited members only. There is lots of middle ground where the owners/the community polices some speech and you can reasonably, realistically just go somewhere else.


> I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act

The entire point of 230 was to allow companies like Reddit to censor lawful content while still claiming protections. It was never designed at any point to create neutral platforms.

Lack of moderation was already a defense before 230 was created:

> CompuServe stated they would not attempt to regulate what users posted on their services, while Prodigy had employed a team of moderators to validate content. Both faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault as, by its stance as allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, Stratton Oakmont, Inc. v. Prodigy Services Co. found that as Prodigy had taken an editorial role with regard to customer content, it was a publisher and legally responsible for libel committed by customers. [0]

Section 230 was not designed to foster neutral platforms, it was designed to allow platforms/communities to moderate as they saw fit without opening themselves up to legal liability. To re-phrase that in modern terms, section 230 is one of the strongest protections we have today for community's Right to Filter[1].

When people say that they like Section 230 but it's gone off the rails, what they really mean is that they think Section 230 was always a bad idea. Or perhaps more charitably that they haven't really researched how Section 230 works or what its history was.

[0]: https://en.wikipedia.org/wiki/Section_230_of_the_Communicati...

[1]: https://anewdigitalmanifesto.com/#right-to-filter


> I don't think Reddit should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act.

The whole point of Section 230 was so sites could do that. Before 230, there has been court rulings that said if you moderate the content you are liable for the content. 230 was enacted specifically to reverse that.


I hope you realize the entire site, and every site, would be full of 99.99% commercial spam if this went into place.


The problem is that "lawful content" includes things like explicit support for Nazism and the KKK (not 2020 "Trump is a Nazi" kind of allegations of Nazism, but actual Nazis). Even things like abstract calls for violence are protected under the First Amendment: https://en.m.wikipedia.org/wiki/Brandenburg_v._Ohio

I don't think people espousing this view understand the scope of what sites need to tolerate to comply. Would reddit be a better place if they let people say, "ni\\\\s belong in shackles" and, "death to Jews"? Because that's legal speech.


How might a democracy suffer if those who control the common venues for voter's political dialogue are given a monopoly over setting the bounds of acceptable speech?


I think it's an extreme stretch to say that Reddit has a monopoly. Or even Reddit + Facebook + YouTube. My response would be to tell people to rediscover personal websites, mailing lists, and IRC. I'd be more concerned if ISPs, DNS providers, and payment processors start policing users' politics (which they have, recently; see the Carl Benjamin AKA "Sargon of Akkad" situation, where payment processors threatened to cease business with Patreon unless they kicked him off the site). Being kicked off of Facebook or Reddit doesn't prevent someone from publishing their content. I've always told people: if you want to reliably get your message across, host your own website. It's not hard to pick up some basic HTML and CSS. It's only once the ability of people to publish their own messages is threatened that I get concerned.

As far as Reddit and Facebook go, understand that freedom of speech also means freedom from compelled speech. It's not just about being able to say what you want, it's about being able to not say the things you don't want to say. Making Section 230 contingent on hosting all legal content compels people to say a lot of things they don't want to say.


I don't think Hacker News should be able to censor lawful content and also claim protections under Section 230 of the Communications Decency Act. If they want to police lawful content that is fine but then they are no longer an open platform and they should forfeit their Section 230 protections. I dislike the ability for online platforms to have their cake of protection against lawsuits and also eat their cake of policing lawful content they don't like.


This is an absurd argument that the far right is plying as if they have some constitutional right to push their hateful noise on every site.

The internet is open. If you disagree with Reddit, Twitter, Facebook, etc, there are alternatives. Of course when the hateful all congregate, the farce of their positions becomes too clear. They need the legitimacy of Twitter or Facebook to ply their nonsense.

As an aside, it's interesting that those far right sites are some of the most fervently moderated, "snowflake" sites. Even The_Donald on reddit was hysterical in its suppression of even a simple question on the truthfulness of some fake claim. Instant comment deletion and account ban. Gab's extreme assertions of their moderation policy are uproarious when compared to its creation (you know - "Twitter silenced right wing voices!")


There seems to be a divide between people who think discourse is a progressive art of exchanging ideas, and those who use language as a pretext for political struggle, where the role of ideas is to act as provocations that reveal a person's "true" alignment and identity. The latter view means engaging with ideas compromises your ethical purity, while the former means people can express abhorrent ideas without physical consequences.

There isn't a short answer to what's probably an eternal conflict, but I do know a lot of people in tech who might benefit from reflecting on the question of, "are we the baddies?"


The dichotomy seems to be reflected in the growing divide between rational, scientific inquiry and modern progressive liberal arts, which are rooted in critical theory.


I used Reddit a lot back in 2013ish but went off of it.

Looking at it now, it feels like that in general the site is really mean spirited.

Also upvoting means that you're just seeing whatever the majority of voters want you to see. It works on a smaller community like HN because comments get 10s of upvotes at most. It's not a site for good discourse.

However there's communities where there is interesting discussion still and often if you want an opinion of a product or an issue you might find a thread on it from a real user.

But generally they've done nothing to stop the nurturing of racist/sexist/bullying content over the years.

Now you can argue about free speech being important for an internet platform.

But Reddit trying to change the community attitudes of their site now seems a little late.

Also there is a real sense that some users believe that Reddit is the shepherd of society and that they're fighting at its forefront. Which I think stems from the millions of apparent subscribers to subreddits.

But it's just a fancy internet forum.

Like this linked post is on a subreddit called Watch Reddit Die. Apparently something about free speech being stifled in Reddit.

People need to realise they shouldn't waste their life on a website they don't like.

The internet hasn't been fully consolidated by Facebook and Google just yet, they can find or make a different community.


I have been a redditor for a decade and have a few remarks based on my experience.

I feel like reddit has always had recurring and overlapping cycles of mean spiritedness. For instance, during the summer holiday period it becomes notoriously harder to have a reasonable exchange of ideas there (don't ask me why; I'm not quite sure). This year, since the beginning of the COVID19 isolation period in March, we've been in a period of very high mean spiritedness. It could be a consequence of users' high stress levels.

Reddit does still have several excellent communities. I find that there's one thing in common between them: They have real, dedicated, human moderators enforcing strict, rational policies. Also: There are some weird cliques of power moderators on reddit who control dozens if not hundreds of subreddits. A good subreddit is usually not run by those cliques. Not to discount the hard work that some grunt moderators put into trying to run the large, clique-controlled communities, but the commonalities are hard to ignore.

On upvotes: Reddit should have clear policies enforced equitably and transparently at the company level, by human staff, and downvotes shouldn't be presented as the opposite of an upvote. I don't see a complete alternative system that can replace the upvote, though. They do provide different sorting options; what else could they do there?

On alternatives: People attempt to leave on a regular basis. The problem is that there's a much higher incentive for the people who are ostracized not only by reddit as a platform but by the associated community to leave than for people who are, for the most part, comfortable, even if the administration acts a little weird on occasion. All alternatives are quickly colonized by people that the majority of the reddit community (if there is such a thing) would rather not be immersed in: White supremacists, conspiracy theorists, fanatic nationalists, etc. For the more moderate people to be willing to move, they want to feel surrounded by a majority of people like them, not a minority.


The phenomenon you are describing is called "Summer Reddit", and that term comes about yearly because schoolchildren are home for the summer, with the average age of Redditors participating during this time lowering from 24 to 16. It also heavily correlates with an increase in edgy commenting and shitposting.

Reddit's issue is the issue of so many ad supported websites. Clicks sell ads and controversy draws more clicks than anything else.

It's been clear over the last few years that the popular subreddits are the ones that have the most dramatic content.

Smaller subreddit do tend to have better discourse though until eventually, they get large enough to become popular and start to become the same echo chambers.


First, big brands don't want controversy displayed next to their ads. This is likely behind some moves Reddit has made in the past to clean things up.

Second, I'm curious what % of revenue comes from user-purchased post and comment awards (Reddit gold). I've noticed those in use quite a bit in some more inflammatory subs like /r/politics. I wonder how they vet payments from users to avoid providing a way for hostile actors to buy visibility while flying under the radar, i.e. avoiding conflicts of interest from accidentally monetizing trolls.


Big brands don't want to be advertised on r/watchpeopledie but probably have no issue with being advertised on r/relationshipadvice which is almost entirely a sub about controversy and thus drives clicks and revenue.

I've done some searching and it seems that reddit gold is a pretty small piece of reddit's revenue. Campaigns start at $50,000 with Reddit Ads.


Oh I just found this: https://www.feedough.com/reddit-make-money-reddit-business-m...

Reddit made close to $1 million in revenue from Reddit gold in 2016.


That's pretty old data. And I realize I said Gold when what I meant was Reddit Coins, ie. the currency that lets people gild posts with various things like flames, and other colors.

I'm deeply interested in learning what % of revenue that is for them and how quickly it is growing.


A recent heavily awarded post was broken down by cost per award. The money spent to award that post what it got was over $24,000, and that was just in the new award types and not the older medal/coin types. Adding the latter in pushed the total to just under $36k for that single post.

As for advertising on Reddit, I view it as a fool's errand on the part of advertisers given the extensive popularity of PiHoles and adblocking software among the userbase.


Do you have a link to that breakdown? I'd love to dig in further. I suspect that there's a lot of Coins in circulation from Reddit seeding the system. The marginal cost to them is roughly $0. Without being able to confirm if users paid the list price for those awards, I'm not sure we can confidently say Reddit banked ~$36k revenue for that post.

That said, my concern is that it creates an environment for misaligned incentives. If say, hostile foreign governments have their troll farms buying awards to increase visibility of divisive and inflammatory posts in key threads, that furthers their agenda and creates a revenue stream for Reddit that is fueled by creating an environment where that flourishes.

I'm NOT saying Reddit is actively trying to do this as I think they'd be horrified to find this happening, but they have a fine line to walk here from a product and business standpoint.

Re: your other point, I do digital advertising for a living. I can assure you that the vast majority of users on Reddit are not using PiHoles. While many may be using ad blockers, those largely don't work for native ads in the mobile app.


> It works on a smaller community like HN because comments get 10s of upvotes at most.

I think you underestimate the size of this place. It’s basically a busy Subreddit.


Reddit has been compromised by actors who do not have free speech in their best interest. The other day there was an illuminating AskReddit post about transsexuals who decided to transition back and their experiences. That post got promptly removed by moderators, even though there was no hateful content within it at all. They effectively silenced the transsexual population due to an ever so slight conflict of ideology.

To think that the reddit administrators would go so far as to ban people for even interacting with content that THEY deem problematic.

Reddit is dead and no one I know uses it anymore.

EDIT: - In case anyone is interested; here is the censored post that I was referring to https://old.reddit.com/r/AskReddit/comments/h9ctho/serious_a...


The post you are referring to was deleted by the user who made the post not the mods....

And that doesn't even relate to this topic as what you're referring to is on a subreddit basis and is determined by a set of volunteer mods not reddit admins.


> The post you are referring to was deleted by the user who made the post not the mods....

Not true. The post body says [removed]. If the user deleted it, it would say [deleted].


Indeed, it was removed by moderators. Here’s the OP’s response:

https://old.reddit.com/r/AskReddit/comments/h9ctho/serious_a...

Edit: another user determined it was both removed and deleted.



maybe it's because you use the "redesign"?

One of the problems I have is that as a moderator on some subreddits I have issues if I don't use the new reddit site. I prefer the old site much more but find it annoying to have to flip flop back and forth so I have settled on the crapdesign.

I just tested it and any post deleted by the user has [deleted] for both the post body and username. Posts removed by mods will have [removed] for the post body but will still leave the username untouched. A post needs to be both deleted and removed for it to appear like that specific example with [removed] for the body and [deleted] for the username.
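The marker logic described above can be sketched as a small helper. This is purely illustrative; the function name and return strings are invented, not anything Reddit actually exposes:

```python
def classify_post(body_marker, username_marker):
    """Infer a post's fate from old-reddit placeholder markers.

    Per the observation above: user deletion blanks both fields,
    mod removal blanks only the body, and seeing [removed] for the
    body with [deleted] for the username implies removal followed
    by deletion.
    """
    if body_marker == "[deleted]" and username_marker == "[deleted]":
        return "deleted by user"
    if body_marker == "[removed]" and username_marker == "[deleted]":
        return "removed by mods, then deleted by user"
    if body_marker == "[removed]":
        return "removed by mods"
    return "live"
```

Under those rules, the disputed post (a [removed] body with a [deleted] username) falls into the "both" case.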


It says "Sorry, this post was deleted by the person who originally posted it." https://imgur.com/a/8ZeyR7N


Old reddit says “removed”, which implies it was done by the moderators https://imgur.com/a/jowwaq6


https://i.imgur.com/pkzrm25.png -- It literally says it was deleted by the user.


As another user has pointed out, it's both. The post was removed by moderators first, then the user deleted it some time after that. This information is only visible in the old design, not in the redesign.


Unfortunately if the user removed it you can't really see the order of events then. I don't believe you can now see who removed it first. I know as a mod on reddit I have deleted posts that were already removed by the user.


Yes you can, because it is impossible to remove a deleted post. Your anecdote is not true.

Also, the user posted a comment complaining about the removal.


> The post you are referring to was deleted by the user...

How do we know that? The body just says [removed] and the user is marked as [deleted].

If the admins can alter the content of your posts, then surely they can delete / remove content and have it show whatever they choose.


>Reddit is dead and no one I know uses it anymore.

It's not dead, and thankfully so, because its survival keeps it serving as a containment mechanism for what is easily the worst group of internet users in internet history.

I fear for the day Reddit falls and its userbase spills out looking for another avenue to be comforted with groupthink, newspeak and feel goods for the Brands We Love.


Reddit is a collection of communities. I like several of them, and think they are generally good and uplifting, /r/pixelart for instance. Some communities have some pretty bad people in them, but it is impossible to create a social media site at scale without like-minded bad users creating and/or flocking to bad communities.

People are in different stages of life and seek solace in other people having similar experiences. Thus, the sexually frustrated congregate, the depressed congregate, drug enthusiasts (and abusers) congregate, disgruntled ex-<insert organization> congregate, you name it, and angry/sad/anxious people with a shared condition will congregate in search of validation/confidence.

But the opposite is also true - people with a shared hobby/passion/vision congregate as well and create some really fun and uplifting communities.


Yep, reddit is increasing its conformity and intolerance of dissent at a very quick pace, maybe because it's an election year? But in the past such forays have been one way only.

As the subs where you can speak with some degree of openness are ever smaller, it's not worth the time anymore.


Unfortunately this style of moderation has already spilled out. Even on HN people are insulted for not agreeing with the groupthink, usually as an attempt to bully their point of view off of the internet. Now these people are becoming mods and have real power to suppress counter narratives. This is a systematic problem of people trying to oppress views outside of theirs.

I imagine the mods have a fairly finely tuned sense of which threads will turn into a shitstorm and don't give a fuck about pre-emptively killing them.

And honestly I WISH nobody used reddit anymore, maybe it could roll back 10 years when it was frankly more fun, but it sure seems like more people than ever are using it.


> I imagine the mods have a fairly finely tuned sense of which threads will turn into a shitstorm and don't give a fuck about pre-emptively killing them.

How do we feel about pre-emptive moderation? Is it warranted? Is it warranted every time? Only certain topics?


Ultimately the mods are volunteers, they get to do what they want?

More precisely I would imagine they don't care about facilitating/refereeing culture wars inside their subs, cause lord only knows how much work that would be.


Well sure, they get to do whatever they want, you're right. It's their community they're moderating. We agree.

That wasn't what I was asking, however.


I think it's probably warranted some of the time, mostly around topics where much heat but little light is generated.


How does this method avoid a moderator's implicit bias? Unless we get actual sitting judges, how can these people be neutral and not impose their bias on what "light" is?

I 100% support Reddit killing any thread in a popular subreddit about anyone trans or the trans community preemptively because every single thread turns into an absolute dumpster fire. Reddit on the whole shows a very different side whenever those kinds of threads show up.


Many of the oldest users (me included) have just moved from the more general subreddits, where those shitstorms happen, into more and more specialized subreddits. But I miss the old reddit too.


> Do you think there was a way for you avoid this path? Like more understanding parents, other friends, another therapist? Or was it unavoidable?

>> It was a combination of things, so I can't say for sure what could have prevented this. More understanding parents definitely would have helped. Other perspectives online, as well. Detransitioners frequently get shut down on the internet, as their experiences are often labeled "transphobia." Which is utter bullshit, in my opinion. Perhaps if I had read a similar story as my own those years back, I would have had second thoughts.

https://old.reddit.com/r/AskReddit/comments/h9ctho/serious_a...

Ironically, the AskReddit mods contributed to this problem by removing the post.


Strictly speaking, that's not ironic at all; it's quite in line with what's routinely experienced by people trying to provide accurate and nuanced information about gender dysphoria, gender-dysphoric identity etc., and how to best help folks who may be affected by it.


Not exactly the same thing but the other day I got banned from r/ADHD for a (moderate from my POV) comment about pronouns.

The mod then gloated about banning dozens of people over this, and that "LGBTQIA+ people run the subreddit", and that "the lack of discrimination (...) for being cis or being white or being straight or being a man overshadows any discrimination they would receive for their adhd".

I don't think that's great moderation given the subject of the subreddit but what can you even do about it that would not be time consuming and pointless?


The moderation is the worst thing about reddit. Reddit mods are some of the most corrupt people I've encountered but there's no way of telling how much the accepted groupthink is being carefully shaped by the mods.

My model for moderation would be opt-in. I want to be able to see everything by default but be able to see what moderators have flagged as inappropriate. I could then grow to trust some or all of those moderators and just hide everything they flag by default. But I'd always be able to go and quickly evaluate whether I still trust those moderators to be doing the right thing and not just hiding things they disagree with.
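The opt-in model described above could be sketched roughly like this. All names and data shapes are invented for illustration; the key property is that flags never delete anything, they only hide posts for users who have chosen to trust the flagging moderator:

```python
def visible_posts(posts, flags, trusted_mods):
    """Return the posts not flagged by any moderator the user trusts.

    posts: iterable of post ids, all visible by default
    flags: dict mapping moderator name -> set of flagged post ids
    trusted_mods: moderators whose flags this user has opted into
    """
    hidden = set()
    for mod in trusted_mods:
        hidden |= flags.get(mod, set())
    return [p for p in posts if p not in hidden]
```

A reader auditing a moderator could then diff the unfiltered view (an empty trust set) against their filtered one to see exactly what that moderator hid.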


I think it's unfortunate that reddit chose the term moderator, because it implies an impartiality that you're not going to get from what is effectively the "owner" of a community.

These are just folks who either showed up early, or demonstrated they thought about the issue the same way as the person who started the reddit thought.


"Nobody goes there anymore. It's too crowded."


The fact that r/teenagers is often more open minded and mature than "important" subreddits like news and politics speaks volumes


There is a whole subreddit about this topic however.

https://www.reddit.com/r/detrans/


Agree. There's just nothing even remotely interesting or real being said on Reddit anymore. I'd honestly rather do anything else. I'd rather stare at a wall with my own thoughts than spend time on Reddit.

Here's an exercise - check out /r/unpopularopinion today. Full of perfectly boring normal opinions like "MTV should go back to showing music videos". All of the content just seems completely fake. There's not an unpopular opinion in sight.

Now go the Wayback Machine and take a look at that same sub from say, three to five years ago (which feels like a short time to me these days). Plenty of deeply unpopular, controversial, thought-provoking opinions. You don't have to like them, but they were real.


Reddits values aren't the same as yours.

This isn't a bad thing. It's not like Reddit has monopoly powers. Some people want to belong to communities that have agreed-upon values.


But the community would be a subreddit, not the site reddit.com, which is far too heterogeneous to be considered some kind of community.


That was the idea years ago, but they completely abandoned it.

https://archive.vn/KojNR#selection-7875.0-7947.5

Brief recap on its current state of affairs: https://www.youtube.com/watch?v=wqx4wdn4Odc (18 minutes and change)


You'd be surprised what percent of reddit visitors just browse popular or don't do specific subs outside the default. Most users aren't super users.


That's probably true, but people who just look at popular posts and aren't engaged in anything aren't really part of any community either, unless we expand community to literally mean anybody that's visiting the site, at which point it becomes somewhat meaningless because it's too heterogeneous again.

Terms that have no distinctive power have little value, so while I'd certainly consider them "users", I wouldn't consider them part of the community, much like a tourist staying in a village for a weekend isn't a part of the village's community.


I don't go to my city council meetings, but am still a member of our local community.

This is quick becoming a debate over definitions, but I think reddit thinks of itself as both having a community and also as having individual communities. For example, anybody who has had a post in their niche community get to the front page understands that there's a larger community they're a part of that is different from their smaller one.


Sure, much like you can get your 15 minutes of fame and get on national TV. I wouldn't call "the whole country" one community with smaller child communities though.

To me, community implies some amount of connections between the members. I've never put a number on it, but basically something like "if you randomly pick X people from the group, at least Y of them will know each other". That would limit both the size and also do a reasonable test for community membership: if nobody in the community knows you, you're not a part of the community.
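That "randomly pick X, expect Y acquainted" test could be sketched as a sampling check. This is entirely hypothetical; the function name, thresholds, and majority-vote rule are made up to illustrate the idea:

```python
import random

def seems_like_community(members, knows, x=10, y=3, trials=100):
    """Sample x members repeatedly; call the group a community if most
    samples contain at least y acquainted pairs (knows(a, b) -> bool)."""
    if len(members) < x:
        return True  # groups too small to sample trivially pass
    hits = 0
    for _ in range(trials):
        sample = random.sample(members, x)
        # count acquainted pairs within this sample
        pairs = sum(knows(a, b)
                    for i, a in enumerate(sample)
                    for b in sample[i + 1:])
        if pairs >= y:
            hits += 1
    return hits / trials > 0.5
```

By this measure, a site-wide audience of passive browsers would fail (almost no sampled pair knows each other), while a small, active subreddit would pass.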


I think the core of a lot of these issues is the distinction between universal findings, which almost always favour "progressive" worldviews, and anecdotes, which don't necessarily. The example of trans people detransitioning is a good one here - the general finding that transgender people are very happy with their decision to transition (regardless of what that may mean for them) conflicts with individuals' anecdotes regarding their own (or their acquaintances') transitions.

Whilst there isn't anything inherently incorrect or morally wrong about detransitioning in itself, often when brought up it can lead a person to an idea which is incorrect - essentially applying the anecdotal evidence of a Reddit thread to all situations.

Thus, moderators of subreddits (and administrators, although I think that whilst it is bad when they do so, it isn't as common as many would believe) feel conflicted on the matter. Should they allow the discussion to continue, almost guaranteeing that somebody is going to form misconceptions from the thread? Or should they shut it down, stifling free speech but at the same time reducing the level of overall animosity towards minorities in the community, and preventing people from forming misconceptions?

Ultimately, I think that it should always really come down on the side of free speech, as long as they aren't being blatantly hostile. However, in my opinion, if we really want to have a positive "marketplace of ideas", as some describe it, we really need to educate people from a young age on logical reasoning, cognitive biases and argumentative fallacies the same way we would teach history, geography or the arts. Whilst I think that's unlikely to happen in our current political climate, given that keeping people uneducated is a very effective way to make them vote for those who make the most compelling surface-level arguments, it seems to be the only good solution for the conflict laid out above.


[deleted]


I'm curious, can you point me towards some examples of the data you suggest? I personally haven't seen anything of the kind, although perhaps newer data may suggest slightly lower rates of overall satisfaction given that increasing awareness of something to an otherwise unaware population is always going to increase false negatives. Still, the figures I see are always approximately in the 95%+ positive range.


I don't think "reddit is dead", but I do think the relatively intelligent and interesting userbase that was there ~10 years ago has been almost entirely displaced by clickbait and propaganda targeted at the now mostly middlingly-intelligent read-only userbase.

This almost inevitably happens when a site gets popular enough. There are only so many intelligent people actively posting on the internet, and it's impossible to have a site with 100,000 users (let alone the >400,000,000 of Reddit) with a high fraction of them providing good content.

Certainly the heavy-handed censorship of the Reddit administration is a disincentive for people who are interested in topics at the edges of the Overton window (i.e. most of the interesting people online).


How is it censored? There is a comment on that post with 5000 upvotes.


It was removed from the front page, thereby curtailing any further engagement with the post.

See the following comment https://old.reddit.com/r/AskReddit/comments/h9ctho/serious_a...


[flagged]


https://en.wikipedia.org/wiki/Transsexual

The second paragraph has four sources that cite some transsexuals rejecting the label of transgender, so language-policing others based on your personal preferences is quite inappropriate here.

boxfire 14 days ago [flagged]

I suggest you educate yourself, because wikipedia is not a reliable source. Here is a page that also happens to help: https://www.glaad.org/reference/transgender

The bottom line is that transsexual is a subset of transgender, which is an umbrella term. If you write transsexual referring to a person whom you do not know particularly identifies as transsexual, it can be seen as hate speech in many places.


That page correctly notes that 'transsexual' "originates in the medical and psychological communities", much like Wiki itself does. While 'transgender' may be naturally broader, there's nothing hateful whatsoever about using that term; it's quite well-established.


> Reddit is dead

This is just absurd.

* https://foundationinc.co/lab/reddit-statistics/

* https://www.alexa.com/siteinfo/reddit.com

> and no one I know uses it anymore.

You say this as if it is relevant to your point.


Considering how much of Internet culture is currently being formed on reddit it's amazing how opaque the site is to history. The archival is low-quality and missing a lot.

I sometimes browse by /r/all/gilded, which gives you a weird, weird, WEIRD cross section of the Internet. One time I found a heavily gilded post memorializing a recently deceased contributor to /r/cripplingalcoholism -- which appears to be an important but saddening community for people with a serious medical problem -- then followed a few links on the profile of the deceased and their associates until, a few minutes later, I ended up at /r/opiaterollcall.

Wow oh wow, what a subreddit that was: https://web.archive.org/web/20130904195509/www.reddit.com/r/...

That forum died pretty soon after I found it (and got national press coverage at https://www.nytimes.com/2017/07/20/us/opioid-reddit.html ) but wow, oh wow, there was a lot going on there which is now essentially lost to readers of history, because existing archives don't go deep into posts and comments.


Gizmodo addressed this in February, though just as Covid-19 news coverage began overwhelming everything:

Yesterday, Reddit CEO Steve Huffman (or “Spez,” as Redditors affectionately know him) published the platform’s annual transparency report, highlighting how the platform took down more than 200,000 pieces of offending content over 2019, along with nearly 56,000 accounts and nearly 22,000 subreddits—the site’s communities, most of which are moderated by volunteer users. He also flagged a “company update” regarding the way Reddit thinks about “quarantined” communities...

https://gizmodo.com/reddit-will-start-to-punish-users-for-up...

That policy, in a Reddit post by Huffman:

When we expanded our quarantine policy, we created an appeals process for sanctioned communities. One of the goals was to “force subscribers to reconsider their behavior and incentivize moderators to make changes.” While the policy attempted to hold moderators more accountable for enforcing healthier rules and norms, it didn’t address the role that each member plays in the health of their community.

Today, we’re making an update to address this gap: Users who consistently upvote policy-breaking content within quarantined communities will receive automated warnings, followed by further consequences like a temporary or permanent suspension. We hope this will encourage healthier behavior across these communities.

https://old.reddit.com/r/announcements/comments/f8y9nx/sprin...

According to that statement, the bans (apparently temporary) apply only to voting within already-quarantined subreddits.

WatchRedditDie's discussion is not especially nuanced, considered, or balanced.


I don't understand why you would even publicize this. Just stop counting those users' votes, if that's what you want to do?

There's enough fuzz in the vote counts that no one would notice, and you wouldn't have any controversy on your hands.


>I don't understand why you would even publicize this. Just stop counting those users' votes, if that's what you want to do?

Those users' votes don't matter; scaring the userbase into narrowing their Overton windows of acceptable reddit voting is what matters.

More broadly Reddit wants to be seen as taking action. The incentive structure facing that company rewards loudly making a slight improvement to the site over quietly making a huge improvement.


I can think of a few reasons to publicize this, but none of them have anything to do with avoiding controversy.


I don't think avoiding controversy should be the goal.


The point is that the users who are upvoting this kind of content will stay on the site if they think their votes matter. Reddit clearly would rather they leave entirely, and this is a good way to encourage racists/misogynists/incels/whoever to find another platform.


It’s a good way to encourage any user at all to leave.


Nope, I would use reddit even more if it meant a higher quality user experience, which the expulsion of those sorts of people would guarantee.


It happens for far left content as well. I don’t think it’s limited to hateful content. At least, that’s what I’ve read from the chapotraphouse subreddit. Example being upvoting some far left stuff in r/politics.


Hateful content is never clearly defined and it comes from both political extremes.


'Far-left content' and 'hateful content' are not mutually exclusive. Quite the opposite, really.

It is amazing how people will redefine hating Trump supporters, men, white people, "the 1%", western civilization etc as somehow 'not hateful'. "My hate isn't hate, I just dislike things that are bad." Right.


[flagged]


> I wouldn't characterize far left content as hateful.

You're not familiar with, e.g. tankies, I take it?


The term has evolved over time. The people called tankies now are likely not the same as those of decades ago, so it's unclear what you mean by it.

https://www.reddit.com/r/ChapoTrapHouse/comments/hb0xep/wtf_...


Tankie feels like it has become a near-useless term. Some well-known liberal paper (maybe the NYT?) published an article insinuating that US cold-war-era propaganda about the USSR wasn't necessarily always true, and someone linked it to me referring to it as a tankie article.


They still defend indefensible things like the Holodomor or the Tiananmen Square massacre.

Eventually reddit will go the way of digg and fade into irrelevance.


Can't happen soon enough. The days when forums were the de facto place for discussion were just better: communities would self-police, and alternative opinions weren't suppressed nearly as easily. Reddit is a meme slideshow, not a place for discussion.


The long tail of reddit is the same as forums. I'm subscribed to a bunch of niche subreddits for products I own, projects, etc., and it's just like forums, except with a measure of consistency and features that I appreciate. These communities are self-policed the same as any other forum.

The problem with reddit isn't technical, it's human. A large group of people will ruin literally anything.


You just have to be in the proper small subreddits. Reddit is where I get the highest quality of discussion (especially since Covid-19, as small subreddits stayed small but HN went downhill).


I don't want to register on zillions of forums and manage my passwords. I contribute to hundreds of subreddits. If reddit dies I'll find a replacement like reddit and move on.


>I don't want to register on zillions of forums and manage my passwords.

and I don't especially want the admins of any one site to be able to erase my entire existence off the net on a whim.

Single sign-on from established identity providers is common. I'd rather embrace that kind of thing than the idea that any one site should be our de facto forum and communications standard.

I was reminded of this recently when a personal favorite BB forum disappeared, taking years and years of niche automotive data with it. Data that was rare from the get-go, and irreplaceable now.

I'd rather decentralize my data stores. Reddit will disappear one day, and it's up to them to be a good actor toward the archival groups, providing data bandwidth at a time when investors have become disinterested and paying the light bill is nearly impossible.

I wouldn't count on that.

Keep archives, repeat information elsewhere, and most importantly: don't embrace or rely on centralized structures where doing so inherently adds weakness and fallibility.


I don't think any irreplaceable data or conversation should be on reddit. If I lose my conversations about funny cats, or factorio, or why I love a programming language, I'm not going to lose any sleep over it. I agree with you on critical stuff; I don't think it belongs on reddit. But please understand that for consumers like me, reddit is incredibly convenient social media. I can log in now and spend hours talking about hundreds of different things. There is no way in hell I'd be able to register on so many different forums and not get frustrated.


> “Reddit will disappear one day, and it's up to them to be a good actor towards the archival group, providing data bandwidth during a time when

Reddit’s content is already dumped and cloned in near-real-time. https://pushshift.io/


This is why an ActivityPub-based, federated forum system might be the future here. You'd join the network once and carry the same username to any other site on the network without anything resembling a signup process. Perhaps you could even have a choice of aggregators that display content from all those sites in one place, or show a combined karma score or post count.

That'd fix most of Reddit's issues and then some.
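To make the "join once, carry your identity anywhere" idea concrete: ActivityPub-style networks (Mastodon being the best-known example) resolve a handle like @alice@example.social to the user's actor document via a WebFinger lookup, so any participating site can recognize the same identity without its own signup flow. A minimal sketch of the lookup-URL construction (the handle and host here are made up for illustration):

```python
# Sketch: mapping a fediverse handle to its WebFinger lookup URL,
# as ActivityPub servers do. The handle below is hypothetical.

def webfinger_url(handle: str) -> str:
    """Build the WebFinger URL for a handle like '@alice@example.social'."""
    user, _, host = handle.lstrip("@").partition("@")
    if not user or not host:
        raise ValueError(f"not a valid handle: {handle!r}")
    # Per RFC 7033, the resource is queried at /.well-known/webfinger
    return f"https://{host}/.well-known/webfinger?resource=acct:{user}@{host}"

print(webfinger_url("@alice@example.social"))
# → https://example.social/.well-known/webfinger?resource=acct:alice@example.social
```

The JSON returned from that URL points at the user's actor document, which is what lets any site on the network verify the same account without a separate registration step.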


A "log in with Google/Apple/Github/whatever" fixes that issue easily enough.


Do most websites implement that? I like r/factorio on reddit. Factorio's official forums require me to register with a username, email, and password. I simply won't do that; I'd much rather not engage with factorio content than register on a forum for each of the hundreds of things I'm interested in.


I think it just shifts the problem to Google/Apple/Github/whatever "owning" identity management.


The biggest knock against reddit is that every large sub has to resort to weekly discussion posts and tagging every post so people can filter, simply because of the way subreddits are forced to be laid out.

A normal forum has different sections for discussing everything even mildly related to the overarching topic, plus a number of areas to talk about things unrelated to it. Reddit doesn't have those features, so subs have to resort to these half measures or allow multiple daily posts about the same five topics.


One solution is creating more niche subs. As an example, the main Star Trek sub is very relaxed, so you get a variety of posts there and the comments range in quality from very informed ones to mere memes, hate, and entitlement. So there are sister subreddits; I like r/DaystromInstitute because it has a narrow scope and the moderation is strong, with no lazy comments allowed. There are subreddits for each Star Trek series (even the old ones get hateful and low-effort meme comments), so you can discuss your favorite show and keep the hate and memes out. IMO this is the best approach. I've seen it in gaming too: fans will never agree on which version of a game is good and which is trash, so they make smaller subreddits to talk in peace. You do need someone to kickstart things and enforce moderation at first; then the community grows a bit and can self-moderate.


It's a matter of scale. As size grows, rules become harder to enforce and the amount of comments that would derail conversations sky-rocket. Living under the same rules globally, esp. across ideologies and world-views, is a disaster for everyone.

They’ve been working hard at that for a while.

The last I heard they still weren’t profitable.


I read digg multiple times a day.

It is far better than it was over a decade ago.

As an aggregator, it is nearly perfectly aligned with my interests (except for the year they spent heavily promoting Jimmy Fallon).


That doesn't really say anything.

I read CNN multiple times a day.

As a news site, it is nearly perfectly aligned with my interests.


OP hopes digg goes away. I point out that it is a good site.

Where did I lose you?


I've seen people writing this for years now. I guess at some point it will become true, but I don't see a steep decline that would justify it at the moment.


That's true of every website eventually, but if the data is any indication it won't be happening any time soon.


Digg dug their own grave with the sweeping changes that drove everyone to reddit.


Digg did not lose to reddit by trying to improve its platform by banning hateful content and low-quality users, nor did reddit win by being a haven for such content.

