Show HN: Reveddit.com: Improving online discourse with transparent moderation (cantsayanything.win)
180 points by rhaksw on Nov 4, 2022 | 185 comments
Hi HN, this talk represents a summary of my work over the last four years on addressing shadow moderation with Reveddit.

Let me know what you think, good or bad, and I'll do my best to answer.

What is shadow moderation? It is any action taken against your content that you aren't told about and aren't able to detect while logged in. I focus on Reddit comments since every single removal is a shadow removal: removed comments are shown to you as if they're not removed.

You can try this for yourself:

https://www.reveddit.com

https://www.reddit.com/r/CantSayAnything/about/sticky/

Your content will be removed, you won't be told, and it will be shown to you as if it's publicly visible.




I love reveddit.com, it's hilarious to look at the typical r/science post and see 60%+ of comments removed on average (the 'controversial' ones go 75%+).

If the moderators agree with the study there is zero dissent tolerated. This subreddit has something like 1000+ moderators and they remove any comments that don't fit into Reddit's very narrowly defined Overton window. Not just people being off-topic or taboo... but often anyone who questions the studies being posted.

My only critique is that it doesn't work with Firefox. I have to pull up Chrome which I only use for work. Do you have a social media profile to follow/provide feedback to?


My take: science has been politicized. As a result people feel like they are entitled to share their (often naive and inexperienced) opinion.

Problem is this: science is not democratic. Consensus plays a role, but getting to the point where you are equipped to peer review others often takes years of effort and study. That’s unavoidable, given that the easy questions have already been answered.

A single … “folkloric” comment is not problematic. When 60% or 75% of the comments are like that, the problem is that the subreddit isn’t science any more. It has become “wacky theory time”.

So at that time you have a choice: either you moderate, in order to keep the channel on-topic, or you accept everything, at which point the guys who benefit from science not being available to the general public have won.

Of course this moderation process isn’t going to be perfect and there will be false positives.

I have been on the receiving end of a campaign organized by a somewhat popular forum in order to fill up a direct democracy platform with as many memes and random shit as possible. The worst night I just had to hide everything that was posted; the signal-to-noise ratio was too low. This meant that some legitimate proposals were lost. It wasn’t a perfect solution either, but it was the only one we had.

From my experience, I think without moderation r/science would become “aliens built the pyramids” in a week.


Moderation is fine, but what about secret moderation? The issue here is that users are being shown their removed comments as if they are publicly visible. I share my thoughts on the impact at 19:54 [1] in the video linked in this post.

> So at that time you have a choice: either you moderate, in order to keep the channel on-topic, or you accept everything, at which point the guys who benefit from science not being available to the general public have won.

There is another way to view this. I think there is a message in all of that hate speech you see. I'll paraphrase how I perceive it: "I don't see why this is wrong, and I'm frustrated that nobody will debate me about this issue from where I stand, so I'll get angrier and angrier until I get someone's attention."

We could debate over whether this is reasonable, but personally I find it harder and harder to see the merits in the secretive removal of any content. We do need mods to curate according to group rules and the law. I also think the removals should be reviewable at least by the author of the content.

[1] https://youtu.be/aCadmiIfbcI?t=1194


> "I don't see why this is wrong, and I'm frustrated that nobody will debate me about this issue from where I stand, so I'll get angrier and angrier until I get someone's attention."

This is a very succinct and insightful way of putting it. It's fine to refuse to engage with people who are refusing to engage with you in good faith (i.e. they won't respond to the arguments you're making with anything but a repetition of their own arguments.) However, assuming bad faith of everyone who has a different opinion than you, even an obviously bad opinion, just hardens their belief in that opinion. It also makes you the one who is refusing to engage (in good faith or bad.)

People know from their daily lives that when people refuse to engage, it's either because they're afraid their answer will anger you, or because they're afraid that they won't be able to defend it. When you refuse to engage with a bleach-drinker, and you're clearly not worried about angering them, they reasonably assume that you can't confidently tell them why they shouldn't drink bleach.


Yeah, I'd say another thing is to be aware that some might like to troll and steal your attention in order to distract. And to be like, okay, you got me haha, and just move on.

For example, the reply to me in this thread [1] is not the author that I originally replied to, and their comments are nonsensical, so I just considered them a troll and went to check for other replies.

I'll give someone a chance to make their case more clearly, but if they don't make any attempt at reading and responding to what I wrote, it's not worth it.

[1] https://news.ycombinator.com/item?id=33480818


If moderation isn’t silent then r/science becomes “r/hey why has my comment been moderated, this is bullshit”. Same result as not moderating - the conversation about science gets diluted by the noise that people lacking the basic skill level make.


> If moderation isn’t silent then r/science becomes “r/hey why has my comment been moderated, this is bullshit”. Same result as not moderating - the conversation about science gets diluted by the noise that people lacking the basic skill level make.

That's a myth. Discourse, the forum software, tells me when my content is moderated. Plenty of forums have worked this way by default without being overwhelmed by spam.

The widespread use of secret removals is unique to the social media era. It may have even helped create today's social media monopolies.

There is another harmful effect. The more you secretly remove toxic users' commentary from view, the less signal they get that their views are not approved. In fact, removing them from view makes society worse off since you're also taking away agency from other users who could be preparing counter arguments. Then, when these two disconnected groups meet in the real world, there's a shouting match (or worse) because they never had to deal with those arguments before. What's more, extremists in otherwise upstanding groups won't realize they're being censored. They may, as a result, think they're of the same mind since their extreme viewpoints were not challenged (as they would be in the real world). By the time they act in the real world, they're far removed from reality.

Secretly removing commentary is different from just ignoring someone in the real world. IRL if you ignore someone, they know you've ignored them. Online if you "ignore" them by secretly removing their comments, they don't know they've been ignored, and thousands or millions of other users don't know that that line of argument even existed as a thought in someone else's mind. It's incomprehensible to them.

It's sad that hundreds of thousands of online moderators think they're helping society by secretly removing such commentary, while in doing so they may actually be creating the environment they seek to avoid. Everyone is trying to create pristine corners online, and it ends up covering the whole map. Meanwhile the real world goes down the drain. Many of us spend too much time using systems whose operations we aren't reviewing. Every day more people are becoming aware of the importance of transparency, and I think at this point the only question is when and to what degree it comes about.


> The more you secretly remove toxic users' commentary from view, the less signal they get that their views are not approved.

And related: the more secrecy about removals, the less signal that favored users get that their opinions were published because the mods approved of them. If you mod away 75% of posts:

a) the remaining 25% see the 75% as opinions nobody has, and the debate looks like a reasonable 15% vs a radical 5% on one side and another radical 5% on the other side. The vigorous debate among a small minority gives the illusion of a free space, even if it's e.g. people debating how much should be spent on the King's new crown, while the people saying that there shouldn't be a King, or even that the King shouldn't have a new crown have been modded away. And

b) the 25%, seeing the vigorous debate about the cost of the King's new crown (and not the debate about the King or the crown themselves), don't understand why having the King as a mod could have a chilling effect on speech.


> the more secrecy about removals, the less signal that favored users get that their opinions were published because the mods approved of them

That's deep. I hadn't thought about what happens to the ego of users whose content is approved. One lie in the chain really does screw up everything.

> b) the 25%, seeing the vigorous debate about the cost of the King's new crown (and not the debate about the King or the crown themselves), don't understand why having the King as a mod could have a chilling effect on speech.

You're right, it's like a newly manufactured Overton window that can be adjusted on a per-group basis. A simpler term is brainwashing, but I think you do need to spell it out as you do here. This is good stuff; I can use these principles to tell stories using real examples and make what's happening more relatable. Or you or anyone else could too.


> Problem is this: science is not democratic.

The problem with this: who gets to decide what's right? The institution of science has had this issue before, and decided no one group is capable of always doing that. Hence it is decided using consensus and the scientific method. It's easy to say 'only allow professionals' but there ARE professionals with controversial opinions that will still get mobbed.


Can we agree that hundreds of meme posts and hundreds of "That's what she said" posts are not helpful? You see that behavior on tons of popular subreddits that are less moderated. The conversation ends up looking like a copy of most of the other threads in the sub because of the low-quality recycled posts.

If we did that on HN our posts would be flagged here too. We do need to separate out how much 'garbage' gets filtered from the ideas that are controversial opinions.


> Can we agree that hundreds of meme posts and hundreds of "That's what she said" posts are not helpful?

The dispute here is secret moderation, not what content gets removed.

If moderation were transparent, users would also be part of the decision-making process. Currently they are not part of the process because their removed content is shown to them as if it is publicly visible.


As I mentioned it is far from perfect. Scientists are still human.

It is still the best system we have, and it is definitely better than “opening the gates and letting everyone in”. If you want that, try MTV.


> it is definitely better than “opening the gates and letting everyone in”.

Letting users discover when their content is removed is not "letting everyone in".

You can have both moderation and transparency. They're not mutually exclusive, and many people will alter their behavior in a group after being told their content was removed. They're less likely to change when they're not told. For those who persist, consider what Jonathan Rauch says about speaking in a public forum,

> The person you're talking to isn't the person you're directly talking to. You're not going to be able to persuade that hardcore LGBT left-wing activist that your point of view is correct. The person you're talking to is [the one overhearing your conversation]... They're seeing you sound reasonable, and the other person sound dogmatic and censorious... [1]

In other words, don't compromise your values to deal with a minority of very persistent people with whom most people don't find alignment anyway. When you do compromise your values, that person will shift to attack you on the basis of censorship, and you've just given them the very platform they needed to gain followers. At that point you're no longer the open discussion forum to whatever degree you claimed.

So you can't convince everyone of your viewpoint. So what? Leave it at that. Accepting that someone may not agree with you right away, or possibly ever, is the grown up thing to do. After all, in some cases it may be your mind that gets changed. To the extent you don't do this, you are your own worst enemy. And when you can see it through, you will have become your best advocate.

[1] https://www.youtube.com/watch?v=E0T9XSG73kY&t=4889s


Subs like /r/askscience and /r/science have always needed moderation even before Reddit became cancer.

The problem is the unauditable censorship. This is especially important on political forums but it also becomes relevant on science forums when scientific issues are starting to be politicized.

Of course, Reddit can be whatever it wants, but as it stands, you can only treat it as a propaganda outlet, as you have no way of knowing what gets removed.

The way we can make moderation trustworthy is the same way we make software trustworthy: By putting it out in the open so anyone can scrutinize it and raise a stink when they discover something suspicious, and can't be silenced.

There is almost zero legitimate reason to completely scrub any comment; hiding them by default would work just fine. The only reason to really delete a comment would be if it directly puts a person in danger, such as by containing personally identifying information.

Hard censorship must be a rare enough event that it becomes a scandal and everybody hears about it. The normalization of censorship allows moderators to do basically anything they want, because comments being removed is just expected and doesn't raise any eyebrows anymore. "Oh yeah, those 75% removed comments were just memes and hate speech."

The other side of the coin is that flooding forums with memes, hate speech, and inane nonsense is a quick and easy way to drown out meaningful discussion and actual information, so soft censorship is necessary.

They are two different attack vectors, and we need a defense against both of them. To prevent infiltration of the mod team, speech that we accept to be censored completely must require actual effort so mods can't plausibly claim it was "just memes" when comments get removed on a daily basis. To prevent exterior soft censorship by flooding, we ironically need moderation with power to implement soft censorship, but one that we can trust.

We also cannot accept "hate speech" (or any other combination of words) that requires zero effort to produce to be a reason to force moderators to implement hard censorship. That allows anyone to censor any platform they like or at a minimum to force its moderators to destroy the trust the platform's users can objectively have in them.

Of course, this isn't completely implementable due to unreasonable laws in pretty much every country in the world, but a trustworthy platform would at least try to approximate it as closely as possible under the laws they're bound by. Reddit certainly isn't one, at least it hasn't been since Aaron Swartz was bullied out a long time ago.


> The problem is the unauditable censorship.

It's not even that. The biggest problem is more basic: authors themselves are shown their removed content as if it's not removed! You can see that in action here [1]. It's happening everywhere, on every issue, on every side, in every geography and on every major platform.

Start there. We don't even need to ask to see everything that gets removed yet. Asking for users to be able to know when their content gets removed is enough. And as I said elsewhere [2], this transparency must be citizen-provided, by raising awareness and through newly built Reveddit-like technology for other social media sites. Expecting the government to achieve this for us will only delay the inevitable realization that we must build these systems ourselves. As I mention in the talk linked on this post at 26:10, don't ask someone to fix it for you; that's handing power over to authorities. You fix it.

[1] https://www.reddit.com/r/CantSayAnything/about/sticky/

[2] https://news.ycombinator.com/item?id=33480576


> science is not democratic

I strongly disagree, it's inherently democratic. The scientific method is all about making mistakes and trying again. Discourse is all about sharing knowledge. Open societies do this effectively.

Jonathan Rauch, in Kindly Inquisitors (1993) wrote,

> Authoritarian systems have their intellectual giants. What they lack is the capacity to organize and exploit their masses of middling thinkers.

https://books.google.com/books?hl=en&id=uZUVAQAAQBAJ&pg=PA74...

> From my experience, I think without moderation r/science would become “aliens built the pyramids” in a week.

It might. What would happen then? I'd argue there'd be a lot of good discussion.

The reason people haven't thought it possible to change anyone's mind over the last 7-8 years is that, during this whole period, nobody knew about shadow moderation.

Watch the linked talk, look at the examples. Deception is being crowdsourced and it is happening across every ideology, geography, and on every major platform.


> I strongly disagree, it's inherently democratic.

No, it really is not.

You might as well state that truth is decided by committee and that individuals cannot or should not decide what is true for themselves.

Democracy is a tool, not an inherent virtue. If science were inherently democratic, The Science would be "settled"; that is a state of things that has been discredited time and again. Just because a bunch of people, no matter the number or expertise, decide that something is a fact doesn't make it so. Conclusions on science are opinions, and always will be.

By the way, nothing about the scientific method involves review by committee. Science is just having a hypothesis, doing tests, and reporting observations. Everything else is overloading the term "science."


> No, it really is not.

Forgive me, I live in Taiwan and in this region "democracy" can sometimes be colloquially equated with "free speech", perhaps because the two concepts seem distant to some stricter regimes.

You're right though, and now I can't edit the comment. If I could, I would rewrite it to say,

> Science may not be "democratic", however it does depend on the ability to conduct civil discourse, which depends on free speech and is central to the US implementation of democracy.

I agree with what you wrote. I think the rest of my comment illustrates that, but saying "science is inherently democratic" is clearly wrong outside of the colloquial definition I've given here which probably nobody else on this board uses. facepalm


> It might. What would happen then? I'd argue there'd be a lot of good discussion.

Why bother though? There are thousands of places you can have that discussion, including on Reddit. You can set up, say, /r/scienceforall right now and establish the rules you want. The /r/science sub has decided that it’s a space which should have heavy moderation - a totally legitimate viewpoint with obvious benefits and costs. You might not agree with the balance, but the beauty is you don’t have to.


> Why bother though?

See 18:20 [1] and 19:52 [2] in the talk linked on this post. Nobody will visit r/scienceforall because r/science will remove references to it. Plus, forcing me to set up a group doesn't sound like a solution, it sounds like a shakedown. To get an open discussion space I have to moderate it, and then have the social media site threaten removal of my mod powers if I fail to remove something they don't like within their desired time frame. That means I have to review every piece of user-submitted content. No thanks. Transparency and light-touch moderation would be more efficient.

> The /r/science sub has decided that it’s a space which should have heavy moderation - a totally legitimate viewpoint with obvious benefits and costs. You might not agree with the balance, but the beauty is you don’t have to.

My issue is with the system, not moderators. Legally speaking, shadow moderation may be in the clear, but it's still harmful for society while we have just a few major platforms. We should all know and share what's going on.

[1] https://youtu.be/aCadmiIfbcI?t=18m20s

[2] https://youtu.be/aCadmiIfbcI?t=19m52s


I think the issue here is that you’re blurring your complaints between the two quite distinct issues of heavy-handed moderation, and shadow moderation. You’re talking about having a philosophical or principled issue with shadow moderation; fine - I don’t necessarily agree, but it’s a reasonable argument to have.

But what you have ended up saying here is “all moderation in all communities must be to my own standards and any others are wrong”. And I don’t think that’s reasonable - individual communities, like on Reddit, should have the freedom to enforce the rules of those communities as they see fit. Reddit is full of terrible, mostly unmoderated cesspits; it’s also full of heavily- and lightly-moderated subs.

I would tend to agree that moderation in general should be on the lighter side and in particular there should be approaches to tackle the abuse of power that’s pretty common (such as auto-banning links to alternative subs you mention). But I have seen so many spaces ruined by lack of moderation over the years that it’s hard to get enthusiastic about it as a cause.


> I think the issue here is that you’re blurring your complaints between the two quite distinct issues of heavy-handed moderation, and shadow moderation. You’re talking about having a philosophical or principled issue with shadow moderation; fine - I don’t necessarily agree, but it’s a reasonable argument to have.

Could be. They kind of go together. Shadow moderation often leads to heavy handed moderation. But yes it's entirely possible I'm muddling things together.

> But what you have ended up saying here is “all moderation in all communities must be to my own standards and any others are wrong”.

You lost me. How did you come to that conclusion? My gripe is with shadow moderation, and even in that case I think it should be up to the platforms to decide whether or not they want to implement it. Without further understanding of the law, I don't support government intervention. So I don't see how I've removed choice from either platforms or moderators here.

> But I have seen so many spaces ruined by lack of moderation over the years that it’s hard to get enthusiastic about it as a cause.

I see a lot get ruined by secretive moderation. And, getting rid of secretive moderation does not mean getting rid of moderation altogether.

Remember, your ideological opponents have access to the same tools, and they don't have your moral safeguards. So they are inherently better at abusing whatever tool you provide to grow their following. What you want is to divide powers so that your worst enemy cannot abuse the things you have access to in order to do things to you that you would not do to them. Comprende?


If nobody will visit your subreddit, then that might be an indication of how it's actually being perceived, and not evidence of some great injustice surrounding "free speech".


No, it's just hard to advertise them. Groups often offshoot specifically because they disagreed with moderators of the previous group. The new group will want to attract users from the old group, but the old group's moderators set up bots to auto-remove any attempts at sharing the new group.

I linked the point in the talk where I discuss this, but I can share an example here too. r/LibertarianUncensored has 2,000 members and stickied this post [1] explaining shadow removals occurring on r/Libertarian, which has 500,000 members.

If you bring this up on r/Libertarian your commentary will be removed and you'll either be banned or shadow banned. Both make it impossible for you to share what's going on with users in r/Libertarian. Meanwhile, moderators there publicly deny that anything untoward is happening [2].

Maybe some day r/LibertarianUncensored can overcome its userbase deficit of half a percent, but it's obvious that shadow moderation is helping to maintain r/Libertarian's domination.

Other groups that skew younger, for example r/Overwatch, may already be familiar with Reveddit and thus have a better chance at challenging mod biases [3].

[1] https://www.reddit.com/r/LibertarianUncensored/comments/uotv...

[2] https://archive.ph/O0GN8

[3] https://old.reddit.com/r/Overwatch/comments/ye16uv/this_subr...


> “You can set up, say, /r/scienceforall right now and establish the rules you want.”

In theory this is a viable solution. In all likelihood, this won’t succeed and there are probably a dozen offshoots already that no one knows about. I suspect two things would happen. The most likely thing is the subreddit languishes and never gets any traction. But if it somehow does, it is likely to get targeted as a disinformation subreddit or the admins will install their own powermods - as they typically do once subreddits become big enough.

Given what we know about how Reddit chooses to display content and run its moderation team, I don’t think any alternative science subreddit would be allowed to successfully exist and be of any real competition to r/science.


Not at all. For the same reason that heart surgery isn’t. Or black belt karate isn’t. Some human activities just have high skill floors. Science (at the levels we are talking about here - groundbreaking hard to understand complex stuff) is one of them.

Just to clarify: I think science MUST be open for anyone and I think popularizing it is critical.


> Not at all. For the same reason that heart surgery isn’t. Or black belt karate isn’t. Some human activities just have high skill floors. Science (at the levels we are talking about here - groundbreaking hard to understand complex stuff) is one of them.

Are you saying that anyone publicly discussing science on the internet should be subjected to secretive removals because publishing groundbreaking science is complex stuff? If so I strongly disagree and I find it hard to square that with your following statement,

> Just to clarify: I think science MUST be open for anyone and I think popularizing it is critical.

Those two paragraphs are in direct conflict. You're saying science should be open, but not everyone should be able to discuss it online without having their commentary secretly removed?

Nobody is born a heart surgeon or karate expert. Every single one of today's experts got there by beginning somewhere near the bottom and working their way up. Some may have found their way easier than others, but that doesn't change where they began, or the general direction they had to take. Left foot, right foot, we all march, and those who had a harder time of it may be better off for making it through that (not that I want this to happen, I'm just saying it can be a result of enduring hardship).

Not to mention, science is meant to be disseminated. If you just put it up to be read rather than discussed, nobody is going to understand it. It's okay if people misinterpret things, that is part of how we learn.


No. I have explicitly mentioned that a single comment by someone doesn’t amount to anything. When 60% of the comments are like that, that is when this becomes a problem. Then it simply isn’t science any more. Just like when my site was flooded by memes, the original goal of the channel has been compromised.


> No. I have explicitly mentioned that a single comment by someone doesn’t amount to anything. When 60% of the comments are like that, that is when this becomes a problem. Then it simply isn’t science any more. Just like when my site was flooded by memes, the original goal of the channel has been compromised.

Here's the thing. You won't show us the content of those 60% comments. You will only describe your perception of them. You're here saying 60% of comments are "folkloric", which I assume means you consider them to be useless, but the nature of secretive censorship means two things:

(i) the authors are not able to defend themselves, since they are unaware of the removals, and

(ii) the victimization you present here, that such comments disrupt your conversations, means that you are unwilling to be transparent about what was actually removed, since sharing the comments would give them the publicity that in your view they don't deserve. And in that case, nobody here can evaluate the truth of what you say. You might then say "checkmate", but I think the reality is it's checkmate for the people because they can see this happening. The more you remove, the more obvious the source of the problem becomes.


Unfortunately, being a redditor on r/science doesn't require a black belt in science.


> I have been on the receiving end of a campaign organized by a somewhat popular forum in order to fill up a direct democracy platform with as many memes and random shit as possible.

I'm curious what platform you are referring to here?


Decide Madrid, for the Madrid City Hall:

https://decide.madrid.es/

It was built on Ruby on Rails. There’s an Open Source white label platform generator project called Consul. I came up with that name (it comes from “consulta ciudadana”, not from an ancient Roman politician).

https://consulproject.org/

My last involvement with the project was 6 years ago. My understanding is that the site took a downward dive with regard to code quality after other founding members and I left the project, and with the change in government.


ur the definition of a flaming redditor soyjack


> This subreddit has something like 1000+ moderators and they remove any comments that don't fit into Reddit's very narrowly defined Overton window

Or the subreddit has a very strict set of rules for posting top level comments which include:

> 3. Non-professional personal anecdotes will be removed

> 4. Criticism of published work should assume basic competence of the researchers and reviewers

> 5. Comments dismissing established findings and fields of science must provide evidence


Just curious, have you spent time looking at the type of comments getting removed?

Have you looked at any of the top threads on Reveddit/Unddit with 75% of the comments gone?

It doesn't take a rocket scientist to see how it goes way, way beyond those rules you just listed, into something that can only be described as a petty dictatorship - but with any hundred angry nerds who happened to show up playing the dictator.

Zero transparency, zero due process, no one cares.

This sort of excuse (blindly believing all that matters is "just don't be an asshole on the internet and you have nothing to worry about", "just follow the rules on the sidebar") is exactly how this sort of thing happens and no one bothers to look into it. If Reddit had a button to list the arbitrarily removed comments (like Google shows DMCA request removals) you can be sure the outrage would be 100x and those sidebar rules would seem meaningless except maybe 20% of the time.

I chose r/science because that's a topic that typically prides itself on caring about finding the truth, not about satisfying the ideological whims of the gatekeepers.


You call it petty dictatorship, I call it the best moderated subreddit of the whole platform.


You just happen to be lucky enough to be within the very narrow Overton window. Or at least a) believe what's being curated for you and b) never cared to look at what's not being said.

You should hope that your views (or more accurately, their views) never stray from it and the information that a tiny group of people are controlling is good for the entire population.

You could say just use another subreddit. But this isn't limited to one.


Is the window really that narrow, or do you just happen to fall outside it/identify as having fallen outside of it for personal identity reasons?


It's narrower than you might think. For example, I've recently been paying attention to the trans issue because some are saying that pro-trans groups are equating words with violence [1] [2]. Apparently the pro-trans groups have been saying that only 1% detransition, but there are youth speaking out saying that number is likely far greater [3] [4] [5].

It's hard for me to believe these are made up stories. And, it appears that such stories are currently only being told via conservative media. Please correct me if I am wrong.

This is of interest to me because the US holds that words are not equal to violence, with the only exception being the emergency principle. According to Nadine Strossen, President of the ACLU from 1991-2008, the emergency test says speech may be punishable "if, in context, it directly causes specific imminent serious harm." [6] If groups are successfully convincing people otherwise, then it may mean we are forgetting what free speech is. Free speech, the US version anyway, draws the line at violence, not words.

[1] https://archive.ph/1NeiV#selection-1513.7-1513.57

[2] https://corinnacohn.substack.com/p/the-world-should-not-need...

[3] https://www.youtube.com/watch?v=3tQr6WZXHSQ

[4] https://www.youtube.com/watch?v=OdSKBVHrizo

[5] https://youtu.be/STE1pmCjULQ?t=2047

[6] https://books.google.com/books?&hl=en&id=whBQDwAAQBAJ&q=in+c...


It took a long time for you to get around to it, but was the reason your comments keep getting moderated because you show up in every gender / transgender thread dropping links and "just asking questions"?

More important than the content of speech in social media moderation is the behavior. You demonstrate bad behavior here.


> It took a long time for you to get around to it, but was the reason your comments keep getting moderated because you show up in every gender / transgender thread dropping links and "just asking questions"?

No, this is only the second time I've discussed the gender/trans issue. My comments generally get moderated for containing links to Reveddit.

> More important than the content of speech in social media moderation is the behavior. You demonstrate bad behavior here.

What is the bad behavior to which you are referring?


That's a _VERY_ selective set of sources ([3][4][5]), which you are in bad faith using as a representation of the "youth".


"There are youth speaking out" should not be taken to mean "all youth". It means "some youth".

It's another point of view on an issue that I'd only previously seen presented from one perspective. That's why I found it to be so interesting. I was shocked to hear it because I had no prior understanding that anything untoward was going on. Question for you, what do you think about Keira Bell's story?


Views are a currency, and not one I wish to give to GBnews.


I'm sure you can find her story elsewhere.


'Bad faith' is the modern authoritarian's favourite excuse for censorship or dismissing comments. Just like 'problematic' it's a vague enough term that it can be easily abused ('dog whistle' and 'gaslighting' are other favourites). You don't have to openly challenge the source material, just question the motives for even asking the question.

Perfect for when you don't have the balls to directly call what the person is saying wrong or bad. Ad hominem 101.


I hope the irony of your comment isn’t lost on you


To be fair, tolerating transgender people's existence at all used to fall outside the Overton window. Having a small cabal of unimpeachable shadow lords ruling all discourse with an iron fist doesn't seem healthy for society to me.

Particularly when there's some circumstantial evidence that Ghislaine Maxwell used to be one of those mods: https://old.reddit.com/r/TopMindsOfReddit/comments/rrldd9/to... (skeptical link, but even here you can see that it's plausible)


I find it a little sad that in order to disagree with me you have to be mean about it.

There are plenty of reasons why a subreddit like r/science is fine for people, not just the fact that they have a "very narrow Overton window". Maybe it's because we understand that this particular subreddit does not conform to the usual agora of discussion that reddit is known for and just shows a list of maybe interesting science papers/articles and clarifications on them from people in the domain. I may disagree with some of the ideas put forward but I can be an adult about it and not jump to the conclusion that there's a cabal of "big science" that is trying to brainwash me.


> I find it a little sad that in order to disagree with me you have to be mean about it.

I didn't find what was written to be mean. Eleanor Roosevelt said,

"No one can make you feel inferior without your consent."

If you feel sad, that's fine, I just want to point out that this is a choice we all make. Sometimes there is wide agreement on what constitutes offense, and we use this to justify the secret removal of other people's content. But offense is inherently subjective, and such secretive removals rob us of chances to sharpen our counter speech. Rather than trying to make perceived hateful conversations secretly disappear, I think we are better off allowing each other to prepare counter arguments to offensive speech.


Being mean? Jeez, I was hardly being mean. Being easily offended and being pro-censorship are two intersecting circles on the Venn diagram.


Wait until your view becomes the one attacked. Regardless of scientific merit.


I think you and the other posters decrying my "oh so narrow" point of view forget that the subreddit is nothing more than a place to see interesting news about science and the clarifications that people in science can bring to that news.

What it is not is a place where laymen like me and (probably) you can debate the veracity of said science. I don't give a toss about which science articles are posted there and whether they agree with my world view as long as the curation and moderation is consistent with the principles described in the subreddit's rules. In my opinion it's fine to not have an opinion or comment even if someone on the internet is wrong - present time excluded. :P


Check out this talk from Jonathan Rauch [1]. He goes through how both the left and right censor, and how that ends up being good for nobody, even if it starts out happening only in the private sector.

[1] https://www.youtube.com/watch?v=E0T9XSG73kY&t=1774s


Until the day you fail the purity test and have your IP and all accounts permabanned due to a disagreement with a mod in disguise.


There's no purity test in the subreddit except being an acknowledged scientist in the domain the submission is about. Forgive me the snark, but it's like you don't even know what we're talking about.


Luckily, moderators can't IP ban anyone. :)


Yes, they can. If you piss off a moderator they'll just ask an admin to ban you, assuming it's not an admin's alt in the first place (admins have sockpuppet mod accounts on every major sub)


They just use the internal tools and create a ticket to have an admin ban your IP and all current and future accounts. Wow, such a hurdle!

Even more awesome to have this happen with a dormitory IP. /s


I actually think the quality of the comment section in /r/science has gotten worse over the years because moderation is too slow or lenient. I assume that has more to do with the volume of work than the intent of the moderation team itself.


There are a lot of legitimate criticisms of papers that get removed, including comments that suggest sub-optimal experiment design, overreach in conclusions, or statistical issues (which are usually present in the hard-hitting, catchy studies that make it to r/science). Normally, these are part of healthy scientific discourse.

When the moderators of r/science particularly agree with a post or it is politically charged, almost any discussion of the actual science involved is purged (unless it is praise or an "explanation" that includes a lot of praise). When the mods don't agree, you can see some discussion of gaps in papers, but you must take the conclusion of the paper as 100% true, which is not how science works.


It makes me think about comment sections here on HN when a study on psychedelics gets posted. Suddenly 90% of top comments are all personal anecdote about mushrooms or whatever.


That seems normal to me. I click and comment on the articles on topics that interest me. Aviation articles will have aviation buffs over-represented. Math articles math buffs. EV articles will have EV buffs and people with an opinion about Teslas over-represented.

For psychedelics, the effect might be more pronounced because there aren’t that many people casually interested and/or those who are are reading and not writing. (I’ve never taken any; what am I going to add to the discussion? If someone has and their comment is interesting, I’ll upvote it.)


Right, but it's all personal anecdote. Not commentary on studies or research. If the guidelines for the subreddit are to keep things more focused on research it would make sense to me for mods to remove the massive amount of anecdote that comes out of the woodwork.

Sure, people can be interested in using psychedelics themselves, but they want the discussion to focus on research.


This is what most people seem to miss in this HN thread. Go look at the average 'large' subreddit, and I'm talking about the ones where personal anecdotes are expected. You'll tend to have some top-level posts that are decent, and generally longer posts. You'll have hundreds of posts that are memes, crap like "5 out of 7, perfect", subarguments that go off topic, and plenty of bots posting comments from other threads. The format that /r/science requires is the antithesis of 99% of the rest of reddit.

This leads to moderation fatigue. You have lots of stuff that should be summarily executed and takes up tons of your time. Then you have the minority of posts that require some thought on whether it's just a controversial post, or that guy that just keeps making accounts and being an intentionally malicious troll. But you don't have any time to think on these posts because another 3 crap posts ended up in the moderation queue in the 3 seconds you're looking at the first one. Eventually people get to the point that if a post ends up in the moderation queue it just gets blocked, which trolls then use to flag just about everything. It's a messy situation when you're asking volunteers to do this.


Building a culture of moderation that promotes transparency and not using mod tools as your pet ideological argument-winning device is not a lot to ask.

There seems to be zero ethical guidelines / pressure / consequences for any of this stuff.

Of course the problem is hard but I've seen tons of high quality, on-topic comments with 500+ upvotes get removed because the mods obviously disagreed with it. The commenter won't even know it's gone and 99% of visitors will not realize how thoroughly editorialized the entire conversation threads are.

I seriously recommend picking any even slightly controversial r/science post and look at the comments being removed. It's not just memes and trolling.


> Criticism of published work should assume basic competence

What this is saying is that even if egregious blunders are made it is against the rules to call that out - how can I point out incompetence if I have to assume competence?

I host a medical journal club - statistical incompetence in the life sciences is bountiful.

> Comments dismissing established findings

I’m sure defining “established” isn’t difficult to do or open to abuse. /s

Those aren’t “strict” rules - they are poorly defined rules open to tremendous interpretation.

Reddit is a cesspool for honest intellectual debate.


I once simply pointed out how an equation in a climate science paper didn't make sense because there was a unit error (there was exp(x) where x was not unitless) => removed.
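
For anyone wondering why that's an error, the standard argument comes from the exponential's series expansion:

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots

If x carried units of, say, meters, the series would be adding a dimensionless 1 to terms in m, m^2, m^3, and so on, which is meaningless. The argument has to be nondimensionalized first, e.g. exp(t/\tau) where \tau has the same units as t.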


Better than r/legaladvice where many of the moderators are law enforcement officers and aggressively remove good advice (including from actual attorneys) and aggressively push bad "cop law" that instructs people to take actions which are highly prejudicial to their own interests. :)

Every once in a while I ponder how many people are unnecessarily stuck in jail because of that one subreddit, then feel ill and have to stop thinking about it.


Is there an article that exposes this? How do you know?


The weird behavior there first came to my attention after hearing an attorney I know griping about their standard advice being removed in favor of "go turn yourself in" messages.

It's fairly well known at this point, I see multiple relevant threads on the first page of https://www.google.com/search?q=reddit+r%2Flegaladvice+moder...


It’s better to say “I think the authors should have thought about X” rather than “the author is a moron and incompetent for not doing X”. You can still say it by critiquing the actions, not the character, of the person. Similar rules exist in the Houses of Parliament around accusations of lying, simply because everyone would descend into that as a method of argument. It’s harder to call people out but still possible, in my opinion.


It doesn't matter to the mods of r/science how politely you put it if you are suggesting that not considering X would have significantly changed the conclusions of "the science."

As an aside, most of the mods are toward the "soft" side of science (biology, medicine, social science), and likely do not understand statistics very well. Papers with glaring issues regularly make it to the top because their conclusions are cool, not because they are truly good/impactful works.


> It’s better to say “I think the authors should have thought about X” rather than “the author is a moron and incompetent for not doing X”.

The likely complaint is probably less about being a moron or incompetent than about having an agenda ... depending on the subject matter, that is. I suspect this is much more of a problem in social sciences than, say, pure mathematics.


Ok, but that’s not what the rule is saying (be civil, no ad hominem, etc). If that were the case I would have no issue. Maybe that would be a reasonable rule - and it is on many forums.

It’s saying that criticism must “assume competence” - that’s very different - and as a moderator I can twist any kind of criticism into a violation of this rule. And why do I have to assume competence anyway - shouldn’t the work stand on its own merits?

And let’s get something straight - “being a moron” is not equivalent to being incompetent (in an area of expertise). One is just name calling the other is a specific legitimate criticism.


The community is 95% less competent than the researchers so I think this rule is fine.


The same can be said for any community discussing politics, technology, or entertainment. An overwhelming majority of online commenters are ignorant of those fields and the myriad of factors that have to be taken into account by the experts who are making the decisions that are being discussed. So, should comments in those subreddits "assume competence" of legislators, software businesses and TV producers, lest they be deleted?


If you are as competent as the researchers, you’re perfectly capable of pointing out the mistakes made in the research without insults about competence or stupidity. As it says in the rules for Hacker News:

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

That these rules are enforced not by deleting content but by comments largely disappearing from view (based on community voting) is not a hugely different outcome IMO. It’s also worth noting that you don’t have to participate in such a community if you don’t want to. Their space and their rules.


I absolutely agree with you that comments should not explicitly cast doubt on the authors competency, nor insult them or their reasonings. However, that is not the rule we are criticising. The rule is:

> Criticism of published work should assume basic competence

That rule for example would bar a commenter from critiquing the work for not taking into account a confounding factor, or for using a non-representative WEIRD [0] population sample to reach overly broad conclusions. Taking these issues into account can be argued to be "basic competence", but they are too often ignored in many of the most headline-worthy social science papers that attract the most upvotes.

[0] western, educated, industrialized, rich and democratic : https://en.m.wikipedia.org/wiki/Transnational_psychology#WEI...


That's why I go to the comments, for the 1% of commenters who do know better and can add something. That's why HN is still quite good; there are real experts here commenting on stuff below the line. Imagine if HN ruled that every article posted must be assumed correct - HN as a commentary would have no value for me any more.


The fact that the average community member is a moron doesn't mean that you should delete the posts from the people who are not. I'm sure they get tons of "biology doesn't replicate" posts that should get removed. They also remove posts that are polite, technically correct, and specific, and point out glaring issues in the paper.


I think those who possess the credentials to show they are not "morons" can get verified in the subreddit and will not have their comments removed.


That is true, but this happens even to credentialed users who speak on papers outside of their supposed field of expertise, even when their comment is within their area of expertise. If a mathematician points out a math error in a biology or climate science paper, it gets deleted. The mathematician rightly should know more about math than a biologist or a climate scientist, and the comment is still taken down.

Also, you shouldn't need credentials to demonstrate that you know something - the fact that you post something polite and useful should be enough to have your comment not be removed. It is a useful proxy for moderation at scale, though.


> you shouldn't need credentials to demonstrate that you know something - the fact that you post something polite and useful should be enough to have your comment not be removed

The moderators of the subreddit obviously disagree. From the sidelines I would say it works for them.


It certainly works for the moderators - I agree. Whether it works for the community is another question.


This particular subreddit is not for actual debates. Everyone in this thread seems to forget about that.


One theory here: Moderation is hard work, people want something in return for hard work, reddit isn't willing to pay mods.

Inevitable result: Over time, the mod pool consists more and more of people who get something in return for the act of moderating itself. Either they get to push an agenda, they get to be petty tyrants, or both.

This could explain the pattern where for any given topic, you'll often see discussion of that topic splinter into several diametrically opposed subreddits which are constantly trying to dunk on each other. No one wants to moderate a sub which attempts to synthesize all the perspectives, but there is a supply of mods available for a sub that dunks on "the bad guys".


> I love reveddit.com, it's hilarious to look at the typical r/science post and see 60%+ of comments removed on average (the 'controversial' ones go 75%+).

Yeah I mention r/science at 7:00 in the linked video.

> If the moderators agree with the study there is zero dissent tolerated. This subreddit has something like 1000+ moderators and they remove any comments that don't fit into Reddit's very narrowly defined Overton window. Not just people being off-topic or taboo... but often anyone who questions the studies being posted.

I'm sure that's true. I haven't looked at the exact biases in that group too much. For a while I was collecting examples of biased removals in certain groups, but that was just so I could come up with relatable stories to tell. I disagree with pretty much every secretive removal. Even in cases where someone is threatening violence I wonder whether we should keep such a removal secret from the author: an extremist in a given group, whose views went unchallenged, may perceive silence as agreement and therefore believe they're part of a team when in fact they're not.

> My only critique is that it doesn't work with Firefox. I have to pull up Chrome which I only use for work. Do you have a social media profile to follow/provide feedback to?

It does work on Firefox, but you have to disable tracking protection [1]. The reason is that Reddit, to which Reveddit obviously must connect, appears on an arbitrary list of domains that Firefox's partner Disconnect.me considers to be "trackers" [2]. That list breaks tons of websites, a problem that is tracked and seemingly ignored in this 8-year-old bug [https://bugzilla.mozilla.org/show_bug.cgi?id=1101005]. The bug is updated with new sites almost every day. I'm not up to date on how browsers or standards are changing, but I'm guessing the Disconnect.me list is a hack until a better solution can be found.
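
Roughly, the site depends on client-side requests of this shape (a simplified sketch, not the actual Reveddit code; the thread ID is a placeholder):

    // Simplified sketch: Reveddit reads Reddit's public JSON API straight
    // from your browser. Appending .json to a Reddit thread URL returns its
    // data as JSON. When reddit.com is on the Disconnect.me list, Firefox's
    // tracking protection blocks this cross-site request made from
    // reveddit.com, so the page cannot load any data.
    async function fetchThread(threadId: string): Promise<unknown> {
      const resp = await fetch(`https://www.reddit.com/comments/${threadId}.json`);
      if (!resp.ok) throw new Error(`Reddit returned ${resp.status}`);
      return resp.json();
    }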

I don't know of a better solution. I'm all ears if you have one provided it doesn't cost me more or hurt the site's operation by putting its logic on a server [3].

[1] https://www.reveddit.com/about/faq/#firefox

[2] https://github.com/disconnectme/disconnect-tracking-protecti...

[3] https://groups.google.com/forum/#!topic/mozilla.dev.privacy/...


One more question I just thought of. When I use services like Reveddit I often notice [removed too quickly] where I'm guessing moderators get to it before the scraper can archive it.

What type of timeframe do you guys typically work with before it gets removed? Any ideas on how to improve the capture times?

Also are you looking for dev help with your project?


> When I use services like Reveddit I often notice [removed too quickly] where I'm guessing moderators get to it before the scraper can archive it. What type of timeframe do you guys typically work with before it gets removed? Any ideas on how to improve the capture times?

Current capture time is noted in the status on Reveddit.com/info [1]. That blue box will show up in threads when the capture time is delayed more than the standard, which I think is typically under a minute.

The content for [removed] comments in threads comes from another third party service called Pushshift. It isn't able to get many auto-removed comments (such as those removed by Automoderator), and that service sometimes goes down, may fail to ingest something, etc.

In an attempt to fill some of these gaps, Reveddit can restore some unarchived comments by searching the user pages of other commenters in the thread. And if you visit Reveddit via a direct link to a removed comment, such as from the button that is provided under such comments by the Reveddit Linker extension [2], then it will try to auto-load those comments this way. Otherwise you can click Restore to do it manually.
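
For the curious, the restore trick works because a removed comment disappears from the thread view but can still appear in its author's public user listing, keyed to the thread by link_id. A simplified sketch (illustrative only, with hypothetical names; the real code is on Github):

    // Simplified sketch: recover [removed] comments by reading another
    // commenter's public user page and keeping only the comments that
    // belong to the thread in question, matched via link_id (a thread
    // fullname looks like "t3_<id>").
    async function restoreFromUserPage(author: string, threadFullname: string) {
      const resp = await fetch(
        `https://www.reddit.com/user/${author}/comments.json?limit=100`,
      );
      const listing = await resp.json();
      return listing.data.children
        .map((child: any) => child.data)
        .filter((c: any) => c.link_id === threadFullname)
        .map((c: any) => ({ id: c.id, author: c.author, body: c.body }));
    }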

Almost all of the features on the site should be documented in either a bubble or the FAQ. If you see something that you think needs a help bubble, or a better description, let me know.

> Also are you looking for dev help with your project?

Sure! The code is on Github. Feel free to brainstorm via issues, email, or submit PRs. My main focus now is raising awareness so it would be great to have backfill on keeping the code functional.

[1] https://www.reveddit.com/info

[2] https://www.reveddit.com/add-ons/linker


> Reveddit can restore some unarchived comments by searching the user pages of other commenters in the thread.

Clever


There are a lot of fun things you find when you start working on this. Virtually nobody else is doing it because they all think it's wrong without realizing it's the right thing to do.

For example, you could reverse engineer automod configurations by scraping user pages with a headless browser, storing all of the auto-removed comments, and then looking for the most common n-grams in a given subreddit. Alternatively you could make the code work "server-side" so some sort of json is returnable, though that would take more work.
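
A rough sketch of that n-gram idea, assuming you've already collected the removed-comment bodies into a list of strings:

    from collections import Counter

    def top_ngrams(removed_bodies, n=3, k=20):
        # Frequent n-grams across auto-removed comments hint at the
        # phrases a subreddit's automod config may be matching on.
        counts = Counter()
        for body in removed_bodies:
            words = body.lower().split()
            counts.update(
                " ".join(words[i:i + n]) for i in range(len(words) - n + 1)
            )
        return counts.most_common(k)

    # e.g. top_ngrams(bodies, n=2) might surface a filtered keyword or domain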

And I haven't even mentioned the history pages, which may indicate where users and moderators have disagreed the most:

https://www.reveddit.com/v/worldnews/history/

Plus, this is only one social media site. Every site does this, and there's probably a way to reveal the shenanigans going on in a lot more places.


> I don't know of a better solution. I'm all ears if you have one

No, that makes sense. I see you've added a modal that explains how to get it to work with FF. My bad, I think I used Unddit in the past which didn't say this. I'll use Reveddit instead.

Besides maybe a web extension that somehow bypasses those controls? (idk if that works).

> Yeah I mention r/science at 7:00 in the linked video.

Thanks I'll watch the video tomorrow. Appreciate your work.


> I see you've added a modal that explains how to get it to work with FF. My bad, I think I used Unddit in the past which didn't say this. I'll use Reveddit instead.

No worries. I'll just share the differences with Unddit here for any interested. AFAIK Unddit only works with threads, whereas Reveddit works with nearly any type of Reddit page, including subreddits and domains. And last I checked, Unddit shows a limited number of comments up front, whereas Reveddit loads them all. So if a thread has more than 1,000 comments, and you visit an Unddit link, it might be missing some, and you won't know to check for more unless you know how the site works. In my opinion, that's unintuitive, so I choose to load everything up front. Also, Unddit shows user-deleted comments [1] while Reveddit does not [2].

The most distinguishing features of Reveddit, IMO, are that it works with user pages, can restore comments from user pages, has an extension that can notify you of removals when they occur [3], and has more filters, sort options, etc.

> Besides maybe a web extension that somehow bypasses those controls? (idk if that works).

Well that's basically what the modal describes. You can customize where you want to turn it off, no extension needed.

[1] https://www.reddit.com/r/pushshift/comments/qiwurl/what_happ...

[2] https://www.reveddit.com/about/faq/#user-deleted

[3] https://www.reveddit.com/add-ons/direct/


Doing keyword analysis (and perhaps sentiment analysis) could be a really powerful and valuable way to see which subreddits have which ideological biases, especially politics-heavy subreddits; in particular, analysis of the keywords/topics in deleted comments.
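
As a rough sketch of what that could look like in Python (using NLTK's off-the-shelf VADER sentiment scorer; the comment lists are placeholders you'd fill with removed and surviving comments from a subreddit):

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time lexicon fetch

    def mean_sentiment(bodies):
        # Average VADER compound score: -1 (most negative) .. +1 (most positive)
        sia = SentimentIntensityAnalyzer()
        scores = [sia.polarity_scores(b)["compound"] for b in bodies]
        return sum(scores) / len(scores) if scores else 0.0

    removed = ["placeholder removed comment", "another removed comment"]
    kept = ["placeholder surviving comment", "another surviving comment"]

    # A consistent sentiment or keyword gap between removed and surviving
    # comments would be one rough signal of bias worth a closer look.
    print("removed:", mean_sentiment(removed))
    print("kept:   ", mean_sentiment(kept))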

For example, in /r/CanadaPolitics the moderator /u/_Minor_Annoyance (a username that fits into the category of "demon troll" nicknames that Jordan Peterson has defined) is heavily biased toward the Liberal Party of Canada and the establishment narrative. Anyone who questions or posts anything (however truthful) that counters their narrative or makes the Liberals or Trudeau look bad gets deleted, most commonly (even if highly detailed and very substantive) under the subreddit's "Rule 3 - Not substantive": "Keep submissions and comments substantive."

It's sickening [if you understand the consequences of it].


Blocking trackers is a feature, not a bug. Disagreeing with what belongs on the list is fine, of course, but it isn't a bug. IMO Reddit belongs there equally as much or as little as Google and Facebook.


> Blocking trackers is a feature not a bug.

I didn't say it was a bug. I said the list of websites broken by that feature is tracked in a bug.


I can see how shadowbanning ("bot-bans") is effective against bot accounts, since it's less trivial for a bot maker to write code that detects when their bot has been detected, resulting in a lower rate of new bot accounts. That makes sense. However, what seems to have occurred is widespread, blatant misuse of bot-bans to silence views that differ from those of the moderators.

Shadowbanning is obviously a security-through-obscurity measure. Instead of having good bot detection, they seem to just give moderators an invisible yet almighty ban hammer with no accountability.

I've personally never used Reddit other than as a lurker to skim the valuable content from subreddits like r/BIFL, etc. I've found the politicking of platforms like Reddit too vitriolic for my tastes :/


> I can see how shadowbanning ("bot-bans") is effective against bot accounts, since it's less trivial for a bot maker to write code that detects when their bot has been detected, resulting in a lower rate of new bot accounts.

Huh? It's trivial to write code that detects shadow bans, and bot-makers are more likely to write such code than individuals who get shadow banned. I mention this at 28:00 in the linked video on this post.
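
For example, here's a sketch of such a check (illustrative only; it fetches the thread anonymously and only walks top-level comments):

    import requests

    def visible_to_public(link_id, comment_id):
        # Fetch the thread as a logged-out client would see it, then check
        # whether the comment still appears with its body intact. Missing
        # from the listing, or shown as "[removed]", means shadow removed.
        thread = requests.get(
            f"https://www.reddit.com/comments/{link_id}.json",
            headers={"User-Agent": "shadow-check-sketch"},
        ).json()
        for child in thread[1]["data"]["children"]:
            if child["kind"] == "t1" and child["data"]["id"] == comment_id:
                return child["data"]["body"] not in ("[removed]", "[deleted]")
        return False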

The fuss over bots makes no sense to me. It's just an excuse to build more dystopian shadow removal or shadow demotion tools. Bot or not, some human is behind the pushing of every single piece of content you see. There is no AGI, and therefore machine-learning-driven algorithms are trained on data that comes from humans.


>The fuss over bots makes no sense to me.

Have you been paying attention to some of the content ending up on Reddit, then? I've been seeing accounts in posts with no original content. They appear to index other users' threads, then apply some ML and post the results to other similar threads where they seem to fit. It was only when I saw one of these bots post the same content in a thread under two different user accounts that I realised this was a thing.

"But it comes from humans"

Cancer is made of human cells; just coming from humans doesn't mean it can't have a negative outcome if allowed to spread. Reddit is a business with costs. The business model assumes that at least some percentage of the content served will drive ad revenue that allows the business to run servers. You can keep increasing the proportion of bots to humans to the point that the business model is not sustainable. Letting "Dead Internet Theory" run wild is a great way to go out of business quickly.


You're ignoring the core argument and going on a tangent. Bots being a problem (which they very well might be) is not at all solved by shadowbanning because it's trivial to detect.


Like security through obscurity, it will probably thwart a small percentage of the total new bot accounts. The issue is that you are right: eventually people catch on to the obscurity.


Very interesting and eye-opening talk! I checked one of my old Reddit usernames (I don't really use Reddit anymore) and was gobsmacked at how many comments (that I thought were relatively innocuous) were shadow-banned. And then others in the same subreddit that were nearly identical passed right through!

Also some of the links on your slide-deck are just as eye-popping (eg. the public-private discourse document with crosstabs). Lots of good info in there.

Good work!

However, I am not sure what to do with this information except perhaps sharing it with some people I know own Reddit (as per your FAQ), but then that makes me think "then what"? What happens then? Getting Reveddit exposure is good, but there does not seem to be a clearly stated strategy for how any of that traction will be practically used.

Also, on slide #60 of your presentation you have a slide title of "There is no evidence that censorship doesn't change minds." I don't know if I am just having a slow brain day, but I was not sure how to interpret this in the context of what you were saying and the bullet points on the slide itself. Is this just meant to be a more non-committal version of "Censorship changes minds"?


> Also some of the links on your slide-deck are just as eye-popping (eg. the public-private discourse document with crosstabs). Lots of good info in there.

Thank you for going through it! Yours is the first comment I've seen mentioning the talk. You've provided some great feedback here that I will incorporate into future versions of any related presentations.

> However, I am not sure what to do with this information except perhaps sharing it with some people I know own Reddit (as per your FAQ),

Right, I think we need to do two things. (1) build these systems for other platforms, and (2) raise awareness about what's going on by telling stories about it.

I've got some ideas about how to do both of these, but I don't own the patent or anything. So to the extent that someone's motivated to collaborate, I'm willing. This is very grassroots. It has taken some time to get to the point where I could so openly collect my thoughts; however, now I've got a good idea about how to go about it. In fact I've always felt that way about this work! Every couple of weeks I'm like, oh, I should be doing that. Then I go do it and reach a new milestone, but real public awareness remains elusive.

I've tried reaching out to hundreds of journalists and several organizations who support free speech and so far haven't had success, so if you have any connections and want to fire off an email or tweet about shadow moderation, Reveddit, or the talk, please do your worst.

> but then that makes me think "then what"? What happens then? Getting Reveddit exposure is good, but there does not seem to be a clearly stated strategy for how any of that traction will be practically used.

Well, I can say with confidence that when the site is shared on Reddit, groups do change. Users create new groups, moderators become more open, or admins may even step in. So my expectation is that sharing it on other social media sites might lead to more discussion and possibly more openness from platforms, because again this is happening everywhere on every side of every issue in every geography.

For example, I have virtually no Twitter following and don't know how to do Twitter, especially since I'm usually awake while you all are sleeping.

Does that provide some ideas for direction? I know it's not all buttoned up like a campaign from FightForTheFuture. I'd love for it to be like that, but right now it's still just me, and maybe you all. I don't have good ideas for what to do with money so I am not asking for donations.

Another to-do is to start doing video shorts on stuff that gets removed, like Cheyenne did here [1] [2].

> Also, on slide #60 of your presentation you have a slide title of "There is no evidence that censorship doesn't change minds." I don't know if I am just having a slow brain day, but I was not sure how to interpret this in the context of what you were saying and the bullet-points on the slide itself. Is this just meant to be a more non-committal version of "Censorship changes minds"?

Yes, that's a much better way of putting it, or maybe "Myth: Censorship might change minds" ? I think I was thinking of this conversation [3] where I said "countering negative ideas with suppression does not work", and someone else said they don't believe it. I've seen the same argument made elsewhere too.

Since this talk, I have developed a stronger opinion on that. I now believe censorship creates the toxic environment that mods seek to eliminate [4].

[1] https://www.tiktok.com/@cheyenne.l.hunt/video/71115053899502...

[2] https://www.tiktok.com/@cheyenne.l.hunt/video/71118581026385...

[3] https://news.ycombinator.com/item?id=33055132

[4] https://old.reddit.com/r/TheoryOfReddit/comments/ymeqaz/hate...


I agree with your stronger opinion and have for a while, since I read [1] (he was calling it hell-banning but foresaw the same problems that emerged).

One problem I see with your end-goal of changing moderation tools for a better (and more open and honest) social platform experience is that you will have to use them to do it.

You have already documented the problem with this -- they can shadow-ban you like anyone else! Even more, if the top brass decides they don't like it, they could ban you, or any mention of Reveddit, from the entire platform. And it appears that they do like this power, even if the censorship is what creates the toxic elements, because pretty much every one of them uses it. And there is, as you mentioned, a legitimate use for it. So they would have to be convinced to spend resources on changing something that they want and that already works for them, in a sense "taking away power" from people who have it now, which is always difficult.

So there seems to be a nearly insurmountable chicken and egg issue with making real change, short of convincing someone with vast resources or someone with vast reach to get involved.

Alternatively, your plan of using the toxicity of this censorship to build up enough of a group to make noticeable noise seems to be about the only realistic way. You mentioned doing video shorts like Cheyenne, which can get some notice. Although I suspect she, or someone like her, might make a much larger initial impact than you just starting out, and you may never get traction considering the vast amount of content there.

I suspect too, that even with the videos you would have to thread the needle by combing through and picking out particularly egregious abuses of shadow-banning to avoid having people associate you, the project, or any associated group with any controversial subjects.

Another idea (of working within the system) might be to get volunteers to directly DM people that have had comments and posts shadow-banned, show them the shocking proof on reveddit, and see if they might be recruited to a sub or discord dedicated to making changes or support in some other way even if it is a mention or hashtag etc. From there more discussion and opportunities for growth might emerge. It is possible this could even be automated to start.

One last subjective thought is that this could have an out-sized attraction for people that are libertarians due to the censorship component of it. Libertarians are very ... diverse ... politically, and so if you have some initial traction you might wind up having to "eat your own dogfood" by using your ideas concerning moderation to deal with particularly "hot" topics and people to keep the project/goal on course.

I am interested in this and would like to think about it some more. What is the best way to stay in touch?

[1]https://blog.codinghorror.com/suspension-ban-or-hellban/


Thanks, I largely agree with all of this except for concerns about me being banned or whatever. I'm not worried about that at all. Reddit banning Reveddit would probably shoot it to the moon because there's absolutely a story there. The thing about censorship is that it actually helps the message of whoever's being censored, if you know to focus on the censorship itself; that is what wins people over. People naturally want to hear both sides of an argument, and if they're not allowed to hear one, they want to hear that side even more [1].

It's a little sad to hear that that blog post foretold this situation. I knew of the post but didn't realize it predicted the result. If so, I wonder why nobody built something like Reveddit sooner. Oh well.

About contact, there's an email in my HN profile. I have to sign off now, but would be happy to chat further there.

[1] https://www.youtube.com/watch?v=E0T9XSG73kY&t=4889s


I found some time and wanted to respond to more of your comment,

> So there seems to be a nearly insurmountable chicken and egg issue with making real change, short of convincing someone with vast resources or someone with vast reach to get involved.

The solution to this is to make more content myself rather than asking others to do it.

I already spent months listening to probably a hundred relevant podcasts and reached out to hundreds of people and organizations. I've nearly made several connections, so I think it's only a matter of time until one lands. In the meantime, the best way to actually reach them may be to become a creator myself. For example, originally I was just writing an email describing Reveddit. Then I got a lucky break: a friend invited me to be on his podcast, and I was able to share that around. That may have led to me being invited to a conference, for which I was motivated to prepare a good talk and put my face on camera. Now, instead of writing long emails, hoping something catches the reader's eye, I can just share that video.

Better still might be to break that content into smaller videos, targeting a different audience with each, and then growing from there. I never wanted to be an influencer, but it seems to be the most effective way of being heard these days. Plus, it beats begging someone popular to write or talk about Reveddit themselves. If I produce video content that's succinct and convincing enough, that makes for a far better email than the written word where I would often ask someone else to write the story for me. The problem may be that they can't write that story because they don't have all of the examples at their fingertips like I do. I've tried compiling these into a spreadsheet and sharing that, but it may still be too overwhelming for journalists at this point.

I can get behind that because it's been a gradual buildup for me too. I mean, I didn't tell anyone I knew that I was working on this aside from my nearest family for some years. Then when I first opened up, I had too much to say and lost the thread with some people to whom I related the work. Gradually, I started to make sense of where the public conversation was on this (disinformation vs. free speech) and where I could make a good point while providing room for a listener to ponder what I laid out. More and more I'm able to engage in conversation rather than giving eccentric one-sided speeches.

> I suspect too, that even with the videos you would have to thread the needle by combing through and picking out particularly egregious abuses of shadow-banning to avoid having people associate you, the project, or any associated group with any controversial subjects.

Well personally I think they're all egregious abuses. Pretending to a toxic person, for example, that their most vile content is publicly visible while it was in fact removed is harmful: it may lead them to think they're still "part of the team" and then act on their seemingly "approved" ideas in the real world. So, demonstrating the egregiousness of the practice is just a matter of showing conversations as they appear when secretly moderated, then showing how they would appear without secret moderation, and then discussing the possible thought processes of the conversation's participants, readers, and society at large. The removed content doesn't necessarily need to be something in the Overton window, you just need to show how the lie hurts civility.

But yes, I do now try to meet people where they are so that I can show them what I see from their vantage point.

I suspect this strategy is known by most who produce popular content, and I, like you, am just coming to understand it. You don't gain followers by blowing people away with something completely new. I once thought that's all that's needed to go viral, but it isn't. New ideas, or in this case forgotten ideas, may need coddling in the early stages. Once you have an audience you probably end up shooting yourself in the foot a few times, but hopefully by that point you're prepared to deal with it.

> Another idea (of working within the system) might be to get volunteers to directly DM people that have had comments and posts shadow-banned, show them the shocking proof on reveddit, and see if they might be recruited to a sub or discord dedicated to making changes or support in some other way even if it is a mention or hashtag etc. From there more discussion and opportunities for growth might emerge. It is possible this could even be automated to start.

That sounds like a chicken and egg problem to me. Where do I find such volunteers? Anyway, I'm not sure if Reddit would approve or not, and I'm not keen to influence them towards having mentions of the site shadowbanned in PMs, for example. I've no idea if that's already happening, by the way. I haven't tested it.

Before I made Reveddit, I had a bot that would PM people when their content was removed, and Reddit summarily suspended it after 5 days, saying that unsolicited PMs were not permitted. I wasn't even the first to try that. Afterwards, I found several subreddits that had been banned for making bots that auto-mention users for the same reason. I felt silly and annoyed at having redone something that wasn't permitted, yet I still learned something and fortunately realized there was still another way to do what I wanted. There always is.

Given Reddit's ban on sending unsolicited PMs along with their crackdown on "brigading", I guess they would frown upon PMing people with links to their removed content, even if it came from real people and not just a bot. On top of that, it might be akin to growing an army to ask people to deliver bad news to other people without any personalization and less self-discovery than happens when someone comes across Reveddit more organically. The army stuff is what's already happening on social media, and I want to relate a slightly different message.

I want to raise awareness and share a possibly new direction that may be the last thought on many people's minds. There is a solution to today's problem of society having difficulty communicating. Secretive censorship is holding us back from solving the problem, and upending that will allow those-who-are-willing to come up with good counter arguments to the hateful speech people fear. We don't need to upend each other, only the parts of ourselves that enable these systems to be built and become popular. You are your own worst enemy, and you are your own best advocate.

Hate speech is only scary if you don't know how to counter it and you think you're the only one who can counter it. It's okay if you don't know how because there are many capable people who can. The only reason you think that isn't possible anymore is because you didn't know about the degree to which human-driven shadow moderation is disrupting that process. Once you understand that, you're back in the driver's seat, or at least you're now in a vehicle being driven by a capable driver because you have faith in humanity.

Plus, most Reveddit traffic does now come from people sharing it. So to some extent what you're describing is already happening without my involvement, and I prefer that to telling people how they should share the tool. I'm after mindfulness, not mindlessness, even when it comes to sharing Reveddit. Perhaps I am being haughty to say so. I gather your point is I should build a better network, and I agree!

> One last subjective thought is that this could have an out-sized attraction for people that are libertarians due to the censorship component of it. Libertarians are very ... diverse ... politically, and so if you have some initial traction you might wind up having to "eat your own dogfood" by using your ideas concerning moderation to deal with particularly "hot" topics and people to keep the project/goal on course.

Probably true. Interestingly, the r/Libertarian group on Reddit is a huge target of censorship. The mods there lie up and down [1] more than any other place I've seen, and r/LibertarianUncensored is trying to call it out [2]. The "uncensored" group is small, but I expect they will eventually be able to get their message out to more users in r/Libertarian. I mention them in my talk at 18:18 [3], and the slides contain these same two links.

Is there another place they hang out, or were you suggesting posting about Reveddit in r/Libertarian? I've already tried there; they removed my comments. I also tried messaging mods of r/neoliberal and did not receive a response. I guess I could try posting there without prior approval and just see what happens. I'm a little tired of trying to post about Reveddit on Reddit itself because it so often gets removed, but I could always try again. Anyone else could too. Something simple like I recently posted in r/detrans [4] might work.

[1] https://archive.ph/O0GN8

[2] https://www.reddit.com/r/LibertarianUncensored/comments/uotv...

[3] https://www.youtube.com/watch?v=aCadmiIfbcI&t=1098s

[4] https://www.reddit.com/r/detrans/comments/yej7mx/has_secreti...


Just tested this on my 15-year-old account and it's really well executed. Great work!

The main link here should be to reveddit, not the video at cantsayanything as it is now.


> Just tested this on my 15-year-old account and it's really well executed. Great work!

Thanks!

> The main link here should be to reveddit, not the video at cantsayanything as it is now.

I intended to share the video because it's an overview of what I've learned while building the site, which I thought might engender more interesting discussion than the tool itself.

TL;DW: I think secret removals create the toxicity they seek to remove. See also this ongoing discussion:

https://www.reddit.com/r/TheoryOfReddit/comments/ymeqaz/hate...


For Show HN, it's customary to link directly to the product.

I agree about secret removals.


I see. I don't see a specific mention about the link itself in the rules for Show HN. I also think customs are less fixed than rules and don't need to be moderated.

https://news.ycombinator.com/showhn.html


Wow, great. Of course I'm getting shadow censored left and right on reddit, but I always knew it was the case.

I'm not using Reddit anymore anyway. Too many hidden rules, not just as a simple user but as a sub creator: per Reddit admin rules, you can't tell your own sub's redditors why their comments were removed. That's absolutely crazy, and if you disclose the hidden rules, your sub is banned.


> I'm not using reddit anymore anyway.

Secretive removals happen on all of the platforms (see 5:50 in the talk [1]).

Plus, the groups who are brainwashed by it are large enough to have an impact in the real world. I don't think it benefits anyone. We're much better off knowing when conversations have been interrupted.

[1] https://cantsayanything.win/2022-10-transparent-moderation/


It is a scary reality, one that I think the average person would be strongly against if they were aware of it. I would go so far as to say there should be laws or regulations preventing shadow banning/deletions


> It is a scary reality that I think the average person would be strongly against if they were aware of it.

That makes it easy to support raising awareness about it, which isn't scary at all.

> I would go so far as to say there should be laws or regulations preventing shadow banning/deletions

Nope, don't go there. The government is a huge hammer and they would love the opportunity to take this win from you. If you start making exceptions to the holding that "code is speech", then eventually you're going to lose something you care about that the government is after, such as encryption. And, a law that "bans encryption", which we all know isn't possible, would just further entrench existing power structures. The law would be arbitrarily applied because you can't ban math.

The same is true for banning shadow moderation. Either the behavior will shift overseas, or local companies will skirt around the written law.

Ultimately, society will have to accept that only individuals can hold individual decision makers accountable. We are each our own worst enemy, and each our own best advocate.

And, it takes less time to build Reveddit-like systems that apply transparency to existing social media platforms than it takes existing platforms and power structures to alter their behavior or grow userbases for new platforms. You need to flip your thinking. They don't have the power to control you, you control yourself, and therefore you have the ability to challenge the status quo.


What exactly are these hidden rules that you aren't allowed to disclose?


Insights I got from my own account:

a) Reddit randomly decided some posts were spam and shadow-banned them O.o Content was all over the place, from links to link-free text posts. Weird.

b) Low quality subs (/r/europe, /r/Germany, cryptocurrency subs) liberally shadowban all kinds of comments.

Everywhere else this shows removals, there are comments/flairs about the removal.


Subs cannot "shadowban" users themselves. Either one is banned and gets a notice, or one isn't banned at all. Comment removal initiated by moderators themselves is not "shadowbanning" - that's something Reddit's automated mechanisms (and maybe admins too) do. I think you meant "content removal without explanation" instead.

I will say, however, that for about half a year now or even more, reddit's own anti-spam detection has been filtering too many posts in a sub (60k big) that I co-moderate, to the point where we've had to manually reinstate up to a dozen posts on specific days.


> Subs cannot "shadowban" users themselves.

They can. It's now called a "bot ban" and used to be called a shadow ban. It's done by adding a user's name to the sub's Automoderator configuration.
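
Based on the documented rule format [1], such a rule is only a few lines of AutoModerator config (usernames here are hypothetical):

    author:
        name: [some_user, another_user]
    action: remove
    action_reason: "bot ban"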

> With a bot ban, some users won't realize they've been banned. [1]

There is more explanation on Reveddit's FAQ under "What is a shadowban?" [2]

[1] https://www.reddit.com/r/AutoModerator/wiki/library#wiki_use...

[2] https://www.reveddit.com/about/faq/#shadowban


We experienced that too, at a much higher rate. Mods would get blamed for deletions AutoMod would do. Turned into quite a shitshow.


> We experienced that too, at a much higher rate. Mods would get blamed for deletions AutoMod would do. Turned into quite a shitshow.

Sorry, how are mods not responsible for the bot they coded?

Or, are you saying users were attributing the removals to humans rather than understanding that it was done by a script?

If the latter, then I'm not sure the distinction is so important for users. What they care about is understanding why their content was removed, not how. And I think that is a fair question.

In the real world, we do not throw criminals in jail without explaining what they did wrong and putting that in the public record. Obviously it's different online, but I think at the very least users should be able to discover when their content has been removed. The system shouldn't pretend that their removed content is publicly visible. And, there is some research that indicates that when mods provide removal explanations on posts, it reduces the likelihood of that user having a post removed in the future [1]. That's a reduction in mod workload that should be of interest to moderators, and it was researched and published before Reddit started showing post removal notices on posts [2]. Prior to that, all posts were shadow removed. Now only some are shadow removed, and all comments are shadow removed. To my knowledge, this research has not yet been done for comments.

[1] https://www.reddit.com/r/science/comments/duwdco/should_mode...

[2] https://www.reddit.com/r/changelog/comments/e66fql/post_remo...


>In the real world, we do not throw criminals in jail

Again, you seem to be very confused about private property.

Think of it as a civil action, which, at least in US law, operates under a completely different set of books and assumptions. For example, I can tell you to leave my property in a civil action without any explanation why. If you come back, a criminal charge can be pressed against you.

Much like a business can remove its patrons at will for (nearly; civil rights apply) any reason, and can be negatively impacted if and when it does this to too many people and the patronage stops coming, you still don't have any legal right to their property.


You must be trolling, haha; there was more to my quote! We do throw criminals in jail; the point was we don't do it without making a public record of how and why.

That was an analogy that you've taken completely out of context and on another tangent.


While I do think your work is interesting indeed, and needs to be done to examine the problems at hand, and my replies may be unintentionally harsh towards you, there has been a general trend where many users think they have a right to an audience. That precedent doesn't exist without a contract, and the contract in social media is heavily biased in the direction of the media company, as the user is only indirectly paying for the service.

As you've stated in this thread, an involuntary contract between the government, the media company, and the user is not optimal, as it tends to give more power to political entities. Voluntary contracts between the business and non-paying users are seemingly just as worthless; no for-profit entity is going to hobble itself in such a risky manner. As it stands, that contract is "We the media company can do whatever we like, and you can jump off a bridge".

It is possible for user outrage to change a media company's behavior, but when there are no valid alternatives because of network effects, companies will commonly say they are going to change to placate the masses, and then do nothing, because the users will remain on the site anyway. Many places have tried to compete against Reddit, but mostly under the 'no moderation' banner, in which case the sites become toxic hellholes. So far, 'better moderation' has not been a motivator to get users to shift. Maybe your work will lead people that way.


Your replies aren't harsh; they're just off topic, and I think you're trolling me for laughs, trying to get me or someone else mixed up. For example,

> As you've stated in this thread, an involuntary contract between the government, the media company, and the user are not optimal

I said no such thing, and I have no idea what you're talking about. Good one, I'm sure you thoroughly confused a few people.


> Subs cannot "shadowban" users themselves.

That's not technically true: they can set an automod regexp to match the comments, flag them for review, and then simply never approve them.

The comments are effectively shadowbanned.

Automod rules are flexible enough to direct it at specific users, classes of users (e.g. new or low karma), or keywords in the posts.


Subs definitely can make your content in that sub (including everything you post there) disappear in a way that isn't just "without explanation", but that explicitly makes the content appear still online to you, making it harder for you to notice. As the demo OP provided demonstrates.

As far as I know this is not a feature Reddit officially provides, but creative (ab)use of the spam filter, moderation queue, or similar features, which reddit doesn't effectively prevent.


> As far as I know this is not a feature Reddit officially provides

No, it's provided by Reddit and is in the docs,

> With a bot ban, some users won't realize they've been banned. [1]

There are several more dystopian features built atop shadow removals [2], each of which multiplies the number of impacted users by 10x or more. Social media sites are constantly building more and more powerful tooling for moderators to more easily remove more content as userbases get more and more riled up. The ability to disagree with words online is being shrouded in darkness, and it won't stop until we start applying transparency that can provide openings to see what's been hidden from us.

[1] https://www.reddit.com/r/AutoModerator/wiki/library#wiki_use...

[2] https://www.reddit.com/r/reveddit/comments/sxpk15/fyi_my_tho...


Is there an option for preapproving/hiding comments from new users in a subreddit? I'm thinking of a certain regional politics sub where, on the main page, you'll see XX replies, but open the thread and it will be completely empty for a couple of hours.


> Is there option for preapproving/hiding comments from new users in a subreddit?

Yes, I mention this in the talk at 14:50 [1]. A year ago Reddit upgraded its "Crowd Control", which once only collapsed comments from new users. The feature was bad enough in its initial form. Now it can also remove comments. When they announced it, I said it was like automod on steroids [1]. Now I call it Crowd Control with Prejudice [2], and it's only one of many dystopian tools [3].

Subs can choose to turn it on, and more and more are doing it in order to make their jobs easier. Between mods acting largely based on user reports, and tools like Crowd Control removing all commentary from new users, there is an absolutely massive amount of content authored by real people being secretly removed from Reddit every second, and that's a single platform.

The irony is, the more they secretly remove, the more people stay put and the bigger the groups get. So there is then more content for moderators to review, which means more work for unpaid volunteers. Or maybe it isn't irony, because a bigger userbase is better business. I have no problem with platforms making money, but I do take issue with their systems, which lie to millions, perhaps even billions, of users about the visibility of their own authored content.

[1] https://youtu.be/aCadmiIfbcI?t=890

[2] https://www.reddit.com/r/reveddit/comments/qi1r55/fyi_crowd_...

[3] https://www.reddit.com/r/reveddit/comments/sxpk15/fyi_my_tho...


Automoderator can be manually configured to perform actions on posts and comments that fit certain criteria, though as far as I know, it's not as extensive as, say, a bot that one would program oneself and tailor to the needs of one's own subreddit.

There is also the "approved users" feature [1], but it requires manual intervention for every particular user, and I am not sure whether it practically serves any purpose in public subreddits. Reddit claims it does, but when we tested it a few years ago, it did not work correctly. In any case, I do not recommend it for public subreddits with more than 1k daily users.

[1] https://mods.reddithelp.com/hc/en-us/articles/360009164452


That's nothing. There are far more powerful removal tools than automoderator these days, and that was already very dystopian itself. See:

https://www.reddit.com/r/reveddit/comments/sxpk15/fyi_my_tho...


Hmm, doesn't seem very flexible, so it's not of much use to our subreddit. No wonder no mod has bothered to turn it on.


> no mod has bothered to turn it on.

Glad to hear it. Some of this, such as the way user blocks now work, isn't controlled by mods.


I indirectly use your product. I surf Reddit via teddit and libreddit, and both of them have integrated Reveddit into their codebases.

If a comment is removed by Reddit mods/admins, on libreddit you get a small notice that links directly to the Reveddit site, where I can see what the deleted comment looked like.


Looks like libreddit links to another site whose differences I mentioned here [1]. See here for example [2].

[1] https://news.ycombinator.com/item?id=33478793

[2] https://libreddit.spike.codes/r/meirl/comments/ymj861/meirl/...


As I remember it, libreddit used to have reveddit integrated. They must have changed it or I might have confused it with teddit.

But, teddit right now does have reveddit integrated.

https://teddit.pussthecat.org/r/meirl/comments/ymj861/meirl/...

Also, is there no way to use Reveddit without turning off Enhanced Tracking Protection in Firefox? Could the Reddit tracker be proxied from Reveddit's end so that Firefox's tracker blocker does not block it?


> teddit right now does have reveddit integrated.

Oh I see, they link it from the [deleted] author name instead of the [removed] content. That's an odd choice.

> Could the reddit tracker be proxied from reveddit's end

teddit and libreddit may themselves be trackers! I don't see how proxying empowers users. You don't need to be a huge company to be a tracker, and that's the problem I have with this arbitrary disconnect.me list which I mentioned in another comment [1]. It's picking and choosing who you should trust, and that is a dubious security strategy.

We should be building systems that are trustworthy, not relying on lists of domains. Inevitably, such lists are flawed. They are arguably rotten to the core because they're telling you to trust new domains that won't appear on their list (such as your pussthecat.org link) and not to trust anyone they mark as "evil". I don't think any social media site or proxy site deserves blind trust. Caveat emptor.

Plus, if I set up such a proxy I'd just end up hitting Reddit's API limits because requests would then go out from fewer sources. By allowing clients to make the requests, it's the same as users opening the app or website.

I guess these other sites you mention don't get much traffic, otherwise they would be hitting those rate limits. Reveddit also makes more requests than them because they, like Reddit, do not show where auto-removed comments appear. For example, this comment [2] should have a reply. Reveddit can show it [3]. Even if the author and content were not filled in from the user page, Reveddit would still show a marker for that removed comment.

[1] https://news.ycombinator.com/item?id=33478410

[2] https://teddit.pussthecat.org/r/interestingasfuck/comments/w...

[3] https://www.reveddit.com/v/interestingasfuck/comments/wenhq7...


If you browse political/country subreddits (r/india for me), this is the only correct way to browse, because the slightest agreement with a new government policy, or good news regarding India, gets removed by mods pretty quickly.


/r/India is a cesspit of vitriol against anything 'traditionally' Indian or Hindu.


Holy smokes, so many orphaned comments!

Reddit is the absolute worst for preserving history. If you ever google something and open a thread that's a few years old, it's just one deleted user/comment replying to another. Totally useless.


Unfortunately, Reveddit has stopped showing content that was removed by Reddit administration (which is also usually hidden from the user). That has made it much harder to appeal screwups by Reddit itself, because you can't see the content you're being punished over.

Ever since someone started an off-site campaign to false-report bomb me, I've had to screenshot every comment I make, just so I have records when Reddit inevitably screws up and suspends me for a completely innocuous comment again.


> Unfortunately, reveddit has stopped showing content that was removed by reddit administration

Technically, I never showed it. The change is that Reddit changed the way they were doing comment removals, and I declined to update Reveddit to show the body text of those. I did update the site to label them as admin removals [1]. I gave my reasoning for this in several posts recently [2] [3].

> which is also usually hidden from the user-- it's made it much harder to appeal screwups by reddit itself because you can't see the content you're being punished over.

I see that as Reddit's fault, not mine. Their system could easily show you, the logged-in user, your own content with the same sort of indicator that mods see: the red background. They once wrote,

> "Privacy is core to our DNA, our culture, and our values. We are committed to making our policies clear and focused on empowering users to be masters of their identities—and their data." [4]

Hold them to that, not me.

[1] https://old.reddit.com/r/reveddit/comments/w8i11a/good_news_...

[2] https://old.reddit.com/r/FreeSpeech/comments/yabyzu/reveddit...

[3] https://old.reddit.com/r/reveddit/comments/x6332d/reveddit_w...

[4] https://www.reddit.com/r/privacy/comments/ixx6mo/i_just_got_...


My apologies, I didn't intend to imply you were in any way wrong there -- I'd assumed that the material was omitted to keep the service from being blocked. It sounds like that was part of your reasons, but I'm thankful to have learned of more of them.

I know the opacity is ultimately reddit's fault (and I've complained to them about it), but reddit these days seems almost completely unresponsive to small scale public feedback.

Thanks for the citations.

(from your comments)

> Since I regard secrecy from authors as the bigger problem,

I guess part of the problem with the admin removals is that while the removal itself may not be secret (though it's often hidden, e.g. requiring a direct link to a removed thread), the rationale and basis for it always is. It increasingly appears to be automated based on user reports, and if any human is reading the text at all, in many cases they must not be native English speakers. (I've noticed that a lot of the more obviously bad admin removals seem to happen deep in the night, Pacific time.)

To the extent that user-community moderation is often bad and admin moderation grows in frequency in response, I think we can only expect admin moderation to become more incompetent and Kafkaesque: as you note, it fundamentally can't scale, but they can respond (and I think have responded) to the lack of scalability by just doing a worse job of it rather than not taking action.

But regardless, I understand your focus better now. If you ever do come across a way to give posters access to their own admin removed comments without creating risk or distraction for your focus on shadow moderation, I know it would be appreciated by many.


> I know the opacity is ultimately reddit's fault (and I've complained to them about it), but reddit these days seems almost completely unresponsive to small scale public feedback.

Then let's make it big! Can do #ShadowModeration on Twitter or ... ?? Once you figure out how to share this issue, it's an easy victory. Nobody wants their content to be shadow removed, so there's no justifiable reason for the practice. Platforms will say it helps with bot spam, but that's bologna because coders who make bots can code to detect shadow bans and removals. So it's clearly designed to manipulate real people, including moderators who use it and people who work at these social media companies. They think they're creating environments free from toxicity, but what they're really doing is creating toxicity by making it more and more difficult for people to reach across ideological spectrums.

> I guess part of the problem with the admin removals is that while the removal itself may not be secret (though it's often hidden, e.g. requiring a direct link to a removed thread), the rationale and basis for it always is.

I completely agree. It's just something that's out of focus for me right now, and as I wrote, I don't want to compromise what I have so far. Undoubtedly I lose some traffic over it, and I just accept that.

> It increasingly appears to be automated based on user reports, and if any human is reading the text at all, in many cases they must not be native English speakers. (I've noticed that a lot of the more obviously bad admin removals seem to happen deep in the night, Pacific time.)

That's interesting. I'm pretty sure the Philippines is a pretty big hub for moderation because they are low cost and know the language quite well. Plus that was reported in 2014 [1].

> To the extent that user-community moderation is often bad and admin moderation grows in frequency in response, I think we can only expect admin moderation to become more incompetent and Kafkaesque: as you note, it fundamentally can't scale, but they can respond (and I think have responded) to the lack of scalability by just doing a worse job of it rather than not taking action.

Yeah, that's effectively what is going on. They are building tools that remove mass amounts of comments in different ways [2] and now allow users to cut each other out of conversations too with the new user block described in the same link. Just today I came across a post from some 3rd party building a tool to identify hateful groups of users to be shared with subreddits [3]. And the website is all done-up too as if that's an idea that would be popular. It's crazy. I replied on that thread and the author of the tool just completely sidesteps the fact that any actions taken against users on his list will be occurring in secret. Like, that is not his concern because "that tool is just made for spam" which is just such a weak argument.

There is absolutely no concern for how much gets removed as long as users don't know about it. The good thing is the problem then becomes easier to point out. It's a race to do that before more harm is done.

> But regardless, I understand your focus better now. If you ever do come across a way to give posters access to their own admin removed comments without creating risk or distraction for your focus on shadow moderation, I know it would be appreciated by many.

Noted.

[1] https://www.wired.com/2014/10/content-moderation/

[2] https://www.reddit.com/r/reveddit/comments/sxpk15/fyi_my_tho...

[3] https://old.reddit.com/r/TheoryOfReddit/comments/ymeqaz/hate...


On my phone, the link to the CantSayAnything/About/Sticky page goes to a 404.

I thought Kelley Cotter's paper was very interesting - it seems like "gaslighting" is becoming a more common way to call out people or companies who are not technically lying but are definitely communicating something that isn't true. Unfortunately that's a common enough thing now that it's useful to have a simple reference.


> On my phone, the link to the CantSayAnything/About/Sticky page goes to a 404.

Try this,

https://www.reddit.com/r/CantSayAnything/comments/xwvbsy/wri...

> I thought Kelley Cotter's paper was very interesting

Yes she is great. To my knowledge, she's one of the few who has actually questioned and researched the area of shadow moderation. Here's the paper for any interested,

https://kelleycotter.com/wp-content/uploads/2021/11/Shadowba...


Reveddit is where I go when I need to remind myself why I appreciate the Reddit mods.


Do you still display content removed by their authors and not the moderators?


My focus has always been to reveal secretive removals to authors, and Reveddit has never shown author-deleted body text,

https://www.reveddit.com/about/faq/#user-deleted


Thanks! I may have been thinking of removeddit by mistake.


You need to use unddit.com for that


Thank you for doing this.


I'm happy to do it; I've learned a ton about speech, myself, and society along the way. I also enjoy getting feedback, positive or negative, so please shoot me a note with whatever you're thinking, either here, via the address listed in my HN profile, or at https://twitter.com/rhaksw


Any chance you could work with Christian to get this functionality worked into Apollo? I can’t stand to use Reddit any other way, but this would be nice functionality to have.



But then you don't get the slides which sync with the video :)


I think this has been a great tool to use since I was introduced to it. Thanks.


Have you received any legal pressure to shutdown?


Not yet! I don't think there would be any basis for it. The site is all code. Data comes from Pushshift and Reddit, and even Pushshift hasn't had any trouble as far as I know.

On top of that, legal action might draw more attention to the issue. The funny thing about censorship is it ends up helping promote the censored idea; that is, once the censorship is discovered.


I was a mod; this tool is the worst from a mod's perspective. In a sub with 100k-200k people, I would spend all my time telling users why I took action. Instead, I take action when content breaks rules, and it's your responsibility to abide by the rules so as not to have your content deleted. This tool complicates the life of people who give their time for free, and asks them to provide even more time solely to satisfy user curiosity. I don't like it.


> I was a mod; this tool is the worst from a mod's perspective. In a sub with 100k-200k people, I would spend all my time telling users why I took action. Instead, I take action when content breaks rules, and it's your responsibility to abide by the rules so as not to have your content deleted. This tool complicates the life of people who give their time for free, and asks them to provide even more time solely to satisfy user curiosity. I don't like it.

Upvote because this is an important viewpoint being shared that is in the minority in this thread.

I made this tool to challenge the systems that enable the secret removal of content. You wouldn't need to spend an extra second moderating if the system simply indicated to authors that their comments were removed. The burden placed upon you comes from today's social media sites, not Reveddit.

I didn't make the tool to give mods a hard time. I made it so people, both mods and users, can help themselves into better environments. "Better" means spaces where people might have to have difficult conversations that challenge their existing world views, and where they need to sharpen their arguments to get ready for real world conversations when they meet people with whom they disagree.


None of my removed comments on reddit have ever broken any site or sidebar rules. When I've used this tool to ask mods for clarification about removed comments, mods have just made stuff up (example: saying I was engaging in namecalling when I didn't even refer to another person at all... the real reason is the mod disagreed with the position I was advocating for) and sometimes issued bans. As a user, revealing this moderator abuse is more important than helping moderators abuse their powers without anybody noticing.


While this is commonly the case, also look at another effect occurring here, especially in larger or more controversial subs.

That is, someone was reporting a large number of your particular posts. If you go to a moderation queue and there are 1000 backlogged moderation tasks, even at a second each, that's still over 15 minutes of that moderator's life. Chances are they take shortcuts and lean toward banning before reading, because huge parts of the moderation queue are objectively trash content.


Did you ever have a feeling of being Tom Sawyered into doing free work for Reddit? Moderation may provide some feeling of authority and control, which I suppose could be payoff enough, but in the end it's just free work.

I suppose if I created my own little subreddit and curated its contents that would be different, but moderating tens of thousands of users must be just a crappy chore. For you, was it a slow transition from fun curation to work-like chore as the subscribers grew?


Never, because the subject matter in the sub I moderated was very local (a city sub). It never felt like we were enforcing Reddit's rules but our own, and so we never felt like we worked for Reddit but for our community. And that's really where they get ya, 'cause aside from /r/reddit.com, everything has this "this is your space" feeling. That's becoming less and less true with all the shit they're pushing to monetize users/content, though.


Yeah, it's "yours" until you try to siphon people off the site, advertise something the admins don't like, or fail to push the politics of the day from a random part of the USA where Reddit is headquartered.


So the #2 problem is that you're not getting paid.

And the #1 problem is actually much worse, since "you" aren't paid: there is a strong likelihood that "you" 1) are a paid shill suppressing specific content for domestic or foreign bad actors (a government or industrial complex trying to control/suppress narratives), and/or 2) have an ideological bent and are "passionate" (and biased) about certain issues, which is why you're volunteering your time.


Totally agree. There are great mods, and then there are people with an agenda. The reputation mods get on Reddit is always one of selfish, controlling, power-tripping assholes, but for the large majority that's way off. But there are also cases where it's completely justified.


They do it for free...


For "free"...


You're right, and that's an important point. The message that "they do it for free" may be propaganda itself. We don't know that to be true, so it's wrong to assert that any given mod either is not or is being paid, short of evidence indicating the latter. I don't know how you would prove the former, particularly given the existence of digital coins. Bitcoin seems perfect for this along with a host of other things for which secret payment might be useful.

Come to think of it, that may make a pretty good case for making moderation transparent. You can't ever be sure someone isn't being paid to influence the conversation, and therefore users deserve to be able to review what gets removed.


> You can't ever be sure someone isn't being paid to influence the conversation, and therefore users deserve to be able to review what gets removed.

describes HN.


I'm so sorry that this tool is making you more accountable for your actions. It's truly a shame that people in positions of authority over others are held to a higher standard of scrutiny. We should just be blindly trusting them.


Your alternative, where time is spent explaining every action taken in detail, isn't realistic: it doesn't scale, and it doesn't begin to account for the fact that it's volunteer work, especially if the reasons for 100% of the actions taken must be documented and publicly available (and we're not talking about mod abuse here, but actual moderation). But sure, it's a great idea; go ahead and enforce it at scale. Reddit couldn't, Twitter can't, and Facebook sucks at it.


You've got lots of other alternatives: 1) You don't have to explain moderation decisions if you don't want to. This will of course amplify the volume of complaints. 2) You can step down. This "I'm a noble volunteer and should therefore be immune to criticism" attitude doesn't tend to be popular amongst those being moderated. Nobody is forcing you to do this. You may not be getting paid, but are presumably getting some sort of emotional benefit out of doing this job. 3) You can make references to anti-censorship tools a bannable offence, like some of the shittier subreddits do.

Fundamentally, you are getting mad at users making and comparing notes.


Oh no. Being held accountable when you are in a position of power. The travesty.


You try moderating hundreds of thousands of users and caring about every little unique snowflake with an attitude like yours. See which comes first: your burnout or your change of attitude.


> ...with an attitude like yours.

What attitude? I'm giving a slight jab at your complaints. You're a volunteer with power over others; of course you should be scrutinised! That's the _correct_ attitude to have. I'm not a fan of pockets of humanity where autocratic-leaning individuals get to exercise power over others, but if we must have them, then those individuals should be held absolutely accountable.

I don't need that level of power over other people to feel fulfilled in life, so I don't do it.


Nobody moderates a community for free in order to obtain fulfillment from having power over other users. You need to talk with people in positions of power more.

They do get users like you, who will tug at anything they can find, even the most outlandish conspiracy theories, to feel like they're not powerless and don't just have to abide by the common rules.


I'm of the view that users subjected to an outright shadow ban should at least have a civil cause of action available for all the time they unnecessarily wasted writing comments that no one else will ever see.

If that were established, perhaps we'd see this despicable practice go away.

There are a few cases where I can almost see it as justified-- not spam, since it's trivial for spammers to automate detection of a ban-- but a certain kind of obsessive, mentally ill person who will forever shout their abusive and disruptive comments into the void if shadowbanned, yet will just get a new account if they are informed. But that fringe case can't justify the abuse against more reasonable people, particularly in a world where users are frequently actioned against in error.
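
(To show how trivial that automation is, here's a rough sketch in Python. It assumes Reddit's long-observed behavior of returning a 404 for a shadowbanned account's public profile page when viewed logged out; the endpoint is real, but treat the details as approximate rather than as any official API contract.)

    # Rough sketch: automatically checking whether an account is
    # sitewide-shadowbanned, the way a spammer trivially could.
    # Assumes Reddit's observed behavior of serving a 404 for a
    # shadowbanned account's public profile page.
    import requests

    def looks_shadowbanned(username: str) -> bool:
        resp = requests.get(
            f"https://www.reddit.com/user/{username}/about.json",
            # Reddit throttles requests that use a default User-Agent
            headers={"User-Agent": "ban-check-sketch/0.1"},
        )
        return resp.status_code == 404  # visible accounts return 200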


> I'm of the view that users subjected to an outright shadow ban should at least have a civil cause of action available for all the time they unnecessarily wasted writing comments that no one else will ever see.

And for shadow removed comments too? Because that's the majority of it, and slicing and dicing what stays up versus what is secreted away makes it much harder for users to discover that shadow moderation is happening.

I don't think it will work. As mentioned elsewhere [1], the government (as an entity, not a particular political party) would love to be your savior here, and if it gets to pick and choose who wins, it will end up controlling social media. It's not going to be able to handle all of the shadow removals or demotions that occur; heck, shadow moderation has been occurring since the dawn of social media, and society is hardly aware of it even now, 16 years later. So, since the government couldn't handle all those cases, it would have to be selective, and that is just more cronyism.

No, we need citizen-built transparency systems starting ASAP. Nobody will help us better than ourselves [2].
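
For illustration, here's a rough sketch of the kind of check such a citizen-built system can run: fetch a comment anonymously and see whether the public copy has been gutted. This is only an outline of the idea, not Reveddit's actual implementation; it assumes Reddit's public /api/info endpoint and the "[removed]" placeholder it shows logged-out viewers for mod-removed comments.

    # Rough sketch of a transparency check: compare what a logged-out
    # viewer sees against what the author wrote. Assumes Reddit's public
    # /api/info endpoint and its "[removed]" placeholder for mod-removed
    # comments; this is not Reveddit's actual code.
    import requests

    def is_publicly_removed(comment_id: str) -> bool:
        """True if a logged-out viewer can no longer read the comment."""
        resp = requests.get(
            "https://www.reddit.com/api/info.json",
            params={"id": f"t1_{comment_id}"},  # "t1_" marks a comment fullname
            headers={"User-Agent": "transparency-sketch/0.1"},
        )
        resp.raise_for_status()
        children = resp.json()["data"]["children"]
        if not children:
            return True  # not returned at all: treat as removed
        return children[0]["data"]["body"] == "[removed]"

Run that over a user's recent comment IDs and you have the core of a removal monitor.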

> There are a few cases where I can almost see it as justified-- not spam, since it's trivial for spammers to automate detection of a ban-- but a certain kind of obsessive, mentally ill person who will forever shout their abusive and disruptive comments into the void if shadowbanned, yet will just get a new account if they are informed. But that fringe case can't justify the abuse against more reasonable people, particularly in a world where users are frequently actioned against in error.

Yes, exactly. It's a fringe case that we know will come up, and it will be annoying but it won't destroy society as we know it, whereas the systems as built today are doing exactly that. The good part is, it's an easy win. Tell people about it, get people to talk and write about it, and show people who work at social media companies that we don't like or want systems that are driven by shadow moderation.

[1] https://news.ycombinator.com/item?id=33480576

[2] https://youtu.be/aCadmiIfbcI?t=1571


> And for shadow removed comments too?

Yeah, I think that line of thinking really only works well for outright shadowbans, not singular shadow removal. Single removals might have been justified, and the time people spend on further comments might not actually be wasted (because they might not be removed).

That said-- getting rid of outright shadowbans would be a useful step forward and establish a norm against shadow moderation more generally.

Re: government-- I did say civil cause of action for a reason: the argument there would go "you intentionally deceived me by making me think my comments were visible, when they never would be-- and in doing so caused me to waste a lot of resources, so I'm entitled to recover my losses". No new law would be required. And the moderation being correct or not isn't the concern-- just the fundamental dishonesty of shadow banning. (though, ironically, outright spammers might have an easier time making the legal argument because they'd be more able to show monetary damages).

But as you note, the single-comment removals are more common and more insidious, so perhaps it's just a distraction.

I am tremendously grateful for Reveddit, and I think it's made a contribution to bettering the world. But I'm also not sure how far we can get with awareness. Even after all this time, the whole subject can't come up without people constantly repeating the false claim that subreddit mods can't shadow ban users (or their comments)!

But I don't see anything better to do right now-- so education it is!


> And the moderation being correct or not isn't the concern-- just the fundamental dishonesty of shadow banning.

Exactly. The dishonesty is the part that bugs me.

> (though, ironically, outright spammers might have an easier time making the legal argument because they'd be more able to show monetary damages).

Right. For me, even if you could show that everyone had monetary damages, I'm not sure society comes out on top, because then there's also shadow demotion. How do you show that? Being the top item vs. the second item in a list makes a huge difference in terms of visitors.

IMO we need to build systems that hold today's popular social media sites accountable on a regular basis. Accept that shadow moderation is part of today's systems and just start building tools that counter it with information. If we only build them ad hoc once in a while, or somehow stay "on alert" for when the problem inevitably crops up again, I'm not sure that works as well as always keeping the problem somewhat at hand. In the free speech world there's the concept of Mill's "dead dogma":

> However unwillingly a person who has a strong opinion may admit the possibility that his opinion may be false, he ought to be moved by the consideration that however true it may be, if it is not fully, frequently, and fearlessly discussed, it will be held as a dead dogma, not a living truth. [1]

That's from John Stuart Mill's On Liberty.

> I am tremendously grateful for Reveddit, and I think it's made a contribution to bettering the world. But I'm also not sure how far we can get with awareness. Even after all this time, the whole subject can't come up without people constantly repeating the false claim that subreddit mods can't shadow ban users (or their comments)!

Haha, I consider such folks paid to say so. It makes no sense, right? Some of these messages get repeated over and over as if they're the covers on TPS reports.

We're close to getting real publicity about it. I had an hour-long conversation with a journalist a few months ago, but I think they decided not to do the story. Regardless, people can see that transparency is needed, and they know that people are somehow becoming fanatics across all sorts of issues because of social media. They're nervous about showing their ideological opponents what gets removed, so they're not ready to take that step. We just need to help them see it's the right thing to do. Internet forums should be a little bit ugly, like the real world; to the extent they're more pristine, you're just putting off that problem until the real world. We can "make the internet experimental again", or whatever.

> But I don't see anything better to do right now-- so education it is!

Yeah! That's where I'm at. Happy to meet someone of the same mindset. Gotta catch some Zzz's now, thanks for your comments.

[1] https://www.gutenberg.org/files/34901/34901-h/34901-h.htm



