I had thought working as a moderator would be a great way to help grow the subreddit's base, offer more useful real-world interaction points, and basically have a big impact on Portland.
Instead what I've found is that 99% of the time the moderators are dealing with bad actors and that has to be the primary focus to keep the sub from falling apart.
It isn't just dealing with trolls or stepping in to stop out-of-control discussions.
It is actively fighting ban evasion by people who target the sub for disruption and then go after the moderators who get in the way of these attempts.
There is at least one active case number with the Portland Police Department of a person that has both attempted to doxx our moderators and has gone to the home of a moderator and vandalized property repeatedly.
In this example, the person will not stop and creates new accounts every day.
While there are things Reddit could be doing to help (such as improving tools to counter ban evasion), I think this problem is bigger than Reddit: it comes down to a lack of enforcement for digital actions that would qualify as genuine crimes of harassment if translated into the physical realm.
There are roughly two views on moderators. Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening. Ordinary users think that moderators are power-mad bullies who ban and delete as they see fit, with no accountability.
Both of these things can be true at once!
This reminds me a lot of the situation with the police, who are to an extent the moderators of the big room. 90% of a policeman's time is spent dealing with a small number of really nasty people. 90% of a member of the public's interactions with the police involve a policeman exceeding their authority, not being interested in helping with a problem, etc.
If you spend 90% of your time dealing with really awful people, you will get messed up. You will develop an itchy trigger finger. Most of the time - most of your time - that's appropriate. But when you're dealing with someone who's not awful, you will get it wrong.
Maybe that's not true of you. Maybe that's not true in general where you live. It is true in general where I live.
> Both of these things can be true at once!
I've been on both sides of this. Last year, /r/portland began using the word 'criddler' to describe people in various states of meth-addiction and bicycle thievery.
The word was beginning to be used very often to disparage many different types of people. The moderators made a decision to ban it, and the community backlash was supreme. I let the mods have it as well.
After a few months, I came to terms with the label of 'criddler' and realized how it would come up in my mind as a form of negative judgment of a person, which was not healthy for me. I realized their decision was a good one, and I described this turnaround in my nomination thread when running for moderator.
Your description of the situation with the police is apt and I have thought similarly many times. Most people have no idea what day-to-day police work is.
If it was possible to see the challenge of dealing with keeping the peace there would likely be more compassion for people when mistakes are made or there's anger about moderator decisions that affect the community.
Someone famous once said (100 years ago) that when people write to you they want to test their mettle. It is often best not to reply until a week after the letter was sent, by which point the fire in the original message has left the person who sent it.
These instantaneous and unpredictable word bans seem almost perfectly designed for getting the maximum amount of burn out of a given burst of new flame. Is the lone moderator too far removed from the pulse of the crowd to give effective feedback?
The word was humorous to me at one point as well.
However, it was being applied in general to homeless people.
Portland, like many cities, has a serious and growing problem with homelessness.
The word was being used to demonize and unfairly group drug-addicted folks, people with mental health issues, and those who are not those things but still homeless.
I do not know if there was a need to ban it outright, however I think there were reasons to want to.
To some extent, sub-wide decisions like this are taste-profile questions and with taste you will not get agreement.
For example, Craig Newmark had to walk a fine line with adult services on Craigslist. Many craigslist users wanted those forums. Many did not. Ultimately, I think decisions like this give character to the community.
So with /r/portland, banning 'criddler' wasn't just about banning a word; it was about presenting a posture of the sub toward homeless people in general.
One way to solve this is to allow the sub to elect moderators, which is what /r/portland did. In this case, I did say that I was for the ban on the word and was still elected by the community that was angered by the ban.
In my area, people complain that there are homeless people 'now'. When I was a kid, fishermen would come home from Alaska with thousands of dollars of cash in their pockets, buy a tent and a sleeping bag, and just camp right next to the bar. Lots of couch surfing. People had bars on their windows.
These PNW cities have always had a lot of drugs and poverty and depression. Cities built on logging, fishing, lumber mills, paper plants, shipping ports and airplane factory work.
The tech-hub vibe and the people who moved here from the suburbs see this as new. And in a way it's been exacerbated by skyrocketing rent, but part of what we're seeing is a lack of places where these people could hide in plain sight.
They've always been here. This is a subculture that has always existed.
I'm 4th generation Southern Oregon, and when I was a kid (20 years ago) I rarely saw homeless people in Medford; they were either much more reclusive or didn't exist. Ashland is its own case, however, and we would play the game "Hiker or Homeless."
There's a bike path that runs from Ashland all the way to... well, at least Central Point, or maybe Gold Hill? It's about 25 miles. The Medford sections are now overflowing with tents and homeless camps, enough to make it dangerous for families and children. This was unthinkable twenty years ago.
That's what I'm getting at, I should probably do research on my own time. Thanks for doing the good & dirty work of moderating a subreddit.
Don't drag the debate in here!
You might as well wave a red flag in front of a bull! Or pour gasoline on a tire fire!
(Meant humorously with no disrespect.)
From my experience in the past, moderating doesn't scale at all. So you basically have to pre-empt issues that tend to lead towards requiring mod actions. An example of this would be the auto-mod stuff on Reddit (though I don't have any experience with the specifics here, just assumptions).
Another approach is to root out the "trigger" words that tend to escalate arguments quickly, and are generally a sign that a discussion has just become a battle of insults instead of a useful discussion.
Again, because it's not just one situation between two users; it's potentially dozens of arguments across a site between dozens of users. You can't wait until after things get out of control, because there isn't enough time in the day to clean stuff up in that manner. And people want clear rules for what they can and can't say, and rules like "Don't be disrespectful" aren't usually super helpful, especially against people who are more malicious in their interactions.
So, calling out specific words that are banned allows for a simple, straightforward rule that can be pointed to without leaving lots of wiggle room for arguments (which again, you don't have time to have with every user who complains).
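To illustrate (this is just a sketch in Python with made-up placeholder words, not Reddit's actual AutoModerator syntax), a banned-word rule is essentially a mechanical regex pre-filter that runs before any human judgment is needed:

    import re

    # Hypothetical banned-phrase list; a real one would be community-specific.
    BANNED_PHRASES = [r"\bcriddler\b", r"\bsome_trigger_word\b"]
    BANNED_RE = re.compile("|".join(BANNED_PHRASES), re.IGNORECASE)

    def should_auto_remove(comment_body: str) -> bool:
        """True if the comment contains any phrase on the list.

        The rule is simple and pointable-to: no arguments about intent,
        just 'this word is on the list'."""
        return BANNED_RE.search(comment_body) is not None

That's the appeal: the rule is mechanical, so there's nothing to relitigate with each individual user.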
So you try to be alert for people who use certain phrases. Racism, xenophobia, conspiracy theories, paranoia, extreme pettiness, severely abrasive attitude, choosing beggars, and so on.
And flame wars, and people jumping on others. And of course if there is a group of people showing such unwanted behavior it takes no time for the powder keg to go off.
I think having simple but somewhat ambiguous rules is okay if there are very broad but exact rules too. (E.g. Rule 1: don't be a dick; Rule 2: no harassment, no xenophobia, no posting of personal information, etc.)
Moderators and the community should be proactive. (I find that the best tool is reports, because that signals what people find problematic. Sometimes they are just annoyed that some newcomer posted that again. That's okay, but most of the time users report spammers, crazy serious racists with too much time, and the occasional lost redditors' posts.)
I think I’ve spent enough time around people on meth and bicycles to probably understand it - but I just don’t get it yet.
The reason it's contentious is that one person uses it, someone else objects to it, people take sides along left-vs-right lines, and the thread devolves into a shouting match.
It seems it’s pretty well established though, from 2010:
but, "criddler" is much older on the sub than "last year", it's been a mainstay for at least 10 years. and Portland doesn't have a police department, they have a police bureau (sorry, that one's a pet peeve).
The group of mods running the show now seem to me good people and having seen how they operate I'd be impressed if other city subs of this size are doing a far better job.
I realized after writing the above that it was likely an older word than that, but it hadn't really been a growing memey word in the sub before last year that I had noticed. Maybe the year before? Time flies.
And thanks for the note on the Portland Police Bureau. Too late to update my post, but I'll try to remember that one.
Since the concept of a city subreddit is so obviously pointless, the only people who stay are the ones who like the conflict, exacerbating the problem; or in worse cases, like the Canada subreddit, there is a hostile takeover by one side. If you want to talk to people who happen to live nearby, I think you should do so on a different website; reddit doesn't work at all for that use case.
EDIT: just noticed you mentioned "liberal city", so if it's political stances you're referring to then yes a more conservative-leaning person may not feel welcome on r/portland
funnily enough, I'm referring to the sub often having a more conservative audience, much of it coming in from the suburbs (Vancouver being a big one there).
Note how your conclusion doesn't actually follow: just because something isn't healthy for you, doesn't make it unhealthy for everyone else.
Trying to police too much is likely why moderators get such a backlash when they employ heavy-handed tactics like outright bans.
Moderators also very likely operate in a like-minded bubble: the sort of people who would volunteer to be moderators are far more likely to have more in common with each other, than with the average community member. As such, mods will likely always get backlash.
For instance, I'd bet it's far more likely that your fellow mods debate to what extent language should be policed, and there are probably almost no mods questioning whether language should be policed at all.
Some kind of moderation system with strictly limited mod terms and random promotions to mod status would likely improve relations, but achieving the right balance would be challenging.
This logic doesn't follow. If you don't police enough, then users have an expectation that anything goes. When you do decide to moderate behavior, you experience a backlash because you've changed stances.
If you moderate too much, then you get accused of being heavy-handed, fascistic or trampling over their free speech rights. When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.
Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames. Trust me, there are people that will pretend to be in good faith for months, years until it gets them a position of power just so they can burn it all down.
You experience a backlash from some users, sure. If most users consider it a positive change, then no big deal.
> When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.
This seems to be assuming ban-like tactics. They're a poor option and I don't favour them.
> Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames.
You're making a lot of assumptions:
1. There's an innate relationship between the cardinality of the mod set, the probability that it contains a bad actor, and the probability that actor can cause appreciable disruption. The larger the mod set, the less likely a bad actor can do anything meaningful. You can likely make this probability arbitrarily small if that's a likely threat model (see the sketch after this list). Random appointments work quite well in various consensus-driven systems.
2. Mods shouldn't have absolute power. There is simply no way that a single mod should be able to destroy a whole community, any more than a bad judge in the legal system, or a single bad politician could destroy a city, county or state.
3. A transparent appeals process is always needed, in which other mods and community members review mod decisions. It took us millennia to develop our robust legal systems. Technology can eliminate some of the bureaucratic inefficiencies of the legal system in this setting, but it still contains robust patterns that should be copied.
4. You're assuming an open signup process which is vulnerable to DoS/brigading tactics. Maybe there's a way to allow open signups too (with reputation systems), but it's not strictly necessary.
5. Various reputation systems can be overlaid on this, and this interacts well with a transparent judgments/appeals process, i.e. someone with a long record of violating conditions and losing appeals would be less likely to be given mod power (but never 0%).
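To make point 1 concrete, here's a minimal sketch (my own illustration, assuming mods are drawn independently at random from a large pool, and that per point 2 a lone bad mod can't do appreciable damage, so capture requires a majority):

    from math import comb

    def p_bad_majority(panel_size: int, bad_fraction: float) -> float:
        """Probability that a majority of a randomly drawn mod panel are
        bad actors, modeled as a binomial tail (independent draws)."""
        k_min = panel_size // 2 + 1
        return sum(
            comb(panel_size, k)
            * bad_fraction**k
            * (1 - bad_fraction) ** (panel_size - k)
            for k in range(k_min, panel_size + 1)
        )

    # With 5% bad actors in the pool, a 3-mod panel is captured ~0.7% of
    # the time; a 9-mod panel only ~0.003% of the time.
    print(p_bad_majority(3, 0.05), p_bad_majority(9, 0.05))

Growing the panel shrinks the capture probability geometrically, which is the "arbitrarily small" claim above.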
In general, today's moderation systems are intentionally vulnerable to a number of problems, because sites optimize for growing a user base rather than fostering community, because that's how they raise money.
Consider something akin to Stack Overflow, which randomly shows you messages to review or triage. Every now and again you get 5 messages or mod decisions to review, and you vote your approval/disapproval. This narrows the gap between traditional mod status and user status, where true mods would be relegated to reviewing illegal content that places the whole community in jeopardy.
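A rough sketch of that mechanic, under the assumption that decisions stand or fall once a quorum of randomly chosen reviewers has voted (the batch size and thresholds are made up):

    import random

    def assign_review_batch(pending: list, batch_size: int = 5) -> list:
        """Hand a user a random handful of queued mod decisions to review,
        so no single reviewer sees, or controls, the whole queue."""
        return random.sample(pending, min(batch_size, len(pending)))

    def verdict(approvals: int, rejections: int, quorum: int = 10) -> str:
        """A decision is upheld or overturned only after enough random
        reviewers have weighed in."""
        if approvals + rejections < quorum:
            return "pending"
        return "upheld" if approvals > rejections else "overturned"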
Of course, there might also be considerations for avoiding the tyranny of the majority, but my point is only that the space of possible moderation strategies is considerably wider than most seem to think.
Have you acted as a moderator before? What are some communities which you believe have the idealized form of moderation? Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.
Forum administration already is a limited government, typically authoritarian in current incarnations. Mods are the police, judges and juries. Works fine if you have the resources and mods are fair, and maybe that's typical for minor infractions, but the conflict of interest is clear.
Authoritarian moderation doesn't scale, though, and you disenfranchise a lot of people with every mistake, particularly since a) there's rarely a transparent appeals process, and b) people don't typically like owning their mistakes. Doubly so when "it's my platform, so I can do what I want with it". Maybe that's not something you care about, but given the increasing importance of social media to democratic government, it's a problem that will likely worsen.
> And I would hope that as reality has proven, government is very easily gamed by people seeking power.
Government is a system of rules for governing people's interactions. A moderation system is a system of rules for governing people's interactions on a specific site. You can't speak of these things as if they're that different. Either a system of rules is vulnerable to exploits, or it's not.
> Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.
I mentioned Stack Overflow specifically for the unintrusive and random review process and nothing else. SO doesn't feature any of the other ideas I listed.
Finally, I have no "idealized form of moderation" in mind, I have knowledge of where existing systems fail, and how other systems have already addressed very similar problems. Designing a secure and scalable moderation system is a big task, so if you want to hire me to research and build such a system, then I will be happy to address all of your questions in as much detail as you like.
You can also split moderation along another axis, dividing on whether moderators are curators or janitors.
The janitorial view would be that moderators should be generally hands-off, acting only in clear-cut cases of abuse or spam. The community does or should mostly run itself through social norms, and heavy-handed moderation is unnecessarily and unfairly restrictive.
The curation view is that moderators serve an active role in building the community, and they should generally act strongly to define and preserve community norms. The community is the moderators' space first, and a free space second if at all.
As a moderator on another forum (not reddit), I disagree. Many of the moderation actions I take are visible to ordinary users, and that's on purpose. For community norms to be maintained, they have to be visibly enforced: enforcement is not just for the particular bad actor, but also for the ordinary users who genuinely want to respect community norms, so they know what the boundaries are (and therefore know how to respect them), and can see that people who violate those boundaries are dealt with (so they have confidence that the norms are meaningful).
> Ordinary users think that moderators are power-mad bullies who ban and delete as they see fit, with no accountability.
Again, I disagree. If moderation actions are often visible, as above, ordinary users who genuinely want to respect community norms can see what is actually being enforced, and over time, if the moderators are doing a reasonable job, ordinary users will see that the pattern of enforcement reasonably matches the stated norms that are supposed to be enforced. Most ordinary users are reasonable and don't expect perfection, but they do expect consistency and reasonable judgment.
The biggest problem I see facing good moderation is that moderators can't be everywhere at once. One way to address that is to give ordinary users a way to report problematic posts, to bring them to the attention of the moderators. That also gives ordinary users another way to see how moderation is being done, because they can see what is or is not done in response to their reports.
There will always be some people who are never satisfied, and who will find something to complain about no matter what moderators do. But I don't think most ordinary users fall into that category.
Also, about "no accountability": ultimately, as a moderator, I'm accountable to the owner of the site. Similarly, the HN mods are ultimately accountable to the owners of HN. So the corresponding question for reddit would be, who owns reddit?
What complicates matters even more is that the interaction usually involves fear at least on one side. In a peaceful society, a regular citizen doesn't interact with police officers day to day. If you capture the attention of an officer, it usually means either something bad happened to you, or you're suspected of doing something wrong - both of which put you in a "fight or flight" state. I suspect that the most common interaction between a westerner and a police officer involves said citizen breaking traffic rules, which is a pretty antagonistic situation from the start.
It's a job that attracts that sort of person, people who just want things to function smoothly will get fed up with it quickly, the only people who stay at it will be the ones who want to mold the community to their vision (whether it wants to be molded or not).
I see the logic in your statement, but it's an imperfect model.
Moderators operate purely on the Judge Dredd model.
But then, in reality, the police operate in a 90% classical, 10% Dredd sort of way (for values of 90% and 10% that vary by location). They can mete out small punishments without going to court, oversight is not very effective, etc.
Or, as often happens, they're dead. I don't envy members of a profession that have to get every life-and-death decision right in real time on a daily basis.
It'd be interesting to see real numbers, but I suspect officers on the beat get the short end of that stick.
I try to remember this when interacting with law enforcement. If you have to call the police, you're having a bad day. If you're a police officer, your job is dealing with everyone's bad day.
Anyways, when our main mod got approached on the street and asked if he was /u/nickname on reddit and he replied yes, he got stabbed. That was the day I quit.
It’s not just geo subs that have this problem. A friend of mine who mods a number of comedy subs (of all things) showed me his mod mail that appeared to indicate systematic brigading from far right discord servers.
They post extremist content in a bid to get the sub banned, and if that fails, they escalate harassment campaigns against the moderators.
Bad actors can harass a community as a whole (and they do) but it's much harder for them to target specific moderators. The flipside is that moderators have to hold each other accountable for impartial enforcement of the rules.
I totally agree that lack of response to ban-evasion has been a problem. Even Wikipedia does better than Reddit at managing user bans.
The amount of damage that bad actors can do on reddit is disappointing. It also doesn't seem to be getting any better. In the 9 years I have been modding the same problems are still present.
The poweruser situation is also frustrating. The reddit moderation system and the ranking of moderation power by seniority really encourage cabals. It has been tough to keep some of the subs I moderate from being taken over by power users. You invite one mod to help with spam or problem users and all of a sudden you have 5 more mods, all of whom moderate dozens of subs. Slowly the power users move toward the top mod position, and they never bring in fresh mods, only other power users. They will also try to remove any existing top mods to entrench their control. The system is killing the democratic nature of reddit by concentrating moderation in a narrow group. Given Reddit's position that moderators effectively own the community, it makes it very difficult to resist this kind of takeover. My advice: never invite anybody who is already modding more than 100K users in other subs to be a mod on your sub.
I'd be interested to know if actual forum moderators think that we could solve the bad actor problem with algorithms...
I only use Hacker News and Reddit. And I always manage to get banned every few months, so I guess I'm a "bad actor".
Usually I start off OK. I can rack up 1000 points on HN or 20k karma on Reddit quickly, because some of the discussions are interesting and non-inflammatory and I have interesting things to say.
But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.
It is hard to detect because I think anyone has potential to become a bad actor in the eyes of a mod. There is no such thing as an unbiased mod. They will rub some people the wrong way with their comments or actions.
Some personality types are just not compatible with what mods want to see in their well-tended gardens. I've never stalked or hounded anyone, and I've disagreed with dang's assessment of my posts once or twice (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him), but this appears to be a problem techies cannot solve. Why do I say this? Because it's been going on since the '80s with Usenet. Almost 40 years of trolls, and tech hasn't solved it, but it has created some systems that have worked to keep the weeds out of the gardens. But gardens still need weeding.
Kurt Tucholsky gave great advice on how to write a letter to a government agency that is applicable to all potentially heated discussions:
* write letter.
* put letter in drawer.
* wait three days. don't look at letter in drawer.
* write a new letter and send that one.
I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.
At a certain scale, I think threaded conversations are incredibly difficult to follow without a voting system to sort it out, and once you introduce a voting system you end up with voting system problems as you mentioned.
I honestly think that flat forums are more fit for purpose. Multiple concurrent conversations end up being a bit messy, but there was at least a reasonable chance that someone would reply to any given post, since everybody in the thread was on the same page.
Heck, you can even gamify engagement with a flat forum - one of those that I still frequent allows you to "react" to any given post. It doesn't actually do anything except have a number beside the post tick up, but people still use it.
Isn't that a standard subreddit setting? I don't think that actually affects the problem you were describing. I think it's only meant to keep the vote tallies from influencing user behavior (e.g. bemoaning how many up/down votes some comment got, being extra motivated to karmawhore).
In my experience, quick-feedback scores seem to have a negative influence on people's behavior and emotional experience. IIRC, HN used to show comment scores, but stopped some years ago. I personally try to disable such "features" as much as possible.
Reddit's default "best" sorting algorithm is designed to mitigate that effect. It does a good job of not biasing old comments in terms of sort order, but it doesn't help the related problem that older comments tend to accumulate the most replies, so newer stuff can still get drowned out in terms of quantity of other content.
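For the curious: as I understand it, "best" ranks comments by the lower bound of the Wilson score confidence interval on the upvote ratio, so a young comment with few votes isn't treated as if its observed ratio were its true ratio. A minimal version (a paraphrase of the standard formula, not Reddit's actual code):

    from math import sqrt

    def wilson_lower_bound(ups: int, downs: int, z: float = 1.96) -> float:
        """Lower bound of the Wilson score interval for the true upvote
        ratio; small samples get pulled down until more votes arrive."""
        n = ups + downs
        if n == 0:
            return 0.0
        p = ups / n
        return (p + z * z / (2 * n)
                - z * sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

    # A 5-up/0-down comment scores ~0.57, while 95-up/5-down scores ~0.89,
    # so the better-sampled comment sorts higher despite a lower raw ratio.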
Instead of getting warned or banned, a user is placed in a timeout box where all of their posts get a three-day lag and remain editable, and when the user logs in, they must re-read their posts before proceeding to the site. Kind of an 'in-your-face' reminder not to be a troll.
It wouldn't stop the insane trolls, but maybe it would give the borderline trolls time to think.
If anyone else is interested, here it is:
Some interesting stuff in there, though I would love to read more about the behind-the-scenes tools they use to search and keep track of everything.
It's strange to me that, especially on a technology forum, discussing the negatives or pitfalls of certain mechanisms gets suppressed and censored - just like with politics, since politics is inherently intertwined with everything, including non-action.
I'm on another forum that opted for an up/down vote system and showed counts per-post much like news.yc used to. It created so much drama and anxiety that people were openly antagonistic towards each other over what amounted to fake-internet-points.
The solution the admins came up with over there was "keep the counts shown publicly" but also "make public the specific votes that were up/down on any given post" (with an anonymization of all historical downvotes before the policy change, but not afterwards). Within days, the community adapted and it was a huge net benefit, IMO.
As someone who thinks of himself as a good actor, I will reflect on posts that attract downvotes and try to figure out if I should have been more constructive in a given post. (I don't "care" about the score per se, but I do care about the community I'm part of, and if people are telling me that I'm being an asshole in a given post, I should reflect on that, decide if I agree with them, and if I do, change next time. (Often, I think my content was fine and someone just disagreed and used a downvote to express that. That's not how I use downvotes, but if they do, so be it...))
I had a funny incident where I misunderstood a comment, thinking that "all your answers are negative" was very true, because I often disagree and try to be contrarian. SE has a policy that discourages discussion, so if users find it uncomfortable having their views challenged when they really wanted an easy answer, however simplistic, that may be legitimate.
The situation is different when discussion is the aim of the game. And it's a bit disingenuous to say that SE doesn't have "discussion"; rather, they avoid controversy, heated arguments, and open-ended debate.
Anyhow, regarding your comment that downvoting is terrible for civil discourse: I agree inasmuch as it's often hard to tell what a downvote signals, as it might lump many different opinions into one false agreement. Say five different opinions compete, but only one takes on all of the other answers; it will accrue downvotes for the attacks, whereas the others get basically ignored by competitors as irrelevant but still yield feedback, so the vote is really one of popularity, as opposed to ...
Really says something about diplomacy. And I hate it so much.
I think a lot of that wouldn't make up such a giant part of the site if there were downvotes on stories.
He would say probably something along these lines: "The chief danger to freedom of thought and speech at this moment is not the direct interference of … any official body. If publishers and editors exert themselves to keep certain topics out of print, it is not because they are frightened of prosecution but because they are frightened of public opinion."
Your "click to read" today is the "buried on page 12 in 6pt font" of the past. Same train of thought.
* I have accounts not just here, but also at places like Lobsters and Something Awful. In those places, because accounts are rare and can be banned so easily, discourse stays much more civil than here or on Reddit.
* As a former community moderator, I don't respect moderation actions on sites where anonymous signup is allowed. You asked for hoi polloi to wander in off the street and give their opinions; you can't then wonder why discourse is trash. Here, it's even worse; the moderators are paid for their work, which lends a clear bias to every moderation action. Similar happenings on Reddit led directly to user protests and revolts, and it's amazing that the community tolerates paid moderation here.
The idea of the well-tended garden is a potent one. I have had to tolerate obviously toxic but helpful people before and it is always irritating to not ban them, despite knowing that they are good for the garden.
We don't put barriers to signup because we want it to be easy for authors, experts, and people with firsthand knowledge of a situation to step into a thread. Those are some of the best comments HN receives. If you put up barriers to keep out hoi polloi, you end up keeping out the likes of Alan Kay and Peter Norvig too, and plenty of lesser known people who have made first-rate contributions.
Besides that, there are legitimate cases when throwaway accounts are needed in order for a person to post on a topic, often when they have first-hand knowledge of a situation as well. How do you allow that while keeping out trash?
Obviously, if there were a way to allow the above good stuff while keeping out trolls, toxic comments, etc., that'd be grand. But as long as there's a tradeoff, I'd rather have the long tail at both ends—I think the forum would be more mediocre and stale without it.
p.s. I'm puzzled by your comment about paid moderation. It seems to me that unpaid moderation would be more likely to be biased, since people are going to extract compensation for the work in some form or other. If it isn't money, it's probably going to be power or an ideological or personal agenda, or something else that manifests as bias. In any case I'd be curious to hear what sort of bias you think is showing up in mod actions on HN.
I understand where you are coming from here. I struggle with this. I think there is a legit theory for it, usually given in the context of how to reconcile shitty behavior of geniuses (Picasso comes to mind: legendary artist, shitty human.)
Even if toxic people have something good to say once in a while, do the ends justify the means if they stomp all over the roses in the process?
> You asked for hoi polloi to wander in off the street
The garden analogy is potent because where I live there is a huge rose garden that anyone can wander in off the street and visit. Some people come in and do stamp on the roses. And it sucks for everyone else. Which is why I can understand the desire to keep those people out.
However, shouldn't the gardeners KNOW that there are and always will be shitty humans?
I'm truly ambivalent on this one: I want to participate, but I lack impulse control, so I'm excluded. That's not fair. And if I was tending a garden, I'd want to keep the "me"s out.
Yes, it is, because the problem is not the garden, it's you. You want to participate, but you don't have a basic skill (impulse control) that is required for participation. It's like saying you want to be a concert pianist, but you don't know how to play the piano, so you're excluded and that's not fair.
I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity), if the latter could be controlled it wouldn't be impulsive.
And I admit that is a big gray area. There's a continuum of toxicity online, and there are going to be some moderation rules that are subjective.
Unlike a pianist, I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people. A visitor can't get better at not being blind. Of course, the analogy breaks down because blind people aren't adding noncritical discourse (aka what one mod may consider "flamebait"), but now we are back to subjectivity and affordance as to what is noncritical. We clearly know how to make the web accessible to blind people, but we don't have a universally clear way to make discourse available to people who sometimes suck at it.
However, I can create as many accounts as I want, so I got that going for me.
First, we're not talking about a binary distinction; things aren't either "can control" or "can't control". It's a continuum.
Second, if it's really true that you can't control your impulsive behavior, that still doesn't change the fact that that behavior will make it virtually impossible for other people to deal with you in certain contexts. It's still up to you to recognize the impact that your behavior has on others, and to make choices about what you can realistically do or not do--or about how much work you are willing to do or how much risk you are willing to take to be able to participate in certain activities (for example, if it turned out there was a drug that enabled you to control your impulsive behavior, would you take it in order to enable you to do something you wanted to do?).
> I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people.
Ok, so what "alternate or semantic constructs" could the programmers of HN, for example, put into their code so it won't exclude people who can't control their impulsive behavior?
> we don't have a universally clear way to make discourse available to people who sometimes suck at it.
It's not that we don't have a "universally clear way" to do this. We don't have a way at all. "Sucking at discourse" is simply not something we know how to accommodate for. The only way we know of to deal with it is for the person who sucks at discourse to learn how to not suck at it.
Perhaps at some point we'll have an AI or something similar that can mediate such discussions so all parties can participate. But we don't have anything now.
> so you are excluded
from what, playing the piano? Do you maybe see a connection here to why somebody might not know "how to play the piano"?
Or in other words: A garden without "you" is not really a garden, except in theory, if the proverbial tree makes a sound when nobody can hear it fall. That's a slippery slope argument.
Many people may lack impulse control, but preemptive judgement can't weed them all out. That's one reason why it's "not fair". It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same. It is, however, futile to just assume that life is not fair, because then "you" have already lost.
If entry is left to the discretion of a gatekeeper, it is not an open garden anymore, open to the public. At least not if the submission requirements are arbitrary to an uncertain degree. Maybe it's the wrong approach to assume that internet discussion is not important, with impulse control therefore let down too easily. But then again, the impulse to post or visit at all might be the problem to begin with, as in this post.
Really, who's aspiring to become a concert pianist in this day and age? That's a weak rhyme, unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument.
I didn't; the person I responded to did, by using the word "I". They were specifically talking about themselves.
> from what, playing the piano?
From being a concert pianist. Read what I actually wrote.
> It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same.
My statement that impulse control is a basic requirement for participation applies just as much to moderators as to any other participants.
Who gets to decide what the forum rules and norms are is whoever owns the forum. That's as fair as it gets.
There are some forums where lack of impulse control isn't much of a problem, because nobody else on that forum has it either. So strictly speaking, I should have restricted my comments to forums where that is not the case. I don't think that makes much difference in practice for this discussion, since as far as I can tell the forums where lack of impulse control is the norm don't have moderation problems since they don't have moderation at all.
> who's aspiring to become a concert pianist in this day and age?
Googling "how to become a concert pianist" gets plenty of hits, so it looks like plenty of people are trying to help aspiring concert pianists. Perhaps they're all speaking to an audience of zero, but I doubt it.
> unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument
You're going way off into left field here.
There are communities like this; Something Awful is the first which comes to mind. These communities deliberately acknowledge that money is required to fund community spaces, and use the money to improve the space.
There are also extensions to the analogy. A local park has a bulletin board. Postings to this board are generally made by community consent; anything that any community member feels strongly enough about can be removed immediately. This is also how postings on telephone poles work. Sometimes a community will lock up their bulletin board after a wave of abusive listings. This is analogous to primitive message board moderation, as seen here on HN.
Are we here to advertise to each other, like on a bulletin board? Are we here to produce a great knowledge base, like in a garden? What should the shape of conversation be?
Speaking generally, without real-world consequences for violating community TOS on a service there are no teeth in tracking bad actors, banning them, etc.
For a long time, I loathed Facebook's real-name policy. However, to some extent I suspect the amount of identity validation and attribution of comments to actions does have a limiting impact on casual trolling and harassment on that service.
It seems to be trivial for bad actors to hack/farm unused accounts and impersonate people almost at will. Meanwhile nation state-level and/or corporate bad actors have the resources to bypass or subvert the id validation and create fake identities and accounts at industrial scale.
I don't think real ID is a solution. Not even if it's linked to some kind of robust physical key - which is obviously going to have privacy implications anyway.
AI or automated searching for troll-like activity, most likely followed by manual oversight, is more likely to be successful.
But there's the lingering question of whether FB is really interested in pursuing this with any effectiveness or enthusiasm. Given FB's resources and its lack of success to date, the answer seems to be "no."
Just in the last couple of days of me commenting on FB I've been called a "retard" (sorry for the word, but that's what the lady used to describe me) and a "troll" (presumably Russian), this latter description accompanied by said commenter saying how I only write down dumb stuff which shows that I don't think before writing (but not saying where and especially why he thought I was wrong in my statements).
Compared to all that, commenting almost anonymously on HN seems like a breath of fresh air. Of course, in the many years I have been commenting here there have been ups and downs (a couple of very low points for me were the reactions after the Boston marathon and the endless banana references after the Fukushima disaster), but other than that, when one is told that he/she is wrong, the person saying it usually comes with his/her own reasons, which helps move the conversation forward.
There might be some interesting data here supporting real-name policies. I don't say anything on LinkedIn because I'm scared anything I say can and will be used against me when job hunting.
Also: thanks for modding /r/portland. I love PDX. Just wish it was sunnier: it would be the perfect city.
You can still have throwaway accounts for commenting on things and the overall community will be increasingly more civil.
I surfed HN for years before I decided to make an account, and even now I don't really know anyone else using it in my circle. I guess I might have found someone if I asked around, but it would definitely raise the bar.
Let us read our Shirky:
A well-tended community is constrained by things like Dunbar's number and signal-to-noise ratios.
Soft forking is a common response - IMO reddit is the closest social platform to "getting it right", although it should do a better job of pushing low-value content (/r/politics, etc) down into the "minor leagues" of subreddits.
> You can still have throwaway accounts
How can both of these things be simultaneously true?
Done. The inviter doesn't know the username of the invitee. It's only visible to the moderation team who invited whom.
If the throwaway account is banned, the original link-giver would lose their good standing. (sorry, I should have highlighted this to the original reply)
We can argue about whether the assertion is actually true, but even the perception that it might be true will make people reluctant to give out an invitation to someone wanting to create a throwaway account.
If I had my own named account in good standing, I suppose I might be willing to use it to create a throwaway account for myself, provided that I was careful to only use that throwaway for... I don't know... "lightly" controversial content that is only likely to be downvoted rather than abusive content that is likely to be banned. (Not that I would ever actually create a throwaway in order to be abusive! Just trying to think like a troll).
Actually, I guess that's what we wanted to encourage anyway, right? Controversial content should be fine; abusive content is not. Maybe this would work after all...
Case 2: Your assertion is incorrect. There is some other reason for making a throwaway (like talking about a former employer, for example) and the system works as intended. The throwaway doesn't get banned and nobody gets their account demoted.
Also these demotions could have a time element to them, where you can't invite someone new for, say, a year.
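A toy model of the whole mechanic being discussed (the names, penalty, and lockout period are all made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Account:
        name: str
        inviter: "Account | None" = None
        standing: int = 100              # good standing, at risk for invitees
        invite_locked_until_day: int = 0

    def ban(account: Account, today: int, penalty: int = 25,
            lockout_days: int = 365) -> None:
        """Ban an account; its inviter loses standing and can't invite
        anyone new for a while (the time element mentioned above)."""
        if account.inviter is not None:
            account.inviter.standing -= penalty
            account.inviter.invite_locked_until_day = today + lockout_days

So a throwaway used for "lightly controversial" posting costs its sponsor nothing, while one that earns a ban burns the sponsor's standing and invite privileges, which is exactly the incentive gradient described above.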
The upvote/downvote system is designed to reinforce echo chambers and create a chilling effect for your “real” opinion if you know it doesn’t agree with the hive.
That system is already trying to solve the problem and instead creating a different one.
All you can do with an invite system is to make sure the echo chamber is even more structured. That “those people” aren’t even allowed to join. Hello country club.
See regular posts on this very forum about algorithms running amok, banning people without clear reasons, and accounts only becoming reinstated by being famous enough to get enough traction to get a human to review your false positive.
That sense of powerlessness doesn’t mitigate things, it sometimes escalates them.
The shit-peddlers outside the building are constantly convincing pool-goers to join a shit-peddling pyramid scheme so that they bring boxes of shit in to the pool.
And besides that, some people just want to trash the pool for their own enjoyment.
At least as far as websites or individual platforms go, this is exactly what accounts, moderation, and "banning" already attempt to do. We've seen how difficult and expensive this is to scale when your site gets as big as Facebook or Reddit. I can't fathom a sufficiently "benevolent dictator" that I would trust to act as a gatekeeper for the entire Internet at large.
one thing reddit could be doing to help is to stop relying on community moderators for so much. volunteer community moderators shouldn't be dealing with people abusing the reddit platform, that should be the job of reddit's staff. Instead of just building tools, reddit should be actively involved in the enforcement of platform rules to handle these cases, and leave the community moderators to focus on creating and maintaining their communities - ensuring content fits the theme of the subreddit, people are communicating with each other in a tone that fits the intended tone of the subreddit, etc.
Reddit needs people like gallowboob who are willing to do the drudgery of sifting through all the platform abusers, but from an end-user perspective it's easy to see somebody moderating 80% of the big subreddits as a problem, because it's not clear whether a moderator is actually influencing the community or just sifting through obvious abuse.
Free speech is hard. Really hard. Allowing communities to express themselves in person (aka pre-internet) was much easier, because there was a cost to appear in person (time, society's perceptions, etc).
With the internet, truly anonymous speech has flourished, and much of it really is a waste of time or even damaging.
I hope as a society that we will be able to figure out this conundrum without eliminating free speech. I think that just like credit cards accept some bad debts, we have to be able to accept some bad actors in speech - the trick is limiting it without killing free speech.
To me, communities like reddit, HN, FB, twitter, etc are all huge experiments in free speech and how to manage that problem. Hopefully it turns out right - I don't want the future to look like East Germany.
Honestly, this is something Metafilter did pretty well: the five dollar account fee makes a ton of abuse tactics more costly to deploy, while simultaneously funding efforts to combat it.
It would go a long ways I think to at least allow mods to restrict subreddits to read-only for unpaid accounts.
It'd be awesome if we had anonymous reputation scores, like PGP or something, that other people could vouch for. Make bad actors pay to build up good reputation, then burn it down when they misbehave.
I really want a system like that to help filter the deluge of comments anyway. Though there's a danger of forming a filter bubble, I want to see commentary that is vouched for by those I respect. Not just on Twitter, but everywhere. As a protocol or data exchange format.
The whuffie concept relies on strong online identities. Without that, declaring bankruptcy is too easy and allows malicious actors to simultaneously harm others and boost their own rep with bots.
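One hedge against cheap bankruptcy that doesn't require strong identity: make vouching itself a stake, so reputation can only enter the system from existing members and misbehavior claws back the stake. A toy sketch (all names and numbers hypothetical):

    # Vouching transfers a slice of the voucher's own score to the vouchee.
    reputation: dict[str, float] = {"alice": 50.0, "bob": 40.0}
    stakes: list[tuple[str, str, float]] = []  # (voucher, vouchee, amount)

    def vouch(voucher: str, vouchee: str, amount: float) -> None:
        """New accounts start at zero, so a bot farm can't mint reputation;
        every point a sock puppet holds came out of someone's pocket."""
        reputation[voucher] -= amount
        reputation[vouchee] = reputation.get(vouchee, 0.0) + amount
        stakes.append((voucher, vouchee, amount))

    def punish(offender: str, multiplier: float = 2.0) -> None:
        """Burn the offender's score and fine everyone who staked on them,
        so "declaring bankruptcy" costs the whole vouching chain."""
        reputation[offender] = 0.0
        for voucher, vouchee, amount in stakes:
            if vouchee == offender:
                reputation[voucher] -= amount * multiplier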
Politically motivated reddit people are, or at least can be, scary.
Part of the problem is that online communities rely on the fallacy that, because most people are good, such communities can thrive with some reasonable amount of guidance. But it only takes a relative few people acting in bad faith to cause great damage.
As such, I've long considered that the online world amplifies the sociopaths, bad actors, and worst among us to an untenable level, giving them outsized power and voice. I think it's driving our "real" society and culture in a negative direction to a degree that people vastly underappreciate.
Layer on top of that adversarial nations that actively use our online communities to divide and propagandize, and there's a very real question of whether we're better off without these communities.
More succinctly, many of the largest platforms that enable online communities seem consistently unable or unwilling to rein them in. And when you look at experiences like those you convey, it's little wonder. Attempts to moderate bad actors invariably add to the problem, as those actors devolve the discourse further and generally seek to be heard, or else burn the place down.
Good moderation can keep out bad actors and promote an adult and civil tone, but it's insanely time-consuming and expensive.
There really needs to be a new legal concept - something like poisoning of free speech. It's one thing to have unpopular and unusual opinions and to argue them, but another to set out to knowingly and deliberately subvert and poison communities with calibrated lies and aggression.
There is no upside to the latter, and the face-to-face equivalent would usually have consequences. Free speech advocates online seem to believe that online communities are somehow magically strong enough to handle these threats automatically - but in reality they can be more fragile than face to face communities. Some formal appreciation of that might not be a bad thing.
One of the few cases where online and offline worlds converge on a common modality to deal with common fundamental inputs: you don't know the valence of a new group member until they choose to reveal it to you, at the moment it most favors their leverage. This (among other things) tilts the odds of successful group influence in favor of the newcomers. Freedom isn't free and all that.
When reddit goes public, moderators get nothing. The employees of reddit will become de facto IPO millionaires, but the moderators, who create the success of the site, get nothing. Has anything been discussed along these lines about how unfair this is?
If you create a new subreddit, anyone can show up and participate. Good actor or bad.
Can you imagine if you threw a physical party and let every single stranger in the world literally walk in your front door, wander around your house, and do what they want? You'd be a honeypot for thieves and vandals.
On the Internet, they don't take your physical stuff, but they harm the online equivalent: your information and attention. Every popular online forum is likewise a honeypot for scammers, shady advertisers, griefers, and other malcontents. Why wouldn't it be?
The typical suggested option is massive policing, but that's really hard in a world where a bad actor can don a near-perfect disguise (create a new account) in an instant.
I would love to see sites like Reddit and Twitter adopt a model closer to real-world social interactions: something like a web of trust where you need to be invited to participate in a space and where there is some level of vouching for you. But those kinds of businesses don't scale well so we almost never see them.
Another option would be to force something like real identities. The reason a bar can get away with just a couple of bouncers is because a bad actor can't as easily escape the consequences of their actions by shedding their identity. But, of course, requiring real identity gets in the way of good actors who use anonymity for good reasons (generally avoiding other bad actors).
It's a hard problem, but it makes me sad that almost every business just ends up doing "let everyone in and accept that there's going to be shitbags all over the place fucking it up for everyone".
But some mods become really jaded over time and are mean to users by default. I was moderator of a few mid-sized subreddits for a few years and saw that in action. A common thing I saw is a sort of purity mindset, where a moderator feels like the more people they ban, the better the subreddit becomes. It’s really easy to dehumanize people online - users do it to mods, and mods do it to users. But those bad moderators make users hate mods in general, and the cycle continues.
Same platform, but the experience couldn't be more different: we spend almost all our time dealing with "shallow" problems at a high scale. For us that means flamewars, brigading and (usually political) astroturfing, and to a lesser extent commercial spam. We don't even remotely have time to go looking for deeper problems such as ban evasion, because the volume is so high. We know it's probably happening but it's almost impossible to police because it disappears against the everyday noise.
One thing that I think helps us a lot is heavy use of automoderator rules and the spam filter to remove the most obvious problems automatically.
I think smaller and more local communities breed a very different class of problems than big ones -- it's much more personal for the participants, and their retaliation is more personal as well. Also, a single abusive user can do so much more damage in a small community because they don't just blend into the background noise.
One constant though is that we do see some really troubling things as well.
You poor soul. I stopped visiting that sub a few years ago. There was a pervasive negativity about it that was just depressing. I hope it's gotten better.
> [...] the Portland Police Department
BTW, it's Portland Police Bureau (PPB). Calling it PPD will get you labeled a transplant. ;-)
Thus, the system is both evenly-applied (most of the time) and highly scalable.
Thanks for keeping r/Portland such a fun place to hang around in, really enjoy the creative community/people who post there!
A subreddit isn't worth your safety.
There are online places where they use a third-party to verify identity. It may not have the most adoption, but at least you're not putting yourselves at risk.
I mean, if your goal is to "control" conversations, yeah, that's going to be a lot of work. What's the point of that anyway?
There's no benefit to banning the bad folks, since, like your guy, they just create a new account.
    author:
        name: [baddude1, baddude2]
    action: remove
    action_reason: "Troll / Spammer"
@bredren Would you want to lead /Portland District?
As such, this is spam for a vague mailing list.
The senior moderators are faster and better than me, so my actual workload is relatively light.
I believe Reddit will grow in its influence and that tools can and will be developed to improve the situation for community moderators.
"Doxxing" public figures who try to anonymously control public discourse is not.
Not to blame you but this is the kind of BS that makes me uninterested in the Reddit community, and uninterested in Portland in general, rife with cancel culture.
The video I posted was about a Lake Oswego woman going nuts and verbally abusing some local PD who were handling the situation with elegance. My title was something like "this is why I don't trust when people senselessly dog on the police." It was removed because there wasn't anything specific to Portland or the surrounding area (not true, but ok). I told the mod I would just repost with the title "this is why I don't trust when Portlanders senselessly dog on local police" and was told not to post. No rules broken, just intimidation and censorship.
I don't subscribe to the progressive/liberal/left/socialist ideology and I'm genuinely afraid to publicly share that fact in Portland. There are many people in PDX who will violently attack you if they catch wind of that fact. This isn't a secret. It's also not a secret that Oregon outside of PDX and Eugene is generally made up of Red communities.
For instance, look at the protests in Portland last year. The counter-protesters were far more disruptive and violent than the visiting protesters, but the typical Portlander wouldn't see it that way. In my own discussions with them, they see Trump supporters et al. as being fascist and as causing violence simply by existing, thus justifying their pre-emptive violence.
Seeing the actual conversation and the actual article you tried to post would give a lot of context into why you believe you were cancelled. I don't want you to out the mods, I want to understand their side of the story.
I've heard your side, and it sounds like the content didn't fit the vibe of the subreddit, as per the mods. They told you that and you threatened to break their rules again. Yes, you were threatening to break their rules.
I'd say they gave you a chance, and given this particular story re: mod harassment, didn't want to engage with an aggressive user.
> No rules broken, just intimidation and censorship.
A moderator of a forum asking you to not do something is not intimidation. It's asking you to adhere to the vibe of the subreddit. The description of the article you tried to post was obviously an attempt at rabble rousing on your part. There are helpful and constructive ways to voice your opinion. If you have consistent trouble doing that, I would consider therapy to try and understand why people see you as aggressive when you try to share your opinion.
There's nothing wrong with asking for help when you're struggling to fit in socially, especially if your opinions diverge from the norm. Being different is HARD.
Thanks for looking into my Reddit post, creep. I feel totally weird about you now.
I'm tired of this conversation anyway, just another corrupt person in power bending rules for their own favor, covering up injustice, lying to affirm their position, and creeping on people cross-social media. I would even bet you and your mod pals have put me on some kind of watch list. It's disgusting and shameful really, and I'll be calling out moments like this as long as the law lets me.
My post objectively didn't break any rules on /r/portland.
We (buu700 and I) added Gallowboob some years back largely to have insight into how other teams modded their subs at a time when we were growing quickly and needed to know what to do to keep up. While useful in meeting that goal, we quickly realized it was pretty meaningless for /r/relationship_advice because of the nature of the questions we were getting. The only sub like ours is /r/relationships, and we each differentiated from each other by having different rules and content creation controls.
As best as we know, Gallowboob enjoys the place and is pretty decent at modding, so we're pretty happy to have him. Most mods burn out quickly because of how dark the questions get as well as because of how meaninglessly violent people become when their posts are modded.
Separate comment on an item in the story:
> Allam believes his time on the site has made him a more “paranoid” person and led him to develop “borderline PTSD.”
Moderating Reddit's larger subreddits is absolutely capable of resulting in PTSD-like symptoms. I've been dealing with some on and off after a post some years ago where somebody who requested advice followed through with the best course of action only to find that his wife killed their kids soon after. And Reddit has absolutely no support system for things like this.
What are you getting out of this?
They don't. They do the work for a community they're part of. The social network hosts that community, and tries to make a profit doing so.
I organise a meetup that's held in a pub (well, not at the moment). People coming to the meetup spend money on the pub's beer and pizza. Am I doing free work for a for-profit public house?
Yes. Unequivocally, yes. If I were the owner of X thing and Y person was sending lots of people my way I would be more than happy to give Y person something in return. In your example I would offer free pizza and beer at the absolute minimum. You are selling yourself short.
It's not your intent to do work, and it's probably not the pub owner's desire to pay for meetup organizers, but regardless of your and the pub owner's intent, work has happened.
It's one thing to organize a bake sale in a public park, but when you choose one pub over other competitors, and they profit from it, you're doing work for the pub for free.
This is why experienced community managers work with businesses to get prices for the event discounted, or set up an agreement for profits to go towards the event.
I'm sorry I wasn't clear; this point is not directed at friendlybus, but at the general sentiment upthread (e.g. ramphastidae) that seems to assume that because reddit benefits from the existence of the community, the community cannot also benefit from the existence of reddit. The underlying reason is that individuals benefit from the community.
Also, I would question whether reddit actually makes much profit from individual communities like r/relationship_advice, especially compared to the pub scenario, which I find generally acceptable.
A reddit moderator's free time spent on a toxic community is a net negative for them; payment may at least offset the health cost.
The option is to do "work for free" or to shell out hundreds of dollars.
You may as well ask why people volunteer their time at all.
It's fair that both parties feel like they're benefiting, so no direct transaction needs to occur.
Comparing this to volunteering in general doesn't hold up. I volunteer with local registered charities because I believe in their mission, and the volunteer work I do directly impacts and improves the lives of the people I'm working with.
Which may be the crux of the GP's question. Why volunteer in the community in a way that results in a for-profit organisation monetising you for their gain, instead of finding a non-profit in your community to volunteer your time towards? A non-profit or charity in your community is more likely to have goals aligned with your own than a for-profit social network is.
A moderator's work primarily and directly benefits the users, because it allows them to communicate safely. The platform is only helped indirectly.
Charities often have salaried or reimbursed employees; do you volunteer in order to get their salary paid? Of course not, you do it because it helps actual users; the fact that this allows people to get paid, somehow, is an indirect benefit. Same here, really.
If the charity started stalking me to sell my data onto advertisers in order to pay the salaried staff or other stakeholders more money, I'd stop doing any work through them.
The charity's mission is not maximising revenue / profits.
Any VC/PE backed corporation's mission is to maximise revenue / profits at any cost.
My opinion is that the benefit of socializing is not worth taking on the Reddit moderator role, given the psychological load moderators operate under.
The same will, and should, happen to Reddit. One of the largest sites on the Internet is making money hand over fist while moderators end up paying the price for no real benefit.
a) prevents people who want to be mods from being mods
b) breaks the way the site works
What gives you the right to do that? Seriously, if you want to live in a nanny state, please, not in my back yard.
I do not want you to be my nanny.
Reddit's moderation problems are not so big that you are justified in putting a gun to other peoples' faces to force them to do it the way you want.
The Fair Labor Standards Act: https://webapps.dol.gov/elaws/whd/flsa/docs/volunteers.asp
Reddit's preference for exploiting unpaid labor, and its failure to plan for the inevitable event where that exploited labor decides they want to be compensated for their work, is entirely on Reddit Inc. A company valued at $3 billion should have understood the potential risk.
All Reddit mods who feel burned out or are otherwise struggling because of the work they've done as mods should file a complaint with the Dept. of Labor and request financial compensation for the work performed, and coverage of any medical treatment stemming from the results of moderating toxic communities.
I don't get it for relationship subreddits, because that's so mentally exhausting, but for fandoms and niche communities the experience is super cool.
I've volunteered at all sorts of community centers, schools, etc... Never once at a for-profit business in town.
In your analogy reddit is the property owner or landlord of the community center, but it does not operate the community center, so volunteers are necessary.
I mean, I've done internships, but the quid-pro-quo there was quite clear.
E.g., I did a quality improvement internship at a hospital to get a job in quality improvement there, once upon a time. It was "unpaid," but it was an audition for a job that I wanted.
As opposed to my time as a volunteer in a local free clinic, where no benefit accrued to me at all.
It's certainly volunteering in the synonymous sense of 'doing something for nothing because you can', but I feel this statement, left in the vacuous state of its own brevity, comes loaded with an unsaid implication that "volunteering" on Reddit is no different than "volunteering" in one's local community, and I'm not sure it's that simple.
It does feel just as limited of a definition of 'volunteering' if taking the grandparent comment at face value.
Moderators should be volunteers from that community either way, not paid outsiders.
Not the OP, but most people just want to help a community they're part of, take a hand in making it better, and help its members.
You have to remember, this issue has played out in some form in pretty much every online community that has ever existed, long before the term "social network" came into use.
But at the end of the day, you need moderators. And the job sucks and is unpaid, so you can't exactly select only the "perfect" candidates.
Being the invisible hand that shapes the narrative of an entire community to one that you see fit is surely an alluring power. Especially if you think the powers of that influence will grow over time.
It's not crazy to assume that a portion of them are in it for the power. People love positions of minor authority. See: the fragmented history and (hilarious) mod power struggle that led to Seattle now having a bunch of differently managed subreddits.
For a nontrivial number of people, this community is the only place to turn. If we step away, do people incur harm as a result?
(in other words, it's almost a psychological obligation of sorts at this point)
Maybe if mods were more transparent across all the subs.
Most people get angry when someone removes their post, yet they see it reposted and approved hours or even minutes later. (Gallowboob is infamous for that, but regular users do it too.)
This is a common refrain I've heard going back well before Reddit ever existed.
I helped moderate the Vault Network boards for a while back when they were a thing.
It's hard to overstate how amazingly... disturbed and vitriolic some of the individuals in a community can be. And dishonest. And spiteful.
No amount of transparency helps when all it takes to refute any evidence is to label the mods as lying or "corrupt".
Let's say a user says something bad and you mod the post and give them a warning. You tell them exactly what the offense was and why it was moderated. They might even act civil in response.
Then later you see them talk about how "X mod totally censored their post for no reason and refuses to explain why. X and the rest of the mods are totally corrupt".
So, what do you do? Post a screenshot of the private messages exchanged (something, for instance, we weren't supposed to do)? Take a screenshot of a browser window with the "mod view" (aka uncensored) of the original post? (Something said user will point out can easily be edited, because, well, it's a browser showing a website; not exactly hard to alter.)
And that happens every day, all the time, constantly. And no one is being paid, and there is a constant stream of other stuff you are trying to stay on top of.
And sure, some of the mods are shitty and "corrupt" in a sense. But I would say it's as much a reflection of the community itself as anything.
The first is that transparency doesn't "fix" hostile community members. What it does is justify the actions of those in power to the rest of the community. Without this the community will very quickly lose faith and the situation just devolves into factions and hostility.
This is very indicative of your last line really. It's like any form of government. When the people lose faith in those in power due to perceived capricious or opaque behavior there tends to be a lot of civil disorder. This usually leads to those in power entrenching themselves and enacting even more draconian measures. It's a vicious circle. Perhaps all governance is subject to the laws of entropy and will eventually fail, but I believe transparency, consistency, and effective communication are the only methods to slow such an eventuality.
The second point is one of size. As the population of a community grows and the proportion of those who wield power shrinks you also end up with a lot of discontent. Many community members will no longer feel they can effectuate change as they're a small voice amongst many without any real connection to the small group in power.
Also note that my experiences are in relation to fairly tight knit communities. Reddit is a little different in the sense that plenty of subreddits are far less communal. Effective communication is very difficult when the community is mostly transitory right up to the point where mob mentality takes over.
> When the people lose faith in those in power due to perceived capricious or opaque behavior there tends to be a lot of civil disorder.
I would be really interested in figuring out how to combat "perceived opaque behavior" especially when the source of the complaints is more artificial, where the complaints are being used as a tool for manipulating the community rather than being based in an actual grievance.
That's the reality you run into sometimes; like that oft-used quote from the Batman movies: "Some men just want to watch the world burn."
As the GP said, transparency for the punished alone is not sufficient. Governance must be transparent to the public; otherwise it will seed distrust. The behavior you described isn't "perceived" to be opaque, it is opaque.
Maybe you could keep complete raw backups of the "minutes" (or mod logs in this case). So you would take a backup every 24 hours, encrypt it, and upload it to an independent write-once (append-only) server, which proves you couldn't later tamper with it. And after e.g. 1 year (let's reduce it from 30 years), you would release the encryption key to that mod log backup you made. This ensures transparency and trust.
Naturally people will still complain since it's impossible to fix people, but I'd imagine having a "authoritative" list of every moderator action and the accompanying explanation would help stave off the corruption/lies accusations. That combined with a reputation of transparency.
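To make the snapshot step concrete, here's a rough sketch in Python. It's purely illustrative: it assumes the third-party cryptography package, all names are made up, and the actual upload to the write-once host is hand-waved:

    import hashlib
    import json
    import time

    from cryptography.fernet import Fernet

    def snapshot_modlog(entries):
        key = Fernet.generate_key()  # held privately until the reveal date
        blob = Fernet(key).encrypt(json.dumps(entries).encode())
        return {
            # uploaded to the independent write-once server immediately
            "publish_now": {
                "ciphertext": blob,
                "sha256": hashlib.sha256(blob).hexdigest(),
                "taken_at": int(time.time()),
            },
            # released publicly after the agreed delay (e.g. 1 year)
            "reveal_later": {"key": key},
        }

Publishing the ciphertext (or even just its hash) at snapshot time is what makes later tampering detectable; releasing the key after the delay is what makes the log auditable.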
I mean, I think that would be an interesting experiment, as I can't think of a large community that provides such data.
But again, if the underlying assumption is "the mods are corrupt", then that accusation can easily be transferred to whatever logs are provided as well.
Something Awful also has a number of other moderation features that I think more sites should emulate:
1. Temporary bans from posting ("probation") of variable length, to allow for punishment short of a full ban. Usually 6 hours to a couple days, depending on the offense, occasionally a week or longer.
2. A nominal registration fee ($10, one time) to register an account, to cut down on bad actors just making new accounts.
3. Normal bans for being a dick are reversible by paying $10 (same cost as registering a new account), unless you get "permabanned" for either repeated bad behavior or posting illegal content. If you get permabanned, any new accounts you create get permabanned as well (assuming the admins can find them, which they do remarkably effectively using IP and I think payment info).
That last point sounds like it incentivizes the mods to ban users, so that the forums get more money. But it doesn't seem to actually have that effect, possibly because most of the mods are not paid.
There have also been a few interesting experiments in moderation that were less useful, but are definitely entertaining, such as the ability to limit an account to a single subforum (usually the one for posting deals, or one of the ones for shitposting). It's also possible to view a user's "rap sheet" of moderation actions from any of their posts.
I really don’t care how shitty another poster was to you. I only care how shitty you are treating me.
I have much more experience dealing with corrupt and biased mods than with anything else.
Imagine seeing a post exactly like yours, literally word for word, with the same civil tone and all.
Except the user in question had been banned for posting a tubgirl image (it's gross) by a moderator who happened to be female, and had responded in private messages on an obvious alt account (minutes after the ban, from the same IP address), calling that moderator a slut who deserved to be raped and killed.
And that style of interaction being relatively common.
I was merely pointing out that the "public" persona a user portrays doesn't have to match the truth.
In the end, that's why its such a difficult problem. You take the normal conflict that occurs in communities, add in the potential for malicious actors on both sides and it's no surprise that the normal conflicts can spiral out of control. Especially in the virtual and relatively anonymous setting of online communities.
And from a person on the outside looking in, it can be impossible to actually know what the truth is.
We're (r/relationship_advice) rarely transparent with removal reasons. Our templatized removal reasons generally look something like this:
> u/[user], please message the mods:
> 1. to find out why this post was removed, and
> 2. prior to posting any updates.
> User was banned for this [submission/comment].
The reason is that we have a high population of people who:
1. post too much information and expose themselves to doxxing risks, and
2. post fantasies that never happened.
So, in order to protect the people who inadvertently post too much information, we tend to remove these posts using the same generic removal template. However, if people knew that a removal meant the post was pulled for one of these two reasons, the submitter could still end up on the receiving end of harassment; so we have to fuzz the possibility by withholding removal reasons much more broadly.
This is specific to r/relationship_advice. Other subreddits have other procedures.
Don't get me started.
Particularly "muting" posts with auto-moderator (silently hiding them for others without notification/warning/explanation). It was originally created for spam control but is regularly abused for generic moderation. It needs more controls placed on it.
To give a recent example, I wrote a long reply in /r/fitness's daily discussion, but it was muted because it contained an offhand remark about COVID-19 (vis-a-vis getting hold of fitness equipment right now). Why are they muting all comments that contain "COVID"? Who even knows, but the /way/ it was done was pretty irksome and resulted in wasted effort on my part for a comment that violated zero published rules or etiquette.
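For what it's worth, a silent keyword mute like that is only a few lines of AutoModerator YAML, which is probably why it gets reached for so casually. A hypothetical reconstruction (their actual config isn't public):

    type: comment
    body (includes): ["covid"]
    action: remove  # silent removal; the author gets no notification

The comment still looks live to its own author, which is exactly the no-notification/warning/explanation behavior described above.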
/r/politics has a huge dictionary of phrases and idioms that result in auto-muting. None of them are defined in their rules.
This is true here on HN too. One such word that I’ve seen to cause comments get auto-killed is “m∗sturbation” (censored for obvious reasons), and I am sure there are others.
masturb circlejerk faggot nigger feminazi shitstain lmgtfy.com #maga
If a few "medium" actors get banned by accident, that's the price to pay for the rest of us getting to enjoy discussing tech without dealing with a toxic cesspool.
Metafilter is a forum which used to be very diverse in opinion and is now basically captured by a small vocal minority. How did they do this? The minority produced a large amount of content for the forum and was very active. 95% of that content was high quality and on topic, but the remainder was biased and very opinionated. As a result of their interaction with the site, they were closer to the mods, regarded more highly, and given the benefit of the doubt in "bad actor" conversations.
Today Metafilter is a dying community of territorial users who eviscerate anyone who doesn't know how to play the game. The minority won and created their little clubhouse corner of the internet. Metafilter as a forum, though, is a shell of its former self. There are fundraisers to keep the site alive, and formerly paid mods and devs are now either retired or doing it for free. Not only did it drive away old and new users, but the minority also seems to have become bored without constant drama and moved on (as seen by new posts and commenting activity falling off a cliff).
I know the common refrain on HN and in general is "on a private platform there is no such thing as free speech" but be careful. Don't let blatantly toxic users run your platform but beware of users who are quick to call everything toxic.
> Maybe if mods were more transparent across all the subs.
Does the latter really justify the former?
If you disagree with a moderation decision, take it up with them politely. If you consistently disagree, maybe this community just isn't for you!
Mods are just people donating their time. Even if they're inconsistent or "corrupt", there's no reason you should respond in any way that can be described as "violent".
There are basically two scenarios here. One, a lot of people agree with you — in which case you should be able to appeal the decision or splinter off successfully. Two, most people disagree with you — in which case the mod is probably right, or at the very least you're simply not welcome in the community.
Let's also not forget that we're talking about violence in response to moderation decisions. So even if that's their response, it's still not okay to e.g. threaten them.
> Let's also not forget that we're talking about violence in response to moderation decisions. So even if that's their response, it's still not okay to e.g. threaten them.
Ah the classic "don't let others make changes but outcast them" approach.
We’re talking about online communities — low stakes to join, low stakes to leave. There are exactly zero reasons that anyone should resort to doxxing/threats/etc in response to moderation decisions.
If you're helpfully, innocently, looking after dozens of top subs, and people mention that and wonder what's going on, you don't censor them; you have a flipping AMA about it!
Because personal witchhunts are against Reddit's rules as a form of targeted harassment, and mods are de-modded by Reddit for not enforcing Reddit's rules -- or worse, subreddits that consistently see significant Anti-Evil Ops (effectively Reddit's on-payroll God-Mods) action may be quarantined.
My immediate thinking involves referring affected parties to professionals and specialists who deal with this sort of thing, but in your opinion, what should Reddit (the company) be doing?
I guess I'm having some trouble unpacking your suggestion, can you help me gain some clarity on what you're saying?
> I've been dealing with some on and off after a post some years ago where somebody who requested advice followed through with the best course of action only to find that his wife killed their kids soon after.
Two, if someone feels upset from getting messages like "you're a piece of shit", I wouldn't say they're an immature child or a weakling. I'd say they could be sensitive, perhaps also very considerate, and in that moment they may worry that another human being is upset and they're responsible for it. They might hurt from the pressure to moderate something that's important to them. Maybe they struggle with social interaction, even if it's online, and experiences like this can be very hurtful.
I don't care why something hurts someone, I care more that they are hurt. Chronic exposure to these things, even if they do seem benign or minor from the outside, absolutely can lead to trauma.
Trauma is a result of exposure to acute or chronic damage to your physical or mental well being. This can occur in a staggering number of ways from person to person. How each of us handles that will vary, but if it leads to a lasting impact, it's trauma.
I'm sorry you experienced what you did. It's arguably worse than what the moderators are describing, but it isn't exclusive to those experiences. It's also not a contest.
> Calling someone with PTSD “weak”
That's a non sequitur derail. The post you are responding to is about faux self-diagnosis.
"weakling" is right there at the end and the GP is going on about a strawman not mentioned by the OP anyways, as several of the child comments point out.
Name-calling (as a proxy for illegitimacy) amounts to the same thing for the purposes of the point. Claiming damage without evidence is not compelling, even if it makes for tasty fluff content.
Think more along the lines of everything from graphic descriptions of rape to child porn. Reddit generally lets comments of the "piece of shit" nature stand.
Watching the ones you love die is terrible, but it can also bring about a sense of peace. Being actively predated on is no fun. I have also seen mental and physical abuse as a child, fwiw.
We're conditioning a culture to be hypersensitive, and at the same time, for some of them, hyperaggressive. Lack of experience has always clouded the vision of humans, but many find themselves, willingly or not, living within silos and echo chambers, which reinforce their beliefs and behaviors. What they consider trauma does not begin to approach yours.
In other times, their vocal ignorant statements would be squashed immediately upon utterance. In these times, they receive reinforcement, from some.
1. We're trying to keep the subreddit as accessible as possible to people with nowhere else to turn. This is hard.
2. There's at least one verified crisis counselor who frequently comments - u/ebbie45. We've verified their credentials.
We're still figuring out how to deal with it short of a figurative nuke.
Care to give us some examples?
In fact, I expected that the best examples got modded out.
There was a moderator of the gaming subs who killed himself fairly recently. He said largely the same thing, that modding was not healthy, but he continued to do it.
Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of. That’s not really a soldier’s dilemma of duty and responsibility.
So that anecdote aside, I’ve worked with a special forces vet that actually has PTSD.
Respectfully, if you think moderating an online forum is any sort of analog, even a "PTSD-like" one, you are either mistaken or have a far more gruesome task than I think possible.
Speaking solely on behalf of myself: we see a notable volume of fantasy and fetish posts as well as legitimate pleas for help that veer into extremely disturbing territory. The result is a situation where mods may well find themselves feeling substantially troubled with extended exposure.
I'm not about to impose that on someone else. And as a result of inevitable scope creep from the sub gaining readers, we've now got to sustain an environment that people use as either a first- or last-resort option, while at the same time turning away significant populations of people (a subset of followers of influencers such as https://twitter.com/redditships/) who appear to relish creating drama out of people calling for help. Great example: https://twitter.com/eganist/status/1263534755045412870
When staying imposes a burden on myself but leaving heightens the risk that people may be harmed, it's a lose-lose, and the trauma arises from this.
I'd show you some of the stuff we've had to mod out, but it's too dark for Hackernews.
Why do people volunteer on anything online, like open source projects even? Sometimes people like a thing, want to keep it good, and feel that it's less likely to happen without them. Leaving is condemning the thing to possibly get worse and decay through less contribution, and interrupts their social connections formed through it and their established routine that gives the satisfaction of contributing. It's not something easily abandoned after investing years in.