Hacker News
Reddit's top user leaves platform after harassment (dailydot.com)
291 points by admiralspoo 13 days ago | 452 comments

I am a freshman moderator of /r/portland.

I had thought working as a moderator would be a great way to help grow the subreddit's base, offer more useful real-world interaction points, and basically have a big impact on Portland.

Instead what I've found is that 99% of the time the moderators are dealing with bad actors and that has to be the primary focus to keep the sub from falling apart.

It isn't just dealing with trolls or stepping in to stop out-of-control discussions.

It is actively performing anti-ban-evasion against people who are targeting the sub for disruption and then going after moderators that get in the way of these attempts.

There is at least one active case number with the Portland Police Department of a person that has both attempted to doxx our moderators and has gone to the home of a moderator and vandalized property repeatedly.

In this example, the person will not stop and creates new accounts every day.

While there are things Reddit can be doing to help, (such as improving tools to counter ban-evasion,) I think this problem is bigger than Reddit and focuses on lack of enforcement for digital actions that would qualify as genuine crimes of harassment if translated into the physical realm.

This is a really interesting insight, thanks.

There are roughly two views on moderators. Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening. Ordinary users think that moderators are power-mad bullies who ban and delete as they see fit, with no accountability.

Both of these things can be true at once!

This reminds me a lot of the situation with the police, who are to an extent the moderators of the big room [1]. 90% of a policeman's time is spent dealing with a small number of really nasty people. 90% of a member of the public's interaction with the police is a policeman exceeding their authority, not being interested in helping with a problem, etc [2].

If you spend 90% of your time dealing with really awful people, you will get messed up. You will develop an itchy trigger finger. Most of the time - most of your time - that's appropriate. But when you're dealing with someone who's not awful, you will get it wrong.

[1] http://www.catb.org/jargon/html/B/Big-Room.html

[2] Maybe that's not true of you. Maybe that's not true in general where you live. It is true in general where I live.

You're welcome.

> Both of these things can be true at once!

I've been on both sides of this. Last year, /r/portland began using the word 'criddler' to describe people in various states of meth-addiction and bicycle thievery.

The word was beginning to be used very often to disparage many different types of people. The moderators made a decision to ban it, and the community backlash was supreme. I let the mods have it as well.

After a few months, I came to terms with the label of 'criddler' and realized how it would come up in my mind as a form of negative judgement of a person, which was not healthy for me. I realized their decision was a good one, and I described this turnaround in my nomination thread running for moderator.

Your description of the situation with the police is apt and I have thought similarly many times. Most people have no idea what day-to-day police work is.

If it were possible to see the challenge of keeping the peace, there would likely be more compassion when mistakes are made or when there's anger about moderator decisions that affect the community.

The 'criddler' word seems humorous. Is there any particular need to ban it outright? These kinds of neologism happen all the time in real life and we ask people to 'tone it down' or give some other social signal that the joke has gone too far. The social feedback is immediate with breathing room, nobody is ejected from social areas, but you know they might be eventually. The joke naturally deflates as the last dregs of fun come out of it, and we all move on.

Someone famous once said (100yrs ago) that when people write to you they want to test their mettle. It is often best not to reply until a week after the letter was sent and to find the fire in the original message has since left the person who sent the letter.

These instantaneous and unpredictable word bans seem to be almost the perfect design for getting the maximum amount of burn out of a given burst of new flame. Is the lone moderator too far removed from the pulse of the crowd to give effective feedback?

You'll find plenty of debate on this specific subject in the sub's post about it.

The word was humorous to me at one point as well.

However, it was being applied in general to homeless people.

Portland, like many cities, has a serious and growing problem with homelessness.

The word was being used to demonize and unfairly group drug addicted folks, people with mental health issues and those who are not those things but still homeless.

I do not know if there was a need to ban it outright, however I think there were reasons to want to.

To some extent, sub-wide decisions like this are taste-profile questions and with taste you will not get agreement.[2]

For example, Craig Newmark had to walk a fine line with adult services on Craigslist.[1] Many craigslist users wanted those forums. Many did not. Ultimately, I think decisions like this give character to the community.

So with /r/portland, banning 'criddler' wasn't just about banning a word; it was about presenting a posture of the sub toward homeless people in general.

[1] https://en.wikipedia.org/wiki/Craigslist#Adult_services_cont...

[2] One way to solve this is to allow the sub to elect moderators, which is what /r/portland did. In this case, I did say that I was for the ban on the word and was still elected by the community that was angered by the ban.

Portland (and Seattle) have had such a terrible homeless (and drug addict) problem for decades that it's been reported on in various PBS shows such as Frontline several times over the years. It even spawned an entire music subgenre in rock that gave birth to the likes of Nirvana, Hole, Mudhoney, etc.

Heh, the dressing like a homeless logger/fisherman trend wasn't a show or a fashion statement.

In my area, people complain that there are homeless people 'now'. When I was a kid, fishermen would come home from Alaska with thousands of dollars of cash in their pockets, buy a tent and a sleeping bag and just camp right next to the bar. Lots of couch surfing. People had bars on their windows.

These PNW cities have always had a lot of drugs and poverty and depression. Cities built on logging, fishing, lumber mills, paper plants, shipping ports and airplane factory work.

The tech hub vibe and the people who moved here from the suburbs see this as new. And in a way, it's been exacerbated by skyrocketing rent, but part of what we're seeing is a lack of places where these people could hide in plain sight.

They've always been here. This is a subculture that has always existed.

You're only partly right; the problem is exceptionally worse now than when you were a kid.

I'm 4th generation Southern Oregon and when I was a kid (20 years ago) I rarely saw homeless people in Medford; they either were much more reclusive or didn't exist. Ashland is its own case, however, and we would play the game "Hiker or Homeless."

There's a bike path that runs from Ashland all the way to... Well, at least Central Point, or maybe Gold Hill? It's about 25 miles. The Medford sections are now overflowing with tents and homeless camps, enough to make it dangerous for families and children. This was unthinkable twenty years ago.

Is there any other tool you can use? A warning that a word will be banned, then enforcing the ban a week later? Marking individual words automatically inside posts with a symbolic warning?

That's what I'm getting at, I should probably do research on my own time. Thanks for doing the good & dirty work of moderating a subreddit.

Yeah! Shoosh already!

Don't drag the debate in here!

You might as well wave a red flag in front of a bull! Or pour gasoline on a tire fire!

(Meant humorously with no disrespect.)

Just interested in ideas. I'm not going to tell you where I got that old quote from, it has an incendiary speaker! :P

> The 'criddler' word seems humorous. Is there any particular need to ban it outright?

From my experience in the past, moderating doesn't scale at all. So you basically have to pre-empt issues that tend to lead towards requiring mod actions. An example of this would be the auto-mod stuff on Reddit (though I don't have any experience with the specifics here, just assumptions).

Another approach is to root out the "trigger" words that tend to escalate arguments quickly, and are generally a sign that a discussion has just become a battle of insults instead of a useful discussion.

Again, it's not just one situation between users, it's potentially dozens of arguments across a site between dozens of users. You can't wait until after things get out of control, because there isn't enough time in the day to clean stuff up in that manner. And people want clear rules for what they can and can't say, and rules like "Don't be disrespectful" aren't usually super helpful, especially against people who are more malicious in their interactions.

So, calling out specific words that are banned allows for a simple, straightforward rule that can be pointed to without leaving lots of wiggle room for arguments (which again, you don't have time to have with every user who complains).
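Reddit's actual AutoModerator is configured in YAML, but the core idea of a specific-word rule can be sketched in a few lines. This is a hypothetical illustration, not Reddit's implementation; the word list and function names are assumptions:

```python
import re

# Illustrative banned-word list; real automod rules live in a
# subreddit's AutoModerator config, not in code like this.
BANNED_WORDS = {"criddler"}

# One pattern with a word boundary on the left and a trailing \w*
# so plurals like "criddlers" are also caught, case-insensitively.
pattern = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in BANNED_WORDS) + r")\w*\b",
    re.IGNORECASE,
)

def check_comment(text: str) -> list[str]:
    """Return the banned terms found in a comment, empty if clean."""
    return [m.group(0).lower() for m in pattern.finditer(text)]

print(check_comment("Saw a criddler by the bike path"))  # ['criddler']
print(check_comment("Nothing to see here"))              # []
```

The appeal of such a rule is exactly what the comment above describes: it is mechanical, so there is no wiggle room to argue about in modmail.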

Usually the problem is that you don't want to ban the user, but the behavior. And, reddit being mostly a textual thing, that behavior is represented by words. Banning the user is unfair, plus they can just create a new account anyway.

So you try to be alert for people who use certain phrases. Racism, xenophobia, conspiracy theories, paranoia, extreme pettiness, severely abrasive attitude, choosing beggars, and so on.

And flame wars, and people jumping on others. And of course if there is a group of people showing such unwanted behavior it takes no time for the powder keg to go off.

I think having sort of simple but a bit ambiguous rules is okay if there are very broad but exact rules too. (E.g. Rule 1: don't be a dick; Rule 2: no harassment, no xenophobia, no posting of personal information, etc.)

Moderators and the community should be proactive. (I find that the best tool is reports, because that signals what people find problematic. Sometimes they are just annoyed that some newcomer posted that again. That's okay, but most of the time users report spammers, crazy serious racists with too much time, and the occasional lost redditors' posts.)

Excellent points. You also get at a key issue around volume of moderation tasks.

This is almost off topic... but can you explain criddler?

I think I’ve spent enough time around people on meth and bicycles to probably understand it - but I just don’t get it yet.

It's an insulting term for scruffy looking people who are presumed to be homeless or drug-using.

The reason it's contentious is that one person uses it, someone else objects to it, people take sides along left-vs-right lines, and the thread devolves into a shouting match.

I guess I meant the etymology of the word.

It seems it’s pretty well established though, from 2010:


Also politicians. They too are trying to fight the good fight and help their community. Pretty much any position of authority attracts a certain mindset yes, but also changes the person who assumes that authority.

r/portland is one of the worst cesspool subreddits I've seen, and has had a history of really bad moderators over many years. I likely haven't read anything in r/portland since you became a mod, so I can't comment on how it is at the moment, but something about a liberal city draws out the worst in people on that subreddit.

but, "criddler" is much older on the sub than "last year", it's been a mainstay for at least 10 years. and Portland doesn't have a police department, they have a police bureau (sorry, that one's a pet peeve).

Jerry, I'd welcome you to come back and see how things are now. I understand there was some moderation in the past that was difficult but that a change went through some time ago.

The group of mods running the show now seem to me good people and having seen how they operate I'd be impressed if other city subs of this size are doing a far better job.

I realized after writing the above that it was likely an older word than that, but it hadn't really been a growing memey word in the sub before last year that I had noticed. Maybe the year before? Time flies.

And thanks for the note on the Portland Police Bureau. Too late to update my post, but I'll try to remember that one.

Subreddits for any location go completely against the grain of reddit. The point of a subreddit is to group people with a similar interest, but the people who live in a city don't have that shared interest, they're just in the same place. Even on the bigger side, you end up with this cross section of the entire site, shrunk down and fragmented so that it's too big to ignore the inevitable arguments between the different factions, but also too small for the moderation to be any good.

Since the concept of a city subreddit is so obviously pointless, the only people who stay are the ones who like the conflict, exacerbating the problem; or in worse cases like the Canada subreddit, there is a hostile takeover by one side. If you want to talk to people who happen to live nearby, I think you should do so on a different website; reddit doesn't work at all for that usecase.

Are you saying that people who live in a similar location don't have a shared interest in that location?

I'm not sure what point in time you're referring to, but I've been subbed there for years and I find the content to be above average in quality (although not great) relative to other subs. If I had to rank it, a very rough ranking would be somewhere between r/diy and r/technology in terms of quality. Shit-posting is on just about every sub, so I don't think it's fair to base it entirely off of the handful of trolls r/portland has.

EDIT: just noticed you mentioned "liberal city", so if it's political stances you're referring to then yes a more conservative-leaning person may not feel welcome on r/portland

> just noticed you mentioned "liberal city", so if it's political stances you're referring to then yes a more conservative-leaning person may not feel welcome on r/portland

funnily enough, I'm referring to the sub oftentimes having a more conservative audience, often coming in from the suburbs (Vancouver being a big one there).

Ah ok. Then that goes back to my last sentence - there's a handful of trolls and I think always will be, but the majority of what I see is pretty decent.

> After a few months, I came to terms with the label of 'criddler' and realized how it would come up in my mind as a form of negative judgement on a person that was not healthy for me. I realized their decision [to ban everyone's use] was a good one

Note how your conclusion doesn't actually follow: just because something isn't healthy for you, doesn't make it unhealthy for everyone else.

Trying to police too much is likely why moderators get such a backlash when they employ heavy-handed tactics like outright bans.

Moderators also very likely operate in a like-minded bubble: the sort of people who would volunteer to be moderators are far more likely to have more in common with each other, than with the average community member. As such, mods will likely always get backlash.

For instance, I'd bet it's far more likely that your fellow mods debate to what extent language should be policed, and there are probably almost no mods questioning whether language should be policed at all.

Some kind of moderation system with strictly limited mod terms and random promotions to mod status would likely improve relations, but achieving the right balance would be challenging.

> Trying to police too much is likely why moderators get such a backlash when they employ heavy-handed tactics like outright bans.

This logic doesn't follow. If you don't police enough, then users have an expectation that anything goes. When you do decide to moderate behavior, you experience a backlash because you've changed stances.

If you moderate too much, then you get accused of being heavy-handed, fascistic or trampling over their free speech rights. When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.

Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames. Trust me, there are people that will pretend to be in good faith for months, years until it gets them a position of power just so they can burn it all down.

> If you don't police enough, then users have an expectation that anything goes. When you do decide to moderate behavior, you experience a backlash because you've changed stances.

You experience a backlash from some users, sure. If most users consider it a positive change, then no big deal.

> When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.

This seems to be assuming ban-like tactics. They're a poor option and I don't favour them.

> Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames.

You're making a lot of assumptions:

1. There's an innate equilibrium between the cardinality of the mod set, the probability that it contains a bad actor, and the probability that actor can cause appreciable disruption. The larger the mod set, the less likely a bad actor can do anything meaningful. You can likely make this probability arbitrarily small if that's a likely threat model. Random appointments work quite well to increase efficiency in various consensus-driven systems [1].

2. Mods shouldn't have absolute power. There is simply no way that a single mod should be able to destroy a whole community, any more than a bad judge in the legal system, or a single bad politician could destroy a city, county or state.

3. A transparent appeals process is always needed, in which other mods and community members review mod decisions. It took us millennia to develop our robust legal systems. Technology can eliminate some of the bureaucratic inefficiencies of the legal system in this setting, but it still contains robust patterns that should be copied.

4. You're assuming an open signup process which is vulnerable to DoS/brigading tactics. Maybe there's a way to allow open signups too (with reputation systems), but it's not strictly necessary.

5. Various reputation systems can be overlaid on this, and this interacts well with transparent judgments/appeals process, ie. someone with a long record of violating conditions and losing appeals would be less likely to be given mod power (but never 0%).
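The probability argument in point 1 can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions (a 5% bad-actor rate, independent random draws), not measurements from any real community:

```python
from math import comb

def p_bad_majority(p: float, n: int) -> float:
    """Probability that more than half of an n-member mod panel,
    drawn independently with bad-actor rate p, are bad actors
    (binomial tail sum)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Assumed 5% bad-actor rate: a lone mod is bad 5% of the time,
# but a bad majority becomes vanishingly unlikely as panels grow.
for n in (1, 5, 15, 51):
    print(n, p_bad_majority(0.05, n))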

In general, today's moderation systems are intentionally vulnerable to a number of problems, because sites optimize for growing a user base rather than fostering community, because that's how they raise money.

Consider something akin to stack overflow, which randomly shows you messages to review or triage. Every now and again you get 5 messages or mod decisions to review, and you vote your approval/disapproval. This narrows the gap between traditional mod status and user status, where true mods would be relegated to reviewing illegal content that places the whole community in jeopardy.
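A minimal sketch of that review-queue idea, under stated assumptions: the batch size of 5, the 60% approval threshold, and all names are hypothetical choices for illustration, not Stack Overflow's actual mechanism.

```python
import random

def assign_review_batch(pending_decisions: list, batch_size: int = 5,
                        rng=random) -> list:
    """Draw a small random batch of pending mod decisions for one
    reviewer, so no single person sees or controls the whole queue."""
    k = min(batch_size, len(pending_decisions))
    return rng.sample(pending_decisions, k)

def tally(votes: list[int], approve_threshold: float = 0.6) -> str:
    """Uphold a mod decision once enough reviewers approve it;
    votes are 1 (approve) or 0 (reject)."""
    if not votes:
        return "pending"
    approval = sum(votes) / len(votes)
    return "upheld" if approval >= approve_threshold else "overturned"

batch = assign_review_batch(["removal-101", "ban-17", "removal-204"])
print(batch)                # a random subset of the pending queue
print(tally([1, 1, 0, 1]))  # "upheld" at 75% approval
```

Random assignment is what makes this brigade-resistant: a bad-faith reviewer cannot choose which decisions land in their batch.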

Of course, there might also be considerations for avoiding the tyranny of the majority, but my point is only that the space of possible moderation strategies is considerably wider than most seem to think.

[1] https://www.sciencedirect.com/science/article/pii/S037843711...

I mean, what you're arguing is to essentially turn forum administration into miniature governments. And as reality has proven, government is very easily gamed by people seeking power. You seem to make a lot of assumptions which, as someone who has acted as a moderator in the past, I can say do not bear out.

Have you acted as a moderator before? What are some communities which you believe have the idealized form of moderation? Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.

> I mean what you're arguing is to essentially turn forum administration into miniature governments.

Forum administration already is a limited government, typically authoritarian in current incarnations. Mods are the police, judges and juries. Works fine if you have the resources and mods are fair, and maybe that's typical for minor infractions, but the conflict of interest is clear.

Authoritarian moderation doesn't scale though, and you disenfranchise a lot of people with every mistake, particularly since a) there's rarely a transparent appeals process, and b) people don't typically like owning their mistakes. Doubly so when "it's my platform, so I can do what I want with it". Maybe that's not something you care about, but given the increasing importance of social media to democratic government, it's a problem that will likely worsen.

> And I would hope that as reality has proven, government is very easily gamed by people seeking power.

Government is a system of rules for governing people's interactions. A moderation system is a system of rules for governing people's interactions on a specific site. You can't speak of these things as if they're that different. Either a system of rules is vulnerable to exploits, or it's not.

> Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.

I mentioned stack overflow specifically for the unintrusive and random review process and nothing else. SO doesn't feature any of the other ideas I listed.

Finally, I have no "idealized form of moderation" in mind, I have knowledge of where existing systems fail, and how other systems have already addressed very similar problems. Designing a secure and scalable moderation system is a big task, so if you want to hire me to research and build such a system, then I will be happy to address all of your questions in as much detail as you like.

Like moderator jury duty?

Yes, there are many possible options here for random mod status, limited terms, etc. I review a couple here:


> There are roughly two views on moderators. Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening.

You can also split moderation along another axis, dividing on whether moderators are curators or janitors.

The janitorial view would be that moderators should be generally hands-off, acting only in clear-cut cases of abuse or spam. The community does or should mostly run itself through social norms, and heavy-handed moderation is unnecessarily and unfairly restrictive.

The curation view is that moderators serve an active role in building the community, and they should generally act strongly to define and preserve community norms. The community is the moderators' space first, and a free space second if at all.

> Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening.

As a moderator on another forum (not reddit), I disagree. Many of the moderation actions I take are visible to ordinary users, and that's on purpose. For community norms to be maintained, they have to be visibly enforced: enforcement is not just for the particular bad actor, but also for the ordinary users who genuinely want to respect community norms, so they know what the boundaries are (and therefore know how to respect them), and can see that people who violate those boundaries are dealt with (so they have confidence that the norms are meaningful).

> Ordinary users think that moderators are power-mad bullies who ban and delete as they see fit, with no accountability.

Again, I disagree. If moderation actions are often visible, as above, ordinary users who genuinely want to respect community norms can see what is actually being enforced, and over time, if the moderators are doing a reasonable job, ordinary users will see that the pattern of enforcement reasonably matches the stated norms that are supposed to be enforced. Most ordinary users are reasonable and don't expect perfection, but they do expect consistency and reasonable judgment.

The biggest problem I see facing good moderation is that moderators can't be everywhere at once. One way to address that is to give ordinary users a way to report problematic posts, to bring them to the attention of the moderators. That also gives ordinary users another way to see how moderation is being done, because they can see what is or is not done in response to their reports.

There will always be some people who are never satisfied, and who will find something to complain about no matter what moderators do. But I don't think most ordinary users fall into that category.

Also, about "no accountability": ultimately, as a moderator, I'm accountable to the owner of the site. Similarly, the HN mods are ultimately accountable to the owners of HN. So the corresponding question for reddit would be, who owns reddit?

> 90% of a member of the public's interaction with the police is a policeman exceeding their authority, not being interested in helping with a problem, etc

What complicates matters even more is that the interaction usually involves fear at least on one side. In a peaceful society, a regular citizen doesn't interact with police officers day to day. If you capture the attention of an officer, it usually means either something bad happened to you, or you're suspected of doing something wrong - both of which put you in a "fight or flight" state. I suspect that the most common interaction between a westerner and a police officer involves said citizen breaking traffic rules, which is a pretty antagonistic situation from the start.

Having been a moderator of a large, 60k-ish person subreddit for a couple years, I can assure you that that's not the case: removing spam and other obvious rule breaking (sending users death threats, racial slurs, etc) took no more than a few minutes a day. Well over 90% of my fellow moderators' time was spent exceeding their authority: harassing people they just disliked, removing posts for personal biases, and so on.

It's a job that attracts that sort of person, people who just want things to function smoothly will get fed up with it quickly, the only people who stay at it will be the ones who want to mold the community to their vision (whether it wants to be molded or not).

This assumes good faith tested to the point of cynicism, on the part of the police, when it's been shown time and again that police are indoctrinated into a regime of institutional bias from the moment they enter the force.

I see the logic in your statement, but it's an imperfect model.

The analogy is really insightful, but also makes me slightly uneasy. When a moderator gets it wrong, you can't comment on an internet forum. When a police officer gets it wrong, you're dead.

Which is why we, in principle, regulate the police much more than moderators. The classical ideal of policing is that the most the police can do is apprehend you and put you in front of a judge and jury. The jury are specifically not people who spend all their time with criminals, so they should be able to act as an effective check on the policeman's instincts. Then we have all sorts of procedures and laws and rules of evidence and oversight and so on, to try and guide the police into doing their job well, and protect people who are caught up by mistake.

Moderators operate purely on the Judge Dredd model.

But then, in reality, the police operate in a 90% classical, 10% Dredd sort of way (for values of 90% and 10% that vary by location). They can mete out small punishments without going to court, oversight is not very effective, etc.

> When a police officer gets it wrong, you're dead.

Or, as often happens, they're dead. I don't envy members of a profession that have to get every life-and-death decision right in real time on a daily basis.

That's a fair point, although statistically the job of police officer doesn't crack the top 10 in most dangerous careers.

Similarly, a vanishingly small proportion of citizens (as opposed to bad guys) have a life-altering bad experience with the police.

It'd be interesting to see real numbers, but I suspect officers on the beat get the short end of that stick.

The latter is why policing often fails in the absence of serious, dedicated, time intensive protocol development and (more so) training dedicated to countering this effect.

>If you spend 90% of your time dealing with really awful people, you will get messed up.

I try to remember this when interacting with law enforcement. If you have to call the police, you're having a bad day. If you're a police officer, your job is dealing with everyone's bad day.

I know what you're talking about. I was also a mod myself on /r/Suomi (Finland) and quit after a rather serious incident. Other mods and users on far-right subs started to compile a list of European subreddit moderators whom they deemed 'SJWs' and tried to gather as much personal information on everyone. Someone from that group leaked the documents to us and it was quite alarming. The scale, effort and hate that went into those documents was the most surprising thing.

Anyways, when our main mod got approached on the street and asked if he was /u/nickname on reddit and he replied yes, he got stabbed. That was the day I quit.

I’m sorry you and your friend dealt with that.

It’s not just geo subs that have this problem. A friend of mine who mods a number of comedy subs (of all things) showed me his mod mail that appeared to indicate systematic brigading from far right discord servers.

They post extremist content in a bid to get the sub banned, and if that fails, they escalate harassment campaigns against the moderators.

Horrifying. Is there a news story on the stabbing?

This is the best argument I've seen for making it publicly anonymous which moderator took a specific action and maintaining a large pool of moderators.

Bad actors can harass a community as a whole (and they do) but it's much harder for them to target specific moderators. The flipside is that moderators have to hold each other accountable for impartial enforcement of the rules.

Yikes. I am the moderator of r/Metric among other medium sized reddit subs. It turns out there are people who really hate the Metric system and see it as part of a globalist conspiracy. All of the behaviour you describe is familiar, though thankfully hasn't risen to the level where police needed to be involved. Our worst offender has thankfully recently focused on Brexit and left us alone.

I totally agree that lack of response to ban-evasion has been a problem. Even Wikipedia does better than Reddit at managing user bans.

The amount of damage that bad actors can do on reddit is disappointing. It also doesn't seem to be getting any better. In the 9 years I have been modding the same problems are still present.

The poweruser situation is also frustrating. The reddit moderation system and the ranking of moderation power by seniority really encourage cabals. It has been tough to keep some of the subs I moderate from being taken over by power users. You invite one mod to help with spam or problem users and all of a sudden you have 5 more mods, all of whom moderate dozens of subs. Slowly the power users move toward the top mod position, and they never bring in fresh mods, only other power users. They will also try to remove any existing top mods to entrench their control. The system is killing the democratic nature of reddit by concentrating moderation into a narrow group. Given Reddit's position that moderators effectively own the community, it is very difficult to resist this kind of takeover. My advice: never invite anybody who is already modding more than 100K users in other subs to be a mod on your sub.

Hacker News suffers from the same problem. We have folks like dang (God bless him) manually dealing with assholes. Seems like we could solve these problems with technology, but here we are two decades into the 21st century and we still have humans cleaning the pools.

I'd be interested to know if actual forum moderators think that we could solve the bad actor problem with algorithms...

Hello, bad actor here.

I only use Hacker News and Reddit. And I always manage to get banned every few months, so I guess I'm a "bad actor".

Usually I start off OK. I can rack up 1000 points on HN or 20k karma on Reddit quickly because some of the discussions are interesting and non-inflammatory and I have interesting things to say.

But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.

It is hard to detect because I think anyone has potential to become a bad actor in the eyes of a mod. There is no such thing as an unbiased mod. They will rub some people the wrong way with their comments or actions.

Some personality types are just not compatible with what mods want to see in their well-tended gardens. I've never stalked or hounded anyone, and I've disagreed with dang's assessment of my posts once or twice (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him), but this appears to be a problem techies cannot solve. Why do I say this? Because it's been going on since the '80s with Usenet. Almost 40 years of trolls, and tech hasn't solved it, but it has created some systems that have worked to keep the weeds out of the gardens. But gardens still need weeding.

> But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.

Kurt Tucholsky gave great advice on how to write a letter to a government agency that is applicable to all potentially heated discussions:

* write letter.

* put letter in drawer.

* wait three days. don't look at letter in drawer, write a new letter and send that one.

This is absolutely true, and it's unfortunate that the current format of sites like Reddit and HN discourages this kind of behavior. The comments that get the most votes are the ones that have been present the longest, which incentivizes early and fast, "shoot from the hip" commenting so your comments get into the flow early where people can see and vote on them. In most large subreddits, it's basically not worth your time to comment on anything more than 12 hours old or so: no one but the original poster (and sometimes not even them!) will see or read your comments.

I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.

> I'm not sure how best to fix that.

At a certain scale, I think threaded conversations are incredibly difficult to follow without a voting system to sort it out, and once you introduce a voting system you end up with voting system problems as you mentioned.

I honestly think that flat forums are more fit for purpose. Multiple concurrent conversations end up being a bit messy, but there was at least a reasonable chance that someone would reply to any given post, since everybody in the thread was on the same page.

Heck, you can even gamify engagement with a flat forum - one of those that I still frequent allows you to "react" to any given post. It doesn't actually do anything except have a number beside the post tick up, but people still use it.

> I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.

Isn't that a standard subreddit setting? I don't think that actually affects the problem you were describing. I think it's only meant to keep the vote tallies from influencing user behavior (e.g. bemoaning how many up/down votes some comment got, being extra motivated to karmawhore).

In my experience, quick-feedback scores seem to have a negative influence on people's behavior and emotional experience. IIRC, HN used to show comments scores, but stopped some years ago. I personally try to disable such "features" as much as possible.

> The comments that get the most votes are the ones that have been present the longest, which incentivizes early and fast, "shoot from the hip" commenting so your comments get into the flow early where people can see and vote on them.

Reddit's default "best" sorting algorithm is designed to mitigate that effect [1]. It does a good job of not biasing old comments in terms of sort order, but it doesn't help the related problem that older comments tend to accumulate the most replies so newer stuff can still get drowned out in terms of quantity of other content.

[1]: https://redditblog.com/2009/10/15/reddits-new-comment-sortin...
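The linked post describes sorting by the lower bound of the Wilson score interval rather than by raw score. A minimal sketch of that formula (the function name and the ~95% z-value are my assumptions, not Reddit's actual code):

```python
import math

def confidence(ups: int, downs: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote
    fraction; z = 1.96 gives roughly 95% confidence. Sorting by this
    value keeps a handful of early upvotes from outranking a comment
    with a strong record over many votes."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# A comment with a single early upvote ranks below one with 100 up / 10 down:
# confidence(1, 0) ≈ 0.21, confidence(100, 10) ≈ 0.84
```

The age of a comment never enters the formula, which is what makes the sort order time-neutral.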

I guess one way to combat this is to just not worry so much about whether a comment gets lots of upvotes or whether or not it is even read. They’re fake internet points, don’t mean anything, and are not worth anything. If you feel you can contribute to the conversation, do so. Don’t worry about whether you’re commenting early enough or whether you are betting on the right “lucrative” thread. Your comment could be one out of a thousand—who cares? HN karma is not some kind of competition.

Hey, that's a great idea!

Instead of getting warned or banned, a user is placed in a timeout box where all of their posts get a three day lag and remain editable, and when the user logs in, they must re-read their posts before proceeding to the site. Kind of an 'in-your-face' reminder not to be a troll.

It wouldn't stop the insane trolls, but maybe it would give the borderline trolls time to think.

I like this idea too. Also temporary bans are good for offering a cool-down period for users who generally are constructive but tend to lose their tempers from time to time.

> (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him)

If anyone else is interested, here it is:


Some interesting stuff in there, though I would love to read more about the behind-the-scenes tools they use to search and keep track of everything.

I've long argued that the downvote is terrible for civil discourse. I've had to tread carefully and have been warned many times when I comment on it, even when writing in-depth constructive criticism rather than simply whining.

It's strange to me that, on a technology forum of all places, discussing the negatives or pitfalls of certain mechanisms is suppressed and censored, just like with politics; politics is inherently intertwined with everything, including non-action.

The upvote/downvote thing was way, way worse when vote counts were shown per-post. That went away years ago and I think the current system seems to work pretty well when viewed through a macro lens. (In the sense of "is this a good community overall?" and "would it likely be worse if the up/down voting were entirely removed?")

I'm on another forum that opted for an up/down vote system and showed counts per-post much like news.yc used to. It created so much drama and anxiety that people were openly antagonistic towards each other over what amounted to fake-internet-points.

The solution the admins came up with over there was "keep the counts shown publicly" but also "make public the specific votes that were up/down on any given post" (with an anonymization of all historical downvotes before the policy change, but not afterwards). Within days, the community adapted and it was a huge net benefit, IMO.

What benefits do downvotes provide over just an upvote sorting the best content to the top and ability to report a link or comment for some greater infraction than difference of understanding?

Downvoting for being blatantly inaccurate or blatantly anti-community seems a worthwhile community-sourced signal for the posts of bad actors (whether temporary or permanently bad actors). This should absolutely factor into the sorting and eventual hiding of sufficiently negative posts.

Speaking as someone who thinks of himself as a good actor: I will reflect on posts that attract downvotes and try to figure out if I should have been more constructive. I don't "care" about the score per se, but I do care about the community I'm part of, and if people are telling me I'm being an asshole in a given post, I should reflect on that, decide if I agree with them, and if I do, change next time. (Often I think my content was fine and someone just disagreed and used a downvote to express that. That's not how I use downvotes, but if they do, so be it.)

I like the stackexchange approach, where upvotes count 10x toward karma, because no matter how obviously wrong and indecent the downvoted content, we still care about the share of good answers, and more fine-grained controls (sometimes manual) can take care of the rest. This isn't perfect: 1:10 is an arbitrary proportion, it might give a wrong impression of being successful, and it might take too long to register a downward slope. But very importantly, it detaches voting on content, which is immediately visible, from karma, which is immediately personal and maybe not that important to other users.
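To make the asymmetry concrete, here is a toy tally using the values commonly cited for SE answers (+10 per upvote, -2 per downvote; the exact weights vary by post type and have changed over time, so treat them as assumptions):

```python
def display_score(ups: int, downs: int) -> int:
    """The number shown next to the post is a plain difference."""
    return ups - downs

def reputation_delta(ups: int, downs: int,
                     up_value: int = 10, down_value: int = 2) -> int:
    """Reputation weighs upvotes far more heavily than downvotes."""
    return ups * up_value - downs * down_value

# A contested answer: visibly negative score, yet positive reputation.
# display_score(5, 8) == -3, reputation_delta(5, 8) == 34
```

The point of the decoupling is that a post can look unpopular to readers without its author feeling heavily punished for having written it.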

I had a funny incident where I misunderstood a comment, thinking that "all your answers are negative" was very true, because I often disagree and try to be contrarian. SE has a policy that discourages discussion, so if users find it uncomfortable having their views challenged when they really wanted an easy answer, however simplistic, that may be by design.

The situation is different when discussion is the aim of the game. And it's a bit disingenuous to say that SE doesn't have "discussion"; rather, they avoid controversy, heated arguments, and open-ended debate.

Anyhow, regarding your comment that downvoting is terrible for civil discourse, I agree inasmuch as it's often hard to tell what a downvote signals, since it might lump many different opinions into one false agreement. Say five different opinions compete, but only one takes on all of the others and accrues downvotes for the attacks, whereas the rest get basically ignored by competitors for being irrelevant but still yield feedback, so that the vote is really one of popularity, as opposed to ...

Really says something about diplomacy. And I hate it so much.

If you look at Hacker News, however, a lot of nonsense makes it to the front page: one-word titles, clickbait, old links to Wikipedia articles, submarine advertising, something no one has heard of announcing a new version, something few care about announcing it is shutting down, a point release of Rust, a program that is only relevant because it was written in a certain language, pointless trivial side projects that only seem relevant to people unaware of the same idea having been done before, etc.

I think a lot of that wouldn't make up such a giant part of the site if there were downvotes on stories.

The downvote/upvote count thing should be informative, and the display preferences based on user's choice. I find the HN practice of dimming downvoted comments downright Orwellian.

You can always click on a faded comment's timestamp to go to its page, where the comment should be in a readable font. I'd love to know what Orwell would say about requiring an extra click, but oh well.

> I'd love to know what Orwell would say about requiring an extra click

He would say probably something along these lines: "The chief danger to freedom of thought and speech at this moment is not the direct interference of … any official body. If publishers and editors exert themselves to keep certain topics out of print, it is not because they are frightened of prosecution but because they are frightened of public opinion."

Your "click to read" today is the "buried in page 12 in pt-6 font" of the past. Same train of thought.

I was banned from r/hotsauce because I posted a coupon code for a brand I was interested in trying. They were trying to sell stock to help mitigate the downturn from COVID-19. The mod said I was an advertising shill and needed to be purged... As if people posting their collections all day didn't count as advertising.

A lot of subreddits end up being snakes telling their tail how it's the best tail. A lot of those can turn into ads as people start making whatever brands to get in on the circle jerk. Some subreddits I suspect go further and have companies working with moderators to do very potent marketing without anyone drawing attention to it. /r/mechanicalkeyboards, /r/gadgets, etc, the list goes on.

And my desperate attempts to become unbanned were met with paranoid skepticism and unfounded rebuttals. I had literally only posted one other time, showing off a meager 10-bottle collection. I was trying to help the community by sharing a discount code, to start a discussion about the aforementioned sauces, and to help a struggling company; I didn't really see a problem with it. The community liked the post and engaged, but somehow I was a first-offence marketing shill worth banning for life.

Not sure what I'm missing here. You clearly did not read the subreddit rules before posting. However, the fact that you have no recourse after being banned, other than debating a mod, is a clear issue.

Another "bad actor" here. I think that it's a combination of two phenomena:

* I have accounts not just here, but also at places like Lobsters and Something Awful. In those places, because accounts are rare and can be banned so easily, discourse is constantly trying to stay much more civil than here or Reddit.

* As a former community moderator, I don't respect moderation actions on sites where anonymous signup is allowed. You asked for hoi polloi to wander in off the street and give their opinions; you can't then wonder why discourse is trash. Here, it's even worse; the moderators are paid for their work, which lends a clear bias to every moderation action. Similar happenings on Reddit led directly to user protests and revolts, and it's amazing that the community tolerates paid moderation here.

The idea of the well-tended garden is a potent one. I have had to tolerate obviously toxic but helpful people before and it is always irritating to not ban them, despite knowing that they are good for the garden.

> I don't respect moderation actions on sites where anonymous signup is allowed.

We don't put barriers to signup because we want it to be easy for authors, experts, and people with firsthand knowledge of a situation to step into a thread. Those are some of the best comments HN receives. If you put up barriers to keep out hoi polloi, you end up keeping out the likes of Alan Kay and Peter Norvig too, and plenty of lesser known people who have made first-rate contributions.

Besides that, there are legitimate cases when throwaway accounts are needed in order for a person to post on a topic, often when they have first-hand knowledge of a situation as well. How do you allow that while keeping out trash?

Obviously, if there were a way to allow the above good stuff while keeping out trolls, toxic comments, etc., that'd be grand. But as long as there's a tradeoff, I'd rather have the long tail at both ends—I think the forum would be more mediocre and stale without it.

p.s. I'm puzzled by your comment about paid moderation. It seems to me that unpaid moderation would be more likely to be biased, since people are going to extract compensation for the work in some form or other. If it isn't money, it's probably going to be power or an ideological or personal agenda, or something else that manifests as bias. In any case I'd be curious to hear what sort of bias you think is showing up in mod actions on HN.

> I have had to tolerate obviously toxic but helpful people before

I understand where you are coming from here. I struggle with this. I think there is a legit theory for it, usually given in the context of how to reconcile shitty behavior of geniuses (Picasso comes to mind: legendary artist, shitty human.)

Even if toxic people have something good to say once in a while, do the ends justify the means if they stomp all over the roses in the process?

> You asked for hoi polloi to wander in off the street

The garden analogy is potent because where I live there is a huge rose garden that anyone can wander in off the street and visit. Some people come in and do stamp on the roses. And it sucks for everyone else. Which is why I can understand the desire to keep those people out.

However, shouldn't the gardeners KNOW that there are and always will be shitty humans?

I'm truly ambivalent on this one: I want to participate, but I lack impulse control, so I'm excluded. That's not fair. And if I was tending a garden, I'd want to keep the "me"s out.

> I want to participate, but I lack impulse control, so I'm excluded. That's not fair.

Yes, it is, because the problem is not the garden, it's you. You want to participate, but you don't have a basic skill (impulse control) that is required for participation. It's like saying you want to be a concert pianist, but you don't know how to play the piano, so you're excluded and that's not fair.

That is why I said I'm ambivalent to the previous comment's statement about benefits from toxic personalities.

I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity), if the latter could be controlled it wouldn't be impulsive.

And I admit that is a big gray area. There's a continuum of toxicity online, and there are going to be some moderation rules that are subjective.

Unlike a pianist, I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people. A visitor can't get better at not being blind. Of course, the analogy breaks down because blind people aren't adding noncritical discourse (aka what one mod may consider "flamebait"), but now we are back to subjectivity and affordance as to what is noncritical. We clearly know how to make the web accessible to blind people, but we don't have a universally clear way to make discourse available to people who sometimes suck at it.

However, I can create as many accounts as I want, so I got that going for me.

> I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity), if the latter could be controlled it wouldn't be impulsive.

First, we're not talking about a binary distinction; things aren't either "can control" or "can't control". It's a continuum.

Second, if it's really true that you can't control your impulsive behavior, that still doesn't change the fact that that behavior will make it virtually impossible for other people to deal with you in certain contexts. It's still up to you to recognize the impact that your behavior has on others, and to make choices about what you can realistically do or not do--or about how much work you are willing to do or how much risk you are willing to take to be able to participate in certain activities (for example, if it turned out there was a drug that enabled you to control your impulsive behavior, would you take it in order to enable you to do something you wanted to do?).

> I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people.

Ok, so what "alternate or semantic constructs" could the programmers of HN, for example, put into their code so it won't exclude people who can't control their impulsive behavior?

> we don't have a universally clear way to make discourse available to people who sometimes suck at it.

It's not that we don't have a "universally clear way" to do this. We don't have a way at all. "Sucking at discourse" is simply not something we know how to accommodate for. The only way we know of to deal with it is for the person who sucks at discourse to learn how to not suck at it.

Perhaps at some point we'll have an AI or something similar that can mediate such discussions so all parties can participate. But we don't have anything now.

Stop making this personal.

> so you are excluded

from what, playing the piano? Do you maybe see a connection here to why somebody might not know "how to play the piano"?

Or in other words: A garden without "you" is not really a garden, except in theory, if the proverbial tree makes a sound when nobody can hear it fall. That's a slippery slope argument.

Many people may lack impulse control, but preemptive judgement can't weed them all out. That's one reason why it's "not fair". It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same. It is, however, futile to conclude that life is simply not fair, because then "you" have already lost.

If entry is gated to suit the gatekeeper, it is not an open garden anymore, open to the public. At least not if the admission requirements are arbitrary to an uncertain degree. Maybe it's the wrong approach to assume that internet discussion is unimportant and that impulse control is therefore let down too easily. But then again, the impulse to post or visit at all might be the problem to begin with, as in this post.

Really, who's aspiring to become a concert pianist in this day and age? That's a weak rhyme, unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument.

> Stop making this personal.

I didn't; the person I responded to did, by using the word "I". They were specifically talking about themselves.

> from what, playing the piano?

From being a concert pianist. Read what I actually wrote.

> It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same.

My statement that impulse control is a basic requirement for participation applies just as much to moderators as to any other participants.

Who gets to decide what the forum rules and norms are is whoever owns the forum. That's as fair as it gets.

There are some forums where lack of impulse control isn't much of a problem, because nobody else on that forum has it either. So strictly speaking, I should have restricted my comments to forums where that is not the case. I don't think that makes much difference in practice for this discussion, since as far as I can tell the forums where lack of impulse control is the norm don't have moderation problems since they don't have moderation at all.

> who's aspiring to become a concert pianist in this day and age?

Googling "how to become a concert pianist" gets plenty of hits, so it looks like plenty of people are trying to help aspiring concert pianists. Perhaps they're all speaking to an audience of zero, but I doubt it.

> unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument

You're going way off into left field here.

Let's finish the analogy. Rose gardens and other community parks are usually community-funded; my local gardens are funded with taxes. There are not only paid moderators (police), but paid curators (gardeners and arborists) who deliberately build up and cultivate an appearance for the garden. Some of the more expensive gardens, like the local zoo, also have an entrance fee, because taxes alone would not fund the garden at its given size and occupancy.

There are communities like this; Something Awful is the first which comes to mind. These communities deliberately acknowledge that money is required to fund community spaces, and use the money to improve the space.

There are also extensions to the analogy. A local park has a bulletin board. Postings to this board are generally made by community consent; anything that any community member feels strongly enough about can be removed immediately. This is also how postings on telephone poles work. Sometimes a community will lock up their bulletin board after a wave of abusive listings. This is analogous to primitive message board moderation, as seen here on HN.

Are we here to advertise to each other, like on a bulletin board? Are we here to produce a great knowledge base, like in a garden? What should the shape of conversation be?

Banning is more like pruning weeds than pulling out weeds.

As I was typing this, I was thinking about moderation of HN but was cautious to comment on meta.

Speaking generally, without real-world consequences for violating community TOS on a service there are no teeth in tracking bad actors, banning them, etc.

For a long time, I loathed Facebook's real-name policy. However, to some extent I suspect that the identity validation and the attribution of comments and actions to a real person do have a limiting impact on casual trolling and harassment on that service.

Facebook's real name policy is a complete failure. And professional PR, "influencing", and astroturfing are endemic, especially on political groups.

It seems to be trivial for bad actors to hack/farm unused accounts and impersonate people almost at will. Meanwhile nation state-level and/or corporate bad actors have the resources to bypass or subvert the id validation and create fake identities and accounts at industrial scale.

I don't think real ID is a solution. Not even if it's linked to some kind of robust physical key - which is obviously going to have privacy implications anyway.

AI or automated searching for troll-like activity, most likely followed by manual oversight, is more likely to be successful.

But there's the lingering question of whether FB is really interested in pursuing this with any effectiveness or enthusiasm. Given FB's resources and its lack of success to date, the answer seems to be "no."

Don't think the real-name policy helps that much, to be honest.

Just in the last couple of days of me commenting on FB I've been called a "retard" (sorry for the word, but that's what the lady used to describe me) and a "troll" (presumably Russian), this latter description accompanied by said commenter saying how I only write down dumb stuff which shows that I don't think before writing (but not saying where and especially why he thought I was wrong in my statements).

Compared to all that, commenting almost anonymously on HN seems like a breath of fresh air. Of course, in the many years I have been commenting here there have been ups and downs (a couple of very low points for me were the reactions after the Boston marathon and the endless banana references after the Fukushima disaster), but other than that, when one is told that he/she is wrong, the person saying it usually comes with his/her own reasons, which helps move the conversation forward.

I would like to hear what LinkedIn moderators deal with. I wonder if it is easier/harder/same as Facebook or Reddit.

There might be some interesting data here supporting real-name policies. I don't say anything on LinkedIn because I'm scared anything I say can and will be used against me when job hunting.

Also: thanks for modding /r/portland. I love PDX. Just wish it was sunnier: it would be the perfect city.

This problem is really easy to solve. Make your forum invite-only and have a ban on a user revoke the ability of the person that invited you to invite new users. Do not make the links between who invited whom public. It's for moderation only.

You can still have throwaway accounts for commenting on things and the overall community will be increasingly more civil.

That would work. However, it could also make the community hard to grow.

I surfed HN for years before I decided to make an account, and even now I don't really know anyone else using it in my circle. I guess I might have found someone if I asked around, but it would definitely raise the bar.

Making communities hard to grow is a feature not a bug.

Let us read our Shirky:


A well-tended community is constrained by things like Dunbar's Number and signal-to-noise ratios.

Soft forking is a common response - IMO reddit is the closest social platform to "getting it right", although it should do a better job of pushing low-value content (/r/politics, etc) down into the "minor leagues" of subreddits.

Yup. The key is restricting access. Another good option is to charge for an account. The Something Awful forums are the best large community I'm aware of on the Internet, and I believe it's mostly because of the $10 entry fee and the excellent moderators. When you have to pony up hundreds of dollars a year to keep shitposting, you tend to get fewer shitposters.

> Make your forum invite-only

> You can still have throwaway accounts

How can both of these things be simultaneously true?

Say you or a friend has an account in good standing. Get an invite link with a unique token. Sign up with the link.

Done. The inviter doesn't know the username of the invitee. It's only visible to the moderation team who invited whom.

Assertion: A throwaway account is probably more likely to be banned than an identified account -- why else would anyone bother with throwaways?

If the throwaway account is banned, the original link-giver would lose their good standing. (sorry, I should have highlighted this to the original reply)

We can argue about whether the assertion is actually true, but even the perception that it might be true will make people reluctant to give out an invitation to someone wanting to create a throwaway account.

If I had my own named account in good standing, I suppose I might be willing to use it to create a throwaway account for myself, provided that I was careful to only use that throwaway for... I don't know... "lightly" controversial content that is only likely to be downvoted rather than abusive content that is likely to be banned. (Not that I would ever actually create a throwaway in order to be abusive! Just trying to think like a troll).

Actually, I guess that's what we wanted to encourage anyway, right? Controversial content should be fine; abusive content is not. Maybe this would work after all...

Case 1: Your assertion is correct. The system works as intended and bans decrease.

Case 2: Your assertion is incorrect. There is some other reason for making a throwaway (like talking about a former employer, for example) and the system works as intended. The throwaway doesn't get banned and nobody gets their account demoted.

Also these demotions could have a time element to them, where you can't invite someone new for, say, a year.

How would you account for people being invited, then immediately creating a bunch of alternate accounts to hand out to people/use for nefarious purposes later?

The fact that so many people have the need for throwaway accounts entirely disputes the concept that this problem is easy to solve.

The upvote/downvote system is designed to reinforce echo chambers and create a chilling effect for your “real” opinion if you know it doesn’t agree with the hive.

That system is already trying to solve the problem and instead creating a different one.

All you can do with an invite system is to make sure the echo chamber is even more structured. That “those people” aren’t even allowed to join. Hello country club.

In this system you could still have throwaways, but the throwaway has to be invited/created from another account. This way you can create as many throwaways as needed but if you behave badly with your throwaway and it gets banned, then your main account is prevented from inviting new users and creating throwaways. Trolls could create new, primary, accounts but someone would still need to invite them back in. I definitely don't think that this would eliminate them, but it could potentially slow them down.

And because the moderators have the tree of invites it's pretty easy to determine which branches are yielding troll accounts and prune further up if necessary.
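The "prune further up if necessary" idea reduces to counting banned accounts per invite subtree. A hypothetical sketch (not any platform's actual tooling):

```python
from collections import defaultdict

def subtree_ban_counts(invited_by, banned):
    """For each inviter, count banned accounts in the subtree of users
    they (transitively) invited. High counts flag branches to prune."""
    children = defaultdict(list)
    for invitee, inviter in invited_by.items():
        children[inviter].append(invitee)

    counts = {}
    def count(user):
        if user in counts:
            return counts[user]
        total = sum(count(c) + (1 if c in banned else 0)
                    for c in children[user])
        counts[user] = total
        return total

    # Roots are inviters who were themselves never invited.
    roots = set(invited_by.values()) - set(invited_by.keys())
    for r in roots:
        count(r)
    return counts
```

Moderators could then demote or rate-limit any account whose subtree count crosses a threshold, rather than chasing individual throwaways.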

Thankfully the HN moderators are paid employees--I can't imagine doing that kind of work for free.

Dan has better things to be doing than this; he runs the entire site. It's not like individually dealing with jerks is meant as his whole job.

Google, Facebook and Twitter are all trying moderation by algorithm.

See regular posts on this very forum about algorithms running amok, banning people without clear reasons, and accounts only becoming reinstated by being famous enough to get enough traction to get a human to review your false positive.

The problem is the “assholes” never get told they are assholes, just mysteriously rate limited for some offense never explained to them, such as having an unpopular opinion.

That sense of powerlessness doesn’t mitigate things, it sometimes escalates them.

To belabor your "cleaning the pools" analogy: it is not as simple as just cleaning the pool of grime brought in through normal use.

The shit-peddlers outside the building are constantly convincing pool-goers to join a shit-peddling pyramid scheme so that they bring boxes of shit in to the pool.

And besides that, some people just want to trash the pool for their own enjoyment.

This was my same thought. We should understand the size of the problem and whether technology can help. If tech can’t help moderating, maybe tech can help in other ways. For example, establishing an anonymous unique internet ID that allows you to be online: if you get three strikes, your ID is suspended and you can’t spend time online for the duration of the suspension. Of course the system should be built so that you can’t bypass the one person-one ID and so that it’s completely anonymous. Also, the ID should not be transferable.

I can't think of a world where I would trust such a system (or trust anyone to create/run such a system). Is this just for websites? Or does it encompass all use of the Internet?

At least as far as websites or individual platforms go, this is exactly what accounts, moderation, and "banning" already attempt to do. We've seen how difficult and expensive this is to scale when your site gets as big as Facebook or Reddit. I can't fathom a sufficiently "benevolent dictator" that I would trust to act as a gatekeeper for the entire Internet at large.

Perhaps a potential solution is PoW-based registration: registration becomes slow and CPU-heavy. I don't think there is likely to be a perfect solution to this, and it might be that it's better to slow down bad users and make ban evasion expensive.
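A hashcash-style sketch of what PoW-gated registration could look like (the difficulty value and function names are illustrative assumptions, not a real service's parameters):

```python
import hashlib
import itertools

DIFFICULTY = 20  # leading zero bits required; tune so solving costs seconds of CPU

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: str, difficulty: int = DIFFICULTY) -> int:
    # The registering client burns CPU searching for a nonce;
    # a throwaway-account farm pays this cost per account.
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: str, nonce: int, difficulty: int = DIFFICULTY) -> bool:
    # The server-side check is a single hash, so verification stays cheap.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

The asymmetry (expensive to solve, cheap to verify) is what makes mass account creation costly without burdening the server.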

This will just end up promoting the interests of already wealthy bad actors, and fomenting user cartelification behind a screen of anonymity. See also: Bitcoin

That's a very interesting idea, though it wouldn't significantly affect people with vast resources such as botnet owners, corporations, and governments. It would certainly slow down the garden-variety troll, however the determined troll would just keep a core burning to create a steady stream of sock puppet accounts.

>While there are things Reddit can be doing to help

one thing reddit could be doing to help is to stop relying on community moderators for so much. volunteer community moderators shouldn't be dealing with people abusing the reddit platform, that should be the job of reddit's staff. Instead of just building tools, reddit should be actively involved in the enforcement of platform rules to handle these cases, and leave the community moderators to focus on creating and maintaining their communities - ensuring content fits the theme of the subreddit, people are communicating with each other in a tone that fits the intended tone of the subreddit, etc.

Reddit needs people like gallowboob who are willing to do the drudgery of sifting through all the platform abusers, but from an end-user perspective it's easy to see somebody moderating 80% of the big subreddits as a problem, because it's not clear whether a moderator is actually influencing the community or just sifting through obvious abuse.

Thanks for sharing. Kinda sux, especially for all the work moderators do (and thanks to dang for the hard work on HN!)

Free speech is hard. Really hard. Allowing communities to express themselves in person (aka pre-internet) was much easier, because there was a cost to appear in person (time, society's perceptions, etc).

With the internet, truly anonymous speech has flourished, and much of it really is a waste of time or even damaging.

I hope as a society that we will be able to figure out this conundrum without eliminating free speech. I think that just like credit cards accept some bad debts, we have to be able to accept some bad actors in speech - the trick is limiting it without killing free speech.

To me, communities like reddit, HN, FB, twitter, etc are all huge experiments in free speech and how to manage that problem. Hopefully it turns out right - I don't want the future to look like East Germany.

Sadly, I think you guys are at the intersection of wielding influence and being a target, but not really having any opsec/personal security (or at least you didn't think you'd need it when you took the job). Idk if GallowBoob tied his identity to that nickname himself at any point, but there's only so much reddit can do if he did.

Aside from the likely fact that “his” account is run by multiple people considering it’s his actual job to repost on Reddit... he definitely “outed” himself for news articles. He enjoys the fame and not the consequence.

> performing anti-ban-evasion

Honestly, this is something Metafilter did pretty well: the five dollar account fee makes a ton of abuse tactics more costly to deploy, while simultaneously funding efforts to combat it.

It would go a long ways I think to at least allow mods to restrict subreddits to read-only for unpaid accounts.

100% agree. Metafilter has other major flaws that prevent it from having a meaningful community size, but the barrier to entry is a crucial mechanic that keeps out bad actors and mentally juvenile people.

What in the world is wrong with people? Why do people expend so much effort doing these things?

It'd be awesome if we had anonymous reputation scores, like PGP or something, that other people could vouch for. Make bad actors pay to build up good reputation, then burn it down when they misbehave.

I really want a system like that to help filter the deluge of comments anyway. Though there's a danger of forming a filter bubble, I want to see commentary that is vouched for by those I respect. Not just on Twitter, but everywhere. As a protocol or data exchange format.

> It'd be awesome if we had anonymous reputation scores, like PGP or something, that other people could vouch for. Make bad actors pay to build up good reputation, then burn it down when they misbehave.

The whuffie concept relies on strong online identities. Without that, declaring bankruptcy is too easy and allows malicious actors to simultaneously harm others and boost their own rep scores with bots.

I can second this type of experience. When I (more actively) moderated the subreddit for my university, there were a few repeat offenders who took most of our time, and one who went out of his way to try and dox and threaten me specifically (which wasn't hard since I ran the account under my real name), and who I finally dealt with by involving the police.

Politically motivated reddit people are, or at least can be, scary.

>this problem is bigger than Reddit and focuses on lack of enforcement for digital actions that would qualify as genuine crimes of harassment if translated into the physical realm.

Part of the problem is that online communities rely on the fallacy that, because most people are good, such communities can thrive with some reasonable amount of guidance. But it only takes a relative few people acting in bad faith to cause great damage.

As such, I've long considered that the online world amplifies the sociopaths, bad actors, and worst among us to an untenable level, giving them outsized power and voice. I think it's driving our "real" society and culture in a negative direction to a degree that people vastly underappreciate.

Layer on top of that adversarial nations that actively use our online communities to divide and propagandize, and there's a very real question of whether we're better off without these communities.

More succinctly, many of the largest platforms that enable online communities seem consistently unable or unwilling to rein them in. And when you look at experiences like those you convey, it's little wonder. Attempts to moderate bad actors will invariably add to the problem, as they devolve the discourse further and generally seek to be heard, else burn the place down.

You make a good point. All online posts are equal, and that's a terrible thing because on popular sites a shouty bad actor looks just as credible as a real expert - especially if the former relies on dogwhistle rhetoric, deliberate confusion, and superficial point-scoring, and the latter is trying to respond with facts and reason.

Good moderation can keep out bad actors and promote an adult and civil tone, but it's insanely time-consuming and expensive.

There really needs to be a new legal concept - something like poisoning of free speech. It's one thing to have unpopular and unusual opinions and to argue them, but another to set out to knowingly and deliberately subvert and poison communities with calibrated lies and aggression.

There is no upside to the latter, and the face-to-face equivalent would usually have consequences. Free speech advocates online seem to believe that online communities are somehow magically strong enough to handle these threats automatically - but in reality they can be more fragile than face to face communities. Some formal appreciation of that might not be a bad thing.

> but it's insanely time-consuming and expensive.

one of the few cases where online and offline worlds converge on a common modality to deal with the same fundamental inputs. You don't know the valence of a new group member until they choose to reveal it to you, at the moment that most favors their leverage. This (among other things) tilts the odds of successful group influence in favor of the newcomers. Freedom isn't free and all that.

In person, people value civility to keep the shared benefit of social groups. In the digital world, you can easily find another group so lesser value is placed on civility. And that turns easily into the division you see in today’s digital world and society. And if you threaten the integrity of communities and societies you threaten the state and country upon which those are built.

I used to mod /r/answers. It took over my life, in a bad way. There is (or was) next to no support from paid staff on serious problems.

As a side question: As a moderator, you basically create the tone and feel of a subreddit. If a subreddit is successful, it's because of the moderators.

When reddit goes public, moderators get nothing. The employees of reddit will become de facto IPO millionaires, but moderators who create the success of the site get nothing. Has anything been discussed along these lines about how unfair this is?

Some mods have already openly discussed taking their subs entirely private and invite only in the event of an IPO. Reddit gets no ad revenue from private subs.

I think one of the major failures of social businesses on the Internet is that their business need for massive user growth forces them to upend basics of social interaction that we have taken for granted in the real world for eons.

If you create a new subreddit, anyone can show up and participate. Good actor or bad.

Can you imagine if you threw a physical party and let every single stranger in the world literally walk in your front door, wander around your house, and do what they want? You'd be a honeypot for thieves and vandals.

On the Internet, they don't take your physical stuff, but they harm the online equivalent: your information and attention. Every popular online forum is likewise a honeypot for scammers, shady advertisers, griefers, and other malcontents. Why wouldn't it be?

The typical suggested option is massive policing, but that's really hard in a world where a bad actor can don a near-perfect disguise (create a new account) in an instant.

I would love to see sites like Reddit and Twitter adopt a model closer to real-world social interactions: something like a web of trust where you need to be invited to participate in a space and where there is some level of vouching for you. But those kinds of businesses don't scale well so we almost never see them.

Another option would be to force something like real identities. The reason a bar can get away with just a couple of bouncers is because a bad actor can't as easily escape the consequences of their actions by shedding their identity. But, of course, requiring real identity gets in the way of good actors who use anonymity for good reasons (generally avoiding other bad actors).

It's a hard problem, but it makes me sad that almost every business just ends up doing "let everyone in and accept that there's going to be shitbags all over the place fucking it up for everyone".

There is no way to get unbanned on Reddit, even if your first post is innocuously misplaced, there is simply no mechanism to get unbanned. So what do you suggest people do?

Message the mods politely. If you made a mistake by posting something against the rules, apologize for that. Usually, mods are happy to unban you in that case.

But some mods become really jaded over time and are mean to users by default. I was moderator of a few mid-sized subreddits for a few years and saw that in action. A common thing I saw is a sort of purity mindset, where a moderator feels like the more people they ban, the better the subreddit becomes. It’s really easy to dehumanize people online - users do it to mods, and mods do it to users. But those bad moderators make users hate mods in general, and the cycle continues.

Not stalk and harass moderators.

How will this result in an unban? Nobody ever gets unbanned. What you said is emblematic of the problem.

If you believe stalking and harassing moderators is a valid response to anything moderators do, you shouldn't be unbanned.

Not at all what I said, I asked how will passively sitting by result in an unban? You are simply confirming the fact that there is no way to get unbanned, even if the moderators made a mistake, or if the person wants to actively participate. Reddit basically says if you get banned you must make a new account if you want to participate in that board still. So it's natural people make lots of accounts. Is there a mechanism to get unbanned? I'll ask you again.

Stalking and harassing moderators won't get you unbanned; instead, it confirms that there should be no place for you anywhere on Reddit.

You keep responding with completely unrelated text.

Hi, quick question - do you think stalking and harassing moderators will get me unbanned?

Create a new account?

"Reminder from the Reddit staff: If you use another account to circumvent this subreddit ban, that will be considered a violation of the Content Policy and can result in your account being suspended from the site as a whole."

It's against the TOS! What are you, a savage?!

I became a mod for one of the larger subreddit communities (10M+ subscribers) for similar reasons: I enjoyed the community and wanted to help out.

Same platform, but the experience couldn't be more different: we spend almost all our time dealing with "shallow" problems at a high scale. For us that means flamewars, brigading and (usually political) astroturfing, and to a lesser extent commercial spam. We don't even remotely have time to go looking for deeper problems such as ban evasion, because the volume is so high. We know it's probably happening but it's almost impossible to police because it disappears against the everyday noise.

One thing that I think helps us a lot is heavy use of automoderator rules and the spam filter to remove the most obvious problems automatically.

I think smaller and more local communities breed a very different class of problems than big ones -- it's much more personal for the participants, and their retaliation is more personal as well. Also, a single abusive user can do so much more damage in a small community because they don't just blend into the background noise.

One constant though is that we do see some really troubling things as well.

> freshman moderator of /r/portland

You poor soul. I stopped visiting that sub a few years ago. There was a pervasive negativity about it that was just depressing. I hope it's gotten better.

> [...] the Portland Police Department

BTW, it's Portland Police Bureau (PPB). Calling it PPD will get you labeled a transplant. ;-)

It's way too easy for bad actors to create throwaway accounts on Reddit. If I remember correctly, there's no captcha and no email confirmation required. I suspect a large swath of Reddit activity is bots and paid political actors.

Reddit allows bots because if it weren't for the product placement, the site would fall into obscurity.

4chan seems to have a superior system wherein there are a small number of mods who take recommendations from a large, churning group of janitors. These janitors can only suggest bans and must follow a template for each of their ban recommendations. However, the janitors can delete posts/images at will (reversible by mods).

Thus, the system is both evenly-applied (most of the time) and highly scalable.

Experienced same problems with aggressive individuals or groups trying to engineer or buy their way into digital properties. Found the whole experience very odd, since while I had no interest in making a deal, the amount of effort they were putting forth made no sense to me. Luckily, after they realized I would not agree, get tricked, etc — they moved on.

Hello fellow Oregonian, have you tried shadowbanning this/these disruptive individuals? How do you catch repeat offenders behind a VPN? Browser fingerprinting? (Anyone else know?)

Thanks for keeping r/Portland such a fun place to hang around in, really enjoy the creative community/people who post there!

Shut your subreddit down and detail your actions. Use a tool that doesn't put your moderators in danger.

A subreddit isn't worth your safety.

There are online places where they use a third-party to verify identity. It may not have the most adoption, but at least you're not putting yourselves at risk.

I would take an aggressive stance against these assholes that are vandalizing someone's property. It's petty, immature, and crosses a line. Physical mentorship is the only way to get through to pieces of shit like that.

> stepping into stopping out of control discussions.

I mean, if your goal is to "control" conversations, yeah, that's going to be a lot of work. What's the point of that anyway?

Maintaining a high signal-to-noise ratio so that the platform continues to be useful and interesting.

are you actually banning people or using automod to automatically remove their posts?

There's no benefit to banning the bad folks, since, like your guy, they just create a new account.

    author:
        name: [baddude1, baddude2]
    action: remove
    action_reason: "Troll / Spammer"
In my subs, this one condition does the most work. I also filter off low karma and new accounts to prevent spammers, trolls, etc.
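The low-karma/new-account filter mentioned here reduces to a simple predicate. A hypothetical sketch with made-up thresholds (not the commenter's actual values):

```python
from datetime import datetime, timedelta, timezone

MIN_KARMA = 10          # illustrative threshold
MIN_AGE = timedelta(days=7)  # illustrative threshold

def should_filter(account_karma: int, account_created: datetime) -> bool:
    """Hold posts from accounts that are too new or too low-karma,
    so a human moderator reviews them instead of auto-approving."""
    age = datetime.now(timezone.utc) - account_created
    return account_karma < MIN_KARMA or age < MIN_AGE
```

The point isn't to block these accounts outright, just to force their first posts through a review queue, which blunts fresh ban-evasion accounts.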

I’m baffled what could motivate someone to get so worked up or angry as to go find strangers from the internet in real life and I’m sorry that’s happening.

/r/portland is definitely a challenging place to moderate. Frankly I've never seen a local city sub quite like that one.

We are solving these problems at https://district.so. Dealing with trolls can be draining and it's a plague for a good community. We're working on solving that.

@bredren Would you want to lead /Portland District?

You don't describe your product or service on the site.

As such, this is spam for a vague mailing list.

Why do you keep the post?

I've only just begun and I don't walk away from challenging circumstances quickly.

The senior moderators are faster and better than me, so my actual workload is relatively light.

I believe Reddit will grow in its influence and that tools can and will be developed to improve the situation for community moderators.

Does enforcing a min account history + min karma to comment help?

Vandalism is wrong.

"Doxxing" public figures who try to anonymously control public discourse is not.

One of my posts to r/Portland was incorrectly removed with the excuse "not related to Portland." In reality it was related; however, the post was contentious and I was prevented from re-posting. The moderator didn't even pretend to acknowledge a genuine rule that was broken, just simply told me not to post it. After I complained I was muted from contacting the mods.

Not to blame you but this is the kind of BS that makes me uninterested in the Reddit community, and uninterested in Portland in general, rife with cancel culture.




I'm not sure what benefit sharing the screenshots would have, I'm not trying to out the mod or get vengeance.

The video I posted was about a Lake Oswego woman going nuts and verbally abusing some local PD who were handling the situation with elegance. My title was something like "this is why I don't trust when people senselessly dog on the police." It was removed because there wasn't anything specific to Portland or the surrounding area (not true but ok). I told the mod I would just repost with the title "this is why I don't trust when Portlanders senselessly dog on local police" and was told not to post. No rules broken, just intimidation and censorship.

I don't subscribe to the progressive/liberal/left/socialist ideology and I'm genuinely afraid to publicly share that fact in Portland. There are many people in PDX who will violently attack you if they catch wind of that fact. This isn't a secret. It's also not a secret that Oregon outside of PDX and Eugene are generally Red communities.

For instance, look at the protests in Portland last year. The counter-protesters were far more disruptive and violent than the visiting protesters, but the typical Portlander wouldn't see it that way. In my own discussions with them, they see Trump supporters et al. as being fascist and as causing violence simply by existing, thus justifying their pre-emptive violence.

>I'm not trying to out the mod or get vengeance.

Getting the actual conversation and actual article you tried to post gives a lot of context into why you believe you were cancelled. I don't want you to out the mods, I want to understand their side of the story.

I've heard your side, and it sounds like the content didn't fit the vibe of the subreddit, as per the mods. They told you that and you threatened to break their rules again. Yes, you were threatening to break their rules.

I'd say they gave you a chance, and given this particular story re: mod harassment, didn't want to engage with an aggressive user.

> No rules broken, just intimidation and censorship.

A moderator of a forum asking you to not do something is not intimidation. It's asking you to adhere to the vibe of the subreddit. The description of the article you tried to post was obviously an attempt at rabble rousing on your part. There are helpful and constructive ways to voice your opinion. If you have consistent trouble doing that, I would consider therapy to try and understand why people see you as aggressive when you try to share your opinion.

There's nothing wrong with asking for help when you're struggling to fit in socially, especially if your opinions diverge from the norm. Being different is HARD.

Which rule is "vibe" again?

Thanks for looking into my Reddit post creep, I feel totally weird about you now.


"It doesn't fit here" and "vibe" are in your head, none of that was ever discussed. The mod said "it doesn't belong here because it's not related to Portland" which is objectively false. As in, factually wrong. And I never said I would "do it anyway", I was trying to find a title that satisfied the mod.

I'm tired of this conversation anyway, just another corrupt person in power bending rules for their own favor, covering up injustice, lying to affirm their position, and creeping on people cross-social media. I would even bet you and your mod pals have put me on some kind of watch list. It's disgusting and shameful really, and I'll be calling out moments like this as long as the law lets me.

My post objectively didn't break any rules on /r/portland.

Relationship Advice lead-ish mod here. We were one of the subs "called out" as being controlled by Gallowboob with the original post and subsequent reposts.

We (buu700 and I) added Gallowboob some years back largely to have insight into how other teams modded their subs at a time when we were growing quickly and needed to know what to do to keep up. While useful in meeting that goal, we quickly realized it was pretty meaningless for /r/relationship_advice because of the nature of the questions we were getting. The only sub like ours is /r/relationships, and we each differentiated from each other by having different rules and content creation controls.

As best as we know, Gallowboob enjoys the place and is pretty decent at modding, so we're pretty happy to have him. Most mods burn out quickly because of how dark the questions get as well as because of how meaninglessly violent people become when their posts are modded.


Separate comment on an item in the story:

> Allam believes his time on the site has made him a more “paranoid” person and led him to develop “borderline PTSD.”

Moderating Reddit's larger subreddits is absolutely capable of resulting in PTSD-like symptoms. I've been dealing with some on and off after a post some years ago where somebody who requested advice followed through with the best course of action only to find that his wife killed their kids soon after. And Reddit has absolutely no support system for things like this.

Why do you free work for a for-profit social network, especially if you say yourself that the work negatively impacts your mental health and that the community you are moderating is toxic?

What are you getting out of this?

> Why do you free work for a for-profit social network

They don't. They do the work for a community they're part of. The social network hosts that community, and tries to make a profit doing so.

I organise a meetup that's held in a pub (well, not at the moment). People coming to the meetup spend money on the pub's beer and pizza. Am I doing free work for a for-profit public house?

>People coming to the meetup spend money on the pub's beer and pizza. Am i doing free work for a for-profit public house?

Yes. Unequivocally, yes. If I were the owner of X thing and Y person was sending lots of people my way I would be more than happy to give Y person something in return. In your example I would offer free pizza and beer at the absolute minimum. You are selling yourself short.

Then the other group members will wonder why you get stuff free while they have to pay, or they might doubt why you chose that particular pub. If you really need a physical reward for doing something you enjoy it would make more sense to agree a deal with the pub, like "second drink on the house" for the whole group.

Selling friends or peers to an establishment for pizza and beer is not really how a trusted human relationship works.

Yes you are. Social organizers, advertisers, trend setters, etc. frequently get free product, kickbacks or some money for their 'work'. That's the work you have done.

It's not your intent to do work and it's probably not the pub owner's desire to pay for meetup organizers, but regardless of your and the pub owner's intent, work has happened.

Ok, so in addition to the community getting value out of it, the pub benefitted too, that's just called win-win. Why does the pub have to lose in this situation for you to be ok with the result?

They didn't really claim the inverse, just that any form of community organizing inside a private for-profit business profits the business, making it free work for them.

It's one thing to organize a bake sale in a public park, but when you choose one pub over other competitors, and they profit from it, you're doing work for the pub for free.

This is why experienced community managers work with businesses to get prices for the event discounted, or set up an agreement for profits to go towards the event.

I don't disagree that incidentally involving a business in a community's activities benefits ('does work for') the business. But that doesn't mean that the business absorbs all the value a priori and the community no longer gets any value.

I'm sorry I wasn't clear, this point is not directed at friendlybus, but to the general sentiment upthread (e.g. ramphastidae) that seems to assume that because reddit is benefiting from the existence of the community therefore the community cannot benefit from the existence of reddit. The underlying reason why is because individuals benefit from the community.

Nobody is saying that a for-profit business absorbs all the value. Is your premise that if the community gains value from an action, then that action is justified no matter who else profits from it?

My premise is that if a community and individuals in it benefit from organizing themselves, then it's reasonable for the business that the community chose to host their interactions to have some amount of profit.

Also I would question if reddit actually makes much profit from individual communities like r/relationshipadvice, especially compared to the pub scenario which I find generally acceptable.

I am okay with the result. The guy asked if he was doing work, and I said yes he has done work. Why would I not be okay with that? The pub & friends situation is a win-win.

Spending free time moderating a toxic community is a net negative for a reddit moderator; payment may offset the health cost.

Individuals seem to get a lot of value out of interacting with the community, so it's not obvious to me that the moderator's sacrifice (which is primarily inflicted by a small number of individuals) is unjustified.

A hotel conference room costs on the order of $50-300 an hour depending on location and group size and that's without the benefit of a bar so the host has to front the money for all the drinks. Most other available venues are either just as expensive or forbid alcohol, loud noise, and so on. A four hour party would cost at least $200 plus drinks before the hosts even collect any money - or if they do, that's another cost and more risk for everyone.

So the options are to do "work for free" or to shell out hundreds of dollars.

Is it not possible that the situation is a win-win for all parties and that all parties feel like they are well-compensated in terms of the value they've received, and that therefore no money actually needs to exchange hands?

You may as well ask why people volunteer their time at all.

Both points are valid I think.

It's fair that both parties feel like they're benefiting, so no direct transaction needs to occur.

Comparing this to volunteering generally isn't the same. I volunteer with local registered charities because I believe in their mission, and the volunteer work I do is directly impacting and improving the lives of the people I'm working with.

Which may be the crux of the GP's question. Why volunteer in the community in a way that results in a for-profit organisation monetising you for their gain, instead of finding a non-profit in your community to be volunteering your time towards? The non-profit / charity in your community's goals are more likely to be aligned with your own, than the for-profit social network.

It’s a false dichotomy.

A moderator’s work primarily and directly benefits the users, because it allows them to communicate safely. The platform is only helped indirectly.

Charities often have salaried or reimbursed employees; do you volunteer in order to get their salary paid? Of course not, you do it because it helps actual users; the fact that this allows people to get paid, somehow, is an indirect benefit. Same here, really.

There's a difference between a charity and a regular for-profit corporation.

If the charity started stalking me to sell my data to advertisers in order to pay the salaried staff or other stakeholders more money, I'd stop doing any work through them.

The charity's mission is not maximising revenue / profits.

Any VC/PE backed corporation's mission is to maximise revenue / profits at any cost.

Hey I'm not saying money needs to change hands. I'm just pointing out it's work, like in the physics sense. Something valuable has happened, both in the business sense and social sense, but obviously the priority for parent poster is the socializing.

In my opinion, the benefit of socializing is not worth the reddit moderator role, given the psychological load moderators operate under.

You can likely get a kickback from the pub, but you can also negotiate a deal where everyone wins. E.g. a discount or freebies for meetup members, or you can run a small contest every night with a prize fulfilled by the pub.

AOL would use volunteers, called Community Leaders, to moderate chats/BBSes, etc. They even got perks, like free Internet. I volunteered as a Host Guide when I was 16. CLs eventually filed a complaint with the DoL claiming AOL violated the Fair Labor Standards Act by using non-employees to do unpaid work that could have been done by paid employees. In fact, it's still illegal even if the volunteer doesn't mind or doesn't want to be paid, so long as the company is for-profit. AOL paid out something like $15 million in a class action. [1]

The same will, and should, happen to Reddit. One of the largest sites on the Internet is making money hand over fist while moderators end up paying the price for no real benefit.

1. https://en.m.wikipedia.org/wiki/AOL_Community_Leader_Program

In your favored outcome, the government intervenes in a way that

a) prevents people who want to be mods, from being mods

b) breaks the way the site works

What gives you the right to do that? Seriously, if you want to live in a nanny state, please, not in my back yard.

I do not want you to be my nanny.

Reddit's moderation problems are not so big that you are justified in putting a gun to other peoples' faces to force them to do it the way you want.

>What gives you the right to do that?

The Fair Labor Standards Act: https://webapps.dol.gov/elaws/whd/flsa/docs/volunteers.asp

Reddit's preference for exploiting unpaid labor, and its failure to plan for the inevitable event where that exploited labor decides they want to be compensated for their work, is entirely on Reddit Inc. A company valued at $3 billion should have understood the potential risk.

All Reddit mods who feel burned out or are otherwise struggling because of the work they've done as mods should file a complaint with the Dept. of Labor and request financial compensation for the work performed, and coverage of any medical treatment stemming from the results of moderating toxic communities.


This is just a forced wealth transfer to people who have managed to partially capture the government.

Because the continued existence and quality of the subreddit brings value to them. It’s no different than volunteering. The fact that Reddit is for-profit doesn’t really enter into the equation.

I don’t get it for relationship subreddits because it’s so mentally exhausting but for fandoms and niche communities the experience is super cool.

It's very different from volunteering, since most people don't volunteer at a FOR-PROFIT company.

I've volunteered at all sorts of community centers, schools, etc... Never once at a for-profit business in town.

As far as the business of reddit is concerned it provides the tech to run communities on, reddit itself doesn't manage the communities themselves (with the exception of admins stepping in to remove illegal stuff I suppose).

In your analogy reddit is the property owner or landlord of the community center, but it does not operate the community center, so volunteers are necessary.

I don't think I've ever volunteered for a profit-maximizing private venture. Is that a thing people commonly do in real life?

I mean, I've done internships, but the quid-pro-quo there was quite clear.

(Serious question) what's the difference between an unpaid internship and volunteering?

An unpaid internship is done at a place where you expect to get a job, or because the place and the work you do there are important to subsequently getting a job or a school placement.

E.g., I did a quality improvement internship at a hospital to get a job in quality improvement there, once upon a time. It was "unpaid," but it was an audition for a job that I wanted.

As opposed to my time as a volunteer in a local free clinic, where no benefit accrued to me at all.

I would assume most mods don't put "reddit mod" on their resume, for one.

It’s no different than volunteering.

It's certainly volunteering in the synonymous sense of 'doing something for nothing because you can' but I feel this statement-left in the vacuous state of its own brevity-comes loaded with an unsaid implication that "volunteering" on Reddit is no different than "volunteering" in one's local community and I'm not sure it's that simple.

A lot of volunteer groups meet in for-profit places like cafes, and use for-profit ISPs and for-profit email providers or IM operators to talk to each other. An online forum is basically the same thing: a way to communicate.

Meeting together, talking about %thing%? Okay that's fair, I don't disagree that there are similarities.

It does feel just as limited of a definition of 'volunteering' if taking the grandparent comment at face value.

That community does genuinely help real people struggling with their relationships, regardless of whatever value its existence also provides to reddit. I don't think the motivation is all that different from volunteering in one's community; both come from a place of wanting to help others.

i disagree. it is that simple. the community consists of the people that contribute to the discussion. the fact that this community is using reddit as their tool to communicate is secondary. if they as a group were unhappy with how things are done they could move. stackexchange, discord, whatever.

moderators should be volunteers from that community either way, not paid outsiders.

What are you getting out of this?

Not the OP, but most people just want to help a community that they are a part of and take a hand in making it better and helping members of the community.

You have to remember, this issue has played out in pretty much every online community that has ever existed, in some form or another. Long before the term "social network" came into use.

Judging from most moderators that are active it would appear to be a power thing for them.

Oh for sure, for some people that's definitely true. And honestly, the community itself self-selects for people for who that's true as well, since most reasonable people realize that being a moderator is actually kind of terrible in reality. So the people who last the longest tend to be the people who aren't just in it out of altruism (users work hard to drive that motive into the ground).

But at the end of the day, you need moderators. And the job sucks and is unpaid, so you can't exactly select only the "perfect" candidates.

You're being downvoted, but GP directly lists one of his reasons for modding as "because Reddit will grow in influence"

Being the invisible hand that shapes the narrative of an entire community to one that you see fit is surely an alluring power. Especially if you think the powers of that influence will grow over time.

It's not crazy to assume that a portion of them are in it for the power. People love positions of minor authority. See: the fragmented history and (hilarious) mod power struggle that led to Seattle now having a bunch of differently managed subreddits.

This is absolutely true, which is why we tend to be pretty conservative with adding new rules - to reduce the risk of people getting carried away with the hammer. Some communities are the exact opposite - they'd rather people hammer away specifically because they're extremely selective with content, so people with a "power thing" are a better fit for those communities.

> What are you getting out of this?

For a nontrivial number of people, this community is the only place to turn. If we step away, do people incur harm as a result?

Most likely.

(in other words, it's almost a psychological obligation of sorts at this point)

It used to be that mods ran little fiefdoms where they advertised things to buy in a subreddit's feed. Those micro business success stories seem far less common in the past few years. I don't know why you would do it anymore.

this will certainly be unpopular but if they don't get any money out of it then most likely it's an ego / power thing

I moderate a sub because I found it useful. I want to make sure it thrives, and it's my way of giving back to the community, even if I don't participate as much as some other redditors.

Power tripping is very addictive.

>how meaninglessly violent people become when their posts are modded

Maybe if mods were more transparent across all the subs.

Most people get angry when someone removes their post and yet they see it reposted and approved hours or even minutes later (Gallowboob is infamous for that but regular users do it too)

Maybe if mods were more transparent across all the subs.

This is a common refrain I've heard going back well before Reddit ever existed.

I helped moderate the Vault Network boards for a while back when they were a thing.

It's hard to overstate how amazingly... disturbed and vitriolic some individuals in a community can be. And dishonest. And spiteful.

No amount of transparency helps when all it takes to refute any evidence is to label the mods as lying or "corrupt".

Let's say a user says something bad and you mod the post and give them a warning. You tell them exactly what the offense was and why it was moderated. They might even act civil in response.

Then later you see them talk about how "X mod totally censored their post for no reason and refuses to explain why. X and the rest of the mods are totally corrupt".

So, what do you do? Post a screenshot of the private messages exchanged (something, for instance, we weren't supposed to do)? Take a screenshot of a browser window with the "mod view" (aka uncensored) of the original post? Said user will point out that it can easily be edited, because, well, it's a browser showing a website; not exactly hard to alter.

And that happens every day, all the time constantly. And no one is being paid and there is a constant stream of other stuff you are trying to stay on top of.

And sure, some of the mods are shitty and "corrupt" in a sense. But I would say it's as much a reflection of the community itself as anything.

As someone who used to moderate a pretty large gaming community back in the day I think there are two things to keep in mind.

The first is that transparency doesn't "fix" hostile community members. What it does is justify the actions of those in power to the rest of the community. Without this the community will very quickly lose faith and the situation just devolves into factions and hostility.

This is very indicative of your last line really. It's like any form of government. When the people lose faith in those in power due to perceived capricious or opaque behavior there tends to be a lot of civil disorder. This usually leads to those in power entrenching themselves and enacting even more draconian measures. It's a vicious circle. Perhaps all governance is subject to the laws of entropy and will eventually fail, but I believe transparency, consistency, and effective communication are the only methods to slow such an eventuality.

The second point is one of size. As the population of a community grows and the proportion of those who wield power shrinks you also end up with a lot of discontent. Many community members will no longer feel they can effectuate change as they're a small voice amongst many without any real connection to the small group in power.

Also note that my experiences are in relation to fairly tight knit communities. Reddit is a little different in the sense that plenty of subreddits are far less communal. Effective communication is very difficult when the community is mostly transitory right up to the point where mob mentality takes over.

Yeah, you definitely raise some good points here.

When the people lose faith in those in power due to perceived capricious or opaque behavior there tends to be a lot of civil disorder.

I would be really interested in figuring out how to combat "perceived opaque behavior" especially when the source of the complaints is more artificial, where the complaints are being used as a tool for manipulating the community rather than being based in an actual grievance.

That's the reality you run into sometimes, like that oft-used quote from the Batman movies "Some men just want to watch the world burn."

> how to combat "perceived opaque behavior"

As the GP said, transparency for the punished is not sufficient. Governance must be transparent to the public; otherwise it will seed distrust. The behavior you described isn't "perceived" to be opaque, it is opaque.

I've been thinking of what I would do if I were a forum moderator and I came to the conclusion that I would have to implement a minutes keeping rule like they do in the actual government. In the UK for example you have the 30 year rule, which says that all the minutes in cabinet meetings get released after 30 years.

Maybe you could keep complete raw backups of the "minutes" (or mod logs in this case). So you would take a backup every 24 hours, encrypt it, and upload it to an independent write-only server (which proves you couldn't tamper with it). And after e.g. 1 year (let's reduce it from 30 years), you would release the encryption key to that mod log backup you made. This ensures transparency and trust.
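
A minimal sketch of that publish-now, reveal-later idea (everything here is illustrative; the one-time pad is just the simplest stdlib-only cipher, and the log format is made up):

```python
import secrets


def seal_log(log_bytes: bytes):
    # One-time pad: the key is as long as the log, so the ciphertext
    # reveals nothing until the key is published. Upload the ciphertext
    # to the independent write-only store immediately; escrow the key.
    key = secrets.token_bytes(len(log_bytes))
    ciphertext = bytes(a ^ b for a, b in zip(log_bytes, key))
    return ciphertext, key


def open_log(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so applying the key again recovers the log.
    return bytes(a ^ b for a, b in zip(ciphertext, key))


log = b"2020-06-10 removed post 12345: doxxing risk"
ct, key = seal_log(log)          # ct goes public on day one
assert open_log(ct, key) == log  # key is released after the embargo period
```

Because the ciphertext is published up front to a store the mods can't rewrite, nobody can quietly edit the log between the backup and the key release.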

That sounds like a job for public automated mod logs.

Naturally people will still complain, since it's impossible to fix people, but I'd imagine having an "authoritative" list of every moderator action and the accompanying explanation would help stave off the corruption/lies accusations. That, combined with a reputation for transparency.
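
One way to make such a log "authoritative" is a simple hash chain, so later entries commit to earlier ones (a sketch; the action fields are invented for illustration, not anything Reddit exposes):

```python
import hashlib
import json


def append_action(chain, action):
    # Each entry commits to the previous entry's hash, so silently editing
    # or deleting any past action invalidates every hash that follows it.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(action, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "action": action, "hash": digest})


def verify_chain(chain):
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["action"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


modlog = []
append_action(modlog, {"mod": "alice", "act": "remove", "post": 12345})
append_action(modlog, {"mod": "bob", "act": "ban", "user": "troll42"})
assert verify_chain(modlog)

modlog[0]["action"]["post"] = 99999  # tamper with history...
assert not verify_chain(modlog)      # ...and verification fails
```

Anyone mirroring the log can then detect retroactive edits, which addresses the "the logs themselves are faked" objection at least for changes made after publication.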

That sounds like a job for public automated mod logs.

I mean, I think that would be an interesting experiment, as I can't think of a large community that provides such data.

But again, if the underlying assumption is "the mods are corrupt", then that accusation can easily be transferred to whatever logs are provided as well.

The Something Awful forums, which have been around for a very long time and have been relatively successful in maintaining a decently sized community, have a public moderation log[0].

Something Awful also has a number of other moderation features that I think more sites should emulate:

1. Temporary bans from posting ("probation") of variable length, to allow for punishment short of a full ban. Usually 6 hours to a couple days, depending on the offense, occasionally a week or longer.

2. A nominal registration fee ($10, one time) to register an account, to cut down on bad actors just making new accounts.

3. Normal bans for being a dick are reversible by paying $10 (same cost as registering a new account), unless you get "permabanned" for either repeated bad behavior or posting illegal content. If you get permabanned, any new accounts you create get permabanned as well (assuming the admins can find them, which they do remarkably effectively using IP and I think payment info).

That last point sounds like it incentivizes the mods to ban users, so that the forums get more money. But it doesn't seem to actually have that effect, possibly because most of the mods are not paid.

There have also been a few interesting experiments in moderation that were less useful, but are definitely entertaining, such as the ability to limit an account to a single subforum (usually the one for posting deals, or one of the ones for shitposting). It's also possible to view a user's "rap sheet" of moderation actions from any of their posts.

[0] https://forums.somethingawful.com/banlist.php

Mods apparently inflict their PTSD from bad posters on good posters, given the common ban-first, ask-questions-later approach.

I really don’t care how shitty another poster was to you. I only care how shitty you are treating me.

While I am sure that occurs, I have personally been banned from several subreddits not for rule violations or violent comments, but because the moderator simply disagreed with my comment, or because I was harsh in my commentary about the product the subreddit was devoted to. For example, a certain web browser subreddit banned me because I spoke badly of a policy of that web browser.

I have much more experience in dealing with corrupt and biased mods than I see anything else

Just to provide the other side of this.

Imagine seeing a post exactly like yours, literally word for word, with the same civil tone and all.

Except the user in question had been banned for posting a tubgirl image (it's gross) by a moderator who happened to be female, and had responded in private messages on an obvious alt account (minutes after the ban, from the same IP address) calling that moderator a slut who deserved to be raped and killed.

And that style of interaction being relatively common.

Imagine treating a civil poster as if they were a jerk, just because a different civil poster once turned out to be a jerk.

It's not about treating people like jerks; it's just that you quickly learn never to take a person at face value when they say "I didn't do anything wrong," because that's usually the first and most common phrase used by a person who did something wrong.

I was merely pointing out that the "public" persona a user portrays doesn't have to match the truth.

Which can equally be applied to moderation staff. Mods who say "we are not biased, and we only ever ban people who break the rules" are just as likely to be bad actors as the regular users in your narrative.

Definitely. I mean, in the vast majority of cases, the moderators are just "regular users" who are given (or chose in the case of the creation of subreddits) power over other users. Even if the moderators are employees they are still fallible, with personal motives, just like the people they moderate.

In the end, that's why its such a difficult problem. You take the normal conflict that occurs in communities, add in the potential for malicious actors on both sides and it's no surprise that the normal conflicts can spiral out of control. Especially in the virtual and relatively anonymous setting of online communities.

And from a person on the outside looking in, it can be impossible to actually know what the truth is.

Not sure how that invalidates the equally if not more common occurrence of banning based on philosophical, political or other disagreements with the moderation staff

> Maybe if mods were more transparent across all the subs.

We're (r/relationship_advice) rarely transparent with removal reasons. Our templatized removal reasons generally look something like this:

> u/[user], please message the mods:

> 1. to find out why this post was removed, and

> 2. prior to posting any updates.

> Thanks.


> User was banned for this [submission/comment].

The reason is because we have a high population of people who:

1. post too much information and expose themselves to doxxing risks, and

2. post fantasies that never happened.

So in order to protect the people who inadvertently post too much information, we tend to remove these posts using the same generic removal template. However, if people know that the post was pulled for one of these two reasons, the submitter may still end up on the receiving end of harassment as a result, meaning we have to fuzz the possibility of the removal being for one of these two reasons by much more broadly withholding removal reasons.

This is specific to r/relationship_advice. Other subreddits have other procedures.

> Maybe if mods were more transparent across all the subs.

Don't get me started.

Particularly "muting" posts with auto-moderator (silently hiding them for others without notification/warning/explanation). It was originally created for spam control but is regularly abused for generic moderation. It needs more controls placed on it.

To give a recent example, I wrote a long reply in /r/fitness's daily discussion but it was muted because it contained an offhand remark about COVID-19 (vis-a-vis getting hold of fitness equipment right now). Why are they muting all comments that contain "COVID?" Who even knows, but the /way/ it was done was pretty irksome and resulted in wasted effort on my part for a comment that violated zero published rules or etiquette.

/r/politics has a huge dictionary of phrases and idioms that result in auto muting. None of which are defined in their rules.

> /r/politics has a huge dictionary of phrases and idioms that result in auto muting. None of which are defined in their rules.

This is true here on HN too. One such word that I’ve seen to cause comments get auto-killed is “m∗sturbation” (censored for obvious reasons), and I am sure there are others.

Here's the list:

  masturb circlejerk faggot nigger feminazi shitstain lmgtfy.com #maga
...plus a bunch of spam sites. Why those words and not others? Shitstained if I know. These things tend to get added when somebody does something, and then stay there.

Plenty of people complain about overmoderation here on HN as well... I am one of those people

I firmly believe that the only way an online forum, particularly an anonymous one with free signups, can remain relatively civil is very heavy-handed moderation. This is not a free speech zone (it's a privately owned space), and the only way to prevent bad actors is to be very liberal in applying heavy moderation.

If a few "medium" actors get banned by accident, that's the price to pay for the rest of us getting to enjoy discussing tech without dealing with a toxic cesspool.

I strongly disagree. Bad actors thrive just as well if not better in places with moderation and even sign up fees.

Metafilter is a forum which used to be very diverse in opinion and is now basically captured by a small vocal minority. How did they do this? The minority produced a large amount of content for the forum and was very active. 95% of that content was high quality and on topic but the remainder is biased and very opinionated. As a result of their interaction with the site they were closer to the mods, regarded more highly, given the benefit of the doubt in "bad actor" conversations.

Today Metafilter is a dying community of territorial users who eviscerate anyone who doesn't know how to play the game. The minority won and created their little clubhouse corner of the internet. Metafilter as a forum, though, is a shell of its former self. There are fundraisers to keep the site alive, and formerly paid mods and devs are now either retired or working for free. Not only did this drive away old and new users, but the minority also seems to have become bored without constant drama and moved on (as seen by new posts and commenting activity falling off a cliff).

I know the common refrain on HN and in general is "on a private platform there is no such thing as free speech" but be careful. Don't let blatantly toxic users run your platform but beware of users who are quick to call everything toxic.

Every community has its rise and fall, and there is no one solution for all of them, but I still think without heavy moderation that fall comes much quicker and more violently.

Yes, moderation might be necessary, but what the anecdote illustrates is that simply having moderation will not be enough.

>>how meaninglessly violent people become when their posts are modded

> Maybe if mods were more transparent across all the subs.

Does the latter really justify the former?

If you disagree with a moderation decision, take it up with them politely. If you consistently disagree, maybe this community just isn't for you!

Mods are just people donating their time. Even if they're inconsistent or "corrupt", there's no reason you should respond in any way that can be described as "violent".

What if you take it up with them politely and this is their response? https://imgur.com/a/jhmGXzJ

Find or create another community.

There are basically two scenarios here. One, a lot of people agree with you — in which case you should be able to appeal the decision or splinter off successfully. Two, most people disagree with you — in which case the mod is probably right, or at the very least you're simply not welcome in the community.

Let's also not forget that we're talking about violence in response to moderation decisions. So even if that's their response, it's still not okay to e.g. threaten them.

The problem with the first suggestion is the fact that the moderator in the above screenshot mods 1000+ subreddits. You can be pretty sure they mod the other subreddits too with the same mindset, how many of them do you recreate? This is the whole problem that has caused the drama in the past few days.

> Let's also not forget that we're talking about violence in response to moderation decisions. So even if that's their response, it's still not okay to e.g. threaten them.


> If you consistently disagree, maybe this community just isn't for you!

Ah the classic "don't let others make changes but outcast them" approach.

If you are in conflict with enough other members of a community, you are de facto outcast anyway. And if enough other members agree with you, just break off and form your own community!

We’re talking about online communities — low stakes to join, low stakes to leave. There are exactly zero reasons that anyone should resort to doxxing/threats/etc in response to moderation decisions.

As a mod - how much time do you commit to this? Are you just sitting there reading comments all day? I'm fascinated by this whole mods thing especially since it's volunteer (!!) work - which strikes me as insane.

If Gallowboob and friends are just helpful, innocent people, why were the calling-out comments, dozens upon dozens of them, deleted, and the people posting them banned?

If you're helpfully, innocently looking after dozens of top subs, and people mention that and wonder what's going on, you don't censor them, you have a flipping AMA about it!

> If Gallowboob and friends are just helpful, innocent people, why were the calling-out comments, dozens upon dozens of them, deleted, and the people posting them banned?

Because personal witchhunts are against Reddit's rules as a form of targeted harassment, and mods are de-modded by Reddit for not enforcing Reddit's rules -- or worse, subreddits that consistently see significant Anti-Evil Ops (effectively Reddit's on-payroll God-Mods) action may be quarantined.

I looked at some of the deleted comments, and if I were a mod, I would delete them as well.

I looked at some of the deleted comments, and there was no reason except hubris to delete them.

And Reddit has absolutely no support system for things like this.

My immediate thinking involves referring affected parties to professionals and specialists who deal with this sort of thing, but in your opinion, what should Reddit (the company) be doing?

If anything Reddit (the company) should be working to limit the influence of a single "power" user like this particular moderator, not make it easier for him to control the site.

That's fair, I was more speaking to the question of moderators affected by PTSD. I don't necessarily disagree with what you've said here, but it doesn't quite answer my question, or are you suggesting Reddit Admins limiting mods ability to be mods is a solution to this particular problem?

I guess I'm having some trouble unpacking your suggestion, can you help me gain some clarity on what you're saying?

Paying for the counselling of the moderators.

I more or less quit Reddit when I realized what kinds of traits moderation tends to select for: https://jakeseliger.com/2015/03/16/the-moderator-problem-how.... Perhaps you're an exception, or the exception, and, based on Reddit's growth, it seems that my preferences are minority ones.

Why should reddit offer you any support? You volunteered; no one forced you.


This is a little more serious than "you're a piece of shit", no?

> I've been dealing with some on and off after a post some years ago where somebody who requested advice followed through with the best course of action only to find that his wife killed their kids soon after.

I've got two thoughts after reading this. One is that trauma isn't a binary thing, and it can accumulate over time. Some of these mods are dealing with serious events, like the user who was given advice which led to his wife killing their children. Others got doxxed and received death threats.

Two, if someone feels upset from getting messages like "you're a piece of shit", I wouldn't say they're an immature child or a weakling. I'd say they could be sensitive, perhaps also very considerate, and in that moment they may worry that another human being is upset and they're responsible for it. They might hurt from the pressure to moderate something that's important to them. Maybe they struggle with social interaction, even if it's online, and experiences like this can be very hurtful.

I don't care why something hurts someone, I care more that they are hurt. Chronic exposure to these things, even if they do seem benign or minor from the outside, absolutely can lead to trauma.

Trauma is a result of exposure to acute or chronic damage to your physical or mental well being. This can occur in a staggering number of ways from person to person. How each of us handles that will vary, but if it leads to a lasting impact, it's trauma.

I'm sorry you experienced what you did. It's arguably worse than what the moderators are describing, but it isn't exclusive to those experiences. It's also not a contest.

Unfortunately (or fortunately, as the case may be) PTSD and trauma in general are not yours to define or gatekeep.

Calling someone with PTSD “weak” is exactly how society brushed aside soldiers and child abuse survivors with PTSD for decades.

> I'm so sick of people on the internet saying they have PTSD

> Calling someone with PTSD “weak”

That's a non sequitur derail. The post you are responding to is about faux self-diagnosis.

> That's a non sequitur derail. The post you are responding to is about faux self-diagnosis.

"weakling" is right there at the end and the GP is going on about a strawman not mentioned by the OP anyways, as several of the child comments point out.

> "weakling" is right there at the end Again, this is not related to the assertion.

Name-calling (as a proxy for illegitimacy) amounts to the same thing for the purposes of the point. Claiming damage without evidence is not compelling, even if it makes for tasty fluff content.

> A stranger saying, "you're a piece of shit," is not fucking traumatic to anyone but an immature child or a weakling.

Think more everything from graphic descriptions of rape to child porn. Reddit generally lets comments of the "piece of shit" nature stand.

Negative social judgements, death threats, and attempts at doxxing and property damage are a heavy burden when you know (or believe) them to be coming from real people whom you can't control and who are unpredictable. The tactic is reasonably similar to torture. The increased perception of danger grows your amygdala, and it doesn't ever shrink again; it's something you have to know how to deal with now. If you didn't before, it can be traumatic.

Watching ones you love die is terrible but it can also bring about a sense of peace. Being actively predated on is no fun. I have also seen mental and physical abuse as a child, fwiw.

Have an upvote.

We're conditioning a culture to be hypersensitive, and at the same time, for some of them, hyperaggressive. Lack of experience has always clouded the vision of humans, but many find themselves, willingly or not, living within silos and echo chambers, which reinforce their beliefs and behaviors. What they consider trauma does not begin to approach yours.

In other times, their vocal ignorant statements would be squashed immediately upon utterance. In these times, they receive reinforcement, from some.

Why is that subreddit so similar to Jerry Springer? How did it become so trashy?

You shouldn't be seeking relationship advice from random strangers or a reality show in the first place. Given that no one is a verified counselor, the sub is largely redundant at best and knowingly harmful at worst. It is rife with low-effort karma farming and toxicity, like many of the defaults.

Largely true with two caveats.

1. We're trying to keep the subreddit as accessible as possible to people with nowhere else to turn. This is hard.

2. There's at least one verified crisis counselor who frequently comments - u/ebbie45. We've verified their credentials.

I stand corrected.


We're still figuring out how to deal with it short of a figurative nuke.

> Most mods burn out quickly because of how dark the questions get

Care to give us some examples?

You don't know if that's what he's referring to.

In fact, I expected that the best examples got modded out.

The "̶b̶e̶s̶t̶"̶ most representative examples often involve egregious cases of abuse against parties either without consent or without a capacity to consent.

> Moderating Reddit's larger subreddits is absolutely capable of resulting in PTSD-like symptoms

There was a moderator of the gaming subs who killed himself fairly recently. He said largely the same thing: that modding was not healthy. But he continued to do it.

Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of. That’s not really a soldier’s dilemma of duty and responsibility.

So that anecdote aside, I’ve worked with a special forces vet that actually has PTSD.

Respectfully, if you think moderating an online forum is any sort of analog even to be “PTSD-like” you are either mistaken or have a far more gruesome task than I think possible.

> Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of. That’s not really a soldier’s dilemma of duty and responsibility.

Speaking solely on behalf of myself: we see a notable volume of fantasy and fetish posts as well as legitimate pleas for help that veer into extremely disturbing territory. The result is a situation where mods may well find themselves feeling substantially troubled with extended exposure.

I'm not about to impose that on someone else, and as a result of inevitable scope creep from the sub gaining readers, we've now got to sustain an environment that people use as either a first- or last-resort option while at the same time turning away significant populations of people (a subset of followers from influencers such as https://twitter.com/redditships/) who appear to relish creating drama from people calling for help. Great example: https://twitter.com/eganist/status/1263534755045412870

When staying imposes a burden on myself but leaving heightens the risk that people may be harmed, it's a lose-lose, and the trauma arises from this.

I'd show you some of the stuff we've had to mod out, but it's too dark for Hacker News.

> but he continued to do it. Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of.

Why do people volunteer on anything online, like open source projects even? Sometimes people like a thing, want to keep it good, and feel that it's less likely to happen without them. Leaving is condemning the thing to possibly get worse and decay through less contribution, and interrupts their social connections formed through it and their established routine that gives the satisfaction of contributing. It's not something easily abandoned after investing years in.
