You're confusing bad actors with bad behavior. Bad behavior is something good people do from time to time because they get really worked up about a specific topic or two. Bad actors are people who act bad all the time. There may be some of those, but they're far from the majority (and yes, sometimes normal people turn into bad actors because they get so upset about a given thing that they can't talk about anything else anymore).
OP's argument is that you can moderate content based on behavior, in order to bring the heat down, and the signal to noise ratio up. I think it's an interesting point: it's neither the tools that need moderating, nor the people, but conversations (one by one).
I think that's right. One benefit this has: if you can make the moderation about behavior (I prefer the word effects [1]) rather than about the person, then you have a chance to persuade them to behave differently. Some people, maybe even most, adjust their behavior in response to feedback. Over time, this can compound into community-level effects (culture etc.) - that's the hope, anyhow. I think I've seen such changes on HN but the community/culture changes so slowly that one can easily deceive oneself. There's no question it happens at the individual user level, at least some of the time.
Conversely, if you make the moderation about the person (being a bad actor etc.) then the only way they can agree with you is by regarding themselves badly. That's a weak position for persuasion! It almost compels them to resist you.
I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
Someone will point out or link to cases where I did the exact opposite of this, and they'll be right. It's hard to do consistently. Our emotional programming points the other way, which is what makes this stuff hard and so dependent on self-awareness, which is the scarcest thing and not easily added to [2].
If someone points out a specific action I did that can/should be improved upon (and especially if they can tell me why it was "bad" in the first place), I'm far more likely to accept that, attempt to learn from it, and move on. As in real life, I might still be heated in the moment, but I'll usually remember that when similar cues strike again.
But if moderation hints at something being wrong with my identity or just me fundamentally, then that points to something that _can't be changed_. If that's the case, I _know they are wrong_ and simply won't respect that they know how to moderate anything at all, because their judgment is objectively incorrect.
At work, the approach you describe has actually turned out to be good policy when I think about bugs and code reviews.
> "@ar_lan broke `main` with this CLN. Reverting."
is a pretty sure-fire way to make me defend my change and believe you are wrong. My inclination, for better or worse, will be to dispute the accusation directly and clear my name (probably some irrational fear that creating a bug will go on a list of reasons to fire me).
But when I'm approached with:
> "Hey, @ar_lan. It looks like pipeline X failed this test after this CLN. We've automatically reverted the commit. Could you please take a second look and re-submit with a verification of the test passing?"
I'm almost never defensive about it, and I almost always go right ahead to reproducing the failure and working on the fix.
The first message conveys to me that I (personally) am the reason `main` is broken. The second conveys that it was my CLN that was problematic, but fixable.
Both messages are taken directly from my company's Slack (omitting some minor details, of course), for reference.
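To make the difference concrete, here's a minimal sketch of how a bot could compose that second style of message. Everything in it (the function name, the fields, the exact wording) is my own hypothetical, not how any real tooling works; the point is only that the subject of every sentence is the pipeline and the change, never the person.

    from dataclasses import dataclass

    @dataclass
    class RevertEvent:
        author_handle: str   # e.g. "@ar_lan" - used only to route the ping
        change_id: str       # e.g. "CLN-12345" (hypothetical id format)
        pipeline: str        # e.g. "pipeline X"
        failed_test: str     # the test that failed after the change landed

    def depersonalized_revert_message(event: RevertEvent) -> str:
        # Frame the revert around the pipeline and the change, not the person,
        # and end with a concrete next step instead of blame.
        return (
            f"Hey, {event.author_handle}. It looks like {event.pipeline} "
            f"failed {event.failed_test} after {event.change_id}. We've "
            f"automatically reverted the commit. Could you take a second "
            f"look and re-submit with a verification of the test passing?"
        )

    if __name__ == "__main__":
        event = RevertEvent("@ar_lan", "CLN-12345", "pipeline X",
                            "test_checkout_flow")
        print(depersonalized_revert_message(event))

The design note, such as it is: the author's handle appears only as a routing detail (who gets pinged), never as the grammatical subject of the accusation, which is what makes the first message feel like blame and the second like a notification.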
> I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
I feel quite excited to read that you, dang, moderating HN, use a technique similar to one I use for myself and try to teach others. Someone told my good friend the other day that he wasn't being a very good friend to me, and I told him that he may do things that piss me off, annoy me, confuse me, or whatever, but he will always be a good friend to me. An Uber driver once told me he had just got out of jail and was a bad man, and I said, "No, you're a good man who probably did a bad thing."
I think your moderation has made me better at HN, and consequently I'm better in real life. Actively thinking about how to better communicate and create environments where everyone is getting something positive out of the interaction is something I maybe started at HN, and then took into the real world. I think community has a lot to do with it, like "be the change you want to see".
But to your point, yeah my current company has feedback guidelines that are pretty similar: criticize the work, not the worker, and it super works. You realize that action isn't aligned with who you want to be or think you are, and you stop behaving that way. I mean, it's worked on me and I've seen it work on others, for sure.
> I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
I use this tactic with my kids when they do something wrong. Occasionally I slip up and really lay into them, but almost all of the time these days I tell them that I love them, I think they are capable of doing the right thing, but I didn't love some action they did or didn't do and I explain why. They may not be happy with this always, or with the natural (& parent-imposed) consequences of their actions, but it reinforces that they have a choice to do good in the future even if they slip up from time to time. If all of us were immutably identified by the worst thing we ever did, no one would have any incentive to change.
Thanks for the thoughtful & insightful comment, dang.
I think you do a good job on HN and I appreciate, as someone who moderated a similarly large forum for a long time, how candid you are in your communications on and off site. You're also a very quick email responder!
> I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
My sense is that this is a worthy thing to do (first of all because it's intellectually correct to blame actions rather than people, and second of all because if you're right about the effect it's all upside). But I suspect this will produce very little introspection, maybe a tiny bit on the margins.
It's pretty normal in an argument between two people IRL that one will say something like "That was a stupid comment" or "Stop acting like an asshole" -- both uses of distancing language -- and the other person will respond "Don't call me stupid" or "Don't call me an asshole". I think most people who are on the receiving end of even polite correction are going to elide the distancing step.
On the social psych side, I have no idea whether there's any validated way to encourage someone to be more introspective, take a breath, try to switch down into type-II processing, etc.
Yes. But in our experience to date, this is less common than people say it is, and there are strategies for dealing with it. One such strategy is https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme... (sorry I don't have time to explain this, as I'm about to go offline - but the key word is 'primarily'.) No strategy works in all cases though.
... kinda wondering if this is the sort of OT post we're supposed to avoid, it would be class if you chastised me for it. But anyway, glad you're here to keep us in check and steer the community so well.
Empty comments can be ok if they're positive. There's nothing wrong with submitting a comment saying just "Thanks." What we especially discourage are comments that are empty and negative—comments that are mere name-calling.
It's true that empty positive comments don't add much information, but they have a different, healthy role (assuming they aren't promotional).
This "impersonal" approach to also works in the other direction. Someone who said something objectively bad once doesn't have to be a "known bad person" forever.
That scares me. Today's norms are tomorrow's taboos. The dangers of conforming and shaping everyone into the least controversial opinions and topics are self evident. It's an issue on this very forum. "Go elsewhere" doesn't solve the problem because that policy still contributes to a self-perpetuating feedback loop that amplifies norms, which often happen to be corrupt and related to the interests of big (corrupt) commercial and political powers.
I don't mean persuade them out of their opinions on $topic! I mean persuade them to express their opinions in a thoughtful, curious way that doesn't break the site guidelines - https://news.ycombinator.com/newsguidelines.html.
Sufficiently controversial opinions are flagged, downvoted until dead/hidden, or their authors shadowbanned. HN's policies and voting system, both de facto and de jure, discourage controversial opinions and reward popular, conformist ones.
That's not to pick on HN, since this is a common problem. Neither do I have a silver bullet solution, but the issue remains, and it's a huge issue. Evolution of norms, for better or worse, is suppressed to the extent that big communication platforms suppress controversy. The whole concept of post and comment votes does this by definition.
There are a few sacred cows here (I won't mention them by name, though they do exist), but I have earned my rep by posting mostly contrarian opinions, and I almost always have quite a few net upvotes - sometimes dozens. It's not too difficult.

First, I cite facts that back up my claims from sources whose narratives would typically go against my argument. I cite the New York Times, Washington Post, the Atlantic, NPR, CNN, etc.; I only rarely cite Fox News, and never cite anything to the right of Fox.

Second, I really internalize the rules about good faith, not attacking the weakest form of an argument, not cross-examining, etc. Sometimes I have a draft that carries my emotions, and I'll edit it to make it more rational before posting.

Third, I ask open-ended questions to allow myself to be wrong in the minds of other commenters. Instead of just asserting that some of my ultra-contrarian opinions are the only way anyone can see an issue, I may pose a question. By doing that, I have at times seen some excluded middle I hadn't considered, and my opinion becomes more nuanced.

Fourth, I often will begin a reply and then delete it because I know it won't add anything. This is the hardest one to do, but sometimes it's just the way you have to go. Some differences are merely tastes and preferences, and I'm not going to change the dominant tastes and preferences of the Valley on HN. I can only point out some of the consequences.
The content moderation rules and system here have encouraged me to write better and more clearly about my contrarian opinions, and have made me more persuasive. HN can be a crap-show at times, but in my experience, it's often some of the best commentary on the Internet.
Completely disagree about HN. Controversial topics that are thought out, well formed, and argued with good intent are generally good sources of discussion.
Most of the time though, people arguing controversial topics phrase them so poorly or include heavy handed emotions so that their arguments have no shot of being fairly interpreted by anyone else.
That's true to an extent (and so is what ativzzz says, so you're both right). But the reasons for what you're talking about are much misunderstood. Yishan does a good job of going into some of them in the OP, by the way.
People always reach immediately for the conclusion that their controversial comments are getting moderated because people dislike their opinion—either because of groupthink in the community or because the admins are hostile to their views. Most of the time, though, they've larded their comments pre-emptively with some sort of hostility, snark, name-calling, or other aggression—no doubt because they expect to be opposed and want to make it clear they already know that, don't care what the sheeple think, and so on.
The way the group and/or the admins respond to those comments is often a product of those secondary mixins. Forgive the gross analogy, but it's as if someone serves a shit milkshake and, when it's rejected, says, "you just hate dairy products" or "this community is so biased against milkshakes".
If you start instead from the principle that the value of a comment is the expected value of the subthread it forms the root of [1], then a commenter is responsible for the effects of their comments [2] – at least the predictable ones. From that it follows that there's a greater burden on the commenter who's expressing a contrarian view [3]. The more contrarian the view—the further it falls outside the community's tolerance—the more responsibility that commenter has for not triggering degenerative effects like flamewars.
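One informal way to write that model down (the symbols here are just notation for this sketch, nothing official):

    V(c) = \mathbb{E}\left[ \sum_{c' \in T(c)} q(c') \right]

where T(c) is the subthread that comment c ends up rooting and q(c') is the positive or negative contribution of each comment in it. The commenter is then answerable for the predictable part of that expectation, not just for q(c) itself, which is why a technically polite comment that foreseeably spawns a flamewar still scores badly under this model.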
This may be counterintuitive, because we're used to thinking in terms of atomic individual responsibility, but it's a model that actually works. Threads are molecules, not atoms—they're a cocreation, like one of those drawing games where each person fills in part of a shared picture [4], or like a dance—people respond to the other's movements. A good dancer takes the others into account.
It may be unfair that the one with a contrarian view is more responsible for what happens—especially because they're already under greater pressure than the one whose views agree with the surround. But fair or not, it's the way communication works. If you're trying to deliver challenging information to someone, you have to take that person into account—you have to regulate what you say by what the listener is capable of hearing and tolerating. Otherwise you're predictably going to dysregulate them and ruin the conversation.
Contrarian commenters usually do the opposite of this—they express their contrarian opinion in a deliberately aggressive and uncompromising way, probably because (I'm repeating myself, sorry) they expect to be rejected anyhow, and it's safer to be inside the armor of "you people can't handle the truth!" than it is to really communicate, i.e. to connect and relate.
This model is the last thing that most contrarian-opinion commenters want to adopt, because it's hard and risky, and because usually they have pre-existing hurt feelings from being battered repeatedly with majoritarian opinions already (especially the case when identity is at issue, such as being from a minority population along some axis). But it's the one that actually has a hope of working, and is by far the best solution I know of to the problem of unconventional opinions in groups.
Are there some views which are so far beyond the community's tolerance that any mention in any form will immediately blow up the thread, making the above model impossible? Yes, but they're rare and extreme and not usually the thing people have in mind. I think it's better to stick to the 95% or 99% case when having this discussion.
Just wanted to say that it's great to have you posting your thoughts/experience on this topic. I've run a forum for almost 19 years as a near-lone moderator and so have a lot of thoughts, experience and interest in the topic. It's been frustrating when Yishan's posted (IMO, solid) thoughts on social networks and moderation and the bulk of HN's discussion can be too simple to be useful ("Reddit is trash", etc).
I particularly liked his tweet about how site/network owners just wish everyone would be friendly and have great discussions.
> The more contrarian the view—the further it falls outside the community's tolerance—the more responsibility that commenter has for not triggering degenerative effects like flamewars.
This sounds similar to the "yelling fire" censorship test: it's not that we censor discussing combustion methods, and there would be no effect if everyone else were also yelling fire. But people were watching a movie, and now the community's experience has been ruined (with potential for harm), in exchange for nothing of value.
And bad behavior gets rewarded with engagement. We learned this from "reality television" where the more conflict there was among a group of people the more popular that show was. (Leading to producers abandoning the purity of being unscripted in the pursuit of better ratings.) A popular pastime on Reddit is posting someone behaving badly (whether on another site, a subreddit, or in a live video) for the purpose of mocking them.
When the organizational goal is to increase engagement, which will be the case wherever there are advertisers, inevitably bad behavior will grow more frequent than good behavior. Attempts to moderate toward good behavior will be abandoned in favor of better metrics. Or the site will stagnate under the weight of the new rules.
On this I disagree with Yishan, because I read those posts as saying that engagement feedback is a characteristic of old media (newspapers, television) and something social media tries to avoid. The OP seems to be saying that online moderation is an attempt to minimize controversial engagement because platforms don't like that. I don't believe it. I think social media loves controversial engagement just as much as the old-school "if it bleeds, it leads" journalists from television and newspapers. What they don't want is the (quote/unquote) wrong kind of controversies. Which is to say, what defines bad behavior is not universally agreed upon. The threshold for what constitutes bad behavior will be different depending on who's doing the moderating. As a result, the content seen will be influenced by the moderation, even if said moderation is being done in a content-neutral way.
And I just now realize that I've taken a long trip around to come to the conclusion that the medium is the message. I guess we can now say the moderation is the message.
I'd argue that bad actors are people who behave badly "on purpose". Their goals are different from those of the normal actor. Bad actors want to upset or scare people. Normal actors want to connect with, learn from, or persuade others.
I can "behave well" and still be a bad actor in that I'm constantly spreading dangerous disinformation. That disinformation looks like signal by any metadata analysis.
Yes, that's probably the limit of the pure behavioral analysis, esp. if one is sincere. If they're insincere it will probably look like spam; but if somebody truly believes crazy theories and is casually pushing them (vs promoting them aggressively and exclusively), that's probably harder to spot.