Telling users why their content was removed reduces future issues [pdf] (shagunjhaver.com)
236 points by EndXA on Nov 12, 2019 | 150 comments

Study title: "Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit".


> When posts are removed on a social media platform, users may or may not receive an explanation. What kinds of explanations are provided? Do those explanations matter? Using a sample of 32 million Reddit posts, we characterize the removal explanations that are provided to Redditors, and link them to measures of subsequent user behaviors—including future post submissions and future post removals. Adopting a topic modeling approach, we show that removal explanations often provide information that educate users about the social norms of the community, thereby (theoretically) preparing them to become a productive member. We build regression models that show evidence of removal explanations playing a role in future user activity. Most importantly, we show that offering explanations for content moderation reduces the odds of future post removals. Additionally, explanations provided by human moderators did not have a significant advantage over explanations provided by bots for reducing future post removals. We propose design solutions that can promote the efficient use of explanation mechanisms, reflecting on how automated moderation tools can contribute to this space. Overall, our findings suggest that removal explanations may be under-utilized in moderation practices, and it is potentially worthwhile for community managers to invest time and resources into providing them.
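The headline result in the abstract is stated as a change in the odds of future post removals. As a toy illustration of what that quantity means (the counts below are invented; the study itself fits full regression models on the 32 million posts), an odds ratio can be computed from a simple 2x2 table:

```python
# Toy illustration of the paper's core quantity: the odds ratio of a
# future removal with vs. without an explanation. These counts are
# made up purely for illustration.

def odds(removed, not_removed):
    """Odds of removal = count(removed) / count(not removed)."""
    return removed / not_removed

# Hypothetical counts of users' next posts:
#                        (next post removed, next post kept)
with_explanation    = (120, 880)   # user got a removal explanation
without_explanation = (250, 750)   # removal was silent

odds_ratio = odds(*with_explanation) / odds(*without_explanation)
print(f"odds ratio: {odds_ratio:.2f}")  # below 1 means explanations reduce removal odds
```

With these invented numbers the ratio comes out well under 1, which is the shape of the result the paper reports.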

My interpretation of this is that most people don't intend to break the community's rules. That is to say, most people aren't sociopathic trolls.

Most people don't want their posts removed. Informing them of which rules they broke will help them avoid breaking those rules again, assuming they wish not to break rules, which it appears is generally the case.

I've had posts removed on subs for breaking some arcane rule before. On some subs it wasn't clear why, so I just never posted again. Others told me why, and even gave directions on how to avoid it again (usually flair related rules). It was easy to successfully post going forward on those subs.

Too many subs have completely ridiculous rules and tyrants for mods. Some subreddit about interesting pictures didn't want screencaps, so they wrote the rule as "no screens" and then removed posts involving any screen anywhere, even if the screen wasn't the focus, like a cool breakage pattern. It's a sad state of existence, salivating over such a tiny amount of power that you ruin a subreddit for everyone by removing valid posts. After having had to deal with the bad mods that ruin the site, I honestly wish subreddits had no mods at all, with only spam removal allowed.

The paper (and I only read the abstract) is interesting, but as others have noted, fairly obvious.

One thing that seems to be an assumption is that the "company" needs to provide the explanation. I think it is even better if the user provides the explanation. The assumption a user can't provide it is probably because we've all seen Terms of Service agreements that are totally opaque.

Back in the day when I was doing a bit of admin work, I decided to simplify our TOS, and then when I had to block someone, I just kicked the ball back into their court: "If you would like your account restored, please point out in the TOS what rule you violated." It worked better than expected. People that cared enough about their access to the system usually figured it out pretty quickly, and we got the knowledge that they actually read the TOS to some degree. They got their account restored and that was the end of it. Repeat offenders at that point were willfully causing problems, so we just left them blocked.

Obviously this only works if a human can understand your TOS. Another interesting line of questioning might be "at what complexity level is your TOS useful in shaping behavior and where does it just become a legal shield."

I'm going through a related situation right now, but I'm the one who is the recipient of the negative action.

I posted a video of me playing the piano on YouTube. I got a copyright notification, that I was playing the melody to a song that someone else held the copyright for.

What's the problem? Well, the melody was published in 1886 (133 years ago) under the exact name identified in the copyright claim. The composer died in 1901 (118 years ago). It is not under copyright protection in any jurisdiction! Now, I'm having to appeal the copyright claim... not to YouTube, but to ASCAP (the company who is claiming the copyright in the first place)!!!! In fact, because it was MY arrangement and MY performance and MY production, I own the copyright to that video in every way legally recognized! In my mind, this is THEFT... from ME!

My point is, if YouTube had not at least told me what I'm being accused of, there is no way that I would have figured this out! I haven't done anything wrong! Someone else (ASCAP, ICE_CS) is fraudulently claiming copyright!

Under your system, I would have to "invent" things to confess to.

Of course, now the problem is that I have no power in this situation. ASCAP must agree that they don't want to monetize my video, and they have no incentive to do that. I have no protection or recourse. :(

And, for anyone who's interested, this still isn't resolved.

>I have no ... recourse.

In theory, you could sue ASCAP for damages and/or injunctive relief. Perhaps for libel, for grossly negligently communicating a false and disparaging claim about you to YouTube. Perhaps for tortious interference with business relations. But unfortunately, winning such a lawsuit likely requires an expensive lawyer, and sometimes you can only get as much justice as you can afford to buy.

Exactly why I described it as having no recourse. :/

I argue this is different for a *simple* "human readable" ToS.

Absent any clear infringement, you can go through the terms of service and rule out every single one, then on a second pass come to a list of the ones you're less clear about, and ask whether you are mistaken about one of those portions and/or some other area where your conceptions are blind.

At that level of genuine effort at the very least you've proven an attempt to understand (and maybe that the ToS got a little over-simplified in one or more areas, or that moderation was wrong).

Similarity detection software is unable to distinguish between copyright infringement, plagiarism, and accidental rediscovery.
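The point about similarity detectors is structural: they score surface overlap, so a deliberate copy and an independent rediscovery of the same melody produce identical scores. A minimal sketch using n-gram overlap (the note values and function names here are invented for illustration; real systems like audio fingerprinting are far more sophisticated, but share the same limitation):

```python
# Sketch: a naive n-gram similarity score. It returns the same value
# whether the second sequence was copied or independently recreated,
# because provenance and intent are simply not inputs to the function.

def ngrams(seq, n=3):
    """All length-n subsequences of seq, as a set."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard overlap of n-grams; 1.0 means identical surface content."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

melody      = [60, 62, 64, 65, 67, 65, 64, 62]  # some public-domain tune (MIDI notes)
copied      = list(melody)                       # a deliberate copy
rediscovery = [60, 62, 64, 65, 67, 65, 64, 62]  # same notes, arrived at independently

print(similarity(melody, copied))       # 1.0
print(similarity(melody, rediscovery))  # 1.0, indistinguishable from the copy
```

Nothing in the score distinguishes the two cases; deciding whether a match is infringement requires information the detector never sees.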

I understand that problem, it's just that it is impossible for them to own the copyright that it matched against.

I assume most TOS are just a blob of legalese, and not a "this is what it means to behave on this server".

It feels like the trend has been in the other direction. In most services I've interacted with recently, the service often doesn't even tell you that the content has been removed, let alone why. One example is Discord - if a mod removes a comment you made, it just disappears. You might try to repost it, thinking it failed to send. Similarly, if you are removed from a server, it simply disappears from your list. This kind of thing is very frustrating/confusing - not that I expect a mod to necessarily provide an explanation (though that's nice), but at least the service saying "hey, that got removed" prevents me from trying to repost it.

The usual explanation involves spammers/scammers. Something like, if they know they got removed for violating some rule or the other, they can use that information to work around it in their next attempt. But for actual human users it can be very frustrating.

It also only works if they agree that they actually did violate your TOS.

I assume the response "sorry, I looked at your TOS and don't see the problem" would not result in getting their account reinstated.

This assumes we blocked people for no reason just to engage them, or that there was some automated blocking or something. At the time, all this stuff was manual, so once someone was blocked, it was because there was a real problem.

Now, every once in a while we would get the "Johnny logged into my computer and did it, not me." For technical and privacy reasons, we can't really block people, just computers. So, yeah, they stayed blocked.

Is it perfectly fair? Nope. But simplifying a TOS definitely cut down on arguing with people, which was a win for us.

That's probably a good thing. If people can't figure out which rule they broke, then they don't understand the rules, and they'll probably break them again soon.

(of course, if everyone complains that they don't see the problem, then maybe it's the mod that doesn't understand the rules, or the rules are ambiguous)

Or, it's possible they should be able to appeal it because the company got something wrong and the violation isn't there. Besides, this feels excessively paternalistic.

Sounds like the old cop tactic to get users to confess violations you didn't know about...

I was about to comment to say the same, haha. It's pretty nasty. You have a broken blinker, and now you're going to have more trouble for speeding too, because you guessed wrong.

The originator of the action should be the one explaining it (even if the decision making is automated and the decision-making system is not open). That seems pretty obvious, as no one else can really know; they can only guess.

Moderator of a large sub here. I hate removing content that violates our rules because of the pitchforks that come out. But it's like baseball, and you're the ump. These are the rules, you broke the posted rules (and they're not wishy-washy rules like /r/politics or /r/videos has), and here's the rule you broke. Most of the time you don't have to eject anyone, but sometimes you do.

Clearly pointing to rules violations also helps other users understand what to avoid doing. It hurts when people are trying to toe the line and get away with violating the spirit of the rules, but the large-scale impact of just consistently going 'deleted because of rule 3 violation: xxx' works pretty well and has been proven on ancient forums like Something Awful where bans and suspensions and post deletions all have clear, publicly listed rationales.

Yeah, most forums I used to be active in were very transparent about why stuff was deleted, while e.g. most subreddits are not. I vastly preferred the transparency of forum moderators.

You shouldn't be in that position. You are a user; Reddit shouldn't put you in a position where you have to do their job for them.

At the very least, you should get a cut of the ad revenue your sub brings in. No technical reason they couldn't do that, and it would incentivize you to behave, as well.

Well I enjoy the topic I moderate so that's why I do it.

How does the power you have over people who are otherwise identical to you play into it?

Because I put in the time to make the sub what it is. Weed out the bad actors, update the theme, etc., etc. I donate my work for the betterment of the community. Who watches the watchmen? If not me, then someone else could do it, sure.

It seems so obvious that it's a pity it wasn't studied before.

This doesn't just apply online: ask a child to solve a problem on a blackboard, then erase it without saying a thing while leaving the answers from other kids on it. You'll see that nobody likes to be "corrected" without an explanation.

No one likes to be corrected without an explanation. That part is fine. But explaining takes moderator time as well as providing value to the user, and some users try to get past moderation by overwhelming the moderators. Spamming.

The automated removal can also come with broad reasoning in response, and that doesn't overwhelm the system.

"Violating Policy X" (where X can be something like Twitter's Rules, not necessarily something specific) is still a terrible explanation, but it isn't nothing either. Vague enough that spammers can't use it to game the system, specific enough that users can guess they did something wrong.

I disagree that this is any better, because the terms can be so broad as to not have any meaning.

E.g. I was blocked from Facebook recently and all it said was "you are ineligible, refer to our terms", but reading through them doesn't help at all.

If it isn't "specific enough the user can guess what they did", like in your case, then it's a clear failure. So is a lack of a transparent appeal process.

Automation-assisted, not automation everything.

I wasn't advocating for blanket terms that mean nothing. You need to provide the user with something meaningful.

In the case of places like Reddit, there are add-ons like Mod Toolbox that place canned messages into the removal workflow, so there's little reason to ever silently delete things.
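The Toolbox-style workflow amounts to a lookup from rule number to a canned reply. A minimal sketch of the idea (the rules, wording, and function names below are hypothetical examples, not Toolbox's actual API or any real sub's rules):

```python
# Sketch of canned removal messages keyed by rule number, in the
# "removed for rule N violation: <summary>" style mentioned above.

RULES = {
    1: "Be civil: no personal attacks.",
    3: "No screenshots or image macros.",
    5: "Posts must include the required flair.",
}

def removal_message(rule_no, appeal_url="https://example.com/modmail"):
    """Build a user-facing removal reply for a given rule number."""
    rule = RULES.get(rule_no, "See the subreddit rules in the sidebar.")
    return (f"Your post was removed for violating rule {rule_no}: {rule}\n"
            f"If you believe this was a mistake, message the mods: {appeal_url}")

print(removal_message(3))
```

Because the messages are templated, replying with a reason costs the moderator a single click rather than a hand-written explanation each time.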

> Additionally, explanations provided by human moderators did not have a significant advantage over explanations provided by bots for reducing future post removals. We propose design solutions that can promote the efficient use of explanation mechanisms, reflecting on how automated moderation tools can contribute to this space.

Obviously the rules need some flexibility to solve special cases. I do not think anyone has suggested that you need to provide a unique hand written explanation for the 100th repeat offense. At that point just delete the spam and ban them.

BTDT. But I've also spent time thinking "uh, is this someone who warrants a polite explanation, or will that be a mistake?" It's not easy. Spammers will try to exploit anything you do, anything at all.

Comment I saw about the matter from a mod on Reddit:

>As a moderator, I will sometimes send a message to a poster whose post is removed. However, if it is "commercial spam," I don't bother because we both know why.

Sometimes redditors comment without understanding that they broke the rules. Sometimes redditors spam and fully know what they are doing. In the first case, a message telling them why is helpful. In the second case, it's not.

I personally (as an active Reddit user) feel that interaction with the mods there is always in a negative context. On /r/technology, for example, you can get banned without any specific reason, and messaging the mods won't do anything.

A few weeks ago I made a silly comment in HN on a post that reached the front page. My comment got a few downvotes and one of the mods sent me a message with what I did wrong.

I then went one step back, understood that I had written something against the rules of HN, and sent the mod an email with an apology. He replied almost instantly.

That's one of the reasons I prefer HN over Reddit. It seems like the mods are not here to punish, but to create a healthy conversation infrastructure and to lead users who are not into the HN spirit yet onto the right path.

I agree that even though the HN moderation tends to be more heavy handed, it's 100% beneficial for the community. However I doubt that the HN approach would scale well for a site like reddit— once you have dozens or hundreds of people doing moderation it gets hard to vet someone. Eventually you're gonna get a jerk moderator that goes on a power trip. I was a fan of how slashdot did "metamoderation" but I don't know if it was actually effective.

Could not disagree more; the community is an echo chamber because of how poorly dang treats dissent, and more importantly how easy it is to silence alternative viewpoints here.

As a result, growth doesn't happen here, only self validation. Try taking a nuanced view on privacy if you disagree.

Can you be more specific about the “dissent” and the “nuanced view” parts?

People will flag a comment they don't like, hop on multiple accounts to spam downvotes, will downvote everything a person writes (via their comment history), all kinds of malicious behavior on HN if someone posts something they disagree with.

It doesn't take many of those people to ruin the experience for others, and dang doesn't lift a finger A) because he's of the opinion that disagreement amongst users is bad for HN and B) the software running HN is old and not sophisticated enough to detect bad actors.

I had to abandon my other account recently because I'd get hit with 10-15 downvotes in the span of 3-5 minutes, multiple times a day as I was using the site. Many hours of minimal activity and then boom, minus 15 karma, all at once, corresponding precisely with the number of comments I had that were votable (less than a day old).

Not sure how to describe a nuanced point of view though, it's a point of view that loses critical fidelity when generalized. If the generalization falls into the "dissent" bucket, it's then given the above treatment by bad actors on HN, which has the effect of only allowing a single specific viewpoint to exist on HN, because "flagged" messages aren't displayed at all, and downvoted comments are literally hard to read via fading.

I meant specific about the content of your dissent.

Dissenting opinions are basically just different from the majority. Anarchist opinions are like this, vegan opinions are like this. Fascist opinions are also like this. Just saying it’s dissent is not too informative.

Informative? Seems like you want to mitigate my theoretical dissent, to be honest...

Whether I'd want to mitigate it is entirely based on the content of your dissent. That's my point, really.

There's all kinds of dissent which should absolutely be moderated away on a forum like HN.

There seems to be a general trend to avoid explanations in a lot of areas. For example, you often get no feedback after job interviews. Google just canceled my Drive subscription after 10 years without telling me why, but they offered to let me sign up again. No idea why, and support wouldn't tell me.

How are people supposed to learn and improve without being told what they did wrong?

I think the reason nobody tells you anything anymore is that doing so would open you up to liability. It's probably similar to why you're not told the reasons for not getting a job, even though it's somewhat unfortunate that useful context is lost…

The other reason is to avoid long arguments.

We used to give specific details for things (i.e., the price is going up because of x). People INTENTIONALLY misunderstand, selectively pay attention, and argue endlessly.

Same thing with job interviews and hires. Imagine a potential hire for a client-facing role with poor communication skills. If asked point blank about any areas of challenge in their application, you might say this was a high-pressure, customer-facing role, and so communication skills, as described in the job post, would be a big factor in the evaluation. You might then be repeatedly followed up with, both re-assured by them that they have excellent communication skills (with lots of bad grammar, spelling errors, and terrible tone) and threatened for evaluating them on their skills in this area (which may overlap with various protected classes).

People sometimes take the worst possible interpretation of any action and nitpick every explanation. Read hackernews for plenty of examples here.

In many cases, particularly if someone is not paying you, it's really not worth getting into it with folks who love to argue endlessly. They usually have FAR more time than you do, and if you have 20 people arguing with you, an entire day can be lost dealing with them.

I understand your point, but don't you think there needs to be some middle ground between not offering any follow-up at all and arguing endlessly about the decision?

If you send an email with some reasons, you're not required to spend hours defending your position. But giving some feedback is beneficial to the applicant, and potentially also to yourself, because you're forced to think about the reasons and keep a record, which might also be useful.

The problem is the downside risk and law of large numbers. The downside is huge and the upside is tiny in terms of engaging at all.

You screen 100 resumes - you talk to 20 people. If you give quick feedback in the response for those 20, and do 5 positions in a year, that's 100 quick pieces of feedback - I agree, that would be totally useful.

Didn't have the required license, communication skills were weaker, not familiar with the industry, not local to the area, etc., etc. I could do these very quickly.

BUT it only takes ONE person tweeting and being offended by the response to cause huge problems even BEFORE you get to litigation risk and folks going back and forth on topics.

Imagine this - even job references in the US have gone to almost no content. "Giving a negative reference may expose the company to legal liability if the former employee does not get a desired job and decides to sue for defamation or slander. But providing a positive reference or failing to disclose potentially damaging information can leave the company open to legal liability (negligent referral) as well." - Result - again, just very little info is disclosed.

Tragedy of the commons. If 99 people will endlessly argue with you and 1 is rational, you'd just do a blanket ban on giving explanations. I've frequently found that you just need to get people started and their rant will probably never end. That has always been true on the Internet. It has become truer in day-to-day life for me too. At a certain point, you just want to switch off the tap.

But do you actually have to argue back with those people? If I had to give feedback to a candidate, I would just send the feedback, read their response (if I cared to), and then set an email rule to auto-forward all of their future emails to a separate folder that I keep for stuff like that and never open.

Just realize that one person could result in you losing your job.

Sometimes folks have an axe to grind or lots of time on their hands (unemployed).

In the case of unemployment, that "or" could very well be an "and". Every day of being unemployed grinds that axe sharper and sharper.


I used to work at a prominent venture capital firm where I started an initiative that required everyone on the investing team to respond to all inbound emails from founders, even if the reply was "Sorry, this isn't a fit for us." We tried for several months to respond with at least 2-3 sentences about why we passed on companies, if a founder ever asked. About 10% of founders said "Thank you, that's useful" and moved on; another 10-15% straight up said "You're assholes" but moved on. The remaining majority were just incapable of understanding what we were trying to say, since they were so blinded by their self-belief. For example, I remember emailing one founder of a poker game app back to say we didn't invest in gaming, only to receive an angry email saying poker isn't a game, it's a social activity. Thinking it would help, I replied saying, "Hey, thanks for the note. Really, our issue is that gaming apps, and apps where the primary social activity is gaming, are very hits-driven, and we don't think we have the necessary experience or desire to predict hits in this space." He then proceeded to tell me why I did in fact have the skills required, even though literally no one on our team had consumer or gaming experience, and that I did in fact also have the desire to predict hits in this space - how could I not? "I was a VC after all" (direct quote).

A few months later, cold inbound emails that were passes went straight to archive...

When I've done job interviews I've always given feedback to candidates. In fact, when we were actively trying to sponsor visa positions we legally had to record exactly why a local candidate was not suitable for the position. We gave that feedback to the candidate and also added some information about how the candidate might improve their pitch for other jobs. For example, if we felt that the candidate was overstating their experience in an area, we would tell them this and explain why we felt that way.

Occasionally we would get questions back. There is a point at which you have to stop answering questions, though. We're not a tutoring service. At that point we'd just send something like a "We don't have any more detailed feedback other than what we've given you. Good luck on your job search" kind of response. We never got belligerent replies from candidates, but sometimes got them from recruiters.

The recruiters are the real problem, because they will sometimes (some of them even often) demand "partial payment" for candidates that they thought were "good enough" but who we rejected. We had to write a few strongly worded "Our decision is final and we believe it is justified" letters to those recruiters. Saying that we wouldn't accept any more candidates from them if they persisted shut most of them up, but not all of them.

Eventually we gave up using recruiters anyway because they were essentially giving us random candidates. I think if you can find good recruiters and you can build up a good rapport with them, this kind of feedback is as useful to them as it is to the candidate. If you can't, then you are better off without them.

So, if anyone is wondering if it's worth doing, I would say that my experience is generally positive.

I am not so sure it's only about liability. Maybe with job interviews, but in other areas it's just cheaper to have automated systems that do the rejections and cancellations. Having support staff that can explain and even override decisions costs money. This is undesirable, so it's better to lose a few customers.

I think we’re at risk of building up the ultimate faceless and inhumane bureaucracy that works well in most cases but if you have bad luck then you have no way to clarify things unless you have a lot of money for lawyers or can raise a stink on social media.

I think that's closer to the real reason. As a culture it seems we're more and more frequently engaging in selfish "optimizations" that are really just creating or ignoring negative externalities.

Corporations are especially guilty of this, given how pathologically focused they are on shareholders. Often their cost-cutting verges on customer abuse (e.g. https://news.ycombinator.com/item?id=21513556, for an example I recently read about).

Have you ever worked for a company and felt like most of your time was spent negotiating the misaligned incentives within that organization? One aspect I found in orgs like that is that they rely on a culture of isolation or conflict between departments.

The worst example of this I experienced was a purchaser buying parts that were not to the spec of the design, which caused problems in the field. No one in engineering or support was told of this; it was discovered by our support team in the field. Then engineering had to step in and say no, only to have to economically justify the increase in cost to meet the spec.

We hurt our customers, our product and lost future revenue due to a tarnished reputation. It is difficult to be proud of what you do when something like this happens. It is also unnerving to know that this could even happen in the first place.

All of this is due to short-sighted economics. Negative externalities be damned. Could it be better if we could measure negative externalities in dollars and cents? How does one economically measure trust?

When I used to do engineering that required procurement, we would have a checkbox marking a part as a critical component that needs engineering approval to substitute.

Even for job interviews, liability isn't really the reason.

As a general principle, conveying true information does not incur liability. In the specific case of job interviews, as long as the feedback is true and not of a legally discriminatory nature, there are no grounds to sue, and such a case would be thrown out by summary judgement. Any employment lawyer would laugh someone out of their office who came to them with a "case" like this.

I'm inclined to think it's primarily for the reasons mentioned elsewhere: avoiding back and forth arguments. Frivolous lawsuits from _pro se_ plaintiffs might be a minor consideration, but, as I said, those are easy to get thrown out, provided you're telling the truth and the reason isn't related to any protected classes in employment.

An automated system can still explain why it's making this decision. Sure, for machine learning that's an open research question, but for most systems you actually find in charge of these tasks it's trivial to give useful information. "Your account was terminated because of unpaid invoices"; "Your application was automatically rejected because you failed to meet our education standards"; "You have to reset your password because your company's admin changed the password requirements"; "Your order was canceled because you didn't pass our credit check"
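Each of those messages falls out naturally if the automated check returns a reason code along with its verdict. A sketch of that pattern (the checks, field names, and wording here are invented for illustration):

```python
# Sketch: a rule-based decision that always carries its reason, so the
# user-facing rejection can say *why* instead of just "denied".
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    reason: str  # user-facing explanation; empty when allowed

def review_account(unpaid_invoices: int, passed_credit_check: bool) -> Decision:
    """Evaluate hypothetical account rules, returning verdict plus reason."""
    if unpaid_invoices > 0:
        return Decision(False, f"Your account was terminated because of "
                               f"{unpaid_invoices} unpaid invoice(s).")
    if not passed_credit_check:
        return Decision(False, "Your order was canceled because you didn't "
                               "pass our credit check.")
    return Decision(True, "")

print(review_account(2, True).reason)
```

The design cost of threading a reason string through the decision path is tiny compared to a support department; the reason is known at the moment the rule fires, so it only has to be surfaced rather than reconstructed later.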

I suspect that often the reason for not getting a job is simply that there were more qualified candidates than positions. In that case, there may be no reasons to give; someone else was selected but your application wasn’t deficient in any way.

It's the same thing with speaking submissions for conferences, for example. Sure, sometimes submittals get flatout rejected because the abstract is incoherent, it's not an appropriate topic for the event, or it reads like a sales pitch. But far more aren't rejected so much as they're not accepted for any of a host of reasons.

That said, providing explanations also invites arguments and hard feelings if they don't agree with your reasons or just think you're unfairly favoring someone else.

This is certainly a common reason not to get a job. Anecdotally, I've never been given a reason for rejection other than this one. I can't say they've always told me when it's the real reason, for obvious reasons, but I can say it's the only reason that's ever been provided, and it's happened more than once.

That would be totally valid feedback if it were possible for the candidate to interpret it at face value. Given that effectively no companies provide substantive feedback, even if "there was a more qualified candidate" is the reason, any reasonable candidate is going to interpret that as "sorry, we can't tell you the reason."

My theory is that companies tend to not have an actual process, or that if they do it still ends up in the bin of human biases that their process can't quantify.

There are probably obvious things like "crashed and burned on a whiteboard path-finding problem" or "requested more than was allocated for the position" (it would be nice if possible compensation ranges were required in the posting); but past the obvious disqualifications (which candidates should get feedback on), there's that "fits with the (a) team" phase. Having an "exit value", even if generic / approved by legal, would be helpful for everyone, including internal company metrics that might drive a better posting if the job isn't filled.

I've been told not to provide reasons for liability concerns. I'm a teacher by nature and it's so incredibly frustrating on both sides, but I wanna keep my own job so I do what I'm told...

Well, it would be nice to know how others are more qualified so you can remedy the situation.

It may not be that they are more qualified. I was hired at a job and they told me afterwards that other applicants were more qualified, but those applicants lived in a different state, while I lived a few miles away, so they decided to go with me instead so they didn't have to deal with helping the other candidates move.

> I think the reasons nobody tells you anything anymore is because doing so would open you up to liability.

Google has no legal obligation to provide you with Drive services, though. I believe they specifically disclaim any obligation in the EULA you agree to.

Google in particular seems to have a broad and well-established policy of refusing to talk to individual users, whether for bans or simple support, outside of a few specific categories. Someone did the math and decided that the cost of maintaining a review process and a customer service department was greater than the cost of the bad press from accidentally screwing over a few customers.

A few days ago there was a thing where someone had their Google account deleted without warning or recourse--including their Gmail--because they spammed emotes in someone's YouTube stream.

That's intolerable. I need to look into a convenient way to maintain a live local backup of my Google accounts; I've trusted Google for years to keep my data safe, and it seems like I've been naive.

This is why I don't rely on Google anymore. I do have a Youtube account separate from the Gmail account I used to use, but I don't trust Google not to figure out it's the same person and simply terminate both if the Youtube account does something they don't like.

It really is amazing how much power they have. In 19th century terms, if you're an ass to people at the tavern they can raid your home and burn all your personal documents and correspondences.

I think a better 19th century analogy would be: You chose to keep all your personal documents and correspondences in the basement of the town tavern. Then some time later you have a disagreement with the tavern owner and he bans you from entering, preventing you from accessing your stuff.

I think the lesson is when you rely on hosted services, you’re denying yourself the final say on who has access to your stuff and giving that say to someone else. Make sure you trust that entity.

You're looking at it from the point of view of technical details. I'm looking at it from the point of view of social norms. Even my technical friends think it's weird that I don't want Google having my only copy of emails, pictures, and increasingly documents. Your average user has no chance.

> Google has no legal obligation to provide you with Drive services, though. I believe they specifically disclaim any obligation in the EULA you agree to.

That doesn't mean that the EULA is actually legally binding.

Nations with tougher consumer protections may actually allow recourse when Google decides to break a contract without what would be considered "good reason" under those same protections.

I am skeptical, because I see this behavior in my country too, where opening yourself to liability is not a concern. Maybe one could argue that it is due to cultural influences from the US, copied without the original context, but... eh, that feels pretty weak.

Liability and "speaking for the employer" aside...

When I was first starting out, I found a job posting that I really wanted, so I applied and interviewed. I felt the interview went pretty well, but I got a call back that I wasn't what they were looking for. It was a small company, and the head of IT called me back to give me the bad news.

I really wanted the job, so I asked why. He told me I was too junior of a developer, and didn't understand OOP well enough. Being young and cocky as I was, I straight up told him I disagreed (looking back nearly 15 years, he was totally right). I didn't recall talking about OOP, per se, in the interview, and I had nothing to lose at this point, so I pressed the matter. I doubt it was OOP specifically that concerned him; more likely it was a proxy for my inexperience. Either way, somehow I managed to get him to ask me about OOP over the phone. We talked a little about the vague concepts, and I have no fucking idea what happened, but he changed his mind and offered me the job. It was a great job, and I quickly learned how inexperienced I really was, but I grew a lot.

What am I getting at... in forums, you still _want_ them to participate, and come back. It is the opposite in an interview, you have essentially ended the relationship. Offering feedback starts the conversation again, and gives punks like me a chance to drag it out, disagree, and waste your time. We are just as likely to disagree with your assessment, if not more so, than to take it to heart.

Thank god he gave me another chance, I loved that job

I like how your story is both a lesson to really push to get what you want, and a lesson to not ever give people the opportunity to do so.

It depends on how the candidate treats it, and it definitely warms my heart when I get actually useful feedback from failed interviews. Though I definitely understand it isn't sustainable for a company to do that, since for every reasonable person, there would be 10 who would argue to death why the interviewer was wrong.

Surprisingly enough, one of the places I least expected to get useful feedback from wasn't some small informal start-up, but the fintech giant Citadel. I already had a feeling about my weaknesses that led to failing their onsite, but their rejection email (with the feedback) helped me way more than that. Not only did they point out those specific things I knew about already, they also pointed out a few others that I had missed (that were all true) and gave me something actually useful to work with. Just wanted to express my gratitude for that, because I definitely (at least partially) attribute my improvement in those areas to that email.

You make it sound as if he regretted hiring you. It's a pain in the ass to find people you want to hire and I'm always relieved when we successfully fill a seat. If someone can convince me at the last minute then more power to them.

You are right, I didn't mean to imply that. But I did feel that in that moment on the phone, he regretted giving me a chance to speak (but I have no way of knowing that... maybe he knew exactly what he was doing). I genuinely didn't expect him to change his mind but I didn't want to make the same mistake, whatever it was, in my next interview.

In the end, it worked out for everyone. I spent 11 years there, finally leaving after we sold the company, and still keep in touch with many of them.

I think it's simply that the intrinsic act of telling someone why they failed to get a job is likely to be unpleasant for the person doing it and that person has absolutely no incentive to do it.

Why would they do an unpleasant task they don't really need to do?

I’m going to take this opportunity to call out Airbnb as having the best rejection experience I’ve ever gotten as a candidate. After a day of onsites (I think 9 interviews including the lunch culture interview), the recruiter called me at 6 PM and told me I was being rejected, and which of the interviews I failed (which matched my perception that I didn’t click with that interviewer). It left me with a positive feeling toward the company.

People think they will be exposed to liability and think they will get more lawsuits.

It doesn't mean that their current behavior actually reduces risk.

This. I've seen so many ass-backwards approaches to the liability boogeyman where companies opened themselves to much greater liability issues because they were afraid of liability. Things like insisting on using a padlock to lock an emergency exit from the outdoor patio of a pool each night even though it could only be opened from the inside if someone already jumped the fence to get in. Staff would then routinely fail to unlock the exit, but they were afraid of some convoluted scenario in which someone would jump the fence to let other people in who wouldn't otherwise just jump the fence, and then drown in the hot tub or something.

“Because liability” has always seemed like a too vague and dismissive cop-out. You might as well say “because wizards”. Assuming a company’s actions are actually legally above-board, I’d love to hear the articulable reasoning behind why being transparent about these actions adds legal exposure. If the company is doing things that are not legal, then yea, of course being transparent is risky.

When a company says, “we need to be opaque and evasive because liability” I immediately now assume the company is up to no good.

I think the liability is so remote as to be a fake reason. It's a cop out to get out of doing it or even to look into doing it.

Employment claims are perhaps one of the top liability concerns of almost any business based in the US at least.

The probability may be remote, but the impact of a hit is high, therefore it gets a lot of weight despite the low probability.

Another reason is security. Telling the user explicit reasons for account deactivation could open you up to more sophisticated attacks. Of course this isn't a huge concern in most situations, and it can sometimes be less of a concern for older, active accounts. But this is definitely an issue in banking.

This sounds like a proxy for money. Using a bad heuristic with a high false-positive rate and not talking about the reasons behind the false positives lets an organization avoid paying the costs of fixing the heuristic.

I wouldn't say it's the result of bad heuristics. It's coming from not being able to fully trust your user and Goodhart's law. A small minority of your userbase might be extremely motivated to attack you, and giving them explicit reasons for your actions will just make your security policies ineffective faster.

The other reason they don't tell you is because they don't know. It's automated systems making automated decisions sending out automated notifications all the way down.

I've gotten interview feedback before. It's less about liability, and more about companies not giving enough of a shit to provide feedback.

> How are people supposed to learn and improve without being told what they did wrong?

To shove this back into the 'online community' context, there are certain people who have no interest in learning what they're doing wrong; or even who fully understand that what they're doing is wrong, and have no interest in improving.

Examples: spammers, trolls, and the willingly-obtuse-slash-rules-lawyers (who are slightly separate from trolls, but are similarly un-educatable).

The point of the non-disclosure is to prevent people from gaming the system by abusing specific rule sets.

Yeah, we've seen this play out where rules are made explicit. Think of all the people who want to speed, but know that they can avoid tickets by driving no faster than the posted limits!

There are definitely forums that don't inform you of why your post was removed because they think that trolls will use this to basically probe the defenses to see exactly how much they can get away with. Making the moderation more obscure is thought to make them more reluctant to try things. I personally don't think this works, but it is the theory.

However, it is important to consider how bad actors will abuse your system if given the chance, because on the internet there will always be some trolls trying to burn your house down.

This really depends on if it's a learning moment for closing the Overton Window (#1) a little or if it's something clearly egregious.

A community is better served by making gentle and public redirections that all can learn from where possible.

#1 https://en.wikipedia.org/wiki/Overton_window

> For example, you often get no feedback after job interviews.

Because sometimes you're not hired because the person interviewing you can't imagine spending eight hours a day sitting across from someone who laughs like you do. We have increasingly open offices and then pretend that jobs are won and lost based on depersonalized notions of merit, completely without reference to the individuals who have the merit. Since we have to keep up the pretense of professionalism in the workplace, and can't, for example, walk over to someone and say that, while laughing is perfectly laudable, your specific laugh reaches into my skull and attempts to pith me by slow degrees, it's better to head those things off early.

Can confirm, have seen that exact thing happen.

With interviewing I think it's a bit different. At least in my mind it is. As someone who has interviewed people over the years, I don't give feedback because of the way nearly all human beings operate. Specifically, if I were to tell someone, for example, that they came off as abrasive, or that based on the things they said in the interview it sounded like they were a bit too controlling or had possibly micromanaged previous employees and projects, most people will take that feedback and simply try to mask it in an interview rather than actually trying to change their core personality. Meaning the changes are just for the sake of appearance, and then you get a nasty surprise when they start working for you and only then do you find out who you actually hired. I'd much rather not give feedback for this reason.

Some of the biggest surprises and worst hires I've seen are people who are extremely slick in interviews. My mother has similar stories from working for a major financial institution for 35 years. She consistently told me growing up that half the time the best-educated folks were all super slick in interviews and stellar on paper, and then lazy as hell on the job.

Risk of having a statement misconstrued or taken as a much bigger deal than it is is why people (who want to actually land a job) are so fake in interviews to begin with. It ends up mostly being a test of whether you can put on your "interview face" without letting it slip (any slip must be the tip of some awful iceberg, surely!) for the desired amount of time. Which is why the "slick" folks who aren't good get through while the non-slick but good person who slipped up and let something genuine and not-at-all-a-big-deal-actually get through is lumped in with the "abrasive" folks (any hint of something is taken—not entirely unreasonably, to be clear!—as proof of a problem) and doesn't get an offer.

The actually-good end up having to do precisely the covering-up you mention, because genuine and entirely fine attitudes and behaviors are, in interviewers' imaginations, often magnified to their worst possible extremes. It's very valuable information for a good employee to have that they need to cover up or avoid certain things in interviews that aren't actually a problem, because they do need to do that, and they may not realize how things that they think aren't a big deal and in fact are not are coming across in an interview context.

I write this as someone who is, I gather from feedback, pretty decent at that part of interviewing. Doesn't make it less gross-feeling and stressful.

I think people just don't want to have other people argue or get offended with them over the reasons, like on dating apps people just straight up ghost instead of having a potential confrontation.

If they told you what you did wrong, you'd come up with better ways to do the same thing without detection.

Google's detection system is weak. Really everyone's is. So they use arbitrary ways to detect you have done something wrong. Sometimes it is a false positive. Sometimes they didn't think it through.

Like the guy who visited Iran and used his online account there only to get locked out because of being identified as belonging to a country on the export control list.

I worked at a place where we passed on a candidate in favor of another for essentially being the wrong race.

No point in giving that kind of feedback.

> ..., you often get no feedback after job interviews.

There is nothing new in this.

Interesting, some of the topics in this paper remind me of this study on the old Elitist Jerks WoW forum: http://james.howison.name/pubs/bullard-howison-2015-elitist-...

The forums were notable because they were VERY strict, but also every single removed post, suspension, or ban got cross-posted into a specific forum for all to read. For users who weren't trying to offend, this taught the same lesson the paper here describes: how not to do it again. For everyone else it served as entertainment while the mods hammered down trolls.

Giving reasons for actions taken seems like it should be common sense, yet in many areas it just doesn't happen, which only muddies the underlying issue.

A case in point: I recently opened a bank account and needed to verify my identity by taking a photo of my ID and my face. I did it at least five times and always got a generic answer that the verification was unsuccessful. I thought the problem was the bad quality of the camera, or a difference between my appearance on the ID and in the photo, and only later did I realise that the ID had already expired. Why didn't it tell me the first time, so I could use my passport, instead of letting me try over and over again?

The consequences here are also higher as the purpose is to avoid identity theft. At some point, I think there is some merit to obscurity. Although it certainly can result in a frustrating experience for legitimate customers.

It's an interesting paper, for sure. I think this only works in reality if 1) there's a clear reason the content was removed that a user could take steps to avoid in the future, and 2) you're willing to admit that reason to the user.

With so much content moderation these days coming from machine learning (violates #1), personal vendettas from human moderators (violates #1 and #2) and quasi-legal threats from third parties (violates #1 and #2), there's not much room for user education left.

The HN mods generally try to explain why they remove unacceptable comments, if the comment seems like it's from an actual person trying to participate. Also, we summarily one-click-delete lots of spam.

Those would also correlate "telling why content was removed" with "reduced future issues". But the causality goes the other way. Users that are likely to participate better in the future are more likely to get explanations, while spammers get nothing.

> The HN mods generally try to explain why they remove unacceptable comments, if the comment seems like it's from an actual person trying to participate.

This is a good policy, thank you for doing it this way.

Personally I feel that something along the lines of an explanation should also go with downvotes. When I'm downvoted, I would like some feedback on what people didn't like about my post -- do they think I was rude? or did I write something that is wrong, or at least, would need a source to back it up? Similarly, I would also like to give such feedback when I downvote. But downvoting and posting about it is not welcomed by the community.

On Slashdot there is (was?) a rough category to go along with votes, so you could upvote something because it's "informative" or "funny", and downvote because it's "flamebait" or... I don't remember the others. Something like this would make a very useful addition to HN, I feel. The feedback could of course be optional, and only visible to the poster, like the actual score on a post.
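A minimal sketch of what Slashdot-style reason-tagged voting could look like; the category names, the `record_vote` helper, and the storage shape are all made up for illustration, not any site's actual mechanism:

```python
from collections import Counter

# Hypothetical reason categories in the spirit of Slashdot's moderation tags.
VOTE_REASONS = {"informative", "funny", "flamebait", "off-topic"}

def record_vote(post, direction, reason=None):
    """Record an up/down vote with an optional reason category.
    The reason tallies could be shown only to the poster as feedback."""
    if reason is not None and reason not in VOTE_REASONS:
        raise ValueError(f"unknown reason: {reason}")
    post["score"] += 1 if direction == "up" else -1
    if reason:
        post["reasons"][reason] += 1  # private feedback, not a public score
    return post

post = {"score": 0, "reasons": Counter()}
record_vote(post, "up", "informative")
record_vote(post, "down", "flamebait")
```

The poster would then see not just a net score of 0, but that one reader found the comment informative and another considered it flamebait.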


People sometimes seem to think the purpose of the voting mechanism is to provide an objective evaluation of the quality of their contributions, or even their inherent value as a human being. It's not.

The main purpose of the voting mechanism is to sort the responses within a thread, so that someone can skim a large comment section by reading only the first comments at each level of nesting. If it's working right, it should include the most interesting/relevant/accurate comments.

Tom_mellior, I can't quickly find any heavily downvoted comments of yours. One that's slightly downvoted is https://news.ycombinator.com/item?id=21417799, which is kind of low-effort. A downvote reasonably reflects the fact that it doesn't belong near the top of any list of the most interesting/relevant/accurate comments on the subject matter of the paper that a busy reader shouldn't miss if they're skimming quickly.

Most of your comments are great, so thanks for your contribution!

> A downvote reasonably reflects the fact that it doesn't belong near the top of any list of the most interesting/relevant/accurate comments

Yes. But often it's not clear, and would be useful to know, whether it's the "interesting" or the "relevant" or the "accurate" part that was lacking. These are all along the lines of the categories that I mentioned. I do want to stress that I did not propose a "you are a worthless human being" category.

> Tom_mellior, I can't quickly find any heavily downvoted comments of yours.

It doesn't have to be heavy for the reason to be interesting. And for whatever it's worth, there is this one: https://news.ycombinator.com/item?id=21362694 where I was downvoted for explaining why I downvoted someone. This is OK, since the HN guidelines discourage discussions about voting. Still, I think some sort of "-1, ad hominem" feedback would have been useful for the original poster.

Are you a mod here? If so, welcome! I had thought it was still just Scott and Dan.

I can’t imagine what it’s like to moderate large sites like HN. I read it with Show Dead turned on, and I’m amazed at how many people have been shadowbanned and keep posting frequently for years with no idea that 99% of people will never see what they wrote.

In most cases it looks like dang does give them a reason when they’re banned but I don’t think many of them read it.

When we see a banned account that has been consistently posting good comments, we unban it. If you see one of those, you should let us know at hn@ycombinator.com. We appreciate users looking out for each other and always look closely at these requests.

Much more common is banned accounts that post a mixture of good comments and ones that break the site guidelines. Those, unfortunately, we can't unban, for the obvious reason that the bad comments do more harm than the good ones add value. That's why we introduced vouching: so the good comments can still make it through.

Sometimes we unban accounts and as soon as they're unbanned they revert to breaking the site guidelines. Then we ban them again and they start posting good comments again. The human heart has murky depths.

I make it a habit to check seemingly frivolously killed comments and vouch for them in the case of apparent shadowbans. You can help too!

It's a fine idea, but ISTR that if you do this too often to what they consider validly killed comments then they just tune your vouching value to zero. You have no way to know this, so it's really hellvouching.

Can’t you just look at the comment while logged out?

One vouch doesn't unkill a comment (just as one downvote doesn't kill a comment). If you vouch and the comment is immediately unkilled, then since you were probably the person whose vouch made the final difference, you probably do have a positive vouching value. It is rare to see that, however, so you don't really know your own (or anyone's) vouching value. The term "hellvouch" was intended to represent that we don't know what our vouching power is, not that we don't know whether a comment has been unkilled or not.

We can see whether someone else's comment has been killed whether we're logged in or out, so maybe I'm not following your question...

Just before I saw this I actually read this: https://www.newyorker.com/news/letter-from-silicon-valley/th...

Dumb question - how can you tell what posts are from shadowbanned people?

Turn on showdead in your options and click on the name of someone with a dead comment, their history will show all of their comments in the same condition.

I plan to read this, but after working on content moderation at Quora I have doubts. Yes, reasonable and more casual content creators like feedback. For bad actors feedback acts to draw a line in the sand that they constantly toe.

See: https://en.m.wikipedia.org/wiki/Wikipedia:Don%27t_stuff_bean...

But please don’t think I’m stuck on one side, there are good counterpoints that are similar to those about disclosing security vulnerabilities.

There are definitely different effects to the crowd and to the individual depending on the approach.


> line in the sand that they constantly toe

Isn't the problem the binary consequence, complete removal or completely left alone?

Instead, a continuous incentive gradient can push people away from the line. For example, comments on HN and Ars Technica that are downvoted are displayed grayed-out so they're harder to read and more likely to be skipped over when skimming; on Ars Technica, if a comment is sufficiently downvoted, its contents are collapsed into a stub, and you have to click the expand button to read it.

Besides pushing people away from the line, another advantage of a continuous incentive gradient instead of a discrete punishment is that because the stakes are lower, subjective disagreements are less divisive. If a comment straddles the line of violating the rules, and what's at stake is whether the comment is removed or shown, then community members will argue much more vehemently with each other and with mods than if what's at stake is whether the comment should be more or less gray, or whether the comment should be completely removed or merely collapsed.
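The graded-display idea above can be sketched as a simple mapping from score to treatment; the thresholds and treatment names here are hypothetical, not HN's or Ars Technica's actual values:

```python
def display_treatment(score: int) -> str:
    """Map a comment's score to a display treatment -- a continuous
    incentive gradient rather than a binary remove-or-keep decision.
    Thresholds are illustrative only."""
    if score <= -10:
        return "collapsed"               # stub; reader must click to expand
    if score < 0:
        return f"gray-{min(-score, 5)}"  # progressively lighter text
    return "normal"
```

A comment at -2 is merely harder to read, while one at -10 is collapsed but still reachable, so a borderline call never triggers the all-or-nothing fight over removal.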

(Zuck has discussed similar ideas about "borderline content": https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-f... )

Experiencing this now. I produced an instructional video for the non-profit I work for, and youtube took it down as "spam", won't explain why, and has terminated our account with no strikes. Why wouldn't I try to get around that ridiculous ban?

Does anyone know of any datasets of moderation logs?

Or some way to collect moderation log data for reddit?
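Not aware of a public dataset, but once you have log entries (for subreddits you moderate, something like PRAW's `subreddit.mod.log()` can supply them; for others you'd need a sub that publishes its logs), summarizing them is straightforward. The entry shape below is an assumption for illustration:

```python
from collections import Counter

def summarize_mod_log(entries):
    """Tally moderation actions by type and by moderator.
    `entries` are dicts with 'action' and 'mod' keys -- roughly the
    shape you'd get by flattening mod-log results."""
    by_action = Counter(e["action"] for e in entries)
    by_mod = Counter(e["mod"] for e in entries)
    return by_action, by_mod

log = [
    {"action": "removecomment", "mod": "alice"},
    {"action": "removelink", "mod": "alice"},
    {"action": "removecomment", "mod": "bob"},
]
actions, mods = summarize_mod_log(log)
```

From tallies like these you could start measuring, e.g., what fraction of removals carried an explanation.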

The lack of explanation is intentional because they don't want people gaming the system and getting around the filters. That's the idea, anyway.

That's why I'm not using Reddit. Mods have too much power, and seems like you can lose your account if the mod is not a fan of you

Otoh reddit mods have zero accountability; "if you don't like them, make your own sub" is basically the official stance.

Sounds reasonable to me. Reddit is not and never was a democracy; at best it’s a benevolent dictatorship - just like every other social media platform.

It costs nothing to start a new subreddit if you want different rules. And if lots of Redditors agree with you, they’ll happily follow you.

> benevolent dictatorship

Regardless, it's still frustrating. Especially when they cannot tell you which "rule" you broke, so they resort to telling you they can use their discretion to ban you for any reason (which is usually some sort of agenda that becomes apparent after you've tried to have a civil discussion about why you were banned). It's what drove me away from that platform, and I'm sure I'm not the only one, so I guess not telling users also prevents future issues, in a way.

In my experience, what often happens is that telling them the reason often results in a repeat of the same behavior, but modified only minimally in an attempt to avoid violating the letter of the rules, but not necessarily the spirit of them. This leads to a constant back and forth and is exhausting to the community and moderators.

Sometimes you just have to show boorish guests the door.

Well, you sorta posted this on a submission about a paper that indicates the opposite: telling people why something was removed leads to less content being removed and fewer other issues, so I think that might be more of a perceptual bias than reality.

Good point. :)

Then you've encountered much nicer mods than I have. My initial questions into my banning were more often than not met by a boorish mod hurling insults, even though I always kept it civil. There's plenty of subreddits putting this type of bad mod behavior on display, so I know I wasn't the only one. In fact, bad mods were so prevalent when I left, it was a daily running joke in a lot of communities.

Write a bot that has the max power, run it on a cloud server, let it change its own password and have its own paypal account so no one can turn it off, just requires donations to keep running.

Let it run on a set of rules, and count votes to change rules. Now you have DemocracyBot
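The vote-counting core of such a DemocracyBot could be as small as this; the quorum and simple-majority thresholds are illustrative choices, not anything specified above:

```python
def tally_rule_change(votes, quorum=3):
    """Decide whether a proposed rule change passes.
    `votes` maps usernames to True (for) / False (against).
    Requires a minimum turnout (quorum) and a strict majority."""
    if len(votes) < quorum:
        return False  # not enough participation to change the rules
    ayes = sum(votes.values())
    return ayes * 2 > len(votes)  # strict majority of votes cast
```

Of course, the hard parts are everything around this function: vote eligibility, brigading, and who amends the amendment process itself.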

You have sort of reinvented Nomic. I've not seen it applied to moderation before, but it'd definitely be interesting. https://en.wikipedia.org/wiki/Nomic

Bitcoin, would be easy to get a paypal account shut down.

Makes me chuckle. Democrats most times are against "at-will" situations, except when it benefits them.

(Just like every other group: self-serving.)

It's not about democracy or ideology at all. If we have the ability to design a better place for people to share ideas and discuss what's on their mind, we ought to make it as good as we can. One of the strengths of Reddit's "subreddit stewardship design" is how it allows the site to scale while still having people take ownership of each subreddit, allowing them to give it a more personal touch and better able to respond to specific community needs.

One of the weaknesses of this is how, at least on some subs, mods are able to censor discussion largely unnoticed, without being held accountable to the community. See r/declineintocensorship, r/watchredditdie, and r/yallcantbehave for several examples of egotistical mods overextending their role with no real benefit to the community.

I can't help but think that there's a way to hold mods more accountable to their communities. For example, if the modded comment wasn't spam, it could say "this post was removed for breaking rule x which states y. Click here to see the comment," with the mod's username attached.

I would also like to see statistics on removed posts, where a subreddit gets some sort of penalty for censoring too much, such as not appearing on /r/all, or becoming downweighted in some way.

> It costs nothing to start a new subreddit if you want different rules. And if lots of Redditors agree with you, they’ll happily follow you.

Which certainly doesn't violate any laws.

The unintended consequence, however, is more isolation and less engagement with ideas you don't already subscribe to or people you don't normally encounter.

This is not a Reddit specific problem of course, it applies to Twitter and Facebook or any other social media platform. But it's concerning and difficult to find any easy and obvious solutions.

As long as they put the reason inside a PDF things should be ok. That way they simply won't read it, problem solved.

This really shouldn't come as a surprise to basically anyone who has ever moderated a public community.

Yes, but it reduces your power to censor arbitrarily without needing to justify it.

The universal user rule should be "Don't be a dick!".

Doesn't work: https://www.ashedryden.com/blog/codes-of-conduct-101-faq#coc...


Not everyone understands what is unacceptable behavior, especially when we are talking about a group of people that is mostly homogenous and has very little interaction with people different than they are.

We focus specifically on what isn't allowed and what violating those rules would mean so there is no gray area, no guessing, no pushing boundaries to see what will happen. "Be nice" or "Be an adult" doesn't inform well enough about what is expected if one attendee's idea of niceness or professionalism are vastly different than another's. On top of that, "be excellent to each other" has a poor track record [link, now broken, previously described a real-world situation where people holding grudges tried to weaponize a community's sole "be excellent" rule against each other].

You may have been running an event for a long time and many of the attendees feel they are "like family", but it actually makes the idea of an incident happening at the event even scarier. If someone is new and not part of "the family" will they be believed? Will they be treated like an invading outsider?

Remember that everyone who has harassed or assaulted someone is a parent, sibling, child, or friend to someone else. We don't always know people as well as we think we do.

Not everyone understands what is unacceptable behavior?

I mean, if you go to a restaurant, a movie theatre, or a supermarket, do you know how NOT to be an asshole? It's not hard. Most people can manage it on a day-to-day basis.

I don't know where you're veering off to in your last sentence either. Weird.

It sounds like you've never worked in a restaurant, movie theatre, or supermarket, because I don't know a single person who has worked at one for an extended length of time who hasn't had to deal with people being an asshole and then arguing that they weren't being an asshole.

Of course most people can manage not to be an asshole on a day-to-day basis, but to deal with the small minority of bad apples who could ruin a whole barrel, it helps to spell out specific banned behavior in detail to quickly shut down arguments about whether a specific behavior is being an asshole or not.

The last sentence is pointing out that just because you've only ever seen someone be a nice person, doesn't mean they've never been an asshole when you weren't there.

Also, I didn't write that sentence. Everything after the first sentence is quoted from the link I provided.

I actually worked in one of the most expensive hotels in London. If people were assholes to the staff or broke any of the hotel rules (which were extremely lax WRT prostitution etc.), they would be marked in the system as a "5 star guest". Any time in the future they tried to book a room, the hotel would simply tell them they had no availability.

Ah, like requiring down-voters to comment.
