Show HN: Moderator Mayhem, a game about the difficulties of content moderation (engine.is)
221 points by randylubin on May 11, 2023 | 121 comments



In the tutorial, the very first card I was presented with was reported for being "illegal", and the description was (from memory; I might not have it quite right):

"A post about the movie Cocaine Bear recommends doing cocaine while watching the movie and contains a photograph of a bag of white powder with the hashtag #cocaine"

That... seems like an obvious joke to me. And even if it's not, while "doing cocaine" is illegal, photographs of white powders are not, nor is suggesting that people do cocaine, nor is portraying them apparently doing so. We can know this is true because the producers and actors who worked on the film Cocaine Bear were not arrested.

Content moderation is hard, but I'm not very impressed that the very first card shown got it (in my opinion) totally wrong.


That’s the point though, right? Encouraging illegal activity may not itself be illegal but moderators aren’t arbiters of the law, they’re protecting the integrity of the website for the intended user base. If you want to allow people to encourage cocaine usage on your website, you have to be prepared to deal with law enforcement — because when someone follows the advice and dies with the website open on their computer, the website operators will get a call… — and not many people want to risk their entire company because “it’s legal to encourage people to take cocaine!”

Moderation is hard and that example is… the perfect example. User reports are rarely accurate but still valuable.


Certainly social media companies can choose to disallow suggestions that people take illegal actions. But that's not what the moderation policy for "illegal" content says.

It says "Content that violates local or international laws or regulations."

I don't know all the laws of all the countries, but my local laws make very little content illegal. Various types of obscenity, true threats, etc. "You should do drugs while you watch a movie about drugs" is definitely not one of those.

It would be very easy, if that's what they want, to change it to "Content that encourages breaking local or international laws or regulations."


Visit a platform like LinkedIn, Twitter or Facebook and try to report content: the complex decision tree of what to report and why will be a good demonstration of how unintuitive the reality of content moderation is. Every user of every platform has a different understanding of even simple concepts, like “scam” and “spam”. Report reasons are not an accurate classification from a trained expert; they’re a thing a user chooses from a list that has to be designed to both provide valuable context to a moderator and capture the wide range of different understandings different users have.

Suggesting that a reason should be added for the specific situation you’ve encountered demonstrates the naive understanding most people have of moderation and that’s what this game is good at.

Platforms like Facebook and Twitter will have spent tens of thousands of person-hours thinking about something as simple as the list of report reasons.


There are plenty of difficult judgment calls in content moderation.

But the difference between "illegal content" and "content that promotes breaking the law" isn't really one of them, and it isn't a call to be made by the moderator. This is just bad instructions from the game maker.


You don’t think content moderators have to deal with: “hey kid, you should go shoot up a school?”

Just cleanly not in their domain?


I think that content moderators often have to deal with that issue, and they are given rules like "remove posts that suggest that others commit violent acts" or something like that. And those are reasonable rules for moderators to enforce.

But the rules that this game gives do not include that rule, and it's a mark of a poorly designed game that the rule says "don't allow illegal content" and then, when you (correctly!) apply that rule, the game says "you should not have allowed this because it's suggesting that someone do something illegal". Those aren't the same rule!

ETA: Like lots of places have rules against reposting copyrighted content. But a post that said "Psst, kid, you should go download a movie from the Pirate Bay" should not be removed under that rule. Because the content of the post isn't copyrighted. If they also had a rule that said "don't encourage piracy", they could reasonably take it down under that rule.


I see, I see now what you’re saying. You’ve convinced me :)


Always great to have a discussion with open-minded people. Cheers, internet buddy!


How is this different from moderation?

This is exactly what I encountered - mods would debate these specific issues. Is the spirit of the rule met, or the letter of the rule?

For global teams it’s even worse.

The rules tend to be American, and they apply globally - which means that things that are illegal locally are given a pass, and this creates the same debate in the mod team.

Moderation is Pyrrhic. Today you come up with a good argument for a behavior; tomorrow you churn your team and have 60% new people. The same argument comes up and it's decided in the other direction.


You know, I'm nearly certain that that post does violate some "international law or regulation".

Do I know which one? Nope, not a clue. But many countries have pretty weak free speech protections, and pretty strict drug laws. The idea that that violates none of them seems quite unlikely.

The fact that your local laws have strong free speech protections doesn't really matter when the criterion is "local or international". If it violates North Korea's, China's, or Singapore's laws, it still violates that restriction.


Maybe the policy should be rewritten then. If no content is allowed that violates any country's free speech restriction, then it probably means the site will remove criticism of the government of North Korea.


Well, that's a problem that social media companies have with China and India, both of which are big markets.

These are actual problems.


> the website operators will get a call

Then what we actually need isn't more content moderation; we need explicit laws _protecting_ website operators and any other publishers who are actually following the law.


I mean not really... sometimes some content just sucks and you don't want it on your site regardless of its legality.

Start shitposting here on HN and see how long your posts hang around.


But then that becomes a site owner editorial decision rather than a fear of legal reprisals decision.


How can one protect the integrity of a website while undermining free speech rights, derailing conversations, and fragmenting communities in the process?

Though we didn't first define what "integrity" meant, it now seems impossible to formulate a meaning for it that isn't absurdly ironic given the actions performed to protect it.

One could forgive it if we were talking about a forum or website that exists in a totalitarian regime like China... better that there be some conversation than the only other option, none at all. But in the US or Europe or any other nation that considers itself even minimally enlightened?

Moderation is not the solution. It may even be the problem.


> How can one protect the integrity of a website if they're undermining free speech rights

What free speech rights? You have no legal right to compel someone else to display your words on their website.


That's an interesting theory. I'm not sure you understand the implications of it, though. Were it so simple, it would be plausible to construct a society where you have no free speech at all, anywhere, without your legal rights being violated.

In fact, you all seem to be constructing that society even as we speak. Given the audience of this particular forum, more than a few of you are literally doing just that, rather than merely being accomplices and bystanders.

I'm more of a free speech fundamentalist myself.


It's not a theory, it's literally how free speech works in the real world. Private entities can't be compelled by the government to publish content they don't want to publish, just like they also can't be silenced by that government.

What you seem to be asking for is a world where the government can override the autonomy of publications and force them to publish specific items. How is that "free" speech?


No, that's not how it works in the real world. That's how it works in your low-fidelity perception of the real world. You've never much bothered to check if your model matches the observations you might make.

Look at the statement "private entities can't be compelled by the government". Which government are we even talking about here? It's quite clear that many governments do actually compel such things when it suits them. Have you never noticed?

You might switch to making the argument that "governments shouldn't x", and I suppose if you were clever they might even be good arguments. You might also make the argument that this government or that government doesn't currently do it... but that boils down to "I want it to continue not compelling x". Hardly an argument, except maybe for the status quo.

You don't know what I'm asking for. It's an idea far enough outside your narrow cone of comprehension and even narrower cone of imagination that we might as well not even be speaking the same language.


Leaping from "I get to decide what's on my website" to "you've created a society" is pretty silly. Your argument is missing any meaningful linkages. It would be stronger if you articulated them instead of handwaving about plausibility.

Personally I don't think social media speech is that important. It's ad-based, convenience-driven, and fickle. If it were to disappear people would just move to different media.


If some magical event occurred where it did disappear, what do you imagine would be the media that people would move to?

I'm coming up blank. Could it even be non-electronic in this day and age? If it were, it'd almost certainly look identical to current social media. If it didn't look like that, what, we'd all get ham licenses and tap out our opinions in Morse? Maybe we'd set up landline partylines and gossip until the wee hours of the morning?

Would people start mailing in letters to the editor again, to magazines printed on cheap acid-soaked paper? Wearing sandwich signs and walking down the sidewalks in major metropolitan areas?


It's a "Jump to Conclusions Mat". You see, you have this mat, with different conclusions written on it that you could jump to!


The "illegal" rule requires a lot of clarification. The interpretation here seems to be "It's against our policy to post about engaging in illegal activities", or maybe "it's against our policy to advocate illegal activities".

Services based in the US are not legally obligated to prevent users from writing about using cocaine, or pictures of white powders they're claiming are cocaine, or writing that it's desirable to use cocaine, so the post itself is not illegal. Real policies for content moderators should be much more detailed than what's in this game so that most moderators will reach the same conclusions about whether a post violates the policy.


It's ironic that the first post on this thread is about missing the point of the first card that user saw.


It's not, because the site tells you there is a perfect answer. On this card, it tells you that you should ban it.

OP didn't miss anything, the website is not subtly showing you how content moderation is difficult, it's telling you: we want to moderate it that way and you are failing or succeeding. That's not a lesson. That's gameplay.


What was the point of it, and how did I miss it?


This is showing how hard content moderation really is. It's judgement calls and very often no one is happy. Seeing more information takes an angering amount of time, which makes me think this is a fun game that makes an excellent point.

Edit: In some ways, making the first card one that's incredibly contextual is a brilliant move.


I'm not convinced that putting a real head-scratcher as the very first choice in a tutorial mode is a brilliant move at all. Tutorials are supposed to be easy and hand-holding.

But also: this isn't actually a hard judgment call. It's just bad instructions. There's an objective difference between "content that breaks laws" and "content that promotes the breaking of laws" and the rules could easily be changed to indicate the one they want.

If you were playing a platformer tutorial and you came to a gap and the tutorial instructions said "Press A to jump over the gap" and then when you tried you fell in and died and the tutorial then said "That was too big a gap to jump over. You should have pressed [Down] to climb over it instead", would you think "Ah, what a brilliant meditation on trusting trust and how the right choice is not obvious" or would you think "this is a shitty tutorial for a dumb game"? I'm not saying there's for sure a right answer here. I could imagine a well-made game where the tutorial straight-up lies to you and gives you the wrong directions. But most games like that are just badly written and poorly thought out.


"Content that breaks laws" "Content that promotes the breaking of laws" "Content that is clearly satire/parody/a joke"

A lot of the time, context matters. In a forum thread called "What would be the worst title for a self help book?" a post that just says "Give up, kill yourself" is not actually promoting self harm. It's saying that it's the worst advice.

Assuming the entire point of the game is "moderation is harder than you think, stop assuming all mods are power tripping, they have tough choices to make!" -- the tutorial isn't supposed to be teaching you how to answer the questions properly, because answering questions properly isn't the point of the game. It's teaching you how hard moderation is.

Also, I've played plenty of games where the tutorial involves dying and then the follow up is learning how not to die (generally in games where dying is common and they want to indicate that "dying isn't the same as game over, this isn't Super Mario for NES")


I think that content moderation is in fact quite difficult, even given clear rules to follow, because so much content requires lots of context to understand.

But that makes me even more annoyed at this game which rather than presenting me with legitimately difficult judgment calls, just gave me clear rules that were not the rules that the actual game used when determining whether I did the right thing.

It's possible that this is a cleverly designed thing to make you realize that the real rules are unwritten and the whole thing is a Kafkaesque contraption with no correct answers. Or it's just a quickly-made game where no one proofread the actual instructions they were giving players.


It seems like a very thoughtful game to me. I’m giving it the benefit of the doubt.


> Content moderation is hard, but I'm not very impressed that the very first card shown got it (in my opinion) totally wrong

The authors are very clear that there are no right or wrong answers. Interpreting it that way misses the entire point of the exercise.


I mean, when I approved leaving up the cocaine post, the game that they authored gave me a big red X and said I did it wrong, that it clearly was illegal and should have been taken down.

Where were they clear that there are no right or wrong answers?


The users who submit reports are fallible, the moderators are fallible, and the bosses who rate the moderators are also fallible. Sometimes you'll be right and the system will still say you're wrong.


> the bosses who rate the moderators are also fallible. Sometimes you'll be right and the system will still say you're wrong.

Those are not the difficulties of moderating, though; those are the difficulties of general employment.


https://www.techdirt.com/2023/05/11/moderator-mayhem-a-mobil...

> Also, there’s often no “correct” answer, so the game won’t tell you if you got something “right” or “wrong” because often there is no right or wrong.


...that seems like an obvious joke to you?

My God, we must run in different circles. I know people who shamelessly posted almost that exact thing. A few of them, lol.

They weren't joking. :P


It’s not just about what’s legal, it’s whatever the boss says


This game was made by a political action group for startups. Here is their agenda: https://engineis.squarespace.com/s/Startup-Policy-Agenda-202...


I 100% appreciate you linking to this.

Also this game is interesting and I don't mind at all if they are using the game to make a point. Perhaps we all need to learn this lesson. Especially elected officials could stand to learn the problem with moderation.


Mike Masnick gets really upset when you point out the motives for his work. Keep doing it, thanks.


The game is accurate; I played it and it matched my experience well.

Does this mean that we should disregard it? What do we do if the person we don't like is correct?


Skimming the key takeaways their agenda seems mostly reasonable? Did I miss something?


This is more like a game about shitty management communication. The rules are fairly arbitrary, and you're not told about any of them until you screw up, and even clear-cut cases like someone posting someone else's phone number are used to slap your wrist (apparently the second person consented, but nobody explains why that's your problem. Surely the second person can post their own phone number). Some advertisement is okay, other advertisement is forbidden. There's never a policy about which is which, you just have to guess. If this is how social media websites handle content reviewing it's no wonder the industry is a cesspit.

All in all, after playing for about ten minutes, I have to conclude that this is some kind of corporate apologetics for bad content management. The pointless timer really drives home that this problem could be solved by spending more money on training content reviewers and hiring more of them. I came away from the game feeling like it's a sequel to Papers, Please.


"corporate apologetics" - This is pretty clearly about the actual experience of working as a front-line worker in content management. Percieved time pressure, edge cases, lack of context, juggling 100 different priorities.


It's hosted by Engine, a lobbying organization for startups, so it seems likely that they think it will show people that companies hosting user-generated content face a hard situation (and double-binds where whatever they do will anger people).

I was personally more convinced of the double-bind issue after playing: different groups of people (at every level) have different norms, the Internet helps bring them all in contact with each other, and I can see how that's a recipe for making them angry with each other and also with the intermediary.


Moderation is a back-stop service with plenty of candidates in the pool; companies fire new moderators for coming into the job with the wrong "taste" all the time because it's much, much easier to cycle someone out and grab someone new than to train someone who has the wrong "common sense."


TBH I think this is a feature of human nature. It's impossible to impose your will and opinion at scale because you have to rely on other people with their own will. You can have small, curated communities, but you can't build Twitter without it devolving into, well, Twitter.

The entire concept of modern social media where every site wants to be everything to everyone is wrong. Even the "fediverse" gets this wrong IMHO. There's plenty of room on the web for all of us. Links are features, not a threat to your retention metric.


You should intern as a content moderator and create a game based on your real experience.


I've been involved in plenty of content moderation, just not as an outsourcing target for a microblog company. I suspect the job is much easier when the rules are explicit, provided in advance, and the moderation team is sufficiently staffed, which is the environment I worked in. I have other priorities than making a computer game about it, so we'll never know for sure.


Where is my ability to ban anyone that disagrees with my point of view or my agenda? I think this game needs to be modified to autoban anyone that appeals my mod decisions :)


[flagged]


That’s essentially parts of 4chan. That is the natural level of communal equilibrium for that rule set.

There's a big difference between you reading alone and a group interacting with each other.

If there is no way to stop someone from spamming individuals, harassing targets, or flooding with hate speech, the people who don't like it eject themselves. You are left with the spammers/harassers, and they either enjoy it, or they leave to target other moderated communities.


I loved the part where I lost a bunch of time to attend a mandatory nerf battle then got fired for not doing enough work.


This reminded me that "Papers Please" exists.


"Glory to Arstotzka"

If you liked that game, you might like Contraband Police. There are some bad combat aspects forced into the game, but it's enjoyable.


Of which there are now iOS and Android versions!


For which I've been very grateful


Wow,

The game is tense and boring and it illustrates both why paid Facebook "moderators" suffer PTSD and why this so-called "moderation" fails to improve the content of discussion or build community.

Actually worthwhile moderation involves someone caring about the whole direction of the discussion and not merely considering "is this content over the line", especially since a large portion of "don't talk about drugs" or whatever is pure CYA. "Don't say that 'cause someone might sue us".


That works when you're dealing with something the scale of HN and have the resources to hire dang. At the scale of Facebook or Twitter, though? The sort of moderation you're describing is just impossible.


I moderate a largish, active Facebook group on a controversial topic (5k members, 3-5 posts a day). I spend about 10 minutes per day on this on average. But the fact that I care about the topic, that I can directly ask people not to engage in shit, and that people want to have a decent conversation is basically what makes things much easier.

So it's possible on Facebook. It's just not possible for Facebook itself. Reddit seems a better model for community at scale, but somehow I haven't jumped into that.


Very interesting, got into level 6 then got a bit bored.

For education purposes it would be cool to explain the rules after the fact.

Small feedback: when there is an appeal, I'm not sure whether green means approving the appeal or staying with the initial decision.


Is https://www.engine.is/ an industry lobbying group? That's what it looks like.


Yes, it is.


The next obvious question is who funds it.


I thought they would have a membership list but I don't see it. They have this geographically-oriented index of (some of) their members

https://www.engine.is/startupseverywhere

which I guess may be meant to appeal to legislative staff (because they can see that there are likely to be startups in their districts?).


I think the cards should just be what the post is, not a description of it. The descriptions (a) are hard to pattern-match quickly and (b) have the creators' value judgments inserted (for instance “ethnic slur” is ambiguous in many real cases, e.g. negro could be legit in a Spanish-speaking or referencing context, or “white trash”, which could be both a slur and the title of an edgy Vice documentary). In my experience the moderator personality types have actual blind spots in their biases around political topics (the misinformation, harassment, and hate speech categories). They seem entirely unaware that had they lived 10 years earlier they would have completely different metrics.

That said, one important takeaway I got is that moderation needs priorities. While I’m thinking for 15s about whether a review about a herbal health product is “medically misleading” I have CSAM in the queue right after.


> I think the cards should just be what the post is, not a description of it.

This would make the blue look-into-it-more button useless

> the creators value judgments are inserted (for instance “ethnic slur” is ambiguous in many real cases, eg negro could be legit in a spanish-speaking or referencing context, or “white trash” which could both be a slur and the title of an edgy Vice documentary).

You’re supposed to use the blue look-into-it-more button to figure this kind of stuff out


> You’re supposed to use the blue look-into-it-more button

Isn't that the problem? If a report is regarding the content of a post, and we have limited time, why not start with the actual content being reported and have the reporter's arbitrary comments be secondary?


That button was just weird and added extra time, while speed was a goal. Also, I don't think that button always had raw user content; it was sometimes just a description with more details, like a hint. IIRC.


Yes. The point of the button is to add extra time. The point of the button is to decide whether you want to make a decision based on the limited context you have directly in front of you, or spend some time getting more context to make a better decision. Which is absolutely a thing in real moderation.


>or “white trash” which could both be a slur and the title of an edgy Vice documentary

It's only not considered a slur in the second case because Vice has money, you and your friends and powerful people like Vice, and the "white trash" don't reflexively assault you like beasts when they hear the words uttered. The meaning is understood and remembered regardless.


It's probably more of a spectrum of offensive to inoffensive, so the binary choice between permissible and not is going to vary depending on context. Maybe it could be a slider with "acceptable ranges" depending on platform? For example, I think 4chan would not mind most of these posts. But GlobalSocialNet might have some problems with some.
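
A toy sketch of that slider idea, with made-up scores and thresholds (nothing here reflects any real platform's policy):

    # Offensiveness as a spectrum: each platform sets the score range it
    # tolerates, and the binary keep/remove decision falls out of where a
    # post lands relative to that threshold.

    PLATFORM_TOLERANCE = {
        "4chan": 0.95,            # tolerates almost everything
        "GlobalSocialNet": 0.40,  # much stricter
    }

    def moderate(post_score: float, platform: str) -> str:
        # post_score: 0.0 (inoffensive) .. 1.0 (maximally offensive)
        limit = PLATFORM_TOLERANCE[platform]
        return "leave up" if post_score <= limit else "take down"

    print(moderate(0.7, "4chan"))            # leave up
    print(moderate(0.7, "GlobalSocialNet"))  # take down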


Elsewhere in this thread I and other people talked about how Engine probably made this with the goal of making people more sympathetic to tech companies' challenges with content moderation, by giving people some form of first-hand experience.

It turns out that we don't have to speculate about this motivation, because they spelled it out quite explicitly in a blog post:

https://www.engine.is/news/category/moderator-mayhem


A neat game, but my enjoyment of it is somewhat dulled since I watch my mom play this in real life on a daily basis - her job is moderating product reviews.


Oh so she’s the person who removes my one star review when I get a letter saying I’ll get a $25 Amazon gift card for giving the product a 5-star review? I just can’t trust Amazon reviews after that happened to me, I was so mad.


Amazon is not one of her clients, but I agree that what happened to you sucks, and is driving down Amazon's quality in general. It's yet another reason why I don't shop there anymore.


Did it enhance your empathy for her work?


Not really; I've sat with her as she worked, and she shares enough things that I feel like this didn't add much to my knowledge.

(And she doesn't take Nerf breaks!)


I wonder, does she know about GPT?


If she did, how do you suppose it would help?


GPT could moderate the product reviews for her. It doesn't have to be completely automatic. It could automatically handle the easy ones; for the medium ones, offer a strong guess that needs confirmation; and for the hard ones, lay out a more complete context to consider and let the user decide.
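
A minimal sketch of that triage flow, assuming a hypothetical classify() call that returns a verdict, a confidence score, and gathered context (the thresholds are made up):

    # Hypothetical triage: act automatically on confident calls, queue the
    # rest for a human with whatever the model produced.

    def triage(review, classify, ask_human):
        verdict, confidence, context = classify(review)
        if confidence >= 0.95:
            return verdict  # easy case: fully automatic
        if confidence >= 0.60:
            # medium case: strong guess, but a human confirms it
            return ask_human(review, suggestion=verdict, context=context)
        # hard case: no suggestion, just surface the context and decide
        return ask_human(review, suggestion=None, context=context)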


This is a silly idea for a multitude of reasons, which I started enumerating, and then quickly stopped.


ok thanks for your consideration


You might find this article from Mike Masnick of TechDirt, one of the contributors to the game, interesting:

https://www.techdirt.com/2023/05/11/moderator-mayhem-a-mobil...

He's been writing about the difficulties of site moderation for a long time.


"skip tutorial" was like that onetime i bought mod privilege on twitch using channel points.

first few minutes : yeah this is cool this is what i wanted

not very long later : this is awful, please take it back


This to me highlights the many delights of freedom of speech. You can choose what level of chaos to permit. To enforce a global ethic this way seems kinda snooty.


meh, it's a bad interface

if i wanted to moderate something more effectively i would make a better one

i'd rather blame the difficulty of moderation itself on people's inability to innovate


You don't actually get paid for playing this game either. Just like real life!


I guess I’m boring somehow, I got 100% in several rounds then got promoted after round 8 (which ends the game!).

The trick seems to be that you have to check the “extra info” frequently, and be reasonably quick to decide as soon as you see it. It doesn’t seem possible to make the right decision on many of the cards without that context. It is kinda fun to see all the attempts to game the system.


I expected the tick to mean "take action on review/appeal", and the cross to mean "reject review/appeal", so that threw me for a few of them.

A few of the cards were really frustrating: the correct solution to some of them would be writing a ten-word response, or making an edit, but I only had "take down" and "leave up" buttons. (‘It's just a game!’ I remind myself.) I do hope real content moderators aren't so bereft of tooling.

The story is… really boring, actually? There are a couple dozen little things going on in the background, but not many, and there's not enough gameplay time for them to be explored. I'd hoped for something more like Papers, Please. (Judging by the achievements, there are actually several possible stories; I can only seem to get the election one.)

The sentiment of the general public doesn't seem to behave realistically, either: not when I made obviously-unpopular decisions, and not when the CEO made obviously-unpopular decisions.


Hey- not bad!

I got 4 out of 5 stars with high ratings.

One thing I'd add: doxxing of a quasi-legal type, such as posting details about a politician's family member and their private dealings.

Certainly captures the feel of going through modqueue and making snap decisions.

Glad I don't have a boss constantly reviewing my decisions though.


This gave me PTSD from working in T&S. Impressive level of detail, nice work creating it!


Skipped the tutorial and got fired pretty soon, seems unrealistic ;-)


Is the point of the game supposed to be that the only possible way to keep up is to blindly approve 90% of them and then actually look only at the ones that get appealed?


Mobile UI broke completely on me before my first round was complete (Chrome, Android). Otherwise seems well-done and a useful learning tool.


I'd call that breakdown realistic.


The "see more" button does not seem to be working for me, which is pretty much required to play the game effectively :(


Are you clicking it or holding it? (You need to hold it)


Would love to see @dang's take on this!


This game is pretty accurate.

There's a relatively well-known video of multiple mods at a seminar being asked to decide whether pieces of content should stay up or be taken down.

At no point was there consensus.

Content that most people would consider heinous was allowed up by at least one mod, with a strong reason to do so.


I wonder if we could solve content moderation at scale through gamification. Imagine a game where the content you saw was actual content from somewhere and your decisions could influence if it gets taken down. In exchange, you get points.


What do I do with the points?


This would be better if there was more moral/ethical depth. It also doesn't explore how content moderation policies shape discourse and how moderators oftentimes abuse such policies to manipulate posts that fit their ideology.


Does Twitter's Community Notes kind of fix moderation, or could that also be manipulated?


Kind of both. Community notes and moderation are both emotional humans making the best decisions they can.

There is no “fix”. There’s no magic. It’s just us in here.


Notes usually go too far in the other direction; for instance, you will rarely see notes on mainstream politicians even when they are speaking objective falsehoods. This is because that system requires agreement from people who usually disagree for a note to show up.
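
A toy model of that agreement requirement as described above (not the actual Community Notes algorithm, which is more involved):

    # A note only shows if raters from at least two opposing viewpoint
    # clusters both marked it helpful.

    def note_shows(ratings):
        # ratings: list of (viewpoint_cluster, found_helpful) pairs
        agreeing = {cluster for cluster, helpful in ratings if helpful}
        return len(agreeing) >= 2

    print(note_shows([("left", True), ("right", True)]))  # True: shows
    print(note_shows([("left", True), ("left", True)]))   # False: one-sided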


I like the idea of this, but the need to push the More Context button on everything makes it kinda worthless (vs just showing the context for all)


Got fired after 2 rounds.

Well made!


Extremely well-executed all in all. I like it.


Requires closer looks too often.


this is a bad game, because it doesn't reflect reality

reality dictates that what is or isn't acceptable content is not a function of platform policy, it's a function of the legal jurisdiction in which the content is produced and/or consumed

this isn't really complicated


all games distort reality in some form in order to be playable


the problem here is that the "distortion" is core to the thesis of the game

you can't evaluate the acceptability of a given post on a content platform without knowledge of the relevant legal jurisdiction

the game doesn't seem to understand this core property, so (shrug) useless I guess


I don't get what the point of this is. Just approve everything and move on


This said on a site that basically only exists because the moderation is extremely well done, and is definitely not just "approve everything".


While I appreciate the work that dang does, it might be a bit much to insinuate that the site only exists because of it.


If there's any other special sauce, I haven't noticed it.


The secret sauce is that there isn't such a torrent of awful content in need of moderation that we need more than a handful of moderators, despite a pretty big userbase.


Some stuff is illegal in some countries and will get your platform sued or taken down completely. Sometimes competitors and malicious users will spam undesirable content to smear your brand identity, get you blacklisted by search engines, or simply eat up your time and money. Sadly, not everything can be approved.



