
Rapid banning of trolls seems to be an effective guard against trolling. It also works well against spam.

Trolling and spam are both self-perpetuating problems. Users are ruder on sites where everyone else is rude, and spammers are more likely to submit links to sites they get traffic from. So you can prevent both problems by never letting them get a foothold.

Deletion doesn't have to be manual, especially in the case of spam. Spammers smart enough to measure the traffic they get from HN quickly give up. And the dumb ones obligingly continue to post from banned accounts and IP addresses. So currently 80-90% of spam is killed by software rather than humans.
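The shape of that check is roughly the following (a sketch in Python, not our actual code; the banned sets and the post structure are just illustrative):

    banned_accounts = {"spammer42"}   # example entries, maintained by manual bans
    banned_ips = {"203.0.113.7"}

    def should_autokill(username, ip):
        # The dumb spammers keep posting from banned identities, so this
        # single lookup silently kills most repeat spam.
        return username in banned_accounts or ip in banned_ips

    def submit(username, ip, url):
        post = {"user": username, "url": url, "dead": False}
        if should_autokill(username, ip):
            post["dead"] = True   # visible to the author, hidden from everyone else
        return post

Leaving killed posts visible to their authors is what keeps the dumb spammers obligingly posting into the void instead of registering fresh accounts.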

Flagging turns out to be a feature that saves a lot of work. So does rate-limiting submissions from newly created accounts (and, obviously, the IP addresses they use).
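A rate limiter keyed on both the account and its IP might look like this; the one-week and ten-minute numbers are placeholders, not our actual thresholds:

    import time

    ACCOUNT_AGE_LIMIT = 7 * 24 * 3600   # assumed: "new" means under a week old
    MIN_INTERVAL = 10 * 60              # assumed: one submission per 10 minutes
    last_submission = {}                # (kind, value) -> time of last submission

    def rate_limited(username, ip, account_created):
        now = time.time()
        if now - account_created > ACCOUNT_AGE_LIMIT:
            return False                # established accounts aren't throttled here
        # Throttle by account AND by IP, so registering a batch of
        # fresh accounts from one machine doesn't dodge the limit.
        keys = [("user", username), ("ip", ip)]
        if any(now - last_submission.get(k, 0) < MIN_INTERVAL for k in keys):
            return True
        for k in keys:
            last_submission[k] = now
        return False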

One general approach I've found very useful is not to protect against a certain type of abuse till it arises. Aside from obvious things like not letting people vote more than once, you don't need much protection when you first launch.
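The vote check really is that obvious, e.g.:

    votes = set()   # (username, item_id) pairs already cast

    def vote(username, item_id):
        if (username, item_id) in votes:
            return False    # duplicate vote on the same item, ignore it
        votes.add((username, item_id))
        return True

In a real database the same guarantee falls out of a unique index on (user_id, item_id).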




Randy Farmer did an interview on community management in which he talked about the importance of minimizing the effects of vandalism. Yahoo Answers solved its spam problem by using the community to cut response times from 12 hours to 30 seconds.

http://thefarmers.org/Habitat/2008/06/randy_interviewed_on_c... (video and transcript)
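The mechanics behind that kind of speedup are roughly: hide content as soon as enough trusted users flag it, instead of waiting for staff. A sketch, where the threshold and the trust cutoff are assumptions rather than Yahoo's actual numbers:

    FLAG_THRESHOLD = 3     # assumed: flags needed before auto-hiding
    MIN_REPUTATION = 100   # assumed: only established users' flags count

    def record_flag(item, flagger):
        if flagger["reputation"] < MIN_REPUTATION:
            return                         # ignore flags from brand-new accounts
        item["flaggers"].add(flagger["name"])
        if len(item["flaggers"]) >= FLAG_THRESHOLD:
            item["hidden"] = True          # community response in seconds;
                                           # staff can still review and reverse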

I've been interested in this question myself for userscripts.org

The question seems tied to a reputation system (to determine who is allowed to moderate).

Joel and Jeff of Stack Overflow discuss reputation systems in a recent podcast: http://blog.stackoverflow.com/2008/10/podcast-26/

Which makes the question for me: how do you implement a reputation system on an existing site without pissing off users?

Small details can lead to large consequences for your users, and when you have 6MM pageviews a month and 50K+ users, it seems like the wrong place to learn by making mistakes.
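The general shape of the Stack Overflow model they discuss is privileges unlocking at reputation thresholds. A sketch with invented numbers, not their actual ones:

    # Thresholds are made up; the idea is just that powers
    # unlock gradually as reputation grows.
    PRIVILEGES = [
        (15,   "flag"),
        (500,  "moderate_comments"),
        (2000, "edit_others_posts"),
    ]

    def privileges_for(reputation):
        return [name for threshold, name in PRIVILEGES if reputation >= threshold]

Rolling it out with low-stakes privileges first (flagging before deleting) seems like one way to limit the damage while everyone learns the system.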

Perhaps transparency around actions performed by the community moderators, so those who weren't granted new powers can provide a check against abuses: provide a mechanism to view deleted comments/userscripts (but not who deleted them, to limit retribution), so others can alert the admin (me) if moderation needs to be reversed.
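One way to make that concrete: log every moderation action with the action and target but not the moderator's identity, and let anyone browse the log. A sketch under those assumptions:

    import time

    moderation_log = []   # publicly browsable

    def log_action(action, item_id, reason):
        # Record what was done and why, but not who did it, so the
        # community can audit moderation without enabling retribution.
        moderation_log.append({
            "time": time.time(),
            "action": action,     # e.g. "delete_comment", "remove_script"
            "item": item_id,
            "reason": reason,
        })

    def recent_actions(n=50):
        return moderation_log[-n:]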

Anyone have experience adding community moderation to an existing popular site?


Thanks, Paul. That helps.

When you ban trolls, do you ban the IP address, limit the account, or delete the account altogether?

Re: flagging. Does that just mark a link or comment for review by an editor, or is there an automated banning process there, too?


I'd rather talk about details by email. pg at this site.




