Hacker News

Considering it's probably tricky to programmatically determine what a nasty comment is, I'm assuming you'll figure out whether a comment is good/bad based on the ratio of upvotes to downvotes, and penalize those who voted against the grain.

I get this, but wouldn't this lead to making HN more conformist than it sometimes already is? "Either you agree with the majority of us about X, or..."

it's probably tricky to programmatically determine what a nasty comment is

I've actually been working on that problem with a bot that assists in moderating a subreddit using a text classifier. It's tricky, and needs more work, but it is not impossible.

It's not that hard. For those who work with classifiers, this kind of thing is pretty easy. Identifying sarcasm and irony is hard, but 'nasty comments' can be identified pretty simply using well-known text-classification algorithms: you gather training data and use it to train something like an SVM.
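As a toy illustration of that recipe (labeled examples → bag-of-words features → linear classifier), here's a minimal sketch in Python. It stands in a simple perceptron for the SVM to stay dependency-free, and the tiny "training set" is entirely made up for the example:

```python
from collections import Counter

def featurize(text):
    # Bag-of-words features: count each lowercase token.
    return Counter(text.lower().split())

class Perceptron:
    """A tiny linear classifier standing in for an SVM-style model."""
    def __init__(self):
        self.weights = Counter()

    def score(self, feats):
        # Dot product of feature counts with learned weights.
        return sum(self.weights[w] * c for w, c in feats.items())

    def train(self, data, epochs=10):
        for _ in range(epochs):
            for text, label in data:  # label: +1 nasty, -1 civil
                feats = featurize(text)
                pred = 1 if self.score(feats) > 0 else -1
                if pred != label:  # update weights on mistakes only
                    for w, c in feats.items():
                        self.weights[w] += label * c

# Made-up training data, for illustration only.
train = [
    ("you are an idiot and your code is garbage", 1),
    ("what a stupid worthless comment", 1),
    ("thanks, that explanation really helped", -1),
    ("interesting point, I had not considered that", -1),
]

clf = Perceptron()
clf.train(train)
print(clf.score(featurize("your idea is stupid garbage")) > 0)  # True
print(clf.score(featurize("that really helped thanks")) > 0)    # False
```

A real system would need far more data, better tokenization, and a properly regularized model, but the overall shape is the same.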

I'd be surprised if pg hasn't experimented with it at least a bit given his history with text classification algorithms.

Out of curiosity, what subreddit is it?

As you might expect, a subreddit about a politician with (in)famously devoted followers attracts its share of strife. It can be difficult to distinguish legitimate arguments from flamebait, and there's no shortage of people eager to take any bait offered. I should note that I'm not actively running the moderation bot at the moment.

I'm sure a bot that algorithmically classifies mean comments is possible. I'm not sure that the same can be said about trolling. Poe's law? Deep cover trolling?

When one of the criteria for trolling is the hidden intent of the writer, there's no process that can reliably detect it, short of looking inside their head.

A well-executed troll is, by definition, difficult for humans to detect. I don't think there's much chance of reliably doing it with software. Fortunately, most political squabbling on reddit consists simply of people expressing scorn or outrage that someone would post something on the internet that disagrees with their deeply held beliefs. That's a bit easier to detect.

If you do ever get a moderation bot running (especially in something like /r/ronpaul) I would be eager to read a writeup of your experience.

Improving the quality of discourse is a subject close to my heart, and I've never thought about how robots can help us act more human to each other.

I plan to. I've been doing a lot of work with text classification over the past couple years and would like to base a startup on it. I just need to come up with a product that's commercially viable and non-evil.

I'm sure that the author of "A Plan for Spam" (which popularized the idea of Bayesian filtering to recognize spam) can find a way to classify nasty comments.
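The Bayesian approach from that essay amounts to Naive Bayes: estimate per-word probabilities from labeled examples and combine them. A minimal sketch, using add-one smoothing and log-space arithmetic to avoid underflow; the training examples and labels below are invented purely for illustration:

```python
import math
from collections import Counter

def train_nb(docs):
    """Naive Bayes training. docs: list of (text, label) pairs."""
    counts = {"nasty": Counter(), "civil": Counter()}
    totals = Counter()
    for text, label in docs:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    vocab = set(counts["nasty"]) | set(counts["civil"])
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    scores = {}
    n_docs = sum(totals.values())
    for label in counts:
        # Log prior plus sum of smoothed log likelihoods.
        score = math.log(totals[label] / n_docs)
        n_words = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Made-up toy corpus, for illustration only.
docs = [
    ("you idiot that is garbage", "nasty"),
    ("shut up you moron", "nasty"),
    ("great point thanks for sharing", "civil"),
    ("thanks this was really helpful", "civil"),
]
model = train_nb(docs)
print(classify("you are a moron", *model))     # nasty
print(classify("thanks great point", *model))  # civil
```

The same machinery that separates spam from ham works here: only the labels change.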
