I get this, but wouldn't it make HN even more conformist than it sometimes already is? "Either you agree with the majority of us about X, or..."
I've actually been working on that problem with a bot that assists in moderating a subreddit using a text classifier. It's tricky and needs more work, but it's not impossible.
As you might expect, a subreddit about a politician with (in)famously devoted followers attracts its share of strife. It can be difficult to distinguish legitimate arguments from flamebait, and there's no shortage of people eager to take any bait offered. I should note that I'm not actively running the moderation bot at the moment.
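For the curious, here's roughly the shape of it. This is a minimal sketch rather than the actual bot; the CSV file, column names, labels, and threshold are stand-ins, and the point is only that a simple TF-IDF classifier can surface likely flamebait for a human mod to review rather than acting on its own.

```python
# Minimal sketch of classifier-assisted moderation (not the real bot).
# Assumes a labeled export of past comments; file and column names are hypothetical.
import csv

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def load_labeled_comments(path="labeled_comments.csv"):
    """Read (comment_text, label) pairs; label is 1 for flamebait, 0 otherwise."""
    texts, labels = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            texts.append(row["body"])
            labels.append(int(row["is_flamebait"]))
    return texts, labels


texts, labels = load_labeled_comments()

# Word/bigram TF-IDF plus a linear model is a reasonable baseline for this.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)


def needs_review(comment_body, threshold=0.8):
    """True if a human moderator should take a look at this comment."""
    prob = model.predict_proba([comment_body])[0][1]
    return prob >= threshold
```

The bot only flags; a human still makes the call, which is also why the threshold is set high.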
When one of the criteria for trolling is the hidden intent of the writer, there's no physical process that can reliably detect trolling, short of looking inside their head.
Improving the quality of discourse is a subject close to my heart, and I'd never thought about how robots could help us act more human toward each other.