Hacker News

Interesting, but with an important flaw the author acknowledges:

> The important downside of these results is that the people using the system were not aware that points are calculated in a different way.

As I mentioned in that HN-meta-whining thread, you can't engineer away jerks, and social problems need social solutions. People will naturally adapt to any sort of engineered restriction and find a way around it regardless. Personally I think the only way to maintain a very high standard of quality in a community is strict hands-on moderation and swift ejection of members who don't match the community's goals and vibe.

(In any case, I think the thread this post refers to was about negativity in the comments, not the quality of submissions. Though I might be remembering wrong.)

People don't divide cleanly into good actors and bad actors. Feedback mechanisms such as karma and downvoting on HN allow people to learn how to behave within the community. The better the feedback mechanism, the better people will behave.

Properly weighting the votes from each user would allow more effective moderation to be crowdsourced from the whole community. Everyone should be a moderator, but only with the degree of authority they have earned.

That weighting can be done with a technical approach, as search* companies have proven. How does Google deal with the fact that there is no moderation on the web in general, and a multibillion-dollar incentive to manipulate their results? Google's results are really clean and usable despite those limitations, and their approach is purely technical.

This proposal is an effort to enhance the ability of the crowd to moderate HN more effectively by improved weighting.

*Note that Google is almost solely solving a ranking problem, and the term "search" is a bit of a misnomer. If you could see the bottom-ranked matches for any query, I'm sure you would see plenty of unsavory content, but the first results (the ones you actually see) are really clean.
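To make the weighting idea concrete, here is a minimal sketch of reputation-weighted voting. Everything here is invented for illustration: the function names, the log-based weighting formula, and the flag penalty are assumptions, not anything HN actually implements.

```python
# Hypothetical sketch: each user's vote counts in proportion to the
# trust they have earned, rather than one vote per account.
import math

def vote_weight(karma: int, flags_against: int) -> float:
    """Weight grows slowly with karma and shrinks with moderation flags."""
    base = math.log1p(max(karma, 0))      # diminishing returns on karma
    penalty = 1.0 / (1 + flags_against)   # each flag cuts influence further
    return base * penalty

def score(votes) -> float:
    """votes: iterable of (karma, flags_against, direction) tuples,
    where direction is +1 for an upvote and -1 for a downvote."""
    return sum(d * vote_weight(k, f) for k, f, d in votes)

# Five fresh sockpuppet upvotes are outweighed by one trusted downvote:
sockpuppets = [(1, 0, +1)] * 5
trusted = [(5000, 0, -1)]
print(score(sockpuppets + trusted))  # negative: the trusted vote wins
```

The point of the log is that influence has to be earned gradually; a few bad actors with throwaway accounts carry almost no weight.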

1. Google is not transparent with its ranking algo. We would need to be.

2. It actually is very cleanly divided: bots and humans. Bad humans have no real power, due to point 3.

3. Google's click traffic volume is so high that it can only be taken advantage of by bots. Hacker News is tiny in comparison, and any algorithm can be manipulated by just a few bad actors.

> 3.

Case in point: Digg.

Why can't you engineer away jerks? I mean, we have democracy, free markets, etc., and among their main contributions is to engineer away the effects of bad behavior. In a democracy you won't get the best decision, but you will get an acceptable one. In a free market, competitors force businesses to serve the customer.

Similarly, a technical solution that gives jerks less influence seems an excellent partial solution. Sure, you might still need policing, but it isn't either-or.

Because anyone is free to sign up, and you have the tyranny of the majority.

And the majority of people are jerks.

Even if it were true (and I really don't think the majority is the problem), the whole point of the proposed system is that while everyone can sign up, only quality posters and commenters will progress to have any influence over the site.

I agree totally with the limitations of a technical solution to a social problem.

Furthermore, even if the technical solution works to some extent, it doesn't work as well as it could if it were visible. Relying on monkey-see, monkey-do to establish your social boundaries is wishful thinking if your approach to unacceptable comments and people is to sweep them under the rug.

That kind of tinkering is bound to create a community where the only people talking are those who most adamantly adhere to the echo chamber. We already know there are certain opinions on the periphery that we just don't discuss on HN, no matter how politely or clearly articulated they are. They go against the groupthink and are a good way to get shadowbanned.

The lack of visible moderation has not served HN well. Community management isn't rocket science, but sometimes it does have to get a bit ugly. The price you pay for letting people disagree is disagreement. You can keep it within bounds, but wall-papering over the bounds every time they are crossed doesn't actually improve anything, it just makes the same wounds happen again and again.

"Personally I think the only way to maintain a very high standard of quality in a community is strict hands-on moderation and swift ejection of members who don't match the community's goals and vibe."

Many communities I've seen have done this in a way that drops-off over time.

Screw up any of your first 3 posts... you're out of here.

Posts 3 to 10... you're going to get a warning, and may still be out of here.

Posts 10 to 25... warnings only; only extreme cases get banned.

Above 25 posts you're largely left to it, though once in a blue moon someone will screw up so spectacularly that they'll get pulled up by the community.

Even then, "pulled up by the community" is literally that: no moderators or banning. The communities that enforce rules strictly for new members and then relax once you get comfortable within the constraints are generally also the ones in which, if you suddenly made an extremely racist post, the response would be to be ripped to shreds by your peers rather than to have it censored and hidden... it reinforces that "that behaviour isn't acceptable here".

This is currently the most successful pattern I've tried. It works far better than other systems I've seen.
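The drop-off policy above can be sketched as a simple lookup. The post-count thresholds are the ones from the comment; the function name, the severity categories, and the action labels are made up for illustration.

```python
# Sketch of the tiered moderation policy: strict for new members,
# relaxed as post count grows, hands-off for established members.
def moderation_action(post_count: int, severity: str) -> str:
    """Return the action for a rule-breaking post, given the member's history."""
    if post_count <= 3:
        return "ban"  # screw up any of your first 3 posts: you're out
    if post_count <= 25:
        # posts 3-10: a warning, and you may still be banned;
        # posts 10-25: warnings only, extreme cases get banned
        return "ban" if severity == "extreme" else "warn"
    return "community"  # largely left to it; peers pull you up
```

The design choice is that enforcement cost is front-loaded: by the time a member has free rein, they have already internalized the norms.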

Of_Prometheus: There is a deep irony in your post, in that it's marked as dead and not visible to many.

The question was, "How does that system deal with the old members > new members hierarchy that would develop? I think the problem with giving a member a longer stick the longer they stay/comment is that older members are given greater leniency when they do decide to be jerks, and new members who may very well develop into excellent members are kicked aside for possibly minor errors, thus creating an unfair advantage."

That hierarchy does exist. I'm not seeking to eradicate it.

The older members tend to be the ones that have established the tone and quality. This can be used to determine the rules that are used in the early stages of membership of a site.

But generally that hierarchy refers to inner circles and cliques that are protective of what the site has evolved into for them, and that isn't addressed in any way by a system for integrating new members.

Old members will eventually leave; the question is whether the value of the community is preserved when that happens.

Of_Prometheus: you're probably hellbanned. You'll see your posts, but others who haven't explicitly enabled the option will not. Which is also something I've seen a lot lately... pretty well-written comments from hellbanned accounts. It makes me a bit uneasy; maybe there was some reason for those actions, but I rarely see one in the author's history.

Perhaps we need a jerk flag. If enough people flag a user as a jerk for making inappropriate comments, then that user would be banned. People could upvote the offending comments all they like, but after X many jerk flags, that user is no longer an HN'er.

Edit: You could say that only experienced users (members for several years with X many points) may use the jerk flag. Or something like that.
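A minimal sketch of that mechanism, including the edit's restriction that only experienced users may flag. Every threshold, name, and number here is invented; the comment deliberately leaves X unspecified, so the value below is purely a placeholder.

```python
# Hypothetical jerk-flag mechanism: only trusted members may flag,
# and a ban triggers once enough *distinct* flaggers agree.
JERK_FLAG_THRESHOLD = 10          # placeholder for the comment's "X"
MIN_FLAGGER_KARMA = 500           # "X many points" (invented value)
MIN_FLAGGER_AGE_DAYS = 365 * 2    # "members for several years" (invented value)

jerk_flags: dict[str, set[str]] = {}  # flagged user -> set of flagger ids

def flag_jerk(flagger_karma: int, flagger_age_days: int,
              flagger_id: str, target_id: str) -> bool:
    """Record a flag; return True once the target crosses the ban threshold."""
    if flagger_karma < MIN_FLAGGER_KARMA or flagger_age_days < MIN_FLAGGER_AGE_DAYS:
        return False  # flag silently ignored: flagger isn't trusted yet
    jerk_flags.setdefault(target_id, set()).add(flagger_id)
    return len(jerk_flags[target_id]) >= JERK_FLAG_THRESHOLD
```

Counting distinct flaggers (a set, not a counter) is what keeps one determined griefer from flagging the same target ten times.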

I've never been on a site where the jerk flag wasn't abused.

If the majority of a site holds $opinion_x, chances are people with $opinion_y will earn jerk points every time they express themselves.

My suggestion is to get rid of down-voting. Griefers who want to abuse the system would need to do a lot more work if their only tool were up-voting. Accentuate the positive!

This also reduces the problem of people down-voting posts simply because they disagree with an opinion.

That sounds rather like the current hellban system.

I think the comments voting system might also benefit from this.

But the downside of the test results described isn't necessarily negative. Maybe if people knew that their reputation and voting influence were on the line, they would try harder to write more constructive comments. That's all speculation, of course, and the only way to find out is to experiment :)

> Maybe if people knew that their reputation and voting influence were on the line, they would try harder to write more constructive comments. That's all speculation, of course, and the only way to find out is to experiment :)

You could ask folks first. Not as reliable, but a lot less expensive.

Here's an unsolicited data point: I've got reasonable karma. I don't care about karma. And I'm pretty sure that some of the most interesting folks don't care either.

That's not quite true. I do care about all the karma whinging. I dislike it.

Yep. Know who else tried to wave an algorithm at a community to fix it? Digg.
