Hacker News

It's a genuine problem and has been growing gradually worse for a while. I think the cause is simply growth. When a good community grows, it becomes worse in two ways: (a) more recent arrivals don't have as much of whatever quality distinguished the original members, and (b) the large size of the group makes people behave worse, because there is more anonymity in a larger group.

I've spent many hours over the past several years trying to understand and mitigate such problems. I've come up with a bunch of tweaks that worked, and I have hopes I'll be able to come up with more.

The idea I'm currently investigating, in case anyone is curious, is that votes rather than comments may be the easiest place to attack this problem. Although snarky comments themselves are the most obvious symptom, I suspect that voting is on average dumber than commenting, because it requires so much less work. So I'm going to try to see if it's possible to identify people who consistently upvote nasty comments and if so count their votes less.
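The vote-weighting idea described here could be sketched roughly like this (a toy illustration, not HN's actual code; the class name, thresholds, and discount are all invented):

```python
from collections import defaultdict

class VoteWeigher:
    """Tracks how often each user upvotes comments later judged 'nasty'
    (e.g. heavily flagged), and discounts their future votes accordingly."""

    def __init__(self, threshold=0.5, discount=0.25):
        self.upvotes = defaultdict(int)        # total upvotes cast per user
        self.nasty_upvotes = defaultdict(int)  # upvotes cast on nasty comments
        self.threshold = threshold             # nasty-upvote ratio that triggers the discount
        self.discount = discount               # weight applied once triggered

    def record_upvote(self, user, comment_is_nasty):
        self.upvotes[user] += 1
        if comment_is_nasty:
            self.nasty_upvotes[user] += 1

    def weight(self, user):
        total = self.upvotes[user]
        if total < 10:  # not enough history to judge this account
            return 1.0
        ratio = self.nasty_upvotes[user] / total
        return self.discount if ratio > self.threshold else 1.0
```

The hard part, of course, is the `comment_is_nasty` judgment itself, which this sketch simply takes as given.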




I wish you'd consider investing less time in technical countermeasures to dumbness (all of them are gameable, and the brand of dumbness you're fighting is not at all incompatible with an intuitive talent for that gamesmanship) and more time into just setting better community norms.

It is hard to imagine that the site guidelines you wrote years and years ago, when this place was tiny, are, intact and without change, the optimal guidelines for the site in 2012.

It's also the case that the things you say to guide the tone of the community, for better or worse, have a huge impact on how the community works.

If it should be a norm of HN that criticisms be kept constructive, and that the community should have a default position of supporting entrepreneurship (and, more generally, of supporting attempts to build anything) --- and that should be a norm --- why can't you just say so? Not in comments buried in random threads, but somewhere prominent on the site. Like, for instance, the guidelines. Which you need to update. Please.


Having been a mod/admin on a rather busy and strife-filled SMF forum for the past 6-7 years, I can confirm that this works surprisingly well.

I say "surprisingly" because, like a lot of tech hackers, I often think about a technical solution before I think about the simple social one. But I've noticed time and time again just how much effect the simplest changes in user interface can have on the entire "mood" of a community. And "simple change in UI" includes the guidelines, or--I don't know--there's actually a lot of space for a notice below the comment submission form where you can give people the "right" idea (what you want to see in this community).

The personal anecdote goes a little something like this: We have a real solid group of mod/admins with a hands-off approach (way more hands-off than the invisible ones at HN, but that's a different gripe), so we had our fun with the innocent tiny changes, like changing the descriptions of individual sub-forums ever-so-slightly, like a pun, or a little in-joke. After a few years however, all of the descriptions had mutated into something that was not at all descriptive anymore. All the "old" members (which includes the mods of course) knew perfectly well what the sub-forums were meant for. But the new members did not. As new members look to older members for social context, there was a sort of generational loss in relevance, and at some point people almost entirely just posted whatever, wherever. It was chaos! And so I had the brilliant idea that, hey we can change those descriptions so that they're both descriptive and somewhat funny (utter genius, I know).

The amazing thing is that it took only about 2-3 weeks before people started posting on-topic things in the right sub-forums again.

The moral of the story is: people really tend to respond really well to just flat-out being told what to do. Really well. Amazingly well. Frighteningly well.

Better than some tech solution that tries to subtly "guide" them, in particular (or at least much easier to implement).


Your experiment matches Dan Ariely's experiment very well. In his book "Predictably Irrational", he shows that when people are made to remember the Ten Commandments - or even just the thought of the Ten Commandments, or any such honesty pledge - their dishonesty level drops drastically.

One of these days, I'm going to make a chapter-by-chapter summary of that book so that I can remember the experiments and behaviours at a glance.


I wonder why such a phenomenon occurs, when five of the commandments have nothing to do with morality.

Also, in the Army we had the Warrior's Ethos, the Army Values, the Soldier's Creed, and the Creed of the Non-Commissioned Officer crammed down our throats daily. While there are some awesome people in the military, I've run into just as many dishonorable people as I have in the general population.


You are missing a control, so you can't really judge. Maybe it would have been a stinking, fetid den of iniquity and moral pus had they not.

Or, more likely, they might be somewhat worse, for some reasonable definition of somewhat.


How is it inaccurate to state that the morality of military members was the same as the general population, but not inaccurate to suggest that only degenerates who need constant supervision join the military in the first place?


"I've run into just as many dishonorable people as I have in the general population."

Can you think of any group this would not be true for given a sample size or population as large as the military?


The point was that I can't. The study mentioned above claims that people act more honestly when they are reminded of a set of values to which they are supposed to adhere. I was simply stating that while I was in the military (for about a decade), we were constantly reminded of the set of values which we were supposed to follow, yet I saw no signs that such a practice has any effect whatsoever on an individual's behavior.

Honest, decent people who join the military (or any other group) will likely remain honest and decent. Dishonest people will most likely remain dishonest until their actions prevent them from getting what they want.


In simple, harsh words, you're basically saying that the teaching of good values has no value. I think things are much more complex than that.

Perhaps those military values didn't have any resonance because they weren't, in fact, good? Or weren't taught in a good way? (I'm thinking now that tyranny in teaching generates repulsion, not acceptance.)

Also, if we consider that the teaching was correct (both in content and form - I don't know, of course), who knows whether the "soldiers who didn't change" would have been worse if this teaching hadn't been applied?

I think that human beings are subject to influence, and good influences play a determining role in behavior. But what "good" means - that's another chapter entirely...


You make very good points. There could obviously be people who had a rose-tinted view of the military only to have it shattered in the first week. That could have easily made them bitter about the whole teaching process.

Secondly, I don't think the ideals help if you don't agree with them. You choose to become a doctor and remember the Hippocratic oath. Same with engineers and lawyers. If you find halfway through that you don't want to be an engineer or a doctor, you can always bail out and switch. But with the military, bailing out is completely out of the question.


I said no such thing. The teaching of good values entails quite a bit more than merely repeating a set of values or a specific creed.

While people may or may not like what the military does, most of its creeds/values are for the most part positive things. The Army values for example are Duty, Loyalty, Selfless Service, Respect, Honor, Integrity, and Personal Courage - pretty generic and not nefarious at all.


Did you hang out with those people in civilian life? How can you say the various credos had no effect on them if you don't have the same experience without the credos?


I was clearly speaking from my personal experiences and perceptions, which are obviously not intended to be considered the equivalent of a formal scientific study.

But since you asked, why would one have to have experience with that exact group of people prior to coming to a conclusion? I agree that it would be ideal if that was the case, but there are many studies considered to be scientifically valid that use one group of people who come from similar circumstances as a control, while only testing on another group. In my example I suppose the control would have been the people I know from outside the military. As I stated though, my comment was just an anecdote from my personal experiences, not a peer-reviewed, published article intended to expose the author of the other study as a fraud.


And SoftwareMaven said that you don't have a control. What he meant is that, if people hadn't been reminded of these values, there's a very high probability that honest people might have done dishonest things, and dishonest people might have done dishonest things of far greater magnitude.


There was someone who knew that before Dan Ariely: every priest of the past two thousand years, who led a congregation on daily prayers or confession.


The priests did not write research papers and publish them in peer-reviewed journals. Neither did they bother with controls in experiments. So please don't be snarky.


This makes a lot of sense, and is worth experimenting with. The comment/reply box should have some simple placeholder text, along the lines of: Don't be a dick.


How about "Be kind and constructive." instead?


Oh definitely something more like that. I was not serious about the exact wording I used, I just wanted to convey the point.


The SMF forum? Or another forum running SMF? I only ask because SMF (well actually its predecessor YaBB SE) was what got me active in contributing to open source projects. Solid software that SMF.


No, no, another forum running SMF.

But solid software, really? I give them one thing: it works. Can't recall encountering any real big bad bugs either, so yeah in that sense, quite solid.

But given that you contributed to it I assume that you've seen the code? I've had to make some modifications/hackery here and there myself and the amounts of time I've cursed at it for being a tangled and obscure mess... combine that with the childish language in the comments, urg.

I guess that's one important thing it taught me: never be flippant or rude in your code comments, because it'll invariably end up making you look like a fool. Kind of like that "Muphry's Law[sic]" someone quoted in another thread. That was about "production code", but I found it easier to change my habit to "in general", because you never know who's going to see your code, and even your 3-months-future self might as well be a different person where reading code is concerned (except future-me does tend to share my sense of humour).

Because of that, all my own modifications ended up looking like rather ugly hacks as well because there was simply no "coding style" to join in step with.

(and the templating system (that is neither). and the amount of semi-duplicate subroutines and their names!! "post", "post2", "message". And trying to guess in which of the five "/* probably the most important module of SMF */" giant includes you're going to find a certain routine is about as predictable as trying to guess the name of a PHP library array function without reading the docs ... I really should stop, sorry :) )

It does work very well, I must repeat that. But only as long as the original magicians are still there taking care of it I'm afraid.


I've not contributed for a good number of years. I just didn't have the time. I assumed it was still running smooth. Maybe it has fallen off in recent years. Sad. But they had a pretty good MOD system I thought for the time. But I agree that the code was not organized all too well.


We used it since at least 2007 (not exactly sure, we switched from PhpBB around the time exploits were discovered every other week or so). It's still quite smooth and works fine, I just don't like looking at the code, nothing changed in that respect :)


Ok, I'll try that too.


If you would ever consider putting a quote in the guidelines, I heartily recommend and humbly submit Teddy Roosevelt's "Man in the Arena":

  It is not the critic who counts; not the man who points 
  out how the strong man stumbles, or where the doer of deeds
  could have done them better. The credit belongs to the man
  who is actually in the arena, whose face is marred by dust
  and sweat and blood; who strives valiantly; who errs, who
  comes short again and again, because there is no effort
  without error and shortcoming; but who does actually strive
  to do the deeds; who knows great enthusiasms, the great 
  devotions; who spends himself in a worthy cause; who at the
  best knows in the end the triumph of high achievement, and
  who at the worst, if he fails, at least fails while daring
  greatly, so that his place shall never be with those cold
  and timid souls who neither know victory nor defeat.
http://en.wikipedia.org/wiki/Citizenship_in_a_Republic


The man in the arena. Classic. And fitting with the theme of entrepreneurship too, at least for me. When I started my first company I went to a business after-hours event to meet and learn from people who'd done it before. One very bright woman took me aside into an office away from the party and told me there's one thing I need to know about business: it's about the journey, not the destination. Then she turned on a computer, went online, and printed out this exact quote. She said it's what got her through the hard times, and after having read it many times I can say it gets me through them too.


I really enjoy that; it is a great quote. It is the last thing I read before posting this comment, and I instantly started looking for a more diplomatic way to tell you that I think it's a bit poetic and would likely be skipped by people who don't have the time to decipher the metaphors. Then again, it seems to have worked.


We can always make the understanding of metaphors a required step for signing up.


Tongue-in-cheek: how hard would it be to make a captcha?

Examples from [1]:

Metaphor: Argument is war. "I shot down his argument." "He couldn't defend his position." "She attacked my theory."

Love is a journey. "Our marriage is at a crossroads." "We've come a long way together." "He decided to bail out of the relationship."

[1] http://pinker.wjh.harvard.edu/articles/media/2006_09_30_then...


Great stuff, love TR, should be read by all entrepreneurs and my kids too. But I would point out, snark off, that the Man in the Arena in ancient Rome was a slave, butchering another slave while rich citizens watched.


"A thing is not necessarily true because a man dies for it." -- Oscar Wilde


As an example, Tim Ferris has this right above the comment submission box (getting to users right when they're about to take an action):

  Comment Rules: Remember what Fonzie was like? Cool. That’s 
  how we’re gonna be — cool. Critical is fine, but if you’re 
  rude, we’ll delete your stuff. Please do not put your URL in 
  the comment text and please use your PERSONAL name or 
  initials and not your business name, as the latter comes off 
  like spam. Have fun and thanks for adding to the 
  conversation! (Thanks to Brian Oberkirch for the inspiration).
Screenshot: http://cl.ly/Ipex

Guidelines are nice, but most people won't see them. It might work better to get to them right when they're taking the action.


How about just adding a flagging option for each comment and making the flagging weight related to the karma of the account? That way, the opinions of people who have made high-quality contributions in the past are weighted more strongly than those of others. And this is something I am suggesting with an almost pathetic karma...
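A karma-weighted flag could be as simple as this sketch (the cap and function names are purely hypothetical, invented for illustration):

```python
def flag_weight(karma, cap=500):
    """Weight a flag by the flagger's karma, capped so that very
    high-karma accounts can't single-handedly bury a comment."""
    return min(max(karma, 0), cap) / cap

def total_flag_score(flags):
    """flags: list of (user, karma) pairs; returns the combined weight
    of all flags on a comment."""
    return sum(flag_weight(karma) for _user, karma in flags)
```

The cap matters: without it, one ancient account could outweigh dozens of ordinary members.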


That sounds like a second voting system, or am I understanding incorrectly?


I'm going to resubmit an idea that has been brought up before (it's not mine- it may be yours, in fact): two buttons for voting. One for agreement/disagreement and one as a rating specifically for quality of the comment.

In the "old days," there was more-or-less agreement that the vote was for quality of comment -- feedback about the value of the participant to the community -- and agreement/disagreement was voiced (if necessary) in comments. As a rule I (and others) would upvote comments we disagreed with as long as they were thoughtful. There were exceptions, of course, as you have acknowledged, but this was a pretty good guideline.

Now I think things have turned around a bit, and votes have started signifying agreement/disagreement more than quality of comment. I think this takes the community in the wrong direction.


"Cool, now there're two ways for me to punish someone I don't like." The flaw, if you want to call it that, with this suggestion is that it is a mechanism based on the idea that people are decent, reasonable, and well-informed. Which is not a bad thing, but it is not well suited to fixing a problem caused by people behaving indecently, unreasonably, and/or with little experience being "Good HN Citizens."


How the votes get turned into "punishment" will have to be tested and tweaked. I think it is worth a try, because it sets the expectation for what your votes mean by the nature of the button that you are about to press.

You seem to be thinking that this is largely trollish behavior. I suspect, however, that most of it comes from ignorance of the expectations of the community. I'm not sure there is a fix for the trolls. But for noobs that watch reality tv shows and want to come here and be the acerbic judge I think we can stop it by keeping the behavioral expectations in the forefront.

The flaw with the idea isn't what you say. I think it's important to expect decent, reasonable behavior. The flaw is, 'ok, we have two buttons. Now what?'

I would start with having the quality score 'float' the article and pretty much ignore the agree/disagree score in karma metrics. But I would probably keep that a secret.


True, unfortunately. But the flipside of having more people on the site is that there is no longer any need to ensure that everyone's comments become public on every story. You can afford to be choosier about what actually shows up, you just have to come up with a proper filtering mechanism. I think karma can be adequate, there just needs to be a more effective system of distributing and using it than what HN uses currently.


It could be too radical, but we could end anonymity in the sense of keeping public track of who upvoted, downvoted or flagged what.


The obvious fix for that is to have Slashdot-style moderation, where you have a single vote to assign to one of many categories: offtopic, flamebait, funny, insightful, etc.


Why not make the upward facing arrow be text like "good quality" or "useful" or "thoughtful"?


Also remove the downvote button and replace it with "flag this comment"


People would click on [flag] too often, making it hard for mods to monitor the queue of flagged content.


Also I should be allowed to veto at least one YC applicant per batch.


You of course are allowed to joke because more people know who you are and allow it (so your comments aren't greyed down for everyone to see). (I've noticed this a few times at least.) So now the poor puppy of a newbie user thinks it's ok to tell a joke (because he doesn't know who you are) or be funny, and then BAM, he gets slapped with a newspaper full of downvotes and runs away yelping, all confused, with his tail between his legs. What happened? What did I do so wrong?

Now personally I'd rather be able to tell jokes on HN than be able to veto. But then again I'd have to sift through the bad jokes that others tell.

In any community like this it takes some time to learn the rules. I'm sure there are quite a few people who don't necessarily know who "pg" is for that matter.


Maybe overthinking this one a little.


May I please be the first to say "WTF"? I know who you are, I know what you contribute, but... WTF?


It's an old joke. You make a request and somebody acquiesces more easily than you expected, so you follow up with an absurd request like "Also, can I have a million dollars?"

And now I went and explained it. Sorry, tptacek.


Ah, got it (humble pie eaten, apologies tptacek).


People should stop piling onto you with downvotes. No apology necessary.


That too, but also, I'm going to wear him down with this someday.


To add to your thought: when negotiating, it's generally important to offer a nominal kickback, lest the person feel they left money on the table.

If I am selling something and asking $1000, and the person comes back with an offer of $500, and assuming I am willing to take that offer (because I was shooting high), I need to offer some "kickback". This could be as simple as saying something like "well, if I accept, can you pay within 5 days?" Or perhaps, "can't do $500 but I'll consider $600". Otherwise the buyer feels "hmm, wow, that was too quick and too easy" and might back out of the deal. Strictly my experience over many years.


Well in every joke lies a bit of truth: http://www.scribd.com/doc/25450984/Freedman-Compliance-WITHO...


Oh, thanks. Now I get it. I think the problem was I missed that this was a reply to pg's acquiescence. It was several pages down by the time I read it so the joke was lost.


Given the risk/reward tradeoff for a YC investment, vetoes are pretty worthless. Even if you were the most brilliant vetoer ever, there is very little upside for YC.

Instead you should ask to be able to green-light one applicant per batch. That way, it would be possible for your contribution to be very valuable.


whoosh


Apart from the content itself, I would suggest to place the guideline link more prominently. Maybe even at the top, directly before the "New" link.

To support this proposal: when I first arrived here (from your homepage), I didn't notice anything about guidelines until I stumbled upon a comment which mentioned them. Then I eventually found them at the bottom of the page (which can be quite long), where I seldom look.


I think they're mentioned when new users sign up, and also when they make their first post.

I'm not keen to make a new account to test this out though.


What I do personally is keep a (relatively short) file of hn usernames I find particularly good and those I find particularly bad. I go and read comments from those I find good sometimes, in addition to encountering them through the normal course of articles.

The bad ones I reevaluate whenever I see another comment from them, but I've never yet seen a good reason to remove someone from that list. Sadly they never seem to end up hellbanned. It's largely uncorrelated with agreeing or disagreeing with their views; it's just that there is a group of people who are, as the OP said, really negative, irrational, and destructive to civil discourse.


Not to deter you from the social standards approach, but I thought the concept of measuring comments was clever.

The problem I see is: how do you know if a comment is negative or not? A human can surely tell, but doing it automatically might be tough.

Maybe swap out the "post comment" button for 2 "commit" buttons 1 that says "Positive" the other that says "negative". (This could be done a myriad of ways, but you get the point)

You could then measure a user's tendency to post useful positive/negative comments based on the average votes that each type of comment receives.

Posters who consistently have low average scores for their comments could lose posting rights for a period.
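A rough sketch of how such per-type score tracking might work (all names, cutoffs, and the minimum-comment count are invented for illustration):

```python
from collections import defaultdict

class CommentQuality:
    """Tracks the average vote score a user earns for each comment type
    ('positive' or 'negative') and suspends posting when an average
    falls below a cutoff, once there is enough history to judge."""

    def __init__(self, cutoff=-1.0, min_comments=5):
        self.scores = defaultdict(list)   # (user, kind) -> list of vote totals
        self.cutoff = cutoff              # average score that triggers suspension
        self.min_comments = min_comments  # history needed before judging

    def record(self, user, kind, votes):
        self.scores[(user, kind)].append(votes)

    def average(self, user, kind):
        s = self.scores[(user, kind)]
        return sum(s) / len(s) if s else 0.0

    def can_post(self, user):
        for kind in ("positive", "negative"):
            s = self.scores[(user, kind)]
            if len(s) >= self.min_comments and self.average(user, kind) < self.cutoff:
                return False
        return True
```

The `min_comments` guard keeps one unlucky comment from silencing a new account.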


I could see keyword association much like PG's email spam filter working for this.
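For illustration, a bare-bones word-based Bayesian classifier in that spirit might look like this (a toy sketch with add-one smoothing, not pg's actual filter; it assumes at least one training example per label):

```python
import math
from collections import Counter

class NaiveBayes:
    """Tiny word-based classifier: train on comments labeled 'snarky'
    or 'ok', then pick the more probable label for new text."""

    def __init__(self):
        self.word_counts = {"snarky": Counter(), "ok": Counter()}
        self.doc_counts = {"snarky": 0, "ok": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.doc_counts.values())
        vocab = len(set(self.word_counts["snarky"]) | set(self.word_counts["ok"]))
        best, best_score = None, float("-inf")
        for label in ("snarky", "ok"):
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            n = sum(self.word_counts[label].values())
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / (n + vocab))
            if score > best_score:
                best, best_score = label, score
        return best
```

Real snark, of course, leans heavily on context and tone, which word counts alone capture poorly.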


"... community should have a default position of supporting entrepreneurship (and, more generally, of supporting attempts to build anything) --- and, that should be a norm."

I disagree and find such a community engineering approach dangerous for the health of the very community it's trying to "save". I think there are two big problems with it: (i) It gives too much power to the gatekeeper(s), be it software or human, and (ii) it creates a bias for positive comments.

This is the typical dilemma: Bad comments and a free forum are two sides of the same coin. Although I hate snarky, nonconstructive comments as much as everybody else, I think that trying to curtail one may also damage HN's innovative and free atmosphere, which is its main asset.


> Although I hate snarky, nonconstructive comments as much as everybody else

Sure, I sorta agree, but ... there's snark, and then, well, there's snark.

Sometimes snarky, non-constructive comments can be really funny and/or satisfying. I don't want them to dominate the conversation, and a lot of snark is just sort of stupid, but ...


It seems to me that the (arguably) best discussion sites are those governed by benign dictatorship: firm moderation and clear guidelines. Yes, there are edge cases where judgement must be made but clear guidelines make this easier.

Whenever I think of a good example of a community that has managed to stay pretty high-quality for some time, I can't help but think of Matt Haughey's Metafilter. I'm sure the nominal charge for membership helps, but the (paid) moderators are very good.


I'm sympathetic to your point, but I fear that the worst offenders are among those least likely even to read the guidelines, much less adhere to them.


Unfortunately I agree. It is no big secret that tech as a segment has a fairly sizable portion of its population with smartest-guy-in-the-room syndrome. The problem with smartest-guy-in-the-room syndrome is that they already know the answer. Therefore they are not information seekers, and they disregard anything that is contrary to their world view. It is what killed Slashdot, and sadly it is becoming unbearable here.

Self-policing does not help with this personality type, because they have no vested interest in anything other than everyone seeing them as the smartest guy in the room. Their concern about the community extends only to their recognition. It is a self-centered world view, and therefore I don't think that self-enforced measures will work on this personality type.

I almost think a third measure is needed, like a flag-as-abusive. Knowing that they can be flagged as abusive (even if the system dumps the flag and does not use it in any calculations) will cause them to reflect on the policy, as it directly threatens their ability to use HN as a platform for self-glorification.


HN does have up votes, down votes, and a third flag option.

You have to view an individual comment to get the flag option. I'm not sure what the down votes are for. "This comment does not belong on HN", maybe.

I have no idea what the flag is for. "This content is so bad it needs to be removed, not just hidden", perhaps.

I agree with your post though! I'd be wary about having too many options for marking posts.


Currently downvotes are simply "I disagree" (causing plenty of good comments to pile at the bottom, side-by-side with spam/hate/douchebaggery).


"Currently downvotes are simply "I disagree" "

My observation is that downvotes also signify comments that have one or more errors of fact. Of course "fact" is also open to interpretation and not absolute.

You could say "you can only be President for two terms of 4 years each" but someone could dispute that because Roosevelt was President for longer or because in their country that is not the case. Etc.

A downvote because of inaccuracy without a comment isn't very helpful of course.


Originally a downvote meant that the comment was impolite, stupid humor, or way off topic. If it was incorrect or you disagreed, you were expected to say why. I personally don't like drive-by downvotes.


I like this post a lot. It perfectly sums up why I quit reading Slashdot. There was a time when reading Slashdot posts could expose you to new ideas, real world examples of technical solutions, and other people trying to learn more. Now it's mostly pedantic sniping, and lame cliched jokes from socially frustrated wankers.

I do most of my reading on Reddit (front page reddits are just cotton candy, but there are good ones to be found). I only started reading HackerNews about a year ago, and mostly just for the articles from lesser known blogs that don't get posted on Reddit. Being something of a HackerNews newbie I haven't been around long enough to notice a change, but I hope it doesn't go the way of Slashdot.


It's good that you bring that point up. I disagree with you, and if I could be so bold, I'll paraphrase tptacek's post above: communities are built on principles that goad people to excellence, not forbid them from mediocrity. Communities are built by principles, not by laws. I firmly believe this, and it's why I agree that more good will come on HN from stating what people OUGHT to do instead of listing the kinds of behaviours they oughtn't engage in.


True, but wouldn't it better rally the rest of the community to keep those offenders in line? If their harmful comments don't get upvotes, they lose their audience.


Yes, that's true. However, giving proper guidelines can help in a lot of cases. When a person uses a site for the first time, he'd search at least somewhat for some basic guidelines. After that, he'd be tied to the same routine, and might not easily change with the changing guidelines.

I know that the worst offenders won't read the guidelines, but a little faith in the rest of them would go a long way improving the quality of content on this site.


Exactly.

There is not going to be a "one-size-fits-all" solution to this problem. A multi-faceted solution including both technical and non-technical elements would likely work best.

The easiest part of the solution to implement would of course be the text in the vicinity of the comment box. This text would establish certain core tenets that we can all vote on with Paul.

The technical aspect of the solution could monitor how often certain users are being down voted. The down-vote history of an account could then be used to "weigh it down" somewhat, so that it takes more up-votes in the future to bring their comment to a higher position on the page.
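That "weigh it down" idea might look something like this sketch (the penalty rate and floor are invented numbers, purely for illustration):

```python
def rank_score(upvotes, downvote_history, penalty=0.05, floor=0.5):
    """Scale a comment's upvotes by its author's accumulated down-vote
    history, so chronically down-voted accounts need more upvotes to
    reach the same position on the page. The floor keeps a bad history
    from zeroing an account out entirely."""
    weight = max(floor, 1.0 - penalty * downvote_history)
    return upvotes * weight
```

With these numbers, an account with 5 lifetime downvotes gets 75% credit per upvote, and no account ever drops below half credit.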


That is okay, they don't have to do that.

So long as it is a group norm to adhere to the guidelines, those who don't will be downvoted and their conduct will have little influence on the community.


Sometimes saying an idea is pointless is positive feedback. There are a lot of people here who 'launch a startup', which basically means some half-assed website that does something utterly pointless, and then expect the community to be like some kind of benevolent uncle, pat them on the back, and say what an awesome idea and execution. Many, many ideas really are pointless, and negative criticism is very much a part of being an adult. This isn't grade school where it's all about self-esteem boosting.

The OP sounds like he should find some other support group, this is a community of smart people that will quite happily - and quite rightly - rip any pointless idea apart.

If you're going to editorialize and discount any comments/votes you consider not aligned with boosting anyone's self-esteem, or that you consider negative, you will create an artificial community - one that is the website equivalent of the old boys' network, where the old-timers consider themselves superior to the newcomers. Negative comments can be as useful as all the back-patting attaboys.

If you really want to do this, the simplest solution is one I am sure you're aware of: pick a few dozen people whose voting habits you 'like' to train a Bayesian classifier; then you can down-weight those who don't fall in line with the way you'd like to see your community behave.

Can't we all just act like adults and learn to deal with the negative comments - possibly even learn from them? Do we really need to be protected from snarky comments?

This reminds me of a comment thread I saw earlier about people getting bent out of shape by cell phone usage in restaurants! I find lots of things annoying (loud trucks and buses, cyclists who ignore road signs, rude people, and so on), but I am not going to complain and try to stop buses, or give discounts to drivers to discourage cycling. People need to start learning to cope with the real world instead of expecting 'the system' to protect us from ourselves. It's embarrassing to be part of a community that needs a voting bias that only encourages positive comments.


Can't we all just act like adults and learn to deal with the negative comments

Actually I find that harsh comments are more common among children and noobs than adults and experts. A random fan watching a football game in a bar is far more likely to use harsh language about a player who misses a pass than another professional football player would.

What's happened here is itself evidence of this trend. The comments haven't gotten more negative because we were all a bunch of fools initially, and now the real experts have arrived; rather the opposite.


Maybe you could attack this directly. Put a field in the profile form for "something I've made." Don't allow anyone without an entry there to vote.

Half of the benefit would be from filtering out people who aren't really invested in the community. The other half would be from making people think about the magnitude of their own contributions before they criticize someone else's.


Trust me, there are a lot of creative trolls.


I usually just watch and read the articles, but I've been wanting to get more involved for a while. I know I have a future doing the kinds of things you guys have done, and I hope that someday very soon I will. By keeping out people like me who haven't yet accomplished anything, you would be denying a real chance at digital mentorship.

I know how much I've learned and how grateful I am for having the chance to learn from this community, I'm sure there are many others as well that feel the same.

So to the community as a whole thank you for the chance to learn until I've mustered up the courage to take the plunge.


You would still be able to comment and submit. Your voice would still be there. This would just test the hypothesis that people who have put themselves out there are more sympathetic to other people who have put themselves out there.


I think it's also important to distinguish "negative" from "critical thinking".

While we can probably mostly agree that sniping (especially personal attacks) is rotten, I suspect useful insights that are critical in nature often get thrown in with the dirty laundry and labelled as "negative". "Awesome site dude!! SO happy for you :) :)" and a whole bunch of potentially fake, shiny superlatives can also become a turnoff.

Quite likely this is a minority viewpoint.


I joined Hacker News 1,570 days ago (~4 years). I launched my first two ventures here. One of them, Feedback Army, was conceived and built entirely from my interaction in this community. Three years ago, the feel of this community was very much one of support and maybe a little self-esteem boosting. It certainly worked for me, and I was very grateful to this community for what it contributed to shaping me as an entrepreneur.

I still come here for the articles and I occasionally read the comments but I don't participate much. I feel the same way as the original poster. I see someone post something genuine and then I see someone else rip it apart in a condescending way.

I used to come to HN and on the other side of the usernames, I pictured the leaders in our field, the pioneers of the next steps of our information age, and people who I wanted to emulate.

It's not this way for me any more.


I don't think the problem is so much negativity as it is tone. I think HN is still a very civil place (relative to many other communities), but anyone can see that hostile jabs are gradually getting more popular, and hostile jabs are, for whatever reason, an infectious meme; when you are seeing a lot of them, it takes a lot of restraint to keep from tossing them in yourself.

A particular example of a recent top-voted comment comes to mind (I'm not going to identify it because my purpose is not to name and shame), which was full of perfectly legitimate and constructive criticisms, but the tone of the comment was, well, mocking.

And what's the point of that? Making legitimate criticisms and dressing them in derisive language is a great way to raise an army of contempt against someone, but why?

If you want the target of your criticism to take the criticism into account, you need to express it civilly. You don't need to find positive things to say, or even claim that you think there's a kernel of good idea to be found (although genuine compliments are great too, of course). All you have to do is be polite and respectful. Imagine that the creator of what you're criticizing has just shown it to you excitedly. Read your comment, and decide if what you're writing is something that an acceptable member of society would say to that person face to face. Not the content, but the actual words.

And if you don't want the target of the criticism to listen to you? Then maybe you should just keep your thoughts to yourself.


I have a sneaking suspicion it was my comment. If it wasn't, then you've made me think very hard about what I wrote (it was a mock email). It was pretty harsh, and now that I think of it, the tone was unwarranted. I thought it was amusing at the time and that it spoke truth to power, but now I think it was a dick move.

I can't remove the comment now, but I can consider making my criticism more constructive and positive. With the email, my concerns were legitimate, but I could have just expressed what those concerns were, not satirized the original email.


That wasn't the comment that I had in mind, but I'm glad my comment made you think about it. I'm certainly not going to claim immunity to this effect myself. I'm sure if I went through my comment history, I'd find some harsh comments of my own. I do make a conscious effort to notice it before I submit, but it's easy to slip up.

There's no denying that writing a smackdown is fun. We just have to remember that it's not harmless fun.


Here's a thread from 847 days ago full of people being mean and negative about Groupon: http://news.ycombinator.com/item?id=1288116

Today, Groupon stock is worthless, and the company is doomed. Should the people in that thread be penalized for being right, but mean?


Most companies fail. It's a safe bet to predict failure. It's pretty lame to celebrate that failure from the sidelines.

Vision is not "how is this guaranteed to fail?" but "how could it possibly succeed despite the odds?"

Which voice do you want to bring to this community?


Groupon didn't "fail", instead it has been astonishingly successful.

However, around the time of its IPO it was receiving so much undeserved hype that in comparison to THOSE lofty goals it appears to have failed. HN readers at the time were pointing out the hype, and I think that is a GOOD thing.

Nevertheless, I agree that the tone of the community could be better and I will strive to adjust my own comments to reflect that.


Thank you for the correction. My intent was not to speak to the state of Groupon. Groupon has had far more success and impact than anything I have created to date. I felt it best to not argue with the parent and focus on the attitude behind the comment and not an assessment of Groupon.


There's a way to be negative (right or wrong) without being mean. I think that's the heart of the problem here.


a la pg's "I worry, though, that..."


Those comments are mean?

I can see they are skeptical, but I would hardly describe them as "mean".


I agree with you. Maybe I'm part of the problem, but I actually see nothing wrong with those comments.

Now, if that were a 'Show HN' post I could see where people may want the comments to offer constructive feedback, but that thread seems like a matter-of-fact discussion of the business model of the business.


If Groupon is a failure, I'd like to have several of those...


There's a difference between merely hostile, negative commentary and constructive, well-founded criticism.

The problem that HN faces is that because of the voting system those drive-by valueless negative comments are getting amplified all out of scale relative to their value.

Imagine being in a restaurant. It's not so bad if there is someone in the far corner having an argument that you can just barely overhear, or someone a few tables over having a cell phone conversation. However, if the volume of those conversations were amplified such that you could barely continue conversing with the person across the table from you, it starts to become a very serious issue, and if that became the norm at that restaurant, you'd probably stop going there as often.


Part of the problem with HN (and reddit) is that the format is still linear. A comment system results in multi-dimensional conversations both in subject and in tone.

Some comments are on-topic and helpful, some are on-topic and humorous. Some are off-topic and also helpful, some are off-topic and worthless. And so on.

With only one direction we can "push" a comment (and only one direction with which we can view comments, up & down), there is not all that much filtering that can go on.

Sometimes I enjoy reading the "funny" responses. What if this was a separate comment axis that I could view? Others may not be interested in such a thing, and could ignore it.

In any case, the format of HN doesn't lend itself well to people from all over the world bringing their own views into a single topic. I don't know of a good way to solve it easily, other than being ruthless about removing content that doesn't belong.


I did always like Slashdot's commenting system for precisely these reasons. It seems like their meta-mod system worked pretty well too to ensure that people's votes were actually meaningful. Oh, and the limit on the number of votes you could give really made it feel like your vote mattered and you couldn't waste it.


It's not a matter of handling negative comments or constructive criticism as an adult. That's been here for at least as long as I have (5+ years). It's a matter of those comments being derisive, abrasive potshots (ofttimes at the person rather than the product) providing no value. Assholeism for its own sake, the net effect of which is to lower the level of discourse here, a la the broken-windows theory.


YC relies on a constant flow of grist for the VC mill.

If YC News doesn't encourage budding entrepreneurs to throw themselves at long-shots, and into the VC startup culture, then it doesn't serve pg's interests.


> Can't we all just act like adults and learn to deal with the negative comments - possibly even learn from the? Do we really need to protected from snarky comments?

Being able to function in the face of people making negative comments about you on the Internet is one of the basic survival skills of the 21st century.

> People need to start learning to cope with the real world and not expecting 'the system' to protect us from ourselves.

This perspective directly contradicts many HN'ers' political leanings.


> Being able to function in the face of people making negative comments about you on the Internet is one of the basic survival skills of the 21st century.

While this is true, it does not justify overly negative comments. I'm reminded of hecklers who yell at stand-up comedians that they should be able to deal with heckling if they're on stage; they might be correct, but they're still being a jackass.


One possible solution (that you might already be doing) is to make the value of the vote proportional to how long the user has been a member, especially down-votes. That solves the problem of more recent arrivals not having the same qualities as the original members.

To be very clear so as to prevent conspiracy theories, this is something that we never did on reddit because we didn't want to put that much power into the hands of the older users. However, HN has a different philosophy and it might make sense here.

To say it one more time, reddit doesn't do this and never did.
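For illustration, age-proportional weighting could be as simple as the following sketch; the four-year cap (and the idea of capping at all) is an invented parameter, not anything HN or reddit actually does:

```python
from datetime import date

def vote_weight(joined: date, today: date, cap_days: int = 1460) -> float:
    """Scale a vote by membership age, capped at ~4 years so the very
    oldest accounts don't dominate. A brand-new account's vote counts
    for almost nothing; an account at or past the cap counts fully."""
    age_days = (today - joined).days
    return min(age_days, cap_days) / cap_days
```

One obvious refinement would be to apply a steeper curve to down-votes than up-votes, since down-votes are where new arrivals do the most damage.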


It will never be as simple as throwing in a single tweak like that.

Google is the best example to learn from. "Search" is a misnomer; Google is about ranking. They've put many PhD-centuries of effort into deciding which of the three million matches to put first (ranking). Choosing the three million matches (search) isn't where Google adds its value.

Lots of lessons have been learned there, I'm sure HN can tap into that pool of knowledge by opening up data to the right people.

HN is an awesome resource. I often read the comments before the articles because I expect them to be more useful. It's completely worthwhile to invest a lot of effort in maintaining that greatness as it scales.


...says the new guy to the old guy.

Sorry I couldn't help myself.


(haha - I totally misunderstood, my apologies)

EDIT: (total rewrite)

The cool thing about using metrics and machine learning is that the results speak for themselves, and opinions and factions are less important. Instead of guessing what matters in advance, you get to discover what matters after the fact.


Relax buddy. No need to get so defensive. It's just a notable coincidence when debating the merits of age-weighted voting.


On one hand, it seems like having the value tied to the voter's karma would make more sense, since someone here from the beginning with little karma says very little about their contributions or judgement.

On the other hand, either of these solutions seems destined to increase the inevitable echo-chamber herd mentality groupthink that already tries to creep into any community.


That would leave people like me out.

I often find the news here, so I'm very unlikely to submit something. And I only comment when I feel like I'm going to add value to the community or discussion, which isn't that frequently. So I have a clear judgement on when I contribute, it's not much or often, but it tends to present some value.

I've had an account for 2.5 years, and used the service for even longer, yet my karma is still less than 300.


I echo this sentiment. I have been here longer and do not comment unless I have something of value to say, but I do actively vote on stories and comments. People whom I know in real life who frequent here have the same usage pattern, too. I would hazard a guess that there is a substantial number of people in this category, so a purely karma-based approach may not be sufficient.


About the same here. I've been a member for ~3.5 years and comment very seldom (my karma is at 23 points). When I write down my thoughts, I really want to add value to a thread, which I also feel isn't very often. Humble or shy - don't know.

I do frequently vote up good articles about programming, hacking, startups, and such things. Those are the things I've been interested in since I was a young boy, and they're my daily life. I've been a freelancing coder (yes, I see that term as positive, same as hacking) since '96 and have tried my own startups over and over since then. That said, I value good, constructive, and sometimes also funny comments, and being able to upvote them, I frequently do so.

Are my upvotes even counted? I don't know, and I don't really care. But I admit it gives a bit of a strange feeling to be "worth less" in a community where you participate - we can argue about that - daily.

Nevermind, just wanted to say.

Small addendum: After submitting this comment I remembered that I wanted to add that I'm from Europe, and people who aren't from the States, and are therefore in a different timezone, often find themselves commenting on already-discussed topics (this also happens on Reddit a lot). Maybe this is also part of why I don't comment much.


Having things tied to the user's karma will result in StackOverflow-type issues. There are some high-karma moderators who close questions that really should be left open. I don't agree with most of their decisions, but I don't have enough karma (or time to chase karma) to reach the same level.

I see the necessity of moderating, but there have to be other considerations, e.g. a number of lower-karma users being able to override a high-karma user's decisions.


I agree with this. It degrades relatively quickly into a Wikipedia core group of elite editors kind of thing, where the disconnected ideals of a clique dominate.


Wasn't Digg killed by "power users" with all the power? I worry this might happen here too.


I don't think so. HN has been hell banning for some time, often quite unjustly, but the general quality hasn't been affected by it.


Are you sure? I can see a correlation between the rise of hell banning and the decline in general quality.


I recently started browsing with dead links active and have noticed a couple of seeming hellbans on people where it really didn't feel warranted to me based on the recent history around where they got hellbanned.

Of course, this forum isn't a public commons so the moderators can ban whoever they want, but there are some cases where it seems fairly arbitrary.


While it is often unjust, it is often extremely just also. Personally, I don't know if the tone has really changed. I do know that I see less demonstrations of work done on the front page. That's a shame :(


But that would only work to a certain extent. All you would need is one lucky submission (think Steve Jobs' passing) to get around this.


Perhaps only consider the karma earned from comments and exclude the karma earned from article submissions.


Interesting because I'm experiencing part of the problem that others are discussing here. You see I disagree with your idea but there is no way to indicate that w/o downvoting (which I didn't do). And to simply reply by saying "I disagree" isn't appropriate either. And maybe I don't want to take the time to detail exactly why I disagree but would like you to know that? But you don't because all you see is a net number positive or negative.

(By the way I would summarize why I disagree now that I've written the above as simply that it seems like a "last man over the bridge" advantage. I mean it's entirely possible that some distinguished person whose vote should count would be greatly disadvantaged by the weighting system you propose.)

I hate to complicate things, but there doesn't appear to be any way to solve this issue w/o the ability to indicate more clearly what you feel is positive or negative about a comment. One up/down button simply can't cover everything.


Too few votes for the number of comments. Lack of feedback is disenfranchising (unless you fake vote counts for comment-makers to see, which could have a worse effect of giving the wrong signals.)

Also, you can't count on there being proportional amounts of activity among the earlier users currently. Any cross section of the earlier user group could have stopped being active on the site.


My thought is that each registered user should be able to choose their own weighting of comments (from age-of-membership, average-karma-per-post, combinations of upvotes and downvotes, etc). Would this not make HN much more vibrant and allow a broader-range of discussion within the same thread?


That would be an ideal case, but turns out to be extremely difficult at scale.


Please consider releasing a dataset, even to a limited set of qualified folks under NDA.

There are lots of ways to spread the work out. A number of the people here have backgrounds in search (which is really all about ranking and not much about searching). Even a closed Kaggle competition could yield some really interesting algorithms (Anthony is a great guy and would be happy to discuss it I'm sure. Note that a closed competition is done under NDA by only the most qualified participants).

Spreading it out might be fun; I hope you'll consider it. Lots of smart people love HN and would probably want to help.


> So I'm going to try to see if it's possible to identify people who consistently upvote nasty comments and if so count their votes less.

Why don't you automate this process? All of the people you upvote have the most credibility, the posts they upvote have a little less, and people who upvote those posts gain a little credibility too, and so on. Tuned right, voting itself becomes a function of whatever the base of the community is.

New users -- depending on what kind of posts they upvote and who upvotes their posts -- must be able to gain credibility quickly, not just by upvoting the top posts, but by picking winning and losing comments over the long term.


That, and assign a couple of people he knows well, who will make similarly good judgement calls, the same privileges; that way you increase the amount of work being done.

I think a method like this is necessary, as I really don't think you could determine post quality with an algorithm itself. You can't automate that type of intelligence.


If the client allowed us to pick someone other than pg as the root, then people could comment however they like and more easily view the comments they value.

Given HN's current latency, though, you'd need a few orders of magnitude more resources.


pg, I have noticed two recent trends that have, in my opinion, degraded some of the quality of this site:

1. Postings that are more advertising than informational, which I briefly commented on here: http://news.ycombinator.com/item?id=4350447; and

2. Members who submit their own postings.

Briefly, while I appreciate the sharing of information, any content that becomes an obvious sales pitch doesn't belong on this site. When I read a posting from this site, I want to learn and be informed - I do not want to be sold on something. And - there may be disagreement with my second point - I do not find any significant value when a member submits their own posting, because I feel they're trying to sell me on something. (Please understand this does not apply to "Show HN" - just to articles and blog postings.)

Overall, I really appreciate members asking for feedback on their projects. They should be challenged and not degraded or insulted. This is a great community with great insights, so it's up to everyone to be "good neighbors".

Edit: Grammar.


The idea I'm currently investigating, in case anyone is curious, is that votes rather than comments may be the easiest place to attack this problem. Although snarky comments themselves are the most obvious symptom, I suspect that voting is on average dumber than commenting, because it requires so much less work. So I'm going to try to see if it's possible to identify people who consistently upvote nasty comments and if so count their votes less.

It seems plausible that lazy voting is identifiable as a user-specific signal, and that lazy voting (contrary to the welcome message

http://ycombinator.com/newswelcome.html

"don't be rude or dumb in comment threads") is a big part of the reason why comments that don't fit the community culture stay so visible. May I suggest that it may also be possible to identify good comment voting (upvotes on comments that link to reliable sources to answer questions, or that say "I'm sorry" or "thank you" as part of a comment) as a user signal? It will be interesting to see what happens to comment scores over time if more overt thoughtfulness (in all senses of the word "thoughtfulness") is upvoted often and comment rudeness or stupidity are (net) downvoted as the software tweaks are implemented.

P.S. I have been puzzled by a downvote and upvote pattern I seem to be observing in this thread itself, with of course only my own score on my own comment being DIRECTLY observable to me as I revisit this thread from time to time.


You could view this as a UX issue. New members don't get enough information from the HN interface to infer how to behave. Now we've got a theory, how can we address it?

(a) more recent arrivals don't have as much of whatever quality distinguished the original members

Try to make it obvious who distinguished members are, so people can emulate them. Colour-coded usernames based on age/reputation, starred 'high quality' comments, reputable upvoters (quora-style).

(b) the large size of the group makes people behave worse, because there is more anonymity in a larger group.

Try to reduce anonymity. Tiny little grey names are fine for 100 users, but after a point they all look the same. We're visual animals - maybe we need recognizable avatars, symbolic or otherwise, so we recognize the people we're replying to. Even allowing users to choose a single unicode symbol in addition to their username might be enough.


Considering it's probably tricky to programmatically determine what a nasty comment is, I'm assuming you'll figure out whether a comment is good/bad based on the ratio of upvotes to downvotes, and penalize those who voted against the grain.

I get this, but wouldn't this lead to making HN more conformist than it sometimes already is? "Either you agree with the majority of us about X, or..."


it's probably tricky to programmatically determine what a nasty comment is

I've actually been working on that problem with a bot that assists in moderating a subreddit using a text classifier. It's tricky, and needs more work, but it is not impossible.


It's not that hard. For those who work with classifiers, this kind of thing is pretty easy. Identifying sarcasm and irony is hard, but 'nasty comments' can be identified pretty simply using well-known text-classification algorithms: you find the training data and use it to train something like an SVM.


I'd be surprised if pg hasn't experimented with it at least a bit given his history with text classification algorithms.


Out of curiosity, what subreddit is it?


/r/ronpaul

As you might expect, a subreddit about a politician with (in)famously devoted followers attracts its share of strife. It can be difficult to distinguish legitimate arguments from flamebait, and there's no shortage of people eager to take any bait offered. I should note that I'm not actively running the moderation bot at the moment.


I'm sure a bot that algorithmically classifies mean comments is possible. I'm not sure that the same can be said about trolling. Poe's law? Deep cover trolling?

When one of the criteria of trolling is the hidden intent of the person writing, then there's no physical process that can reliably find a trolling, short of looking inside their head.


A well-executed troll is, by definition, difficult for humans to detect. I don't think there's much chance of reliably doing it with software. Fortunately, most political squabbling on reddit consists simply of people expressing scorn or outrage that someone would post something on the internet that disagrees with their deeply held beliefs. That's a bit easier to detect.


If you do ever get a moderation bot running (especially in something like /r/ronpaul) I would be eager to read a writeup of your experience.

Improving the quality of discourse is a subject close to my heart, and I've never thought about how robots can help us act more human to each other.


I plan to. I've been doing a lot of work with text classification over the past couple years and would like to base a startup on it. I just need to come up with a product that's commercially viable and non-evil.


I'm sure that the author of "A plan for spam" (introducing the idea of bayesian filtering to recognize spam) can find a way to classify nasty comments.
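In that spirit, here is a from-scratch sketch of a "Plan for Spam"-style naive Bayes classifier with Laplace smoothing. The tiny training set in the test below is purely hypothetical; a real deployment would need thousands of moderator-labeled comments and much better tokenization:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal Bayesian text classifier: per-class word counts,
    Laplace (+1) smoothing, log-probability scoring."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word tallies
        self.class_counts = Counter()            # label -> document count
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.class_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            score = math.log(count / total_docs)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best
```

In production you'd classify each new comment as "nasty" or "ok" and feed the result into the vote-weighting machinery rather than censoring anything outright.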


What if people who up-vote nasty comments don't really do it consistently, they just vote a lot? Maybe votes need to be precious, so people use them more carefully. Slashdot addressed the problem by limiting the mod points available and introducing meta-moderation. Meta-moderation won't work in an up/down voting system like this, but limiting votes, for example to a certain number per day, is a powerful option. You could even have tiers, where people with more karma would get more votes per day. Maybe every 100 karma points gets you an extra vote per day.
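The tiered quota described above is a one-liner; the base allowance of 10 is an invented number, while the one-extra-vote-per-100-karma rate is the one suggested here (neither is anything HN implements):

```python
def daily_votes(karma: int, base: int = 10, per: int = 100) -> int:
    """Daily vote budget: a base allowance plus one extra vote for
    every `per` karma points earned."""
    return base + karma // per
```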


Have you read the PageRank paper? It's mostly intuitive so it's not painful to read:

http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf

Note especially section 6, "Personalized PageRank", where they replace the random jump with a jump back to some basis pages. Your own upvotes/downvotes probably make a pretty good set of reference points, so a PageRank-like algorithm could be heavily weighted according to your own judgement.

To be totally clear, I'm not suggesting everyone get personalized results, but rather that HN is the same for everyone, one view "personalized" according to your judgment (and obviously that propagates through to include the judgement of the people you respect, and they respect, etc).

Yes this is a handwave, and Google has gone lightyears beyond this early paper. But intuitively it may help you think about the problem. It's really going to be some iterative algorithm that flows through all the upvotes and downvotes to determine the authority of each vote, perhaps in a similar way to how the authority of each link on the web can be calculated.
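A toy version of that iterative flow, sketched as personalized PageRank over a vote graph (the graph shape, seed set, and parameters are all assumptions; dangling-node mass is simply dropped for brevity):

```python
def personalized_rank(graph, seeds, damping=0.85, iters=50):
    """Power-iteration PageRank where the random jump returns to `seeds`
    (e.g. voters whose judgement you trust) instead of being uniform.
    `graph` maps each user to the users whose comments they up-voted;
    rank then flows from trusted voters to whomever they endorse."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    jump = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(jump)
    for _ in range(iters):
        nxt = {n: (1 - damping) * jump[n] for n in nodes}
        for src, dests in graph.items():
            if dests:
                share = damping * rank[src] / len(dests)
                for d in dests:
                    nxt[d] += share
        rank = nxt
    return rank
```

The resulting scores could serve as the per-user vote weights pg describes: a vote from a high-rank account moves a comment more than a vote from an account the trusted core has never endorsed.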


I'd like to make a somewhat unrelated observation as a new user and someone who is new to programming.

I've noticed a submission might get a huge amount of upvotes yet the highest voted comment goes against the article/post submitted.

I understand there are a few things that could be going on: you can learn from bad articles/posts; those who like the submission find it unfair to downvote just because they disagree on a contentious issue; or it becomes a contest of which side can out-vote the other (those against vs. those in favour of the submission).

Is there a commonly held view on why this happens? Sometimes I click upvote to save an article I've not read yet, not necessarily because it is a great or good article. However, I also sometimes click upvote because there is a comment inside it that I find very enlightening. Would it help at all to be able to save comments like we 'save' submitted articles? My apologies if this is too unrelated and meant to go somewhere else (email). Please let me know if this is the case. Thank you!


Two issues here:

1) Stories become popular despite negative sentiment in comments, or stated another way, some of the most popular stories on the front page of HN have little support in the comments. This indicates that the people who up-vote the stories in question are either not actually in support of the story's premise, or choose to remain silent in the discussion.

2) People are using up-votes as a way to bookmark stories, since that's the only flagging mechanism available to them.

The root of the first problem, as I see it, is that it is too easy to up-vote stories. This creates a low threshold for groups of organized users, i.e. friends of the poster, to get stories onto the front page. Perhaps limiting the ability to up-vote stories in particular would have a positive impact here.

The second problem has an extremely simple solution: add a way for users to flag stories without up-voting them. Maybe a star icon that adds a story to their 'saved' directory.


> People are using up-votes as a way to bookmark stories

Whoa... mind blown. I never noticed that feature before. Thank you, Wise Weasel. (Not that I would up-vote a post just to save it, but it is nice to know I have a list of all the articles I've up-voted.)

+1 on having a "save" feature. However, I would shy away from using the term "flag" as a way to get something into the saved list. Flagging a post is already available and has a very different meaning.


This happens quite frequently on reddit as well. I'd imagine it's due to the fact that the set of people who vote on a story is different from the set who vote on comments. For example, people could find a title appealing and upvote without actually reading the article or any of the comments. At the same time, the people who take the time to read the article (and corresponding comments) vote up the comments criticising the article. Since it takes relatively more time to read and understand an article, the knee-jerk reaction voters cause the story to be highly rated even though the top comment is critical.


It seems like that could be fixed by tracking which articles a user clicks and disregarding an upvote if they didn't read the story or only opened it 5 seconds ago.
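
Purely as a sketch of that idea (the names and the 5-second threshold here are invented for illustration, not anything HN actually does), the click-tracking could look like:

```python
import time

# Record when each user opens a story; only count an upvote if they
# clicked through and spent a minimum amount of time with the story.
MIN_READ_SECONDS = 5

clicks = {}  # (user_id, story_id) -> timestamp of the click-through

def record_click(user_id, story_id, now=None):
    clicks[(user_id, story_id)] = time.time() if now is None else now

def vote_counts(user_id, story_id, now=None):
    """Return True if this user's upvote on this story should count."""
    now = time.time() if now is None else now
    opened_at = clicks.get((user_id, story_id))
    if opened_at is None:
        return False  # never clicked through to the story
    return now - opened_at >= MIN_READ_SECONDS
```

As the reply below notes, this misses people who already read the story elsewhere, so it could only ever be one weak signal among several.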


What if they already read the story from a different source?


I suspect that people often post negatively about a submission and then upvote it so their negative comments will be seen and upvoted by more people.


I'm in the same boat with you on upvotes. If my voting activity is going to be tied to the quality of my posts and comments, please decouple upvoting from saving.


Hey PG - Honest feedback from someone in one of the largest tech communities in the world (Linux). Technical solutions won't solve the problem. Even things that various communities have tried around governance won't solve the problem.

It basically takes strong leadership, preferably from one person or a (very) small group of people.

Look at GNOME these days. They are rudderless and mostly because they have absolutely NO leadership either from community people or RedHat (GNOME is basically RedHat). They are dying slowly.

Look at the Kernel, though. Linus controls it, sets the tone (often harsh!) and it hums along.

Look at Ubuntu. People may hate the direction, but Mark runs that show, and the people know what they are doing and where they are going.

IMO, the idea that the community will govern itself has never held up. Communities are formed around ideas or interests, but are led by people (leaders).


maybe the new kids don't know any better? can we crowd-educate them as to acceptable comments? stackoverflow has used strict categorization of "reasons for flagging post" to great success in influencing how the crowd moderates.

  "-1, poor tone"
  "-1, offtopic"
  "-1, factually incorrect"


Yeah, the slashdot approach to mods might also be worth looking into. They do a sort of randomized statistical moderation thing that makes it less autocratic than other schemes, while still retaining the human nuance.


Extra information in the upvote might help a lot. Right now I can only say "more of this" when I click the up-arrow. But does that mean I like the comment? I like the user? I like the tone? Sometimes I agree with one of those but not the other.


That's a neat idea. -1 doesn't mean much without a comment explaining why.

I do have a concern, though. If this system is implemented, doesn't that mean that the "new kids" will also have access to it and may use it to exacerbate HN's "new kids" problem?

What about modifying the existing HN guidelines to include comments and promote the guidelines more? For example, cite a downvote with an appropriate quote from the guidelines.


If the current rule applies that says you don't get to down-vote until you reach N karma, the "new kids" don't really get to do much. I for one would value a system that forced a down-vote to be accompanied by some sort of explanation of why. I suppose at less than a year old I am still a "new kid", so I will say that in my infancy (a few months old) I had a comment down-voted for reasons that were not obvious to me. I was polite. I didn't say anything negative. Yet I was down-voted. When I commented on being down-voted and inquired why, that too was down-voted. Someone was kind enough to explain to me that my original comment was "a generic compliment" and basically useless. It was a pleasant compliment (something the OP seems to be advocating here)... yet still garnered down-votes. Ironic, huh?


The tone of that comment came across badly. You pointed out they had made a typo - that was your only feedback. You suggested they might not be a human. You used the word "regret", but you clearly have no regret.

There is no irony as to why you were voted down. It was entirely justified.


I think you may have misread my comment here and are addressing the wrong comment. I was talking about months ago... not in this thread. I understand why I was down voted on the comment you are talking about. Clearly I was being a smart ass... taking the OP's suggestion to the extreme. Some people got it.


Good point about the karma limit. I was writing under a different assumption.

What I was proposing in my earlier post was exactly as you say: explain the downvote.

Maybe let everyone downvote, but have tiny radio buttons with reasons next to them. :-)


But with a clear-cut system like the one mentioned above, why shouldn't new members get to downvote as well? The system would clearly convey its intended usage, and in the process bring new members up to speed faster. If you still want to impose a restriction, it definitely shouldn't be karma-based. I find it a bit counter-intuitive. [Assume maximum evil] You're going to allow someone to downvote others based on how quickly and effectively he can troll the hivemind to give him 500-1000 or heck, even 5000 karma?


"New" members shouldn't downvote because they might not downvote the right things, or for the right reasons. I do not consider lurkers "new members."

I also disagree with the idea of a karma-driven downvote system. However, I wrote the above with the assumption that those who can downvote know HN values and enforce them. As you pointed out, yes, the karma-downvote thing doesn't quite accomplish its goal (assuming the goal is to allow senior members to act as enforcers). It should change. However, that wasn't quite the point of my post.

My original point is: if you gonna say smack, gimme a damn good reason why.


Ah yes. You've got a good point. I find it confusing how to allow downvotes, then. Is age of the account a fair measure? Then what about people who've been here without registering, or people who actually have good discretion?

Maybe enforce a stealth downvote. This would be a case where the downvote button is visible, but you use heuristics and probabilistic models to determine whether to actually accept the person's vote or not, and if yes, what weight to assign it.


Slashdot has been doing something similar for a while. I like their commenting system.


I'm sure you already do, but just to get it off my chest: please use unusual amounts of caution when implementing a technical solution like this.

My gut feel is that your example voting-system tweak will make votes from users like me not count. Granted, I don't comment nor contribute directly by providing links. I have various reasons for my silence, like the fact that English is not my first language (nor even my second language, so it takes me forever to compose something in English, and I still run the risk of my comment turning into grammar crit, with the point I try to make being lost in that noise), that a lot of people on the site are intimidatingly intelligent, that my points have already been made, etc., etc.

But. I do vote. Due to my non-existent karma I cannot downvote, only upvote. It's limited power, but I try to use it wisely. I cannot downvote the horrible negativity I saw, for example, in the wikipedia design thread (a submission I found fascinating even though it was flawed, as it was lovely to see how their thought process worked as they redesigned the visuals), but I did upvote one or two comments that I thought were thoughtful and added to the discussion. I would like to believe that I made some difference to the tone of the thread, even if it's in a very limited way.

Those that see the world as all shades of gray are not generally very vocal - you don't see a vocal middle ground in most arguments. But this doesn't mean we don't exist. And I don't know how "negative" comments will be classified, and I think people like me run a very real risk of silently becoming false positives when we stupidly upvote things using our own dodgy rules.

So - please consider unexpected side effects such technical things introduce. I guess what I'm saying is I'm already put off by some of the unwritten rules I perceive on this site; if the voting system is also rigged to make some people silently powerless, I don't see how I can be useful at all. It is of course your prerogative to decide that my type, the silent-but-voting user, is unwanted, and if so, that's fine. But I hope I contributed something from time to time.


Why not simply limit the size of the community? Heck charge a membership fee...

Making it invite only would have (IMHO) improved things a massive amount. Have people do a test to gain entry.

Any community that is free and open will just deteriorate over time.


The problem with invite-only is that you get an exceptionally inbred set of ideas and it becomes little more than a self-congratulatory circle jerk.

I wouldn't mind a small membership fee which would confer voting rights.


Metafilter seems to be doing alright with its $5 membership fee, but then they also have pretty hands-on moderators (as opposed to community votes)


I posted a friend's start-up here (she wants traction), and her effort was summarily eviscerated. But not until after I'd already sent her a link to the thread. I feel terrible about that now. I think nasty comments tied to a product or site announcement might be the lowest of lows to look for? Or at least finding site announcements might be easier than finding "all nasty comments", and you could evaluate that subset first.


I see exactly one comment on your friend's post, and it's rational.


It seems rational from the outside, and when my brother and I played customer #1, we provided similar feedback, but with a lot more nuance. It's quite a bit more demoralizing when someone on HN dismisses it out of hand in one sentence. The tone was not neutrally rational; it was dismissive, not supportive. For example, lead with something positive, anything, then challenge her on why the price is what it is.


Rather than trying to identify nasty comments, how about weighting votes by how often a person votes over time? I would think that a vote from someone who upvotes only occasionally is worth more than one from someone who tends to upvote often. Of course, this won't be perfect, in that people will try to game the system with idle accounts, but I think with some work it could be doable.

'Nasty' comments can be difficult for humans to spot, let alone machines. A lot of important information is lost when we go to a strictly text-only medium. If we let the group decide what's good and what isn't, then we run the risk of groupthink as well. So I am a little cautious about endorsing any automatic filtering of content in this way.

People who are new also have the ability to upvote but not downvote. I think this might be causing a disproportionate amount of upvoting, which may also pertain to 'nasty' comments. Perhaps disabling upvoting for new people for a certain period and/or until they receive a certain karma score might alleviate this a little?
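
The inverse-frequency weighting suggested at the top of this comment could be sketched roughly like this (the formula and smoothing constant are arbitrary illustrations, not a tested scheme):

```python
# The more often someone votes, the less each individual vote is worth.
def vote_weight(votes_cast, days_active, smoothing=1.0):
    """Weight a vote by how sparingly this user votes.

    A user who votes once a day gets weight 0.5; a user who votes
    fifty times a day gets about 0.02.
    """
    votes_per_day = votes_cast / max(days_active, 1)
    return smoothing / (smoothing + votes_per_day)
```

Idle or rarely-voting sockpuppet accounts would score highly here, which is exactly the gaming concern raised above.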


Pg: here is an idea I have been keeping inside me for over a decade:

User-account reputation karma.

Based on the voting system that Slashdot had, but applied to the character of the account rather than the comment the account made. Here is how it works:

You have your comment up/down arrows, then the username, then a drop-down to mark the character of the account at the time they made the post.

So you could see a comment by samstave and upvote/downvote it, and then also tag my character as [insightful|helpful|abrasive|troll] etc. (whatever the tags are you want to make)

These tags affect the overall account. So if I were tagged by a ton of people as a troll account, the more troll votes I got, the worse my account experience would be... and eventually too many could lead to a hellban.

On the other end, having a very high positive account character would make for a better experience - see stories earlier, be able to affect other accounts in some way, etc., or just a leaderboard of who is the most technical, helpful, insightful, innovative, etc...


One idea I've had to help mitigate this problem is to develop a sort of "intelligence" score, where you weigh votes from high scoring individuals over votes from low scoring individuals.

The easiest way I can think of to do this is to use a Flesch-Kincaid grade level test, which uses sentence length and word length to estimate reading level. A quick and dirty calculation for any post might look like: (character count / word count) + (word count / sentence count) = Writing Level.

The problem, of course, is that it's only a tiny subset of users that actually comment with any frequency so you can quickly kill the democratic appeal of the site, but I'm frankly less concerned that my vote is actually counted and more concerned with seeing quality content on the front page.

* wiki - http://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabil...
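
For what it's worth, the quick-and-dirty calculation above (not the real Flesch-Kincaid formula, which also needs syllable counts) is easy to sketch:

```python
import re

# "Writing level" = average word length + average sentence length,
# per the rough formula in the comment above.
def writing_level(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    if not words or not sentences:
        return 0.0
    chars = sum(len(w.strip(".,!?;:")) for w in words)
    return chars / len(words) + len(words) / len(sentences)
```

Code snippets and short technical jargon would skew a score like this badly, as the replies point out.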


I actually like this idea. The idea of Flesch-Kincaid tests. There was a guy on reddit who analyzed the words on different subreddits, and he came to the conclusion that the most common word was 'because', and the average word length was around 6.7.

I would think we could modify the scoring system a bit so that common English words are penalized and uncommon words are weighted a bit more. The only problem I see is that if someone plans to post code, that's going to wreck their score. Another thing is that most of our programming names/terminologies contain very few letters. So an exclusion/inclusion list would definitely be needed.


Wasn't OP's 11kclub.com post dead'ed by mods?

http://news.ycombinator.com/item?id=4323459


To (a) and (b) I would also add (c): The signal to noise ratio goes down, so it's more difficult to find and upvote the best comments. Moreover, when there already are a lot of comments - and this can happen very fast - new comments, even very good ones, have a very difficult time emerging, and I guess this creates a disincentive.

One solution to try could be to give more time "up there" to comments from commenters with a high karma (only considering the karma from comments, not that from stories, which IMHO is a completely different signal).


I have noticed that the top-ranking comment will often contain a bunch of fragmented, semi-unrelated sub-conversations, and the top comment almost becomes an OP of its own.


Yes, I noticed the same thing. Sometimes the first comment becomes so long that it's almost impossible to reach the other comments - I even built a Chrome extension to be able to collapse them, but when the first comment is really long it becomes difficult to manage anyway.


I've observed that simple, positive feedback like "Looks good, keep it up" gets downvoted as nonconstructive. Karma is more than internet points here since the thresholds drive so many features, so I bet people who would otherwise comment in the affirmative simply don't. Paradoxically, nit picking and verbose argument can score a lot.

I don't want to offer any particular suggestion because I don't have any data to support anything. A hunch, though, based on your idea: if someone wants to downvote, a comment must be provided.


I think the cause is simply growth.

Absolutely, the more the original community is diluted, the more anonymous people feel, and the less restricted they are by perceived norms (as there are none). You can see this in extreme form on some forums like Kuro5hin (remember that?) which have been abandoned to the trolls and are now overrun.

The problem with most systems to control quality is defining what is good and what is not good. That's not a simple problem, and I'm not convinced you'd ever be able to get a crowd to decide on it satisfactorily - there's a reason that mass entertainment panders to the lowest common denominator, and a crowd's votes are likely to trend the same way. Any automated system is subject to gaming and the ignorance of crowds as soon as you let a lot of people have input, however small. Another solution, used by sites like reddit or stackoverflow, is to segment the communities into small enough groups to be self-policing.

Another approach I wondered about recently was along the lines of 'A plan for Spam' - it'd be interesting to use the massive comment base of HN to build up some sort of corpus of good and bad comments and apply Bayesian filtering to comments. You could use this to:

1) Set an initial point score for comments based on their content

2) Weight comment votes from users who consistently scored highly

You would have to seed the corpus of course, and let it learn from trusted users' votes, but otherwise this sort of system would in theory continue to work as long as there were enough good comments being posted.
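
A bare-bones sketch of that Bayesian approach (a toy classifier in the spirit of "A Plan for Spam", with invented names and simple Laplace smoothing, not anything HN actually runs):

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

class CommentFilter:
    """Train on corpora of good and bad comments, then score new ones."""

    def __init__(self):
        self.good = Counter()
        self.bad = Counter()

    def train(self, text, is_good):
        (self.good if is_good else self.bad).update(tokens(text))

    def p_bad(self, text):
        """Sigmoid of summed log-odds that the comment is bad."""
        log_odds = 0.0
        for t in tokens(text):
            g = self.good[t] + 1  # Laplace smoothing
            b = self.bad[t] + 1
            log_odds += math.log(b / g)
        return 1 / (1 + math.exp(-log_odds))
```

The seeding and learning from trusted users' votes described above would supply the training data.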


I think those are great ideas. One question I would have is: how would one prevent people from karma burning? Could enough poor comments from a high-karma person make them forfeit their karma/gravitas, thus highly discouraging such behavior?


It’s hard to encourage and nurture positivity on the web. Tumblr’s done it by not explicitly building comments into the product—so if you’re going to say something critical, it shows up in your personal space. There's an inherent conflict between building a growing community and building a good one. One of the only ways to do both is to have some sort of asymmetric follow relationship that allows all users to create their own smaller communities.

Clay Shirky: “The downside of going for size and scale above all else is that the dense, interconnected pattern that drives group conversation and collaboration isn’t supportable at any large scale. Less is different — small groups of people can engage in kinds of interaction that large groups can’t.”

David Foster Wallace: “TV is not vulgar and prurient and dumb because the people who compose the audience are vulgar and dumb. Television is the way it is simply because people tend to be extremely similar in their vulgar and prurient and dumb interests and wildly different in their refined and aesthetic and noble interests.”

David Foster Wallace: “We should keep in mind that vulgar has many dictionary definitions and that only a couple of these have to do with lewdness or bad taste. At root, vulgar just means popular on a mass scale. It is the semantic opposite of pretentious or snobby. It is humility with a comb-over. It is Nielsen ratings and Barnum’s axiom and the real bottom line. It is big, big business.”

http://christmasgorilla.com/post/29694673248/its-a-genuine-p...


Some thoughts,

Make the influencers more visible... most people follow leaders, and it would help set the tone for new members.

HN's guidelines should be turned into LightBox style documentation. Put it where it matters right next to the input boxes (and in the case of dupe-checking do it by AJAX and communicate it inline).

Guidelines should encourage the positive, and where something is negative then it should be pointed out that they should attack the argument and not the person.

Have a very small set of very clear rules that are enforced transparently, openly and brutally. The lines will only trip up the worst offenders. If you move these lines, only do so because you've listened to the community and they believe the lines are wrong. I use a system on one of my sites in which I track how much a thread is read, and if a post is flagged by more than a certain number of people then I know the community is saying something got through and the rules either aren't being enforced or might not be right.

Mostly: This is not a technical problem. It's a people problem.

There may be some tools you can build to help, but it is people who will use those tools and it is they that make them effective. Even then, there's only so much you can do without fully empowering others.

One ends up realising that a forum is a microcosm of society, and the owner/leader either chooses a form of government in which the citizens will be happiest and yet the aims of the government are achieved, or the owner/leader delays such a decision whilst the population grows and friction increasingly occurs.


Another simple idea (that you've no doubt thought of): using Twitter (or App.net) authentication. That way people are a bit less anonymous and more accountable. I can imagine people would hate this, but they might be the same people who are the haters to begin with.

I don't think this phenomenon is just on Hacker News. I believe the recent negativity shift is an actual change in the startup community's perceptions of the world.

When things are "hot" in our industry, and seemingly trivial companies are exiting for billions, the people who are still struggling start to develop a somewhat bad attitude. I see it all the time particularly from people who are working on "serious stuff" and then lash out when they see Instagram's success (which they've been saying for years: "sure, but what's their revenue model??").

I don't condone it and honestly think it's genuinely strange. Now is as amazing a time as any to be founding a startup. Success of others (no matter how trivial) only makes things better.


How about just getting rid of this upvote / downvote point system? I have to say, when I first joined, I didn't get it. Now I do. It's a competition. But not one that seems to improve the quality of the posts or comments. One that improves ... not sure what. So is it valuable?

Remember the BBS forums and forums of the old days, when you didn't have points and rankings? Yeah. The forums often degenerated there too. Points didn't seem to help or matter. But moderated forums worked well. If you want high quality, you'll need a walled garden. But walled gardens rarely work and stifle creativity. However, if you think a point system is a substitute for moderated comments... well, then, the results speak for themselves.

So, points and voting be gone, maybe? I doubt the quality would change much. Even PG admits that it's not the voting per se, but the entrance of a larger volume of people.


I don't agree with this comment, but ironically someone voted him down and neatly highlighted the problem. Why did they downvote the post? It is a constructive suggestion, and it has its merits. That it got downvoted is pretty much what is wrong! It might not be a snarky response, but it's still a nasty way to criticise!


A couple questions:

1. Why not assign karma by running something like pagerank over the vote graph? I think giving overwhelming weight to high karma old-timers would make HN more like them, which would be an improvement, right?

2. What about assigning karma not by your comments, but by the responses to them?

I figure karma-from-replies would shift the conversation to "what should we be asking about this", whereas karma-from-comments incents "what can I say that the masses will like".

Given commenting has a higher work threshold than voting, it might be "manipulated" less. Gaming it would be more obvious. And subjectively, posts that are questions seem to lead to higher quality discussion (more genuine?). Maybe you'd have to put more work into getting users to not feed trolls and assholes, and ask/reply more to excellent comments. Not sure if practical...

Would love to hear anyone's thoughts on this.
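
Idea (1) could be prototyped with plain power-iteration PageRank over the upvote graph. A toy implementation (an edge u -> v means user u upvoted a comment by user v, and the resulting score stands in for karma):

```python
# Minimal power-iteration PageRank; purely illustrative.
def karma_pagerank(edges, damping=0.85, iters=50):
    users = set()
    for u, v in edges:
        users.update((u, v))
    out = {u: [v for (s, v) in edges if s == u] for u in users}
    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / len(users) for u in users}
        for u in users:
            targets = out[u]
            if not targets:  # dangling node: spread mass evenly
                for w in users:
                    new[w] += damping * rank[u] / len(users)
            else:
                for v in targets:
                    new[v] += damping * rank[u] / len(targets)
        rank = new
    return rank
```

A real deployment would weight edges by vote counts and recompute incrementally, but the idea that "a vote from a well-regarded voter counts more" falls out naturally.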


(2) probably would not work. Commenters could troll, and then the community replying in disagreement would up-vote their comment.

Here are my ideas: (A)Not showing someone their karma score ever, unless it is negative.

(B)Leaving a line between the comment field and the reply button saying, "Stay constructive."

(C)That line of text could link to the guidelines.

(D)Detect if someone says 'idiot' and then change the line to, "You said 'idiot', do you still want to post?"


> (D)Detect if someone says 'idiot' and then change the line to, "You said 'idiot', do you still want to post?"

I like this idea. In particular, I like the idea of analyzing content - but not programatically rejecting the post completely. Another forum that I frequent tries to keep things civil by banning certain words. Sounds good on paper, but then you try to use the phrase "knee-jerk reaction" and the system rejects your post because it has "jerk" in it. D'oh.

Pointing out the possible (but not definite) incivility, but allowing the post anyway (after a confirmation step) could be a nice way to remind people to be polite, but without being overly draconian.
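
The "knee-jerk" failure mode comes from substring matching; checking whole words instead avoids it. A sketch with a toy word list (the list and the soft-warning behaviour are illustrative, not any real site's filter):

```python
import re

FLAGGED = {"idiot", "jerk"}

def flagged_words(comment):
    """Return flagged words used as whole words, for a soft warning.

    Hyphenated compounds like "knee-jerk" are kept as single tokens,
    so "jerk" inside them is not flagged.
    """
    words = set(re.findall(r"[a-z]+(?:-[a-z]+)*", comment.lower()))
    return sorted(FLAGGED & words)
```

The confirmation step would then fire only when this returns a non-empty list, and the post would still be allowed through afterwards.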


I actually quite like those suggestions. Subtle but I could see them working well.


The idea I'm currently investigating, in case anyone is curious, is that votes rather than comments may be the easiest place to attack this problem.

I think the principle that (give or take passing a karma threshold) all contributors are equal and one vote is one vote has probably outlived its usefulness.

Perhaps up and/or down votes from established, positive contributors should simply carry more weight.

Another possibility might be to give a handful of extremely highly regarded contributors a third option to super-downvote a comment by a new or low karma contributor, such that just a couple of those (or even just one) would cause the offending comment to be killed immediately. I'm thinking that rapidly killing off things like Redditesque extended joke threads that obviously don't suit the culture on HN might help, by the "broken windows" theory.


Although I want to agree with you, I'm highly skeptical of the super-downvote feature. In my relatively short time here, I've seen comments by certain top contributors being argued viciously (is that the word? English is not my first language, sorry). People made good points and even attacked the top commenter's points, quite a lot of which I felt were valid. Now, on a purely egotistical basis, if he/she wanted to super-downvote the entire thread, nobody could possibly do anything to prevent it. Basically, what I'm saying is that even they are human.

I'd rather have a system where instead of a simple up/down vote you have an elaborate system of 'Offtopic', 'Poor taste', 'Incorrect', as mentioned in a comment somewhere above. But also, I'd like to add positive halves of these, such as 'Insightful', and maybe one or two other labels. The idea being that they can select only one out of the total, and each label would secretly have a different value. So maybe flagging a comment as 'Incorrect' isn't as penalizing as marking one as 'Offtopic' or 'Poor taste', and 'Insightful' may have a different positive value compared to something labelled 'Excellent advice'.

With a lot of options, I don't think newbies would actually click randomly on any button without knowing the meaning and context of each one. Anyway, the buttons need not have full names; simple short labels like P/OT/I/EA/X may be enough. Of course, this automatically requires a way to retract your vote if you accidentally click on the wrong button, but it should still allow only one option to be used.


Are people banned/ghosted here? I'm assuming so, because that's how most forums work, but I'm not sure since this seems to be custom built.

While I don't think that HN has hit this level yet, one solution that I've read about involved people who trolled or posted general nastiness. When people had reached a certain threshold or pissed off the wrong high-level mod, they got "ghosted". What this meant was that people who were "alive" or considered "living" could see "living" contributors' posts, but could not see "ghost" posts. Ghosts, however, could see other ghosts' posts, so they could, for what it was worth, writhe in their own debauchery and piss each other off, but not the pleasant and constructive members. I thought it was an interesting solution and apparently it worked very well.


There is a hellban system in place.. your posts are hidden but you don't know it. You have to explicitly go to your options and enable "showdead" so these work.

Problem is it's not entirely clear what some people have done to get hellbanned sometimes.. I'm not sure what the requirements for getting it reversed are, either. I've seen some very constructive comments in that mess before..


With a big site like HN I don't know how you can control it. However, in a more focused site, the answer would be hands-on involvement. It doesn't take that much involvement, just a little.

The thing about systems is that we tend to think of them as if people are not a part of them. But people are a part of them.

One thought I would suggest is that it might be worth giving people with very high karma (maybe high enough that only a few people have this) the ability to post highlighted reprimands, with automatic banning of a user if he or she gets more than, say, 5 of these in a month. These could then be reviewed if necessary.

One thing I would think about is that rather than having a system where computers try to control people, have a system which empowers the community leaders to set standards of interaction.


As I mentioned below, I've run a social network called Scribophile for about 5 years now. What I've learned is that you simply can't engineer away a social problem. People will always find a way to be jerks; the only way to mitigate them is either through:

1) Social pressure: the community must evolve in such a way that its members as a whole frown upon negativity. Hard, because it takes time and can go wrong. Or,

2) Moderation: a group of members you trust is tasked with actively deleting useless comments and threads and banning useless contributors. A zero-tolerance policy for stupidity. Is that elitist? Maybe. But an elite community is hard to grow when idiots are tolerated.

There is literally no way to engineer away jerks. A social problem requires a social solution.


The idea I'm currently investigating, in case anyone is curious, is that votes rather than comments may be the easiest place to attack this problem. Although snarky comments themselves are the most obvious symptom, I suspect that voting is on average dumber than commenting, because it requires so much less work. So I'm going to try to see if it's possible to identify people who consistently upvote nasty comments and if so count their votes less.

Interesting - also, if it is as you suspect and voting is easier and thus part of the problem, perhaps adding more friction to the process of voting would be useful. Something like having to spend karma to vote, or having a limited number of votes per day to spend.


Another technique could be to split the ranking into two modes: a general mode and a power mode (where rules are much stricter and biased towards a filtered set of core community users). In the power mode, readers would view the community core (who define the site ethos) but might not be able to interact (votes, comments, and submissions would not appear there without an upvote from core members).

Such an approach should be taken with the goal of gradually aligning general user behavior with the community core. Maybe add a tab for this.

The thinking is based on the premise that the site still has a lot of its core users and community values have not changed over time.


How about weighting votes from veteran members more, and if new members consistently vote similarly to veterans, their votes are fast-tracked to being weighted the same?

I can see it bubbling up quality content without punishing promising new members.
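
One hypothetical way to implement that fast-tracking (the thresholds and weights are invented for illustration): compare a newcomer's votes against the veteran consensus on the same comments, and promote their weight once agreement accumulates.

```python
# my_votes / veteran_consensus: comment_id -> +1 or -1
def newcomer_weight(my_votes, veteran_consensus, promote_after=20):
    shared = [c for c in my_votes if c in veteran_consensus]
    if len(shared) < promote_after:
        return 0.5  # provisional weight until we have enough overlap
    agree = sum(1 for c in shared if my_votes[c] == veteran_consensus[c])
    agreement = agree / len(shared)
    return 1.0 if agreement >= 0.8 else 0.5
```

The consensus itself would presumably be the majority vote of already-promoted members, so agreement compounds over time.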


How about giving the OP an option to judge which comments are helpful to them, one way or the other? If a member is constantly penalized by OPs for giving negative/useless comments, then revoke their voting rights.

Stackoverflow model could also work.

My 2 Cents.


Private comments, plus automatically folding/collapsing long or low-scoring threads and negatively scored comments.

Optionally mark your comments as private for grammar/spelling/technical corrections, way-off-topic remarks, and personal requests.

I think graying out works for comments scoring 0 or maybe -1, but beyond that, you might as well fold/collapse them.

I also think something like xkcd's Robot9000[0] would be useful to prevent clutter like "me too!", "awesome!", and "Thanks".

[0]: http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-at...
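For the curious, a toy version of such a filter is easy to sketch (the real ROBOT9000 also mutes repeat offenders rather than just dropping posts; this only does the originality check):

```python
import re

# A Robot9000-style originality filter: a comment is rejected if its
# normalized text has ever been posted before, which kills "me too!"-type
# clutter on arrival.

seen = set()

def allow(comment):
    # Normalize aggressively so trivial variations count as duplicates.
    normalized = re.sub(r"[^a-z0-9]+", "", comment.lower())
    if normalized in seen:
        return False
    seen.add(normalized)
    return True
```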


Perhaps you could have a system where users vouch for other users who make good submissions, and then have submissions ranked with something along the lines of PageRank, with "vouching for" taking the place of "linking to".

The advantage of identifying good users over identifying bad ones is that recognition of goodness can propagate: if you believe A to be a good user and A believes B to be a good user, that is some indication to you that B is a good user. Replace "good" with "bad" and this no longer holds.
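A crude version of that vouch-based ranking is just PageRank's power iteration with "vouches for" edges in place of links (this is a toy sketch, not tuned for anything):

```python
# PageRank over a "vouches for" graph: vouches maps each user to the set
# of users they vouch for.

def trust_rank(vouches, damping=0.85, iterations=50):
    users = set(vouches) | {v for vs in vouches.values() for v in vs}
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        new = {u: (1 - damping) / n for u in users}
        for u, vs in vouches.items():
            if vs:
                share = damping * rank[u] / len(vs)
                for v in vs:
                    new[v] += share
        # Users who vouch for no one spread their rank evenly (dangling nodes).
        dangling = sum(rank[u] for u in users if not vouches.get(u))
        for u in users:
            new[u] += damping * dangling / n
        rank = new
    return rank
```

As with PageRank, trust propagates: being vouched for by a highly vouched-for user is worth more than being vouched for by a nobody.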


Paul, what about making votes have values other than +1/-1, depending on who is voting?

People who have been in the community longer and have good karma would have the power to vote +1/-1, while people who have less karma or joined recently would be able to vote, say, +0.1/-0.1, with a range of people in between (+0.3/-0.3, +0.5/-0.5, etc.).

It just makes sense that your upvotes and downvotes should carry more weight than the ones coming from, say, me.


There is no agreement on exactly what a vote in either direction means, since there is no restriction on anyone voting either way for any reason, and it means different things to different people.

As for "who is voting": for argument's sake, say a member of SCOTUS joins HN and decides to cast a vote. Or a member of SCOTUS joins HN and after two years has low karma, because they don't have much time on their hands. I'm not exactly sure why someone who hasn't been a member for long, or doesn't have a high karma score, shouldn't be trusted with voting, or why someone who chooses to spend a lot of time on HN should be given more privileges.


An alternative to technical countermeasures is to do it the way the SA forums have been doing it for 10+ years: $10 to get in. Appoint moderators who can ban or probate users at will if they post trash or break the rules. If you're banned, you shell out another $10 if you want to rejoin.

(And no, this is not comparable to app.net. The money isn't sufficient, you need the moderators just as much. You need both.)


1) Define moderators as a subset of voters you trust. You would be one of them.

2) Rate every other voter against the moderators' votes. If a voter votes the same way as a moderator, increase that voter's future weight; if they vote against a moderator, decrease it.

3) For every comment, calculate a weighted upvote/downvote total based on voters' clicks and the weights from #2.
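The three steps might be sketched like this (the step size, starting weight, and data layout are all arbitrary choices for illustration):

```python
# Step 2: moderator votes are ground truth; other voters gain or lose
# weight by agreeing or disagreeing with them.

def update_weights(weights, votes, moderator_votes, step=0.1):
    """votes and moderator_votes map (voter, comment_id) -> +1 or -1."""
    for (voter, cid), direction in votes.items():
        for (mod, mcid), mod_dir in moderator_votes.items():
            if mcid != cid:
                continue
            w = weights.get(voter, 1.0)
            if direction == mod_dir:
                w += step                # agreed with a moderator
            else:
                w = max(0.0, w - step)   # voted against a moderator
            weights[voter] = w
    return weights

# Step 3: a comment's score is the weighted total of the votes on it.
def comment_score(cid, votes, weights):
    return sum(weights.get(voter, 1.0) * d
               for (voter, vcid), d in votes.items() if vcid == cid)
```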


How about an "outside community norms / not nice" vote on any comment? If a user accumulates too many of them (relative to positive votes?) their comments get pushed down.

I think it's different from the existing "flag", or at least how I understand it: Currently, I'd only hit flag if something is deeply offensive / illegal / spam. And I think flag only exists on posts?


>identify people who consistently upvote nasty comments

What's your definition of a "nasty comment"? How do you distinguish between a "nasty comment" and a comment that voices an unpopular opinion?

I understand what you're trying to accomplish; however, I worry that a side effect will be the further silencing of opinions that go against the HN popular consensus.


There's also a UX component to this: I have accidentally upvoted nasty comments and downvoted comments I liked because the touch targets for the vote icons are so small on my phone. Even worse, it appears that I can't undo my mistaken votes either.


I have one solution for this. I just read another post about Tor networks, and taking ideas from it, I think it could be really good if Hacker News were removed from the regular IP addressing layer and moved inside Tor. That would help because, hopefully, not many people would be willing to install Tor and go to that much trouble just to get to Hacker News :)

PS: I hope.


One thing that would solve the growth problem would be partitioning the community into smaller groups that each user can manage. Reddit does it through subreddits, and it works somewhat. I would favor an approach where users could white/gray/blacklist other users, with those lists acting as modifiers to the overall score.
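A minimal sketch of those per-reader modifiers (the factors are made up): each reader keeps their own lists, and scores are adjusted only in that reader's view of the thread.

```python
# Per-user score modifiers: whitelisted authors are boosted, graylisted
# damped, blacklisted zeroed out - but only for this one reader.

MODIFIERS = {"white": 2.0, "gray": 0.5, "black": 0.0}

def personalized_score(base_score, author, my_lists):
    """my_lists maps 'white'/'gray'/'black' to sets of usernames."""
    for shade, factor in MODIFIERS.items():
        if author in my_lists.get(shade, set()):
            return base_score * factor
    return base_score
```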



I agree with tptacek - I think a forum benefits from posting its guidelines somewhere prominent, especially during signup for new people.

That way, the community knows what the values are. To be honest, looking at HN, I'm not sure where on the site to find those values right now myself. I just try to be my own reasonable self :)


I would agree: the ideal solution would be a self-organizing trust network in which higher-quality comments tend to be promoted. I've previously suggested one way to implement such a system here: http://news.ycombinator.com/item?id=3473753


Good. Something needs to be done. Every time I click on a link about a startup or a new site, I see the negative feedback, and I cringe at the thought of posting anything I build here, only for it to meet an inevitable fate that could permanently damage the project's reputation during its launch phase.


I would like to see separate agree/disagree buttons. No other voting site has this, but I think it would solve the disagree=downvote / agree=upvote problem that Hacker News and Reddit have. By separating the two, you might get people to upvote good comments and downvote poor ones more readily.


Thanks for your response; I appreciate you looking into it. As I said, I'm a huge HN fan, but with the growth of the community there has been a sharp rise in people seeking to make a name for themselves, and in people who criticise rather than critique constructively.


I've thought about this for quite a while and have concluded: you could start a community and weight comments based on concurrence with the community (most likely from the get-go), but unfortunately it's a fine line between stagnation and simplification.


You could color-code people who are in the current (or a past) YC cycle. You could also make quality participation an element of the application process (as it once was) - the forum has been cannibalized in a way.


Moving the snarky comments from the top of a comment thread to the bottom won't help much. The snark sets the tone of the thread and dissuades better comments. You need a way to remove them.


What if the poster had an option to restrict the ability to comment on a submission to members with a certain threshold of karma or account age, not to exceed the poster's own level?


I think it would be worth taking a look at the Slashdot voting system, which IMO does the best job I've seen of getting the best signal out of the noise of mass voting.


What about consistently downvoted decent comments? Voting is anonymous, which isn't bad in itself, but if you got people to explain why they downvoted, that would help.


pg - I'm curious whether you've ever given thought to exposing a filter or ranking domain-specific language to us ... something we could enter in a spot in our user profile ... so that we could either choose the current ranking algorithm or write our own.

It seems that HN may be different things to different people, so perhaps allowing the presentation to be customized would put people at ease?


Make upvotes and downvotes visible (i.e., not just the final number but the actual list of users). Knowing that no one will know who voted fosters petty up/downvotes.


I haven't been here in quite a while, and immediately I see meta-threads about how this site operates...

But the latter half of your comment intrigues me...


How about a 'quarantine' period? Users can only comment after being registered for a while, after voting, and after viewing a certain number of posts/comments. I believe communities change (sometimes for the worse) when new users don't get what the community is about and just hit the ground posting/commenting.


I think a more aggressive approach is needed.

Delete insulting comments that cross the line. Trolling is like pornography: you know it when you see it.

After three offenses, delete that person's account. After five, ban their IP address.


Would Kaggle be a good fit to help solve this problem?


Real names. I believe I am much less of a jerk on Hacker News than I am on Reddit, where I post under a pseudonym.


I did have one idea. I don't know if it would work. I've often wished that Reddit had an "adult table", which is NOT a separate place, but a separate DIMENSION of comments in the SAME thread.

Let me explain. Currently everything has one score. So, give it a second score that also starts at 0. Instead of scoring "1 point," it would score "1 point (1 point at the adult table)."

Then the adults at the adult table would immediately downvote anything that isn't adult (this only affects its score at the adult table) and upvote well-reasoned, nuanced posts, and so forth. However, there aren't as many people at that table. So while the general score might range from -2 to +100, the adult-table score would leave most comments untouched (at 1), with a few downvoted and a few upvoted.

This way, you could have a children's table (current Reddit) with its thousands of votes and good comments languishing at 0, and the adult table, where those same 0-score comments can have high votes, and so forth.

So, here on Hacker News, you could have a "serious" or a "founders'" table, and a snarky table for people who just like picking everything apart - where, "correctly" (per policy), we at the serious table would vote the snarky comments down.

So, to illustrate. Here is a snarky comment "Oh, I see. So it's like a coupon system where you insert yourself into a transaction and book the whole transaction as revenue, deducting the actual 'rest of the transaction' as cost. So even though you lose on every transaction, you make it up in volume! Hey, it worked for groupon, right? :)"

This starts at 0, 0 for the snarky and serious tables respectively. The serious table immediately gives it a downvote, so it is now at 0, -1; then another, 0, -2; and soon it's hidden there: 0, -3. Meanwhile, the people sifting through the trash say "hey, ZING!!" and promote it: it gets to 5, -3, then 10, -3, and then - if this place degenerates into Reddit - hundreds of points, -3.

The serious or adult table is not impacted. The comment isn't even visible there. Meanwhile, the snarky guys can have their children's-table talk.
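The two-score mechanics are trivial to state in code; here's a toy sketch using the snarky example above (table names and numbers are just for illustration):

```python
# Every comment carries two independent scores, and each vote targets
# exactly one of them.

class Comment:
    def __init__(self, text):
        self.text = text
        self.general = 0   # the "children's table" score
        self.serious = 0   # the "adult table" score

    def vote(self, table, direction):
        """table: 'general' or 'serious'; direction: +1 or -1."""
        if table == "serious":
            self.serious += direction
        else:
            self.general += direction

# The snarky coupon zinger: popular generally, buried seriously.
c = Comment("Hey, it worked for groupon, right? :)")
for _ in range(3):
    c.vote("serious", -1)
for _ in range(10):
    c.vote("general", +1)
```

After those votes the comment sits at 10 on the general score and -3 at the serious table, exactly the split described above.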

-----

(Here is the rest of my comment; I moved the above to the top for more visibility.)

Like many others, I had a "rude introduction" to Hacker News - not from real connections in the startup world (which I have) but from low-quality forums. In my case I regret even the choice of username, and would change it if I could (maybe marked to show that it's new). I wouldn't now shy away from using my real name or a close alias - something I otherwise never do online.

So, in my case, what would have mitigated my behavior is the site seeing where my traffic was coming from and treating me accordingly - I certainly wasn't typing the URL into the browser bar. Secondly, you could have a prominent button: "New to Hacker News? Learn about this community, which is more serious than most online communities: a lot of personal connection is on the line." Then explain why, and what we get out of behaving the way we do.

Finally, as of now I can either upvote or downvote a nasty comment, and there is no right answer. But you could change this in a minute. If the correct way to vote were defined as "downvote any negative comment not leavened by positivity - we need all the positive thinking we can get," then I would start voting correctly. It's like Wikipedia: I don't add what I know; I add what I can prove with a reference. I don't delete uncited folklore because it's not a good read; I do so because that's the correct action.

So, policies (as is the case with Wikipedia) help immensely.


I had an idea, similar to your suggestion, to vote on the link and the discussion separately. Although my idea would not separate children from adults (an idea I like very much), it would allow good links to rise without endorsing snarky and low-quality commentary on the link. Likewise, readers would feel more free to upvote quality discussion without necessarily endorsing the subject.

It's too bad that more than a couple of voting toggles would be cumbersome; otherwise, both ideas could coexist as four sets of up and down arrows.


Why don't you charge, say, 10 cents per comment?



