I've spent many hours over the past several years trying to understand and mitigate such problems. I've come up with a bunch of tweaks that worked, and I have hopes I'll be able to come up with more.
The idea I'm currently investigating, in case anyone is curious, is that votes rather than comments may be the easiest place to attack this problem. Although snarky comments themselves are the most obvious symptom, I suspect that voting is on average dumber than commenting, because it requires so much less work. So I'm going to try to see if it's possible to identify people who consistently upvote nasty comments and if so count their votes less.
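To make the idea concrete, here is a minimal sketch (in Python, with invented names and thresholds; nothing here reflects HN's actual code) of down-weighting votes from users who consistently upvote comments that are later judged nasty:

```python
# Hypothetical sketch: down-weight votes from users who consistently
# upvote comments that end up flagged or heavily downvoted as nasty.
# All names, thresholds, and penalties are illustrative assumptions.

from collections import defaultdict

def vote_weights(upvotes, nasty_comments, threshold=0.3, penalty=0.25):
    """upvotes: list of (user, comment_id) pairs.
    nasty_comments: set of comment_ids judged nasty after the fact."""
    total = defaultdict(int)
    nasty = defaultdict(int)
    for user, comment_id in upvotes:
        total[user] += 1
        if comment_id in nasty_comments:
            nasty[user] += 1
    weights = {}
    for user, n in total.items():
        nasty_rate = nasty[user] / n
        # Users whose upvotes land on nasty comments more than
        # `threshold` of the time get their votes counted at
        # `penalty` weight instead of full weight.
        weights[user] = penalty if nasty_rate > threshold else 1.0
    return weights
```

The threshold and penalty values are pure guesses; in practice they would need tuning against real flag data.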
It is hard to imagine that the site guidelines you wrote years and years ago, when this place was tiny, are, intact and unchanged, the optimal guidelines for the site in 2012.
It's also the case that the things you say to guide the tone of the community, for better or worse, have a huge impact on how the community works.
If it should be a norm of HN that criticisms be kept constructive, and that the community should have a default position of supporting entrepreneurship (and, more generally, of supporting attempts to build anything) --- and that should be a norm --- why can't you just say so? Not in comments buried in random threads, but somewhere prominent on the site. Like, for instance, the guidelines. Which you need to update. Please.
I say "surprisingly" because, like a lot of tech hackers, I often think about a technical solution before I think about the simple social one. But I've noticed time and time again just how much effect the simplest changes in user interface can have on the entire "mood" of a community. And a "simple change in UI" includes the guidelines, or -- I don't know -- there's actually a lot of space for a notice below the comment submission form where you can give people the "right" idea (what you want to see in this community).
The personal anecdote goes a little something like this: We have a really solid group of mods/admins with a hands-off approach (way more hands-off than the invisible ones at HN, but that's a different gripe), so we had our fun with innocent tiny changes, like altering the descriptions of individual sub-forums ever so slightly: a pun here, a little in-joke there. After a few years, however, all of the descriptions had mutated into something that was not at all descriptive anymore. All the "old" members (which of course includes the mods) knew perfectly well what the sub-forums were meant for. But the new members did not. As new members look to older members for social context, there was a sort of generational loss in relevance, and at some point people almost entirely just posted whatever, wherever. It was chaos! And so I had the brilliant idea that, hey, we could change those descriptions so that they're both descriptive and somewhat funny (utter genius, I know).
The amazing thing is that it took only about 2-3 weeks before people started posting on-topic things in the right sub-forums again.
The moral of the story is: people tend to respond really well to just flat-out being told what to do. Really well. Amazingly well. Frighteningly well.
Better, in particular, than some tech solution that tries to subtly "guide" them (or at least much easier to implement).
One of these days, I'm going to make a chapterwise summary of that book so that I can remember the experiments and behaviours at a glance.
Also, in the Army we had the Warrior's Ethos, the Army Values, the Soldier's Creed, and the Creed of the Non-Commissioned Officer crammed down our throats daily. While there are some awesome people in the military, I've run into just as many dishonorable people as I have in the general population.
Or, more likely, they might be somewhat worse, for some reasonable definition of somewhat.
Can you think of any group this would not be true for given a sample size or population as large as the military?
Honest, decent people who join the military (or any other group) will likely remain honest and decent. Dishonest people will most likely remain dishonest until their actions prevent them from getting what they want.
Perhaps those values from the military didn't have any resonance because they weren't, in fact, good? Or weren't taught in a good way? (I'm thinking now that tyranny in teaching generates repulsion, not acceptance.)
Also, if we assume the teaching was correct (both in content and in form - I don't know, of course), who knows whether the "soldiers who didn't change" would have been even worse without it?
I think that human beings are subject to influence, and good influences play a determining role in behavior. But what "good" means - that is another chapter entirely...
Secondly, I don't think the ideals help if you don't agree with them. You choose to become a doctor and take the Hippocratic oath. Same with engineers and lawyers. If you find, halfway through, that you don't want to be an engineer or a doctor, you can always bail out and switch. But with the military, bailing out is completely out of the question.
While people may or may not like what the military does, most of its creeds/values are for the most part positive things. The Army values for example are Duty, Loyalty, Selfless Service, Respect, Honor, Integrity, and Personal Courage - pretty generic and not nefarious at all.
But since you asked, why would one have to have experience with that exact group of people prior to coming to a conclusion? I agree that it would be ideal if that was the case, but there are many studies considered to be scientifically valid that use one group of people who come from similar circumstances as a control, while only testing on another group. In my example I suppose the control would have been the people I know from outside the military. As I stated though, my comment was just an anecdote from my personal experiences, not a peer-reviewed, published article intended to expose the author of the other study as a fraud.
But solid software, really? I give them one thing: it works. Can't recall encountering any real big bad bugs either, so yeah in that sense, quite solid.
But given that you contributed to it, I assume you've seen the code? I've had to make some modifications/hackery here and there myself, and the amount of time I've spent cursing at it for being a tangled and obscure mess... combine that with the childish language in the comments, ugh.
I guess that's one important thing it taught me: never be flippant or rude in your code comments, because it'll invariably end up making you look like a fool. Kind of like that "Muphry's Law[sic]" someone quoted in another thread. The original advice was about "production code", but I found it easier to change my habit to "in general", because you never know who's going to see your code, and even your 3-months-future self might as well be a different person where reading code is concerned (except future-me does tend to share my sense of humour).
Because of that, all my own modifications ended up looking like rather ugly hacks as well because there was simply no "coding style" to join in step with.
(And the templating system (that is neither). And the amount of semi-duplicate subroutines and their names! "post", "post2", "message". And trying to guess in which of the five /* probably the most important module of SMF */ giant includes you're going to find a certain routine is about as predictable as trying to guess the name of a PHP library array function without reading the docs... I really should stop, sorry :) )
It does work very well, I must repeat that. But only as long as the original magicians are still there taking care of it I'm afraid.
It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.
Examples from:
Metaphor: Argument is war
I shot down his argument
He couldn't defend his position
She attacked my theory
Love is a journey:
Our marriage is at a crossroads
We've come a long way together
He decided to bail out of the relationship
Comment Rules: Remember what Fonzie was like? Cool. That’s how we’re gonna be — cool. Critical is fine, but if you’re rude, we’ll delete your stuff. Please do not put your URL in the comment text and please use your PERSONAL name or initials and not your business name, as the latter comes off like spam. Have fun and thanks for adding to the conversation! (Thanks to Brian Oberkirch for the inspiration.)
In the "old days," there was more-or-less agreement that the vote was for quality of comment -- feedback about the value of the participant to the community -- and agreement/disagreement was voiced (if necessary) in comments. As a rule I (and others) would upvote comments we disagreed with as long as they were thoughtful. There were exceptions, of course, as you have acknowledged, but this was a pretty good guideline.
Now I think things have turned around a bit, and votes have started signifying agreement/disagreement more than quality of comment. I think this takes the community in the wrong direction.
You seem to be thinking that this is largely trollish behavior. I suspect, however, that most of it comes from ignorance of the expectations of the community. I'm not sure there is a fix for the trolls. But for noobs that watch reality tv shows and want to come here and be the acerbic judge I think we can stop it by keeping the behavioral expectations in the forefront.
The flaw with the idea isn't what you say. I think it's important to expect decent, reasonable behavior. The flaw is, 'ok, we have two buttons. Now what?'
I would start with having the quality score 'float' the article and pretty much ignore the agree/disagree score in karma metrics. But I would probably keep that a secret.
Now personally I'd rather be able to tell jokes on HN than be able to veto. But then again I'd have to sift through the bad jokes that others tell.
In any community like this it takes some time to learn the rules. I'm sure there are quite a few people who don't necessarily know who "pg" is for that matter.
And now I went and explained it. Sorry, tptacek.
If I am selling something and asking $1000, and the person comes back with an offer of $500, and assuming I am willing to take that offer (because I was shooting high), I need to offer some "kickback". This could be as simple as saying something like "well, if I accept, can you pay within 5 days?" Or perhaps, "can't do $500 but I'll consider $600". Otherwise the buyer thinks "hmm, wow, that was too quick and too easy" and might back out of the deal. Strictly my experience over many years.
Instead you should ask to be able to green-light one applicant per batch. That way, it would be possible for your contribution to be very valuable.
To support this proposal: When I first arrived here (from your homepage), I didn't notice anything about guidelines until I stumbled upon a comment which mentioned them. Then I eventually found them at the bottom of the page (which can be quite long), where I seldom look.
I'm not keen to make a new account to test this out though.
The bad ones, I reevaluate whenever I see another comment, but I've never yet seen a good reason to remove someone from that list. Sadly they never seem to end up hellbanned. It's largely uncorrelated with agreeing or disagreeing with views; it's just that there is a group of people who are, as the OP said, really negative, irrational, and destructive to civil discourse.
The problem I see is: how do you know whether a comment is negative or not? Intuitively you can tell, but pinning that down might be tough.
Maybe swap out the "post comment" button for two submit buttons: one that says "Positive", the other "Negative". (This could be done a myriad of ways, but you get the point.)
You could then measure a user's tendency to post useful positive/negative comments based on the average votes each type of comment receives.
Posters who consistently have low average scores for their comments could lose posting rights for a period.
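A rough sketch of how that two-button scheme and the per-type averages might hang together (all class names, methods, and cutoffs here are hypothetical, chosen just to illustrate the mechanics):

```python
# Illustrative sketch of the two-button idea: comments are submitted as
# "positive" or "negative", each user's average score per type is
# tracked, and posting rights are suspended when either type averages
# too low. The thresholds are assumptions, not anyone's real policy.

from collections import defaultdict

class CommentStats:
    def __init__(self, suspend_below=-1.0, min_comments=5):
        self.scores = defaultdict(list)   # (user, kind) -> vote totals
        self.suspend_below = suspend_below
        self.min_comments = min_comments

    def record(self, user, kind, votes):
        assert kind in ("positive", "negative")
        self.scores[(user, kind)].append(votes)

    def average(self, user, kind):
        s = self.scores[(user, kind)]
        return sum(s) / len(s) if s else 0.0

    def can_post(self, user):
        # Suspend posting when either kind of comment averages below
        # the cutoff over a minimum number of samples.
        for kind in ("positive", "negative"):
            s = self.scores[(user, kind)]
            if len(s) >= self.min_comments and sum(s) / len(s) < self.suspend_below:
                return False
        return True
```

Separating the two kinds means a user who writes well-received criticism isn't penalized just for being negative, only for being negative badly.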
I disagree and find such a community engineering approach dangerous for the health of the very community it's trying to "save". I think there are two big problems with it: (i) It gives too much power to the gatekeeper(s), be it software or human, and (ii) it creates a bias for positive comments.
This is the typical dilemma: Bad comments and a free forum are two sides of the same coin. Although I hate snarky, nonconstructive comments as much as everybody else, I think that trying to curtail one may also damage HN's innovative and free atmosphere, which is its main asset.
Sure, I sorta agree, but ... there's snark, and then, well, there's snark.
Sometimes snarky, non-constructive comments can be really funny and/or satisfying. I don't want them to dominate the conversation, and a lot of snark is just sort of stupid, but ...
Whenever I think of a good example of a community that has managed to stay pretty high-quality for some time, I can't help but think of Matt Haughey's Metafilter. I'm sure the nominal charge for membership helps, but the (paid) moderators are very good.
You have to view an individual comment to get the flag option. I'm not sure what the down votes are for. "This comment does not belong on HN", maybe.
I have no idea what the flag is for. "This content is so bad it needs to be removed, not just hidden", perhaps.
I agree with your post though! I'd be wary about having too many options for marking posts.
My observation is that downvotes also signify comments that have one or more errors of fact. Of course "fact" is also open to interpretation and not absolute.
You could say "you can only be President for two terms of 4 years each" but someone could dispute that because Roosevelt was President for longer or because in their country that is not the case. Etc.
A downvote because of inaccuracy without a comment isn't very helpful of course.
I do most of my reading on Reddit (front page reddits are just cotton candy, but there are good ones to be found). I only started reading HackerNews about a year ago, and mostly just for the articles from lesser known blogs that don't get posted on Reddit. Being something of a HackerNews newbie I haven't been around long enough to notice a change, but I hope it doesn't go the way of Slashdot.
I know that the worst offenders won't read the guidelines, but a little faith in the rest of them would go a long way toward improving the quality of content on this site.
There is not going to be a "one-size-fits-all" solution to this problem. A multi-faceted solution including both technical and non-technical elements would likely work best.
The easiest part of the solution to implement would of course be the text in the vicinity of the comment box. This text would establish certain core tenets that we can all vote on with Paul.
The technical aspect of the solution could monitor how often certain users are being down voted. The down-vote history of an account could then be used to "weigh it down" somewhat, so that it takes more up-votes in the future to bring their comment to a higher position on the page.
So long as it is a group norm to adhere to the guidelines, those who don't will be downvoted and their conduct will have little influence on the community.
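One possible shape for the "weigh it down" mechanism described above, as a hedged sketch (the formula and its constant are illustrative assumptions, not anything HN actually does):

```python
# A small illustrative sketch of the handicap idea: an account's
# accumulated down-vote history shrinks the effective value of future
# upvotes, so it takes more of them to rank that account's comments
# high on the page. The formula and scale factor are invented.

def effective_score(upvotes, account_downvote_history, scale=0.01):
    """Each historical downvote slightly shrinks the value of future
    upvotes on this account's comments."""
    handicap = 1.0 / (1.0 + scale * account_downvote_history)
    return upvotes * handicap
```

With scale=0.01, an account with 100 lifetime downvotes needs twice as many upvotes to reach the same position as a clean account.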
The OP sounds like he should find some other support group; this is a community of smart people that will quite happily - and quite rightly - rip any pointless idea apart.
If you're going to editorialize and discount any comments/votes you consider not aligned with boosting anyone's self esteem, or what you consider negative you will create an artificial community - one that is the website equivalent of the old boys network, where the old timers consider themselves superior to the newcomers. Negative comments can be as useful as all the back patting attaboys.
If you really want to do this, the simplest solution is one I am sure you're aware of: pick a few dozen people whose voting habits you "like" to train a Bayesian classifier, then down-weight those who don't fall in line with the way you'd like to see your community behave.
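The classifier idea could be sketched roughly like this, assuming per-comment vote records are available; the particular model, the Laplace smoothing, and the logistic squash are my own illustrative choices, not a claim about how anyone would actually build it:

```python
# A minimal Bayesian sketch: learn, from a trusted seed group, the
# probability that a "good" voter upvotes each comment, then score
# everyone else by how likely their own votes are under that model
# versus under random (coin-flip) voting.

import math

def good_vote_model(trusted_votes, comments, alpha=1.0):
    """trusted_votes: dict of user -> {comment_id: +1 or -1}.
    Returns P(upvote | good voter) per comment, Laplace-smoothed."""
    model = {}
    for c in comments:
        ups = sum(1 for v in trusted_votes.values() if v.get(c) == 1)
        downs = sum(1 for v in trusted_votes.values() if v.get(c) == -1)
        model[c] = (ups + alpha) / (ups + downs + 2 * alpha)
    return model

def alignment_weight(user_votes, model):
    """Log-likelihood ratio of the user's votes under the 'good'
    model versus a coin flip, squashed into a (0, 1) vote weight."""
    llr = 0.0
    for c, v in user_votes.items():
        p = model.get(c, 0.5)
        p_vote = p if v == 1 else 1 - p
        llr += math.log(p_vote) - math.log(0.5)
    return 1 / (1 + math.exp(-llr))   # logistic squash
```

Users who vote like the trusted group drift toward weight 1, contrarians toward 0, and people with few votes sit near 0.5.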
Can't we all just act like adults and learn to deal with negative comments - possibly even learn from them? Do we really need to be protected from snarky comments?
This reminds me of a comment thread I saw earlier about people getting bent out of shape by cell phone usage in restaurants! I find lots of things annoying - loud trucks/buses, cyclists that ignore road signs, rude people, etc. - but I am not going to complain and try to stop buses, or give discounts to people who drive in order to discourage cycling. People need to start learning to cope with the real world and not expect 'the system' to protect us from ourselves. It's embarrassing to be part of a community that needs a voting bias that only encourages positive comments.
Actually I find that harsh comments are more common among children and noobs than adults and experts. A random fan watching a football game in a bar is far more likely to use harsh language about a player who misses a pass than another professional football player would.
What's happened here is itself evidence of this trend. The comments haven't gotten more negative because we were all a bunch of fools initially, and now the real experts have arrived; rather the opposite.
Half of the benefit would be from filtering out people who aren't really invested in the community. The other half would be from making people think about the magnitude of their own contributions before they criticize someone else's.
I know how much I've learned and how grateful I am for having had the chance to learn from this community; I'm sure there are many others who feel the same.
So, to the community as a whole: thank you for the chance to learn, until I've mustered up the courage to take the plunge.
While we can probably mostly agree that sniping (especially personal attacks) is rotten, I suspect useful insights that are critical in nature often get thrown in with the dirty laundry and labelled "negative". "Awesome site dude!! SO happy for you :) :)" and a whole bunch of potentially fake, shiny superlatives can also become a turnoff.
Quite likely this is a minority viewpoint.
I still come here for the articles and I occasionally read the comments but I don't participate much. I feel the same way as the original poster. I see someone post something genuine and then I see someone else rip it apart in a condescending way.
I used to come to HN and on the other side of the usernames, I pictured the leaders in our field, the pioneers of the next steps of our information age, and people who I wanted to emulate.
It's not this way for me any more.
A particular example of a recent top-voted comment comes to mind (I'm not going to identify it because my purpose is not to name and shame), which was full of perfectly legitimate and constructive criticisms, but the tone of the comment was, well, mocking.
And what's the point of that? Making legitimate criticisms and dressing them in derisive language is a great way to raise an army of contempt against someone, but why?
If you want the target of your criticism to take the criticism into account, you need to express it civilly. You don't need to find positive things to say, or even claim that you think there's a kernel of good idea to be found (although genuine compliments are great too, of course). All you have to do is be polite and respectful. Imagine that the creator of what you're criticizing has just shown it to you excitedly. Read your comment, and decide if what you're writing is something that an acceptable member of society would say to that person face to face. Not the content, but the actual words.
And if you don't want the target of the criticism to listen to you? Then maybe you should just keep your thoughts to yourself.
I can't remove the comment now, but I can consider making my criticism more constructive and positive. With the email, my concerns were legitimate, but I could have just expressed what those concerns were, not satirized the original email.
There's no denying that writing a smackdown is fun. We just have to remember that it's not harmless fun.
Today, Groupon stock is worthless, and the company is doomed. Should the people in that thread be penalized for being right, but mean?
Vision is not "how is this guaranteed to fail?" but "how could it possibly succeed despite the odds?"
Which voice do you want to bring to this community?
However, around the time of its IPO it was receiving so much undeserved hype that in comparison to THOSE lofty goals it appears to have failed. HN readers at the time were pointing out the hype, and I think that is a GOOD thing.
Nevertheless, I agree that the tone of the community could be better and I will strive to adjust my own comments to reflect that.
I can see they are skeptical, but I would hardly describe them as "mean".
Now, if that were a 'Show HN' post I could see where people may want the comments to offer constructive feedback, but that thread seems like a matter-of-fact discussion of the business model of the business.
The problem that HN faces is that because of the voting system those drive-by valueless negative comments are getting amplified all out of scale relative to their value.
Imagine being in a restaurant. It's not so bad if there is someone in the far corner having an argument that you can just barely overhear or something a few tables over having a cell phone conversation. However, if the volume of those conversations were amplified such that you could barely continue conversing with the person across the table from you then it starts to become a very serious issue, and if that became the norm at that restaurant you'd probably stop going there as often.
Some comments are on-topic and helpful, some are on-topic and humorous. Some are off-topic and also helpful, some are off-topic and worthless. And so on, and so on.
With only one direction we can "push" a comment (and only one direction with which we can view comments, up & down), there is not all that much filtering that can go on.
Sometimes I enjoy reading the "funny" responses. What if this was a separate comment axis that I could view? Others may not be interested in such a thing, and could ignore it.
In any case, the format of HN doesn't lend itself well to people from all over the world bringing their own views into a single topic. I don't know of a good way to solve that easily, other than being ruthless about removing content that doesn't belong.
If YC News doesn't encourage budding entrepreneurs to throw themselves at long-shots, and into the VC startup culture, then it doesn't serve pg's interests.
Being able to function in the face of people making negative comments about you on the Internet is one of the basic survival skills of the 21st century.
> People need to start learning to cope with the real world and not expecting 'the system' to protect us from ourselves.
This perspective directly contradicts many HN'ers political leanings.
While this is true, it does not justify overly negative comments. I'm reminded of hecklers who yell at stand-up comedians that they should be able to deal with heckling if they're on stage; they might be correct, but they're still being a jackass.
To be very clear so as to prevent conspiracy theories, this is something that we never did on reddit because we didn't want to put that much power into the hands of the older users. However, HN has a different philosophy and it might make sense here.
To say it one more time, reddit doesn't do this and never did.
Google is the best example to learn from. Search is a misnomer, Google is about ranking. They've put many PhD-centuries of effort into deciding which of the three million matches to put first (ranking). Choosing the three million matches (search) isn't where Google adds its value.
Lots of lessons have been learned there, I'm sure HN can tap into that pool of knowledge by opening up data to the right people.
HN is an awesome resource. I often read the comments before the articles because I expect them to be more useful.
It's completely worthwhile to invest a lot of effort in maintaining that greatness as it scales.
Sorry I couldn't help myself.
EDIT: (total rewrite)
The cool thing about using metrics and machine learning is that the results speak for themselves, and opinions and factions are less important. Instead of guessing what matters in advance, you get to discover what matters after the fact.
On the other hand, either of these solutions seems destined to increase the inevitable echo-chamber herd mentality groupthink that already tries to creep into any community.
I often find the news here, so I'm very unlikely to submit something. And I only comment when I feel like I'm going to add value to the community or discussion, which isn't that frequently. So I have a clear judgement on when I contribute, it's not much or often, but it tends to present some value.
I've had an account for 2.5 years, and used the service for even longer, yet my karma is still less than 300.
I do frequently vote up good articles about programming, hacking, startups, and such things. Those are the things I've been interested in since I was a young boy, and they're my daily life. I've been a freelancing coder (yes, I see that term as positive, same as hacking) since '96 and have tried my own startups over and over since then. That said, I value good, constructive, and sometimes also funny comments, and being able to upvote them, I frequently do so.
Are my upvotes even counted? I don't know, I don't really care. But I admit it gives a bit of a strange feeling to be "less worth" in a community where you participate - we can argue about that - daily.
Nevermind, just wanted to say.
Small addendum: After submitting this comment I remembered that I wanted to add that I'm from Europe, and people who aren't from the States, being in a different timezone, often find themselves commenting on topics that have already been discussed (this also happens a lot on Reddit). Maybe this is also part of why I don't comment that much.
I see the necessity of moderating; however, there have to be other considerations, e.g. whether a number of lower-karma users can override high-karma users' decisions.
Of course, this forum isn't a public commons so the moderators can ban whoever they want, but there are some cases where it seems fairly arbitrary.
(By the way I would summarize why I disagree now that I've written the above as simply that it seems like a "last man over the bridge" advantage. I mean it's entirely possible that some distinguished person whose vote should count would be greatly disadvantaged by the weighting system you propose.)
I hate to complicate things, but there doesn't appear to be any way to solve this issue without the ability to indicate more clearly what you feel is positive or negative about a comment. One up/down button simply can't cover everything.
Also, you can't count on there being proportional amounts of activity among the earlier users currently. Any cross section of the earlier user group could have stopped being active on the site.
There are lots of ways to spread the work out. A number of the people here have backgrounds in search (which is really all about ranking and not much about searching). Even a closed Kaggle competition could yield some really interesting algorithms (Anthony is a great guy and would be happy to discuss it I'm sure. Note that a closed competition is done under NDA by only the most qualified participants).
Spreading it out might be fun; I hope you'll consider it. Lots of smart guys love HN and would probably want to help.
Why don't you automate this process? All of the people you upvote have the most credibility, the posts they upvote have a little less, and people who upvote those posts gain a little credibility too, and so on. Tuned right, voting itself becomes a function of whatever the base of the community is.
New users -- depending on what kind of posts they upvote and who upvotes their posts -- must be able to gain credibility quickly, not just by upvoting the top posts, but picking winning and losing comments in the long term.
I think a method like this is necessary, as I really don't think you could determine post quality with an algorithm itself. You can't automate that type of intelligence.
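The hop-by-hop decay described above might look something like this (the graph representation, decay factor, and hop count are all assumptions made for illustration):

```python
# A rough sketch of the credibility-propagation idea: start from the
# authors a seed user upvotes directly, then let credibility decay
# with each hop through the "who upvotes whom" graph, so people two
# or three steps out gain a little credibility too.

def propagate_credibility(seed_authors, upvoted_by, decay=0.5, hops=3):
    """seed_authors: users whose posts the seed trusts directly.
    upvoted_by: dict of author -> set of users who upvote that
    author's posts. Each hop passes on `decay` of the credibility."""
    cred = {u: 1.0 for u in seed_authors}
    frontier = set(seed_authors)
    for _ in range(hops):
        next_frontier = set()
        for author in frontier:
            passed = cred[author] * decay
            for voter in upvoted_by.get(author, ()):
                # Keep the best credibility seen for each voter.
                if passed > cred.get(voter, 0.0):
                    cred[voter] = passed
                    next_frontier.add(voter)
        frontier = next_frontier
    return cred
```

Tuned right, new users can climb quickly by being upvoted by (or voting like) already-credible people, which matches the "picking winners in the long term" point above.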
Given HN's current latency, though, you'd need a few orders of magnitude more resources.
1. Postings that are more advertising than informational, which I briefly commented on here: http://news.ycombinator.com/item?id=4350447; and
2. Members who submit their own postings.
Briefly, while I appreciate the sharing of information, any content that becomes an obvious sales pitch doesn't belong on this site. When I read a posting from this site, I want to learn and be informed - I do not want to be sold on something. And - there may be disagreement with my second point - I do not find any significant value when a member submits their own posting, because I feel they're trying to sell me on something. (Please understand this does not apply to "Show HN" - just to articles and blog postings.)
Overall, I really appreciate members asking for feedback on their projects. They should be challenged and not degraded or insulted. This is a great community with great insights, so it's up to everyone to be "good neighbors".
It seems plausible that lazy voting is identifiable as a user-specific signal, and that lazy voting (contrary to the welcome message "don't be rude or dumb in comment threads") is a big part of the reason why comments that don't fit the community culture stay so visible. May I suggest that it may also be possible to identify good comment voting (upvotes on comments that link to reliable sources to answer questions, or that say "I'm sorry" or "thank you" as part of a comment) as a user signal? It will be interesting to see what happens to comment scores over time if more overt thoughtfulness (in all senses of the word "thoughtfulness") is upvoted often and comment rudeness or stupidity are (net) downvoted as the software tweaks are implemented.
P.S. I have been puzzled by a downvote and upvote pattern I seem to be observing in this thread itself, with of course only my own score on my own comment being DIRECTLY observable to me as I revisit this thread from time to time.
(a) more recent arrivals don't have as much of whatever quality distinguished the original members
Try to make it obvious who distinguished members are, so people can emulate them. Colour-coded usernames based on age/reputation, starred 'high quality' comments, reputable upvoters (quora-style).
(b) the large size of the group makes people behave worse, because there is more anonymity in a larger group.
Try to reduce anonymity. Tiny little grey names are fine for 100 users, but after a point they all look the same. We're visual animals - maybe we need recognizable avatars, symbolic or otherwise, so we recognize the people we're replying to. Even allowing users to choose a single unicode symbol in addition to their username might be enough.
I get this, but wouldn't this lead to making HN more conformist than it sometimes already is? "Either you agree with the majority of us about X, or..."
I've actually been working on that problem with a bot that assists in moderating a subreddit using a text classifier. It's tricky, and needs more work, but it is not impossible.
As you might expect, a subreddit about a politician with (in)famously devoted followers attracts its share of strife. It can be difficult to distinguish legitimate arguments from flamebait, and there's no shortage of people eager to take any bait offered. I should note that I'm not actively running the moderation bot at the moment.
When one of the criteria for trolling is the hidden intent of the person writing, there's no mechanical process that can reliably detect trolling, short of looking inside the writer's head.
Improving the quality of discourse is a subject close to my heart, and I'd never thought about how robots could help us act more human toward each other.
Note especially section 6, "Personalized PageRank", where they replace the random jump with a jump back to some basis pages. Your own upvotes/downvotes probably make a pretty good set of reference points, so a PageRank-like algorithm could be heavily weighted according to your own judgement.
To be totally clear, I'm not suggesting everyone get personalized results, but rather that HN is the same for everyone, one view "personalized" according to your judgment (and obviously that propagates through to include the judgement of the people you respect, and they respect, etc).
Yes, this is a handwave, and Google has gone light-years beyond this early paper. But intuitively it may help you think about the problem. It's really going to be some iterative algorithm that flows through all the upvotes and downvotes to determine the authority of each vote, perhaps in a similar way to how the authority of each link on the web can be calculated.
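To make the idea concrete, here's a minimal sketch of personalized PageRank over a hypothetical vote graph, with the random jump replaced by a jump back to a basis set (say, the users you've personally upvoted). All names, edges, and parameters here are invented for illustration:

```python
# A minimal personalized-PageRank sketch over a hypothetical "who upvotes whom"
# graph. Nodes are users; an edge (u, v) means u upvoted a comment by v.
# Instead of jumping to a random node, the teleport step jumps back to a
# "basis" set -- here, users you personally trust.

def personalized_pagerank(edges, basis, damping=0.85, iters=50):
    nodes = {u for e in edges for u in e}
    out = {u: [] for u in nodes}
    for u, v in edges:
        out[u].append(v)
    rank = {u: 1.0 / len(nodes) for u in nodes}
    teleport = {u: (1.0 / len(basis) if u in basis else 0.0) for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - damping) * teleport[u] for u in nodes}
        for u in nodes:
            if out[u]:
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:  # dangling node: redistribute its mass to the basis set
                for v in nodes:
                    nxt[v] += damping * rank[u] * teleport[v]
        rank = nxt
    return rank

edges = [("you", "alice"), ("alice", "bob"), ("carol", "bob"), ("bob", "alice")]
scores = personalized_pagerank(edges, basis={"alice"})
```

Users with no trust flowing to them (here "carol") end up with negligible authority, while the basis user and those she endorses accumulate it.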
I've noticed a submission might get a huge number of upvotes while the highest-voted comment argues against the article/post submitted.
I understand there are a few things that could be going on: you can learn from bad articles/posts; those who like the article/post find it unfair to downvote just because they disagree on a contentious issue; or it becomes a contest over which side can out-vote the other (those against vs. those in favour of the submission).
Is there a commonly held view on why this happens? Sometimes I click upvote to save an article I've not read yet, not necessarily because it is a great or good article. However, I also sometimes click upvote because there is a comment inside it that I find very enlightening. Would it help at all to be able to save comments like we 'save' submitted articles? My apologies if this is too unrelated and meant to go somewhere else (email). Please let me know if this is the case. Thank you!
1) Stories become popular despite negative sentiment in comments, or stated another way, some of the most popular stories on the front page of HN have little support in the comments. This indicates that the people who up-vote the stories in question are either not actually in support of the story's premise, or choose to remain silent in the discussion.
2) People are using up-votes as a way to bookmark stories, since that's the only flagging mechanism available to them.
The root of the first problem as I see it is that it is too easy to up-vote stories. This creates a low threshold for groups of organized users, i.e. friends of the poster, to get stories on the front page. Perhaps limiting the ability to up-vote stories in particular would have a positive impact here.
The second problem has an extremely simple solution: add a way for users to flag stories without up-voting them. Maybe a star icon that adds a story to their 'saved' directory.
Whoa... mind blown. I never noticed that feature before. Thank you Wise Weasel. (Not that I would upvote a post just to save it, but it is nice to know I have a list of all articles I've up-voted.)
+1 on having a "save" feature. However, I would shy away from using the term "flag" as a way to get something into the saved list. Flagging a post is already available and has a very different meaning.
It basically takes strong leadership, preferably from one person or a (very) small group of people.
Look at GNOME these days. They are rudderless and mostly because they have absolutely NO leadership either from community people or RedHat (GNOME is basically RedHat). They are dying slowly.
Look at the Kernel, though. Linus controls it, sets the tone (often harsh!) and it hums along.
Look at Ubuntu. People may hate the direction, but Mark runs that show, and the people know what they are doing and where they are going.
IMO, the idea that the community will govern itself has never held up. Communities are formed around ideas or interests, but are led by people (leaders).
"-1, poor tone"
"-1, factually incorrect"
I do have a concern, though. If this system is implemented, doesn't that mean that the "new kids" will also have access to it and may use it to exacerbate HN's "new kids" problem?
What about modifying the existing HN guidelines to include comments and promote the guidelines more? For example, cite a downvote with an appropriate quote from the guidelines.
There is no irony as to why you were voted down. It was entirely justified.
What I was proposing in my earlier post was exactly as you say: explain the downvote.
Maybe let everyone downvote, but have tiny radio buttons with reasons next to them. :-)
I also disagree with the idea of a karma-driven downvote system. However, I wrote the above with the assumption that those who can downvote know HN values and enforce them. As you pointed out, yes, the karma-downvote thing doesn't quite accomplish its goal (assuming the goal is to allow senior members to act as enforcers). It should change. However, that wasn't quite the point of my post.
My original point is: if you're gonna talk smack, gimme a damn good reason why.
Maybe enforce a stealth downvote. This would be a case where the downvote button is visible, but you use heuristics and probabilistic models to determine whether to actually accept the user's vote, and if so, what weight to assign it.
My gut feeling is that the voting-system tweak you describe might make votes from users like me not count. Granted, I don't comment nor contribute directly by providing links.
I have various reasons for my silence, like the fact that English is not my first language (nor even my second, so it takes me forever to compose something in English, and I still run the risk of my comment turning into grammar crit, with the point I'm trying to make lost in that noise), that a lot of people on the site are intimidatingly intelligent, that my points have already been made, etc., etc.
But. I do vote. Due to my non-existent karma I cannot downvote, only upvote. It's limited power, but I try to use it wisely. I cannot downvote the horrible negativity I saw, for example, in the wikipedia design thread (a submission I found fascinating even though it was flawed, as it was lovely to see how their thought process worked as they redesigned the visuals), but I did upvote one or two comments that I thought were thoughtful and added to the discussion. I would like to believe that I made some difference to the tone of the thread, even if it's in a very limited way.
Those that see the world as all shades of gray are not generally very vocal - you don't see a vocal middle ground in most arguments. But this doesn't mean we don't exist. And I don't know how "negative" comments will be classified, and I think people like me run a very real risk of silently becoming false positives when we stupidly upvote things using our own dodgy rules.
So - please consider the unexpected side effects such technical things introduce. I guess what I'm saying is that I'm already put off by some of the unwritten rules I perceive on this site; if the voting system is also rigged to make some people silently powerless, I don't see how I can be useful at all. It is of course your prerogative to decide that my type, the silent-but-voting user, is unwanted, and if so, that's fine. But I hope I contributed something from time to time.
Making it invite only would have (IMHO) improved things a massive amount. Have people do a test to gain entry.
Any community that is free and open will just deteriorate over time.
I wouldn't mind a small membership fee which would confer voting rights.
'Nasty' comments can be difficult for humans to spot, let alone machines. A lot of important information is lost when we go to a strictly text only medium. If we're going with the group to decide what's good and what isn't, then we run the risk of group think as well. So I am a little cautious about endorsing any automatic filtering of content in this way.
People who are new also have the ability to upvote but not downvote. I think this might be causing a disproportionate amount of upvoting, which may also pertain to 'nasty' comments. Perhaps disabling upvoting for new people for a certain period and/or until they reach a certain karma score might alleviate this a little?
User-account reputation karma.
Based on the voting system that Slashdot had, but applied to the character of the account rather than the comments the account made. Here is how it works:
You have your comment up down arrows, then username, then a drop down to mark the character of the account at the time they made the post.
So you could see a comment by samstave and upvote/downvote it, and then also tag my character as [insightful|helpful|abrasive|troll] etc. (whatever the tags are you want to make)
These tags affect the overall account. So if I were tagged by a ton of people as a troll account the more troll votes I get, the worse my account experience is... And eventually too many could lead to hellban.
On the other end, having a very high positive account character would make for a better experience - see stories earlier, be able to affect other accounts in some way, etc., or just a leaderboard of who is the most technical, helpful, insightful, innovative, etc...
The easiest way I can think of to do this is to use a Flesch-Kincaid grade level test, which uses sentence length and word length to estimate reading level. A quick and dirty calculation for any post might look like: (character count / word count) + (word count / sentence count) = Writing Level.
The problem, of course, is that it's only a tiny subset of users that actually comment with any frequency so you can quickly kill the democratic appeal of the site, but I'm frankly less concerned that my vote is actually counted and more concerned with seeing quality content on the front page.
* wiki - http://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabil...
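For illustration, the quick-and-dirty formula above takes only a few lines. Note this is just the proxy described, not the real Flesch-Kincaid grade level, which also requires syllable counts:

```python
# "Writing level" per the quick-and-dirty formula above:
# (character count / word count) + (word count / sentence count),
# i.e. average word length plus average sentence length.

import re

def writing_level(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words or not sentences:
        return 0.0
    chars = sum(len(w) for w in words)
    return chars / len(words) + len(words) / len(sentences)

simple = writing_level("See spot run. Run spot run.")
dense = writing_level(
    "Notwithstanding considerable deliberation, the committee ultimately demurred."
)
```

Short words in short sentences score low; long words in long sentences score high, which is roughly the signal the parent comment is after.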
I would think we could modify the scoring system a bit where common English words are penalized, and uncommon words are weighted a bit more. The only problem I see is, if someone plans to post code, that's going to wreck their score. Another thing would be that most of our programming names/terminologies contain very few letters. So, an exclusion/inclusion list would definitely be needed.
One solution to try could be to give more time "up there" to comments from commenters with a high karma (only considering the karma from comments, not that from stories, which IMHO is a completely different signal).
I don't want to offer any particular suggestion because I don't have any data to support anything. A hunch, though, based on your idea: if someone wants to downvote, a comment must be provided.
Absolutely, the more the original community is diluted, the more anonymous people feel, and the less restricted they are by perceived norms (as there are none). You can see this in extreme form on some forums like Kuro5hin (remember that?) which have been abandoned to the trolls and are now overrun.
The problem with most systems to control quality is defining what is good, and what is not good. That's not a simple problem and I'm not convinced you'd ever be able to get a crowd to decide on it satisfactorily - there's a reason that mass-entertainment panders to the lowest common denominator, and a crowd's votes are likely to trend the same way and any automated system is subject to gaming and the ignorance of crowds as soon as you let a lot of people have input, however small.
Another solution used by sites like reddit or stackoverflow is to segment the communities into small enough groups to be self-policing.
Another approach I wondered about recently was along the lines of 'A plan for Spam' - it'd be interesting to use the massive comment base of HN to build up some sort of corpus of good and bad comments and apply Bayesian filtering to comments. You could use this to:
- Set an initial point score for comments based on their content
- Weight comment votes from users who consistently scored highly
You would have to seed the corpus of course, and let it learn from trusted users' votes, but otherwise this sort of system would in theory continue to work as long as there were enough good comments being posted.
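A toy sketch of such a Bayesian comment filter, with a tiny hand-seeded corpus standing in for the real one that would be learned from trusted users' votes. All example comments and words here are invented:

```python
# A toy Bayesian comment filter in the "A Plan for Spam" style.
# The seed corpus below is purely illustrative; a real system would
# grow it from trusted users' votes.

import math
from collections import Counter

good = ["thanks for the detailed source", "interesting benchmark results here"]
bad = ["what an idiot", "this is dumb garbage"]

def tokens(text):
    return text.lower().split()

good_counts = Counter(w for c in good for w in tokens(c))
bad_counts = Counter(w for c in bad for w in tokens(c))
vocab = set(good_counts) | set(bad_counts)

def log_odds_good(comment):
    # log P(words|good) - log P(words|bad), with add-one smoothing;
    # positive means the comment looks more like the "good" corpus.
    g_total, b_total = sum(good_counts.values()), sum(bad_counts.values())
    score = 0.0
    for w in tokens(comment):
        pg = (good_counts[w] + 1) / (g_total + len(vocab))
        pb = (bad_counts[w] + 1) / (b_total + len(vocab))
        score += math.log(pg / pb)
    return score

looks_good = log_odds_good("thanks for the source")
looks_bad = log_odds_good("dumb idiot")
```

The score could seed a comment's initial points, and votes that consistently agree with well-scored comments could then be weighted up, as suggested above.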
Clay Shirky: “The downside of going for size and scale above all else is that the dense, interconnected pattern that drives group conversation and collaboration isn’t supportable at any large scale. Less is different — small groups of people can engage in kinds of interaction that large groups can’t.”
David Foster Wallace: “TV is not vulgar and prurient and dumb because the people who compose the audience are vulgar and dumb. Television is the way it is simply because people tend to be extremely similar in their vulgar and prurient and dumb interests and wildly different in their refined and aesthetic and noble interests.”
David Foster Wallace: “We should keep in mind that vulgar has many dictionary definitions and that only a couple of these have to do with lewdness or bad taste. At root, vulgar just means popular on a mass scale. It is the semantic opposite of pretentious or snobby. It is humility with a comb-over. It is Nielsen ratings and Barnum’s axiom and the real bottom line. It is big, big business.”
Communicate the influencers more... most people follow leaders, and it would help set the tone for new members.
HN's guidelines should be turned into LightBox style documentation. Put it where it matters right next to the input boxes (and in the case of dupe-checking do it by AJAX and communicate it inline).
Guidelines should encourage the positive, and where something is negative then it should be pointed out that they should attack the argument and not the person.
Have a very small set of very clear rules that are enforced transparently, openly and brutally. The lines will only trip up the worst offenders. If you move these lines, only do so because you've listened to the community and they believe the lines are wrong. I use a system on one of my sites in which I track how much a thread is read, and if a post is flagged by more than a certain number of people then I know the community is saying something got through and the rules either aren't being enforced or might not be right.
Mostly: This is not a technical problem. It's a people problem.
There may be some tools you can build to help, but it is people who will use those tools and it is they that make them effective. Even then, there's only so much you can do without fully empowering others.
One ends up realising that a forum is a microcosm of society, and the owner/leader either chooses a form of government in which the citizens will be happiest and yet the aims of the government are achieved, or the owner/leader delays such a decision whilst the population grows and increasingly friction occurs.
I don't think this phenomenon is just on Hacker News. I believe the recent negativity shift is an actual change in the startup community's perceptions of the world.
When things are "hot" in our industry, and seemingly trivial companies are exiting for billions, the people who are still struggling start to develop a somewhat bad attitude. I see it all the time particularly from people who are working on "serious stuff" and then lash out when they see Instagram's success (which they've been saying for years: "sure, but what's their revenue model??").
I don't condone it and honestly think it's genuinely strange. Now is as amazing a time as any to be founding a startup. Success of others (no matter how trivial) only makes things better.
Remember the BBSes and forums of the old days, when you didn't have points and rankings? Yeah. Those forums often degenerated too. Points didn't seem to help or matter. But moderated forums worked well. If you want high quality, you'll need a walled garden. But walled gardens rarely work, and they stifle creativity. However, if you think a point system is a substitute for moderated comments, well, the results speak for themselves.
So, points and voting be gone, maybe? I doubt the quality would change much. Even PG admits that it's not the voting per se, but the entrance of a larger volume of people.
1. Why not assign karma by running something like pagerank over the vote graph? I think giving overwhelming weight to high karma old-timers would make HN more like them, which would be an improvement, right?
2. What about assigning karma not by your comments, but by the responses to them?
I figure karma-from-replies would shift the conversation to "what should we be asking about this", whereas karma-from-comments incentivizes "what can I say that the masses will like".
Given commenting has a higher work threshold than voting, it might be "manipulated" less. Gaming it would be more obvious. And subjectively, posts that are questions seem to lead to higher quality discussion (more genuine?). Maybe you'd have to put more work into getting users to not feed trolls and assholes, and ask/reply more to excellent comments. Not sure if practical...
Would love to hear anyone's thoughts on this.
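As a rough illustration of karma-from-replies: an author earns karma in proportion to the karma of the people who bothered to reply to them. The data shapes, the 0.1 transfer fraction, and the fixed number of rounds are all invented for the sketch:

```python
# A rough sketch of "karma from replies": replying transfers a fraction
# of the replier's karma to the parent comment's author. Iterating a few
# rounds lets credit flow up reply chains.

def karma_from_replies(comments, base_karma, rounds=3):
    # comments: list of (author, parent_author); parent_author is None
    # for top-level comments.
    karma = dict(base_karma)
    for _ in range(rounds):
        nxt = dict(base_karma)
        for author, parent in comments:
            if parent is not None:
                # replying endorses the parent with 10% of your karma
                nxt[parent] = nxt.get(parent, 0) + 0.1 * karma.get(author, 0)
        karma = nxt
    return karma

comments = [("alice", None), ("bob", "alice"), ("carol", "alice"), ("dave", "bob")]
karma = karma_from_replies(
    comments, {"alice": 10, "bob": 10, "carol": 10, "dave": 10}
)
```

Here alice, who drew two replies (one from someone who himself drew a reply), ends up with the most karma, while users nobody replies to stay at their base score.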
Here are my ideas:
(A) Not showing someone their karma score ever, unless it is negative.
(B) Leaving a line between the comment field and the reply button saying, "Stay constructive."
(C) That line of text could link to the guidelines.
(D) Detect if someone says 'idiot' and then change the line to, "You said 'idiot', do you still want to post?"
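Idea (D) could be sketched like this. The word list is purely illustrative, and word-boundary matching keeps innocent words like "idiomatic" from tripping the filter:

```python
# A minimal sketch of flagging -- but not rejecting -- posts that contain
# listed words. The \b word boundaries avoid matching inside other words.

import re

FLAGGED = ["idiot", "moron", "stupid"]  # illustrative list only
PATTERN = re.compile(r"\b(" + "|".join(FLAGGED) + r")\b", re.IGNORECASE)

def confirm_message(comment):
    m = PATTERN.search(comment)
    if m:
        return f"You said '{m.group(0).lower()}', do you still want to post?"
    return None  # no confirmation step needed

asks = confirm_message("What an idiot.")          # triggers a confirmation
silent = confirm_message("That's idiomatic Python.")  # no match
```

The key design choice is that the post is still allowed after confirmation, so false positives cost one extra click rather than a rejected comment.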
I like this idea. In particular, I like the idea of analyzing content - but not programmatically rejecting the post completely. Another forum that I frequent tries to keep things civil by banning certain words. Sounds good on paper, but then you try to use the phrase "knee-jerk reaction" and the system rejects your post because it has "jerk" in it. D'oh.
Pointing out the possible (but not definite) incivility, but allowing the post anyway (after a confirmation step) could be a nice way to remind people to be polite, but without being overly draconian.
I think the principle that (give or take passing a karma threshold) all contributors are equal and one vote is one vote has probably outlived its usefulness.
Perhaps up and/or down votes from established, positive contributors should simply carry more weight.
Another possibility might be to give a handful of extremely highly regarded contributors a third option to super-downvote a comment by a new or low karma contributor, such that just a couple of those (or even just one) would cause the offending comment to be killed immediately. I'm thinking that rapidly killing off things like Redditesque extended joke threads that obviously don't suit the culture on HN might help, by the "broken windows" theory.
I'd rather have a system where, instead of a simple up/down vote, you have an elaborate system of 'Offtopic', 'Poor taste', 'Incorrect', as mentioned in a comment somewhere above. But I'd also like to add the positive halves of these, such as 'Insightful', and maybe one or two other labels. The idea being that users can select only one of the total, and each label should secretly carry a different value. So maybe flagging a comment as 'Incorrect' isn't as penalizing as 'Off-topic' or 'Poor taste', and 'Insightful' might carry a different positive value than, say, 'Excellent advice'.
With a lot of options, I don't think newbies would click randomly on any button without knowing the meaning and context of each one. Anyway, the buttons need not have full names; short labels like P/OT/I/EA/X may be enough. Of course, this automatically requires a way to retract your vote if you accidentally click the wrong button, but it should still allow only one option to be used.
While I don't think that HN has hit this level yet, one solution that I've read about involved people who trolled or posted general nastiness. When people had reached a certain threshold or pissed off the wrong high-level mod, they got "ghosted". What this meant was that people who were "alive" or considered "living" could see "living" contributors' posts, but could not see "ghost" posts. Ghosts, however, could see other ghosts' posts, so they could, for what it was worth, writhe in their own debauchery and piss each other off, but not the pleasant and constructive members. I thought it was an interesting solution and apparently it worked very well.
Problem is it's not entirely clear what some people have done to get hellbanned sometimes.. I'm not sure what the requirements for getting it reversed are, either. I've seen some very constructive comments in that mess before..
The thing about systems is that we tend to think of them as if people are not a part of them. But people are a part of them.
One thought I would suggest is that it might be worth giving people with very high karma (maybe high enough that only a few people have this) the ability to post highlighted reprimands, with an automatic banning of any user who gets more than, say, 5 of these in a month. These could then be reviewed if necessary.
One thing I would think about is that rather than having a system where computers try to control people, have a system which empowers the community leaders to set standards of interaction.
1) Social pressure: the community must evolve in such a way that its members as a whole frown upon negativity. Hard, because it takes time and can go wrong. Or,
2) Moderation: a group of members you trust is tasked with actively deleting useless comments and threads and banning useless contributors. A zero-tolerance policy for stupidity. Is that elitist? Maybe. But an elite community is hard to grow when idiots are tolerated.
There is literally no way to engineer away jerks. A social problem requires a social solution.
Interesting - also, if it is as you suspect and voting is easier and thus part of the problem, perhaps adding more friction to the process of voting would be useful. Something like having to spend karma to vote, or having a limited number of votes per day to spend.
Such an approach should aim to gradually steer general user behavior toward the community core. Maybe add a tab for this.
The thinking is based on the premise that the site still has a lot of its core users and community values have not changed over time.
I can see it bubbling up quality content without punishing promising new members.
Stackoverflow model could also work.
My 2 Cents.
Optionally mark your comments as private for grammar/spelling/technical errors, way off topic comments and personal requests.
I think graying out works for comments scoring 0 or maybe -1, but beyond that, might as well fold/collapse them.
I also think something like xkcd's Robot9000 would be useful to prevent some clutter like "me too!", "awesome!", "Thanks"
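A minimal Robot9000-style duplicate filter might look like this. The normalization here (lowercasing, stripping punctuation) is a guess at the minimum useful version:

```python
# A small Robot9000-style filter: reject comments whose normalized text
# has already been posted, catching clutter like "me too!" and "Me too".

import re

seen = set()

def allow(comment):
    key = re.sub(r"[^a-z0-9 ]", "", comment.lower()).strip()
    if key in seen:
        return False  # duplicate after normalization
    seen.add(key)
    return True

first = allow("Me too!")   # first occurrence, allowed
dup = allow("me too")      # same text modulo case/punctuation, blocked
```

The original Robot9000 muted repeat offenders rather than silently dropping posts; either policy could sit on top of this check.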
The advantage of identifying good users over identifying bad ones is that recognition of goodness can propagate: if you believe A to be a good user and A believes B to be a good user, that is some indication to you that B is a good user. Replace "good" with "bad" and this no longer holds.
People who have been in the community longer and have good karma would have the power to vote +1|-1, while people who have less karma or have joined the community recently would be able to vote, say, +0.1|-0.1, with a lot of people in between (+0.3|-0.3, +0.5|-0.5, etc.)
It just makes sense that your upvotes/downvotes should have more voting power than the ones coming from, say, myself.
As far as "who is voting" goes: for argument's sake, say a member of SCOTUS joins HN and decides to cast a vote. Or a member of SCOTUS joins HN and after two years has low karma because they don't have much time on their hands. I'm not exactly sure why someone who has not been a member for a long time, or who doesn't have a high karma score, isn't to be trusted with voting, or why someone who decides to spend a lot of time on HN should be given more privileges.
(And no, this is not comparable to app.net. The money isn't sufficient, you need the moderators just as much. You need both.)
2) Rate every other voter against moderators' votes.
If a voter voted the same way as the moderator, increase that voter's weight in the future.
If a voter voted against the moderator's vote, decrease that voter's future weight.
3) For every comment, calculate a weighted upvote/downvote total based on voters' clicks and the weights calculated in #2.
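Steps 2 and 3 might be sketched like this, with purely illustrative step sizes and weight bounds:

```python
# Step 2: nudge a voter's weight up when they agree with a moderator's
# vote and down when they disagree, clamped to a sane range.
def update_weight(weight, agreed, step=0.1, lo=0.1, hi=2.0):
    weight += step if agreed else -step
    return max(lo, min(hi, weight))

# Step 3: a comment's score is the weighted sum of its votes,
# where votes are (voter, +1 or -1) pairs.
def comment_score(votes, weights):
    return sum(weights.get(voter, 1.0) * v for voter, v in votes)

weights = {"alice": 1.0, "bob": 1.0}
weights["alice"] = update_weight(weights["alice"], agreed=True)   # rises
weights["bob"] = update_weight(weights["bob"], agreed=False)      # falls
score = comment_score([("alice", 1), ("bob", -1)], weights)
```

Clamping the weight keeps a long streak of disagreement from zeroing someone out entirely, and keeps serial agreers from dominating.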
I think it's different from the existing "flag", or at least how I understand it: Currently, I'd only hit flag if something is deeply offensive / illegal / spam. And I think flag only exists on posts?
What's your definition of a "nasty comment"? How do you distinguish between a "nasty comment" and a comment that voices an unpopular opinion?
I understand what you're trying to accomplish, however I worry that a side effect will be the further silencing of opinions that go against the HN popular consensus.
PS: I hope
That way, the community knows what the values are, etc. To be honest, looking at HN I'm not sure where on the site to find these values right now, myself. I just try to be my own reasonable self :)
It seems that HN may be different things to different people, so perhaps allowing the presentation to be customized might put people at ease?
But the latter half of your comment intrigues me...
Delete insulting comments that cross the line. Trolling is like pornography. You know it when you see it.
After three offenses, delete that person's account. After 5, ban their IP address.
Let me explain. Currently everything has one score. So, give it a second score that also starts at 0. Instead of score "1 point" it would score "1 point (1 point at adult table)."
Then the adults at the adult table would immediately downvote anything that isn't adult (this only affects its score at the adult table) and could upvote highly reasoned, nuanced posts, and so forth. However, there are not so many people at the latter table. So while the former score might go from -2 to +100, the second score might leave most comments untouched (at 1), with a few downvoted and a few upvoted.
This way, you could have a children's table (current reddit) with its thousands of votes and good comments languishing at 0, and the adult table, where those same comments with a 0 normal score can have high votes, and so forth.
So, here on hackernews, you could have a "serious" or a "founder's" table. And you could have a snarky table for people that just like having everything picked apart - where, "correctly" (per policy) we (the serious table) would vote the snarky comments down.
So, to illustrate. Here is a snarky comment "Oh, I see. So it's like a coupon system where you insert yourself into a transaction and book the whole transaction as revenue, deducting the actual 'rest of the transaction' as cost. So even though you lose on every transaction, you make it up in volume! Hey, it worked for groupon, right? :)"
This starts at 0, 0 for the snarky and serious tables respectively. The serious table immediately gives it a downvote, so it is now at 0, -1, then another, 0, -2, and soon it's hidden here: 0, -3. Meanwhile the people sifting through the trash say, "hey, ZING!!" and promote it; it gets to 5, -3, then 10, -3, then - if this place degenerates into reddit - hundreds of points, -3 respectively.
The serious or adult table is not impacted. It's not even visible. Meanwhile the snarky guys can have their children's talk.
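The two-score idea above could be prototyped very simply; the class shape and the -3 hiding threshold are invented for illustration:

```python
# A toy version of the two-table scoring: every comment carries a general
# score and a separate "serious table" score, and each audience sorts and
# hides by its own column only.

class Comment:
    def __init__(self, text):
        self.text = text
        self.general = 0  # everyone's votes
        self.serious = 0  # votes cast at the serious/adult table

    def vote(self, delta, serious=False):
        if serious:
            self.serious += delta
        else:
            self.general += delta

snark = Comment("Hey, it worked for Groupon, right? :)")
for _ in range(3):
    snark.vote(-1, serious=True)   # the serious table downvotes it
for _ in range(10):
    snark.vote(+1)                 # the general audience loves the zing

# The serious view hides it; the general view promotes it.
hidden_at_serious_table = snark.serious <= -3
```

Because the two scores never interact, hundreds of general upvotes leave the serious view untouched, which is exactly the isolation the comment describes.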
(Here is the rest of my comment. I shifted the above to the top for more visibility.)
Like many others, I had a "rude introduction" to hackernews, not from real connections in the startup world (which I have) but from low-quality forums. In my case I regret even the choice of username, and would change it if I could (maybe marked to show that it's new). I wouldn't now shy away from using my real name or a close alias - something I never do online.
So, in my case, what would have mitigated my behavior is seeing where my traffic was coming from and treating me accordingly. I certainly wasn't typing the URL into the browser's address bar. Secondly, you could have a prominent button: "New to hackernews? Learn about this community, which is a more serious one than most online communities: a lot of personal connection is on the line." Then explain why, and what we get out of behaving the way we do.
Finally, as of now I can either upvote or downvote a nasty comment. There is no right answer. But you can change this in a minute. If the correct way to vote were defined as "downvote any negative comment not leavened by positivity - we need all the positive thinking we can get", then I would start voting correctly. It's like Wikipedia. I don't add what I know - I add what I can prove with a reference. I don't delete uncited folklore because it's not a good read - I do so because that's the correct action.
So, policies (as is the case with Wikipedia) help immensely.
It is too bad that more than a couple voting toggles would be too cumbersome; otherwise both ideas could co-exist as four sets of up and down arrows.