I've also found that clusters of downvotes can arrive at different points in a comment's lifecycle. There's no clear pattern, but I've had comments collect a burst of downvotes right after posting, then end up in clearly positive territory a few hours later. I think the limit on downvotes saved those comments: they probably would have reached double- or triple-digit negative scores while people were piling on, but the hard floor prevented an irretrievable downward spiral.
I also find myself far less concerned about people piling on downvotes, because the score might recover into positive territory (and if it doesn't, my karma isn't annihilated just because I happened to be in a bad mood at the moment I posted something).
I also think that not showing scores has helped a lot with breaking negative cycles (as well as breaking overly positive ones). It infuriated lots of old users when it was instituted, but from what I've observed, echo-chamber-style groupthink has broken down here, and "famous names" no longer get to the top of a comment tree just for posting an offhand remark, as they did when scores were shown. Almost immediately, the mix of people at the top of a topic's comments became more interesting and varied.
The way they “objectively” decided on the “quality” of posts, by which they judged the ‘before’ picture of a poster, was via machine-learning analysis of the keywords in the post in question. This implicitly assumes that their keyword analysis is a better judge of quality than the site’s explicit voting, which seems like a very poor assumption to me.
It’s quite plausible (to me) that the posts’ keywords in the negatively-voted posts looked okay to their algorithm while the content still turned out to be trolling or bullshit (hence the downvotes). It would be quite reasonable to assume that posts full of trolling or bullshit were by posters who were inclined to future trolling/bullshit posts.
I’d be very interested to see what results they’d get if they ran the clock in reverse. I.e. pulled some posts of similar “quality” based on their metric but different voted scores, and then looked at the several posts before that, from those posters.
The problem gets worse when simple disagreement gets overloaded into the same downvoting that's meant to control destructive behavior. That sends the message that disagreement is punishable. Further, with the fading/de-ranking/hiding that's applied to low-scoring posts on HN and similar sites, it even sends the message that unpopular views should be censored.
My theory – which I've not yet convinced a social site to try – is that two separate axes of voting could help. The first is whether a post is valuable or destructive; the second is whether the reader agrees or disagrees.
In that way, the natural tendency to want to express an opinion with a single click, or 'win' a poll of popularity, can be captured and channeled into the agree/disagree measure, without the morale- and behavior-spoiling implications that come with the visibility-penalty/muzzling of "that's destructive" downvotes.
It works like so: There are two buttons: "Like" and "Wrong". If you like the comment, click Like. If you think it's wrong, click Wrong (e.g. because of factual errors or because you disagree). — It's not possible to click any "Don't like" button however, or "Correct" button.
The Wrong button is not meant as a downvote. It doesn't affect comment sort order; it simply results in a warning that "Many people think the comment is wrong, please see replies for details". (Shown if many people think the comment is wrong, relative to how many people like it.)
This demo page uses the Like-Wrong voting system:
However, I don't see 'like' and 'wrong' as opposites, nor as particularly useful categories.
The vagueness of 'like' has done wonders for Facebook, where it may be the right choice, but in more purposeful discussions, it conflates all sorts of different signals. Do you like it because it adds to the joint project of the conversation? Or just because you agree? Or because a statement contains flattering lies?
Similarly, 'wrong' obscures: wrong for mutually-uplifting progress in the conversation? Wrong on the facts? 'Wrong' also implies a negative judgment – an error that must be corrected – so there's still censure/sting.
In comparison, 'agree'/'disagree' (as an axis) makes clear that what is being applied is a more subjective, contingent evaluation. Mere agreement need not amplify (uprank) a comment, nor need mere disagreement squelch (downrank/fade) a comment.
A separate axis, for 'laudable'/'problematic', is how norms of behavior could be expressed – and these votes may affect placement/visibility. Ideally, the 'down' aspect of these ticks would be rarely used, because of the negative-spirals that punishment can cause. (It might even be the one kind of vote that requires extra gestures to cast.)
Note also that a 'disagree' and a 'laudable' should both be castable on the same contribution. (We could call that 'the Voltaire Vote': I disagree with what you say but defend your right (and manner) of saying it.)
Such contributions are, in my worldview, the most valuable. I can find endless validation of my existing biases/beliefs with preferred sources and carefully-crafted Google searches. Open discussion is for finding challenging new insights – things you might disagree with initially, or forever, but are still valuable.
Hmm, the purpose with 'like' might not be to clarify why someone liked it, but rather to show to people what's worth reading. If many people like something, it's more likely to be worth reading. In this sense, I think 'Like' makes sense.
I agree that 'Like' is ambiguous and could mean lots of different things. However, once you've read the comment, what the 'Like' means is probably fairly clear? (If it's a joke, useful information, or strong agreement.)
You mentioned people clicking 'Wrong' as some kind of general negative judgment — I hope they won't do that. Only factual errors and disagreement, I'm hoping.
The axes you mention, agree/disagree + laudable/problematic — I'm thinking they might be too complicated? Wouldn't people need to read the FAQ to clearly understand how to use these four buttons? — However, I do think that something like what you're describing could work at a site like Hacker News.
There's some forum software, Vanilla Forums, that sometimes shows Like/Agree/Disagree buttons. However, when I studied a few discussions, people essentially never clicked Agree/Disagree; they only used the Like button. — However, I just found a discussion where people actually do use the Agree/Disagree buttons, see: http://forums.unknownworlds.com/discussion/134718/what-is-it...
Perhaps that's a little bit what you were looking for (?), except that they're using 'Awesome' instead of 'Laudable'/'Problematic'.
It is possible to click 'Like' and 'Wrong' at the same time, actually (a bit like the Voltaire Vote). — However, 'Like' + 'Wrong' does sound a bit vague to me, I agree. It could be interesting to explore some way for people to clarify why they liked something — perhaps an easy-to-dismiss dialog could appear, where you could click a button clarifying why you liked it or thought it was wrong.
But also, if there's only one vaguely negative (or vaguely positive) button available, it'll be used for all similar feelings whenever the reader is only willing to give a lazy 'tick' rather than a composed comment. (Even with these problems, though, collecting such 'lazy' single-gesture feedback seems a good thing for many kinds of sites: it drives engagement and community self-governance way, way up.)
So I don't think you can confine 'wrong' to "factual errors and disagreement". And I still find it odd that you'd want to mix 'wrong' with 'disagree'. Saying "you're wrong!" in a conversation is a challenge: it may need to be backed up, or require a defense/fix (or else relative statuses suffer damage). Saying "we disagree" can instead be a friendly "I hear and understand you, but we neither will nor have to agree today" - and both sides retain their relative status.
People also often choose to compete for the "most agreed" - but if there's no punishment (in de-ranking/fading/loss-of-status/implied-censure) when you lose, those competitions can be a lot more civil and productive: the mobthink/piling-on/"we-must-enforce-and-cast-out" impulses are kept on a shorter leash.
I doubt in most cases that a secondary prompt requiring explanation is a good idea, except if explicitly desired as a way to slow/discourage a certain option (the 'problematic' censure/flag), or as initial 'training wheels' for new users. You'd not want to add it as a complication to the positive-feedback ticks that (if all is going well) you want to multiply to boost mood and engagement.
And yes, the UI and explanation for a two-independent-axis system is a major challenge, even more so because people are now trained up on other systems. Some hunches:
• while the axes would be in a virtual compass-rose – up:laudable, down:problematic, left:agree, right:disagree – the actual controls would have some distance from each other, and (if possible) from the leftmost position that Reddit/HN/etc has trained-in
• the agree/disagree (opinion) totals are always shown – not just the net but the total of each – as a left-right sparkline-like graph, with no effect on ranking/fading. This rewards people for those votes: they see the tally change, and thus know that they've been counted.
• the laudable/problematic (moderation) totals are never shown, but are somewhat inferrable by ranking/visibility. As with HN, site moderators might override default positioning to help set norms and cap campaigning that spills into the moderation-axis.
And of course, all of it is very subtle until you try to tick/hover over it – low distraction from flow of text – and much is probably hidden from new users, until they incrementally familiarize with site norms and features.
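The display rule in the bullets above could be sketched as a tiny text rendering. The bar characters and the scaling are my assumptions; the point is that both opinion counts are visible (not just the net), while moderation counts never are:

```python
# Sketch of the opinion-axis display: both agree and disagree counts are
# shown as a left-right bar, sparkline-style. The moderation axis would
# have no rendering at all; it would surface only through ranking.

def opinion_bar(agree: int, disagree: int, width: int = 10) -> str:
    """Render agree/disagree counts as a small left-right text bar."""
    total = agree + disagree
    if total == 0:
        return "agree 0 | 0 disagree"
    filled = round(width * agree / total)  # left portion = agree share
    bar = '█' * filled + '░' * (width - filled)
    return f"agree {agree} {bar} {disagree} disagree"
```

For example, `opinion_bar(3, 1)` yields a mostly-filled bar with both raw counts visible at either end.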
But perhaps 'Wrong' should be renamed to 'Disagree' (?). 'Disagree' might work also for factual errors. — If there were the buttons 'Like' and 'Disagree', I'm thinking this would be like the system you suggest, except that there'd be no 'Problematic' button, and 'Agree' and 'Laudable' would be merged into one single 'Like' button.
A sparkline graph would have been cool :-) I suspect it'd be hard to implement, though, relative to how many people would understand it and find it really interesting.
One reason I like the idea with a 'Like' button is that it seems to work well: have a look at http://www.discourse.org — they have a Like button, and nothing else. I find it useful and interesting to know what other people like, even if I don't know if it's because the comment is high-quality or it's because people strongly agree (well, until I've read the comment). And instead of any 'Problematic' button, I think they want people to simply ignore, or possibly flag, low quality comments.
In addition to 'Like', I'm thinking it's good to have some way to inform people about bad/harmful advice. The downvotes work well at StackOverflow / StackExchange I think.
So that's roughly my motivation behind Like + Wrong/Disagree.
Do we need to sugarcoat this? Being rejected for a job stings, having your paper rejected from peer review stings, etc. But in all such cases quality is maintained because of those rejections, and there's no way around it. It may be a factor that downvoting happens in public, but a moderation system where every comment needs to be reviewed by some peer review committee would be way too expensive and too slow for a forum.
Your idea of a two-dimensional voting system reminds me of Slashdot's more elaborate system. Some people say it works quite well, IIRC. Ultimately, I don't think one should hope to find a technical solution to a social problem.
So a healthy thing – good-faith disagreement – is being laden with the bad-feelings and retaliatory behavior that comes with punishment. A social problem caused by a technical choice may indeed have a technical fix.
With "social problem" I meant something else, namely the fact that there are trolls and low quality comments etc. I don't think those can be eradicated by a technical solution, because once you have a high quality community there is an incentive for people to take part in it even at the cost of having to defeat countermeasures.
What is this "technical fix" you allude to? Eliminating downvotes sounds like a non-starter to me. You could limit the people who can cast them, but that lowers their effectiveness because the probability that a bad comment is downvoted would plummet.
The other axis remains and still has a traditional censorious downvote – but my hypothesis is that it will then be used less, avoiding the negativity created by its current overuse for multiple purposes.
That 'spite' you mention isn't a constant across all communities, across all time: it grows with cycles of negative, stinging punishment-feedback. (The study which started this thread was suggestive of that, regardless of its other limitations.)
I strongly suspect each of the following is true:
• threads on certain contentious subjects have lots of competitive and retaliatory downvotes, driven from the impulses "I can't let only these favored comments sink/fade" and "if those partisans are downvoting me, I'll downvote them"
• that habit/mood, once learned, spills over into other threads, to the detriment of discourse and positive/productive feelings
A separate axis for disagreement might catch some of that and express it where it does less spite-generation.
Downvoting is not the only way to express disagreement; you can reply to the post and express your opinion (like I just did now).
Hundreds or thousands of readers can each issue a single tick that's tallied and displayed in summary form.
The prototypical example of this type of system is Facebook. And there's a reason that people continually clamor for a Dislike button: not all content is good, in terms of either quality or sentiment, and allowing only upvotes reinforces a hive-mind mentality, which is a bad thing in the long run.
(Fun Facebook Fact: Reporting a Facebook post actually decreases the number of Likes by 1.)
In other words, when somebody posts that their dog just passed away or whatever, I want to hit "dislike", not to pull down the like score but as a separate emotional indicator.
And allowing downvotes fixes this?
The best forum I take part in doesn't have upvotes. Chronology and individual filters only.
Upvotes are a quality filter, and downvotes are a quality filter for the quality filter.
Dislike != downvote.
Dislike can also be "I don't like this thing that happened to you" / sympathy.
With enough downvotes at HN, one's posts can become invisible to some readers, depending on their settings. Also, without enough upvotes one cannot downvote others. So, if one has enough sock accounts with upvotes, one can use those empowered accounts to downvote people one disagrees with into oblivion. People who don't use sockpuppets are at the mercy of those who do.
This situation creates a lot of incentive to create accounts where one censors one's opinions, attaches to the "groupthink" in threads with sockpuppet accounts, and trolls for upvotes, so one can acquire many empowered accounts with the power to exact revenge and censorship on others.
Instead of working to curb rudeness, this works to get people to align with groupthink, in hopes of reaching positive upvote territory, eventually gaining the ability to downvote others, and defending themselves from attacks by upvoting themselves with socks.
I wonder how many sockpuppet accounts are commenting and competing here on HN simply to acquire power against one another? How many comments are simply disingenuous trolling for upvotes?
It would be really enlightening if HN stopped making downvotes and upvotes anonymous, so we could all see the patterns and make the connections between grassroots downvotes/upvotes and systematic downvotes/upvotes from multiple sockpuppets.
I'd love to see that.