
Down-Vote Leads To A Vicious Circle Of Negative Feedback - ecesena
https://medium.com/the-physics-arxiv-blog/aad9d49da238
======
bane
HN used to have this problem in a big way years ago, where somebody would say
something challenging to the groupthink and everybody would just dump all
over the comment and "downvote the comment to oblivion". I think at some point
pg put a cap on negative comment scores at -5 or something, so that the "this
is an unpopular comment" message got across, but a user's karma didn't get
drained by -100 or whatever.
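A cap like that amounts to a simple clamp on the displayed score. As a rough sketch (the -5 floor is the value mentioned above; the function name and everything else is hypothetical, not HN's actual code):

```python
# Hypothetical sketch: clamp a comment's displayed score so a pile-on
# can't drag the comment (or its author's karma) arbitrarily low.
SCORE_FLOOR = -5  # the cap mentioned above

def displayed_score(upvotes: int, downvotes: int) -> int:
    """Raw score, but never shown below the floor."""
    return max(upvotes - downvotes, SCORE_FLOOR)

heavy_pileon = displayed_score(3, 40)   # raw -37, shown as -5
normal_case = displayed_score(10, 4)    # shown as 6
```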

I've also found that clusters of downvotes can happen at different times during
a comment's lifecycle. There's no clear pattern, but I've had comments get a
bunch of downvotes right after posting, then a few hours later end up in
clearly positive territory. I think the limit on downvotes saved the comment;
it probably would have ended up in double- or triple-digit downvote territory
while people were piling on, but the hard limit prevented an irretrievable
downward spiral.

I also find myself far less concerned about people piling on downvotes,
because the score might change and end back up in positive territory (and if
it doesn't, my karma isn't annihilated just because I happened to be in a bad
mood at the moment I posted something).

I also think that not showing scores has helped a lot with breaking negative
cycles (as well as breaking overly positive ones). It infuriated lots of old
users when it was instituted, but I've observed that echo-chamber-style
groupthink appears to have broken here, and that "famous names" no longer get
to the top of a comment tree just for posting an offhand remark, as they did
when scores were shown. Almost immediately, the mix of people at the top
of the comments for a topic started to become more interesting and varied.

------
jacobolus
The methodology here is quite simplistic, and suspect.

The way they “objectively” decided on the “quality” of posts, by which to
judge the ‘before’ picture of a poster, was via some machine-learning analysis
of the keywords in the post in question, implicitly assuming that their
keyword analysis was a better judge of quality than the explicit voting on the
site. This seems like a very poor assumption to me.

It’s quite plausible (to me) that the posts’ keywords in the negatively-voted
posts looked okay to their algorithm while the content still turned out to be
trolling or bullshit (hence the downvotes). It would be quite reasonable to
assume that posts full of trolling or bullshit were by posters who were
inclined to future trolling/bullshit posts.

I’d be very interested to see what results they’d get if they ran the clock in
reverse. I.e. pulled some posts of similar “quality” based on their metric but
different voted scores, and then looked at the several posts _before_ that,
from those posters.

------
gojomo
Not confident of their methodology, but it's clear to me that the 'sting' of
downvotes introduces problems along with whatever other moderation benefits
they might provide.

The problem gets worse when simple disagreement gets overloaded into the same
downvoting that's meant to control destructive behavior. That sends the
message that disagreement is punishable. Further, with the fading/de-
ranking/hiding that's applied to low-scoring posts on HN and similar sites, it
even sends the message that unpopular views should be censored.

My theory – which I've not yet convinced a social site to try – is that two
separate axes of voting could help. The first is whether a post is valuable or
destructive; the second is whether the reader agrees or disagrees.

In that way, the natural tendency to want to express an opinion with a single
click, or 'win' a poll of popularity, can be captured and channeled into the
agree/disagree measure, without the morale- and behavior-spoiling implications
that come with the visibility-penalty/muzzling of "that's destructive"
downvotes.

~~~
KajMagnus
I have actually implemented a voting system with two axes, pretty much like
you suggested, I think: one axis is whether the comment is valuable (if people
like it); the other is whether the reader disagrees, or thinks the comment is
factually wrong.

It works like so: there are two buttons, "Like" and "Wrong". If you like the
comment, click Like. If you think it's wrong, click Wrong (e.g. because of
factual errors, or because you disagree). — There is no "Don't like" or
"Correct" button, however.

The Wrong button is not meant as a downvote. It doesn't affect comment sort
order, and simply results in a warning that "Many people think the comment is
wrong, please see replies for details" (shown if many people think the comment
is wrong, in comparison to how many people like it).
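Sketched roughly in code (the exact threshold rule is my guess at one possible rule, and all names are made up):

```python
# Sketch of the Like/Wrong behavior described above: "Wrong" never
# reorders comments; it only triggers a warning when Wrong votes are
# large relative to Likes.

def show_wrong_warning(likes: int, wrongs: int, ratio: float = 1.0) -> bool:
    """Warn when Wrong votes outweigh Likes by the given ratio (a guess)."""
    return wrongs > likes * ratio

def sort_key(comment: dict) -> int:
    # Sort order depends only on Likes; Wrong votes never affect placement.
    return -comment["likes"]

comments = [
    {"id": 1, "likes": 5, "wrongs": 9},
    {"id": 2, "likes": 8, "wrongs": 0},
]
ordering = [c["id"] for c in sorted(comments, key=sort_key)]
warned = [c["id"] for c in comments if show_wrong_warning(c["likes"], c["wrongs"])]
```

Here comment 1 keeps its place in the thread (ordering is by Likes alone) but gets the "many people think this is wrong" warning.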

This demo page uses the Like-Wrong voting system:
[http://www.debiki.com/demo/-71cs1/demo-page-1](http://www.debiki.com/demo/-71cs1/demo-page-1)

~~~
gojomo
Thanks for the pointer, interesting, especially the use of a large-horizontal
layout (with mini-map!).

However, I don't see 'like' and 'wrong' as opposites, nor as particularly
useful categories.

The vagueness of 'like' has done wonders for Facebook, where it may be the
right choice, but in more purposeful discussions, it conflates all sorts of
different signals. Do you like it because it adds to the joint project of the
conversation? Or just because you agree? Or because a statement contains
flattering lies?

Similarly, 'wrong' obscures: wrong for mutually-uplifting progress in the
conversation? Wrong on the facts? 'Wrong' also implies a negative judgment –
an error that must be corrected – so there's still censure/sting.

In comparison, 'agree'/'disagree' (as an axis) makes clear that what is being
applied is a more subjective, contingent evaluation. Mere agreement need not
amplify (uprank) a comment, nor need mere disagreement squelch (downrank/fade)
a comment.

A separate axis, for 'laudable'/'problematic', is how norms of behavior could
be expressed – and _these_ votes may affect placement/visibility. Ideally, the
'down' aspect of these ticks would be rarely used, because of the negative-
spirals that punishment can cause. (It might even be the one kind of vote that
requires extra gestures to cast.)

Note also that a 'disagree' and a 'laudable' should both be castable on the
same contribution. (We could call that 'the Voltaire Vote': I disagree with
what you say, but defend your right (and manner) of saying it.)

Such contributions are, in my worldview, the most valuable. I can find endless
validation of my existing biases/beliefs with preferred sources and carefully-
crafted Google searches. Open discussion is for finding challenging new
insights – things you might disagree with initially, or forever, but are still
valuable.

~~~
KajMagnus
> The vagueness of 'like' [...] conflates all sorts of different signals. Do
> you like it because it adds to the joint project of the conversation? Or
> just because you agree? Or because a statement contains flattering lies?

Hmm, the purpose with 'like' might not be to clarify _why_ someone liked it,
but rather to show to people what's worth reading. If many people like
something, it's more likely to be worth reading. In this sense, I think 'Like'
makes sense.

I agree that 'Like' is ambiguous and could mean lots of different things.
However, once you've read the comment, what the 'Like' means is probably
fairly clear? (If it's a joke, useful information, or strong agreement.)

---

You mentioned people clicking 'Wrong' as some kind of general negative
judgment — I hope they won't do that. Only factual errors and disagreement,
I'm hoping.

---

The axes you mention, agree/disagree + laudable/problematic — I'm thinking
they might be too complicated? Wouldn't people need to read the FAQ to clearly
understand how to use these four buttons? — However, I do think that something
like what you're describing could work at a site like Hacker News.

There's some forum software, Vanilla Forums, that sometimes shows
Like/Agree/Disagree buttons. However, when I studied a few discussions, people
essentially never clicked Agree/Disagree; they only used the Like button. --
However, I just found a discussion where people actually do use the
Agree/Disagree buttons, see:
[http://forums.unknownworlds.com/discussion/134718/what-is-it...](http://forums.unknownworlds.com/discussion/134718/what-is-it-about-gorges)
Perhaps that's a little bit what you were looking for (?), only they're
using 'Awesome' instead of 'Laudable'/'Problematic'.

---

It is actually possible to click 'Like' and 'Wrong' at the same time (a bit
like the Voltaire Vote). — However, 'Like' + 'Wrong' does sound a bit vague to
me, I agree. It could be interesting to explore some way for people to
clarify why they liked something — perhaps an easy-to-dismiss dialog could
appear, where you could click some button that clarified why you liked it or
thought it was wrong.

~~~
gojomo
If 'like' is supposed to mean 'worth reading', maybe it should really be
labelled 'recommended'?

But also, if there's only one vaguely-negative (or vaguely-positive) button
available, it'll be used for all similar feelings whenever the reader only has
the energy for a 'tick' rather than a composed comment. (Even with these
problems, though, collecting such 'lazy' single-gesture feedback seems a good
thing for many kinds of sites: it drives engagement and community self-
governance way, way up.)

So I don't think you can confine 'wrong' to "factual errors and disagreement".
And I still find it odd that you'd want to mix 'wrong' with 'disagree'.
Saying "you're wrong!" in a conversation is a challenge - it may need to be
backed up, or require a defense/fix (or else relative statuses suffer damage).
Saying "we disagree" can instead be a friendly "I hear and understand you,
but we neither will nor have to agree today" - and both sides retain their
relative status.

People also often choose to compete for the "most agreed" - but if there's no
punishment (in de-ranking/fading/loss-of-status/implied-censure) when you
lose, those competitions can be a lot more civil and productive: the
mobthink/piling-on/"we-must-enforce-and-cast-out" impulses are kept on a
shorter leash.

I doubt in most cases that a secondary prompt requiring explanation is a good
idea, _except_ if explicitly desired as a way to slow/discourage a certain
option (the 'problematic' censure/flag), or as initial 'training wheels' for
new users. You'd not want to add it as a complication to the positive-feedback
ticks that (if all is going well) you want to multiply to boost mood and
engagement.

And yes, the UI and explanation for a two-independent-axis system is a major
challenge, even more so because people are now trained-up on other systems.
Some hunches:

• while the axes would be in a virtual compass-rose – up:laudable,
down:problematic, left:agree, right:disagree – the actual controls would have
some distance from each other, and (if possible) from the leftmost position
that Reddit/HN/etc has trained-in

• the agree/disagree (opinion) totals are always shown – not just net but
total of each – but as a left-right sparkline-like graph, and with no effect
on ranking/fading. This rewards people for those votes: they see the tally
change, and thus know that they've been counted.

• the laudable/problematic (moderation) totals are never shown, but are
somewhat inferrable by ranking/visibility. As with HN, site moderators might
override default positioning to help set norms and cap campaigning that spills
into the moderation-axis.
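The two axes above could be modeled along these lines (a minimal sketch; all names, defaults, and numbers are illustrative, not a real implementation):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    agree: int = 0        # opinion axis: always displayed, never ranks
    disagree: int = 0
    laudable: int = 0     # moderation axis: never displayed, only ranks
    problematic: int = 0

    def opinion_display(self) -> str:
        # Show both totals, not just the net, so voters see they were counted.
        return f"{self.agree} agree / {self.disagree} disagree"

    def rank_score(self) -> int:
        # Only the moderation axis moves placement/visibility.
        return self.laudable - self.problematic

posts = [
    Post("contrarian but thoughtful", agree=2, disagree=30, laudable=12),
    Post("popular but low-effort", agree=25, disagree=1, laudable=1),
]
ranked = sorted(posts, key=lambda p: -p.rank_score())
```

Note that the heavily-disagreed-with post still ranks first: disagreement is visible but carries no visibility penalty.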

And of course, all of it is very subtle until you try to tick/hover over it –
low distraction from flow of text – and much is probably hidden from new
users, until they incrementally familiarize with site norms and features.

~~~
KajMagnus
'Recommend' instead of 'Like' would probably work, yes. Hmm, people are used
to 'Like' and it might sound a bit more friendly / less formal. — Also, 'Like'
can be used as an 'I strongly agree' button (fewer buttons needed).

But perhaps 'Wrong' should be renamed to 'Disagree' (?). 'Disagree' might
work for factual errors too. — If there were the buttons 'Like' and
'Disagree', I'm thinking this would be like the system you suggest, except
there was no 'Problematic' button, and 'Agree' and 'Laudable' were merged into
one single 'Like' button.

A sparkline graph would have been cool :-) I'm thinking it's hard to
implement, though, in comparison to how many people would understand it and
find it really interesting.

---

One reason I like the idea with a 'Like' button is that it seems to work well:
have a look at [http://www.discourse.org](http://www.discourse.org) — they
have a Like button, and nothing else. I find it useful and interesting to know
what other people like, even if I don't know if it's because the comment is
high-quality or it's because people strongly agree (well, until I've read the
comment). And instead of any 'Problematic' button, I think they want people to
simply ignore, or possibly flag, low quality comments.

In addition to 'Like', I'm thinking it's good to have some way to inform
people about bad/harmful advice. The downvotes work well at StackOverflow /
StackExchange I think.

So that's roughly my motivation behind Like + Wrong/Disagree.

------
minimaxir
> _What’s needed now, of course, is a test of this idea. There are certainly
> social networks that allow up voting but not down voting (Medium being one
> of them)._

The prototypical example of this type of system is Facebook. And there's a
reason that people continually clamor for a Dislike button: not all content is
_good_, in terms of either quality or sentiment, and allowing only upvotes
reinforces a hive-mind mentality, which is a bad thing in the long run.

(Fun Facebook Fact: Reporting a Facebook post actually decreases the number of
Likes by 1.)

~~~
andreasvc
People clamor for a dislike button on Facebook because it is an emotion they
would like to express with a button. Whether it would lead to better comment
threads is a completely different matter; if Reddit is any indication, it does
not protect against hivemind, and here as well comments are frequently
downvoted merely for containing arguments that people happen to disagree with.

~~~
bane
I think with FB, people don't necessarily want a button that's just a counter
to the "like" button. They want a "dislike" button on its own axis, so that
n likes and m dislikes don't collapse into a single score of n - m, but are
counted separately (n + m reactions in total).

In other words, when somebody posts that their dog just passed away or
whatever, I want to hit "dislike", not to pull down the like score but as a
separate emotional indicator.
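In other words, the two counters are independent signals rather than offsets. A minimal sketch of the arithmetic (numbers invented for illustration):

```python
# "Dislike" as its own axis: reactions add up instead of canceling out.
likes, dislikes = 40, 25

one_axis_score = likes - dislikes   # n - m: dislikes cancel likes
two_axis_total = likes + dislikes   # n + m: both count as engagement
```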

~~~
Turing_Machine
Exactly. When someone announces that they have cancer, the last thing you
want to do is click "like". Slashdot has several options, not just "like" and
"dislike".

------
Cowicide
As most everyone here knows...

With enough downvotes at HN, one's posts can become invisible to some users,
depending upon their settings. Also, without enough upvotes one cannot
downvote others. So, if one has enough sock accounts with upvotes, one can use
those empowered accounts to downvote people one doesn't agree with into
oblivion. People who don't use sockpuppets are at the mercy of those who do.

This situation creates a lot of incentive to create accounts where one censors
one's opinions, attaches to the "groupthink" in threads with sockpuppet
accounts, and trolls for upvotes, so one can acquire many empowered accounts
with the power to exact revenge and censorship on others.

Instead of discouraging rudeness, this incentivizes aligning with groupthink
in hopes of getting into positive upvote territory, eventually gaining the
ability to downvote others and to defend oneself from attacks by upvoting
oneself with socks.

I wonder how many sockpuppet accounts are commenting and competing here on HN
simply to acquire power against one another? How many comments are simply
disingenuous trolling for upvotes?

It would be really enlightening if HN decided to stop making downvotes and
upvotes anonymous, so we could all see the patterns and make the connections
between grassroots downvotes/upvotes and systematic downvotes/upvotes from
multiple sockpuppets.

I'd love to see that.

------
ecesena
Original paper: How Community Feedback Shapes User Behavior
[http://arxiv.org/abs/1405.1429](http://arxiv.org/abs/1405.1429)

------
Codhisattva
This makes me wonder about the negative feedback inherent in code reviews.

~~~
greenyoda
There can be both negative feedback and positive feedback in code reviews. And
if you've written buggy or unmaintainable code, wouldn't you want someone to
tell you about it so you could learn to write better code in the future? I
sometimes wish I'd had someone to review my code when I was starting out as a
programmer.

------
xname
Check out my experiment, which attracted loser downvoter(s):

[https://news.ycombinator.com/item?id=7761389](https://news.ycombinator.com/item?id=7761389)

