While HN does stay civil, I think it is too limited in general. Don't get me wrong, the reason I use HN is because the discussion hasn't devolved into /. or reddit. However, HN would do well to encourage dissent.
I also get that HN, being mostly about tech startups, has different priorities than me personally.
I am forever in search of a place online to have good discussions and debate with a respectful community. The comment sections on news sites are rarely the place — even for very respectable publications. I'm fully in favor of NPR's move and other sites should follow.
I wish downvoting weren't allowed without a "reply" to the parent post; that way, if I'm being downvoted nonstop, I'd at least know why. Sometimes I get downvoted like crazy and have no clue as to the reason. And even if I can guess, it doesn't add to the conversation to just shut me out because you disagree without teaching me why. I wonder how different HN would be if, to downvote, you had to respond to the post.
I think the real issue of requiring comments on downvotes is that it would either disallow anonymous downvotes, or encourage anonymous comments. Perhaps de-anonymizing downvotes without requiring a comment might be a better first step: if you aren't brave enough to have your name associated with the downvote, then maybe you shouldn't be downvoting. I'm sure this would cause its own problems, but I suspect it would be a net positive. At times I think it might even be good to have all voting and flagging actions on HN be public record.
On HN, unpopular comments are often still voted down despite the facts and despite the presentation.
I usually reserve these instances for when I know more than a little on the topic, generally professionally speaking, and often first hand. But, if it is unpopular, it doesn't matter. It's predictable.
1. there are replies and up-voting, but no down-voting;
2. you can classify a comment reply, at time of posting, as a "rebuttal";
3. up-voting a rebuttal comment will also be considered as down-voting its parent;
4. the aggregate score of a subthread is calculated as a Euclidean distance, with all the positively-scored comments as dimensions. Thus, if you've got one great comment made in response to a bad comment, the thread will stay un-collapsed; if you've got two equally-great comments, the thread will rise, etc. (Likewise, if you've got good discussion happening in response to a really bad post, the discussion's value will still propel the post to the hot page.)
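To make point 4 concrete, here's a minimal sketch of the scoring, assuming each comment boils down to a numeric score (the function name and example scores are invented for illustration):

```python
import math

def subthread_score(comment_scores):
    """Aggregate a subthread as the Euclidean norm of its positively
    scored comments, treating each positive comment as a dimension.
    Negatively scored comments ("flagged dead") contribute nothing."""
    positive = [s for s in comment_scores if s > 0]
    return math.sqrt(sum(s * s for s in positive))

# One great reply (score 10) under a bad parent keeps the thread alive:
print(subthread_score([-4, 10]))   # 10.0
# Two equally great comments push it higher still:
print(subthread_score([10, 10]))   # ~14.14
```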
I often upvote both a comment and its rebuttal, because in many debates I have no dog in the race and I often find that people on both sides have valuable or otherwise interesting insights. Even if I don't agree with them completely, I sometimes find that I'm able to view an issue from another perspective that I hadn't considered before.
Not to mention, many debates really just don't have a clear answer, and to pit commenters against each other like that is to enable one of them to be declared the "winner", which I think would be misleading in many cases.
In a good-faith discussion, a thread will frequently go into a pseudo-debate mode to "find the truth" of a statement someone offered. Kind of like we're doing here. We all add anecdotes and counterexamples and so forth, and see where the inductive process takes us. None of these comments would/should be tagged as rebuttals.
But there's a very specific situation where, I think, this feature is an important addition: when the original comment (or the post-link heading the discussion!) just doesn't know what it's talking about, and the reply outlines why this is so.
The concept is less like "this post disagrees with its parent, and so up-voting (agreeing) with it should mean down-voting (disagreeing) with its parent", and more like "this post is a petition to flag/retract its parent, and up-voting it is signing that petition."
This is why I had point 4 in the above, which might otherwise seem an unrelated feature: negatively-scored posts are not "disagreed with", but rather "flagged dead." But negatively-scored posts must still stay visible, in order to give requisite context to their positively-scored rebuttal-comments. And, in fact, a thread containing a negatively-scored top-comment might even still be the top-sorted thread, if its replies are considered valuable enough.
(One might still want to add some visual effect to remind those reading the post that the community considers it "retracted." Perhaps adding a strikethrough that disappears on hover, or a background of faint red Xes. You don't want to make the post illegible—it's still necessary context for its subthread, unlike the current HN/Reddit system where the post "fades" to nothing, and then the whole subthread gets considered a lost cause and collapsed.)
I wonder if the problem is the binary nature of so many things. 5-star/1-star reviews are the most common, we have up and down votes, but no real context for any of it. I wonder if the problem that needs to be solved isn't context rather than direction.
This is me spitballing, so feel free to IGNORE everything below.
I was listening to the Freakonomics podcast on proper voting (e.g. elections) the other day - this one: http://freakonomics.com/podcast/idea-must-die-election-editi... - and there was an idea called quadratic voting. http://ericposner.com/quadratic-voting/
Anyway, the idea is that we all get X votes, and we can vote more on some things; e.g. if I REALLY want marijuana legalized, I can cast 4 votes for it. The twist is that the cost of extra votes is the square of the number of votes: 2 votes cost 4, 3 cost 9 (we all get how the maths works).
The theory is that this lets people vote on what they actually care about, rather than every issue counting equally.
I was wondering whether there isn't some way to create a system where you earn votes that you can spend in a similar way, e.g. you "earn" upvotes from others and can spend them elsewhere. If everyone got a single vote, but you could earn extra votes to spend in interesting ways at increasing cost (e.g. heavily downvoting an idea at an exponential rate), it might make people less a victim of lowest-common-denominator views, give fringe views more airtime, and make people consider a vote more deeply. Not just yes/no, but how much do I hate/love this idea? Enough to blow 100 points on 10 down/up votes?
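Just to make the arithmetic concrete, here's a tiny sketch of the quadratic cost rule (the function names and the 100-point budget are purely illustrative):

```python
def vote_cost(n_votes: int) -> int:
    """Quadratic voting: casting n votes on one issue costs n**2 credits."""
    return n_votes ** 2

def max_votes(budget: int) -> int:
    """How many votes on a single issue a given credit budget can buy."""
    n = 0
    while vote_cost(n + 1) <= budget:
        n += 1
    return n

print(vote_cost(2))    # 2 votes cost 4 credits
print(vote_cost(3))    # 3 votes cost 9 credits
print(max_votes(100))  # blowing 100 points buys 10 down/up votes
```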
Anyway, just some random braincrack (https://www.youtube.com/watch?v=0sHCQWjTrJ8) I needed to get out.
2. there are above-'unit-weight' up- and down-vote mechanisms that are explicit, but require something humans consider very-slightly onerous: solving a CAPTCHA, paying a tenth of a cent of pre-paid credit, etc.
In my conception, the small votes would be named "OK" and "Meh", and the large votes "Love" and "Hate", with about a 10x difference between their power. The site, of course, would be called "Mehddit." ;)
I am wary of this, especially when it comes to the web. I can see this system being "controlled" by early and/or very active users.
That is the single most compelling reason why I find most people (on|off)line disturbing. They just want to drown out dissent without helping other people understand the opposing view through actual argument.
OK, that is probably because they oftentimes do not have real arguments and haven't rationalized their position at all.
Maybe the previous paragraph is just me being a misanthrope. Or maybe the internet is just too big - as bigger discourses tend to deteriorate.
Even for the one in a million chance of a person reading my stuff, I have to at least try to convey my POV with arguments.
And I am far from achieving this every time. Very far, actually.
But I'm not sure those ideas, designed for technically focused sites (where questions have a "right" answer), can be easily applied to a more political site such as NPR.
It's never clear what a downvote even means. At least with an upvote or "Like" it doesn't matter too much if you misinterpret the result, but the downvote on certain sites is too coarse-grained to be useful.
But I don't think you need subcomments on comments; I think that's just overcomplicated. If a reply attached to a downvote is nonsensical, it can be downvoted as well, or even flagged.
One need only look at the number of articles dealing with ideas like basic income; regulatory responses to climate, energy, or finance; public transport; or other areas where systematic, centrally planned responses are supported as the way forward, and you can't help but see that progressivism is the larger ideal supported here.
It's hardly surprising though. This is a group of professionals that builds systems to tackle problems in new ways. We convince ourselves that we're the disruptors whose vision can change industries and our way of life. A natural extension of that viewpoint is to believe that the public sphere can be molded, too, if only the right programs are put into effect under the watchful eye of intelligent and visionary custodians... which has much in common with progressivism.
Of course, that's my personal viewpoint (and I am certainly not a "progressive").
The typical such user has a long history of breaking the rules here, getting banned, and creating new accounts. They often do so shortly after proclaiming that they're leaving HN forever and will never set foot in this crappy, horribly-run echo chamber again. Most HN readers would be surprised to learn how small the number of users generating all this drama really is. The rest of us just want to use the site for its intended purpose, which isn't ideological flamewars or meta grandstanding.
There's nothing more common than posturing as a brave independent thinker standing up to the mob. Therefore that posture has the interesting property of being self-refuting.
I'm hoping that you base your moderating decisions on the actions of the user in question (the human behind the keyboard, regardless of account name) and not on the "typical such user"? Do you have evidence, which you're choosing not to release, that 'james-watson' is such a user?
Most HN readers would be surprised to learn how small the number of users generating all this drama really is.
Perhaps there would be some way to publicly enlighten us about the serial offenders? For example, maybe you could explain the full backstory of why this post was killed: https://news.ycombinator.com/item?id=12313089.
In and of itself, it doesn't appear to be too egregious, and thus it would appear to bolster the poster's argument. But presumably there is more to the story?
this crappy, horribly run echo chamber again
The most interesting thing to me is that there are often multiple conflicting "echo chamber" claims in flight, with each side feeling like the lone outcast. There's an excellent example even in this subthread right here: https://news.ycombinator.com/item?id=12308842 (I'm referring to the back-and-forth in the thread, rather than the individual comment I linked).
To some, HN is a hotbed of socialism, and to others the epitome of evil capitalism. My recent conclusion is that (counterintuitively) HN is frequently accused of being an "echo chamber" because it has greater diversity of opinion than most other spaces online. The truly anechoic chambers aren't called out as such because the filtering is so effective, whereas "leaky" spaces like HN are assigned the label.
[Edit: I just noticed that "anechoic" doesn't quite fit the narrative here, but don't know how to reword it. The point was supposed to be that full echo chambers and anechoic chambers may have more in common with each other than each does with the points in the middle.]
Perversely, this might mean that accusations of being an echo chamber are a good metric for diversity of opinion. If the norm is that one lives in a world where one normally hears no fundamental disagreement, it can be disconcerting to be in a place where there is no clear "right way of thinking". Only when people stop proclaiming it to be an echo chamber is the canary dead.
To answer your other concerns: Yes, we go out of our way to try to make moderation decisions individually. I don't think it would work to publish information about users' past accounts—that would be a surefire shitstorm. As you and others already figured out, 12313089 was flagged by users—that's what [flagged] always means, both on comments and stories. Vouches were turned off on direct replies, but I was inspired by the below to enable them.
* e.g. the subthread starting at https://news.ycombinator.com/item?id=12003178, plus https://news.ycombinator.com/item?id=12003205
> The most interesting thing to me is that there are often multiple conflicting "echo chamber" claims in flight, with each side feeling like the lone outcast.
Confirmation bias (specifically confirmation bias on bias) + terse domain-specific terminology (of many different dialects) favored by members for its efficiency of expression and the subsequent loss of some implications by those less versed in that DSL + normal communication inefficiency in expressing thought = arguments where both sides are mostly in agreement + a tendency to attribute opposing positions more strongly or more often than they actually exist.
> Perversely, this might mean that accusations of being an echo chamber are a good metric for diversity of opinion.
I think the inverse might be easier to rely on. No accusations of being an echo chamber probably means there isn't enough diversity, while accusations indicate that there's at least enough diversity for people to form a perception that there is an echo chamber, whether there is one or not.
I can explain it. It says right there that it was [flagged]. Enough people hit the "flag" link to automatically kill it.
Separately, although multiple flags can kill a comment, it's still subject to moderator review. Since Dan commented in this thread, this probably implies that he consciously decided to let the user flagging stand rather than reverse it. My phrasing may have been poor, but I wondered why this was.
I've argued elsewhere in this thread that it would be interesting for both flags and downvotes to be public. I don't expect Dan to release this information here, but I'd personally be very interested to know who those users were in this case, and on what basis they were flagging it.
Separately, your tone seems particularly condescending. Is this by design? Why?
The vouch button appears for me.
> and on what basis they were flagging it.
I didn't flag it, but it contains some deliberately provocative phrasing from someone who's previously had user flags (and a ban) for their posting style.
>> the original post which dang replied to and subsequently killed.
These comments tend to attract downvotes and flags because they're untrue. For one thing that post doesn't appear to have been killed, and if it had been killed it probably would have been user flags, not mods, that did the killing.
Interesting. I might understand this now. To discourage retaliation (I think) both downvoting and flagging are not allowed for direct responses to one's own stories and comments. Since "vouch" was added late, it reuses the same logic, even though the "retaliatory vouching" is not really a danger.
In the context of discussing perceived bias in moderation, I didn't find that particular comment to be deliberately provocative. While context is important for interpretation, I think flagging (like vouching) should be done comment-by-comment rather than based on previous actions under a different account. Killing comments based on historic behavior makes "recovery from mistakes" more difficult, whether the mistake is on the part of the moderator or the poster.
It's also not clear to me exactly why the FD3SA account (should we consider this the same user for purposes of flagging?) was banned. He ('james-watson') believes it was because of the content of the posts and not the style. I doubted this, and suspect he was banned due to his expressed intention "to return hostility in kind". While bans on this basis may be good policy, if this is true 'james-watson' would have a reasonable argument that this punishment is indeed for thoughtcrime. As one of the targets, you are of course entitled to have your own interpretation.
For one thing that post doesn't appear to have been killed, and if it had been killed it probably would have been user flags, not mods, that did the killing.
I agree, and made the same point here: https://news.ycombinator.com/item?id=12315815. While this might be a good reason to downvote or rebut, I don't think that flagging is an appropriate response to factual inaccuracy.
Yep, that's exactly right. Downvotes are also disabled for comments older than a certain time interval (IIRC it's currently 8 hours).
> Since "vouch" was added late, it reuses the same logic, even though the "retaliatory vouching" is not really a danger.
That may well be true, and enabling the "vouch" link for replies to one's own comments sounds like a good idea. You should email email@example.com about it -- they're usually very responsive.
Never mind, maybe I found it, or at least the account: https://news.ycombinator.com/item?id=9039872. A little bristly, and I'm not sure if the assumptions are correct (Is it unequivocal that "Status is zero sum"?) but doesn't seem ban-worthy. But since it's a year before the account appears banned, maybe I'm missing a later comment with the exact quote?
Presuming this is the account you are referring to, it looks to have actually been banned for the similarly themed but much more aggressive comment "Sexual dimorphism is real. Your impotent rage and ridiculous ideologies will never change that fact. I will be greatly amused by your kind's zealous need to tilt at windmills." (https://news.ycombinator.com/item?id=11226294), which when challenged by Dan you defended by saying "I have a policy to return hostility in kind." (https://news.ycombinator.com/item?id=11227788).
I think this counts as leaving out an essential detail. The problem is not that discussing sexual dimorphism is off limits (others appear to have done so without being banned), but that "he did it first" is not an acceptable excuse for rude behavior here. Or maybe I have the timeline slightly off? It's hard to tell when some of the edits occurred.
In any case, I'm going to keep my rose colored glasses on for a bit longer. I'd encourage you to keep making scientifically accurate statements about sexual dimorphism from your new account, but given that some interpretations have sensitive implications, it would probably be wise to approach it as politely as possible.
In fact, I have a feeling I'll get banned for this comment.
No, I don't think you will. You can actually discuss many "controversial" topics here as long as you try to do so politely. Assuming I found the right thread, it looks like the ban wasn't because you raised a forbidden topic, but for your stated policy of returning hostility in kind. "Tit for tat" has its place in game theory, but in real games (and communities) it has some serious defects if blindly applied.
The main defect in this context is that if all players adhere to the simplest version, it has a tendency to end in a "death spiral" of defection. Eventually, a poorly phrased response is interpreted as an insult, and after that it's a permanent race to the bottom. The strategy can be improved by adding some degree of "forgiveness" to allow both parties to reset to cooperation. Adjusting the trigger to require multiple offenses before retaliation ("tit for two tats") can also help to account for inevitable communication errors.
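For anyone who wants to play with it, here's a rough sketch of tit for tat with the "forgiveness" and "tit for two tats" tweaks described above (the parameters are invented for illustration, not taken from any particular model):

```python
import random

COOPERATE, DEFECT = "cooperate", "defect"

def tit_for_tat(opponent_history, forgiveness=0.0, tats_required=1):
    """Decide the next move from the opponent's past moves (most recent last).

    forgiveness:   probability of cooperating anyway after a defection,
                   which lets both sides escape the retaliation spiral.
    tats_required: consecutive defections needed before retaliating
                   ("tit for two tats" when set to 2), which absorbs
                   one-off communication errors.
    """
    if len(opponent_history) < tats_required:
        return COOPERATE
    recent = opponent_history[-tats_required:]
    if all(move == DEFECT for move in recent):
        if random.random() < forgiveness:
            return COOPERATE  # forgive and try to reset to cooperation
        return DEFECT
    return COOPERATE

print(tit_for_tat([DEFECT]))                   # plain tit for tat retaliates at once
print(tit_for_tat([DEFECT], tats_required=2))  # tit for two tats tolerates a single slight
```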
I have witnessed HN darlings get away with far, far worse ad-hominem vulgarity with nary a peep from the moderators
Likely, but enforcement is always going to be "spotty", so it's almost impossible to distinguish bias from bad luck in any particular case. And perhaps they are "darlings" because they interact politely with the moderators?
For history's sake, let's link the original post which dang replied to and subsequently killed.
I don't know if it is intentional, but I think you are conflating different issues. Dan marked the thread off topic and explained why. Subsequently, another comment in the thread was killed by user flags. It is unlikely that Dan killed it himself.
There was zero rule breaking in that one, just thoughtcrimes
Arguably, although the counter-argument that Dan presents is that "attractive nuisances" are against the rules. He explains his reasoning (which I agree with) in a more recent response: https://news.ycombinator.com/item?id=12163939. The goal of moderation is not ensuring technical compliance with arbitrary rules; rather, the goal is "protecting civil, substantive discussion".
Like the common (and often truncated) quote that "democracy is the worst form of Government except all those other forms", the standard against which HN should be judged is not whether the moderation is perfect, but how it compares to the alternatives. Based on the current state of HN relative to other internet discussion sites, I'd claim that the moderation seems to be working pretty well. Where do you think is doing a better job?
Separately, are "clevernickname" and "FD3SA" both your accounts? The transition in the thread from one account to the other seems odd to explain otherwise. If so, the "zero rule breaking in that one" seems like an odd claim, akin to the "Freemen on the Land" claims of immunity from the courts based on being "Incorrectly Identified". See page 75 here: https://thelastbastille.files.wordpress.com/2014/02/meads-v-...
If HN is about entrepreneurship and technological progress, why do they ruthlessly suppress established scientific consensus?
I know this must feel like a well framed question, but it's really not. Why should we assume "HN is about entrepreneurship and technological progress" any more than assuming "the moon is made of blue cheese"? Who exactly is the "they" that is suppressing scientific consensus, and if this was the goal why choose such a round-about way to do so? For that matter, what does "consensus" have to do with science?
And here's my hypothesis: because it does not fit the current political zeitgeist.
A scientific hypothesis should make a falsifiable prediction. Is this the sense in which you are using the term? If so, what predictions does your hypothesis make that are different from a plausible "null hypothesis" like "the moderators are trying their best to keep HN as a place where substantive discussion is possible"? How would we go about testing these predictions?
No, I did not use sockpuppet accounts. Unlike fulltime moderators, I have things to do other than spend every hour of the day on HN to score political points. I only had one previous account, and it was banned as a result of that thread.
As to your other questions, you are losing the forest for the trees. What I am saying is this:
You are free to make of it what you will. Obviously, neither I, nor Dr. Cronin, nor the Holy Ghost himself can convince you of a fact that you do not wish to be convinced of; the choice is yours.
I appreciate the clear statement. My guess based on Dan's comments is that he thinks you are associated with other undisclosed accounts, and that this might explain some of the confusion as to what is acceptable discourse.
As to your other questions, you are losing the forest for the trees.
Possibly. Also possible that we are in different forests, or that I care more about trees than forests.
What I am saying is this: https://www.edge.org/conversation/helena_cronin-getting-huma...
Thanks, I will read and consider. At a glance, I think it reflects my beliefs as well. I read the "Ant and the Peacock" long ago, but don't recall the specifics of her argument.
My favorite book in this area, which I think is compatible with Cronin, is Sarah Blaffer Hrdy's "Mother Nature": https://www.amazon.com/Mother-Nature-Maternal-Instincts-Spec.... If you haven't read it, I'm guessing you'd love it.
Hrdy also wins my personal award for "Best Evolutionary Development Theorist That's Been Almost Completely Ignored". Happily that seems to be changing a bit recently, with her work finally starting to get respect: http://blogs.scientificamerican.com/primate-diaries/raising-...
It tends to be supported by some more pragmatic libertarians, who recognize that some form of welfare is necessary, and see UBI as the least-overhead form that, if not the cheapest economically, keeps the associated bureaucracy (and the government sprawl induced by that) to the minimum.
However, it is also a popular idea on the left, especially among the more individualist-minded liberals, left libertarians etc.
Basic income would mean that part of GDP is redistributed in a purely democratic manner: everybody gets the same piece of the pie, whether they deserve it or not. It's very similar to democracy, but at the economic level. And democracy is in fact a very leftist idea (if you look at the history of the left), because again, it proposes that all people should have the exact same political power (one person, one vote), regardless of what they are or what they do or contribute.
The fact that today's right (at least partially) accepts some of these ideas, such as democracy or UBI, is actually a success of the left, or a manifestation of reality having a liberal bias. :-)
But, yes, both downvoting and flagging can and have been abused here, despite measures to prevent them.
Not sure about that, could you find one such comment? Many people would upvote any greyed comment that comes with substantive arguments, regardless of their political view.
Greying is one of the key factors in the anti-dissent, party-line mode here, which I've noticed as well. It was weird to pull scores and then implicitly put them back with greying, so you can at least tell if a comment is <= 0. One difference from Reddit is that Reddit explicitly suggests that you not downvote to register disagreement, whereas HN does not take a stand and pg clarified that he's fine with it a long time ago. So that ties together voting and agreement, which then ties together greying and agreement, which helps give rise to the anti-dissent environment you're observing.
I'd love to see some analysis on whether a grey comment is more likely to be downvoted, as well, because I'm almost positive it is. Once someone hits me with a 0, which is discernible as slightly greyer by itself, that comment is almost guaranteed to end up very negative.
My opinions often diverge from HN and I am rewarded with barely-visible commentary very often, so I've been trained over time to resent voting rather than wish to contribute here, which I don't think is the spirit nor the intent of the greying feature. I don't think HN wants to be an echo chamber, to be clear, it's just that the quirks of the system create incentives that give rise to it. The last thing on planet Earth that I care about is my karma score, but I do care that what I say matters, and I cannot really talk about it because it sounds like complaining about downvoting.
Slashdot used to be a great place (like HN was a few years ago), and many of the best moderation systems originated there.
Meta-moderation kept everything fair because you could moderate the moderators. The way it worked was that people with high karma would be asked at the top of their page to please take time and meta-moderate 10 comments a day. You were presented with 10 random comments and the moderation +/- each had received, and simply checked a radio button indicating whether the moderation was fair or unfair.
The way it was enforced, I assume, is that people who were meta-moderated as unfair too often would lose their ability to moderate completely.
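I don't know Slashdot's actual thresholds, but the enforcement side might have looked roughly like this sketch (the numbers are invented):

```python
from collections import defaultdict

UNFAIR_RATIO_LIMIT = 0.3   # invented threshold for illustration
MIN_VERDICTS = 20          # don't judge anyone on too little data

verdicts = defaultdict(lambda: {"fair": 0, "unfair": 0})

def record_metamod(moderator_id, was_fair):
    """Record one meta-moderation verdict about a moderator's decision."""
    verdicts[moderator_id]["fair" if was_fair else "unfair"] += 1

def can_moderate(moderator_id):
    """Moderators whose decisions are too often judged unfair lose the ability."""
    v = verdicts[moderator_id]
    total = v["fair"] + v["unfair"]
    if total < MIN_VERDICTS:
        return True
    return v["unfair"] / total < UNFAIR_RATIO_LIMIT
```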
Ultimately, the question is whether the negatives of people who downvote on disagreement (as opposed to more objective grounds, such as lack of evidence for assertions or overly personal and/or aggressive comments) outweigh the benefit of a very effective way for the community to self-moderate, given that there is occasional pushback against subjective downvoting when it happens.
The problem will of course seem larger if some of the common community views conflict with your own, and you've been subjected to the consequences of this. This is countered by the human tendency (IMO) to view unexplained negativity as unwarranted. Were you downvoted because your comment was lacking in some regard, or because you stepped into a pet issue of the community? It's easier to believe the second, and while it is the case sometimes, it's hard to tell what is perception and what is reality.
Sure, I'd grant that as intent, but where this falls down is that it instead incentivizes me to just not be a part of the community. Why would I put care and work into a comment if I'm going to be rewarded by the time invested becoming essentially worthless? Arguably, silencing someone because they diverged from "community norms" is slightly hostile; I have a hard time believing your position that a community is served well by being built atop hostile moderation. You might build a community, but it might be a much smaller, much less desirable community. (I don't know.) It's also a weird incentive to throw down, as you allude to, because "acceptable opinion" and "acceptable comment" are extremely conflated. So now one finds oneself writing comments that contain acceptable opinions, which is the care in crafting that you're describing.
Look, the opinion of HN from outside and the opinion of HN from inside are wildly different. I think this community tells itself things about civility, moderation, and so on; your comment sounds good, for example! It's just slightly off-base and overlooks some consequences, and it's not obviously wrong because we're within HN discussing it.
> Were you downvoted because your comment was lacking in some regard, or because you stepped into a pet issue of the community? It's easier to believe the second, and while it is the case sometimes, it's hard to tell what is perception and what is reality.
Careful; this is lightly making the case that I'm unable to discern the difference and therefore off-base for criticizing a real issue. I can't prove this to you, but trust me when I say that I can tell when I've "earned" the downvotes. And I do, occasionally.
(And, were I less civil, I'd use colorful four-letter words to describe community "norms," as I'd hope any hacker would. I didn't become a hacker because I cared about normality.)
It is slightly hostile. It's also how groups enforce culture and less codified systems of conduct. Not only in the active consequences to the offender, but in the visible consequences to others. I think it works in some cases, and does not in others. In the case of HN, it's ham-fisted, but I doubt more intricate systems that allow people to choose levels of response would actually work at all, given the amount of time people are generally willing to spend on such things (and the variability in the scale and type of response based on initial state).
> It's also a weird incentive to throw down, as you allude to, because "acceptable opinion" and "acceptable comment" are extremely conflated. So now one finds oneself writing comments that contain acceptable opinions, which is the care in crafting that you're describing.
I'm not really sure what you mean by the first portion of that sentence. I do agree one can find oneself writing comments that conform to opinion and not just an accepted method of argumentation. That's up to you to combat on your own, here, as the rules are now. People will value different aspects of discussion than you, and you have to deal with that in every discussion anyway. When speaking about something you know your audience is sensitive to, you either make some level of effort to present it in a way that minimizes miscommunication and irrational reaction, or you don't. That's true of every single instance of communication; I'm not sure why we would expect the problem to be solved here for some reason.
> Careful; this is lightly making the case that I'm unable to discern the difference and therefore off-base for criticizing a real issue.
It most definitely is not. It's making a case that people, in general, are bad at this in my opinion, because it challenges their view of themselves. That doesn't mean you are off-base, and it doesn't mean it's not a real issue (I did agree with you, for example), but the level of importance you attribute to this phenomenon is highly subjective. We agree there's a problem; I think we disagree on the scope and whether it outweighs the benefits imparted, and this is meant as a possible explanation of why we disagree.
> (And, were I less civil, I'd use colorful four-letter words to describe community "norms," as I'd hope any hacker would. I didn't become a hacker because I cared about normality.)
I assume you became a hacker because you like to know how things work. I like to know how things work. "Norms" (culture) are about how groups of people work, or more specifically, they're the informal rules groups establish to allow interoperability. The "standards" (in the IEEE sense) on top of which we're able to build our more complex systems. Sure, there are negative aspects, such as inefficiencies, cruft, and errors, but it's allowed us to get to where we are, so I'm disinclined to view them quite as negatively as you seem to. Something better may come along, but given the irrationality of all people, I'm not sure it will work if it's all that different. I'm interested in hearing alternatives though.
Exactly, you have to care about what you say, and that's exactly why I love HN comments so much. Also, you make it sound like there are such great comments that disappear because of the downvotes, but that's not the case. The only disappearing comments are stupid jokes or trolls. I don't remember any well-argued position that I was unable to read because of downvotes.
This is not only completely false, it's rapidly provable as completely false. You are, quite literally, lying to yourself. As mentioned elsewhere in the thread, discussion on controversial issues is Exhibit A.
Two years ago I did an analysis of HN comment scores (back when they were publicly accessible). There are far fewer comments with -1 score than with 0 score: http://minimaxir.com/img/hn-comments/distribution_comment_po...
Just offering my perspective.
Not to knock the Anti-GMO argument, but it's similar to other conspiracy arguments. I can make so many rational arguments against it, but I know from the outset if I see a comment that says "Well of course the Hyper Loop isn't going to work, it was designed by Lizard people in order to keep us complacent!" I know that I will never convince that person to see what I would consider a rational viewpoint. It's a lost cause. And as I user of HN, I would consider such comments to be noise, and I would downvote them.
Edit: I have the unpopular opinion (at least on HN) of not liking the "right to be forgotten", not to tangent too much, but I think that the ability to substantially change as a person is an important concept of human development, and that by allowing people to willfully scrub their actions from the record, they encourage the myth that people can't change, and that we're always how we currently are. And that, I think, trains people to be less forgiving. It's harder to forgive when you don't think someone can change.
: although I do disagree with it (even beyond the screaming naturalistic fallacies)
Edit: To put it another way, I'll bet short term studies concluded that asbestos was "safe" to use as a building material. In the long run though, that didn't really turn out to be the case.
The problem is treating GMOs as a category. Maybe there's good reason to be concerned that genetic modifications that make crops less attractive to pests are hazardous to long-term health (I have no idea if there are), but what does that concern have to do with the risk of introducing genes to grow larger fruits?
I'm sure you can find reasons to be worried about any possible application of genetic engineering, but if you're willing to get that creative, you can find reason to be worried about any agricultural innovation. Maybe you claim that the introduction of a lentil gene to soy will indirectly create some carcinogen through protein interactions, but why don't you worry about the same issue when a new fertilizer is introduced? It seems to me that it's only because transgenic plants are new and scary and people are uneasy about "playing god".
Asbestos was ALWAYS known to be dangerous, so it's not a great example. I can't think of a case off the top of my head where long-term safety was incorrectly assumed from short-term studies. Perhaps some medicines? Anyway, in principle you're right that short-term studies say little about long-term impacts on health.
Off the top of my head, there are several consequences that fall into the realm of possibility. Larger fruits will require more resources from the host plant, which could alter its development in unpredictable ways. More nutrients could be taken from the soil in order to make larger fruits, leading to earlier depletion and requiring more crop rotation (a process many farmers put off due to profits). Larger fruits will attract larger animals to graze (just look at what happens when hikers throw apple cores / other compost into the woods, it alters the movement patterns of multiple species).
I understand these examples sound hyperbolic, and I think GMOs are definitely beneficial in some situations. However, introducing changes to the very genetics of our ecosystem faster than they would naturally occur has definite effects. To think that we can adequately anticipate, react to, and solve issues caused by these effects seems a bit hopeful, at least until we have a very advanced computational simulation of our ecosystem.
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."
- Ian Malcolm, Jurassic Park
For most of human history, memory was constrained to the brain, which remembers fuzzily at best and, within mere years, retains only summaries of past events.
Now, with high-quality video from phones, voice recordings, and text preserved in perpetuity, people can not merely remember that a person held inane ideas, but also see exactly what, when, and in which context they held those ideas.
So long as that's possible, it's easy for others to critique by saying "people can change, but X person said Y and that's beyond the event horizon for me".
Absolutely true. But is HN more closed minded than the folks you see everyday?
Personally, I find the HN community more open-minded than 95% of the people I interact with daily -- even after including what you've just mentioned.
I bet a lot of the reason people downvote you about Elon Musk is that maybe you don't accept him as a "hacker" and don't see the way he's challenging the status quo and pushing technology, cars, transportation, and energy systems into the future. Hacking the auto industry, the solar industry, the space industry. What he does very much fits the ethos of what makes Hacker News Hacker News.
Personally, I enjoy the negativity of Reddit. Snarky, sarcastic, vulgar, or insulting comments are, in my opinion, part of discourse too.
Can you give some examples where someone criticized Musk and was downvoted?
[edit, adding a link to a comment]
Here is one (th13's reply was downvoted): https://news.ycombinator.com/item?id=11030447
(This is more tedious than I thought, I'll see if I can dig up some more. Looks like in the future I should make note of these as I see them.)
[edit, adding a second link to a comment]
You may need to follow the 'parent' link to see the telltale gray of the downvote: https://news.ycombinator.com/item?id=10775287
(Seriously, this is tedious. I know there are more, but I won't dig up anymore unless I'm pressured to do so.)
An hyperloop is going to be a regrettable experience for everyone involved.
I think the fact that it's difficult to find examples of this is proof that there's not as much of a slant as you think there is.
Funnily enough, right around the time I made an acct here (4+ years ago, after ~1 year of lurking), I saw an internal post by a well-known Googler talking about the recent drastic decline of HN comment quality and looking for a replacement. I still get value out of HN comments (clearly), but it's definitely changed substantially in the last few years, and not really for the better. I've resigned myself to every high-quality Internet community without heavy, opinionated moderation being a slow treadmill towards popularity and then mediocrity, or worse. Though this is of course something of a "you're not in traffic, you are traffic" problem.
Maybe for the next community, instead of making it public - potential users will instead have to find the site by solving a bunch of hidden puzzles. Think secret club.
Jokes aside: a major part of why HN works is that the community here self selects for several traits which ensure healthier communication.
As a forum becomes more mainstream, many things happen to degrade the signal-to-noise ratio. (Pg had an excellent essay on the middlebrow argument effect, which for many other forums is a problem they wish they had.)
Eventually part of the crowd who came first realize it's not as useful anymore and decide to go somewhere else. The question is where.
We've seen eternal September come to almost every place, and there are only a few traits which can help protect against it.
From my experience with many forums/modding, demographics are 80% of the issue; the remainder is the topic of discussion, goal-orientedness in discussion, and moderator quality.
So the best solution is to create the forum and never advertise it. Let potential users figure out how to find it and how to get in.
Shrug, that just depends on where people choose to set their standards; there are much worse fora and much better ones. As I said, I still clearly find HN comments at least worth reading, but the distinction isn't as binary as you seem to think it is.
the interesting thing about this is that the HN comment system seems to discourage dialogue - i.e. you are not notified when someone replies to your comment. It seems to be more geared towards comment-it-and-forget-it.
It's rarely a problem if someone forgets to follow up since someone else will usually do it, at least if their views are shared by other people.
That being said, the "threads" page is useful for finding and responding to new comments in your threads if necessary.
Unlike Slashdot where you have to drill down quite a bit.
You're missing the point of e40's response (and others of its kind). It's not "stop whining, don't use it if you don't like it". It's informing you that there are still parts of Reddit you can get value out of if you change your usage patterns a little bit, instead of your blunt-instrument approach that leads to losing gems like (e.g.) /r/AskHistorians just because /r/politics is terrible. There's of course nothing _wrong_ with you deciding that Reddit overall isn't worth the effort, but there's nothing wrong with someone giving you an alternative in case you were unaware.
When Reddit blew up in popularity, almost everyone I know who'd used it for a long time got sick of the default feed (both submissions and comments) and then at some point found out that being parsimonious in your subreddit subscriptions can give you a pretty high-quality feed. Reacting with defensiveness and hostility to someone giving you that advice in case you didn't know is frankly just bizarre.
It certainly tends to get a lot of people riled quite easily.
What's that thing about polite company not talking about politics, religion, or sex?
The average person, myself definitely included, probably doesn't have much worth listening to on the subject anyway. To paraphrase PJ O'Rourke: though there certainly are many political commentators who might be worth listening to. Under some circumstances. In a crisis. Maybe.
r/programming is another good example. The community there is very hostile and actively downvotes and hates on most new things that aren't C++/Java/.NET/Python. It's been stereotyped as a bunch of overly angry .NET/Java devs; not sure how accurate that is, but the angry part is right. It's just a waste of time.
Similarly, I dropped r/gaming years ago, but r/games has been reliable enough and good enough for such a period that it's evolved into my default place I look for gaming news. (The League of Legends subreddit also does pretty well on content and moderation, for something running at the scale of some of the defaults).
The problem I have with reddit, usually, is discovery - it's difficult to find subreddits that are popular enough not to be dead, have the content I want, and are managed well enough to prevent the usual content/comment trash.
This is what is flawed about comments. It becomes an ego circlejerk... which is necessary to even generate comments in the first place ("ooh, look how much karma I get") but ultimately destroys a conversation.
You have a much higher percentage of users who are deeply interested in the raison d'être of the subreddit.
Beyond that, you either need moderation or have to make a new sub.
I haven't read the article this thread is about yet, either, because the title gives me all the information I need at this moment. I know how crazy the comments were getting for NPR, which I started reading regularly a while back because they allowed comments on their posts. Now, I skip the comments if I'm reading an NPR article because most of the time it's some guy spewing something about how Obama is destroying the country when the article has nothing to do with politics in the slightest.
People did this on a national scale, then were surprised to see Donald Trump have so much support.
Too much negativity is debilitating, but sticking your head in the sand has its downsides.
I have Adblock/Tampermonkey rules to hide comment sections from most news sites that I frequent. (If someone could point me to an extension that handles this, I'd be quite happy...)
I consider comment sections (or even comment counts) to be visual noise that distracts from grokking the actual content.
It's unfortunate, because I like hearing a variety of perspectives. But without highly aggressive curation, reading comments is a net negative on popular sites.
> Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.
that statement floored me
because historically, HN has been one of the most toxic comment forums I've seen. I've kept coming back despite it, not because of it. there's enough signal to justify it. but it's a kind of low-grade toxicity: a weird mix of passive aggressiveness, disagree-based-downvoting, "cite paper!"-ness, minutial edge case-oriented pedantry (that misses the forest for the trees) and neverending humblebragging.
To give just one example, I can't think of any place where I've seen more humblebragging and stealth-but-blatant self-promotion than on HN. Whereas on Reddit I know I can find discussions where the participants are friendly, funny, insightful, mature, organic, etc. Not always, not everywhere, but I don't have to wade through a shit salad like I've had to do here on HN for years.
I do think it's gotten better in the last year, maybe due to the work of dang and some of the other UI improvements.
I do agree that on sites like CNN, YouTube, the comment threads tend to be overrun with the LCD behavior. Lots of noise there.
what I mean is... I tried to make a subtle point, but it didn't come across. what I was trying to say was this is one of the most toxic sites I've seen for a certain mix of toxic behaviors: mostly passive-aggressive, hyper-nerdy/autism-spectrum types of comments, humblebragging, etc. I agree with you wholeheartedly, from my own experience that yes there are other sites where there is more direct abuse, more direct trolling and harassment, which doesn't happen (or nearly as much) here on HN. what I'm saying is, on the flip side, I see a lot more of a certain kind of... there needs to be a word for it... the slang terms I learned for it were douchebags, pinheads, compulsives, booksmart-yet-streetdumb, etc. I just have never encountered that as much over on Reddit (itself a broad category because so much variance between subreddits there, and front page vs not) as I have here on HN.
again, I wasn't saying there's no signal here, and no politeness. I see lots of politeness and maturity here. also lots of muck. thus my slang term "shit salad" -- mix of good and bad. my point was that there's a different assortment of bad behavior I see here than on Reddit, CNN etc.
it does appear to me to have improved a lot, especially over the last several months. we'll see. YMMV.
>>a weird mix of passive aggressiveness, disagree-based-downvoting, "cite paper!"-ness, minutial edge case-oriented pedantry
You are basically describing software developers.
They are passive aggressive because they don't like direct confrontation.
They downvote when they disagree because if they disagree that means you are wrong.
They nitpick on edge cases because edge cases are what they deal with everyday in their code, so they can't help it.
So really, it's not HN you have an issue with, but the developer crowd. ;)
Web technology scales, journalism scales (poorly, but a relatively small publication can pull big traffic), but right now there's just no substitute for someone manually checking out reported comments and banning problem users. When you have a site with as much traffic as NPR, that would probably take dozens or hundreds of people, and these orgs are loath to outsource it to cheap countries like the big web players do, mostly due to the ethical challenges.
Maybe moving comments to people's own social groups on FB/Twitter will help to defray the costs, I don't think they're really seeing any discussion value for the most part.
As such, bias and opinion is welcome, provided that it's analytical, verified by fact to a reasonable degree, and respectful of common etiquette. The genius in this approach, as far as I'm concerned, is that it manages to preserve the original purpose of comments: scalable content-generation!
Clearly, moderation is a Hard Problem, but one that I think benefits from an economic/incentives analysis. One conclusion I've drawn is that restricting comments to paid-consumers makes banishment and sock-puppetry costly enough that moderators can mop up the rest by hand.
To ask a specific question: what, exactly, remains "hard" with this approach? Do you think "free to read / pay to comment" is viable, in principle? Do you think the promise of publication is not a good incentive? Why?
It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and stop them from getting buried.
I'll err on the side of caution with revealing employee counts, but in my experience many of the FP/Atlantic/Mother Jones/Weekly Standard/Pick your midrange site are running on a single digit to low double-digit number of web production staff, many of whom are also trying to make a writing, article layout, or fact-checking quota. The suggestion that these magazines can either get those staffers to moderate tens of thousands of comments per day, or quadruple their web staff just to improve the comments ignores the business reality.
User moderation in the normal HN/Reddit way doesn't work well on news sites; it's too easy to game or brigade, and news sites can't or won't add unpaid moderators to be gatekeepers.
That's what's hard; creating comments is scalable, filtering them is not. Leaving them unfiltered doesn't work either.
>I think that incentive [is good], particularly when you're trying to draw subject matter experts.
You bring up an excellent point. One of the fundamental problems with comments, I think, is that it creates a space in which ignorance and expertise are equally-weighted. In fact, it's often worse than that for reasons we all know: interesting issues are hard to distill into 300-or-so characters, and short, simple points are often more percussive.
Vetting credentials is a very good option IMHO for certain forums but not for others. Reddit's /r/askscience is an example of a forum in which it works well.
>It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and stop them from getting buried.
I wonder if this problem can't be solved through the use of machine-learning to classify comments into high-versus-low quality by grammatical and semantic analysis. This kind of first-pass filtering could, at the very least, help throw out the obvious trash and pre-select candidates for recognition.
Such a system can be tuned to minimize false alarms (a shitpost getting flagged as good), which I think represent the most problematic of classification errors. This is a nice problem space for ML because the increase in misses implied by a bias against false alarms doesn't degrade the service much: not having one's comment selected for re-publication is unexceptional.
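As a very rough illustration (nothing like a production moderation system), here's what biasing a toy scikit-learn classifier against false alarms could look like; the training comments, labels, and threshold are all made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would need a large labeled corpus.
comments = [
    "This analysis overlooks the regulatory history of the sector.",
    "Great article! love, grandma",
    "you are all idiots lol",
    "The cited study has a tiny sample size, so I'd be cautious.",
]
labels = [1, 0, 0, 1]  # 1 = worth surfacing, 0 = noise

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Bias against false alarms: only surface a comment for re-publication when
# the model is very confident it is high quality. Missing some good comments
# is acceptable; promoting junk is not.
THRESHOLD = 0.9  # invented value; would be tuned on a validation set
proba = model.predict_proba(["Interesting point, but what about inflation?"])[0][1]
print(proba >= THRESHOLD)
```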
The cultural issue is that many news orgs are still run by people for whom the idea that technology could accidentally censor a valid criticism or ban a decent voice is just too risky. I think this is changing, and many newsrooms today are a little more fluid than when I really cared about the problem 4 years ago.
The tech issue is a little bit of a cop out on my part. An ML approach is super attractive to me as a techie. Google (YouTube), Facebook, NYT, WaPo, and tons of other billion dollar orgs have this problem, and could make loads of money by being seen as better communities.
On the more guerrilla side, hundreds of subreddits have automoderators written by savvy, caring moderators.
They have terabytes of training data, already tagged, and world class ML experts on staff. If it was a tractable problem with business value, why wouldn't they have fixed it? I'm guessing it's the sort of thing that looks doable from the surface, but you get buried in the details.
Again, cop out answer, so please go prove me wrong!!
I understand, and I think that's probably the most difficult problem of the two. I'd just like to point out -- in the interest of discussion -- three things:
1. Pre-filtering for moderators is different (much safer) than auto-banning by a bot
2. It's valid both to filter informed opinions that are poorly expressed, and for a publisher to have a preferred "voice", i.e. a style of writing that it favors.
3. The argument can be made that machines are no more biased than human editors, and that in many cases, the biases of the former are known. As a corollary to this point, there exist certain ML techniques (e.g. random forest classifiers) for which the decision process of an individual case can be retraced after the fact.
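To illustrate point 3, here's a small sketch of retracing which splits one tree in a scikit-learn random forest applied to a given case; the features and labels are invented:

```python
from sklearn.ensemble import RandomForestClassifier

# Invented numeric features per comment: [word_count, profanity_count, link_count]
X = [[120, 0, 1], [8, 3, 0], [45, 0, 0], [5, 2, 4]]
y = [1, 0, 1, 0]  # 1 = accepted, 0 = rejected

forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Pick a tree that actually splits, then walk the nodes it visited for one
# sample and print each decision that was applied along the way.
sample = [[6, 4, 2]]
tree = next(t for t in forest.estimators_ if t.get_depth() > 0)
feature, threshold = tree.tree_.feature, tree.tree_.threshold
for node_id in tree.decision_path(sample).indices:
    if feature[node_id] >= 0:  # negative values mark leaf nodes
        went_left = sample[0][feature[node_id]] <= threshold[node_id]
        print(f"node {node_id}: feature {feature[node_id]} "
              f"{'<=' if went_left else '>'} {threshold[node_id]:.2f}")
```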
How do you think publishers would respond to these counter-points?
Counter-cop-out: someone has to be the first!
Somewhat-less-cop-outy-counter-cop-out: by your own admission, certain sites (e.g. Reddit) have high-quality automoderators.
I would argue that the problem is "approximately solved" and that this is sufficient for the purposes of moderating an internet news publisher. Again, I would make the signal-detection-theoretic point of my previous comment: I can selectively bias my automoderators in favor of reducing either false-alarms or misses. Of course, this brings us back to the cultural problem you mentioned.
By this I conclude that the bottleneck is cultural, which brings me to a follow-up question: what do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?
> How do you think publishers would respond to these counter-points?
In my experience 1 and 2 are fine, but 3 is actually a _net negative_ to some of them. People who by and large have come up through 10+ years of paying dues in a 'The patrician editor is always right' culture _hate_ giving up control, even when it makes their jobs easier.
Editors I've seen have balked at things like Taboola and Outbrain, despite them being testably better than human recommendations and saving staffers work. It's a fair argument that picking which stories to promote is a core part of the editorial job more so than comment moderation, but the attitude match is there. Editors at one DC media org I didn't work for shot down A/B testing any new features in the first place, because there was an assumption that the tech staff would rig it!
I don't want to paint 'editors' with too broad a brush, but there's definitely a cultural reluctance at the high level to automated decision making.
> What do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?
It doesn't matter to you and me. We think like HN'ers, where there are trillions of internet packets flowing around every day, and a few will get lost. They think like hometown newspaper editors parsing letters. When you take on the responsibility of being a gatekeeper, screwing it up is a big problem, every time.
I think increased tolerance is coming from more exposure to the sheer volume (every week at FP the website gets more visits than the number of people who have ever read the magazine in its 50 years of existence combined), and a bit of throwing the hands up and saying "who knows".
Again, I'm speaking for a pretty specific niche of old-school newspapers and magazine people turned editors of major web properties, because those are where my friends work. Things are probably different at HuffPo or Gawker or internet native places, but clearly not that different because their communities are still toxic.
> I would argue that the problem is "approximately solved"
So I disagree here, but don't have evidence to back it up, other than years-old experience with Livefyre's bozo filter, which we didn't put enough work into tuning to give it a super fair shake.
Taking spam comments as mostly solved, I think there are 3 core groups of 'noise' internet comments:
1. People who don't have the 'does this add to the discussion' mindset, to use HN's words. cloudjacker's and michaelbuddy's comments below demonstrate this pretty well. I'd lump cheap-shot reddit jokes in here as well. They're not always poor writers, or even negative -- "Great article! love, grandma". Which falls back into the ethics of filtering them.
I suspect that this is 80%+ solvable.
2. The 'bored youth' and 'trolls' group. This is actually the worst group, I think, because these are the people I suspect make death threats and engage in doxxing and swatting. Filters will catch some of them, but they're persistent, and many of them are tech-savvy and reasonably well educated. They can sometimes be hard to tell from honest extremists. A commenter from group 1 who is personally affronted can fall into this group, at which point they become a massive time suck. Hard to solve, but verified accounts help here in the US case.
3. Sponsored astroturfing. Russia, Turkey, (pro/anti) Israel, China, Trump (presumably the DNC?) all have a large paid network of people just criss-crossing the internet all day, trying to make their support base look larger than it is. Especially in the US politics case, they often speak good English and are familiar with both sides' go-to logical fallacies. They'll learn your moderating style in a heartbeat, and adapt. Unsolvable.
Anyway, if someone builds a good bozo filter, they're almost certainly a zillionaire. I hope it happens, but I suspect we'll just start looking back on website comment sections like usenet, as a good idea that didn't scale very well, and find something better.
It's pathetic bottom-feeder crap.
Maybe if I fed the beast through tracking, I'd see higher quality recommendations, but I won't, and I don't. They only serve to tell me just how precariously miserable the current state of advertising, tracking, surveillance-supported media is. I'm hoping it will crash and burn, not because I want present media organisations to die, but until they do, we don't seem to stand any chance of something better.
(What better, you ask? Information as a public good, supported by an income-indexed tax.)
I agree that the '10 weight loss secrets' junk promoted to third-party sites is bottom-scraping.
Sufficiently that the in-site referrals fail for technical reasons.
Perhaps the comments sections for journalistic pieces from organizations like Ars, NPR, NYT, local news, etc could be more of a competition (like Slashdot). Top 300 comments get preserved, leave it open for a month with no comment limit and some light moderation, and let the conversation go wild (I like Reddit's system for this), then delete all but the top 300 at the end.
Adjust "300" and "top" to fit your organization's needs, just make sure they're clearly defined. Would also help limit the scope for an ML-based solution, too. :)
Moderators would still be needed, but their workload would be reduced. And there would be money available for them, since many would subscribe or donate just to be part of the community, which would make moderation less of a drain and more a part of the core profit-making.
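If it helps to picture the mechanics, here's a toy sketch of the close-and-prune step; the `Comment` type, the 30-day window, and keeping exactly 300 are all assumptions lifted from the proposal above, not a real system.

```python
# Toy sketch of the "keep the top N, prune the rest" idea.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Comment:
    author: str
    body: str
    score: int
    posted_at: datetime

def close_and_prune(comments: list[Comment], opened_at: datetime,
                    keep_top: int = 300,
                    window: timedelta = timedelta(days=30)) -> list[Comment]:
    """After the open window ends, preserve only the top-scoring comments."""
    if datetime.utcnow() - opened_at < window:
        return comments  # still open: keep everything, moderate lightly
    ranked = sorted(comments, key=lambda c: c.score, reverse=True)
    return ranked[:keep_top]
```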
I don't think you're correctly identifying the problem. In my experience, the problem with comments, especially on news sites, is a glut of bad comments, rather than a lack of good comments. This solution doesn't disincentivize bad comments.
Slashdot nailed moderation; no one has attempted something similar. Most systems are a simple up/down vote or like/report.
I am also starting to wonder if the age group being hired to implement "social" for websites is now young enough to have missed Slashdot in its prime.
The fact that people are still brainstorming from scratch, instead of talking about how to improve Slashdot's model, reeks of reinventing the wheel because they never heard of it.
That's me! Can you explain the Slashdot model and why it worked? Or point to a good write up about it somewhere else?
Slashdot's model was perhaps a little overcomplicated, but my favourite feature was the ability to tag up/down votes with flavours. +1 Informative was different to +1 Funny, and "Factually incorrect" was a different downvote to "Off-topic spam" (whatever they were called).
Other quirks off the top of my head: it capped at +5 and ... -1, I think? The score represented a thing closer to the up/down ratio than "Facebook likes". There was a dedicated -1 Overrated moderation for "I don't disagree that it's interesting, just not +5 interesting".
Also, logged in users got a fixed number of moderation points at random intervals, and you couldn't moderate in a story that you commented in. I'd like to believe this discouraged "throwing away" points on low-effort joke comments, but I'm not sure the facts of Slashdot comments entirely bears that out.
EDIT: And then there was meta-moderation...
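From memory (so treat the details as assumptions rather than Slashdot's actual implementation), the scoring part worked roughly like this: each moderation carries a flavour and a +1/-1, and the displayed score is clamped to the -1..+5 range. A toy sketch:

```python
# Toy sketch of flavoured, clamped moderation as I remember it; not
# Slashdot's real code. The flavours and the -1..+5 cap are from memory.
FLOOR, CEILING = -1, 5

def moderate(score: int, delta: int) -> int:
    """Apply one +1/-1 moderation, clamping to the displayed range."""
    return max(FLOOR, min(CEILING, score + delta))

score = 1  # logged-in users started at 1, IIRC
flavours = []
for flavour, delta in [("Informative", +1), ("Funny", +1),
                       ("Insightful", +1), ("Overrated", -1)]:
    score = moderate(score, delta)
    flavours.append(flavour)

print(score, flavours)  # the score caps out even if more upvotes arrive
```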
especially all the Moderation, metamoderation, and Karma sections!!
A great deal of high-quality commentary was buried, however, often the best and most informative. That's pretty much par for the course.
Much of the early vibe on the site came from the fact that it was simply where intelligent people were commenting online -- especially the early Free Software crowd (well, early in terms of Web 1.0 -- there was the whole 1980s and early 1990s contingent as well).
ESR (before he went fully whackjob mode), Ted Ts'o, Alan Cox, Bruce Perens, Rasterman, and others.
Much of that group seems split amongst HN, LKML, LWN, and Google+ these days, along with some blogs.
I'm about ready to blackhole that domain.
Needless to say, an event happened and was reported the next day, so it could be a whole week between the Trump-of-the-day saying something and a comment appearing about it. All of this would be filtered by the 'editor'; however, you did have frequent letters from the likes of Keith Flett, who somehow got his letters published more often than the other 3-5 million readers (as it was back then, just UK sales with poor distribution in places like Birmingham).
There were no 'likes' back then so you had to have something to say to bother writing in.
How do we get a digital equivalent? I don't buy the dead-tree paper these days so no idea if 'letters to the editor' still exists, but, back then it was good, very good.
I assume it would kill some collaboration/innovation like on HN or a meaningful subreddit, but maybe no one really ever has anything meaningful to say when reacting to general news...
I guess it would also produce duplication from many people not knowing something was said already (however, the duplicate reactions could be monetized later down the line maybe...)
The podcast is .NET Rocks and their comments seem to be pretty good overall.
Not everybody could promote or demote comments. You got randomly assigned the ability to moderate comments so when it came your turn you took it _seriously_.
That community had some of the highest quality comments. Then somewhere in the mid-2000s it got super anti-Microsoft and anti-anything-not-F/OSS. I'll give them credit; it probably reflected the highest quality comments of their userbase at the time.
There was a big bias towards early comments - moderators had to see your comment before they could upvote it to the top of the page, but once it was at the top more people would see it and keep it there; so a comment that would score well if posted as comment 10 would score nothing if posted as comment 50.
And karma tended to reward /popular/ comments, which were often things the hive mind agreed with, rather than high-effort comments. Discussion about DRM? Get in early with "DRM is impossible because" or "format-shifting should be a right" for a quick high score.
At one point I actually helped design something somewhat better (which was implemented in part). I've had further ideas since.
One of the biggest differences between Slashdot and a site like reddit is simply size. Reddit is now the 8th or 9th largest website in the U.S. according to Alexa; it's getting as big as Twitter, and is larger than Netflix. Slashdot at its peak popularity wasn't even a drop in that ocean of traffic and pageviews. When you get that big, your problems are of a different sort, requiring different solutions. Hell, I think reddit has single subreddits that are bigger than Slashdot was at its peak.
This is important because it's easy to have "high quality" when your traffic is low. It's easy to moderate and easy to keep people on-topic. I speak from experience -- I moderate one or more default subreddits on reddit, as well as smaller subreddits, and the smaller ones are much easier to handle. They're virtually on autopilot with minimal moderation required. The larger ones on the other hand... It's like a non-stop war.
It takes an author far, far longer to craft their work than it does for someone to heckle it.
If people weren't driving up page-views by coming back to the same article to see if their comment was liked or replied to, I think this would be a very easy decision for most sites: at some point you are responsible for all of the content on that page.
For every post you made, you had to enter a captcha.
Then, if your comment had words that were likely to be hateful, it’d show your comment again to you, and force you to enter the captcha again.
The worse the likely quality of your comment, the more captchas you’d have to enter.
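A rough sketch of that escalation logic, purely as I understood the description; the scoring function, thresholds, and round counts below are hypothetical, not the original site's implementation.

```python
# Hypothetical sketch of escalating captchas: the more likely a comment
# is to be hateful, the more captcha rounds the poster must solve.
def captchas_required(hate_likelihood: float) -> int:
    """Map a 0..1 'likely hateful' score to a number of captcha rounds."""
    if hate_likelihood < 0.2:
        return 1          # everyone solves at least one captcha per post
    if hate_likelihood < 0.5:
        return 2
    if hate_likelihood < 0.8:
        return 3
    return 5              # the worse the likely quality, the more friction

def accept_comment(comment: str, score_fn, solve_captcha) -> bool:
    """Only publish once the poster has solved every required captcha."""
    rounds = captchas_required(score_fn(comment))
    return all(solve_captcha() for _ in range(rounds))
```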
But suggesting people engage instead on Facebook brings a whole new set of ethical concerns. (1) Facebook manipulates users. (2) Facebook reorders the feed. (3) Facebook would lower the priority of conservative news sites. And let's not forget that Facebook is probably outsourcing moderation anyway. Plus, Facebook commenters can be just as bad as regular site commenters.
I worked on the trending product. This did not happen. The whole thing goes back to one guy complaining now that he couldn't pick Breitbart for the highlighted slot for some story because it wasn't on the list of approved sites. And this list is actually available here https://cdn.ampproject.org/c/newsroom.fb.com/news/2016/05/in...
Of course no one ever asks why he wanted to pick a controversial site to highlight instead of, say, a boring, straightforward wire-service report like the AP's.
Of course the story still appears, and Breitbart could appear in slots 2-N via the personalized ranking algorithm, so it's not like it was suppressed. He just wanted to shove it into slot 1, where everyone would see it.
FWIW FP briefly used an embedded Facebook widget, and a nonzero percent of their Livefyre users logged in via FB.
It did little to nothing to stop abusive comments. The HN crowd cares a lot about what sort of history follows around our names and our handles. Many others, both in the western world and abroad, do not.
Here's the talk about it (in German):
- a chrome extension
Unfortunately at least 90% of internet comments are trolling, vitriolic, ignorant, generally useless, poorly written, unhelpful, add nothing to the topic, and basically serve as web pollution.
The one place I've actually found awesome comments was The Economist (well, HN isn't bad either), and the NY Times is kind of OK. Everywhere else feels pretty iffy...
While a facebook account gives some legitimacy, I also like sites where you can post anonymously or at least pseudonymously.
Those were your users.
Someone who posts "Sir your magazine and Hillary Clinton are tools of Israel and should be killed by Hamas, God willing" on every story about the State Department, or "Oh $WRITER I see you live in DC and went to $COLLEGE, maybe I'll come pay a visit to the next alumni event and teach you some respect for $COUNTRY", isn't the target user for a major American publication. It doesn't want those kinds of abhorrent sentiments to live alongside its brand on its website, and is under no obligation to give voice to their ideas.
They're an exceedingly small percent of total readers (when they're even real readers), but a much larger percent of online commenters, hence the problem in the first place.
Even in the non-bot non-astroturfing case, the people who make those comments may be actual readers (although they're exceedingly unlikely to be paying subscribers), but they definitely fall into the bucket of 'can be filtered out, to no appreciable loss'.
They're users in the sense that the website is free, and anybody can be a user, but not in the sense that the publication has a duty to them, in exchange for their money or attention.
What OP really wants are the good comments, which is more than just spam filtering and also more subjective. If an ill-informed, 13-year-old's comment would be considered low-value noise, website operators would need to engage in something resembling censorship, which has its own set of problems.
HN is failing at comments. Over the last few years, the community has deteriorated to the point where, for many articles, every single comment is grayed-out and downvoted. That signifies quite a rift in the community. HN used to be upvote-intensive and excitement-driven, but today it's downvote-intensive and annoyance-driven.
Often justified. Sometimes not.
Snowden because it's nothing we don't already know, and refugees or gender politics because they always degenerate into political (i.e. not interesting) mud slinging matches.
On a side note, if a community with the general high quality and good moderation of HN can't have a good discussion on those topics online, I'm inclined to believe that having same is just plain impossible.
Personally, my thought process upon seeing one of these articles is something like:
1) Ugh, another one. Let's check the comments..
2) As expected, a dumpster fire. Nobody even RTFA. Let's look at the article..
3) Nothing even remotely new or interesting. Who voted this up? Flag.
Early "democratic" systems were often anything but -- about 14% of Athens' citizens could vote, and about 6% of the US at the time of George Washington's election. There are arguments for a broader electorate, but they come with distinct problems.
Vote brigading in particular is a standing issue on almost all online moderation systems. Some sort of trust cascade might help. It's what, say, the US electoral college was meant to provide initially, though how much of that function remains (and how it might manifest) is rather in question.
As for Snowden, a counterpoint is that some people see this as an issue which requires constant reminding. Advertising and propaganda both work through repetition, and sometimes the truth gets a chance at that as well. There's certainly enough repeat traffic on other topics at HN. (Though yes, many of those get beaten down in the submission queue.)
Marking down the comments indicates a desire by some to enforce groupthink. Why? Because many people use votes to indicate agree/disagree instead of as a quality metric.
And given what I've seen of the algorithm here, page positioning is a lot more complicated than vote count weighted by time.
But let's say that two articles were in the queue, one pro-X, the other anti-X and the pro-X forces were dominant. Sure the pro-X article would hit the FP, but the anti-X forces would still comment on it and be down voted.
Also the bias is only visible in the comment section because down voted comments remain visible, whereas a down voted article gets flushed down the memory hole.
"Who voted this up?" is the community rift.
And you'll note I said "online".
We live online.
Also, internet forums have learned over multiple decades that otherwise interesting discussions can easily get derailed by people screaming at each other over unresolvable issues. If the community doesn't keep a lid on it to a degree, the quality of discourse goes into a downward spiral that it can never recover from. It attracts people who just want to argue about stuff and it drives away people who want to have interesting discussions. This has been seen time and time again, in newsgroup after newsgroup, mailing list after mailing list, web forum after web forum.
Holding back that inevitable decline is like fighting against entropy: if it stays popular, HN is almost guaranteed to decline and become more and more like Slashdot circa 2010, right before it poofs out of existence and/or relevance. But if users actively push back against the tides of forum entropy (i.e., discussion getting drowned out by arguments), a forum can at least have a nice long run before that happens.
I think what people want to avoid on HN is the sort of discussions where people are just asserting hot takes back and forth to no other end than the act of publicly asserting hot takes. This was never fun to watch on Crossfire or First Take or whatever, it's not fun at awkward drunken family gatherings, and it doesn't fit in with the vibe of HN. It's invigorating to the participants but much less interesting to read, and for every poster there are hundreds or thousands of readers.
That applies to online forums just as much as it does to real life, some forums are just more focused than others (just like some households are way louder, more chaotic, and have more drama than others). Almost every place other than HN thrives on arguments, so at least there are plenty of places to have them.
HN has far better comments than any disqus comment feed, on average, in my opinion.
HN used to have far better comments, now it feels like a battlefield.
Politics seems to be the force that can bury any amount of advancement in other fields, hence the interest.
The New York Times is probably the only site I know of that does comments well, and they are obviously heavily moderated. But, they're smart, sometimes funny, often insightful, and generally worthwhile to read.
Some general forums and social sites do comments reasonably well too, this one included. But Reddit is a toilet, and Facebook and Twitter are the dirtiest of cesspools.
As for Brooks' column, you might be missing some context. Brooks has made a career of talking out of both sides of his mouth and (annoyingly) providing intellectual cover from the NYT for a plethora of bad conservative ideas. Now that they're blowing up in his face, he's backing away from these stances.
This comment articulates it well:
"He should realize that we’ve been trying to bring the tribal ethos to the U.S. for a long time, with strong local communities providing the sort of help and social services that bind people together and take care of each other as we get older, or fall short in some way.
But He Who Talks with Forked Tongue likes to imagine an egalitarian utopia where 99 percent of us are quietly stitching blankets while a few get to hoard the vital resources. When the tribesmen and women protested and occupied Wall Street, Brooks nearly went on the warpath, and wrote a column in the Times entitled “The Milquetoast Radicals,” (10/11/2011) in which he castigated the unwashed hippies who dared to protest the insane degree of income inequality in this country."
The David Brooks article was just an example. When Bernie Sanders was still campaigning every comment on Hillary/Bernie-related articles was about how the New York Times is wrong and that Bernie is the best, people will learn about the political revolution soon enough, etc. I was a huge fan of Bernie and I got bored of those comments instantly.
Anyway, one of the few places I read that, for whatever reason, does carry an even mix of intelligent comments across the spectrum is interfluidity. For example:
My comment was mostly meta, calling out people for missing the point of an op-ed. The op-ed was from a privacy/civil liberties person about why a "no buy" list for guns would be a bad idea. He wasn't arguing on the merits for or against gun ownership, just that these secret lists on which LEO acts are dangerous.
Every comment was something along the lines of "What about my right not to be shot in the streets?!" - I tried to point this myopic view out, and every reply to my comment was "What about my right not to be shot in the streets?!".
As a smallish-government liberal (Public services are well and good but government should be kept in check by a powerful and vigilant population) I get torn up whenever I defend gun rights or spending reductions on that website.
There's the token payment. It doesn't cost much to post, but it does cost to be an asshat.
It's tiny. The userbase and total content volume are a small fraction of other social networks. Which I've objectively measured.
The S/N is amazingly high. Better than all but the best paid-contribution (e.g., professional media) sites.
Automatically generating a stupid sentence is hard. Stupidity is difficult to simulate. Not lack of coherence. Not pure noise. Not word salad. Not poorly trained neural networks. But stupidity. It's hard.
Well, you can pretty much come up with a stupid paragraph on any topic if you just write a small script that searches YouTube using the appropriate search pattern, and just grabs some random comment from the top link. Chances are pretty good it's a stupid statement.
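For what it's worth, here's a hedged sketch of that script using the YouTube Data API v3; it assumes you have an API key, and it ignores quota limits, error handling, and videos with comments disabled.

```python
# Sketch: grab a random comment from the top YouTube search result for a
# topic. Requires `pip install google-api-python-client` and a Data API key.
import random
from googleapiclient.discovery import build

def random_stupid_sentence(topic: str, api_key: str) -> str:
    youtube = build("youtube", "v3", developerKey=api_key)
    # Find the top video for the topic.
    search = youtube.search().list(
        q=topic, part="id", type="video", maxResults=1).execute()
    video_id = search["items"][0]["id"]["videoId"]
    # Pull its comment threads and pick one at random.
    threads = youtube.commentThreads().list(
        videoId=video_id, part="snippet", maxResults=50).execute()
    comment = random.choice(threads["items"])
    return comment["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
```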
On the volume vs. time-to-assimilate basis, nobody can read more than a minuscule fraction of online content.
1. require significant investment from the userbase to participate successfully
2. promise to assist the userbase with something that is critically valuable to them in exchange
3. regularly deliver on 2 to keep 1 worthwhile
4. provide reputation tracking and management utilities so that the user can cultivate a profile that reflects the investments made in point 1
5. for recurring participation, provide variable rewards that trigger the brain's hooks for surprise, which translates to enjoyment.
HN hits those points, but blogs definitely don't and don't want to. They want to bring in barely-interested readers from search, from anywhere on the web. Many of these readers won't even make it into the comment section. Thus, a good community tacked on to the bottom of their articles seems unlikely.
If those principles are required to cultivate a worthwhile community, the community should always occur external to the publication of the article. The community needs to be the centerpiece, not the article. I use HN this way; the discussion is the primary thing, the articles are the subjects submitted for the community's discussion.
The other caveat is that it's difficult to provide points 3 and 5 when you're just starting out. From what I've seen, it practically always has to be artificial until the momentum becomes self-driving (if there is a physical community that uses the online forum for spillover, this may not be applicable; this is basically what happened with HN). We need better solutions there.
Since individual blog posts have certain quantities of Google juice, comment sections will be overrun with spa--err, "SEO professionals". Other participants are often low-effort drive-bys. If the community isn't the principal focus, participation will be spotty and it will be hard to develop elements fundamental to meaningful community engagement.
Facebook and Twitter are normal people making normal comments on random stuff they see. These people generally feel a compulsion to let their feelings out and Facebook/Twitter provide it. I believe this is probably what was originally intended for blog comments, but because Facebook/Twitter are real, stable communities, the participants are inclined to leave comments there instead of on the target article itself. These comments are often loose and instinctive, which is not necessarily to say they're invalid or worthless, but a community won't form around individual postings because there's no common unifying dictum (point 2 is unfulfilled, and point 1 is minimal on random FB/Twitter posts).
It would be really nice to see someone figure out a good way to bring forums back in a way that didn't turn into a black hole.
I think that's why reddit grew so large, so quick. It does what forums do (provides similar discussions), except better.
Reddit definitely does better as the 'front page of the community', but you see fewer cases of hundreds of posts accumulating over two months on a single reddit entry without a moderator's pin.
Isn't that what Reddit is trying to do? Creating a subreddit seems to be the default way to stand up a new forum. Whether the platform is up to the task.... jury is still out.
When done well, comments on the article form a discussion, debate, or some other meaningful exchange of knowledge.
But with the modern web and social media, it's usually all bile and trolling.
I've found just the opposite. Coincidentally, all the subs I'm subscribed to have fewer than 40k subscribers.