NPR Website to Get Rid of Comments (npr.org)
271 points by hampelm on Aug 17, 2016 | 433 comments

This is an interesting move, and a topic I've been discussing with friends. I've stopped using sites like Reddit, because the comment sections are just toxic. I've gone as far as installing Chrome extensions that make comment sections disappear from popular sites like YouTube. It's too easy to get drawn into the negativity, and I'm completely over it. "Social networking" has reached its low point as far as I'm concerned. Hacker News is about the only civil place where I'm still capable of contributing to a discussion at this point.

Even HN has declined, but what bothers me most is that HN has a definite range of views that are acceptable. This limits useful discourse. e.g. someone should be able to criticize Elon Musk without being downvoted to hell.

While HN does stay civil, I think it is too limited in general. Don't get me wrong, the reason I use HN is because the discussion hasn't devolved into /. or reddit. However, HN would do well to encourage dissent.

I also get that HN, being mostly about tech startups, has different priorities than me personally.

I am forever in search of a place online to have good discussions and debate with a respectful community. The comment sections on news sites are rarely the place — even for very respectable publications. I'm fully in favor of NPR's move and other sites should follow.

> e.g. someone should be able to criticize Elon Musk without being downvoted to hell.

I wish downvoting weren't allowed without a reply to the parent post; that way, if I'm being downvoted nonstop, I'd know why. Sometimes I get downvoted like crazy and have no clue as to why. And even if I can guess, it doesn't add to the conversation to just shut me out because you disagree, without teaching me why. I wonder how HN would be different if, to downvote, you had to respond to a post.

Also, I think that replies accompanying down-votes should be displayed directly next to the comment as a sort of tag (i.e. you can’t click a down-arrow alone, you have to type out something like “not accurate” or “flame” or “I don’t like his hair” or whatever the reason is) and then the comment appears as: foobar_user 1 hour ago | -1:“not accurate” -1:“flame” -1:“I don’t like his hair”.

I think providing short reasons isn't enough. If you wrote a long post and I downvoted it with "not accurate" it doesn't help much, does it?

"Not accurate" may be useful feedback even if it's false. For example, perhaps several such responses in a row would prompt the author to link to supporting data, or reword their comment to be clearer. The real question though isn't whether it's always helpful, but whether on average it's more useful than just a bare downvote. While some of the responses will be "not accurate", others will be an extremely actionable "link broken" or "missing word".

I think the real issue of requiring comments on downvotes is that it would either disallow anonymous downvotes, or encourage anonymous comments. Perhaps de-anonymizing downvotes without requiring a comment might be a better first step: if you aren't brave enough to have your name associated with the downvote, then maybe you shouldn't be downvoting. I'm sure this would cause its own problems, but I suspect it would be a net positive. At times I think it might even be good to have all voting and flagging actions on HN to be public record.

If I add a comment I know will be downvoted, I simply move on with my day, not bothering to see the responses generated. This is because, even though I work hard to only put forth constructive criticism, and in a manner such that my logic / reasoning is plainly spelled out...

On HN, unpopular comments are, often, still voted down despite the facts and despite the presentation.

I usually reserve these instances for when I know more than a little on the topic, generally professionally speaking, and often first hand. But, if it is unpopular, it doesn't matter. It's predictable.

Like Slashdot, if you remember that, +1 Insightful etc etc

I really love your idea

It's like slack reactions.

A hypothetical social-discussion model I've been meaning to build an MVP of:

1. there are replies and up-voting, but no down-voting;

2. you can classify a comment reply, at time of posting, as a "rebuttal";

3. up-voting a rebuttal comment will also be considered as down-voting its parent;

4. the aggregate score of a subthread is calculated as a Euclidean distance, with all the positively-scored comments as dimensions. Thus, if you've got one great comment made in response to a bad comment, the thread will stay un-collapsed; if you've got two equally-great comments, the thread will rise, etc. (Likewise, if you've got good discussion happening in response to a really bad post, the discussion's value will still propel the post to the hot page.)
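A minimal sketch of the scoring rule in point 4, assuming each comment in a subthread carries an integer score (the function name and input format are hypothetical, not part of any real system):

```python
import math

def thread_score(comment_scores):
    """Aggregate score of a subthread: the Euclidean norm over its
    positively-scored comments (negative scores contribute nothing).
    One great reply keeps a thread un-collapsed; two equally great
    replies push it higher than either could alone."""
    positive = [s for s in comment_scores if s > 0]
    return math.sqrt(sum(s * s for s in positive))

# One great comment (12) made in response to a bad one (-5):
print(thread_score([-5, 12]))   # 12.0
# Two equally great comments outrank a single one:
print(thread_score([12, 12]))   # ~16.97
```

Note how a thread whose only comments are negatively scored gets a score of zero rather than a negative number, which matches the "flagged dead but still visible" idea later in the thread.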

> 3. up-voting a rebuttal comment will also be considered as down-voting its parent;

I often upvote both a comment and its rebuttal, because in many debates I have no dog in the race and I often find that people on both sides have valuable or otherwise interesting insights. Even if I don't agree with them completely, I sometimes find that I'm able to view an issue from another perspective that I hadn't considered before.

Not to mention, many debates really just don't have a clear answer, and to pit commenters against each other like that is to enable one of them to be declared the "winner", which I think would be misleading in many cases.

Not every reply would be a rebuttal. Not even every reply challenging the premises of the parent comment would be a rebuttal. Maybe "rebuttal" is the wrong word.

In a discussion being had in good faith, a thread will frequently go into a pseudo-debate mode to "find the truth" of a statement someone offered. Kind of like we're doing here. We all add anecdotes and counterfactuals and so forth, and see where the inductive process takes us. None of these comments would/should be tagged as rebuttals.

But there's a very specific situation where, I think, this feature is an important addition: when the original comment (or the post-link heading the discussion!) just doesn't know what it's talking about, and the reply outlines why this is so.

The concept is less like "this post disagrees with its parent, and so up-voting (agreeing) with it should mean down-voting (disagreeing) with its parent", and more like "this post is a petition to flag/retract its parent, and up-voting it is signing that petition."

This is why I had point 4 in the above, which might otherwise seem an unrelated feature: negatively-scored posts are not "disagreed with", but rather "flagged dead." But negatively-scored posts must still stay visible, in order to give requisite context to their positively-scored rebuttal-comments. And, in fact, a thread containing a negatively-scored top-comment might even still be the top-sorted thread, if its replies are considered valuable enough.

(One might still want to add some visual effect to remind those reading the post that the community considers it "retracted." Perhaps adding a strikethrough that disappears on hover, or a background of faint red Xes. You don't want to make the post illegible—it's still necessary context for its subthread, unlike the current HN/Reddit system where the post "fades" to nothing, and then the whole subthread gets considered a lost cause and collapsed.)

Why remove downvoting directly? I'm not against the idea, I just want to understand the thinking.

I wonder if the problem is the binary nature of so many things. 5 star/1 star reviews are the most common, we have up and down votes, but no real context for that. I wonder if the problem that needs to be solved isn't context, rather than directional.

This is me spitballing, so feel free to IGNORE everything below.

I was listening to the Freakonomics podcast on proper voting (e.g. elections) the other day - this one: http://freakonomics.com/podcast/idea-must-die-election-editi... - and there was an idea called quadratic voting. http://ericposner.com/quadratic-voting/

Anyway, the idea is that we all get X votes, and we can put more votes on some things; e.g. if I REALLY want marijuana legal, I can cast 4 votes. The twist is that the cost of extra votes is the square of the number of votes: 2 costs 4, 3 costs 9 (we all get how the maths works).

The theory is that this lets people vote hardest on the things they actually care about, rather than weighting everything equally.

I was wondering whether there isn't some way to create a system where you earn votes that you can spend in a similar way, e.g. you "earn" upvotes from others and can spend them elsewhere. If everyone got a single vote, but you earned extra votes you could spend in interesting ways at increasing cost (e.g. you could heavily downvote an idea at an exponential rate), it might make people less of a victim of lowest-common-denominator views, give fringe views more airtime, and make people consider a vote more deeply. Not just yes/no, but how much do I hate/love this idea? Enough to blow 100 points on 10 down/up votes?

Anyway, just some random braincrack (https://www.youtube.com/watch?v=0sHCQWjTrJ8) I needed to get out.
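The quadratic cost rule described above can be sketched in a few lines; the function names are made up for illustration:

```python
def vote_cost(n_votes):
    """Quadratic voting: casting n votes on one issue costs n^2 credits,
    so expressing intensity gets expensive fast."""
    return n_votes ** 2

def affordable_votes(budget):
    """Largest number of votes a given credit budget buys on one issue."""
    n = 0
    while vote_cost(n + 1) <= budget:
        n += 1
    return n

print(vote_cost(2))           # 4
print(vote_cost(3))           # 9
print(affordable_votes(100))  # 10: blowing 100 points buys 10 votes
```

The superlinear cost is the whole point: a tenth vote on one issue costs 19 more credits than the ninth did, so piling on gets progressively harder.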

Another social-discussion voting system I wanted to try out was one where:

1. both of the 'unit-weight' up- and down-votes were only done through implicit/passive actions (basically a piece of Javascript running in a browser extension, that tracks whether you read the whole article/comment—i.e. scrolled through it at a reasonable reading speed—before scrolling away/closing/going back/beginning to reply)

2. there are above-'unit-weight' up- and down-vote mechanisms that are explicit, but require something humans consider very-slightly onerous: solving a CAPTCHA, paying a tenth of a cent of pre-paid credit, etc.

In my conception, the small votes would be named "OK" and "Meh", and the large votes "Love" and "Hate", with about a 10x difference between their power. The site, of course, would be called "Mehddit." ;)
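The two-tier weighting could look something like this (the vote names and the 10x ratio come from the comment above; everything else, including the scoring function, is a hypothetical sketch):

```python
# Small implicit votes vs. heavy explicit votes, at a 10x power ratio.
VOTE_WEIGHTS = {"OK": 1, "Meh": -1, "Love": 10, "Hate": -10}

def comment_score(votes):
    """Sum the weighted votes cast on a comment. 'OK'/'Meh' would be
    recorded passively from reading behavior; 'Love'/'Hate' require a
    slightly onerous explicit action (CAPTCHA, micro-payment), so they
    should be rarer but carry 10x the weight."""
    return sum(VOTE_WEIGHTS[v] for v in votes)

print(comment_score(["OK", "OK", "Meh"]))     # 1
print(comment_score(["Love", "Meh", "Meh"]))  # 8
```

One enthusiastic "Love" outweighs a handful of passive "Meh"s, which is the intended asymmetry between effortful and passive signals.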

> I was wondering there isn't some way to create a system where you earn votes

I am wary of this, especially when it comes to the web. I can see this system being "controlled" by early and/or very active users.

I like this idea and would love to see it implemented. There has to be some open source solution that you can tweak to create this.

> disagree without teaching

That is the single most compelling reason why I find most people (on|off)line disturbing. They just want to drown out dissent, without helping other people understand the opposing view through argument.

OK, that is probably because they oftentimes do not have real arguments and haven't rationalized their position at all.

Maybe that last paragraph is just me being a misanthrope. Or maybe the internet is just too big, as bigger discourses tend to deteriorate.

Fixing "disagree without teaching" is tough when you have to do the teaching to thousands of people across as many or more places. I bet people just get burnt out after a while.

I agree. Totally. Nonetheless, just disagreeing isn't the solution. Neither is just staying silent.

Even for the one in a million chance of a person reading my stuff, I have to at least try to convey my POV with arguments.

And I am far from achieving this every time. Very far, actually.

I like Stack Overflow's type of feedback: comments plus a vote up/down system. Not perfect, but a lot of thinking went into that part of the site's design.

But I'm not sure whether those ideas from a technically focused site (where questions have a "right" answer) can be easily applied to a more political site such as NPR's.

I like that downvoting on Stackoverflow costs their equivalent of karma.

Whenever I find that happening on a comment, I comment on it to call people out. That seems to slow the decline (and judging from my points, people seem to appreciate it.)

Lobsters gets this right: you have to provide a reason from a drop-down when downvoting.

I agree.

It's never clear what the downvote even means. At least with an upvote or "Like" it doesn't matter too much if you misinterpret the result, but the downvote on certain sites is too coarse-grained to be useful.

I totally agree, I wish there was a requirement to reply when downvoting.

But I don't think you need subcomments on comments, I think it's just overcomplicated. If the reply with downvote is nonsensical, it can be downvoted as well. Or it can be even flagged.

But sometimes the point of downvoting is to bury a comment so that it doesn't dominate the discussion. If downvoting required a reply it would achieve the opposite effect.

Only if the replies were visible on a buried comment, or perhaps instead of hiding or automatically collapsing, they could be "moved" to the user's Threads page.

The problem with HN is that 90% of the downvote justifications would be "this differs from my worldview".

In the end that is really all that internet discussions boil down to, so why fight it? I could write a long rebuttal, backed up with anecdotes from my own worldview and cherry-picked stats and facts I googled thirty seconds ago to confirm my prior worldview and its superiority to yours, but what is the point? No one has their mind changed by HN comment threads or witty political commentary on Facebook. This site caters to people who think that a few hours bouncing through Wikipedia and some Google searching for supporting articles makes them an expert on just about any subject. Expecting meaningful debate and reasoned discussion in such a forum is a fool's errand. On anything beyond narrow technical topics, it is better to accept it for the shallow bar chatter that it is and not take it too seriously.

I'm reading this super late, but I wanted to say I agree completely.

Then at least it is stated.

There's a nice irony to the downvotes you're getting on this post.

Re HN: also anything that can be coerced into a political discussion will devolve quickly. Conservative views typically wind up grayed, even when they are substantive arguments.

Right-libertarian views are generally most popular here - both "conservative" and "liberal" (American definitions of both) views are generally not popular. And god forbid you have views that would be considered slightly left of center in much of the EU.

I have to disagree. While I do agree that the "liberal-right" (referring to the "classical liberal" definition) certainly is tolerated rather more than American-style conservatives, Progressive/Left viewpoints are rather more in vogue here than anything that could be considered libertarian. That a vocal and supportive minority of libertarian types exists here doesn't change the predominant viewpoint of the community as a whole.

One need only look at the number of articles that deal with ideas like basic income; regulatory responses to climate, energy, or finance; public transport; or other areas where systematic, centrally planned responses are supported as the way forward, and you can't help but see that progressivism is the larger ideal supported here.

It's hardly surprising, though. This is a group of professionals that builds systems to tackle problems in new ways. We convince ourselves that we're the disruptors whose vision can change industries and our way of life. A natural extension of that viewpoint is to believe that the public sphere can be molded, too, if only the right programs are put into effect under the watchful eye of intelligent and visionary custodians... which has much in common with progressivism.

Of course, that's my personal viewpoint (and I am certainly not a "progressive").

I think this is pretty true for economic issues, but not for diversity/identity politics/social justice. On those issues, the HN-hivemind is pretty solidly leftist (though not to the extremes of some corners of the web).

Wait, what? Really? I would point to the comments on just about any "women in tech" story as a counterexample.


Users who make self-flattering claims about why they were banned from HN—always for some allegedly tinpot reason—are unreliable narrators. The real reason is invariably more sordid, which is why they omit to supply links.

The typical such user has a long history of breaking the rules here, getting banned, and creating new accounts. They often do so shortly after proclaiming that they're leaving HN forever and will never set foot in this crappy, horribly-run echo chamber again. Most HN readers would be surprised to learn how small the number of users generating all this drama really is. The rest of us just want to use the site for its intended purpose, which isn't ideological flamewars or meta grandstanding.

There's nothing more common than posturing as a brave independent thinker standing up to the mob. Therefore that posture has the interesting property of being self-refuting.

The typical such user has a long history of breaking the rules here, getting banned, and creating new accounts.

I'm hoping that you base your moderating decisions on the actions of the user in question (the human behind the keyboard, regardless of account name) and not on the "typical such user"? Do you have evidence, which you are choosing not to release, that 'james-watson' is such a user?

Most HN readers would be surprised to learn how small the number of users generating all this drama really is.

Perhaps there would be some way to publicly enlighten us about the serial offenders? For example, maybe you could explain the full backstory of why this post was killed: https://news.ycombinator.com/item?id=12313089.

In and of itself, it doesn't appear to be too egregious, and thus it would appear to bolster the poster's argument. But presumably there is more to the story?

this crappy, horribly run echo chamber again

The most interesting thing to me is that there are often multiple conflicting "echo chamber" claims in flight, with each side feeling like the lone outcast. There's an excellent example even in this subthread right here: https://news.ycombinator.com/item?id=12308842 (I'm referring to the back-and-forth in the thread, rather than the individual comment I linked).

To some, HN is a hotbed of socialism, and to others the epitome of evil capitalism. My recent conclusion is that (counterintuitively) HN is frequently accused of being an "echo chamber" because it has greater diversity of opinion than most other spaces online. The truly anechoic chambers aren't called out as such because the filtering is so effective, whereas "leaky" spaces like HN are assigned the label.

[Edit: I just noticed that "anechoic" doesn't quite fit the narrative here, but don't know how to reword it. The point was supposed to be that full echo chambers and anechoic chambers may have more in common with each other than each does with the points in the middle.]

Perversely, this might mean that accusations of being an echo chamber are a good metric for diversity of opinion. If the norm is that one lives in a world where one normally hears no fundamental disagreement, it can be disconcerting to be in a place where there is no clear "right way of thinking". Only when people stop proclaiming it to be an echo chamber is the canary dead.

I think what you say about the 'echo chamber' bias is right, and I enjoyed reading it because this is something I've been pondering for a while* . It's an interesting case of a point so subtle that nearly everyone not only misses it but is sure that the opposite is true.

To answer your other concerns: Yes, we go out of our way to try to make moderation decisions individually. I don't think it would work to publish information about users' past accounts—that would be a surefire shitstorm. As you and others already figured out, 12313089 was flagged by users—that's what [flagged] always means, both on comments and stories. Vouches were turned off on direct replies, but I was inspired by the below to enable them.

* e.g. the subthread starting at https://news.ycombinator.com/item?id=12003178, plus https://news.ycombinator.com/item?id=12003205

Oh, wow. I find myself equally glad and sad that I missed this when it was originally happening. It's exactly the type of community meta-analysis and navel-gazing I seem to be drawn to, but it really isn't very good for me. :)

> The most interesting thing to me is that there are often multiple conflicting "echo chamber" claims in flight, with each side feeling like the lone outcast.

Confirmation bias (specifically, confirmation bias about bias) + terse domain-specific terminology (of many different dialects) favored by members for its efficiency of expression, with the subsequent loss of some implications for those less versed in that DSL, + normal communication inefficiency in expressing thought = arguments where both sides are mostly in agreement + a tendency to attribute opposing positions more strongly or more often than they actually exist.

> Perversely, this might mean that accusations of being an echo chamber is a good metric for diversity of opinion.

I think the contrapositive might be easier to rely on. No accusations of being an echo chamber probably means there isn't enough diversity, while accusations indicate that there's at least enough diversity for people to form a perception that there is an echo chamber, whether there is one or not.

> For example, maybe you could explain the full backstory of why this post was killed: https://news.ycombinator.com/item?id=12313089.

I can explain it. It says right there that it was [flagged]. Enough people hit the "flag" link to automatically kill it.

That's the usual answer, but I think there may be more happening here than that. One slight oddity is that there was no "vouch" button visible to me on the dead comment. This made me wonder if there was a separate mechanism in play here. Out of self-interest, it also made me wonder if maybe my "vouching" privileges have been removed.

Separately, although multiple flags can kill a comment, it's still subject to moderator review. Since Dan commented in this thread, this probably implies that he consciously decided to let the user flagging stand rather than reverse it. My phrasing may have been poor, but I wondered why this was.

I've argued elsewhere in this thread that it would be interesting for both flags and downvotes to be public. I don't expect Dan to release this information here, but I'd personally be very interested to know who those users were in this case, and on what basis they were flagging it.

Separately, your tone seems particularly condescending. Is this by design? Why?

> One slight oddity is that there was no "vouch" button visible to me on the dead comment.

The vouch button appears for me.

> and on what basis they were flagging it.

I didn't flag it, but it contains some deliberately provocative phrasing from someone who's previously had user flags (and a ban) for their posting style.

In particular:

>> the original post which dang replied to and subsequently killed.

These comments tend to attract downvotes and flags because they're untrue. For one thing that post doesn't appear to have been killed, and if it had been killed it probably would have been user flags, not mods, that did the killing.

The vouch button appears for me.

Interesting. I might understand this now. To discourage retaliation (I think), both downvoting and flagging are disallowed on direct responses to one's own stories and comments. Since "vouch" was added later, it reuses the same logic, even though "retaliatory vouching" is not really a danger.

I didn't flag it, but it contains some deliberately provocative phrasing from someone who's previously had user flags (and a ban) for their posting style.

In the context of discussing perceived bias in moderation, I didn't find that particular comment to be deliberately provocative. While context is important for interpretation, I think flagging (like vouching) should be done comment-by-comment rather than based on previous actions under a different account. Killing comments based on historic behavior makes "recovery from mistakes" more difficult, whether the mistake is on the part of the moderator or the poster.

It's also not clear to me exactly why the FD3SA account (should we consider this the same user for purposes of flagging?) was banned. He ('james-watson') believes it was because of the content of the posts and not the style. I doubted this, and suspect he was banned due to his expressed intention "to return hostility in kind". While bans on this basis may be good policy, if this is true 'james-watson' would have a reasonable argument that this punishment is indeed for thoughtcrime. As one of the targets, you are of course entitled to your own interpretation.

For one thing that post doesn't appear to have been killed, and if it had been killed it probably would have been user flags, not mods, that did the killing.

I agree, and made the same point here: https://news.ycombinator.com/item?id=12315815. While this might be a good reason to downvote or rebut, I don't think that flagging is an appropriate response to factual inaccuracy.

> To discourage retaliation (I think) both downvoting and flagging are not allowed for direct responses to one's own stories and comments.

Yep, that's exactly right. Downvotes are also disabled for comments older than a certain time interval (IIRC it's currently 8 hours).

> Since "vouch" was added late, it reuses the same logic, even though the "retaliatory vouching" is not really a danger.

That may well be true, and enabling the "vouch" link for replies to one's own comments sounds like a good idea. You should email hn@ycombinator.com about it -- they're usually very responsive.

That's a good idea you guys came up with. We've enabled vouching on direct replies.

I routinely flag drama. I didn't originally flag that comment, but I would have had I seen it before this part of the thread, because it's drama. The trend away from drama on this site in the modern Dan era of moderation is heartening, but we can always use less of it.


PG was far more aggressive about banning people than we are; partly because he had less time, and partly less patience. After years of trying to be more patient, I'm unconvinced that patience in this area is a virtue. It seems to lead to more trouble and no fewer complaints. When efforts to do things a certain way lead to condemnation for the opposite, it seems like a sign that something's not working.

HN is overall polite and mature, especially compared to Reddit. Yet it can't escape its background culture. Some truths will always be taboo.

Perhaps I live with rose-colored glasses, but I do not believe that anyone has ever been banned from HN for making that quoted statement, unless you have left out some essential detail. My guess would be that you made that statement, and subsequently you were banned for something else. Perhaps you could post a link to the thread in question?

Never mind, maybe I found it, or at least the account: https://news.ycombinator.com/item?id=9039872. A little bristly, and I'm not sure if the assumptions are correct (Is it unequivocal that "Status is zero sum"?) but doesn't seem ban-worthy. But since it's a year before the account appears banned, maybe I'm missing a later comment with the exact quote?

Presuming this is the account you are referring to, it looks to have actually been banned for the similarly themed but much more aggressive comment "Sexual dimorphism is real. Your impotent rage and ridiculous ideologies will never change that fact. I will be greatly amused by your kind's zealous need to tilt at windmills." (https://news.ycombinator.com/item?id=11226294), which when challenged by Dan you defended by saying "I have a policy to return hostility in kind." (https://news.ycombinator.com/item?id=11227788).

I think this counts as leaving out an essential detail. The problem is not that discussing sexual dimorphism is off limits (others appear to have done so without being banned) but that "he did it first" is not an acceptable excuse for rude behavior here. Or maybe I have the timeline slightly off? It's hard to tell when some of the edits occurred.

In any case, I'm going to keep my rose colored glasses on for a bit longer. I'd encourage you to keep making scientifically accurate statements about sexual dimorphism from your new account, but given that some interpretations have sensitive implications, it would probably be wise to approach it as politely as possible.

In fact, I have a feeling I'll get banned for this comment.

No, I don't think you will. You can actually discuss many "controversial" topics here as long as you try to do so politely. Assuming I found the right thread, it looks like the ban wasn't because you raised a forbidden topic, but because of your stated policy of returning hostility in kind. "Tit for tat" has its place in game theory, but in real games (and communities) it has some serious defects if blindly applied.

The main defect in this context is that if all players adhere to the simplest version, it has a tendency to end in a "death spiral" of defection. Eventually, a poorly phrased response is interpreted as an insult, and after that it's a permanent race to the bottom. The strategy can be improved by adding some degree of "forgiveness" to allow both parties to reset to cooperation. Adjusting the trigger to require multiple offenses before retaliation ("tit for two tats") can also help to account for inevitable communication errors.
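The strategies mentioned above can be sketched as simple functions of the opponent's move history ("C" for cooperate, "D" for defect); this is a toy illustration of the game-theory idea, not a claim about any specific moderation mechanism:

```python
import random

def tit_for_tat(opponent_history):
    """Cooperate on the first move; afterwards copy the opponent's
    last move. Simple, but one misread insult starts a death spiral."""
    return opponent_history[-1] if opponent_history else "C"

def generous_tit_for_tat(opponent_history, forgiveness=0.1):
    """Tit for tat with a dash of forgiveness: occasionally let a
    defection slide, so both parties can reset to cooperation."""
    move = tit_for_tat(opponent_history)
    if move == "D" and random.random() < forgiveness:
        return "C"
    return move

def tit_for_two_tats(opponent_history):
    """Retaliate only after two consecutive defections, tolerating
    one-off communication errors."""
    if len(opponent_history) >= 2 and opponent_history[-2:] == ["D", "D"]:
        return "D"
    return "C"
```

Against a single stray defection, `tit_for_two_tats` keeps cooperating where plain `tit_for_tat` would retaliate, which is exactly the "account for inevitable communication errors" point.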

Welcome back.

[Responding to myself since I can't respond to the flagged and dead original. You can click on your username and turn on "Show Dead" to view it.]

I have witnessed HN darlings get away with far, far worse ad-hominem vulgarity with nary a peep from the moderators

Likely, but enforcement is always going to be "spotty", so it's almost impossible to distinguish bias from bad luck in any particular case. And perhaps they are "darlings" because they interact politely with the moderators?

For history's sake, let's link the original post which dang replied to and subsequently killed.

I don't know if it is intentional, but I think you are conflating different issues. Dan marked the thread off topic and explained why. Subsequently, another comment in the thread was killed by user flags. It is unlikely that Dan killed it himself.

There was zero rule breaking in that one, just thoughtcrimes

Arguably, although the counter-argument that Dan presents is that "attractive nuisances" are against the rules. He explains his reasoning (which I agree with) in a more recent response: https://news.ycombinator.com/item?id=12163939. The goal of moderation is not ensuring technical compliance with arbitrary rules; rather, the goal is "protecting civil, substantive discussion".

Like the common (and often truncated) quote that "democracy is the worst form of Government except all those other forms", the standard against which HN should be judged is not whether the moderation is perfect, but how it compares to the alternatives. Based on the current state of HN relative to other internet discussion sites, I'd claim that the moderation seems to be working pretty well. Where do you think is doing a better job?

Separately, are "clevernickname" and "FD3SA" both your accounts? The transition in the thread from one account to the other seems odd to explain otherwise. If so, the "zero rule breaking in that one" seems like an odd claim, akin to the "Freemen on the Land" claims of immunity from the courts based on being "Incorrectly Identified". See page 75 here: https://thelastbastille.files.wordpress.com/2014/02/meads-v-...

If HN is about entrepreneurship and technological progress, why do they ruthlessly suppress established scientific consensus?

I know this must feel like a well framed question, but it's really not. Why should we assume "HN is about entrepreneurship and technological progress" any more than assuming "the moon is made of blue cheese"? Who exactly is the "they" that is suppressing scientific consensus, and if this was the goal why choose such a round-about way to do so? For that matter, what does "consensus" have to do with science?

And here's my hypothesis: because it does not fit the current political zeitgeist.

A scientific hypothesis should make a falsifiable prediction. Is this the sense in which you are using the term? If so, what predictions does your hypothesis make that are different from a plausible "null hypothesis" like "the moderators are trying their best to keep HN as a place where substantive discussion is possible"? How would we go about testing these predictions?

You are making a lot of disparate claims here without a central argument.

No, I did not use sockpuppet accounts. Unlike full-time moderators, I have things to do other than spend every hour of the day on HN scoring political points. I only had one previous account, and it was banned as a result of that thread.

As to your other questions, you are losing the forest for the trees. What I am saying is this:


You are free to make of it what you will. Obviously, neither I, nor Dr. Cronin, nor the Holy Ghost himself can convince you of a fact that you do not wish to be convinced of; the choice is yours.

I only had one previous account, and it was banned as a result of that thread.

I appreciate the clear statement. My guess based on Dan's comments is that he thinks you are associated with other undisclosed accounts, and that this might explain some of the confusion as to what is acceptable discourse.

As to your other questions, you are losing the forest for the trees.

Possibly. Also possible that we are in different forests, or that I care more about trees than forests.

What I am saying is this: https://www.edge.org/conversation/helena_cronin-getting-huma...

Thanks, I will read and consider. At a glance, I think it reflects my beliefs as well. I read the "Ant and the Peacock" long ago, but don't recall the specifics of her argument.

My favorite book in this area, which I think is compatible with Cronin, is Sarah Blaffer Hrdy's "Mother Nature": https://www.amazon.com/Mother-Nature-Maternal-Instincts-Spec.... If you haven't read it, I'm guessing you'd love it.

Hrdy also wins my personal award for "Best Evolutionary Development Theorist That's Been Almost Completely Ignored". Happily that seems to be changing a bit recently, with her work finally starting to get respect: http://blogs.scientificamerican.com/primate-diaries/raising-...

And with this thread, we have conclusively proven the assertion upthread (https://news.ycombinator.com/item?id=12309120).

Good day.

And there is the threat of #hnwatch brigading to enforce it

That's just not true. For example, universal basic income--a leftist idea even by European standards--has been discussed several times on HN. Lots of substantive comments both for and against.

Are you sure UBI is a leftist idea? The things that go along with UBI read like a right-wing wishlist: eliminate the minimum wage, discontinue welfare and unemployment benefits, and end the food stamp program. In its place, the free market would determine how best to spend basic income.

The core notion of it, though, which is government handing out money to people, is rather contrary to pure right-wing (especially libertarian) thinking; even more so when it comes to the "universal" part.

It tends to be supported by some more pragmatic libertarians, who recognize that some form of welfare is necessary, and see UBI as the least-overhead form that, if not the cheapest economically, keeps the associated bureaucracy (and the government sprawl induced by that) to the minimum.

However, it is also a popular idea on the left, especially among the more individualist-minded liberals, left libertarians etc.

It was originally a right-wing idea, often known as mincome (minimum income) or a negative income tax. Milton Friedman was one of the earliest proponents of the idea.

Yes, it is a leftist idea. The defining trait of the left is that it cares about social and economic equality. (The understanding - IMHO wrong - of the left as proposing a big state ultimately comes from the Communist Manifesto, where Marx used "the State" in a certain sense as a shorthand.)

Basic income would mean that part of GDP is redistributed in a purely democratic manner - everybody gets the same piece of the pie, whether they deserve it or not. It's very similar to democracy, but on an economic level. And democracy is in fact a very leftist idea (if you look at the history of the left), because again, it proposes that all people should have the exact same political power (one person, one vote), regardless of what they are or what they do or contribute.

The fact that today's right (at least partially) accepts some of these ideas, such as democracy or UBI, is actually a success of the left, or a manifestation of reality having a liberal bias. :-)

Neocons are basically leftists, so there's not really anything surprising.

Yea, basic income isn't really that associated with leftism. The super-hard-right libertarian stance of "let people starve if they can't hack it" obviously doesn't like it, but otherwise it's basically a simple combination of wanting a social floor but without the attendant complexity, perverse incentives, and inefficiency of means-tested gov't programs. This view fits pretty comfortably in the center and center-right, while the left often just sees it as a way to low-key dismantle social programs that they're fond of.

Basic income seems to do quite well.

This is because it is de facto HN policy to be OK with downvoting to register disagreement. pg put that one down and it hasn't changed.

I've noticed flagging being used similarly, too. The argument that downvoting should be used to register disagreement falls apart when you consider that only users with enough (>= 500, I think?) karma have disagreement privileges.

I think the 500 limit is justified as a means to discourage vote-brigading. Age of the account, by itself, is not a good enough indicator, because a person might create hundreds of accounts to be used for brigading at a later date.

But, yes, both downvoting and flagging can and have been abused here, despite measures to prevent them.

Flagging is extremely effective in taking content off the front page. I understand the intention (to quickly kill spam and avoid flame-war topics), but it's extremely powerful in preventing discussion of issues entirely, and it is probably a greater contributor to groupthink than comment downvoting is. Agenda setting, controlling which topics are discussed at all, is far more powerful than controlling the range of views expressed on those topics.

I think you can "vouch" for posts and submissions now to counter the flagging. Of course, that assumes you will catch them before they fall off the front page or get auto-deleted.

I was referring to comment flagging, for which there appears to be no recourse (at least from my 140something-karma account's view). Similar problem(?), though.

To me, HN's "bias" is not so much about left/right as about individualism vs. collectivism. "Regulation is a good tool to weed out the bad stuff; the free market can't solve this or that; ban human drivers when self-driving vehicles mature; taxes as a means of control" and so on. We agree to disagree very maturely in many cases, but very often I see discussion that would warrant renaming this site "Collectivist News".

> Conservative views typically wind up grayed, even when they are substantive arguments.

Not sure about that, could you find one such comment? Many people would upvote any greyed comment that comes with substantive arguments, regardless of their political view.

The anti-dissent phenomenon is amplified by the greying of comments, which continues to be a bad idea all these years later. The literal message and incentive there is: leave a comment that gets upvotes or it will get progressively harder to read until nobody can actually read it. So now you have to care about what you say, because it will be completely removed from the site by natural voting if you don't toe a certain line, meaning folks cannot objectively review your comment on its merits alone unless they go out of their way to highlight it. The absence of a showgrey even though showdead exists is an interesting thing that somewhat describes intent behind the feature, too.
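For what it's worth, the greying presumably amounts to a simple score-to-shade mapping. Here's a minimal sketch with entirely hypothetical thresholds and colors (HN's real cutoffs aren't published):

```python
def comment_color(score):
    """Map a comment score to a text color (hypothetical values;
    HN's actual thresholds and shades are not public)."""
    # Positive comments render in normal black text.
    if score >= 1:
        return "#000000"
    # At 0 the text is slightly grey; each further downvote fades it,
    # bottoming out near-invisible around -4.
    fade = min(-score + 1, 5)          # 1..5 steps of fading
    level = 0x88 + fade * 0x14         # lighten toward the background
    return "#{0:02x}{0:02x}{0:02x}".format(min(level, 0xdd))
```

The self-reinforcing part is that once a comment crosses into grey, fewer people bother to read it, so it attracts fewer corrective upvotes.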

Greying is one of the key factors in the anti-dissent, party-line mode here, which I've noticed as well. It was weird to pull scores and then implicitly put them back with greying, so you can at least tell if a comment is <= 0. One difference from Reddit is that Reddit explicitly suggests[0] that you not downvote to register disagreement, whereas HN does not take a stand and pg clarified that he's fine with it a long time ago[1]. So that ties together voting and agreement, which then ties together greying and agreement, which helps give rise to the anti-dissent environment you're observing.

I'd love to see some analysis on whether a grey comment is more likely to be downvoted, as well, because I'm almost positive it is. Once someone hits me with a 0, which is discernible as slightly greyer by itself, that comment is almost guaranteed to end up very negative.

My opinions often diverge from HN and I am rewarded with barely-visible commentary very often, so I've been trained over time to resent voting rather than wish to contribute here, which I don't think is the spirit nor the intent of the greying feature. I don't think HN wants to be an echo chamber, to be clear, it's just that the quirks of the system create incentives that give rise to it. The last thing on planet Earth that I care about is my karma score, but I do care that what I say matters, and I cannot really talk about it because it sounds like complaining about downvoting.

[0]: https://www.reddit.com/wiki/reddiquette#wiki_in_regard_to_vo...

[1]: https://news.ycombinator.com/item?id=117171

HN could remove this disagreement moderation bias by implementing a meta-moderation feature similar to Slashdot.

Slashdot used to be a great place (like HN was a few years ago), and many of the best moderation systems originated there.

Meta-moderation kept everything fair because you could moderate the moderators. The way it worked was that people with high karma would get asked at the top of their page to please take time and meta-moderate 10 comments a day. You get presented with 10 random comments and the moderation +/-, and simply check a radio button indicating whether the moderation was fair or unfair.

The way it was enforced, I assume, is that people who were meta-moderated as unfair too often would lose their ability to moderate completely.
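The mechanics described above can be sketched in a few lines. This is purely hypothetical logic (Slashdot's actual implementation isn't public), with a guessed unfairness threshold:

```python
import random

def pick_metamod_batch(moderations, n=10):
    """Present n random past moderations for meta-moderation review."""
    return random.sample(moderations, min(n, len(moderations)))

def record_verdicts(fairness, verdicts):
    """Tally fair/unfair verdicts per moderator.

    `verdicts` is a list of (moderator, was_fair) pairs from
    meta-moderators; `fairness` accumulates the tallies.
    """
    for moderator, fair in verdicts:
        tally = fairness.setdefault(moderator, {"fair": 0, "unfair": 0})
        tally["fair" if fair else "unfair"] += 1

def can_moderate(fairness, moderator, max_unfair_ratio=0.3):
    """Revoke moderation ability for anyone judged unfair too often.

    The 30% threshold is a guess for illustration.
    """
    tally = fairness.get(moderator, {"fair": 0, "unfair": 0})
    total = tally["fair"] + tally["unfair"]
    return total == 0 or tally["unfair"] / total <= max_unfair_ratio
```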

Slashdot is the worst example of a fair and balanced community. The meta moderation tends to lend itself to confirmation bias.

I agree the greying out of comments is problematic, but I think it also serves an important purpose, and that's to incentivize care in the crafting of your comment. It is the punishment of community norms as much as the negative points are. Unfortunately, community norms have a way of adopting views as well as actions, and we have what you refer to as anti-dissent mode.

Ultimately, the question is whether the negatives of people downvoting on disagreement (as opposed to more objective grounds, such as lack of evidence for assertions or overly personal and/or aggressive comments) outweigh the benefit of a very effective way for the community to self-moderate, given there is occasional pushback against subjective downvoting when it happens.

The problem will of course seem larger if some of the common community views conflict with your own, and you've been subjected to the consequences of this. This is compounded by the human tendency (IMO) to view unexplained negativity as unwarranted. Were you downvoted because your comment was lacking in some regard, or because you stepped into a pet issue of the community? It's easier to believe the second, and while it is the case sometimes, it's hard to tell what is perception and what is reality.

> I agree the greying out of comments is problematic, but I think it also serves an important purpose, and that's to incentivize care in the crafting of your comment.

Sure, I'd grant that as intent, but where this falls down is that it instead incentivizes me to just not be a part of the community. Why would I put care and work into a comment if I'm going to be rewarded by the time invested becoming essentially worthless? Arguably, silencing someone because they diverged from "community norms" is slightly hostile; I have a hard time believing your position that a community is served well by being built atop hostile moderation. You might build a community, but it might be a much smaller, much less desirable community. (I don't know.) It's also a weird incentive to throw down, as you allude to, because "acceptable opinion" and "acceptable comment" are extremely conflated. So now one finds oneself writing comments that contain acceptable opinions, which is the care in crafting that you're describing.

Look, the opinion of HN from outside and the opinion of HN from inside are wildly different. I think this community tells itself things about civility, moderation, and so on; your comment sounds good, for example! It's just slightly off-base and overlooks some consequences, and it's not obviously wrong because we're within HN discussing it.

> Were you downvoted because your comment was lacking in some regard, or because you stepped into a pet issue of the community? It's easier to believe the second, and while it is the case sometimes, it's hard to tell what is perception and what is reality.

Careful; this is lightly making the case that I'm unable to discern the difference and therefore off-base for criticizing a real issue. I can't prove this to you, but trust me when I say that I can tell when I've "earned" the downvotes. And I do, occasionally.

(And, were I less civil, I'd use colorful four-letter words to describe community "norms," as I'd hope any hacker would. I didn't become a hacker because I cared about normality.)

> Arguably, silencing someone because they diverged from "community norms" is slightly hostile; I have a hard time believing your position that a community is served well by being built atop hostile moderation.

It is slightly hostile. It's also how groups enforce culture and less codified systems of conduct: not only in the active consequences to the offender, but in the visible consequences to others. I think it works in some cases and not in others. In the case of HN, it's ham-fisted, but I doubt more intricate systems that let people choose levels of response would actually work, given the amount of time people are generally willing to spend on such things (and the variability in the scale and type of response based on initial state).

> It's also a weird incentive to throw down, as you allude to, because "acceptable opinion" and "acceptable comment" are extremely conflated. So now one finds oneself writing comments that contain acceptable opinions, which is the care in crafting that you're describing.

I'm not really sure what you mean by the first portion of that sentence. I do agree one can find oneself writing comments that conform to opinion and not just to an accepted method of argumentation. That's up to you to combat on your own, as the rules stand now. People will value different aspects of discussion than you do, and you have to deal with that in every discussion anyway. When speaking about something you know your audience is sensitive to, you either make some effort to present it in a way that minimizes miscommunication and irrational reaction, or you don't. That's true of every single instance of communication; I'm not sure why we would expect the problem to be solved here.

> Careful; this is lightly making the case that I'm unable to discern the difference and therefore off-base for criticizing a real issue.

It most definitely is not. It's making the case that people in general are bad at this, in my opinion, because it challenges their view of themselves. That doesn't mean you are off-base, and it doesn't mean this isn't a real issue (I did agree with you, for example), but the level of importance you attribute to this phenomenon is highly subjective. We agree there's a problem; I think we disagree on the scope and whether it outweighs the benefits imparted, and this is meant as a possible explanation of why we disagree.

> (And, were I less civil, I'd use colorful four-letter words to describe community "norms," as I'd hope any hacker would. I didn't become a hacker because I cared about normality.)

I assume you became a hacker because you like to know how things work. I like to know how things work. "Norms" (culture) are about how groups of people work; more specifically, they're the informal rules groups establish to allow interoperability, the "standards" (in the IEEE sense) that enable us to build our more complex systems on top. Sure, there are negative aspects, such as inefficiencies, cruft, and errors, but they've allowed us to get to where we are, so I'm disinclined to view them quite as negatively as you seem to. Something better may come along, but given the irrationality of all people, I'm not sure it will work if it's all that different. I'm interested in hearing alternatives, though.

> So now you have to care about what you say

Exactly, you have to care about what you say, and that's exactly why I love HN comments so much. Also, you make it sound like great comments disappear because of downvotes, but that's not the case. The only disappearing comments are stupid jokes or trolls. I don't remember any well-argued position that I was unable to read because of downvotes.

> The only disappearing comments are stupid jokes or trolls.

This is not only completely false, it's rapidly provable as completely false. You are, quite literally, lying to yourself. As mentioned elsewhere in the thread, discussion on controversial issues is Exhibit A.

> I'd love to see some analysis on whether a grey comment is more likely to be downvoted

Two years ago I did an analysis of HN comment scores (back when they were publicly accessible). There are far fewer comments with -1 score than with 0 score: http://minimaxir.com/img/hn-comments/distribution_comment_po...

You could really use a log scale there, but I'm glad you did the work. Thanks!
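For anyone wanting to redo that analysis, here's a stdlib-only sketch of the log-scale view, assuming you have a flat list of raw comment scores (the helper name is made up):

```python
from collections import Counter
import math

def score_distribution(scores):
    """Tally comment scores and report each count alongside its log10,
    which is what a log-scaled y-axis would show. On a linear axis the
    rare negative scores are invisible next to the mass of 1-point
    comments; on a log axis they stay legible."""
    counts = Counter(scores)
    return {s: (c, round(math.log10(c), 2))
            for s, c in sorted(counts.items())}
```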

As somebody who generally posts in a contrarian manner (to a right-libertarian): there's a lot less grey here in general than there used to be. I also think that in recent years greying algorithms are protecting controversial comments to some extent (comments with lots of upvotes and downvotes.) I've had negatively rated comments that were not grey. Also, as mentioned below, greying seems to attract attention to comments, getting them upvoted until they ungrey. Downvotes are limited; I don't think anything gets lower than -3 or -4, so greying doesn't have the ability to reinforce itself very much. You have to keep posting to allow people to keep downvoting you.

Just offering my perspective.

Our HN can decline 5x more and it would still be wonderful. Without ever seeing, talking to, or meeting any HNer, I am connected with smarter, more curious, more open-minded people than I work with every day.

Really? To me, the thing that is most discouraging about HN is the rampant closed mindedness. For example, anyone who questions the long term safety of genetically modifying organisms and eating them, will always be downvoted to oblivion.

It's probably because doubting things isn't very useful. I can question the long term safety of walking up stairs, but unless I can provide an argument such that rational people are convinced they should also be concerned, I wouldn't be surprised at downvotes.

Not to knock the anti-GMO argument[0], but it's similar to other conspiracy arguments. I can make many rational arguments against it, but I know from the outset that if I see a comment that says "Well of course the Hyperloop isn't going to work, it was designed by Lizard People to keep us complacent!" I will never convince that person to see what I would consider a rational viewpoint. It's a lost cause. And as a user of HN, I would consider such comments to be noise, and I would downvote them.

Edit: I have the unpopular opinion (at least on HN) of not liking the "right to be forgotten", not to tangent too much, but I think that the ability to substantially change as a person is an important concept of human development, and that by allowing people to willfully scrub their actions from the record, they encourage the myth that people can't change, and that we're always how we currently are. And that, I think, trains people to be less forgiving. It's harder to forgive when you don't think someone can change.

[0]: although I do disagree with it (even beyond the screaming naturalistic fallacies)

My questioning of the long-term safety of GMOs is logical, though. Peer-reviewed studies on the topic just plain don't address the very real possibility that GMOs can have very harmful effects 20, 30, or 50+ years down the road, both to our environment and our bodies. Believing that a short-term study concluding they're "safe" also means they're safe in the long term is quite a leap of faith, and illogical.

Edit: To put it another way, I'll bet short term studies concluded that asbestos was "safe" to use as a building material. In the long run though, that didn't really turn out to be the case.

Maybe I'm making a good case for eliminating comments by contributing to this tangent. On the other hand, maybe I'm making a good case for my claim that HN's comments are not nearly as plagued by groupthink as people claim.

The problem is treating GMOs as a single category. Maybe there's good reason to be concerned that genetic modifications that make crops less attractive to pests are hazardous to long-term health (I have no idea whether there is), but what does that concern have to do with the risk of introducing genes to grow larger fruits?

I'm sure you can find reasons to be worried about any possible application of genetic engineering, but if you're willing to get that creative, you can find reason to be worried about any agricultural innovation. Maybe you claim that the introduction of a lentil gene to soy will indirectly create some carcinogen through protein interactions, but why don't you worry about the same issue when a new fertilizer is introduced? It seems to me that it's only because transgenic plants are new and scary and people are uneasy about "playing god".

Asbestos was ALWAYS known to be dangerous [1], so it's not a great example. I can't think of a case off the top of my head where long-term safety was incorrectly assumed from short-term studies. Perhaps some medicines? Anyway, in principle you're right that short-term studies say little about long-term impacts on health.

[1] https://en.wikipedia.org/wiki/Asbestos#Discovery_of_toxicity

>but what does that concern have to do with the risk of introducing genes to grow larger fruits?

Off the top of my head, there are several consequences that fall into the realm of possibility. Larger fruits will require more resources from the host plant, which could alter its development in unpredictable ways. More nutrients could be taken from the soil in order to make larger fruits, leading to earlier depletion and requiring more crop rotation (a process many farmers put off due to profits). Larger fruits will attract larger animals to graze (just look at what happens when hikers throw apple cores / other compost into the woods, it alters the movement patterns of multiple species).

I understand these examples sound hyperbolic, and I think GMOs are definitely beneficial in some situations. However, introducing changes to the very genetics of our ecosystem faster than they would naturally occur has definite effects. To think that we can adequately anticipate, react to, and solve issues caused by these effects seems a bit hopeful, at least until we have a very advanced computational model of our ecosystem.

"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should." - Ian, Jurassic Park

I feel it is hard to change your reputation so long as people remember your previous actions.

For most of human history, memory was confined to the brain, which remembers fuzzily at best and, within mere years, can retain only summaries of past events.

Now, with high-quality video from phones, voice recordings, and text preserved in perpetuity, people can not merely remember that a person held inane ideas, but see exactly what, when, and in which context they held them.

So long as that's possible, it's easy for others to critique by saying "people can change, but X person said Y and that's beyond the event horizon for me".

Wait, I didn't realize the prevailing opinion was that "right to be forgotten" is a good thing. I thought the pseudo-libertarian strain, combined with the strong archival beliefs of many on HN, would turn common opinion against it.

> To me, the thing that is most discouraging about HN is the rampant closed mindedness.

Absolutely true. But is HN more closed minded than the folks you see everyday?

Personally, I find the HN community more open-minded than 95% of the people I interact with daily -- even after accounting for what you've just mentioned.

That hasn't been my experience. There are views that HN generally opposes, like being anti-GMO, and newcomers who don't know what HN's opinions are might accidentally run into something and get punished for it. But once you know what the standard HN opinions are, you just need to be very clear about exactly how you depart from them when you voice dissent. This means disclaimers and hedging — poor style in traditional writing, but necessary on the internet where you can usually assume that someone protesting GMO foods is committing the naturalistic fallacy.

Dude. I work two jobs where hearing any sentence with more than one multi-syllable word is like trying to find my Ashton-Tate manuals in the attic.

The purpose of community is to express common interests & shared values. Hackernews tends to express a shared value around Elon Musk. That is perfectly fine. It's something that should be respected even if you have differing views.

I bet a lot of the reason people downvote you about Elon Musk is that maybe you don't accept him as a "hacker" and don't see the way he's challenging the status quo and pushing technology, cars, transportation, and energy systems into the future. Hacking the auto industry, the solar industry, the space industry. What he does very much fits the ethos of what makes Hacker News Hacker News.

HN has become some kind of left liberal forum these days.

Personally, I enjoy the negativity of Reddit. Snarky, sarcastic, vulgar, or insulting comments are, in my opinion, part of discourse too.

> This limits useful discourse. e.g. someone should be able to criticize Elon Musk without being downvoted to hell.

Can you give some examples where someone criticized Musk and was downvoted?

Sure, let me dig some up.

[edit, adding a link to a comment]

Here is one (th13's reply was downvoted): https://news.ycombinator.com/item?id=11030447

(This is more tedious than I thought, I'll see if I can dig up some more. Looks like in the future I should make note of these as I see them.)

[edit, adding a second link to a comment]

You may need to follow the 'parent' link to see the telltale gray of the downvote: https://news.ycombinator.com/item?id=10775287

(Seriously, this is tedious. I know there are more, but I won't dig up any more unless I'm pressured to do so.)

I know that on HN we shouldn't downvote unless something doesn't add to the discourse, but I'm pretty sure th13 is just factually wrong, his arguments are weak, and it really doesn't add much to the discourse. E.g.:

An hyperloop is going to be a regrettable experience for everyone involved.

I think the fact that it's difficult to find examples of this is proof that there's not as much of a slant as you think there is.

Not difficult, just tedious. My being lazy and wanting to do something productive isn't proof of anything. Also, there isn't really an advantage to digging them up, since it is clear you'll just refute whatever I dig up.

Well, I might disagree with whatever you dig up; I can only really refute it if I present a good argument against it, which maybe I wouldn't be able to for some comments.

Making other people do the legwork so that you can idly critique them is a classic gish-gallop tactic, and does not add to the discussion.

What about asking other people to do the legwork to show some proof of their assertions?

Trying to bring up gender equality issues, which is a genuine problem in our industry, also has a similar response. There are a variety of 'taboo' subjects that will get you grayed out and gone in no time at all.

> Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.

Funnily enough, right around the time I made an acct here (4+ years ago, after ~1 year of lurking), I saw an internal post by a well-known Googler talking about the recent drastic decline of HN comment quality and looking for a replacement. I still get value out of HN comments (clearly), but it's definitely changed substantially in the last few years, and not really for the better. I've resigned myself to every high-quality Internet community without heavy, opinionated moderation being a slow treadmill towards popularity and then mediocrity, or worse. Though this is of course something of a "you're not in traffic, you are traffic" problem.

You and I are from around the same cohort, but I've long since abandoned participating in these conversations. Honestly, I am in these comments because I had hoped this was the conversation being had. The HN community drove me away long ago.

Maybe for the next community you need to figure out where it is in order to be self selected for membership.

I've had several people try to discern the meaning of your comment, but none of us could figure it out. You have words here that together sound like a sentence, but don't make any actual sense.

The implication is that if one were truly too good for HN, then one would already know about "the next HN": that exclusive comment site at which sober, mature discussion among intelligent, sophisticated people takes place much as it used to long ago on HN. Which is funny, not least because the level of discussion here even today seems vastly superior to what one finds elsewhere online. Part of that quality may be a willingness to welcome those for whom English is a second language.

I have never felt 'too good' for HN at all. That's one of the reasons I loved it in the past: I could pick almost any link, look at almost any comment section, and find information that was over my head, and the content and conversations were challenging and forced me to learn. I loved that. At a certain point I felt it turned into a bunch of mean girls, and I left. I sincerely wanted OP to clarify his comment, because it seemed to be saying something poignant I just couldn't understand. I see how my phrasing seems insensitive.

Fair enough. I apologize for misunderstanding.

How's this:

Maybe for the next community, instead of making it public - potential users will instead have to find the site by solving a bunch of hidden puzzles. Think secret club.

Jokes aside: a major part of why HN works is that the community here self selects for several traits which ensure healthier communication.

As a forum becomes more mainstream, many things happen to degrade the signal-to-noise ratio. (pg had an excellent essay on the middlebrow dismissal, which for many other forums is a problem they wish they had.)

Eventually part of the crowd who came first realize it's not as useful anymore and decide to go somewhere else. The question is where.

We've seen eternal September come to almost every place and there's only a few traits which can help protect against it.

From my experience with many forums/modding, demographics are 80% of the issue; the remainder is topic of discussion, goal-orientedness in discussion, and moderator quality.

So the best solution is to create the forum and never advertise it. Let potential users figure out how to find it and how to get in.

Ah, that's a great clarification. I wasn't trying to be argumentative in my response, I was sincerely trying to determine what you meant. I'm glad you did, because your comment is great. Thanks!

hey thanks. I was half worried it was a terrible comment.

There is always going to be the elitist attitude of newcomers ruining things for the "OG" people. However, there is certainly no way HN compares to the rest of the drivel out there. There's always going to be a fine line between popularity and trash.

> However there is certainly no way HN compares to the rest of the drivel out there

Shrug, that just depends on where people choose to set their standards; there are much worse fora and much better ones. As I said, I still clearly find HN comments at least worth reading, but the distinction isn't as binary as you seem to think it is.

>Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.

the interesting thing about this is that HN comment system seems to discourage dialogue - i.e. you are not notified when someone replies to your comment. It seems to be more geared towards comment-it-and-forget-it.

I think it's prohibitive enough to keep things from spiraling out of control in some crummy argument. I feel it's more of a peer-review system. State your comment on what you've read/seen, and move on to the next thing. Which honestly is fine with me.

Yes, and this has always bothered me. There's a chrome extension (Hacker News Enhancement Suite, broken at the moment due to recent HN changes) that made it easier to see your own comments. Since I no longer use the extension, I rarely check for replies. Bums me out.

Threads serves the purpose for recent stuff though right? I mean, do you really care about people commenting on stuff from days ago?

Sometimes I'll ask a question and forget I asked it, or someone will ask me a question. I'd like to know about those cases more reliably. But if it contributes to any degradation of HN's high quality comments then I'd prefer to keep things the way they are.

Fair. I think they could simply sort by the most recently commented posts you've created, and that would solve that issue.

Click on your username in the upper right corner, then navigate to "comments" on the left.

That's the same function as "threads" in the main heading.

Didn't realize that was what "threads" was... thanks.

I click on "threads" on the top menu. Not sure when it was added but it shows all my comments with replies inlined.

The issue OP has is that it's difficult to sift through that, especially older comments.

I use hnreplies.com for this

What if you try http://www.hnreplies.com/ ?

It really is annoying. It allows for a very brief and mostly superficial conversation on whatever happens to be on the front page in the last few hours, and then it's done. That's no way to have a legitimate, in depth discussion. And the result is definitely a stunting of the quality of the conversation in the name of... I'm not sure, forestalling flame wars, keeping people interested by keeping the front page churning, I don't know.

It encourages dialogue between more than two people, which is often a good thing. The quality of a discussion usually improves when more people participate.

It's rarely a problem if someone forgets to follow up since someone else will usually do it, at least if their views are shared by other people.

That being said, the "threads" page is useful for finding and responding to new comments in your threads if necessary.

In my opinion, that's a feature not a bug. While it probably reduces some of the sense of community, it also makes it much more difficult to keep flaming on until the thread becomes a dumpster fire. A user has to actively seek to pick a fight and keep up with it which most normal people don't have time for.

But you can still easily see comments if you look at your page.

Unlike Slashdot where you have to drill down quite a bit.

The threads link at the top should work for recent topic replies, but you can also use a service like HN Watcher if you want to be emailed when someone replies to you. While I find the emails useful, I don't think it improves the quality of my comments.

It depends on the subreddit. There are many where civil conversation is the norm. r/politics and places like that are horrible. Just remove them from your default set.

You know, I get this response every time I talk about this topic. I really dislike the "well no one is making you use / buy / watch / listen" to something. People can be critical of something they want to succeed. It happens a lot in certain game communities. What if I WANT to talk about politics, in a civil manner? Any sufficiently popular subreddit just devolves into complete trash.

> You know, I get this response every time I talk about this topic. I really dislike the "well no one is making you use / buy / watch / listen" to something. People can be critical of something they want to succeed. It happens a lot in certain game communities. What if I WANT to talk about politics, in a civil manner? Any sufficiently popular subreddit just devolves into complete trash.

You're missing the point of e40's response (and others of its kind). It's not "stop whining, don't use it if you don't like it". It's informing you that there are still parts of Reddit you can get value out of if you change your usage patterns a little bit, instead of your blunt-instrument approach that leads to losing gems like (e.g.) /r/AskHistorians just because /r/politics is terrible. There's of course nothing _wrong_ with you deciding that Reddit overall isn't worth the effort, but there's nothing wrong with someone giving you an alternative in case you were unaware.

When Reddit blew up in popularity, almost everyone I know who'd used it for a long time got sick of the default feed (both submissions and comments) and then at some point found out that being parsimonious in your subreddit subscriptions can give you a pretty high-quality feed. Reacting with defensiveness and hostility to someone giving you that advice in case you didn't know is frankly just bizarre.

That's probably due to the nature of politics.

It certainly tends to get a lot of people riled quite easily.

What's that thing about polite company not talking about politics, religion, or sex?

The average person, myself definitely included, probably doesn't have much worth listening to on the subject anyway. To paraphrase P.J. O'Rourke: there certainly are many political commentators who might be worth listening to. Under some circumstances. In a crisis. Maybe.

One of the issues that makes political discussions extra toxic is that because there are hundreds of millions of dollars spent on propaganda using the most sophisticated psychological techniques available, everybody is full of outrage-inducing hatreds against "the other side". This makes it really hard to have a constructive exchange of ideas without falling into a minefield of unexamined, implanted mental prejudices, even if you try really hard.

My least popular comment ever was downvoted for trying to point out the problem with assuming that the people who fought against the healthcare bill are the same ones who hate the idea of SS/Medicare reform. Are we really so camp-y in our politics that you can just assume that the people who don't like something you like are on 'the other side', and that we can ascribe the opposite of all our other preferences to those people as well?

It's not just r/politics—that's just one example.

r/programming is another good example. The community there is very hostile and actively downvotes and hates on most new things that aren't C++/Java/.NET/Python. It's been stereotyped as a bunch of overly angry .NET/Java devs; not sure how accurate that is, but the angry part is right. It's just a waste of time.

I think it's grown that way over time. Used to be very focused on functional and dynamic languages back in the day. Hell, Perl was the hot topic back in 2005-2007 era reddit.

Right, which just reinforces my initial statements above. Popularity breeds trash. Forums move from a small group of passionate users to the general population. This is where quality deteriorates quickly.

It's not just politics, it's anything sufficiently popular to attract a large portion of the general public. Only niche categories, with a small group of passionate people, work. Otherwise, it turns into a free-for-all.

Agree. Only the small subreddits with a tight focus work for me.

Not so (imo). The neutralpolitics subreddit has grown continually for some years now, especially in the last 12 months, and I'd say that the moderators are doing a solid job of maintaining appropriate discussion and keeping things where they were intended at the start: rational, sourced, with a heavy hand on emotional content and personal attacks.

Similarly, I dropped r/gaming years ago, but r/games has been reliable enough and good enough for such a period that it's evolved into my default place I look for gaming news. (The League of Legends subreddit also does pretty well on content and moderation, for something running at the scale of some of the defaults).

The problem I have with reddit, usually, is discovery - it's difficult to find subreddits that are popular enough not to be dead, have the content I want, and are managed in good fashion to prevent the usual content/comment trash.

/r/politicaldiscussion was created as a response to the malarkey that has become /r/politics and they place an emphasis on civil, reasoned debate. May be worth your time.

Thanks, but I'm just over Reddit in general. I quit about a year ago.

Can't really say I blame you. Found any other interesting communities other than HN, obviously since we're both here?

Not really; I'm spending my time on side projects and hiking.

I don't think this comment adds to the discussion at all. The comment is actually a great microcosm of the dearth of comment quality. 1) You would never know about this reply unless you sought it out. 2) Once you replied to it, you said nothing of value to anyone. You even threw in an off-topic item. Let me guess, you don't have a TV either? Perhaps you are a vegan....

This is what is flawed about comments. It becomes an ego circle jerk... which is necessary to even generate comments in the first place ("oo look how much Karma I get") but ultimately destroys a conversation.

As a very rough heuristic, subreddits with subscriber counts below about 10,000 seem to be able to maintain what makes them great (if they can be said to be great) without moderation.

You have a much higher percentage of users who are deeply interested in the raison d'être of the subreddit.

Beyond that, you either need moderation or have to make a new sub.

I actually enjoy articles that allow comments, because I like to read what people have to say about it before I actually read it. I don't really know why, maybe to gauge whether or not I want to get more information about the topic that the article title doesn't already provide.

I haven't read the article this thread is about yet, either, because the title gives me all the information I need at this moment. I know how crazy the comments were getting for NPR, which I started reading regularly a while back because they allowed comments on their posts. Now, I skip the comments if I'm reading an NPR article because most of the time it's some guy spewing something about how Obama is destroying the country when the article has nothing to do with politics in the slightest.

The problem is that you've now created a bubble for yourself.

People did this on a national scale, then are surprised to see Donald Trump have so much support.

Too much negativity is debilitating, but sticking your head in the sand has its downsides.

Same boat. No longer frequent Reddit except for a very small number of subs.

I have Adblock/Tampermonkey rules to hide comment sections from most news sites that I frequent. (If someone could point me to an extension that handles this, I'd be quite happy...)
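For anyone wanting the same setup without Tampermonkey scripting, uBlock Origin's cosmetic (element-hiding) filters can do this directly. A sketch of what such rules look like; the selectors below are hypothetical examples, since every site names its comment container differently and you'd need to inspect each page to find the real class or id:

```
! Hide comment sections and comment counts per-site
! (site names and selectors here are illustrative, not verified)
example-news.com###comments
example-news.com##.comment-count
othernews.example##section.article-comments
```

The `##` syntax hides any element matching the CSS selector on the named domain, which is exactly the "visual noise removal" described above.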

I consider comment sections (or even comment counts) to be visual noise that distracts from grokking the actual content.

It's unfortunate, because I like hearing a variety of perspectives. But without highly aggressive curation, reading comments is a net negative on popular sites.

So true; maybe 10% of comments on reddit have actual value anymore. The most valuable sources of info on the web used to be forums and comments; now there's very little. I used to search Hacker News comments for useful info, but that's yielding less and less. We really need a good place for expert info and opinions. Hopefully my friend's reddit-like site, with a whole new voting algorithm, will help fill this void.

I've always thought of technology as being the sole driver behind the fragmentation of communities into small niche communities. It's interesting seeing that comments appear to be one aspect of technological change that drive people to seek niche communities (going off of another comment that it's better to frequent smaller subreddits rather than the larger, default ones).

Not sure where I stand on this one, but when we removed a distribution list for all the condo owners, all the negativity (which usually came from a few people) stopped, and all the owners lived happily ever after. So maybe taking some of the discourse offline is not as bad as we think. People now come to condo meetings to talk about burning issues.

you had me until this:

> Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.

that statement floored me

because historically, HN has been one of the most toxic comment forums I've seen. I've kept coming back despite it, not because of it. there's enough signal to justify it. but it's a kind of low-grade toxicity: a weird mix of passive aggressiveness, disagree-based-downvoting, "cite paper!"-ness, minutiae-level edge-case pedantry (that misses the forest for the trees) and neverending humblebragging.

To give just one example, I can't think of any place I've seen more humble-bragging and stealth-but-blatant self-promotion than on HN. Whereas on Reddit I know I can always read discussions where the participants are friendly, funny, insightful, mature, organic, etc. Not always, not everywhere, but I don't have to wade through a shit salad like I've had to do here on HN for years.

I do think it's gotten better in the last year, maybe due to the work of dang and some of the other UI improvements.

I do agree that on sites like CNN and YouTube, the comment threads tend to be overrun with lowest-common-denominator behavior. Lots of noise there.

If this is one of the most toxic forums you've come across, you must be very fortunate almost to the point of being sheltered. Not saying this is the best ever, but it's nowhere near the worst. The worst traits on HN seem to be an abundance of nitpicking with some disagreements on a few pet topics. Many other sites have active anti-social posters that abuse users for fun, start flamewars for the "lulz" and will argue any side of any issue that causes grief to someone. Not even close to seeing that sort of nonsense here.

you're right and wrong. :-) (which is the best kind of right/wrong? haha)

what I mean is... I tried to make a subtle point, but it didn't come across. what I was trying to say was this is one of the most toxic sites I've seen for a certain mix of toxic behaviors: mostly passive-aggressive, hyper-nerdy/autism-spectrum types of comments, humblebragging, etc. I agree with you wholeheartedly, from my own experience that yes there are other sites where there is more direct abuse, more direct trolling and harassment, which doesn't happen (or nearly as much) here on HN. what I'm saying is, on the flip side, I see a lot more of a certain kind of... there needs to be a word for it... the slang terms I learned for it were douchebags, pinheads, compulsives, booksmart-yet-streetdumb, etc. I just have never encountered that as much over on Reddit (itself a broad category because so much variance between subreddits there, and front page vs not) as I have here on HN.

again, I wasn't saying there's no signal here, and no politeness. I see lots of politeness and maturity here. also lots of muck. thus my slang term "shit salad" -- mix of good and bad. my point was that there's a different assortment of bad behavior I see here than on Reddit, CNN etc.

it does appear to me to have improved a lot, especially over the last several months. we'll see. YMMV.

I mean, when you say:

>>a weird mix of passive aggressiveness, disagree-based-downvoting, "cite paper!"-ness, minutial edge case-oriented pedantry

You are basically describing software developers.

They are passive aggressive because they don't like direct confrontation.

They downvote when they disagree because if they disagree that means you are wrong.

They nitpick on edge cases because edge cases are what they deal with everyday in their code, so they can't help it.

So really, it's not HN you have an issue with, but the developer crowd. ;)

The nit-picking and edge-casing is what spoils it for me. You can write 500 words about a general topic and someone will reply, "See in that sentence, you said 'all' there instead of 'some' and I can show you a counter-example! Your entire message is therefore wrong!!"

bingo! well said. part of why it's become, for me, that whenever I make a comment on HN, I almost always regret it. I'm so tired of having to think about filtering my comments to cater to every possible fine-grained exceptional case, or conforming to P.C.ness, or this site's particular participant hive-mind (which seems focused disproportionately around what it's like being a current semi-autistic teen/twenty-something white American male programmer of a relatively privileged background financially. Sorry if this hurts, but it's the best broadstrokes description I can think of for the median commenter personality I've seen here on HN.)

HN is/will decline. Its fate according to Peterson's Hellbanned Laws: http://blog.timrpeterson.com/2016/06/03/hellbanneds-laws.htm...

Then make sure to NOT talk about HN. Hidden gems only stay gems if you do NOT blab about them.

HN doesn't appeal to the general population. It works because it's niche to a specific crowd, that is probably more educated, and working towards a specific goal of improving things in general.

Rule No. 1.

I see comment moderation as one of the 'unsolved problems' left in this generation of the web. When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise.

Web technology scales, journalism scales (poorly, but a relatively small publication can pull big traffic), but right now there's just no substitute for someone manually checking reported comments and banning problem users. When you have a site with as much traffic as NPR, that would probably take dozens or hundreds of people, and these orgs are loath to outsource it to low-wage countries like the big web players do, mostly due to the ethical challenges.

Maybe moving comments to people's own social groups on FB/Twitter will help to defray the costs; I don't think they're really seeing any discussion value for the most part.

What are your thoughts on incentivizing constructive comments? I've seen publishers (The Guardian, if memory serves) select thoughtful comments and re-print them as micro-articles in their own right. This seems to solve part of the problem by setting an established, if not mostly-objective, standard for comment quality: journalistic publication standards.

As such, bias and opinion are welcome, provided that they're analytical, verified by fact to a reasonable degree, and respectful of common etiquette. The genius in this approach, as far as I'm concerned, is that it manages to preserve the original purpose of comments: scalable content-generation!

Clearly, moderation is a Hard Problem, but one that I think benefits from an economic/incentives analysis. One conclusion I've drawn is that restricting comments to paid-consumers makes banishment and sock-puppetry costly enough that moderators can mop up the rest by hand.

To ask a specific question: what, exactly, remains "hard" with this approach? Do you think "free to read / pay to comment" is viable, in principle? Do you think the promise of publication is not a good incentive? Why?

I think that incentive idea is great, and is a smart move to build a community, particularly when you're trying to draw subject matter experts. I like how some of the Ask<X> reddits do it, by flagging people with verified advanced degrees. People think that news sites are afraid of conflicting opinions, but in my experience that's nonsense, it just has to be well thought out and not "DEATH TO <ISRAEL/ARABS/SUNNI/TURKS/AMERICA>", which is the vast majority.

It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and stop them from getting buried.

I'll err on the side of caution with revealing employee counts, but in my experience many of the FP/Atlantic/Mother Jones/Weekly Standard/Pick your midrange site are running on a single digit to low double-digit number of web production staff, many of whom are also trying to make a writing, article layout, or fact-checking quota. The suggestion that these magazines can either get those staffers to moderate tens of thousands of comments per day, or quadruple their web staff just to improve the comments ignores the business reality.

User moderation in the normal HN/Reddit way doesn't work well on news sites; it's too easy to game or brigade, and news sites can't or won't add unpaid moderators as gatekeepers.

That's what's hard; creating comments is scalable, filtering them is not. Leaving them unfiltered doesn't work either.

Thanks for your reply!

>I think that incentive [is good], particularly when you're trying to draw subject matter experts.

You bring up an excellent point. One of the fundamental problems with comments, I think, is that it creates a space in which ignorance and expertise are equally-weighted. In fact, it's often worse than that for reasons we all know: interesting issues are hard to distill into 300-or-so characters, and short, simple points are often more percussive.

Vetting credentials is a very good option IMHO for certain forums but not for others. Reddit's /r/askscience is an example of a forum in which it works well.

>It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and stop them from getting buried.

I wonder if this problem can't be solved through the use of machine-learning to classify comments into high-versus-low quality by grammatical and semantic analysis. This kind of first-pass filtering could, at the very least, help throw out the obvious trash and pre-select candidates for recognition.

Such a system can be tuned to minimize false-alarms (shitposts getting flagged as good), which I think represent the most problematic of classification errors. This is a nice problem-space for ML because the increase in misses implied by a bias against false-alarms doesn't degrade the service much: not having one's comment selected for re-publication is unexceptional.
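To make that first-pass idea concrete, here is a deliberately crude sketch. It uses hand-written surface features rather than a trained model, and every feature, weight, and threshold is invented for illustration; a real system would learn these from tagged moderation data:

```python
import re

def comment_features(text):
    """Crude surface features of a comment; a real system would learn these."""
    words = text.split()
    shouting = sum(1 for w in words if len(w) > 1 and w.isupper())
    return {
        "length": len(words),
        "caps_ratio": shouting / max(len(words), 1),
        "links": len(re.findall(r"https?://", text)),
        "exclaims": text.count("!"),
    }

def junk_score(text):
    """Score in [0, 1]; higher means more likely to be noise."""
    f = comment_features(text)
    score = 0.0
    if f["length"] < 5:                 # drive-by one-liners
        score += 0.4
    score += 0.5 * f["caps_ratio"]      # SHOUTING
    score += 0.2 * min(f["links"], 3)   # link spam
    score += 0.1 * min(f["exclaims"], 5)
    return min(score, 1.0)

def triage(comments, threshold=0.6):
    """Split comments into (needs_human_review, auto_discard).

    Raising the threshold biases the filter toward misses rather than
    false alarms: borderline comments go to a human, not the bin."""
    review, discard = [], []
    for c in comments:
        (discard if junk_score(c) >= threshold else review).append(c)
    return review, discard
```

Tuned this way, the filter only auto-discards comments that score badly on several features at once; anything ambiguous still reaches a moderator, which addresses the censorship worry at the cost of a larger review queue.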


RE:Machine learning: I think there are two problems with that approach, one cultural and one technological.

The cultural issue is that many news orgs are still run by people for whom the idea that technology could accidentally censor a valid criticism or ban a decent voice is just too risky. I think this is changing, and many newsrooms today are a little more fluid than when I really cared about the problem 4 years ago.

The tech issue is a little bit of a cop-out on my part. An ML approach is super attractive to me as a techie. Google (YouTube), Facebook, NYT, WaPo, and tons of other billion-dollar orgs have this problem, and could make loads of money by being seen as better communities.

On the more guerrilla side, hundreds of subreddits have automoderators written by savvy, caring moderators.

They have terabytes of training data, already tagged, and world class ML experts on staff. If it was a tractable problem with business value, why wouldn't they have fixed it? I'm guessing it's the sort of thing that looks doable from the surface, but you get buried in the details.

Again, cop out answer, so please go prove me wrong!!

>cultural issue

I understand, and I think that's probably the most difficult problem of the two. I'd just like to point out -- in the interest of discussion -- three things:

1. Pre-filtering for moderators is different (much safer) than auto-banning by a bot

2. It's valid both to filter informed opinions that are poorly expressed, and for a publisher to have a preferred "voice", i.e. a style of writing that it favors.

3. The argument can be made that machines are no more biased than human editors, and that in many cases, the biases of the former are known. As a corollary to this point, there exist certain ML techniques (e.g. random forest classifiers) for which the decision process of an individual case can be retraced after the fact.

How do you think publishers would respond to these counter-points?

>technical problem

Counter-cop-out: someone has to be the first!

Somewhat-less-cop-outy-counter-cop-out: by your own admission, certain sites (e.g. Reddit) have high-quality automoderators.

I would argue that the problem is "approximately solved" and that this is sufficient for the purposes of moderating an internet news publisher. Again, I would make the signal-detection-theoretic point of my previous comment: I can selectively bias my automoderators in favor of reducing either false-alarms or misses. Of course, this brings us back to the cultural problem you mentioned.

By this I conclude that the bottleneck is cultural, which brings me to a follow-up question: what do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?

omginternets, we're starting to run into HN flame-war restrictions, and I'm working, so apologies if responses come slowly.

> How do you think publishers would respond to these counter-points?

In my experience 1 and 2 are fine, but 3 is actually a _net negative_ to some of them. People who by and large have come up through 10+ years of paying dues in a 'The patrician editor is always right' culture _hate_ giving up control, even when it makes their jobs easier.

Editors I've seen have balked at things like Taboola and Outbrain, despite them being testably better than human recommendations, and saving staffers work. It's a fair argument that picking which stories to promote is a core part of the editorial job more so than comment moderation, but the attitude match is there. Editors at one DC media org I didn't work for shot down A/B testing any new features in the first place, because there was an assumption that the tech staff would rig it!

I don't want to paint 'editors' with too broad a brush, but there's definitely a cultural reluctance at the high level to automated decision making.

> What do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?

It doesn't matter to you and me. We think like HN'ers, where there are trillions of internet packets flowing around every day, and a few will get lost. They think like hometown newspaper editors parsing letters. When you take on the responsibility of being a gatekeeper, screwing it up is a big problem, every time.

I think increased tolerance is coming from more exposure to the sheer volume (every week at FP the website gets more visits than people who have ever read the magazine in its 50 years of existence combined), and a bit of throwing the hands up and saying "who knows".

Again, I'm speaking for a pretty specific niche of old-school newspapers and magazine people turned editors of major web properties, because those are where my friends work. Things are probably different at HuffPo or Gawker or internet native places, but clearly not that different because their communities are still toxic.

> I would argue that the problem is "approximately solved"

So I disagree here, but don't have evidence to back it up, other than years-old experience with Livefyre's bozo filter, which we didn't put enough work into tuning to give it a super fair shake.

Taking spam comments as mostly solved, I think there are 3 core groups of 'noise' internet comments:

1. People who don't have the 'does this add to the discussion' mindset, to use HN's words. cloudjacker's and michaelbuddy's comments below demonstrate this pretty well. I'd lump cheap-shot reddit jokes in here as well. They're not always poor writers, or even negative -- "Great article! love, grandma". Which falls back into the ethics of filtering them. I suspect that this is 80%+ solvable.

2. The 'bored youth' and 'trolls' group. This is actually the worst group I think, because these are the people I suspect that make death threats and engage in doxxing and swatting. Filters will catch some of these people, but they're persistent, and many of them are tech-savvy and reasonably well educated. They can sometimes be hard to tell from honest extremists. A commenter from group 1 who is personally affronted can fall into this group, at which point they become a massive time suck. Hard to solve, but verified accounts help here in the US case.

3. Sponsored astroturfing. Russia, Turkey, (pro/anti) Israel, China, Trump (presumably the DNC?) all have large paid networks of people criss-crossing the internet all day trying to make their support base look larger than it is. Especially in the US politics case, they often speak good English and are familiar with both sides' go-to logical fallacies. They'll learn your moderating style in a heartbeat, and adapt. Unsolvable.

Anyway, if someone builds a good bozo filter, they're almost certainly a zillionaire. I hope it happens, but I suspect we'll just start looking back on website comment sections like usenet, as a good idea that didn't scale very well, and find something better.

Taboola and Outbrain's recommendations are so pathetically insulting, and the tracking so obvious, that I've both blocked their domains (router DNS server) and specifically set "display:none;" properties on any CSS classes/IDs matching their names or substrings.

It's pathetic bottom-feeder crap.

Maybe if I fed the beast through tracking, I'd see higher quality recommendations, but I won't, and I don't. They only serve to tell me just how precariously miserable the current state of advertising, tracking, surveillance-supported media is. I'm hoping it will crash and burn, not because I want present media organisations to die, but until they do, we don't seem to stand any chance of something better.

(What better, you ask? Information as a public good, supported by an income-indexed tax.)

I was referring specifically to their paid same-site recommendation engines. So you drop it into an article, and it recommends other articles from your site. In my experience it's decent to good, depending on what metadata you provide it.

I agree that the '10 weight loss secrets' promoted junk to third party sites is bottom scraping.

It tarnishes both brands. Taboola and the hosting site.

Sufficiently that the in-site referrals fail for technical reasons.

I really disagree. Yes, Taboola may be promoting literally ANY content, even spam. So yes, I blocked them. But currently Outbrain is really operating as content discovery: I haven't found any content that abuses me as a reader. Not yet. I know they have strict guidelines for their advertisers as well.

Reading the other reply thread with slowerest gave me another possible solution, too.

Perhaps the comments sections for journalistic pieces from organizations like Ars, NPR, NYT, local news, etc could be more of a competition (like Slashdot). Top 300 comments get preserved, leave it open for a month with no comment limit and some light moderation, and let the conversation go wild (I like Reddit's system for this), then delete all but the top 300 at the end.

Adjust "300" and "top" to fit your organization's needs, just make sure they're clearly defined. Would also help limit the scope for an ML-based solution, too. :)
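The pruning pass described above is simple to sketch. This is a minimal illustration, assuming each comment carries a score and a timestamp; the `Comment` shape, the 30-day window, and the ranking rule are all invented for the example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Comment:
    author: str
    text: str
    score: int          # however "top" is defined for your organization
    posted: datetime

def prune_thread(comments, opened: datetime, keep: int = 300,
                 window: timedelta = timedelta(days=30)):
    """After the open window closes, keep only the top-N comments."""
    if datetime.utcnow() - opened < window:
        return comments                      # thread still open: keep everything
    # best-first; ties broken by earlier post time
    ranked = sorted(comments, key=lambda c: (-c.score, c.posted))
    return ranked[:keep]
```

Adjusting "300" and "top" is then just a matter of changing `keep` and the sort key.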

For news sites with a paid component, they could allow comments only from subscribers / donors. Having a gate which involves money will improve the conversation somewhat. I'd even go a step further and make comments invisible except for subscribers. People creating trial paid accounts could see the comments but not comment themselves. This latter step would prevent astroturfing from firms willing to pay $10 for a trial but not $100 for an annual subscription.

Moderators would still be needed but their workload would be reduced. And there would be money available for them since many would subscribe / donate just to be part of the community, which would make moderation less of a drain and more of the core profit-making.

> What are your thoughts on incentivizing constructive comments? I've seen publishers (The Guardian, if memory serves) select thoughtful comments and re-print them as micro-articles in their own right.

I don't think you're correctly identifying the problem. In my experience, the problem with comments, especially on news sites, is a glut of bad comments, rather than a lack of good comments. This solution doesn't disincentivize bad comments.

The solution to bad comments is deleting them before they are even visible to other users. Deleting aggressively, as is done in certain subreddits (r/science), may seem offensive to naive users who just want to add their "2c" to the discussion, but it's the only effective AND honest strategy: if your comment adds little of interest, it's worth nothing. The bar should be very high; the more popular the website, the higher the required quality. But in the end I think NPR are making the right choice. Comments on websites are not a constitutional right, after all.

The aggressive moderation in /r/science is quite honest compared to other subreddits, which is partly why its moderators attract less controversy when compared to others such as /r/news.

The slashdot system for categorising comments seemed to work really well at making the highest quality comments stand out. I wonder why other sites haven't tried something similar; I don't think I've seen it used elsewhere.

it seems sites find it too complex to implement.

slashdot nailed moderation, no one has attempted something similar. most systems are simple up/down vote or like/report

i am also starting to wonder if the age group being hired to implement "social" for websites is now young enough to have missed slashdot in its prime.

the fact that people are still brainstorming from scratch instead of talking about how to improve slashdot's model reeks of reinventing the wheel because they never heard of it.

> i am also starting to wonder if the age group being hired to implement "social" for websites is now young enough to have missed slashdot in its prime.

That's me! Can you explain the Slashdot model and why it worked? Or point to a good write up about it somewhere else?

Been a while, may not be entirely correct:

Slashdot's model was perhaps a little overcomplicated, but my favourite feature was the ability to tag up/down votes with flavours. +1 Informative was different to +1 Funny, and "Factually incorrect" was a different downvote to "Off-topic spam" (whatever they were called).

Other quirks off the top of my head: it capped at +5 and ... -1, I think? The score represented a thing closer to the up/down ratio than "Facebook likes". There was a dedicated -1 Overrated moderation for "I don't disagree that it's interesting, just not +5 interesting".

Also, logged-in users got a fixed number of moderation points at random intervals, and you couldn't moderate in a story that you commented in. I'd like to believe this discouraged "throwing away" points on low-effort joke comments, but I'm not sure the facts of Slashdot comments entirely bear that out.

EDIT: And then there was meta-moderation...
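The description above (flavoured votes, a score clamped to the -1..+5 range, reader-side filtering) can be sketched in a few lines. This is a rough reconstruction from memory of the thread, not Slashdot's actual implementation; the tag names and the starting score of 1 follow the comments here, the rest is assumption:

```python
UP_TAGS = {"Insightful", "Informative", "Interesting", "Funny"}
DOWN_TAGS = {"Offtopic", "Flamebait", "Troll", "Redundant", "Overrated"}

class Comment:
    def __init__(self, text, logged_in=True):
        self.text = text
        self.score = 1 if logged_in else 0   # logged-in users reportedly started at 1
        self.tags = []                        # flavour history: ("Funny", +1), ...

    def moderate(self, tag):
        """Apply one tagged moderation; score is hard-capped to [-1, +5]."""
        delta = 1 if tag in UP_TAGS else -1
        self.tags.append((tag, delta))
        self.score = max(-1, min(5, self.score + delta))

def visible(comments, threshold=3):
    """Reader-side filtering: only show comments at or above a chosen score."""
    return [c for c in comments if c.score >= threshold]
```

The `visible()` filter is the "killer feature" a later reply points at: readers could skip straight to the highly rated comments instead of wading through everything.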

Slashdot's method of scoring comments was overly complicated and probably did not produce any better results than reddit-style voting. However, Slashdot's killer feature was that the reader could filter by comment score and thus only read the 'good' comments, and not have to wade through hundreds of replies.

correct, they were better than reddit because they let the user sort based on their preference. slashdot generated a ton of metadata that described their content, and then gave you the power to intelligently utilize that metadata.

I suggest reading the FAQ


especially all the Moderation, metamoderation, and Karma sections!!

Slashdot's moderation system was vaguely effective. Complete crap rarely rose to the top.

A great deal of high-quality commentary was buried, however, often the best and most informative. That's fairly much par for the course.

Much of the early vibe on the site came from the fact that it was simply where intelligent people were commenting online -- especially the early Free Software crowd (well, early in terms of Web 1.0 -- there was the whole 1980s and early 1990s contingent as well).

ESR (before he went fully whackjob mode), Ted Ts'o, Alan Cox, Bruce Perens, Rasterman, and others.

Much of that group seems split amongst HN, LKML, LWN, and Google+ these days, along with some blogs.

Interestingly for this discussion, CmdrTaco is now at WaPo, after a brief spin through their various internal startups.

And WashPo's clickbaitism is getting unbearable.

I'm about ready to blackhole that domain.

When I was delivering newspapers as a small child many aeons ago, the best page of The Guardian was 'letters to the editor'. The rest of the paper was pretty good back then, there was no email, so anything printed in the 'letters to the editor' had to be posted in, to appear some time after the events in question.

Needless to say, an event happened and was reported the next day, but it could be a whole week between the Trump-of-the-day saying something and comment appearing about it. All of this was filtered by the 'editor'; however, you did have frequent letters from the likes of Keith Flett, who somehow got his letters published more often than the other 3-5 million readers (as it was back then, just UK sales, with poor distribution in places like Birmingham).

There were no 'likes' back then so you had to have something to say to bother writing in.

How do we get a digital equivalent? I don't buy the dead-tree paper these days so no idea if 'letters to the editor' still exists, but, back then it was good, very good.

It's interesting that simply restricting immediate commenting might at least deter useless comments. People who comment in order to elicit a response, I suppose, probably have less important things to say. Or maybe they wouldn't say them if not granted the immediate satisfaction.

I assume it would kill some collaboration/innovation like on HN or a meaningful subreddit, but maybe no one really ever has anything meaningful to say when reacting to general news...

I guess it would also produce duplication from many people not knowing something was said already (however, the duplicate reactions could be monetized later down the line maybe...)

A podcast I frequent does this sort of thing. If your comment is read on the podcast (and they read one a day) then you get sent a .NET Rocks coffee mug. Which is kinda neat.

The podcast is .NET Rocks and their comments seem to be pretty good overall.

Nobody old enough on here to remember Slashdot's moderation system?

Not everybody could promote or demote comments. You got randomly assigned the ability to moderate comments so when it came your turn you took it _seriously_.

That community had some of the highest quality comments. Then somewhere in the mid-2000s it got super anti-Microsoft and anti-anything-not-F/OSS. I'll give them credit; it probably reflected the highest quality comments of their userbase at the time.

Slashdot's moderation still had some problems - which might be inevitable, I don't know.

There was a big bias towards early comments - moderators had to see your comment before they could upvote it to the top of the page, but once it was at the top more people would see it and keep it there; so a comment that would score well if posted as comment 10 would score nothing if posted as comment 50.

And karma tended to reward /popular/ comments, which were often things the hive mind agreed with, rather than high-effort comments. Discussion about DRM? Get in early with "DRM is impossible because" or "format-shifting should be a right" for a quick high score.


At one point I actually helped design something somewhat better (which was implemented in part). I've had further ideas since.


> That community had one of the highest quality comments.

One of the biggest differences between Slashdot and a site like reddit is simply size. Reddit is now the 8th or 9th largest website in the U.S. according to Alexa; it's getting as big as Twitter and is larger than Netflix. Slashdot at its peak popularity wasn't even a drop in that ocean of traffic & pageviews. When you get that big, your problems are of a different sort, requiring different solutions. Hell, I think reddit has single subreddits that are bigger than Slashdot was at its peak.

This is important because it's easy to have "high quality" when your traffic is low. It's easy to moderate and easy to keep people on-topic. I speak from experience -- I moderate one or more default subreddits on reddit, as well as smaller subreddits, and the smaller ones are much easier to handle. They're virtually on autopilot with minimal moderation required. The larger ones on the other hand... It's like a non-stop war.

While I think there may yet be some sort of NLP/ML-based filtering that can improve the signal to noise ratio, the fundamental problem is that the effort is incredibly asymmetric.

It takes an author far, far longer to craft their work than it does for someone to heckle it.

If people weren't driving up page-views by coming back to the same article to see if their comment was liked or replied to, I think this would be a very easy decision for most sites: at some point you are responsible for all of the content on that page.

Perhaps micro-transactions could slow new posts down enough to make a difference. Maybe use the funds to pay people for high quality comments.
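As a toy illustration of the NLP/ML-style filtering mentioned above: even a crude keyword heuristic can rebalance the asymmetry a little by routing likely heckles into a slower review queue. This is nowhere near a real model; the word list and threshold are invented for the example:

```python
# Terms that tend to mark low-effort or hostile comments (illustrative only).
NOISE_WORDS = {"idiot", "stupid", "shill", "wake up", "sheeple"}

def noise_score(comment: str) -> float:
    """Fraction of flagged terms present, as a crude signal-to-noise proxy."""
    text = comment.lower()
    hits = sum(1 for w in NOISE_WORDS if w in text)
    return hits / len(NOISE_WORDS)

def hold_for_review(comment: str, threshold: float = 0.2) -> bool:
    # The asymmetry fix: cheap-to-write heckles get an extra, slower review step,
    # while everything else publishes immediately.
    return noise_score(comment) >= threshold
```

A real system would swap `noise_score` for a trained classifier, but the routing logic around it would look much the same.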

A solution a German blogger did was kinda funny.

For every post you made, you had to enter a captcha.

Then, if your comment had words that were likely to be hateful, it’d show your comment again to you, and force you to enter the captcha again.

The worse the likely quality of your comment, the more captchas you’d have to enter.


> "these orgs are loathe to outsource it to cheap countries like the big web players do, mostly due to the ethical challenges"

But suggesting people engage instead on Facebook brings a whole new set of ethical concerns. (1) Facebook manipulates users. (2) Facebook reorders feeds. (3) Facebook would lower the priority of conservative news sites. And let's not forget that Facebook is probably outsourcing moderation anyway. Plus, Facebook commenters can be just as bad as regular site commenters.

> (3) Facebook would lower priority of conservative news sites.

I worked on the trending product. This did not happen. The whole thing goes back to one guy complaining that he couldn't pick Breitbart for the highlighted slot for some story because it wasn't on the list of approved sites. And this list is actually available here https://cdn.ampproject.org/c/newsroom.fb.com/news/2016/05/in... Of course no one ever asks why he wanted to highlight a controversial site instead of, say, a boring, straightforward wire service report like the AP.

Of course the story still appears, and Breitbart could appear in slots 2-N via the personalized ranking algorithm, so it's not like it was suppressed. He just wanted to shove it into slot 1, where everyone would see it.

Sadly I feel that this is one of those cases where it's impossible for the correction to ever overcome the initial misinformation. On average, people do not accept new information when it refutes their existing knowledge base. This is doubly so in tribal areas like politics.

Yup. It's basically saying "we can't afford this, so we'll make it not our problem."

FWIW FP briefly used an embedded facebook widget, and a nonzero percent of their livefyre users logged in via FB.

It did little to nothing to stop abusive comments. The HN crowd cares a lot about what sort of history follows around our names and our handles. Many others, both in the western world and abroad, do not.

There's a German blog that used to be popular (blog.fefe.de) without a comment function. So some people built a website that offers the same blog, just with a comment function. They built in a captcha that fails with the probability that your comment is a troll comment.

Here's the talk about it (in German): https://www.youtube.com/watch?v=ZG4FawUtYPA

See also


- a chrome extension

I know someone running a company that aims to solve exactly this problem, and they were attempting to sell to NPR, too. Last I talked to them about it, they said NPR seemed interested but has the typical years-long enterprise buying cycle. So, this news is really too bad.

What's the company?

I agree, this is an unsolved problem for the web.

Unfortunately at least 90% of internet comments are trolling, vitriolic, ignorant, generally useless, poorly written, unhelpful, add nothing to the topic, and basically serve as web pollution.

Yes, I think using FB login pretty much solves the problem as it is today. Take a look at civilbeat.com a regional news site by Pierre Omidyar (The Intercept). I'm fairly certain they only allowed comments via FB login for years. It meant a lot fewer comments than they would have gotten but they were all legit. Now it looks like they allow FB, Twitter, or local auth. But the comments are still mostly ok. Maybe they are looking for more activity by easing the requirements and believe they've built a culture of good commenting?

I think describing FB login as "solving" the problem is definitely overstating it by a lot. I've seen plenty of dumpster-fire comment sections that allowed only FB users to comment.

I'm not sold on it because a lot of newspaper web sites use facebook comments and any topic about politics, race, or gender seem to be full of people making hateful comments.

Seems like it was quite a bit "better" than the alternative (basically anonymous user logins making comments) for my home town's recent transition to FB comments, for what it's worth...

The one place I've actually found awesome comments was "the economist" (well, HN isn't bad either), and the ny times is kind of OK. Everywhere else feels pretty iffy...

It solves the problem for me, I guess. I won't be commenting anywhere you have to log in to facebook because I don't want them tracking me all over the web.

While a facebook account gives some legitimacy, I also like sites where you can post anonymously or at least pseudonymously.

If you are using Tor Browser in Tails over a coffee shop wifi while you are laying down under a blanket in the back of a truck driven by a stateless hobo with no fingerprints who you intend to murder later in a country with a healthy democracy, you are probably still not anonymous. If you are not doing those things, you are definitely not anonymous.

Sure, but you're more anonymous.

So the solution is an unethical, privacy-invading, profile-building company run by a guy who thinks his users are "dumb fucks"? No thank you.

"When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise."

Those were your users.

Assuming you're trolling, but at the risk of feeding:

Someone who posts "Sir your magazine and Hillary Clinton are tools of Israel and should be killed by Hamas, God willing" on every story about the State Department, or "Oh $WRITER I see you live in DC and went to $COLLEGE, maybe I'll come pay a visit to the next alumni event and teach you some respect for $COUNTRY", isn't the target user for a major American publication. It doesn't want those kinds of abhorrent sentiments to live alongside its brand on its website, and is under no obligation to give voice to their ideas.

They're an exceedingly small percent of total readers (when they're even real readers), but a much larger percent of online commenters, hence the problem in the first place.

Even in the non-bot non-astroturfing case, the people who make those comments may be actual readers (although they're exceedingly unlikely to be paying subscribers), but they definitely fall into the bucket of 'can be filtered out, to no appreciable loss'.

They're users in the sense that the website is free, and anybody can be a user, but not in the sense that the publication has a duty to them, in exchange for their money or attention.

Aside from [bot] spam, I agree with the statement of Those were your users.

What OP really wants are the good comments, which is more than just spam filtering and also more subjective. If an ill-informed 13-year-old's comment would be considered low-value noise, website operators would need to engage in something resembling censorship, which has its own set of problems.

Don't have time to elaborate - but moderation tools actually link to many other deeper problems in meat space, and IMO lead to the kind of tools which-should-not-be-made.

The only solution is for trusted commenters to rate cryptographically signed comments. And they would only do so if they felt like doing so.

Disqus has more or less figured out comment moderation, in my experience. I've yet to see a Disqus-powered comment system overrun by undesirable content.

HN is failing at comments. In recent years, the community has deteriorated to the point where, for many articles, every single comment is grayed-out downvoted. That signifies quite a rift in the community. HN used to be upvote-intensive and excitement-driven, but today it's downvote-intensive and annoyance-driven.

Which articles are you reading? I have yet to see a single article where that has happened.

Turn on "showdead" and you'll see much more.

Often justified. Sometimes not.

Comments to most articles about Snowden, refugees or gender issues are like that.

Probably a signal that the user base does not find those issues interesting.

Snowden because it's nothing we don't already know, and refugees or gender politics because they always degenerate into political (i.e. not interesting) mud slinging matches.

On a side note, if a community with the general high quality and good moderation of HN can't have a good discussion on those topics online, I'm inclined to believe that having same is just plain impossible.

Personally, my thought process upon seeing one of these articles is something like:

1) Ugh, another one. Let's check the comments..

2) As expected, a dumpster fire. Nobody even RTFA. Let's look at the article..

3) Nothing even remotely new or interesting. Who voted this up? Flag.

It's far easier to manipulate systems than it is to accurately reflect either your typical reader viewpoint, or an intelligent and informed viewpoint. This is a classic failing of any democratic system, election balloting included.

Early "democratic" systems were often anything but -- about 14% of Athens' citizens could vote, and about 6% of the US at the time of George Washington's election. There are arguments for a broader electorate, but they come with distinct problems.

Vote brigading in particular is a standing issue on almost all online moderation systems. Some sort of trust cascade might help. It's what, say, the US electoral college was meant to provide initially, though how much of that function remains (and how it might manifest) is rather in question.

As for Snowden, a counterpoint is that some people see this as an issue which requires constant reminding. Advertising and propaganda both work through repetition, and sometimes the truth gets a chance for that as well. There's certainly enough repeat traffic on other topics at HN. (Though yes, many of those get beat down in the submission queue.)

If the article isn't interesting, it wouldn't have been voted up.

Marking down the comments indicates a desire by some to enforce groupthink. Why? Because many people use votes to indicate agree/disagree instead of as a quality metric.

You don't think agree/disagree votes apply to articles like they apply to comments?

And given what I've seen of the algorithm here, page positioning is a lot more complicated than vote count weighted by time.

I think it's harder to agree/disagree with the typical headlines featured on HN. Most articles on HN appear to be straight.

But let's say that two articles were in the queue, one pro-X, the other anti-X and the pro-X forces were dominant. Sure the pro-X article would hit the FP, but the anti-X forces would still comment on it and be down voted.

Also the bias is only visible in the comment section because down voted comments remain visible, whereas a down voted article gets flushed down the memory hole.

Doesn't make the issues not relevant. Do you argue that we're incapable of dealing with these issues for all eternity?

"Who voted this up?" is the community rift.

Relevance isn't the watchword for inclusion on HN, interestingness is. That's per the guidelines.

And you'll note I said "online".

Politics is interesting if you ask me, because it has the property of ruining every other interesting thing.

We live online.

Just because you want to argue about politics with people doesn't mean that people want to argue about politics with you! Maybe they do, sometimes, in some contexts, but if the social cues (i.e., downvotes) indicate otherwise, then maybe not at that time and place. There's nothing wrong with people not talking about stuff they don't want to talk about.

Also, internet forums have learned over multiple decades that otherwise interesting discussions can easily get derailed by people screaming at each other over unresolvable issues. If the community doesn't keep a lid on it to a degree, the quality of discourse goes into a downward spiral that it can never recover from. It attracts people who just want to argue about stuff and it drives away people who want to have interesting discussions. This has been seen time and time again, in newsgroup after newsgroup, mailing list after mailing list, web forum after web forum.

Holding back that inevitable decline is like fighting against entropy: if it stays popular, HN is almost guaranteed to decline, and become more and more like Slashdot circa 2010, right before it poofs out of existence and/or relevance. But if users actively push back against the tides of forum entropy (i.e., discussion getting drowned out by arguments), a forum can at least have a nice long run before that happens.

I think what people want to avoid on HN is the sort of discussions where people are just asserting hot takes back and forth to no other end than the act of publicly asserting hot takes. This was never fun to watch on Crossfire or First Take or whatever, it's not fun at awkward drunken family gatherings, and it doesn't fit in with the vibe of HN. It's invigorating to the participants but much less interesting to read, and for every poster there are hundreds or thousands of readers.

That applies to online forums just as much as it does to real life, some forums are just more focused than others (just like some households are way louder, more chaotic, and have more drama than others). Almost every place other than HN thrives on arguments, so at least there are plenty of places to have them.

The article says NPR was using Disqus. Looks like it didn't work for them, even with outsourced moderation.

I don't know enough about Disqus to render an opinion, but I do find it entertaining that the sample comments shown in the animation on their front page are entirely noise, in that they contain nothing more than a "Yay!" sentiment.

How has disqus figured out comment moderation? As far as I know, they don't make a big effort to create great comment communities. Do you have any extra details?

HN has far better comments than any disqus comment feed, on average, in my opinion.

It's that bad comments sink down but the amount of downvoting is not shown, so downvoting behavior is not reinforced.

HN used to have far better comments, now it feels like a battlefield.

I only see that on political posts.

Unfortunately, not much interesting happens outside of politics. CRISPR and exoplanets spring to mind as exceptions, but the software field has definitely stalled.

Politics seems to be the force that can bury any amount of advancements in other fields, hence interest.

95% of internet comments are pure trash and basically internet pollution. The other 5% can be a mixture of deep insight, thoughtful discussion, and relevant opinion. Sorting out the trash and insisting on quality comments is an unsolved problem, perhaps with an eventual tech solution.

The New York Times is probably the only site I know of that does comments well, and they are obviously heavily moderated. But, they're smart, sometimes funny, often insightful, and generally worthwhile to read.

Some general forums and social sites do comments reasonably well too, this one included. But Reddit is a toilet, and Facebook and Twitter are the dirtiest of cesspools.

The New York Times comments are free from trolls and spam, but it's a frustratingly obvious echo chamber when it comes to politics. I'm a liberal guy but I can't stand it. David Brooks wrote an interesting column (http://www.nytimes.com/2016/08/09/opinion/the-great-affluenc...) a week or so ago and most of the comments are just bashing him for being a Republican, as if that has anything to do with the subject matter.

i think your diagnosis is wrong. show me a conservative news outlet on the web with a high SNR of thoughtful and intelligent comments, free of frothing, conspiracy-laden bullshit. maybe the NYT is an echo chamber because modern conservative positions are so weak and contradictory, they can't stand the withering critique of a well-moderated forum. instead, they only survive in troll havens.

as for brooks' column, you might be missing some context. brooks has made a career of talking out of both sides of his mouth and (annoyingly) providing intellectual cover from the NYT for a plethora of bad conservative ideas. now that they're blowing up in his face, he's backing away from these stances.

this comment articulates it well:


"He should realize that we’ve been trying to bring the tribal ethos to the U.S. for a long time, with strong local communities providing the sort of help and social services that bind people together and take care of each other as we get older, or fall short in some way.

But He Who Talks with Forked Tongue likes to imagine an egalitarian utopia where 99 percent of us are quietly stitching blankets while a few get to hoard the vital resources. When the tribesmen and women protested and occupied Wall Street, Brooks nearly went on the warpath, and wrote a column in the Times entitled “The Milquetoast Radicals,” (10/11/2011) in which he castigated the unwashed hippies who dared to protest the insane degree of income inequality in this country."

I was not comparing NYT to any other news outlet. It's an echo chamber regardless of the fact that conservative news sites also have echo chamber comment sections.

The David Brooks article was just an example. When Bernie Sanders was still campaigning every comment on Hillary/Bernie-related articles was about how the New York Times is wrong and that Bernie is the best, people will learn about the political revolution soon enough, etc. I was a huge fan of Bernie and I got bored of those comments instantly.

neither am i comparing them. you're observing that the NYT comment section is free of trolls/spam but is otherwise a liberal echo chamber. that's another way of saying that it's lacking a counter-balance of intelligent conservative comments. i'm accepting that critique for the purpose of argumentation, and replying by asking you to look around and find anywhere on the web that has a majority critical mass of intelligent conservative commentary. once you realize that it pretty much doesn't exist, maybe that will lead to a different conclusion...

anyway, one of the few places i read that, for whatever reason, does carry an even mix of intelligent comments across the spectrum is interfluidity. for example:


Holy Hell, you are right. I actually have the distinguished privilege of having one of my NYTimes comments be a "Times Pick", meaning the ed. board actually read it and recommended it for insight, I suppose.

My comment was mostly meta, calling out people for missing the point of an op-ed. The op-ed was from a privacy/civil liberties person about why a "no buy" list for guns would be a bad idea. He wasn't arguing on the merits for or against gun ownership, just that these secret lists on which LEO acts are dangerous.

Every comment was something along the lines of "What about my right not to be shot in the streets?!" - I tried to point this myopic view out, and every reply to my comment was "What about my right not to be shot in the streets?!".

As a smallish-government liberal (Public services are well and good but government should be kept in check by a powerful and vigilant population) I get torn up whenever I defend gun rights or spending reductions on that website.

Metafilter does moderation well. They have a team of paid moderators. It costs a token $5 to join, which means it's expensive to generate sock puppets and anyway the moderators keep your credit card on file so they can permaban you if they need to.

Metafilter is interesting in several regards.

There's the token payment. It doesn't cost much to post, but it does cost to be an asshat.

It's tiny. The userbase and total content volume are a small fraction of other social networks. Which I've objectively measured.

The S/N is amazingly high. Better than all but the best paid-contribution (e.g., professional media) sites.


Having payment linked to a name goes a long way towards deterrence, IMO.

I think it's more than that; it's a psychological thing. You're not likely to shell out five bucks to spew an epithet at someone - you have to think about it. You have to get your payment info together. You have to think about your budget, even a bit in passing.

And, in Metafilter's case, there is a waiting period between payment and the ability to comment.

Think of it this way:

Automatically generating a stupid sentence is hard. Stupidity is difficult to simulate. Not lack of coherence. Not pure noise. Not word salad. Not poorly trained neural networks. But stupidity. It's hard.

Well, you can pretty much come up with a stupid paragraph on any topic if you just write a small script that searches YouTube using the appropriate search pattern, and just grabs some random comment from the top link. Chances are pretty good it's a stupid statement.
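A minimal sketch of the script that comment describes, using the YouTube Data API v3 (`commentThreads.list` is a real endpoint, but the API key is a placeholder, and picking a "random top comment" this way is the commenter's idea, not an endorsed sampling method):

```python
import json
import random
import urllib.parse
import urllib.request

API_BASE = "https://www.googleapis.com/youtube/v3"

def comment_threads_url(video_id: str, api_key: str) -> str:
    """Build a commentThreads.list request URL for one video."""
    params = urllib.parse.urlencode(
        {"part": "snippet", "videoId": video_id, "maxResults": 50, "key": api_key}
    )
    return f"{API_BASE}/commentThreads?{params}"

def pick_random_comment(response: dict):
    """Pick one top-level comment's text from a decoded API response."""
    items = response.get("items", [])
    if not items:
        return None
    item = random.choice(items)
    return item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]

def random_youtube_comment(video_id: str, api_key: str):
    """Fetch the top comment threads for a video and return one at random."""
    with urllib.request.urlopen(comment_threads_url(video_id, api_key)) as resp:
        return pick_random_comment(json.load(resp))
```

The fetch requires a valid API key; `pick_random_comment` is pure and can be exercised against a canned response.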

That's pretty much what Tay did with Twitter.


I've always appreciated the NYT's approach to flag their "Picks" as not just the ones receiving the most upvotes/recommendations, but those that represent a wide variety of opinions.

Sturgeon's law[1]: 90% of everything is crap. 90% of comments are crap. 90% of journalism is crap. The difference is that a good portion of the 90% of journalism that is crap is off on sites that you don't read.

1: https://en.wikipedia.org/wiki/Sturgeon%27s_law

Sturgeon's law at Internet Scale is far worse than six-sigma compliant: six sigma allows about 3.4 defects per million, while Sturgeon's law predicts 900,000.


On the volume vs. time-to-assimilate basis, nobody can read more than a minuscule fraction of online content.

The German Frankfurter Allgemeine Zeitung (FAZ / http://www.faz.net) also has thoughtful comments. I used to think it was something about the community, but at this point it seems more related to moderation. You don't see a lot of trolls on the site.

Given that it's FAZ, I suspect all they need is a check that the comment is correctly punctuated, uses the subjunctive, and consists of 3 paragraphs written in 2 sentences :)

We should ask ourselves what value on-article comment sections provide. I believe HN is good because it serves a specific trade purpose and caters to a specific high-end niche audience. HN's participants are willing to cross a significant participation threshold to be part of the community here. I believe that kind of structure needs to be in place to get good community participation. I guess I'd summarize it with these points:

1. require significant investment from the userbase to participate successfully

2. promise to assist the userbase with something that is critically valuable to them in exchange

3. regularly deliver on 2 to keep 1 worthwhile

4. provide reputation tracking and management utilities so that the user can cultivate a profile that reflects the investments made in point 1

5. for recurring participation, provide variable rewards that trigger the brain's hooks for surprise, which translates to enjoyment.

HN hits those points, but blogs definitely don't and don't want to. They want to bring in barely-interested readers from search, from anywhere on the web. Many of these readers won't even make it into the comment section. Thus, a good community tacked on to the bottom of their articles seems unlikely.

If those principles are required to cultivate a worthwhile community, the community should always occur external to the publication of the article. The community needs to be the centerpiece, not the article. I use HN this way; the discussion is the primary thing, the articles are the subjects submitted for the community's discussion.

The other caveat is that it's difficult to provide points 3 and 5 when you're just starting out. From what I've seen, it practically always has to be artificial until the momentum becomes self-driving (if there is a physical community that uses the online forum for spillover, this may not be applicable; this is basically what happened with HN). We need better solutions there.

Since individual blog posts have certain quantities of Google juice, comment sections will be overrun with spa--err, "SEO professionals". Other participants are often low-effort drive-bys. If the community isn't the principal focus, participation will be spotty and it will be hard to develop elements fundamental to meaningful community engagement.

Facebook and Twitter are normal people making normal comments on random stuff they see. These people generally feel a compulsion to let their feelings out and Facebook/Twitter provide it. I believe this is probably what was originally intended for blog comments, but because Facebook/Twitter are real, stable communities, the participants are inclined to leave comments there instead of on the target article itself. These comments are often loose and instinctive, which is not necessarily to say they're invalid or worthless, but a community won't form around individual postings because there's no common unifying dictum (point 2 is unfulfilled, and point 1 is minimal on random FB/Twitter posts).

Most everything you described in that list was something that web forums and Usenet provided, but they were supplanted by a mix of on-article comments and social media in its current state. It is frustrating to have seen most forums evaporate, but they are just hard to keep momentum in.

It would be really nice to see someone figure out a good way to bring forums back in a way that didn't turn into a black hole.

As a heavy forum user (since the late 90s), I find reddit supplanted them. Reddit is basically every forum I've ever been a part of, all on a single website. More importantly, it's every forum I didn't even know existed, on a single website. I found many niche hobbies and interests I wouldn't have otherwise found thanks to reddit.

I think that's why reddit grew so large, so quick. It does what forums do (provides similar discussions), except better.

I agree that reddit supplanted most of them for particular niches but building a community around a few niches doesn't seem to work well. You can't really combine r/ArtisanVideos, r/programming and r/ECE into one 'whole' where a forum such as the EEVBlog or the RPF can combine these more diverse things into a more cohesive community with more users overlapping.

Reddit definitely does better as the 'front page of the community', but you rarely see hundreds of posts accumulate on a single reddit entry over two months without a moderator's pin.

> It would be really nice to see someone figure out a good way to bring forums back in a way that didn't turn into a black hole.

Isn't that what Reddit is trying to do? Creating a subreddit seems to be the default way to stand up a new forum. Whether the platform is up to the task.... jury is still out.

> We should ask ourselves what value on-article comment sections provide

When done well, comments on the article form a discussion, debate, or some other meaningful exchange of knowledge.

But with the modern web and social media, it's usually all bile and trolling.

> But Reddit is a toilet

I've found just the opposite. Coincidentally, all the subs I'm subscribed to have fewer than 40k subscribers.

Well, one person's trash is another person's insightful comment. That isn't to say there aren't hateful and trivial comments, but far too many people will simply dismiss as ignorant any comment they don't agree with. This is most likely to occur in political or religious discussions.

There's content which is simply disinformative. It's not only wrong, but makes the reader stupider for having read it. Objectively.
