I see comment moderation as one of the 'unsolved problems' left in this generation of the web. When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise.
Web technology scales, journalism scales (poorly, but a relatively small publication can pull big traffic), but right now there's just no substitute for a human manually checking reported comments and banning problem users. When you have a site with as much traffic as NPR, that would probably take dozens or hundreds of people, and these orgs are loath to outsource it to low-wage countries the way the big web players do, mostly due to the ethical challenges.
Maybe moving comments to people's own social groups on FB/Twitter will help to defray the costs; I don't think they're really seeing much discussion value for the most part anyway.
What are your thoughts on incentivizing constructive comments? I've seen publishers (The Guardian, if memory serves) select thoughtful comments and re-print them as micro-articles in their own right. This seems to solve part of the problem by setting an established, if not entirely objective, standard for comment quality: journalistic publication standards.
As such, bias and opinion are welcome, provided they're analytical, reasonably grounded in fact, and respectful of common etiquette. The genius in this approach, as far as I'm concerned, is that it manages to preserve the original purpose of comments: scalable content generation!
Clearly, moderation is a Hard Problem, but one that I think benefits from an economic/incentives analysis. One conclusion I've drawn is that restricting comments to paying customers makes banishment and sock-puppetry costly enough that moderators can mop up the rest by hand.
To ask a specific question: what, exactly, remains "hard" with this approach? Do you think "free to read / pay to comment" is viable, in principle? Do you think the promise of publication is not a good incentive? Why?
I think that incentive idea is great, and a smart move for building a community, particularly when you're trying to draw subject-matter experts. I like how some of the Ask<X> subreddits do it, by flagging people with verified advanced degrees. People think that news sites are afraid of conflicting opinions, but in my experience that's nonsense; a comment just has to be well thought out and not "DEATH TO <ISRAEL/ARABS/SUNNI/TURKS/AMERICA>", which describes the vast majority.
It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and nothing stops them from getting buried in the meantime.
I'll err on the side of caution with revealing employee counts, but in my experience many of the FP/Atlantic/Mother Jones/Weekly Standard/pick-your-midrange sites are running on a single-digit to low-double-digit number of web production staff, many of whom are also trying to make a writing, article-layout, or fact-checking quota. The suggestion that these magazines can either get those staffers to moderate tens of thousands of comments per day, or quadruple their web staff just to improve the comments, ignores the business reality.
User moderation in the normal HN/Reddit way doesn't work well on news sites; it's too easy to game or brigade, and news sites can't or won't add unpaid moderators as gatekeepers.
That's what's hard; creating comments is scalable, filtering them is not. Leaving them unfiltered doesn't work either.
>I think that incentive [is good], particularly when you're trying to draw subject matter experts.
You bring up an excellent point. One of the fundamental problems with comments, I think, is that they create a space in which ignorance and expertise are equally weighted. In fact, it's often worse than that for reasons we all know: interesting issues are hard to distill into 300-or-so characters, and short, simple points are often more percussive.
Vetting credentials is a very good option IMHO for certain forums but not for others. Reddit's /r/askscience is an example of a forum in which it works well.
>It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and nothing stops them from getting buried in the meantime.
I wonder if this problem can't be solved through the use of machine learning to classify comments as high- versus low-quality by grammatical and semantic analysis. This kind of first-pass filtering could, at the very least, help throw out the obvious trash and pre-select candidates for recognition.
Such a system can be tuned to minimize false alarms (a shitpost getting flagged as good), which I think represent the most problematic classification errors. This is a nice problem space for ML because the increase in misses implied by a bias against false alarms doesn't degrade the service much: not having one's comment selected for re-publication is unexceptional.
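To make that concrete, here's a minimal sketch of such a first-pass filter, assuming scikit-learn and a small hand-labeled corpus (the data and threshold are illustrative, not from any real system):

    # First-pass triage: surface likely-good comments for human review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled data: 1 = worth surfacing, 0 = noise.
    comments = [
        "The treaty analysis overlooks the enforcement gap in Article 5.",
        "DEATH TO <EVERYONE>!!!",
        "first lol",
        "Good piece, though the GDP figures are from 2012, not 2014.",
    ]
    labels = [1, 0, 0, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, labels)

    # Bias against false alarms: only surface a comment when the model is
    # very confident it's good. Raising the threshold trades more misses
    # (good comments skipped) for fewer shitposts slipping through.
    THRESHOLD = 0.9

    def candidates_for_republication(new_comments):
        probs = model.predict_proba(new_comments)[:, 1]
        return [c for c, p in zip(new_comments, probs) if p >= THRESHOLD]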
Re: machine learning: I think there are two problems with that approach, one cultural and one technological.
The cultural issue is that many news orgs are still run by people for whom the idea that technology could accidentally censor a valid criticism or ban a decent voice is just too risky. I think this is changing, and many newsrooms today are a little more fluid than when I really cared about the problem 4 years ago.
The tech issue is a little bit of a cop-out on my part. An ML approach is super attractive to me as a techie. Google (YouTube), Facebook, NYT, WaPo, and tons of other billion-dollar orgs have this problem, and could make loads of money by being seen as better communities.
On the more guerrilla side, hundreds of subreddits have automoderators written by savvy, caring moderators.
They have terabytes of training data, already tagged, and world-class ML experts on staff. If it were a tractable problem with business value, why wouldn't they have fixed it? I'm guessing it's the sort of thing that looks doable on the surface, but you get buried in the details.
Again, a cop-out answer, so please go prove me wrong!!
I understand, and I think that's probably the more difficult of the two problems. I'd just like to point out -- in the interest of discussion -- three things:
1. Pre-filtering for moderators is different from (and much safer than) auto-banning by a bot
2. It's valid both to filter informed opinions that are poorly expressed, and for a publisher to have a preferred "voice", i.e. a style of writing that it favors.
3. The argument can be made that machines are no more biased than human editors, and that in many cases, the biases of the former are known. As a corollary to this point, there exist certain ML techniques (e.g. random forest classifiers) for which the decision process for an individual case can be retraced after the fact (sketch below).
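On that third point, here's a rough sketch of what retracing an individual decision can look like with scikit-learn's random forest (the features and data are made up for illustration):

    # Recover the path one comment took through each tree in the forest,
    # so a moderation decision can be audited after the fact.
    from sklearn.ensemble import RandomForestClassifier

    X = [[0.1, 3], [0.9, 40], [0.2, 5], [0.8, 35]]  # toy feature vectors
    y = [0, 1, 0, 1]                                # 0 = noise, 1 = good

    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit(X, y)

    indicator, n_nodes_ptr = forest.decision_path([[0.85, 30]])
    row = indicator.toarray()[0]
    for i in range(len(forest.estimators_)):
        start, end = n_nodes_ptr[i], n_nodes_ptr[i + 1]
        visited = [n - start for n in range(start, end) if row[n]]
        print(f"tree {i}: visited nodes {visited}")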
How do you think publishers would respond to these counter-points?
>technical problem
Counter-cop-out: someone has to be the first!
Somewhat-less-cop-outy-counter-cop-out: by your own admission, certain sites (e.g. Reddit) have high-quality automoderators.
I would argue that the problem is "approximately solved" and that this is sufficient for the purposes of moderating an internet news publisher. Again, I would make the signal-detection-theoretic point of my previous comment: I can selectively bias my automoderators in favor of reducing either false alarms or misses. Of course, this brings us back to the cultural problem you mentioned.
From this I conclude that the bottleneck is cultural, which brings me to a follow-up question: what do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?
omginternets, we're starting to run into HN flame-war restrictions, and I'm working, so apologies if responses come slowly.
> How do you think publishers would respond to these counter-points?
In my experience 1 and 2 are fine, but 3 is actually a _net negative_ to some of them. People who by and large have come up through 10+ years of paying dues in a 'The patrician editor is always right' culture _hate_ giving up control, even when it makes their jobs easier.
Editors I've seen have balked at things like Taboola and Outbrain, despite them being testably better than human recommendations and saving staffers work. It's a fair argument that picking which stories to promote is a core part of the editorial job, more so than comment moderation, but the attitude match is there. Editors at one DC media org I didn't work for shot down A/B testing any new features in the first place, because there was an assumption that the tech staff would rig it!
I don't want to paint 'editors' with too broad a brush, but there's definitely a cultural reluctance at the high level to automated decision making.
> What do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?
It doesn't matter to you and me. We think like HN'ers, where there are trillions of internet packets flowing around every day, and a few will get lost. They think like hometown newspaper editors parsing letters. When you take on the responsibility of being a gatekeeper, screwing it up is a big problem, every time.
I think increased tolerance is coming from more exposure to the sheer volume (every week at FP the website gets more visits than the number of people who have ever read the magazine in its 50 years of existence combined), and a bit of throwing their hands up and saying "who knows".
Again, I'm speaking for a pretty specific niche of old-school newspaper and magazine people turned editors of major web properties, because those are where my friends work. Things are probably different at HuffPo or Gawker or internet-native places, but clearly not that different, because their communities are still toxic.
> I would argue that the problem is "approximately solved"
So I disagree here, but don't have evidence to back it up, other than years-old experience with Livefyre's bozo filter, which we didn't put enough work into tuning to give it a super fair shake.
Taking spam comments as mostly solved, I think there are 3 core groups of 'noise' internet comments:
1. People who don't have the "does this add to the discussion" mindset, to use HN's words. cloudjacker's and michaelbuddy's comments below demonstrate this pretty well. I'd lump cheap-shot reddit jokes in here as well. They're not always poor writers, or even negative -- "Great article! love, grandma" -- which falls back into the ethics of filtering them.
I suspect that this is 80%+ solvable.
2. The "bored youth" and "trolls" group. This is actually the worst group, I think, because these are the people who, I suspect, make death threats and engage in doxxing and swatting. Filters will catch some of them, but they're persistent, and many are tech-savvy and reasonably well educated. They can sometimes be hard to tell from honest extremists. A commenter from group 1 who is personally affronted can fall into this group, at which point they become a massive time suck. Hard to solve, but verified accounts help here in the US case.
3. Sponsored astroturfing. Russia, Turkey, (pro/anti) Israel, China, Trump (presumably the DNC?) all have large paid networks of people crisscrossing the internet all day trying to make their support base look larger than it is. Especially in the US politics case, they often speak good English and are familiar with both sides' go-to logical fallacies. They'll learn your moderating style in a heartbeat, and adapt. Unsolvable.
Anyway, if someone builds a good bozo filter, they're almost certainly a zillionaire. I hope it happens, but I suspect we'll just start looking back on website comment sections the way we do on Usenet, as a good idea that didn't scale very well, and find something better.
Taboola and Outbrain's recommendations are so pathetically insulting, and the tracking so obvious, that I've both blocked their domains (router DNS server) and specifically set "display:none;" properties on any CSS classes/IDs matching their names or substrings.
It's pathetic bottom-feeder crap.
Maybe if I fed the beast through tracking, I'd see higher quality recommendations, but I won't, and I don't. They only serve to tell me just how precariously miserable the current state of advertising, tracking, surveillance-supported media is. I'm hoping it will crash and burn, not because I want present media organisations to die, but until they do, we don't seem to stand any chance of something better.
(What better, you ask? Information as a public good, supported by an income-indexed tax.)
I was referring specifically to their paid same-site recommendation engines. So you drop it into an article, and it recommends other articles from your site. In my experience it's decent to good, depending on what metadata you provide it.
I agree that the "10 weight loss secrets" junk promoting third-party sites is bottom-scraping.
I really disagree.
Yes, Taboola may be promoting literally ANY content, even spam. So yes, I blocked them. But currently Outbrain is really operating as content discovery; I haven't found any content that abuses me as a reader. Not yet. I know that they have strict guidelines for their advertisers as well.
Reading the other reply thread with slowerest gave me another possible solution, too.
Perhaps the comment sections for journalistic pieces from organizations like Ars, NPR, NYT, local news, etc. could be more of a competition (like Slashdot). Top 300 comments get preserved, leave it open for a month with no comment limit and some light moderation, and let the conversation go wild (I like Reddit's system for this), then delete all but the top 300 at the end.
Adjust "300" and "top" to fit your organization's needs, just make sure they're clearly defined. Would also help limit the scope for an ML-based solution, too. :)
News sites with a paid component could allow comments only from subscribers/donors. Having a gate which involves money will improve the conversation somewhat. I'd even go a step further and make comments invisible except to subscribers. People creating trial paid accounts could see the comments but not comment themselves. This latter step would prevent astroturfing from firms willing to pay $10 for a trial but not $100 for an annual subscription.
Moderators would still be needed, but their workload would be reduced. And there would be money available for them, since many would subscribe/donate just to be part of the community, which would make moderation less of a drain and more a part of the core profit-making.
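In code, the gate is just two checks; something like this sketch (the tiers are illustrative):

    from dataclasses import dataclass

    @dataclass
    class User:
        tier: str  # "anonymous", "trial", or "subscriber"

    def can_read_comments(user: User) -> bool:
        # Comments are invisible to non-paying visitors entirely.
        return user.tier in ("trial", "subscriber")

    def can_post_comment(user: User) -> bool:
        # A $10 trial is cheap for an astroturf firm; a $100 annual
        # subscription is not, so only full subscribers may post.
        return user.tier == "subscriber"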
> What are your thoughts on incentivizing constructive comments? I've seen publishers (The Guardian, if memory serves) select thoughtful comments and re-print them as micro-articles in their own right.
I don't think you're correctly identifying the problem. In my experience, the problem with comments, especially on news sites, is a glut of bad comments, rather than a lack of good comments. This solution doesn't disincentivize bad comments.
The solution to bad comments is deleting them before they are even visible to other users. Deleting aggressively, as is done in certain subreddits (/r/science), may seem offensive to naive users who just want to add their two cents to the discussion, but it's the only effective AND honest strategy: if your comment adds little of interest, it's worth nothing. The bar should be very high; the more popular the website, the higher the required quality. But in the end I think NPR are making the right choice. Comments on websites are not a constitutional right, after all.
The aggressive moderation in /r/science is quite honest compared to other subreddits, which is partly why its moderators attract less controversy when compared to others such as /r/news.
The Slashdot system for categorising comments seemed to work really well at making the highest-quality comments stand out. I wonder why other sites haven't tried something similar; I don't think I've seen it used elsewhere.
Slashdot nailed moderation; no one else has attempted something similar. Most systems are a simple up/down vote or like/report.
I am also starting to wonder if the age group being hired to implement "social" for websites is now young enough to have missed Slashdot in its prime.
The fact that people are still brainstorming from scratch instead of talking about how to improve Slashdot's model reeks of reinventing the wheel because they never heard of it.
> I am also starting to wonder if the age group being hired to implement "social" for websites is now young enough to have missed Slashdot in its prime.
That's me! Can you explain the Slashdot model and why it worked? Or point to a good write up about it somewhere else?
Slashdot's model was perhaps a little overcomplicated, but my favourite feature was the ability to tag up/down votes with flavours. +1 Informative was different to +1 Funny, and "Factually incorrect" was a different downvote to "Off-topic spam" (whatever they were called).
Other quirks off the top of my head: it capped at +5 and ... -1, I think? The score represented a thing closer to the up/down ratio than "Facebook likes". There was a dedicated -1 Overrated moderation for "I don't disagree that it's interesting, just not +5 interesting".
Also, logged-in users got a fixed number of moderation points at random intervals, and you couldn't moderate in a story that you commented in. I'd like to believe this discouraged "throwing away" points on low-effort joke comments, but I'm not sure the facts of Slashdot comments entirely bear that out.
Slashdot's method of scoring comments was overly complicated and probably did not produce any better results than reddit-style voting. However, Slashdot's killer feature was that the reader could filter by comment score and thus only read the 'good' comments, and not have to wade through hundreds of replies.
Correct: they were better than reddit because they let the user sort based on their preference. Slashdot generated a ton of metadata describing its content, and then gave you the power to intelligently utilize that metadata.
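For anyone who missed it, the model boils down to something like this sketch (the field names and starting score are guesses from memory, not Slashdot's actual code):

    # Slashdot-style moderation: flavored votes, score clamped to [-1, +5],
    # and reader-side filtering by score threshold.
    from dataclasses import dataclass, field

    FLAVORS = {"Insightful": +1, "Informative": +1, "Funny": +1,
               "Overrated": -1, "Troll": -1, "Offtopic": -1}

    @dataclass
    class Comment:
        text: str
        score: int = 1                 # logged-in users started at 1
        tags: list = field(default_factory=list)

        def moderate(self, flavor):
            self.tags.append(flavor)   # "+1 Funny" != "+1 Informative"
            self.score = max(-1, min(5, self.score + FLAVORS[flavor]))

    def visible(comments, threshold):
        # The reader picks the threshold, e.g. 4 to read only the best.
        return [c for c in comments if c.score >= threshold]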
Slashdot's moderation system was vaguely effective. Complete crap rarely rose to the top.
A great deal of high-quality commentary was buried, however, often the best and most informative. That's pretty much par for the course.
Much of the early vibe on the site came from the fact that it was simply where intelligent people were commenting online -- especially the early Free Software crowd (well, early in terms of Web 1.0 -- there was the whole 1980s and early 1990s contingent as well).
ESR (before he went full whackjob mode), Ted Ts'o, Alan Cox, Bruce Perens, Rasterman, and others.
Much of that group seems split amongst HN, LKML, LWN, and Google+ these days, along with some blogs.
When I was delivering newspapers as a small child many aeons ago, the best page of The Guardian was "letters to the editor". The rest of the paper was pretty good back then too. There was no email, so anything printed in "letters to the editor" had to be posted in, appearing some time after the events in question.
Needless to say, an event happened and was reported the next day, so it could be a whole week between the Trump-of-the-day saying something and a comment appearing about it. All of this was filtered by the "editor"; however, you did have frequent letters from the likes of Keith Flett, who somehow got his letters published more often than the other 3-5 million readers did (as it was back then, just UK sales, with poor distribution in places like Birmingham).
There were no 'likes' back then so you had to have something to say to bother writing in.
How do we get a digital equivalent? I don't buy the dead-tree paper these days so no idea if 'letters to the editor' still exists, but, back then it was good, very good.
It's interesting that simply restricting immediate commenting might at least deter useless comments. People who comment in order to elicit a response, I suppose, probably have less important things to say. Or maybe they wouldn't say them if not granted the immediate satisfaction.
I assume it would kill some collaboration/innovation like on HN or a meaningful subreddit, but maybe no one really ever has anything meaningful to say when reacting to general news...
I guess it would also produce duplication from many people not knowing something was said already (however, the duplicate reactions could be monetized later down the line maybe...)
A podcast I frequent does this sort of thing. If your comment is read on the podcast (and they read one a day) then you get sent a .NET Rocks coffee mug. Which is kinda neat.
The podcast is .NET Rocks and their comments seem to be pretty good overall.
Nobody old enough on here to remember Slashdot's moderation system?
Not everybody could promote or demote comments. You got randomly assigned the ability to moderate comments, so when it came your turn, you took it _seriously_.
That community had some of the highest-quality comments around. Then somewhere in the mid-2000s it got super anti-Microsoft and anti-anything-not-F/OSS. I'll give them credit; it probably reflected the highest-quality comments their userbase could produce at the time.
Slashdot's moderation still had some problems - which might be inevitable, I don't know.
There was a big bias towards early comments - moderators had to see your comment before they could upvote it to the top of the page, but once it was at the top more people would see it and keep it there; so a comment that would score well if posted as comment 10 would score nothing if posted as comment 50.
And karma tended to reward /popular/ comments, which were often things the hive mind agreed with, rather than high-effort comments. Discussion about DRM? Get in early with "DRM is impossible because" or "format-shifting should be a right" for a quick high score.
> That community had some of the highest-quality comments around.
One of the biggest differences between Slashdot and a site like reddit is simply size. Reddit is now the 8th or 9th largest website in the U.S. according to Alexa; it's getting as big as Twitter, and is larger than Netflix. Slashdot at its peak popularity wasn't even a drop in that ocean of traffic and pageviews. When you get that big, your problems are of a different sort, requiring different solutions. Hell, I think reddit has single subreddits that are bigger than Slashdot was at its peak.
This is important because it's easy to have "high quality" when your traffic is low. It's easy to moderate and easy to keep people on-topic. I speak from experience -- I moderate one or more default subreddits on reddit, as well as smaller subreddits, and the smaller ones are much easier to handle. They're virtually on autopilot with minimal moderation required. The larger ones on the other hand... It's like a non-stop war.
While I think there may yet be some sort of NLP/ML-based filtering that can improve the signal to noise ratio, the fundamental problem is that the effort is incredibly asymmetric.
It takes an author far, far longer to craft their work than it does for someone to heckle it.
If people weren't driving up page-views by coming back to the same article to see if their comment was liked or replied to, I think this would be a very easy decision for most sites: at some point you are responsible for all of the content on that page.
> "these orgs are loathe to outsource it to cheap countries like the big web players do, mostly due to the ethical challenges"
But suggesting people engage instead on Facebook brings a whole new set of ethical concerns. (1) Facebook manipulates users. (2) Facebook reorders the feed. (3) Facebook would lower priority of conservative news sites. And let's not forget that Facebook is probably outsourcing moderation anyway. Plus, Facebook commenters can be just as bad as regular site commenters.
> (3) Facebook would lower priority of conservative news sites.
I worked on the trending product. This did not happen. The whole thing goes back to one guy complaining that he couldn't pick Breitbart for the highlighted slot for some story, because it wasn't on the list of approved sites. And that list is actually available here: https://cdn.ampproject.org/c/newsroom.fb.com/news/2016/05/in...
Of course no one ever asks why he wanted to pick a controversial site to highlight instead of, say, a boring, straightforward wire-service report like the AP's.
Of course the story still appeared, and Breitbart could appear in slots 2-N via the personalized ranking algorithm, so it's not like it was suppressed. He just wanted to shove it into slot 1, where everyone would see it.
Sadly I feel that this is one of those cases where it's impossible for the correction to ever overcome the initial misinformation. On average, people do not accept new info when it refutes their existing knowledge base. This is doubly so in tribal areas like politics.
Yup. It's basically saying "we can't afford this, so we'll make it not our problem."
FWIW, FP briefly used an embedded Facebook widget, and a nonzero percentage of their Livefyre users logged in via FB.
It did little to nothing to stop abusive comments. The HN crowd cares a lot about what sort of history follows around our names and our handles. Many others, both in the western world and abroad, do not.
There's a German blog that used to be popular (blog.fefe.de) without a comment function. So some people built a website that mirrors the blog, just with a comment function added.
They built in a captcha that fails with a probability matching the estimated likelihood that your comment is a troll comment.
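If I understood the description right, the mechanism is roughly this (troll_score is a hypothetical classifier, not anything the site published):

    import random

    def captcha_passes(comment_text, troll_score):
        # The captcha "fails" with probability equal to the estimated
        # chance the comment is a troll post, adding friction for
        # suspect comments without an outright ban.
        p_fail = troll_score(comment_text)  # in [0.0, 1.0]
        return random.random() >= p_fail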
I know someone running a company that aims to solve exactly this problem, and they were attempting to sell to NPR, too. Last I talked to them about it, they said NPR seemed interested but has the typical years-long enterprise buying cycle. So, this news is really too bad.
Unfortunately at least 90% of internet comments are trolling, vitriolic, ignorant, generally useless, poorly written, unhelpful, add nothing to the topic, and basically serve as web pollution.
Yes, I think using FB login pretty much solves the problem as it is today. Take a look at civilbeat.com, a regional news site by Pierre Omidyar (The Intercept). I'm fairly certain they only allowed comments via FB login for years. It meant a lot fewer comments than they would have gotten otherwise, but they were all legit. Now it looks like they allow FB, Twitter, or local auth, but the comments are still mostly OK. Maybe they are looking for more activity by easing the requirements, and believe they've built a culture of good commenting?
I think describing FB login as "solving" the problem is definitely overstating it by a lot. I've seen plenty of dumpster-fire comment sections that allowed only FB users to comment.
I'm not sold on it, because a lot of newspaper websites use Facebook comments, and any topic about politics, race, or gender seems to fill up with people making hateful comments.
Seems like it was quite a bit better than the alternative (basically anonymous user logins making comments) for my home town's recent transition to FB comments, for what it's worth...
The one place I've actually found awesome comments was The Economist (well, HN isn't bad either), and the NY Times is kind of OK. Everywhere else feels pretty iffy...
It solves the problem for me, I guess. I won't be commenting anywhere you have to log in to facebook because I don't want them tracking me all over the web.
While a facebook account gives some legitimacy, I also like sites where you can post anonymously or at least pseudonymously.
If you are using Tor Browser in Tails over a coffee shop wifi while you are laying down under a blanket in the back of a truck driven by a stateless hobo with no fingerprints who you intend to murder later in a country with a healthy democracy, you are probably still not anonymous. If you are not doing those things, you are definitely not anonymous.
"When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise."
Assuming you're trolling, but at the risk of feeding:
Someone who posts "Sir your magazine and Hillary Clinton are tools of Israel and should be killed by Hamas, God willing" on every story about the State Department, or "Oh $WRITER I see you live in DC and went to $COLLEGE, maybe I'll come pay a visit to the next alumni event and teach you some respect for $COUNTRY", isn't the target user for a major American publication. It doesn't want those kinds of abhorrent sentiments living alongside its brand on its website, and is under no obligation to give voice to their ideas.
They're an exceedingly small percent of total readers (when they're even real readers), but a much larger percent of online commenters, hence the problem in the first place.
Even in the non-bot non-astroturfing case, the people who make those comments may be actual readers (although they're exceedingly unlikely to be paying subscribers), but they definitely fall into the bucket of 'can be filtered out, to no appreciable loss'.
They're users in the sense that the website is free, and anybody can be a user, but not in the sense that the publication has a duty to them, in exchange for their money or attention.
Aside from [bot] spam, I agree with the statement "Those were your users."
What OP really wants are the good comments, which is more than just spam filtering and also more subjective. If an ill-informed 13-year-old's comment counts as low-value noise, website operators would need to engage in something resembling censorship, which has its own set of problems.
Don't have time to elaborate, but moderation tools actually link to many other, deeper problems in meatspace, and IMO lead to the kind of tools which-should-not-be-made.
Disqus more or less figured out comment moderation, as far as I'm concerned. I've yet to see a Disqus-powered comment system overrun by undesirable content.
HN is failing at comments. Over the last few years, the community has deteriorated to the point where, for many articles, every single comment is grayed-out and downvoted. That signifies quite a rift in the community. HN used to be upvote-intensive and excitement-driven, but today it's downvote-intensive and annoyance-driven.
Probably a signal that the user base does not find those issues interesting.
Snowden because it's nothing we don't already know, and refugees or gender politics because they always degenerate into political (i.e. not interesting) mud-slinging matches.
On a side note, if a community with the general high quality and good moderation of HN can't have a good discussion on those topics online, I'm inclined to believe that having same is just plain impossible.
Personally, my thought process upon seeing one of these articles is something like:
1) Ugh, another one. Let's check the comments..
2) As expected, a dumpster fire. Nobody even RTFA. Let's look at the article..
3) Nothing even remotely new or interesting. Who voted this up? Flag.
It's far easier to manipulate systems than it is to accurately reflect either your typical reader viewpoint, or an intelligent and informed viewpoint. This is a classic failing of any democratic system, election balloting included.
Early "democratic" systems were often anything but -- about 14% of Athens' citizens could vote, and about 6% of the US at the time of George Washington's election. There are arguments for a broader electorate, but they come with distinct problems.
Vote brigading in particular is a standing issue on almost all online moderation systems. Some sort of trust cascade might help. It's what, say, the US electoral college was meant to provide initially, though how much of that function remains (and how it might manifest) is rather in question.
As for Snowden, a counterpoint is that some people see this as an issue which requires constant reminding. Advertising and propaganda both work through repetition, and sometimes the truth gets a chance for that as well. There's certainly enough repeat traffic on other topics at HN. (Though yes, many of those get beat down in the submission queue.)
If the article weren't interesting, it wouldn't have been voted up.
Marking down the comments indicates a desire by some to enforce groupthink. Why? Because many people use votes to indicate agree/disagree instead of as a quality metric.
I think it's harder to agree/disagree with the typical headlines featured on HN. Most articles on HN appear to be straight.
But let's say that two articles were in the queue, one pro-X, the other anti-X, and the pro-X forces were dominant. Sure, the pro-X article would hit the front page, but the anti-X forces would still comment on it and be downvoted.
Also, the bias is only visible in the comment section because downvoted comments remain visible, whereas a downvoted article gets flushed down the memory hole.
Just because you want to argue about politics with people doesn't mean that people want to argue about politics with you! Maybe they do, sometimes, in some contexts, but if the social cues (i.e., downvotes) indicate otherwise, then maybe not at that time and place. There's nothing wrong with people not talking about stuff they don't want to talk about.
Also, internet forums have learned over multiple decades that otherwise interesting discussions can easily get derailed by people screaming at each other over unresolvable issues. If the community doesn't keep a lid on it to a degree, the quality of discourse goes into a downward spiral that it can never recover from. It attracts people who just want to argue about stuff and it drives away people who want to have interesting discussions. This has been seen time and time again, in newsgroup after newsgroup, mailing list after mailing list, web forum after web forum.
Holding back that inevitable decline is like fighting against entropy: if it stays popular, HN is almost guaranteed to decline and become more and more like Slashdot circa 2010, right before it poofs out of existence and/or relevance. But if users actively push back against the tides of forum entropy (i.e., discussion getting drowned out by arguments), a forum can at least have a nice long run before that happens.
I think what people want to avoid on HN is the sort of discussions where people are just asserting hot takes back and forth to no other end than the act of publicly asserting hot takes. This was never fun to watch on Crossfire or First Take or whatever, it's not fun at awkward drunken family gatherings, and it doesn't fit in with the vibe of HN. It's invigorating to the participants but much less interesting to read, and for every poster there are hundreds or thousands of readers.
That applies to online forums just as much as it does to real life, some forums are just more focused than others (just like some households are way louder, more chaotic, and have more drama than others). Almost every place other than HN thrives on arguments, so at least there are plenty of places to have them.
I don't know enough about Disqus to render an opinion, but I do find it entertaining that the sample comments shown in the animation on their front page are entirely noise, in that they contain nothing more than a "Yay!" sentiment.
How has disqus figured out comment moderation? As far as I know, they don't make a big effort to create great comment communities. Do you have any extra details?
HN has far better comments than any disqus comment feed, on average, in my opinion.
Unfortunately, not much interesting happens outside of politics. CRISPR and exoplanets spring to mind as exceptions, but the software field has definitely stalled.
Politics seems to be the force that can bury any amount of advancements in other fields, hence interest.