The murky history of moderation and how it’s shaping the future of free speech (theverge.com)
263 points by pmcpinto on Apr 16, 2016 | 108 comments



We've changed the title from the linkbait headline to the informative subtitle and re-upped* this post. I don't think we've ever done that for a story that already had 100+ upvotes, but this article is so astonishingly in-depth, and the topic so little known, that it deserves a deeper discussion.

* Described at https://news.ycombinator.com/item?id=10705926 and the other links there.


I entirely concur with both your points, especially about how little appreciated this subject has been. I was dimly aware of it; it's no secret there's a lot of really terrible stuff on the web, and it has shocked me when I've stumbled into it. It's disquieting to realize I never thought about what it takes to keep a site like YouTube from being a complete cesspool. It never crossed my mind, but the question should've occurred to me. Why didn't it? I wish I had an answer.

Of course, now I know what a good job moderators have been doing, and I have a better understanding of the toll that role can take. They deserve a lot of respect. The idea that this is a subject worthy of systematic study sticks with me, though I'm not yet clear about the next steps. In any case, the implications of the story cover a large space in economics, politics, ethics, law, and medicine. It will be fascinating to see how it develops.


I appreciate you doing that. This article is phenomenal work: thoroughly researched, with so many perspectives and details published. It's a blind spot for most people, despite the fact that it actively shapes democracy, innovation, and the evolution of human culture. I might have to re-read it a few times to let it all sink in haha.


Thanks, it's a great article that deserves even more attention. Is there still a plan to make public the list of re-upped articles? I think it would be a good resource. Please at least keep a log of them for future use.


Thank you. You do good work around here.


Thanks, dang. I'm so glad I was able to find and read this.


This is a very good article and quite an eye opener. As an internet/web "user" for twenty years or so I'm familiar with the existence of "moderation", but had only a vague awareness of what moderators endured in that role. Much as 911 operators can be traumatized, it would be unsurprising if a subset of moderators developed PTSD-like symptoms related to that experience. Workers' comments quoted in the article lead me to speculate that there are significant mental health epidemiological questions re: current and former moderators.

Effective automated approaches will provide a big advantage in narrowing down the "judgement space" that must be decided by humans. To the extent that's possible, the key benefit is reducing the exposure of moderators to the stressful situations the article describes, and that would be a very helpful development.

As the article points out, the whole domain of moderation practices is a minefield. But now I wonder if there isn't also a risk of automated review making classification errors re: user behavior. Plausibly, automated systems can be tuned finely enough to avoid serious errors, and can support human oversight to catch and more easily resolve edge cases. Automated moderation systems will need such qualities in order to reduce the human burden as intended.


The article addresses this specific point in two places even more bluntly than just through workers' comments:

'Members of the team quickly showed signs of stress — anxiety, drinking, trouble sleeping — and eventually managers brought in a therapist. As moderators described the images they saw each day, the therapist fell silent. The therapist, Mora-Blanco says, was "quite literally scared."'

And:

'Beth Medina, who runs a program called SHIFT (Supporting Heroes in Mental Health Foundational Training), which has provided resilience training to Internet Crimes Against Children teams since 2009, details the severe health costs of sustained exposure to toxic images: isolation, relational difficulties, burnout, depression, substance abuse, and anxiety. "There are inherent difficulties doing this kind of work," Chen said, "because the material is so traumatic."'


Yes, good point. The observation that the material being moderated is potentially traumatic leads me to conjecture some subset of moderators would become disabled after exposure to the material. Resilience training is a good idea, but may not prevent bad outcomes.


south park had a bit where they dealt with this sort of workspace ptsd

butters was in charge of filtering cartman's comments so cartman would only see material that would be unable to offend him

cartman seemed to flourish with his new perceived support and popularity and butters deteriorated

i thought it compelling to look at how they chose to do butters

'this material is offensive! you should deal with it instead of me'

cartman was characteristically unconcerned about butters' well-being as long as the supportive comments kept coming in

i understand the satire loses some weight when you acknowledge that those that hire these moderators are mostly companies with a vested interest in keeping their users blissfully unaware


Interesting illustration of the issue. As the article shows, users are not merely passive consumers, but an active component in the moderation process, because users are the first to encounter the offensive material. Sure, users don't want to see that stuff, but it's believable that it's the companies more than the users striving to remain blissfully unaware of the horrific crap that moderators have to deal with. As you point out, the issue is the effect on the moderators; if it's as bad and widespread as portrayed, it may rise to the level of a public health problem.


yeah the difference is one of volume

10,000 individual users being inadvertently exposed to 1 piece of upsetting material could potentially mean 1 moderator then has to review 10,000 pieces of upsetting material

also, the deluge

mailroom syndrome, it just keeps coming

honestly, computer vision is the only way to do this sort of moderation with any semblance of success and unfortunately i feel all of this human debris is just considered collateral damage until we get it functioning


it's amazing how dead-on the south park team are with their humour-ridden commentary on serious social and political issues.

I think they'll go down as two of the greatest commentators of our generation.



yup, thanks for pointing out it was season 19 episode 5, safe space

i failed to because i refuse to link to that hulu run abomination

bah, south park, the show where the creators themselves put their work on the internet to be viewed freely and for free

what a unique expression of the creators' personal beliefs

a stance that had a profound impact on me, and my peers

when hulu bought the rights i felt my skin grow cold, but i held out reservations in the hope that someone at hulu was just a fan and wanted to improve south park studio's backend whilst respecting the implicit wishes of the creators that their work is free and readily consumable by anyone regardless of manufactured scarcity or paywalls

now this shit is the new normal:

    "Safe Space"
    This episode is currently not available at South Park Studios
and

    There was an error playing this protected content. (Error code: 3365)
i lament the loss of future generations' exposure to the challenging ideas innate to the show

hulu has zero respect for the series or its creators' opinions on how open it has always been, and should be

fuck you hulu


There is probably no way to ask this question that doesn't sound dismissive at best, so please understand that is not the spirit in which this is intended, but:

it would be unsurprising if a subset of moderators developed PTSD-like symptoms related to that experience.

Is that really the case? Most moderation is done on text-based forums, like the one we're on right now. I'm having a hard time imagining the right combination of words on a screen generating trauma in the requisite quantities for someone to wind up with a clinical disorder because of it.

Personally, I've been on both sides of this coin. I've been the guy kicking trolls off of a decent sized board, I've been the guy getting kicked off because I annoyed the wrong person.

At no point did it ever progress beyond internet drama. The meanest, nastiest person I could imagine could type in words along the lines of how they'd like to kill and fornicate with my mother, and my, and most other people's response I wager, would be "That's cool, kid. Bye now. Ban"

Maybe there's a case to be made for becoming jaded after a while of dealing with the worst (hang out on the meta Stack Exchange sites for a publicly visible example) - but PTSD symptoms? To me, that both overly glorifies the troll "i can type words so good that I can make other people have legitimate mental breakdowns!" and makes light of the suffering by people who actually have PTSD (who have seen things like people dying). It feels like a lack of perspective, brought on by spending a lot of time online. Unlike real life, disconnecting from online drama is always a button press away.


I'm sorry, but I do believe you misunderstood the article and perhaps my comments on it as well.

The article clearly describes moderators watching videos flagged as offensive, and these videos included "amateur and professional pornography" and many that contained "child abuse, beatings, and animal cruelty". Furthermore, the article discusses moderators having to deal with videos shot during the Iranian revolution in 2009, including the murder of a young woman, "a shaky cell-phone video captured her horrific last moments: in it, blood pours from her eyes, pooling beneath her."

The article presents many more examples of the enormously disturbing tasks assigned to moderators. Based on decades in clinical practice, I'd consider it likely that viewing such material, especially seeing it frequently, can precipitate acute and chronic stress disorders in vulnerable individuals. I understand that this is only hypothetical in the absence of systematically collected data; call it a clinician's hunch if you want, but I'd still put money on it.

Rereading the article confirms for me that it is an excellent piece of journalism, perhaps even exceptional in the current era. Moderators were exposed to far more than words, in a way similar to 911 operators suffering exposure to trauma without being at the scene. Forum moderation has very little in common with the subject of the article.

You are severely underestimating the effects of this kind of experience on people in such positions. You should learn more about stress disorders, the complexity of which is bound up with the unique and highly variable attributes of individuals. Some people are much more resilient than others, making it misleading to generalize about how people will respond to given situations.

Most of all, I strongly suggest you (re)read the story and, if capable, allow yourself to empathize with the plight of the moderators.


Relatedly, I had a co-worker who'd once worked for the Washington, D.C. Metropolitan Police Department doing data entry. He frequently had to review images or video footage (not all of which was produced by police, but sometimes collected as evidence) and classify it before entering it into their database. He didn't last long, only a few months before he quit, and he said that 10 years later he still sometimes had nightmares about the things he saw while on the job.


> You should learn more about stress disorders

Uh, why? Am I a counselor?

> allow yourself to empathize with the plight of the moderators

To what end? What possible constructive purpose is served by allowing myself to share (or wallow) in the negative emotions described by others? Does it stop "bad things" from happening? Does it lessen their pain? Does it lessen anyone's pain? Or does it increase the overall amount of suffering in the world?

brb, going to voluntarily take a job that makes me feel bad, so I can cry about it on the internet...


Perhaps he addressed that comment to someone who seemed to feel they already knew enough to have an opinion on the topic, and meant to imply that you should learn more about this if you intend to have opinions on it.


> Is that really the case? Most moderation is done on text-based forums, like the one we're on right now.

Even on a text-based forum like reddit there are two subreddits I usually visit (/r/syriancivilwar and /r/combatfootage) where one can see links to videos of dead people or worse (because very nasty things happen during wars). I could very well imagine someone getting PTSD-like symptoms after having moderated videos, or links to videos, of children who have just died in chemical-weapons attacks. That is what nightmares are made of.


It feels like a lack of perspective, brought on by spending a lot of time online. Unlike real life, disconnecting from online drama is always a button press away.

For starters, this is not true. Stuff that happens online has real world consequences. People get hired, fired, meet people they marry, get doxed and have their personal contact info put out there and on and on. The Internet is not disconnected from real life to the degree that your comment makes it sound.

I was a military wife for two decades. I lived in the High Desert where I dealt routinely with coyotes and large spiders and poisonous snakes, etc. I have spent time on health forums where deaths were so routinely reported that they had their own practices for how to deal with death, their own culture surrounding that so to speak. These experiences have left me somewhat thick skinned.

I read accident claims for over five years at an insurance company. Because of the aforementioned background, it was rare for me to be disturbed by the detailed medical and police reports that I read for up to 8 hours a day, five days a week. Many of my coworkers found the work far more stressful than I did.

But I would absolutely not want to have to moderate the photo and video stuff described in this article. And I would absolutely feel personally threatened by people making ugly threats of the sort that you dismiss with "That's cool, kid. Bye now. *Ban".

I'm pretty thick skinned, but I'm also a woman. Most of the people who are moderators are women. A lot of this nastiness is absolutely aimed at women. I think it hits a lot closer to home for most women than it does for most men. Women tend to routinely deal with genuine threats to their welfare, such as stalking, harassment and rape, that men tend to be largely exempt from.

When you recognize that your work impacts real people and how safe they are, that also can be a burden. When I read accident claims, in most cases, no lives hung in the balance. That is part of the reason it did not bother me to read them: I was not deciding whether or not to report anything to the police or otherwise take an action that impacted someone's personal safety. I was only deciding whether or not to cut someone a check and whether or not to refer the claim to the fraud department.

In a few cases, I was trying to get it paid rapidly because the customer was broke and facing having their power cut off and nothing to eat in the house. But, really, what I did mostly did not have a sense of "And if I do the wrong thing, someone could wind up killed, raped, fired from their job etc." What moderators do can have such impacts. That kind of responsibility can wear on a person. Perhaps not you, but plenty of people would find it wearing.

The Internet is something of a Pandora's box, it seems.


That a majority of moderators viewing material are women, and that a great deal of moderated content is aimed at women, really stuck out to me.

I once started watching a movie with Monica Bellucci. Some way into the film, she is followed into a tunnel and brutally raped. I couldn't make it through the scene without getting sick to my stomach and feeling the need to throw up. It even made me want to cry. It was far, far too intense. I had to turn it off.

I can't help but feel awful there are people whose jobs are screening such things recorded from real-life events, who don't get to just turn it off. There's always another clip in the queue.


I think the downvotes on this post are unfair.

While the tone could be construed as lacking in empathy, and the article clearly states that moderators are exposed to videos, the question can definitely be rephrased as a valid one.

When reading the accounts of the moderators, I of course immediately felt sorry for them - it must be pretty harrowing to watch a constant barrage of child porn, murders and gore (think being on /b/). I imagine it would leave you feeling pretty jaded and losing faith in humanity.

But my second thought went to all my friends who are doctors and see terrible injuries and ailments on a daily basis, or criminal lawyers who review cases which revolve around the gutter of human behaviour. What is the PTSD incidence rate amongst these two professions?

What sparks PTSD? Is it the context in which the triggering events are presented? Are lawyers and doctors more resilient to this because there is an expectation of being exposed to these kinds of things, and the professions are therefore self-selecting? Is it the relentless barrage of content that affected the moderators? Or was it the lack of empowerment, compared with doctors/lawyers, to make an impact on these things?

Going forward what can we do to better prepare moderators for this?


> The details of moderation practices are routinely hidden from public view, siloed within companies and treated as trade secrets when it comes to users and the public. Despite persistent calls from civil society advocates for transparency, social media companies do not publish details of their internal content moderation guidelines; no major platform has made such guidelines public.

It's really nice to see light shining on this topic. Truly, this content moderation is the internet-medium equivalent of the boundaries of debate enforced by the editors and owners of major media corporations in earlier media.


If the details of moderation rules were public, companies would be facing additional labor costs dealing with posters pushing hard on the edges of those rules.

At some level, users can choose from a variety of forums with different moderation methods, or none, and companies can try to fit their moderation policies and practices to their users' expectations.

All of that develops in a cross between evolution and arms race, with lots of moving targets - global variation, changing social mores, changing technology, the latest taboos and moral panics.

Exposing the details of how the sausage is made would help the people who want to put pebbles (or worse) in the sausage more than it would help the makers or consumers. If one finds the sausage too bland, there is likely a spicier one just a few clicks away. Or one could grind one's own sausage and compete!


If the details of moderation rules were public, companies would be facing additional labor costs dealing with posters pushing hard on the edges of those rules.

Only if the rules have really hard edges, and I think that's very unlikely. I think the substantial reason companies won't publish their moderation rules is almost certainly that they don't have much more than a few bullet points - and a page of "things the lawyers have said you mustn't let people say" - and otherwise leave it up to the whim of the moderators.

Having said that, it might well be illegal - or in contempt of court - to publish the list of things that are moderated for legal reasons.


What you describe sounds like security by obscurity.


Publishing moderation policy details would be like crowd-sourcing cryptography software development - creating a fractally increased attack surface with a less profitable business model.

Social media companies operate in a market economy. These are not public utilities or government programs. Feel free to propose a more cost-effective approach.


I think exposing the rules will make them stronger.


Moderation is not cyber security. Virii don't turn around and argue that your AV is unfairly applying their rules to them.


I think the rules are quite different when dealing with human processes rather than algorithmic ones.


Could you elaborate? The same principle seems to apply to me: the cost of dealing with people attacking the edges strengthens the edges, no?


Or plausible deniability.


Many of the problems faced by Facebook, YouTube, and others stem from the fact that they own the digital commons and manage and curate it for the people trying to communicate with one another.

These issues come up for companies whose business models are deciding what people should see and hear. This makes them incredibly powerful, but it brings them into the bog.

This isn't a new problem, other than in internet scale. Media companies, in having to decide what news to print, have long been faced with content moderation.

Finally, content moderation is inherently political. ModSquad - one of many booming companies in the social media moderation space - serves the US State Department, which makes no bones about the fact that it uses all elements of national power to affect what populations think and what access to information they have (it also runs the Bureau of International Information Programs - the State Department's half of the US propaganda programme).

Like newspapers, TV shows, cable companies, and radio programs before it - this new media sees itself filtering content for the public - with all of the complexities, power, and grief that entails.


Yep, filtering is inevitable.

Lots of early Internet folks started out pro-free-speech and tried to be as hands-off as they could. But even if you're a stubborn libertarian, eventually you find out that it doesn't work.

Sooner or later you're going to need a moderation team (or do it all yourself), and there are going to be tough political calls. The bigger the site, the more crap you see and the more tough calls there are.


Excellent article. Just the paragraph below deserves its own extended commentary:

> The screener was instructed to take down videos depicting drug-related violence in Mexico, while those of political violence in Syria and Russia were to remain live. This distinction angered him. Regardless of the country, people were being murdered in what were, in effect, all civil wars. "[B]asically," he said, "our policies are meant to protect certain groups."

Among other things it puts Putin's decision to grab VKontakte from its founder in a new light. Presumably videos of Mexican violence (supported by American guns and by the Americans' desire for drugs) are not banned on VK, while Russian violence I suppose is now easily banned. It also makes him look less like a paranoid for accusing the US special services of having "created the Internet" (http://www.theguardian.com/world/2014/apr/24/vladimir-putin-...)


This part is troubling:

In May 2014, Dave Willner and Dan Kelmenson, a software engineer at Facebook, patented a 3D-modeling technology[1] for content moderation ... First, the model identifies a set of malicious groups - say neo-Nazis, child pornographers, or rape promoters. The model then identifies users who associate with those groups through their online interactions. Next, the model searches for other groups associated with those users and analyzes those groups "based on occurrences of keywords associated with the type of malicious activity and manual verification by experts." This way, companies can identify additional or emerging malicious online activity

It's not hard to imagine how this could be misused: Automated guilt-by-association, and within 3 degrees of separation, triggering investigation by authorities (corporate or otherwise).
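For what it's worth, the mechanism described there amounts to a breadth-first expansion over a user/group graph with keyword scoring at each hop. A toy sketch of that general idea in Python (the data structures, keyword list, and function are hypothetical illustrations, not Facebook's actual implementation):

    # Hypothetical guilt-by-association expansion; not Facebook's code.
    # members_of: group -> set of users; groups_of: user -> set of groups.
    from collections import Counter

    def expand_suspect_groups(seed_groups, members_of, groups_of, keywords, degrees=3):
        """Walk user<->group links up to `degrees` hops from known-bad groups
        and score newly reached groups by keyword hits for manual review."""
        suspect = set(seed_groups)
        frontier_users = set().union(*(members_of.get(g, set()) for g in seed_groups))
        flagged = Counter()
        for _ in range(degrees):
            new_groups = set()
            for u in frontier_users:
                new_groups |= groups_of.get(u, set())
            new_groups -= suspect
            for g in new_groups:
                hits = sum(kw in g.lower() for kw in keywords)
                if hits:
                    flagged[g] += hits          # candidate for human verification
            suspect |= new_groups
            frontier_users = set().union(*(members_of.get(g, set()) for g in new_groups))
        return flagged.most_common()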

EDIT: And also this tool of automated censorship, though I doubt it will surprise many HN readers

PhotoDNA works by processing an image every two milliseconds and is highly accurate. ... Then PhotoDNA extracts from each image a numeric signature that is unique to that image, "like your human DNA is to you." Whenever an image is uploaded, whether to Facebook or Tumblr or Twitter, and so on, he says, "its photoDNA is extracted and compared to our known ... images. Matches are automatically detected by a computer and reported ... for a follow-up investigation"

Currently used for child exploitation images, but for what else too and by whom? They add that it's being applied to something more political:

Farid is now working with tech companies and nonprofit groups to develop similar technology that will identify extremism and terrorist threats online - whether expressed in speech, image, video, or audio.

[1] http://www.google.com.gh/patents/US20080256602
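PhotoDNA itself is proprietary, but the "numeric signature" idea is in the same family as perceptual hashing. A crude average-hash sketch of the fingerprint-and-compare step (using Pillow; this illustrates the general idea, not PhotoDNA's actual algorithm, and the threshold is made up):

    # Rough perceptual-hash sketch: extract a 64-bit signature per image
    # and flag uploads whose signature is close to any known-bad signature.
    from PIL import Image

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = ''.join('1' if p > avg else '0' for p in pixels)
        return int(bits, 2)

    def hamming(a, b):
        return bin(a ^ b).count('1')

    def matches_known(path, known_hashes, threshold=5):
        h = average_hash(path)
        return any(hamming(h, k) <= threshold for k in known_hashes)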


This is a really good point, and this sort of thing could well be happening already. There has already been controversy over negative posts related to immigration being removed from Facebook due to a deal with the government.

And there's already been talk of people inside Facebook asking whether they're 'doing enough' to stop a Donald Trump presidency:

http://gizmodo.com/facebook-employees-asked-mark-zuckerberg-...

As well as cases where people were visited by police after making right wing comments online:

http://www.breitbart.com/london/2016/04/07/police-raid-socia...

(Yes, I know it's from Breitbart, but the point still stands)

And there's a lot of talk about Twitter censorship, blockbots blocking people based on what accounts they follow rather than any actions and all that stuff.

Similar systems are already being abused by social networks, and this type of technology will only make cases like this more common.


It would be great to have more solid data; I feel that speculation only clouds the issue.

> Yes, I know it's from Breitbart, but the point still stands

Not if you don't believe Breitbart


What is going on with Facebook's moderation of Kurdish / anti-Turkish content? Why is it that according to the leaked standards doc[1] (which as far as I can tell is genuine) Turkey gets to ban stuff it doesn't like on Facebook? Presumably it's OK to post a map showing Palestine, or a United Ireland, or other disputed regions on FB. Why can't you post a map of Kurdistan? Aside from Holocaust denial, no other political position seems to have been singled out in this way.

[1] http://i.dailymail.co.uk/i/pix/2012/02/21/article-2104424-11...


I'm guessing here, but Turkey/Erdogan are particularly aggressive in attacking stuff they don't like. See, for example, Germany being made to prosecute one of their comedians, and "In 2012 the Committee to Protect Journalists (CPJ) ranked Turkey as the worst journalist jailer in the world (ahead of Iran and China), with 49 journalists sitting in jail." Also, more amusingly, the doctor facing 5 years for posting pictures of Erdogan next to pictures of Gollum: http://www.theguardian.com/world/2015/dec/03/lord-of-rings-d...

They are probably threatening to arrest people / ban Facebook if there are any maps of Kurdistan posted.


I thought this headline was referring to what I think is a more insidious form of moderation, the type that's practiced here on ycombinator with regard to style; for example, negativity is punished broadly here.

The reason I find it more insidious is that it affects the speech of and acts to silence even moderate people. Extreme speech is easy to spot; but when moderate speech encounters headwinds that divert its path, that's creepy.

I understand the goals here, don't need them explained, just pointing out that slashdot in its heyday was a terrific and informative resource and it did not resort to stifling.


1. Breaking rules

When the article starts out with the story of YouTube, you read about the lists the moderators worked from to keep certain materials off YouTube. And something was missing from the list. Any guess?

Yes, copyrighted material. I knew a guy who used to work at a competitor of Youtube. That startup was started by a Hollywood veteran (naturally), meaning he was very conscious of respecting copyrights. Youtube didn't care about copyrights, which started the upward spiral of more viewers, more uploads, and more viewers.

The end of the story is that YouTube took off and was bought for over a billion dollars, making the founders rich.

Mixed feelings about this. Break rules to beat others in the game, and you end up winning. And most will agree this is not cool. And yet, because of rule breakers (some, not all), our world is advancing.

A comment I once read in a comment section that I found funny: "When someone else cheats, it's adultery. When I cheat, it's romance"

After this story, I've come up with the following: "When someone else breaks rules, it's cheating. When I break rules, it's innovation"

2. No chance for moderation

And then there's the other part in the world of moderation, in which certain stories are not even given a chance for moderators to review.

NYT (New York Times) has abundant and quality comments on news articles. However, if you hang out there long enough, you will notice a trend.

Certain stories that don't help the goals of the NYT don't get a comments section at all. And if a story's comments section gets filled up with comments that seem to be hurting the agenda of that particular article, the comments section closes rather quickly.

And then there's Fox News. They don't even allow any way to post comments.

This kind of social engineering has been going on since humans became social animals, but with technology in the mix, those in power get more powerful.


Your point about breaking the rules to become successful is an old, old trick by community managers. I knew an awful lot of large forums that originally took off by posting ROMs/torrents/music/film downloads and then carefully excised them when the site had tens of thousands of members and might actually be visible to the copyright owners.

It's a pretty good way to become large rather quickly, since illegal or legally questionable content brings in a lot of visitors. Then you just remove it, and watch the network effect/inflated stats lure people to your site.

"IIRC pg wrote an essay defending that concept, more-or-less (and where by "I", he meant "startups"). "

I also heard it mentioned as 'asking for forgiveness rather than permission'. Or more cynically, get big enough that you can afford to fight the inevitable lawsuits.


> "When someone else break rules, it's cheating. When I break rules, it's innovation"

IIRC pg wrote an essay defending that concept, more-or-less (and where by "I", he meant "startups").


This seems like a trending topic. Some related materials:

* The dark side of Guardian comments

https://news.ycombinator.com/item?id=11478361

* Why has the Guardian declared war on internet freedom?

http://www.spiked-online.com/newsite/article/why-has-the-gua...

* The New Man of 4chan

https://news.ycombinator.com/item?id=11510758


This makes the criticisms of Wikipedia look pretty bland. I had never thought about what life in these trenches could look like. Trenches indeed.


Seriously. I can't count the number of times a day I see something marked NSFL and think, "nope, not even gonna look."

But these people have to. I wouldn't wish that on anyone.


Realise that pretty much anything that hits YouTube or FB also hits Wikipedia.

(I'm trying hard to avoid making plays on "pediwikia" without success. And not faulting Wikipedia but rather those who'll post that kind of content, or other rot, if they can.)


Whoever is responsible for the boxes that follow your mouse ought to have their commit privileges revoked.


Oh, is that what was meant to be happening? I was reading on mobile and the page just kept randomly scrolling, and when I came back to it after 8 hours a bunch of boxes had spilled semi-transparent shadows over the text. It's funny how 20 years of improving tech have led us to a situation where we now just have to ignore huge layout issues and people shouting incredibly unpleasant things.


Glad someone else took issue with this. We're in a constant race to improve battery life and processing power so we can waste energy on these pointless "innovations". I'm all for improving UX, but I constantly wonder how much longer our batteries would last if they weren't being drained by gratuitous features like this one...


On mobile, it appears to be tied to the gyroscope or accelerometer. They only move as I tilt/move my phone. Keeping it fixed and upright removed the floating boxes. Very strange. It took me 2/3 of the article to even figure out it was my movement making them move.


This article made me wonder about the range of things that these companies are dealing with under the rubric of "moderation" or "trust and safety", particularly the way that companies can have so many different kinds of incentives to remove content from a platform. (Three that come to mind are "this may create legal liability", "this may damage our brand", and "this isn't what many of our users would like to see".)

If the Internet or the way that people commonly use it is going to continue to get more centralized, I hope platforms improve their ability to distinguish between the problem of "things people don't want to see" and "things people don't want other people to see", or, to put it another way, between "please don't show me things like this" and "please don't allow people to publish things like this". While users themselves might not consistently draw the distinction between the two, platforms, in principle, could.

The article touched in this issue in its discussion of the extreme variation in cultural norms, and the likelihood that one culture's vulgarity is another culture's lyr—er, that one culture's distaste for something will lead to considerable pressure for platforms to squelch it for everybody.

A common idea is that platforms have a right and even a responsibility to define their own community standards and then people can choose platforms that they prefer or that best suit them, much as people can choose the newspaper whose editorial policy and biases they find most agreeable. I think this notion has a lot to recommend it but it seems less comfortable as the patterns of people's usage of the Internet becomes ever more centralized; it also raises the question of what things can be considered infrastructural enough that people can reasonably expect (or at least accept) complete content-neutrality from them.

The article also provided an interesting reminder that most people are unlikely to be comfortable using communications systems that have no moderation or filtering at all. Among other things, that's an interesting challenge for decentralized and censorship-resistant systems; to become more popular and practical, they'll need to be paired with some ways that people can avoid unwanted communication, beyond spam.


I didn't even realize that all of these sites were deleting racist and "hate" speech. That's totally reprehensible. Any site that would do that would also delete blasphemy and violations of lese majeste. The worst part is that they justify it with philosophical jumbles like "open policies [are] stifling free expression," as if one person speaking prevents another from also speaking.

edit: also, the "trillion or so dollars of value" that an "expert" ascribed to Section 230 is simply a euphemism for ads. Nothing else.

edit2: Wow. I just realized that the reason I've stopped getting racist hits on search engines in recent years is because they've been removed. Part of the range of discourse disappeared and I didn't even realize it. That's also got to be the reason that comment sections in papers like the Boston Globe, Chicago Tribune, and Washington Post have become racist cesspools. Forums where they could have their mutual appreciation societies away from normal people have been completely hidden from view.

Am I the only person who didn't think the internet was broken before all of this arbitration?


>Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history.

I feel like this is a pretty big jump to make. There's a difference between deciding not to publish and amplify child sexual assault, and the "boundaries" of free speech.

Whether their speech is free or not is irrelevant; YouTube has no obligation to publish it.


I've posted before about my theories regarding optimum commenting and moderation systems, but one thing I've recently been thinking about is some sort of logic/rationality analysis engine (perhaps through nlp/ml?).

What I really want is to wade through the extra verbal fluff of a comment and get to its real points, so that I can determine if they are rational and logical. My primary method is looking for fallacies, which are telltale signs of a bad argument.

That still doesn't "fix" moderation, but I think such a system could be used to get rid of consistently illogical trolls, to reduce the moderator workload at least.
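As a very rough illustration, even crude pattern heuristics could score a posting history and surface repeat offenders for a human to look at; real fallacy detection would need actual NLP/ML. A toy Python sketch (the patterns, threshold, and function names are invented for illustration):

    import re
    from collections import defaultdict

    # Toy regexes for a few bad-argument tells; purely illustrative.
    FALLACY_PATTERNS = {
        "ad_hominem":   re.compile(r"\byou('re| are) (an? )?(idiot|moron|shill)\b", re.I),
        "strawman":     re.compile(r"\bso (you're|you are) saying\b", re.I),
        "whataboutism": re.compile(r"\bwhat about\b", re.I),
    }

    def score_comment(text):
        return sum(bool(p.search(text)) for p in FALLACY_PATTERNS.values())

    def flag_users(comments, threshold=0.5):
        """comments: iterable of (user, text) pairs. Returns users whose
        average per-comment score exceeds the threshold, for human review."""
        totals, counts = defaultdict(int), defaultdict(int)
        for user, text in comments:
            totals[user] += score_comment(text)
            counts[user] += 1
        return [u for u in totals if totals[u] / counts[u] > threshold]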

Ideally, I imagine exclusivity of posting to be barrier 1, and moderation to be barrier 2. Some strange combination of HN rules and /. randomized moderation, along with /. tagging styles, would probably be best.

Another factor to consider is that sockpuppetry has thrown the democratic balance of such systems off, and I'm not yet sure how to deal with that.


The assumption being that forums and trolls are static. Trolls adapt and with enough innuendo, the innocuous will be loaded with dark meaning.

Plus someone has to wade through the sea of false positives.


Flickr seems to address the issues in a slightly different way. When uploading, you self-"censor" (classify), in that you can mark uploads as "mature" or more run of the mill. Plus you have groups, which provide guidelines and have group moderators who may bump things out of the pool, ban people, etc. In addition, users can flag content, and moderators can appeal to Flickr to ban users for harassment, etc.

So, while exploitation, criminal activity, etc. can still be problematic, the issue of free speech as it pertains to politics is less of an issue, because you can upload the content to a group whose policies allow such content. So, for example, you might find a group which is welcoming of videos or pictures showing police brutality, gang brutality, or the like [so long as the content isn't an excuse to portray something else].

All this is to say Flickr allows/allowed for a more grassroots approach to moderation, apart from meta-moderation where the usual rules apply -- i.e. to criminal acts, exploitation, etc.


I wonder if there could be a protection ring [1] model of moderation and content classification--one that protected business interests yet classified objectionable material based on content while letting service providers retain common carrier status.

In such a world, service providers would choose a level of filtering that fit their business needs, but would also let the free market decide what minimum standards were acceptable. YouTube's moderation expenses would simply focus on keeping them at level X (with a clear separation from some other site). Consumers, producers, and law enforcement would simply tune their dial to the level of content they found acceptable.

It seems like it would promote competition while effectively pricing free speech.

[1] https://en.wikipedia.org/wiki/Protection_ring
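A minimal sketch of what that dial might look like, with hypothetical level definitions loosely echoing the protection-ring analogy (nothing here reflects any real platform's classification scheme):

    from enum import IntEnum

    # Hypothetical content rings; lower numbers are more restrictive.
    class ContentLevel(IntEnum):
        FAMILY_SAFE = 0   # nothing objectionable
        GENERAL     = 1   # mild profanity, moderated discussion
        MATURE      = 2   # graphic but legal content, clearly labeled
        UNFILTERED  = 3   # anything legal; consumer assumes the risk

    def visible(item_level, provider_max, consumer_dial):
        """Show an item only if both the platform's chosen maximum
        and the consumer's own dial permit it."""
        return item_level <= min(provider_max, consumer_dial)

    # e.g. a platform operating at MATURE, a user dialed to GENERAL:
    assert visible(ContentLevel.GENERAL, ContentLevel.MATURE, ContentLevel.GENERAL)
    assert not visible(ContentLevel.MATURE, ContentLevel.MATURE, ContentLevel.GENERAL)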


i feel like this topic is a logic minefield

not least the question of under what criteria censorship becomes moderation


The difference may have more to do with whether you like a particular instance of it or not. "Censorship" is a pejorative and "curation" is an honorific and "moderation" is either that or neutral.


an example of the logic minefield

i understand your sentiment, it is concise and well developed

but saying 'censorship' is just a pejorative for 'moderation', or worse 'curation', seems dangerously disingenuous

censorship is a thing

> the practice of officially examining books, movies, etc., and suppressing unacceptable parts.

that definition lacks any sort of caveat of disapproval or association with moderation or curation

i would suggest you can make the same point in more honest language by saying: moderation is censorship you agree with, or approve of, or tolerate, or enable


Did 'dang edit his comment? Is there a material difference between "... whether you like a particular instance of it or not ..." and "... moderation is censorship you agree with, or approve of, or tolerate, or enable" ?


dang's comment is the same as it was when i responded

the 'material difference' that i was trying to discuss was less about the first part of dang's comment and more about the second

> "Censorship" is a pejorative and "curation" is an honorific and "moderation" is either that or neutral.

calling censorship a pejorative, expressing contempt or disapproval, and calling moderation 'neutral', to me, lends itself to the interpretation that the superset is moderation and censorship is a form of moderation you disapprove of

'censorship=-moderation' juxtaposed with 'moderation=+censorship'

it's funny to have received a response from dang, the hn moderator, because i have this.. self censoring :p.. feeling that any opinion i express in response will somehow be associated with dang's work on this site

i want to note, that though i am sure plenty of work goes on behind the scenes that i am unaware of, the times i have seen dang step into a thread and moderate explicitly it has been done with commendable tact and respect both for the community and the issue or user being addressed

that said,

> "Censorship" is a pejorative

is a terrifying sentence to me


What I meant is that people often describe the same thing X as "censorship", "moderation", or "curation" depending on whether they personally agree or disagree with that case of X. It's a bit like the difference between "tourism" and "travel". It's hard to give objective definitions of these terms independently of one's feelings.


I think I see what you're saying and if I do I think you've made a mistake in separating the first and second parts of dang's comment.


if you can explain what or why or how you think as such then a possible discussion or realisation of my mistake could be had


I meant that the second part was illustrative of the first part and not to be taken out of context. I.e., whether someone chooses to use 'censorship', 'moderation', or 'curation' depends on how they view the subject at hand. I don't believe he meant 'censorship' is objectively bad - just that it is usually used pejoratively.

This is kind of a long-winded thread about two sentences from 'dang ... not sure how much value is left in continuing it further.


> This is kind of a long-winded thread about two sentences from 'dang ... not sure how much value is left in continuing it further.

in all fairness this is a 'long winded' thread about two sentences from my gp.. hence my continued interest in discussion

> whether someone chooses to use 'censorship', 'moderation', or 'curation' depends on how they view the subject at hand.

with this, i agree

> the difference may have more to do with whether you like a particular instance of it or not.

with this, i agree

> "Censorship" is a pejorative and "curation" is an honorific and "moderation" is either that or neutral.

with this, i disagree

i read that as 'censorship is a pejorative', but you are suggesting i read it as 'censorship is usually a pejorative'

i agree that one can call an act of censorship by name to draw attention to their contempt for it, but if i note something is censorship, and someone responds by saying, 'that is pejorative', i am going to question that person's bias


I think dang meant the two statements to be dependent and expository. The second is explaining what is meant by the first. One's subjective take on a piece of content determines whether one would call censoring that content "censorship" or "moderation"—I ignore "curation" because I think that's quite a different thing, a sort of reverse censoring that is more akin to highlighting. Whether an act is moderation or censorship is definitely still censoring, but it's only going to be called censorship if one subjectively agrees with or accepts the censored content. Otherwise, the act of censoring that content will be called moderation if one subjectively disagrees with or rejects the content as something others ought to see and experience. I could be mistaken, but I don't think dang was suggesting censorship is objectively and intrinsically pejorative, but that a person's biases determine whether censoring is seen as a positive or negative action, and informs the word used to describe the act.

Of course, I'm admittedly accepting there is a difference between censoring (the action that results in censorship or moderation), moderation (the subjective act that positively serves an agenda), and censorship (the subjective act that negatively serves an agenda). In so doing, I admit that my own use of the terms is subjectively informed by my reception of the content (and think it illustrates what dang was after).


When moderators step away from removing content that is genuinely seen as against the rules (say, spam, or threats, or obvious trolling) and start removing content that annoys them on a political level (such as people being banned for disagreeing with them or supporting a different political party/view).

That's how I always viewed it anyway. And that's from someone who's moderated quite a lot of forums and other internet communities.


Wow. There is just so much in this article to wrap one's head around. I feel somewhat ashamed that I've never really thought about moderators at all, much less of the psychological and emotional effects they must endure as a result of the content they must see. I know it's work I wouldn't want to perform. I'm grateful for this investigation and report. There's a ton to unpack and think through.


Oh, that Neda video was uploaded by me to Youtube, because the Iranians who brought it out of the country couldn't. They never talked to me about it.


This is one example of a job where neural networks/AI can, and should, replace humans. Video and image classification is basically a solved problem. Numerous companies have already trained deep convolutional networks to recognize pornography and other forms of content unwanted on their platforms.
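For what it's worth, the inference side of such a system can be sketched in a few lines of PyTorch: a pretrained backbone with a small ok/flag head that routes borderline frames to a human. The checkpoint name and the two-class setup are hypothetical, and production systems are far more involved:

    import torch, torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Hypothetical two-class "send to human review" classifier.
    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, 2)             # [ok, flag]
    model.load_state_dict(torch.load("moderation_head.pt"))   # hypothetical weights
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def review_priority(path):
        """Probability that a frame should be escalated to a human reviewer."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)
        return probs[0, 1].item()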


Maybe I'm missing something here, but I suspect this would be a technical and political minefield. Sure, you can recognise certain types of abuse images, but then what about content like that in fiction? It stands to reason that something seen as problematic in real life would be fine in, say, a Call of Duty game or a blockbuster movie or some other form of fiction.

Heck, there have been cases where scenes in movies and games have been 'reappropriated' as real life military or terrorist events by clueless nations and groups. For example:

http://www.shacknews.com/article/93359/footage-of-six-isis-s...

So your system would have to figure out not just whether something is seen as 'offensive' or 'against the rules', but whether it's from a fictional work that might be allowed on the site.


> In April of 2005, they tested their first upload. By October, they had posted their first one million-view hit: Brazilian soccer phenom Ronaldinho trying out a pair of gold cleats. Weeks later, Google paid an unprecedented $1.65 billion to buy the site.

This article misstates when Google acquired YouTube. It was October 2006, not October 2005: https://en.wikipedia.org/wiki/YouTube#Company_history


They also mistake the traffic stats for /r/AskReddit as being traffic for all of Reddit:

> According to a source close to the moderation process at Reddit, the climate there is far worse. Despite the site’s size and influence — attracting some 4 to 5 million page views a day[1] — Reddit has a full-time staff of only around 75 people, leaving Redditors to largely police themselves

[1] https://www.reddit.com/r/AskReddit/about/traffic


> last month, reddit had 243,632,148 unique visitors hailing from over 212 different countries viewing a total of 8,137,128,592 pages

Which is about 270 million page views per day.

https://www.reddit.com/about/


Given that they don't have a large marketing team, 75 people would be a huge team to deal with 4 to 5 million page views


I'm glad you caught this too. I thought it was WAY too soon; I knew YouTube when it had grown relatively popular, quite a bit before Google's acquisition of it.


By the time I read it this had been corrected and acknowledged in the footnotes, so I guess it's good to know the system works


That's just a beautiful layout on that page. Amazing typography and colour.


Glad you like it, but I actually stopped reading the article due to the atrocious typography and gimmicks.

Those shadows behind the asides that move as you move your mouse are extremely distracting, and they overlap with the body text. For the body font they depend on users having Helvetica or Arial installed to render (neither of which I have) instead of using @font-face, so the text looks out of place.

Also, none of the background images loaded for me until I tried with a browser without ad-blocker and third-party tracking protection.


I'm using NoScript so it looks fine to me; the site's a bit broken, but that's fine by me. If you have Firefox you could always try "Reader View" to read the article without all the glitter.


It would actually be a nice feature if I could add certain domains to a "reader view" list in the browser, so that they always use that view directly. Medium, The Verge, all those giant blown up designs would be much more readable.


The parallax was a bit too much for me though. I ended up opening the console and pausing the javascript.


They didn't do pure CSS parallax? I guess in this case it's for the best, but generally that's Doing It Wrong.


[flagged]


Please don't post like this here. Substantive discussion requires resisting the pull of ideological black holes.

It's typical for web publications to be mostly bad yet sometimes good, regardless of one's politics. This article is good and interesting (and maybe also wrong in places, I don't know), so let's stay on that planet.


"Men are evil" certainly isn't the message I took from the article, but media bias like a fair topic for discussion. I was interested how this article was created, and it turns out to be funded by http://www.theinvestigativefund.org, which is an offshoot of The Nation. While I don't see this article as terribly "anti-male", The Nation definitely has a strong political slant that not everyone agrees with. Maybe you could suggest a way of rephrasing 'rustynails' points that would be more likely to yield productive discussion?


> "Abusive men threaten spouses" (aka domestic violence is a man thing)

Anyone can be the victim of domestic violence, and anyone can be the perpetrator, but overwhelmingly domestic violence involves a male perpetrator and a female victim.

Even if you expand the definition of "violence" to include "abuse" you still see double the number of female victims of abuse.

Here's a source. They list their methods, they give the Excel sheets.

http://www.ons.gov.uk/peoplepopulationandcommunity/crimeandj...

Here's one quote:

    There were differences between males and females in the pattern of relationships between victims and suspects. Women were far more likely than men to be killed by partners/ex-partners (44% of female victims compared with 6% of male victims), and men were more likely than women to be killed by friends/ acquaintances (32% of male victims compared with 8% of female victims).
> People need to denounce prejudice by anyone (despite what political correctness says). Feminism doesn't deserve protection in the way that the KKK don't deserve it. You aren't born a feminist or KKK, but you sure can choose to propagate the intolerance. I took my stance when I saw that young boys are impacted by a gender war that never should have started.

For fuck's sake.


Little different from any historical thought police - we know what's best for you, citizen, move along, nothing to see here.


This is vastly different from historical/fictional thought-policing. Did you read the article and really wind up with this takeaway?


I don't think perfect moderation to some global average morality is the future of humanity. It's TV mentality.

I think the future is keeping vulnerable groups apart from offensive groups. Hellbanning and honeypot bots are the future. A white supremacist can talk to others like him all day long and there's no harm done. Those who would rather direct their talk at black people, or at people who get offended by racism, should get algorithmically administered silence, or honeypot bots that will react in the best way possible to defuse and calm things down without engaging a human in distasteful content.

The plus side of this solution is that you can keep tabs on potentially dangerous people and react if they escalate or brag about physical harm they've done.


> White supremacist can talk to other like him all day long and there's no harm done.

Right up until one of them decides it's time to leave his little echo chamber and murder a bunch of kids on an island because he and his Internet buddies have convinced themselves that's the right thing to do.


Between 1984 and bad people doing bad things, I choose the latter with no regrets.


The worst thing in 1984 was not surveillance but the rewriting of truth. You can't have progress if you are not open and honest.

Pretending that there's just one morality that everyone normal must adhere to is more 1984 than any of my ideas.


Would you rather they coordinated their attack on a channel you don't monitor?

Do you think you can stop stupid people from doing harmful things by banning them from expressing their stupid opinions online?

Breivik is the exception, not the rule, and it could all be avoided if he were allowed to speak freely and discuss his demented ideas with people who don't mind, so he could be monitored.


It's TV mentality.

You're right, but most of these companies like that mentality, because a user in that mode is stickier and more profitable.

And that's OK! All of these private platforms can find their own happy medium. Those who want to watch Nazi crap might not find any mainstream forum for that, but surely one of them is resourceful enough to set up a site for that purpose. This might make "keeping tabs" more difficult, unless of course the entire site is a honeypot created for that purpose, by other parties. (Admittedly I don't see much use in surveilling Nazis, but there are other groups that more often switch from talk to violence.)


Many large (and small) subreddits are highly censored: r/worldnews, r/news, and r/europe, to name a few.

Here's a recent example, not even the most egregious, just a recent one.

https://www.reddit.com/r/worldnews/comments/4ew68z/half_of_a...

Notice how many deep comment threads, voted high up in the comment stack, have been excised from the discussion.

It's disgusting. It's very often the case that the top voted contributions have been 'disappeared.'

Again, just disgusting. What exactly is the point of a 'democratic' news site if there is this constant intervention from unaccountable authorities, constantly policing what information and opinions are allowed to be discussed?

Any information outside the scope of a narrow ideological agenda is summarily terminated. Is this the public square of the future that we want?

kn0thing, spez, What are your thoughts?


And yet, you can start your own damn subreddit. You can talk about anything you want, even violent, openly racist stuff. You can decide not to moderate it at all.

So clearly, you can talk about nearly anything you want on reddit. Your problem is that you want to be able to talk about anything you want in somebody else’s subreddit.

It’s as if you move onto my street, live in a perfectly good house, but complain about the fact that you can’t do whatever you like in my house.

Freedom to say whatever you want is not the same thing as forcing other people to pay attention to it. You need to either find like-minded people moderating a like-minded subreddit, or find a way to get other people interested in your own subreddit.

This business of demanding that everybody else give you a platform because “freedom of speech” is flat-out wrong.


The point (of the problem on reddit) as I see it has less to do with free speech and more to do with malicious moderation intended as a means of propaganda.


One person’s editorial policy is another’s malicious propaganda. Either way, the moderators control the content and I agree that it’s important not to confuse a private platform (like a subreddit or even reddit as a whole) with a free platform.

Your own blog is the only free platform (for non-extreme definitions of “free.”) All other media involve some sort of overt or more subtle curation.


...and yet when anyone talks about a good subreddit it's almost always a sub that has strict rules and vigorous moderation, or a tiny sub that has a tiny number of users.


Well, I'm saying those are bad subs, because they have 'vigorous' moderation. So 'strict, vigorous moderation' is not a feature that automatically makes a sub good.


Well, they're the subreddits that get cited whenever anyone talks about the glorious goodness of reddit:

/r/ama, /r/iama, /r/askscience, /r/syriancivilwar (cited by news organizations as a source and informing their methodology)...

Moderation doesn't automatically connote good subs, but the correlation is very strong.


I don't know why you are downvoted. What you say is entirely accurate. In particular, /r/europe, which I used to frequent, is horribly biased. When TTIP was in the news, all negative criticism of TTIP was censored. If you visited the subreddit you'd think 100% of European redditors were pro-TTIP fanatics.



