
The murky history of moderation and how it’s shaping the future of free speech - pmcpinto
http://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech
======
dang
We've changed the title from the linkbait headline to the informative subtitle
and re-upped* this post. I don't think we've ever done that for a story that
already had 100+ upvotes, but this article is so astonishingly in-depth, and
the topic so little known, that it deserves a deeper discussion.

* Described at [https://news.ycombinator.com/item?id=10705926](https://news.ycombinator.com/item?id=10705926) and the other links there.

~~~
jrapdx3
I entirely concur with both your points, especially about how little
appreciated this subject has been. I was dimly aware of it; I mean, it's no
secret there's a lot of really terrible stuff on the web, and it's shocked me
when I've stumbled into it. It's disquieting to realize I never thought about
what it takes to keep a site like YouTube from being a complete cesspool. It
never crossed my mind, but the question _should've_ occurred to me. Why
didn't it? I wish I had an answer.

Of course, now I know how good a job moderators have been doing, and I have a
better understanding of the toll that role can take. They deserve a lot of
respect. The idea that this is a subject worthy of systematic study sticks
with me, though I'm not yet clear about the next steps. In any case, the
implications of the story cover a large space in economics, politics, ethics,
law, and medicine. It will be fascinating to see how it develops.

------
jrapdx3
This is a very good article and quite an eye opener. As an internet/web "user"
for twenty years or so I'm familiar with the existence of "moderation", but
had only vague awareness of what moderators endured in that role. Much as 911
operators can be traumatized, it would be unsurprising if a subset of
moderators developed PTSD-like symptoms related to that experience. Workers'
comments quoted in the article lead me to speculate that there are significant
mental health epidemiological questions re: current and former moderators.

Effective automated approaches will provide a big advantage in narrowing down
the "judgement space" that must be decided by humans. To the extent that's
possible, the key benefit is reducing the exposure of moderators to the
stressful situations the article describes, and that would indeed be a very
helpful development.

As the article points out the whole domain of moderation practices is a
minefield. But now I wonder if there isn't also a risk of automated review
making classification errors re: user behavior. Plausibly automated systems
can be tuned finely enough to avoid serious errors, and support human
oversight to catch and more easily resolve edge cases. Automated moderation
systems will need to have such qualities in order to be able to reduce human
burden as intended.
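
To make the foregoing concrete, here is a toy sketch of the kind of triage I
have in mind: an automated scorer handles the clear cases and routes only the
ambiguous middle to humans. The thresholds and labels are invented for
illustration, not taken from any real system.

    # Toy triage (Python): automation narrows the "judgement space" so that
    # humans only review the ambiguous middle. Thresholds are invented.
    def route(offensiveness: float) -> str:
        if offensiveness >= 0.95:
            return "auto-remove"       # clear violations never reach a human
        if offensiveness <= 0.05:
            return "auto-allow"        # clearly benign content passes through
        return "human-review"          # only edge cases cost human exposure

    assert route(0.99) == "auto-remove"
    assert route(0.01) == "auto-allow"
    assert route(0.50) == "human-review"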

~~~
Karunamon
There is probably no way to ask this question that doesn't sound dismissive at
best, so please understand that is not the spirit in which this is intended, but:

 _it would be unsurprising that a subset of moderators develop PTSD-like
symptoms related to that experience._

Is that really the case? Most moderation is done on text-based forums, like
the one we're on right now. I'm having a hard time imagining the right
combination of words on a screen generating trauma in the requisite
quantities for someone to wind up with a clinical disorder because of it.

Personally, I've been on both sides of this coin. I've been the guy kicking
trolls off of a decent sized board, I've been the guy getting kicked off
because I annoyed the wrong person.

At no point did it ever progress beyond internet drama. The meanest, nastiest
person I could imagine could type in words along the lines of how they'd like
to kill and fornicate with my mother, and my response, and I wager most other
people's, would be "That's cool, kid. Bye now. _Ban_."

Maybe there's a case to be made for becoming jaded after a while of dealing
with the worst (hang out on the meta Stack Exchange sites for a publicly
visible example) - but PTSD symptoms? To me, that both overly glorifies the
troll "i can type words so good that I can make other people have legitimate
mental breakdowns!" and makes light of the suffering by people who _actually_
have PTSD (who have seen things like _people dying_ ). It feels like a lack of
perspective, brought on by spending a lot of time online. Unlike real life,
disconnecting from online drama is always a button press away.

~~~
jrapdx3
I'm sorry, but I do believe you misunderstood the article and perhaps my
comments on it as well.

The article clearly describes moderators _watching videos_ flagged as
offensive, and these videos included "amateur and professional pornography" and
many that contained "child abuse, beatings, and animal cruelty". Furthermore
the article discusses moderators having to deal with videos shot during the
Iranian revolution in 2009, including the murder of a young woman, "a shaky
cell-phone video captured her horrific last moments: in it, blood pours from
her eyes, pooling beneath her."

The article presents many more examples of the enormously disturbing tasks
assigned to moderators. Based on decades in clinical practice I'd consider it
likely that viewing such material, especially seeing it frequently, can
precipitate acute and chronic stress disorders in vulnerable individuals. I
understand that this is only hypothetical in the absence of systematically
collected data, call it a clinician's hunch if you want, but still I'd put
money on it.

Rereading the article confirms for me that it is an excellent piece of
journalism, perhaps even exceptional in the current era. Moderators were
exposed to far more than words, in a way similar to 911 operators suffering
exposure to trauma without being at the scene. Forum moderation has very
little in common with the subject of the article.

You are severely underestimating the effects of this kind of experience on
people in such positions. You should learn more about stress disorders, the
complexity of which is bound up with the unique and highly variable attributes
of individuals. Some people are much more resilient than others, making it
misleading to generalize about how people will respond to given situations.

Most of all, I strongly suggest you (re)read the story and, if capable, allow
yourself to empathize with the plight of the moderators.

~~~
mbrutsch
> You should learn more about stress disorders

Uh, why? Am I a counselor?

> allow yourself to empathize with the plight of the moderators

To what end? What possible constructive purpose is served by allowing myself
to share (or wallow) in the negative emotions described by others? Does it
stop "bad things" from happening? Does it lessen their pain? Does it lessen
_anyone's_ pain? Or does it increase the overall amount of suffering in the
world?

brb, going to voluntarily take a job that makes me feel bad, so I can cry
about it on the internet...

~~~
jacalata
Perhaps he addressed that comment to someone who seemed to feel they already
knew enough to have an opinion on the topic, and meant to imply that you
should learn more about this _if you intend to have opinions on it_.

------
proksoup
> The details of moderation practices are routinely hidden from public view,
> siloed within companies and treated as trade secrets when it comes to users
> and the public. Despite persistent calls from civil society advocates for
> transparency, social media companies do not publish details of their
> internal content moderation guidelines; no major platform has made such
> guidelines public.

It's really nice to see light shining on this topic. Truly, this content
moderation is the internet-medium equivalent of the boundaries of debate
enforced by the editors and owners of major media corporations in our
previous media.

~~~
dmckeon
If the details of moderation rules were public, companies would be facing
additional labor costs dealing with posters pushing hard on the edges of those
rules.

At some level, users can choose from a variety of forums with different
moderation methods, or none, and companies can try to fit their moderation
policies and practices to their users' expectations.

All of that develops in a cross between evolution and arms race, with lots of
moving targets - global variation, changing social mores, changing technology,
the latest taboos and moral panics.

Exposing the details of how the sausage is made would help the people who want
to put pebbles (or worse) in the sausage more than it would help the makers or
consumers. If one finds the sausage too bland, there is likely a spicier one
just a few clicks away. Or one could grind one's own sausage and compete!

~~~
proksoup
What you describe sounds like security by obscurity.

~~~
dmckeon
Publishing moderation policy details would be like crowd-sourcing cryptography
software development - creating a fractally increased attack surface with a
less profitable business model.

Social media companies operate in a market economy. These are not public
utilities or government programs. Feel free to propose a more cost-effective
approach.

~~~
proksoup
I think exposing the rules will make them stronger.

~~~
intended
Moderation is not cyber security. Viruses don't turn around and argue that
your AV is unfairly applying its rules to them.

------
xnull2guest
Many of the problems faced by Facebook, YouTube, and others stem from the fact
that they own the digital commons and manage and curate it for the people
trying to communicate with one another.

These issues come up for companies whose business models are deciding what
people should see and hear. This makes them incredibly powerful, but it brings
them into the bog.

This isn't a new problem, other than in internet scale. Media companies - in
having to decide what news to print - have long faced content moderation.

Finally, content moderation is inherently political. ModSquad - one of many
booming companies in the social media moderation space - serves the US State
Department, which makes no bones about the fact that it uses all elements of
national power to affect what populations think and what access to information
they have (it also runs the Bureau of International Information Programs - the
State Department half of the US propaganda programme).

Like newspapers, TV shows, cable companies, and radio programs before it -
this new media sees itself filtering content for the public - with all of the
complexities, power, and grief that entails.

~~~
skybrian
Yep, filtering is inevitable.

Lots of early Internet folks started out pro-free-speech and tried to be as
hands-off as they could. But even if you're a stubborn libertarian, eventually
you find out that it doesn't work.

Sooner or later you're going to need a moderation team (or do it all
yourself), and there are going to be tough political calls. The bigger the
site, the more crap you see and the more tough calls there are.

------
paganel
Excellent article. Just the paragraph below deserves its own extended
commentary:

> The screener was instructed to take down videos depicting drug-related
> violence in Mexico, while those of political violence in Syria and Russia
> were to remain live. This distinction angered him. Regardless of the
> country, people were being murdered in what were, in effect, all civil wars.
> "[B]asically," he said, "our policies are meant to protect certain groups."

Among other things it puts Putin's decision to grab VKontakte from its founder
in a new light. Presumably videos of Mexican violence (supported by American
guns and by the Americans' desire for drugs) are not banned on VK, while
Russian violence I suppose is now easily banned. It also makes him look less
paranoid for accusing the US special services of having "created the
Internet" ([http://www.theguardian.com/world/2014/apr/24/vladimir-
putin-...](http://www.theguardian.com/world/2014/apr/24/vladimir-putin-web-
breakup-internet-cia))

------
hackuser
This part is troubling:

 _In May 2014, Dave Willner and Dan Kelmenson, a software engineer at
Facebook, patented a 3D-modeling technology[1] for content moderation ...
First, the model identifies a set of malicious groups - say neo-Nazis, child
pornographers, or rape promoters. The model then identifies users who
associate with those groups through their online interactions. Next, the model
searches for other groups associated with those users and analyzes those
groups "based on occurrences of keywords associated with the type of malicious
activity and manual verification by experts." This way, companies can identify
additional or emerging malicious online activity_
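
As a rough illustration, the expansion step described there (group -> users ->
further groups, scored by keywords) might look something like the sketch
below; the graph, keywords, and threshold are all invented here, not taken
from the patent.

    # Illustrative sketch (Python) of the patent's group -> users -> groups
    # expansion. All data and names below are made up.
    from collections import Counter

    members = {                          # group -> users who interact with it
        "known_bad_group": {"u1", "u2"},
        "group_x": {"u2", "u3"},
        "group_y": {"u4"},
    }
    group_text = {"group_x": "plan the attack", "group_y": "cat photos"}
    KEYWORDS = {"attack", "target"}      # stand-in malicious-activity terms

    def expand(seed: str) -> list[str]:
        """Find groups sharing members with the seed, then keyword-score them."""
        suspects = members[seed]
        overlap = Counter({g: len(users & suspects)
                           for g, users in members.items() if g != seed})
        # The keyword check stands in for "occurrences of keywords ... and
        # manual verification by experts" in the patent's wording.
        return [g for g, n in overlap.items()
                if n > 0 and KEYWORDS & set(group_text[g].split())]

    print(expand("known_bad_group"))     # ['group_x']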

It's not hard to imagine how this could be misused: Automated guilt-by-
association, and within 3 degrees of separation, triggering investigation by
authorities (corporate or otherwise).

EDIT: And also this tool of automated censorship, though I doubt it will
surprise many HN readers

 _PhotoDNA works by processing an image every two milliseconds and is highly
accurate. ... Then PhotoDNA extracts from each image a numeric signature that
is unique to that image, "like your human DNA is to you." Whenever an image is
uploaded, whether to Facebook or Tumblr or Twitter, and so on, he says, "its
photoDNA is extracted and compared to our known ... images. Matches are
automatically detected by a computer and reported ... for a follow-up
investigation"_
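
In outline, that flow is just signature extraction plus set membership. A
minimal sketch follows, using SHA-256 as a stand-in for PhotoDNA's proprietary
perceptual signature; the real signature survives resizing and re-encoding,
whereas an ordinary hash only matches byte-identical files.

    import hashlib

    # Signatures of previously flagged images (the "known ... images" set).
    known_signatures: set[str] = set()

    def signature(image_bytes: bytes) -> str:
        # Stand-in for PhotoDNA's robust perceptual signature.
        return hashlib.sha256(image_bytes).hexdigest()

    def check_upload(image_bytes: bytes) -> bool:
        """True if the upload matches a known image and should be reported."""
        return signature(image_bytes) in known_signatures

    known_signatures.add(signature(b"<flagged image bytes>"))
    assert check_upload(b"<flagged image bytes>")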

Currently it's used for child exploitation images, but what else could it be
used for, and by whom? The article adds that it's being applied to something
more political:

 _Farid is now working with tech companies and nonprofit groups to develop
similar technology that will identify extremism and terrorist threats online -
whether expressed in speech, image, video, or audio._

[1]
[http://www.google.com.gh/patents/US20080256602](http://www.google.com.gh/patents/US20080256602)

~~~
CM30
This is a really good point, and this sort of thing could well be happening
already. There has already been controversy over negative posts related to
immigration being removed from Facebook due to a deal with the government.

And there's already been talk of people inside Facebook asking whether they're
'doing enough' to stop a Donald Trump presidency:

[http://gizmodo.com/facebook-employees-asked-mark-
zuckerberg-...](http://gizmodo.com/facebook-employees-asked-mark-zuckerberg-
if-they-should-1771012990)

As well as cases where people were visited by police after making right wing
comments online:

[http://www.breitbart.com/london/2016/04/07/police-raid-
socia...](http://www.breitbart.com/london/2016/04/07/police-raid-social-media-
posts/)

(Yes, I know it's from Breitbart, but the point still stands)

And there's a lot of talk about Twitter censorship, blockbots blocking people
based on what accounts they follow rather than any actions and all that stuff.

Similar systems are already being abused by social networks, and this type of
technology will only make cases like this more common.

~~~
hackuser
It would be great to have more solid data; I feel that speculation only clouds
the issue.

> Yes, I know it's from Breitbart, but the point still stands

Not if you don't believe Breitbart.

------
Joeboy
What is going on with Facebook's moderation of Kurdish / anti-Turkish content?
Why is it that according to the leaked standards doc[1] (which as far as I can
tell is genuine) Turkey gets to ban stuff it doesn't like on Facebook?
Presumably it's OK to post a map showing Palestine, or a United Ireland, or
other disputed regions on FB. Why can't you post a map of Kurdistan? Aside
from Holocaust denial, no other political position seems to have been singled
out in this way.

[1]
[http://i.dailymail.co.uk/i/pix/2012/02/21/article-2104424-11...](http://i.dailymail.co.uk/i/pix/2012/02/21/article-2104424-11D8ADDB000005DC-298_964x733.jpg)

~~~
tim333
I'm guessing here but Turkey/Erdogan are particularly aggressive in attacking
stuff they don't like. See for example Germany being made to prosecute one of
their comedians and "In 2012 the Committee to Protect Journalists (CPJ) ranked
Turkey as the worst journalist jailer in the world (ahead of Iran and China),
with 49 journalists sitting in jail." Also, more amusingly, there's the doctor facing 5
years for posting pictures of Erdogan next to pictures of Gollum
[http://www.theguardian.com/world/2015/dec/03/lord-of-
rings-d...](http://www.theguardian.com/world/2015/dec/03/lord-of-rings-
director-insult-to-erdogan-mistaken-as-gollum-as-charcter-is)

They are probably threatening to arrest people / ban Facebook if there are any
maps of Kurdistan posted.

------
brashrat
I thought this headline was referring to what I think is a more insidious form
of moderation, the type that's practiced here on ycombinator with regard to
style; for example, negativity is punished broadly here.

The reason I find it more insidious is that it affects the speech of and acts
to silence even moderate people. Extreme speech is easy to spot; but when
moderate speech encounters headwinds that divert its path, that's creepy.

I understand the goals here, don't need them explained, just pointing out that
slashdot in its heyday was a terrific and informative resource and it did not
resort to stifling.

------
dba7dba
1. Breaking rules

The article starts out with the story of YouTube, and you read about the
lists the moderators were working off of to keep certain materials out of
YouTube. Something was missing from the list. Any guess?

Yes, _copyrighted material_. I knew a guy who used to work at a competitor of
YouTube. That startup was founded by a Hollywood veteran (naturally), meaning
he was very conscious of respecting copyrights. YouTube didn't care about
copyrights, which started the upward spiral of more viewers, more uploads, and
more viewers.

The end of the story is that YouTube took off and was bought for over a
billion dollars, making the founders rich.

Mixed feelings about this. Break rules to beat others in the game, and you end
up winning. And most will agree this is not cool. And yet, because of rule
breakers (some, not all), our world is advancing.

A comment I once read in a comment section that I found funny: "When someone
else cheats, it's adultery. When I cheat, it's romance."

After this story, I've come up with the following: "When someone else breaks
rules, it's cheating. When I break rules, it's innovation."

2. No chance for moderation

And then there's the other part in the world of moderation, in which certain
stories are not even given a chance for moderators to review.

The NYT (New York Times) has abundant, high-quality comments on news articles.
However, if you hang out there long enough, you will notice a trend.

Certain stories that don't help the goals of the NYT don't get a comments
section at all. And if a story's comments section fills up with comments that
seem to be hurting the agenda of that particular article, the comments section
closes rather quickly.

And then there's Fox News. They don't even allow any way to post comments.

This kind of social engineering has been going on since humans became social
animals, but with technology in the mix, those in power get more powerful.

~~~
CM30
Your point about breaking the rules to become successful describes an old, old
trick used by community managers. I knew an awful lot of large forums that
originally took off by posting ROMs/torrents/music/film downloads and then
carefully excised them once the site had tens of thousands of members and
might actually be visible to the copyright owners.

It's a pretty good way to become large rather quickly, since illegal or
legally questionable content brings in a lot of visitors. Then you just remove
it, and watch the network effect/inflated stats lure people to your site.

"IIRC pg wrote an essay defending that concept, more-or-less (and where by
"I", he meant "startups"). "

I also heard it mentioned as 'asking for forgiveness rather than permission'.
Or more cynically, get big enough that you can afford to fight the inevitable
lawsuits.

------
hackuser
This seems like a trending topic. Some related materials:

* The dark side of Guardian comments

[https://news.ycombinator.com/item?id=11478361](https://news.ycombinator.com/item?id=11478361)

* Why has the Guardian declared war on internet freedom?

[http://www.spiked-online.com/newsite/article/why-has-the-
gua...](http://www.spiked-online.com/newsite/article/why-has-the-guardian-
declared-war-on-internet-freedom/18247)

* The New Man of 4chan

[https://news.ycombinator.com/item?id=11510758](https://news.ycombinator.com/item?id=11510758)

------
niels_olson
This makes the criticisms of Wikipedia look pretty bland. I had never thought
about what life in these trenches could look like. Trenches indeed.

~~~
dclowd9901
Seriously. I can't count the number of times a day I see something marked NSFL
and think, "nope, not even gonna look."

But these people _have to_. I wouldn't wish that on anyone.

------
michaelmrose
Whomever is responsible for the boxes that follow your mouse ought to have
their commit privileges revoked.

~~~
takno
Oh, is that what was meant to be happening? I was reading on mobile and the
page just kept randomly scrolling, and when I came back to it after 8 hours a
bunch of boxes had spilled semi-transparent shadows over the text. It's funny
how 20 years of improving tech have led us to a situation where we now just
have to ignore huge layout issues and people shouting incredibly unpleasant
things.

~~~
yomly
Glad someone else took issue with this. We're in a constant race to improve
battery life and processing power so we can waste energy on these pointless
"innovations". I'm all for improving UX, but I constantly wonder how much
longer our batteries would last if they weren't being drained by gratuitous
features like this one...

------
schoen
This article made me wonder about the range of things that these companies are
dealing with under the rubric of "moderation" or "trust and safety",
particularly the way that companies can have so many different kinds of
incentives to remove content from a platform. (Three that come to mind are
"this may create legal liability", "this may damage our brand", and "this
isn't what many of our users would like to see".)

If the Internet or the way that people commonly use it is going to continue to
get more centralized, I hope platforms improve their ability to distinguish
between the problem of "things people don't want to see" and "things people
don't want other people to see", or, to put it another way, between "please
don't show me things like this" and "please don't allow people to publish
things like this". While users themselves might not consistently draw the
distinction between the two, platforms, in principle, could.

The article touched on this issue in its discussion of the extreme variation
in cultural norms, and the likelihood that one culture's vulgarity is another
culture's lyr—er, that one culture's distaste for something will lead to
considerable pressure for platforms to squelch it for everybody.

A common idea is that platforms have a right and even a responsibility to
define their own community standards and then people can choose platforms that
they prefer or that best suit them, much as people can choose the newspaper
whose editorial policy and biases they find most agreeable. I think this
notion has a lot to recommend it, but it seems less comfortable as the
patterns of people's usage of the Internet become ever more centralized; it also
raises the question of what things can be considered infrastructural enough
that people can reasonably expect (or at least accept) complete content-
neutrality from them.

The article also provided an interesting reminder that most people are
unlikely to be comfortable using communications systems that have no
moderation or filtering at all. Among other things, that's an interesting
challenge for decentralized and censorship-resistant systems; to become more
popular and practical, they'll need to be paired with some ways that people
can avoid unwanted communication, beyond spam.

------
pessimizer
I didn't even realize that all of these sites were deleting racist and "hate"
speech. That's totally reprehensible. Any site that would do that would also
delete blasphemy and violations of lese majeste. The worst part is that they
justify it with philosophical jumbles like "open policies [are] stifling free
expression," as if one person speaking prevents another from also speaking.

edit: also, the "trillion or so dollars of value" that an "expert" ascribed to
Section 230 is simply a euphemism for ads. Nothing else.

edit2: Wow. I just realized that the reason I've stopped getting racist hits
on search engines in recent years is because they've been removed. Part of the
range of discourse disappeared and I didn't even realize it. That's also got
to be the reason that comment sections in papers like the Boston Globe,
Chicago Tribune, and Washington Post have become racist cesspools. Forums
where they could have their mutual appreciation societies away from normal
people have been completely hidden from view.

Am I the only person who didn't think the internet was broken before all of
this arbitration?

------
microcolonel
>Their stories reveal how the boundaries of free speech were drawn during a
period of explosive growth for a high-stakes public domain, one that did not
exist for most of human history.

I feel like this is a pretty big jump to make. There's a difference between
deciding not to publish and amplify child sexual assault, and the "boundaries"
of free speech.

Whether their speech is free or not is irrelevant; YouTube has no obligation
to publish it.

------
arca_vorago
I've posted before about my theories regarding optimum commenting and
moderation systems, but one thing I've recently been thinking about is some
sort of logic/rationality analysis engine (perhaps through nlp/ml?).

What I really want is to wade through the extra verbal fluff of a comment and
get to its real points, so that I can determine whether they are rational and
logical. My primary method is looking for fallacies, which are tell-tale
signs of a bad argument.
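
As a toy sketch of what I mean (real detection would need actual NLP/ML; the
regex patterns here are crude, invented stand-ins):

    import re

    # Crude stand-ins for NLP-based fallacy detection; patterns invented.
    FALLACY_PATTERNS = {
        "ad hominem": re.compile(r"\byou('re| are) (an? )?(idiot|moron|shill)\b", re.I),
        "strawman": re.compile(r"\bso (you're|you are) saying\b", re.I),
        "appeal to popularity": re.compile(r"\beveryone knows\b", re.I),
    }

    def flag_fallacies(comment: str) -> list[str]:
        """Return the names of the fallacy patterns a comment matches."""
        return [name for name, pat in FALLACY_PATTERNS.items()
                if pat.search(comment)]

    print(flag_fallacies("Everyone knows you're an idiot."))
    # ['ad hominem', 'appeal to popularity']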

That still doesn't "fix" moderation, but I think such a system could be used
to get rid of consistently illogical trolls and reduce the moderator workload
at least.

Ideally, I imagine exclusivity of posting to be barrier 1, and moderation to
be barrier 2. Some strange combination of HN rules and /.'s randomized
moderation, along with /.'s tagging styles, would probably be the best.

Another factor to consider is that sockpuppetry has thrown the democratic
balance of such systems off, and I'm not yet sure how to deal with that.

~~~
intended
The assumption being that forums and trolls are static. Trolls adapt, and with
enough innuendo, the innocuous will be loaded with dark meaning.

Plus someone has to wade through the sea of false positives.

------
mc32
Flickr seems to address the issues in a slightly different way. When
uploading, you self-"censor" (classify), in that you can mark uploads as
"mature" or more run of the mill. Plus you have groups which provide
guidelines and have group moderators who may bump things out of the pool, ban
people, etc. In addition, users can flag content and moderators can appeal to
Flickr to ban users for harassment, etc.

So, while exploitation, criminal activity, etc. can still be problematic, the
issue of free speech as it pertains to politics is less of an issue, because
you can upload the content to a group whose policies allow such content. So,
for example, you might find a group which is welcoming of videos or pictures
showing police brutality, or gang brutality, or the like [so long as the
content isn't an excuse to portray something else].

All this is to say Flickr allows/allowed for a more grass-roots approach to
moderation, other than meta-moderation where the usual rules apply -- i.e., to
criminal acts, exploitation, etc.

------
firebones
I wonder if there could be a protection ring [1] model of moderation and
content classification--one that protected business interests yet classified
objectionable material based on content while letting service providers retain
common carrier status.

In such a world, service providers would choose a level of filtering that fit
their business needs, but would also let the free market decide what minimum
standards were acceptable. YouTube's moderation expenses would simply focus on
keeping them at level X (with a clear separation from some other site).
Consumers, producers, and law enforcement would simply tune their dial to the
level of content they found acceptable.

It seems like it would promote competition while effectively pricing free
speech.
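
A minimal sketch of the dial idea, with invented level names and an invented
visibility rule (an item shows only at or below the viewer's chosen ring):

    from enum import IntEnum

    # Hypothetical content rings, loosely analogous to CPU protection rings;
    # lower numbers are more restrictive. All names here are invented.
    class ContentLevel(IntEnum):
        FAMILY_SAFE = 0
        GENERAL = 1
        MATURE = 2
        UNRESTRICTED = 3

    def visible(item: ContentLevel, dial: ContentLevel) -> bool:
        """Show an item only if it sits at or below the viewer's dial."""
        return item <= dial

    # A provider committing to "level X" caps the dial for all of its users.
    assert visible(ContentLevel.MATURE, dial=ContentLevel.UNRESTRICTED)
    assert not visible(ContentLevel.MATURE, dial=ContentLevel.GENERAL)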

[1]
[https://en.wikipedia.org/wiki/Protection_ring](https://en.wikipedia.org/wiki/Protection_ring)

------
justifier
i feel like this topic is a logic minefield

not least of all the question: under what criteria does censorship become
moderation?

~~~
dang
The difference may have more to do with whether you like a particular instance
of it or not. "Censorship" is a pejorative and "curation" is an honorific and
"moderation" is either that or neutral.

~~~
justifier
an example of the logic minefield

i understand your sentiment, it is concise and well developed

but saying 'censorship' is a pejorative of 'moderation', or worse 'curation',
seems dangerously disingenuous

censorship is a thing

> the practice of officially examining books, movies, etc., and suppressing
> unacceptable parts.

that definition lacks any sort of caveat of disapproval or association with
moderation or curation

i would suggest you can make the same point in more honest language by saying:
moderation is censorship you agree with, or approve of, or tolerate, or enable

~~~
aptwebapps
Did 'dang edit his comment? Is there a material difference between "...
whether you like a particular instance of it or not ..." and "... moderation
is censorship you agree with, or approve of, or tolerate, or enable" ?

~~~
justifier
dang's comment is the same as it was when i responded

the 'material difference' that i was trying to discuss was less about the
first part of dang's comment and more about the second

> "Censorship" is a pejorative and "curation" is an honorific and "moderation"
> is either that or neutral.

calling censorship a pejorative, expressing contempt or disapproval, and
calling moderation 'neutral', to me, lends itself to the interpretation that
the super set is moderation and censorship is a form of moderation you
disapprove of

'censorship=-moderation' juxtaposed with 'moderation=+censorship'

it's funny to have received a response from dang, the hn moderator, because i
have this.. self censoring :p.. feeling that any opinion i express in response
will somehow be associated with dang's work on this site

i want to note, that though i am sure plenty of work goes on behind the scenes
that i am unaware of, the times i have seen dang step into a thread and
moderate explicitly it has been done with commendable tact and respect both
for the community and the issue or user being addressed

that said,

> "Censorship" is a pejorative

is a terrifying sentence to me

~~~
aptwebapps
I think I see what you're saying and if I do I think you've made a mistake in
separating the first and second parts of dang's comment.

~~~
justifier
if you can explain what or why or how you think as such then a possible
discussion or realisation of my mistake could be had

~~~
aptwebapps
I meant that the second part was illustrative of the first part and not to be
taken out of context. I.e., whether someone chooses to use 'censorship',
'moderation', or 'curation' depends on how they view the subject at hand. I
don't believe he meant 'censorship' is objectively bad - just that it is
usually used pejoratively.

This is kind of a long-winded thread about two sentences from 'dang ... not
sure how much value is left in continuing it further.

~~~
justifier
> This is kind of a long-winded thread about two sentences from 'dang ... not
> sure how much value is left in continuing it further.

in all fairness this is a 'long winded' thread about two sentences from my
gp.. hence my continued interest in discussion

> whether someone chooses to use 'censorship', 'moderation', or 'curation'
> depends on how they view the subject at hand.

with this, i agree

> the difference may have more to do with whether you like a particular
> instance of it or not.

with this, i agree

> "Censorship" is a pejorative and "curation" is an honorific and "moderation"
> is either that or neutral.

with this, i disagree

i read that as 'censorship is a pejorative', but you are suggesting i read it
as 'censorship is usually a pejorative'

i agree that one can call an act of censorship by name to draw attention to
their contempt for it, but if i note something is censorship, and someone
responds by saying, 'that is pejorative', i am going to question that person's
bias

~~~
bobwaycott
I think dang meant the two statements to be dependent and expository. The
second is explaining what is meant by the first. One's subjective take on a
piece of content determines whether one would call censoring that content
"censorship" or "moderation"—I ignore "curation" because I think that's quite
a different thing, a sort of reverse censoring that is more akin to
highlighting. Whether an act is moderation or censorship, it is definitely still
_censoring_, but it's only going to be called censorship if one subjectively
agrees with or accepts the censored content. Otherwise, the act of censoring
that content will be called moderation if one subjectively disagrees with or
rejects the content as something others ought to see and experience. I could
be mistaken, but I don't think dang was suggesting censorship is objectively
and intrinsically pejorative, but that a person's biases determine whether
censoring is seen as a positive or negative action, and informs the word used
to describe the act.

Of course, I'm admittedly accepting there is a difference between censoring
(the action that results in censorship or moderation), moderation (the
subjective act that positively serves an agenda), and censorship (the
subjective act that negatively serves an agenda). In so doing, I admit that my
own use of the terms is subjectively informed by my reception of the content
(and think it illustrates what dang was after).

------
bobwaycott
Wow. There is just so much in this article to wrap one's head around. I feel
somewhat ashamed that I've never really thought about moderators at all, much
less of the psychological and emotional effects they must endure as a result
of the content they must see. I know it's work I wouldn't want to perform. I'm
grateful for this investigation and report. There's a ton to unpack and think
through.

------
rurban
Oh, that Neda video was uploaded by me to Youtube, because the Iranians who
brought it out of the country couldn't. They never talked to me about it.

------
vonnik
This is one example of a job where neural networks/AI can, and should, replace
humans. Video and image classification is basically a solved problem. Numerous
companies have already trained deep convolutional networks to recognize
pornography and other forms of content unwanted on their platforms.
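
For instance, the serving side can be as simple as running frames through a
pretrained convolutional network. The sketch below uses torchvision's stock
ImageNet ResNet-50 purely as a stand-in; a production system would be trained
on policy-specific labels rather than ImageNet classes.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Stand-in model: a stock ImageNet ResNet-50. A real deployment would be
    # fine-tuned on labels like "pornography" / "graphic violence" / "ok".
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def classify_frame(path: str) -> torch.Tensor:
        """Return class probabilities for one image or video frame."""
        img = Image.open(path).convert("RGB")
        batch = preprocess(img).unsqueeze(0)   # shape (1, 3, 224, 224)
        with torch.no_grad():
            return model(batch).softmax(dim=1)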

~~~
CM30
Maybe I'm missing something here, but I suspect this would be a technical and
political minefield. Sure, you can recognise certain types of abuse images,
but what about content like that in fiction? It's quite plausible that
something seen as problematic in real life would be fine in, say, a Call of
Duty game or a blockbuster movie or some other form of fiction.

Heck, there have been cases where scenes in movies and games have been
'reappropriated' as real life military or terrorist events by clueless nations
and groups. For example:

[http://www.shacknews.com/article/93359/footage-of-six-
isis-s...](http://www.shacknews.com/article/93359/footage-of-six-isis-
soldiers-killed-by-sniper-is-really-from-medal-of-honor)

So your system would have to figure out not just whether something is seen as
'offensive' or 'against the rules', but whether it's from a fictional work
that might be allowed on the site.

------
ckelly
> In April of 2005, they tested their first upload. By October, they had
> posted their first one million-view hit: Brazilian soccer phenom Ronaldinho
> trying out a pair of gold cleats. Weeks later, Google paid an unprecedented
> $1.65 billion to buy the site.

This article misstates when Google acquired YouTube. It was October 2006, not
October 2005:
[https://en.wikipedia.org/wiki/YouTube#Company_history](https://en.wikipedia.org/wiki/YouTube#Company_history)

~~~
a301
They also mistake the traffic stats for /r/AskReddit as being traffic for all
of Reddit:

> According to a source close to the moderation process at Reddit, the climate
> there is far worse. Despite the site’s size and influence — attracting some
> 4 to 5 million page views a day[1] — Reddit has a full-time staff of only
> around 75 people, leaving Redditors to largely police themselves

[1]
[https://www.reddit.com/r/AskReddit/about/traffic](https://www.reddit.com/r/AskReddit/about/traffic)

~~~
reycharles
> last month, reddit had 243,632,148 unique visitors hailing from over 212
> different countries viewing a total of 8,137,128,592 pages

Which is about 270 million page views per day (8,137,128,592 pages / 30 days ≈ 271 million).

[https://www.reddit.com/about/](https://www.reddit.com/about/)

------
James001
That's just a beautiful layout on that page. Amazing typography and colour.

~~~
Freak_NL
Glad you like it, but I actually stopped reading the article due to the
atrocious typography and gimmicks.

Those shadows behind the asides that move as you move your mouse are extremely
distracting, and they overlap with the body text. For the body font they
depend on users having Helvetica or Arial installed to render (neither of
which I have) instead of using @font-face, so the text looks out of place.

Also, none of the background images loaded for me until I tried with a browser
without ad-blocker and third-party tracking protection.

~~~
giancarlostoro
I'm using NoScript, so it looks fine to me. The site's a bit broken, but
that's fine by me. If you have Firefox, you could always try "Reader View" to
read the article without all the glitter.

~~~
elcapitan
It would actually be a nice feature if I could add certain domains to a
"reader view" list in the browser, so that they always use that view directly.
Medium, The Verge, all those giant blown up designs would be much more
readable.

------
mbrutsch
Little different from any historical thought police - we know what's best for
you, citizen, move along, nothing to see here.

~~~
bobwaycott
This is _vastly_ different from historical/fictional thought-policing. Did you
read the article and really wind up with this takeaway?

------
scotty79
I don't think perfect moderation to some global average morality is the future
of humanity. It's TV mentality.

I think the future is keeping vulnerable groups apart from offensive groups.
Hellbanning and honeypot bots are the future. A white supremacist can talk to
others like him all day long and there's no harm done. The ones that would
like to talk to black people, or just to people who get offended by racism,
should get algorithmically administered silence, or honeypot bots that will
react in the best way possible to defuse and calm things down without engaging
a human in distasteful content.

The plus side of this solution is that you can keep tabs on potentially
dangerous people and react if they escalate or brag about physical harm they
do.
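
For what it's worth, hellbanning itself is trivial to implement; a minimal
sketch, with all names and data invented:

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str

    hellbanned = {"troll42"}    # shadow-banned users; invented example

    def visible_posts(viewer: str, posts: list[Post]) -> list[Post]:
        """Hellbanned authors still see their own posts; nobody else does."""
        return [p for p in posts
                if p.author not in hellbanned or p.author == viewer]

    thread = [Post("alice", "hi"), Post("troll42", "something vile")]
    print([p.author for p in visible_posts("bob", thread)])      # ['alice']
    print([p.author for p in visible_posts("troll42", thread)])  # ['alice', 'troll42']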

~~~
rodgerd
> White supremacist can talk to other like him all day long and there's no
> harm done.

Right up until one of them decides it's time to leave his little echo chamber
and murder a bunch of kids on an island because he and his Internet buddies
have convinced themselves that's the right thing to do.

~~~
michaelmrose
Between 1984 and bad people doing bad things, I choose the latter with no
regrets.

~~~
scotty79
The worst thing in 1984 was not surveillance but the rewriting of truth. You
can't have progress if you are not open and honest.

Pretending that there's just one morality that everyone normal must adhere to
is more 1984 than any of my ideas.

------
cb18
Many large (and small) subreddits are highly censored: r/worldnews, r/news,
r/europe, to name a few.

Here's a recent example, not even the most egregious, just a recent one.

[https://www.reddit.com/r/worldnews/comments/4ew68z/half_of_a...](https://www.reddit.com/r/worldnews/comments/4ew68z/half_of_all_jobless_in_sweden_are_foreigners/)

Notice how many long comment threads, voted high up in the comment stack, have
been excised from the discussion.

It's disgusting. It's very often the case that the top voted contributions
have been 'disappeared.'

Again, just disgusting. What exactly is the point of a 'democratic' news site
if there is this constant intervention from unaccountable authorities,
constantly policing what information and opinions are allowed to be discussed?

Any information outside the scope of a narrow ideological agenda is summarily
terminated. Is this the public square of the future that we want?

kn0thing, spez: what are your thoughts?

~~~
braythwayt
And yet, _you can start your own damn subreddit_. You can talk about anything
you want, even violent, openly racist stuff. You can decide not to moderate it
at all.

So clearly, you can talk about nearly anything you want on reddit. Your
problem is that you want to be able to talk about anything you want _in
somebody else’s subreddit._

It’s as if you move onto my street, live in a perfectly good house, but
complain about the fact that you can’t do whatever you like in my house.

Freedom to say whatever you want is not the same thing as forcing other people
to pay attention to it. You need to either find like-minded people moderating
a like-minded subreddit, or find a way to get other people interested in your
own subreddit.

This business of demanding that everybody else give you a platform because
“freedom of speech” is flat-out wrong.

~~~
lumberjack
The point (of the problem on reddit) as I see it has less to do with free
speech and more to do with malicious moderation intended as a means of
propaganda.

~~~
braythwayt
One person’s editorial policy is another’s malicious propaganda. Either way,
the moderators control the content and I agree that it’s important not to
confuse a private platform (like a subreddit or even reddit as a whole) with a
free platform.

Your own blog is the only free platform (for non-extreme definitions of
“free.”) All other media involve some sort of overt or more subtle curation.

