
Content moderation became an industrial process - imartin2k
https://logicmag.io/04-the-scale-is-just-unfathomable/
======
wurtzitane
[http://www.shirky.com/writings/herecomeseverybody/group_enem...](http://www.shirky.com/writings/herecomeseverybody/group_enemy.html)

"So there's this very complicated moment of a group coming together, where
enough individuals, for whatever reason, sort of agree that something
worthwhile is happening, and the decision they make at that moment is: This is
good and must be protected. And at that moment, even if it's subconscious, you
start getting group effects. And the effects that we've seen come up over and
over and over again in online communities."

Clay Shirky, _15 years ago_

------
resu_nimda
_Because social media platforms operate at this scale, we as a society are
being asked to tolerate the fact that even content as universally abhorred and
clearly illegal as child pornography can be and is available on our favorite
platforms, if only briefly._

Being asked by whom? The same person asking us to tolerate the presence of
homeless people? Clearly said content is not so universally abhorred if it
continues to be produced and made available. What we are actually being asked
is to confront the fact that these things exist in our society and that we
can't expect them to be kept neatly swept under the rug for us.

~~~
allenz
Child pornography and homeless people are not comparable. Homeless people have
a right to shelter, whereas pictures do not have rights. We take down child
pornography not because we wish to "sweep it under the rug", but because the
pictures are inherently and irredeemably harmful.

~~~
lopmotr
Can you describe what that harm is? If a picture is copied to 100 recipients,
does the harm multiply by 100 times also? How is that information communicated
to the victim?

I can imagine this being the case for defamation. 100 people being told
"Johnny is a thief" might cause 100 times as much harm to Johnny's social
interactions as 1 person being told that. But how does it work for child porn?

~~~
allenz
Consider it as if you were a child victim. Imagine coming across these
pictures of your abuse, or your family coming across these pictures. Or your
coworkers. Or strangers. The pictures reenact the crime, invoking shame,
dread, and helplessness. Here are some stories:
[https://redd.it/8af7io](https://redd.it/8af7io),
[http://www.tampabay.com/news/courts/criminal/a-victim-of-child-pornography-doesnt-get-to-forget/2156937](http://www.tampabay.com/news/courts/criminal/a-victim-of-child-pornography-doesnt-get-to-forget/2156937).

As a moral principle, it should not be legal to share these exploitative
pictures. In terms of information, every copy of a picture increases the
probability that it comes back to haunt the victim.

Finally, many viewers of child pornography find the pictures horrible or even
traumatic. Many people are unfazed, but some experience nightmares or severe
anxiety, especially investigators and moderators:
[https://www.wired.com/2014/10/content-moderation/](https://www.wired.com/2014/10/content-moderation/).

~~~
cfadvan
To the first part, it’s hard to imagine someone “stumbling across” child
pornography by accident. The first post you linked is more realistic, in that
she’s talking about knowing the images still exist, are still in collections
and in that sense she’s still victimized. Unlocking some phones isn’t going to
change that, nothing will. She’s not seeing it herself, she just knows that
since it was distributed, it’s impossible to take it all back.

I agree that it shouldn’t be legal to possess or share child pornography, but
I don’t think it’s a justification for mass invasion of privacy either. When
someone is detected with child porn, they should be in legal jeopardy, but we
don’t need to kill crypto to make that happen.

~~~
allenz
Many victims try to find out how their pictures are being shared, as in my
second link, because they feel that knowing is better than not knowing. And
considering how awful it must feel to know, I think that really says
something.

Privacy rights are also important, and I agree with you that police should
respect those rights when investigating child pornography, just as with any
other crime.

------
mindslight
Everything old is new again! Centralization can only ever lead to censorship -
the edges inherently disagree, and at the very least the centralizing entity
doesn't want to get caught in between. Never mind the power that comes with
setting the zeitgeist for a large number of people.

I can only hope that the resulting milquetoast is boring enough that the
median user actually puts forth the effort to use freer next-gen technologies
rather than settle for this technically-lazy "web 2.0" promulgated by
Surveillance Valley.

~~~
rawrmaan
Good luck getting a large population of users from varying backgrounds and age
ranges to want to hang out in a completely unmoderated place where anyone can
be attacked, harassed, or shown obscene content.

~~~
mindslight
You're mixing together separate issues, then using the balled-up uncertainty
to support a (too) common assumption that only a benevolent central entity
(e.g. God) can sort it out.

Reddit _itself_ was designed around the idea of multiple communities
simultaneously existing on its site, with low barriers to interacting with a
plurality at a time. So "backgrounds and ages" is moot.

As for _unwanted_ communications, it's not like Reddit et al. are actually
preventing them currently, and not for lack of trying! This
is an edge concern, and in fact can only be fully solved at the edges. Any
central solution necessarily relies on post-facto punishment, which
necessarily relies on attribution to immutable top-down-imposed identifiers
(and if you think this sounds like a good idea, then we really have no common
ground!).

I don't expect to be able to convince you, as all I am essentially saying is
that the easy answers _are not answers_. As I said, I just hope the inevitable
mediocrity that comes out of centralization is boring enough to drive most
everybody to seek out places that are not centrally controlled. Or at the very
minimum, that the minority willing to tolerate _unsafe speech_ will be free to
go their own way.

~~~
intended
You assume that the majority of people are unhappy with a subjective term such
as "boring."

------
joe_the_user
The thing is, the "moderation" that's being discussed is just bare content
filtering, more or less one step past the scripts and neural nets.

Which brings up the point that good moderation is something entirely
different. Good moderation involves a person actually emphasizing civil
discussion and providing an alternative to argument by shock.

Of course, some fair portion of larger social networking systems today might
well be described as "what happens when you remove all moderation", in
multiple senses of the term. That situation is now spilling over into much of
society, with attendant problems (as well as the less visible problem that
real problems get ignored).

Some systems now seem to be aiming for the opposite of this. Kialo is one
effort, but their approach to debate seems too rigid. A more "personal"
approach, where the debate is open but an assumption of reasonableness is
simply implicit, would work better imo.

~~~
cfadvan
In essence it’s the difference between what’s done here on HN, and the
travesty that is Twitter.

~~~
myWindoonn
Don't fall into the trap of thinking that HN is good. All societies are biased
in various ways, and there are plenty of bigotries and stupidities uttered
here on a daily basis.

What you are doing is drawing an us-vs-them distinction. It's not healthy for
discourse.

~~~
AnimalMuppet
HN is objectively better _at content moderation_ than Twitter is. Not perfect,
but better.

~~~
myWindoonn
"Objectively?" Could you please define your metric and then carry out a
measurement, hopefully in a preregistered and reproducible manner? If not, why
so confident? Because your tribe, the HN tribe, is _objectively_ better than
the Twitter tribe?

~~~
AnimalMuppet
HN has guidelines, which are explicitly stated. Twitter (I gather) also has
guidelines or standards. So you measure content moderation by the number of
posts that break the guidelines but are not moderated out.
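A rough sketch of that metric, purely illustrative (the posts and their fields are hypothetical, and judging "breaks_guidelines" would itself require human evaluation):

```python
# Hypothetical sketch of the proposed metric: the share of
# guideline-breaking posts that moderation never caught.
posts = [
    {"breaks_guidelines": True,  "removed": True},
    {"breaks_guidelines": True,  "removed": False},
    {"breaks_guidelines": False, "removed": False},
    {"breaks_guidelines": False, "removed": False},
]

# Only posts that actually break the guidelines count toward the metric.
violations = [p for p in posts if p["breaks_guidelines"]]
missed = sum(1 for p in violations if not p["removed"])
miss_rate = missed / len(violations)  # lower is better
print(f"miss rate: {miss_rate:.0%}")  # prints "miss rate: 50%"
```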

~~~
myWindoonn
Your metric is not scale-invariant.

It is trivially gamed by malicious actors. It does not reward higher-quality
discourse. It rewards overzealous moderators. It rewards edgy or controversial
discourse which does not break the guidelines. It rewards users who contribute
a bare minimum of information while punishing users who write longer, more
informative or interesting posts.

Worst, by what objective measuring device can a post be compared to the
guidelines? There are many extremely-subjective suggestions in HN's guidelines
alone:

* "Don't say things you wouldn't say face-to-face."

* "Assume good faith."

* "Don't introduce flamewar topics unless you have something genuinely new to say." (Wanna give us a list of flamewar topics, mods?)

* "Please don't use Hacker News primarily for political or ideological battle." As I write this, there is an ideological battle between JS frameworks on the front page.

* "On HN, users should have an identity that others can relate to."

Could you write a script that measures whether an HN post breaks the
guidelines?

~~~
AnimalMuppet
> It is trivially gamed by malicious actors. It does not reward higher-quality
> discourse. It rewards overzealous moderators. It rewards edgy or
> controversial discourse which does not break the guidelines. It rewards
> users who contribute a bare minimum of information while punishing users who
> write longer, more informative or interesting posts.

To the best of my ability to determine, every single thing you said in this
paragraph is false. Could you explain how you think some of that works?
Because, quite frankly, your comments here look like you just want to complain
about HN.

> Could you write a script that measures whether an HN post breaks the
> guidelines?

No. I could evaluate a bunch of posts, though, and I suspect that my
evaluation of that bunch of posts would not be statistically different from
others' evaluation of them (assuming fairness and no malice on all the
evaluators' part, of course).

It's not automatable, and it's not perfect. But it's also not rocket science.

------
EGreg
Why are the platforms this big? Why can't a small community run its own
software for social apps, the way it runs WordPress, and police itself?

A village knew whether someone was yelling obscenities. HN is a smaller
community, with its own software.

I think the answer is that there is no good open source software for this.

[https://www.youtube.com/watch?v=pZ1O_gmPneI](https://www.youtube.com/watch?v=pZ1O_gmPneI)

~~~
lopmotr
That already happens but they're small communities so you don't have much
awareness of those that you're not a member of. Anyone can join privately run
forums with like-minded people and sensible moderation that they agree with.
There's no problem to be solved. When you want small communities, you have
forums, when you want lowest-common-denominator drivel, you have Facebook, and
when you want to feel tribal warmth by sharing your political opinions with
people who agree with you while also being protected from opinions you don't
agree with, you have Twitter.

~~~
pavel_lishin
> _when you want lowest-common-denominator drivel, you have Facebook, and when
> you want to feel tribal warmth by sharing your political opinions with
> people who agree with you while also being protected from opinions you
> don't agree with, you have Twitter._

That's not exactly fair; you can customize Twitter (to some degree, if you use
a third party app) and Facebook (to an even smaller degree) to actually get
content you care about. (e.g., there's no small forum I can join that'll get
me Charles Stross's musings _and_ a video game developer whose updates I like
_and_ a podcast's funny tweets.)

RSS used to be great for that sort of thing, but it's been largely superseded
by Twitter, etc.

------
billybolton
Or just let it go completely unmoderated.

~~~
Semirhage
If that was ever an option, it isn’t now. It would just be a bot war drowning
everyone else out, and that’s not free speech either.

~~~
billybolton
It's not about free speech. You are trying to tame chaos; it isn't going to
work, and you'll just have wasted millions of dollars on nothing.

~~~
Semirhage
Go to Reddit, then go to Voat. It makes a difference. You can’t completely
win, but you can definitely lose.

------
hkon
Why can't I scroll properly on an iPad on that site? Never thought I'd notice!

------
sschueller
It is sad to see sites like reddit turn into police states.

Any bit of skin or a bad word and the content is marked NSFW.

Other topics not in line with the mainstream are banned or deleted by
moderators. Topics that a few years ago would have sparked a good and healthy
discussion.

Now people live in a filter bubble. Reality is too scary...

~~~
f2n
What's wrong with marking content NSFW? It's not censoring it, simply
providing a warning to those who may be in a place where their screen is
visible and the content might not be appropriate.

~~~
code_duck
I have seen this abused on sites like Twitter, where images with purely
political content have been marked age-inappropriate.

~~~
gowld
Reddit doesn't block anyone from seeing NSFW content.

~~~
code_duck
Right, it's not quite the same, because on Reddit it's the submitter who marks
it and on Twitter it's the community. I'm not trying to agree with the top
parent comment about Reddit.

Twitter doesn't fully block NSFW content, either. Each site has a system
requiring a click to reveal the content, or a profile setting.

The effect is to obscure the content so that people just scrolling by won't
see it. Someone has to notice it's hidden, trust that it's something they do
want to see and not a disturbing gore or porn pic, and then click to see it.
That reduces the reach of the media. It's similar to how people would make
messages on the Craigslist RnR forum disappear by abusing flagging.

On Reddit, people opposing your message can use voting rings and automation to
abuse downvoting, which has the effect of hiding content.

They can complain that your image needs to be marked NSFW, and ask the
moderators to change it. I believe moderators can mark content NSFW
themselves, but if you are posting in their sub, you are at their mercy
anyway.

