

Ask HN: Best policies for community websites for handling questionable content? - c_prompt

Both reddit and voat have been accused of censorship for banning communities that contain questionable or illegal content, content that clearly goes against the ToS for those platforms. For alternative community-based websites, what does the HN community think are the more effective site-wide policies (cf. community-specific moderation policies) for encouraging community growth while preventing illegal or questionably legal content? Some examples, without advocating any of them, to suggest the level of detail:

1) Any content that appears illegal must be removed within [x] hours by a community's mod, else an admin can remove it.

2) An admin should immediately remove any content reported or identified that appears to be illegal.

3) Any content that is reported to admins as potentially illegal is pushed to community mods for a determination and resolution that must occur within [x] hours, else an admin can remove it.

4) Any community that promotes or fosters an environment for posting [illegal content, discriminatory content, content against ToS] will be banned by the admins.

5) Authors (OP) of content that appears illegal will be banned.

6) Only admins can ban users.

7) [Community owners, mods] can be banned for any illegal content residing within their communities after [x] hours of it being reported.

8) [Community owners, mods] are responsible for being familiar with the content-related laws in [website's country] and for removing content in their communities that violates those laws within [x] hours.

9) Any community without a single owner cannot be [posted to, viewed] until a new owner takes over.

10) All communities with more than [number of posts, number of comments] must have a minimum of [x] mods.

11) All communities must have at least [x] mods who log in and review all community content every [x] hours.

12) All formally received [legal notices, threats to take legal action] to remove content will be [publicly posted, forwarded to chillingeffects.org].
======
brudgers
The best policy is not to invite illegal content. Any idea where moderation
over _legally_ questionable content is an obvious impediment to community
growth is inherently problematic because it is basically an "attractive
nuisance". For example, a site that appeals to people who think 4chan banning
jailbait photos was heavy-handed has baked-in moderation problems.

Despite the passion and emotion about what is appropriate for StackOverflow,
the debate isn't about links to free downloads of Photoshop or to cracked
commercial WordPress themes, because people who are interested in that sort of
thing go elsewhere. Likewise, nobody on HN complains when such links are
killed upon posting here.

The solution isn't in mechanism; it's in policy at the highest level of
abstraction: "What this site is about."

Good luck.

~~~
c_prompt
I don't want to invite illegal content, but I am strongly opposed to
censorship - in any form. So my question around which policies are best is in
the context of no censorship unless something is (likely) illegal.

~~~
jtfairbank
It sounds like the original commenter was saying that you should censor at a
higher level (i.e., define boundaries for content, such as no porn or no
commercial software cracks) and simply remove everything outside them, since
it doesn't fit the use case for your site and people can go elsewhere for it.

That way you're not subjectively censoring things: content either fits the
site or it doesn't belong there.

------
auganov
Are you talking only about content that is potentially illegal? Or
questionable in the general sense of being "wrong" in some way, even if legal?
If the latter then I don't think there's a universal "right" way to handle it.
Censorship can be a big value-add in many communities.

~~~
c_prompt
Only (potentially) illegality.

For example, it's not clear whether voat received any formal legal notices or
threats before they banned many communities. If they banned them based on
their desires about what they want voat to be [1], or under pressure from bad
publicity/partner problems (e.g., frozen PayPal funds), that's their choice.

But if they banned the communities because of valid legal threats, I don't
think you can accurately describe that as censorship.

[1] Acknowledging that their policies and goals may have changed since they
became a refuge for disenchanted reddit users, given that voat initially had a
policy of no censorship.

------
c_prompt
I realize I haven't defined "best" or "more effective" and would also be
interested in your views for definitional standards. I have not added my
opinions as I prefer not to bias your feedback.

------
buro9
What is questionable or illegal?

You may argue it is full-body nudity, depictions of erections, exposed female
genitals or breasts... but then someone will produce some art that everyone
accepts really is art and shouldn't be censored, or a photo of breast-feeding.
And so you make an exception as you do not wish to censor.

Then what differentiates the porn from the art? Is it simply the context of
where it appears... a porn site or a gallery? What is the context of your
site?

I'm arguing here that everything is subjective when it's not obviously illegal
(against the letter of a law).

Law itself struggles with these definitions and tends to rely on the concept
of what a jury or judge "reasonably" determines to be against some wording of
a law.

The underlying key question in determining a moderation policy is whether you,
the site owner, wish to take on the decision-making for those subjective
cases. And, in doing so, whether you wish to accept the liability that comes
with it.

Simply: Are you willing to be liable for the content posted by third parties?

If yes, then you need the equivalent of editorial processes: clear
definitions you can communicate to users, and declared processes for handling
breaches of those definitions (and repeated breaches).

If no, then you can have the equivalent of "mere conduit" or "safe harbor".
Anything you're unaware of you can't be liable for, and once you're made aware
you need to handle it with whatever declared process you have.

Reddit and others were the latter.

And the latter works fine until you start moving towards advertising, and
those paying for advertising start to say "You cannot show our brand next to
this (or that) type of content". Suddenly you need to know what content is
where, your ability to deny knowledge of what is on the site is reduced, and
you have been pushed down the path, probably already having taken the first
steps, towards a more editorial process.

Trying to encapsulate processes in a set of rules, as you've done in the Ask
HN, will fail if the rules themselves are against the nature of the site. The
nature of a site that survives on advertising is that the processes and user
terms and conditions need to create content acceptable and favourable to
advertisers.

For most of what you've outlined above, I've had lawyers create terms and
conditions for sites I run that do encapsulate a distributed and hands-off
moderation policy complying with European law. You can see those docs here:
[https://github.com/microcosm-cc/legal](https://github.com/microcosm-cc/legal)

~~~
c_prompt
Thanks for making the T&Cs you created available. Did your attorneys make them
specific to an individual country/locale or did they make them generic enough
to afford/enhance protection globally?

I'm referring to the latter, and specifically to content as it relates to
(potential) illegality. If a site wants to censor or prevent certain content
based on the owners' own personal standards and goals, that's their moral
right. I'm interested in the situations where governments will use force to
impose their will (regardless of the personal choices). I'm also interested
in the "once you're made aware you need to handle it with whatever declared
process you have." Perhaps it differs by locale; if so, it would be beneficial
for those of us who run sites to start compiling legal agreements into a
GitHub repository like you did, to help alternative sites become aware of the
local legal nuances (especially if the user base is global like reddit or
voat).

I'm also interested in your comment about advertising - if a community (owner
or mods) got to determine which ads (if any) were shown in their community
(i.e., the site owners didn't get revenues from ads), would that be supportive
of certain site-wide policies?

And, again, just to clarify - I'm interested specifically in overriding,
all-encompassing site-wide policies, not community-specific policies.

~~~
buro9
We aimed at UK and EU law as we felt it to be tighter than US law on a lot of
these fronts, and yet combined both the ideas of Safe Harbor and DMCA (within
the EU eCommerce Directive - "mere conduit").

At the same time, European law put much stricter requirements on personal data
restrictions, and the right of people to access and delete their personally
identifiable information.

The user agreements should be good for most Western countries, and definitely
are good for all EU countries. It would have been a very expensive and futile
exercise to try to make the agreements good for all countries; I'm not sure
anyone tries this. Better to declare which legal domain you will operate in
and constrain your terms and operations to that.

The site and services we offer are much akin to Reddit and subreddits... it's
a shared platform that hosts distinct communities, and within those
communities, sub-communities.

i.e. [https://www.lfgss.com](https://www.lfgss.com) is one of almost 300 sites
on 1 instance of the platform (other instances exist and are run by other
people), and within LFGSS there are several sub-communities with their own
moderators, moderation policies, etc.

The user agreements we produced dealt with the platform relationship with the
admins of a site, and the moderators of forums on a site, and then went on to
deal with the minimum required standard for users of a site on the platform.

We actually don't stray into finer details of how a specific sub-community may
moderate, we simply lay down the minimum standard that all admins and
moderators must adhere to.

As to the "moral right of the owners"... sure, you can exercise that, but
you'll find that removing some content creates a liability of its own in some
legal domains. I would advise people to stick to the letter of the law;
anything else likely creates liability for the moderators or site owners.

On advertising, any site that tries to be of general appeal and also make
enough revenue from advertising will need to control the content that appears
on the site. Niche sites can avoid this, but their advertising potential is
much reduced as a result. If a community received revenue, it would very
quickly learn that it can either make more revenue at the cost of adopting
content policies, or make a pittance while retaining freedom of expression
within the content (though still avoiding illegal content).

As to whether there is some overriding and all-encompassing policy - a
universal policy for internet sites - no, there is not. The requirements of
the user agreements and site policies are determined by the business model or
activities of the sites and their users. A subtle change in how a site or
community operates could lead to a significant difference in the legal
documents that would protect it. This is why there is a warning on my
policies that anyone using them should have them reviewed by a lawyer; they
are fit for purpose for my sites, but may be the death knell of yours.

~~~
c_prompt
Having now reviewed your documents on GitHub, I don't find anything that
specifies policies on how questionable content is to be dealt with (in line
with your comment about just defining minimum required standards for users).
I'm interested in the internal policies site admins should adopt when dealing
with questionably legal content.

~~~
buro9
For all of the sites, and all of the communities we host, we advise strongly
following legal requirements only.

They can stray from that if they wish, in which case, as the platform owner,
I can delete content as I see fit in accordance with the law if they do not
do so. However, if they go further than the recommended minimum of obeying
the law, then we educate site admins and moderators that as soon as they
exercise their personal judgement, they are opening themselves up to being
held personally liable for their decisions and choices.

In other words, we obey the strict definition of the law, and any correct
legal demands made of us, and we refuse to remove material that someone merely
finds objectionable, only doing so when it appears clear that it is in fact
illegal.

Questionable content is therefore left on all of the sites.

But then, we don't have advertisers to deal with.

