
If your website's full of assholes, it's your fault (2011) - Tomte
https://anildash.com/2011/07/20/if_your_websites_full_of_assholes_its_your_fault-2/
======
goldcd
I think it depends on the website you want to run.

I was part of a now-dead "threaded message board", which had a somewhat
cliquey view of what should/shouldn't be posted. In the evening when the
moderators were away, the discussion became more interesting as it strayed
off-topic.

Eventually the community wanted 'to fork', so I decided to build something
myself. I'm not IT-trained, so the first version was PL/SQL on Oracle - all I
knew. v2 was the product of me buying an O'Reilly PHP/MySQL book (I still have
'platypus' somewhere).

People came.

First few years were a mess. The more technically literate people used to hack
me for fun - but then would confess and provide security tips.

As the mood took me I'd add features, and then if they became troublesome
fixed/killed them.

The odd person would arrive and cause trouble, and I'd have fun working out
how to make them stop (correlating logins with emails, passwords, IPs, browser
fingerprints, etc.). Then maybe experiment with hiding troll-posts from the
other users, whilst still displaying them to the troll, who thought they'd
been ignored - etc etc. Or maybe auto-embed some very dodgy zero-sized images
in their feed, if I knew they were going through a company/university proxy.

After a while it just all got quiet and relatively happy, and it's been
puttering along since 2003.

It's currently got over 9 million posts, a load of marriages and children -
and a few deaths, with some 'best-of posts' for the departed.

I'm fully aware this isn't impressive as a potential unicorn, but I feel
there's a bazillion other little sites, like mine, happily puttering along out
there that are overlooked.

~~~
barbecue_sauce
You're not IT trained but you knew Oracle PL/SQL?

Edit: I know it's gauche to complain about downvotes, so I'll explain my
question further. I find his situation extremely interesting as I associate
Oracle with the "IT-iest" of "IT" things.

~~~
goldcd
I've no idea why you'd get a down-vote either - whoever that was, "Hang your
head in shame and reverse that click"

~~~
goldcd
thank you

~~~
goldcd
I'd so love to believe that was the same person, who clicked down then up :)

Pretty sure it wasn't, due to the piss-poor design of Y Combinator's forum.

An "I don't like" click hides the thing you don't like.

Can't think of anything worse.

------
padobson
I like what he says here, but I note that this particular version of his blog
doesn't have comments on it. My guess is that the site has changed since 2011.

He makes a really good point that there are lessons to be learned from,
"disciplines like urban planning, zoning regulations, crowd control, effective
and humane policing, and the simple practices it takes to stage an effective
public event".

But I think what he's leaving out is hierarchy. In 2011, I think a "moderator
class" and a "commenter class" were all you needed. But if you want people to
treat each other well online, you need something more than that. There need
to be stakes. There needs to be value in building up your standing in the
community; there needs to be something to be gained and lost in exchange for
the community policing itself.

Right now, a community like Twitter has a sort of chaotic hierarchy. There is
the average user, the influencer, the blue checkmark, and the Twitter
employee/algorithm doing moderation. Who is answerable to who? How does one
climb up the hierarchy? What value does a blue checkmark have in moderating an
influencer? Can they? Are average users ultimately incentivized to be dicks in
order to get the attention of influencers and blue checkmarks?

Maybe it's time for communities as big as Twitter to make some decisions about
this and tame their userbase with more rigid and well-defined hierarchies.

~~~
j2kun
Are there other disciplines in which hierarchies have helped improve
discourse?

------
lifeisstillgood
>>> Advertisers, hold sites accountable if your advertising appears next to
this hateful stuff.

This.

One thing that could be very effective is screenshotting a genuine brand ad
next to a vile screed.

How long would it take for a bot to look like a white-sheet-wearing moron to
YouTube? Then with a little bit of sentiment analysis you could automate
this. Just publish the "daily worst brand fuck" on FB or YouTube.
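The automation idea sketched above boils down to scoring (brand, adjacent-content) pairs and flagging the worst placements. A toy illustration follows; the term list and the threshold scoring are crude stand-ins for real sentiment analysis, and all names are invented:

```python
# Toy sketch of the ad-placement naming-and-shaming idea above: given
# (brand, adjacent_text) pairs scraped by a crawler, flag placements
# where the surrounding content scores as vile. The term list and the
# hit-count threshold are placeholders for real sentiment analysis.

OBJECTIONABLE_TERMS = {"vileword1", "vileword2"}  # placeholder terms

def is_vile(text, threshold=1):
    """Crude scoring: count objectionable terms in the text."""
    hits = sum(1 for word in text.lower().split() if word in OBJECTIONABLE_TERMS)
    return hits >= threshold

def flag_placements(placements):
    """Return the brands whose ads appeared next to vile content."""
    return [brand for brand, text in placements if is_vile(text)]

flagged = flag_placements([
    ("BrandA", "a rant full of vileword1"),
    ("BrandB", "a review of garden furniture"),
])
assert flagged == ["BrandA"]
```

A real version would need actual screenshot capture and a trained classifier, but the pipeline shape (crawl, pair, score, publish) is this simple.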

(Alongside the "monitor tweet feed of politicians for death threats and pass
to the cops" idea I have when I get the time I am really honestly going to
...)

~~~
Deimorz
It's not quite that simple either, and going through the advertisers can cause
other negative side effects.

This just happened recently on YouTube: there was a big uproar about sexual-
ish comments being posted on videos of children, such as linking to timestamps
with upskirt glimpses or similar. Advertisers got contacted about it, and some
of them said they would stop doing any advertising on YouTube until it was
addressed somehow.

In response, YouTube _completely disabled all comments on videos featuring
children_, and deleted all existing comments. Now there are legitimate
channels related to children that had completely reasonable comments (which
they moderated) that no longer have any ability to have discussions on their
videos. Their ability to interact with their viewers/communities was destroyed
because YouTube decided to go scorched-earth to make sure they'd satisfy the
advertisers.

~~~
j2kun
> YouTube completely disabled all comments on videos featuring children, and
> deleted all existing comments.

This sounds like a good mitigation step to me, considering that we're talking
about child predators. The incentives are aligned properly here.

~~~
DuskStar
> The incentives are aligned properly here.

You sure about that? To me it seems like YouTube's incentives are to minimize
the chances of advertiser irritation, while the incentives of the children
would be to maximize their engagement while minimizing (the impact of)
predation. Getting rid of children's channels entirely would be an acceptable
outcome for YouTube, but I doubt the children in question would agree.

------
doc_gunthrop
The first site to come to mind is voat.co. I explored it out of curiosity a
while after the "purging of intolerable subreddits" a couple years back. The
impression I got was that of a seedy underbelly of aggregate news sites like
reddit, populated by a mass of displaced misanthropes. It didn't take long to
nope out of there.

~~~
Zak
I wonder if it's possible to be the less moderated alternative to a popular
platform without just becoming the platform for horrible people.

To give an example, in its FOSTA-related policy updates, reddit also banned
exchanges of heavily-regulated products like alcohol and firearms. There was
no mandate to do so, as safe-harbor provisions very much remain intact if,
for example, a reddit user sells a gallon of homebrewed beer to a 20-year-old.
Such changes might motivate people who aren't at all horrible to move
elsewhere.

------
uniformlyrandom
Well, duh. The problem is applying these policies - because blindly applying
policies usually leads to mixed results (innocents banned, trolls using a
loophole in policy and proliferating). But to quote the author -

> we have a way to prevent gangs of humans from acting like savage packs of
> animals. In fact, we’ve developed entire disciplines based around this goal
> over thousands of years.

We 'just' need to introduce the institutions of online courts, judges,
lawyers, and the presumption of innocence (good thing that enforcement can be
automated, so we do not need the firing squad). Also, you kinda need someone
to write these policies, and to approve them. Maybe an elected group of
people - maybe even two groups, to balance democracy/influence?

If you are not already doing it for your website, you are obviously an
irresponsible moron.

~~~
codinghorror
Did you see this: a random jury of your online peers
[https://www.minds.com/minds/blog/power-to-the-people-the-min...](https://www.minds.com/minds/blog/power-to-the-people-the-minds-jury-system-975486713993859072)

I support this kind of stuff in theory, but it is incredibly difficult to get
to scenarios where this actually happens. You need to be at massive scale for
it to occur with any regularity.

~~~
uniformlyrandom
Yes - and a lot of online communities on the internet have reached that
scale: Facebook, Twitter, Instagram, Reddit...

In the case of reddit, the problem is even more interesting. There are
subreddits, so it is naturally segregated, and the moderators and community
are local. The communities can use downvotes to 'stone' an offender, and
locally elected moderators can enforce a ban. If a moderator or community
mishandles the process, 'federal' admins can intervene. So we are pretty much
at a medieval level of justice there (much better than whatever
constitutional dictatorship Facebook and Twitter have).

~~~
codinghorror
I deleted my Reddit account long ago. Opinion-based downvoting is so toxic on
small subreddits, and cannot be disabled in any meaningful way.

~~~
cannonedhamster
I deleted my account after someone who was obviously just being a dick to me
in a subreddit got me temp-banned for trying to reach out to them in a PM and
be civil. The guy posted the message, which was entirely civil, and claimed
it was harassment. No threats, no name-calling; I suggested, based on his
public posts, that we had a lot in common and that on another topic we might
have been good friends. How that leads to a site-wide temp ban I don't know,
but his completely dickish, personally insulting comments (because he
disagreed about a software feature) were apparently wanted. Thanks Reddit,
I'll pass. Honestly, most social media is pretty much a cesspool, which is
why, present company excepted, I avoid it.

------
ve55
This post is more reasonable for smaller curated communities, but as we can
see from, well, every large social website that currently exists, it can be
difficult to scale. I'm curious what the author thinks the solution would be
for an obvious example like Twitter. When a website consists of so many
groups of users that everyone's definition of 'assholes' contradicts the
others', you find that banning them gets to be pretty difficult, not to
mention the number of humans required to moderate.

edit: Even if you have a policy that seems perfect to you, it will not be
executed perfectly in practice when there are 1,000 different people
attempting to enforce it as moderators.

~~~
wpietri
Having worked at Twitter on abuse, I think that Twitter could do far more.
They've gotten better, but it's still trivially easy to evade bans, the
moderation decisions are frequently terrible, and there are a zillion other
things wrong.

That said, I think an ad-funded global discussion is impossible. You can't do
good user moderation with that little money (~$0.10 per user per day total
revenue, with a tiny fraction of that going to abuse prevention). I think
Twitter should allow people to create semi-private spaces, zones for
discussion where users can be much more firm about moderation. It should also
make collaborative blocking and muting much more effective.
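The collaborative blocking idea mentioned above has been prototyped by third-party tools (shared blocklists that users subscribe to). A minimal sketch of the mechanism, with invented names and no relation to Twitter's actual API:

```python
# Hypothetical sketch of collaborative blocking: a user subscribes to
# blocklists curated by people they trust, and the union of all lists is
# applied to their timeline. Names are invented for illustration.

def effective_blocklist(own_blocks, subscribed_lists):
    """Union of a user's own blocks and every subscribed shared list."""
    blocked = set(own_blocks)
    for shared in subscribed_lists:
        blocked |= set(shared)
    return blocked

def filter_timeline(timeline, blocked):
    """Drop posts whose author is in the effective blocklist."""
    return [post for post in timeline if post["author"] not in blocked]

blocked = effective_blocklist({"spammer1"}, [{"troll_a", "troll_b"}])
timeline = [
    {"author": "friend",  "text": "hi"},
    {"author": "troll_a", "text": "bait"},
]
assert filter_timeline(timeline, blocked) == [{"author": "friend", "text": "hi"}]
```

The appeal is that moderation labor is shared: one trusted curator's work protects every subscriber, which matters when per-user abuse-prevention budgets are as thin as described above.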

~~~
wazanator
That's kind of what Mastodon is, isn't it? People are free to spin up their
own communities with their own rules.

Everyone complains about Twitter, but when alternatives are presented that
give them the power to do what they're asking for, no one wants to try them.

~~~
wpietri
Mastodon is more distributed than what I'm thinking. But yes, even if it were,
network effects mean that people won't leave Twitter. Which is why I think
it's necessary for Twitter to do better.

------
cozuya
I've been grappling with this for a while. I run a site/app for a popular
team-based hidden-roles indie board game. Apparently it's full of assholes. I
mean, I get it - the theme and dynamics don't really lend themselves to happy
players - but I've got nothing. I've put hundreds of hours into volunteer
moderator tools and they work great; the site itself does not have a
griefing/racism/hate-speech problem, but it does still have an asshole
problem. Not sure what else I can do, other than take credit for it. =/

~~~
cannonedhamster
I mean, from your description the game itself sounds like the type that would
draw people who want to prove to others how smart or good they are (PvP). I
find that community/co-op based games (PvE) tend to draw fewer assholes
because of the type of people who enjoy those games.

~~~
cozuya
Yeah, it is the hardest of hardcore PvP games - it involves people having to
lie to other people, every game. Throw in internet strangers and an Elo
system, and it becomes an impossible thing to moderate beyond "don't use hate
speech or get banned". I just wish I knew what I could do better. Things like
this get me down a bit.
[https://old.reddit.com/r/SecretHitler/comments/b9l1kv/secret...](https://old.reddit.com/r/SecretHitler/comments/b9l1kv/secret_hitler_io_should_be_taken_down/)

~~~
cannonedhamster
I mean, it sounds like the game is attracting the type of people who would be
attracted to the title, and who then stick around for the challenge. I don't
think that's a problem you really need to invest in solving. Eve Online has a
similar problem, with the ability to rob someone of months of work through
betrayal. Any game where lying can lead to a positive outcome is generally
going to encourage that mindset. If your players like the game, then it's the
niche you're serving; there doesn't seem to be anything wrong with that -
they're having fun. My question to you would be: are you enjoying running and
building the game? You're the most important person in this.

------
malvosenior
I feel like Anil Dash is pretty much the last person who should be talking
about internet “assholes”. He’s very often on the front lines of Twitter mobs
and is generally extremely caustic. He should reflect on his own behavior
online before preaching to anyone else.

~~~
braythwayt
This is a fairly classic ad hominem comment.

What difference does it make who makes this argument, if the argument itself
is sound? On the other hand, if the man is 50% saint, 50% angel, and 50%
genius, what difference would that make, if the argument itself was terrible?

Comments suggesting that he shouldn't make this argument do not actually
address the argument itself. In the best case, they are a distraction from the
subject being discussed.

In the worst case, they are a form of tone policing, which is a tactic for
silencing sound arguments by complaining that they are not being made by the
right people, or in the right way. Example:

A man murders another man. The police torture him into a confession. He
complains about the torture. Is he the last person that should be complaining
about violence? But if not him, who? Must we wait for the perfect angel of an
innocent victim of police brutality, before we act on it?

~~~
makomk
It makes all the difference in the world who's making this argument. He relies
heavily on the idea that there's some universal category of "assholes" that
websites should ban, and that if any website doesn't it's their fault and
everyone should pressure them to fall in line. There is no such universal
category that everyone agrees on. So what he's really arguing is that people
who _he considers assholes_ should be banned by every decent website that
wants to receive advertising money, and it's impossible to separate that
argument from his specific moral values and actions.

~~~
braythwayt
In my opinion, your comment is stronger if you simply state:

 _[This argument] relies heavily on the idea that there's some universal
category of "assholes" that websites should ban, and that if any website
doesn't it's their fault and everyone should pressure them to fall in line.
[However,] there is no such universal category that everyone agrees on._

That is simple, direct, and leaves him out of it. And I believe it is better
without dragging his character and motives into it.

~~~
makomk
That argument doesn't work. Obviously the assholes won't agree that they
should be banned, and you can just argue - as he does in this post - that
anyone who doesn't think the supposed assholes should be banned is doing so
because they're one of them. Judging the merits of this really does mean
thinking about the actual concrete values that he's fighting for.

------
Animats
_" You should have real humans dedicated to monitoring and responding to your
community."_

If you're a for-profit company, you have to pay them. At least minimum wage.
See _Hallissey v. America Online, Inc._ They're employees.

~~~
shearskill
Reddit moderators are volunteers, it’s an unpaid position.

~~~
Animats
If the U.S. Department of Labor wasn't out to lunch, Reddit would be in
trouble.

------
zzo38computer
It is your fault, but I nevertheless disagree that it is bad not to do
anything about it (although of course you can do it anyway if you want to). I
prefer not to have such restrictions; I allow anything, except that I do not
have unlimited bandwidth and disk space, so that is the only consideration.
Clients can use kill files and other filters (if accessing messages by NNTP)
to avoid messages they do not like. Messages can also be copied to other
servers, which may have different policies; I do not set policy for a server
that copies the messages, other than asking it not to overly flood the
bandwidth. My software, called "sqlnetnews", deliberately does not support
control messages (you can still post messages with a "Supersedes" header,
since that can still be useful sometimes, but it won't result in the old
version being deleted; it does, however, reject any message with an
"Approved" header). Maybe you hate me for it, but Dave Hayes has "Freedom
Knights" and it is like that (though in my case for Unusenet rather than
Usenet). While cancels can be posted (as long as the "Approved" header is
omitted), they are not processed. I will not force anyone to use these
services, and will not disallow others from making echo services (a feature
of Unusenet, designed to ensure that you have freedom globally even if not
locally; local policies are up to the server administrator, not me).
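The client-side kill file mentioned above is the classic Usenet answer to unwanted content: the server carries everything, and each reader filters locally. A minimal sketch, with an invented (header, substring) pattern format rather than any real newsreader's syntax:

```python
# Minimal sketch of a client-side kill file: messages matching any
# (header, substring) pair are dropped before display. The pattern
# format here is invented for illustration, not a real newsreader's.

def load_killfile():
    # (header, substring) pairs; a message matching any pair is dropped.
    return [("From", "troll@example.org"), ("Subject", "flamewar")]

def keep(message, killfile):
    """Return True if the message survives the kill file."""
    return not any(
        substring.lower() in message.get(header, "").lower()
        for header, substring in killfile
    )

msgs = [
    {"From": "friend@example.com", "Subject": "Hello"},
    {"From": "troll@example.org", "Subject": "you all suck"},
]
kf = load_killfile()
assert [m["Subject"] for m in msgs if keep(m, kf)] == ["Hello"]
```

This is the same read-time-filtering philosophy as the comment above: moderation lives with the reader, not the server.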

------
jcroll
What's an example of a major site with a good community?

~~~
SethTro
free karma answer: "Hacker news"

~~~
knolax
Yeah no, when's the last time you've actually seen the rules enforced?

~~~
dahart
All the time. I see dang smoothing things out like almost daily. You can also
turn on “showdead” in your profile options, and maybe you’ll become more aware
of it.

~~~
JasonFruit
I keep showdead on all the time, because sometimes I see insightful,
thoughtful answers that are dead for no reason intrinsic to the comment. Then
I can vouch for them, and I hope they become visible to others — though the
workings of that system are opaque to me.

I get annoyed reading the same smarmy-sounding "Please don't post
unsubstantive comments" replies from dang all the time — but it's what makes
this site work, when so many others don't.

~~~
dahart
Those two things are related. The warning messages are what happens before the
account gets shadow-banned. Sometimes you can walk back in a dead commenter’s
history to see what caused the problem and it’s obvious. Sometimes it’s not
apparent because it can happen for multiple reasons.

Personally, I’d rather have the constant out-of-bounds reminders than not.
I’ve never seen one of those warning messages on a comment that didn’t deserve
it, I’ll take the ‘please play nice’ requests all day if it quells ugly flame
bait, and that’s exactly what it seems to do. He’s giving people both
notification that it’s out of bounds and advance warning that there could be
consequences. Much worse is out there and easy to find. Do you have examples
that are much better, on a comparably sized forum?

~~~
JasonFruit
Like I said, it's why this site works and others don't. I'm not aware of a
better solution. Facetiously, I suppose you could take the approach of the app
'Yo': constrain the possible messages to exclude anything potentially
offensive.

~~~
dahart
I used to work for Disney and we were tasked with constraining the possible
messages to exclude potentially offensive material. It does keep out the
obvious no-effort messages with swear words, and it does tend to reduce
negative messages. But it was interesting and sometimes fun to see the
creative ways people would write bad words or sexually suggestive materials or
tell each other off. It’s not possible to stop all offensive content with
constraints or technology. Not yet anyway.

------
Causality1
Most places benefit from having strong and effectively enforced moderation
policies. However, I also think there's significant value in having some
places that don't, that serve as content-neutral platforms, the social media
equivalent of the post office.

~~~
tlholaday
> I also think there's significant value in having some places that [...]
> serve as content-neutral platforms.

You may be right about the value of such places, and Anil Dash's prediction
that such places will be full of assholes may be right, too.

~~~
Causality1
Oh certainly. I just disagree with his point that ALL websites have a "moral
responsibility" to police their users.

To remove illegal content and conduct? Sure, but not to prevent assholery.

------
yesforwhat
Does this guy have a significant investment in Facebook?

------
tus87
Thank God for free speech and God Bless President Trump.

