
Mastodon and the challenges of abuse in a federated system - dredmorbius
https://nolanlawson.com/2018/08/31/mastodon-and-the-challenges-of-abuse-in-a-federated-system/
======
voidmain
I don't think the problem is federation. The problem is media where
participants have nothing at stake, so that abusers can't be made to pay the
costs of their abuse. Changing this isn't incompatible with anonymity - for
example you could require anonymous participants to post bonds in a
cryptocurrency. But e-mail spam, for example, would never have been a problem
if the recipient of an e-mail could push a button and destroy $1 of the
sender's money. Getting the incentives right for a social network is
undoubtedly harder, but you can't even start until people have something to
lose.
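
The bond idea can be sketched as a toy model; everything here (class names,
amounts, the escrow itself) is hypothetical, not an existing protocol:

```python
# Toy model of the "post a bond" idea: an anonymous sender escrows funds,
# and each recipient can burn a fixed amount by flagging a message.

class BondedSender:
    def __init__(self, bond: float):
        self.bond = bond  # funds at stake (e.g. escrowed cryptocurrency)

    def can_send(self) -> bool:
        # The sender must keep at least $1 at risk per message.
        return self.bond >= 1.0

def flag_as_abuse(sender: BondedSender, penalty: float = 1.0) -> None:
    """The recipient's 'destroy $1 of the sender's money' button."""
    sender.bond = max(0.0, sender.bond - penalty)

spammer = BondedSender(bond=3.0)
for _ in range(3):         # three recipients flag the spammer
    flag_as_abuse(spammer)
print(spammer.can_send())  # False: the bond is exhausted
```

The point of the design is that honest senders rarely get flagged and keep
their bond, while a spammer's stake is drained by the recipients themselves.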

~~~
Fnoord
Yeah, Bill Gates proposed that solution to spam. Another one is having people
register with proven credentials of their legal name and physical address.

Yet Hacker News isn't using these.

Because shadow banning, for example, works. Having someone's voice go unheard
while they don't know about it (they keep living in their bubble) _works_.

The lesson here: don't assume that your solution is the only solution.

Usenet, e-mail, and forums have already taught us that the solution to this
problem is advanced, silent filtering (refined over the course of the anti-
spam arms race [1]), essentially akin to shadow banning. The enforcement of
law also tends to help.

[1] RBLs, Bayesian filtering, and the already mentioned shadow banning
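
As a rough illustration of what server-side shadow banning amounts to (the
usernames and data model here are made up):

```python
# Server-side shadow banning: the banned author still sees their own posts,
# but they are filtered out of everyone else's view.

shadow_banned = {"troll42"}

posts = [
    {"author": "alice", "text": "Hello fediverse!"},
    {"author": "troll42", "text": "abusive reply"},
]

def visible_posts(viewer: str, posts: list) -> list:
    """Filter a feed for one viewer. The shadow-banned user keeps
    'living in their bubble': only they still see their own posts."""
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

print([p["author"] for p in visible_posts("bob", posts)])      # ['alice']
print([p["author"] for p in visible_posts("troll42", posts)])  # ['alice', 'troll42']
```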

~~~
mschuster91
> Because shadow banning, for example, works. Having someone's voice lost
> while they don't know about it (keep living in their bubble) works.

Shadow banning is evil and opaque. It may work, but at what cost?

~~~
lifthrasiir
It is interesting to me that shadow banning can be evil (or not). Shadow
banning is comparable to someone who hears bad words and ignores them,
replying only with instinctive filler ("yeah", "hmm", "so?" and so on). Of
course, as a moderation measure, shadow banning is a lot broader and more
systematic than filler replies, but as long as moderators disclose that shadow
banning is in effect (without disclosing who is affected), it doesn't seem to
be a matter for moral judgement, because any user can always choose to ignore
you exactly in that fashion.

~~~
dleslie
I don't think that's a fair comparison; the one choosing to shadow ban another
isn't simply choosing to ignore the target, they are choosing to disallow
others from hearing what the target has to say, and in a way that does not
allow the target an opportunity to change and improve themselves.

~~~
Fnoord
E2E (client-side) silent ignore does not work in online conversations where
multiple people participate ("online group communication"), because it mangles
the context: the others don't also ignore the person. Try it for yourself:
join an active IRC channel and ignore the first few people who chat. You won't
be able to understand the conversation anymore. Spam filters work on 1:1
communication, targeted specifically at you. Shadow bans work on 1:1
communication _and_ group communication if they're centralised or server-side.
It's more akin to (SMTP) tarpits [1].

[1]
[https://en.wikipedia.org/wiki/Tarpit_(networking)](https://en.wikipedia.org/wiki/Tarpit_\(networking\))
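
A minimal sketch of the tarpit idea from [1], assuming a hypothetical SMTP
server loop (the IP list and per-character delay are illustrative):

```python
# An SMTP tarpit gives suspected spammers correct but glacially slow
# responses, wasting their time without tipping them off.

import time

SUSPECT_IPS = {"203.0.113.7"}  # hypothetical list of suspected spam sources

def smtp_reply(client_ip: str, response: str, delay: float = 10.0) -> str:
    """Return the normal SMTP response, stalling suspected spammers."""
    if client_ip in SUSPECT_IPS:
        for _ in response:
            time.sleep(delay)  # drip the reply out; the client is never told why
    return response

# A normal client gets "250 OK" instantly; the suspect waits a minute for it.
```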

~~~
dleslie
I've been using the IRC method since the early 90s; it works fine. Sure,
there's some appearance of discontinuity, but that's a feature: it allows one
to personally gauge whether the choice to exclude was a good one, without
having to be exposed to the comments of the one ignored.

Shadowbans aren't akin to SMTP tarpits, insofar as the target is less likely
to be a bot. Behind the silence is a human being who is trying to engage
others.

~~~
Fnoord
I've been using IRC since the early 90s as well, and it doesn't work "fine"
when you ignore active members of a community who get quoted and such.

SMTP tarpits are like shadow bans because the perpetrator isn't informed about
the ban/tarpit, and it's meant to slow them down by wasting their time while,
in their bubble, they believe they are getting "work" done.

~~~
dleslie
If they're active members of the community that are regularly quoted then
banning them is unlikely to be the best course of action; I prefer to publicly
announce that I'm ignoring someone, and perhaps why, before I do. Banning
active and valuable community members should, at least, be public and
transparent, lest the community react negatively.

SMTP tarpits are more likely to involve a perpetrator who is operating a bot,
and less likely a perpetrator who is personally engaged in the conversation
and community.

------
tablethnuser
This was the first test of federation's claim that bad actors can be dealt
with by instance-banning. It failed the test, and without this key innovation
performing to spec I no longer have faith that Mastodon is any more than a
technically complicated open-source Twitter.

IMO once the account was verified as Wheaton's the entire federation should
have blacklisted every instance harboring a false accuser until they
demonstrated they can be a functioning member of the fediverse.

Of course, there are better ideas in this thread such as removing the social
proofing mechanisms which fuel tweet/toot abuse in the first place. But if
we're just speaking in the nouns and verbs that are part of Mastodon's sales
pitch they had a tool to address this and demonstrated they can't wield it.

~~~
Kalium
> This was the first test of federation's claim that bad actors can be dealt
> with by instance-banning. It failed the test, and without this key
> innovation performing to spec I no longer have faith that Mastodon is any
> more than a technically complicated open-source Twitter.

It should have been clear to anyone applying basic analysis from the very
beginning that decentralization enables abusers. The parallel to, and example
provided by, email is simply too strong.

Decentralized moderation, where a group of essentially unconnected moderators
works independently on a problem, has no ability to coordinate or shift load.
Centralized moderation systems have both of those abilities, and often
financial incentives to keep working as well.

The article hits it right on one point: this is a basic structural problem.

What the article misses is that it assumes there are good solutions while
remaining decentralized. There's a reason the approach email settled on was,
effectively, centralization.

~~~
craftyguy
How is email 'effectively' centralized? There are tons of providers; you can
even host it yourself. The only things that _might_ be somewhat centralized
are spam-filtering algorithms that are shared, but many aren't.

~~~
loup-vaillant
> _How is email 'effectively' centralized?_

By having the big providers turn up the spam detector for small actors. It's
currently common to have a perfectly set up mail server, with verification and
reputation and longevity… and still have Gmail flag the emails that come from
it as spam.

Also, forget about sending email from home. Email sent from residential IPs
is instantly deleted, and not even sent to the spam folder, by big providers
(Hotmail makes it an explicit policy, and bounces the email right back). You
have to at least relay the damn mail through a non-residential IP.

~~~
CaptSpify
> Also, forget about sending email from home. Email sent from residential IPs
> are instantly deleted....

Sorry, but this hasn't been my experience at all. I see this claim all the
time, and, obviously, ymmv, but I've been using residential IPs for ~10 years
with no issues.

~~~
loup-vaillant
Noted. Of course, the provider must know the IP is residential in the first
place. From
[https://help.yahoo.com/kb/SLN26154.html](https://help.yahoo.com/kb/SLN26154.html)

> _553 5.7.1 [BL21] Connections not accepted from IP addresses on Spamhaus
> PBL_

> _Dynamic or residential IP addresses as determined by Spamhaus PBL (Error:
> 553 5.7.1 [BL21])_

> _The Spamhaus PBL is a DNSBL database of end-user IP address ranges which
> should not be delivering unauthenticated SMTP email to any Internet mail
> server except those provided for specifically by an ISP for that customer's
> use. The PBL helps networks enforce their Acceptable Use Policy for dynamic
> and non-MTA customer IP ranges._

---

I guess you're lucky enough that your IP address has not been tagged as
"residential" by Spamhaus.

------
mirimir
> In mastodon.cloud’s case, it appears the moderator got 60 reports overnight,
> which was so much trouble that they decided to evict Wil Wheaton from the
> instance rather than deal with the deluge.

In my opinion, what's missing are tools for instance operators to ignore
problematic accounts. Unless there's huge churn, 60 reports per day would
quickly become just a few. And this would also help hugely with the spam
issue. And instance operators could share blocklists, just as for email spam.

Given all I've read so far about Mastodon, the only viable solution seems to
be running single-user instances. That way, all blocking can only be user-
specific. Instance operators can't (obviously) be forced to delete users'
accounts. And if that's currently too technical for most users, then it needs
to be streamlined and simplified.

Edit: spelling

------
lapinot
My analysis of abuse in social media: it's an unavoidable side-product of buzz
mechanics. Buzz exists in social media because the ad-based companies that
invented it needed a slot-machine-like mechanism to keep users addicted (this
has already been discussed here recently). As such, for me the roots of both
buzz and abuse are features like the shared global news feed and real-time
trending topics (or, in general, network-wide search by some popularity-like
criterion).

Abuse mostly works in medium to large groups, and it crucially depends on
dehumanization of the target. Afaik a classical way to defuse abuse IRL is
to organize small isolated confrontations, exactly the opposite of giving an
epic moral speech in front of the crowd, because one cannot have reasonable
and logical arguments in a big group. So in the end a global search ranked by
recent popularity is exactly the opposite of what is needed for peacemaking:
any heated argument will attract more and more bystanders, and there is no
way to reverse the momentum.

So what solution? My guess is that only whitelisting makes sense, and at a
granularity finer than instances. One could also provide tools for instance
moderators to make a particular thread private, to isolate it. We should also
not provide any popularity-based ranking, leaving that to specific structures
dedicated to it, with their own rules and a well-defined editorial policy
(like Hacker News or others). Of course we cannot just stop people from
ranking things by popularity; we have to actually make it impossible. One way
to do that is for links to really be directed: retooting something should not
inform the target, making the retoot count a hard edge-reversal problem
(which would require crawling the full network). We could even make crawling
harder with some rate limiting or proof of work in the query protocol (e.g.
just making the query expensive for the client).

Does this sound sensible? edit: actually i somewhat just expanded on this
comment below:
[https://news.ycombinator.com/item?id=17895159](https://news.ycombinator.com/item?id=17895159)

~~~
joering2
Yes but I think there are two problems with your ideas, at least when it comes
to Twitter:

> organize small isolated confrontations, exactly the opposite of doing an
> epic moral speech in front of the crowd because one cannot have reasonable
> and logical arguments in a big group.

That won't work. Open any Trump tweet and click on any person, whether he/she
is for or against the president. You will see a never-ending stream of tweets
back and forth, one party trying to convince the other that Trump cheated on
his wife, while the other tries to convince them that Clinton had her server
hacked. Eventually such a stream boils down to a competition of who is more
aggressive, rude and provocative. Many such conversations end with Hitler
being injected, but of course neither party will ever see the other side. I
literally monitored 3 such streams of women wasting 10 days of their lives
going after each other with a frequency of one tweet per 45 minutes.

Your idea to break it down and sort of "mediate" between two people who
honestly are not interested in even hearing what the other side has to say,
but are only engaged to vent and blindly cheer whatever Kool-Aid they were
sold on a long time ago (i.e. that Trump is an incredible president, not a
con man, or that Clinton is an amazing woman, not a crook), won't work.

Second and finally:

> My guess is that only whitelisting makes sense and with a granularity lower
> than instances.

Twitter signed its own death sentence when it went public. It's a tectonic
shift within your organization when you move from serving users for the good
of the whole system to serving stockholders who only spend 15 minutes with
you, once a year at the end of the financial period, and will only look at
one-page accounting statements of profit and loss.

While Twitter's users and even its owners may want the most happy, prosperous
and healthy environment, in which everyone thrives and enjoys spending their
evenings reading and responding to tweets, the truth is that the only people
Twitter answers to are stockholders, who only want to squeeze as many ads
into your timeline as possible. I'll go even further: if Twitter starts
fixing its feeds and doing further account purges, it will get sued by
stockholders, because obviously fewer users on the platform equals less
ad engagement. And it's irrelevant whether you engaged with trollbots or
real, genuine people; it only matters that you saw (and hopefully clicked
on) ads.

Does this sound about right?

~~~
lapinot
Yeah, not sure. For the first point, I actually don't really care: e.g. the
action I would like to take is simply to ignore the whole thread for myself
(and thus, in a sensible distributed system where people might make their
policy depend on mine because they trust me, it would also lower the rank of
this thread and in the end isolate it in some limbo). And after some time, one
can hope that the long-term effect of discouraging abuse, shitstorms and
humor/meme buzz will educate users and let "interesting" uses take over (e.g.
these people will either leave this particular network or stop participating
in flame wars, because there are fewer of them and there is no social reward
on the network).

And for the second point, I completely agree with you: there is no way that
Twitter does the right thing. But I was mainly talking about a "hypothetical
well-run, probably federated or even fully distributed social network"
(possibly Mastodon, but not restricted to it).

~~~
joering2
> And after some time, one can hope that [...] because there are less of them
> and that there is no social reward on the network).

That's wishful thinking! One could have said the same about the first color
TV when the first comedy serial was aired: that eventually people would stop
wasting their time watching silly shows (granted, you can't argue with a TV
set) and move on to being more productive, etc.

On the contrary, I don't think we've seen the mountaintop just yet; I still
meet people who don't know what Twitter is; their path to being daily engaged
and dragged into trollbots' wars is probably a few years down the road.

~~~
lovich
Eh, I've met a few people who had only heard of the name and wondered what it
was all about. When I showed them how to sign up and look for hashtags, they
quickly gave up between the flame wars and the bad ui.

I don't think Twitter is going to reach anywhere near the ubiquity of TV or
movies for entertainment. Most TV shows and movies don't have a good chance
of making you feel angry, frustrated, or depressed after watching them.

------
cdubzzz
I have trouble seeing how decentralized services like Mastodon really improve
on Twitter.

With regards to harassment, Twitter suffers from the moderator-to-user ratio
problem pointed out in the article, but ultimately decentralization must deal
with whack-a-mole on bad-faith instances (plus all the other issues
described).

I fear that this sort of system also has the potential to drive people even
further in to social echo chambers where terrible ideas and behaviors can
flourish in harmful ways.

~~~
Barrin92
'echo chamber' should be nominated for some sort of worst-buzzword of the
century award.

There's nothing wrong with people organising and debating issues among like-
minded peers. The dysfunction of Twitter isn't rooted in 'echo chambers', but
in the lack of grouping, which results in a Hobbesian war of all against all.

The Facebook marketing idea of a global digital village without borders is a
nice vision for a corporate brochure, but no way to actually organise human
communities.

Alex Pentland did interesting work on social networks, and ironically enough
it was precisely hyper-permissive and connected networks that produced bad
outcomes and echoing, because the opportunity to copy and descend into
permanent agitation is largest in systems that have no barriers or
idiosyncrasies. A barrier-free system is a group-thinking system by design,
because it works _too fast_.

It's easy to see why this is profitable for the people who run the platforms,
but it is of questionable social value.

~~~
Nadya
_> There's nothing wrong with people organising and debating issues among like
minded peers._

That's half - and the less important half - of what an echo chamber is. That
by itself is fine. It's when it is joined with the other half, "echoing so
loudly" that dissenting views can't be expressed and are drowned out, that it
becomes a problem. The group only hears what it wants to hear, never having
its ideas challenged. No matter how terrible the idea is, they only tolerate
other people who support it. It isn't just discussing things with like-minded
peers; it's actively silencing anyone who doesn't agree and not having any
debate on the subject.

HN is an opt-out echo chamber. Dissenting views are flagged or greyed off the
page for nobody to read, except for the people who turn ShowDead "ON" and opt
out of the echo chamber. In return for seeing dissenting opinions they see a
lot more spam, a lot more vile posts, a lot more noise. In many ways people
would say that makes the experience worse, but I wouldn't have it any other
way. If HN didn't have a ShowDead option I wouldn't continue to be here.

~~~
bena
Yeah, there's a big difference between telling people to stop talking about
the Xbox in a Dreamcast forum and banning anyone who doesn't think that Jet
Grind Radio is the best Dreamcast game.

One is proper curation of the discussion forum. The other is building an echo
chamber.

~~~
Barrin92
The issue is that the latter is also a strawman, because it's not exactly
common. That's also the problem with the initial response. Hacker News, by
this detrimental definition of 'echo chamber', is not one. People debate
diverse viewpoints here; the only thing that is demanded of them is that they
do so in a civil manner and in good faith.

That's not an echo-chamber in the bad sense of the term, that's the basis for
productive discussion.

~~~
bena
It was a ridiculous example, to show that you can have a curated discussion
forum that doesn't lead to an echo chamber. I took it to an absurd length to
make it quite obvious what it was, and chose a topic light enough that
there's no need to actually debate the merit of the issue raised by the
example. Basically, whether or not Jet Grind Radio is the best Dreamcast game
is a dumb point to argue, but we can agree that a forum that enforced that
opinion would just create an echo chamber of people who really like that
game. And that that's bad curation.

Essentially, agreeing with the guy I responded to that an echo chamber
requires silencing dissenting opinions without any hope of discussion.

I wasn't passing judgment on whether Hacker News is or is not an echo
chamber.

------
karlkloss
I never used social media, because of privacy concerns, and because I'm
somewhat of a veteran, having started with Fidonet, and continued later with
Usenet.

These problems are nothing new; they existed back then already. They were
just less severe, because not many people were using those services back
then, and the broad audience knew nothing about them.

Having already experienced such things on a small scale, I was too sceptical
to use social media, and I was right.

But I started using Mastodon about a year ago, assuming that a distributed,
censorship resistant network would do something better.

Boy, was I wrong. After the Wil Wheaton incident, I deleted all of my Mastodon
accounts, and this will surely be my last experiment with social media.

I have a Threema account that only my friends and relatives know, and I have
my own blogs; that'll be enough.

I survived without social media before, and I'll continue to do so.

~~~
vjeux
Don’t you consider Hacker News to be social media? All the links are sourced
and ranked from the community, plus all the comments.

~~~
repolfx
It sort of is, but it benefits a lot from the (virtually) unlimited number of
characters you can use. It's possible to express complex and nuanced points on
HN, which is impossible on Twitter by the nature of their decision to cap
everyone to tiny messages.

The Twitter length limit is a great growth hack, because you ensure nobody
can excel, so everyone feels they can tweet just as well as anyone else. But
it is atrocious if your goal is to promote debate and intelligent discussion.
It's a medium explicitly designed to prevent any kind of complex conversation
- big surprise how it turns out.

HN has its own set of problems, of course: like most discussion forum
software, it confuses "I disagree with your view" with "I disagree with how
you expressed your view", so people routinely downvote well-reasoned and
polite posts that simply cause them cognitive dissonance or that they find
inconveniently true. It'd be better to explicitly separate the two, but I've
never seen any forum that did so.

But it's at least got the basics right: the screen space is devoted to high
density text.

------
krupan
Email spam has not been a problem since we figured out Bayesian filtering. I
understand why Twitter and Facebook don't offer that as a tool to their
users: we'd all filter out the advertising that they live off of. But why
couldn't Mastodon offer it? Wil Wheaton opens a Mastodon account, messages he
doesn't want to see start flooding in, he clicks "this is spam" on each one,
and after a day or two he no longer sees the messages he doesn't want to see.
The trolls, starved of attention, move on. Am I missing something?
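
The loop described here - flag, retrain, filter - can be sketched with a toy
per-user naive Bayes classifier (illustrative only; this is not Mastodon's
actual API, and the training phrases are made up):

```python
# A per-user Bayesian filter trained incrementally by a "this is spam" button.

import math
from collections import Counter

class SpamFilter:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        """The 'this is spam' (or 'not spam') button."""
        for word in text.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def spam_score(self, text: str) -> float:
        """Log-odds that the message is spam (positive = spammy).
        Uses Laplace smoothing so unseen words don't blow up."""
        score = 0.0
        for word in text.lower().split():
            p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

f = SpamFilter()
f.train("you are terrible harassment insult", "spam")
f.train("loved your show great episode", "ham")
print(f.spam_score("more harassment insult") > 0)  # True
print(f.spam_score("great episode today") > 0)     # False
```

After a day or two of clicks, messages full of previously flagged vocabulary
score high and can be hidden automatically.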

~~~
rspeer
> Email spam has not been a problem since we figured out Bayesian filtering.

My understanding is, this is not because Bayesian filtering solved the spam
problem forever. It's because Bayesian filtering solved the spam problem for
long enough for Gmail to dominate email.

Google had a ton of data and they were already using it to recommend ads, so
they were well-positioned to use it to make the best spam filter. Unlike the
things that came before, you could rely on Gmail's filtering. You didn't have
to sort your spam folder _ever_. This was a clear advantage bringing even more
people to Gmail.

Now the thing discouraging email spammers isn't any given filter, it's the
fact that Google owns email and the cost of messing with Google is too high.

(I am not saying that this, in particular, makes Google evil. Another way to
look at it is, Google saved email as a usable medium, by centralizing it.)

~~~
thaumasiotes
> Unlike the things that came before, you could rely on Gmail's filtering. You
> didn't have to sort your spam folder _ever_.

Was this ever true? It's certainly not true now. Gmail correctly filters
almost all spam I receive. And, it also incorrectly filters a healthy amount
of legitimate mail, so you still have to check your spam folder. It will even
filter mail from Google.

~~~
rspeer
I check my spam folder out of curiosity once in a while, and there is nothing
in there I would miss. Like some of it is "legitimate" but it's still just
bulk drivel, nothing that obligates me in particular to read it.

Right now, I see mail from Google in my filter. Because it's automated dumb
shit where Google Photos wants me to look at my photos from a year ago, and I
already told Gmail that was spam, and it believed me.

If I miss an e-mail because it was caught in my spam filter, that's the
sender's problem, as opposed to pre-Gmail, when it was my problem.

It seems to me that I haven't _had_ to re-train my spam filter by rescuing
important ham messages from the spam folder in a decade. Perhaps your
experience is different.

~~~
thaumasiotes
> Perhaps your experience is different.

Very much so. For example, earlier this year I sent email to a company
inquiring about an opening. Their response went straight to spam.

------
dwohnitmok
The linked webpage in the article detailing Wil Wheaton's account of the
incident is a thought-provoking read of online mob behavior.
[http://wilwheaton.net/2018/08/the-world-is-a-terrible-
place-...](http://wilwheaton.net/2018/08/the-world-is-a-terrible-place-right-
now-and-thats-largely-because-it-is-what-we-make-it/)

~~~
eridius
It's also incredibly biased towards Wil (no surprise, as he wrote it).

In reality, Wil himself was the harasser and abuser, and the reports against
him were completely justified. He got bofa'd and in response started reporting
other users for even the slightest provocation. And it wasn't just reporting
other users; he was specifically reporting a bunch of trans women, which
really puts him in a poor light considering the fiasco with his Twitter
blocklist, on which he put a ton of LGBT people, pretty much ruining the lives
of a bunch of independent trans artists who suddenly lost their source of
income.

Now it's certainly possible that a group of people decided to mob him with
reports, but personally I think it's pretty likely that those 60 reports
overnight were all legitimate reactions to his campaign of abuse against other
people.

~~~
skybrian
Maybe, but why should we believe this version of the story rather than the
other one? Do you have a source?

~~~
ghusbands
Wil does admit to some of it: "And for what it’s worth, the part of me that
wants to apologize to the people who ended up [the blocklist] by mistake is
overwhelmed by the part of me who was attacked really viciously by a lot of
those people and feels like maybe blocking them wasn’t such a bad idea, after
all."

That directly supports the narrative that he publicly blocklisted some
innocent people, hasn't apologised and is being mobbed for it.

~~~
eridius
I love how he says "by mistake" as if he didn't personally add a bunch of
people to his blocklist deliberately simply because he was mildly annoyed by
them.

Hey Wil, when you personally are responsible for taking away the livelihood of
a bunch of marginalized people, you don't get to complain about being
"attacked really viciously" by your victims.

------
zaroth
I wonder if the solution involves partitioning the social graph to allow
accounts to coexist?

Instead of trying to censor accounts, that is; I'm going to assume accounts
aren't used purely for offensive content (that's the easy case), but rather
that an account generates 'mixed' content.

Bans are a primitive form of isolating a part of the graph. Particularly if
they extend to commenting/replying to that account’s posts.

False abuse reports, similarly, should carry an extremely high cost to the
submitter. If an abuse report is flagged as false, maybe the account is never
trusted to report a post again. Maybe an abuse report should actually have to
carry some monetary value (like hashcash).
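
The hashcash idea can be sketched: a reporter must burn CPU to mint a stamp
that the server can verify cheaply (the difficulty and naming here are
illustrative, not a real Mastodon mechanism):

```python
# Hashcash-style stamp on abuse reports: find a nonce whose hash has N
# leading zero bits. Minting is expensive; verification is one hash.

import hashlib

def check_stamp(report_id: str, nonce: int, bits: int = 16) -> bool:
    """Cheap server-side check: does the hash start with `bits` zero bits?"""
    digest = hashlib.sha256(f"{report_id}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

def mint_stamp(report_id: str, bits: int = 16) -> int:
    """Expensive client-side search (about 2**bits hashes on average)."""
    nonce = 0
    while not check_stamp(report_id, nonce, bits):
        nonce += 1
    return nonce

nonce = mint_stamp("report-123")
print(check_stamp("report-123", nonce))  # True
```

Raising `bits` makes each report a few seconds of CPU, which is negligible
for one honest report but ruinous for a mob filing them by the hundreds.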

~~~
jakobegger
The problem is not that multiple communities can't coexist. Unboxing videos
and workout videos coexist peacefully on Youtube.

The problem is when a mob of people decides they want to attack someone (or a
group of people) and does everything they can to harass them.

Offering the mob their own place isn't going to help at all when all they
want to do is destroy someone else's.

------
joe_the_user
It's frustrating to see a viable-enough-it-gets-talked-about Twitter
alternative but apparently no equivalent Facebook alternative.

I believe this is related, because it seems like the Twitter model by itself
breeds a wide range of inherently bad and ugly behavior. A platform for
celebrities pretty much would have to. Moreover, an "everyone sees everyone
by default" approach seems hard to moderate.

I treat Facebook as a combination blog and forum. It works for me.
Effectively, I have the tools to do my own moderation. A federated version of
this seems much more manageable.

~~~
blfr
While I would argue that the best alternative to Facebook is nothing, it is
interesting that there is no "Internet community creation kit" which would
just be a blog+forum software integrated so that one section of the forum is
dedicated to comments on the blog posts while other sections allow users, not
just site editors, to create threads.

~~~
thaumasiotes
Is that not just a forum? phpBB has been around for a while.

~~~
blfr
Forums lack the integration with the blog (main site) comments. Even modern
forums like NodeBB only offer hacky solutions for that.

~~~
KajMagnus
I think Talkyard does what you have in mind:
[https://www.talkyard.io](https://www.talkyard.io) — it's both community/forum
software, and blog comments: [https://www.talkyard.io/blog-
comments](https://www.talkyard.io/blog-comments), and the blog comments are
placed in a blog-post-discussions category in the forum. The same login &
@username_mentions work both in the blog comments and in the forum. (I'm
developing Talkyard & it's open source beta software.)

I'm looking to create a PWA mobile app, meaning, people will get one's
community as its own icon on their mobile phone.

(Any thoughts/feedback?)

------
jancsika
> As a moderator working on a volunteer basis, it can also be hard to muster
> the willpower to respond to a report in a timely manner.

When I see the word "volunteer" in a response to a sensible critique of FLOSS
software, I rankly speculate that a deep design flaw has manifested itself as
a social problem.

It appears I am right in this case.

Are there known cases where I would be wrong?

~~~
scrollaway
Wikipedia?

I agree with you in general though. I think Wikipedia would be the exception.

~~~
bumholio
Wikipedia quickly gets political. Even in technical articles, there is an
incentive to impose one's own perspective on one's field as the
"authoritative" version. A significant portion of contributors, it turns out,
are academics working in the field who more often than not are deeply
invested in their version.

This is a great system for an encyclopedia that must find the best
approximation of the truth, but I wouldn't call it "incentive free". Similar
incentives in the social world (pushing out people that don't conform to the
moderator's world view) could be disastrous.

------
paradite
It has just occurred to me that this centralized vs federated/decentralized
discussion is similar to the monolith vs micro-services discussion.

Centralized/monolith means everyone follows the same rules whether they like
it or not, and there's a single point of failure.

Federated/decentralized/micro-services means every federation/team is free to
have its own rules as long as it maintains consistent API contracts with
other nodes/services. But it's harder to have a consistent quality-control
process, as the effort is distributed.

If you see it from this angle, it might be easier to analyze and understand
pros and cons of each approach. Maybe people who like monolith will also like
centralized services?

------
mod_ex_1
Lack of a batch delete function made me chuckle a bit. This alone, plus a
select-all-on-page feature, would seem to take the sting out of late-night or
impromptu "delete a bunch of f u comments from a cat post" events. Why would
that feature not be a priority?

~~~
ubernostrum
Fun fact: reddit doesn't have batch delete for subreddit moderators. Using the
interface reddit provides, you have to individually click "remove" on every
single comment.

Reddit also doesn't have a built-in button to ban a user while you're looking
at the offending post or comment; you have to go to a separate "ban users"
page, and copy/paste the username to ban.

The official mobile app, so far as I can tell, doesn't have ban functionality
_at all_. You can only approve/remove/mark spam and lock threads.

Moderator communities have worked around this, somewhat, with browser
extensions that provide mass-remove and inline ban functionality, but they're
only a partial solution. The mass-remove in the mod toolbox extension, for
example, isn't always able to get everything, because reddit itself only
partially loads busy comment threads, and the extension can only remove
comments that are actually on the page. So you have to remember to force-load
the whole thread and sometimes click through the "we don't care that you
wanted all the comments, click to another page to see the rest" links.

The mod toolbox extension (/r/toolbox) also adds some pretty essential things
like macros for common messages that need to be sent to users, shared notes
all the moderators of a subreddit can see, etc.

~~~
mod_ex_1
Is this intentional to prevent mass censorship?

~~~
ubernostrum
The simpler explanation, and the one consistent with history, is that reddit
does not prioritize the use case of moderators. If the goal was to be an
absolutely unmoderated free-for-all, they wouldn't provide half-assed mod
functionality, they'd provide zero mod functionality.

Especially given that AutoModerator exists and can be set up to automatically
remove posts or comments based on particular keywords or even regexes, and
does it so fast that not even the various "view removed reddit" services can
show you what was there.
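
A minimal sketch of the kind of keyword/regex filtering described above. The
rule contents and function names here are hypothetical illustrations, not
AutoModerator's actual configuration, which is YAML-based:

```python
import re

# Hypothetical rule set: each rule pairs a compiled pattern with an action.
# Real AutoModerator rules are configured in YAML; this only sketches the idea
# of matching incoming content against patterns before anyone sees it.
RULES = [
    (re.compile(r"\bbuy cheap\b", re.IGNORECASE), "remove"),
    (re.compile(r"https?://spam\.example\b"), "remove"),
]

def check_comment(body: str) -> str:
    """Return the action for a comment body: 'remove' or 'approve'."""
    for pattern, action in RULES:
        if pattern.search(body):
            return action
    return "approve"
```

Because this check runs synchronously at submission time, a removed comment
never becomes publicly visible, which is why removal archives have nothing to
capture.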

~~~
dredmorbius
One of the possibly fortuitous aspects of Mastodon is that admins tend to be
devs, hence their needs get prioritised.

Automoderator, like RES, is a third-party addition to Reddit.

~~~
ubernostrum
AutoModerator _started_ as a third-party feature. Now it's built into reddit
(and IIRC the developer was hired for, or at least collaborated on, the
integration).

------
pessimizer
It's a mod failure and a tool failure. If you get 60 bullshit reports about a
user, ones that aren't even debatable, you should be able to send out replies
as a mod to their mods, and if they aren't heeded, defederate.

There needs to be some sort of federated ticket system or something.
Federation is like a treaty, and a treaty is basically nothing but a court.

------
merlish
Sure, and these are good points, but they solve a problem far less interesting
than the real problem.

Of the (let's guess) 60 people telling Wil to go do one, up to about half[1]
might be the shitposting crew taking advantage to troll a not well-liked
celebrity, but the rest were people and their friends who were genuinely
incensed at his appearance in their graph.

For the latter, you might argue they should block him, but if he's on the same
instance then effectively he's come in and sat down in their home. If he's on
a remote instance, then there still exists the desire for retributive justice
for past wrongs (yes, including perceived).

In this specific case, there was no-one (except the moderator!) sticking up
for Wil. The groups that dislike him are small, but genuinely plural. (e.g.
4chan doesn't like him, a number of trans* artists don't like him, some Star
Trek geeks...)

Given that he came to the fediverse to escape harassment and trolling on
_mainstream_ platforms, I don't think there is a great solution.

So here's two options, instead:

One: Don't join the platform as "Wil Wheaton". If you want to join a community
as another face in the crowd, then use an internet handle. Then you can
interact as equals.

Pseudonymity is one of the great gifts of the internet.

If you come as a celebrity to link your blog posts and try and talk to fans,
then I don't think the fediverse makes any sense. It's too small and people
are territorial of the instances they adopt.

Secondly - a more social solution - find some way to calm the people involved.
This may involve temporarily suspending instance links, or saying that (as
moderator) you need time to discuss this and are working towards an acceptable
solution, etc. Don't know what you do next.

Finally - and the point of my rant: Dismiss those you don't understand /
greatly simplify social problems at your peril. (Sure is great reading a bunch
of trans artists arguing in earnest turned into "abusers" \- nice!)

As humans, we make great changes and build general solutions based on one-off
undesirable acts, and if you don't even make an effort to completely
understand the problem then you WILL build the wrong solution.

[1] Possibly more than 50%? There were people involved who kept saying they
had just joined Mastodon and didn't know how to use it, which is weird.

------
rumcajz
This is relevant in a way: Modern anti-spam and E2E crypto

[https://moderncrypto.org/mail-archive/messaging/2014/000780.html](https://moderncrypto.org/mail-archive/messaging/2014/000780.html)

------
haney
Decentralization definitely makes a hard problem, that twitter has barely been
able to solve on a monolithic system, harder. I wonder if Mastodon could take
techniques from things like BitTorrent that do a decent job of determining an
individual node’s contribution in a decentralized way, like maybe a way for
instances to pass hints to one another. For instance, if someone is being
banned or moderated frequently on other federated instances, that might be a
hint to block their content. Obviously there’s an opportunity for someone to
control many instances and manipulate the system.
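
One way to sketch the hint-passing idea, assuming a hypothetical trust weight
per peer instance (none of these names or structures exist in Mastodon; this
is purely illustrative):

```python
from collections import defaultdict

# Hypothetical trust weights our instance assigns to peers. Weighting hints
# by the reporting instance's standing limits the damage one operator running
# many sock-puppet instances can do, though it doesn't eliminate it.
instance_trust = {"mastodon.social": 1.0, "example.town": 0.8, "sketchy.host": 0.1}

def moderation_score(hints):
    """Sum trust-weighted ban/moderation hints per remote user.

    Each hint is (reporting_instance, reported_user, weight)."""
    scores = defaultdict(float)
    for instance, user, weight in hints:
        scores[user] += instance_trust.get(instance, 0.0) * weight
    return scores

hints = [
    ("mastodon.social", "troll@bad.example", 1.0),
    ("example.town", "troll@bad.example", 1.0),
    ("sketchy.host", "innocent@ok.example", 1.0),  # low-trust source barely counts
]
scores = moderation_score(hints)
```

An admin could then auto-silence accounts whose score crosses a local
threshold, which is roughly the BitTorrent-style idea of inferring reputation
from peer behaviour rather than a central authority.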

I love the idea of Mastodon and I’m hoping people smarter than me find
solutions for these problems.

------
jellicle
You need to have good, immediate abuse-banning tools.

So if Wil Wheaton says "look at my new movie" and you get 60 abuse reports
about it, there should be a one-click tool that says "Everyone who reported
this as abuse gets suspended and all the reports are removed from the queue
{click}". If Wil Wheaton has 92 people replying "fuck you" to his posts, it
should be one click to pull out everyone who wrote "fuck you" to Wil Wheaton
and give them a slap.

You can set the level of violence done to whatever you want (maybe a temporary
suspension is fine, maybe you just want them to lose abuse-reporting
privileges, maybe you want to nuke them from orbit). But it should be one
click to deal with mobbing. "Mark all these accounts as mob participants".
Etc.
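
The "everyone who reported this gets suspended, reports cleared" action could
be sketched like this (all names hypothetical; the `action` callback stands in
for whatever level of response the admin chooses):

```python
from dataclasses import dataclass

@dataclass
class Report:
    reporter: str
    post_id: str

def handle_mob_reports(reports, post_id, action):
    """One-click mob handling: apply `action` to every account that reported
    `post_id`, and drop those reports from the queue. `action` could be a
    temporary suspension, or merely revoking report privileges."""
    targets = {r.reporter for r in reports if r.post_id == post_id}
    for account in targets:
        action(account)
    remaining = [r for r in reports if r.post_id != post_id]
    return targets, remaining

reports = [Report("a@x", "p1"), Report("b@x", "p1"), Report("c@x", "p2")]
suspended = []
targets, remaining = handle_mob_reports(reports, "p1", suspended.append)
```

Passing the action as a callback is what lets the moderator "set the level of
violence" without changing the mob-detection logic itself.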

All this stuff is, frankly, EASY, you just have to think through the dynamics
and have an understanding of social media. EVERY tool you give users,
including the "report abuse" tool, can and will be abused, and therefore you
need tools to deal with abuse of every tool.

Tools obviously won't help with BAD moderation. If your moderator hates Wil
Wheaton and likes Nazis, they can take that side. But it will at least make
moderation (good or bad) fast and efficient, which gives you the chance of
hiring enough good moderators to provide overall good moderation. (Hint: if
you have multiple moderators, you need tools to review your moderators....)

You should have all sorts of counters to record stats about abuse - how many
problems you get from each other instance, ratios of problems/good content,
that sort of thing. It should be easy to notice that you've gotten 5 mobbing
attacks today from foobar.mastodon so maybe that entire instance needs a
little timeout. And so on.
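
The per-instance counters described above amount to very little code. A
minimal sketch, with hypothetical class and threshold names:

```python
from collections import Counter

class InstanceStats:
    """Hypothetical per-instance abuse counters: absolute problem counts
    plus a problems/total ratio, either of which can flag an instance."""

    def __init__(self):
        self.problems = Counter()
        self.total = Counter()

    def record(self, instance, is_problem):
        self.total[instance] += 1
        if is_problem:
            self.problems[instance] += 1

    def flagged(self, max_problems=5, min_ratio=0.5):
        """Instances exceeding either the absolute or the ratio threshold."""
        return [i for i in self.total
                if self.problems[i] >= max_problems
                or self.problems[i] / self.total[i] >= min_ratio]

stats = InstanceStats()
for _ in range(5):
    stats.record("foobar.mastodon", True)   # five mobbing attacks today
stats.record("good.example", False)         # ordinary traffic
```

With `flagged()` surfacing `foobar.mastodon`, an admin can decide whether the
whole instance needs a timeout without digging through individual reports.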

------
jancsika
How _is_ abuse handled in Mastodon?

The article implies that Mastodon couples "moderator" with "instance admin"
and leaves it at that.

~~~
dredmorbius
Poorly. That's a topic of active current discussion.

------
minikomi
Does anyone regularly use mastodon as their social network of choice? I've
registered and looked around but it seems so.. boring. How did you kick-start
your social graph to the point where it became interesting? With twitter,
instagram and facebook I had actual in-life acquaintances using the network.
With mastodon, I'd have to go seeking content pretty aggressively.

~~~
WhyDoPeople
I had been using Mastodon for over a year until 2 months ago.

When I joined, it felt like a good way to meet people who share interests and
learn about new ideas.

You kick-start your social graph by looking at the fediverse feed and
following people who are cool and interesting. You also can find good
discussions by commenting on their tweets (I'm not calling them toots).

I left because it eventually became the same drama you find on Twitter.

Anger becomes a hobby, and negativity then spreads like cancer.

People ruin other people's reputation and lives within a community (or real
world if they can) all because of a single second of their life that may or
may not have happened.

It is very sad and alarming that people are so blind with unguided rage that
they do not know how to direct it positively.

In the end, Mastodon will be a good community, but not for everybody, and it
is not an open or welcoming community.

~~~
minikomi
Sounds like it's not for me then. I find little in the fediverse feed which
appeals to me, and hashtag searches for topics which interest me turn up small
clusters of unconnected, non-conversational toots going out into the void.

------
jtr_47
this sounds like a scare piece of "news." A possible competitor to twitter
etc.

~~~
mirimir
Well, it _is_ frightening.

------
egypturnash
So I'm just going to drop this idea for all you Eager Internet Entrepreneurs
here on HN:

influencers.social, a Mastodon instance for Verified Social Media Influencers.

If you have a Twitter/Instagram/Facebook/etc account with 5x10^6 or more
followers, and a thousand dollars per month, then you can join our elite site
and know that you are in the hands of our team of moderators, who are on the
job 24/7. We use sentiment analysis to catch the abuse before you even see
it! Why trust your social media presence to the underpaid minions of the other
commercial sites, or to a volunteer running their site as a side project, when
you can be a Verified Social Influencer?

Juggle the numbers as you see fit. Pick a better name while you're at it.
Decide for yourself if you feel like passing any improved moderation tools
back to the codebase of whichever ActivityPub-based social site you fork off
of.

~~~
rspeer
This sounds like satire, but I don't get the joke.

