
Possible routes for distributed anti-abuse systems - paroneayea
https://dustycloud.org/blog/possible-distributed-anti-abuse/
======
chromakode
In my experience, the only acceptable way to deal with many classes of abuse
is humans. You can cleave out a bunch of spam by making it harder / more
expensive / more annoying for spammers to operate, but I've never seen a
community of humans that didn't require human labor in order to govern itself.

Federated reputation is the right idea, but it's only a means. The real hard
problem is distributing the mental / emotional labor of moderation. I don't
think this decentralizes well. Good moderation requires focus, a carefully
trained subjective sense, along with pragmatically acquired experience for how to act
on issues that come up. If you agree that the most efficient/appropriate route
is communities having human moderators, then that puts a ton of constraints on
any solution, and it begins to resemble the forum / IRC op / reddit moderator
structures we've seen before.

It's pretty simple to create a federated platform with a strong anti-abuse
component, but the solution is not what most proponents of decentralized
platforms want to hear.

You need:

* definitive scopes in which certain moderators have a ton of (possibly unauditable!) power,

* consensus on which moderators exist over which scopes,

* encrypted group channels for communicating with moderators,

* and, the most important part: a tremendous amount of dedicated human elbow-grease and emotional labor.
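
As a rough sketch of how the first three pieces of that list might be represented (Python; every name, field, and channel identifier here is a made-up assumption for illustration, not anything described in the post):

```python
# Hypothetical sketch: per-scope moderator registries a federated
# community could agree on.  All names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModeratorScope:
    scope_id: str                                       # e.g. "room:#cats:example.org"
    moderator_keys: set = field(default_factory=set)    # public keys with power in this scope
    report_channel: str = ""                            # encrypted group channel for reports

class ScopeRegistry:
    """A node's view of which moderators the community has agreed on."""
    def __init__(self):
        self.scopes = {}

    def add_scope(self, scope: ModeratorScope):
        self.scopes[scope.scope_id] = scope

    def can_moderate(self, actor_key: str, scope_id: str) -> bool:
        scope = self.scopes.get(scope_id)
        return scope is not None and actor_key in scope.moderator_keys

# Usage: record the community's consensus, then check a moderation action.
registry = ScopeRegistry()
registry.add_scope(ModeratorScope("room:#cats:example.org",
                                  {"alice-pubkey"}, "mls:abuse-reports-cats"))
assert registry.can_moderate("alice-pubkey", "room:#cats:example.org")
```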

------
yummyfajitas
There is no abuse problem on the platform, or rather any abuse problem on the
platform is already a solved problem via existing techniques (e.g. standard ML
classifiers).

Consider SMTP, which is a true peer-to-peer system. In SMTP, abuse is a solved
problem - spam filters work great, and 33mail/mailcatch (disposable email)
improves things tremendously. At most 1-2 recruiters, SaaS marketing-automation
pitches, etc. slip through my filters each week, forcing me to expend about
3-5 seconds/week clicking "report spam".
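
As a toy illustration of the "standard ML classifier" approach this refers to (a minimal naive Bayes over message tokens; real filters use far more signals, and this is not any particular provider's implementation):

```python
# Toy naive Bayes spam filter, in the spirit of the "standard ML
# classifier" the comment alludes to.  Training data is made up.
import math
from collections import Counter

class NaiveBayes:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            score = math.log(self.doc_counts[label] + 1)
            for word in text.lower().split():
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

nb = NaiveBayes()
nb.train("buy cheap pills now", "spam")
nb.train("lunch at noon tomorrow", "ham")
print(nb.predict("cheap pills"))   # -> "spam"
```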

The "abuse" problem on twitter is similarly not really a filtering issue; if
it were, then it could be easily solved by a moderately better twitter client
or switching off notifications of @mentions.

No, the actual problem people face is that they want to stop others from
speaking, and label the speech they wish to stop "abuse". What's scary to most
(including me) about Twitter is that an angry mob may form and decide they
hate me, and then use Twitter to coordinate real life actions against me.

That's not a problem of abuse on the platform; that's a problem that the
platform enables people to speak among themselves in ways I dislike. Let's be
clear that this is the problem some folks wish to solve. It's only by
identifying the problem that we can properly solve it, or decide if it's a
problem that should be solved.

~~~
pjc50
> What's scary to most (including me) about Twitter is that an angry mob may
> form and decide they hate me, and then use Twitter to coordinate real life
> actions against me.

There's multiple things going on here. We've seen use of communications
channel A to organise harassment on channel B - which is hard for B to stop
("brigading", gamergate). Gamergate escalated into real-life death threats
against its targets. We've seen on-platform organisation of harassment of
people on and off the platform (this is ultimately what got Milo banned from
twitter). We've also seen "news" organisations get into finger-pointing and
condemnation of random citizens (e.g. Daily Mail outing a teacher as trans,
resulting in her suicide).

We've seen the legal system get involved clumsily. There's seemingly no middle
ground between "joke in poor taste" and "arrested for bomb threats"
(#twitterjoketrial). Occasionally people resort to libel law, and even more
occasionally the less powerful, famous and obnoxious person wins (Katie
Hopkins vs. Jack Monroe).

People are, I hope, aware of the ability of abuse reporting systems to
themselves become channels of abuse. Facebook's real names policy is exploited
by anti-trans campaigners to force people off the platform, for example.
Anyone should be aware of the risks of automated threshold systems being
abused: if all you have to do is get 100 accounts to press "report abuse",
that will be abused very quickly.
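
A small sketch of why a raw count threshold is so easy to game, contrasted with weighting reports by reporter reputation (one common mitigation; the thresholds, weights, and names are assumptions, not how any specific platform works):

```python
# Sketch: naive report-count thresholds vs. reputation-weighted reports.
# All constants here are illustrative assumptions.

REPORT_THRESHOLD = 100

def naive_should_hide(reports):
    # 100 throwaway accounts clicking "report abuse" is enough here.
    return len(reports) >= REPORT_THRESHOLD

def weighted_should_hide(reports, reputation, threshold=100.0):
    # Each report counts for the reporter's standing; fresh sockpuppet
    # accounts with reputation ~0 contribute almost nothing.
    return sum(reputation.get(r, 0.0) for r in reports) >= threshold

sockpuppets = [f"sock{i}" for i in range(150)]
reputation = {s: 0.01 for s in sockpuppets}

print(naive_should_hide(sockpuppets))                  # True  -- the brigade wins
print(weighted_should_hide(sockpuppets, reputation))   # False -- total weight is only 1.5
```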

Local standards also present problems. Do we really want to go along with e.g.
Pakistan arresting people for blasphemy?

~~~
paroneayea
_People are, I hope, aware of the ability of abuse reporting systems to
themselves become channels of abuse. Facebook's real names policy is
exploited by anti-trans campaigners to force people off the platform, for
example. Anyone should be aware of the risks of automated threshold systems
being abused: if all you have to do is get 100 accounts to press "report
abuse", that will be abused very quickly._

 _Local standards also present problems. Do we really want to go along with
e.g. Pakistan arresting people for blasphemy?_

It's definitely true that anti-abuse systems can themselves be abused, though
most of the failures you're describing are partly due to those anti-abuse
systems being centralized, right? I also see a lot of comments here along the
lines of "but that's censorship!" But the article is discussing decentralized
anti-abuse systems which allow individuals to set up their own opt-in filters
which apply to themselves and their communities (which means different people
might have different filters). Do you think that's different?

~~~
pjc50
Filters deal with the situation where A is sending to B something that B
doesn't want to receive.

The situation where A is sending to B something that's harmful to C cannot be
dealt with by C's filtering and can only be addressed at a higher level in the
system.

Those are the technical distinctions, but there's a _lot_ of possible things
covered by the second case: leaked nudes, lynch mob organisation, slander,
leaked intelligence, compromised party documents, names of human rights
activists being leaked to secret police, copyright infringement, child porn,
fake news, _real_ news in fake states, allegations that invitations to pizza
are evidence of child porn, and so on.

~~~
someguydave
>The situation where A is sending to B something that's harmful to C cannot be
dealt with by C's filtering and can only be addressed at a higher level in the
system.

Huh? If C knows A's public key and the content is signed, why can't C filter
A's content?

~~~
pjc50
Content is basically never signed, and I'm talking about situations where the
content isn't intended for or sent to C.

------
Arathorn
We see the need for decentralised anti-abuse and reputation systems as the
single biggest existential threat to the growth of Matrix.org (which otherwise
risks being almost worse than SMTP). It's a problem that the whole
decentralised web community has in common, and a solution could benefit
everyone equally. We've experimented with data entries in Stellar ledgers for
this already, but it needs a lot more thought & work. We tried to issue a call
to arms at FOSDEM this year to try to get people thinking more about it, with
a bunch of possible ideas very similar to this post (see slides 27 to 32 of
[https://matrix.org/blog/wp-content/uploads/2017/02/2017-02-04-FOSDEM-Future.pdf](https://matrix.org/blog/wp-content/uploads/2017/02/2017-02-04-FOSDEM-Future.pdf))
and it would also be
absolutely awesome if the W3C social groups could help drive a solution.

~~~
Natanael_L
I think the only practical solution is an old one, of using multiple spaces -
a bunch of relatively closed spaces that you have to get invited to, with more
open spaces that you might get invited from.

Closed doesn't have to mean secret, just that it uses a whitelist for who can
post.
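
A minimal sketch of that kind of closed-but-not-secret space (assumed names throughout: anyone can read, only whitelisted members can post, and members can extend invites):

```python
# Sketch of a "closed but not secret" space: readable by anyone,
# writable only by whitelisted members, who can invite others in.
# All names here are illustrative.

class ClosedSpace:
    def __init__(self, founders):
        self.whitelist = set(founders)   # who may post
        self.posts = []                  # publicly readable

    def invite(self, inviter, invitee):
        if inviter in self.whitelist:
            self.whitelist.add(invitee)

    def post(self, author, text):
        if author not in self.whitelist:
            raise PermissionError(f"{author} is not whitelisted to post")
        self.posts.append((author, text))

space = ClosedSpace({"alice"})
space.invite("alice", "bob")      # bob got invited in from a more open space
space.post("bob", "hello")        # allowed
# space.post("mallory", "spam")   # would raise PermissionError
```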

Slap on some access controls to my idea here and you might have a decent
start;

[https://roamingaroundatrandom.wordpress.com/2014/06/01/a-decentralized-hash-chained-discussion-system/](https://roamingaroundatrandom.wordpress.com/2014/06/01/a-decentralized-hash-chained-discussion-system/)

(I think I've even mentioned this to one of you guys before)

~~~
milesrout
I have to agree. I don't think that 'global moderation' really works as a
concept. What someone views as just a joke might be _incredibly_ offensive to
me. I have no problem with rape jokes or 9/11 jokes for example, but I never
ever want to see any sort of gore; that's completely off-limits for me. Whereas
some people (see 4chan) might not care about either, and many care about both.

------
jpalomaki
Web-of-trust mentioned in the article could also be used for moderation.
Instead of somebody defining The Moderators who are allowed to make decisions,
everybody would pick their own (or could decide to trust selections made by
somebody else).

In the "web-of-moderators" model no content would be outright blocked. Each
user would have a personalized view, depending on his personal moderator
network. One side effect of this would be that trying to outsmart the
moderation would be harder, since all the bans would be shadow bans. A spammer
would not know from how many people in his network the content is hidden.
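
A rough sketch of this web-of-moderators model, assuming the simplest possible data structure: each reader subscribes to moderators of their choosing, and a post is shadow-hidden for a reader if any moderator they trust has flagged it (all names here are illustrative):

```python
# Sketch of the "web of moderators" idea: no global ban list; each
# reader's view is filtered by the moderators *they* chose to trust.

class ModerationWeb:
    def __init__(self):
        self.flags = {}          # moderator -> set of hidden post ids
        self.subscriptions = {}  # user -> set of moderators they trust

    def flag(self, moderator, post_id):
        self.flags.setdefault(moderator, set()).add(post_id)

    def subscribe(self, user, moderator):
        self.subscriptions.setdefault(user, set()).add(moderator)

    def visible(self, user, post_id):
        # Shadow ban: the author can't tell for which readers this returns False.
        return not any(post_id in self.flags.get(m, set())
                       for m in self.subscriptions.get(user, set()))

web = ModerationWeb()
web.subscribe("reader1", "strict-mod")
web.flag("strict-mod", "post42")
print(web.visible("reader1", "post42"))  # False -- hidden for this reader
print(web.visible("reader2", "post42"))  # True  -- reader2 trusts no one who flagged it
```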

~~~
sametmax
This is an incredibly good idea. Everybody is using moderation in a
centralized manner. Like there is one source of truth. But like curation, you
need the moderation that suits you.

This elegantly solves one hard problem with moderation: allowing freedom of
speech while giving people a safe place to be.

You can imagine a forum where you have official moderators and unofficial
moderators. By default the forum displays the very clean, politically correct
moderated content. However, if you create an account you can switch your
moderation source, and see more content.

People ok with dirty jokes or extreme opinions would be able to enjoy the site,
and so would children.

That's actually a fantastic idea even outside of the web-of-moderator concept,
just as an implementation in regular centralized software. Even better in a
global ring of course, but one is harder than the other.

~~~
milesrout
There are some things that actually should be completely deleted and off-
limits, though, like posts of people's personal information.

But yes, I often wish I could see deleted comments on reddit in places like
/r/askscience: I want to see what stupid shit people have said to get banned.

ALTHOUGH, like _everything_ discussed in any of these threads as 'solutions'
to the so-called 'problem' of 'abuse': it really is just going to result in
more echo chambers.

~~~
sametmax
Well, that's the problem, you see. You and I agree on that, but plenty of
people won't. And plenty of people won't agree that you can mention gay sex,
the Quran or a political statement either.

And they are sure they are right, as much as we are sure we are right.

That's the problem with freedom of speech.

One of the solutions is to allow:

- anything that is legal geographically,

- anything that is not followed by destructive actions IRL,

- or both.

And even that is hard and very imperfect.

Morality is always arbitrary, it's impossible to draw a perfect line in the
sand.

Which is why systems like Freenet are both a fantastic place (you got
everything, man, even the craziest information) and a terrible curse (you got
everything, man, even a 10-year-old getting raped).

The web-of-moderators is an elegant solution to this problem. You don't draw
one line in the sand, you draw many and let people choose a line.

This doesn't solve the moral or legal issue, but it does solve the "i'm
offended" issue. Which is more that what we could do up until here.

~~~
milesrout
Yeah I like the web-of-moderators, as I said.

>This doesn't solve the moral or legal issue, but it does solve the "i'm
offended" issue. Which is more that what we could do up until here.

I entirely disagree. It does not solve that issue at all. The issue there is
that people feel they have the right to filter out everything they disagree
with or find uncomfortable. They don't.

The TV news doesn't avoid showing images that are graphic; they say 'graphic
images upcoming, turn off if you can't handle it' and then show them.

~~~
sametmax
> The issue there is that people feel they have the right to filter out
> everything they disagree with or find uncomfortable. They don't.

Yes but this is a way harder battle than creating a web-of-moderators :)

------
colemannugent
While I enjoyed the article, in the end, I can't see any of these new
techniques working in a meaningful capacity. I think traditional moderation
techniques are still relevant to the discussion.

The "problem" that the whole distributed paradigm brings to communities is the
same as its greatest strength: if nobody controls the system, then anyone has
just as much influence as their peers. This means that systems will have to be
put in place to give some people power over others in order to control the community.

This is where I believe the crux of the issue lies: in the distribution of
power.

If anyone can decrease someone's federated reputation, then so can the trolls.

If you require people to "join" a community, whether by participation or time
or other means, trolls will do that as well if it is easy to do so.

If I were trying to create a community of people who are at a higher risk of
being harassed I would have my community structure take some cues from HN,
Reddit, and the like.

One of the things I love about HN is that you need a certain amount of rep to
downvote. This alone eliminates a lot of the issues Reddit has where
everything turns into mob rule as everyone, including community outsiders,
has the ability to influence discourse. Going one step further, I would
propose an invite system. While an invite system would mean slower growth
initially, we're trying to optimize for quality in the community, not
quantity.
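
A small sketch of those two gates combined: karma-thresholded downvoting plus invite-only membership (the threshold value and names are assumptions for illustration, not HN's actual mechanics):

```python
# Sketch of karma-gated downvoting plus invite-only registration,
# roughly in the spirit of the mechanisms described above.

DOWNVOTE_KARMA = 500   # illustrative threshold, not HN's real number

class Community:
    def __init__(self, founding_members):
        self.karma = {m: DOWNVOTE_KARMA for m in founding_members}

    def invite(self, inviter, invitee):
        if inviter not in self.karma:
            raise PermissionError("only members can invite")
        self.karma.setdefault(invitee, 0)   # newcomers start with no influence

    def downvote(self, voter):
        if self.karma.get(voter, 0) < DOWNVOTE_KARMA:
            raise PermissionError("not enough karma to downvote")
        return True

c = Community({"founder"})
c.invite("founder", "newbie")
# c.downvote("newbie")  # would raise: outsiders and newcomers can't sway votes yet
```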

I had some alarms go off in my mind while reading this article's seemingly
nonchalant approach to what is effectively censorship, but in a distributed
system I guess this is of no concern as everyone can just "fork" the community
and start their own.

~~~
generic_user
> if nobody controls the system, then anyone has just as much influence as
> their peers. This means that systems will have to be put in place to give
> some people power over others in order to control the community.

As time goes on I'm beginning to think that no matter what the original design
of the platform was, they will all inevitably regress into something like IRC.

The rules will be changed so that groups can split off or filter and focus on
some specific area to try to keep the discussion on topic and productive. And
inevitably there will be some faction that captures control of the group and
turns it into a dictatorship.

Dissenters will start a new group if they can or leave the platform. Any other
design will just become so noisy and unproductive that it turns into a
dumpster fire.

The reason HN is not a total shit show is that they sometimes bring down the
hammer and knock off-topic discussion out of the core set of topics that
revolve around the practice of programming.

> Going one step further, I would propose an invite system.

Some of the most productive people who make the best posts are hermits. They're
people who rarely check in to social media to see what's going on and have few
or no friends they could ask for an invite.

And on the other hand some of the most crap people are online all the time and
have huge networks and can get an invite to wherever they want.

------
LukeB42
I built a system that relies on humans decrementing a trust rating of peers
based on the user manually classifying resources as malicious, so that bad
peers can be detected inferentially.

It differs from Eigentrust and Eigentrust++ in that peer nodes don't manually
propagate their trust ratings to each other on their own behalf; nodes just use
their own view of their overlay networks when it's time to recompute trust
ratings.
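
A much-simplified sketch of that decrement-and-recompute idea (this is not the Trust-Toolkit code linked below; the constants and names are invented for illustration):

```python
# Sketch of a local-only trust scheme: when a user marks a resource as
# malicious, every peer that served it gets its trust decremented, and
# ratings come from this node's own observations rather than gossip.

class LocalTrust:
    def __init__(self, decrement=0.2, cutoff=0.3):
        self.trust = {}            # peer id -> rating in [0, 1]
        self.decrement = decrement
        self.cutoff = cutoff

    def add_peer(self, peer):
        self.trust.setdefault(peer, 1.0)

    def mark_malicious(self, serving_peers):
        # A human classified a resource as malicious; penalise its sources.
        for peer in serving_peers:
            self.trust[peer] = max(0.0, self.trust.get(peer, 1.0) - self.decrement)

    def trusted_peers(self):
        return {p for p, r in self.trust.items() if r >= self.cutoff}

node = LocalTrust()
for p in ("peer-a", "peer-b"):
    node.add_peer(p)
for _ in range(4):
    node.mark_malicious(["peer-b"])   # repeatedly serves bad content
print(node.trusted_peers())           # {'peer-a'}
```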

Simulation framework with threat models:
[https://github.com/Psybernetics/Trust-Toolkit](https://github.com/Psybernetics/Trust-Toolkit)

Peer-to-peer caching proxy PoC:
[https://github.com/Psybernetics/Synchrony](https://github.com/Psybernetics/Synchrony)

------
Seylerius
Constraint programming is an interesting model to discuss here. I've been
looking into machine-generated constraints, and there's a lot of potential for
transparency when you get into constraint-producing functions that also
produce labels, and then go further up the abstraction ladder from there.
Structured right, you can produce dynamic labeled matchers, especially if you
build a UI tool for marking particular hateful elements of hate speech.
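
A loose sketch of what "constraint-producing functions that also produce labels" could look like (the constraints, labels, and helper names are all invented; the point is only that each filter carries a human-readable explanation):

```python
# Loose sketch: each helper returns a human-readable label plus a
# matcher, so the resulting filters stay transparent.  All names and
# rules here are invented for illustration.
import re

def term_constraint(terms):
    pattern = re.compile("|".join(map(re.escape, terms)), re.IGNORECASE)
    return ("contains flagged term", lambda text: bool(pattern.search(text)))

def shouting_constraint(min_len=12):
    return ("mostly uppercase", lambda text: len(text) >= min_len and text.upper() == text)

constraints = [term_constraint(["example-slur"]), shouting_constraint()]

def explain(text):
    # Return the labels of every constraint the text violates.
    return [label for label, match in constraints if match(text)]

print(explain("THIS IS ALL SHOUTING"))   # ['mostly uppercase']
```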

------
tscs37
I've had the idea of using a distributed IP blacklist with federated
subscriptions a la Diaspora and GNU social before, and I think it might be the
one solution that actually has any kind of leverage in fighting abuse at the
fail2ban level for everyone who is not Google.

There is probably still more work to do and this (IMO) does not scale to Layer
8 that easily.
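
A minimal sketch of the subscription idea: each node publishes its local fail2ban-style bans and merges in the lists of nodes it chose to follow (everything here, including the one-hop rule, is an assumed design, not an existing protocol):

```python
# Sketch of a federated IP blacklist: nodes publish the addresses they
# banned locally and merge in the lists of nodes they subscribe to.

class BlacklistNode:
    def __init__(self, name):
        self.name = name
        self.local_bans = set()
        self.subscriptions = []   # other BlacklistNode instances we trust

    def ban(self, ip):
        self.local_bans.add(ip)

    def subscribe(self, node):
        self.subscriptions.append(node)

    def effective_blacklist(self):
        merged = set(self.local_bans)
        for peer in self.subscriptions:
            merged |= peer.local_bans   # one hop only, to limit list poisoning
        return merged

small_site = BlacklistNode("small-site")
friendly_pod = BlacklistNode("friendly-pod")
friendly_pod.ban("203.0.113.7")
small_site.subscribe(friendly_pod)
print(small_site.effective_blacklist())   # {'203.0.113.7'}
```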

------
lotusko
Machine learning may help reduce the need for human intervention.

~~~
milesrout
Hopefully by labelling things more interestingly than 'evil hate speech' and
'lovely pro-feminist speech'.

When all your email filter is doing is classifying things as 'spam' or 'not
spam', it's fine to have a binary filter. But what should be moderated on a
forum is a
completely different kettle of fish. I want Twitter _completely_ unmoderated
(except for literally illegal content). Some other people, _clearly_, want
Twitter restricted to only things they agree with. We should both get what we
want.
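
A tiny sketch of how both could get what they want: multi-label tags on posts plus a per-reader choice of which labels to hide (the labels and post data are made up):

```python
# Sketch: instead of one binary spam/ham decision, posts carry labels
# (from classifiers or moderators) and each reader picks what to hide.

POSTS = [
    {"id": 1, "labels": set()},
    {"id": 2, "labels": {"gore"}},
    {"id": 3, "labels": {"illegal"}},
]

def timeline(posts, hidden_labels):
    # Keep any post that carries none of the labels this reader hides.
    return [p["id"] for p in posts if not (p["labels"] & hidden_labels)]

# One reader hides only illegal content, another also hides gore.
print(timeline(POSTS, {"illegal"}))           # [1, 2]
print(timeline(POSTS, {"illegal", "gore"}))   # [1]
```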

