
What Is the Best Way to Stop Internet Trolls? - rfreytag
http://www.bbc.com/future/story/20160318-what-is-the-best-way-to-stop-internet-trolls
======
CPLX
I feel like an odd man out every time this topic comes up. Like I'm sure many
others here I've been involved in online communities and discussions in some
form or another since it was all green text on a black background, and I don't
think things are any different now than they have ever been in any way that
really matters.

The secret to dealing with internet trolling is some combination of ignoring
it, turning your devices off now and then, and realizing that the character
you are interacting with online is not actually a real person. Though -- like
a character in your favorite movie or TV show -- it is of course created by a
real person, it is still a fictional character.

With the exception of actual actions that come into the real world, such as
fake reporting to the police, interfering with someone's ability to have a
job, etc., the vast, vast majority of online "trolling" is solved by ignoring
it. Same as it ever was.

~~~
DanBC
Your "solution" means many people have to not use the internet. Sucks to be
them, I guess. Meanwhile, people saying they want to rape you, or posting
images of dead children to you, or posting your address and saying they're
going to murder you, get to carry on without hindrance.

~~~
stonogo
That behavior is illegal regardless of the medium. Report it to the police. It
has nothing to do with the internet.

~~~
DanBC
Wait, parent post is saying to just ignore it. You're saying to report it to
police.

Which is right?

Thousands of reports are made to police each year.
[http://www.bbc.co.uk/news/uk-england-london-24160004](http://www.bbc.co.uk/news/uk-england-london-24160004)

> About 2,000 crimes related to online abuse are being reported to the police
> in London each year, according to new figures.

Over a thousand reach the threshold for court action:
[http://www.bbc.co.uk/news/technology-23502291](http://www.bbc.co.uk/news/technology-23502291)

> More than 1,700 cases involving abusive messages sent online or via text
> message reached English and Welsh courts in 2012, the BBC has learned after
> a Freedom of Information request.

The Criado-Perez trolls were enabled by the Internet. They saw other people
trolling and thought it was okay; they were able to create many accounts
rapidly; they were able to send very many messages.

~~~
Zikes
You're discounting the possibility that there's a difference. There are
varying levels of harassment, and it should be up to the victim's discretion
how severely it should be treated. If person A calls person B a jerk on Twitter,
that's easily enough ignored (unless there's a pre-existing personal or
parasocial relationship between persons A and B that would make "jerk" sting
more than it otherwise would), but if person A organizes a campaign involving
all of their followers whose sole intent is to make person B feel threatened
then that's an entirely different matter.

The hardest part of dealing with online harassment and "trolls" seems to be
drawing that line, or even acknowledging that it exists.

------
jessaustin
TFA mentions Twitter at the beginning, but doesn't follow up at all with how
terrible their interface is and has been for targeted users. The "solutions"
discussed in TFA require the coordinated voluntary actions of users and site
administrators, and are therefore impractical. We need instant technical
solutions. It should be possible to "un-@" any other user, with no questions
asked. That is, after 'alice takes this step with respect to 'bob, no matter
how many times 'bob includes the string "@alice" in his tweets, neither 'alice
nor any of her followers would ever see those tweets or any other by 'bob.
This would not be difficult, and would make Twitter much less threatening.
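Sketching the rule in a few lines of Python, just to show how simple it is. (Purely illustrative; `Timeline`, `follow`, `un_at`, and `visible_to` are all made-up names, not anything in Twitter's actual API.)

```python
# Toy model of the proposed "un-@" rule: once alice un-@s bob, bob's tweets
# are hidden from alice and from everyone who follows alice.
# All names and structures here are hypothetical, not Twitter's real API.

class Timeline:
    def __init__(self):
        self.following = {}  # user -> set of accounts they follow
        self.un_ats = {}     # user -> set of accounts they have un-@'d

    def follow(self, follower, followee):
        self.following.setdefault(follower, set()).add(followee)

    def un_at(self, user, target):
        self.un_ats.setdefault(user, set()).add(target)

    def visible_to(self, viewer, author):
        # A tweet by `author` is hidden from `viewer` if the viewer, or any
        # account the viewer follows, has un-@'d the author.
        sources = {viewer} | self.following.get(viewer, set())
        return not any(author in self.un_ats.get(u, set()) for u in sources)
```

So if carol follows alice, and alice un-@s bob, then bob becomes invisible to both alice and carol, while an unrelated user who doesn't follow alice still sees him.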

~~~
fweespee_ch
> That is, after 'alice takes this step with respect to 'bob, no matter how
> many times 'bob includes the string "@alice" in his tweets, neither 'alice
> nor any of her followers would ever see those tweets or any other by 'bob.
> This would not be difficult, and would make Twitter much less threatening.

The problem with that is that @alice then controls whose tweets I can read if
I want to follow her content. :/ Just because I want to hear what @alice has
to say doesn't mean I want @alice to control whether I can talk to @bob.

I think solid ignoring features are really the limit to this approach due to
that fact. [e.g. @alice ignores @bob, @bob never shows up in her feed no
matter what]

~~~
jessaustin
OK, split it down the middle. If you're already a @bob follower when you start
following @alice or when @alice un-@s @bob, whichever happens later, then
@alice's action has no effect on you. Even if @bob has been hidden from you by
@alice's action, it only affects internal-to-twitter links like retweets. If
you have a link directly to @bob or his tweet from Google or from an email,
you can follow it successfully, and that action removes the hiding effect for
you. These extra complications would probably be good for @alice, since it
would make it much less obvious to bullies that she had taken this action in
the first place. (Yes, if @bob has observant friends, they'll notice
eventually, but that's probably OK.)
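Roughly, the compromise boils down to one test: were you already following @bob before the later of the two events? A toy version (all names hypothetical, just to pin down the rule):

```python
# Toy version of the compromise: @alice's un-@ of @bob hides @bob from her
# followers, EXCEPT viewers who already followed @bob before the later of
# (viewer follows @alice, @alice un-@s @bob), or who have since reached
# @bob via a direct external link. Everything here is illustrative.

def is_hidden(viewer, author, following, follow_time, un_ats, un_at_time,
              link_exempt):
    """following: user -> set of followees; follow_time: (u, v) -> timestamp;
    un_ats: user -> set of un-@'d accounts; un_at_time: (u, v) -> timestamp;
    link_exempt: set of (viewer, author) pairs cleared by a direct link."""
    if (viewer, author) in link_exempt:
        return False  # followed an external link to the author: hiding removed
    for blocker in {viewer} | following.get(viewer, set()):
        if author in un_ats.get(blocker, set()):
            later = max(follow_time.get((viewer, blocker), 0),
                        un_at_time[(blocker, author)])
            already = follow_time.get((viewer, author))
            if already is not None and already <= later:
                continue  # grandfathered: was following the author first
            return True
    return False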

I agree with you, the main point is that @alice doesn't have to smell @bob's
shit anymore.

The point isn't to nail it down exactly right here and now. The point is that
this is a discussion that should have taken place many times at Twitter, and
we should see some concrete results by now, rather than fluffy PR-focused
fake-internal memos to the effect of, "ok guys now we gotta try hard!"

~~~
fweespee_ch
> OK, split it down the middle. If you're already a @bob follower when you
> start following @alice or when @alice un-@s @bob, whichever happens later,
> then @alice's action has no effect on you. Even if @bob has been hidden from
> you by @alice's action, it only affects internal-to-twitter links like
> retweets. If you have a link directly to @bob or his tweet from Google or
> from an email, you can follow it successfully, and that action removes the
> hiding effect for you. These extra complications would probably be good for
> @alice, since it would make it much less obvious to bullies that she had
> taken this action in the first place. (Yes, if @bob has observant friends,
> they'll notice eventually, but that's probably OK.)

That could work well enough. :)

Yeah, my main concern was a power user who people wouldn't want to unfollow
abusing the feature to silence critics.

------
mdip
I've wondered about this for a while. There's no shortage of _dicks_ [1] on
the Internet. It's too easy to spout off and attack someone online in a way
you'd never do "in person", and there are plenty of people who make a sport of
it. For the former, there's not much that can be done; for the latter, I think
there are a few solutions (variations of which I've seen tried).

Personally, I think it starts first with community standards and moderation.
Hacker News, Stack Overflow and such do a pretty good job already. You get a
share of jerks here and there but by and large the community self-filters the
tone[2] as much as the quality of the comments.

When the inevitable troll appears, one of the best approaches I've seen is to
silence them without their knowing it. I wonder how often this practice is
figured out and circumvented by creating a new account. It'd be possible to
extend this further by categorizing troll posts and selectively providing them
to each other (as well as providing all replies) so that they think "it's
working" and don't bother switching accounts, all the while hiding _all_ of
the trash from everyone else. Perhaps that's a lot of wasted energy but I'd be
interested in hearing if something like that has been tried anywhere.
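A minimal sketch of that visibility rule (hypothetical names; reply-threading and the troll-classification step itself are left out):

```python
# Toy filter for the idea above: posts by flagged accounts remain visible to
# other flagged accounts (so the trolling still "works" from their side) but
# are hidden from everyone else. How accounts get flagged is out of scope.

def visible_posts(posts, viewer, flagged):
    """posts: list of (author, text); flagged: set of troll accounts."""
    if viewer in flagged:
        return posts  # flagged users see everything, including each other
    return [(author, text) for author, text in posts if author not in flagged]
```

So a flagged account still sees its own posts and the replies from fellow trolls, while the rest of the site never does.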

[1] That word always seems to describe the behavior best, to me. I'll use
"jerk" from now on. :)

[2] I've always believed that you can say "someone/something is wrong on the
internet!" without having to be a jerk about it and have seen many examples of
such on Hacker News and the StackExchange family of sites. Those who cannot
do so lack creativity and are, unfortunately, less likely to have their
(possibly) valid arguments heard.

~~~
ross-life
_one of the best approaches I've seen is to silence them without their
knowing it_

Shadowbanning does work well but requires (somewhat obviously) fair
admins/moderators. Otherwise it becomes something like a censorship tool.

------
arjie
The article alludes to it, but there's some pretty neat work at Riot Games
around improving player behavior. Here's a pretty good video:
[http://gdcvault.com/play/1017940/The-Science-Behind-Shaping-Player](http://gdcvault.com/play/1017940/The-Science-Behind-Shaping-Player).
It's half an hour long but has some fun results. When you have a
player base as large as Riot does with League of Legends, some sorts of
experiments become possible.

Good fun.

------
stegosaurus
Journalists seem to make a habit of misunderstanding the term 'troll'.
Trolling is a joke. Harassment is not.

Joining a dungeon in World of Warcraft and wiping the group is trolling.

Sending someone genuine sounding death threats on twitter isn't.

There's a spectrum there somewhere, but more often than not it feels like
we've lost another useful term to hyperbole.

~~~
elthran
Always this. The BBC seems especially bad about calling being insulted or
offended by something on the internet "trolling".

------
coldcode
A beer truck? Seriously, everywhere you have free and open communication some
people will be trolls. The only difference between today and say 2500 years
ago is the speed of dissemination and little fear of being run through with a
sword.

~~~
mdip
_The only difference between today and say 2500 years ago is the speed of
dissemination and little fear of being run through with a sword._

I think, though, that it's a pretty _huge_ difference. The speed with which
someone can crap all over a discussion is instantaneous and psychologically a
lot easier due to not being "in person". Trolling in the form that it would
have existed a couple of centuries ago[1] still exists today (without the
sword, perhaps, but the threat of violence in retaliation to trolling still
exists today -- we just have better technology and better enforcement). I
would use Donald Trump as an example. He gets his share of trolling (and non-
trolling protesting) at his in-person events; however, I'd hazard a guess that
the amount of such activity during his entire campaign would be about as much
as is seen in an hour on Reddit.

[1] We don't have a lot of historical evidence that dates back 2500 years, so
I'm assuming that was a typo or an embellishment used to make a point (or,
perhaps, something I missed in my speed reading of the article).

------
buckbova
Remove all perceived anonymity. Of course that sense of anonymity is what
makes the experience richer and safer.

I'd rather just live with the trolls. Is it that bad?

~~~
DanBC
People are happy to make idiotic comments under their real names linked to
their real identity.

See the Sikh who appeared in a Facebook advert ("What's this raghead terrorist
doing on my wall?") or the comments under any news article (especially the
Daily Fucking Mail ("Kill the immigrants")).

~~~
buckbova
They exist and are ultimately in the minority.

Anonymous apps tend to bring out the worst in people, especially with groups
like teenagers and cyberbullying.

------
parennoob
I have yet to see an article about Internet trolls and cyberbullying that
addresses the nuances inherent in a couple of points:

1\. Disagreement is inherent to a lot of discourse. Social networks are not
research papers -- some of the disagreement is going to be accompanied with
strong displays of emotion. Where does the line between strong disagreement
and bullying lie? If Anne tells Bob, "Bullshit, your arguments don't make any
sense and you are just an arrogant cyberbro I DRINK MALE TEARS", is she
cyberbullying him? Should her access to Twitter be cut off, or would
voluntarily blocking her on the part of Bob address this problem in a
sufficient manner?

2\. How to balance prevention of trolling with limiting self-expression? The
outside world is not a perfect place where everyone engages in reasoned
discourse either. Given that pub-sub systems like Twitter are the equivalent
of everyone shouting their opinions loudly in the same street, how much of the
unreasonableness inherent in human discourse do we allow to penetrate these
social networks to keep them essentially human in nature and not turn into a
LinkedIn style list of platitudes?

I wish the article went some ways towards addressing these. Instead it seems
to focus on ad-hoc initiatives set up by a lot of people who (in my opinion)
seem to not have any concrete ideas of what they are trying to accomplish. For
example, one of the research papers linked [1] describes cyberbullying as
including "...posting mean or hateful comments, aggressive captions or
hashtags". In my opinion this is a rather extravagant definition of the term
and should be examined more carefully and revised per the points above.

\--------

[1] "Detection of Cyberbullying Incidents on the Instagram Social Network"
[http://arxiv.org/pdf/1503.03909v1.pdf](http://arxiv.org/pdf/1503.03909v1.pdf)

~~~
boopuyy1
I just thought it would be good to point out that you are creating a strawman.
You've created an example that you acknowledge isn't cyberbullying, and used
that to suggest that therefore cyberbullying can't exist in any form.

It's a disingenuous argument at best.

~~~
threatofrain
A straw man is a misrepresentation of someone else's argument, and the poster
did not say anything so unsophisticated as what you have suggested.
Your argument, on the other hand...

~~~
boopuyy1
Not to point out the obvious, but "is she cyberbullying him?" and "Should her
access to Twitter be cut off" are both clearly attempting to misrepresent the
argument of the other side. Just because they are rhetorical questions doesn't
mean it isn't obvious what the poster is implying.

~~~
threatofrain
This is the parent-level post. There's no dishonest misrepresentation of
anyone. The poster also didn't even come close to suggesting anything so
unsophisticated as "therefore cyberbullying can't exist in any form!"

That is an awful misrepresentation.

The paragraph in question basically pushes the conversation toward, "Where are
the bright lines?"

------
javajosh
Easy: introduce laws that require all communication to be consensual. We don't
allow people to force food down your mouth, and we shouldn't allow people to
force words into your brain either.

What would this be like? Before seeing something, you'd be asked if you want
to see "An advertisement for a home care product." Or "An angry personal
attack" (Maybe with some indication of who it's against.)

This solution allows everyone to say whatever they want, and adds the ability
for everyone to not be exposed to what they don't want to be exposed to.

~~~
Lawtonfogle
>An advertisement for a home care product.

How do you know I want to be exposed to a consent related question?

Kinda like how asking certain questions about consent can be classified as
harassment if you have already been told no, even if the actions for which
consent was being requested were never carried out.

~~~
javajosh
Sufficient pedantry can make any solution seem unworkable.

------
chris_wot
For all the good that Twitter does, it is far outweighed by the social harm done.

I don't use Twitter and likely never will. It's a cesspool of discrimination
and abuse. I hope it dies as a medium.

------
CM30
Moderation is the best way to stop them. If you've got actual people going
through posts and content and removing the low-quality stuff, you'll maintain
a much more well-ordered and useful community than you would with a mostly
'uncensored' platform.

Unfortunately, this has a few issues...

1\. You need the manpower to actually check what's being posted on a regular
basis. Or enough active members willing to help out that they'll do it for you
on a voluntary basis as moderators. This is obviously easier for small sites
like customer support forums and the average fan site.

2\. You need to really, really limit bias as much as possible, so that you're
booting out people or bots because of their behaviour rather than their
political/social stances. This is what Twitter doesn't do at the moment, to
the point where being a 'progressive' troll is a good way to never get banned,
while being a conservative or having an unpopular opinion will get your account
suspended for the smallest transgression.

3\. A willingness to ban popular, rich or 'powerful' users as much as anyone
else, if they break the rules. A good forum/community manager will just as
easily kick out a troll with ten thousand posts as one with ten, but social
media sites don't seem to have figured this out yet.

So for a community manager or site owner, moderation is the best way to stop
internet trolls (and every other type of troublemaker or nuisance as well).

Making a user's post invisible to everyone but himself (shadowbanning or, as
vBulletin called it, Tachy Goes to Coventry) and making their time on the site
a nightmare due to fake errors (hell banning/miserable users) used to work.
Note the words "used to", since the internet is a lot more savvy about this
sort of stuff and will usually figure out these systems pretty quickly now.

As for how to avoid internet trolls as a user? Well, stay off large social
sites with few rules (or in some cases, extremely badly enforced ones) and go
to places where the people tend to be a bit nicer to each other due to a
shared interest or two. Less Twitter, more forums.

Stay fairly anonymous wherever possible. Seriously, trolls love knowing every
detail about someone's offline existence and what annoys or upsets them. By
giving less personal information, you force people to either REALLY dig for
information to use or to attack your arguments rather than your
background/appearance/personality.

Don't engage in arguments with trolls/idiots. The more you make it seem to
affect you, the more pleasure a troll will get from their actions. If you've
ever watched The Simpsons, you probably remember the scenes where Bart prank
calls Moe's Tavern, right? Or how often, Moe would basically scream back down
the phone in anger afterwards? That's what a troll wants to see. They want
people who are easy to annoy.

So yeah. Moderated community platforms tend to be the best way to stop
internet trolls, since they mean such users and comments are removed as
quickly as possible.

~~~
makomk
> You need to really, really limit bias as much as possible, so that you're
> booting out people or bots because of their behaviour rather than their
> political/social stances.

This is a political non-starter right now, which is why Twitter doesn't do it.
Seriously, when they accidentally ban someone for threatening violence against
other Twitter users in the name of left-wing activism, a huge swathe of the
tech and security community launches a campaign to reinstate them and accuses
Twitter of all manner of evils for banning them in the first place.

~~~
CM30
Well, they could always get a thicker skin themselves. Forum and community
owners have been the subject of complaints for doing stuff like that for
years; the best solution is just to say 'here are the rules, if you don't like
them, LEAVE'. You have to be willing to potentially lose a few popular users
to keep the peace.

------
umeshunni
Don't feed them?

~~~
vkou
That doesn't work. They are perfectly happy to revel in the attention of their
friends.

[http://freethoughtblogs.com/almostdiamonds/2012/02/28/dont-feed-the-trolls-is-bad-science/](http://freethoughtblogs.com/almostdiamonds/2012/02/28/dont-feed-the-trolls-is-bad-science/)

~~~
mdip
This is what caused me to wonder if allowing trolls, but eliminating their
conversation from everyone but other trolling users might be an option. Let
them revel in the attention of their friends, but have that attention only
serve to give them the illusion that they're affecting others while _directly
causing_ others to ignore them (and thus, prevent unintentional feeding). It'd
be tricky to get right and probably not worth the energy in most places, but
would be interesting to see and study the results.

~~~
CM30
So in other words, like a multi user version of Shadowbanning?

It's an interesting idea (and I'm 100% sure it's been done before), but it
doesn't really work out on any site with a significant audience. I mean, all
you need is one of those trolls to access the site at work/school/wherever and
notice their posts aren't being shown if they're not logged in, and then
suspicions are raised.

Too easy to bypass and figure out. It's like how people figured out Twitter's
shadowbans, because people who followed them said off site that they weren't
seeing their messages.

------
vegancap
Ignore them.

------
JoeAltmaier
In public we arrest those who break down civil society by smashing windows,
beating people up or threatening law-abiding citizens. On the internet, we
dispute whether anything is actually going wrong. Or how to automatically
solve it.

At some point I think a wet team is the only solution.

~~~
bcook
Wet team? (Are you referring to "wet-work", aka murder?)

------
miguelrochefort
There is no such thing as a troll.

There is such a thing as people being presented with content that's irrelevant
to their interests.

A better way to semantically express what we need, as well as a recommendation
system to go with it, is what we lack today.

------
scrupulusalbion
I always find it strange how people are seriously concerned with trolling. I
understand that children, the emotionally immature, and the psychologically
ill are actually harmed by trolling. However, those people are already at risk
of being harmed by anything approaching verbal abuse. These are the people who
have more important issues to deal with than trolling. These people are
irrelevant in this context.

The people who matter to trolls are those who can't take a joke, who are super
serious all the time; they want to be mad about something. Trolls merely give
those people what they want. Flame wars ensue. Purposeful trolls are social
pranksters and are conscious of their trolling; these can be reasoned with and
are thus not serious issues to fora.

The actual trolls worth worrying about are those who troll yet know not their
trollish behavior. These people begin arguments over trivialities and get
called jerks/dicks/assholes. These people are much harder to reason with,
because they will only listen when they are emotionally sober; bringing up
their trolling just gets them deep into another argument. You can't stop
trolls; they have to grow out of their own emotional immaturity.

Thus, as the unconscious trolls are themselves a subset of those harmed by
trolling, they can and do harm others without understanding what harm they
cause.

