
An experiment in fighting racism on Twitter - bootload
https://www.washingtonpost.com/news/monkey-cage/wp/2016/11/17/this-researcher-programmed-bots-to-fight-racism-on-twitter-it-worked/
======
jk563
> "I used Twitter accounts I controlled (“bots,” although they aren’t acting
> autonomously) to send messages"

It's a shame they over-hyped the title and made it all clickbaity. The post
itself clearly states that they are not autonomous and therefore not
programmed. Possibly not helped by the use of the word "bot" over and over. Am
I right in saying that "bot" implies autonomy?

That aside, interesting short read with a fairly intuitive result. Good to
see it backed up with actual research, though it doesn't mention how many
accounts were targeted during the research.

Is there potentially a way of analysing messages for racism in the same way
some do sentiment analysis? Is it the same sort of problem? I'm not even a
beginner in the field, let alone an expert!
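Detecting racist harassment is broadly the same family of problem as sentiment analysis: supervised text classification. As a minimal sketch (the toy training data and the `train_nb`/`classify` helpers here are invented for illustration, not from the study; real systems need far larger labelled corpora), a bag-of-words Naive Bayes classifier looks like:

```python
from collections import Counter
import math

# Toy labelled data, invented for illustration only.
TRAIN = [
    ("you people are subhuman", "abusive"),
    ("go back where you came from", "abusive"),
    ("have a great day everyone", "benign"),
    ("loved the game last night", "benign"),
]

def train_nb(examples):
    """Count per-class word frequencies for a multinomial Naive Bayes model."""
    word_counts = {"abusive": Counter(), "benign": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the class with the highest Laplace-smoothed log-probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # log prior + sum of log likelihoods, with add-one smoothing
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts = train_nb(TRAIN)
print(classify("you subhuman people", word_counts, class_counts))  # abusive
```

The hard part in practice isn't the classifier but the labels and the context (sarcasm, quotation, in-group reclamation), which is where such systems tend to fail.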

~~~
rtpg
There's the small issue of the use-mention distinction, but generally I
imagine it could work well. There are so many dog-whistles out there
(especially for the alt-right) that the sort of overtly racist comments on
Twitter would come out quickly.

In Twitter's case, follower graphs would also probably be a good indicator, at
least for people who follow less than 200 accounts. Not to mention that a lot
of these people overtly say in their twitter bios that they're part of the
frog police or whatever.

Twitter could nuke Alt-right Twitter from orbit pretty easily. Then they'd
just go run off and use some "actual free speech" twitter clone and fester in
their own space. Just like they do when reddit/4chan cleans house.

~~~
jk563
I don't think banning (my assumption of your intention behind nuking) is the
correct course of action.

I just had a similar chain of thought about a story of a students' union
wanting to ban certain newspapers from the campus. My opinion is that, in
that instance, the union should focus on the people who buy those newspapers.
Why do they buy them? Do they agree with the messages? Or perhaps they
believe what they see without critical thought? There is then either an
ideological debate to be had or perhaps critical-thinking education that can
be undertaken.

Hiding the message of an organisation or individual seems morally wrong to me.
I'd much rather see the opinions published and shunned rather than outright
censored. In the Twitter example, I'd like to see the classifications used to
help understand the viewpoint of others or provide further information prior
to posting, rather than to censor / ban individuals or groups.

Not a fully formed idea, for which I apologise. My gut (and it is only my gut)
tells me there is a useful aid there.

~~~
rtpg
I have a bit of trouble with these arguments, because the "free speech"
argument feels really recent to me.

At its core, Twitter is an assemblage of communities. Twitter can simply
decide that, no, it doesn't want neo-nazis on its website. These people are
then free to go make their own websites to hang out on.

Websites like Digg or Something Awful, despite also not being about single
topics, were never sitting around thinking "Oh, I want to ban this guy that
goes around calling black people monkeys, but _free speech_ ". Even 4chan
would ban people for that sort of dumb stuff. I feel like this argument only
became prevalent after reddit became big.

There is an argument to be made that reddit's philosophy on this point
single-handedly made all "community" websites worse, because it stigmatized
the notion of "drawing a line in the sand", and implied that all websites
have to accept everyone.

I think in the newspaper example, Twitter is a newspaper, not the campus.
People can make new websites easily, just like they can make new newspapers.
Twitter is under no obligation to "publish" stuff if they don't want to. The
internet is the campus. No DNS blocking, just banning users from some
websites.

And, honestly, the number of people who care about the "free speech" issue is
dwarfed by the number of people who see Twitter as "the place to get trolled
at". It doesn't seem like good branding to be at the same level as 8chan on
this point.

Though, to your point, on a more general level, figuring out what's going on
in the audience for these sorts of messages is important.

~~~
yomly
Actually I think there's more nuance than simply whether or not to silence
certain views.

The problem with the alt-right is that it is an idea. Ideas are very hard to
kill by censorship and when you do let them "fester" they get more extreme and
radical. This is in part how we arrived at the alt-right becoming what it is
today.

We should want the alt-right out in public in spaces where there are checks
and balances. There, it can be killed by moderation (in both senses of the
word, organic and inorganic) and dilution.

If you push the alt-right to exclusively use their own walled gardens, to
concentrate in their own unmoderated, subverted echo chambers, that's when you
should be very terrified because that's when you birth actually scary groups.

There will always be unpopular opinions on some subject. No matter how much
proof you have, there will be flat earthers and creationists. No matter how
much you promote equality, there will be bigots. But there is also always a
continuum on the spectrum of these affiliations - some people may be mildly
xenophobic and have valid concerns on immigration or have ignorant/antiquated
views on gender equality, for example.

When you take a hard-line stance on these opinions in the hope of promoting
one ideal (no matter how noble), you always run the risk of also promoting
the other through polarisation.

~~~
briandear
Do you think the "alt-right" is the only major problem? Try tweeting something
against a Hillary or Obama policy during the election or challenge a 'liberal'
person -- the responses can be no less harassing and vitriolic.

This idea that the 'right' is a problem is in itself part of the problem.

There is plenty of harassment from those that claim they are more tolerant
than the rest of us.

I do agree with your echo chamber point though -- one needs only to spend a
few minutes on Mother Jones's comment section to read all manner of hate
against those who have different opinions from their own.

You are exactly correct -- the public sphere ought to welcome all viewpoints
as sunlight is ultimately the best disinfectant.

One of the reasons Hillary lost the election is that her campaign seemed to be
in a bubble -- holding valid concerns of many groups in contempt while
surrounding itself with voices that spent more time self congratulating or
feeling intellectually superior whilst many Americans (such as in the Rust
Belt) found themselves marginalized. The Obama campaign by contrast actually
reached out to the so-called deplorables.

~~~
jessedhillon
Yes, a left wing bubble exists. However, without any qualifications of scope
and scale, it sounds like you're suggesting that the vitriol approaches a
comparable level to that of the alt-right. That's just a plainly false
equivalency.

~~~
tnone
You say that, when the alt right is an artificial internet meme, while leftist
dogma rules over universities and continues to go after people's jobs for
wrongthink.

Vitriol is clearly a bad measure when subjective interpretation rules are
becoming official policy. Even Twitter now has special treatment for
particular groups enshrined in its policy. You can be as vitriolic as you
want, as long as it's against the "correct" group.

~~~
someguydave
Hear hear. Equality before the law is the true egalitarian ideal.

~~~
hga
One problem is that, to the extent the inchoate Alt Right has decided upon
this (and I think it has), we _explicitly_ reject egalitarianism AKA
equalitarianism (they're cognate, right?). In Vox Day's first cut 16 point
formulation, Point 7 is:

 _The Alt Right is anti-equalitarian. It rejects the idea of equality for the
same reason it rejects the ideas of unicorns and leprechauns, noting that
human equality does not exist in any observable scientific, legal, material,
intellectual, sexual, or spiritual form._

From [http://voxday.blogspot.com/2016/08/what-alt-right-is.html](http://voxday.blogspot.com/2016/08/what-alt-right-is.html)

~~~
nickpsecurity
Well, I appreciate you sharing a clear statement on what it is. I haven't seen
one before this. Although plenty is debatable here, Nos. 14 and 15 look
immediately contradictory, as one leads to laws with a preference for whites
or discrimination against blacks while the other implies a pure meritocracy.
I don't
have to guess about this, though, given I live in the South whose history
shows how it plays out. ;)

Good that there's a number of items on the list I agree with. Some common
ground.

~~~
hga
Note that #14 is simply a version of this
[https://en.wikipedia.org/wiki/Fourteen_Words](https://en.wikipedia.org/wiki/Fourteen_Words)
Besides being cute in making it #14, I suspect he felt it needs to be
explicitly stated since its inverse is an explicit goal of our enemies.
Vox Day himself isn't "white", by the way; and, as a matter of taxonomy, he
divides the Alt Right into the Alt West and the Alt White, the latter a very
small subset.

The general answer to the apparent contradiction is found in the other points
and implied in #15: complete _separation_, e.g. #11: "_The Alt Right
understands that diversity + proximity = war._" No proximity means no war of
that sort, e.g. the sort embedded in those laws you refer to, and therefore
no problems of those sorts with #14.

Also, I should make clear this is Vox Day's first cut of a statement of what
the Alt Right is, but he thinks it's solid enough to get it translated into
quite a few languages now. But the Alt Right is still seriously inchoate.

~~~
nickpsecurity
Interesting. I do understand the concept of having an inverse. A person in
West Tennessee explained to me why she discouraged whites and blacks marrying.
At first that seems like pure hatefulness. No, her concern was that they rarely
have white kids. Enough of it happening would eliminate the white race. Other
races also work hard to preserve their ethnic history, especially cosmetic
stuff. I don't see it as any different so long as laws aren't passed for
banning it or people hated on for not following the preference. That is, I
don't like it but I understand it's similar to what other groups do.

"#11 "The Alt Right understands that diversity + proximity = war."

That needs some explanation. What does that mean exactly?

~~~
hga
#11 " _The Alt Right understands that diversity + proximity = war._ "

It means that if you put enough people of different identities in close
proximity, the inevitable result is "war". Maybe not Yugoslavia-type war, but
warfare of one type or another, including "simple" economic warfare, which
was _one_ of the causes of The War Between the States and had been a frequent
problem for the young Republic before that. It also happens on very small
scales; see for example the various non-native merchant classes in so many
countries, and that doesn't require large numbers on both sides.

There's a social scientist whose research said much the same thing, who
published his data early but otherwise sat on his results for more than a
decade: Robert Putnam. As Wikipedia puts it, " _His conclusion based on over
40 cases and 30,000 people within the United States is that, other things
being equal,_ more _diversity in a community is associated with_ less _trust
both between and within ethnic groups._ " (emphasis in original:
[https://en.wikipedia.org/wiki/Robert_D._Putnam#Diversity_and...](https://en.wikipedia.org/wiki/Robert_D._Putnam#Diversity_and_trust_within_communities)).

------
aaron-lebo
These findings are less than exciting:

 _Comparing across the panels of Figure 4 shows the decay in the effect of
Treatment C over time. Although the effect remains statistically significant,
the coefficient decreases steadily. In Panel A, the point estimate of -.27
indicates that the daily rate of the use of the word “_” decreased by .27 more
among subjects in Treatment condition C than among subjects in the control
condition. This average treatment effect for condition C decreased in
magnitude to -.17 in Panel B and -.11 in Panel C. I collected data for 2
months, but these results are not shown because none of the treatments are
significant._

...

 _Encouragingly, these effects persisted over time, for the first month under
study, although not for two months. Also, the effect was significant at p <
.05 in the two week time period, but it was only significant at p < .10 in the
one week and one month time periods._

Wow! The subjects cut back on their slurs for the next month (and what is the
practical difference of .27 fewer daily slurs _compared to the control
group_?)... and then they started doing it again.

[http://kmunger.github.io/pdfs/Twitter_harassment_final.pdf](http://kmunger.github.io/pdfs/Twitter_harassment_final.pdf)

Can't help but wonder how you get a Washington Post article out of that. It's
a good graduate-level semester paper. What connections does this guy have?

------
rtpg
There was a This American Life episode[1] where someone who was being harassed
got contacted by the troll.

The harassment was some pretty serious shit (making a twitter account of the
victim's dead father, then insulting through this). But one day the attacker
realized how awful the thing they did was, and ended up apologizing profusely
to her.

There was also that study about how people's views on gay marriage evolved
substantially after just a couple conversations with a gay couple, who would
say "Hey, why can't we get married?". Though there were some statistical
issues with that, IIRC...

I have a hard time saying "these are proof that yelling at trolls works"
because there are good and bad ways of communicating the point, but it sure
feels like direct confrontation works on some level.

Though I don't know how well this stuff works against people from /pol/, if
only because most of them don't actually care about the issues, just yelling
at people.

[1]: [https://www.thisamericanlife.org/radio-archives/episode/545/...](https://www.thisamericanlife.org/radio-archives/episode/545/if-you-dont-have-anything-nice-to-say-say-it-in-all-caps)
(Act 1)

------
contingencies
Is there an ethical question regarding the computer-assisted manipulation of
people's perspectives, however ignorant and wrong they may be? I believe so.
Personally I would not write this sort of software for this reason, and
consider government investment in such software an affront to basic human
rights - UN Article 13, Freedom of Expression: _Freedom to seek, receive and
impart information and ideas of all kinds, regardless of frontiers, either
orally, in writing or in print, in the form of art, or through any other
media_.

 _I disapprove of what you say, but I will defend to the death your right to
say it._ \-
[https://en.wikipedia.org/wiki/Evelyn_Beatrice_Hall](https://en.wikipedia.org/wiki/Evelyn_Beatrice_Hall)
(commonly misattributed to Voltaire)

~~~
whybroke
How is it remotely blocking someone's free speech or access to information
when you politely and automatically tell them that the blatant racial slur
they just used might offend someone?

~~~
guitarbill
One problem - and one that people often forget - is that historically,
societies change their position on what's appropriate.

It's also easy to see an automated approach leading to hilarious results
with, say, tweets from African-American rappers.

Finally, maybe we can move past this "might offend someone" stance that offers
0% nuance?

~~~
whybroke
It's a curious argument that says un-nuanced insults should be permitted but
pointing them out as such should not.

Or that in the past insults differed therefore we should ignore today's.

Or that hypothetically something hilarious might happen so don't call out
trolling harassment.

------
mibbiting
"and I only included subjects who appeared to be a white man"

Ah yes of course - racism can only be carried out by white men after all.

What a pile of rubbish. If you take offense at words, things are going to get
progressively worse for you as more and more automated bots enter the scene.

The answer of course, is for people to stop taking offense.

Do these people get offended at spam emails suggesting they need viagra or a
fuckbuddy? What's the point?

~~~
calibraxis
> Ah yes of course - racism can only be carried out by white men after all.

No one said that. Just that racism is an oppression where whites are the top
caste. Blacks can be racist against blacks too, for example internalized
racism.

However, blacks being racist against whites is an odd statement, like homeless
people being classist against the rich. Race is a pseudoscientific
classification of people used to justify entire economic systems like
slavery, and it still lives on, like awful backwards compatibility.

> Do these people get offended at spam emails suggesting they need viagra or a
> fuckbuddy?

Companies like twitter are far more aggressive at fighting spam than physical
threats. To the point where people simply reported abusers as spammers at one
point. That's why no one bought them.

Oops. Maybe they should've been less arrogant toward their embattled
userbase; so-called "social justice warriors" turned out to be normal people
just trying to help them make money. That's what happens when social media
companies refuse to learn from fields like sociology.
[http://www.businessinsider.com/disney-twitter-acquisition-tr...](http://www.businessinsider.com/disney-twitter-acquisition-trolls-abuse-harassment-report-2016-10)

~~~
eveningcoffee
_However, blacks being racist against whites is an odd statement, like
homeless people being classist against the rich._

And it indeed would be classist. You are neglecting this because you think
that homeless people cannot have any influence.

But consider this: a group of homeless people ambushes and attacks people
they consider rich. Would you still think that it is acceptable?

If we want to root out some behaviour, then we cannot tolerate it in any form.

~~~
pekk
How often is it happening that the rich are being ambushed by groups of
resentful homeless people? Is this a significant and recurring problem in your
country that existing policing is failing to address?

------
ivan_gammel
Would be great to see more data from this research, to understand the scale
of the effect and the quality of the analysis. How long were the measurements
performed? Was there a direct causal link between the tweet from the bot and
the reduction in the number of racial slurs? Were there any other events that
could have influenced the research, e.g. a hot topic that could consume most
of the subjects' attention? Was there a reduction in the number of
hate-speech tweets at all, or did the subjects just switch to milder
language? How big is the effect in relative numbers?

------
dkarapetyan
Isn't it interesting that time and again every online forum with moderate
affordances for anonymity quickly degrades into name calling and other generic
abuse? Seems like the only way to combat this degradation is with heroic
manual effort.

Maybe there are automated ways around the problem but it's tricky balancing
open discourse with automatic banning of abusive accounts since any kind of
automated system will invariably stifle legitimate discussions. The word
"nigger" when used in a discussion with the proper historical or even comedic
context can be benign but I doubt any automated system will be able to make
the distinction since that would require some kind of understanding of history
and comedy. For example,
[https://www.youtube.com/watch?v=VOwjtNEoRYg](https://www.youtube.com/watch?v=VOwjtNEoRYg)
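The use-mention problem raised upthread is easy to demonstrate. In this hedged sketch (the `naive_filter` helper and blocked-word list are invented; the placeholder token "slur" stands in for an actual slur), a keyword filter flags the word identically whether it is hurled as abuse or merely discussed:

```python
# Placeholder token standing in for an actual slur.
BLOCKED = {"slur"}

def naive_filter(tweet):
    """Flag a tweet if it contains any blocked word, ignoring all context."""
    return any(word.strip('.,"?!').lower() in BLOCKED for word in tweet.split())

# Use: the word aimed as an insult -- flagged, as intended.
print(naive_filter("you are a slur"))  # True
# Mention: the word discussed in a historical context -- flagged anyway,
# because the filter has no notion of use versus mention.
print(naive_filter('the history of the word "slur" in America'))  # True
```

Telling the two cases apart requires modelling context, which is exactly the understanding of history and comedy the comment above says automated systems lack.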

~~~
HenryTheHorse
Community moderation is no joke, but the gold standard for sustaining an
incredibly civil, intelligent community is probably Metafilter. Then again,
their size is a tiny fraction of Twitter's or Reddit's.

~~~
pekk
Isn't this the problem, that some forums get too big to moderate?

------
cs2818
As a CS researcher who often assists researchers in sociology with Twitter
studies, I struggle with how accurate or meaningful the results are when a
study depends on inferring too much about specific users.

I have never conducted a study that involved actively intervening (and would
like to know how IRB approval was obtained to do so), but the inferences that
must be made are difficult enough when simply observing user activity. I
would think running manipulation checks for the users' perception of the
intervening profile would be nearly impossible.

Ultimately what is most difficult about Twitter studies is that the data
collected is rarely available for inspection by anyone other than the
researcher collecting it. When so many subjective decisions must be made in
analysis this is particularly troublesome.

------
motardo
The sample size was 242. If you click the link to the paper, then click the
supplementary material link, you get
[this pdf](https://static-content.springer.com/esm/art%3A10.1007%2Fs11109-016-9373-5/MediaObjects/11109_2016_9373_MOESM1_ESM.pdf)
that explains the methods.

------
danjc
Did I miss it or is there no mention of sample size and average
tweets/day/sampled user?

------
kutkloon7
"I used a racial slur as the search term because I thought of it as the
strongest evidence that a tweet might contain racist harassment. I restricted
the sample to users who had a history of using offensive language, and I only
included subjects who appeared to be a white man or who were anonymous."

Right, because white men are the only ones who can be racist. If you have to
rely on a criterion like this, that's a very good sign that the method you
use is not objective.

Besides that, I really like the reply the author chose, the message and its
phrasing seem very effective, while staying respectful towards the tweeter.

~~~
cooper12
You're reading a lot into something they didn't even claim. They explain why
they had to choose a specific race in the following paragraph:

> It was essential to keep the race and gender of the subjects constant to
> test my central question: How would reactions to my sanctioning message
> change based on the race of the bot sending the message?

Why they chose people who were white shouldn't really matter, as their
conclusion was not "white men are racists". It was just something that had to
be kept constant, and grouping by race is not unfamiliar to researchers,
especially since in this case the whole experiment was based on the races of
the sender and receiver. (Also note that it's much easier to distinguish
between black and white than if they had made the profile pictures different
shades on the Fitzpatrick scale.)

------
fdsaaf
Is this Twitter that's fighting racism also the Twitter with this moderation
policy? [nsfw] [https://i.sli.mg/2V7OlX.jpg](https://i.sli.mg/2V7OlX.jpg)

------
eli_gottlieb
What has to kill the alt-right is sunlight: drag their ideas into public and
point out how those are wrong. With a boot, if necessary.

So for instance, no, race is not a biologically meaningful signifier.

~~~
sctb
We detached this subthread from
[https://news.ycombinator.com/item?id=12992743](https://news.ycombinator.com/item?id=12992743)
and marked it off-topic.

------
varjag
The findings are in a way depressing. Only the cohort of black bots with many
followers was at all effective, probably due to an intimidation effect on the
offender.

~~~
inimino
You misread the article. Only the white bots were effective. The black bots
with many followers had a negative effect, but only on a subset of the users.
Based on the effect sizes, and without more data, these findings don't look
terribly robust anyway.

