
Twitter Is Crawling with Bots and Lacks Incentive to Expel Them - rayuela
https://www.bloomberg.com/news/articles/2017-10-13/twitter-is-crawling-with-bots-and-lacks-incentive-to-expel-them
======
DoodleBuggy
On the contrary, it would appear that Twitter is incentivized to encourage
bots and trolls to flourish.

The definition of a bot could be problematic though. If a bot is merely
automated posting, then virtually every company, news org, website, and
anything else that uses automated posts is a "bot", and are those bad? I don't
think so, those are useful. The problematic bots are basically automated
trolls.

Troll bots increase "engagement" metrics by quite a bit; this is obvious to
anyone who looks at the automated bot replies to virtually any post from a
news agency or anyone with a large Twitter following. They also jump onto
every hash tag or breaking news event, spreading garbage. You'll find dozens
of bots making absurd statements, followed by bots responding to those bots,
and clueless people arguing with the bots. Combine that with the endless seas
of human trolls that also jump into everything, and those interactions
probably make up a huge amount of Twitter's usage.

Twitter remains useful for scanning headlines and breaking news, but outside
of that the service is a complete mess.

~~~
mv4
Fun story: I once wrote a primitive Twitter bot as an experiment, and forgot
about it. It's been running on a Raspberry Pi unattended for a few years now,
tweeting random stuff and periodically "engaging" with a predefined set of
topics.

It has a Klout score. It has more followers than me.

[edit: spelling]

~~~
nasredin
Do me a favor and check up on it once in a while.

If it becomes self-aware, we may have a problem.

I've seen this movie before somewhere...

~~~
mv4
I need to make a kill switch of some kind.

------
sigmar
How about the SEC opens an investigation into whether Twitter is defrauding
investors/advertisers by including bot accounts in user metrics? Or is that
too hard to prove?

~~~
tyingq
Facebook has been caught dead to rights a few times, like this one:

 _" According to a recently published report, Facebook says they reach 1.5
million Swedes between the ages of 15 and 24. The problem here is that Sweden
only has 1.2 million of ’em"_

 _" Facebook’s Ads Manager says that the website is capable of reaching 41
million Americans between the ages of 18 and 24. The problem is there are only
31 million Americans of that age."_ [1]

Nothing ever happens, other than maybe some grumbling from paying advertisers.

[1] [https://mumbrella.com.au/will-facebook-ever-stop-bullshittin...](https://mumbrella.com.au/will-facebook-ever-stop-bullshitting-471535)

~~~
autokad
A lot of people create alternate personas for varying reasons. There is a
sizable cohort of people who like to create accounts for online fantasy.
Also, there are many game accounts (some that even produce money).

Trying to police them off would be a mistake by Facebook.

~~~
tyingq
>a lot of people create alternate personas for varying reasons.

I'm sure they do, but I doubt 30% of them do. That's roughly what you'd need
to balloon from 31 million to 41 million. The 30% also assumes that EVERY
18-24 year old American has a Facebook account in the first place.

~~~
stephengillie
Another way to slice those data - 33% don't have accounts, 33% (9.9 million)
have just one account, and 33% have the remaining 30.1 million accounts spread
between them - on average, 3.04 accounts per person in that group, but one or
two people/groups may have thousands of accounts.

Edits: Correcting math. Thanks, kbenson.
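
That split can be checked with quick arithmetic (a sketch using round
numbers: 31 million people, 41 million claimed accounts; the exact averages
shift slightly with rounding):

```python
# Hypothetical split of 41M claimed accounts across 31M real people:
# one third with no account, one third with exactly one, and the final
# third holding every remaining account.
people = 31_000_000            # 18-24 year old Americans (rough estimate)
claimed_accounts = 41_000_000  # Facebook Ads Manager's claimed reach

third = people / 3
single_accounts = third                           # one account each
multi_accounts = claimed_accounts - single_accounts
per_person = multi_accounts / third               # average for the last third

print(round(per_person, 2))  # → 2.97, i.e. about 3 accounts per person
```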

~~~
r00fus
30% of the populace has 3+ accounts? That doesn't make sense - it doesn't
follow power-law distribution.

Most likely the .01% have hundreds to thousands of accounts. What does that
imply?

~~~
lostapathy
If somebody has hundreds or thousands of accounts, it implies they are running
bots ... which I guess brings the whole argument back to the point that active
humans with multiple personas are not responsible for inflation of reach
stats.

~~~
r00fus
You're implying that identities created using persona management software [1]
are bots? I guess you could though they'd likely pass a turing test.

[1]
[https://www.dailykos.com/stories/2011/2/16/945768/-](https://www.dailykos.com/stories/2011/2/16/945768/-)

~~~
lostapathy
I'm implying that if you have hundreds of accounts, you must be using software
to manage it and aren't actively engaged with all of them yourself. Therefore,
they are no better than bot accounts.

------
mixedbit
Once our tweet used a popular hash tag and got re-tweeted by some bot network.
It was really easy to tell the accounts were all bots: all the re-tweets
happened simultaneously, and the profiles were very similar, with similar
messy streams.

I went to the trouble of reporting each of the fake accounts manually (the UI
for this is far from convenient). After some time I checked the bot accounts
again. None of them was deleted, and our original tweet still had a large re-
tweet count, which basically misleads real Twitter users by making the tweet
look more interesting than it really was.

From this incident I no longer look at re-tweets as any reliable metric of
popularity or quality of tweeted content.

~~~
dawnerd
Flip side, it's really easy to tell when businesses hire botnets to push up
their follower count in hopes of being verified. If an account has tens to
hundreds of thousands of followers and no interactions from them, it's safe
to assume it's all faked. What's sad is Twitter seems to verify them without
actually looking.

~~~
enzanki_ars
But how can you tell if the bots are actually hired by the company? What if it
was someone outside the company trying to influence Twitter for their own
purposes?

------
mv4
This problem is much bigger than any of these articles seem to imply. I am
also convinced that Twitter knew this all along.

Unfortunately, bots don't just inflate user numbers for them, they create ad
revenue.

Additionally, the majority of self-proclaimed "influencers" rely almost
entirely on automation for publishing (i.e. they are not really "there" on the
platform), and bots for amplification.

If you dig deeper, there's a number of "certified" Twitter partners that
provide social media analytics and management platforms. They pay Twitter
for data access, but they employ 3rd parties to provide "amplification" for
them via bot farms - creating the illusion of effectiveness for the
subscriber.

The rate of real, authentic human discovery and engagement on Twitter right
now is incredibly low.

~~~
mschuster91
> Unfortunately, bots don't just inflate user numbers for them, they create ad
> revenue.

How? At the end of the day, a human sees the ad and so generates ad revenue
for Twitter. A bot a) does not see the ad tweets in its normal timeline feed
and b) does not feed back impression tracking to Twitter.

~~~
shostack
If the bots are posting content or attracting real human eyeballs with their
comments, would they not be able to generate revenue then?

~~~
mschuster91
Yeah, but that is entirely legitimate revenue.

------
nextstep
There are tons of fake accounts on Twitter. It doesn't take much to get
hundreds or even thousands to engage with you (follow you, reply to your
tweets with a common hashtag, etc.).

I thought a year or two ago that Twitter was avoiding cleaning house and
removing the obviously fake accounts because they were simultaneously trying
to show growth on the platform and if anything removing these bots would hurt
their growth metrics.

~~~
toomanybeersies
Instagram is in exactly the same boat.

Although they technically ban botting, and make cursory efforts to stop it,
it's not really stopped.

Here's a good article on Insta botting, which goes into further detail:
[https://petapixel.com/2017/04/06/spent-two-years-botting-ins...](https://petapixel.com/2017/04/06/spent-two-years-botting-instagram-heres-learned/)

------
llccbb
Twitter is also crawling with fun and light-hearted bots that are non-
manipulative and generate content for others to consume. I would hate to see
any of those bots get thrown off the platform.

~~~
jayroh
Spitballin' here but, if Twitter allowed for "registering" a bot, then these
sorts of fun and light-hearted bots could easily stick around.

If someone has a legit reason to fear registering a bot they run, then it's
probably for "not-good" reasons. And in my estimation "not-good" could be as
innocuous as those follower-fishing bots.

~~~
eric_h
Most bots that I've encountered on twitter "register" a bot by saying it's a
bot in the profile.

The "up-to-no-good" bots do no such thing, of course.

------
bradleyankrom
Not sure I agree that Twitter doesn't have incentive to get rid of the bots.
Bots are bad for user experience and contribute to noise that affects the
quality of inputs used by journalists, marketing types, etc. to evaluate
trends. The article references the number of active users as a critical piece
of Twitter's valuation, but Wall Street can't be dumb enough to completely
ignore the quality of those users.

~~~
delecti
The question is whether bots are bad _enough_ for the user experience to make
people leave.

~~~
bradleyankrom
Anecdotal, but I left in some part because of the noise created by bots.

~~~
blackflame7000
I left because there weren't enough bots to bring the average tweet quality up
to marginally awful status.

------
cwkoss
One thing I've been contemplating is the extent to which feedback on twitter
can alter a user's opinion or focus on a given topic.

Ex.

\- make a bot network of 100 bots.

\- identify some accounts, split them into control and experimental groups

\- choose a topic, such as "NFL"

\- every time someone in experimental group tweets something about the NFL,
like or retweet it a dozen times from a random slice of your bots

\- do this for a few weeks

\- at end of few weeks, does experimental group tweet about NFL more
frequently than control? Has the sentiment within their tweets gotten any more
extreme?
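
The steps above can be sketched in a few lines of Python (a toy sketch, not a
real Twitter client -- the account names and the `amplify` helper are
hypothetical placeholders for actual API calls):

```python
import random

NUM_BOTS = 100
bots = [f"bot_{i}" for i in range(NUM_BOTS)]

# Identify some accounts and split them into control and experimental groups.
accounts = [f"user_{i}" for i in range(40)]
random.shuffle(accounts)
half = len(accounts) // 2
control, experimental = accounts[:half], accounts[half:]

def amplify(tweet, n=12):
    """Like/retweet one tweet from a random slice of the bot network."""
    return random.sample(bots, n)

# Every NFL tweet from the experimental group gets ~a dozen bot engagements;
# the control group gets none.
engagements = {}
for user in experimental:
    tweet = f"{user}: something about the NFL"
    engagements[tweet] = amplify(tweet)

# After a few weeks, compare NFL tweet frequency and sentiment across groups.
```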

~~~
tw1010
Oh cool, this is something I've been thinking about too. Please, if you feel
you have the time, do email me (see my bio) with some lessons if you manage to
build something.

------
seizethecheese
The problem isn't that they lack incentives, it's that by expelling bots
they'll inevitably expel some real users. The cure isn't worth the side
effects.

~~~
jpttsn
So you’re saying that instead of trying to rescue their stock price, Twitter
are looking to treat everyone fairly, cost no object?

~~~
stale2002
Well apparently so, as they are choosing not to do so. Or maybe it isn't
costing them much money.

~~~
jpttsn
No, I think the difference GP ignores is that they could either protect the
false-negative real users, eating a loss for their good, or ignore real users
getting squeezed and boost the stock price by...? This is where it gets
fuzzy.

------
calvinbhai
How about these bots?

@newsyc250 @newsyc100 @newsyc50 @newsyc20

Why should these bot accounts be expelled? I just love the way some bot
accounts are so useful. In fact, following the above mentioned bot accounts,
gives you just enough number of Hacker News posts that you want to see, based
on how popular they are.

Similarly, there can be many bad bot accounts, but differentiating those from
good bot accounts may not be an easy task, and bound to fail with false
negatives and false positives.

~~~
codedokode
The article is not about bots generally but about the bots that (allegedly)
were controlled by Russia and (allegedly) were posting pro-Trump tweets and
(allegedly) could have helped him win the election.

But from a legal point of view, is Twitter obliged to find and ban such bots?
I don't think so. Marking them as bots would be a good idea though.

~~~
sli
> Marking them as bots would be a good idea though.

I agree. I like how Telegram does bots. They have first class support for
bots, and the separation between real user and bot is extremely clear. You
have to specifically create a bot account, and bots have certain restrictions
that users don't have (e.g. they cannot initiate a conversation themselves).

Of course, that assumes the bot is set up as intended. One could masquerade a
bot as a real user if they were clever enough and had a spare phone number,
presumably. I've not tried it, but I'm going to assume it's possible, since
you can very well write your own Telegram client.

------
shostack
How about incentive to expel users who violate their ToS like our President?
Surely threatening nuclear war violates their "Abusive Behavior" rules (which
are part of their TOS)... [1]

> "Violent threats (direct or indirect): You may not make threats of violence
> or promote violence, including threatening or promoting terrorism."

And that's just getting started.

Has there been ANY statement from Twitter leadership on why they permit Trump
to continue with this behavior? I have my own cynical answer, but I'm curious
if they've gone on record for what is such a blatant abuse of their ToS it is
unconscionable that they let it continue.

[1]
[https://support.twitter.com/articles/18311](https://support.twitter.com/articles/18311)

~~~
JonFish85
The short answer is probably that if Twitter were to ban Trump, they'd face an
enormous backlash from folks who take it as suppression of political opinions.
It's a blessing and a curse for twitter, I'm sure -- having Trump tweet
regularly has probably been a boon for business, overall. If you're generous,
you could make the argument that Twitter is allowing the President to bypass
the traditional media outlets and have his voice heard directly (for better or
worse).

> Surely threatening nuclear war violates their "Abusive Behavior" rules.

Even if you were to try to live by the letter of the law, you'd have a hard
time really getting this to stick, I imagine. If this is the tweet you're
referring to:

"Just heard Foreign Minister of North Korea speak at U.N. If he echoes
thoughts of Little Rocket Man, they won't be around much longer!"

Then I think you'd have a hard time construing this as "terrorism", any more
so than what half of Twitter's population has said (sports teams getting
"killed", public figures "not being around anymore" (i.e. probably meaning
"in office")).

~~~
makomk
The interpretation of the Twitter rules that people seem to expect is one
which bans everyone they detest and permits everyone they like. I mean,
there's a huge overlap between the people complaining about Twitter not
banning Trump because nuclear war and the Nazi-punching brigade who rely on
Twitter's rules on threatening violence not actually banning all threats of
violence.

(Not to mention that the main reason the demands to ban Trump have restarted
is because Twitter briefly suspended someone for posting another person's
private phone number to their 800,000 Twitter followers, and she and everyone
else is using this to justify why Twitter should have let her get away with
it.)

------
WhiteNoiz3
If twitter doesn't ban bots, at least give them an official designation and
require people to be honest about whether the account is bot controlled or
not. Advertisers want to reach people not bots. If they got together and
demanded more accurate reporting then there would be a financial incentive.

~~~
ntheuon32tnoe
If they did that, then wouldn't normal people just claim that their accounts
were bots, and then they wouldn't be served ads anymore?

~~~
discodave
Twitter could differentiate between access through official clients and APIs.

------
tunesmith
I was thinking the other day about how bots effectively hack the first
amendment. If you're one to believe that the proper remedy for offensive
speech is more speech, the bots kind of throw that out the window. Trolls are
at least actual people, but bots are not. You're not going to exchange views
with a bot. So it's reasonable to suppress bot content. But then the problem
is, how do you know it's a bot? What's the foolproof algorithm that determines
whether someone is or isn't a bot, without false positives or false negatives?
What if it's someone that is merely scheduling their own tweet? So it means
you've opened the door to suppressing someone's speech based on the content of
their message.

~~~
pingpongchef
There's a line we can and should draw. A scheduled tweet is a nice feature,
but it's acceptable to call it a bot tweet. It's not the content of the
message we want to stop, it's the sender.

------
Muuuchem
Went to a meetup at Twitter the other night. We got a cryptic response to
that question: "don't worry, we are working on it". Someone asked why they
don't just require every account to be verified, or non-verified accounts to
be labeled as bots, but they avoided the question.

------
brink
Cashtags are practically dead on Twitter. Good luck searching for something
like $LTC or $PTOY without crawling through pages of cashtag spam with
Russian names linking to Telegram accounts. It really shouldn't be that hard
to fix.

~~~
acarsea
I think a lot of traders switched to StockTwits:
[https://stocktwits.com/](https://stocktwits.com/)

------
jerkstate
I hope someone else appreciates the irony of the anonymous message projected
onto the side of the Twitter building, ostensibly intended to amplify the
opinion-holder's voice and sway public opinion.

------
nkingsy
Seems inevitable that more governments will follow china into the digital
identity business. Just because Black Mirror made a dystopian version of this
doesn't mean it's a bad idea.

------
indubitable
This problem cannot be solved. As natural language processing and output
become ever more sophisticated, it will in the very near future start to
become impossible to discern between bot and human. Consequently, I think
instead of trying to solve the issue of bots, it's more important to start
educating people that just because they read something on the internet does
not mean it's true. And just because lots of people seem to support an idea
(or condemn an idea) says nothing about the actual public view towards said
idea.

This sounds somewhat patronizing (the first part in particular), but falling
victim to confirmation bias is something we all do. When ideas fit our own
personal biases, we tend to become much less critical of them. For instance
animal testing, when visible, is something very few people can emotionally
accept. And there have been countless hoaxes [1] where people will share an
image of an animal in one context (such as a rabbit suffering from severe
hair loss / skin damage at a veterinarian) and then claim it's an image of
the result of a named shampoo company testing their products on animals. It
gets
people riled up and interested in stopping animal testing, but the problem is
that it's completely fake. This is an obvious example but the exact same is
true of words themselves and it spreads into everything -- most notably
politics.

It's in many ways bizarre that we don't deal with this issue as a part of
basic education. Imagery or messages designed to spark an emotional response
are _very_ effective against people who are not aware of what's happening. At
the same time, they can be rendered far less potent by simply educating people
about these tools of manipulation and giving them a wide swath of examples. In
today's ever more connected world, with ever more people looking to shall we
say 'utilize' other people, the complete neglect of this social skill in
education today is perplexing.

[1] - [https://speakingofresearch.com/2017/05/16/context-matters-ho...](https://speakingofresearch.com/2017/05/16/context-matters-how-a-veterinary-image-became-cruel-animal-testing/)

~~~
enos
I agree that it's important to educate people both to question what they see
and also how to question it. I disagree that this is the answer. We've been
teaching that for years.

It's impossible to take everything critically, and honestly few will. So the
entity with the largest bot army still has the longest propaganda lever.

Twitter and the other social networks know who is a bot, or else they haven't
bothered looking. Something needs to force them to act.

I do see one mechanism: the bots go too far, and users don't want to be on a
platform where they just interact with bots, so they go to more curated places
to get their fill. So the business health of the platform depends on having
trust. FB has a leg up on this since your friend list probably has people
you've actually met. There the problem is your gullible friend forwarding you
crap. A deputized bot, if you will. No level of education helps there.

~~~
indubitable
Where/when do we teach people to question what they read? This is rather
different than a critical analysis - this is understanding that things like
propaganda are not the crackly loudspeakers repeating chants to glorious
leader that we characterize it as in our media and entertainment. In reality
propaganda is something that tells a story, but subtly (or not so subtly)
pushes the reader in a certain preconceived way. For a stereotypical instance,
anytime in war an image or story of children being hurt is used as
justification for anything - red flags should go off. It's easy to see this
when I say it, but few recognize it when they are actually being fed such
imagery from a source they believe trustworthy -- again our biases shut down
our systems of critique. I certainly received no formal education on this
whatsoever until university and even there it was only because I chose to take
an array of classes focusing on war, revolution, and marketing.

When I speak of bots, I am implicitly speaking of the inevitable adaptations to
any sort of attempt to crack down on them. I do agree with you that right now
many bots can be detected pretty easily. But that's largely because they have
no reason to disguise the fact that they're a bot. In many ways, I think the
current system is more desirable. As bots progress to actually trying to
emulate human behavior it's going to result in the sort of paranoia you see on
many forums today where individuals call one another 'shills' as a means of
expressing disagreement. And ultimately, I do not think it will be at all
difficult to pass a heavily crippled turing test of 140 character
unidirectional messaging.

------
dboreham
Turing Test passed! That was easy.

------
komali2
It's so frustrating because I'm trying to genuinely engage people with the
opposing opinion on Twitter, and sometimes I have real discussions, but easily
50% of the time I realize I'm dealing with a bot. Or a troll. I can't really
tell the difference anymore.

------
wimagguc
Oh, sounds like while most bots are only posting stuff, some do actually
consume it for analytics-reposts-engagement. This is huge because (wait for
it) _Twitter could be the first social network written and consumed entirely
by bots!_

------
ProAm
So is Facebook, so is Reddit, so is Instagram..... rinse and repeat.

------
Animats
Twitter needs a "real name" policy, like Facebook.

------
codedokode
Those bots are used inside Russia too. For example, they like and repost pro-
government tweets and add aggressive comments to tweets from opposition
politicians to make it look like common people don't like them. They reply to
tweets, so there is a real human behind each account; they are not just a
computer program.

But of course Twitter has no obligation to ban them.

If anyone is interested, here is an example of a probable bot account [1]. It
was registered in Nov 2013 and has posted 27,000 tweets and retweets, which
is about 18.5 tweets per day on average. Another account [2] has a random
user id and has posted 2000 tweets in the last 3 months.
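
That rate is easy to sanity-check (a sketch; the exact registration and
snapshot dates are assumptions, so the result is approximate):

```python
from datetime import date

# Assumed dates: registration in Nov 2013, checked around the article's
# publication in Oct 2017. Both are approximations.
registered = date(2013, 11, 1)
checked = date(2017, 10, 13)

tweets = 27_000
days = (checked - registered).days
rate = tweets / days  # roughly 18-19 tweets per day, every day, for years

print(f"{rate:.1f} tweets/day over {days} days")
```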

And by the way I don't think that bots influenced election results. I watched
the debates and Hillary's position was very weak.

[1] [https://twitter.com/alexflex777](https://twitter.com/alexflex777)

[2] [https://twitter.com/cHprdSiZ8MQONMl](https://twitter.com/cHprdSiZ8MQONMl)

------
dmix
Only 1600 bots? This sounds like the makings of a moral panic to me.

There's a lot of misunderstanding about what these bots are capable of... and
I keep seeing top ranking comments on Reddit talking about these bots daily.

The media seems to be spreading the idea they are autonomous agents
automatically posting content on politically useful threads, with bot voting
rings pumping them up. When in reality they still rely heavily on a human-
intensive process, requiring lots of manual targeting and copywriting, and
Reddit/Twitter/etc are very good at detecting voting-rings, having been
perfecting algorithms to detect them for over a decade.

And do people think Russia or fringe right-wing groups have some super-
effective autonomous bots that weren't also available to the best paid US
consultants?

If these fake Twitter accounts, usually bots only followed by other bots, were
understood in terms of the hundreds of millions of real users on Twitter, the
billions spent by both real parties, and within the context of the technical
limitations of bots, it wouldn't seem so scary.

~~~
PhisherPrice
It's a shame that hostile foreign governments can do this to us without a
strong retaliation. We should be doing the same to their social media
platforms, such as VK and OK, but it appears that we are not. Though we are
more than capable of equal retaliation.

~~~
porfirium
What do you think the CIA does?

~~~
rtx
Hack routers.

