Here is a thread I did about it. I actually tagged a suspect account and the troll started commenting in the thread, DMed me, reported random tweets, and a bunch of other stuff.
The first bot seems to launch ad hominem attacks on organic users via quote-retweets that say something terrible about them. When it isn't doing that, it simply retweets other accounts that use the same strategy.
It would start with Twitter, flagging accounts as "likely a bot". Then, once it gets good enough, it would just hide bot tweets.
If it becomes popular enough, you can extend it to reddit, imgur, insta, facebook, etc.
This will feed the system a lot of data, especially usernames, emails, accounts, and IPs, plus a graph of all of that, which we can cross-reference to detect even more bots.
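The cross-referencing idea above can be sketched as connected-component clustering: link any two accounts that share an identifier (IP, email, etc.) and treat each multi-account component as a suspect cluster. This is a minimal union-find sketch; the account names and identifiers are invented for illustration, not real data.

```python
from collections import defaultdict

def cluster_accounts(records):
    """records: list of (account, identifier) pairs; identifiers may be
    IPs, emails, phone numbers, etc. Returns suspect clusters."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for account, ident in records:
        # Tag identifiers so they can't collide with account names.
        union(account, ("id", ident))

    clusters = defaultdict(set)
    for account, _ in records:
        clusters[find(account)].add(account)
    # Only components with 2+ accounts sharing identifiers are suspicious.
    return [c for c in clusters.values() if len(c) > 1]

records = [
    ("bot_a", "10.0.0.1"), ("bot_b", "10.0.0.1"),              # shared IP
    ("bot_b", "x@spam.example"), ("bot_c", "x@spam.example"),  # shared email
    ("human", "203.0.113.9"),
]
print(cluster_accounts(records))  # one cluster: {'bot_a', 'bot_b', 'bot_c'}
```

The transitive linking is the point: bot_a and bot_c never share an identifier directly, but both are tied to bot_b, so the whole ring surfaces together.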
Now for the ultimate goal: hide comments from bots on shopping websites, starting with amazon.
It would become a killer product.
"8 digits in account name" is such a reliable signal of low quality that I'd be happy to block all such accounts, for example.
People talk about "bubbles", but to me it's more like trying to insist that Origami Twitter and Arsonist Twitter share the same space - one is trying to make and the other to destroy, they're not compatible goals.
And there's a familiar real-world example of "please remove your fight-starting highly visible identifying markers so we can have a nice time in the same space": pubs with "no football colours" signs.
Perhaps many are real people who get banned often and create new accounts, hence the recent account-creation dates and the inflammatory bios, banners, etc. I often hear about bots and their influence on the internet, but I don't seem to see them that often. Granted, I try not to spend a lot of time on social media, nor do I actively search for them. I suppose I have never seen a clear definition of what makes a bot a bot.
The type of bots that are most concerning are the ones that you would not be able to readily identify as bots.
How does one go about "bot hunting"? Are there bot scanners? Is it ML? Genuinely curious. Are there any resources or links on the subject you could recommend?
Additionally, I've had my own account locked and shadow/ghost banned on twitter for having the words coronavirus or COVID-19 in tweets. It took weeks of never tweeting those words to undo it.
I've even had my personal and business Twitter accounts suspended all at once for "Violating Twitter rules of using multiple accounts for targeted harassment." They restored some of the accounts after I filed an appeal, saying "We're sorry, but our systems mistakenly flagged your account." They never restored all of them, and the ones that were restored have since been locked permanently again because one account, logged into the app, used the C words again. One of the suspended accounts never to return was a 10K-follower image bot for tech and old-school computer images. Another was an account for my main business to announce news and server status updates.
Whether bots are mucking up Twitter or not, Twitter itself is a horrible platform and company. I can't wait for the service to disappear into oblivion.
You can always walk away from a service. If you are waiting for the next "Big Thing" to ditch your problems you will find that even this new shiny platform will (eventually) have the same or worse problems.
The internet is a rabble that needs taming. Services automate a lot of stuff to try to tame us, but nothing is perfect. Innocent users get banned.
If you are building a business on these services don't rely on just one. Build redundancy into your marketing, use many channels.
That sort of argument goes completely out the window when you have bot accounts spun up by the thousands, designed to artificially boost certain narratives for profit or to sow dissent. Much like managing your lawn, you can't let the weeds grow and expect other plants to simply out-compete them when the weeds choke out everything else around them. Unfortunately that seems to be an inevitable problem on any platform that prioritizes userbase growth above all else.
Obvious bots are, well, obvious. Novice or careless implementations lead to accounts that tweet at the same time or based on the same trigger words. Tweets made on a routine basis are the easiest to detect (e.g. tweets at 5 PM CST every day).
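The routine-timing case reduces to a spread statistic: a scheduled bot's posts cluster at the same seconds-since-midnight, while a human's scatter across the day. A hypothetical sketch with invented timestamps (real detectors would also handle midnight wraparound and timezones):

```python
from datetime import datetime
from statistics import pstdev

def time_of_day_spread(timestamps):
    """Population std-dev of posting times, in seconds since midnight.
    Near-zero spread suggests a scheduled poster."""
    secs = [t.hour * 3600 + t.minute * 60 + t.second for t in timestamps]
    return pstdev(secs)

# A bot posting at ~17:00:00 every day vs. a human posting whenever.
bot_times = [datetime(2020, 5, d, 17, 0, s) for d, s in [(1, 2), (2, 5), (3, 1)]]
human_times = [datetime(2020, 5, 1, 9, 12), datetime(2020, 5, 2, 22, 40),
               datetime(2020, 5, 3, 14, 5)]

print(time_of_day_spread(bot_times) < 60)      # True: tight schedule
print(time_of_day_spread(human_times) > 3600)  # True: scattered
```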
But things go much deeper. There are now even companies that develop bot tools, essentially, increasing the randomness or human-like qualities of bots. These are much harder to detect with the standard methods.
There are other proxies, like not getting organic replies or having no friends. But the sophisticated bot networks have already figured this out. You have to realize that sometimes human labor even goes into the effort, where someone will maintain hundreds of fake accounts, and attempt to make them look organic.
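Those proxies can be combined into a crude score. A hypothetical sketch, where the field names and thresholds are invented for illustration; real detectors weigh dozens of such features, and, as noted above, sophisticated networks already game every one of these:

```python
def proxy_score(account: dict) -> int:
    """Count how many bot-like proxy signals an account trips.
    0 = looks organic, 3 = very bot-like. Thresholds are invented."""
    score = 0
    if account.get("replies_received", 0) == 0:
        score += 1  # nobody ever talks back to it
    if account.get("mutual_follows", 0) < 2:
        score += 1  # no real friends
    if account.get("tweets", 0) > 50 * max(account.get("followers", 0), 1):
        score += 1  # huge output, tiny audience
    return score

print(proxy_score({"replies_received": 0, "mutual_follows": 0,
                   "tweets": 5000, "followers": 3}))  # 3
```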
Given all this, it's easy to underestimate the number of actual bot accounts.
On the other hand, these same methods don't usually control well for a certain type of Twitter user. There are some really obsessive Twitter users who will spend 12+ hours a day active. They typically produce a ton of content and a lot of replies. I notice this on celebrity/political accounts especially. (Apparently people are calling this the 'reply-guy' now.)
Those accounts are susceptible to appearing as bots because they deviate far from what one might imagine the 'average' Twitter user looks like. Speaking anecdotally, I think this relates closest to politics, and hot topic issues (Hence the topic in this research). Take a Tweet from the US President for example. You will notice the same few dozen accounts reply to every Tweet of his. Some may be bots, but others are most definitely human.
But these often get classified as bots.
If I had to make a guess, I would say the bot detection models more often fail by undercounting the true number of bots.
At this point, it's an arms race. Obvious bots are becoming increasingly less obvious. Overall, I think bots are only going to get better, and it worries me for the fate of social networks. The thing most frightening is finding an obvious bot and seeing a large number of organic users interacting with it as if it were real.
Edit: One more thing I forgot to mention. An increasing number of bots are getting verified now. I have no idea how the verification process works, but apparently the botters have figured out how to game it successfully.
Lol there are definitely some real people that post as if they would not pass the Turing test! Repetitive replies, etc.
I totally question the human qualities of whoever develops such tools.
It could also have a way for legitimate bots to be registered and easily identified as such.
I'm all for it!
> Among the misinformation disseminated by bot accounts: tweeted conspiracy theories about hospitals being filled with mannequins or tweets that connected the spread of the coronavirus to 5G wireless towers
> We do know that it looks like it's a propaganda machine, and it definitely matches the Russian and Chinese playbooks
What exactly is the mental state of the Chinese or Russian propaganda departments that they would be doing this? How could the Russians possibly benefit from US citizens aged 60+ dying off at 2-4 times the normal rate? How could the Russians possibly sow more division amongst Americans than the normal state of American politics? Just this week Pelosi called Trump morbidly obese, and she is the underdog of childish name-calling in that pairing. How could foreign agents trolling on Twitter top that level of incivility?
And who are the idiots who are looking up random twitter accounts as their information source?
They had the most success egging on far-right grievances, but they do the left too. California secession, etc.
There is no way that Russia could sink the USS Theodore Roosevelt; their own carrier practically sank on its own during a recent sortie. They're not responsible for the mass infection of its crew either - and it may not have been possible to prevent. But what they can do is make the firing of its captain a partisan issue, and of course ensure that the acting secretary of the Navy is operating in an environment where partisan loyalty to Trump is more important than empathy with the crew, so he gives a morale-destroying speech.
An early, solid lockdown would have been short and contained the virus. A relaxed, fragmented one ensures it remains in the population, damaging the economy longer.
Coronavirus isn't a biological weapon. Partisan stupidity can be weaponised, though.
It would be cheaper and more effective to bribe key officials in government. There was a conspiracy about Jeff Epstein running his paedophile ring in aid of ... Mossad or the Illuminati or whoever. It would be a bit more expensive and a lot more effective to make that conspiracy theory a reality rather than rely on bots to 'sow division' - whatever that is supposed to do that the politicians can't manage on their own.
Undoubtedly that has been going on as well: https://en.wikipedia.org/wiki/Maria_Butina#Involvement_in_U.... plus some of the people on this fine list: https://en.wikipedia.org/wiki/Timeline_of_investigations_int...
Butina's involvement with the NRA is a perfect example of this; it's the ideal place to ramp up the partisanship and nobody can question its patriotism.
> could just as easily be good decisions for America that result in the country doing better than normal
But they're not. The point of chaos propaganda is to encourage people to make stupid decisions for the wrong reasons, to focus attention and energy on the wrong priorities, and to waste time and effort debunking nonsense. Lengthy investigations into misconduct that go nowhere are also a good waste of time that prevents governments from achieving anything.
It's not directed at specific outcomes, it's just starting as many fights as possible and getting people to bind their partisanship to ideas that are unsupported by evidence. Or supported by fake evidence from fake sources.
It's so cheap because you can use partisans as amplifiers. Just as Stalin could use American communists as "useful idiots", modern foreign propagandists can get sincere-but-misguided partisans to fight on their behalf.
"We do know that it looks like it's a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that," said Kathleen Carley, a professor of computer science at Carnegie Mellon University who is conducting a study into bot-generated coronavirus activity on Twitter that has yet to be published.