I've always been astounded that nobody seems to notice that Facebook continually announces record users and record MAUs, and then, usually a week or two after their quarterly numbers, announces that they have removed millions of fake accounts. Always after they've 'beaten' on those metrics.
Facebook these days, for me, is 90% dealing with fake friend requests from people that are either clones of real profiles, or just complete garbage. It's part of my dislike of their platform, but they're more afraid of bad metrics than genuinely delighting their customer base.
It becomes a vicious cycle too. We used to spend significantly on Facebook for several businesses I'm involved in, but now that most clicks are garbage/fake, we don't.
I know Twitter gets a bunch of hate for not doing more, but realistically they are actually leading the way. I'm more and more impressed with how they're operating.
This is the original, only purpose of retweets, going back to before it was officially supported and was done manually.
Showing tweets which have been liked by accounts you follow is much worse behavior on the app's part. I like indiscriminately but tweet and retweet very seldom. There's no feature to save and show appreciation for a tweet without boosting it to my followers.
From google news to facebook to twitter to youtube, instead of seeing what you want to see, now you see what they want you to see.
Social media is now parlaying its giant audience into social engineering. Buckle up because it's going to be a rough ride.
Pop open the detail view of a popular tweet from a popular-in-tech person sometime.
You'll probably see, right below it, a reply from an account with identical profile picture and display name to the popular person, but different username, pretending to continue the thread and announcing a cryptocurrency giveaway where you "just" have to send some amount of ETH to their address to "register" or "verify" yourself.
Pretty much any popular Elon Musk tweet will have some kind of "giveaway" impersonation scammer auto-replying, for example. For a while some of the popular cryptocurrency people actually changed their display names to "I'm not giving away ETH", or similar, to try to disrupt the scammers.
My other beef with Twitter is the lack of a more sophisticated way to filter my timeline. I mean something that would make sense to a programmer, not the typical user-friendly useless thing.
I hardly use it, but I checked my account last evening, skimmed through my feed, and saw nothing productive. What caught my attention, though, was a Forbes tweet projecting that one of the Kardashians would become a billionaire because of some cosmetics empire?
I don't even follow anything fashionable except Thomas H. Ptacek and Paul Graham.
I instantly deactivated my account pending deletion. Twitter is as much of a cesspit as Facebook.
I googled around as to whether I could change the feed filter/ algorithm
Looks like if you go Settings/ Content Preferences/ there is a box for 'Show Best Tweets First' that you can uncheck,
which looks like it might shut off their recommendation filter, and then you can just get a stream of whatever everyone you follow posts
(or at least that's what I hope it is, I've had it set on that setting for all of 2 minutes, still seeing how it works)
Alternately, you can control whether you see what individual people you follow are retweeting with "Turn off retweets": https://twitter.com/following
>/following Yeah, really wish you could do that with likes too!
Yes you can, though not through an official setting, unfortunately. But as TylerH said, you just say "I don't like this tweet" on a few of the tweets that you see because "person X liked this", then refresh your feed and they're all gone.
Maybe after 6 months or so they come back, but you'll notice it really fast since suddenly your Twitter feed sucks. Just do the "I don't like this tweet" trick again.
The only 'likes' I see now are when multiple people I follow liked a tweet from someone I also follow. Usually those are good quality.
Oh my, that definitely should be filed under: "Be Careful What You Wish For"
I suspect such a change as that would end up annoying the living you know what out of most every user on Twitter. They should let YOU configure YOUR OWN account to see everything. But I'd be really hesitant to do that for every account on Twitter by default. That just would not end well.
I have this URL bookmarked as "Twitter classic":
Kudos? I wouldn't hand over a kudos to twitter.
The article triggering this bot ban wave was written by the Times and included behavior analysis that Twitter easily could have done (and may have done) on its own years ago.
Facebook's actions and its value to marketers have nothing to do with Twitter's own lack of action to prevent infiltration and manipulation of its platform.
Twitter's action on stopping spam, bots, and hate speech can at best be described as slow. I believe Jack Dorsey has repeatedly said the company hasn't done / doesn't do enough.
I feel like Twitter as an organization doesn't actually care about the problem, just about their image.
Why else would they talk about the hate speech stuff while continuously elevating groups like the Proud Boys through their verification process (which also affects their suggestion algorithms)? And we shouldn't forget that many Gamergate people are still active on Twitter and often use their accounts to direct harassment at people on a regular basis!
The even more cynical view is that there's a group of employees within Twitter actively protecting these kinds of users
If it were otherwise, they'd have suspended Trump's account for ToS violations long ago.
It seems to me that the rational reason to stop an ad campaign is that it stopped making enough money.
It's not fake clicks that should be stopping you -- it should be a lack of ROI.
I wonder how many people spend huge sums of money on Facebook without tracking their return on investment. I seriously think that is the source of many of these complaints about Facebook's ad program.
I spend a few thousand a month on Facebook. I have no idea if there are fake clicks or not. But it's not a metric that seems to matter in my business.
I guess that assumes similar cost per click and that your definition of “enough” accounts for a sufficiently low conversion rate.
On the other hand, Facebook (and their shareholders) probably cares that the PPC is depressed by the fraction of non-converting users.
The accounts were obviously fake; they weren't even trying to hide it. I actually spent maybe 20 minutes reporting 100 of them, but not only was there an endless supply to replace the ones I blocked, the reports didn't do anything and the accounts were still there.
[Cynic's hat on] And they can only do this now, having shown the ability to turn a profit.
Up until now, Twitter has been forced to operate in "growth for its own sake" mode. Investors keep myopically staring at user numbers, confident that there's a way to turn the faucet on, consistently and repeatedly tapping the userbase for pennies per person. The bot accounts don't make them money, so getting rid of them is likely to actually improve the bottom line.
But the sad fact is that until recently Twitter has been financially disincentivised from cracking down on fake accounts.
It's possible to block friend requests from anyone except friends of friends.
Occasionally I get friend requests from some random dolled up woman, perhaps once a month.
... that's it.
I wonder what causes one person to get a large number of bogus friend requests and others not. I'm guessing it has to do with who your friends are, and how many you have.
I'm around 1000 FB friends +/-, but the vast majority of them have been on Facebook for a VERY long time, so perhaps they're less likely to be friend scam accounts?
I also find it so strange that Twitter hasn't done anything to specifically limit these accounts. They could even naively block normal users from using a verified user's profile picture, and this problem would virtually disappear.
I'm all for Twitter doing more, especially as they seemingly do nothing at all to combat fake accounts. However, I'm sure these spammers could just adjust a few pixels in the profile image and a naive block wouldn't do anything.
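For what it's worth, that's exactly why image-matching systems tend to compare perceptual hashes rather than exact bytes. A minimal sketch with toy 4-pixel grayscale "images" and a simple average hash (a real system would resize to 8x8 and compare Hamming distance):

```python
import hashlib

def average_hash(pixels):
    """Simple perceptual 'average hash': one bit per pixel,
    set if that pixel is brighter than the image's mean."""
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

# Toy grayscale "profile images"; the second has one pixel nudged by a scammer.
original = [10, 200, 30, 220]
tweaked  = [10, 201, 30, 220]

# An exact byte-level comparison (what a naive block might do) breaks immediately:
exact_match = hashlib.sha256(bytes(original)).digest() == \
              hashlib.sha256(bytes(tweaked)).digest()

# The perceptual hash is unchanged, so a hash-based block would still catch it:
phash_match = average_hash(original) == average_hash(tweaked)
```

So the "adjust a few pixels" evasion only defeats the naive exact-match version; a perceptual-hash comparison against verified users' avatars would survive it.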
The cynic in me says they counted in the first place because it helped everyone's ego (and wallet)
"Most of the time, according to Twitter, the locked accounts are not included in the monthly active user count it reports to investors each quarter, a critical Wall Street metric for social media companies. But the locked accounts were nevertheless allowed to inflate the follower counts of a large swath of users.
That choice helped propel a large market in fake followers. Dozens of websites openly sell followers and engagement on Twitter, as well as on YouTube, Instagram and other platforms. The Times revealed that one company, Devumi, sold over 200 million Twitter followers, drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over."
I don't think you're being cynical. They counted in the first place because it helped their _metrics_ – Twitter is in the business of selling advertising and made the decisions they did in service of their advertising metrics.
Right now clicks on Twitter ads are worth very little because most of the clicks come from fake accounts. But imagine how much clients would pay for "verified clicks" -- clicks from verified accounts.
IMO they should make their ad sales a tier-based cost-per-click model, with clicks from verified accounts selling for higher amounts than clicks from non-verified accounts. If they're smart they'll open up a third tier above "verified" for the twitter-endorsed superstars, which would have prestige value again (now that the blue checkmark has none). Maybe a green checkmark would work. Naturally they could sell an ad click from a green-checkmark account for even more than an ad click from a blue-checkmark regular ol' verified account.
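Concretely, that proposal boils down to a per-tier price table; the tier names and dollar amounts below are invented purely for illustration:

```python
# Hypothetical per-tier cost-per-click rates, in dollars.
TIER_CPC = {
    "unverified": 0.02,   # clicks from unverified accounts: cheapest
    "verified":   0.15,   # blue checkmark
    "endorsed":   0.50,   # the suggested "green checkmark" superstar tier
}

def ad_bill(clicks_by_tier):
    """clicks_by_tier: {tier_name: click_count} -> total charge in dollars."""
    return sum(TIER_CPC[tier] * n for tier, n in clicks_by_tier.items())
```

For example, 100 unverified clicks plus 10 verified clicks would bill at $3.50 under these made-up rates.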
There's another solution: the platform is free if you have fewer than n followers or follow fewer than n users. Say n is 15, for example. Above that you pay for every tweet, on a sliding scale: say the first tweet per day is 5c and the 10th is $1. They could even ditch scammy advertising; brands that wanted to "engage" would simply pay directly to tweet.
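A rough sketch of that pricing scheme; the linear interpolation between the two quoted anchor prices and the exact free-tier rule are my own reading of the proposal, not part of it:

```python
def tweet_price(n, first=0.05, tenth=1.00):
    """Hypothetical sliding-scale price (in dollars) of the n-th tweet in
    a day, interpolating linearly between the quoted anchors:
    5c for the 1st tweet, $1 for the 10th."""
    if n <= 1:
        return first
    step = (tenth - first) / 9        # nine steps from tweet 1 to tweet 10
    return first + (n - 1) * step

def daily_cost(tweets, followers, following, n_free=15):
    """Free tier: accounts under the follower/followee threshold pay nothing."""
    if followers < n_free or following < n_free:
        return 0.0
    return sum(tweet_price(i) for i in range(1, tweets + 1))
```

Under this sketch a small personal account tweets for free, while a brand with thousands of followers pays an escalating amount per tweet per day.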
All of the big social networks have tons of bot activity, to the point that most likely the majority of their requests are from bots. If they actually stopped all bot activity it would tremendously hurt their user counts and thus their valuations.
Those humans are usually very bot-like retweet mills anyway, in my experience. Thou complain'st too much.
Human logs in, sees Twitter demanding personal information like a phone number or worse, goes away.
If you're spending $1500, but getting great ROI, then fake engagement would be no reason to stop the ads.
Now the second you stop the ad spend your engagement drops to nothing.
I expect a lot of whining about it, particularly in the political realm.
As social media matures, I'm sure we'll see where the advertiser-friendly vs user-friendly line is drawn. Reminiscent of the banner ad boom that initially made great revenue for the sites, but in the end netted negative for the sites because of the users it drove away.
I've encountered many people complaining that Twitter thought they were a bot. Sometimes that was that, The End. Sometimes there was a demand for "real world" identification, like a phone number, which got refused because people like their privacy.
The common feature seems to be following or retweeting political users toward the right, and using hashtags associated with that. Do that exclusively, and twitter will assume you are a bot. The same does not seem to apply if you are on the other side of the political spectrum.
I think Twitter is well aware of this, and they consider it an intended result, but they can't just publicly admit it.
Most likely there are people on Twitter who make it their mission to report conservative tweeters as spammers, knowing full well that it will cause Twitter to auto-harass them on their behalf. They do it in the hope they can get these accounts shut down and thus "cleanse" Twitter of wrongthink.
This sort of behaviour should be easy to stop, but frankly Twitter's spam team was never that good (just check account prices on the black market). Lots of trivial techniques were never implemented by them.
But, to do so they would have to place some power in the hands of the users. They don't want to do this because it means a loss of their control over what you see.
We already know that FB experimented with pushing people's mood and views on subjects around; Twitter is not immune to such temptation also.
Simply: assign 2 scores to every user, each out of, say, 1000 (100 is not enough given Twitter's large userbase).
One score is a category score, such as "lgbt politics" or even finer-grained than that, easily computed by reading all of a user's tweets and running a Bayesian (or some other) classifier over them. This is a measure of how the user is perceived by others inside that category.
The other score is a combination of per-region or per-country weighting (because in general, people care more about the people in their own country or region), plus the number of times a user gets a like or retweet in that same region, etc.
Then simply let people filter based on those two scores. If I set score 1 to cut off everyone below 900, I get the top tweeters in each subject; if I set score 2 to cut off below 200, I let most tweets from across the world reach me. Etc.
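A sketch of what that filter could look like, assuming the two scores are precomputed somewhere upstream (e.g. by the classifier mentioned above); all names and numbers here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    category_score: int   # 0-1000: standing within a topic category
    reach_score: int      # 0-1000: regional likes/retweets, etc.

def filter_timeline(users, min_category=0, min_reach=0):
    """Keep only users clearing both user-chosen cutoffs."""
    return [u for u in users
            if u.category_score >= min_category and u.reach_score >= min_reach]

users = [
    User("topic_expert", 950, 120),
    User("casual", 300, 800),
    User("global_star", 920, 990),
]

# "Cut off everyone below 900 in the category" => only the top tweeters remain.
top_in_topic = filter_timeline(users, min_category=900)
```

The point is that both knobs sit with the reader, not with a recommendation engine.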
It's rare to see people bring up the chance of false positive identification, though two comments did in reply to another downvoted comment. The replies to popular tweets are a mix: replies from real people, replies from bots, replies from real people behaving the way a bot does, and replies from mixed accounts where people sometimes tweet themselves and sometimes use a tool to automate tweets. One account I saw just accused others of being bots; their Twitter history was filled with such accusations. I'm pretty sure they weren't a bot, based on their other tweets, and I doubt there's a tool just for that. Maybe people need to be accused of being a bot by a non-bot (sarcastically or not) to think about the chance of false positives more often? But it's frustrating to see congratulations as if the hard problem of identifying accounts that need to be banned (with some bad bots being a special subset of that) has been solved.
It's a super, super simple problem. If they really cared, they'd eliminate bots entirely. Not that hard.
No offense, but that's incredibly easy to say when you're not the one responsible for doing it.
The researchers who keep publishing these papers and the patsies in the media who keep repeating them are not identifying bots, in fact they can't know what their own accuracy is because they don't have any way to force verification of accounts. But it's apparent from reading their work that they're usually just identifying humans.
I used to do anti-bot detection and anti-spam work for Google, and have written a takedown of one of these media articles about Twitter bots:
The paper I studied was riddled with nonsense, fraudulent claims, claims that didn't match their own data, abuse of logic and statistics and massive political bias. It is typical for this space.
This is especially true because real bots (vs the bots that exist in the imagination of political journalists and academics) usually want to sell things. They're quite boring. They aren't engaged in "propaganda". The belief that spammers are spending lots of time and money trying to flood Twitter with political opinions isn't grounded in any sort of reality, and frequently leads to embarrassing claims, like this guy who was pegged as a "Russian bot" but who turned out to be a carpark attendant in Glasgow:
These claims about spambots should be seen in the context of a US political campaign in which the losers seemed to be struggling to understand why their preferred candidate lost. The idea that the population has been bulk brainwashed through Twitter doesn't have any basis in psychology or computer science, but is an attractive way to avoid engaging with policy issues.
CAPTCHAs are at best a game of cat-and-mouse, and are always open to relay attacks anyway.
You can still verify in-person, and even check documents, but no web company would even consider doing that, even for something like an optional 'fully verified account' tag.
For small websites, even a completely trivial email check is enough to keep out most spammers.
Eliminating the bad ones with few false positives is hard.
Technically speaking, I'm not sure it's even possible to tell for sure which accounts are bots and which aren't. How can they tell?
My day job is information security, specifically working with a SIEM to correlate many diverse logs from many diverse systems and figure out what really happened using many pieces of individually benign data. None of these things are themselves indicators of bots, but the more you start to trip these rules, the more bot-like your behavior becomes. Eventually it paints a picture that shows no human could reasonably be behind an account that routinely posts two or more tweets at the same time, never engages in follow-ups, is only followed/liked by other suspicious accounts, and has a user agent of Python 3.7 coming from a source IP on aws.amazon.ru. You show them a captcha and if they fail or bail, you've got 'em.
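To make the correlation idea concrete, here's a toy version: each rule is individually weak evidence, but the weighted sum trips a threshold no single signal would. Rule names, weights, and the threshold are all invented for the example, not anyone's real detection logic:

```python
# Each rule: (name, predicate over an account's feature dict, weight).
RULES = [
    ("simultaneous_tweets",  lambda a: a["tweets_same_second"] >= 2,          30),
    ("no_followups",         lambda a: a["replies_sent"] == 0,                15),
    ("suspicious_followers", lambda a: a["suspicious_follower_ratio"] > 0.9,  25),
    ("scripted_user_agent",  lambda a: "python" in a["user_agent"].lower(),   35),
]

def suspicion_score(account):
    """Sum the weights of every rule the account trips."""
    return sum(weight for _, check, weight in RULES if check(account))

def should_challenge(account, threshold=70):
    """Above the threshold, show a CAPTCHA; fail or bail => treat as a bot."""
    return suspicion_score(account) >= threshold

bot_like = {
    "tweets_same_second": 3,
    "replies_sent": 0,
    "suspicious_follower_ratio": 0.95,
    "user_agent": "Python/3.7 aiohttp",
}
```

An account tripping all four rules scores well past the threshold and gets challenged, while a human who merely uses a scheduling tool might trip one rule and sail through.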