Massive networks of fake accounts found on Twitter (bbc.com)
389 points by randomname2 353 days ago | 255 comments



Some of the people reading these comments likely already know a lot more detail about Twitter/Facebook bots than this story goes into.

Although you fine folks might remember me from a few other projects, I'm currently a Graphics Editor at The New York Times. We're actively curious and interested in pursuing lines of inquiry about this sort of behavior, and if any HN'ers have any interesting leads or tips, I'd encourage you to get in touch. You can reach me at my username @nytimes.com, or email me and I'll send you my Signal number.


There's an angle that hasn't been explored that ought to be: a lot of news and political commentary websites used to run their own comment systems for articles or use pseudonymous services like Disqus, but these were overrun by "trolls" and spammers.

Many then switched to Facebook, presumably hoping that the "Real Name" policy would improve things.

The interesting thing is that clicking on many of the Facebook profiles on these comments leads to curiously sterile profiles, with a few friends. Some of the most vehemently pro-Trump comments seem to have originated from these accounts, especially on certain Conservative sites during the Republican Primary season.

Facebook seems to have no interest (or ability) to clean this up.


I don't doubt that many of the commenters praising Trump, or any other politician for that matter, may in fact be sockpuppet accounts, but I wonder if this might be a similar situation to what we saw with polling prior to the election in which Trump voters were less willing to admit to voting for him. I'm not actually sure if this was the case but I've heard it theorized that this is one of the reasons polling might have been so off.

Perhaps there are commenters unwilling to link their real FB identity to supporting Trump? On the other hand, I see plenty of people make outrageous statements online clearly linked to their real FB profile so maybe this isn't a widespread concern.


I mentioned that this happened on Conservative sites, and it was during the Primaries. It seems unlikely that, under those circumstances, people would be so shy about expressing their preferences.

Also, these are not just stand-alone fake accounts with no friends, created for making comments: these are curated to have enough detail to look plausible to an unsophisticated automated detector (or an overworked FB abuse-department employee), but they have few friends, all with similar sorts of profiles, and no signs of activity except posting comments.

I do agree that there was an element of "shyness" and self-censorship, but I expect it was from supporters of other candidates, who would not want their Real Names and identities to get embroiled in fights with fake profiles.


Good points, although I didn't mean to dispute that these were mostly fake accounts meant to push an agenda.

As a side note, I don't have a FB account so I don't know how this works but do your friends have the ability to see what comments you've made on sites using FB login for commenting?


Yes but I believe you can hide this.


I've seen enough people doxxed in various capacities that there are good reasons for this in general. You won't find anything on mine... if I even have one. I've been avoiding Facebook since the days it required a .edu email to sign up, for example.

That said, I'm sure that all the politicians have people who can drum up social media followers or spread whatever message they want. I don't think there's any conspiracy to it. It's not like we've gotten rid of email spam, either. This is just an extension of that.


While I wouldn't dismiss the chance that some of these people are fakes, I would like to point out that, despite being a techie (if not because of it?), I barely use Facebook anymore, and if I hadn't been registered for a long time my profile would also be pretty barren. I also always had a policy of only "friending" people I know personally and privately. The only purpose of Facebook for me is keeping up with those friends; I rarely post anything, and rarely publicly. I doubt I'm the only one using Facebook this way.


My Facebook account is so barren that it has never existed, but if I'm ever forced to create an account to deal with websites that assume everyone has a FB acct and can't deal with you otherwise, it will be as sterile as I can make it.

And my Twitter account is so sterile that no one I know follows me. And yet, it's a real account.


The thing is, do you have only 5-10 FB Friends with eerily similar sterile accounts in a little self-contained social graph? Have you added a college to your profile, yet have no Friends from that college?

I've got a fake FB account for the same reason you describe, and I don't bother going to the effort of putting in the sorts of fake details that these accounts have.


I'd like to see that for myself. Can you find any examples now?


Look for [redacted]. It's an obvious fake, but you can go down the rabbit hole from there.

*I hope you got the name. I've redacted it for obvious reasons.


There are also privacy settings - unless Facebook has wiped all my settings since I last checked (they used to do this a lot but don't seem to any more), you're not going to see basically anything without adding me. That's probably distinguishable from a fake if you look closely, but similar enough that those accounts might look the same if you're just taking a quick look at each account.


All that machine learning stuff is pretty much useless against human generated fake profiles, just like captcha is useless against human captcha solvers.


If you want another angle on the story, consider non-bot bots (a.k.a. "sock puppets").

You have an argument with someone on twitter and the next second 4 or 5 newly formed eggs with 1 follower apiece pile in to defend the other POV, sometimes without regard to civility.

It could be a coincidence.

Or it could be that the person you were debating let their ID out in another way.

Edit: Just to be clear, I was talking about the Freudian "Id" in all caps, used these days as a short-hand for a sort of unfiltered emotional inner child, and not "I.D." which is short for identification.


There are certainly a great many interesting angles ;)

For a neat paper that actually has some nice hard evidence about what government production of social posts can look like, check out Gary King and co. at Harvard's exploration of Zhanggong, China:

http://gking.harvard.edu/files/gking/files/50c.pdf

We summarized it here: https://www.nytimes.com/2016/05/20/business/international/ch...


https://arxiv.org/pdf/1402.5644v1.pdf

"China". To think I used to actually pay for your paper in my errant youth! :)



Tangent, but the Freudian id is usually rendered in all lowercase, and your meaning would be much clearer that way.


These bots (and human-run trolling accounts) have been widely reported on[1], but by many different organizations in different languages, which seems to have made the story less widely known than it should be. I still see people arguing on Twitter with what is clearly a bot.

I would like to see someone write the story of what the world will look like, in an increasingly internet-focused age, when disinformation campaigns can be funded by the highest bidder, and actual individual voices get drowned out.

[1] Reports of Putin using online troll brigades pre-dates the creation of Twitter https://en.wikipedia.org/wiki/Web_brigades


Try to remember the last time you heard anything positive about Russia.

Consider how it is compatible with the existence of Putin's troll army.

At the very least try to imagine the relative scale of money/power involved in propaganda in the West compared to the rest of the world. (For reference, check military spending.)


>Try to remember the last time you heard anything positive about Russia.

Why would you assume the troll army is trying to make "Russia" look good? It's far more likely they would focus on individual goals like keeping the US out of Ukraine. Or making us look bad on the international stage to other countries. They don't need to look good, they just need us to look bad. Putin isn't an idiot, no amount of propaganda is going to change the fact that some of what happens in Russia will never be accepted by the rest of the world. He just needs to look better than the alternative. It's far easier to make us look bad than himself look good.


>Try to remember the last time you heard anything positive about Russia. Consider how it is compatible with the existence of Putin's troll army.

This completely misses the point of what the troll army is there to do. It isn't aiming at convincing people of a point of view, but confusing the audience. You should read this story from 2015: http://www.bbc.com/news/world-europe-31962644


Also one of the threads in the latest Adam Curtis film [1].

[1] https://en.wikipedia.org/wiki/HyperNormalisation


Do you frequent political spheres on Twitter or FB? Positive Russian comments are extremely visible on both.


On every story that touches Russia and/or Putin. It's kind of unsettling at times.


> when disinformation campaigns can be funded by the highest bidder, and actual individual voices get drowned out.

Isn't that the normal state of things since Gilgamesh?


Yeah. They told me the New York Times could be bought for $2.50, but all I got was a stack of paper.


Online marketing is completely dominated by this type of activity. I tried to market through legitimate means only (paid ads, word of mouth, fair, legitimately-earned reviews and press coverage). A competitor showed up and exceeded our 1 year+ head start in two months with spammy tactics, including fake social media reviews.

It's impossible to win a competitive SEO game without a PBN (private blog network) for link manufacturing, a bunch of lackeys and bots posting from astroturfed accounts, and so forth. I learned that if I'm ever going to run another business that depends on traffic, there will be no option but to spam and turf aggressively.

Google's dirty secret is that it's been completely and thoroughly gamed for a long time now. People are just used to sorting through the crap.


I wonder if others share my interest in what legitimate businesses are doing.

For example, Hootsuite Amplify is a platform designed to dominate social media by making it easy for your company's employee base to share and promote curated content.

This has a similar effect to botnets, except it's not violating terms and conditions.

We're entering a world where authentic conversation is harder and harder to find. Historically personal content, such as emails and tweets, is now carefully constructed and managed, but still presented as personal content.

https://hootsuite.com/products/amplify


> We're entering a world where authentic conversation is harder and harder to find.

I guess. When I was a youth you found authentic conversation by being in the world and meeting an authentic person and having a conversation. If we keep that definition constant then it's precisely as hard to find today as it has always been.


I think you have a blind spot here.

When you were a youth, being in the world and meeting an authentic person didn't have quite as much competition as it does today.

It's like saying you're not concerned about the environmental impact of cars because we can always choose good old reliable walking instead.


Everyone is a marketer now. Selling out has been glorified.

Consuming this pseudo content is tiring though... Keep calm and shill on?


Anyone with $10 can go buy tens of thousands of followers. And they work. And stay for a fair while. I've done this a few times myself, it really doesn't take any more than a quick google search.


Would you mind sharing why you did it? I've been curious about this for a while.


Not the OP, but I previously worked for a company that did this. We had tens of thousands of fake accounts on both Twitter and Facebook and had a script that would have them all (or specific sets) follow the same account at random points over the course of a few days (to avoid giving the client a few thousand followers instantly, which might draw attention). I can't remember full details since I wasn't involved, but I think we were charging $10 per 1,000 followers per network.
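The staggered-follow trick described above (spreading follows over a few days so a client never gains thousands of followers at once) could be sketched roughly like this; the function and variable names are my own invention, not the agency's actual script:

```python
import random

def schedule_follows(follower_ids, window_hours=72, seed=None):
    """Give each fake account a random follow time (seconds from now)
    spread across a multi-day window, so the target never gains
    thousands of followers at the same instant."""
    rng = random.Random(seed)
    window_seconds = window_hours * 3600
    # Pair each account with a random offset, then sort so a worker
    # loop can simply sleep until the next scheduled follow.
    return sorted(
        (rng.uniform(0, window_seconds), account_id)
        for account_id in follower_ids
    )

# Example: spread 5,000 follows over three days.
plan = schedule_follows(range(5000), window_hours=72, seed=1)
```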

The customers were generally people trying to get their small business out there, or locals who thought it could make them into internet celebrities (I work in/around Scottsdale, AZ, and there are a lot of people here who think like that).

We had a separate script that would check to make sure each account was still active once a day and if accounts were removed, they would be replenished automatically, although once a client lost a follower, it was gone and a new one didn't replace it for them.
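The daily liveness sweep could look something like this minimal sketch; `is_active` and `create_account` are hypothetical stand-ins for whatever checks and signup automation the agency actually used:

```python
def reconcile_pool(pool, is_active, create_account, target_size):
    """Daily sweep: drop accounts the platform has removed, then top the
    pool back up to its target size. Note the asymmetry the commenter
    describes: the pool itself is replenished, but a client who lost a
    follower did not get a free replacement."""
    survivors = [account for account in pool if is_active(account)]
    while len(survivors) < target_size:
        survivors.append(create_account())
    return survivors
```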

It was all kinda shady and it was one of the numerous reasons I left that web agency (which went bankrupt a year later and is no more), but it apparently produced decent results since search ranking increased both on Twitter/Facebook and on Google. I'm unsure if it is still as effective, though; this was almost 4 years ago that I left.

The only time I've seen it first-hand since was with a client trying to get famous. They made some YouTube videos and bought tens of thousands of YouTube subscribers from somewhere (we don't offer that at my current agency).


What was the result? Did it work out?


It was highly effective. Catapulted a few local businesses (I believe one gained enough exposure to open a second location) and a few temporary internet celebrities owed at least some of their fame to it.

That said, the latter example on YouTube was not effective at all, but that was probably because the content wasn't great.


Not OP, but this is essentially what SEO boils down to. More links to your site from legitimate sites with high PageRank push your site up the organic listings.

It doesn't matter how 'pure' a site is, you can still buy a link, it just costs more.

It's all about looking at how the algorithm works, then gaming it. Tinker with Twitter and Facebook, spend some money, keep the results secret. Sell this information. Do what works for your clients.

Botnets work. For now. Does Twitter want to kill them? Maybe not; active accounts are a big number that they love to tell people about.


I've bought fake followers before. It's actually fun. You'd be surprised how much of the "chattering class" will check your follower count before they decide if they should read and respond to your comments.

If you enjoy chiming in on national conversations then it's a cheap way to appear prestigious. Of course, you need to be able to argue well enough to not out yourself as some random joe.


I wonder what proportion of the chattering class will pay attention to measures of retweet:follower ratios. It's a good proxy for a fake account.
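As a rough sketch of that proxy, one could compare average retweets to follower count; the threshold below is an illustrative made-up number, not a published cutoff, and real checks would combine many more signals:

```python
def engagement_ratio(avg_retweets, followers):
    """Average retweets per follower; purchased audiences barely engage."""
    return avg_retweets / followers if followers else 0.0

def looks_purchased(avg_retweets, followers, threshold=0.0001):
    # Heuristic only: a large account whose tweets average a handful
    # of retweets is suspicious. Small accounts are skipped because
    # their ratios are too noisy to mean anything.
    return followers > 10_000 and engagement_ratio(avg_retweets, followers) < threshold
```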


Go over to Google and put in the words "buy retweets". That's really not a problem. They have full control over the accounts (via either fake signups or script-kiddie iStealer means), so anything like that that's in demand enough is probably something you can buy.

A look at the history or content of mentions is probably the best bet to detect a faked account. Still, a clever person might slip by a cursory check.


Hell there are even Chrome plugins for free that'll get you followers, retweets and likes all automatically. Twitter is a joke. Anyone that takes their user count and interactions seriously on that platform is in for a bad time.


Can you buy retweets from accounts with human followers?


Yes, if you can get those stolen accounts rather than generated ones.


The answer probably isn't satisfying: pranks on friends. It's cheap and is usually a good laugh. 20k followers for around $10, found on a blackhat SEO site, 19.5k of which are still around after about a year.

You could totally use it for business purposes too - as a really shitty form of advertising or just to make your popularity look higher. I doubt it'd be too effective, but hey, it's cheap.


I'm familiar with this space and I think the biggest benefit is providing 'social proof' to new websites / startups / brands.

I know from my own experience when doing research on a company - whether it's a prospective client, vendor or competitor, I'll click through to their social properties to gauge their traction.

I definitely look at their engagement ratios to determine whether their social following is organic or fake, but I think typical consumers miss this.


I see... Could someone buy 200k for $100?


I'm not sure, but I expect it gets harder to hide the fakes at larger scales.


Marketing and credibility. It's not about the followers, it's about the metrics. Whether it's Twitter followers, Facebook likes, GitHub stars or Hacker News upvotes -- if the metrics matter to someone, there's a market for manipulating them.

I'm actually surprised this is news to anyone. Some of my first Twitter followers were blatantly fake (unless vacuous Russian models are really into JavaScript programming all of a sudden) accounts trying to appear legitimate at a glance -- following random people and posting credibly arbitrary nonsense to avoid easy detection when they later follow/like whatever someone paid their owner to.


One thing I can think of is getting support from the "social media" arm of companies; if they believe you're an "influencer", you're likely to get white glove treatment (and may otherwise be ignored).


Use Fiverr. You can buy followers on Twitter, IG, FB, G+ (is it still around?), and other networks.


This reminds me of when I first heard of people buying "traffic" or "visitors" to their site --- being the type to not see the Internet in any commercial sense, the idea of paying to increase the bandwidth to your site (back in those days, bandwidth was not exactly cheap) was a bit puzzling... and then after seeing that many sites had ads and analytics, it all suddenly made sense.


I would much rather get an amplified retweet than get followers for the money.


I want to mention Reddit bots that copy user content to farm karma and gain influence on Reddit. There is one kind that mostly copies images with karma in the 1,000s and reposts them with the same title a year or two later.

To appear more legit, these bots also copy user comments and post them to /r/askreddit in threads that are similarly named but most often not exactly the same as the original post. I suspect that often this /r/askreddit thread is also created by another bot from the same farm.

I'm not sure what the owners do with these bots, but I suspect you could downvote views you don't like or upvote videos to the frontpage and make lots of views and money from that.


Farm karma, then sell the accounts to political operatives, etc.


And bots that take Twitter videos and post a Streamable mirror? It seems similar enough that it would be on topic.

How do they sit with you? As videos on Twitter don't seem to work at all, it's a great service, imo.


There are "I'm here to provide a service" bots which are almost always named appropriately. Like a reddit bot that links to IMDB or Wikipedia or converts between metric and imperial units.

The bots or sock puppets or fake accounts I mentioned don't provide any value other than copying old front-page material and reposting it a year or two later without any indication that it isn't their own material.


I personally have thousands of Twitter accounts I control just for fun; I mostly use them for pranks, though in aggregate there are a few million followers between them (not sure how many duplicates there are, or, ironically, how many of their followers are bots). You can even buy software to create and manage Twitter accounts pretty easily in blackhat forums.

For mine, I scraped a bunch of Instagram pictures for photos and auto-generated a bunch of bios using a few basic parameters, e.g. "Beer lover, proud parent." Names were easy: the most popular first and last names from census records, mix and match. Grab a few lat/longs and convert them to the biggest US cities and you have a location; find some data source to tweet from (breaking news is easiest) and you have a fully automated, human-like Twitter account. For bonus points, pick a random color toward the low end of the hexadecimal range ( rand(a..c)++rand(0..f)++rand(a..c)++rand(0..f)++rand(a..c)++rand(0..f) works fine) and you even look like your page is personalized down to the color.
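A minimal sketch of that kind of profile generator, with made-up name/trait/city lists standing in for the census records and bio parameters the comment describes:

```python
import random

rng = random.Random(42)

# Illustrative stand-ins; the real scripts presumably used much longer lists.
FIRST = ["James", "Mary", "John", "Patricia", "Robert", "Jennifer"]
LAST = ["Smith", "Johnson", "Williams", "Brown", "Jones"]
TRAITS = ["Beer lover", "Proud parent", "Sports fan", "Coffee addict"]
CITIES = ["New York, NY", "Los Angeles, CA", "Chicago, IL", "Houston, TX"]

def profile_color():
    # Mirrors the rand(a..c)++rand(0..f)++... pattern: alternate a
    # constrained nibble with an unconstrained one for a muted hex color.
    return "".join(
        rng.choice("abc" if i % 2 == 0 else "0123456789abcdef")
        for i in range(6)
    )

def fake_profile():
    return {
        "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
        "bio": f"{rng.choice(TRAITS)}. {rng.choice(TRAITS)}.",
        "location": rng.choice(CITIES),
        "theme_color": profile_color(),
    }
```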

Start following random people and 10% follow back (even more if you follow people who are tweeting about similar keywords as you - kindred spirits I guess).

The only tricky part is making sure you don't cross lines with your IPs. You could buy/rent them privately, but you really only want to keep a few accounts (3-5) to each IP, so that gets expensive ($.75/IP/month) when you don't have a really good reason to use your accounts. You can scrape free listings for them, but those are nasty, slow, and can cause bans if Twitter decides to take down a whole range or if you are forced to switch IPs too quickly.
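Keeping only a few accounts per IP amounts to a simple capacity check when assigning accounts to proxies; this sketch (names hypothetical) chunks accounts onto IPs and refuses to oversubscribe:

```python
def assign_accounts_to_proxies(account_ids, proxy_ips, max_per_ip=4):
    """Chunk accounts onto proxy IPs, capped at a few accounts per IP
    (the comment suggests 3-5), and refuse to oversubscribe."""
    if len(account_ids) > len(proxy_ips) * max_per_ip:
        raise ValueError("not enough proxy IPs for this many accounts")
    return {
        account: proxy_ips[i // max_per_ip]
        for i, account in enumerate(account_ids)
    }
```

At the quoted $.75/IP/month and four accounts per IP, 1,000 accounts would need 250 IPs, roughly $187.50/month, which is why the commenter calls private IPs expensive without a good reason to use the accounts.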

Device type, browser, etc. is easy to spoof.

Should you decide to, it's also really easy to change name, username, and profile picture of an account in the future. So if I wanted a few thousand Trump-supporting (or Trump-hating) sock puppets I could have them today.

If you don't want to buy/create/manage Twitter accounts yourself you can get access to what's called a "panel." A panel is basically an automated, coin-operated network of fake accounts that you can control at wholesale prices. Want 5,000 followers? Plug $1/1,000 followers into the panel, supply the username, and you'll have them in a couple of minutes. Or resell 5,000 followers for $25 and pocket the $20 difference. For example of a panel, see this ad on blackhatworld: https://www.blackhatworld.com/seo/the-biggest-smm-panel-yout.... Nothing special about this one, just the first I found when I googled. They're a dime a dozen.

I'm certain there are millions of fake accounts for every service imaginable.


Fake People As A Service apparently exists now, making it even easier to "create your army": https://fakena.me/

(As Seen On HN: https://news.ycombinator.com/item?id=8336036 )


Jesus dude. I can't even make legitimate accounts that don't get banned... Impressive!


I'm quite curious about this as a dabbling Twitter comedy content producer/dabbling Twitter developer.

Are there any services out there that sell followers harder to distinguish from real accounts than the basic bot accounts? Or, to ask a related question, how much work goes into creating the illusion of authenticity? I'm interested in estimating the likelihood of a given user being real.


I don't really know of any "premium" services; it's just a matter of how much effort the creator puts in. I'm sure there's a spectrum, but I'm not sure how to analyze it/control for it or find more "legitimate" ones.

If you're looking to analyze the legitimacy of a Twitter user, it would probably be from the content they tweet. Creating unique tweets is really difficult at scale, so most just retweet or pull from some data source. And if they create enough unique, original, valuable content, well, https://xkcd.com/810/. Watching people argue with Markov chain generators, incidentally, is one of my favorite things in the world.
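For reference, the Markov chain generators people end up arguing with can be as simple as a word-bigram table; this is a toy sketch, not any particular bot's code:

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Word-bigram Markov model: map each word to the words observed after it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, max_words=10, seed=None):
    """Walk the chain from a start word, stopping at a dead end."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words:
        options = chain.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

Trained on a big enough corpus of real tweets, the output is locally plausible nonsense, which is exactly what makes arguing with it so futile.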

There are quite a few services that attempt to determine what percentage of an account's followers are legitimate, but I'm not sure how they do it; probably each a different way.

As an aside, I did watch someone sell a famous author 100,000 twitter followers for $15,000, only to turn around and buy those followers for $100 and take his family to Disneyland.


I think it's silly that people downvoted me for being curious about a controversial topic. I would never pay for followers because the point is actual audience, not the perception of one. I'm interested in it because social networks interest me, and so does the problem of validation of users on social networks.


IP meaning an Internet Protocol address? What about students on a campus network, where there may be hundreds coming from the same IP? And you are managing hundreds of different IPs for your Twitter accounts? How does that work?


Yes.

I'm not sure about all of the details; I'm sure there are a variety of indicators that go into a spam score, and once you tip the spam threshold you're banned. So I would guess that students on a school network behave normally enough to stay safe, but if you have 1,000 bot accounts on the same IP it's just a matter of time before they're all gone.

So if you want to manage a network of fake accounts, use a proxy IP or VPN to connect to each account every time you connect with it. (Also clear cookies/cache, spoof the same device, etc.) I'm sure better programmers could work around this by compensating in other ways, but I'm not sure how.


OK. I guess you could spoof your IPs in the bot program if you don't care about seeing the reply, which for this application you probably don't.


That isn't exactly how IP spoofing works: I'd argue that, for the most part, IP spoofing is just a DDoS hassle. It's not very useful on a real service you intend to interact with (TCP/HTTPS require you to reply to those IP packets; it's not fire-and-forget).


A little bit of knowledge....


So you are paying $75/1k accounts?


You are one of the reasons why we don't have nice things. Read the ToS and respect them, please.

Thank you.


Unbelievable -- "Hacker News" my ass

HN needs more of this type of comment, not less


That's also not an argument


They don't have to be. They're equal and opposite assertions that cancel each other out so the thread can proceed as it was.


Not sure, are you being sarcastic?


Going to be honest: I am bothered by people who use such bots to sway public opinion, troll, or otherwise do harm to others. But this is literally "Hacker News", what he is describing is a hack, and his post goes into specific detail about how he achieved it.

Even if you disagree with people who use such hacks (I do too), it is interesting.


No argument here, no contribution, just an insult. That's probably why you are being downvoted.


It's sad that you're being downvoted for this.


You certainly invest a lot of effort in dishonesty. Not sure that's anything to be proud of, guy.


Just approach the providers - they have nothing to hide, they are not involved in any illegal activity, and they'll be very happy to have their name dropped in NYT.

"Buy social media followers" is a straightforward business.


What sort of ethical framework permits this?


Some ethical frameworks deal in prohibiting/limiting behaviours rather than granting permission.

One could argue this is even preferable. Interestingly, as a line of inquiry, it leads to having philosophical difficulties with the Bill of Rights.


So-called "positive rights" and "negative rights"[0].

[0] https://en.wikipedia.org/wiki/Negative_and_positive_rights


It's like the battle that newspapers have against ad blockers in people's browsers. Twitter has to balance usability here.

People make fake accounts for everything, everywhere, for all sorts of reasons. It seems as though you stumbled on some fakes that may all be related (all posting Star Wars quotes), but maybe not. It could be just some off-the-shelf software that creates Twitter accounts.

Probably a non-negligible percent of people with the job description "social media manager" create small armies of bots to do their bidding.

The first stop for any investigative journalist looking into this would have to be blackhatworld.com ... it has a really intimidating name but it's all just about marketing/spamming that violates terms of use.

> "Their potential threats are real and scary due to the sheer size of the botnet," he said.

Kind of overstates things. These accounts will likely be used for spam or advertising of some sort. Journalists have a distorted worldview when it comes to the importance of Twitter. I know you all love it, but the rest of the world doesn't really care.


Speaking of accounts for various purposes, what's the reason you created this throwaway?


In my head I have started to call twitter shitpeoplesay. I find that it helps contextualize things when someone's tweets are reported in the news.


Could NYT quantify - through experimentation - the value of a bot army as the "story"?

For instance, a tweet with 100 retweets is substantially more likely to generate additional retweets / be shown more by Twitter's algorithm / etc. than a tweet with 10 retweets. And scale that up!

Easy to build an experiment and quantify even on a small level for NYT.


They could have tweeted, paid for a bot army to retweet, then reported on that.


And then gotten criticized for being unethical....


You may as well have just reported that the sky is blue or that there is a lot of trash in the ocean.

You haven't found a massive network of fake accounts. You found a script kiddie or two who didn't know enough to mask their activity, or didn't care.


A few years ago, probably three, it was very easy to build your own network of bots. Today it's more difficult (those accounts don't last as long). I personally bought thousands and advertised my websites; you could get 10,000 visits per day to one-day-old sites. I can't imagine what you can do with money and a few people in your organization.


Keep up the good fight, NY Times, and I hope you get a lot closer to the truth soon

e.g.: http://freebeacon.com/politics/trumps-twitter-followed-milli...


Just Google "buy Twitter followers".


I would like to see more reporting on the political forces at work on Wikipedia.


Semi-related aside, but fun:

Every April Fools' Day I love to mess with family members. A (hypothetically) good effort:effect ratio I have found is Craigslist: go into some large city's section (NYC, Austin, SF, etc.) and put up a listing for a 'free xbone' or a 'free PSS1', or something similarly typo'd. Explain that you are giving your kids' Xbone away because of a failing grade, you are moving overseas, your boyfriend cheated on you, etc. Then put down the number of the person to be pranked as the contact info. Now, here is the magic part: specify that callers for the free stuff must open the call with a Wookiee sound, must text back only in haikus about salmon, must only refer to the xbox-one as a sausage, etc. It'll take the prankee a few hours to clear that mess up. This works wonders for April 1st jokes, is fairly harmless, and generates a lot of fun stories.

The thing I am trying to say to the dear NYT reporter is that you don't necessarily need bots to do this work for you, and not really even money either; just the promise of something free, for the low price of the time it takes to make a phone call, is usually enough. Greed, I guess, works for a very low commission.


I understand it's all good-natured fun for you, but I'm glad I'm not in your family. You're making the world a worse place and wasting strangers' time.


Chill, it's only on April Fools' Day, as I said.


I guess people downvote you because your hack is a social one and not a technical one :P


No kidding.

I remember when I first moved to the Bay Area in 2009, I met a smart guy at a startup meetup, a serial entrepreneur with a previous exit. I asked him what he was working on, and he said "Oh, I've been writing a Twitter bot that will follow people of your choosing, engage them in simple conversation, and retweet their tweets. It's building a network of followers - I don't know yet what I'd do with that, but it's an asset that's likely valuable to somebody." I ran into him again a few weeks later, and he'd sold the company.

I think I'd heard from either him or someone here on Hacker News, around that time, that 75% of Twitter traffic was bots and automated accounts. Note that that was early 2009, before Twitter went mainstream and when they still had a really easy-to-use developer API.


I'd believe it. For what it's worth, something like 80% - 90% of all email is spam. It's usually better to leave it there and cordon it off so that real users don't notice it. Spammers don't get feedback that their spam isn't being shown to real users.

Also 350,000 accounts is hardly "massive" as the clickbait title suggests. No way that can account for more than a small percentage of traffic.


It's possible that the majority of spam is not meant to be seen by humans, but is instead designed to train spam filters and keep them busy, so some payload still penetrates the shield.


I enjoy stories about this; they don't show up here often because it's assumed to be a solved problem. But the spam wars continue!


Huh.

The landline telephone business has, for a long time now, been compromised by spammers and bots (telemarketing calls and robocalls). I canceled my land line about four years ago after going for three months without receiving a single call I wanted.

It seems the commercial social networks are headed for the same fate. And, they're headed for hardnosed and unpleasant regulation by governments. They probably need to clean up their acts.


My cell phone's the same way now. Maybe one in ten calls, at best, isn't phone-spam. And that's with being on the Do Not Call list. I've been trying to figure out a way to do without an actual phone (except maybe a burner on the off chance I need to call 911) as a result. Text is by far the more valuable side of my phone plan, but mostly because everyone wants to use it for auth these days (I could switch to another messaging service for actual communication with people I know).


Protip: I moved my legacy phone number (the number you get for customer service calls, that I hand out publicly, etc) over to google voice years ago. It follows me from phone to phone as I move carriers, everybody can reach me, and their "Press one to connect to the phone" feature throws off robodialers very effectively.

The only downside is that google voice smells like the kind of service that google doesn't regard as a needle mover and it's probably a question of when it gets axed, not if.


Google Voice has been rolled into Fi now (when you transfer your number into Fi, it becomes a google voice number). I think Google Voice has a few more years in it at the very least.


Google Voice had a revamp just this week: looks like it'll be around much more now. https://blog.google/products/google-voice/ringing-2017-updat...


They did just update the Google Voice iOS (and presumably Android) apps. So that's a good sign.


> My cell phone's the same way now.

Mine as well. I even get fake voice messages in my voicemail, starting with a first name I might or might not know and asking me to call back like it's an emergency, and of course the number is a premium-rate one. I was like wow, that's a whole new level of spamming. I can only imagine the future of spamming: personal info mined from social media plus customized fake messages, just to make you call back a number.


Hi camus2, this is your mother. I'm in a bit of a pickle. Can you please send me $200 via Western Union. Love you bunches!


I have important numbers set to come through; everyone else just rings out silently (and I disabled voicemail). Either you are a contact and I'll ring back, or I just delete the call notification.


In the US isn't there a law expressly prohibiting these calls to cell phones?


Yes, but a number of difficulties remain:

1) There's little way of figuring out who is actually contacting you.

2) If you file a complaint about the number with the FCC, you don't get a response.

3) It's not always obvious there's a commercial entity on the other side. I've gotten a number of calls that appear to try to keep me on the line while preventing any Turing-test-esque conversation from taking place.

4) Some of the spam is legal: technically they aren't for profit, but they serve as a funnel for commercial services. These have been the most persistent with me.


We need a service that will collect complaints, sue on our behalf and send a check, taking their cut.

Have an app that allows you to report a phone call quickly, automatically recording and uploading the entire call (it should record all calls and delete recordings after a few hours if not reported). Then for numbers that have been reported by multiple people, have someone investigate, and if the source is located, file lawsuits.

It's $500 per violation, so if there's enough people lawsuits can work, and the company making the app can reduce investigation costs by distributing it among many different complainers (so even if a private investigator costs several thousand to get contact info, if they called 100 people and they end up getting a 50k judgement it works out).
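The triage economics described above can be sketched in a few lines. This is purely hypothetical: the function name, the three-report threshold, and the investigation cost are my own assumptions, not any real service's logic; only the $500-per-violation figure comes from the comment above.

```python
REPORT_THRESHOLD = 3        # distinct complainants before investigating (assumed)
STATUTORY_DAMAGES = 500     # dollars per violation, per the TCPA figure above
INVESTIGATION_COST = 5000   # hypothetical cost to unmask one caller

def numbers_worth_pursuing(reports):
    """reports: iterable of (caller_number, reporter_id) pairs.
    Returns (number, expected_judgment) for calls worth litigating."""
    complainants = {}
    for number, reporter in reports:
        complainants.setdefault(number, set()).add(reporter)
    worthwhile = []
    for number, who in complainants.items():
        expected_judgment = STATUTORY_DAMAGES * len(who)
        # Only pursue a number if enough people reported it AND the
        # pooled statutory damages exceed the cost of investigating.
        if len(who) >= REPORT_THRESHOLD and expected_judgment > INVESTIGATION_COST:
            worthwhile.append((number, expected_judgment))
    return worthwhile
```

With twelve complainants against one number, the pooled $6,000 judgment clears a $5,000 investigation cost, which is the "distribute the cost among many complainers" point made above.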


For #3, I no longer answer the phone with "Hello" from an unknown number. I wait for the person to say "Hello?" or if the person starts talking I just hang up.


All unknown numbers go to voicemail. Spammers inevitably leave 2-3 seconds of silence, so I just ban based on message duration.

Usually, I report about 5-10 numbers a day.
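The duration heuristic above is simple enough to sketch. The four-second cutoff and the function name are my own assumptions; the premise (spammers leave 2-3 seconds of silence) is from the comment.

```python
MIN_HUMAN_MESSAGE_SECS = 4  # assumed cutoff; spammers tend to leave 2-3 s of silence

def numbers_to_ban(voicemails):
    """voicemails: iterable of (caller_number, duration_secs) pairs.
    Returns callers whose messages are too short to be a real human."""
    return sorted({caller for caller, secs in voicemails
                   if secs < MIN_HUMAN_MESSAGE_SECS})
```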


I used to do phone surveys; that silence is the system connecting you to a human (or your answering machine).


Phone surveys are spam, as far as I'm concerned.

And since they hang up when reaching my machine - definitely spam.


Yes. But scammers don't care.

It does actually work to ask to be put on the do not call list maintained by the caller (not the national registry), even for scammers. Nobody wants to waste resources.


Yes. It's a federal criminal law. That means you have to get federal attorneys' offices interested if you want the miscreants stopped. But shutting down telemarketers isn't nearly as much fun as driving people like Aaron Swartz to suicide, or as profitable as seizing airplanes from drug smugglers.

So, effectively, this kind of thing is legal.


> social networks are headed for the same fate

On the contrary, Facebook is the last haven of peace for me. I only talk to my friends and never get unwarranted messages, spam, mail subscriptions, renewals, ads, scary calls/emails... you name it. Now if I could email via Facebook, that would be awesome.


I already have most of my work-related messaging on Facebook, and it's awesome indeed. Can't wait to drop Skype completely.


You have your work messaging on your personal Facebook? What happens if you change jobs or your relatives start fights with customers?


They could possibly be using https://workplace.fb.com


Funny how this whole thread reads like a commercial.


That's what happens when you have happy users :) I have no affiliation whatsoever with Facebook.


Sure, I believe you. /s


It's not like my ties are not public. Check my profile.


Check my profile idiot.

Regardless of how infuriated you may be by the behavior of other users, please don't resort to name-calling.


The internet's been heading for this fate for a while. Yes, the scale is a lot bigger now (30,000 fake accounts in a network rather than a few hundred), but similar things have been common on large forums and online communities since the late 90s or so.

People have bought fake traffic, users, and comments for that long too, often en masse, with automated systems importing it from third parties like Yahoo Answers.

It's just that the commercial social networks are now the main target, and the scale of these operations has gone up accordingly.


Twitter is extremely forgiving with its account creation. An internet marketing buddy showed me how he could make 20+ accounts on the same IP address without Twitter throwing the ban hammer.


An IP address alone isn't really that good of an indicator. With carrier grade NAT one address could be used by thousands of people.


They probably want it that way so the numbers look good around their quarterly reports.


Being able to create loads of accounts doesn't really hurt anything. Leveraging multiple accounts to spam and abuse signaling is what's harmful.


Is there a special way to do it, or do you just... do it? My understanding is that you are allowed 10 accounts.


And yet social media managers are a real business so they obviously don't care if you sign into hundreds or even thousands (even if it's API access, that would work for spam too).


I receive a few calls a day from recruiters and spammers on my mobile phone. The spammers usually pitch car warranties and home security systems. I stopped answering calls from numbers I don't know long ago, but they still leave voice mail. I recently picked up one of the calls, and it was someone wanting information on my account because "someone opened a credit card in your name and we want to verify it was you". I just hung up.


Does anyone know if any of the social media sites have release info on their relative bot counts?

I'm curious because I've seen the same posts in different places and now see a ton of comments boiling down to "Source?" again. The latter used to be everywhere 8-9 years ago, but those were typically people who actually wanted a source to read, not just spam the question. Now it seems they are spamming it to distract from the real conversations.


No, because they report on growth of active users, which include bots. Their stock price would be reduced if they disclosed bot account numbers.


They also profit on bots through their ad networks.


They now spam cell phones as well, with little to no fear of being stopped. Worse, many phone companies have no real desire to crack down and just keep offering up lines to these companies. If both provider and user were punished, it could be handled.


I get your analogy, but the great thing about landlines is you can selectively forward unwanted callers to anything you feel like. Social media websites don't tend to have anything like that.


I've never encountered this feature myself.

While social media at least has "block" and "report". And the collaborative API-driven blocklists used against organised troll attacks.

Even USENET has a better killfile than my landline.


Look into call blocking, selective call forwarding.


I get just as much if not more telemarketer/phishing on my cell as my landline. Just one data point, for sure, but is there any reason to think landlines are less secure than mobiles?


This seems like a pretty straightforward thing to identify. I remember when the NYT's story, "The Agency", was published [0], some of the fake accounts it had mentioned were still up. Even though the accounts in that story were actually populated by real people, the sockpuppetry was pretty easy to identify. One: the accounts' past tweets right up until they started spreading the news about the fake U.S. disaster were in Russian. Two, all of the tweets of the fake news had almost exactly the same number of favorites and retweets (around 300), and you could see that everyone in that cluster was just retweeting each other.

I'm more fascinated by the spam from Facebook accounts. These show up all the time in relatively popular comment sections, and yet apparently FB doesn't care, or the problem is trickier to automatically flag. For example, this comment [1] is clearly spam... but if you click through to the account [2], it seems to be a real person, with a normal-seeming friend network and mundane photos of life that aren't obviously stock photography. There are a few junk comments (a bunch of "hi's"), but as an outsider, this is what makes FB a lot trickier to analyze, because you don't know how much privacy that user has enabled on their own account.

[0] https://www.nytimes.com/2015/06/07/magazine/the-agency.html

[1] http://imgur.com/a/Rr8d3

[2] https://www.facebook.com/gulfam.raj
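The retweet-count fingerprint described above (a ring of accounts whose tweets all land at nearly the same favorite/retweet counts because they boost each other) can be sketched as a quick heuristic. The field names, the 5% tolerance, and the 80% cluster threshold are my assumptions, not the NYT's actual method:

```python
def suspicious_clusters(tweets, tolerance=0.05):
    """tweets: list of dicts with 'account', 'favorites', 'retweets' for
    tweets spreading the same story. Returns the accounts flagged as a
    likely ring, i.e. when nearly all counts sit within `tolerance` of
    the cluster mean."""
    if not tweets:
        return []
    mean_fav = sum(t['favorites'] for t in tweets) / len(tweets)
    mean_rt = sum(t['retweets'] for t in tweets) / len(tweets)

    def close(x, mean):
        # Within `tolerance` (relative) of the cluster mean
        return mean and abs(x - mean) / mean <= tolerance

    flagged = [t['account'] for t in tweets
               if close(t['favorites'], mean_fav) and close(t['retweets'], mean_rt)]
    # Organic engagement is heavy-tailed; a ring is only suspicious
    # when almost every account matches the pattern.
    return flagged if len(flagged) >= 0.8 * len(tweets) else []
```

Five accounts all hovering around 300 favorites and 300 retweets (the pattern from "The Agency") would all be flagged, while a mix of organic counts would not.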


Some users don’t care when they grant a Facebook app the right to post on their behalf. Or, users type their login credentials into a phishing site which then can log into the account and send spam. The account would look like a real person because it is a real person.


I've seen a lot of cases of people's accounts being cloned and then being used to re-friend their friends. I've always wondered a bit about what they are going to be used for and I suspect this is one of the potential options.


They will contact the friends list and ask for money because they got into an accident, foreign country, etc. This happened to my wife and her friends, but luckily no one fell for it.


Does FB really have an incentive to care though? Obviously it should dilute the effectiveness of their advertising system, but if people still keep throwing money at them...

I'm not in that industry, so I'm genuinely asking


They seem to, based on enforcement of their real-name policy. Granted, if the bots don't click on ads, they may not care...


Bots must have real names. This is purely for marketing purposes so they can tie you back to meat space.


This happens some on the Twitter side as well. I've had two friends' accounts get hacked and turned into subtle spam bots.


I wonder how many of those Facebook spam comments are entirely real accounts that've been compromised (access tokens and/or logins). Would make them even harder to detect.


You don't even need that. You just need a bunk chrome extension or local spyware and they can easily redirect the occasional FB comment through a remote infected node which is a real person, likely logged into facebook, of which they likely have access to hundreds of thousands. Let the person change their FB credentials all they want to try to get out of that.


Is it not counter-productive to state "this is not a fake and not a scammer and not report this" in that Facebook comment?


I'm just amazed at this entire series of threads. "Gosh, who knew violating social network ToS by proxy and creating massive false consensus networks was so easy?"

Everyone. Everyone should know.

We're watching this terrible trend rip apart the entire social proposition of the internet after spending two decades trying and finally achieving buy-in. And here y'all are, hopefuls for a digital economy, cheerfully defrauding the very networks that will probably be the monetization strategy for many startups that pass through HN's doors.

The total lack of any personal responsibility here, or notion of consequence... It stuns me.


I can't parse your third paragraph. Who do you think is responsible for this? I say it is Twitter.

Any public service must plan for its ToS to be broken by bad actors.


There is blame enough for both twitter and the asinine people here who engage in the deliberately misleading communication, fraud and harassment.

Capability is not a moral license to hurt others. If it were, you'd be morally responsible to kill as many people as possible.

My third paragraph is saying this: The US elections were decided by and large by social media plays. There are hints of illegal use by campaign actors, but even beyond that, it was the first election where widespread bots on Facebook and Twitter were engaging the electorate. That and the gutting of the VRA are definitely factors in why the Democrats lost while still winning the popular vote.

I've been in this industry since a time when I was a teenager being laughed at for texting ("It'll never catch on."), using social networks ("Why not just call them?"), and reading online news ("WIRED doesn't put real journalism on their website, and slashdot isn't news"). Many people who are on the internet today take for granted that society has accepted this state of affairs, but it's the work of decades of amassing respectability and providing value.

And now in the public eye: dismantling the ability to associate and speak freely because the majority of even first world rich actors cannot use it responsibly.

Sure, there is blame for Twitter and Facebook for ignoring the problem. But the people here relating in a chipper tone how they're undermining the system because, "Tragedy of the commons?" They're so stupid they're literally biting the hand that feeds them because it's a rush to commit minor fraud.

(Aside: It's very sad that Twitter fired blake so long ago, because he predicted exactly this would happen. Twitter's management didn't want to think about it, so they made him leave. Typical execs.)


Ever since email spam, we have learned that systems have to be made resistant to abuse.

There is enough blame to go around, but it is reasonable to blame Twitter. It is not reasonable to expect bad actors to go away. Systems designed on this assumption will always fail.

Yes, we can rightly rail against the people abusing the system, but ultimately there are 7 billion people and expecting all of them to be ethical is foolish.

However, we should expect Twitter to behave ethically and hold it to account when it so clearly fails.

By all means, keep fighting the good fight, but the only thing that can realistically improve the situation is Twitter.


> Yes, we can rightly rail against the people abusing the system, but ultimately there are 7 billion people and expecting all of them to be ethical is foolish.

I don't really get this. You seem to agree with me, but then also seem to be chastising me for blaming individuals here proudly abusing the system?

> By all means, keep fighting the good fight, but the only thing that can realistically improve the situation is Twitter.

The HN mods could grab the IP history of the fraudster user and pass them to twitter. That'd make the world incrementally better for a small period of time. Who knows, maybe Twitter could bring a lawsuit to bear. Technically that dude is describing a felony.


The guy above understands the Tragedy of The Commons. The Tragedy is that there will always be bad actors. The solution is to design systems that will reduce the impact of these bad actors.

In this case, make a system that strictly enforces the verification so multiple accounts can't be created.


Maybe I can explain better. There are two concerns. One is, what is ethical and right for a person to do? On this we agree. Building botnets to spam Twitter is not ethical and people shouldn't do it.

The other concern is, what should we do about it? This is where I "chastised" you, because focusing on the ethical behavior of members of the public is generally useless. If the people commenting here didn't do it, someone else would. That doesn't excuse them, but it does mean going after them isn't an effective strategy.

Imagine trying to fight email spam by shaming individual spammers. Many people have done it. It doesn't work. If those people had redirected their energy towards fixing the system, rather than the people, the spam problem could have been solved decades ago.

> The HN mods could grab the IP history of the fraudster user and pass them to twitter.

Besides setting a bad precedent for HN, this would be useless. Twitter already has access to much more information than the BBC did when they did their investigation. Twitter simply doesn't care to act on it. If it was a priority for Twitter, they would fix it, certainly not by bringing a lawsuit, but by applying a technical solution across the board.

By blaming the bad actors, you take heat off of Twitter, and encourage people to waste their efforts doing things that will never make a difference.


> because focusing on the ethical behavior of members of the public is generally useless. If the people commenting here didn't do it, someone else would. That doesn't excuse them, but it does mean going after them isn't an effective strategy.

But... we're talking to the people that did. It is no hypothetical, no question. Past tense. Done. Did.

I'm not expecting this to axiomatically solve all twitter abuse. I do hope it will spark a conversation about how absolutely ruthlessly amoral many people here are. I hope young people looking for funding for a startup will think twice when they see what kind of environment HN creates. Do you really want to plant your flag and take money from people who enable fraudsters from the very ecosystem you plan to engage with?

I point out people like that all the time in many venues. Because I think that raising awareness of the utter amorality of the software industry means more amoral assholes will be penalized for their actions. Perhaps this is naïve, but it is a conceit I will not easily give up.

> Besides setting a bad precedent for HN,

Yeah, the bad precedent of, "Don't boast about how you're destroying the ecosystem we're trying to build a living on." We wouldn't want to disenfranchise the wretched thieving toe-rags who think making political twitterbomb networks is fine, now would we?

> Twitter already has access to much more information than the BBC did when they did their investigation. Twitter simply doesn't care to act on it. If it was a priority for Twitter, they would fix it, certainly not by bringing a lawsuit, but by applying a technical solution across the board.

It's curious how the instant I mention sending data to twitter for enforcement suddenly that, too, is useless. We should focus on making twitter do something but since they don't care we shouldn't bother. Lower our arms from supplication and sink into the mud, I suppose?

> By blaming the bad actors, you take heat off of Twitter,

No see, I have enough incandescent bile that I can do both. But you seem to want to do neither.


I agree Twitter should be taken to task, but it seems naive to think Twitter cares and is unable to solve the problem when so many indications point to spam and bots being a massive part of their traffic and them looking the other way.

Around the election, I noticed on every Trump tweet the second or third reply would be someone selling a "liberal tears" mug. They had a set sequence of posts and replies leading up to that one with the link and they hit every single one of Trump's posts like clockwork. This went on for weeks. We are talking about blatant and totally unambiguous commercial spam on one of the highest-profile Twitter accounts. If they don't even deal with the trivially obvious, what are they going to do about the sophisticated attacks which just so happen to inflate the numbers that affect their revenue?

The precedent I am talking about would be HN moderators getting involved in detective work and in reporting people, a huge and unrewarding task, in this case, for a company that perfectly obviously doesn't care, and is just going to drop the information on the floor. And what are they going to do with one IP address, anyway? I understand your outrage, but it just isn't practical.

If you want to make Twitter do something, outrage directed at their top management, where it belongs, may have an effect, if it gets picked up by the press and starts to affect their share price or advertisers. Even that is a long shot.

Honestly, if you really care, I hope you can find some people building something better and collaborate with them, rather than trying to make Twitter better, which I think is unlikely to ever work.


What stuns me is that we all know these junk accounts are easy to make, purely by how easy it is to make our own accounts.

We know they cost nothing to us, and so, any time we find a website and approach it as a total stranger and create a cheap account quickly, we know other strangers are capable of the same.

So, clearly no substantial population is buying into fake news.

Those that do, are likely also paying hundreds of dollars on craigslist for viagra, using escrow accounts.


I don't think the general population realizes how easy or effective automation is in these contexts.

But also those ads are more about delivering exploits than actual product sales.


The Tragedy of the Commons states that I am free to exploit this flaw. But we as a society should probably do something about it.


No, idiot, the tragedy of the commons is a tragedy, not an ethical license.


You are what I am talking about.

You are part of society. Your obligation is not only to find better ways to stop it but to NOT DO IT. We are all free to make terrible choices that hurt others. That doesn't mean we're compelled to do so because, "Welp, I guess that's just how it goes."


Easy for you to say when you've had a company of yours acquired for millions.

What about those that are struggling to find a place to sleep, not give our lives up to commuting, and pay our student debts?

What about the Rights Approach to ethics? Don't I have a right to be able to eat?

What I am saying is that we as a society (but mostly the leadership of Twitter) should act to make their service not a Common ground to be exploited.


In their recent SEC filings, Twitter estimates around 5% of MAUs are spammers or bots. They also report about 317 million MAUs, which works out to roughly 16 million monthly spammers or bots.

Numbers come from their 2016 Q3 filing https://investor.twitterinc.com/secfiling.cfm?filingID=15645...


I think that's an optimistically low estimate.


Why ?


It's not possible to be completely sure that 5% isn't a reality-based estimate, but it's typical for services like Twitter Audit and BotOrNot to identify half or more of a celebrity's followers as bots.

For example, as I type this BotOrNot estimates that 34% of @RealDonaldTrump followers, and 48% of @BarackObama followers, are real.

Just from being a nobody on Twitter who uses it almost every day, I'd also agree that 5% is laughably optimistic.


They are estimating active users, so it's a little hard to compare to follower accounts. A spam account from 4 years ago that followed 1,000 people and stopped would show up in 1,000 different places despite not being an active user.


What makes you think there is no overlap between these bots ?


The question this sort of thing leaves me with is "Why does {Twitter/Facebook/Google} permit this? What are they getting out of it?"

It's pretty clear what the bot guys get out of it: paid promotion services, paid followers, etc. They can monetize "fame" through the robotic horde. But as this article and ones before it point out, these networks are generally quite easy to spot. So why not take them out?

It probably isn't because they can pad 'subscriber growth' or 'MAU' numbers; those appear to be only small components of that number. And while I could imagine it may be hard to purge them at the moment, it's been a problem long enough that someone in engineering must have figured out a system for taking down large numbers of accounts.

The only thing I can come up with, and it is way too tin-hattish to really count, is that it creates an "observable" for the underside of the Internet. By watching what people are asking the twitter bots to do you can observe other objectives that are perhaps less observable. There are some obvious customers for that but I don't think they actually pay for that (except perhaps by buying access to the Firehose)


They don't "permit" this. I used to work on anti-bot technology at Google and we put a lot of effort into fighting botting, in particular on Gmail. We were quite successful at the time. Bot controlled accounts went from being a problem that was threatening the viability of the service to being fairly trivial. I don't know what it's like these days.

Anti-abuse teams tend to be relatively small. We aren't talking hundreds of engineers here, we're talking like maybe 10-15 engineers. When you look at the relative effectiveness of different networks, you can be comparing something as trivial and random as whether one or two people happened to figure out a good strategy or not. There is no need for conspiracy theories.

The costs of accounts on different networks is a reasonable proxy for effectiveness at bot fighting. My favourite site has always been this one:

https://buyaccs.com/

From a quick check it seems Twitter is getting better at it. Prices used to be more like in the low $20/k range. Now the low end accounts are $45/k and if you want PVAd/profiled it's up near $90. Compare to Gmail where the price is more like $280/k. Or Facebook EN accounts, $120. Good to see my old colleagues doing such a good job!

In the case of Twitter, my view is that their relatively low performance on botting is a side effect of the whole social justice / campus politics movement. I heard from a friend who works there that their focus was switched almost entirely to fighting human abuse like people being nasty to each other, political extremism, terrorism, etc. If you remember, just a few years ago Twitter was being attacked in the media for being filled with trolling and nastyness. So of course that became their priority. Anti-botting took a back seat.

Nowadays there's suddenly a bout of anti-Russian hysteria. Suddenly bots are in focus again. There are conspiracy theories about botted accounts being used to convince people to change their political positions. Having worked in the industry for years I am deeply skeptical about this. I never once saw bots being used for political ends or anything even approaching it. There is a lot of disinformation out there about Russia right now from western sources, and a lot of paranoia that doesn't seem to be justifiable.

I'd guess RT does far, far more to create pro-Russian support than anything happening on Twitter does.


> From a quick check it seems Twitter is getting better at it. Prices used to be more like in the low $20/k range. Now the low end accounts are $45/k and if you want PVAd/profiled it's up near $90. Compare to Gmail where the price is more like $280/k. Or Facebook EN accounts, $120. Good to see my old colleagues doing such a good job!

Perhaps there's just more demand?

> Nowadays there's suddenly a bout of anti-Russian hysteria. Suddenly bots are in focus again. There are conspiracy theories about botted accounts being used to convince people to change their political positions. Having worked in the industry for years I am deeply skeptical about this. I never once saw bots being used for political ends or anything even approaching it. There is a lot of disinformation out there about Russia right now from western sources, and a lot of paranoia that doesn't seem to be justifiable.

The fact that you worked on anti-botting at Google and didn't see the technique you were tasked to counter used for political ends might be leaving you biased. Without pointing fingers, I just want to state that there's tremendous political opportunity in spreading misinformation via the internet (via any medium, really); it's amazing that in 2017 this isn't clear to everyone.


My experience was that prices never really changed unless the difficulty of creating/keeping accounts went up. Demand was pretty stable over time, only supply changed.

And yes, of course my experience of what bots were used for left me "biased" as to what they're used for. How is actual experience bias?


People who make money by being popular probably like it.



I think this story is getting more coverage than the one from a few months ago that showed that, for instance, both Clinton and Trump had millions of fake accounts following them:

> Per eZanga, 4.3 million—or 39 percent—of Trump's more than 11 million Twitter followers as of August came from fake accounts while the other 6.7 million are actually real users. And for Clinton, 3.1 million—or 37 percent—of her more than 8 million followers were fake while 5.3 million come from real accounts.

http://www.adweek.com/news/technology/more-third-presidentia...

350,000 is about 0.1% of Twitter's user base. Does anyone here think the number of fake accounts isn't orders of magnitude higher than that?


Anyone on twitter could've told you that. When you're followed by "@hotttladie3" & "hotttladie453892", it's only reasonable to assume there is also an @hotttladie4 through @hotttladie452891.

The problem with a user-reporting-based plan for acting against fake accounts is that the psychological motivation for using the service, if not to disseminate news or maintain a closet standup-comedian habit, is affirmation. In almost every case, the user has an incentive to keep their numbers up, whether those followers are real or not. A huge part of the game is the number of followers.

Personally I'm just surprised that they've moved from advertising cam sites (which could conceivably act as a secondary, almost passive income) to quoting star wars novels while inflating numbers for people who pay for followers. That's the aspect that confuses me, and makes me feel vulnerable.


Which part of that confuses you? If it was the quoting SW part, my understanding is that they used that as a simple source of human language.


There was a "Ask HN:" post about Twitter bots the other day: https://news.ycombinator.com/item?id=13497235

Strange that these bots aren't spammy but are posting every minute or two. I wonder what they're for...

https://twitter.com/superpolice001

https://twitter.com/superpolice002

https://twitter.com/superpolice003


There appear to be at least 19 of these - possibly more, but I haven't explored the number space save by manual URL editing, and -021 was the first to 404. -019 also 404s; -017 has been suspended. I haven't explored much further because to do so efficiently would almost certainly require the creation of a Twitter account, and I'm not interested enough in this to bother.

Interestingly, #20 hasn't been updated since 26 August last year, and its timeline contains only the two entries "temato" and "mecagoentusmuertos", which are Spanish phrases translating respectively to "I kill you" and "I shit on your dead". Beyond simple unneighborliness, whether they have any significance and what that might be I have to leave as an exercise for the reader, because I have no idea.
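If anyone else wants to probe the number space without clicking through manually, here's a trivial sketch that just builds the candidate URLs (actually checking which ones exist would need HTTP requests or a Twitter API token, which I haven't bothered with either):

```python
# Enumerate the sequentially-named accounts seen in this thread:
# superpolice001 through superpolice021 (-021 was the first to 404).
candidates = [f"https://twitter.com/superpolice{n:03d}" for n in range(1, 22)]

for url in candidates:
    print(url)
```

From there it's a one-liner to feed the list into curl or a status-checking script.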


Almost like one of those RSA/Google Authenticator/etc rotating numbers... always 6 characters, seemingly random, somewhat pronounceable. Weird.


As it is tweeted exactly every 2 minutes, it could actually be a kind of rolling confirmation or authentication code.
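If it really is an authenticator-style code, it would presumably be derived HMAC-style from a shared secret and the current time window, roughly along the lines of RFC 6238 TOTP. Pure speculation, of course; the secret and the 2-minute period here are made up to match the observed posting interval:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period: int = 120, digits: int = 6) -> str:
    # Counter = number of whole periods since the Unix epoch (RFC 6238)
    counter = int(time.time()) // period
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A bot posting `totp(secret)` every 2 minutes would look exactly like these feeds: short, seemingly random, verifiable only by someone holding the same secret.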


Dead man's switch?


Or like a numbers station. Twitter would make a lot of sense for that, actually.


Indeed! That was my first thought and was the reason that Ask HN post stuck with me.

See also: https://theawl.com/the-real-weird-twitter-is-espionage-twitt...


What does it remind you of? I haven't used GA.

They're following the format 'bababa', where 'b' is a random consonant and 'a' is a random vowel.
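If that's right, generating names in that pattern is trivial. A speculative sketch of what the generator might look like:

```python
import random

CONSONANTS = "bcdfghjklmnpqrstvwxyz"
VOWELS = "aeiou"

def bababa() -> str:
    # Three consonant-vowel pairs: random but pronounceable, always 6 chars
    return "".join(
        random.choice(CONSONANTS) + random.choice(VOWELS) for _ in range(3)
    )
```

Which is also why they read as "somewhat pronounceable": strict consonant-vowel alternation is how most fake-word generators get that effect.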


Modern Numbers Station.


> A Twitter spokesman said the social network had clear policy on automation that was "strictly enforced".

> Users were barred from writing programs that automatically followed or unfollowed accounts or which "favourited" tweets in bulk, he said.

I am constantly getting followed by accounts with tens or hundreds of thousands of follows and followers, usually checkmarked accounts though I've never heard of them. It's painfully obvious these verified users are using bots to randomly follow people, both to spam my inbox with "you have a new follower" messages and to encourage people to "follow back".

But Twitter does nothing about it. It's not "strictly enforced" at all.


I have a twitter account but never use it (in fact never tweeted anything). Every few years or so I get curious and log in and, lo and behold, I have 10-20 more followers than I did last time. Not sure if they are fake, bots, random people, who knows?


This has become obvious to me after getting a few blog posts at the top of HN.

I just search for my domain on Twitter, and there are dozens of "people" who do nothing but retweet hacker news articles. They are presumably doing this for some kind of "reverse" reputation.

I'm interested if anyone has any more insight on this phenomenon. Maybe it's as simple as convincing some naive users to follow them with links vetted as high quality.

Examples:

https://twitter.com/bartezzini (123K tweets, nothing but HN-type links and comments)

https://twitter.com/EggmanOrWalrus (15.7k tweets, ditto)

https://twitter.com/MarkBeacham


UCL has dropped the ball on marketing itself. If this had been a student at MIT or Harvard, the article headline would have started with "MIT/Harvard Researcher ..." instead of burying the school's name in tiny print in the middle of the article.


That's true for most universities. Some universities get billing in the headline, most do not. This is not a function of the press department, but about the editor's perception of the public's perception.


Bot accounts, not fake accounts. I don't even know what the latter means—plenty of people don't associate twitter with their real name. And why would you!

Secondly, of course there are this many. There are probably many more. I run several bots myself; there's nothing wrong with this.

Twitter's TOS is only as good as its enforcement, and if there's anything twitter is terrible at, it's having any control over its community.


"Fake" implies deception: it's fake because it counterfeits or forges something else that is real.

An anonymous or pseudonymous Twitter account run by a human, in the way that a human is expected to use a Twitter account, is not a fake account: it's real, just pseudonymous. A bot account that's clearly a bot, like @big_ben_clock or @choochoobot, isn't a fake account either: it doesn't pretend to be anything other than what it is.

From the article: "These accounts did not act like the bots other researchers had found but were clearly not being run by humans."

One thing a network of fake accounts could be doing is inflating follower counts. A follow from a pseudonymous account that corresponds to an actual human isn't fake. Even a constant factor or constant term from a small number of humans with multiple accounts isn't particularly deceptive. But thousands or millions of follows from accounts run by a handful of humans is deceptive.

Another thing a fake account could be doing is spreading propaganda by creating the impression that many people agree with a political opinion, when these "people" are just canned responses, or humans assisted by automation (but capable of making human replies across large numbers of accounts).


Deception is a great feature. Why is that bad? I don't want Twitter to know my name. Don't exclude me for valuing my own privacy! I'm certainly not "fake", and neither are my interactions.


I'm not sure how you got that interpretation of "deception" out of what I said. You're not fake, and I'm not excluding you. You're just also not deceiving. I certainly don't think your real name is Dgfgfdagasdfgfa.

To repeat what I posted: An anonymous or pseudonymous Twitter account run by a human, in the way that a human is expected to use a Twitter account, is not a fake account: it's real, just pseudonymous.


A fake account on social media (as far as I have seen) can be things like alter-egos. There was a subculture on Facebook of making fake 'emo' accounts. They used to clog up my suggested friends list.


Twitter has become unbearable to use for news. Every single submission from news stations to President Trump is filled with combo-replies from people. 1 dude will bombard the tweet, then a chick, then a different guy.

Example: http://i.imgur.com/zbyM7YG.png

You need to scroll down at least 5 load-more's to see regular people tweets. It's a really terrible user experience that Twitter needs to solve.


>You need to scroll down at least 5 load-more's to see regular people tweets.

I personally don't find much added value in those tweets, "regular people" or not. They're basically YouTube comment level at this point.


And it was at this moment that andai realised that YouTube comments are written by regular people.


"YouTube comments = garbage" is basically a meme now but I certainly find most general usage public internet comments, especially relating to politics, to be largely unreadable. That may come off as sounding elitist, but low effort opinion spamming is really not my cup of tea.


Oh really? I've seen a lot of bot like activity


I meant (with a great sadness in my heart) that YouTube comments do indeed come from relatively ordinary members of the human species.

I might be wrong, but I don't think we can simulate flame wars yet.


I remember hearing about this radio host who was competing with another host to get more followers. Then over about a week one of his fans who owned a bot net gave him 100k followers. He sent a DM saying "you're welcome, enjoy".

He already had about 250k followers so it wasn't a huge spike in that context but it was interesting to think of the implications of that when you come across a random account with 100k followers and 100 following... they might not be as influential as it seems.

Also, this was about a year ago, and last I checked his follower count was roughly the same.

Wasn't there a story on HN about a guy who created a fake identity and twitter account with 20k followers and got invited (and paid?) to speak at a tech conference?


Twitter says 5% of their MAUs are bots, but this is surely an underestimate. Facebook estimates nearly 9% fake users, which is also likely too conservative. Bot traffic in ad fraud is even higher (30+%? every study is different; it seems difficult to measure).

So we have 5-9% as a lower bound and perhaps we can look at e-mail for an upper bound with nearly 60% spam by volume.


Is this really a surprise?

I don't even use Twitter, and I have 4 different Twitter accounts. The amount of fake accounts must be staggering, but there's no way Twitter will cull them otherwise their MAU numbers will tank, and along with it, their ad rates, etc.


This is a great way to spread fake news and get people to see it/engage with it.


Don't know why you were downvoted, as this is one of the most important uses of bots.

Maybe people are sick of hearing the term.

There also seems to be some confusion about its meaning and its purpose.


- messages being posted only from Windows phones

The smoking gun. What real person still has one of those?

Disclaimer: I owned a Nokia Lumia 920 for > 2 years.


A good friend of mine, who is a real, dyed-in-the-wool, oldschool Microsoft fanatic (it's an amusing quirk to me) still has and uses his.

That said, swarms of Windows Phones does smell funny.


Maybe the OAuth token for a popular WP app got leaked, and now anyone can authenticate with it?


All OAuth tokens, even the official ones, are leaked at this point. Google them.


This! :) :) :)


It is trivial to create a BOT type twitter account.

1. Get burner email and phone number

2. Post bot to DO, AWS or run it on a raspi

3. ???

4. Profit from all those sweet followers.

Many of them look entirely "real" or they can be hilariously obvious. I would bet it is happening on Facebook, Instagram, Snap and any other social network where "value" is derived from followers/eyeballs.


The only surprising detail here is the small number mentioned (350K).

Twitter fake accounts are expected to be counted in MMs.


If you've not read it already, here is an excellent interview with Andrés Sepúlveda, who helped to "rig elections throughout Latin America for almost a decade", partly through using Twitter bots and similar techniques: https://www.bloomberg.com/features/2016-how-to-hack-an-elect...


Anecdotally I've seen a huge uptick in spam/bots following my main account on Twitter since the beginning of the year. Probably 50%+ of my new followers have been accounts with no profile picture, no tweets, and close to zero followers.
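For triaging my own follower notifications I've been toying with a crude heuristic along exactly those lines. The field names below are hypothetical (not actual Twitter API fields), but the signals are the ones described: default avatar, empty timeline, almost no followers:

```python
def looks_like_spam_follower(profile: dict) -> bool:
    # Crude heuristic: default avatar + zero tweets + near-zero followers.
    # Any one signal alone is weak; the combination is the tell.
    return (
        profile.get("default_avatar", False)
        and profile.get("tweet_count", 0) == 0
        and profile.get("follower_count", 0) < 5
    )
```

Obviously this misclassifies genuine lurkers who just signed up, which is presumably part of why Twitter is cautious about mass-suspending on these signals.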


Everybody talks about bots but let's not forget that there are plenty of bot-like accounts controlled by humans from poor countries for (fractions of) pennies. You don't need a "hacker army", just a couple of bucks and a third-world nation with good enough infrastructure. Look up "click farms".

The problem with this is that while bots can be detected (even if doing so is an arms race) it's much harder to detect "bot" humans.


This is relevant as well: https://play.google.com/store/apps/details?id=com.hitwe.andr... TL;DR: 'HitWe' is a Tinder-like dating app with 10M+ installs on Google Play (haven't checked iTunes). I came across it by browsing dating apps for fun. Most of the bad reviews (1-2 stars) are from people who claim it's 95% fake profiles etc. Many claim that women there just ask for their email; some even claim to have paid some of the women, who then disappeared (but that's being too naive, IMO; never pay someone you have never met). There are many, many good reviews, but most are 5 stars without any text (which, to me, is another red flag).

Anyway, the most interesting part here is that they have actually managed to fool Google up until this very moment! Google recommended the app to me. Spam does work!

PS: googling (ironic, I know) 'hitwe app scam' showed me this as the first result: http://www.datingbusters.com/hitwe-com-exposed-for-fake-prof...

I'd be interested in a response from one of Google Play's spam engineers/managers.

edit: spelling

edit2: it took me 0.5 secs to start sensing that it's a fake-boosted app. A human reviewer at Google could have just scanned the top ~100 dating apps in a single day and mapped out the fake ones. What do you think?


Oh, it's a lot more than 350k.


Spam was a serious and growing problem before the major providers got serious about filtering. For now I just report clickbait feeds as spam and hope for the best.


I thought, "ha, he'd need a botnet to make any noticeable impact", and then wondered: has anyone actually tried that?

Using a network of bots to detect and report obvious bot accounts.


In the same spirit, there is at least one botnet dubbed Linux/Moose [1] dedicated to penetrating home routers for the sole purpose of creating fake followers on social networks. [1] http://www.welivesecurity.com/2016/11/02/linuxmoose-still-br...


Possible to automate, but limiting your report/block activity to spam-accounts that follow your real account would have the most impact.


In '08 or '09 I met a guy who was creating hundreds or more accounts and paying an overseas outsourcing company to curate long-term timelines properly (so not just spam).

He genuinely didn't know why he was doing it at the time (he'd previously been heavily involved in gaming Google rankings for credit card companies), and it came at significant cost, but he was completely sure it would be useful at some point.


Related research paper, "The 'Star Wars' botnet with >350k Twitter bots":

https://news.ycombinator.com/item?id=13445289

And another paper covering the topic: https://news.ycombinator.com/item?id=13445295


I get follows from obvious bots (bio: "click here for 5000 follows") very regularly on Twitter, Instagram, and Facebook. Instagram almost always removes the account, Facebook has always responded the account "doesn't violate the rules", and Twitter I'm not sure about, but I always block and report that garbage.


I thought everybody knew that a big chunk of Twitter works this way. I got to see it in action just a couple weeks ago with a pro-Trump network of fake Twitter accounts. I had the misfortune of having one of those @TrumpRulesMAGA4Eva style Twitter accounts reply to something I'd written with a standard pro-Trump meme. A couple people liked and favorited within an hour or so and then nothing. Then late on a Saturday night several days later, a whole network of Twitter accounts favorited and retweeted it. Most of them had similar style names but some had generic ones or random character strings. About half had an egg profile picture. Many of the others had tried to look legitimate with real names and pictures of people but tineye reverse searches showed 10,000+ hits on the images. Looking at the histories of the accounts, they'd all been used similarly for months.


SoundCloud is full of fake accounts. I get a couple of new followers every now and then and it feels like 95% of them have mentions of "buy followers". I always report spam bots whenever I spot them on SoundCloud, Tinder, Facebook and Reddit, so though I only have very few followers on SC I think all of them are real people at least. It is very important to always report spam bots and other types of disruptive fake accounts and I wish that everyone did. If everyone did then it would be less profitable to run such accounts and then there would be fewer of them and we would be bothered much less frequently.


In a time when bots and AI are all the rage, it raises the question: are some bots on Twitter more useful and insightful than actual people?

I remember a Twitter bot (the name escapes me now) which would crawl Pastebin and tweet updates when passwords/DBs were leaked. It had lots of followers (security researchers who found it very useful).

Most humans on twitter are boring and waste people's time with youtube style comments. Most bots are spammy and waste time too. Why not allow both and let people decide who they follow/ban?


I found it interesting that a Member of Parliament on Twitter had hundreds of obvious bot followers. I presumed he was buying 'popularity'. Maybe something more sinister. Lots of them were very recent.


Hundreds out of how many? I get a fair number of bot followers on an account I don't really care about.



MAUs affect their stock price. MAUs are in their 10-K IIRC; does the SEC look into this? I'm not sure.


There is a whole industry of buying and selling social media "followers".

So I guess this is a product of this.


I thought this was obvious to everyone with a Twitter account? I get random bot follows all the time.


Standard stuff, you want media attention for your blog, twitter whatever? You'll need to have a bot army to upvote your own stuff, relink and like your content.

Having a botnet is now an essential part of building your social media following.


On the same topic. I think he might have discovered the same botnet. http://sadbottrue.com/article/51/


To me, it seems like a set of QA/test accounts for Twitter to do load testing and whatnot.


What an interesting story. Mysterious and strange. I'm going to ask about this.




For everything people dislike about curation, you avoid these Sybil attacks.


Meanwhile Twitter banned yet another political YouTuber, @SargonOfAkkad; it doesn't seem like he did anything against Twitter's ToS. Fed up with Twatter, to be honest.


Yeah, I watched a few of his videos on YouTube. He makes fun of feminists and SJWs in his videos, so Twitter doesn't like that stuff and calls it harassment or bullying or whatever.


My bots would never be caught by these criteria; they've been using NLP for years.


Twitter is broken. No news here.


350,000 accounts is not a "massive network". This is all hype.



