I don't really understand the implied connection to the NSA or GCHQ. My theory as to why Snowden pops up in some of the tweets is that the bots are probably using something like Markov chains to generate real-sounding tweets (as far as Twitter's spam detection is concerned) sourced from trending content.
The accounts would then be sold as followers, which is an attractive service to some for a variety of reasons.
The Markov stuff seems like such a poor approach, though you can get by with it better in 140 characters than you can in other places.
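For anyone curious, a word-level Markov chain that stays under 140 characters really is only a few lines of Python. A toy sketch, not whatever this particular botnet actually runs:

```python
import random
from collections import defaultdict

def build_chain(tweets, order=2):
    """Map each (order)-word context to the words that follow it in the corpus."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, max_chars=140):
    """Random-walk the chain until the context dead-ends or 140 chars is hit."""
    key = random.choice(list(chain))
    out = list(key)
    while key in chain:
        nxt = random.choice(chain[key])
        if len(" ".join(out + [nxt])) > max_chars:
            break
        out.append(nxt)
        key = tuple(out[-len(key):])
    return " ".join(out)
```

Because every output word comes straight from real tweets, short outputs tend to look plausible; the seams only show in longer text, which is why 140 characters flatters the technique.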
Twitter provides a public stream of tweets. Simply pull tweets out of that if they don't contain another user's name, a URL, etc., and distribute them to one of your bots, weighted by chattiness. Get fancy and add some sort of Bayesian filtering to decide what category bucket a tweet falls into, and give your bots interests.
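The filtering and weighted distribution described there is trivial to sketch. The "chattiness" field and bot records below are made up for illustration:

```python
import random
import re

# Reject tweets containing @-mentions or links, per the scheme above.
MENTION_OR_URL = re.compile(r"@\w+|https?://\S+")

def is_reusable(tweet_text):
    """Only recycle tweets with no user names or URLs in them."""
    return not MENTION_OR_URL.search(tweet_text)

def pick_bot(bots):
    """Choose a bot weighted by its (hypothetical) 'chattiness' score."""
    return random.choices(bots, weights=[b["chattiness"] for b in bots])[0]
```

A chatty bot then simply reposts more of the recycled stream than a quiet one, which is exactly the weighting the comment suggests.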
Maybe? They never did anything with the proof of concept ones I worked up, but I never did anything with them other than basic interactions with the network.
I'd guess there are enough duplicate tweets ("another crappy day at work", "looking forward to dinner tonight", "going to see my sister this weekend!", etc.) that I doubt it'd be a very useful heuristic.
35,000 was the tip of the iceberg; it was a small sampling, not an exhaustive list. I'm only one guy with no power-toys. In all honesty, this is probably a million-account-plus botnet.
There is a possibility that this was a botnet that sells followers. I know that this one wasn't into retweeting or linking to anything, though such botnets do exist.
I wasn't able to prove that's the case with that particular network, as there were a number of "following nobody, and nobody follows" accounts. But I've found another that is definitely in that business. How does one person with only four very banal tweets get 4,000 followers? It turns out that his followers are indeed bots. I will be collecting this network too. Maybe it will turn out to overlap.
PS: I won't insist on the theory that it's a government-sponsored bot. It could well be commercial, or just run by some jerk.
> as there were a number of "following nobody, and nobody follows" accounts
This is actually very common. Twitter recently took down a large number of bots and published a paper about their classification. One of the attributes of a Twitter bot account is its reputation: by remaining idle, it can establish a reputation within Twitter's spam detection system, becoming more valuable when sold.
I fail to see the reason behind your panicked verbiage. You seem genuinely upset by this discovery, care to explain why?
Also, it'd be interesting to read some of your reasons for believing this is in any way, shape, or form a "government sponsored bot". By saying that without any reasoning, you're placing yourself squarely in the "tinfoil hat" camp.
Seriously, I'm not in the tinfoil camp. I have perhaps read a few too many Snowden documents, in detail. There are a few things I've seen concerning the use of social media, things like "can we craft a message to go viral".
I really didn't like the message that linked Snowden to HAARP, for all kinds of reasons. It creates a false link between his documents and the tinfoil theories surrounding HAARP (global assassination, etc.), and I have indeed seen some fake articles that tried to make precisely that claim. That kind of link would also put Snowden, and those who believe that he did the right thing, into the tinfoil-crazy camp. So it smelled bad. Maybe it's just random gibberish, like a lot of the messages. But it really smelled.
Why GCHQ vs NSA.... the most vile tricks we've learned about come from them. The Neo-COINTELPRO stuff.
Look, I agree that this is probably commercial spamming, not government, but let's cut out the "tinfoil hat" bullshit accusations, OK? It's ad hominem and it's often meant to suppress speculation about possible government misdeeds.
Many of those who were accused of wearing tinfoil hats because of their views on government surveillance turned out to be pretty reasonable after all, no?
>Many of those who were accused of wearing tinfoil hats because of their views on government surveillance turned out to be pretty reasonable after all, no?
No. In the security community much of this stuff was given serious thought, and systems were developed to defend against these types of things (see TPMs). The only real surprise was the way it was being done, with reckless abandon. There is a difference between the tinfoil-hat approach of just envisioning things evil organizations/governments might do and assuming an intelligence-gathering agency is gathering as much information as it can.
The more I think about it, your argument is so bad I wish I hadn't even wasted the time responding. You are just providing a completely unrelated example to try to give weight to baseless drivel. You could just as easily have used the same argument to support the claim that the moon landing was faked.
Obfuscation is a first line of defense for social-network botnets.
If you remember, a couple of weeks ago someone uncovered a massive botnet on Facebook that liked the most random things. This botnet could also be paid to like a specific page or person, but those likes would be lost in the noise of it liking everything else.
Even if there is a lot of noise, I would try to track down any correlations between these bots and any other twitter accounts and/or their content - to see if even something small (R^2 < .20) falls out.
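For anyone wanting to hunt for those correlations, the R² of a simple least-squares fit is easy to compute by hand. A minimal sketch; for real analysis you'd reach for scipy or statsmodels:

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple linear fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0 or syy == 0:
        return 0.0  # no variance in one series: no linear relationship to measure
    return sxy ** 2 / (sxx * syy)
```

Feed it, say, daily bot tweet volume against the volume of some suspect hashtag; even a weak R² below 0.2, if consistent, would hint the accounts share a controller.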
Twitter for sure knows about it though since they have server logs.
This is not a particularly informative article but I see great value in owning a Twitter botnet. As one example, imagine negative news about a major public company starts trending. That would certainly be a money-making opportunity. There are also hard-to-disprove rumors that you could blackmail high profile people with. Celebrities probably don't want to be trending on Twitter for having an affair, even if it is a fake report.
So I am not at all surprised that people are investing in building up tons of what appear to be legit, active accounts.
"Among the core self-identified purposes of JTRIG are two tactics: (1) to inject all sorts of false material onto the internet in order to destroy the reputation of its targets; and (2) to use social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable"
I saw a post about textfile steganography, maybe linked from here or reddit. It used a source text (the example might've been from textfiles.org or a site like that) to hide messages, which looked like the text with glitches or typos or something. I don't know where it is. Here is something that encodes messages so that they look like spam: http://www.spammimic.com/
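In the same spirit (though not spammimic's actual grammar-based scheme), here's a toy that hides a message in the spacing of an innocuous cover text: a single space between words carries a 0 bit, a double space a 1 bit.

```python
def to_bits(msg):
    """Message -> list of bits, most significant bit of each byte first."""
    return [(byte >> i) & 1 for byte in msg.encode() for i in range(7, -1, -1)]

def from_bits(bits):
    """Inverse of to_bits."""
    return bytes(
        sum(b << (7 - j) for j, b in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    ).decode()

def hide(secret, cover_words):
    """Encode the secret into the inter-word gaps of the cover text."""
    bits = to_bits(secret)
    if len(bits) > len(cover_words) - 1:
        raise ValueError("cover text too short")
    out = [cover_words[0]]
    for i, word in enumerate(cover_words[1:]):
        gap = "  " if i < len(bits) and bits[i] else " "
        out.append(gap + word)
    return "".join(out)

def reveal(stego, n_bits):
    """Read the gaps back out: split on single spaces, an empty token marks a double space."""
    tokens = stego.split(" ")
    bits, i = [], 1
    while i < len(tokens) and len(bits) < n_bits:
        if tokens[i] == "":
            bits.append(1)
            i += 2
        else:
            bits.append(0)
            i += 1
    return from_bits(bits)
```

To a casual reader (or a spam filter) the stego text is just the cover text, which is the whole point of these schemes.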
Y'know how people control botnets with IRC or web posts? Maybe it's something like that, as others have speculated. But then why would they be posting messages lots of times? That's a problem in my theory.
FTA (Google Translate, slightly modified):
However, according to prosecutor Siegmund, there were still "other channels": in "Line D1" the spies simply used YouTube videos on the Internet, posting hidden messages under harmless videos using prearranged usernames. And according to the investigators there was also the classic spycraft of dead drops, mainly in North Rhine-Westphalia, where the mechanical engineer Andreas hid documents that were picked up by members of the Russian headquarters.
There were even links to accounts... can't find a better source at the moment.
First, they aren't really nets; they all run on a single server (OK, they run on App Engine, so it isn't quite a single server, but it isn't a "botnet"). But they control about 7,500 accounts, and I have about 80k accounts I own.
Second, some of them do have real uses. They respond to certain events when they happen: might be a keyword in a news story, might be something else. Sometimes that's to get the word out for good, sometimes for bad.
Let's say there's a company you like, and you want to drive readership of good news about it. My bots would provide positive reinforcement to the authors and sharers of those articles. Like giving HN karma, but on Twitter. The person who shared the article sees "imaginary bot favorited your tweet" and suddenly thinks "I should tweet about that company more often".
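That loop is basically keyword matching plus account rotation. A sketch of just the selection logic; the keyword and account names are made up, and the actual favorite/API call would depend on whichever Twitter client library you use:

```python
import random

# Hypothetical brand the bots are boosting.
KEYWORDS = {"examplecorp", "#examplecorp"}

def matches(tweet_text, keywords=KEYWORDS):
    """True if the tweet mentions any of the target keywords."""
    return bool(set(tweet_text.lower().split()) & keywords)

def choose_responder(bots, already_used):
    """Rotate through bot accounts so no single one favorites everything."""
    fresh = [b for b in bots if b not in already_used]
    return random.choice(fresh) if fresh else random.choice(bots)
```

The rotation matters: one account favoriting every mention of a brand is an obvious spam signal, while a few thousand accounts each favoriting occasionally is not.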
Other times I use the bots to do things like "pruning" bad ideas. I have a bot that looks like a Nazi, acts like a Nazi, and talks like a hick. When certain racist remarks or bits of misinformation are shared, it responds with positive reinforcement. Many people then re-evaluate whether they want to say things the Nazi agrees with.
Rarely do I use my "bots" for nefarious purposes. Sometimes for personal gain, but not for anything "evil".
It would be interesting to speculate on what is achieved by this botnet. Perhaps an overall traffic metric? Or air-cover for legitimate traffic? Tin foil hatters probably can find a covert comms network in there somewhere.
Set up half the accounts to tweet beforehand that one team will win the Super Bowl, and the other half to tweet in favor of the other team. After the game, shut down the incorrect accounts. Continue doing this until you have one incredibly prophetic account, which you rename "The Oracle" and use to pump and dump penny stocks.
10 correct predictions would take only 1024 accounts.
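The arithmetic, spelled out: each binary prediction halves the surviving pool, so 2^10 = 1024 accounts leave exactly one with ten straight correct calls:

```python
def survivors(n_accounts, n_predictions):
    """Accounts left with a perfect record after halving the pool each round."""
    for _ in range(n_predictions):
        n_accounts //= 2  # the half that "predicted" wrong gets shut down
    return n_accounts
```

This is the classic stock-tout survivorship scam, just automated.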
However, you could also use it to poison the kinds of metrics companies like DataSift use. A lot of people use metrics like that to make decisions; e.g., a news network might bend its editorial stance depending on what was trending among its demographic.
It's just someone selling followers: he's overlapping his network to follow each other and then follow external accounts. Maybe he's waiting to build legitimate followings for businesses, or selling followers for vanity.
He has automation for content generation, but the algorithms/spinning suck, so less unique content is generated. There are many, many botnets of larger and smaller sizes, and the same shit applies on Facebook, Instagram, and Tumblr (x100).
This could just be a university research project. I read about a similar one last year that was focused on how to create bots that mimic human behavior. Obviously this one would be at a much larger scale, but I think the knee-jerk reaction of "shut it down!" is overly aggressive without knowing all the facts.
If I were running socks, I'd definitely be steering the discussion in a direction that made you think psychological manipulation through social media platforms was purely the realm of fantasy, then give you all kinds of pseudo-reasons that a massive botnet could be used for.
There was speculation that social networks would displace the traditional media, but to some extent this won't happen. Fake social accounts and handles are already everywhere nowadays, and they do a good job of poisoning search results.
Not to sound flip, but my programmatic experience with Twitter leads me to believe most of it is one big botnet. Do any largish API crawl of Twitter and you'll be amazed by the large percentage of inactive or nonsensical accounts (which I assume to be bots).
My personal experience (where I mostly follow news organization feeds) is different.
In short, in my experience it seems Twitter is great as a broadcasting medium and not so great as a two way communications medium.