Which is why I found it particularly galling that the PR firm relied on people's moral outrage about paying for influence to peddle their message ("tell them you like our initiative and are TIRED of politicians taking legal bribes") -- while doing exactly the same thing: paying for influence, in the form of purchased Reddit upvotes, which corrupts the upvote process and puts selfish gain ahead of the good of the Reddit community.
Normally, when PR firms use "hacking" to describe their techniques, they're talking about novel approaches to getting coverage, sort of like how "life hacks" are novel solutions to life's problems. But in this case, the firm is using "hacking" very literally -- infiltrating and taking control of a system by illicit means. They are black hats, and we should view them not only as morally bankrupt but also very dangerous.
I'm expecting that any day now they'll run a follow-up post, "How we hacked the U.S. media to help an anonymous powerful Russian client sway the presidential election."
The solution isn't getting outraged about outrage. It's designing systems that compensate for this tendency. Unfortunately, that often means slowing down sensitive discourse.
Build windmills, not windbreaks.
Sometimes anger, sadness, and other emotions are simply strong signals that something is wrong and needs to change. Changing the emotion isn't useful when it's the situation that's at fault; that's just what the abuser would like to see. If the outrage led to these firms being eradicated from the face of the Earth, it would turn into a celebration real quick. Why not push for that?
Just because there's a whole lot of huffing and puffing going on too, a lot of emotion "produced" out of thin air with no grounding or purpose, doesn't mean you can throw it all together and judge it by the "hype".
Normal people have emotions. Normal intelligent people have emotions and can hold their own in discourse. People without a connection to their core want it to be solely about some supposedly objective and external set of rules, but that's like computing very nice colorful patterns on a computer without a monitor connected. Getting more RAM or a faster CPU is not the upgrade you need in that case.
Friedrich Nietzsche saw this a long, long time ago... or maybe he didn't, what do I know what he's talking about, but I recognize the timid people walking carefully to protect something they already lost by doing that every day. They're so protective of what they lost that they won't even lift the lid.
> "I say unto you: one must still have chaos in oneself to be able to give birth to a dancing star. I say unto you: you still have chaos in yourselves.
> Alas, the time is coming when man will no longer give birth to a star. Alas, the time of the most despicable man is coming, he that is no longer able to despise himself. Behold, I show you the last man.
> 'What is love? What is creation? What is longing? What is a star?' thus asks the last man, and blinks.
> The earth has become small, and on it hops the last man, who makes everything small. His race is as ineradicable as the flea; the last man lives longest.
> 'We have invented happiness,' say the last men, and they blink. They have left the regions where it was hard to live, for one needs warmth. One still loves one's neighbor and rubs against him, for one needs warmth...
> One still works, for work is a form of entertainment. But one is careful lest the entertainment be too harrowing. One no longer becomes poor or rich: both require too much exertion. Who still wants to rule? Who obey? Both require too much exertion.
> No shepherd and one herd! Everybody wants the same, everybody is the same: whoever feels different goes voluntarily into a madhouse.
> 'Formerly, all the world was mad,' say the most refined, and they blink...
> One has one's little pleasure for the day and one's little pleasure for the night: but one has a regard for health.
> 'We have invented happiness,' say the last men, and they blink."
Neither do I ;-)
I have always detested the "don't hate the player, hate the game" ethos. It's a total shirking of responsibility that inadvertently praises the ability to game any existing system, regardless of the purpose for which the system exists in the first place. That is, if somebody has or discovers the opportunity to exploit a social contract of any scale for pure personal gain, the onus isn't on that person to maintain any sort of ethical footing; it's on the system to have processes built in to deal with such a situation, and if it doesn't, then it deserves to be exploited.
Maybe somebody would have some valuable input here, but I see no value in that sort of ethos. It tends to end in waste. Or at the very least, the returns are diminishing.
I guess my end point is I don't understand, and as such dislike, how nonchalantly the platitude is thrown around as if it's to just be accepted.
Nothing, if you're honest.
It's hard for me to give a specific rule for that; it's more a case of "I'll know it when I see it". One rule of thumb I seem to follow is whether the behaviour is more about explicitly seeking gains, or avoiding expensive loss. Like I described in another comment, there's a meaningful difference between a typical person doing bad things to keep their only job (jobs are hard to come by and vital to survival), vs. an entrepreneur who chooses to profit from hurting other people, even though there are alternative strategies of making money.
The saying in question I've always taken to be one where the "player" is the motivated party, acting not merely to prevent loss but distinctly to gain. Not the employee just trying to keep their job, but the employee playing shady politics to slither their way to the top of the heap and the largest possible portion of the take -- those who see what are team efforts as zero-sum games where the teams are `I vs. Everybody else`.
So while we're sorting out the rhetoric, I agree with your sentiment as well -- and it certainly needed to be said that I wouldn't condemn somebody in such a situation. Of course I generally see morality as a gradient.
Hm, I've always taken that phrase to mean, "don't be prejudiced against a whole person: people are complex and you can probably find something to agree on with anyone."
So, I often try to say I don't like a certain thing about a person rather than saying I hate them. Hating everything about a person just feels a bit too simple and wrong. Perhaps that's too PC for these days, but that's how I take it. Even Hitler had a mom; that sort of thing.
That said, I've always seen the "don't hate the player, hate the game" saying the way GP does - i.e. as shifting moral responsibility away from participants. IMO it does make sense in some cases, but not in others. I see it as a spectrum of pressure put on individuals.
For instance, I wouldn't blame a customer service employee who lies to me because her boss ordered her to, and she will lose her job if she doesn't. For most of the population, losing a job is as close to a life-threatening situation as you can get without an actual medical condition. OTOH, I will blame the boss who ordered the lying. The boss has many more options to choose from, and telling employees to lie to people shows a preference for profiting by fucking other people over.
I disagree. It is simply an acknowledgement that the system has failed the purpose for which it exists. When that system fails, attacking the people who found the flaw isn't a viable long term solution. The only real solution is to fix the system that failed in the first place.
The intention is encoded in the terminology. Sometimes it's not a game.
I (truly) mean no disrespect, but surely you see the irony of succumbing to moral outrage over Reddit upvotes? Surely you also see the absurdity of comparing Reddit upvotes to democratic elections.
This tendency to turn everything into a scandal is partly responsible for the very thing you're decrying.
It's hard to figure out the exact ratio for the conversion of upvotes to real votes, but as long as it's positive, buying upvotes is just a roundabout way of buying votes.
Yes! Resoundingly, yes!
Reddit is not a democracy, it's a private forum for discussion. Confusing this with a media outlet or a public forum (to say nothing of a democracy!) is as patently insane as confusing an infomercial with a scientific paper.
Feeding into the outrage only makes the beast stronger. Just roll your eyes and focus on the things that matter, like democracies.
He may, but I don't. American elections are decided by very thin margins, and I think even small streams of influence can have outsized outcomes. Swiftboating and Rathergate come to mind. I specifically think the rise of /r/The_Donald just before the election, which carpet-bombed the ever-living daylights out of the front page, had a not-insignificant effect on the vote. It's why Reddit changed their long-standing front-page algorithm. I'm truly surprised they didn't do so the month before the election. Utterly astonished, actually.
Reddit is still a really great site when you unsubscribe from all default subs and any sub that has gone "critical shill" at about 100k or more subs.
AskHistorians is still a great sub, at 620k subscribers. It's (intentionally) not on the default list, and has very heavy handed, strict moderation, which is probably why the quality is so good.
Probably the most common complaint from those who have something to say to the mod team is that we're pushing an agenda. We welcome and encourage debate. We want people to call bullshit on something bunk, but we stand by the requirement that it be something that can be substantiated. We just want you to provide some kind of proof, source, etc. other than, "This totally happened to my sister's boyfriend's cousin!"
Some people take those removals very personally, because the comment is about something heavy that has greatly impacted that person's life. They feel like we are snubbing them when they took the time to reveal something personal. A non-real example would be a study that shows that cancer survival rate increases when you drink at least 60 oz of water daily. Someone will inevitably reply with, "My mom died from cancer, and all she drank was water with the occasional cup of tea." I mean, we're sorry for your loss, but your single anecdotal point doesn't really refute the claim. The comment gets removed, and then we're called monsters. If you really think it's bullshit, find another study that claims otherwise or poke holes in the study itself.
Another common reason is low-effort comments or comments that don't add to the discussion. "Jeez, why is this a comment graveyard?" is a pretty common removal. There usually aren't hissy fits over those, but jokes, puns, lyric chains, and comments like, "Yeah, and water is wet," are also common. Two other low-effort comments are, "Someone give me the TL;DR please," or, "I want to believe this, but some comment is going to blow this claim out of the water."
The reality is that most removals are boring.
This one annoys me when I see it somewhere like /r/Science. It's like, the mods make it super clear that they remove low-effort/off-topic comments etc. It's not that hard to understand.
It was more obvious before the election, when the shilling was so prevalent that different accounts were copy/pasting the exact same phrases into their comments. And you would see a new point brought up, then an hour or two delay, then a response would suddenly show up in multiple places, copy/pasted into many comments, in multiple accounts.
But watch /r/all sometimes... you'll see a topic just suddenly start appearing. As an example, you think Rick & Morty grew popular on reddit organically? No, that started over the course of about an hour one morning, as a clearly orchestrated effort, and it worked.
It doesn't happen all the time... quite a bit of reddit is still honest and organic. But watch it carefully, and enough of it is manipulated that I don't put much trust in what I see there.
When the same talking points are repeated over and over again on large subs and dissident opinions are automatically downvoted to hell, it's obvious there are organized efforts to push specific narratives or points of view. r/politics, r/news, and r/worldnews are full of these, with sometimes the moderation itself in bed with the political astro-turfers, banning users that don't fall in line.
Regarding the article, the news that was astro-turfed wasn't clashing with the r/politics narrative. And since journalists and bloggers themselves now source their news and stories from Reddit, it helped spread that news all over the internet.
Reddit is an extremely efficient viral marketing tool, and both brands and politicians understand it now. When journalists start quoting reddit in their news that then gets posted on reddit as news, we've come full circle.
What views can't be expressed on /r/politics?
And then in the comments, you'll be crucified by the posters for anything other than the pro-corporatist Democratic views that the posters espouse. Move on from bashing Trump to, say, bashing Hillary because she was an utterly unlikable candidate and arguably the biggest political failure in modern history, and you draw the ire of dozens of toe-the-liners who see the world just as black-and-white as the idiot /r/the_donald posters.
There's also an excessive amount of hyperbole that just gets annoying as time goes on. Clearly Trump will be around for quite a while longer (until and unless Mueller finds a smoking gun or in the event of a massive Democrat success in 2018), but reading /r/politics you would think he'll be impeached by next week. Adding fuel to the flames are websites like WaPo and the Independent who publish articles with these terribly misleading and hyperbolic headlines that make it seem like Trump is going down, and you get a massive echo chamber fueled and funded by shitty media websites.
They are still there in politics as well, but they are usually way down at the bottom of the thread. It's sad though: even if you assume it isn't as "controlled" as it is, pretty much the front page is all anti-Donald Trump / Republican stuff these days. While some of that is fair and is news, what's worse is that at least in the old days there would be nice constructive discussion/counterpoints, usually as the top-voted post in the comments. Now it's mostly just circlejerking there.
It's a pretty toxic environment and by constantly focusing on Trump they're crowding out a lot of content that could actually make users happier.
The users control both comments and links. Yours is not a fair comparison.
That's just wrong. The moderators on both subs are actively curating the front stories and removing 'duplicates'. I don't care about TheDonald as they are not a pre-set sub nor proclaiming neutrality. But you can't be seriously claiming that the 16 out of 25 front page stories of some variation of 'DT colluded with Russia' on /r/politics are unique newsworthy stories.
I'm unsure what neutrality means at this point. I don't expect anyone to be "neutral", but I make a distinction between debatable news and troll memes.
> But you can't be seriously claiming that the 16 out of 25 front page stories of some variation of 'DT colluded with Russia' on /r/politics are unique newsworthy stories.
No, I'm certainly not, and while moderators remove some obvious duplicates (query string differences etc), the same story from different outlets are still left and often upvoted. I don't think that's a grand conspiracy but merely user habit.
Whatever you think of this is up to you, and I think there are a lot of issues worth discussing that get drowned out by the clown-in-chief stories, but I've seen no evidence of this being anything other than crowd selection.
And it's news (insignificant, duplicate or big) on one sub, and 80% memes on the other. That's the contrast I was commenting on.
It's not obvious to me, at least, that these are organized.
"As a reminder, this subreddit is for civil discussion"
I think we all remember that 6-month period where Microsoft decided to shove VSCode down everyone's throat. It was impossible to get away from. New vim released? "Yeah vim is good but now I use VSCode TM". New Sublime? New Atom? Same thing. They were aggressive with that one.
Not me. I don't remember hearing about VSCode until a few months back. I had no idea it was first released two years ago.
Maybe I just wasn't reading the right places. But I subscribe to /r/programming, and I think I would expect that to get hit, if they're going to be astroturfing on reddit?
I'm not saying this definitely didn't happen, but I'm skeptical.
I'll add to the list: Windows 8 shills. I remember right after it came out, I had lots of issues and posted in some sub. Wow, the negative responses to my post were amazing.
Not saying this isn't a good thing, if Reddit banned Tor a few of my favorite community members would disappear, and I'd likely drift away from the subreddits I do still follow.
- his big political stunt wasn't even his own idea, and
- he paid people a ton of money to fraudulently promote it.
What a way to burn your clients...
You now know about him. It worked and still is working.
The former now goes straight into my "never do business with" blacklist.
Your policy just causes you to avoid the poorly done astroturfing, it does nothing to actually avoid it.
What you should do instead is mentally "filter out" the results of the astroturfing, and evaluate the service on how it looks minus that.
Maybe I'm going to have a hard time, sure. But "be the change you want to see in the world" and all that. In the past, my objections have prevented one company from engaging in marketing spam, and made a third party dump their SEO provider because of shady practices. I'll continue doing as much as I can, and I encourage others to do the same.
You are missing my point - I'm not saying it's fine. I'm saying you are missing your target.
I realize not many do that, because somehow not acting in a malicious way is an unacceptably high standard for the marketing/PR industry.
What if this is just a ploy to make this possibly fabricated story go viral to undermine the candidate?
I get it, they need to promote their services, but if this is what they think is the right way to do it, that doesn't fill me with confidence that they have any ability to do PR beyond buying fake upvotes on social media.
Second, they're proud of what they did and are implicitly telling potential clients, "We can do the same for you."
On a topic where one would expect citizens pursuing the public good, we find marketers and advertisers working for a wealthy businessman paying for a convictionless campaign to become famous!
And the advertisers are so proud of it, they give all the details of their Reddit cheating, and worse, all the details of the absence of political conviction of their wanna-be-politician client.
Maybe the story is real, but I can't believe the advertisers are dumb enough to be the ones writing this article.
I'd sooner suspect someone related to Fiverr.com is behind it... [edit: or an enemy/competitor of the politician]
If it were any other industry aside from PR/advertising I would give them enough credit not to do something like this, but in this case, I could totally see them doing something this stupid.
Early paid upvotes are the seed for later organic upvotes. You don't even need to spend $200 to get them.
When someone proposed a similar voting-manipulation trick on Hacker News (https://news.ycombinator.com/item?id=13676362), dang explicitly noted that such techniques are a good data point for vote-manipulation detection algorithms.
"You agree not to interrupt the serving of reddit, introduce malicious code onto reddit, make it difficult for anyone else to use reddit due to your actions, attempt to manipulate votes or reddit’s systems, or assist anyone in misusing reddit in any way. It takes a lot of work to maintain reddit. Be cool."
Once you get to the 'hot' section, however, your content quality matters. If the content is good and/or has viral potential, it will attract votes naturally.
I was marketing for a travel site at that time. I learned that rather than paying others to push your content, you can get the same effect by posting at the right time and creating content that would naturally appeal to that subreddit.
I do know some people who've bought established Reddit accounts for $50-200 and use that to get upvotes via proxies.
As an aside, I wonder if they're using the same tactics here.
Look at the recent US presidential election, look at the ongoing, long running censorship on r/bitcoin, among many other examples. I moderate a small subreddit related to a niche sport and deal with a massive amount of spam and fake accounts, and reddit's spam policing tool (Automoderator) is somewhat difficult to setup, only partially effective and could be a lot easier to use (e.g. one click bans for accounts and domains) if reddit's staff were serious about the problem.
Comparing the number of upvotes with the limited karma bump on my account, there do seem to be a few suspicious votes happening.
How can they connect this firm to random submissions and/or upvotes? I think even targeting Fiverr would be difficult, because sellers aren't going to advertise which usernames they're going to use to vote.
This is a very difficult problem to properly solve. Even if they can somehow guarantee each user account is one unique individual, what's to stop a company paying 200 actual, real people to go and vote on something on reddit?
Usually you don't have to buy that many upvotes/likes/whatever either - usually just a few in the beginning helps seed a post/tweet well enough that it starts to bring in the organic traffic.
Deliberately breaking the rules that exist for a good reason isn't "hustle." It's just cheating.
These guys didn't do anything remotely new or skillful. All that's special about them is that they're brazen enough to gloat openly about cheating.
2. NORTH AMERICAN informal
"he hustled his company's oil around the country"
Reddit has a SERIOUS political astro-turfing problem.
Some would argue it swayed the US election. Some would argue Reddit is bought and sold.
The /r/popular or /r/all experience is completely different. When commenting, you don't even know if you're talking to a real person or not.
Does anyone know a forum similar to this or Reddit where it's ALL verified accounts?
There's a one time $5 fee at signup and a 1 week waiting period after to post/comment. Seems to do a good job preventing this type of thing because the costs quickly get prohibitive if your dummy accounts keep getting deactivated.
It's still an ugly echo chamber on politics though, it is extremely far US-left on everything.
A dummy account on reddit may be allowed to vote hundreds of times before being banned so the price per vote becomes minimal.
I'd personally head in exactly the opposite direction - somewhere where there are no accounts at all, in the tradition of anonymous imageboards since the founding of 2chan. Granted the selection isn't so wide any more.
"All verified accounts" sounds like a boring echo chamber.
It seems like it would be easy enough and cheap enough to build a honeypot to identify accounts used for the purchased Reddit upvotes.
For example, Reddit could set up some honeypot posts to track paid upvote accounts.
They then go and pay these upvoters to upvote the honeypot post and identify the accounts used. (It would be helpful if the post was hidden so other people don't find it accidentally. In fact, it is possible to just use a tracking redirect page given only to the paid upvoters and use any post as the upvote "job" so it would be hard to identify by the upvoters.)
Then Reddit could ghost those identified accounts. Simply ignore their votes in the system, but don't tell the account owners, so the owners continue using the accounts without realizing the problem.
This would make it very difficult for the account owners to know which of their accounts were compromised.
Then on any new posts where these upvoter accounts are being used in majority, other accounts can be found. The other accounts that also similarly upvoted on this article could represent other paid upvote accounts.
Track those other accounts and how often they appear beside the ghosted paid accounts, and voila, you have found more paid upvoters.
Keep doing this and it makes the paid upvoters ineffective because although they can work the system, their work is only being used to find other paid upvote accounts and also clients who are paying for paid upvotes.
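That guilt-by-association expansion can be sketched in a few lines. Everything here (the function name, the "ghost-majority" test, the overlap threshold) is illustrative, not anything Reddit is known to actually run:

```python
# Hypothetical sketch: expand a set of honeypot-identified "ghosted" accounts
# by flagging accounts that repeatedly co-vote alongside them.

def expand_ghosted(post_votes, ghosted, min_overlap=3):
    """post_votes: dict of post id -> set of accounts that upvoted it.
    ghosted: accounts already identified via honeypot posts.
    Returns new accounts that co-voted on ghost-dominated posts
    at least min_overlap times."""
    suspect_counts = {}
    for voters in post_votes.values():
        ghost_voters = voters & ghosted
        # Only treat a post as ghost-driven if ghosted accounts
        # make up a strict majority of its upvotes.
        if len(ghost_voters) * 2 > len(voters):
            for account in voters - ghosted:
                suspect_counts[account] = suspect_counts.get(account, 0) + 1
    return {a for a, n in suspect_counts.items() if n >= min_overlap}
```

The majority test keeps ordinary users who happened to upvote one tainted post from being swept in; only accounts that show up again and again next to known ghosts cross the threshold.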
After a time period, the clients could be sent a warning:
It has been detected that you are using paid upvote services which are against Reddit TOS. Please contact customer service so we can work together to remedy the problem. Failure to do so may cause your account to be banned and all your posts removed from Reddit. Have a good day.
Of course Reddit doesn't have to do this, and really anyone could do the same process to build a list of paid upvoter accounts and a list of articles and clients that use those services...
So what do you think, would this put a dent in the upvoters' effectiveness?
The problem is that many of these accounts are only used once or twice before being retired or sold off. So if you pay the "upvoters" to honeypot the accounts, all you'll do is put a very small dent in their accounts and next week they'll be back up to 100%. And if you pay them every week you'll just become their main source of income.
And the accounts are much more "crafty" than you think, often copying high-karma comments in reposts, or posting semi-nonsensical markov-chain-style comments for years before being used for "paid" upvotes.
It's really weird; I followed one a few months ago. I saw it was posting those markov-chain-style comments, and I watched it until its karma hit like 500, then it deleted all its previous comments and had only one comment left, talking about how great some app was on /r/android.
Like a week after that the account was deleted. I'm not sure if it was done by the admins or by them, but the comment remained posted by [deleted].
This was all a year or 2 ago, so I don't remember the exact situation any more.
It's not just accounts that are bought and paid for. Every major social network is making money by shaping conversations around the interests of paying customers, political and otherwise.
And this type of marketing posing as news, pushed to the front page by bots and fake accounts is precisely why /r/politics is now a shitbed.
Not that any of this isn't obvious, but blaming Hack-PR for one of the older internet tricks is a bit unfair. They're just one of the few shameless enough to simply say it with zero guilt.
I know it's not solely their fault. Just like people don't really blame Obama when they say 'Thanks Obama', I wasn't putting all the blame on Hack-PR when I said 'Thanks Hack-PR'.
That was the one mentioned in the article and that is a subreddit that has been particularly awful since basically the entire last US election campaign.
> we’re going to begin publishing articles like this to outline what we are doing that’s working so others can do the same
If all these bought upvotes come from new accounts, or from the same few IP ranges, or have a lower ratio of comments to upvotes, or interact only with each other and not with the larger community, then reddit can detect them and turn them into ghost accounts.
Reddit needs to open up a Kaggle challenge for detecting rented upvotes and other abuses, use the data it has already shared with the AI community (the reddit dataset) to detect such attempts as they happen.
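A toy version of those signals as a hand-set rule score. Every field name and threshold below is hypothetical (a real detector would presumably be a trained model over the same kinds of features):

```python
# Hypothetical rule-based scorer for the signals listed above:
# account age, shared-IP clustering, comments-to-upvotes ratio,
# and insular voting.

WEEK = 7 * 24 * 3600  # seconds

def suspicion_score(account, now):
    score = 0
    if now - account["created_at"] < WEEK:
        score += 1  # brand-new account
    if account["ip_neighbours"] > 5:
        score += 1  # many accounts behind the same IP range
    if account["comments"] < account["upvotes_cast"] / 100:
        score += 1  # votes vastly outnumber comments
    if account["votes_outside_clique"] == 0 and account["upvotes_cast"] > 0:
        score += 1  # only ever votes within its own cluster
    return score

def is_suspect(account, now, threshold=2):
    # Require at least two independent signals before flagging.
    return suspicion_score(account, now) >= threshold
```

Requiring multiple signals matters: any one of these alone (a new account, a shared university NAT, a lurker who only votes) describes plenty of legitimate users.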
If you run a website and you're getting spammed, the worst thing you can do is ban the accounts, as this gives spammers feedback that the account is no longer valid and they'll rotate it out. The best thing you can do is /dev/null the account. To the user it will look like the account is working as expected, but on the back end anything they do doesn't actually do anything. It's much harder for the spammer to know when that account is burned. This way they'll continue to use it, and you can use this data to find related accounts and /dev/null them as well.
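A minimal sketch of that /dev/null idea, assuming a toy vote store (all names here are made up): the shadowed account gets the same "success" response as everyone else, but nothing it does touches visible state.

```python
# Toy illustration of shadow-nulling accounts: actions from shadowed
# accounts report success but are silently discarded.

class VoteStore:
    def __init__(self):
        self.scores = {}       # post id -> visible score
        self.shadowed = set()  # accounts whose actions go to /dev/null

    def shadow(self, account):
        self.shadowed.add(account)

    def upvote(self, account, post):
        if account in self.shadowed:
            # Could log this for correlation analysis, but change
            # nothing visible; the caller still sees a normal success.
            return True
        self.scores[post] = self.scores.get(post, 0) + 1
        return True
```

From the spammer's side both calls look identical, which is the whole point: the burned account keeps getting used, and keeps leaking information about its siblings.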
"Media does not spread free opinion; it generates opinion" -- Oswald Spengler, 1918 https://en.wikipedia.org/wiki/Decline_of_the_West
Sure, it might not be as 100% successful as Fiverr (though I imagine it's fairly easy for Reddit to ad-hoc identify voting blocs if something was known to be bought). But you could employ additional optimization techniques, such as the one used by most high-karma users (e.g. Gallowboob): if a post fails to hit critical upvote mass, then delete and resubmit later in the day.
To give you an idea of how things seem to be relatively unmonitored until users flag it, there's the story of Unidan:
And as a more recent, obscure example, there was the mystery of why the mod of r/evilbuildings had something like 499 of the 500 most upvoted posts in his own subreddit. The math was so laughably in favor of manipulation but a Reddit admin, using whatever shit tools they have to investigate this, acquitted the mod:
The details of how this mod was able to boost his own posts without being called out for vote manipulation are too banal to explain in detail (basically, he would shadowdelete other popular posts so that his would get picked up by the Reddit front page, and then undelete the popular posts before anyone noticed). But the fact that a Reddit admin (i.e. a paid employee) thought that the evilbuildings mod always having the top post in his own forum for 6 months straight was just a coincidence, and/or because that mod was just apparently an amazing content submitter, spoke volumes about how uncreative the Reddit admins might be in detecting fraud.
Edit: if you are interested in subreddit drama details, here's a thread that combines the evilbuildings drama and Gallowboob: https://www.reddit.com/r/SubredditDrama/comments/6d3syc/evil...
If this is the kind of effort users put toward imaginary points (though arguably raising karma is part of Gallowboob's professional work), I'm nervous to think about the schemes that PR firms will construct when they realize the easy return on investment offered by Reddit popularity.
If you don't like the harmful ways that media manipulation is used, you might actually like that the book explicitly warns the industry how vulnerable it is to this practice, which can be used by anybody for any purpose. This warning came four years before the 2016 U.S. presidential election.
Is this something people would be interested in?
It's naughty without being outright evil. When did that become a bad thing on HN?
The great thing about being a top link is everyone can do it at the same time. It doesn't corrupt the process at all or waste anyone's time. Everyone can benefit from it and it doesn't make things worse for anyone.
For example imagine if all the top links on hacker news were just corporate advertisements disguised as stories. Would it be a worse place or cause any of us harm? Of course not.
Did you leave out the </sarcasm> tag by accident?
How can the front page being topped with "publicity stunts nobody cares about" not make it a worse place?
(I must say I'm a bit blown away that you think I require a sarcasm tag, though - I wrote that "everyone can be the top link", an obvious absurdity since how can everyone be the top link, and that hacker news wouldn't be any worse if every single link were a paid story. I didn't think I came even close to having to mark my comment as sarcastic, which I sometimes do.)