Reddit overhauls upvote algorithm to thwart cheaters (techcrunch.com)
137 points by gk1 on Dec 7, 2016 | 235 comments



It's pretty clear to me that Reddit has always lacked a solid experienced science-driven data person. Someone who can pinpoint an issue, formulate the issue into a product-driven concrete problem, come up with hypotheses on how to fix the problem, then use their vast wealth of data and users to test and/or implement them.

It's not just the ad hoc, unintuitive "solution" they came up with to mask votes (which most people agreed was a user-antagonistic step in the wrong direction). It's a lot of things: their inept response to spam/vote brigading/other abuse; their lack of personalized content discovery (state of the art from 10-15 years ago would probably suffice). I could go on. As a product-driven data scientist who has been on reddit since the early days, I'm constantly struck by how little they do with the amazing data they have.


Redditor of 11 years here. I don't want another filter bubble that only shows me what it thinks I want to read (like Facebook). I want to see what other people are seeing, and not just like-minded people either. For that I can find subreddits and subscribe to them.


But it's already like that. They removed the API that allowed RES to show you individual up/down votes per-post. They've banned tons of subs for varying levels of controversy. They removed their warrant canary. Their CEO was caught changing comments and wasn't asked to resign; he got away with just an apology?

I realize that Voat is a cesspool, but anonymity -- real anonymity -- tends to let you see people's real, dark opinions (hence all the chans .. and why Google requires real profiles for YouTube comments now).

I think Reddit filters a lot more than people realize; more than just the banned subreddits and edited comments. It's been a filter bubble for quite some time.

In any case, I've given up on Reddit .. and Voat. I guess my big addiction now is Hacker News. :-P


> anonymity -- real anonymity -- tends to let you see peoples' real dark opinions

No, it lets you see a narrow slice of people's real, dark opinions. There are a whole host of opinions you'll never see on Voat because the user base is antagonistic to the basic ideas which underpin them, so there's no point in trying to express them on that platform. For example, the notion that men are psychologically torturing women just by enacting common male body movement patterns in the workplace would never see the light of day on Voat. That's dark. It's my real opinion.

There's no such thing as a "bastion of free speech". One person's safe space is another person's hell of antagonism.


Can you elaborate on what you mean by "men are psychologically torturing women just by enacting common male body movement patterns in the workplace"?


I'm wondering too. The only thing that comes immediately to mind is adjusting your pants in the crotch area, which, well, sometimes it's uncomfortable without there being any sort of excitement, and it's just a matter of comfort. Like pulling your underwear out of your butt.

If that's an example of what's being referred to, it's really, really unfortunate, because it's a matter of one party acting totally innocently (most of the time), and the other party being conditioned to take it in a non-innocent way. That said, I could be totally off-base.


That's not what I was thinking, no, and it's a little weird that you immediately guessed it was something that you don't believe is an actual problem. Like, you didn't even wait for me to answer the question before deciding my answer was likely to be something bogus.


I don't think it's that odd, since I can't think of anything that qualifies for "men are psychologically torturing women just by enacting common male body movement patterns". Not thinking of anything that qualifies leaves me with things that I think don't qualify.

> Like, you didn't even wait for me to answer the question before deciding my answer was likely to be something bogus.

Is it that hard to assume I'm acting in good faith, and just trying to figure out what you are talking about? It's obvious by this point that a statement like that struck a few people as odd. I don't think it's realistic to expect people not to speculate when you make statements that allude to behavior that many people take very seriously, and rightly so, without providing any sort of example. I wouldn't make a statement like "I believe my mayor is implicated in a murder" without providing at least some details, or at least some generic reasoning if I wanted to keep it fairly anonymous. Otherwise I would omit a statement like that. Expecting people not to speculate after something like that is unrealistic.

To be clear, I stressed the "If", and I followed up with "I could be really off-base". I'm trying to have a substantive discussion, and allowing for myself to be wrong. If you aren't prepared to do either, please don't post bait statements like that.


Yeah, well, unfortunately, you'd be wrong for thinking it's bogus. I've actually been verbally attacked for adjusting my swimwear on a beach in Mexico (a nearly empty beach, mind you, with only 6 American adults on it -- 4 of them being myself and my group).


Well feel free to correct him then.


What's the point? Why bother when we know what the response is going to be?


Is that directed towards me? If so, please don't assume I'm acting in bad faith, or am unprepared to be open to this discussion. I'm just honestly perplexed at what "notion that men are psychologically torturing women just by enacting common male body movement patterns in the workplace" could be referring to.

> Why bother when we know what the response is going to be?

Perhaps I should have refrained from getting out in front of a possible reply, but I'm really interested in what that statement is referring to, and beyond what I speculated, I'm completely clueless as to what it could be, and I would rather be aware of a way behavior of mine might negatively impact others. If it is what I speculated, I'm prepared to have what I consider a substantive discussion about that topic.


> Is that directed towards me?

No.


Not off base at all. Even with your off-the-top-of-your-head example, I'm sure everyone reading this can think of that person they know who would get offended by a male "repositioning" himself.


Yeah, I'm also interested. That statement is like those local news promos people like to make fun of, "it's killing millions of Americans and you could be eating it for dinner, find out on tonight's news at 10!"


This will probably be detached for being off-topic, but until then...

Men and women move in very different ways due to gender policing. Things like how much you are allowed to/required to move your hips side to side are different for men and women. Sometimes in the workplace women are put in positions of less influence and less pay. They experience repeated instances of being passed over, thwarted, harassed, etc., while men with less capability and less force of will advance further on average in many workplaces. Something as small as the way someone moves their hips while they walk immediately signals their gender, which on a given day could trigger feelings of powerlessness.

Or consider a woman who was being harassed by a man at work, and the first man she told was skeptical it was meant in the way she thought it was. She went home and got support from a bunch of girlfriends. The next day you walk into her cube with your male movement, and she's thinking "is this one on his side too?" It's possible that a more female-typical mode of moving and entering would've been less triggering for her even on an otherwise male body.

I probably need to note that I am not suggesting that ONLY women experience things like this, or that ALL women experience things like this, just that it is something that happens.


I'm a man, and I'm a feminist. This isn't controversial, regardless of what large swaths of the internet think it is. I believe in equal pay, I believe that we should work into our compensation culture new ideas to make sure that women who are societally conditioned to ask for less don't end up making less just because their equally qualified coworkers are more aggressive. I believe that we have a rape culture in western society because to this day rape victims are questioned and second-guessed, and to some of those same areas of the internet a man falsely accused of rape is considered a worse problem than actual rape. I also believe that "social justice" is inherently an important problem, and it pisses me off that "social justice warrior" is a derisive insult.

I'm also an ally - I call out male coworkers who talk over or interrupt my female coworkers. And if a group of male coworkers get too "bro-y", even without women around, I'm the villain to say "yo dude, don't be sexist", and not back down from the "It's just a joke" defense. I'm a software engineer and I regularly introspect about how I can take active steps to make my environment more welcoming to women engineers.

I say all this to give you SOME of my beliefs before I say the next thing as un-insultingly as I possibly can.

The stuff you are talking about is a step too far, and it actively harms the cause of progressive feminism by making it seem frivolous and immature. There is real harassment going on all around us, active/aggressive and passive/insidious. But the way people WALK? Are you kidding me right now? I am an ally with my words and my actions. I will not change the way I walk because it signifies my maleness to a sensitive little wallflower who apparently feels powerless in front of 50% of the visible population.

I hate the "#notallmen" hashtag because it was reactionary and distracting from women who were at the time trying to speak real truth about their experiences. It misses just as much of the point as "All Lives matter".

But in the case of your example, a woman who is being harassed at work surely does not assume that all men in her workplace are against her by default, on account of their maleness. If she does, it has more to do with how they treat her, and not how they walk through the office.


Oh, just noticed you followed up here, which of course is where you should have followed up. I just wasn't looking in the right spot.

> Things like how much you are allowed to/required to move your hips side to side are different for men and women.

I had interpreted "common male body movement patterns" as movement patterns unique to males, which severely limits the possibilities of what they could be, which is why I had speculated on movements regarding adjusting the crotch area. Given how I had misinterpreted that statement to limit the possibilities, that's the only thing I could think of.

Okay, if you're talking about double standards, and how a woman that moves her hips more might be seen a different way than one that doesn't, and this may affect perception of her, I totally agree. That's a double standard issue, to a degree, and it's not okay.

> The next day you walk into her cube with your male movement, and she's thinking "is this one on his side too?" It's possible that a more female-typical mode of moving and entering would've been less triggering for her even on an otherwise male body.

I don't really understand this. How is your mode of walking, which is natural, or any other feature, such as facial hair or just the fact that you are indeed male, anything you can help? It may sound like victim blaming to say that I think in this instance it's really her responsibility to make sure she's not projecting her feelings onto innocent people, but I don't think it is. I'm not sure how this situation is any different than a white man who feels threatened by a black man one day, and the next day a separate black man approaches him casually and he feels threatened.

That is, I accept that some women, and men, might have exactly these feelings, but I'm not sure that it's the responsibility of the other party to solve that problem. Is that the point you were making, that just accepting that this situation is possible is not allowed by the point of view of those forums, or is it a step farther, and it's the idea that this is the responsibility of the other party (the man in your example, the black man in my example) to somehow change their natural state to address this? I understand the former, even if I don't know enough about Voat to assess whether I agree about them, but I'm not sure the reasoning behind the latter, if that's something you are trying to express.


Oh, you literally mean the way they move.

I'm not sure what you're suggesting. Do you want men to sway their hips? You think something like that would help? Or would that be a reasonable thing to ask since it sorta reduces to the structure of the body?


Somebody so unstable they get "triggered" by how ordinary human beings walk needs professional help. At some point, you have to acknowledge that certain complaints are not reasonable.


>No, it lets you see a narrow slice of people's real, dark opinions. There are a whole host of opinions you'll never see on Voat because the user base is antagonistic to the basic ideas which underpin them

I don't know how it is on Voat, but on 8chan I'm a frequent user and poster of /leftypol/, titled "Leftist Politically Incorrect", which is basically the flip-side to the infamous /pol/ boards of {8,4}chan. On there people generally don't get banned for having "wrong" opinions (though it does sometimes happen). There are no votes, there are only threads on the board.

I like this a lot more. Taking 8chan as a whole, you get to see different sides. Recently there was something at 4chan's /pol/ where a janitor was removing posts of someone espousing left-wing opinions. The board moderator intervened and said that any political position is allowed on the board.

It really depends where you go, but you are right - the user base of a site is antagonistic to people with opposite views. But they can live side by side, even with an occasional 'raid'. I don't know if I can thank the founder of 8chan enough for providing a place where people of my particular opinions (Marxism, socialism) can talk. I'm sure people across the spectrum say the same.

Take a look at /leftypol/ if you want and see that there are frequently debates between sides, and it's absolutely great. I wish I could say the same about /pol/. I posted a comment saying that no matter what position you are on, a vote re-count would be good for democracy. I was permanently banned for, and this is a direct quote: "defending jews". That was the reason.

Depends where you go. That's my 2c on the issue.


Even without moderation, anonymous imageboards have their own way to derail civilized discussion.

Someone could post a thread with an interesting premise, and the first response could be $racial_slur, and then the hivemind would repeat the process and spam $racial_slur indefinitely without any kind of real discussion going on. For the lulz.


This is certainly a problem, but I find it happening less on "special interest" boards. I don't think it's too far from Reddit in this respect. On /r/funny you might expect something like this to happen, with the top 2 comment threads being puns, or unenlightened comments. But on /r/EngineeringStudents or /r/compsci or /r/lisp ... (subs I vaguely remember) that's less likely to happen.

Right now on /leftypol/ there aren't many threads of what you describe. They do exist, but they aren't numerous. What you describe is more similar to 4chan's /b/, but even 4chan's /g/ has "tightened up" and is being steered more toward discussion (still with racial slurs but not entirely focussed on them). A notable exception is the "DESIGNATED SHITTING STREETS"/ "POO IN LOO" meme, though.

I don't know if these are just my experiences.


I hang out a lot on /diy/ and /out/ and some on /pol/, and behavior is definitely driven by board culture, not technology, as the underlying technology is obviously the same. The worst I see on /out/ is the Mora knife nuts: sure, I get that you like that knife for outdoor work, thanks for telling us for the 10000th time, now go away. The worst I see on /diy/ is, well, nothing, maybe some half-baked electrical advice sometimes, well intentioned maybe, but if you're doing something weird like wiring your own subpanel and you have to ask /diy/ how to ground it, you should probably give the job to an electrician. The worst I see on /pol/, well, no point going there here on HN, but you can guess.


I meant more as in 4chan and /pol/. Haven't gone to 8chan or /leftypol/ but I'm always interested in quality discussion. That's why I frequent Hacker News.


I think it happens less on special interest boards because there are fewer posts altogether on niche boards.

/p/, /trv/ - slow boards that have their own shitposts.


Aren't the chans not about your real beliefs, but simply trolling to get reactions?

http://www.thedailybeast.com/articles/2015/09/11/terrorist-t...

This guy got articles published as a radical feminist as well, beyond being a fake ISIS terrorist, fake white nationalist, and fake Jewish lawyer.


I think this is a narrow position. On the boards on chans that I frequent, this isn't at all true. Yes, there are occasionally "trolls" who pretend they have positions they do not. The sarcasm may be subtle or not, depending on intention. But far more predominantly there are people who espouse their views and back them up cogently. There are also devil's advocates.

But I wouldn't say that chan culture is about this, or that there is an inextricable link between them. It's simply a common occurrence by nature of being anonymous. More often than not, "bait" is called out as such, but even if you're being serious, you can ignore that.

A mixed bag, but certainly not representative. I know of many on the politics board who do sincerely hold nationalist or white supremacist views, and I have argued against them. I also know many on the left-politics board who will talk you through Marx's critique of capitalism.

There is some nuance in my opinion.


[flagged]


[flagged]


Interesting about people getting banned. I haven't browsed either for a while. More diverse within leftist politics, but across the political spectrum?

I don't think a forum full of edgy memes is a great place for political discussion, /pol/ or /leftypol/


The memes are "banter" and fun, which a lot of people really do enjoy. It's like a bar, where you have most people talking about banalities, but there are a few people who will explain theory to you, link you to books, debate you etc.

Generally a thread is either a meme thread (with small discussion if any) or a serious thread with discussion.


I like Hacker News because I actually learn things here and have genuinely interesting conversations with relatively polite people, but we only seem to talk about tech stuff. Cryptography, web dev, programming, etc. Nobody here wants to talk about like, motorcycles or raising pet rats or other things I'm interested in outside of technology, so it can't replace reddit for me.


And not even all tech! I've actually started spending more time on Reddit because it's more interesting in general. When a post percolates up here about motorcycles or manufacturing or some cool analytics on an obscure topic, I have a great time. But they're not common, and there's always some StackOverflow-style "this doesn't belong on Hacker News". (But they also seem to pull people out of the woodwork like gangbusters - HN loves the occasional off-topic interesting post! But not constantly :)

Which is fine, I guess. That's why we have things like Reddit! Perhaps it's a variant of the Unix Way: have sites that provide a specific style of service, either in content management or filtering or whatnot. And appreciate them for that!


You two should post such stories here, then. They're explicitly on topic:

https://news.ycombinator.com/newsguidelines.html


To be sure, and you've done a masterful job encouraging it! I usually see you there to prop up the cool stuff, and I certainly appreciate it.

And in full fairness, when I made that comment, a whole bunch of stories were non-tech related, but interesting, so I probably should retract my comment =/. (One was about bread!)


Yeah that's kind of where I'm at with Reddit as well. It's not a place where I can browse anymore. I just can't bring myself to wade through /r/all, but there are certain subreddits where they just really have the community size that can't be found outside reddit.


In regards to the warrant canary, well, that is kind of how that works.


I'm afraid that our research group was the reason that Reddit removed access to the vote count. Not that it has stopped us from doing research on reddit... We submitted another paper a week or so ago, and will be releasing our dataset upon publication.


That's unlikely.

For clarification, only the breakdown of up/downs was hidden, not the overall score, and it was hidden because when people saw posts with tons of downvotes, it invited more downvotes.


Alexis Ohanian visits Notre Dame, & specifically meets with my advisor. We show them what we are working on in regards to analyzing Reddit (probably not the wisest thing to show them a work in progress, before all the data is collected). They're not thrilled about it. A few weeks later, a crucial part of the information is gone.

You tell me.


The change happened in 2014 (http://www.reddit.com/r/announcements/comments/28hjga/reddit...), when Ohanian had less of an affiliation with Reddit.


That's also when he came to Notre Dame.


What does your research entail? Sounds tantalizing


http://dsg.nd.edu/soc.html

I'm in the same lab, but I don't work on the Reddit projects currently (although I am on the most recently submitted paper, which, due to it not being accepted yet, is not on the website, either).


oh wow. this is like data porn for me. (I was a Psych major at Cornell, and am a programmer now.)


Serious question- Do you think there's room for a new discussion site which embraced all these lessons that other discussion sites seem to perpetually be ignoring (perhaps due to simple entrenchment)?

Because I have oft floated the idea of starting just such a site.

Anyway, I somehow (as an avid Reddit user) missed the news that they lost their warrant canary (probably because I unsubbed from almost all the default subs due to low signal/noise ratio): https://www.reddit.com/r/worldnews/comments/4ct1kz/reddit_de...

But in googling it, I did notice an interesting site related to warrant canaries: https://canarywatch.org/ EDIT: crap, no longer being updated: https://www.eff.org/deeplinks/2016/05/canary-watch-one-year-...


"Do you think there's room for a new discussion site which embraced all these lessons that other discussion sites seem to perpetually be ignoring (perhaps due to simple entrenchment)?"

There's a ton of discussion sites. It's not "reddit or HN or nothing"; it's a question of which of the bajillion forums, blogs, link aggregators, email lists, or other things you're interested in joining. There's room for many more of them, too. There are so many choices that it's more a discovery problem than an availability problem.

However, if you go into it with the hopes of being the next thing that is the size of Reddit, statistically, you're going to be disappointed.

It isn't even clear to me that this is a very good goal, either. Running a site the size of Reddit is an enormous, enormous hassle. The controversies currently associated with it are merely specific instantiations of the general fact that you can count on being vilified by a good 20%+ of the community once you reach that size, that every change will be met with a huge outpouring of hatred straight into your contact box, and so on. Goodness help the poor person trying to start The Next Reddit (TM) with the naive idea that it is a good idea to be very available to the community... all that does is add fresh flames to the inevitable ones when you have to renege on that promise too. Unless you've got a solid plan to monetize enough to make it worthwhile, fast enough for it to be worthwhile, while still somehow organically growing without that getting in the way, I'd suggest shooting for starting a community for $X, for some value of $X of interest to you, rather than trying to be The Next Reddit (TM).


fantastic points, and spot-on about aiming for a solid community (if small) over "the next Reddit"


What lessons would you like to include on such a site? Off the top of my head, my favorite things about Hacker News: high level civil discourse, lack of humor (no, seriously, humor easily crowds out good discussion on other sites), excellent layout (collapsible tree comments), anonymous speech with ease to creating new accounts, light handed and responsive moderation. There's not much I don't like: narrow focus on tech rather than just general good content (but that is why it was created), no easy way to collapse a comment tree when you've scrolled partway through it, and... that's really it.

What specifically would you like to do differently? I'm interested in more good discussion sites.


I could say "upvote category" (a la Slashdot or Steam where you can tag a comment as "funny" or "insightful") and "downvote category" (so you can specifically downvote for disagreeing vs. offensive vs. irrelevancy) but let me tell you a story about Reddit instead.

Once upon a time I was subbed to /r/redpill, usually to counterpoint their many arguably-false conclusions. Naturally I was downvoted to oblivion most of the time, because Reddit does not do anything to prevent echo-chambers (in fact it probably outright encourages them... because engagement), and thus, echo-chambers are a natural result of subreddits and "comment percolation via upvoting in agreement and downvoting in disagreement" (the fact that Reddit TOS says you should only downvote on irrelevancy, NOT disagreement, doesn't matter, because people are assholes and critical-thinking skill is apparently an extremely precious commodity these days).

At the same time I was subbed to a number of, eh, feminist subs, like TwoXChromosomes. Sometimes, I disagreed with them too (and got downvoted), but most of the time (slightly most of the time), I got upvoted on those.

One day, I got banned from ALL feminist subs. ALL. OF. THEM. Simply because some mod with a god complex did a cross-reference of everyone subbed to /r/redpill, and mass-banned them. My entreaties to the mods ("just look at my fucking comments in there! I'm actually fighting for you guys!!") resulted in no reinstatement. (While this should not matter, it probably didn't help that my username at the time was "GSpotAssassin".)

Around that point, I kind of lost faith in Reddit as a thing not fatally-flawed. (While deciding to completely step out of the modern gender wars. It's a cesspool, anyway.)

I cannot emphasize how much I loathe echo-chambers. And it seems to be getting worse. I literally do not understand how people cannot tolerate the occasional discomfort of a dissimilar viewpoint, especially if it is well-argued. But I'm a rationalist, and we love a good debate, basically. Sadly, this does not at all account for most people, who I like to joke "prefer to belong than be right".


I guess the problems come with the fame. As long as you are a small site, you can do pretty much whatever you want, but as you grow, you have to compromise in some way. You can bend to advertisers and lose some of your freedom, or go down the chan way and get a name that reflects what you have to offer, and probably a revenue that is consistent with that.

On top of that, there's the fact that the average user simply doesn't care about all these freedoms or number of upvotes and stuff like that. They just want content, and reddit right now has enough for everyone.


> Google requires real profiles for YouTube comments now

This was rolled back a couple years ago: https://www.theguardian.com/technology/2014/jul/16/youtube-t...

(Disclosure: I work for Google, on unrelated things.)


FWIW they removed the individual upvote/downvote counts because they were essentially made up


"Google requires real profiles for YouTube comments now"

And consequently, Youtube comments now solely consist of civil dialogue and scholarly critiques of videos.


That usually comes down to product and business decisions, not the algorithms. Facebook (like Pandora and many others) falls into the same trap of optimizing too much on top of their algorithms. Instead, some gentle prodding in an interesting direction is usually enough to prevent a bubble while guiding exploration.

I'd argue that without some guided exploration it's much easier to fall into your self-made bubble. For example unfriending people on facebook, or listening to the same 5 artists on Pandora.


But surely you want some kind of filter bubble? Otherwise half (?) your webpages/content would be in Chinese, right? So surely you want an "English-language" filter bubble?


Exactly this. That's what I love about the front page, it's the front page of the Internet, not my front page. Of course, reddit has its own bubbles to deal with, but in general I like the different perspectives I'm given.


> it's the front page of the Internet, not my front page

It's your front page, but the method of filtering is a little different. It's gradually turned into an echo chamber filled with like-minded people, and everyone else has been driven off the site or forced to post in "fringe" subreddits that you'll never see unless you make a concerted effort to seek them out.

And that's one of the biggest problems with filter bubbles: You're so insulated from reality that you don't even realize you're in one.

Reddit is one of the worst filter bubbles on the internet. All of the top "informative" subreddits are (or have been) heavily censored by cliques of moderators who rule by opinion: Politics, News, WorldNews, etc. Even /r/Videos and gaming subreddits are heavily censored. Try talking about legacy WoW in /r/WoW and watch how fast all your posts are deleted. Most of them will be automatically filtered by AutoModerator keywords.

Or look at how much coverage all the WikiLeaks releases received during the election. Arguably one of the most important stories of the election cycle, yet without /r/The_Donald (which Reddit is now trying to suffocate and ban), there would've been zero sign of them on Reddit.

That's what happens when you let unpaid volunteers rule without rules.


I couldn't agree more. In theory, Reddit is a balanced website because of having upvotes/downvotes, yet it couldn't be more obvious on a lot of subreddits that there are slants towards certain ideologies. /r/politics, for example, swayed heavily between certain candidates during the primaries and election in a way that was at times largely hyperbolic. On sports subreddits, heaven forbid something social or political gets submitted.

Often, I learn more from watching two people from opposing sides debate a controversial topic via a podcast than reading debates on Reddit whereby it's usually a popular opinion that is defended by a majority of subredditors vs. one dissenting voice that may have a legitimate argument (e.g. asking for proof of allegations towards a certain President-Elect).

Also strangely, questions that elicit excellent discussion can often even get downvoted. I think that even if you ask a question based on a faulty premise, you shouldn't get downvoted if you have the openness to admit your ignorance and help others understand a point of view. Downvoting those questions or opinions only pushes that potentially popular viewpoint down to the bottom, where instead of helping others who may have the same POV understand the counterargument and enabling true discussion between the two viewpoints, it can make one side feel ostracized and unwelcome.

I'll even give a specific example: On the topic of encryption and privacy, I'm sure some people believe that "Well, I have nothing to hide, so why should I care about a government who wants to expand their surveillance capacity if I know I'm not a terrorist?" I feel like on /r/technology or /r/politics you'd get downvoted for that question, despite the fact that I'm sure a lot of readers have that particular point of view and could benefit from having someone gently articulate counterarguments.


All good points. The voting system just isn't conducive to balanced discussion. It's a perfect recipe for group-think, especially when overseen by self-selecting moderators.

And to make matters worse, many subreddits rate-limit your account if you receive too many downvotes. You are literally silenced for posting unpopular opinions. They also usually block new accounts, so you can't even participate in the discussion via throwaway.

"Only posts that agree with us" isn't an official rule, but it might as well be.


Personalized content discovery is not only about filter bubbles, and I believe the key word here is discovery, not shoving things in your face on the homepage. It can also mean discovering small subreddits that you might like. Or some old super popular posts that you might have missed.

Some of this would be forming a bubble (e.g. similarly minded subreddits to the ones you're following), and some not (e.g. somewhat random thing that you might like to binge read). You might find small subreddits on your own, that's great, but that might help other users.

No need to paint algorithmic stuff only in such a dim light.


Algorithms that are optimized to show more of what a user is interested in, shouldn't be equated to creating an echo chamber. A good algorithm could include variety and conflicting positions, if that's what the user shows affinity to. I don't think the concept should be discounted because of weak implementations.


I think it's kind of like CG in movies. Most people don't notice good CG, they only notice bad CG, so they tend to think "all CG is bad".

I've found that I only notice "echo-chamber inducing" algorithms, while I'm sure I don't even notice good algorithms.


Because by definition that's what it is? There's a place for tailored and filtered feeds but there's also utility in seeing what the rest of the public finds interesting.


> A good algorithm could include variety and conflicting positions, if that's what the user shows affinity to.

Most users will not show an affinity to conflicting positions. It is confirmation bias in action.


Maybe the algorithm should be guided somehow by an indication that you like to be challenged. Replying to a post would be a good indicator you had some affinity to it, even if only to debate it.


I'm not agreeing or disagreeing with your observations of news filter bubbling... BUT... I just thought of the analogy of music filter bubbling. Only a couple of years ago, when there was a big push for music streaming, there was plenty of debate to observe: some people seemed rather horrified at the stress of having to select their own playlists, while others were ready to go nuclear because they wanted to rock out to Pantera and some human or algo DJ insisted on flooding them with Taylor Swift.

I don't even care which argument won in the marketplace for "music filter bubble" or even "news filter bubble" but it is interesting that very strong, perhaps equally strong, arguments exist for both.


>I don't want another filter bubble that only shows me what it thinks I want to read (like Facebook).

Some of the most interesting commentary on reddit is buried at -5 votes because almost all voting is an emotional reaction or treated as a "disagree/agree" button. I think it's bad enough as-is. I don't want to see reddit build out a 'safe space' for me by analyzing my views and what subs I visit. If anything, I really want to see the threshold for when comments are collapsed to be much higher than -5, which seems easily gamed and way too quickly drowned out by knee-jerk disagree voters.

I wish these sites would move towards a slashdot-style moderation where your vote needs to be justified. Like "funny," "controversial," "insightful," "well sourced," or "poorly sourced," etc. Then you can adjust your algorithm as needed to promote quality comments, even if they go against the grain. Or users can tweak their own preferences, like having a tick box or drag-n-drop to prefer 'insightful' over 'funny', for example. Or 'well sourced' over 'insightful.'
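
A minimal sketch of what that per-user weighting could look like (hypothetical tag names and weights; neither Reddit nor Slashdot exposes anything like this):

  # Hypothetical: each vote carries a reason tag instead of a bare +1/-1.
  TAG_COUNTS = {
      "c1": {"insightful": 12, "funny": 3, "poorly sourced": 1},
      "c2": {"funny": 40, "insightful": 1},
  }

  # A viewer who prefers insightful over funny and penalizes bad sourcing.
  MY_WEIGHTS = {"insightful": 2.0, "funny": 0.5, "poorly sourced": -3.0}

  def score(comment_id, weights=MY_WEIGHTS):
      tags = TAG_COUNTS[comment_id]
      return sum(weights.get(tag, 0.0) * n for tag, n in tags.items())

  # Sort comments by the viewer's own notion of quality.
  ranked = sorted(TAG_COUNTS, key=score, reverse=True)
  print(ranked)  # ['c1', 'c2'] with these weights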

Sadly, social networks, forums, etc. are probably highly disincentivized to do this, as they assume we all just want tailored content. Or that if there isn't enough tailoring then users will just jump to the competition. I think most people can be okay with contradictory, or even somewhat low-effort but still insightful, commentary. Perhaps we're overly coddling each other.


I once had to raise a huge security risk to Comcast and resorted to a reddit post after very little luck raising it through official channels. It was a pretty sobering experience seeing how reddit users use downvotes for things they didn't understand or disagreed with. A lot of the downvotes came from folks who angrily commented that I was too vague. Giving specifics is just reckless, and if anything, I probably gave way too much information. Any less vague and it would have been out in the open.

Some official folk from Comcast eventually showed up and almost immediately got it fixed (!). Afterwards, there were posts about how "reddit saved the day again". The customer rep who helped out ended up getting more upvotes than any of my posts. I really don't get it.


I'm near expert level in one or two categories. It's difficult to post about my views because they're academic and involve history, the debate, popular consensus, etc. My comments just get downvoted because the hivemind just wants simple answers. Those simple answers are easier to digest, easy to know if you agree or disagree with, etc.

This is also why /r/askhistorians is so heavily moderated. If you just let it run itself the top comments would be low-effort junk like "Tesla god; Edison devil" and other nonsense with almost no discussions that go counter to the grain or discussion of the many shades of gray between "good and bad" and "right and wrong."

>Afterwords, there were posts about how "reddit saved the day again".

I feel sorry for the people who post incredible things, especially some of the short story writers, but the press narrative becomes "Reddit did it again!" It wasn't "reddit" it was a single author. I really wish there was more of an emphasis on individual contributions there and the people behind the usernames.


Anyone can complain about a security vulnerability or so forth. However, in getting it fixed, it is reddit that did it, because the "doing it" was getting the high-profile attention of someone who could fix the problem, which requires a reddit-level audience.

Similarly, if a newspaper publishes something about how a local company hurt a customer, it's the clout of the newspaper that "did it", not the customer.


If the information was noticed after being stapled to electrical poles, should ComEd be given the credit? Hell no. Just because reddit is where the ack happened, that doesn't mean it was reddit who "did it". It just happened to be the place where it worked. It feels pretty unfair to give reddit the creddit here. It would have been significantly easier to either sell it or ignore it. Fuck both of those options.

FWIW, the disclosure sparked the discussion about a bug bounty program and a strong interest to reorganize their methods of raising security issues. I'm told both will go public, "soon". Whatever that means.


>Anyone can complain about a security vulnerability or so forth.

The skillset to find an unknown security issue is rare. The skillset to post to reddit is common. I think you're wrong here. The author deserves the credit, not the site.

>Similarly, if a newspaper publishes something about how a local company hurt a customer, it's the clout of the newspaper that "did it"

Because the reporter is an employee of the newspaper and, like all employees, is part of the organization, so it's easier to credit the organization. How is reddit like a newspaper? They aren't his employer. It's more like publishing something to the classified section of a paper than being a reporter. It would be crazy to credit the paper for its classifieds.


Someone should make a site with giant agree/disagree buttons and then separate adds/detracts from discussion buttons in a separate menu.


> “At one point I just ask him, ‘how’s the data science team at Reddit?’ And [Ohanian] said, ‘what data science team?’” Weiner recounts to me.

https://techcrunch.com/2016/05/25/reddit-cto-marty-weiner-on...


I absolutely fucking love Reddit's lack of "personalised content discovery". The equivalent in any other platform is awful.


> The equivalent in any other platform is awful.

I think that's absurd. When personalization is done right you probably just don't notice it. For example, Google search makes fairly heavy use of personalization (in a great way IMO), but most people don't even remember it being added ~5-10 years ago. But, if you took it away, I think it would be starkly obvious to a lot of people.


So far the only curated list I've liked is Google Now's news feed and Quora is OK but lately the big discussions have been about simpler topics such as physical looks.


As a data driven scientist, if you were the king of Reddit, what would you do differently and why would it be better? Not playing devil's advocate, but just curious.


I (not who you asked, but...) would first start with a problem statement. I'm not sure what exactly they're trying to solve, yet.


I think their motto/slogan/whatever of "front page of the internet" is pretty straightforward.


If I were in this position I would empower people to filter out cheap content, fluff, memes and keep the in depth discussions and articles. Each user could choose the filtering level.


It would be interesting if instead of just up and down arrows, you could tag posts as "insightful", "funny", or "inspiring". Too often, the upvotes are used for things that are funny, and throwaway one-liners and memes dominate all the popular subreddits. I would love to sort by insightful and get discussions similar to HN.


Uhm.. I have a very low Slashdot ID number, so I saw that site develop from its start (19 years ago). Back then it was what HN is now. They have those things. It has become unreadable and unusable (has been for years now) - the most idiotic and trite one-sentence statement is tagged "insightful" if the visiting crowds like it (which happens a lot). Go and see for yourself, if you dare: https://slashdot.org/

Example - and I clicked exactly once on a headline and selected the very first comment tagged "Insightful", so I made no attempt to find an example that fits my narrative:

  Win 10 is good OS that would be quickly adopted if/when MS
  decided to remove or make optional bolted-on telemetry
  malware. Such "feature" is simply not acceptable on a non-free 
  product.
That's it, that's the entire statement. I have to add, for those who don't know the site, that only a few people have mod points at any given time (you get 5 or 15, distributed according to an internal algorithm that gives active users points more often), so they are precious. Less than 10% of comments are tagged at all for an average topic, so one would expect the ones marked "Insightful" or "Interesting" to actually be just that.

If you now think "They need meta-moderation, moderation of the moderators! This should improve quality!" - well, I'm sorry to tell you they've had just that for ages...

So simply implementing any mechanism in software is not sufficient. There is no magic algorithm or method; thus far, everything anyone has tried has eventually deteriorated. I must say reddit is actually doing surprisingly well, mostly because they managed to have "sites within the site" with their individually managed subreddits.


One problem with /. is that it favors the quick post. The maximum score is +5, so the tiebreak of whoever posted first comes into play very quickly. So there is a race to post quickly among those who want their opinion to be heard.


Can they not do exactly that with their subscriptions? Unsubscribe from AdviceAnimals and subscribe to AskHistorians.


There's a need for in-sub filtering too, for subs that aren't too specific. It's said to be possible with the Reddit Enhancement Suite thing, but I never used it. I just echo what people say.

Sure for very narrow subs, with adequate moderation manpower, the filtering can be done this way.


But you can accomplish the same thing by creating new more moderated subs. That's why we have /r/AskHistorians in addition to /r/history. If the sub isn't specific enough, the solution shouldn't be to falsely and silently fragment the sub's users via user-side filtering, it's to explicitly split off a different community that behaves the way you want. In that way there is no "non too specific sub".

If you participate in a subreddit, but filter out 70% of the content, in what way are you participating?


Sure, but things are wavy and blurry, you can't always foresee if a dedicated sub should be created, sometimes it's in waves.


True, but if your users are filtering out parts of the conversation you'll never reach a point where it's obvious there should be another community created, you'll just have multiple communities talking past each other.


Sometimes there is a mix of fluff and insightful comments in the same subreddit, or even in the same comments thread.


Which is generally what most people like. But if you only want the insightful, in-depth comments, you're more likely to get them in a subreddit dedicated to those types of comments than randomly in a more general subreddit. There are even subreddits dedicated to finding those few random insightful comments out in other more general subreddits, if that's really what you're looking for.


You are not alone with that frustration, in a broad sense: Team X has a product with great potential, expert wants to make it better, Team X just wants to figure it out themselves...

I have come to the life conclusion that owning your own thing is the ultimate path for anyone who wants to fully realize their expert vision. That's what motivates me to do my startup.


Could you elaborate on "their lack of personalized content discovery"?

I'm usually on the fence of bringing personalized content on platforms where you usually want to remain anonymous.


It could be as simple as "Hey, we noticed you spend a lot of time on r/gaming and r/frugal - might we suggest r/gameDeals?". Or perhaps even just filling some of the front page with threads from subreddits that may interest the user.

I think there'd be a lot of interesting discovery available to reasonably active users of reddit. (I've spent a significant % of my career on personalized recommendations, and in my opinion reddit's product is close to ideal in many regards).


As a proof of concept, I made a simple related subreddit finder using solely the commenting data of users: http://minimaxir.com/2016/06/reddit-related-subreddits/

The "like X subreddit and Y subreddit? Here's Z" is not infeasable to do with ALS on a similar dataset, but requires a better computer than I have. :p


So something along the lines of... a new tab on the homepage, "You Might Like: posts upvoted by people who tend to vote like you"?

That is: compute a "similarity score" between each pair of users, higher for each post they both voted on the same way, and lower for when they voted oppositely; then, when computing the ordering for the "You Might Like" page, multiply each vote on the posts by the voter's similarity to the viewer, instead of counting each voter equally.

Or something like that - I'm just a layman tossing ideas around, no doubt there's more rigorous and/or performant statistical methods out there.
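
Even as a toy, the idea can be sketched in a few lines of Python (completely made-up vote data; a real system would use proper collaborative filtering, like the ALS mentioned elsewhere in the thread):

  # votes[user][post] = +1 (upvote) or -1 (downvote)
  votes = {
      "me":    {"p1": 1, "p2": -1},
      "alice": {"p1": 1, "p2": -1, "p3": 1},
      "bob":   {"p1": -1, "p2": 1, "p3": 1},
  }

  def similarity(a, b):
      # +1 per shared post voted the same way, -1 per disagreement
      shared = set(votes[a]) & set(votes[b])
      return sum(1 if votes[a][p] == votes[b][p] else -1 for p in shared)

  def you_might_like(viewer):
      scores = {}
      for user, user_votes in votes.items():
          if user == viewer:
              continue
          w = similarity(viewer, user)
          for post, v in user_votes.items():
              if post not in votes[viewer]:  # only posts the viewer hasn't seen
                  scores[post] = scores.get(post, 0) + w * v
      return sorted(scores, key=scores.get, reverse=True)

  print(you_might_like("me"))  # ['p3'], ranked by similarity-weighted votes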


I've talked to a reddit engineer before and he said that the masking of votes was done more as an architectural decision rather than a feature. They have a queue of votes that takes forever to process and it's easier to mask the count than keep up with it.


Since I can't really say more than this: there's a lot of misinformation and misunderstanding in these comments. Don't believe everything you read on the internet and don't believe the conspiracy theories that this has anything to do with censorship or suppressing political (or any other) discourse.

It's really just making the math less complicated.


The post where they describe what they're doing sure makes it look like that. From what was described, this is a massive refactoring and recalculation effort to pay down years of accumulated technical debt. Occam's Razor is probably good enough: they had tons of complicated rules in the system that goobered up their calculations, and they needed to get the large amount of ineffectual cruft out at some point.

There's even a clear note that the /top stuff should go back to normal after the recalc is finished and it's back up to speed.

Frankly, I was a bit shocked by the scope of it. Seems like a huge and terrifying update.


I'm not sure if there's a commonly accepted way to handle the logistics of this, but I'd love to see a blog post from them about how they manage this on a site with so many active users at all times.

I guess you start with backing up the production DB to an offline location. The fact that votes can be placed on old posts makes it difficult, I'd assume.


I'll see if I can find it, but there was a comment in an AMA on the topic where someone asked if they have a test server and they said no. Not because they're goofy noobs at the business, either, but apparently it's just too impractical. A lot of the testing really needs the load to be properly exercised, and simulating that is sufficiently hard to probably not be worth the effort.

So this update? Updating on Production, baby!

Edit: I bet they do have test servers, but nothing full scale. Which really may emphasize your point more: I would love to know how they tested it and merged it in when there's only so much they can do offline!


Trust is a fragile thing. Spez broke it repeatedly; even Ellen Pao said she would have fired him for editing user comments. Can't say I blame people for being paranoid.


The people whose "trust" was broken are paranoid conspiracy-mongers who are provably nasty actors who use any opportunity to spread anti-establishment vitriol.

Their paranoia is not justified, not to the extent of the conspiracy that they so routinely engage in.

Just this week a man brought a rifle into a restaurant and fired a shot because of the conspiracy and vitriol of that side of reddit.


Well, that's just not true; the community managers and Ellen Pao understood the repercussions of it.

It destroys the legitimacy of all parts of the site and inflames conspiracies even more.

People are not stupid, my first thought was how many times did he do this without getting caught.


I agree. I am absolutely shocked at how many simply do not get the ramifications of that simple action. In some countries you can go to jail for your post history.


I'm shocked at how many people are so upset. Maybe you should be PGP signing your posts if their integrity is your top concern.


So you're a media distribution contractor in Washington, DC and you don't see how covert editing of social media posts is upsetting, overreach, abuse, or illegitimate? And you're making fun of expectations of the integrity of social media posts?


I think editing critical Reddit comments as part of a prank or maybe as a jokey form of retribution is an extremely ill-advised and boneheaded move. I don't know if it was abusive and it was hardly covert, but it was definitely not cool. I doubt Spez will make that mistake again.

But did it tell us anything new? Are we surprised that Reddit admins have the technical ability to edit records in their own database? (As do the admins of other social media sites...)


It seems Reddit admins don't have that ability, if I read the thread correctly. Rather, Spez helped build the system and just knew how to manage the prank. It wasn't even really a backdoor thing, he just had the know-how to do it (like being able to log into the database and update a row).

That's the confusing bit about the outrage to me: I think people expected it to be secure and tight, and ... well... I'm not sure what led to that conclusion. I can easily imagine that until Spez pulled that stunt, they just figured it was too hard to screw with and that the databases were sufficiently secure. After all, the techs have full access to the machines*. It's like being surprised that your ActiveDirectory admin can change your password at will.

But you can bet your booties they'll try to lock it down a bit better now. Wouldn't expect it to be the TarSnap of forums, though. Reddit's not exactly a bastion of authenticity :P

EDIT: What I mean here is that someone has physical access to the machines, right? Or at least some amount of root? Or can log in? I guess to me it's like being outraged that the guy who targets the Hubble telescope abuses his position and points it at Earth or something as a joke; huge amount of time and money on the line and it would be a dumb thing, but it's also pretty harmless. Might still get fired for it, though.


How would one PGP sign their post in a way that prevents malicious editing after the fact?


If someone else edited your post, the signature would no longer be valid. So the attacker could remove the signature or sign it with a different key or leave it alone and hope no one notices, but they couldn't re-sign with your key since they wouldn't have access to your private key file.

Public key cryptography is really neat if you haven't played around with it before.

I was being half-facetious, but if you want to be able to prove that a post is actually written by you, you probably want something like PGP.


I still don't understand. If the edit left the public key as I left it, how would anyone know the difference?

Are you meaning a feature of the site that lets you sign your post with your key, as opposed to pasting your public key in the post?


You don't just post your public key next to your comment, you use your private key to create a cryptographic signature of the contents of your post. Anyone with your public key can check that the signature is valid. Altering the text will cause that signature check to fail.
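
For illustration, here's a minimal sketch of that sign/verify flow in Python, using the `cryptography` package's Ed25519 keys rather than actual PGP/GPG tooling (the mechanics are the same: sign with the private key, verify against the published public key):

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  private_key = Ed25519PrivateKey.generate()   # stays only with the author
  public_key = private_key.public_key()        # shared with everyone

  post = b"My original comment text."
  signature = private_key.sign(post)

  # Anyone can check the post against the published public key.
  try:
      public_key.verify(signature, post)
      print("signature valid")
  except InvalidSignature:
      print("post was altered or signature forged")

  # An admin edit invalidates the old signature:
  tampered = b"My 'original' comment text, edited after the fact."
  try:
      public_key.verify(signature, tampered)
  except InvalidSignature:
      print("tampering detected")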


The issue here is that almost nobody actually checks the signature. Some people edit their messages to invalidate them just to see if they'll get called out on it - they almost never are.

So in theory, yes. But in practice - people are too lazy to validate or check the key (if the attacker replaces it with their own signed message) for every single post. This is a bigger issue the more users you have signing messages - as users begin getting lazier with checking each and every signed message.

Unless they are under a lot of eyeballs from people who do care. If Wikileaks "signs" a message and it doesn't verify or wasn't with their key - a lot of people will call it out. If I "sign" a message or use a different private key (very possible that I sign with the wrong key when I have multiples) - I doubt anyone would call me out on it.


Or use blockchain. https://steemit.com/


Reddit is a content management system for stupidity and spam. Let's not pretend it's some pristine, important institution. Trolling the_donald could only have been a step in the right direction.


You're fine with it today because it's a group you don't like. What about tomorrow when the same thing will be applied to a group you do like?


>You're fine with it today because it's a group you don't like. What about tomorrow when the same thing will be applied to a group you do like?

Then I'll leave. Simple as that.

I don't want an anarchist site where malicious actors and nasty people are free to race to the bottom because we're afraid of polite society.

The Founding Fathers of America differentiated between Liberty and Freedom.

Freedom is anarchy, but Liberty is freedom with common sense restriction. Liberty is freedom from onerous interference.

I don't want a Free Reddit, I want a Reddit that respects Liberty.

Not a Reddit which lets maliciousness exist because it can, but one which has common sense restrictions to create a better community.

It's no secret that far-more-strict control on Hacker News produces far-better outcomes for discussion than Reddit.

The hands-off approach to Reddit creates a race to the bottom for trolling and nastiness.


You raise a lot of good points here for sure. My main beef with reddit has been the lack of uniformity in application of the rules. It's better for everyone if there are clear rules, and the enforcement isn't capricious.



I'm not a fan of reddit, but occasionally they have some brilliance.

- http://thehill.com/policy/national-security/296789-gop-chair...

- Their AMAs with famous people are pretty amazing; Steve Wozniak's was great.

- Can't forget their net neutrality protest either http://www.bbc.com/news/technology-29127179

- A lot of the programming-related subreddits are pretty nice. It's how I got connected with Django Girls.


I mean, call them what you want, but it led Brad Fitzpatrick, who works on Go, to propose deleting their subreddit [1].

[1] https://groups.google.com/forum/m/#!msg/golang-nuts/XoOhzUCl...


This is mostly another fix of bad design added in Reddit's early years; people have started to notice the aggressive vote fuzzing because it was calibrated for a much smaller userbase.

At the least, this will break statistical analysis of Reddit data for a while, since the public datasets will not have their scores updated, which I in particular am not happy about. :p


Complaints like this are the reason companies are reluctant to release public datasets. They don't have any obligation to release data, but they do. It's a gift. If they have to consider "how will this change affect the consumers of our public data releases?" every time they make a change, they're going to stop releasing public datasets.


For clarity, the Reddit datasets are not released by Reddit itself, but scraped through the API. (More context/examples of what I do with the data: http://minimaxir.com/2015/10/reddit-bigquery/ )


Okay, but reddit does also release some public data sets: https://github.com/reddit/public-data-sets


Ah, right, forgot about those. (Although those are traffic aggregates and wouldn't be affected by changes in the score ranking.)


Does this mean you (or I if I want to do some analytics on Reddit data) will need to completely rescrape the site after scores are recomputed?


If you wanted to compare raw scores for submissions before the change to those after the changes, yes.

Otherwise, it shouldn't matter.
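If you do want that comparison, you shouldn't need a full rescrape. A rough sketch using reddit's /api/info endpoint to re-fetch current scores for submission IDs you already have (as far as I know it accepts up to 100 fullnames per request; the two IDs below are just the announcement threads linked elsewhere in this discussion):

    import requests

    # "t3_" + submission id, taken from the existing dataset
    fullnames = ["t3_5goxk4", "t3_5gvd6b"]

    resp = requests.get(
        "https://www.reddit.com/api/info.json",
        params={"id": ",".join(fullnames)},
        headers={"User-Agent": "score-recheck/0.1"},
    )
    resp.raise_for_status()

    for child in resp.json()["data"]["children"]:
        post = child["data"]
        print(post["id"], post["score"])  # compare against the archived score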


An API is not scraping. Scraping is taking a loosely structured document with no agreed-upon interface and extracting data.

A relevant comparison would be Craigslist who will use legal force to prevent you using data you scrape off the site.


Evan Miller did some very interesting analysis of the historical Reddit hot formula. It doesn't address cheating, but it does identify the details of the algorithm and some inherent faults.

http://www.evanmiller.org/deriving-the-reddit-formula.html

There is also the followup about ranking news items with upvotes:

http://www.evanmiller.org/ranking-news-items-with-upvotes.ht...
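For reference, a sketch of the "hot" formula as published in reddit's open-source repo (the same formula Miller analyzes; the version running in production may have since drifted):

    from math import log10

    def hot(ups, downs, created_utc):
        """Classic reddit hot rank: log-scaled score plus a time bonus."""
        score = ups - downs
        order = log10(max(abs(score), 1))     # each 10x in votes adds ~1
        sign = 1 if score > 0 else -1 if score < 0 else 0
        seconds = created_utc - 1134028003    # epoch offset used in reddit's code
        return round(sign * order + seconds / 45000, 7)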


Can someone explain to me the relationship between their public source code and what's actually running on their servers? I thought their voting algorithms were all public, but there are no commits here indicating any voting changes.

https://github.com/reddit/reddit


Their anti-spammer code is the only code that is not published as open source


Gotta keep the secret blacklist secret.


It would be trivial to test whether a site is blacklisted, that's not why the anti-spam code is private.


Why does it seem like people are assuming negativity in my post? I do think the secret blacklist should be kept secret.


Silly that the announcements give so much attention to scores going up (due to a change in scale) and not to the content of the change. It would be trivial to keep the scale magnitude the same by multiplying by a factor ("old average frontpage post" / "new average frontpage post") or applying a logarithmic transform.

Makes me think that they are intentionally kicking up dust around the scale change, to draw attention away from the substantive changes to weighting/ranking.

https://www.reddit.com/r/modnews/comments/5goxk4/upcoming_ch...

https://www.reddit.com/r/announcements/comments/5gvd6b/score...
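Something like this, purely illustrative (both averages are made-up numbers), would have kept the displayed magnitudes familiar:

    from math import log10

    OLD_AVG_FRONTPAGE = 5000     # hypothetical pre-change average score
    NEW_AVG_FRONTPAGE = 20000    # hypothetical post-change average score

    def rescaled(new_score):
        """Shrink new scores back to the old, familiar magnitude."""
        return new_score * OLD_AVG_FRONTPAGE / NEW_AVG_FRONTPAGE

    def log_compressed(new_score):
        """Or flatten the range entirely with a logarithmic transform."""
        return round(log10(max(new_score, 1)), 2)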


Neither of those links seem to indicate any changes other than the big one. What exactly are they trying to bury?


This is exactly what is happening. They are doing a lot more here than what it seems.


I wonder how they are pulling this off. I guess it takes stuff like account age into consideration. But I fear it is more based on what sub it originates from, the context of the post, etc.

It will be their go-to excuse for why content is censored off the front page now. It has been hard to explain away the censorship lately, so this is a nice catch-all excuse they just made for themselves.


They didn't "just make" anything. They've had vote fuzzing, shadow banning and more from early days.

As both the techcrunch post and the actual reddit post say, the rules around it were complicated and hard to reason about. They've basically refactored and come up with a simpler set of rules.


I even recall a vulnerability write-up where someone figured out that by using the "Show me posts with more than X upvotes" feature and a counter you could determine what a post's real score was.
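Roughly the idea, as a sketch (visible_above is a hypothetical stand-in for that threshold filter):

    def real_score(post, visible_above, lo=0, hi=1000000):
        """Recover a hidden score using only a 'more than X upvotes' filter.

        visible_above(post, x) answers: does the post still show up when the
        threshold is set to x?
        """
        while lo < hi:
            mid = (lo + hi) // 2
            if visible_above(post, mid):   # score is strictly greater than mid
                lo = mid + 1
            else:                          # score is at most mid
                hi = mid
        return lo                          # ~20 queries covers scores up to a million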


We need to lay to rest this ridiculous notion that reddit is actively censoring anyone. A few subs have hosted outright illegal material, harassed users on and offline, and ruined the general ecosystem in the rest of reddit. The admins have reacted accordingly... sometimes. The bar to get outright banned from reddit is absurdly (and needlessly, in my opinion) high.

However, these specific changes have been planned for a while now; it was one of the principal topics of interest when spez came back. There is no "catch all excuse", it's been in the pipeline since before people started confusing free speech with abuse of a private website's systems.


> We need to lay to rest this ridiculous notion that reddit is actively censoring anyone.

They have code targeting specific subreddits to actively prevent them from being on the front page. While not totally censoring those people, they are putting their fingers on the scale which many see as a form of censorship.


>They have code targeting specific subreddits to actively prevent them from being on the front page.

Source? The only thing they've done is prevent certain mod tools from being used a slingshot to the front page for specific subreddits. And this wasn't done out of hatred, it was done after consistent and continual abuse of those mod tools to antagonize the rest of the community.


> Source? The only thing they've done is prevent certain mod tools from being used a slingshot to the front page for specific subreddits.

How about /u/spez himself: https://www.reddit.com/r/announcements/comments/5frg1n/tifu_...

Quoting part of his post (feel free to Ctrl-F to find the context):

>> Posts stickied on r/the_donald will no longer appear in r/all.

It's not a global change to everybody. It's only for that one specific subreddit.

> And this wasn't done out of hatred, it was done after consistent and continual abuse of those mod tools to antagonize the rest of the community.

Oh really? Then why not apply it to every subreddit?


>> And this wasn't done out of hatred, it was done after consistent and continual abuse of those mod tools to antagonize the rest of the community.

>Oh really? Then why not apply it to every subreddit?

Because every subreddit wasn't abusing the system? I thought that was pretty explicit in what you quoted.


They were abusing what system? Having a popular sub that people like to upvote posts on?


It was explained by the grandparent. They used the sticky to abuse the upvote system by getting everyone to upvote the stickied threads. It breaks the organic upvoting of content to the front page by forcing the thread to the front page of the sub, in order to get it more votes. The point of stickies is to provide announcements that everyone in the sub should see, which is not how they were using it.


I literally just addressed both of those points. Stickied posts are mod tools. They didn't do anything else. You literally pointed out exactly what I did.

>It's not a global change to everybody. It's only for that one specific subreddit.

Again, I already pointed this out.

>Then why not apply it to every subreddit?

Once again, in my reply to you:

>consistent and continual abuse of those mod tools to antagonize the rest of the community.

Other subreddits use them for the designed purpose. For example, sports-related subreddits will use stickied posts for currently running games, etc. /r/The_Donald used mod stickies to rally the community behind one post on a near-daily basis with the express purpose of getting it to the front page.

Notice that they can still sticky posts, it just won't show up on /r/all. The community moderators found a loophole in reddit's system and the admins stopped the abuse. Presumably, this was not done to other subreddits because they don't abuse the tool with the express purpose of antagonizing the rest of the community.


So not using a tool in the way that the designer intends constitutes abuse now? An interesting perspective for a community with hacker in the name.


That is a completely disingenuous take on what was stated.

The subreddit in question specifically and continually used the tool in a negative, unconstructive manner, which is obviously not what it is designed for. Other subreddits use the mod tool in various ways that don't have anything to do with the explicit purpose of stickied posts, which I pointed out, but notice I didn't label that as abuse.


> The subreddit in question specifically and continually used the tool in a negative, unconstructive manner

I don't think that's true, I think you just don't like the content.


The admins and a sizable portion, if not the majority, of reddit think it's true. It's also odd that you seem to forget that the content in question was specifically and continually addressed to the rest of the community in an often aggressive manner. Hardly any active and constructive discussion, political or otherwise, took place on those threads. Indeed it's hard for that to ever happen when your mods immediately ban anyone who doesn't fit a certain mold.

Notice that after the tool was limited, the frequency at which posts show up on the front page was severely decreased. The posts were not being upvoted because of the content, they were being upvoted specifically to get the thread to the front page and often with no constructive purpose.


> Indeed it's hard for that to ever happen when your mods immediately ban anyone who doesn't fit a certain mold.

That statement is false. Nobody is banned from that subreddit for being of a "certain mold". The rules for the subreddit are on the sidebar and they only ban users who speak ill of their candidate of choice.

The same is true of /r/SandersForPresident and /r/HillaryClinton. If you went on either of those and insulted Sanders, Clinton, or their supporters, you'd get banned on there as well.

I'd argue the anti-Trump subreddits are demonstrably worse. They ban people who have never even posted in them because they've posted on /r/The_Donald.


I was banned without ever having ventured there, and then proceeded to be called a "faggot cuck" (presumably because of my posts to /r/gaybros) by the moderator. So it most certainly is not false. There are many others who echo this scenario across reddit.

>If you went on either of those and insulted Sanders, Clinton, or their supporters, you'd get banned on there as well.

Funny, I was not very kind towards Clinton. Still can post in /r/HillaryClinton.

>I'd argue the anti-Trump subreddits are demonstrably worse.

You can certainly argue that, the problem is having some kind of substantive evidence to base this claim on.


> I was banned without ever having ventured there, and then proceeded to be called a "faggot cuck" (presumably because of my posts to /r/gaybros) by the moderator. So it most certainly is not false. There are many others who echo this scenario across reddit.

Perhaps a rogue moderator did this but AFAIK it's not the policy of the sub as a whole. I'd suggest messaging them (the moderator list) if you'd want to get unbanned.

> Funny, I was not very kind towards Clinton. Still can post in /r/HillaryClinton.

Then you must be an outlier. Forget insulting her personally, I know of people that were banned for being Democrats who disagreed with her specific policy positions. I'm not even arguing that's against any rule. Each subreddit is free to come up with its own rules for what content is acceptable.

What I take issue with is preemptive banning. At best it forces people to live a digital double life with multiple personas. If you're following the local rules then there's no reason they should be banning you for opinions you've expressed elsewhere.

> You can certainly argue that, the problem is having some kind of substantive evidence to base this claim on.

Create a dummy account, post a hello to /r/The_Donald and then see what happens. It's definitely real and goes well beyond the purely anti-Trump subreddits. You'll get banned from entirely unrelated non-political subreddits as well. The funny thing is you won't realize it until you go to them and try to reply to a post.


Yes, and that's why they make rules that only impact the_donald, which in turn encourages the_donald to lash out. Because they are being singled out for censorship. I think you've got a pretty selective memory of stickied threads in the_donald, but whatever. Should they be nice to a community that consistently calls it the_dipshit, etc, and whose administrators continually make changes to limit the reach of their voice, and their voice alone?

The sticky change came in conjunction with limiting the number of posts it was possible to have on the front page, and a few others. Saying the change to sticky policy is what's keeping the_donald posts off the front page of /r/all would be incorrect.


You can stop being dishonest at anytime now.

Your timeline is backwards. They were not singled out for "censorship" before their abuse. That doesn't even make sense. Again, please notice the immediate effect of having the tool taken away from the mods. Hardly anything is organically upvoted from that subreddit anymore.

>Should they be nice to a community that consistently calls it the_dipshit

Why are you going to purposefully ignore why they have that reputation? This is like having a guy in the group that continually does annoying or obnoxious things, gets told off and then tells everyone else to "chill."

>the number of posts it was possible to have on the front page, and a few others.

Which was global.

>Saying the change to sticky policy is what's keeping the_donald posts off the front page of /r/all would be incorrect.

Except it is not. They do not reach top spots on /r/all anymore. We're talking about single posts, not the number of posts, but they don't reach as high as they used to.


You can stop being dishonest anytime now. You have a limited perspective. They had to make several internal rule changes specific to them (which they enforce) before any site wide changes happened. They get brigaded and attacked just like any other major political sub, but only they have unique rules made for them. Why are you purposefully ignoring why they act that way? It's like having a guy in the group who thinks it's acceptable to push people around because they have a majority opinion, and double down when people don't accept it. They had several posts at the top of /r/all yesterday [1-4], but I guess they don't go as high.

1. https://www.reddit.com/r/The_Donald/comments/5gvy1j/the_new_...

2. https://www.reddit.com/r/The_Donald/comments/5gwnwx/reddit_v...

3. https://www.reddit.com/r/The_Donald/comments/5gvazb/boom_tru...

4. https://www.reddit.com/r/The_Donald/comments/5gwe8n/posted_t...


>They had to make several internal rule changes specific to them (which they enforce) before any site wide changes happened.

And you can see the reasons for this expounded upon in the ancestor comments of this thread. There is a very clear cause and effect for these actions. Ignoring them is you being dishonest.

>They get brigaded and attacked just like any other major political sub

Of which there seems to be very little evidence for. On the other hand, the brigading and abuse other communities received have been well documented and brought to light both to the admins and the community at large.

>It's like having a guy in the group who thinks it's acceptable to push people around because they have a majority opinion

If you're going to insinuate that reddit is doing this because of some sort of political bias, you are completely ignoring other subreddits that have remained completely untouched regardless of their political positions (and sometimes despite how questionable those subreddits themselves are), and you are ignoring spez's previously stated political positions.

This claim lacks consistency at best and at worst continues to be willfully dishonest about the actions taken by the subreddit towards the community as a whole.

>They had several posts at the top of /r/all yesterday [1-4], but I guess they don't go as high.

Yes, they do not go as high, and the frequency at which this happens has dramatically decreased, indicating that previously front-paged threads were not organic.


Vote manipulation is against the rules regardless of what tool you use.

They just used stickies as their tool.


Stickying a new post so users see / vote on it faster is vote manipulation? Doesn't seem to be the case per reddit itself. [1]

1. https://reddit.zendesk.com/hc/en-us/articles/205192985


Oh FFS, did you even read the link!? Here, the last bullet point:

> Forming or joining a group that votes together, either on a specific post, a user's posts, posts from a domain, etc.

That perfectly describes what was going on with moderators leading the mob by stickying dozens of posts per day, directing them towards their brigading targets.


So every sub is vote manipulating by nature of being a group that votes together on specific posts? I see. "Don't break the site" (from the sidebar) would have been a better option.


"It is difficult to get a man to understand something, when his internet-ego depends on his not understanding it."

If you refuse to see the distinction between vote-manipulation and organic vote behavior, then I think we're done here.


"It is difficult to get a man to understand something, when his internet-ego depends on his not understanding it."

If you refuse to see that I simply disagree with the fact their stickying constitutes vote-manipulation, and assume I'm arguing in bad faith because I don't hold the same viewpoint as you, then I think we're done here.


Haha. Exactly.

They're using it exactly as it was intended. Arguably, they're just better at using it.


I don't understand, you're requesting a source for something you state yourself in the following sentence? What am I misunderstanding?


There is a difference between limiting a mod tool and code that stops a subreddit's post from getting to the front page.

The former is what I referenced, the latter is what I was asking a source for.


One aspect I don't understand is why Reddit allows new accounts to vote from the beginning. You don't even need to verify an email address, you can just start voting (unless they're doing something sneaky and ignore those votes).

I'm comparing this to Stack Overflow, where you need 10 reputation to vote. This is a pretty trivial barrier, but it does mean that you need to put more effort into each sock puppet to be able to commit vote fraud. You also can't have "invisible" socks all that easily, you generally need to post something to get the upvotes for your socks, which results in more obvious patterns that users, moderators and automated systems can detect.

Fighting vote fraud when it is that easy to create new socks that can vote seems like a pretty hard problem to me, especially on the scale of Reddit.
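The gate itself is trivial to express; the value is that it forces every sock puppet to build a visible history before it can vote. A sketch with made-up thresholds (the parameters are stand-ins for whatever reddit actually stores):

    MIN_REPUTATION = 10        # Stack Overflow-style threshold
    MIN_ACCOUNT_AGE_DAYS = 2   # hypothetical extra friction

    def may_vote(reputation, age_days, email_verified):
        """Only accounts with some earned, visible history get to vote."""
        return (reputation >= MIN_REPUTATION
                and age_days >= MIN_ACCOUNT_AGE_DAYS
                and email_verified)

    # A fresh sock puppet fails the check until it has posted something useful:
    print(may_vote(reputation=0, age_days=0, email_verified=False))  # False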


On the other hand, requiring reputation could incentivize spamming and worthless posts. For example, if you already have an automated voting bot, why not make it upvote your new bot accounts to transfer "rep" and allow more upvotes?


That is one of the things I meant by "more obvious patterns". That works of course, but you also create connections in the other direction now as well for your vote fraud.

There is another effect that probably doesn't apply to Reddit, but on Stack Overflow it can be pretty noticeable if you upvote crappy posts by your socks too much, and someone will investigate.


Does it really thwart cheaters' actions, or just stop them from having a clear indication of their efforts?


This change isn't to thwart cheaters... not sure where the article got that. It's a side effect of legacy code that was used to thwart cheaters and hasn't worked very well since Reddit has grown.

Some additional info about it here: https://reddit.com/r/modnews/comments/5goxk4/upcoming_change...


That's sort of the same thing. If cheaters could instantly tell their bot accounts' votes are being ignored, they could simply make new accounts immediately.


I don't understand the idea of "preventing a cheater from seeing votes." The only way I can think of to "cheat" reddit votes is to just throw a bunch of bots at a post. Presumably, one knows how many bots one has. If the idea is that you don't know which bots are shadowbanned and which aren't, it would be trivial to just have your own private subreddit that the bots post in regularly, and see whose posts become visible to other accounts.
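There's an even lazier version of the same check that people used at the time: hit each bot's profile while logged out, since shadowbanned accounts returned a 404 to everyone but themselves. A sketch (the account names are hypothetical, and the heuristic may not survive whatever reddit changes next):

    import requests

    BOT_ACCOUNTS = ["example_bot_1", "example_bot_2"]  # hypothetical names

    def looks_shadowbanned(username):
        """Heuristic: a logged-out request to a shadowbanned profile 404s."""
        url = "https://www.reddit.com/user/{0}/about.json".format(username)
        resp = requests.get(url, headers={"User-Agent": "shadowban-check/0.1"})
        return resp.status_code == 404

    for name in BOT_ACCOUNTS:
        print(name, "flagged" if looks_shadowbanned(name) else "still visible")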


It would be nice to have a clearer understanding of how the algorithm works, and how it purports to stop vote manipulation from state actors, like the capabilities of the GCHQ and NSA from the Snowden documents or the astroturfing contractor centers for the DoD (Earnest Voice, etc).

I'm thinking maybe the algorithm changes slightly alter some brigading from communities, but probably can't handle state-sponsored manipulation.


> Admin KeyserSosa

Is it me, or is it weird to see names that were likely chosen when someone was an adolescent, or at least interacting in a non-professional capacity, making their way into professional contexts, especially when they refer to another person, real or fictional (even if slightly changed)?

Are the corporate executives of the next generation going to go by aliases like BillGaytes and BarackOsama?


It's mostly you.

Or maybe it's me. I hate nothing more than the complaint that something is "unprofessional". It's a completely meaningless concept. If there's something actually wrong in how someone is acting, it's completely ok to call them out. Example:

"'BillGaytes' and 'BarackOsama' are insulting to Bill Gates, Barack Obama, and the gay community. It inflicts harm on people without justification, and I expect you to change these aliases"

Note how none of that applies to "KeyserSosa".


I think I just chose some poor examples and used an unfortunate set of words, because that's not really along the lines of what I was talking about.

I actually wasn't focusing on the names being insulting, and didn't mean BillGaytes to have any gay commentary. I just thought of a famous name, and then an alternate spelling that sounded the same, and didn't think of that affiliation at all at the time (I noticed a few minutes later when re-reading it, and almost edited it to point out that it wasn't my intention). BarackOsama I thought of just because Osama sounds like Obama, and then it seemed like something someone might use because of the affiliation, so it seemed a more realistic choice. But I didn't really intend for the names having negative connotations to be part of my point.

Really, it's more that it feels weird to have someone go by someone else's name, or some other artist's creation, and that's only a little alleviated by an alternate spelling. It's not about being "professional", I think, but about it being public. It's like the person is laying some claim to that other person's identity or creation through the name, and while in a limited context that feels acceptable, it feels weird to me when it's more public, as in a news post, or expressed to millions (or billions!) of people through that person's prominence.

I'm using the word "weird" because I can't really pin down what's causing my feelings about this. That may point towards it being some misfiring emotional attachment to a concept that doesn't really have a rational explanation, or there may be a rational objection that I'm having trouble formulating but my gut is catching. I'm not sure, which is why I'm trying to explore the idea here, where I might get some input to solidify my thoughts.


Having inappropriately-named users acting professionally is an acclaimed and talked-about part of the reddit "charm". You may or may not find it actually charming.


The real urgent problem is sockpuppeteers using purchased accounts which makes it difficult to see who is legit or not.

They can influence and censor people they don't like. You see a lot of this going on over at /r/ethereum, where certain submissions receive a high number of upvotes but little to no comment activity apart from obvious sock puppetry creating fake conversations. Comments pointing this out are quickly downvoted to oblivion or flagged.

The anxiety from Reddit is clear: from an investor's point of view, the premium on their user base is losing its value. Reddit's audience is mostly the kind of mindless banter you see on YouTube, with the occasional anecdotal experience showing up.

Pornhub has a ton of users, but are they as valuable as Facebook users?

I feel like there's going to be a market correction for the quantity-over-quality type of social network sites that have ramped up their user base but haven't extracted tangible value, apart from delivering shaky metrics that advertisers are now beginning to question.

It seems like the only ad platform that has stood the test of time is AdWords, but as a small advertiser I'm turned off by the prospect of paying $1+ per click.


Does anyone believe the vote fuzzing ever really worked?

It seems to me that people who wanted to mass upvote/downvote a post would do so despite the vote fuzzing.


I thought a large part of fuzzing and vote-detail hiding was to deny bots/users any immediate/unambiguous feedback about whether they'd been detected and had their influence secretly neutered.
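The mechanism is conceptually tiny; a toy sketch (not reddit's actual code) of why a single discarded vote is invisible to the voter:

    import random

    def displayed_score(true_score, fuzz=3):
        """Show a score jittered around the real one, so a silently
        discarded vote is indistinguishable from ordinary noise."""
        return true_score + random.randint(-fuzz, fuzz)

    # A bot that just upvoted can't tell whether its +1 "took":
    print(displayed_score(1200))  # prints something in 1197..1203 on each refresh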


It doesn't help that the CEO admitted to editing Trump supporter comments. I mean I couldn't care less about those people, but that is censorship and suppression and I can't blame people for viewing anything reddit does from now on with a bit of skepticism. But obviously it's not that bad because they've still chosen to make reddit their home.


We detached this subthread from https://news.ycombinator.com/item?id=13124374 and marked it off-topic.


It wasn't trump supporter comments, it was comments that said exactly "fuck spez." Misrepresenting it as "trump supporters" gives it a decidedly different, post-truth spin.


Comments on the only big "Trump supporter" subreddit, that bans you for not being a Trump supporter. I think calling them "Trump supporters" is 100% appropriate.


It was on /r/Pizzagate, if I recall, which is a conspiracy sub accusing Hillary of running a pedophile ring - they were calling Spez a pedo.


It was on a thread on the_donald about pizzagate.


"In what way did he alter comments?" really deflates the charge in my opinion.

He played a bad joke by swapping out his name for the mods'. Pretty tame stuff really. A stupid little regexp substitution prank of the kind programmers play all the time.

But yes, he did violate a trust. A sysadmin has a sacred duty and he violated it, but in a very mild way that is understandable and not a cause for future concern since he clearly realizes he made a big mistake.


Why the downvotes? The CEO did admit to committing such anti-user actions.


Because spez didn't alter 'Trump supporter comments'. That's a common misrepresentation from Trump supporters. spez edited reddit comments hostile towards him. Specifically, he changed comments that said "fuck spez" to "fuck r/The_Donald".


[flagged]


No, it was downvoted because it's inaccurate spin that is irrelevant to the conversation.


How is referring to the actual fact that the CEO admitted to such actions inaccurate spin?


It wasn't about Trump at all, thus the spin, and introducing unnecessary politics.

>It wasn't trump supporter comments, it was comments that said exactly "fuck spez." Misrepresenting it as "trump supporters" gives it a decidedly different, post-truth spin.

https://news.ycombinator.com/item?id=13124802


The edited comments were on r/the_donald, and he changed them to point to r/the_donald moderators. You have to be doing some serious mental gymnastics to think that it wasn't against "those Trump supporters".

And here's the thing: it doesn't matter. An admin of a site (the CEO even!) edited users' comments without permission, notification, or apparent consequence. This can lead to users distrusting the site.


I'm not the one performing gymnastics here. They were personal insults to spez, not related to Trump at all.

And the only reason that it matters is that the comment misrepresents reality on that front. Destroying trust is still relevant info, but it's also important to correct political spin to actual fact.


...and there's a chat log floating around after the fallout of the top mods and Spez discussing what to do about /r/the_donald. Saying he didn't edit "Trump supporters" comments on a sub that bans you outright for not being pro-Trump is asinine.


Yeah, talking about how some users could distrust reddit in a comment chain about users not trusting reddit. Really off topic.


At best, it's off-topic.


How is it off-topic in this thread?


It isn't. It's a relevant demonstration of how/why some users are very distrustful of Reddit's actions.


Because it wasn't censorship or oppression.


"Suppression" was the word used. But calling this action censorship or suppression is beside the point.

It's one thing to prevent opinions one finds unsavory from being visible. Whatever one's own opinion on such activities, we can all agree that it's within the realm of what's reasonably permissible for a company that runs a site where others can comment.

It's a quite different, and much more dangerous, thing to instead secretly modify comments expressing those opinions so that they say things other than the person who posted them actually wrote. In a purely legal sense, at least as far as I know, that's every bit as permissible as deleting such comments. In an ethical sense, they are nothing alike, and I don't think there is anything political in the statement that a company where someone has admitted engaging in such behavior, and not been immediately terminated as a direct result of it, merits extraordinary skepticism with regard to all its future doings.


If the users Spez trolled with his edits decided to leave reddit forever, it would only make reddit a better place.


I agree, but there are other users who might not trust that their comments won't be invisibly altered. The specific incident is not very troubling, but the fact that it occurred at all is.


1. The altered comments were of zero value to the community.

2. I've been on reddit for over ten years, and I have zero concern that reddit admins are altering r/askhistorians or other non-dumpster-fire subs.


[flagged]


We detached this flagged subthread from https://news.ycombinator.com/item?id=13124666 and marked it off-topic.


sctb, for the record I feel that this removed a lot of the on-topic content I worked hard to write.

I feel as though my comments, as removed, were not uniquely off-topic, and am surprised that the parent topic is still included in the public thread.

I find this to be very unfortunate and aggravating. I had been posting about the use of moderation to censor certain ideas. I find the removal of this comment in this context ironic.

Thank you for your moderation. Do I have paths forward to have the subthread reattached?


Perhaps we could've detached the original parent comment, but this one was flagged by users for being overtly political, which they were asked to do this week: https://news.ycombinator.com/item?id=13108404. We don't always snip threads in just the right place, but when the community sends us a strong signal we often go with that.


[flagged]


[flagged]


[flagged]


I see.

I understand now that you did not intend your post in any way to be discussing people writ large who have lost trust in public information dissemination, and were only talking about a very specific generalization of very specific people - who today remain extremely vague and could not be enumerated.

I understand that you do not mean to say anything about people whose trust in public information dissemination has been broken. Instead, you were aggravated against a certain group of people on a certain specific issue and your universal quantifiers were meant to be existential quantifiers. Your discussion was intended to restrict all possible solutions and state powers to one specific issue and one specific community - presumably for a very short time.

I hope you come to a similar understanding of my post. In this case it was intended not as a narrow discussion but a broad one about what powers and capabilities are and are not legitimate for state police to have in the functioning of civil society, and remiss over the current widespread use of propaganda and censorship and the arguments from those concerned about specific issues to justify widespread powers to implement much stronger and wider reaching propaganda and censorship.

We appear to be in full agreement about this: censorship and propaganda as it is widely practiced in America is horrible and the people who have had their trust broken because of this are in the right. Unfortunately, the vacuum of credible information has led some to believe rumors. And that's a shame.

I disagree with you that we should call the victims of credible-information-vacuum mean names and think of them as provably nasty conspiracy mongers. I think that's blaming the victim.

I propose we solve the credibility crisis with US media by reducing the powers of state police to censor and manipulate it. In doing so, and in encouraging real dissenting investigative journalism, there will be institutions that provide real, honest, hard-worked and informative context on complex issues - capable of predicting and explaining ongoing times.

In this setting, the maximal number of people will be attracted by the quality of information from US media outlets, now unshackled from propaganda laws, national security constraints, war embedding procedures, and quid pro quo access.

The predictive quality of this information will draw down from the mass of confused people attracted to communities you dislike, because they will find that the successful media distribution companies are not biased, politicized, or spun for state policing purposes.

This sounds like a win-win and I'm glad we could come to that mutual understanding.


[flagged]


or do what they want as it's their website


Bad argument. We don't let bakers discriminate against gay people. Being a private business doesn't mean you can do whatever you want.

Edit: I agree with the posters below me. My point is that being a private business doesn't mean I can do whatever I want. There are still social rules and laws that need to be considered.


Because gay people face systemic societal discrimination, so we have made them a legally protected class. Private businesses are free to ban or censor whoever they want as long as they aren't targeting a protected class.


Bakers can discriminate against you for your political beliefs. Political party affiliation is not a protected class.


Speaking of bad arguments... supporting Trump is a choice, you aren't born that way.


I still don't understand why, after having grown to a sufficient size, Reddit doesn't just enforce two-factor at account creation time. Tie a user to a phone number at the very least.


Tying a user to a phone number does nothing but invade the privacy of most users, and slightly increase the costs for those who value their privacy.

Source: at https://smsprivacy.org/ I run an online service offering completely anonymous phone numbers for ~£3/mo. And note that you only need 1 month in order to get the account validated. Most places check the phone number once at account creation time and then never again.


It also has the effect of increasing the price of running 1000 bots to £3000/mo.


No, £3000 startup cost, not monthly cost.


Because having anonymity on public forums is important.


These days it seems like an illusion with respect to government oversight, and having a username is a simple way to anonymize yourself against your peers.

I'm all for it since it seems like it would reduce trolling.


It seems that every time somebody has wanted to find out who a user is (for a lynch mob, say), it hasn't been a big problem to find out who they are?


That seems to me to pretty severely contravene reddit's basic philosophy.


They don't even require an e-mail address. A great thing about reddit if you ask me.


Metafilter had a great system. If you want to register an account, you pay $5 toward site maintenance. Presumably this would heavily discourage bots while still allowing folks to maintain alt accounts.


A lot of people would rather not give reddit their phone number.


Reddit's model has anonymity at its very core. I follow a lot of things on the site and talk about a lot of things that would be embarrassing if my parents or my family knew about them. Not bad things, but simply things like /r/TIFU.


Too much signup friction, that would go against the status quo of growth = revenue.


sounds like a surefire way to get rid of most of their user base.


Reddit being a cesspool due to anonymity and its established userbase is its DNA. A service like you describe wouldn't be a bad idea, but it wouldn't be Reddit.


novelty accounts are a part of the culture


Upboat algorithms should be the least of their worries right now.



